Baptism
Baptism (from the Greek noun βάπτισμα báptisma) is a form of ritual purification—a characteristic of many religions throughout time and geography. In Christianity, it is a sacrament of initiation and adoption, almost invariably with the use of water. It may be performed by sprinkling or pouring water on the head, or by immersing in water either partially or completely, traditionally three times, once for each person of the Trinity. The synoptic gospels recount that John the Baptist baptised Jesus. Baptism is considered a sacrament in most churches, and an ordinance in others. Baptism according to the Trinitarian formula, which is done in most mainstream Christian denominations, is seen as a basis for Christian ecumenism, the concept of unity amongst Christians. Baptism is also called christening, although some reserve the word "christening" for the baptism of infants. In certain Christian denominations, such as the Lutheran Churches, baptism is the door to church membership, with candidates taking baptismal vows. It has also given its name to the Baptist churches and denominations.

Martyrdom was identified early in church history as "baptism by blood", enabling the salvation of martyrs who had not been baptized by water. Later, the Catholic Church identified a baptism of desire, by which those preparing for baptism who die before actually receiving the sacrament are considered saved. Some Christian thinking regards baptism as necessary for salvation, but some writers, such as Huldrych Zwingli (1484–1531), have denied its necessity. Quakers and the Salvation Army do not practice water baptism at all. Among denominations that practice water baptism, differences occur in the manner and mode of baptizing and in the understanding of the significance of the rite. Most Christians baptize using the Trinitarian formula "in the name of the Father, and of the Son, and of the Holy Spirit" (following the Great Commission), but Oneness Pentecostals baptize using Jesus' name only. Much more than half of all Christians baptize infants; many others, such as Baptist Churches, regard only believer's baptism as true baptism. In certain denominations, such as the Eastern and Oriental Orthodox Churches, the individual being baptized receives a cross necklace that is worn for the rest of their life, a practice inspired by the Sixth Ecumenical Council (Synod) of Constantinople. Mandaeans undergo repeated baptism for purification instead of initiation. They consider John the Baptist to be their greatest prophet and name all rivers yardena after the River Jordan. The term "baptism" has also been used metaphorically to refer to any ceremony, trial, or experience by which a person is initiated, purified, or given a name.

Etymology

The English word baptism is derived indirectly through Latin from the neuter Greek concept noun báptisma (βάπτισμα, "washing, dipping"), which is a neologism in the New Testament derived from the masculine Greek noun baptismós (βαπτισμός), a term for ritual washing in Greek-language texts of Hellenistic Judaism during the Second Temple period, such as the Septuagint. Both of these nouns are derived from the verb baptízō (βαπτίζω, "I wash", a transitive verb), which is used in Jewish texts for ritual washing, and in the New Testament both for ritual washing and for the apparently new rite of báptisma. The Greek verb báptō (βάπτω), "dip", from which baptízō is derived, is in turn hypothetically traced to a reconstructed Indo-European root *gʷabh-, "dip". The Greek words are used in a great variety of meanings.
In Hellenism they had the general usage of "immersion", "going under" (as a material in a liquid dye) or "perishing" (as in a ship sinking or a person drowning), with the same double meanings as in English "to sink into" or "to be overwhelmed by"; bathing or washing was only occasionally meant, and usually in sacral contexts.

History

The practice of baptism emerged from Jewish ritual practices during the Second Temple period, out of which figures such as John the Baptist emerged. For example, various texts in the Dead Sea Scrolls (DSS) corpus at Qumran describe ritual practices involving washing, bathing, sprinkling, and immersing. One such text, known as the Rule of the Community, says: "And by the compliance of his soul with all the laws of God his flesh is cleansed by being sprinkled with cleansing waters and being made holy with the waters of repentance." The Mandaeans, who are followers of John the Baptist, practice frequent full-immersion baptism (masbuta) as a ritual of purification. According to Mandaean sources, they left the Jordan Valley in the 1st century AD.

John the Baptist, who is considered a forerunner to Christianity, used baptism as the central sacrament of his messianic movement. The apostle Paul distinguished between the baptism of John ("baptism of repentance") and baptism in the name of Jesus, and it is questionable whether Christian baptism was in some way linked with that of John. However, according to Mark 1:8, John seems to present his water baptism as a type of the true, ultimate baptism of Jesus, which is by the Spirit. Christians consider Jesus to have instituted the sacrament of baptism.

Though some form of immersion was likely the most common method of baptism in the early church, many writings of the ancient church appear to treat the mode of baptism as inconsequential. The Didache 7.1–3 (AD 60–150) allowed for affusion in situations where immersion was not practical. Likewise, Tertullian (AD 196–212) allowed for varying approaches to baptism even where those practices did not conform to biblical or traditional mandates (cf. De corona militis 3; De baptismo 17). Finally, Cyprian (ca. AD 256) explicitly stated that the amount of water was inconsequential and defended immersion, affusion, and aspersion alike (Epistle 75.12). As a result, there was no uniform or consistent mode of baptism in the ancient church prior to the fourth century.

By the third and fourth centuries, baptism involved catechetical instruction as well as chrismation, exorcisms, laying on of hands, and recitation of a creed. In the early Middle Ages infant baptism became common and the rite was significantly simplified and increasingly emphasized. In Western Europe, affusion became the normal mode of baptism between the twelfth and fourteenth centuries, though immersion was still practiced into the sixteenth. In the medieval period, some radical Christians rejected the practice of baptism as a sacrament. Sects such as the Tondrakians, Cathars, Arnoldists, Petrobrusians, Henricians, Brethren of the Free Spirit and the Lollards were regarded as heretics by the Catholic Church. In the sixteenth century, Martin Luther retained baptism as a sacrament, but the Swiss reformer Huldrych Zwingli considered baptism and the Lord's Supper to be symbolic. Anabaptists denied the validity of infant baptism and rebaptized converts.

Mode and manner

Baptism is practiced in several different ways.
Aspersion is the sprinkling of water on the head, and affusion is the pouring of water over the head. Traditionally, a person is sprinkled, poured upon, or immersed three times, once for each person of the Holy Trinity; this ancient Christian practice is called trine baptism or triune baptism. The Didache directs that, where sufficient water for immersion is lacking, water be poured three times on the head. Aspersion or sprinkling best describes the cleansing aspect of baptism, as indicated in Psalm 51:7: "Cleanse me with hyssop, and I will be clean; wash me, and I will be whiter than snow". Affusion or pouring best describes anointing, which points to the pouring out of the Holy Spirit upon the believing person, as indicated in many of the Old Testament types of anointing kings, prophets, and priests with oil. Immersion or submersion best describes the burial and resurrection of the believer in Christ.

The word "immersion" is derived from the late Latin immersio, a noun derived from the verb immergere (in- "into" + mergere "dip"). In relation to baptism, some use it to refer to any form of dipping, whether the body is put completely under water or is only partly dipped in water; they thus speak of immersion as being either total or partial. Others, of the Anabaptist belief, use "immersion" to mean exclusively plunging someone entirely under the surface of the water. The term "immersion" is also used of a form of baptism in which water is poured over someone standing in water, without submersion of the person. On these three meanings of the word "immersion", see Immersion baptism.

When "immersion" is used in opposition to "submersion", it indicates the form of baptism in which the candidate stands or kneels in water and water is poured over the upper part of the body. Immersion in this sense has been employed in West and East since at least the 2nd century and is the form in which baptism is generally depicted in early Christian art. In the West, this method of baptism began to be replaced by affusion baptism from around the 8th century, but it continues in use in Eastern Christianity.

The word "submersion" comes from the late Latin submersio (sub- "under, below" + mergere "plunge, dip") and is also sometimes called "complete immersion". It is the form of baptism in which the water completely covers the candidate's body. Submersion is practiced in the Orthodox and several other Eastern Churches. In the Latin Church of the Catholic Church, baptism by submersion is used in the Ambrosian Rite and is one of the methods provided in the Roman Rite for the baptism of infants. It is seen as obligatory among some groups that have arisen since the Protestant Reformation, such as Baptists.

Meaning of the Greek verb baptizein

The Greek-English Lexicon of Liddell and Scott gives the primary meaning of the verb baptízein, from which the English verb "baptize" is derived, as "dip, plunge", and gives examples of plunging a sword into a throat or an embryo and of drawing wine by dipping a cup in the bowl; for New Testament usage it gives two meanings: "baptize", with which it associates the Septuagint mention of Naaman dipping himself in the Jordan River, and "perform ablutions", as in Luke 11:38. Although the Greek verb baptízein does not exclusively mean dip, plunge or immerse (it is used with literal and figurative meanings such as "sink", "disable", "overwhelm", "go under", "overborne", "draw from a bowl"), lexical sources typically cite this as a meaning of the word in both the Septuagint and the New Testament.
"While it is true that the basic root meaning of the Greek words for baptize and baptism is immerse/immersion, it is not true that the words can simply be reduced to this meaning, as can be seen from Mark 10:38–39, Luke 12:50, Matthew 3:11 Luke 3:16 and Corinthians10:2." Two passages in the Gospels indicate that the verb baptízein did not always indicate submersion. The first is Luke 11:38, which tells how a Pharisee, at whose house Jesus ate, "was astonished to see that he did not first wash (ἐβαπτίσθη, aorist passive of βαπτίζω—literally, "was baptized") before dinner". This is the passage that Liddell and Scott cites as an instance of the use of to mean perform ablutions. Jesus' omission of this action is similar to that of his disciples: "Then came to Jesus scribes and Pharisees, which were of Jerusalem, saying, Why do thy disciples transgress the tradition of the elders? for they wash () not their hands when they eat bread". The other Gospel passage pointed to is: "The Pharisees...do not eat unless they wash (, the ordinary word for washing) their hands thoroughly, observing the tradition of the elders; and when they come from the market place, they do not eat unless they wash themselves (literally, "baptize themselves"—βαπτίσωνται, passive or middle voice of βαπτίζω)". Scholars of various denominations claim that these two passages show that invited guests, or people returning from market, would not be expected to immerse themselves ("baptize themselves") totally in water but only to practise the partial immersion of dipping their hands in water or to pour water over them, as is the only form admitted by present Jewish custom. In the second of the two passages, it is actually the hands that are specifically identified as "washed", not the entire person, for whom the verb used is baptízomai, literally "be baptized", "be immersed", a fact obscured by English versions that use "wash" as a translation of both verbs. Zodhiates concludes that the washing of the hands was done by immersing them. The Liddell–Scott–Jones Greek-English Lexicon (1996) cites the other passage (Luke 11:38) as an instance of the use of the verb baptízein to mean "perform ablutions", not "submerge". References to the cleaning of vessels which use βαπτίζω also refer to immersion. As already mentioned, the lexicographical work of Zodhiates says that, in the second of these two cases, the verb baptízein indicates that, after coming from the market, the Pharisees washed their hands by immersing them in collected water. Balz & Schneider understand the meaning of βαπτίζω, used in place of ῥαντίσωνται (sprinkle), to be the same as βάπτω, to dip or immerse, a verb used of the partial dipping of a morsel held in the hand into wine or of a finger into spilled blood. A possible additional use of the verb baptízein to relate to ritual washing is suggested by Peter Leithart (2007) who suggests that Paul's phrase "Else what shall they do who are baptized for the dead?" relates to Jewish ritual washing. In Jewish Greek the verb baptízein "baptized" has a wider reference than just "baptism" and in Jewish context primarily applies to the masculine noun baptismós "ritual washing" The verb baptízein occurs four times in the Septuagint in the context of ritual washing, baptismós; Judith cleansing herself from menstrual impurity, Naaman washing seven times to be cleansed from leprosy, etc. 
Additionally, in the New Testament only, the verb baptízein can also relate to the neuter noun báptisma ("baptism"), which is a neologism unknown in the Septuagint and other pre-Christian Jewish texts. This broadness in the meaning of baptízein is reflected in English Bibles rendering "wash" where Jewish ritual washing is meant (for example, Mark 7:4 states that the Pharisees "except they wash (Greek "baptize"), they do not eat") and "baptize" where báptisma, the new Christian rite, is intended.

Derived nouns

Two nouns derived from the verb baptízō (βαπτίζω) appear in the New Testament: the masculine noun baptismós (βαπτισμός) and the neuter noun báptisma (βάπτισμα).

baptismós (βαπτισμός) refers in Mark 7:4 to a water-rite for the purpose of purification, washing, or cleansing of dishes; in the same verse and in Hebrews 9:10 to Levitical cleansings of vessels or of the body; and in Hebrews 6:2 perhaps also to baptism, though there it may possibly refer to washing an inanimate object. According to Spiros Zodhiates, when referring merely to the cleansing of utensils, baptismós (βαπτισμός) is equated with rhantismós (ῥαντισμός, "sprinkling"), found only in Hebrews 12:24 and 1 Peter 1:2, a noun used to indicate the symbolic cleansing by the Old Testament priest.

báptisma (βάπτισμα) is a neologism appearing to originate in the New Testament, and probably should not be confused with the earlier Jewish concept of baptismós (βαπτισμός); later it is found only in writings by Christians. In the New Testament, it appears at least 21 times: 13 times with regard to the rite practised by John the Baptist; 3 times with reference to the specific Christian rite (4 times if account is taken of its use in some manuscripts of Colossians 2:12, where, however, it is more likely to have been changed from the original baptismós than vice versa); and 5 times in a metaphorical sense.

Manuscript variation: in Colossians 2:12, some manuscripts have the neuter noun báptisma (βάπτισμα), but some have the masculine noun baptismós (βαπτισμός), and the latter is the reading given in modern critical editions of the New Testament. If this reading is correct, then this is the only New Testament instance in which baptismós (βαπτισμός) is clearly used of Christian baptism, rather than of a generic washing, unless the opinion of some is correct that Hebrews 6:2 may also refer to Christian baptism.

The feminine noun báptisis, along with the masculine noun baptismós, occurs in Josephus' Antiquities (J. AJ 18.5.2) relating to the murder of John the Baptist by Herod. This feminine form is not used elsewhere by Josephus, nor in the New Testament.

Apparel

Until the Middle Ages, most baptisms were performed with the candidates naked, as is evidenced by most of the early portrayals of baptism (some of which are shown in this article) and by the early Church Fathers and other Christian writers. Deaconesses helped female candidates for reasons of modesty. Typical of these is Cyril of Jerusalem, who wrote "On the Mysteries of Baptism" in the 4th century (c. 350 AD). The symbolism is threefold:

1. Baptism is considered to be a form of rebirth—"by water and the Spirit"—and the nakedness of baptism (the second birth) paralleled the condition of one's original birth. For example, John Chrysostom calls the baptism "λοχείαν", i.e., giving birth, and a "new way of creation...from water and Spirit" ("to John" speech 25,2).
2. The removal of clothing represented the "image of putting off the old man with his deeds" (as per Cyril, above): the stripping of the body before baptism represented taking off the trappings of the sinful self, so that the "new man", which is given by Jesus, can be put on.

3. As Cyril again asserts above, as Adam and Eve in scripture were naked, innocent and unashamed in the Garden of Eden, nakedness during baptism was seen as a renewal of that innocence and state of original sinlessness. Other parallels can also be drawn, such as between the exposed condition of Christ during his crucifixion and the crucifixion of the "old man" of the repentant sinner in preparation for baptism.

Changing customs and concerns regarding modesty probably contributed to the practice of permitting or requiring the baptismal candidate to either retain their undergarments (as in many Renaissance paintings of baptism, such as those by da Vinci, Tintoretto, Van Scorel, Masaccio, de Wit and others) or to wear, as is almost universally the practice today, baptismal robes. These robes are most often white, symbolizing purity. Some groups today allow any suitable clothes to be worn, such as trousers and a T-shirt; practical considerations include how easily the clothes will dry (denim is discouraged) and whether they will become see-through when wet.

In certain Christian denominations, the individual being baptized receives a cross necklace that is worn for the rest of their life as a "sign of the triumph of Christ over death and our belonging to Christ" (though it is replaced with a new cross pendant if lost or broken). This practice of baptized Christians wearing a cross necklace at all times is derived from Canon 73 and Canon 82 of the Sixth Ecumenical Council (Synod) of Constantinople.

Meaning and effects

There are differences in views about the effect of baptism for a Christian. Catholics, Orthodox, and most mainline Protestant groups assert baptism is a requirement for salvation and a sacrament, and speak of "baptismal regeneration". Its importance is related to their interpretation of the meaning of the "Mystical Body of Christ" as found in the New Testament. This view is shared by the Catholic and Eastern Orthodox denominations, and by churches formed early during the Protestant Reformation, such as the Lutheran and Anglican. Martin Luther, for example, taught this view. The Churches of Christ, Jehovah's Witnesses, Christadelphians, and the Church of Jesus Christ of Latter-day Saints also espouse baptism as necessary for salvation.

For Roman Catholics, baptism by water is a sacrament of initiation into the life of the children of God (Catechism of the Catholic Church, 1212–13). It configures the person to Christ (CCC 1272) and obliges the Christian to share in the church's apostolic and missionary activity (CCC 1270). The Catholic Church holds that there are three types of baptism by which one can be saved: sacramental baptism (with water), baptism of desire (explicit or implicit desire to be part of the church founded by Jesus Christ), and baptism of blood (martyrdom). In his encyclical Mystici corporis Christi of June 29, 1943, Pope Pius XII spoke of baptism and profession of the true faith as what makes members of the one true church, which is the body of Jesus Christ himself, as God the Holy Spirit has taught through the Apostle Paul.

By contrast, Anabaptist and Evangelical Protestants recognize baptism as an outward sign of an inward reality following on an individual believer's experience of forgiving grace.
Reformed and Methodist Protestants maintain a link between baptism and regeneration, but insist that it is not automatic or mechanical, and that regeneration may occur at a different time than baptism. Churches of Christ consistently teach that in baptism a believer surrenders his life in faith and obedience to God, and that God "by the merits of Christ's blood, cleanses one from sin and truly changes the state of the person from an alien to a citizen of God's kingdom. Baptism is not a human work; it is the place where God does the work that only God can do." Thus, they see baptism as a passive act of faith rather than a meritorious work; it "is a confession that a person has nothing to offer God".

Christian traditions

The liturgy of baptism for Catholics, Eastern Orthodox, Lutherans, Anglicans, and Methodists makes clear reference to baptism as not only a symbolic burial and resurrection, but an actual supernatural transformation, one that draws parallels to the experience of Noah and the passage of the Israelites through the Red Sea divided by Moses. Thus, baptism is literally and symbolically not only cleansing, but also dying and rising again with Christ. Catholics believe baptism is necessary to cleanse the taint of original sin, and so commonly baptise infants.

The Eastern Churches (Eastern Orthodox Church and Oriental Orthodoxy) also baptize infants on the basis of texts such as Matthew 19:14, which are interpreted as supporting full church membership for children. In these denominations, baptism is immediately followed by chrismation and Communion at the next Divine Liturgy, regardless of age. Orthodox likewise believe that baptism removes what they call the ancestral sin of Adam. Anglicans believe that baptism is also the entry into the church. Most Methodists and Anglicans agree that it also cleanses the taint of what in the West is called original sin, and in the East ancestral sin.

Eastern Orthodox Christians usually insist on complete threefold immersion as both a symbol of death and rebirth into Christ and as a washing away of sin. Latin Church Catholics generally baptize by affusion (pouring); Eastern Catholics usually by submersion, or at least partial immersion. However, submersion is gaining in popularity within the Latin Catholic Church. In newer church sanctuaries, the baptismal font may be designed to expressly allow for baptism by immersion. Anglicans baptize by immersion or affusion. According to evidence which can be traced back to about the year 200, sponsors or godparents are present at baptism and vow to uphold the Christian education and life of the baptized.

Baptists argue that the Greek word originally meant "to immerse". They interpret some Biblical passages concerning baptism as requiring submersion of the body in water. They also state that only submersion reflects the symbolic significance of being "buried" and "raised" with Christ. Baptist Churches baptize in the name of the Trinity—the Father, the Son, and the Holy Spirit. However, they do not believe that baptism is necessary for salvation, but rather that it is an act of Christian obedience. Some "Full Gospel" charismatic churches such as Oneness Pentecostals baptize only in the name of Jesus Christ, citing Peter's preaching of baptism in the name of Jesus as their authority.

Ecumenical statements

In 1982 the World Council of Churches published the ecumenical paper Baptism, Eucharist and Ministry.
A 1997 document, Becoming a Christian: The Ecumenical Implications of Our Common Baptism, gave the views of a commission of experts brought together under the aegis of the World Council of Churches. It states:

Those who heard, who were baptized and entered the community's life, were already made witnesses of and partakers in the promises of God for the last days: the forgiveness of sins through baptism in the name of Jesus and the outpouring of the Holy Ghost on all flesh. Similarly, in what may well be a baptismal pattern, 1 Peter testifies that proclamation of the resurrection of Jesus Christ and teaching about new life lead to purification and new birth. This, in turn, is followed by eating and drinking God's food, by participation in the life of the community—the royal priesthood, the new temple, the people of God—and by further moral formation. At the beginning of 1 Peter the writer sets this baptism in the context of obedience to Christ and sanctification by the Spirit. So baptism into Christ is seen as baptism into the Spirit. In the fourth gospel Jesus' discourse with Nicodemus indicates that birth by water and Spirit becomes the gracious means of entry into the place where God rules.

Validity considerations by some churches

The vast majority of Christian denominations admit the theological idea that baptism is a sacrament that has actual spiritual, holy and salvific effects. Certain key criteria must be complied with for it to be valid, i.e., to actually have those effects. If these key criteria are met, violation of some rules regarding baptism, such as varying the authorized rite for the ceremony, renders the baptism illicit (contrary to the church's laws) but still valid.

One of the criteria for validity is use of the correct form of words. The Roman Catholic Church teaches that the use of the verb "to baptize" is essential. Catholics of the Latin Church, Anglicans and Methodists use the form "I baptize you in the name of...". The passive voice is used by Eastern Orthodox and Byzantine Catholics, the form being "The servant of God is baptized in the name of...". Use of the Trinitarian formula "in the name of the Father, and of the Son, and of the Holy Spirit" is also considered essential; thus these churches do not accept as valid the baptisms of non-Trinitarian churches such as Oneness Pentecostals.

Another essential condition is use of water. A baptism in which some liquid that would not usually be called water, such as wine, milk, soup or fruit juice, was used would not be considered valid.

Another requirement is that the celebrant intends to perform baptism. This requirement entails merely the intention "to do what the Church does", not necessarily to have Christian faith, since it is not the person baptizing, but the Holy Spirit working through the sacrament, who produces the effects of the sacrament. Doubt about the faith of the baptizer is thus no ground for doubt about the validity of the baptism.

Some conditions expressly do not affect validity—for example, whether submersion, immersion, affusion (pouring) or aspersion (sprinkling) is used. However, if water is sprinkled, there is a danger that the water may not touch the skin of the unbaptized. As has been stated, "it is not sufficient for the water to merely touch the candidate; it must also flow, otherwise there would seem to be no real ablution. At best, such a baptism would be considered doubtful.
If the water touches only the hair, the sacrament has probably been validly conferred, though in practice the safer course must be followed. If only the clothes of the person have received the aspersion, the baptism is undoubtedly void." For many communions, validity is not affected if a single submersion or pouring is performed rather than a triple one, but in Orthodoxy this is controversial.

According to the Catholic Church, baptism imparts an indelible "seal" upon the soul of the baptized, and therefore a person who has already been baptized cannot be validly baptized again. This teaching was affirmed against the Donatists, who practiced rebaptism. The grace received in baptism is believed to operate ex opere operato and is therefore considered valid even if administered in heretical or schismatic groups.

Recognition by other denominations

The Catholic, Lutheran, Anglican, Presbyterian and Methodist Churches accept baptism performed by other denominations within this group as valid, subject to certain conditions, including the use of the Trinitarian formula. It is only possible to be baptized once; thus people with valid baptisms from other denominations may not be baptized again upon conversion or transfer. For Roman Catholics, this is affirmed in canon 864 of the Code of Canon Law, in which it is written that "[e]very person not yet baptized and only such a person is capable of baptism." Such people are accepted upon making a profession of faith and, if they have not yet validly received the sacrament/rite of confirmation or chrismation, by being confirmed. Specifically, "Methodist theologians argued that since God never abrogated a covenant made and sealed with proper intentionality, rebaptism was never an option, unless the original baptism had been defective by not having been made in the name of the Trinity." In some cases it can be difficult to decide if the original baptism was in fact valid; if there is doubt, conditional baptism is administered, with a formula on the lines of "If you are not yet baptized, I baptize you....".

The Catholic Church ordinarily recognizes as valid the baptisms of Christians of the Eastern Orthodox, Churches of Christ, Congregationalist, Anglican, Lutheran, Old Catholic, Polish National Catholic, Reformed, Baptist, Brethren, Methodist, Presbyterian, Waldensian, and United Protestant denominations; Christians of these traditions are received into the Catholic Church through the sacrament of Confirmation. Some individuals of the Mennonite, Pentecostal and Adventist traditions who wish to be received into the Catholic Church may be required to receive a conditional baptism due to concerns about the validity of the sacraments in those traditions. On the other hand, the Catholic Church has explicitly denied the validity of the baptism conferred in The Church of Jesus Christ of Latter-day Saints.

The Reformed Churches recognize as valid baptisms administered in the Catholic Church, among other churches using the Trinitarian formula.

Practice in the Eastern Orthodox Church for converts from other communions is not uniform. However, baptisms performed in the name of the Holy Trinity are generally accepted by the Orthodox Christian Church; Christians of the Oriental Orthodox, Roman Catholic, Lutheran, Old Catholic, Moravian, Anglican, Methodist, Reformed, Presbyterian, Brethren, Assemblies of God, or Baptist traditions can be received into the Eastern Orthodox Church through the sacrament of Chrismation.
If a convert has not received the sacrament (mysterion) of baptism, he or she must be baptized in the name of the Holy Trinity before entering into communion with the Orthodox Church. If the convert has been baptized in another Christian confession (other than Orthodox Christianity), the previous baptism is considered retroactively filled with grace by chrismation or, in rare circumstances, by confession of faith alone, as long as the baptism was done in the name of the Holy Trinity (Father, Son and Holy Spirit). The exact procedure is dependent on local canons and is the subject of some controversy.

Oriental Orthodox Churches recognise the validity of baptisms performed within the Eastern Orthodox Communion. Some also recognise baptisms performed by Catholic Churches. Any supposed baptism not performed using the Trinitarian formula is considered invalid.

In the eyes of the Catholic Church, all Orthodox Churches, and the Anglican and Lutheran Churches, the baptism conferred by The Church of Jesus Christ of Latter-day Saints is invalid. An article published together with the official declaration to that effect gave reasons for that judgment, summed up in the following words: "The Baptism of the Catholic Church and that of the Church of Jesus Christ of Latter-day Saints differ essentially, both for what concerns faith in the Father, Son and Holy Spirit, in whose name Baptism is conferred, and for what concerns the relationship to Christ who instituted it." The Church of Jesus Christ of Latter-day Saints stresses that baptism must be administered by one having proper authority; consequently, the church does not recognize the baptism of any other church as effective.

Jehovah's Witnesses do not recognise any other baptism occurring after 1914 as valid, as they believe that they are now the one true church of Christ and that the rest of "Christendom" is false religion.

Officiator

There is debate among Christian churches as to who can administer baptism. Some claim that the examples given in the New Testament show only apostles and deacons administering baptism. Ancient Christian churches interpret this as indicating that baptism should be performed by the clergy except in extremis, i.e., when the one being baptized is in immediate danger of death. Then anyone may baptize, provided, in the view of the Eastern Orthodox Church, the person who does the baptizing is a member of that church, or, in the view of the Catholic Church, that the person, even if not baptized, intends to do what the church does in administering the rite. Many Protestant churches see no specific prohibition in the biblical examples and permit any believer to baptize another.

In the Roman Catholic Church, canon law for the Latin Church lays down that the ordinary minister of baptism is a bishop, priest or deacon, but its administration is one of the functions "especially entrusted to the parish priest". If the person to be baptized is at least fourteen years old, that person's baptism is to be referred to the bishop, so that he can decide whether to confer the baptism himself. If no ordinary minister is available, a catechist or some other person whom the local ordinary has appointed for this purpose may licitly do the baptism; indeed, in a case of necessity any person (irrespective of that person's religion) who has the requisite intention may confer the baptism. By "a case of necessity" is meant imminent danger of death because of either illness or an external threat.
"The requisite intention" is, at the minimum level, the intention "to do what the Church does" through the rite of baptism. In the Eastern Catholic Churches, a deacon is not considered an ordinary minister. Administration of the sacrament is reserved to the Parish Priest or to another priest to whom he or the local hierarch grants permission, a permission that can be presumed if in accordance with canon law. However, "in case of necessity, baptism can be administered by a deacon or, in his absence or if he is impeded, by another cleric, a member of an institute of consecrated life, or by any other Christian faithful; even by the mother or father, if another person is not available who knows how to baptize." The discipline of the Eastern Orthodox Church, Oriental Orthodoxy and the Assyrian Church of the East is similar to that of the Eastern Catholic Churches. They require the baptizer, even in cases of necessity, to be of their own faith, on the grounds that a person cannot convey what he himself does not possess, in this case membership in the church. The Latin Catholic Church does not insist on this condition, considering that the effect of the sacrament, such as membership of the church, is not produced by the person who baptizes, but by the Holy Spirit. For the Orthodox, while Baptism in extremis may be administered by a deacon or any lay-person, if the newly baptized person survives, a priest must still perform the other prayers of the Rite of Baptism, and administer the Mystery of Chrismation. The discipline of Anglicanism and Lutheranism is similar to that of the Latin Catholic Church. For Methodists and many other Protestant denominations, too, the ordinary minister of baptism is a duly ordained or appointed minister of religion. Newer movements of Protestant Evangelical churches, particularly non-denominational, allow laypeople to baptize. In The Church of Jesus Christ of Latter-day Saints, only a man who has been ordained to the Aaronic priesthood holding the priesthood office of priest or higher office in the Melchizedek priesthood may administer baptism. A Jehovah's Witnesses baptism is performed by a "dedicated male" adherent. Only in extraordinary circumstances would a "dedicated" baptizer be unbaptized (see section Jehovah's Witnesses). Practitioners Protestantism Anabaptist Early Anabaptists were given that name because they re-baptized persons who they felt had not been properly baptized, as they did not recognize infant baptism. The traditional form of Anabaptist baptism was pouring, the form commonly used in Western Christianity in the early 16th century when they emerged. Pouring continues to be normative in Mennonite, Amish and Hutterite traditions of Anabaptist Christianity. The Mennonite Brethren Church, Schwarzenau Brethren and River Brethren denominations of Anabaptist Christianity practice immersion. The Schwarzenau church immerses in the forward position three times, for each person of the Holy Trinity and because "the Bible says Jesus bowed his head (letting it fall forward) and died. Baptism represents a dying of the old, sinful self." Today all modes of baptism (such as pouring and immersion) can be found among Anabaptists. Conservative Mennonite Anabaptists count baptism to be one of the seven ordinances. In Anabaptist theology, baptism is a part of the process of salvation. For Anabaptists, "believer's baptism consists of three parts, the Spirit, the water, and the blood—these three witnesses on earth." 
According to Anabaptist theology: (1) in believer's baptism, the Holy Spirit witnesses the candidate entering into a covenant with God; (2) God, in believer's baptism, "grants a baptized believer the water of baptism as a sign of His covenant with them—that such a one indicates and publicly confesses that he wants to live in true obedience towards God and fellow believers with a blameless life"; and (3) integral to believer's baptism is the candidate's mission to witness to the world even unto martyrdom, echoing Jesus' words that "they would be baptized with His baptism, witnessing to the world when their blood was spilt."

Baptist

For the majority of Baptists, Christian baptism is the immersion of a believer in water in the name of the Father, the Son, and the Holy Spirit. Baptism does not accomplish anything in itself, but is an outward personal sign that the person's sins have already been washed away by the blood of Christ's cross. For a new convert the general practice is that baptism also allows the person to be a registered member of the local Baptist congregation (though some churches have adopted "new members classes" as a mandatory step for congregational membership).

Regarding rebaptism, the general rules are: baptisms by other than immersion are not recognized as valid, and therefore rebaptism by immersion is required; and baptisms by immersion in other denominations may be considered valid if performed after the person professed faith in Jesus Christ (though among the more conservative groups, such as Independent Baptists, rebaptism may be required by the local congregation if performed in a non-Baptist church – and, in extreme cases, even if performed within a Baptist church that was not an Independent Baptist congregation). For newborns, there is a ceremony called child dedication.

Tennessee antebellum Methodist circuit rider and newspaper publisher William G. Brownlow stated in his 1856 book The Great Iron Wheel Examined; or, Its False Spokes Extracted, and an Exhibition of Elder Graves, Its Builder that the immersion baptism practiced within the Baptist churches of the United States did not extend in a "regular line of succession...from John the Baptist - but from old Zeke Holliman and his true yoke-fellow, Mr. [Roger] Williams", as in 1639 Holliman and Williams first baptized each other by immersion and then baptized by immersion the ten other members of the first Baptist church in British America at Providence, Rhode Island.

Churches of Christ

Baptism in Churches of Christ is performed only by full bodily immersion, based on the Koine Greek verb baptizo, which means to dip, immerse, submerge or plunge. Submersion is seen as more closely conforming to the death, burial and resurrection of Jesus than other modes of baptism. Churches of Christ argue that historically immersion was the mode used in the 1st century, and that pouring and sprinkling later emerged as secondary modes when immersion was not possible. Over time these secondary modes came to replace immersion. Only those mentally capable of belief and repentance are baptized (i.e., infant baptism is not practiced because the New Testament has no precedent for it).

Churches of Christ have historically had the most conservative position on baptism among the various branches of the Restoration Movement, understanding baptism by immersion to be a necessary part of conversion.
The most significant disagreements concerned the extent to which a correct understanding of the role of baptism is necessary for its validity. David Lipscomb insisted that if a believer was baptized out of a desire to obey God, the baptism was valid, even if the individual did not fully understand the role baptism plays in salvation. Austin McGary contended that to be valid, the convert must also understand that baptism is for the forgiveness of sins. McGary's view became the prevailing one in the early 20th century, but the approach advocated by Lipscomb never totally disappeared. As such, the general practice among Churches of Christ is to require rebaptism by immersion of converts, even those who were previously baptized by immersion in other churches. More recently, the rise of the International Churches of Christ has caused some to reexamine the issue.

While Churches of Christ do not describe baptism as a "sacrament", their view of it can legitimately be described as "sacramental." They see the power of baptism coming from God, who chose to use baptism as a vehicle, rather than from the water or the act itself, and understand baptism to be an integral part of the conversion process, rather than just a symbol of conversion. A recent trend is to emphasize the transformational aspect of baptism: instead of describing it as just a legal requirement or a sign of something that happened in the past, it is seen as "the event that places the believer 'into Christ' where God does the ongoing work of transformation." There is a minority that downplays the importance of baptism to avoid sectarianism, but the broader trend is to "reexamine the richness of the biblical teaching of baptism and to reinforce its central and essential place in Christianity."

Because of the belief that baptism is a necessary part of salvation, some Baptists hold that the Churches of Christ endorse the doctrine of baptismal regeneration. However, members of the Churches of Christ reject this, arguing that since faith and repentance are necessary, and the cleansing of sins is by the blood of Christ through the grace of God, baptism is not an inherently redeeming ritual. Rather, their inclination is to point to the biblical passage in which Peter, analogizing baptism to Noah's flood, posits that "likewise baptism doth also now save us" but parenthetically clarifies that baptism is "not the putting away of the filth of the flesh but the response of a good conscience toward God" (1 Peter 3:21). One author from the Churches of Christ describes the relationship between faith and baptism this way: "Faith is the reason why a person is a child of God; baptism is the time at which one is incorporated into Christ and so becomes a child of God" (italics are in the source). Baptism is understood as a confessional expression of faith and repentance, rather than a "work" that earns salvation.

Lutheranism

In Lutheran Christianity, baptism is a sacrament that regenerates the soul.
Upon one's baptism, one receives the Holy Spirit and becomes a part of the church.

Methodism

The Methodist Articles of Religion set out the church's teaching on baptism. While baptism imparts grace, Methodists teach that a personal acceptance of Jesus Christ (the first work of grace) is essential to one's salvation; during the second work of grace, entire sanctification, a believer is purified of original sin and made holy. In the Methodist Churches, baptism is a sacrament of initiation into the visible Church. Wesleyan covenant theology further teaches that baptism is a sign and a seal of the covenant of grace. Methodists recognize three modes of baptism as being valid—"immersion, sprinkling, or pouring"—in the name of the Holy Trinity.

Moravianism

The Moravian Church teaches that baptism is a sign and a seal, recognizing three modes of baptism as being valid: immersion, aspersion, and affusion.

Reformed Protestantism

In Reformed baptismal theology, baptism is seen as primarily God's offer of union with Christ and all his benefits to the baptized. This offer is believed to be intact even when it is not received in faith by the person baptized. Reformed theologians believe the Holy Spirit brings into effect the promises signified in baptism. Baptism is held by almost the entire Reformed tradition to effect regeneration, even in infants who are incapable of faith, by effecting faith which would come to fruition later. Baptism also initiates one into the visible church and the covenant of grace. Baptism is seen as a replacement of circumcision, which is considered the rite of initiation into the covenant of grace in the Old Testament.

Reformed Christians believe that immersion is not necessary for baptism to be properly performed, but that pouring or sprinkling are acceptable. Only ordained ministers are permitted to administer baptism in Reformed churches, with no allowance for emergency baptism, though baptisms performed by non-ministers are generally considered valid. Reformed churches, while rejecting the baptismal ceremonies of the Roman Catholic Church, accept the validity of baptisms performed with those ceremonies and do not rebaptize.

United Protestants

In United Protestant Churches, such as the United Church of Canada, Church of North India, Church of Pakistan, Church of South India, Protestant Church in the Netherlands, Uniting Church in Australia and United Church of Christ in Japan, baptism is a sacrament.

Catholicism

In Catholic teaching, baptism is stated to be "necessary for salvation by actual reception or at least by desire". Catholic discipline requires the baptism ceremony to be performed by deacons, priests, or bishops, but in an emergency such as danger of death, anyone can licitly baptize. This teaching is based on the Gospel according to John, which says that Jesus proclaimed: "Truly, truly, I say to you, unless one is born of water and the Spirit, he cannot enter into the Kingdom of God." It dates back to the teachings and practices of 1st-century Christians, and the connection between salvation and baptism was not, on the whole, an item of major dispute until Huldrych Zwingli denied the necessity of baptism, which he saw as merely a sign granting admission to the Christian community. The Catechism of the Catholic Church states that "Baptism is necessary for salvation for those to whom the Gospel has been proclaimed and who have had the possibility of asking for this sacrament."
The Council of Trent also states, in the Decree Concerning Justification from session six, that baptism is necessary for salvation. A person who knowingly, willfully and unrepentantly rejects baptism has no hope of salvation. However, if knowledge is absent, "those also can attain to salvation who through no fault of their own do not know the Gospel of Christ or His Church, yet sincerely seek God and moved by grace strive by their deeds to do His will as it is known to them through the dictates of conscience."

The Catechism of the Catholic Church also states: "Since Baptism signifies liberation from sin and from its instigator the devil, one or more exorcisms are pronounced over the candidate". In the Roman Rite of the baptism of a child, the wording of the prayer of exorcism is: "Almighty and ever-living God, you sent your only Son into the world to cast out the power of Satan, spirit of evil, to rescue man from the kingdom of darkness and bring him into the splendour of your kingdom of light. We pray for this child: set him (her) free from original sin, make him (her) a temple of your glory, and send your Holy Spirit to dwell with him (her). Through Christ our Lord."

In the Catholic Church, by baptism all sins are forgiven, original sin and all personal sins. Baptism not only purifies from all sins, but also makes the neophyte "a new creature," an adopted son of God, who has become a "partaker of the divine nature," a member of Christ and co-heir with him, and a temple of the Holy Spirit. Given once for all, baptism cannot be repeated: just as a man can be born only once, so he is baptized only once. For this reason the holy Fathers added to the Nicene Creed the words "We acknowledge one Baptism". Sanctifying grace, the grace of justification, given by God through baptism, erases original sin and personal actual sins. The power of Baptism consists in cleansing a man from all his sins as regards both guilt and punishment, for which reason no penance is imposed on those who receive Baptism, no matter how great their sins may have been. And if they were to die immediately after Baptism, they would rise at once to eternal life.

In the Western Catholic Church, a valid baptism requires, according to Canon 758 of the 1917 Code of Canon Law, the baptizer to pronounce the formula "I baptize you in the name of the Father and of the Son and of the Holy Spirit" while putting the baptized in contact with water. The contact may be immersion, "affusion" (pouring), or "aspersion" (sprinkling). The formula requires "name" to be singular, emphasising the monotheism of the Trinity. It is claimed that Pope Stephen I, Ambrose and Pope Nicholas I declared that baptisms in the name of "Jesus" only, as well as in the name of "Father, Son and Holy Spirit", were valid. The correct interpretation of their words is disputed. Current canonical law requires the Trinitarian formula and water for validity. The formula requires "I baptize" rather than "we baptize", as clarified by a responsum of June 24, 2020. In 2022 the Diocese of Phoenix accepted the resignation of a parish priest whose use of "we baptize" had invalidated "thousands of baptisms over more than 20 years". In the Byzantine Rite the formula is in the passive voice: "The servant of God N. is baptized in the Name of the Father, and of the Son, and of the Holy Spirit."

Children of practicing Catholic parents are typically baptized as infants.
Baptism is part of the Rite of Christian Initiation of Adults, provided for converts from non-Christian backgrounds and others not baptized as infants. Baptism by non-Catholic Christians is valid if the formula and water are present, and so converts from other Christian denominations are not baptized again. The church recognizes two equivalents of baptism with water: "baptism of blood" and "baptism of desire". Baptism of blood is that undergone by unbaptized individuals who are martyred for their faith, while baptism of desire generally applies to catechumens who die before they can be baptized. The Catechism of the Catholic Church describes these two forms:

The Church has always held the firm conviction that those who suffer death for the sake of the faith without having received Baptism are baptized by their death for and with Christ. This Baptism of blood, like the desire for Baptism, brings about the fruits of Baptism without being a sacrament. — 1258

For catechumens who die before their Baptism, their explicit desire to receive it, together with repentance for their sins, and charity, assures them the salvation that they were not able to receive through the sacrament. — 1259

The Catholic Church holds that those who are ignorant of Christ's Gospel and of the church, but who seek the truth and do God's will as they understand it, may be supposed to have an implicit desire for baptism and can be saved: "Since Christ died for all, and since all men are in fact called to one and the same destiny, which is divine, we must hold that the Holy Spirit offers to all the possibility of being made partakers, in a way known to God, of the Paschal mystery. Every man who is ignorant of the Gospel of Christ and of his Church, but seeks the truth and does the will of God in accordance with his understanding of it, can be saved. It may be supposed that such persons would have desired Baptism explicitly if they had known its necessity." As for unbaptized infants, the church is unsure of their fate; "the Church can only entrust them to the mercy of God".

Eastern Orthodoxy

In Eastern Orthodoxy, baptism is considered a sacrament and mystery which transforms the old and sinful person into a new and pure one, where the old life, the sins, and any mistakes made are gone and a clean slate is given. In Greek and Russian Orthodox traditions, it is taught that through baptism a person is united to the Body of Christ by becoming an official member of the Orthodox Church. During the service, the Orthodox priest blesses the water to be used. The catechumen (the one being baptised) is fully immersed in the water three times in the name of the Trinity. This is considered to be a death of the "old man" by participation in the crucifixion and burial of Christ, and a rebirth into new life in Christ by participation in his resurrection. Properly, a new name is given, which becomes the person's name.

Babies of Orthodox families are normally baptized shortly after birth. Older converts to Orthodoxy are usually formally baptized into the Orthodox Church, though exceptions are sometimes made. Those who convert from a different Christian confession to Eastern Orthodoxy typically undergo Chrismation, the rite known as confirmation in the Roman Catholic Church. Properly and generally, the Mystery of Baptism is administered by bishops and other priests; however, in emergencies any Orthodox Christian can baptize.
In such cases, should the person survive the emergency, it is likely that the person will be properly baptized by a priest at some later date. This is not considered to be a second baptism, nor is it imagined that the person is not already Orthodox; rather, it is a fulfillment of the proper form. The service of baptism in Greek Orthodox (and other Eastern Orthodox) churches has remained largely unchanged for over 1500 years. This fact is witnessed to by Cyril of Jerusalem (d. 386), who, in his Discourse on the Sacrament of Baptism, describes the service in much the same way as is currently in use.

Other groups

Jehovah's Witnesses

Jehovah's Witnesses believe that baptism should be performed by complete immersion (submersion) in water and only when an individual is old enough to understand its significance. They believe that water baptism is an outward symbol that a person has made an unconditional dedication through Jesus Christ to do the will of God. Only after baptism is a person considered a full-fledged Witness and an official member of the Christian Congregation. They consider baptism to constitute ordination as a minister.

Prospective candidates for baptism must express their desire to be baptized well in advance of a planned baptismal event, to allow for congregation elders to assess their suitability (regarding true repentance and conversion). Elders approve candidates for baptism if the candidates are considered to understand what is expected of members of the religion and to demonstrate sincere dedication to the faith. Most baptisms among Jehovah's Witnesses are performed at scheduled assemblies and conventions by elders and ministerial servants, in special pools, or sometimes oceans, rivers, or lakes, depending on circumstances, and rarely occur at local Kingdom Halls. Prior to baptism, at the conclusion of a pre-baptism talk, candidates must affirm two questions. Only baptized males (elders or ministerial servants) may baptize new members. Baptizers and candidates wear swimsuits or other informal clothing for baptism, but are directed to avoid clothing that is considered undignified or too revealing. Generally, candidates are individually immersed by a single baptizer, unless a candidate has special circumstances such as a physical disability.

In circumstances of extended isolation, a qualified candidate's dedication and stated intention to become baptized may serve to identify him as a member of Jehovah's Witnesses, even if immersion itself must be delayed. In rare instances, unbaptized males who had stated such an intention have reciprocally baptized each other, with both baptisms accepted as valid. Individuals who had been baptized in the 1930s and 1940s by female Witnesses due to extenuating circumstances, such as in concentration camps, were later re-baptized but still recognized their original baptism dates.

Church of Jesus Christ of Latter-day Saints

In the Church of Jesus Christ of Latter-day Saints (LDS Church), baptism is recognized as the first of several ordinances (rituals) of the gospel. In Mormonism, baptism has the main purpose of remitting the sins of the participant. It is followed by confirmation, which inducts the person into membership in the church and constitutes a baptism with the Holy Spirit. Latter-day Saints believe that baptism must be by full immersion and by a precise ritualized ordinance: if some part of the participant is not fully immersed, or the ordinance was not recited verbatim, the ritual must be repeated.
It typically occurs in a baptismal font. In addition, members of the LDS Church do not believe a baptism is valid unless it is performed by a Latter-day Saint who has proper authority (a priest or elder). Authority is passed down through a form of apostolic succession. All new converts to the faith must be baptized or re-baptized. Baptism is seen as symbolic of Jesus' death, burial and resurrection, and also of the baptized individual discarding their "natural" self and donning a new identity as a disciple of Jesus. According to Latter-day Saint theology, faith and repentance are prerequisites to baptism. The ritual does not cleanse the participant of original sin, as Latter-day Saints do not believe in the doctrine of original sin. Mormonism rejects infant baptism; baptism must occur after the age of accountability, defined in Latter-day Saint scripture as eight years old. Latter-day Saint theology also teaches baptism for the dead, in which deceased ancestors are baptized vicariously by the living; Latter-day Saints believe that this practice is what Paul wrote of in 1 Corinthians 15:29. This occurs in Latter-day Saint temples. Non-practitioners Quakers Quakers (members of the Religious Society of Friends) do not believe in the baptism of either children or adults with water, rejecting all forms of outward sacraments in their religious life. Robert Barclay's Apology for the True Christian Divinity (a historic explanation of Quaker theology from the 17th century) explains Quakers' opposition to baptism with water. Barclay argued that water baptism was only something that happened until the time of Christ, but that now people are baptised inwardly by the spirit of Christ, and hence there is no need for the external sacrament of water baptism, which Quakers argue is meaningless. Salvation Army The Salvation Army does not practice water baptism, or indeed other outward sacraments. William Booth and Catherine Booth, the founders of the Salvation Army, believed that many Christians had come to rely on the outward signs of spiritual grace rather than on grace itself, and that it was spiritual grace itself that mattered. However, although the Salvation Army does not practice baptism, it is not opposed to baptism within other Christian denominations. Hyperdispensationalism There are some Christians termed "Hyperdispensationalists" (Mid-Acts dispensationalism) who accept only Paul's Epistles as directly applicable for the church today. They do not accept water baptism as a practice for the church, since Paul, who was God's apostle to the nations, was not sent to baptize. Ultradispensationalists (Acts 28 dispensationalism), who do not accept the practice of the Lord's Supper, do not practice baptism because neither is found in the Prison Epistles. Both sects believe water baptism was a valid practice for covenant Israel. Hyperdispensationalists also teach that Peter's gospel message was not the same as Paul's. Hyperdispensationalists assert: The great commission and its baptism is directed to early Jewish believers, not the Gentile believers of mid-Acts or later. The baptism of Acts 2:36–38 is Peter's call for Israel to repent of complicity in the death of their Messiah, not a Gospel announcement of atonement for sin, a doctrine later revealed by Paul. Water baptism found early in the Book of Acts is, according to this view, now supplanted by the one baptism foretold by John the Baptist. 
Others make a distinction between John's prophesied baptism by Christ with the Holy Spirit and the Holy Spirit's baptism of the believer into the body of Christ, the latter being the one baptism for today: the "baptism of the Holy Spirit" of the believer into the Body of Christ, the church. Many in this group also argue that John's promised baptism by fire is pending, referring to the destruction of the world by fire. Other Hyperdispensationalists believe that baptism was necessary until mid-Acts. Debaptism Most Christian churches see baptism as a once-in-a-lifetime event that can be neither repeated nor undone. They hold that those who have been baptized remain baptized, even if they renounce the Christian faith by adopting a non-Christian religion or by rejecting religion entirely. But some other organizations and individuals practice debaptism. Comparative summary Comparative summary of baptisms of denominations of Christian influence. (This section does not give a complete listing of denominations; it mentions only a fraction of the churches practicing "believer's baptism".) Other initiation ceremonies Many cultures practice or have practiced initiation rites, with or without the use of water, including the ancient Egyptian, the Hebraic/Jewish, the Babylonian, the Mayan, and the Norse cultures. The modern Japanese practice of Miyamairi is one such ceremony that does not use water. In some cases, the evidence is archaeological and descriptive in nature, rather than reflecting a continuing modern practice. Mystery religion initiation rites Many scholars have drawn parallels between rites from mystery religions and baptism in Christianity. Apuleius, a 2nd-century Roman writer, described an initiation into the mysteries of Isis. The initiation was preceded by a normal bathing in the public baths and a ceremonial sprinkling by the priest of Isis, after which the candidate was given secret instructions in the temple of the goddess. The candidate then fasted for ten days from meat and wine, after which he was dressed in linen and led at night into the innermost part of the sanctuary, where the actual initiation, the details of which were secret, took place. On the next two days, dressed in the robes of his consecration, he participated in feasting. Apuleius describes also an initiation into the cult of Osiris and yet a third initiation, of the same pattern as the initiation into the cult of Isis, without mention of a preliminary bathing. The water-less initiations of Lucius, the character in Apuleius's story who had been turned into an ass and changed back by Isis into human form, into the successive degrees of the rites of the goddess were accomplished only after a significant period of study to demonstrate his loyalty and trustworthiness, akin to catechumenal practices preceding baptism in Christianity. Jan Bremmer has written on the putative connection between rites from mystery religions and baptism: There are thus some verbal parallels between early Christianity and the Mysteries, but the situation is rather different as regards early Christian ritual practice. Much ink was spilled around 1900 arguing that the rituals of baptism and of the Last Supper derived from the ancient Mysteries, but Nock and others after him have easily shown that these attempts grossly misinterpreted the sources. Baptism is clearly rooted in Jewish purificatory rituals, and cult meals are so widespread in antiquity that any specific derivation is arbitrary. 
It is truly surprising to see how long the attempts to find some pagan background to these two Christian sacraments have persevered. Secularising ideologies clearly played an important part in these interpretations but, nevertheless, they have helped to clarify the relations between nascent Christianity and its surroundings. Thus the practice is derivative, whether from Judaism, the Mysteries or a combination (see the reference to Hellenistic Judaism in the Etymology section). Gnostic Catholicism and Thelema The Ecclesia Gnostica Catholica, or Gnostic Catholic Church (the ecclesiastical arm of Ordo Templi Orientis), offers its Rite of Baptism to any person at least 11 years old. Baptism of objects The word "baptism" or "christening" is sometimes used to describe the inauguration of certain objects for use. Boats and ships Baptism of Ships: since at least the time of the Crusades, rituals have contained a blessing for ships. The priest begs God to bless the vessel and protect those who sail on it. The ship is usually sprinkled with holy water. Church bells The name Baptism of Bells has been given to the blessing of (musical, especially church) bells, at least in France, since the 11th century. It is derived from the washing of the bell with holy water by the bishop, before he anoints it with the oil of the infirm without and with chrism within; a fuming censer is placed under it and the bishop prays that these sacramentals of the church may, at the sound of the bell, put the demons to flight, protect from storms, and call the faithful to prayer. Dolls "Baptism of Dolls": the custom of 'dolly dunking' was once a common practice in parts of the United Kingdom, particularly in Cornwall, where it has been revived in recent years. Mandaean baptism Mandaeans revere John the Baptist and practice frequent baptism (masbuta) as a ritual of purification, not of initiation. They are possibly the earliest people to practice baptism. Mandaeans undergo baptism on Sundays (Habshaba), wearing a white sacral robe (rasta). Baptism for Mandaeans consists of a triple full immersion in water, a triple signing of the forehead with water and a triple drinking of water. The priest (Rabbi) then removes a ring made of myrtle worn by the baptized and places it on their forehead. This is then followed by a handshake (kushta, "hand of truth") with the priest. The final blessing involves the priest laying his right hand on the baptized person's head. Living water (fresh, natural, flowing water) is a requirement for baptism, so baptism can take place only in rivers. All rivers are named Jordan (yardena) and are believed to be nourished by the World of Light. By the river bank, a Mandaean's forehead is anointed with sesame oil (misha) and the baptized partakes in a communion of bread (pihta) and water. Baptism for Mandaeans allows for salvation by connecting with the World of Light and for forgiveness of sins. Sethian baptism The Sethian baptismal rite is known as the Five Seals, in which the initiate is immersed five times in running water. Yazidi baptism Yazidi baptism is called mor kirin (literally: "to seal"). Traditionally, Yazidi children are baptised at birth with water from the Kaniya Sipî ("White Spring") at Lalish. It essentially consists of pouring holy water from the spring on the child's head three times. Islamic practice of wudu Many Islamic scholars such as Shaikh Bawa Muhaiyaddeen have compared the Islamic practice of wudu to a baptism. Wudu is a ritual washing that Muslims perform to pass from ritual impurity to ritual purity. 
This is mandatory for a Muslim to perform before each of the five daily prayers, as well as after sexual intercourse, using the restroom, and other acts. Wudu, which practicing Muslims perform at least five times a day, results in the purification of a person and the removal of their sins. In a famous hadith, the Prophet Muhammad says: "Whenever a man performs his ablution intending to pray and he washes his hands, the sins of his hands fall down with the first drop. When he rinses his mouth and nose, the sins of his tongue and lips fall down with the first drop. When he washes his face, the sins of his hearing and sight fall down with the first drop. When he washes his arms to his elbows and his feet to his ankles, he is purified from every sin and fault like the day he was born from his mother. If he stands for prayer, Allah will raise his status by a degree. If he sits, he will sit in peace." See also Amrit Sanchar, in Sikhism Baptism by fire Baptistery Chrism Christifideles Consolamentum Disciple (Christianity) Divine filiation Ghusl Holy water in Eastern Christianity Mikvah Misogi Prevenient Grace Ritual purification Theophany Water and religion Notes References External links "Writings of the Early Church Fathers on Baptism" "Baptism." Encyclopædia Britannica Online. Christian terminology Conversion to Christianity Rites of passage Ritual purity in Christianity Sacraments Mandaean rituals
https://en.wikipedia.org/wiki/Bocce
Bocce
Bocce, sometimes anglicized as bocce ball, bocci or boccie, is a ball sport belonging to the boules family. Developed into its present form in Italy, it is closely related to British bowls and French pétanque, with a common ancestry from ancient games played in the Roman Empire. Bocce is played around western, southern and southeastern Europe, as well as in overseas areas with a historical Italian immigrant population, including Australia, North America, and South America, principally Argentina and the southern Brazilian state of Rio Grande do Sul. Initially played just by the Italian immigrants, the game has slowly become more popular with their descendants and more broadly. History Having developed from games played in the Roman Empire, bocce took its present form in Italy, where it is called bocce, the plural of the Italian word boccia, which means 'bowl' in the general sporting sense. From Italy it spread around Europe and to regions to which Italians have migrated. The first formal rules were described in the book "Gioco delle bocchie" by Raffaele Bisteghi in 1753. In South America it is known as bochas, or bolas criollas ('Criollo balls') in Venezuela, and bocha in south Brazil. The accessibility of bocce to people of all ages and abilities has seen it grow in popularity among Special Olympics programmes globally, and it is now the third most played sport among Special Olympics athletes. Geographical spread The sport is also very popular on the eastern side of the Adriatic, especially in Croatia, Serbia, Montenegro, and Bosnia and Herzegovina, where the sport is known in Serbo-Croatian as boćanje ('playing boće') or balote. In Slovenia the sport is known as balinanje (from Italian and Venetian words meaning 'balls'). There are numerous bocce leagues in the United States. Most have been founded by Italian Americans but contain members of all groups. Rules and play Bocce is traditionally played on a long, narrow court of natural soil or asphalt. While the court walls are traditionally made of wood or stone, many social leagues and Special Olympics programs now use inflatable 'Packabocce' PVC courts due to their portability and ease of storage. Bocce balls can be made of wood (traditional), metal, baked clay, or various kinds of plastic. Unlike lawn bowls, bocce balls are spherical and have no inbuilt bias. A game can be conducted between two players, or two teams of two, three, or four. A match is started by a randomly chosen side being given the opportunity to throw a smaller ball, the jack (called a boccino ('little bocce') or pallino ('bullet' or 'little ball') in Italian, depending on local custom), from one end of the court into a designated zone near the far end of the court. If the first team misses twice, the other team is awarded the opportunity to place the jack anywhere they choose within the prescribed zone. Casual play is common in reasonably flat areas of parks and yards lacking a bocce court, but players should agree on the minimum and maximum distance the jack may be thrown before play begins. The side that first attempted to place the jack is given the opportunity to bowl first. Once the first bowl has taken place, the other side has the opportunity to bowl. From then on, the side which does not have the ball closest to the jack has a chance to bowl, up until one side or the other has used their four balls. At that point, the other side bowls its remaining balls. 
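The turn order above (the side whose closest ball is losing bowls next) and the frame scoring described in the next paragraph amount to a small, well-defined procedure. The following Python sketch is illustrative only; the function names and the use of plain distances from each ball to the jack are assumptions made for the example, not part of any codified rules:

```python
def next_to_bowl(dists_a, dists_b):
    """Return which side bowls next: the side whose closest ball
    is currently farther from the jack (balls remaining permitting)."""
    best_a = min(dists_a, default=float("inf"))
    best_b = min(dists_b, default=float("inf"))
    return "A" if best_a > best_b else "B"

def score_frame(dists_a, dists_b):
    """Score a completed frame: only the side with the single closest
    ball scores, one point per ball closer than the opponents' best.
    Ties for closest ball are not handled in this sketch."""
    best_a, best_b = min(dists_a), min(dists_b)
    if best_a < best_b:
        return "A", sum(1 for d in dists_a if d < best_b)
    return "B", sum(1 for d in dists_b if d < best_a)

# Example: distances in metres from each ball to the jack.
print(next_to_bowl([0.9], [0.5]))        # 'A' must bowl next
print(score_frame([0.3, 0.4, 2.0, 0.9],
                  [0.5, 0.6, 1.8, 2.2])) # ('A', 2)
```

In the scored example, side A's 0.3 m and 0.4 m balls both lie closer to the jack than side B's best ball at 0.5 m, so A takes two points for the frame; points accumulate across frames until the agreed game total is reached.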
The object of the game is for a team to get as many of its balls as possible closer to the target ball (jack, boccino, pallino) than the opposing team. The team with the closest ball to the jack is the only team that can score points in any frame. The scoring team receives one point for each of their balls that is closer to the jack than the closest ball of the other team. The length of a game varies by region but is typically from 7 to 13 points. Players are permitted to throw the ball in the air using an underarm action. This is generally used to knock either the jack or another ball away to attain a more favorable position. Tactics can get quite complex when players have sufficient control over the ball to throw or roll it accurately. Variants Bocce volo A variation called bocce volo uses a metal ball, which is thrown overhand (palm down), after a run-up to the throwing line. In that latter respect, it is similar to the French boules game internationally called sport-boules. Another French variant of the game is called pétanque, and (lacking the run-up) is more similar in some respects to traditional bocce. Boccia Another development, for persons with disabilities, is called boccia. It is a shorter-range game, played with leather balls on an indoor, smooth surface. Boccia was first introduced to the Paralympics at the 1984 New York/Stoke Mandeville Summer Games, and is one of the only two Paralympic sports that do not have an Olympic counterpart (the other being goalball). See also Fédération Internationale de Boules References External links Confederation Mondiale des Sports de Boules International Bocce Federation (FIB) Boules Lawn games Sports originating in Italy
https://en.wikipedia.org/wiki/Beltane
Beltane
Beltane is the Gaelic May Day festival. Commonly observed on the first of May, the festival falls midway between the spring equinox and summer solstice in the northern hemisphere. The festival name is synonymous with the month marking the start of summer in Ireland, May being Mí na Bealtaine. Historically, it was widely observed throughout Ireland, Scotland, and the Isle of Man. In Irish the name for the festival day is Lá Bealtaine, in Scottish Gaelic Latha Bealltainn, and in Manx Gaelic Laa Boaltinn/Boaldyn. Beltane is one of the principal four Gaelic seasonal festivals—along with Samhain, Imbolc, and Lughnasadh—and is similar to the Welsh Calan Mai. Bealtaine is mentioned in the earliest Irish literature and is associated with important events in Irish mythology. Also known as Cétshamhain ('first of summer'), it marked the beginning of summer and was when cattle were driven out to the summer pastures. Rituals were performed to protect cattle, people and crops, and to encourage growth. Special bonfires were kindled, whose flames, smoke and ashes were deemed to have protective powers. The people and their cattle would walk around or between bonfires, and sometimes leap over the flames or embers. All household fires would be doused and then re-lit from the Bealtaine bonfire. These gatherings would be accompanied by a feast, and some of the food and drink would be offered to the aos sí. Doors, windows, byres and livestock would be decorated with yellow May flowers, perhaps because they evoked fire. In parts of Ireland, people would make a May Bush: typically a thorn bush or branch decorated with flowers, ribbons, bright shells and rushlights. Holy wells were also visited, while Bealtaine dew was thought to bring beauty and maintain youthfulness. Many of these customs were part of May Day or Midsummer festivals in parts of Great Britain and Europe. Public celebrations of Beltane fell out of popularity, though some customs continue to be revived as local cultural events. Since the late 20th century, Celtic neopagans and Wiccans have observed a festival based on Beltane as a religious holiday. Some neopagans in the Southern Hemisphere celebrate Beltane on or around 1 November. Historic customs Beltane was one of four Gaelic seasonal festivals: Samhain (1 November), Imbolc (1 February), Beltane (1 May), and Lughnasadh (1 August). Beltane marked the beginning of the pastoral summer season, when livestock were driven out to the summer pastures. Rituals were held at that time to protect them from harm, both natural and supernatural, and this mainly involved the "symbolic use of fire". There were also rituals to protect crops, dairy products and people, and to encourage growth. The aos sí (often referred to as spirits or fairies) were thought to be especially active at Beltane (as at Samhain) and the goal of many Beltane rituals was to appease them. Most scholars see the aos sí as remnants of the pagan gods and nature spirits. Beltane was a "spring time festival of optimism" during which "fertility ritual again was important, perhaps connecting with the waxing power of the sun". Ancient and medieval Beltane (the beginning of summer) and Samhain (the beginning of winter) are thought to have been the most important of the four Gaelic festivals. Sir James George Frazer wrote in The Golden Bough: A Study in Magic and Religion that the times of Beltane and Samhain are of little importance to European crop-growers, but of great importance to herdsmen. 
Thus, he suggests that halving the year at 1 May and 1 November dates from a time when the Celts were mainly a pastoral people, dependent on their herds. The earliest mention of Beltane is in Old Irish literature from Gaelic Ireland. According to the early medieval texts Sanas Cormaic (written by Cormac mac Cuilennáin) and Tochmarc Emire, Beltane was held on 1 May and marked the beginning of summer. The texts say that, to protect cattle from disease, druids would make two fires "with great incantations" and drive the cattle between them. According to 17th-century historian Geoffrey Keating, there was a great gathering at the hill of Uisneach each Beltane in medieval Ireland, where a sacrifice was made to a god named Beil. Keating wrote that two bonfires would be lit in every district of Ireland, and cattle would be driven between them to protect them from disease. There is no reference to such a gathering in the annals, but the medieval Dindsenchas (lore of places) includes a tale of a hero lighting a holy fire on Uisneach that blazed for seven years. Ronald Hutton writes that this may "preserve a tradition of Beltane ceremonies there", but adds "Keating or his source may simply have conflated this legend with the information in Sanas Chormaic to produce a piece of pseudo-history". Nevertheless, excavations at Uisneach in the 20th century found evidence of large fires and charred bones, and showed it to have been a place of ritual since ancient times. Evidence suggests it was "a sanctuary-site, in which fire was kept burning perpetually, or kindled at frequent intervals", where animal sacrifices were offered. Beltane is also mentioned in medieval Scottish literature. An early reference is found in the poem 'Peblis to the Play', contained in the Maitland Manuscripts of 15th- and 16th-century Scots poetry, which describes the celebration in the town of Peebles. Modern era From the late 18th century to the mid-20th century, many accounts of Beltane customs were recorded by folklorists and other writers. For example, John Jamieson, in his Etymological Dictionary of the Scottish Language (1808), describes some of the Beltane customs which persisted in the 18th and early 19th centuries in parts of Scotland, which he noted were beginning to die out. In the 19th century, folklorist Alexander Carmichael (1832–1912) collected the Scottish Gaelic song Am Beannachadh Bealltain (The Beltane Blessing) in his Carmina Gadelica, which he heard from a crofter in South Uist. The first two verses were sung as follows:

Beannaich, a Thrianailt fhioir nach gann, (Bless, O Threefold true and bountiful,)
Mi fein, mo cheile agus mo chlann, (Myself, my spouse and my children,)
Mo chlann mhaoth's am mathair chaomh 'n an ceann, (My tender children and their beloved mother at their head,)
Air chlar chubhr nan raon, air airidh chaon nam beann, (On the fragrant plain, at the gay mountain sheiling,)
Air chlar chubhr nan raon, air airidh chaon nam beann. (On the fragrant plain, at the gay mountain sheiling.)

Gach ni na m' fhardaich, no ta 'na m' shealbh, (Everything within my dwelling or in my possession,)
Gach buar is barr, gach tan is tealbh, (All kine and crops, all flocks and corn,)
Bho Oidhche Shamhna chon Oidhche Bheallt, (From Hallow Eve to Beltane Eve,)
Piseach maith, agus beannachd mallt, (With goodly progress and gentle blessing,)
Bho mhuir, gu muir, agus bun gach allt, (From sea to sea, and every river mouth,)
Bho thonn gu tonn, agus bonn gach steallt. (From wave to wave, and base of waterfall.)
Bonfires Bonfires continued to be a key part of the festival in the modern era. All hearth fires and candles would be doused before the bonfire was lit, generally on a mountain or hill. Ronald Hutton writes that "To increase the potency of the holy flames, in Britain at least they were often kindled by the most primitive of all means, of friction between wood." This is known as a need-fire or force-fire. In the 19th century, John Ramsay described Scottish Highlanders kindling such a fire at Beltane, which was deemed sacred. In the 19th century, the ritual of driving cattle between two fires—as described in Sanas Cormaic almost 1000 years before—was still practised across most of Ireland and in parts of Scotland. Sometimes the cattle would be driven around a bonfire or be made to leap over flames or embers. The people themselves would do likewise. On the Isle of Man, people ensured that the smoke blew over them and their cattle. When the bonfire had died down, people would daub themselves with its ashes and sprinkle it over their crops and livestock. Burning torches from the bonfire would be taken home, carried around the house or boundary of the farmstead, and used to re-light the hearth. From these rituals, it is clear that the fire was seen as having protective powers. Similar rituals were part of May Day or Midsummer customs in other parts of the British Isles and mainland Europe. Frazer believed the fire rituals are a kind of imitative or sympathetic magic. He suggests they were meant to mimic the Sun and "ensure a needful supply of sunshine for men, animals, and plants", as well as to symbolically "burn up and destroy all harmful influences". Food was also cooked at the bonfire and there were rituals involving it. In the Scottish Highlands, Alexander Carmichael recorded that there was a feast featuring lamb, and that formerly this lamb was sacrificed. In 1769, Thomas Pennant wrote of bonfires in Perthshire, where a caudle made from eggs, butter, oatmeal and milk was cooked. Some of the mixture was poured on the ground as a libation. Everyone present would then take an oatmeal cake, called the bannoch Bealltainn or "Beltane bannock". A bit was offered to the spirits to protect their livestock (one to protect the horses, one to protect the sheep, and so forth) and a bit offered to each of the predators that might harm their livestock (one to the fox, one to the eagle, and so forth). Afterwards, they would drink the caudle. According to 18th century writers, in parts of Scotland there was another ritual involving the oatmeal cake. The cake would be cut and one of the slices marked with charcoal. The slices would then be put in a bonnet and everyone would take one out while blindfolded. According to one writer, whoever got the marked piece had to leap through the fire three times. According to another, those present pretended to throw the person into the fire and, for some time afterwards, would speak of them as if they were dead. This "may embody a memory of actual human sacrifice", or it may have always been symbolic. A similar ritual (i.e. of pretending to burn someone in the fire) was part of spring and summer bonfire festivals in other parts of Europe. Flowers and May Bushes Yellow and white flowers such as primrose, rowan, hawthorn, gorse, hazel, and marsh marigold were traditionally placed at doorways and windows; this is documented in 19th century Ireland, Scotland and Mann. 
Sometimes loose flowers were strewn at doors and windows and sometimes they were made into bouquets, garlands or crosses and fastened to them. They would also be fastened to cows and to equipment for milking and butter making. It is likely that such flowers were used because they evoked fire. Similar May Day customs are found across Europe. The May Bush or May Bough was popular in parts of Ireland until the late 19th century. This was a small tree or branch—typically hawthorn, rowan, holly or sycamore—decorated with bright flowers, ribbons, painted shells or eggshells from Easter Sunday, and so forth. The tree would either be decorated where it stood, or branches would be decorated and placed inside or outside the house (particularly above windows and doors, on the roof, and on barns). It was generally the responsibility of the oldest person of the house to decorate the May Bush, and the tree would remain up until 31 May. The tree would also be decorated with candles or rushlights. Sometimes a May Bush would be paraded through the town. In parts of southern Ireland, gold and silver hurling balls known as May Balls would be hung on these May Bushes and handed out to children or given to the winners of a hurling match. In Dublin and Belfast, May Bushes were brought into town from the countryside and decorated by the whole neighbourhood. Each neighbourhood vied for the most handsome tree and, sometimes, residents of one would try to steal the May Bush of another. This led to the May Bush being outlawed in Victorian times. In some places, it was customary to sing and dance around the May Bush, and at the end of the festivities it might be burnt in the bonfire. In some areas the May Bush or Bough has also been called the "May Pole", but it is the bush or tree described above, and not the more commonly known European maypole. Thorn trees are traditionally seen as special trees, associated with the aos sí. Frazer believed the customs of decorating trees or poles in springtime are a relic of tree worship and wrote: "The intention of these customs is to bring home to the village, and to each house, the blessings which the tree-spirit has in its power to bestow." Emyr Estyn Evans suggests that the May Bush custom may have come to Ireland from England, because it seemed to be found in areas with strong English influence and because the Irish saw it as unlucky to damage certain thorn trees. However, "lucky" and "unlucky" trees varied by region, and it has been suggested that Beltane was the only time when cutting thorn trees was allowed. The practice of bedecking a May Bush with flowers, ribbons, garlands and bright shells is found among the Gaelic diaspora, most notably in Newfoundland, and in some Easter traditions on the East Coast of the United States. Appeasing the fairies Many Beltane practices were designed to ward off or appease the fairies and prevent them from stealing dairy products. For example, three black coals were placed under a butter churn to ensure the fairies did not steal the butter, and May Boughs were tied to milk pails, the tails of cattle or hung in the barns to ensure the cattle's milk was not stolen. Flowers were also used to decorate the horns of cattle, which was believed to bring good fortune. Food was left or milk poured at the doorstep or at places associated with the aos sí, such as 'fairy trees', as an offering. However, milk was never given to a neighbour on May Day because it was feared that the milk would be transferred to the neighbour's cow. 
In Ireland, cattle would be brought to 'fairy forts', where a small amount of their blood would be collected. The owners would then pour it into the earth with prayers for the herd's safety. Sometimes the blood would be left to dry and then be burnt. It was thought that dairy products were especially at risk from harmful spirits. To protect farm produce and encourage fertility, farmers would lead a procession around the boundaries of their farm. They would "carry with them seeds of grain, implements of husbandry, the first well water, and the herb vervain (or rowan as a substitute). The procession generally stopped at the four cardinal points of the compass, beginning in the east, and rituals were performed in each of the four directions". People made the sign of the cross with milk for good luck on Beltane, and the sign of the cross was also made on the backsides of cattle. Other customs Holy wells were often visited at Beltane, and at the other Gaelic festivals of Imbolc and Lughnasadh. Visitors to holy wells would pray for health while walking sunwise (moving from east to west) around the well. They would then leave offerings; typically coins or clooties (see clootie well). The first water drawn from a well on Beltane was thought to be especially potent, and would bring good luck to the person who drew it. Beltane morning dew was also thought to bring good luck and health. At dawn or before sunrise on Beltane, maidens would roll in the dew or wash their faces with it. The dew was collected in a jar, left in sunlight, then filtered. The dew was thought to increase sexual attractiveness, maintain youthfulness, protect from sun damage (particularly freckles and sunburn) and help with skin ailments for the ensuing year. It was also thought that a man who washed his face with soap and water on Beltane would grow long whiskers. It was widely believed that no one should light a fire on May Day morning until they saw smoke rising from a neighbour's house. It was also believed to be bad luck to put out ashes or clothes on May Day, and that to give away coal or ashes would cause the giver difficulty in lighting fires for the next year. Also, if the family owned a white horse, it should remain in the barn all day, and if any other horse was owned, a red rag should be tied to its tail. Any foal born on May Day was fated to kill a man, and any cow that calved on May Day would die. Any birth or marriage on May Day was generally believed to be ill-fated. On May Night a cake and a jug were left on the table, because it was believed that the Irish who had died abroad would return on May Day to their ancestral homes, and that the dead returned on May Day to visit their friends. A robin that flew into the house on Beltane was believed to portend the death of a household member. The festival persisted widely up until the 1950s, and in some places the celebration of Beltane continues today. Revival As a festival, Beltane had largely died out by the mid-20th century, although some of its customs continued and in some places it has been revived as a cultural event. In Ireland, Beltane fires were common until the mid-20th century, but the custom seems to have lasted to the present day only in County Limerick (especially in Limerick itself) and in Arklow, County Wicklow. 
The lighting of a community Beltane fire from which each hearth fire is then relit is observed today in some parts of the Gaelic diaspora, though in most of these cases it is a cultural revival rather than an unbroken survival of the ancient tradition. In parts of Newfoundland, the custom of decorating the May Bush also survives. The town of Peebles in the Scottish Borders holds a traditional week-long Beltane Fair every year in June, when a local girl is crowned Beltane Queen on the steps of the parish church. Like other Borders festivals, it incorporates a Common Riding. Since 1988, a Beltane Fire Festival has been held every year on the night of 30 April on Calton Hill in Edinburgh, Scotland. While inspired by traditional Beltane, it is a modern celebration of summer's beginning which draws on many influences. The performance art event involves fire dances and a procession by costumed performers, led by the May Queen and the Green Man, culminating in the lighting of a bonfire. Butser Ancient Farm, an open-air archaeology museum in Hampshire, UK, has also held a Beltane festival since the 1980s. The festival mixes historical reenactment with folk influences, and features a May Queen and Green Man, living history displays, reenactor battles, demonstrations of traditional crafts, performances of folk music, and Celtic storytelling. The festival ends with the burning of a 30–40 ft wicker man, with a new historical or folk-inspired design each year. A similar Bealtaine Festival has been held each year since 2009 at Uisneach in Ireland. It culminates in a torchlit procession by participants in costume, some on horseback, and the lighting of a large bonfire at dusk. In 2017, the ceremonial fire was lit by the President of Ireland, Michael D. Higgins. The 1970 recording 'Ride a White Swan', written and performed by Marc Bolan and his band T. Rex, contains the line "Ride a white Swan like the people of the Beltane". Neopaganism Beltane and Beltane-based festivals are held by some Neopagans. As there are many kinds of Neopaganism, their Beltane celebrations can be very different despite the shared name. Some try to emulate the historic festival as much as possible. Other Neopagans base their celebrations on many sources, the Gaelic festival being only one of them. Neopagans usually celebrate Beltane on 30 April – 1 May in the Northern Hemisphere and 31 October – 1 November in the Southern Hemisphere, beginning and ending at sunset. Some Neopagans celebrate it at the astronomical midpoint between the spring equinox and summer solstice (or the full moon nearest this point). In the Northern Hemisphere, this midpoint is when the ecliptic longitude of the Sun reaches 45 degrees. In 2014, this was on 5 May. Celtic Reconstructionist Celtic Reconstructionists strive to reconstruct ancient Celtic religion. Their religious practices are based on research and historical accounts, but modified to suit modern life. They avoid syncretism and eclecticism (i.e. combining practices from unrelated cultures). Celtic Reconstructionists usually celebrate Beltane when the local hawthorn trees are in bloom. Many observe the traditional bonfire rites, to whatever extent this is feasible where they live. This may involve passing themselves and their pets or livestock between two bonfires, and bringing home a candle lit from the bonfire. If they are unable to make a bonfire or attend a bonfire ceremony, candles may be used instead. 
They may decorate their homes with a May Bush, branches from blooming thorn trees, or equal-armed rowan crosses. Holy wells may be visited and offerings made to the spirits or deities of the wells. Traditional festival foods may also be prepared. Wicca Wiccans use the name Beltane or Beltain for their May Day celebrations. It is one of the yearly Sabbats of their Wheel of the Year, following Ostara and preceding Midsummer. Unlike Celtic Reconstructionism, Wicca is syncretic and melds practices from many different cultures. In general, the Wiccan Beltane is more akin to the Germanic/English May Day festival, both in its significance (focusing on fertility) and its rituals (such as maypole dancing). Some Wiccans enact a ritual union of the May Lord and May Lady. Name In Irish, the festival is usually called Lá Bealtaine ('day of Beltane') while the month of May is Mí na Bealtaine ('month of Beltane'). In Scottish Gaelic, the festival day is Latha Bealltainn. Sometimes the older Scottish Gaelic spelling Bealltuinn is used. The word comes from Cétshamhain ('first of summer'), an old alternative name for the festival. The term Là Buidhe Bealltainn (Scottish) or Lá Buidhe Bealtaine (Irish), 'the bright or yellow day of Beltane', means the first of May. In Ireland it is referred to in a common folk tale as Luan Lae Bealtaine; the first day of the week (Monday/Luan) is added to highlight the first day of summer. The name is anglicized as Beltane, Beltain, Beltaine, Beltine and Beltany. Etymology Two modern etymologies have been proposed. Beltaine could derive from a Common Celtic *belo-te(p)niâ, meaning 'bright fire'. The element *belo- might be cognate with the English word bale (as in 'bale-fire') meaning 'white', 'bright' or 'shining'. Alternatively, Beltaine might stem from a Common Celtic form cognate with the name of the Lithuanian goddess of death, Giltinė, both from an earlier *gʷel-tiōn-, formed with the Proto-Indo-European root *gʷel- ('suffering, death'). The absence of syncope (Irish sound laws would rather predict a form **Beltne) can be explained by the popular belief that Beltaine was a compound of the word for 'fire', tene. In Ó Duinnín's Irish dictionary (1904), Beltane is referred to as Céadamh(ain), which it explains is short for Céad-shamh(ain), meaning 'first (of) summer'. The dictionary also states that Lá Bealtaine is May Day and Mí na Bealtaine is the month of May. Toponymy There are place names in Ireland containing the word Bealtaine, indicating places where Bealtaine festivities were once held. It is often anglicised as Beltany. There are three Beltanys in County Donegal, including the Beltany stone circle, and two in County Tyrone. In County Armagh there is a place called Tamnaghvelton ('the Beltane field'). Lisbalting ('the Beltane ringfort') is in County Tipperary, while Glasheennabaultina ('the Beltane stream') is the name of a stream joining the River Galey in County Limerick. See also Samhain Celtic calendar Calan Mai Walpurgis Night References Further reading Carmichael, Alexander (1992). Carmina Gadelica. Lindisfarne Press. Chadwick, Nora (1970). The Celts. London: Penguin. Danaher, Kevin (1972). The Year in Ireland. Dublin: Mercier. Evans-Wentz, W. Y. (1966, 1990). The Fairy-Faith in Celtic Countries. New York: Citadel. MacKillop, James (1998). Dictionary of Celtic Mythology. Oxford University Press. McNeill, F. Marian (1959). The Silver Bough, Vol. 1–4. Glasgow: William MacLellan. Simpson, Eve Blantyre (1908). Folk Lore in Lowland Scotland. London: J.M. Dent. 
External links Edinburgh's Beltane Fire Society Extract on The Beltane Fires from Sir James George Frazer's book The Golden Bough – 1922; from bartleby.com April observances Cross-quarter days Gaelic culture Holidays in Scotland Irish culture Irish folklore Irish mythology Irish words and phrases Galician culture Manx culture May observances Modern pagan holidays November observances Scottish culture Scottish folklore Scottish mythology
https://en.wikipedia.org/wiki/Bethlehem
Bethlehem
Bethlehem is a city in the West Bank, Palestine, located about 10 kilometres (6 miles) south of Jerusalem. It is the capital of the Bethlehem Governorate, and has a population of approximately 25,000 people. The city's economy is largely tourist-driven; international tourism peaks around and during Christmas, when Christians embark on a pilgrimage to the Church of the Nativity, revered as the location of the Nativity of Jesus. At the northern entrance of the city is Rachel's Tomb, the burial place of the biblical matriarch Rachel. Movement around the city is limited due to the Israeli West Bank barrier. The earliest-known mention of Bethlehem is in the Amarna correspondence of ancient Egypt, dated to 1350–1330 BCE, when the town was inhabited by the Canaanites. The Hebrew Bible, which describes the period of the Israelites, identifies Bethlehem as the birthplace of David as well as the city where he was anointed as the third monarch of the United Kingdom of Israel, and also states that it was built up as a fortified city by Rehoboam, the first monarch of the Kingdom of Judah. In the New Testament, the Gospel of Matthew and the Gospel of Luke identify the city as the birthplace of Jesus of Nazareth. Under the Roman Empire, the city of Bethlehem was destroyed by Hadrian, who was in the process of defeating the Jews involved in the Bar Kokhba revolt. However, Bethlehem's rebuilding was later promoted by Helena, the mother of Constantine the Great; Constantine expanded on his mother's project by commissioning the Church of the Nativity in 327 CE. In 529, the Church of the Nativity was heavily damaged by Samaritans involved in the Samaritan revolts; following the victory of the Byzantine Empire, it was subsequently rebuilt under Justinian I. Amidst the Muslim conquest of the Levant, Bethlehem became part of Jund Filastin in 637. Muslims continued to rule the city until 1099, when it was conquered by the Crusaders, who replaced the local Christian clergy—composed of representatives from the Greek Orthodox Church—with representatives from the Catholic Church. In the mid-13th century, Bethlehem's walls were demolished by the Mamluk Sultanate. However, they were rebuilt by the Ottoman Empire in the 16th century, following the Ottoman–Mamluk War. At the end of World War I, the defeated Ottomans lost control of Bethlehem to the British Empire. It was governed under the British Mandate for Palestine until 1948, when it was captured by Jordan during the First Arab–Israeli War (see Jordanian annexation of the West Bank). In 1967, the city was captured by Israel during the Third Arab–Israeli War. Since the Oslo Accords, which comprise a series of agreements between Israel and the Palestinian National Authority, Bethlehem has been designated as part of Area A of the West Bank, nominally placing it under full Palestinian control. While it was historically a city of Arab Christians, Bethlehem now has a majority of Arab Muslims; it is still home to a significant community of Palestinian Christians, however. Presently, Bethlehem is encircled by dozens of Israeli settlements, which effectively cut Palestinians in the city off from open access to their land and livelihoods, and which have triggered a steady exodus. Etymology The current name for Bethlehem in local languages is Bayt Laḥm in Arabic, literally meaning "house of meat", and Bet Leḥem in Hebrew, literally "house of bread" or "house of food". 
The earliest mention of Bethlehem as a place appears in the Amarna correspondence (c. 1350–1330 BCE), in which it is referred to as Bit-Laḫmi, a name for which the origins remain unknown. One longstanding suggestion in scholarship is that it derives from the Mesopotamian or Canaanite fertility god Laḫmu and his consort sister Lahamu, lahmo being the Chaldean word for "fertility". Biblical scholar William F. Albright believed that this hypothesis, first put forth by Otto Schroeder, was "certainly accurate". Albright noted that the pronunciation of the name had remained essentially the same for 3,500 years, even if the perceived meaning had shifted over time: "'Temple of the God Lakhmu' in Canaanite, 'House of Bread' in Hebrew and Aramaic, 'House of Meat' in Arabic." While Schroeder's theory is not widely accepted, it continues to find favour in academic literature over the later literal translations. Another suggestion is an association with the root l-h-m "to fight", but this is thought unlikely. History Canaanite period The earliest reference to Bethlehem appears in the Amarna correspondence (c. 1350–1330 BCE). In one of his six letters to Pharaoh, Abdi-Heba, the Egyptian-appointed governor of Jerusalem, appeals for aid in retaking Bit-Laḫmi in the wake of disturbances by Apiru mercenaries: "Now even a town near Jerusalem, Bit-Lahmi by name, a village which once belonged to the king, has fallen to the enemy... Let the king hear the words of your servant Abdi-Heba, and send archers to restore the imperial lands of the king!" It is thought that the similarity of this name to its modern forms indicates that it was originally a settlement of Canaanites who shared a Semitic cultural and linguistic heritage with the later arrivals. Laḫmu was the Akkadian god of fertility, worshipped by the Canaanites as Leḥem. Some time in the third millennium BCE, Canaanites erected a temple on the hill now known as the Hill of the Nativity, probably dedicated to Laḫmu. The temple, and subsequently the town that formed around it, was then known as Beit Lahama, "House (Temple) of Lahmu". By 1200 BCE, the area of Bethlehem, as well as much of the region, was conquered by the Philistines, which led the region to be known to the Greeks as "Philistia", later corrupted to "Palestine". A burial ground discovered in spring 2013 and surveyed in 2015 by a joint Italian-Palestinian team was found to cover 3 hectares (more than 7 acres); the necropolis originally contained more than 100 tombs in use between roughly 2200 BCE and 650 BCE. The archaeologists were able to identify at least 30 tombs. Israelite and Judean period Archaeological confirmation of Bethlehem as a city in the Kingdom of Judah was uncovered in 2012 at the archaeological dig at the City of David in the form of a bulla (seal impression in dried clay) in ancient Hebrew script that reads "From the town of Bethlehem to the King." According to the excavators, it was used to seal the string closing a shipment of grain, wine, or other goods sent as a tax payment in the 8th or 7th century BCE. Biblical scholars believe Bethlehem, located in the "hill country" of Judea, may be the same as the Biblical Ephrath, which means "fertile", as there is a reference to it in the Book of Micah as Bethlehem Ephratah. The Hebrew Bible also calls it Beth-Lehem Judah, and the New Testament describes it as the "City of David". It is first mentioned in the Bible as the place where the matriarch Rachel died and was buried "by the wayside". 
Rachel's Tomb, the traditional grave site, stands at the entrance to Bethlehem. According to the Book of Ruth, the valley to the east is where Ruth of Moab gleaned the fields and returned to town with Naomi. In the Books of Samuel, Bethlehem is mentioned as the home of Jesse, father of King David of Israel, and the site of David's anointment by the prophet Samuel. It was from the well of Bethlehem that three of his warriors brought him water when he was hiding in the cave of Adullam. Writing in the 4th century, the Pilgrim of Bordeaux reported that the sepulchers of David, Ezekiel, Asaph, Job, Jesse, and Solomon were located near Bethlehem. There has been no corroboration of this. Classical period The Gospel of Matthew (1:18–2:23) and the Gospel of Luke (2:1–39) represent Jesus as having been born in Bethlehem. Modern scholars, however, regard the two accounts as contradictory, and the Gospel of Mark, the earliest gospel, mentions nothing about Jesus having been born in Bethlehem, saying only that he came from Nazareth. Current scholars are divided on the actual birthplace of Jesus: some believe he was actually born in Nazareth, while others still hold that he was born in Bethlehem. Nonetheless, the tradition that Jesus was born in Bethlehem was prominent in the early church. In around 155, the apologist Justin Martyr recommended that those who doubted Jesus was really born in Bethlehem could go there and visit the very cave where he was supposed to have been born. The same cave is also referenced by the apocryphal Gospel of James and the fourth-century church historian Eusebius. After the Bar Kokhba revolt (c. 132–136 CE) was crushed, the Roman emperor Hadrian converted the Christian site above the Grotto into a shrine dedicated to the Greek god Adonis, to honour his favourite, the Greek youth Antinous. In around 395 CE, the Church Father Jerome wrote in a letter: "Bethlehem... belonging now to us... was overshadowed by a grove of Tammuz, that is to say, Adonis, and in the cave where once the infant Christ cried, the lover of Venus was lamented." Many scholars have taken this letter as evidence that the cave of the nativity over which the Church of the Nativity was later built had at one point been a shrine to the ancient Near Eastern fertility god Tammuz. Eusebius, however, mentions nothing about the cave having been associated with Tammuz, and there are no other Patristic sources that suggest Tammuz had a shrine in Bethlehem. Peter Welten has argued that the cave was never dedicated to Tammuz and that Jerome misinterpreted Christian mourning over the Massacre of the Innocents as a pagan ritual over Tammuz's death. Joan E. Taylor has countered this contention by arguing that Jerome, as an educated man, could not have been so naïve as to mistake Christian mourning over the Massacre of the Innocents for a pagan ritual for Tammuz. In 326–328, the empress Helena, consort of the emperor Constantius Chlorus and mother of the emperor Constantine the Great, made a pilgrimage to Syria Palaestina, in the course of which she visited the ruins of Bethlehem. The Church of the Nativity was built at her initiative over the cave where Jesus was purported to have been born. During the Samaritan revolt of 529, Bethlehem was sacked and its walls and the Church of the Nativity destroyed; they were rebuilt on the orders of the Emperor Justinian I. In 614, the Persian Sassanid Empire, supported by Jewish rebels, invaded Palestina Prima and captured Bethlehem. 
A story recounted in later sources holds that they refrained from destroying the church on seeing the magi depicted in Persian clothing in a mosaic. Middle Ages In 637, shortly after Jerusalem was captured by the Muslim armies, 'Umar ibn al-Khattāb, the second Caliph, promised that the Church of the Nativity would be preserved for Christian use. A mosque dedicated to Umar was built upon the place in the city where he prayed, next to the church. Bethlehem then passed through the control of the Islamic caliphates of the Umayyads in the 8th century, then the Abbasids in the 9th century. A Persian geographer recorded in the mid-9th century that a well preserved and much venerated church existed in the town. In 985, the Arab geographer al-Muqaddasi visited Bethlehem, and referred to its church as the "Basilica of Constantine, the equal of which does not exist anywhere in the country-round." In 1009, during the reign of the sixth Fatimid Caliph, al-Hakim bi-Amr Allah, the Church of the Nativity was ordered to be demolished, but was spared by local Muslims, because they had been permitted to worship in the structure's southern transept. In 1099, Bethlehem was captured by the Crusaders, who fortified it and built a new monastery and cloister on the north side of the Church of the Nativity. The Greek Orthodox clergy were removed from their sees and replaced with Latin clerics. Up until that point the official Christian presence in the region was Greek Orthodox. On Christmas Day 1100, Baldwin I, first king of the Frankish Kingdom of Jerusalem, was crowned in Bethlehem, and that year a Latin episcopate was also established in the town. In 1187, Saladin, the Sultan of Egypt and Syria who led the Muslim Ayyubids, captured Bethlehem from the Crusaders. The Latin clerics were forced to leave, allowing the Greek Orthodox clergy to return. Saladin agreed to the return of two Latin priests and two deacons in 1192. However, Bethlehem suffered from the loss of the pilgrim trade, as there was a sharp decrease of European pilgrims. William IV, Count of Nevers had promised the Christian bishops of Bethlehem that if Bethlehem should fall under Muslim control, he would welcome them in the small town of Clamecy in present-day Burgundy, France. As a result, the Bishop of Bethlehem duly took up residence in the hospital of Panthenor, Clamecy, in 1223. Clamecy remained the continuous 'in partibus infidelium' seat of the Bishopric of Bethlehem for almost 600 years, until the French Revolution in 1789. Bethlehem, along with Jerusalem, Nazareth, and Sidon, was briefly ceded to the Crusader Kingdom of Jerusalem by a treaty between Holy Roman Emperor Frederick II and Ayyubid Sultan al-Kamil in 1229, in return for a ten-year truce between the Ayyubids and the Crusaders. The treaty expired in 1239, and Bethlehem was recaptured by the Muslims in 1244. In 1250, with the coming to power of the Mamluks under Rukn al-Din Baibars, tolerance of Christianity declined. Members of the clergy left the city, and in 1263 the town walls were demolished. The Latin clergy returned to Bethlehem the following century, establishing themselves in the monastery adjoining the Basilica of the Nativity. The Greek Orthodox were given control of the basilica and shared control of the Milk Grotto with the Latins and the Armenians. Ottoman era From 1517, during the years of Ottoman control, custody of the Basilica was bitterly disputed between the Catholic and Greek Orthodox churches. 
By the end of the 16th century, Bethlehem had become one of the largest villages in the District of Jerusalem, and was subdivided into seven quarters. The Basbus family served as the heads of Bethlehem among other leaders during this period. The Ottoman tax record and census from 1596 indicate that Bethlehem had a population of 1,435, making it the 13th largest village in Palestine at the time. Its total revenue amounted to 30,000 akçe. Bethlehem paid taxes on wheat, barley and grapes. The Muslims and Christians were organized into separate communities, each having its own leader. Five leaders represented the village in the mid-16th century, three of whom were Muslims. Ottoman tax records suggest that the Christian population was slightly more prosperous or grew more grain than grapes (the former being a more valuable commodity). From 1831 to 1841, Palestine was under the rule of the Muhammad Ali Dynasty of Egypt. During this period, the town suffered an earthquake as well as the destruction of the Muslim quarter in 1834 by Egyptian troops, apparently as a reprisal for the murder of a favored loyalist of Ibrahim Pasha. In 1841, Bethlehem came under Ottoman rule once again and remained so until the end of World War I. Under the Ottomans, Bethlehem's inhabitants faced unemployment, compulsory military service, and heavy taxes, resulting in mass emigration, particularly to South America. An American missionary in the 1850s reported a population of under 4,000, nearly all of whom belonged to the Greek Church. He also noted that a lack of water limited the town's growth. Socin found from an official Ottoman village list from about 1870 that Bethlehem had a population of 179 Muslims in 59 houses, 979 "Latins" in 256 houses, 824 "Greeks" in 213 houses, and 41 Armenians in 11 houses, a total of 539 houses. The population count only included men. Hartmann found that Bethlehem had 520 houses. Modern era Bethlehem was administered by the British Mandate from 1920 to 1948. In the United Nations General Assembly's 1947 resolution to partition Palestine, Bethlehem was included in the international enclave of Jerusalem to be administered by the United Nations. Jordan captured the city during the 1948 Arab–Israeli War. Many refugees from areas captured by Israeli forces in 1947–48 fled to the Bethlehem area, primarily settling in what became the official refugee camps of 'Azza (Beit Jibrin) and 'Aida in the north and Dheisheh in the south. The influx of refugees significantly transformed Bethlehem's Christian majority into a Muslim one. Jordan retained control of the city until the Six-Day War in 1967, when Bethlehem was captured by Israel, along with the rest of the West Bank. During the early months of the First Intifada, on 5 May 1989, Milad Anton Shahin, aged 12, was shot dead by Israeli soldiers. Replying to a Member of the Knesset in August 1990, Defence Minister Yitzhak Rabin stated that a group of reservists in an observation post had come under attack by stone throwers. The commander of the post, a senior non-commissioned officer, had fired two plastic bullets in deviation from operational rules. No evidence was found that this caused the boy's death. The officer was found guilty of illegal use of a weapon and sentenced to five months' imprisonment, two of them to be served in prison doing public service. He was also demoted. 
On December 21, 1995, Israeli troops withdrew from Bethlehem, and three days later the city came under the administration and military control of the Palestinian National Authority in accordance with the Interim Agreement on the West Bank and the Gaza Strip. During the Second Palestinian Intifada in 2000–2005, Bethlehem's infrastructure and tourism industry were damaged. In 2002, it was a primary combat zone in Operation Defensive Shield, a major military counteroffensive by the Israel Defense Forces (IDF). The IDF besieged the Church of the Nativity, where dozens of Palestinian militants had sought refuge. The siege lasted 39 days and several militants were killed; it ended with an agreement to exile 13 of the militants to foreign countries.

Today, the city is surrounded by two bypass roads for Israeli settlers, leaving the inhabitants squeezed between thirty-seven Jewish enclaves, where a quarter of all West Bank settlers, roughly 170,000, live; the gap between the two roads is closed by the 8-metre-high Israeli West Bank barrier, which cuts Bethlehem off from its sister city Jerusalem. Christian families that have lived in Bethlehem for hundreds of years are being forced to leave as land in Bethlehem is seized, and homes bulldozed, for the construction of thousands of new Israeli homes. Land seizures for Israeli settlements have also prevented the construction of a new hospital for the inhabitants of Bethlehem, while the barrier separates dozens of Palestinian families from their farmland and Christian communities from their places of worship.

Geography

Bethlehem is situated in the Judean Mountains, at a higher elevation above sea level than nearby Jerusalem. The city is located northeast of Gaza City and the Mediterranean Sea, west of Amman, Jordan, southeast of Tel Aviv, Israel, and south of Jerusalem. Nearby cities and towns include Beit Safafa and Jerusalem to the north, Beit Jala to the northwest, Husan to the west, al-Khadr and Artas to the southwest, and Beit Sahour to the east. Beit Jala and Beit Sahour form an agglomeration with Bethlehem, and the Aida and Azza refugee camps are located within the city limits.

In the center of Bethlehem is its old city, which consists of eight quarters laid out in a mosaic style, forming the area around Manger Square. The quarters include the Christian an-Najajreh, al-Farahiyeh, al-Anatreh, al-Tarajmeh, al-Qawawsa and Hreizat quarters, and al-Fawaghreh, the only Muslim quarter. Most of the Christian quarters are named after the Arab Ghassanid clans that settled there. Al-Qawawsa Quarter was formed by Arab Christian emigrants from the nearby town of Tuqu' in the 18th century. There is also a Syriac quarter outside of the old city, whose inhabitants originate from Midyat and Ma'asarte in Turkey. The total population of the old city is about 5,000.

Climate

Bethlehem has a Mediterranean climate, with hot and dry summers and mild, wetter winters. Winter temperatures (mid-December to mid-March) can be cool and rainy. January is the coldest month, with temperatures ranging from 1 to 13 degrees Celsius (33–55 °F). From May through September, the weather is warm and sunny. August is the hottest month, with a high of 30 degrees Celsius (86 °F). Bethlehem receives most of its annual rainfall, about 70%, between November and January. Bethlehem's average annual relative humidity is 60% and reaches its highest rates between January and February; humidity levels are at their lowest in May.
Night dew may occur on up to 180 days per year. The city is influenced by the Mediterranean Sea breeze that occurs around mid-day, but is also affected by annual waves of hot, dry, sand- and dust-laden Khamaseen winds from the Arabian Desert during April, May and mid-June.

Demographics

Population

According to Ottoman tax records, Christians made up roughly 60% of the population in the early 16th century, while the Christian and Muslim populations became equal by the mid-16th century. However, no Muslim inhabitants were counted by the end of the century, when a population of 287 adult male taxpayers was recorded. Christians, like all non-Muslims throughout the Ottoman Empire, were required to pay the jizya tax.

In 1867, an American visitor described the town as having a population of 3,000 to 4,000, of whom about 100 were Protestants, 300 were Muslims and "the remainder belonging to the Latin and Greek Churches with a few Armenians". Another report from the same year put the Christian population at 3,000, with an additional 50 Muslims. An 1885 source put the population at approximately 6,000, "principally Christians, Latins and Greeks", with no Jewish inhabitants.

In 1948, the religious makeup of the city was 85% Christian, mostly of the Greek Orthodox and Roman Catholic denominations, and 13% Muslim. In the 1967 census taken by Israeli authorities, the town of Bethlehem proper numbered 14,439 inhabitants; its 7,790 Muslim inhabitants represented 53.9% of the population, while Christians of various denominations numbered 6,231, or 46.1%.

In the PCBS's 1997 census, the city had a population of 21,670, including a total of 6,570 refugees, accounting for 30.3% of the city's population. In 1997, the age distribution of Bethlehem's inhabitants was 27.4% under the age of 10, 20% from 10 to 19, 17.3% from 20 to 29, 17.7% from 30 to 44, 12.1% from 45 to 64 and 5.3% above the age of 65. There were 11,079 males and 10,594 females. In the 2007 PCBS census, Bethlehem had a population of 25,266, of which 12,753 were males and 12,513 were females. There were 6,709 housing units, of which 5,211 were households. The average household consisted of 4.8 family members.

Christian population

After the Muslim conquest of the Levant in the 630s, the local Christians were Arabized, even though large numbers were already ethnically Arabs of the Ghassanid clans. Bethlehem's two largest Arab Christian clans trace their ancestry to the Ghassanids, including al-Farahiyyah and an-Najajreh. The former descended from the Ghassanids who migrated from Yemen and from the Wadi Musa area in present-day Jordan, while an-Najajreh descend from Najran. Another Bethlehem clan, al-Anatreh, also traces its ancestry to the Ghassanids.

The percentage of Christians in the town has been in steady decline since the mid-twentieth century. In 1947, Christians made up 85% of the population, but by 1998 the figure had declined to 40%. In 2005, the mayor of Bethlehem, Victor Batarseh, explained that "due to the stress, either physical or psychological, and the bad economic situation, many people are emigrating, either Christians or Muslims, but it is more apparent among Christians, because they already are a minority." The Palestinian Authority is officially committed to equality for Christians, although there have been incidents of violence against them by the Preventive Security Service and militant factions.
In 2006, the Palestinian Centre for Research and Cultural Dialogue conducted a poll among the city's Christians, according to which 90% said they had had Muslim friends, 73.3% agreed that the PNA treated Christian heritage in the city with respect, and 78% attributed the exodus of Christians to the Israeli blockade. The only mosque in the Old City is the Mosque of Omar, located in Manger Square.

By 2016, the Christian population of Bethlehem had declined to only 16%. A study by the Pew Research Center concluded that the decline in the Arab Christian population of the area was partially a result of a lower birth rate among Christians than among Muslims, but also partially due to the fact that Christians were more likely to emigrate from the region than any other religious group. Amnon Ramon, a researcher at the Jerusalem Institute for Policy Research, stated that more Christians were emigrating than Muslims because it is easier for Arab Christians to integrate into western communities than for Arab Muslims, since many of them attend church-affiliated schools where they are taught European languages. A higher percentage of Christians in the region are urban-dwellers, which also makes it easier for them to emigrate and assimilate into western populations. A statistical analysis of the Christian exodus cited a lack of economic and educational opportunity, especially given the Christians' middle-class status and higher education. Since the Second Intifada, 10% of the Christian population have left the city. However, it is likely that there are many other factors, most of which are shared with the Palestinian population as a whole.

Economy

Shopping is a major attraction, especially during the Christmas season. The city's main streets and old markets are lined with shops selling Palestinian handicrafts, Middle Eastern spices, jewelry and oriental sweets such as baklawa. Olive wood carvings are the item most purchased by tourists visiting Bethlehem. Religious handicrafts include ornaments handmade from mother-of-pearl, as well as olive wood statues, boxes, and crosses. Other industries include stone and marble-cutting, textiles, furniture and furnishings. Bethlehem factories also produce paints, plastics, synthetic rubber, pharmaceuticals, construction materials and food products, mainly pasta and confectionery. Cremisan Wine, founded in 1885, is a winery run by monks in the Monastery of Cremisan. The grapes are grown mainly in the al-Khader district. In 2007, the monastery's wine production was around 700,000 liters per year.

In 2008, Bethlehem hosted the largest economic conference to date in the Palestinian territories. It was initiated by Palestinian Prime Minister and former Finance Minister Salam Fayyad to convince more than a thousand businessmen, bankers and government officials from throughout the Middle East to invest in the West Bank and Gaza Strip. A total of 1.4 billion US dollars was secured for business investments in the Palestinian territories.

Tourism

Tourism is Bethlehem's main industry. Unlike other Palestinian localities prior to 2000, the majority of the employed residents did not have jobs in Israel. More than 20% of the working population is employed in the industry. Tourism accounts for approximately 65% of the city's economy and 11% of that of the Palestinian National Authority. The city has more than two million visitors every year.
Tourism in Bethlehem ground to a halt for over a decade after the Second Intifada, but gradually began to pick back up in the early 2010s. The Church of the Nativity is one of Bethlehem's major tourist attractions and a magnet for Christian pilgrims. It stands in the center of the city, on Manger Square, over a grotto or cave called the Holy Crypt, where Jesus is believed to have been born. Nearby is the Milk Grotto, where the Holy Family took refuge on their Flight to Egypt, and next door is the cave where St. Jerome spent thirty years creating the Vulgate, the dominant Latin version of the Bible until the Reformation. There are over thirty hotels in Bethlehem. Jacir Palace, built in 1910 near the church, is one of Bethlehem's most successful hotels and its oldest. It was closed down in 2000 due to the Israeli-Palestinian conflict, but reopened in 2005 as the Jacir Palace InterContinental at Bethlehem.

Religious significance and commemoration

Birthplace of Jesus

In the New Testament, the Gospel of Luke says that Jesus' parents traveled from Nazareth to Bethlehem, where Jesus was born. The Gospel of Matthew mentions Bethlehem as the place of birth and adds that King Herod was told that a 'King of the Jews' had been born in the town, prompting Herod to order the killing of all the boys who were two years old or under in the town and surrounding area. Joseph, warned of Herod's impending action by an angel of the Lord, decided to flee to Egypt with his family, and later settled in Nazareth after Herod's death.

Early Christian traditions describe Jesus as being born in Bethlehem: in one account, a verse in the Book of Micah is interpreted as a prophecy that the Messiah would be born there. The second-century Christian apologist Justin Martyr stated in his Dialogue with Trypho (written c. 155–161) that the Holy Family had taken refuge in a cave outside of the town and then placed Jesus in a manger. Origen of Alexandria, writing around the year 247, referred to a cave in the town of Bethlehem which local people believed was the birthplace of Jesus. This cave was possibly one which had previously been a site of the cult of Tammuz. The Gospel of Mark and the Gospel of John do not include a nativity narrative and refer to Jesus only as being from Nazareth. In a 2005 article in Archaeology magazine, archaeologist Aviram Oshri points to an absence of evidence for the settlement of Bethlehem near Jerusalem at the time when Jesus was born, and postulates that Jesus was born in Bethlehem of Galilee. In a 2011 article in Biblical Archaeology Review magazine, Jerome Murphy-O'Connor argues for the traditional position that Jesus was born in Bethlehem near Jerusalem.

Christmas celebrations

Christmas rites are held in Bethlehem on three different dates: December 25 is the traditional date observed by the Roman Catholic and Protestant denominations, but Greek, Coptic and Syrian Orthodox Christians celebrate Christmas on January 6, and Armenian Orthodox Christians on January 19. Most Christmas processions pass through Manger Square, the plaza outside the Basilica of the Nativity. Roman Catholic services take place in St. Catherine's Church, and Protestants often hold services at Shepherds' Fields.

Other religious festivals

Bethlehem celebrates festivals related to saints and prophets associated with Palestinian folklore. One such festival is the annual Feast of Saint George (al-Khadr) on May 5–6.
During the celebrations, Greek Orthodox Christians from the city march in procession to the nearby town of al-Khader to baptize newborns in the waters around the Monastery of St. George and to ritually sacrifice a sheep. The Feast of St. Elijah is commemorated by a procession to Mar Elias, a Greek Orthodox monastery north of Bethlehem.

Culture

Embroidery

The women embroiderers of Bethlehem were known for their bridalwear. Bethlehem embroidery was renowned for its "strong overall effect of colors and metallic brilliance". Less formal dresses were made of indigo fabric, with a sleeveless coat (bisht) of locally woven wool worn on top. Dresses for special occasions were made of striped silk with winged sleeves, worn with a short taqsireh jacket known as the Bethlehem jacket. The taqsireh was made of velvet or broadcloth, usually with heavy embroidery.

Bethlehem work was unique in its use of gold or silver cord, or silk cord, couched onto the silk, wool, felt or velvet used for the garment, to create stylized floral patterns with free or rounded lines. This technique was used for "royal" wedding dresses (thob malak), taqsirehs and the shatwehs worn by married women. It has been traced by some to Byzantium, and by others to the formal costumes of the Ottoman Empire's elite. As Bethlehem was a Christian village, local women were also exposed to the detailing on church vestments, with their heavy embroidery and silver brocade.

Mother-of-pearl carving

The art of mother-of-pearl carving is said to have been a Bethlehem tradition since the 15th century, when it was introduced by Franciscan friars from Italy. A constant stream of pilgrims generated a demand for these items, which also provided jobs for women. The industry was noted by Richard Pococke, who visited Bethlehem in 1727.

Cultural centers and museums

Bethlehem is home to the Palestinian Heritage Center, established in 1991, which aims to preserve and promote Palestinian embroidery, art and folklore. The International Center of Bethlehem is another cultural center, concentrating primarily on the culture of Bethlehem; it provides language and guide training, women's studies, and arts and crafts displays and training. The Bethlehem branch of the Edward Said National Conservatory of Music has about 500 students. Its primary goals are to teach children music, train teachers for other schools, sponsor music research and promote the study of Palestinian folk music.

Bethlehem has four museums. The Crib of the Nativity Theatre and Museum offers visitors 31 three-dimensional models depicting the significant stages of the life of Jesus, and its theater presents a 20-minute animated show. The Badd Giacaman Museum, located in the Old City of Bethlehem, dates back to the 18th century and is primarily dedicated to the history and process of olive oil production. Baituna al-Talhami Museum, established in 1972, contains displays of Bethlehem culture. The International Museum of Nativity was built by the United Nations Educational, Scientific and Cultural Organization (UNESCO) to exhibit "high artistic quality in an evocative atmosphere".

Local government

Bethlehem is the muhafaza (seat), or district capital, of the Bethlehem Governorate. Bethlehem held its first municipal elections in 1876, after the mukhtars ("heads") of the quarters of Bethlehem's Old City (excluding the Syriac Quarter) made the decision to elect a local council of seven members to represent each clan in the town.
A Basic Law was established stipulating that if the elected mayor was a Catholic, his deputy should be from the Greek Orthodox community. Throughout Bethlehem's rule by the British and Jordan, the Syriac Quarter was allowed to participate in the election, as were the Ta'amrah Bedouins and Palestinian refugees, raising the number of municipal council members to 11. In 1976, an amendment was passed to allow women to vote and become council members, and later the voting age was increased from 21 to 25.

Several political parties have branches on the council, including Communist, Islamist and secular factions. The leftist factions of the Palestine Liberation Organization (PLO), such as the Popular Front for the Liberation of Palestine (PFLP) and the Palestinian People's Party (PPP), usually dominate the reserved seats. Hamas gained the majority of the open seats in the 2005 Palestinian municipal elections.

Mayors

In the October 2012 municipal elections, Fatah member Vera Baboun won, becoming the first female mayor of Bethlehem.

Education

According to the Palestinian Central Bureau of Statistics (PCBS), in 1997 approximately 84% of Bethlehem's population over the age of 10 was literate. Of the city's population, 10,414 were enrolled in schools (4,015 in primary school, 3,578 in secondary and 2,821 in high school). About 14.1% of high school students received diplomas. There were 135 schools in the Bethlehem Governorate in 2006; 100 were run by the Education Ministry of the Palestinian National Authority, seven by the United Nations Relief and Works Agency (UNRWA) and 28 were private.

Bethlehem is home to Bethlehem University, a Catholic Christian co-educational institution of higher learning founded in 1973 in the Lasallian tradition, open to students of all faiths. Bethlehem University is the first university established in the West Bank and can trace its roots to 1893, when the De La Salle Christian Brothers opened schools throughout Palestine and Egypt.

Transportation

Bethlehem has three bus stations owned by private companies which offer service to Jerusalem, Beit Jala, Beit Sahour, Hebron, Nahalin, Battir, al-Khader, al-Ubeidiya and Beit Fajjar. There are two taxi stations that make trips to Beit Sahour, Beit Jala, Jerusalem, Tuqu' and Herodium, and two car rental agencies, Murad and 'Orabi. Buses and taxis with West Bank licenses are not allowed to enter Israel, including Jerusalem, without a permit.

The Israeli construction of the West Bank barrier has affected Bethlehem politically, socially, and economically. The barrier is located along the northern side of the town's built-up area, within a short distance of houses in the Aida refugee camp on one side and the Jerusalem municipality on the other. Most entrances and exits from the Bethlehem agglomeration to the rest of the West Bank are currently subject to Israeli checkpoints and roadblocks, and the level of access varies based on Israeli security directives. Travel for Bethlehem's Palestinian residents from the West Bank into Jerusalem is regulated by a permit system. Palestinians require a permit to enter the Jewish holy site of Rachel's Tomb, while Israeli citizens are barred from entering Bethlehem and the nearby biblical Solomon's Pools.

Twin towns – sister cities

Bethlehem is twinned with:

Abu Dhabi, U.A.E.
Assisi, Italy
Athens, Greece
Barranquilla, Colombia
Brescia, Italy
Burlington, USA
Capri, Italy
Catanzaro, Italy
Chartres, France
Chivasso, Italy
Civitavecchia, Italy
Cologne, Germany
Concepción, Chile
Cori, Italy
Creil, France
Cusco, Peru
Częstochowa, Poland
Dakhla, Western Sahara
Este, Italy
Faggiano, Italy
Florence, Italy
Gallipoli, Italy
Għajnsielem, Malta
Glasgow, Scotland, U.K.
Greccio, Italy
Grenoble, France
Lourdes, France
Monterrey, Mexico
Montevarchi, Italy
Montpellier, France
Natal, Brazil
Pratovecchio Stia, Italy
Saint Petersburg, Russia
Sarpsborg, Norway
Steyr, Austria
Villa Alemana, Chile
Zaragoza, Spain

See also

Bethlehem, Pennsylvania
Bethlehem, Wales
Star of Bethlehem
Benjamin
Benjamin (Bīnyāmīn; "Son of (the) right") was the younger of the two sons of Jacob and Rachel (Jacob's thirteenth child and twelfth and youngest son) in Jewish, Christian and Islamic tradition. He was also the progenitor of the Israelite Tribe of Benjamin. Unlike Rachel's first son, Joseph, Benjamin was born in Canaan, according to the biblical narrative. In the Samaritan Pentateuch, Benjamin's name appears as "Binyamēm" (Samaritan Hebrew: , "son of days"). In the Quran, Benjamin is referred to as a righteous young child who remained with Jacob when the older brothers plotted against Joseph. Later rabbinic traditions name him as one of four ancient Israelites who died without sin, the other three being Chileab, Jesse and Amram.

Name

The name is first mentioned in letters from King Sîn-kāšid of Uruk (1801–1771 BC), who called himself "King of Amnanum" and was a member of the Amorite tribal group the "Binu-Jamina" (singular "Binjamin"; Akkadian "Mar-Jamin"). The name means "Sons/Son of the South" and is regarded as a linguistic forerunner of the Old Testament name "Benjamin".

According to the Hebrew Bible, Benjamin's name arose when Jacob deliberately changed the name "Benoni", the original name of Benjamin, since Benoni was an allusion to Rachel's dying just after she had given birth, as it means "son of my pain". Textual scholars regard these two names as fragments of naming narratives coming from different sources, one being the Jahwist and the other being the Elohist.

Unusually for one of the 12 tribes of Israel, the Bible does not explain the etymology of Benjamin's name. Medieval commentator Rashi gives two different explanations, based on Midrashic sources. The first is "son of the south", with south derived from the word for the right-hand side, referring to the birth of Benjamin in Canaan, as compared with the birth of all the other sons of Jacob in Aram. Modern scholars have proposed that "son of the south" / "right" is instead a reference to the tribe being subordinate to the more dominant tribe of Ephraim. Alternatively, Rashi suggests it means "son of days", meaning a son born in Jacob's old age. The Samaritan Pentateuch consistently spells his name "בנימים", with a terminal mem ("Binyamim"), which could be translated literally as "son of days", in line with the interpretation that the name was a reference to the advanced age of Jacob when Benjamin was born.

According to classical rabbinical sources, Benjamin was only born after Rachel had fasted for a long time, as a religious devotion with the hope of a new child as a reward. By then Jacob had become over 100 years old. Benjamin is treated as a young child in most of the Biblical narrative, but at one point is abruptly described as the father of ten sons. Textual scholars believe that this is the result of the genealogical passage, in which his children are named, being from a much later source than the Jahwist and Elohist narratives, which make up most of the Joseph narrative and consistently describe Benjamin as a child.

By allusion to the biblical Benjamin, in French, Polish and Spanish, "Benjamin" (benjamin/beniamin/benjamín, respectively) is a common noun meaning the youngest child of a family, especially a particularly favoured one (with a similar connotation to "baby of the family").

Israelites in Egypt

The Torah's Joseph narrative, at a stage when Joseph is unrecognised by his brothers, describes Joseph as testing whether his brothers have reformed by secretly planting a silver cup in Benjamin's bag.
Joseph then publicly searched the bags and, after finding the cup in Benjamin's possession, demanded that Benjamin become his slave as a punishment. The narrative goes on to state that Judah, on behalf of the other brothers, begged Joseph to enslave him instead of Benjamin, since the enslavement of Benjamin would break Jacob's heart; this caused Joseph to recant and reveal his identity.

The midrashic book of Jasher argues that prior to revealing his identity, Joseph asked Benjamin to find his missing brother (i.e. Joseph) via astrology, using an astrolabe-like tool. It continues by stating that Benjamin divined that the man on the throne was Joseph, so Joseph identified himself to Benjamin (but not the other brothers) and revealed his scheme (as in the Torah) to test how fraternal the other brothers were. Some classical rabbinical sources argue that Joseph identified himself for other reasons. In these sources, Benjamin swore an oath, on the memory of Joseph, that he was innocent of theft and, when challenged about how believable the oath would be, explained that remembering Joseph was so important to him that he had named his sons in Joseph's honour. These sources go on to state that Benjamin's oath touched Joseph so deeply that Joseph was no longer able to pretend to be a stranger.

In the narrative, just prior to this test, when Joseph had first met all of his brothers (but not identified himself to them), he had held a feast for them; the narrative heavily implies that Benjamin was Joseph's favorite brother, since he is overcome with tears when he first meets Benjamin in particular, and he gives Benjamin five times as much food as he apportions to the others. According to textual scholars, this is really the Jahwist's account of the reunion after Joseph identifies himself, and the account of the threat to enslave Benjamin is just the Elohist's version of the same event, with the Elohist being more terse about Joseph's emotions towards Benjamin, merely mentioning that Benjamin was given five times as many gifts as the others.

Origin

Biblical scholars believe, due to their geographic overlap and their treatment in older passages, that Ephraim and Manasseh were originally considered one tribe, that of Joseph. According to several biblical scholars, Benjamin was also originally part of this single tribe, but the biblical account of Joseph as his father became lost. The description of Benjamin being born after the arrival in Canaan is thought by some scholars to refer to the tribe of Benjamin coming into existence by branching from the Joseph group after the tribe had settled in Canaan. A number of biblical scholars suspect that the distinction of the Joseph tribes (including Benjamin) is that they were the only Israelites who went to Egypt and returned, while the main Israelite tribes simply emerged as a subculture from the Canaanites and had remained in Canaan throughout. According to this view, the story of Jacob's visit to Laban to obtain a wife originated as a metaphor for this migration, with the property and family which were gained from Laban representing the gains of the Joseph tribes by the time they returned from Egypt. According to textual scholars, the Jahwist version of the Laban narrative only mentions the Joseph tribes and Rachel, and does not mention the other tribal matriarchs whatsoever.

Benjamin's sons

According to Genesis 46:21, Benjamin had ten sons: Bela, Becher, Ashbel, Gera, Naaman, Ehi, Rosh, Muppim, Huppim, and Ard.
The names of his wife or wives are not given, but the Book of Jubilees calls his wife Ijasaka, and the Book of Jasher mentions two wives, Mechalia the daughter of Aram and Aribath the daughter of Shomron. The classical rabbinical tradition adds that each son's name honors Joseph:

Belah (meaning swallow), in reference to Joseph disappearing (being swallowed up)
Becher (meaning first born), in reference to Joseph being the first child of Rachel
Ashbel (meaning capture), in reference to Joseph having suffered captivity
Gera (meaning grain), in reference to Joseph living in a foreign land (Egypt)
Naaman (meaning grace), in reference to Joseph having graceful speech
Ehi (meaning my brother), in reference to Joseph being Benjamin's only full brother (as opposed to half-brothers)
Rosh (meaning elder), in reference to Joseph being older than Benjamin
Muppim (meaning double mouth), in reference to Joseph passing on what he had been taught by Jacob
Huppim (meaning marriage canopies), in reference to Joseph being married in Egypt, while Benjamin was not there
Ard (meaning wanderer/fugitive), in reference to Joseph being like a rose

There is a disparity between the list given in Genesis 46 and that in Numbers 26, where the sons of Benjamin are listed along with the tribes of which they are the progenitors:

Belah, progenitor of the Belaites, is in both lists
Ashbel, progenitor of the Ashbelites, is in both lists
Ahiram, progenitor of the Ahiramites, appears in this list but not the first
Shupham, progenitor of the Shuphamites, corresponds to Muppim from the first list
Hupham, progenitor of the Huphamites, corresponds to Huppim from the first list

Becher, Gera, Ehi, and Rosh are omitted from the second list. Ard and Naaman, sons of Benjamin according to Genesis 46, are listed in Numbers 26 as the sons of Belah, and are the progenitors of the Ardites and the Naamites respectively.

In Islam

Though not named in the Quran, Benjamin (Arabic: بنيامين, Benyamīn) is referred to as the righteous youngest son of Yaqub in the narrative of Yusuf in Islamic tradition. Apart from that, however, Islamic tradition does not provide much detail regarding Benjamin's life, and refers to him as being born from Jacob's wife Rahīl. As with Jewish tradition, it also links the names of Benjamin's children to Joseph.

See also

Benjamin (disambiguation), for a list of persons with the given name Benjamin
Paul the Apostle, also known as Rabbi Shaul (a student of Gamliel) or Paul the Jew, from the Tribe of Benjamin; see Romans 11:1 and Philippians 3:5
Mordecai the Jew, from the Tribe of Benjamin; see Esther 2:5
Queen Esther, also known as Hadassah, the cousin of Mordecai the Jew; see the Book of Esther
Black Sabbath
Black Sabbath were an English rock band formed in Birmingham in 1968 by guitarist Tony Iommi, drummer Bill Ward, bassist Geezer Butler and vocalist Ozzy Osbourne. They are often cited as pioneers of heavy metal music, and helped define the genre with releases such as Black Sabbath (1970), Paranoid (1970) and Master of Reality (1971). The band had multiple line-up changes following Osbourne's departure in 1979, with Iommi the only constant member throughout their history.

After previous iterations of the group – the Polka Tulk Blues Band and Earth – the band settled on the name Black Sabbath in 1969. They distinguished themselves through occult themes with horror-inspired lyrics and down-tuned guitars. Signing to Philips Records in November 1969, they released their first single, "Evil Woman", in January 1970, and their debut album, Black Sabbath, was released the following month. Though it received a negative critical response, the album was a commercial success, leading to a follow-up record, Paranoid, later that year. The band's popularity grew, and by 1973's Sabbath Bloody Sabbath, critics were starting to respond favourably.

Osbourne's excessive substance abuse led to his firing in 1979. He was replaced by former Rainbow vocalist Ronnie James Dio. Following two albums with Dio, Heaven and Hell and Mob Rules, the second of which saw drummer Vinny Appice replace Ward, Black Sabbath endured many personnel changes from the mid-1980s to the mid-1990s that included vocalists Ian Gillan, Glenn Hughes, Ray Gillen and Tony Martin, as well as several drummers and bassists, with Butler's departure in 1984 leaving Iommi as the only remaining original member. Martin, who replaced Gillen in 1987, was the second-longest-serving vocalist after Osbourne and recorded three albums with Black Sabbath before his dismissal in 1991. That same year, Iommi rejoined with Butler, Dio and Appice to record Dehumanizer (1992). After two more studio albums with Martin, who returned to replace Dio in 1993, the band's original line-up reunited in 1997 and released a live album, Reunion, the following year; they continued to tour occasionally until 2005. Other than various back catalogue reissues and compilation albums, as well as the Mob Rules-era line-up reuniting as Heaven & Hell, there was no further activity under the Black Sabbath name until 2011, when the band reunited to record their 19th and final studio album, 13 (2013), which features all of the original members except Ward. During their farewell tour, the band played their final concert in their home city of Birmingham on 4 February 2017. Occasional partial reunions have happened since, most recently when Osbourne and Iommi performed together at the closing ceremony of the 2022 Commonwealth Games in Birmingham.

Black Sabbath had sold over 70 million records worldwide as of 2013, making them one of the most commercially successful heavy metal bands. Black Sabbath, together with Deep Purple and Led Zeppelin, have been referred to as the "unholy trinity of British hard rock and heavy metal in the early to mid-seventies". They were ranked by MTV as the "Greatest Metal Band of All Time" and placed second on VH1's "100 Greatest Artists of Hard Rock" list. Rolling Stone magazine ranked them number 85 on their "100 Greatest Artists of All Time". Black Sabbath were inducted into the UK Music Hall of Fame in 2005 and the Rock and Roll Hall of Fame in 2006.
They have also won two Grammy Awards for Best Metal Performance, and in 2019 the band were presented with a Grammy Lifetime Achievement Award.

History

1968–1969: Formation and early days

Following the break-up of their previous band, Mythology, in 1968, guitarist Tony Iommi and drummer Bill Ward sought to form a heavy blues rock band in Aston, Birmingham. They enlisted bassist Geezer Butler and vocalist Ozzy Osbourne, who had played together in a band called Rare Breed, Osbourne having placed an advertisement in a local music shop: "OZZY ZIG Needs Gig – has own PA". The new group was initially named the Polka Tulk Blues Band, the name taken either from a brand of talcum powder or an Indian/Pakistani clothing shop; accounts of the exact origin differ. The Polka Tulk Blues Band included slide guitarist Jimmy Phillips, a childhood friend of Osbourne's, and saxophonist Alan "Aker" Clarke. After shortening the name to Polka Tulk, the band changed their name again to Earth (which Osbourne hated). Iommi had become concerned that Phillips and Clarke lacked the necessary dedication and were not taking the band seriously; rather than asking them to leave, the others decided to break up and then quietly re-formed the band as a four-piece without Phillips and Clarke.

While the band was performing under the Earth moniker, they recorded several demos written by Norman Haines, such as "The Rebel", "When I Came Down" and "Song for Jim", the latter being a reference to Jim Simpson, who was a manager for the bands Bakerloo Blues Line and Tea & Symphony, as well as the trumpet player for the group Locomotive. Simpson had recently started a new club named Henry's Blueshouse at The Crown Hotel in Birmingham and offered to let Earth play there after they agreed to waive the usual support band fee in return for free T-shirts. The audience response was positive, and Simpson agreed to manage Earth.

In December 1968, Iommi abruptly left Earth to join Jethro Tull. Although his stint with the band would be short-lived, Iommi made an appearance with Jethro Tull on The Rolling Stones Rock and Roll Circus TV show. Unsatisfied with the direction of Jethro Tull, Iommi returned to Earth by the end of the month. "It just wasn't right, so I left", Iommi said. "At first I thought Tull were great, but I didn't much go for having a leader in the band, which was Ian Anderson's way. When I came back from Tull, I came back with a new attitude altogether. They taught me that to get on, you got to work for it."

While playing shows in England in 1969, the band discovered they were being mistaken for another English group named Earth, so they decided to change their name again. A cinema across the street from the band's rehearsal room was showing the 1963 horror film Black Sabbath, starring Boris Karloff and directed by Mario Bava. While watching people line up to see the film, Butler noted that it was "strange that people spend so much money to see scary movies". Following that, Osbourne and Butler wrote the lyrics for a song called "Black Sabbath", which was inspired by the work of horror and adventure-story writer Dennis Wheatley, along with a vision that Butler had of a black silhouetted figure standing at the foot of his bed. Making use of the musical tritone, also known as "the Devil's Interval", the song's ominous sound and dark lyrics pushed the band in a darker direction, a stark contrast to the popular music of the late 1960s, which was dominated by flower power, folk music and hippie culture.
Judas Priest frontman Rob Halford has called the track "probably the most evil song ever written". Inspired by the new sound, the band changed their name to Black Sabbath in August 1969, and made the decision to focus on writing similar material in an attempt to create the musical equivalent of horror films.

1969–1971: Black Sabbath and Paranoid

The band's first show as Black Sabbath took place on 30 August 1969 in Workington, England. They were signed to Philips Records in November 1969 and in January 1970 released their first single, "Evil Woman" (a cover of a song by the band Crow), recorded at Trident Studios and issued through Philips subsidiary Fontana Records. Later releases were handled by Philips' newly formed progressive rock label, Vertigo Records. Black Sabbath's first major exposure came when the band appeared on John Peel's Top Gear radio show in 1969, performing "Black Sabbath", "N.I.B.", "Behind the Wall of Sleep" and "Sleeping Village" to a national audience in Great Britain shortly before recording of their first album commenced.

Although the "Evil Woman" single failed to chart, the band were afforded two days of studio time in November to record their debut album with producer Rodger Bain. Iommi recalls recording live: "We thought, 'We have two days to do it, and one of the days is mixing.' So we played live. Ozzy was singing at the same time; we just put him in a separate booth and off we went. We never had a second run of most of the stuff".

Black Sabbath was released on Friday the 13th, February 1970, and reached number eight in the UK Albums Chart. Following its U.S. and Canadian release in May 1970 by Warner Bros. Records, the album reached number 23 on the Billboard 200, where it remained for over a year. The album was given negative reviews by many critics. Lester Bangs dismissed it in a Rolling Stone review as "discordant jams with bass and guitar reeling like velocitised speedfreaks all over each other's musical perimeters, yet never quite finding synch". It sold in substantial numbers despite being panned, giving the band their first mainstream exposure. It has since been certified Platinum both in the U.S. by the Recording Industry Association of America (RIAA) and in the UK by the British Phonographic Industry (BPI), and is now generally accepted as the first heavy metal album.

The band returned to the studio in June 1970, just four months after Black Sabbath was released. The new album was initially set to be named War Pigs after the song "War Pigs", which was critical of the Vietnam War; however, Warner changed the title of the album to Paranoid. The album's lead single, "Paranoid", was written in the studio at the last minute. Ward explains: "We didn't have enough songs for the album, and Tony just played the [Paranoid] guitar lick and that was it. It took twenty, twenty-five minutes from top to bottom." The single was released in September 1970 and reached number four on the UK Singles Chart, remaining Black Sabbath's only top 10 hit.

The album followed in the UK in October 1970, where, pushed by the success of the "Paranoid" single, it reached number one on the UK Albums Chart. The U.S. release was held off until January 1971, as the Black Sabbath album was still on the chart at the time of Paranoid's UK release. The album reached No. 12 in the U.S. in March 1971, and would go on to sell four million copies in the U.S. with virtually no radio airplay.
Like Black Sabbath, the album was panned by rock critics of the era, but modern-day reviewers such as AllMusic's Steve Huey cite Paranoid as "one of the greatest and most influential heavy metal albums of all time", which "defined the sound and style of heavy metal more than any other record in rock history". The album was ranked at number 131 on Rolling Stone magazine's list of The 500 Greatest Albums of All Time. Paranoid's chart success allowed the band to tour the U.S. for the first time – their first U.S. show was at a club called Ungano's at 210 West 70th Street in New York City – and spawned the release of the album's second single, "Iron Man". Although the single failed to reach the top 40, it remains one of Black Sabbath's most popular songs, as well as the band's highest-charting U.S. single until 1998's "Psycho Man".

1971–1973: Master of Reality and Vol. 4

In February 1971, after a one-off performance at the Myponga Pop Festival in Australia, Black Sabbath returned to the studio to begin work on their third album. Following the chart success of Paranoid, the band were afforded more studio time, along with a "briefcase full of cash" to buy drugs. "We were getting into coke, big time", Ward explained. "Uppers, downers, Quaaludes, whatever you like. It got to the stage where you come up with ideas and forget them, because you were just so out of it."

Production was completed in April 1971, and in July the band released Master of Reality, just six months after the U.S. release of Paranoid. The album reached the top 10 in the U.S. and the United Kingdom, and was certified Gold in less than two months, eventually receiving Platinum certification in the 1980s and Double Platinum in the early 21st century. It contained Sabbath's first acoustic songs, alongside fan favourites such as "Children of the Grave" and "Sweet Leaf". Critical response of the era was generally unfavourable, with Lester Bangs delivering an ambivalent review of Master of Reality in Rolling Stone, describing the closing "Children of the Grave" as "naïve, simplistic, repetitive, absolute doggerel – but in the tradition [of rock 'n' roll] ... The only criterion is excitement, and Black Sabbath's got it". (In 2003, Rolling Stone would place the album at number 300 on their 500 Greatest Albums of All Time list.)

Following the Master of Reality world tour in 1972, the band took their first break in three years. As Ward explained: "The band started to become very fatigued and very tired. We'd been on the road non-stop, year in and year out, constantly touring and recording. I think Master of Reality was kind of like the end of an era, the first three albums, and we decided to take our time with the next album."

In June 1972, the band reconvened in Los Angeles to begin work on their next album at the Record Plant. With more time in the studio, the album saw the band experimenting with new textures, such as strings, piano, orchestration and multi-part songs. Recording was plagued with problems, many as a result of substance abuse issues. Struggling to record the song "Cornucopia" after "sitting in the middle of the room, just doing drugs", Ward was nearly fired. "I hated the song, there were some patterns that were just ... horrible," the drummer said. "I nailed it in the end, but the reaction I got was the cold shoulder from everybody. It was like, 'Well, just go home; you're not being of any use right now.' I felt like I'd blown it, I was about to get fired".
Butler thought that the end product "was very badly produced, as far as I was concerned. Our then-manager insisted on producing it, so he could claim production costs". The album was originally titled Snowblind after the song of the same name, which deals with cocaine abuse. The record company changed the title at the last minute to Black Sabbath Vol. 4. Ward observed, "There was no Volume 1, 2 or 3, so it's a pretty stupid title, really". Vol. 4 was released in September 1972, and while critics were dismissive, it achieved Gold status in less than a month and was the band's fourth consecutive release to sell a million copies in the U.S. "Tomorrow's Dream" was released as a single – the band's first since "Paranoid" – but failed to chart. In 1973, following an extensive tour of the U.S., the band travelled again to Australia and then, for the first time, to New Zealand, before moving on to mainland Europe. "The band were definitely in their heyday", recalled Ward, "in the sense that nobody had burnt out quite yet".

1973–1976: Sabbath Bloody Sabbath and Sabotage

Following the Vol. 4 world tour, Black Sabbath returned to Los Angeles to begin work on their next release. Pleased with the Vol. 4 album, the band sought to recreate the recording atmosphere and returned to the Record Plant studio in Los Angeles. Owing to the new musical innovations of the era, the band were surprised to find that the room they had previously used at the Record Plant had been replaced by a "giant synthesiser". The band rented a house in Bel Air and began writing in the summer of 1973, but, in part because of substance issues and fatigue, they were unable to complete any songs. "Ideas weren't coming out the way they were on Vol. 4, and we really got discontent", Iommi said. "Everybody was sitting there waiting for me to come up with something. I just couldn't think of anything. And if I didn't come up with anything, nobody would do anything".

After a month in Los Angeles with no results, the band opted to return to England. They rented Clearwell Castle in The Forest of Dean. "We rehearsed in the dungeons and it was really creepy, but it had some atmosphere, it conjured up things and stuff started coming out again". While working in the dungeon, Iommi stumbled onto the main riff of "Sabbath Bloody Sabbath", which set the tone for the new material. Recorded at Morgan Studios in London by Mike Butcher and building off the stylistic changes introduced on Vol. 4, the new songs incorporated synthesisers, strings and complex arrangements. Yes keyboardist Rick Wakeman was brought in as a session player, appearing on "Sabbra Cadabra".

In November 1973, Black Sabbath began to receive positive reviews in the mainstream press after the release of Sabbath Bloody Sabbath, with Gordon Fletcher of Rolling Stone calling the album "an extraordinarily gripping affair" and "nothing less than a complete success". Later reviewers such as AllMusic's Eduardo Rivadavia cite the album as a "masterpiece, essential to any heavy metal collection", while also displaying "a newfound sense of finesse and maturity". The album marked the band's fifth consecutive Platinum-selling album in the U.S., reaching number four on the UK Albums Chart and number 11 in the U.S. The band began a world tour in January 1974, which culminated at the California Jam festival in Ontario, California, on 6 April 1974.
Attracting over 200,000 fans, Black Sabbath appeared alongside popular 1970s rock and pop bands Deep Purple, Eagles, Emerson, Lake & Palmer, Rare Earth, Seals & Crofts, Black Oak Arkansas and Earth, Wind & Fire. Portions of the show were telecast on ABC Television in the U.S., exposing the band to a wider American audience. In the same year, the band shifted management, signing with notorious English manager Don Arden. The move caused a contractual dispute with Black Sabbath's former management, and while on stage in the U.S., Osbourne was handed a subpoena that led to two years of litigation.

Black Sabbath began work on their sixth album in February 1975, again in England at Morgan Studios in Willesden, this time with a decisive vision to make the sound different from that of Sabbath Bloody Sabbath. "We could've continued and gone on and on, getting more technical, using orchestras and everything else which we didn't particularly want to. We took a look at ourselves, and we wanted to do a rock album – Sabbath, Bloody Sabbath wasn't a rock album, really".

Produced by Black Sabbath and Mike Butcher, Sabotage was released in July 1975. As with its precursor, the album initially saw favourable reviews, with Rolling Stone stating "Sabotage is not only Black Sabbath's best record since Paranoid, it might be their best ever", although later reviewers such as AllMusic noted that "the magical chemistry that made such albums as Paranoid and Volume 4 so special was beginning to disintegrate". Sabotage reached the top 20 in both the U.S. and the United Kingdom, but was the band's first release not to achieve Platinum status in the U.S., achieving only Gold certification. Although the album's only single, "Am I Going Insane (Radio)", failed to chart, Sabotage features fan favourites such as "Hole in the Sky" and "Symptom of the Universe". Black Sabbath toured in support of Sabotage with openers Kiss, but were forced to cut the tour short in November 1975, following a motorcycle accident in which Osbourne ruptured a muscle in his back. In December 1975, the band's record companies released a greatest hits album without input from the band, titled We Sold Our Soul for Rock 'n' Roll. The album charted throughout 1976, eventually selling two million copies in the U.S.

1976–1979: Technical Ecstasy, Never Say Die!, and Osbourne's departure

Black Sabbath began work on their next album at Criteria Studios in Miami, Florida, in June 1976. To expand their sound, the band added keyboard player Gerald Woodroffe, who had also appeared, to a lesser extent, on Sabotage. During the recording of Technical Ecstasy, Osbourne admitted that he began losing interest in Black Sabbath and started to consider the possibility of working with other musicians. Recording of Technical Ecstasy was difficult; by the time the album was completed, Osbourne had been admitted to Stafford County Asylum in Britain. It was released on 25 September 1976 to mixed reviews, and, for the first time, retrospective reviews from later music critics were also less favourable; two decades after its release, AllMusic gave the album two stars and noted that the band was "unravelling at an alarming rate". The album featured less of the doomy, ominous sound of previous efforts and incorporated more synthesisers and uptempo rock songs. Technical Ecstasy failed to reach the top 50 in the U.S. and was the band's second consecutive release not to achieve Platinum status, although it was later certified Gold in 1997.
The album included "Dirty Women", which remains a live staple, as well as Ward's first lead vocal, on the song "It's Alright". Touring in support of Technical Ecstasy began in November 1976, with openers Boston and Ted Nugent in the U.S., and was completed in Europe with AC/DC in April 1977.

In late 1977, while in rehearsal for their next album and just days before the band was set to enter the studio, Osbourne abruptly quit the band. Iommi called vocalist Dave Walker, a longtime friend of the band who had previously been a member of Fleetwood Mac and Savoy Brown, and informed him that Osbourne had left the band. Walker, who was at that time fronting a band called Mistress, flew to Birmingham from California in late 1977 to write material and rehearse with Black Sabbath. On 8 January 1978, Black Sabbath made their only live performance with Walker on vocals, playing an early version of the song "Junior's Eyes" on the BBC Television programme "Look! Hear!"

Walker later recalled that, while in Birmingham, he had bumped into Osbourne in a pub and came to the conclusion that Osbourne was not fully committed to leaving Black Sabbath. "The last Sabbath albums were just very depressing for me", Osbourne said. "I was doing it for the sake of what we could get out of the record company, just to get fat on beer and put a record out." Walker has said that he wrote a lot of lyrics during his brief time in the band, but none of them were ever used. If any recordings of this version of the band other than the "Look! Hear!" footage still exist, Walker says that he is not aware of them.

Osbourne initially set out to form a solo project featuring former Dirty Tricks members John Frazer-Binnie, Terry Horbury and Andy Bierne. As the new band were in rehearsals in January 1978, Osbourne had a change of heart and rejoined Black Sabbath. "Three days before we were due to go into the studio, Ozzy wanted to come back to the band", Iommi explained. "He wouldn't sing any of the stuff we'd written with the other guy (Walker), so it made it very difficult. We went into the studio with basically no songs. We'd write in the morning so we could rehearse and record at night. It was so difficult, like a conveyor belt, because you couldn't get time to reflect on stuff. 'Is this right? Is this working properly?' It was very difficult for me to come up with the ideas and putting them together that quick".

The band spent five months at Sounds Interchange Studios in Toronto, Ontario, Canada, writing and recording what would become Never Say Die!. "It took quite a long time", Iommi said. "We were getting really drugged out, doing a lot of dope. We'd go down to the sessions, and have to pack up because we were too stoned, we'd have to stop. Nobody could get anything right, we were all over the place, everybody's playing a different thing. We'd go back and sleep it off, and try again the next day".

The album was released in September 1978, reaching number 12 in the United Kingdom and number 69 in the U.S. Press response was unfavourable and did not improve over time, with Eduardo Rivadavia of AllMusic stating two decades after its release that the album's "unfocused songs perfectly reflected the band's tense personnel problems and drug abuse". The album featured the singles "Never Say Die" and "Hard Road", both of which cracked the top 40 in the United Kingdom. The band also made their second appearance on the BBC's Top of the Pops, performing "Never Say Die". It took nearly 20 years for the album to be certified Gold in the U.S.
Touring in support of Never Say Die! began in May 1978 with openers Van Halen. Reviewers called Black Sabbath's performance "tired and uninspired", a stark contrast to the "youthful" performance of Van Halen, who were touring the world for the first time. The band filmed a performance at the Hammersmith Odeon in June 1978, which was later released on DVD as Never Say Die. The final show of the tour – and Osbourne's last appearance with the band until later reunions – was in Albuquerque, New Mexico, on 11 December.

Following the tour, Black Sabbath returned to Los Angeles and again rented a house in Bel Air, where they spent nearly a year working on new material for the next album. The entire band were abusing both alcohol and other drugs, but Iommi says Osbourne "was on a totally different level altogether". The band would come up with new song ideas, but Osbourne showed little interest and would refuse to sing them. With pressure from the record label and frustrations over Osbourne's lack of input coming to a head, Iommi decided to fire Osbourne in 1979. Iommi believed the only options available were to fire Osbourne or break the band up completely. "At that time, Ozzy had come to an end", Iommi said. "We were all doing a lot of drugs, a lot of coke, a lot of everything, and Ozzy was getting drunk so much at the time. We were supposed to be rehearsing and nothing was happening. It was like, 'Rehearse today? No, we'll do it tomorrow.' It really got so bad that we didn't do anything. It just fizzled out". Ward, who was close with Osbourne, was chosen by Iommi to break the news to the singer on 27 April 1979. "I hope I was professional, I might not have been, actually. When I'm drunk I am horrible, I am horrid", Ward said. "Alcohol was definitely one of the most damaging things to Black Sabbath. We were destined to destroy each other. The band were toxic, very toxic".

1979–1982: Dio joins, Heaven and Hell and Mob Rules

Sharon Arden (later Sharon Osbourne), daughter of Black Sabbath manager Don Arden, suggested former Rainbow vocalist Ronnie James Dio to replace Ozzy Osbourne in 1979. Don Arden was at this point still trying to convince Osbourne to rejoin the band, as he viewed the original line-up as the most profitable. Dio officially joined in June, and the band began writing their next album. Dio's vocal style, notably different from Osbourne's, marked a change in Black Sabbath's sound. "They were totally different altogether", Iommi explains. "Not only voice-wise, but attitude-wise. Ozzy was a great showman, but when Dio came in, it was a different attitude, a different voice and a different musical approach, as far as vocals. Dio would sing across the riff, whereas Ozzy would follow the riff, like in "Iron Man". Ronnie came in and gave us another angle on writing."

Geezer Butler temporarily left the band in September 1979 for personal reasons. According to Dio, the band initially hired Craig Gruber, with whom Dio had previously played while in Elf, on bass to assist with writing the new album. Gruber was soon replaced by Geoff Nicholls of Quartz. The new line-up returned to Criteria Studios in November to begin recording work, with Butler returning to the band in January 1980 and Nicholls moving to keyboards. Produced by Martin Birch, Heaven and Hell was released on 25 April 1980, to critical acclaim. Over a decade after its release, AllMusic said the album was "one of Sabbath's finest records, the band sounds reborn and re-energised throughout".
Heaven and Hell peaked at number nine in the United Kingdom and number 28 in the U.S., the band's highest-charting album since Sabotage. The album eventually sold a million copies in the U.S., and the band embarked on an extensive world tour, making their first live appearance with Dio in Germany on 17 April 1980. Black Sabbath toured the U.S. throughout 1980 with Blue Öyster Cult on the "Black and Blue" tour, with a show at Nassau Coliseum in Uniondale, New York, filmed and released theatrically in 1981 as Black and Blue. On 26 July 1980, the band played to 75,000 fans at a sold-out Los Angeles Memorial Coliseum with Journey, Cheap Trick and Molly Hatchet. The next day, the band appeared at the 1980 Day on the Green at Oakland Coliseum. While on tour, Black Sabbath's former label in England issued a live album culled from a seven-year-old performance, titled Live at Last, without any input from the band. The album reached number five on the UK chart and saw the re-release of "Paranoid" as a single, which reached the top 20.

On 18 August 1980, after a show in Minneapolis, Ward quit the band. "It was intolerable for me to get on the stage without Ozzy. And I drank 24 hours a day, my alcoholism accelerated". Geezer Butler recalled that after his final show, Ward came in drunk, declaring that "he might as well be a Martian", then got angry, packed his things and got on a bus to leave. Following Ward's sudden departure, the group hired drummer Vinny Appice.

Further trouble came during the band's 9 October 1980 concert at the Milwaukee Arena, which degenerated into a riot that caused $10,000 in damages to the arena and resulted in 160 arrests. According to the Associated Press, "The crowd of mostly adolescent males first became rowdy in a performance by the Blue Oyster Cult" and then grew restless while waiting an hour for Black Sabbath to begin playing. A member of the audience threw a beer bottle that struck bassist Butler and effectively ended the show; "The band then abruptly halted its performance and began leaving" as the crowd rioted.

The band completed the Heaven and Hell world tour in February 1981 and returned to the studio to begin work on their next album. Mob Rules, the band's second studio album produced by Martin Birch and featuring Ronnie James Dio as vocalist, was released in October 1981 and was well received by fans, but less so by critics. Rolling Stone reviewer J. D. Considine gave the album one star, claiming "Mob Rules finds the band as dull-witted and flatulent as ever". As with most of the band's earlier work, critical opinion improved over time: a decade after its release, AllMusic's Eduardo Rivadavia called Mob Rules "a magnificent record". The album was certified Gold and reached the top 20 on the UK chart. The album's title track, "The Mob Rules", recorded at John Lennon's old house in England, was also featured in the 1981 animated film Heavy Metal, although the film version is an alternate take and differs from the album version.

Unhappy with the quality of 1980's Live at Last, the band recorded another live album – titled Live Evil – during the Mob Rules world tour, across the United States in Dallas, San Antonio and Seattle, in 1982. During the mixing process for the album, Iommi and Butler had a falling-out with Dio. Misinformed by their then-current mixing engineer, Iommi and Butler accused Dio of sneaking into the studio at night to raise the volume of his vocals.
In addition, Dio was not satisfied with the pictures of him in the artwork. Butler also accused Dio and Appice of working on a solo album during the album's mixing without telling the other members of Black Sabbath. "Ronnie wanted more say in things", Iommi said. "And Geezer would get upset with him and that is where the rot set in. Live Evil is when it all fell apart. Ronnie wanted to do more of his own thing, and the engineer we were using at the time in the studio didn't know what to do, because Ronnie was telling him one thing and we were telling him another. At the end of the day, we just said, 'That's it, the band is over'". "When it comes time for the vocal, nobody tells me what to do. Nobody! Because they're not as good as me, so I do what I want to do", Dio later said. "I refuse to listen to Live Evil, because there are too many problems. If you look at the credits, the vocals and drums are listed off to the side. Open up the album and see how many pictures there are of Tony, and how many there are of me and Vinny".

Ronnie James Dio left Black Sabbath in November 1982 to start his own band, and took drummer Vinny Appice with him. Live Evil was released in January 1983, but was overshadowed by Ozzy Osbourne's Platinum-selling album Speak of the Devil.

1982–1984: Gillan as singer and Born Again

The remaining original members, Iommi and Butler, began auditioning singers for the band's next release. Deep Purple and Whitesnake's David Coverdale, Samson's Nicky Moore and Lone Star's John Sloman were all considered, and Iommi states in his autobiography that Michael Bolton auditioned. The band settled on former Deep Purple vocalist Ian Gillan to replace Dio in December 1982. The project was initially not to be called Black Sabbath, but pressure from the record label forced the group to retain the name. The band entered The Manor Studios in Shipton-on-Cherwell, Oxfordshire, in June 1983 with a returned and newly sober Bill Ward on drums. "That was the very first album that I ever did clean and sober," Ward recalled. "I only got drunk after I finished all my work on the album – which wasn't a very good idea... Sixty to seventy per cent of my energy was taken up on learning how to get through the day without taking a drink and learning how to do things without drinking, and thirty per cent of me was involved in the album."

Born Again (7 August 1983) was panned on release by critics. Despite this negative reception, it reached number four in the UK and number 39 in the U.S. Even three decades after its release, AllMusic's Eduardo Rivadavia called the album "dreadful", noting that "Gillan's bluesy style and humorous lyrics were completely incompatible with the lords of doom and gloom". Unable to tour because of the pressures of the road, Ward quit the band. "I fell apart with the idea of touring," he later explained. "I got so much fear behind touring, I didn't talk about the fear, I drank behind the fear instead and that was a big mistake." He was replaced by former Electric Light Orchestra drummer Bev Bevan for the Born Again '83–'84 world tour (often unofficially referred to as the 'Feighn Death Sabbath '83–'84' World Tour), which began in Europe with Diamond Head, and continued in the U.S. with Quiet Riot and Night Ranger. The band headlined the 1983 Reading Festival in England, adding Deep Purple's "Smoke on the Water" to their encore. The tour in support of Born Again included a giant stage set of the Stonehenge monument.
In a move later parodied in the mockumentary This Is Spinal Tap, the band made a mistake in ordering the set piece: according to Butler, the set's dimensions were mistakenly specified in metres rather than feet, leaving the band with a replica far too large to fit on stage.

1984–1987: Hiatus, Hughes as singer, Seventh Star, and Gillen as singer

Following the completion of the Born Again tour in March 1984, vocalist Ian Gillan left Black Sabbath to re-join Deep Purple, which was reforming after a long hiatus. Bevan left at the same time, and Gillan remarked that he and Bevan were made to feel like "hired help" by Iommi. The band then recruited an unknown Los Angeles vocalist named David Donato, and Ward once again rejoined the band. The new line-up wrote and rehearsed throughout 1984, and eventually recorded a demo with producer Bob Ezrin in October. Unhappy with the results, the band parted ways with Donato shortly after. Disillusioned with the band's revolving line-up, Ward left shortly afterwards, stating "This isn't Black Sabbath". Butler quit Sabbath next, in November 1984, to form a solo band. "When Ian Gillan took over that was the end of it for me," he said. "I thought it was just a joke and I just totally left. When we got together with Gillan it was not supposed to be a Black Sabbath album. After we had done the album we gave it to Warner Bros. and they said they were going to put it out as a Black Sabbath album and we didn't have a leg to stand on. I got really disillusioned with it and Gillan was really pissed off about it. That lasted one album and one tour and then that was it."

One vocalist whose status is disputed, both inside and outside Sabbath, is Christian evangelist and former Joshua frontman Jeff Fenholt. Fenholt insists he was a singer in Sabbath between January and May 1985. Iommi has never confirmed this. Fenholt gives a detailed account in Garry Sharpe-Young's book Sabbath Bloody Sabbath: The Battle for Black Sabbath.

Following both Ward's and Butler's exits, sole remaining original member Iommi put Sabbath on hiatus, and began work on a solo album with long-time Sabbath keyboardist Geoff Nicholls. While working on new material, the original Sabbath line-up agreed to a spot at Bob Geldof's Live Aid, performing at the Philadelphia show on 13 July 1985. This event – which also featured reunions of The Who and Led Zeppelin – marked the first time the original line-up had appeared on stage since 1978. "We were all drunk when we did Live Aid," recalled Geezer Butler, "but we'd all got drunk separately."

Returning to his solo work, Iommi enlisted bassist Dave Spitz (ex-Great White) and drummer Eric Singer, and initially intended to use multiple singers, including Rob Halford of Judas Priest, former Deep Purple and Trapeze vocalist Glenn Hughes, and former Sabbath vocalist Ronnie James Dio. This plan did not work out as he had hoped. "We were going to use different vocalists on the album, guest vocalists, but it was so difficult getting it together and getting releases from their record companies. Glenn Hughes came along to sing on one track and we decided to use him on the whole album." The band spent the remainder of the year in the studio, recording what would become Seventh Star (1986). Warner Bros. refused to release the album as a Tony Iommi solo release, instead insisting on using the name Black Sabbath. Pressured by the band's manager, Don Arden, the two sides compromised and released the album as "Black Sabbath featuring Tony Iommi" in January 1986. "It opened up a whole can of worms," Iommi explained. "If we could have done it as a solo album, it would have been accepted a lot more."
Seventh Star sounded little like a Sabbath album, incorporating instead elements popularised by the 1980s Sunset Strip hard rock scene. It was panned by the critics of the era, although later reviewers such as AllMusic gave more favourable retrospective verdicts, calling the album "often misunderstood and underrated". The new line-up rehearsed for six weeks preparing for a full world tour, although the band were eventually forced to use the Sabbath name. "I was into the 'Tony Iommi project', but I wasn't into the Black Sabbath moniker," Hughes said. "The idea of being in Black Sabbath didn't appeal to me whatsoever. Glenn Hughes singing in Black Sabbath is like James Brown singing in Metallica. It wasn't gonna work." Just four days before the start of the tour, Hughes got into a bar fight with the band's production manager John Downing which splintered the singer's orbital bone. The injury interfered with Hughes' ability to sing, and the band brought in vocalist Ray Gillen to continue the tour with W.A.S.P. and Anthrax, although nearly half of the U.S. dates would be cancelled because of poor ticket sales.

Black Sabbath began work on new material in October 1986 at Air Studios in Montserrat with producer Jeff Glixman. The recording was fraught with problems from the beginning, as Glixman left after the initial sessions and was replaced by producer Vic Coppersmith-Heaven. Bassist Dave Spitz quit over "personal issues", and former Rainbow and Ozzy Osbourne bassist Bob Daisley was brought in. Daisley re-recorded all of the bass tracks and wrote the album's lyrics, but before the album was complete, he left to join Gary Moore's backing band, taking drummer Eric Singer with him. After problems with second producer Coppersmith-Heaven, the band returned to Morgan Studios in England in January 1987 to work with new producer Chris Tsangarides. While working in the United Kingdom, new vocalist Ray Gillen abruptly left Black Sabbath to form Blue Murder with guitarist John Sykes (ex-Tygers of Pan Tang, Thin Lizzy, Whitesnake).

1987–1990: Martin joins, The Eternal Idol, Headless Cross, and Tyr

The band enlisted heavy metal vocalist Tony Martin to re-record Gillen's tracks, and former Electric Light Orchestra drummer Bev Bevan to complete a few percussion overdubs. Before the release of the new album, Black Sabbath accepted an offer to play six shows at Sun City, South Africa, during the apartheid era. The band drew criticism from activists and artists involved with Artists United Against Apartheid, who had been boycotting South Africa since 1985. Drummer Bev Bevan refused to play the shows, and was replaced by Terry Chimes, formerly of the Clash.

After nearly a year in production, The Eternal Idol was released on 8 December 1987 and was ignored by contemporary reviewers. Retrospective reviews from the internet era were mixed: AllMusic said that "Martin's powerful voice added new fire" to the band, and that the album contained "some of Iommi's heaviest riffs in years", while Blender gave the album two stars, claiming it was "Black Sabbath in name only". The album stalled at number 66 in the United Kingdom and peaked at number 168 in the U.S. The band toured in support of Eternal Idol in Germany, Italy and, for the first time, Greece. In part due to a backlash from promoters over the South Africa incident, other European shows were cancelled. Bassist Dave Spitz left the band shortly before the tour, and was replaced by Jo Burt, formerly of Virginia Wolf.
Following the poor commercial performance of The Eternal Idol, Black Sabbath were dropped by both Vertigo Records and Warner Bros. Records, and signed with I.R.S. Records. The band took time off in 1988, returning in August to begin work on their next album. As a result of the recording troubles with Eternal Idol, Tony Iommi opted to produce the band's next album himself. "It was a completely new start", Iommi said. "I had to rethink the whole thing, and decided that we needed to build up some credibility again". Iommi enlisted former Rainbow drummer Cozy Powell, long-time keyboardist Nicholls and session bassist Laurence Cottle, and rented a "very cheap studio in England".

Black Sabbath released Headless Cross in April 1989, and it too was ignored by contemporary reviewers, although AllMusic contributor Eduardo Rivadavia later gave the album four stars and called it "the finest non-Ozzy or Dio Black Sabbath album". Anchored by the single "Headless Cross", which charted at number 62, the album reached number 31 on the UK chart and number 115 in the U.S. Queen guitarist Brian May, a good friend of Iommi's, played a guest solo on the song "When Death Calls". Following the album's release, the band added touring bassist Neil Murray, formerly of Colosseum II, National Health, Whitesnake, Gary Moore's backing band, and Vow Wow.

The unsuccessful Headless Cross U.S. tour began in May 1989 with openers Kingdom Come and Silent Rage, but because of poor ticket sales, the tour was cancelled after just eight shows. The European leg of the tour began in September, where the band were enjoying chart success. After a string of Japanese shows, the band embarked on a 23-date Russian tour with Girlschool. Black Sabbath were one of the first bands to tour Russia after Mikhail Gorbachev opened the country to Western acts for the first time in 1989.

The band returned to the studio in February 1990 to record Tyr, the follow-up to Headless Cross. While not technically a concept album, Tyr bases some of its lyrical themes loosely on Norse mythology. The album was released on 6 August 1990, reaching number 24 on the UK albums chart, but was the first Black Sabbath release not to break the Billboard 200 in the U.S. It received mixed retrospective reviews, with AllMusic noting that the band "mix myth with metal in a crushing display of musical synthesis", while Blender gave the album just one star, claiming that "Iommi continues to besmirch the Sabbath name with this unremarkable collection". The band toured in support of Tyr with Circus of Power in Europe, but the final seven United Kingdom dates were cancelled because of poor ticket sales. For the first time in their career, the band's touring cycle did not include U.S. dates.

1990–1992: Dio rejoins and Dehumanizer

While on his Lock Up the Wolves U.S. tour in August 1990, former Sabbath vocalist Ronnie James Dio was joined onstage at the Roy Wilkins Auditorium by Geezer Butler to perform "Neon Knights". Following the show, the two expressed interest in rejoining Sabbath. Butler convinced Iommi, who in turn broke up the current line-up, dismissing vocalist Tony Martin and bassist Neil Murray. "I do regret that in a lot of ways," Iommi said. "We were at a good point then. We decided to [reunite with Dio] and I don't even know why, really. There's the financial aspect, but that wasn't it. I seemed to think maybe we could recapture something we had." Dio and Butler joined Iommi and Cozy Powell in autumn 1990 to begin work on the next Sabbath release.
While rehearsing in November, Powell suffered a broken hip when his horse died and fell on the drummer's legs. Unable to complete the album, Powell was replaced by former drummer Vinny Appice, reuniting the Mob Rules line-up, and the band entered the studio with producer Reinhold Mack. The year-long recording was plagued with problems, primarily stemming from writing tension between Iommi and Dio. Songs were rewritten multiple times. "It was just hard work," Iommi said. "We took too long on it, that album cost us a million dollars, which is bloody ridiculous." Dio recalled the album as difficult, but worth the effort: "It was something we had to really wring out of ourselves, but I think that's why it works. Sometimes you need that kind of tension, or else you end up making the Christmas album".

The resulting Dehumanizer was released on 22 June 1992. In the U.S., the album was released on 30 June 1992 by Reprise Records, as Dio and his namesake band were still under contract to the label at the time. While the album received mixed reviews, it was the band's biggest commercial success in a decade. Anchored by the top 40 rock radio single "TV Crimes", the album peaked at number 44 on the Billboard 200. The album also featured "Time Machine", a version of which had been recorded for the 1992 film Wayne's World. Additionally, the perception among fans of a return of some semblance of the "real" Sabbath provided the band with much-needed momentum.

Sabbath began touring in support of Dehumanizer in July 1992 with Testament, Danzig, Prong, and Exodus. While on tour, former vocalist Ozzy Osbourne announced his first retirement, and invited Sabbath to open for his solo band at the final two shows of his No More Tours tour in Costa Mesa, California. The band agreed, aside from Dio, who told Iommi, "I'm not doing that. I'm not supporting a clown." Dio quit Sabbath following a show in Oakland, California on 13 November 1992, one night before the band were set to appear at Osbourne's retirement show. Judas Priest vocalist Rob Halford stepped in at the last minute, performing two nights with the band. Iommi and Butler joined Osbourne and former drummer Ward on stage for the first time since 1985's Live Aid concert, performing a brief set of Sabbath songs. This set the stage for a longer-term reunion of the original line-up, though that plan proved short-lived. "Ozzy, Geezer, Tony and Bill announced the reunion of Black Sabbath – again," remarked Dio. "And I thought that it was a great idea. But I guess Ozzy didn't think it was such a great idea… I'm never surprised when it comes to whatever happens with them. Never at all. They are very predictable. They don't talk."

1992–1997: Martin rejoins, Cross Purposes, and Forbidden

Drummer Vinny Appice left the band following the reunion show to rejoin Ronnie James Dio's solo band, later appearing on Dio's Strange Highways and Angry Machines. Iommi and Butler enlisted former Rainbow drummer Bobby Rondinelli, and reinstated former vocalist Tony Martin. The band returned to the studio to work on new material, although, as Geezer Butler later explained, the project was not originally intended to be released under the Black Sabbath name. Under pressure from their record label, the band released their seventeenth studio album, Cross Purposes, on 8 February 1994, under the Black Sabbath name.
The album received mixed reviews, with Blender giving the album two stars and calling Soundgarden's 1994 album Superunknown "a far better Sabbath album than this by-the-numbers potboiler", while AllMusic's Bradley Torreano called Cross Purposes "the first album since Born Again that actually sounds like a real Sabbath record". The album just missed the top 40 in the UK, reaching number 41, and reached number 122 on the Billboard 200 in the U.S. Cross Purposes contained the song "Evil Eye", co-written by Van Halen guitarist Eddie Van Halen, although uncredited because of record label restrictions. Touring in support of Cross Purposes began in February with Morbid Angel and Motörhead in the U.S. The band filmed a live performance at the Hammersmith Apollo on 13 April 1994, which was released on VHS accompanied by a CD, titled Cross Purposes Live. After the European tour with Cathedral and Godspeed in June 1994, drummer Bobby Rondinelli quit the band and was replaced by original Black Sabbath drummer Ward for five shows in South America.

Following the touring cycle for Cross Purposes, bassist Geezer Butler quit the band for the second time. "I finally got totally disillusioned with the last Sabbath album, and I much preferred the stuff I was writing to the stuff Sabbath were doing". Butler formed a solo project called GZR, and released Plastic Planet in 1995. The album contained the song "Giving Up the Ghost", which was critical of Tony Iommi for carrying on with the Black Sabbath name, with the lyrics: You plagiarised and parodied / the magic of our meaning / a legend in your own mind / left all your friends behind / you can't admit that you're wrong / the spirit is dead and gone. "I heard it's something about me..." said Iommi. "I had the album given to me a while back. I played it once, then somebody else had it, so I haven't really paid any attention to the lyrics... It's nice to see him doing his own thing – getting things off his chest. I don't want to get into a rift with Geezer. He's still a friend."

Following Butler's departure, newly returned drummer Ward once again left the band. Iommi reinstated former members Neil Murray on bass and Cozy Powell on drums, effectively reuniting the 1990 Tyr line-up. The band enlisted Body Count guitarist Ernie C to produce the new album, which was recorded in London in the autumn of 1994. The album featured a guest vocal by Body Count vocalist Ice-T on "Illusion of Power". The resulting Forbidden was released on 8 June 1995, but failed to chart in the U.S. The album was widely panned by critics: AllMusic's Bradley Torreano said that "with boring songs, awful production, and uninspired performances, this is easily avoidable for all but the most enthusiastic fan", while Blender magazine called Forbidden "an embarrassment... the band's worst album".

Black Sabbath embarked on a world tour in July 1995 with openers Motörhead and Tiamat, but two months into the tour, drummer Cozy Powell left the band, citing health issues, and was replaced by former drummer Bobby Rondinelli. "The members I had in the last lineup – Bobby Rondinelli, Neil Murray – they're great, great characters..." Iommi told Sabbath fanzine Southern Cross. "That, for me, was an ideal lineup. I wasn't sure vocally what we should do, but Neil Murray and Bobby Rondinelli I really got on well with."
After completing Asian dates in December 1995, Tony Iommi put the band on hiatus, and began work on a solo album with former Black Sabbath vocalist Glenn Hughes and former Judas Priest drummer Dave Holland. The album was not officially released following its completion, although a widely traded bootleg called Eighth Star surfaced soon after. The album was officially released in 2004 as The 1996 DEP Sessions, with Holland's drums re-recorded by session drummer Jimmy Copley.

In 1997, Tony Iommi disbanded the current line-up to officially reunite with Ozzy Osbourne and the original Black Sabbath line-up. Vocalist Tony Martin claimed that an original line-up reunion had been in the works since the band's brief reunion at Ozzy Osbourne's 1992 Costa Mesa show, and that the band released subsequent albums to fulfill their record contract with I.R.S. Records. Martin later recalled Forbidden (1995) as a "filler album that got the band out of the label deal, rid of the singer, and into the reunion. However I wasn't privy to that information at the time". I.R.S. Records released a compilation album in 1996 to fulfill the band's contract, titled The Sabbath Stones, which featured songs from Born Again (1983) to Forbidden (1995).

1997–2006: Osbourne rejoins and Reunion

In the summer of 1997, Iommi, Butler and Osbourne reunited to co-headline the Ozzfest tour alongside Osbourne's solo band. The line-up featured Osbourne's drummer Mike Bordin filling in for Ward. "It started off with me going off to join Ozzy for a couple of numbers," explained Iommi, "and then it got into Sabbath doing a short set, involving Geezer. And then it grew as it went on… We were concerned in case Bill couldn't make it – couldn't do it – because it was a lot of dates, and important dates… The only rehearsal that we had to do was for the drummer. But I think if Bill had come in, it would have took a lot more time. We would have had to focus a lot more on him."

In December 1997, the group was joined by Ward, marking the first reunion of the original quartet since Osbourne's 1992 "retirement show". This line-up recorded two shows at the Birmingham NEC, released as the double album Reunion on 20 October 1998. The album reached number eleven on the Billboard 200, went platinum in the U.S. and spawned the single "Iron Man", which won Sabbath their first Grammy Award in 2000, for Best Metal Performance, 30 years after the song was originally released. Reunion also featured two new studio tracks, "Psycho Man" and "Selling My Soul", both of which cracked the top 20 of the Billboard Mainstream Rock Tracks chart.

Shortly before a European tour in the summer of 1998, Ward had a heart attack and was temporarily replaced by former drummer Vinny Appice. Ward returned for a U.S. tour with openers Pantera, which began in January 1999 and continued through the summer, headlining the annual Ozzfest tour. Following these appearances, the band was put on hiatus while members worked on solo material. Iommi released his first official solo album, Iommi, in 2000, while Osbourne continued work on Down to Earth (2001). Sabbath returned to the studio to work on new material with all four original members and producer Rick Rubin in the spring of 2001, but the sessions were halted when Osbourne was called away to finish tracks for his solo album in the summer. "It just came to an end…" Iommi said.
"It's a shame because [the songs] were really Iommi commented on the difficulty getting all the members together to work: In March 2002, Osbourne's Emmy-winning reality show The Osbournes debuted on MTV, and quickly became a worldwide hit. The show introduced Osbourne to a broader audience and to capitalise, the band's back catalogue label, Sanctuary Records released a double live album Past Lives (2002), which featured concert material recorded in the 1970s, including the Live at Last (1980) album. The band remained on hiatus until the summer of 2004 when they returned to headline Ozzfest 2004 and 2005. In November 2005, Black Sabbath were inducted into the UK Music Hall of Fame, and in March 2006, after eleven years of eligibility—Osbourne famously refused the Hall's "meaningless" initial nomination in 1999—the band were inducted into the U.S. Rock and Roll Hall of Fame. At the awards ceremony Metallica played two Sabbath songs, "Hole in the Sky" and "Iron Man" in tribute. 2006–2010: The Dio Years and Heaven & Hell While Ozzy Osbourne was working on new solo album material in 2006, Rhino Records released Black Sabbath: The Dio Years, a compilation of songs culled from the four Black Sabbath releases featuring Ronnie James Dio. For the release, Iommi, Butler, Dio, and Appice reunited to write and record three new songs as Black Sabbath. The Dio Years was released on 3 April 2007, reaching number 54 on the Billboard 200, while the single "The Devil Cried" reached number 37 on the Mainstream Rock Tracks chart. Pleased with the results, Iommi and Dio decided to reunite the Dio era line-up for a world tour. While the line-up of Osbourne, Butler, Iommi, and Ward was still officially called Black Sabbath, the new line-up opted to call themselves Heaven & Hell, after the album of the same title, to avoid confusion. When asked about the name of the group, Iommi stated "it really is Black Sabbath, whatever we do... so everyone knows what they're getting [and] so people won't expect to hear 'Iron Man' and all those songs. We've done them for so many years, it's nice to do just all the stuff we did with Ronnie again." Ward was initially set to participate, but dropped out before the tour began due to musical differences with "a couple of the band members". He was replaced by former drummer Vinny Appice, effectively reuniting the line-up that had featured on the Mob Rules (1981) and Dehumanizer (1992) albums. Heaven & Hell toured the U.S. with openers Megadeth and Machine Head, and recorded a live album and DVD in New York on 30 March 2007, titled Live from Radio City Music Hall. In November 2007, Dio confirmed that the band had plans to record a new studio album, which was recorded in the following year. In April 2008 the band announced the upcoming release of a new box set and their participation in the Metal Masters Tour, alongside Judas Priest, Motörhead and Testament. The box set, The Rules of Hell, featuring remastered versions of all the Dio fronted Black Sabbath albums, was supported by the Metal Masters Tour. In 2009, the band announced the title of their debut studio album, The Devil You Know, released on 28 April. On 26 May 2009, Osbourne filed suit in a federal court in New York against Iommi alleging that he illegally claimed the band name. Iommi noted that he has been the only constant band member for its full 41-year career and that his bandmates relinquished their rights to the name in the 1980s, therefore claiming more rights to the name of the band. 
Although Osbourne was seeking 50% ownership of the trademark in the suit, he said that he hoped the proceedings would lead to equal ownership among the four original members. In March 2010, Black Sabbath announced that along with Metallica they would be releasing a limited edition single together to celebrate Record Store Day. It was released on 17 April 2010. Ronnie James Dio died on 16 May 2010 from stomach cancer. In June 2010, the legal battle between Ozzy Osbourne and Tony Iommi over the trademarking of the Black Sabbath name ended, although the terms of the settlement were not disclosed.

2010–2014: Second Osbourne reunion and 13

In a January 2010 interview while promoting his biography I Am Ozzy, Osbourne stated that although he would not rule it out, he was doubtful there would be a reunion with all four original members of the band. Osbourne stated: "I'm not gonna say I've written it out forever, but right now I don't think there's any chance. But who knows what the future holds for me? If it's my destiny, fine." In July, Butler said that there would be no reunion in 2011, as Osbourne was already committed to touring with his solo band. However, by that August they had already met up to rehearse together, and continued to do so through the autumn. On 11 November 2011, Iommi, Butler, Osbourne, and Ward announced that they were reuniting to record a new album, with a full tour in support beginning in 2012.

Iommi was diagnosed with lymphoma on 9 January 2012, which forced the band to cancel all but two shows (the Download and Lollapalooza festivals) of a previously booked European tour. It was later announced that an intimate show would be played in their hometown of Birmingham; it would be the first concert since the reunion and the band's only indoor concert that year. In February 2012, drummer Ward announced that he would not participate further in the band's reunion until he was offered a "signable contract". On 21 May 2012, at the O2 Academy in Birmingham, Black Sabbath played their first concert since 2005, with Tommy Clufetos playing the drums. In June, they performed at the Download Festival at the Donington Park motorsports circuit in Leicestershire, England, followed by the last concert of the short tour at Lollapalooza Festival in Chicago. Later that month, the band started recording an album.

On 13 January 2013, the band announced that the album would be released in June under the title 13. Brad Wilk of Rage Against the Machine was chosen as the drummer, and Rick Rubin was chosen as the producer. Mixing of the album commenced in February. On 12 April 2013, the band released the album's track listing; the standard version of the album features eight new tracks, and the deluxe version features three bonus tracks. The band's first single from 13, "God Is Dead?", was released on 19 April 2013. On 20 April 2013, Black Sabbath commenced their first Australia/New Zealand tour in 40 years, followed by a North American tour in the summer of 2013. The second single from the album, "End of the Beginning", debuted on 15 May in a CSI: Crime Scene Investigation episode, in which the three band members appeared. In June 2013, 13 topped both the UK Albums Chart and the U.S. Billboard 200, becoming their first album to reach number one on the latter chart. In 2014, Black Sabbath received their first Grammy Award since 2000, with "God Is Dead?" winning Best Metal Performance.
In July 2013, Black Sabbath embarked on a North American tour (their first since July 2001), followed by a Latin American tour in October 2013. In November 2013, the band started their European tour, which lasted until December 2013. In March and April 2014, they made 12 stops in North America (mostly in Canada) as the second leg of their North American tour, before embarking in June 2014 on the second leg of their European tour, which ended with a concert at London's Hyde Park.

2014–2017: Cancelled twentieth album, The End, and disbandment

On 29 September 2014, Osbourne told Metal Hammer that Black Sabbath would begin work on their twentieth studio album in early 2015 with producer Rick Rubin, followed by a final tour in 2016. In an April 2015 interview, however, Osbourne said that these plans "could change", and added, "We all live in different countries and some of them want to work and some of them don't want to, I believe. But we are going to do another tour together." On 3 September 2015, it was announced that Black Sabbath would embark on their final tour, titled The End, from January 2016 to February 2017, with numerous dates and locations announced across the U.S., Canada, Europe, Australia and New Zealand. The final shows of The End tour took place at the Genting Arena in their home city of Birmingham, England, on 2 and 4 February 2017. On 26 October 2015, it was announced that the band, consisting of Osbourne, Iommi and Butler, would return to the Download Festival on 11 June 2016.

Despite earlier reports that they would enter the studio before their farewell tour, Osbourne stated that there would not be another Black Sabbath studio album. However, an 8-track CD entitled The End was sold at dates on the tour; along with some live recordings, it includes four unused tracks from the 13 sessions. On 4 March 2016, Iommi discussed future re-releases of the Tony Martin-era catalogue: "We've held back on the reissues of those albums because of the current Sabbath thing with Ozzy Osbourne, but they will certainly be happening... I'd like to do a couple of new tracks for those releases with Tony Martin... I'll also be looking at working on Cross Purposes and Forbidden." Martin had suggested that this could coincide with the 30th anniversary of The Eternal Idol in 2017. In an interview that August, Martin added, "[Iommi] still has his cancer issues of course and that may well stop it all from happening but if he wants to do something I am ready." On 10 August 2016, Iommi revealed that his cancer was in remission. Asked in November 2016 about his plans after Black Sabbath's final tour, Iommi replied, "I'll be doing some writing. Maybe I'll be doing something with the guys, maybe in the studio, but no touring."

The band played their final concert on 4 February 2017 in Birmingham. The final song was streamed live on the band's Facebook page, and fireworks went off as the band took their final bow. The final tour was not an easy one, as longstanding tensions between Osbourne and Iommi returned to the surface. Iommi stated that he would not rule out the possibility of one-off shows: "I wouldn't write that off, if one day that came about. That's possible. Or even doing an album, 'cause then, again, you're in one place. But I don't know if that would happen." In an April 2017 interview, Butler revealed that Black Sabbath had considered making a blues album as the follow-up to 13, but added that "the tour got in the way."
On 7 March 2017, Black Sabbath announced their disbandment through posts made on their official social media accounts.

2017–present: Aftermath

In a June 2018 interview with ITV News, Osbourne expressed interest in reuniting with Black Sabbath for a performance at the 2022 Commonwealth Games, which would be held in their home city of Birmingham. Iommi said that performing at the event as Black Sabbath would be "a great thing to do to help represent Birmingham. I'm up for it. Let's see what happens." He also did not rule out the possibility of the band reforming for a one-off performance rather than a full-length tour. Iommi was later announced to be part of the opening ceremony for the 2022 Commonwealth Games alongside Duran Duran. On 8 August 2022, Osbourne and Iommi made a surprise reunion to end the closing ceremony of the 2022 Commonwealth Games at the Alexander Stadium in Birmingham. They were joined by 2017 Black Sabbath touring musicians Tommy Clufetos and Adam Wakeman for a medley of "Iron Man" and "Paranoid".

In September 2020, Osbourne stated in an interview that he was no longer interested in a reunion: "Not for me. It's done. The only thing I do regret is not doing the last farewell show in Birmingham with Bill Ward. I felt really bad about that. It would have been so nice. I don't know what the circumstances behind it were, but it would have been nice. I've talked to Tony a few times, but I don't have any of the slightest interest in doing another gig. Maybe Tony's getting bored now." Butler also ruled out the possibility of any future Black Sabbath performances in an interview with Eonmusic on 10 November 2020, stating that the band is over: "There will definitely be no more Sabbath. It's done." Iommi, however, pondered the possibility of another reunion tour in an interview with The Mercury News, stating that he "would like to play with the guys again" and that he misses the audiences and stage. Bill Ward stated in an interview with Eddie Trunk that he no longer has the ability or the chops to perform with Black Sabbath in concert, but said that he would love to make another album with Osbourne, Butler and Iommi. Despite ruling out the possibility of another Black Sabbath reunion, Osbourne revealed in an episode of Ozzy Speaks on Ozzy's Boneyard that he was working with Iommi, who appeared as one of the guests on his thirteenth solo album, Patient Number 9. In an October 2021 interview with the Metro, Ward revealed that he had kept "in contact" with his former bandmates and stated that he is "very open-minded" to the possibility of recording another Black Sabbath album: "I haven't spoken to the guys about it, but I have talked to a couple of people in management about the possibility of making a recording."

On 30 September 2020, Black Sabbath announced a new Dr. Martens shoe collection. The partnership with the British footwear company celebrated the 50th anniversaries of the band's Black Sabbath and Paranoid albums, with the boots depicting artwork from the former. On 13 January 2021, the band announced that they would reissue both Heaven & Hell and Mob Rules as expanded deluxe editions on 5 March 2021, with unreleased material included. In September 2022, Osbourne reiterated that he was unwilling to continue Black Sabbath, stating that if another Black Sabbath album were released, he would not sing on it. However, he remained open to working with Iommi on more solo projects following the latter's involvement on Patient Number 9.
Osbourne later retired from touring in February 2023 after not sufficiently recovering from medical treatment, putting the possibility of another Black Sabbath reunion in concert in further doubt.

Musical style

Black Sabbath were a heavy metal band, and have been cited as a key influence on genres including stoner rock, grunge, doom metal, and sludge metal. Early on, Black Sabbath were influenced by Cream, The Beatles, Fleetwood Mac, Jimi Hendrix, John Mayall & the Bluesbreakers, Blue Cheer, Led Zeppelin, and Jethro Tull. Although Black Sabbath went through many line-ups and stylistic changes, their core sound focuses on ominous lyrics and doomy music, often making use of the musical tritone, also called the "devil's interval". While their Ozzy-era albums such as Sabbath Bloody Sabbath (1973) had slight compositional similarities to the progressive rock genre that was growing in popularity at the time, Black Sabbath's dark sound stood in stark contrast to the popular music of the early 1970s and was dismissed by rock critics of the era. Much like many of their early heavy metal contemporaries, the band received virtually no airplay on rock radio.

As the band's primary songwriter, Tony Iommi wrote the majority of Black Sabbath's music, while Osbourne would write vocal melodies and bassist Geezer Butler would write lyrics. The process was sometimes frustrating for Iommi, who often felt pressured to come up with new material: "If I didn't come up with anything, nobody would do anything."

Beginning with their third album, Master of Reality (1971), Black Sabbath began to feature tuned-down guitars. In 1965, before forming Black Sabbath, guitarist Tony Iommi suffered an accident while working in a sheet metal factory, losing the tips of two fingers on his right hand. Iommi almost gave up music, but was urged by the factory manager to listen to Django Reinhardt, a jazz guitarist who lost the use of two fingers in a fire. Inspired by Reinhardt, Iommi created two thimbles made of plastic and leather to cap off his missing fingertips. The guitarist began using lighter strings, and detuning his guitar, to better grip the strings with his prostheses. Early in the band's history Iommi experimented with different dropped tunings, including C♯ tuning, or 3 semitones down, before settling on E♭/D♯ tuning, a half-step down from standard tuning.

Legacy

Black Sabbath have sold over 70 million records worldwide, including a RIAA-certified 15 million in the U.S. They are one of the most influential heavy metal bands of all time. The band helped to create the genre with ground-breaking releases such as Paranoid (1970), an album that Rolling Stone magazine said "changed music forever"; the same magazine called the band "the Beatles of heavy metal". Time magazine called Paranoid "the birthplace of heavy metal", placing it in their Top 100 Albums of All Time. MTV placed Black Sabbath at number one on their Top Ten Heavy Metal Bands, and VH1 placed them at number two on their list of the 100 Greatest Artists of Hard Rock. VH1 also ranked Black Sabbath's "Iron Man" the number one song on their 40 Greatest Metal Songs countdown, and Rolling Stone magazine ranked the band number 85 in their list of the "100 Greatest Artists of All Time". According to Rolling Stone's Holly George-Warren, "Black Sabbath was the heavy metal king of the 1970s."
Although initially "despised by rock critics and ignored by radio programmers", the group sold more than 8 million albums by the end of that decade. "The heavy metal band…" marvelled Ronnie James Dio. "A band that didn't apologise for coming to town; it just stepped on buildings when it came to town." Influence and innovation Black Sabbath have influenced many acts including Judas Priest, Iron Maiden, Diamond Head, Slayer, Metallica, Nirvana, Korn, Black Flag, Mayhem, Venom, Guns N' Roses, Soundgarden, Body Count, Alice in Chains, Anthrax, Disturbed, Death, Opeth, Pantera, Megadeth, the Smashing Pumpkins, Slipknot, Foo Fighters, Fear Factory, Candlemass, Godsmack, and Van Halen. Two Gold-selling tribute albums have been released, Nativity in Black Volume 1 & 2, including covers by Sepultura, White Zombie, Type O Negative, Faith No More, Machine Head, Primus, System of a Down, and Monster Magnet. Metallica's Lars Ulrich, who, along with bandmate James Hetfield inducted Black Sabbath into the Rock and Roll Hall of Fame in 2006, said "Black Sabbath is and always will be synonymous with heavy metal", while Hetfield said "Sabbath got me started on all that evil-sounding shit, and it's stuck with me. Tony Iommi is the king of the heavy riff." Guns N' Roses guitarist Slash said of the Paranoid album: "There's just something about that whole record that, when you're a kid and you're turned onto it, it's like a whole different world. It just opens up your mind to another dimension...Paranoid is the whole Sabbath experience; very indicative of what Sabbath meant at the time. Tony's playing style—doesn't matter whether it's off Paranoid or if it's off Heaven and Hell—it's very distinctive." Anthrax guitarist Scott Ian said "I always get the question in every interview I do, 'What are your top five metal albums?' I make it easy for myself and always say the first five Sabbath albums." Lamb of God's Chris Adler said: "If anybody who plays heavy metal says that they weren't influenced by Black Sabbath's music, then I think that they're lying to you. I think all heavy metal music was, in some way, influenced by what Black Sabbath did." Judas Priest vocalist Rob Halford commented: "They were and still are a groundbreaking band...you can put on the first Black Sabbath album and it still sounds as fresh today as it did 30-odd years ago. And that's because great music has a timeless ability: To me, Sabbath are in the same league as the Beatles or Mozart. They're on the leading edge of something extraordinary." On Black Sabbath's standing, Rage Against the Machine guitarist Tom Morello states: "The heaviest, scariest, coolest riffs and the apocalyptic Ozzy wail are without peer. You can hear the despair and menace of the working-class Birmingham streets they came from in every kick-ass, evil groove. Their arrival ground hippy, flower-power psychedelia to a pulp and set the standard for all heavy bands to come." Phil Anselmo of Pantera and Down stated that "Only a fool would leave out what Black Sabbath brought to the heavy metal genre". According to Tracii Guns of L.A. Guns and former member of Guns N' Roses, the main riff of "Paradise City" by Guns N' Roses, from Appetite for Destruction (1987), was influenced by the song "Zero the Hero" from the Born Again album. King Diamond guitarist Andy LaRocque affirmed that the clean guitar part of "Sleepless Nights" from Conspiracy (1989) is inspired by Tony Iommi's playing on Never Say Die!. 
In addition to being pioneers of heavy metal, the band have also been credited with laying the foundations for the heavy metal subgenres stoner rock, sludge metal, thrash metal, black metal and doom metal, as well as for the alternative rock subgenre grunge. According to the critic Bob Gulla, the band's sound "shows up in virtually all of grunge's most popular bands, including Nirvana, Soundgarden, and Alice in Chains".

Tony Iommi has been credited as a pioneer of lighter-gauge guitar strings. After the tips of his fingers were severed in a steel factory, he found, while playing with thimbles (artificial fingertips), that standard guitar strings were too difficult to bend and play. Only one gauge of strings was available at the time, so after years with Sabbath he had strings custom made.

Culturally, Black Sabbath have exerted a huge influence in film, television and literature, and have in many cases become synonymous with heavy metal. In the film Almost Famous, Lester Bangs gives the protagonist an assignment to cover the band (plot point one) with the immortal line: 'Give me 500 words on Black Sabbath'. Contemporary music and arts publication Trebuchet Magazine has put this into practice by asking all new writers to write a short piece (500 words) on Black Sabbath as a means of proving their creativity and voice on a well-documented subject.

Band members

Original line-up
Tony Iommi – guitars
Bill Ward – drums
Geezer Butler – bass
Ozzy Osbourne – vocals, harmonica

Discography

Studio albums
Black Sabbath (1970)
Paranoid (1970)
Master of Reality (1971)
Vol. 4 (1972)
Sabbath Bloody Sabbath (1973)
Sabotage (1975)
Technical Ecstasy (1976)
Never Say Die! (1978)
Heaven and Hell (1980)
Mob Rules (1981)
Born Again (1983)
Seventh Star (1986)
The Eternal Idol (1987)
Headless Cross (1989)
Tyr (1990)
Dehumanizer (1992)
Cross Purposes (1994)
Forbidden (1995)
13 (2013)

Tours
Polka Tulk Blues/Earth Tour 1968–1969
Black Sabbath Tour 1970
Paranoid Tour 1970–1971
Master of Reality Tour 1971–1972
Vol. 4 Tour 1972–1973
Sabbath Bloody Sabbath Tour 1973–1974
Sabotage Tour 1975–1976
Technical Ecstasy Tour 1976–1977
Never Say Die! Tour 1978
Heaven & Hell Tour 1980–1981
Mob Rules Tour 1981–1982
Born Again Tour 1983
Seventh Star Tour 1986
Eternal Idol Tour 1987
Headless Cross Tour 1989
Tyr Tour 1990
Dehumanizer Tour 1992
Cross Purposes Tour 1994
Forbidden Tour 1995
Ozzfest Tour 1997
European Tour 1998
Reunion Tour 1998–1999
Ozzfest Tour 1999
U.S. Tour 1999
European Tour 1999
Ozzfest Tour 2001
Ozzfest Tour 2004
European Tour 2005
Ozzfest Tour 2005
Black Sabbath Reunion Tour 2012–2014
The End Tour 2016–2017

See also
List of cover versions of Black Sabbath songs
Heavy metal groups

References

Sources

External links
Black Sabbath biography by James Christopher Monger, discography and album reviews, credits & releases at AllMusic
Black Sabbath discography, album releases & credits at Discogs.com

Musical groups established in 1968
Musical groups disestablished in 2006
Musical groups reestablished in 2011
Musical groups disestablished in 2017
English heavy metal musical groups
Grammy Lifetime Achievement Award winners
1968 establishments in England
2017 disestablishments in England
Kerrang! Awards winners
I.R.S. Records artists
Vertigo Records artists
Musical groups from Birmingham, West Midlands
Musical quartets
https://en.wikipedia.org/wiki/Books%20of%20Chronicles
Books of Chronicles
The Book of Chronicles (Hebrew: Divrei Hayamim) is a book in the Hebrew Bible, found as two books (1–2 Chronicles) in the Christian Old Testament. Chronicles is the final book of the Hebrew Bible, concluding the third section of the Jewish Tanakh, the Ketuvim ("Writings"). It contains a genealogy starting with Adam and a history of ancient Judah and Israel up to the Edict of Cyrus in 539 BC. The book was divided into two books in the Septuagint, translated in the mid-3rd century BC. In Christian contexts Chronicles is referred to in the plural as the Books of Chronicles, after the Latin name given to the text by Jerome, but is also, more rarely, referred to by its Greek name as the Books of Paralipomenon. In Christian Bibles, they usually follow the two Books of Kings and precede Ezra–Nehemiah, the last history-oriented book of the Protestant Old Testament.

Summary

The Chronicles narrative begins with Adam, Seth and Enosh, and the story is then carried forward, almost entirely through genealogical lists, down to the founding of the United Kingdom of Israel in the "introductory chapters", 1 Chronicles 1–9. The bulk of the remainder of 1 Chronicles, after a brief account of Saul in chapter 10, is concerned with the reign of David. The next long section concerns David's son Solomon, and the final part is concerned with the Kingdom of Judah, with occasional references to the northern Kingdom of Israel (2 Chronicles 10–36). The final chapter covers briefly the reigns of the last four kings, until Judah is destroyed and the people taken into exile in Babylon. In the two final verses, identical to the opening verses of the Book of Ezra, the Persian king Cyrus the Great conquers the Neo-Babylonian Empire and authorises the restoration of the Temple in Jerusalem and the return of the exiles.

Structure

Originally a single work, Chronicles was divided into two in the Septuagint, a Greek translation produced in the 3rd and 2nd centuries BC. It has three broad divisions:
the genealogies in chapters 1–9 of 1 Chronicles;
the reigns of David and Solomon (constituting the remainder of 1 Chronicles, and chapters 1–9 of 2 Chronicles); and
the narrative of the divided kingdom, focusing on the Kingdom of Judah, in the remainder of 2 Chronicles.

Within this broad structure there are signs that the author has used various other devices to structure his work, notably through drawing parallels between David and Solomon (the first becomes king, establishes the worship of Israel's God in Jerusalem, and fights the wars that will enable the Temple to be built; then Solomon becomes king, builds and dedicates the Temple, and reaps the benefits of prosperity and peace). 1 Chronicles is divided into 29 chapters and 2 Chronicles into 36 chapters. Biblical commentator C. J. Ball suggests that the division into two books introduced by the translators of the Septuagint "occurs in the most suitable place", namely with the conclusion of David's reign as king and the initiation of Solomon's reign. The Talmud considered Chronicles one book.

Composition

Origins

The last events recorded in Chronicles take place in the reign of Cyrus the Great, the Persian king who conquered Babylon in 539 BC; this sets the earliest possible date for this passage of the book. Chronicles appears to be largely the work of a single individual. The writer was probably male, probably a Levite (temple priest), and probably from Jerusalem. He was well-read, a skilled editor, and a sophisticated theologian.
He aimed to use the narratives in the Torah and former prophets to convey religious messages to his peers, the literary and political elite of Jerusalem in the time of the Achaemenid Empire. Jewish and Christian tradition identified this author as the 5th-century BC figure Ezra, who gives his name to the Book of Ezra; Ezra is also believed by the Talmudic sages to have written both his own book (i. e., Ezra–Nehemiah) and Chronicles up to his own time, the latter having been finished by Nehemiah. Later critics, skeptical of the long-maintained tradition, preferred to call the author "the Chronicler". However, many scholars maintain support for Ezra's authorship, not only based on centuries of work by Jewish historians, but also due to the consistency of language and speech patterns between Chronicles and Ezra–Nehemiah. Professor Emeritus Menahem Haran of the Hebrew University of Jerusalem explains, "the overall unity of the Chronistic Work is … demonstrated by a common ideology, the uniformity of legal, cultic and historical conceptions and specific style, all of which reflect one opus." One of the most striking, although inconclusive, features of Chronicles is that its closing sentence is repeated as the opening of Ezra–Nehemiah. In antiquity, such repeated verses, like the "catch-lines" used by modern printers, often appeared at the end of a scroll to facilitate the reader's passing on to the correct second book-scroll after completing the first. This scribal device was employed in works that exceeded the scope of a single scroll and had to be continued on another scroll. The latter half of the 20th century, amid growing skepticism in academia regarding history in the Biblical tradition, saw a reappraisal of the authorship question. Though there is a general lack of corroborating evidence, many now regard it as improbable that the author of Chronicles was also the author of the narrative portions of Ezra–Nehemiah. These critics suggest that Chronicles was probably composed between 400 and 250 BC, with the period 350–300 BC the most likely. This timeframe is achieved by estimates made based on genealogies appearing in the Greek Septuagint. This theory bases its premise on the latest person mentioned in Chronicles, Anani. Anani is an eighth-generation descendant of King Jehoiachin according to the Masoretic Text. This has persuaded many supporters of the Septuagint's reading to place Anani's likely date of birth a century later than what had been largely accepted for two millennia. Sources Much of the content of Chronicles is a repetition of material from other books of the Bible, from Genesis to Kings, and so the usual scholarly view is that these books, or an early version of them, provided the author with the bulk of his material. It is, however, possible that the situation was rather more complex, and that books such as Genesis and Samuel should be regarded as contemporary with Chronicles, drawing on much of the same material, rather than a source for it. Despite much discussion of this issue, no agreement has been reached. Genre The translators who created the Greek version of the Jewish Bible (the Septuagint) called this book Paralipomenon, "Things Left Out", indicating that they thought of it as a supplement to another work, probably Genesis–Kings, but the idea seems inappropriate, since much of Genesis–Kings has been copied almost without change. 
Some modern scholars proposed that Chronicles is a midrash, or traditional Jewish commentary, on Genesis–Kings, but again this is not entirely accurate since the author or authors do not comment on the older books so much as use them to create a new work. Recent suggestions have been that it was intended as a clarification of the history in Genesis–Kings, or a replacement or alternative for it. Themes Presbyterian theologian Paul K. Hooker argues that the generally accepted message the author wished to give to his audience was a theological reflection, not a "history of Israel": God is active in history, and especially the history of Israel. The faithfulness or sins of individual kings are immediately rewarded or punished by God. (This is in contrast to the theology of the Books of Kings, where the faithlessness of kings was punished on later generations through the Babylonian exile). God calls Israel to a special relationship. The call begins with the genealogies, gradually narrowing the focus from all mankind to a single family, the Israelites, the descendants of Jacob. "True" Israel is those who continue to worship Yahweh at the Temple in Jerusalem (in the southern Kingdom of Judah), with the result that the history of the historical Kingdom of Israel is almost completely ignored. God chose David and his dynasty as the agents of his will. According to the author of Chronicles, the three great events of David's reign were his bringing the Ark of the Covenant to Jerusalem, his founding of an eternal royal dynasty, and his preparations for the construction of the Temple. God chose a site in Jerusalem as the location for the Temple, the place where God should be worshiped. More time and space are spent on the construction of the Temple and its rituals of worship than on any other subject. By stressing the central role of the Temple in pre-exilic Judah, the author also stresses the importance of the newly rebuilt Persian-era Second Temple to his own readers. God remains active in Israel. The past is used to legitimize the author's present: this is seen most clearly in the detailed attention he gives to the Temple built by Solomon, but also in the genealogy and lineages, which connect his own generation to the distant past and thus make the claim that the present is a continuation of that past. See also History of ancient Israel and Judah References Bibliography External links Translations Divrei Hayamim I – Chronicles I (Judaica Press) translation [with Rashi's commentary] at Chabad.org Divrei Hayamim II – Chronicles II (Judaica Press) translation [with Rashi's commentary] at Chabad.org 1 Chronicles at Biblegateway 2 Chronicles at Biblegateway 1 Chronicles at Bible-Book.org 2 Chronicles at Bible-Book.org Introductions Tuell, S., 1 & 2 Chronicles Audiobooks 4th-century BC books 3rd-century BC books Historical books Ketuvim King lists Works attributed to the Chronicler
1,851
4,353
https://en.wikipedia.org/wiki/Baldric
Baldric
A baldric (also baldrick, bawdrick, bauldrick as well as other rare or obsolete variations) is a belt worn over one shoulder that is typically used to carry a weapon (usually a sword) or other implement such as a bugle or drum. The word may also refer to any belt in general, but this usage is poetic or archaic. In modern contexts, military drum majors usually wear a baldric.

Usage

Baldrics have been used since ancient times, usually as part of military dress. The design offers more support for weight than a standard waist belt, without restricting movement of the arms, while allowing easy access to the object carried. Alternatively, and especially in modern times, the baldric may fill a ceremonial role rather than a practical one. Most Roman tombstones in the third century had depictions of wide baldrics.

Design

One end of the baldric was broad and finished in a straight edge, while the other was tapered to a narrow strip. The narrow end was brought through a scabbard runner and was probably wrapped around the scabbard twice. Circular metal discs called phalerae were attached to the broad end. Four leather baldrics were found at Vimose and Thorsbjerg. One of these measured 118 cm long and 8 cm wide.

Roman balteus

During ancient Roman times the balteus (plural baltei) was a type of baldric commonly used to suspend a sword. It was a belt generally worn over the shoulder, passing obliquely down to the side, typically made of leather, often ornamented with precious stones, metals or both. There was also a similar belt worn by the Romans, particularly by soldiers, called a cintus (pl. cinti) that fastened around the waist. The word accintus, meaning a soldier (literally, "girt" as for battle), attests to this differing usage.

Today

Many non-military or paramilitary organisations include baldrics as part of ceremonial dress. The Knights of Columbus 4th Degree Colour Corps uses a baldric as part of their uniform; it supports a ceremonial sword. The University of Illinois Marching Illini wore two baldrics as a part of their uniform until 2009, with one over each shoulder. They crossed in the front and back and were buttoned onto the jacket beneath a cape and epaulets. Today, the current Marching Illini wear one baldric with two sides, ILLINI on one side and the traditional orange and white baldric from the previous uniform on the other. A crossed pair of baldrics is often worn as part of the uniform of Morris dancers; different coloured baldrics help to distinguish different sides.

In literature and culture

Baldrics appear in the classical literary canon, and later in fantasy and science fiction genres.
The decorated baldric of Pallas plays a key part in the Aeneid, leading Aeneas to kill Turnus. (1st century BC)
In Sir Gawain and the Green Knight Gawain returns from his battle with the Green Knight wearing the green girdle "obliquely, like a baldric, bound at his side,/ below his left shoulder, laced in a knot, in betokening the blame he had borne for his fault." (14th century)
The yeoman in Chaucer's Canterbury Tales is described as wearing a "baldrick of bright green." (14th century)
Benedick, from William Shakespeare's Much Ado About Nothing, says "But that I will have a recheat winded in my forehead or hang my bugle in an invisible baldric all women shall pardon me." (16th century)
Britomart, in Edmund Spenser's Faerie Queene, clothes herself in a borrowed armour "with brave bauldrick garnished" before embarking on her quest (Book III, canto iii). (16th century)
A baldric features prominently in Chapter 4 of Alexandre Dumas' The Three Musketeers. (19th century)
Walter Scott, in Ivanhoe, published in 1819, describes a yeoman "with a baldric and a badge of silver". (19th century)
In The Fellowship of the Ring, Boromir is described: "On a baldric he wore a great horn tipped with silver that now was laid upon his knees." (20th century)
A baldrick is also mentioned in the epic poem The Lady of Shalott by Alfred, Lord Tennyson, in the tenth stanza: "And from his blazon'd baldric slung, A mighty silver bugle hung". (19th century, from 13th century)
Some species and factions such as Klingons wear baldrics in Star Trek, such as Kor, Koloth, Kang or Worf, although sometimes they are referred to as a sash. The character Worf does so in almost every one of his appearances through two series and four films. In The Next Generation episode "Conundrum", Worf, due to amnesia, mistakenly believes that the baldric indicates his rank or authority, so he briefly assumes command of the Enterprise. (20th century)
Baldrick is a character played by Tony Robinson in the BBC comedy series Blackadder. (20th century)

See also
Baldrick (Blackadder character)
Bandolier
Sam Browne belt
Shoulder belt
Webbing

References

Ancient Roman legionary equipment
Military uniforms
Ancient Roman military clothing
Belts (clothing)
1,871
4,431
https://en.wikipedia.org/wiki/Beauty
Beauty
Beauty is commonly described as a feature of objects that makes these objects pleasurable to perceive. Such objects include landscapes, sunsets, humans and works of art. Beauty, together with art and taste, is the main subject of aesthetics, one of the major branches of philosophy. As a positive aesthetic value, it is contrasted with ugliness as its negative counterpart. One difficulty in understanding beauty is that it has both objective and subjective aspects: it is seen as a property of things but also as depending on the emotional response of observers. Because of its subjective side, beauty is said to be "in the eye of the beholder". It has been argued that the ability of the subject to perceive and judge beauty, sometimes referred to as the "sense of taste", can be trained and that the verdicts of experts coincide in the long run. This would suggest that the standards of validity of judgments of beauty are intersubjective, i.e. dependent on a group of judges, rather than fully subjective or fully objective.

Conceptions of beauty aim to capture what is essential to all beautiful things. Classical conceptions define beauty in terms of the relation between the beautiful object as a whole and its parts: the parts should stand in the right proportion to each other and thus compose an integrated harmonious whole. Hedonist conceptions see a necessary connection between pleasure and beauty, e.g. that for an object to be beautiful is for it to cause disinterested pleasure. Other conceptions include defining beautiful objects in terms of their value, of a loving attitude towards them or of their function.

Overview

Beauty, together with art and taste, is the main subject of aesthetics, one of the major branches of philosophy. Beauty is usually categorized as an aesthetic property besides other properties, like grace, elegance or the sublime. As a positive aesthetic value, beauty is contrasted with ugliness as its negative counterpart. Beauty is often listed as one of the three fundamental concepts of human understanding besides truth and goodness.

Objectivists or realists see beauty as an objective or mind-independent feature of beautiful things, which is denied by subjectivists. The source of this debate is that judgments of beauty seem to be based on subjective grounds, namely our feelings, while claiming universal correctness at the same time. This tension is sometimes referred to as the "antinomy of taste". Adherents of both sides have suggested that a certain faculty, commonly called a sense of taste, is necessary for making reliable judgments about beauty. David Hume, for example, suggests that this faculty can be trained and that the verdicts of experts coincide in the long run.

Beauty is mainly discussed in relation to concrete objects accessible to sensory perception. It has been suggested that the beauty of a thing supervenes on the sensory features of this thing. It has also been proposed that abstract objects like stories or mathematical proofs can be beautiful. Beauty plays a central role in works of art and nature. An influential distinction among beautiful things, according to Immanuel Kant, is that between dependent and free beauty. A thing has dependent beauty if its beauty depends on the conception or function of this thing, unlike free or absolute beauty. Examples of dependent beauty include an ox that is beautiful as an ox but not as a horse, or a photograph that is beautiful because it depicts a beautiful building but lacks beauty generally speaking because of its low quality.

Objectivism and subjectivism

Judgments of beauty seem to occupy an intermediary position between objective judgments, e.g. concerning the mass and shape of a grapefruit, and subjective likes, e.g. concerning whether the grapefruit tastes good. Judgments of beauty differ from the former because they are based on subjective feelings rather than objective perception. But they also differ from the latter because they lay claim to universal correctness. This tension is also reflected in common language. On the one hand, we talk about beauty as an objective feature of the world that is ascribed, for example, to landscapes, paintings or humans. The subjective side, on the other hand, is expressed in sayings like "beauty is in the eye of the beholder". These two positions are often referred to as objectivism (or realism) and subjectivism.

Objectivism is the traditional view, while subjectivism developed more recently in western philosophy. Objectivists hold that beauty is a mind-independent feature of things. On this account, the beauty of a landscape is independent of who perceives it or whether it is perceived at all. Disagreements may be explained by an inability to perceive this feature, sometimes referred to as a "lack of taste". Subjectivism, on the other hand, denies the mind-independent existence of beauty. Influential for the development of this position was John Locke's distinction between primary qualities, which the object has independent of the observer, and secondary qualities, which constitute powers in the object to produce certain ideas in the observer. When applied to beauty, there is still a sense in which it depends on the object and its powers. But this account makes the possibility of genuine disagreements about claims of beauty implausible, since the same object may produce very different ideas in distinct observers. The notion of "taste" can still be used to explain why different people disagree about what is beautiful, but there is no objectively right or wrong taste; there are just different tastes.

The problem with both the objectivist and the subjectivist position in their extreme form is that each has to deny some intuitions about beauty. This issue is sometimes discussed under the label "antinomy of taste". It has prompted various philosophers to seek a unified theory that can take all these intuitions into account. One promising route to solve this problem is to move from subjective to intersubjective theories, which hold that the standards of validity of judgments of taste are intersubjective or dependent on a group of judges rather than objective. This approach tries to explain how genuine disagreement about beauty is possible despite the fact that beauty is a mind-dependent property, dependent not on an individual but a group. A closely related theory sees beauty as a secondary or response-dependent property. On one such account, an object is beautiful "if it causes pleasure by virtue of its aesthetic properties". The problem that different people respond differently can be addressed by combining response-dependence theories with so-called ideal-observer theories: it only matters how an ideal observer would respond. There is no general agreement on how "ideal observers" are to be defined, but it is usually assumed that they are experienced judges of beauty with a fully developed sense of taste. This suggests an indirect way of solving the antinomy of taste: instead of looking for necessary and sufficient conditions of beauty itself, one can learn to identify the qualities of good critics and rely on their judgments. This approach would only work, however, if unanimity among experts were ensured. But even experienced judges may disagree in their judgments, which threatens to undermine ideal-observer theories.

Conceptions

Various conceptions of the essential features of beautiful things have been proposed, but there is no consensus as to which is the right one.

Classical

The "classical conception" defines beauty in terms of the relation between the beautiful object as a whole and its parts: the parts should stand in the right proportion to each other and thus compose an integrated harmonious whole. On this account, which found its most explicit articulation in the Italian Renaissance, the beauty of a human body, for example, depends, among other things, on the right proportion of the different parts of the body and on the overall symmetry. One problem with this conception is that it is difficult to give a general and detailed description of what is meant by "harmony between parts", which raises the suspicion that defining beauty through harmony results in exchanging one unclear term for another one. Some attempts have been made to dissolve this suspicion by searching for laws of beauty, like the golden ratio. The 18th-century philosopher Alexander Baumgarten, for example, saw laws of beauty in analogy with laws of nature and believed that they could be discovered through empirical research. As of 2003, these attempts had failed to find a general definition of beauty, and several authors make the opposite claim, that such laws cannot be formulated, part of their definition of beauty.

Hedonism

A very common element in many conceptions of beauty is its relation to pleasure. Hedonism makes this relation part of the definition of beauty by holding that there is a necessary connection between pleasure and beauty, e.g. that for an object to be beautiful is for it to cause pleasure or that the experience of beauty is always accompanied by pleasure. This account is sometimes labeled as "aesthetic hedonism" in order to distinguish it from other forms of hedonism. An influential articulation of this position comes from Thomas Aquinas, who treats beauty as "that which pleases in the very apprehension of it". Immanuel Kant explains this pleasure through a harmonious interplay between the faculties of understanding and imagination. A further question for hedonists is how to explain the relation between beauty and pleasure. This problem is akin to the Euthyphro dilemma: is something beautiful because we enjoy it or do we enjoy it because it is beautiful? Identity theorists solve this problem by denying that there is a difference between beauty and pleasure: they identify beauty, or the appearance of it, with the experience of aesthetic pleasure. Hedonists usually restrict and specify the notion of pleasure in various ways in order to avoid obvious counterexamples. One important distinction in this context is the difference between pure and mixed pleasure. Pure pleasure excludes any form of pain or unpleasant feeling while the experience of mixed pleasure can include unpleasant elements.
But beauty can involve mixed pleasure, for example, in the case of a beautifully tragic story, which is why mixed pleasure is usually allowed in hedonist conceptions of beauty. Another problem faced by hedonist theories is that we take pleasure from many things that are not beautiful. One way to address this issue is to associate beauty with a special type of pleasure: aesthetic or disinterested pleasure. A pleasure is disinterested if it is indifferent to the existence of the beautiful object or if it did not arise owing to an antecedent desire through means-end reasoning. For example, the joy of looking at a beautiful landscape would still be valuable if it turned out that this experience was an illusion, which would not be true if this joy was due to seeing the landscape as a valuable real estate opportunity. Opponents of hedonism usually concede that many experiences of beauty are pleasurable but deny that this is true for all cases. For example, a cold, jaded critic may still be a good judge of beauty because of her years of experience but lack the joy that initially accompanied her work. One way to avoid this objection is to allow responses to beautiful things to lack pleasure while insisting that all beautiful things merit pleasure, that aesthetic pleasure is the only appropriate response to them.

Others

G. E. Moore explained beauty in regard to intrinsic value as "that of which the admiring contemplation is good in itself". This definition connects beauty to experience while managing to avoid some of the problems usually associated with subjectivist positions since it allows that things may be beautiful even if they are never experienced. Another subjectivist theory of beauty comes from George Santayana, who suggested that we project pleasure onto the things we call "beautiful". So in a process akin to a category mistake, one treats one's subjective pleasure as an objective property of the beautiful thing. Other conceptions include defining beauty in terms of a loving or longing attitude towards the beautiful object or in terms of its usefulness or function. In 1871, functionalist Charles Darwin explained beauty as a result of accumulative sexual selection in "The Descent of Man and Selection in Relation to Sex".

In philosophy

Greco-Roman tradition

The classical Greek noun that best translates to the English-language words "beauty" or "beautiful" was κάλλος, kallos, and the adjective was καλός, kalos. However, kalos may also be, and often is, translated as "good" or "of fine quality" and thus has a broader meaning than mere physical or material beauty. Similarly, kallos was used differently from the English word beauty in that it first and foremost applied to humans and bore an erotic connotation. The Koine Greek word for beautiful was ὡραῖος, hōraios, an adjective etymologically coming from the word ὥρα, hōra, meaning "hour". In Koine Greek, beauty was thus associated with "being of one's hour". Thus, a ripe fruit (of its time) was considered beautiful, whereas a young woman trying to appear older or an older woman trying to appear younger would not be considered beautiful. In Attic Greek, hōraios had many meanings, including "youthful" and "ripe old age". Another classical term in use to describe beauty was pulchrum (Latin). Beauty for ancient thinkers existed both in form, which is the material world as it is, and as embodied in the spirit, which is the world of mental formations. Greek mythology mentions Helen of Troy as the most beautiful woman.
Ancient Greek architecture is based on this view of symmetry and proportion.

Pre-Socratic

In one fragment of Heraclitus's writings (Fragment 106) he mentions beauty; this reads: "To God all things are beautiful, good, right..." The earliest Western theory of beauty can be found in the works of early Greek philosophers from the pre-Socratic period, such as Pythagoras, who conceived of beauty as useful for a moral education of the soul. He wrote of how people experience pleasure when aware of a certain type of formal situation present in reality, perceivable by sight or through the ear, and discovered the underlying mathematical ratios in the harmonic scales in music. The Pythagoreans conceived of the presence of beauty in universal terms, that is, as existing in a cosmological state; they observed beauty in the heavens. They saw a strong connection between mathematics and beauty. In particular, they noted that objects proportioned according to the golden ratio seemed more attractive.

Classical period

The classical concept of beauty is one that exhibits perfect proportion (Wölfflin). In this context, the concept belonged often within the discipline of mathematics. An idea of spiritual beauty emerged during the classical period: beauty was something embodying divine goodness, while behaviour which might be classified as beautiful was the demonstration of an inner state of morality aligned to the good. The writing of Xenophon shows a conversation between Socrates and Aristippus. Socrates discerned differences in the conception of the beautiful: for example, in inanimate objects, the effectiveness of execution of design was a deciding factor in the perception of beauty in something. By the account of Xenophon, Socrates found beauty congruent with that which was defined as the morally good; in short, he thought beauty coincident with the good.

Beauty is a subject of Plato in his work Symposium. In the work, the high priestess Diotima describes how beauty moves out from a core singular appreciation of the body to outer appreciations via loved ones, to the world in its state of culture and society (Wright). In other words, Diotima gives to Socrates an explanation of how love should begin with erotic attachment, and end with the transcending of the physical to an appreciation of beauty as a thing in itself. The ascent of love begins with one's own body, then secondarily, in appreciating beauty in another's body, thirdly beauty in the soul, which corresponds to beauty in the mind in the modern sense, fourthly beauty in institutions, laws and activities, fifthly beauty in knowledge and the sciences, and lastly beauty itself, which translates to the original Greek language term as auto to kalon. In the final state, auto to kalon and truth are united as one. There is the sense in the text, concerning love and beauty, that they both co-exist but are still independent or, in other words, mutually exclusive, since love does not have beauty, given that it seeks beauty. Toward the end, the work provides a description of beauty in a negative sense.

Plato also discusses beauty in his work Phaedrus, and identifies Alcibiades as beautiful in Parmenides. He considered beauty to be the Idea (Form) above all other Ideas. Platonic thought synthesized beauty with the divine. Scruton (cited by Konstan) states that, for Plato, the idea of beauty is something inviting desirousness (cf. seduction), yet one that promotes an intellectual renunciation (cf. denouncing) of desire. For Alexander Nehamas, in Plato's considerations, it is only in the locating of desire that the sense of beauty exists.

Aristotle defines beauty in Metaphysics as having order, symmetry and definiteness, which the mathematical sciences exhibit to a special degree. He saw a relationship between the beautiful (to kalon) and virtue, arguing that "Virtue aims at the beautiful."

Roman

In De Natura Deorum Cicero wrote of "the splendour and beauty of creation"; in respect of this, and all the facets of reality resulting from creation, he postulated them to be a reason to see the existence of a God as creator.

Western Middle Ages

In the Middle Ages, Catholic philosophers like Thomas Aquinas included beauty among the transcendental attributes of being. In his Summa Theologica, Aquinas described the three conditions of beauty as: integritas (wholeness), consonantia (harmony and proportion), and claritas (a radiance and clarity that makes the form of a thing apparent to the mind). In the Gothic architecture of the High and Late Middle Ages, light was considered the most beautiful revelation of God, and this was heralded in design. Examples are the stained glass of Gothic cathedrals including Notre-Dame de Paris and Chartres Cathedral. St. Augustine said of beauty, "Beauty is indeed a good gift of God; but that the good may not think it a great good, God dispenses it even to the wicked."

Renaissance

Classical philosophy and sculptures of men and women produced according to the Greek philosophers' tenets of ideal human beauty were rediscovered in Renaissance Europe, leading to a re-adoption of what became known as a "classical ideal". In terms of female human beauty, a woman whose appearance conforms to these tenets is still called a "classical beauty" or said to possess a "classical beauty", whilst the foundations laid by Greek and Roman artists have also supplied the standard for male beauty and female beauty in western civilization, as seen, for example, in the Winged Victory of Samothrace. During the Gothic era, the classical aesthetic canon of beauty was rejected as sinful. Later, Renaissance and Humanist thinkers rejected this view, and considered beauty to be the product of rational order and harmonious proportions. Renaissance artists and architects (such as Giorgio Vasari in his "Lives of Artists") criticised the Gothic period as irrational and barbarian. This point of view of Gothic art lasted until Romanticism, in the 19th century. Vasari aligned himself with the classical notion and thought of beauty as arising from proportion and order.

Age of Reason

The Age of Reason saw a rise in an interest in beauty as a philosophical subject. For example, Scottish philosopher Francis Hutcheson argued that beauty is "unity in variety and variety in unity". He wrote that beauty was neither purely subjective nor purely objective—it could be understood not as "any Quality suppos'd to be in the Object, which should of itself be beautiful, without relation to any Mind which perceives it: For Beauty, like other Names of sensible Ideas, properly denotes the Perception of some mind; ... however we generally imagine that there is something in the Object just like our Perception."
Immanuel Kant believed that there could be no "universal criterion of the beautiful" and that the experience of beauty is subjective, but that an object is judged to be beautiful when it seems to display "purposiveness"; that is, when its form is perceived to have the character of a thing designed according to some principle and fitted for a purpose. He distinguished "free beauty" from "merely dependent beauty", explaining that "the first presupposes no concept of what the object ought to be; the second does presuppose such a concept and the perfection of the object in accordance therewith." By this definition, free beauty is found in seashells and wordless music; dependent beauty in buildings and the human body.

The Romantic poets, too, became highly concerned with the nature of beauty, with John Keats arguing in Ode on a Grecian Urn that:
Beauty is truth, truth beauty, —that is all
Ye know on earth, and all ye need to know.

Western 19th and 20th century

In the Romantic period, Edmund Burke postulated a difference between beauty in its classical meaning and the sublime. The concept of the sublime, as explicated by Burke and Kant, suggested viewing Gothic art and architecture, though not in accordance with the classical standard of beauty, as sublime.

The 20th century saw an increasing rejection of beauty by artists and philosophers alike, culminating in postmodernism's anti-aesthetics. This is despite beauty being a central concern of one of postmodernism's main influences, Friedrich Nietzsche, who argued that the Will to Power was the Will to Beauty.

In the aftermath of postmodernism's rejection of beauty, thinkers have returned to beauty as an important value. American analytic philosopher Guy Sircello proposed his New Theory of Beauty as an effort to reaffirm the status of beauty as an important philosophical concept. He rejected the subjectivism of Kant and sought to identify the properties inherent in an object that make it beautiful. He called qualities such as vividness, boldness, and subtlety "properties of qualitative degree" (PQDs) and stated that a PQD makes an object beautiful if it is not—and does not create the appearance of—"a property of deficiency, lack, or defect"; and if the PQD is strongly present in the object.

Elaine Scarry argues that beauty is related to justice.

Beauty is also studied by psychologists and neuroscientists in the fields of experimental aesthetics and neuroesthetics respectively. Psychological theories see beauty as a form of pleasure. Correlational findings support the view that more beautiful objects are also more pleasing. Some studies suggest that higher experienced beauty is associated with activity in the medial orbitofrontal cortex. This approach of localizing the processing of beauty in one brain region has received criticism within the field.

Philosopher and novelist Umberto Eco wrote On Beauty: A History of a Western Idea (2004) and On Ugliness (2007). The narrator of his novel The Name of the Rose follows Aquinas in declaring: "three things concur in creating beauty: first of all integrity or perfection, and for this reason, we consider ugly all incomplete things; then proper proportion or consonance; and finally clarity and light", before going on to say "the sight of the beautiful implies peace".

Chinese philosophy

Chinese philosophy has traditionally not made a separate discipline of the philosophy of beauty.
Confucius identified beauty with goodness, and considered a virtuous personality to be the greatest of beauties: in his philosophy, "a neighborhood with a ren man in it is a beautiful neighborhood." Confucius's student Zeng Shen expressed a similar idea: "few men could see the beauty in someone whom they dislike." Mencius considered "complete truthfulness" to be beauty. Zhu Xi said: "When one has strenuously implemented goodness until it is filled to completion and has accumulated truth, then the beauty will reside within it and will not depend on externals."

As an attribute to humans

The word "beauty" is often used as a countable noun to describe a beautiful woman. The characterization of a person as "beautiful", whether on an individual basis or by community consensus, is often based on some combination of inner beauty, which includes psychological factors such as personality, intelligence, grace, politeness, charisma, integrity, congruence and elegance, and outer beauty (i.e. physical attractiveness), which includes physical attributes which are valued on an aesthetic basis.

Standards of beauty have changed over time, based on changing cultural values. Historically, paintings show a wide range of different standards for beauty. However, humans who are relatively young, with smooth skin, well-proportioned bodies, and regular features, have traditionally been considered the most beautiful throughout history.

A strong indicator of physical beauty is "averageness". When images of human faces are averaged together to form a composite image, they become progressively closer to the "ideal" image and are perceived as more attractive. This was first noticed in 1883, when Francis Galton overlaid photographic composite images of the faces of vegetarians and criminals to see if there was a typical facial appearance for each. When doing this, he noticed that the composite images were more attractive than any of the individual images. Researchers have replicated the result under more controlled conditions and found that the computer-generated, mathematical average of a series of faces is rated more favorably than individual faces (a short illustrative sketch of this averaging procedure is given at the end of this entry). It is argued that it is evolutionarily advantageous that sexual creatures are attracted to mates who possess predominantly common or average features, because it suggests the absence of genetic or acquired defects. Since the 1970s, there has been increasing evidence that a preference for beautiful faces emerges early in infancy, and is probably innate, and that the rules by which attractiveness is established are similar across different genders and cultures.

A feature of beautiful women which has been explored by researchers is a waist–hip ratio (waist circumference divided by hip circumference; a 63 cm waist with 90 cm hips, for example, gives a ratio of 0.70) of approximately 0.70. As of 2004, physiologists had shown that women with hourglass figures were more fertile than other women because of higher levels of certain female hormones, a fact that may subconsciously condition males choosing mates. However, in 2008 other commentators suggested that this preference may not be universal. For instance, in some non-Western cultures in which women have to do work such as finding food, men tend to have preferences for higher waist–hip ratios.

Exposure to the thin ideal in mass media, such as fashion magazines, directly correlates with body dissatisfaction, low self-esteem, and the development of eating disorders among female viewers.
Further, the widening gap between individual body sizes and societal ideals continues to breed anxiety among young girls as they grow, highlighting the dangerous nature of beauty standards in society.

Western concept

Beauty standards are rooted in cultural norms crafted by societies and media over centuries. As of 2018, it has been argued that the predominance of white women featured in movies and advertising leads to a Eurocentric concept of beauty, which assigns inferiority to women of color. Thus, societies and cultures across the globe struggle to diminish the longstanding internalized racism. Eurocentric standards for men include tallness, leanness, and muscularity, which have been idolized through American media, such as in Hollywood films and magazine covers.

The prevailing Eurocentric concept of beauty has varying effects on different cultures. Primarily, adherence to this standard among African American women has bred a lack of positive reification of African beauty, and philosopher Cornel West elaborates that, "much of black self-hatred and self-contempt has to do with the refusal of many black Americans to love their own black bodies – especially their black noses, hips, lips, and hair." These insecurities can be traced back to the global idealization of women with light skin, green or blue eyes, and long straight or wavy hair in magazines and media that starkly contrast with the natural features of African women.

Much criticism has been directed at models of beauty which depend solely upon Western ideals of beauty, as seen for example in the Barbie model franchise. Criticisms of Barbie are often centered around concerns that children consider Barbie a role model of beauty and will attempt to emulate her. One of the most common criticisms of Barbie is that she promotes an unrealistic idea of body image for a young woman, leading to a risk that girls who attempt to emulate her will become anorexic. By 1998, these criticisms, and the lack of diversity in such franchises as the Barbie model of beauty in Western culture, had led to a dialogue about creating non-exclusive models of Western ideals in body type and beauty. Mattel responded to these criticisms. Starting in 1980, it produced Hispanic dolls, and later came models from across the globe. For example, in 2007, it introduced "Cinco de Mayo Barbie" wearing a ruffled red, white, and green dress (echoing the Mexican flag). Hispanic magazine reports that:

Black concept

In the 1960s the black is beautiful cultural movement sought to dispel the notion of a Eurocentric concept of beauty.

Asian concept

In East Asian cultures, familial pressures and cultural norms shape beauty ideals; a 2017 experimental study concluded that the expectation that men in Asian culture do not like women who look "fragile" was impacting Asian American women's lifestyle, eating, and appearance choices. In addition to the "male gaze", media portrayals of Asian women as petite and the portrayal of beautiful women in American media as fair complexioned and slim-figured have induced anxiety and depressive symptoms among Asian American women who don't fit either of these beauty ideals. Further, the high status associated with fairer skin can be attributed to Asian societal history, as upper-class people hired workers to perform outdoor, manual labor, cultivating a visual divide over time between lighter complexioned, wealthier families and sun-tanned, darker laborers.
This, along with the Eurocentric beauty ideals embedded in Asian culture, has made skin lightening creams, rhinoplasty, and blepharoplasty (an eyelid surgery meant to give Asians a more European, "double-eyelid" appearance) commonplace among Asian women, illuminating the insecurity that results from cultural beauty standards.

In Japan, the concept of beauty in men is known as 'bishōnen'. Bishōnen refers to males with distinctly feminine features, physical characteristics establishing the standard of beauty in Japan and typically exhibited in their pop culture idols. A multibillion-dollar industry of Japanese Aesthetic Salons exists for this reason.

Effects on society

Researchers have found that good-looking students get higher grades from their teachers than students with an ordinary appearance. Some studies using mock criminal trials have shown that physically attractive "defendants" are less likely to be convicted—and if convicted are likely to receive lighter sentences—than less attractive ones (although the opposite effect was observed when the alleged crime was swindling, perhaps because jurors perceived the defendant's attractiveness as facilitating the crime). Studies among teens and young adults, such as those of psychiatrist and self-help author Eva Ritvo, show that skin conditions have a profound effect on social behavior and opportunity.

How much money a person earns may also be influenced by physical beauty. One study found that people low in physical attractiveness earn 5 to 10 percent less than ordinary-looking people, who in turn earn 3 to 8 percent less than those who are considered good-looking. In the market for loans, the least attractive people are less likely to get approvals, although they are less likely to default. In the marriage market, women's looks are at a premium, but men's looks do not matter much. The impact of physical attractiveness on earnings varies across races, with the largest beauty wage gap among black women and black men. Conversely, being very unattractive increases the individual's propensity for criminal activity for a number of crimes ranging from burglary to theft to selling illicit drugs.

Discrimination against others based on their appearance is known as lookism.

See also
Adornment
Aesthetics
Beauty pageant
Body modification
Feminine beauty ideal
Glamour (presentation)
Masculine beauty ideal
Mathematical beauty
Processing fluency theory of aesthetic pleasure
Unattractiveness
Cosmetics

References

Further reading
Liebelt, C. (2022), Beauty: What Makes Us Dream, What Haunts Us. Feminist Anthropology. https://doi.org/10.1002/fea2.12076

External links
BBC Radio 4's In Our Time programme on Beauty (requires RealAudio)
Dictionary of the History of Ideas: Theories of Beauty to the Mid-Nineteenth Century
beautycheck.de/english Regensburg University – Characteristics of beautiful faces
Eli Siegel's "Is Beauty the Making One of Opposites?"
Art and love in Renaissance Italy, issued in connection with an exhibition held Nov. 11, 2008 – Feb. 16, 2009, Metropolitan Museum of Art, New York (see Belle: Picturing Beautiful Women; pages 246–254).
Plato – Symposium in S. Marc Cohen, Patricia Curd, C. D. C. Reeve (ed.)

Aesthetic beauty
Concepts in aesthetics
Concepts in metaphysics
Fashion
Physical attractiveness
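The composite-face ("averageness") studies referenced above come down to a simple image-processing step: aligning face photographs and averaging them pixel by pixel. The following minimal Python sketch illustrates that step under stated assumptions: the images are taken to be pre-aligned and equally sized, and the file names, the function name composite_face, and the use of the NumPy and Pillow libraries are illustrative choices, not details from the studies themselves.

# Minimal sketch of the pixel-wise averaging behind composite-face studies.
# Assumes the input photographs are already aligned (eyes and mouth in the
# same positions) and share the same dimensions; real studies additionally
# warp facial landmarks into correspondence before averaging.
from PIL import Image  # Pillow
import numpy as np

def composite_face(paths):
    """Return the pixel-wise mean of the images at `paths` as a PIL image."""
    stack = np.stack([
        np.asarray(Image.open(p).convert("RGB"), dtype=np.float64)
        for p in paths
    ])
    mean = stack.mean(axis=0)  # average each pixel position across all faces
    return Image.fromarray(mean.astype(np.uint8))

# Hypothetical usage: the more faces averaged, the smoother and more
# "typical" the composite tends to look.
# composite_face(["face1.png", "face2.png", "face3.png"]).save("composite.png")

The design mirrors Galton's photographic overlays: each face contributes equally, so idiosyncratic features cancel out while common structure is reinforced.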
1,923
4,445
https://en.wikipedia.org/wiki/Bob%20Frankston
Bob Frankston
Robert M. Frankston (born June 14, 1949) is an American software engineer and businessman who co-created, with Dan Bricklin, the VisiCalc spreadsheet program. Frankston is also the co-founder of Software Arts.

Early life and education

Frankston was born and raised in Brooklyn, New York. He graduated from Stuyvesant High School in New York City in 1966. He earned an S.B. degree in computer science and mathematics from the Massachusetts Institute of Technology, followed by a Master of Engineering degree in computer science, also from MIT.

Career

Following his work with Dan Bricklin, Frankston later worked at Lotus Development Corporation and Microsoft. Frankston became an outspoken advocate for reducing the role of telecommunications companies in the evolution of the Internet, particularly with respect to broadband and mobile communications. He coined the term "Regulatorium" to describe what he considers collusion between telecommunication companies and their regulators that prevents change.

Awards and recognition
Fellow of the Association for Computing Machinery (1994) "for the invention of VisiCalc, a new metaphor for data manipulation that galvanized the personal computing industry"
MIT William L. Stewart Award for co-founding the M.I.T. Student Information Processing Board (SIPB)
The Association for Computing Machinery Software System Award (1985)
The MIT LCS Industrial Achievement Award
The Washington Award (2001) from the Western Society of Engineers (with Bricklin)
In 2004, he was made a Fellow of the Computer History Museum "for advancing the utility of personal computers by developing the VisiCalc electronic spreadsheet."

References

External links
Bob Frankston's site/blog
Biographical article from Smart Computing

Stuyvesant High School alumni
1949 births
Living people
People from Brooklyn
Fellows of the Association for Computing Machinery
People from Arlington, Massachusetts
1,927
4,446
https://en.wikipedia.org/wiki/Booker%20Prize
Booker Prize
The Booker Prize, formerly known as the Booker Prize for Fiction (1969–2001) and the Man Booker Prize (2002–2019), is a literary prize awarded each year for the best novel written in English and published in the United Kingdom or Ireland. The winner of the Booker Prize receives international publicity which usually leads to a sales boost. When the prize was created, only novels written by Commonwealth, Irish, and South African (and later Zimbabwean) citizens were eligible to receive the prize; in 2014 it was widened to any English-language novel—a change that proved controversial.

A five-person panel constituted by authors, librarians, literary agents, publishers, and booksellers is appointed by the Booker Prize Foundation each year to choose the winning book. A high-profile literary award in British culture, the Booker Prize is greeted with anticipation and fanfare. Literary critics have noted that it is a mark of distinction for authors to be selected for inclusion in the shortlist or to be nominated for the "longlist". A sister prize, the International Booker Prize, is awarded for a book translated into English and published in the United Kingdom or Ireland. The £50,000 prize money is split evenly between the author and translator of the winning novel.

History and administration

The prize was established as the Booker Prize for Fiction after the company Booker, McConnell Ltd began sponsoring the event in 1969; it became commonly known as the "Booker Prize" or the "Booker". When administration of the prize was transferred to the Booker Prize Foundation in 2002, the title sponsor became the investment company Man Group, which opted to retain "Booker" as part of the official title of the prize. The foundation is an independent registered charity funded by the entire profits of Booker Prize Trading Ltd, of which it is the sole shareholder. The prize money awarded with the Booker Prize was originally £5,000. It doubled in 1978 to £10,000 and was subsequently raised to £50,000 in 2002 under the sponsorship of the Man Group, making it one of the world's richest literary prizes. Each of the shortlisted authors receives £2,500 and a specially bound edition of their book. The original Booker Prize trophy was designed by the artist Jan Pieńkowski.

1969–1979

The first winner of the Booker Prize was P. H. Newby in 1969 for his novel Something to Answer For. The inaugural set of five judges included Rebecca West, W. L. Webb, Stephen Spender, Frank Kermode and David Farrer. In 1970, Bernice Rubens became the first woman to win the Booker Prize, for The Elected Member. The rules of the Booker changed in 1971; previously, it had been awarded retrospectively to books published prior to the year in which the award was given. In 1971 the year of eligibility was changed to the same as the year of the award; in effect, this meant that books published in 1970 were not considered for the Booker in either year. The Booker Prize Foundation announced in January 2010 the creation of a special award called the "Lost Man Booker Prize", with the winner chosen from a longlist of 22 novels published in 1970. Alice Munro's The Beggar Maid was shortlisted in 1980, and remains the only short story collection to be shortlisted.

John Sutherland, who was a judge for the 1999 prize, has said:

In 1972, winning writer John Berger, known for his Marxist worldview, protested during his acceptance speech against Booker McConnell. He blamed Booker's 130 years of sugar production in the Caribbean for the region's modern poverty.
Berger donated half of his £5,000 prize to the British Black Panther movement, because it had a socialist and revolutionary perspective in agreement with his own.

1980–1999

In 1980, Anthony Burgess, writer of Earthly Powers, refused to attend the ceremony unless it was confirmed to him in advance whether he had won. His was one of two books considered likely to win, the other being Rites of Passage by William Golding. The judges decided only 30 minutes before the ceremony, giving the prize to Golding. Both novels had been seen as favourites to win leading up to the prize, and the dramatic "literary battle" between two senior writers made front-page news.

In 1981, nominee John Banville wrote a letter to The Guardian requesting that the prize be given to him so that he could use the money to buy every copy of the longlisted books in Ireland and donate them to libraries, "thus ensuring that the books not only are bought but also read – surely a unique occurrence".

Judging for the 1983 award produced a draw between J. M. Coetzee's Life & Times of Michael K and Salman Rushdie's Shame, leaving chair of judges Fay Weldon to choose between the two. According to Stephen Moss in The Guardian, "Her arm was bent and she chose Rushdie", only to change her mind as the result was being phoned through.

In 1992, the jury split the prize between Michael Ondaatje's The English Patient and Barry Unsworth's Sacred Hunger. This prompted the foundation to draw up a rule that made it mandatory for the appointed jury to make the award to just a single author/book.

In 1993, two of the judges threatened to walk out when Trainspotting appeared on the longlist; Irvine Welsh's novel was pulled from the shortlist to satisfy them. The novel would later receive critical acclaim, and is now considered Welsh's masterpiece.

The choice of James Kelman's book How Late It Was, How Late as 1994 Booker Prize winner proved to be one of the most controversial in the award's history. Rabbi Julia Neuberger, one of the judges, declared it "a disgrace" and left the event, later deeming the book to be "crap"; WHSmith's marketing manager called the award "an embarrassment to the whole book trade"; Waterstones in Glasgow sold a mere 13 copies of Kelman's book the following week. In 1994, The Guardian's literary editor Richard Gott, citing the lack of objective criteria and the exclusion of American authors, described the prize as "a significant and dangerous iceberg in the sea of British culture that serves as a symbol of its current malaise".

In 1997, the decision to award Arundhati Roy's The God of Small Things proved controversial. Carmen Callil, chair of the previous year's Booker judges, called it an "execrable" book and said on television that it should not even have been on the shortlist. Booker Prize chairman Martyn Goff said Roy won because nobody objected, following the rejection by the judges of Bernard MacLaverty's shortlisted book due to their dismissal of him as "a wonderful short-story writer and that Grace Notes was three short stories strung together".

2000–present

Before 2001, each year's longlist of nominees was not publicly revealed. From 2001, the longlist started to be published each year, and in 2007 the number of longlisted nominees was capped at 12 or 13 each year.

In 2001, A. L. Kennedy, who was a judge in 1996, called the prize "a pile of crooked nonsense" with the winner determined by "who knows who, who's sleeping with who, who's selling drugs to who, who's married to who, whose turn it is".
The Booker Prize archives, covering 1968 to the present, have a permanent home at Oxford Brookes University Library. The archive, which encompasses the administrative history of the prize from 1968 to date, brings together a diverse range of material, including correspondence, publicity material, copies of both the longlists and the shortlists, minutes of meetings, photographs and material relating to the awards dinner (letters of invitation, guest lists, seating plans). Embargoes of ten or twenty years apply to certain categories of material; examples include all material relating to the judging process and the longlist prior to 2002.

Between 2005 and 2008, the Booker Prize alternated between writers from Ireland and India. "Outsider" John Banville began this trend in 2005 when his novel The Sea was selected as a surprise winner: Boyd Tonkin, literary editor of The Independent, famously condemned it as "possibly the most perverse decision in the history of the award" and rival novelist Tibor Fischer poured scorn on Banville's victory. Kiran Desai of India won in 2006. Anne Enright's 2007 victory came about due to a jury badly split over Ian McEwan's novel On Chesil Beach. The following year it was India's turn again, with Aravind Adiga narrowly defeating Enright's fellow Irishman Sebastian Barry.

Historically, the winner of the Booker Prize had been required to be a citizen of the Commonwealth of Nations, the Republic of Ireland, or Zimbabwe. It was announced on 18 September 2013 that future Booker Prize awards would consider authors from anywhere in the world, so long as their work was in English and published in the UK. This change proved controversial in literary circles. Former winner A. S. Byatt and former judge John Mullan said the prize risked diluting its identity, whereas former judge A. L. Kennedy welcomed the change. Following this expansion, the first winner not from the Commonwealth, Ireland, or Zimbabwe was American Paul Beatty in 2016. Another American, George Saunders, won the following year. In 2018, publishers sought to reverse the change, arguing that the inclusion of American writers would lead to homogenisation, reducing diversity and opportunities everywhere, including in America, to learn about "great books that haven't already been widely heralded".

Man Group announced in early 2019 that the year's prize would be the last of eighteen under their sponsorship. A new sponsor, Crankstart – a charitable foundation run by Sir Michael Moritz and his wife, Harriet Heyman – then announced it would sponsor the award for five years, with the option to renew for another five years. The award title was changed to simply "The Booker Prize". In 2019, despite having been unequivocally warned against doing so, the foundation's jury – under the chair Peter Florence – split the prize, awarding it to two authors, in breach of a rule established in 1993. Florence justified the decision, saying: "We came down to a discussion with the director of the Booker Prize about the rules. And we were told quite firmly that the rules state that you can only have one winner ... and as we have managed the jury all the way through on the principle of consensus, our consensus was that it was our decision to flout the rules and divide this year's prize to celebrate two winners." The two were British writer Bernardine Evaristo for her novel Girl, Woman, Other and Canadian writer Margaret Atwood for The Testaments.
Evaristo's win marked the first time the Booker had been awarded to a black woman, while Atwood's win, at 79, made her the oldest winner.

Judging

The selection process for the winner of the prize commences with the formation of an advisory committee, which includes a writer, two publishers, a literary agent, a bookseller, a librarian, and a chairperson appointed by the Booker Prize Foundation. The advisory committee then selects the judging panel of five people, the membership of which changes each year, although on rare occasions a judge may be selected a second time. Judges are selected from amongst leading literary critics, writers, academics and leading public figures.

The Booker judging process, and the very concept of a "best book" being chosen by a small number of literary insiders, is controversial for many. The Guardian introduced the "Not the Booker Prize", voted for by readers, partly as a reaction to this. Author Amit Chaudhuri wrote: "The idea that a 'book of the year' can be assessed annually by a bunch of people – judges who have to read almost a book a day – is absurd, as is the idea that this is any way of honouring a writer."

The winner is usually announced at a formal dinner in London's Guildhall in early October. However, in 2020, with COVID-19 pandemic restrictions in place, the winner's ceremony was broadcast in November from The Roundhouse, in partnership with the BBC.

Legacy of British Empire

Luke Strongman noted that the rules for the Booker Prize as laid out in 1969, which limited recipients to novelists writing in English from Great Britain or nations that had once belonged to the British Empire, strongly suggested that the purpose of the prize was to deepen ties between the nations that had all been a part of the empire. The first book to win the Booker, Something to Answer For in 1969, concerned the misadventures of an Englishman in Egypt in the 1950s, at the time when British influence in Egypt was ending. Strongman wrote that most of the books that have won the Booker Prize have in some way been concerned with the legacy of the British Empire, with many of the prize winners having engaged in imperial nostalgia. However, over time many of the books that won the prize have reflected the changed balance of power from the emergence of new identities in the former colonies of the empire, and with it "culture after the empire". The attempts of successive British officials to mould "the natives" into their image did not fully succeed, but did profoundly and permanently change the cultures of the colonised, a theme which some non-white winners of the Booker Prize have engaged with in various ways.

Special awards

In 1993, to mark the prize's 25th anniversary, a "Booker of Bookers" Prize was given. Three previous judges of the award, Malcolm Bradbury, David Holloway and W. L. Webb, met and chose Salman Rushdie's Midnight's Children, the 1981 winner, as "the best novel out of all the winners".

In 2006, the Man Booker Prize set up a "Best of Beryl" prize for the author Beryl Bainbridge, who had been nominated five times and yet failed to win once. The prize is said to count as a Booker Prize. The nominees were An Awfully Big Adventure, Every Man for Himself, The Bottle Factory Outing, The Dressmaker and Master Georgie, which won.

Similarly, The Best of the Booker was awarded in 2008 to celebrate the prize's 40th anniversary.
A shortlist of six winners was chosen: Rushdie's Midnight's Children, Coetzee's Disgrace, Carey's Oscar and Lucinda, Gordimer's The Conservationist, Farrell's The Siege of Krishnapur, and Barker's The Ghost Road. The decision was left to a public vote, and the winner was again Midnight's Children.

In 1971, the nature of the prize was changed so that it was awarded to novels published in that year instead of in the previous year; therefore, no novel published in 1970 could win the Booker Prize. This was rectified in 2010 by the awarding of the "Lost Man Booker Prize" to J. G. Farrell's Troubles.

In 2018, to celebrate the 50th anniversary, the Golden Man Booker was awarded. One book from each decade was selected by a panel of judges: Naipaul's In a Free State (the 1971 winner), Lively's Moon Tiger (1987), Ondaatje's The English Patient (1992), Mantel's Wolf Hall (2009) and Saunders' Lincoln in the Bardo (2017). The winner, by popular vote, was The English Patient.

Nomination

Since 2014, each publisher's imprint may submit a number of titles based on its longlisting history (previously each could submit two). Non-longlisted publishers can submit one title, publishers with one or two longlisted books in the previous five years can submit two, publishers with three or four longlisted books are allowed three submissions, and publishers with five or more longlisted books can have four submissions (a small illustrative sketch of this tiered rule follows at the end of this section). In addition, previous winners of the prize are automatically considered if they enter new titles. Books may also be called in: publishers can make written representations to the judges to consider titles in addition to those already entered. In the 21st century the average number of books considered by the judges has been approximately 130.

Related awards for translated works

A separate prize for which any living writer in the world may qualify, the Man Booker International Prize, was inaugurated in 2005. Until 2015, it was given every two years to a living author of any nationality for a body of work published in English or generally available in English translation. In 2016, the award was significantly reconfigured: it is now given annually to a single book in English translation, with a £50,000 prize for the winning title, shared equally between author and translator.

A Russian version of the Booker Prize was created in 1992, called the Booker-Open Russia Literary Prize, also known as the Russian Booker Prize. In 2007, Man Group plc established the Man Asian Literary Prize, an annual literary award given to the best novel by an Asian writer, either written in English or translated into English, and published in the previous calendar year.

As part of The Times Literature Festival in Cheltenham, a Booker event is held on the last Saturday of the festival. Four guest speakers/judges debate a shortlist of four books from a given year from before the introduction of the Booker Prize, and a winner is chosen. Unlike the real Man Booker as it operated from 1969 through 2014, writers from outside the Commonwealth are also considered. In 2008, the winner for 1948 was Alan Paton's Cry, the Beloved Country, beating Norman Mailer's The Naked and the Dead, Graham Greene's The Heart of the Matter and Evelyn Waugh's The Loved One. In 2015, the winner for 1915 was Ford Madox Ford's The Good Soldier, beating The Thirty-Nine Steps (John Buchan), Of Human Bondage (W. Somerset Maugham), Psmith, Journalist (P. G. Wodehouse) and The Voyage Out (Virginia Woolf).
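The tiered submission rule described under "Nomination" above amounts to a small lookup from an imprint's recent longlisting record to a submission quota. A minimal sketch in Python (the function name and the plain integer count of longlistings over the previous five years are illustrative assumptions, not any official Booker Prize tooling):

    def allowed_submissions(longlisted_in_previous_five_years: int) -> int:
        # Tiers as described in the "Nomination" section above.
        if longlisted_in_previous_five_years >= 5:
            return 4
        if longlisted_in_previous_five_years >= 3:
            return 3
        if longlisted_in_previous_five_years >= 1:
            return 2
        return 1  # imprints with no longlisted titles in the window

    # For example, an imprint with two recent longlistings may submit
    # allowed_submissions(2) == 2 titles.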
See also

International Booker Prize
List of British literary awards
List of literary awards
Commonwealth Writers Prize
Grand Prix of Literary Associations
Costa Book Awards
Prix Goncourt
Governor General's Awards
Scotiabank Giller Prize
Miles Franklin Award
Russian Booker Prize
Samuel Johnson Prize (non-fiction)
German Book Prize (Deutscher Buchpreis)
Book of Malachi
The Book of Malachi (Hebrew: מַלְאָכִ֔י, mal’āḵî) is the last book of the Neviim contained in the Tanakh, canonically the last of the Twelve Minor Prophets. In the Christian ordering, the grouping of the prophetic books is the last section of the Old Testament, making Malachi the last book before the New Testament.

The book is commonly attributed to a prophet named Malachi, as its title has frequently been understood as a proper name, although its Hebrew meaning is simply "My Messenger" (the Septuagint reads "his messenger") and may not be the author's name at all. The name occurs in the superscription at 1:1 and in 3:1, although it is highly unlikely that the word refers to the same character in both of these references. Thus, there is substantial debate regarding the identity of the book's author. One of the Targums identifies Ezra (or Esdras) as the author of Malachi. The priest and historian Jerome suggests that this may be because Ezra is seen as an intermediary between the prophets and the "great synagogue." There is, however, no historical evidence yet to support this claim.

Some scholars note affinities between Zechariah 9–14 and the Book of Malachi. Zechariah 9, Zechariah 12, and Malachi 1 are all introduced as "The word of Elohim". Some scholars argue that this collection originally consisted of three independent and anonymous prophecies, two of which were subsequently appended to the Book of Zechariah as what they refer to as Deutero-Zechariah, with the third becoming the Book of Malachi. As a result, most scholars consider the Book of Malachi to be the work of a single author who may or may not have been identified by the title Malachi. The present division of the oracles results in a total of 12 books of minor prophets, a number paralleling the sons of Jacob who became the heads of the 12 Israelite tribes. The Catholic Encyclopedia asserts, "We are no doubt in presence of an abbreviation of the name Mál'akhîyah, that is Messenger of Elohim."

Author

Little is known of the biography of the author of the Book of Malachi, although it has been suggested that he may have been a Levite. The books of Zechariah and Haggai were written during the lifetime of Ezra (see 5:1), which may explain the similarities in style. According to the editors of the 1897 Easton's Bible Dictionary, some scholars believe the name "Malachi" is not a proper noun but rather an abbreviation of "messenger of Yah". This reading could be based on Malachi 3:1, "Behold, I will send my messenger...", if my messenger (מַלְאָכִ֔י mal’āḵî) is taken literally as the name Malachi.

Several scholars consider both Zechariah 9–14 and Malachi to be anonymous works, which were therefore placed at the end of the Book of the Twelve. Wellhausen, Abraham Kuenen, and Wilhelm Gustav Hermann Nowack argue that Malachi 1:1 is a late addition, pointing to Zechariah 9:1 and 12:1. However, other scholars, including the editors of the Catholic Encyclopedia, argue that the grammatical evidence leads us to conclude that Malachi is in fact a name.

Another interpretation of the authorship comes from the Septuagint superscription, ὲν χειρὶ ἀγγήλου αὐτοῦ (en cheiri angēlou autou), which can be read either as "by the hand of his messenger" or as "by the hand of his angel". The "angel" reading found an echo among the ancient Church Fathers and ecclesiastical writers, and even gave rise to the "strangest fancies", especially among the disciples of Origen of Alexandria.

Period

There are very few historical details in the Book of Malachi.
The greatest clue as to its dating may lie in the fact that the Persian-era term for governor (pehâ) is used in 1:8. This points to a post-exilic (that is, after 538 BC) date of composition, both because the term belongs to the Persian period and because Judah had a king before the exile. Since, in the same verse, the temple has been rebuilt, the book must also be later than 515 BC. Malachi was apparently known to the author of Ecclesiasticus early in the 2nd century BC. Because of the development of themes in the book of Malachi, most scholars assign it to a position after Haggai and Zechariah, close to the time when Ezra and Nehemiah came to Jerusalem in 445 BC.

Aim

The Book of Malachi was written to correct the lax religious and social behaviour of the Israelites – particularly the priests – in post-exilic Jerusalem. Although the prophets urged the people of Judah and Israel to see their exile as punishment for failing to uphold their covenant with God, it was not long after they had been restored to the land and to Temple worship that the people's commitment to their God began, once again, to wane. It was in this context that the prophet commonly referred to as Malachi delivered his prophecy.

In 1:2, Malachi has the people of Israel question God's love for them. This introduction to the book illustrates the severity of the situation which Malachi addresses. The graveness of the situation is also indicated by the dialectical style with which Malachi confronts his audience. Malachi proceeds to accuse his audience of failing to respect God as God deserves. One way in which this disrespect is made manifest is through the substandard sacrifices which Malachi claims are being offered by the priests. While God demands animals that are "without blemish" (Leviticus 1:3, NRSV), the priests, who were "to determine whether the animal was acceptable" (Mason 143), were offering blind, lame and sick animals for sacrifice because they thought nobody would notice.

In 2:1, Malachi states that Yahweh Sabaoth is sending a curse on the priests who have not honored him with appropriate animal sacrifices: "Now, watch how I am going to paralyze your arm and throw dung in your face – the dung from your very solemnities – and sweep you away with it. Then you shall learn that it is I who have given you this warning of my intention to abolish my covenant with Levi, says Yahweh Sabaoth."

In 2:10, Malachi addresses the issue of divorce. On this topic, Malachi deals with divorce both as a social problem ("Why then are we faithless to one another ... ?" 2:10) and as a religious problem ("Judah ... has married the daughter of a foreign god" 2:11). In contrast to the book of Ezra, Malachi urges each to remain steadfast to the wife of his youth.

Malachi also criticizes his audience for questioning God's justice. He reminds them that God is just, exhorting them to be faithful as they await that justice. Malachi quickly goes on to point out that the people have not been faithful. In fact, the people are not giving God all that God deserves. Just as the priests have been offering unacceptable sacrifices, so the people have been neglecting to offer their full tithe to God. The result of these shortcomings is that the people come to believe that no good comes out of serving God. Malachi assures the faithful among his audience that in the eschaton, the differences between those who served God faithfully and those who did not will become clear.
The book concludes by calling upon the teachings of Moses and by promising that Elijah will return prior to the Day of Yahweh.

Interpretations

The book of Malachi is divided into three chapters in the Hebrew Bible and the Greek Septuagint, and four chapters in the Latin Vulgate. The fourth chapter in the Vulgate consists of the remainder of the third chapter, starting at verse 3:19.

Christianity

The majority of scholars consider the book to be made up of six distinct oracles. According to this scheme, the book of Malachi consists of a series of disputes between Yahweh and the various groups within the Israelite community. In the course of the book's three or four chapters, Yahweh is vindicated while those who do not adhere to the law of Moses are condemned. Some scholars have suggested that the book, as a whole, is structured along the lines of a judicial trial, a suzerain treaty or a covenant, one of the major themes throughout the Hebrew Scriptures. Implicit in the prophet's condemnation of Israel's religious practices is a call to keep Yahweh's statutes.

The Book of Malachi draws upon various themes found in other books of the Bible. Malachi appeals to the rivalry between Jacob and Esau, and to Yahweh's preference for Jacob, recounted in Genesis 25–28. Malachi reminds his audience that, as descendants of Jacob (Israel), they have been and continue to be favoured by God as God's chosen people. In the second dispute, Malachi draws upon the Levitical Code (e.g. Leviticus 1:3) in condemning the priests for offering unacceptable sacrifices.

In the third dispute (concerning divorce), the author of the Book of Malachi likely intends his argument to be understood on two levels. Malachi appears to be attacking either the practice of divorcing Jewish wives in favour of foreign ones (a practice which Ezra vehemently condemns) or, alternatively, Malachi could be condemning the practice of divorcing foreign wives in favour of Jewish wives (a practice which Ezra promoted). Malachi appears adamant that nationality is not a valid reason to terminate a marriage, "For I hate divorce, says the Lord . . ." (2:16). In many places throughout the Hebrew Scriptures – particularly the Book of Hosea – Israel is figured as Yahweh's wife or bride. Malachi's discussion of divorce may also be understood to conform to this metaphor. Malachi could very well be urging his audience not to break faith with Yahweh (the God of Israel) by adopting new gods or idols. It is quite likely that, since the people of Judah were questioning Yahweh's love and justice (1:2, 2:17), they might be tempted to adopt foreign gods. William LaSor suggests that, because the restoration to the land of Judah had not resulted in anything like the prophesied splendor of the messianic age, the people were becoming quite disillusioned with their religion.

Indeed, the fourth dispute asserts that judgment is coming in the form of a messenger who "is like refiner's fire and like fullers' soap . . ." (3:2). Following this, the prophet provides another example of wrongdoing in the fifth dispute – that is, failing to offer full tithes. In this discussion, Malachi has Yahweh request the people to "Bring the full tithe . . . [and] see if I will not open the windows of heaven for you and pour down on you an overflowing blessing" (3:10). This request offers the opportunity for the people to amend their ways.
It also stresses that keeping the Lord's statutes will not only allow the people to avoid God's wrath, but will also lead to God's blessing. (It is this portion of Malachi which is used as support for the view that tithing is required of Christians.)

In the sixth dispute, the people of Israel illustrate the extent of their disillusionment. Malachi has them say "'It is vain to serve God . . . Now we count the arrogant happy; evildoers not only prosper, but when they put God to the test they escape'" (3:14–15). Once again, Malachi has Yahweh assure the people that the wicked will be punished and the faithful will be rewarded. In the light of what Malachi understands to be an imminent judgment, he exhorts his audience to "Remember the teaching of my servant Moses, the statutes and ordinances that I commanded him at Horeb for all Israel" (4:4; 3:22, MT). Before the Day of the Lord, Malachi declares that Elijah (who "ascended in a whirlwind into heaven . . . [,]" 2 Kings 2:11) will return to earth in order that people might follow in God's ways.

Primarily because of its messianic promise, the Book of Malachi is frequently referred to in the Christian New Testament; Hill (84–88) compares the Book of Malachi with the New Testament texts which refer to it. Although many Christians believe that the messianic prophecies of the Book of Malachi have been fulfilled in the life, ministry, transfiguration, death and resurrection of Jesus of Nazareth, most Jews continue to await the coming of the prophet Elijah who will prepare the way for the Lord.

Bibliography

Hill, Andrew E. Malachi: A New Translation with Introduction and Commentary. The Anchor Bible Volume 25D. Toronto: Doubleday, 1998.
LaSor, William Sanford et al. Old Testament Survey: The Message, Form, and Background of the Old Testament. Grand Rapids: William B. Eerdmans, 1996.
Mason, Rex. The Books of Haggai, Zechariah and Malachi. The Cambridge Bible Commentary on the New English Bible. New York: Cambridge University Press, 1977.
Singer, Isidore & Adolf Guttmacher. "Book of Malachi." JewishEncyclopedia.com. 2002.
Van Hoonacker, A. "Malachias (Malachi)." Catholic Encyclopedia. Transcribed by Thomas J. Bress. 2003.
Buddhist philosophy
Buddhist philosophy refers to the ancient Indian philosophical system of the Buddhist religion. It comprises all the philosophical investigations and systems of inquiry that developed among various schools of Buddhism in ancient India following the parinirvāṇa of Gautama Buddha (c. 5th century BCE) and later spread throughout Asia. The Buddhist path combines both philosophical reasoning and the practice of meditation. The Buddhist traditions present a multitude of Buddhist paths to liberation, and Buddhist thinkers in India and subsequently in East Asia have covered topics as varied as cosmology, ethics, epistemology, logic, metaphysics, ontology, phenomenology, the philosophy of mind, the philosophy of time, and soteriology in their analysis of these paths.

Pre-sectarian Buddhism was based on empirical evidence gained by the sense organs (ayatana), and the Buddha seems to have retained a skeptical distance from certain metaphysical questions, refusing to answer them because they were not conducive to liberation but led instead to further speculation. A recurrent theme in Buddhist philosophy has been the reification of concepts, and the subsequent return to the Buddhist Middle Way.

Particular points of Buddhist philosophy have often been the subject of disputes between different schools of Buddhism, as well as between representative thinkers of Buddhist schools and Hindu or Jaina philosophers. These elaborations and disputes gave rise to various schools in early Buddhism, including the Abhidharma schools and the Sautrāntika, and to the Mahāyāna traditions such as Prajñāpāramitā, Mādhyamaka, Buddha-nature, and Yogācāra.

Historical phases of Buddhist philosophy

Edward Conze splits the development of Indian Buddhist philosophy into three phases:

The phase of the pre-sectarian Buddhist doctrines derived from oral traditions that originated during the life of Gautama Buddha, and are common to all later schools of Buddhism.
The second phase concerns non-Mahayana "scholastic" Buddhism, as evident in the Abhidharma texts beginning in the third century BCE that feature scholastic reworking and schematic classification of material in the sutras.
The third phase concerns Mahayana Buddhism, beginning in the late first century CE. This movement emphasizes the path of a bodhisattva and includes various schools of thought, such as Prajñaparamita, Madhyamaka and Yogacara.

Various elements of these three phases are incorporated and/or further developed in the philosophy and worldview of the various sects of Buddhism that then emerged.

Philosophical orientation

Philosophy in India was aimed mainly at spiritual liberation and had soteriological goals. For the Indian Buddhist philosophers, the teachings of the Buddha were not meant to be taken on faith alone, but to be confirmed by logical analysis (pramana) of the world. The early Buddhist texts mention that a person becomes a follower of the Buddha's teachings after having pondered them over with wisdom, and the gradual training also requires that a disciple "investigate" (upaparikkhati) and "scrutinize" (tuleti) the teachings. The Buddha also expected his disciples to approach him as a teacher in a critical fashion and scrutinize his actions and words, as shown in the Vīmaṃsaka Sutta.

The Buddha and early Buddhism

The Buddha

Scholarly opinion varies as to whether the Buddha himself was engaged in philosophical inquiry. The Buddha (c.
5th century BCE) was a north Indian śramaṇa (wandering ascetic), whose teachings are preserved in the Pali Nikayas and in the Agamas as well as in other surviving fragmentary textual collections (collectively known as the Early Buddhist Texts). Dating these texts is difficult, and there is disagreement on how much of this material goes back to a single religious founder. While the focus of the Buddha's teachings is about attaining the highest good of nirvana, they also contain an analysis of the source of human suffering, the nature of personal identity, and the process of acquiring knowledge about the world.

The Middle Way

The Buddha defined his teaching as "the middle way" (Pali: Majjhimāpaṭipadā). In the Dhammacakkappavattana Sutta, this is used to refer to the fact that his teachings steer a middle course between the extremes of asceticism and bodily denial (as practiced by the Jains and other ascetic groups) and sensual hedonism or indulgence. Many sramanas of the Buddha's time placed much emphasis on a denial of the body, using practices such as fasting, to liberate the mind from the body. The Buddha, however, realized that the mind was embodied and causally dependent on the body, and therefore that a malnourished body did not allow the mind to be trained and developed. Thus, Buddhism's main concern is not with luxury or poverty, but instead with the human response to circumstances.

Basic teachings

Certain basic teachings appear in many places throughout these early texts, so older studies by various scholars conclude that the Buddha must at least have taught some of these key teachings:

The Middle Way
The Four Noble Truths
The Noble Eightfold Path
The four dhyānas (meditations)
The three marks of existence
The five aggregates of clinging
Dependent origination
Karma and rebirth
Nirvana

According to N. Ross Reat, all of these doctrines are shared by the Theravada Pali texts and the Mahasamghika school's Śālistamba Sūtra. A recent study by Bhikkhu Analayo concludes that the Theravada Majjhima Nikaya and Sarvastivada Madhyama Agama contain mostly the same major doctrines. Richard Salomon, in his study of the Gandharan texts (which are the earliest manuscripts containing early discourses), has confirmed that their teachings are "consistent with non-Mahayana Buddhism, which survives today in the Theravada school of Sri Lanka and Southeast Asia, but which in ancient times was represented by eighteen separate schools."

However, some scholars such as Schmithausen, Vetter, and Bronkhorst argue that critical analysis reveals discrepancies among these various doctrines. They present alternative possibilities for what was taught in early Buddhism and question the authenticity of certain teachings and doctrines. For example, some scholars think that karma was not central to the teaching of the historical Buddha, while others disagree with this position. Likewise, there is scholarly disagreement on whether insight was seen as liberating in early Buddhism or whether it was a later addition to the practice of the four dhyāna. According to Vetter and Bronkhorst, dhyāna constituted the original "liberating practice", while discriminating insight into transiency as a separate path to liberation was a later development.
Scholars such as Bronkhorst and Carol Anderson also think that the four noble truths may not have been formulated in earliest Buddhism, but, as Anderson writes, "emerged as a central teaching in a slightly later period that still preceded the final redactions of the various Buddhist canons."

According to some scholars, the philosophical outlook of earliest Buddhism was primarily negative, in the sense that it focused on what doctrines to reject more than on what doctrines to accept. Only knowledge that is useful in attaining liberation is valued. According to this theory, the cycle of philosophical upheavals that in part drove the diversification of Buddhism into its many schools and sects only began once Buddhists began attempting to make explicit the implicit philosophy of the Buddha and the early texts.

The noble truths and causation

The four noble truths or "truths of the noble one" are a central feature of the teachings and are put forth in the Dhammacakkappavattana Sutta. The first truth of dukkha, often translated as suffering, is the inherent unsatisfactoriness of life. This unpleasantness is said to be not just physical pain, but also a kind of existential unease caused by the inevitable facts of our mortality and ultimately by the impermanence of all phenomena. It also arises because of contact with unpleasant events, and due to not getting what one desires. The second truth is that this unease arises out of conditions, mainly "craving" (tanha) and ignorance (avidya). The third truth is that if one lets go of craving and removes ignorance through knowledge, dukkha ceases (nirodha). The fourth is the eightfold path: eight practices that end suffering. They are: right view, right intention, right speech, right action, right livelihood, right effort, right mindfulness and right samadhi (mental unification, meditation). The goal taught by the Buddha, nirvana, literally means "extinguishing" and signified "the complete extinguishing of greed, hatred, and delusion (i.e. ignorance)", the forces which power samsara. Nirvana also means that after an enlightened being's death, there is no further rebirth.

In early Buddhism, the concept of dependent origination was most likely limited to processes of mental conditioning and not to all physical phenomena. The Buddha understood the world in procedural terms, not in terms of things or substances. His theory posits a flux of events arising under certain conditions which are interconnected and dependent, such that the processes in question at no time are considered to be static or independent. Craving, for example, is always dependent on, and caused by, sensations. Sensations are always dependent on contact with our surroundings. The Buddha's causal theory is simply descriptive: "This existing, that exists; this arising, that arises; this not existing, that does not exist; this ceasing, that ceases." This understanding of causation as "impersonal lawlike causal ordering" is important because it shows how the processes that give rise to suffering work, and also how they can be reversed.

The removal of suffering, then, requires a deep understanding of the nature of reality (prajña). While philosophical analysis of arguments and concepts is clearly necessary to develop this understanding, it is not enough to remove our unskillful mental habits and deeply ingrained prejudices, which require meditation, paired with understanding.
According to the Buddha of the early texts, we need to train the mind in meditation to be able to truly see the nature of reality, which is said to have the marks of suffering, impermanence and not-self. Understanding and meditation are said to work together to "clearly see" (vipassana) the nature of human experience, and this is said to lead to liberation.

Anatta

The Buddha argued that compounded entities lacked essence; correspondingly, the self is without essence. This means there is no part of a person which is unchanging and essential for continuity, and it means that there is no individual "part of the person that accounts for the identity of that person over time". This is in opposition to the Upanishadic concept of an unchanging ultimate self (Atman) and any view of an eternal soul. The Buddha held that attachment to the appearance of a permanent self in this world of change is the cause of suffering, and the main obstacle to liberation.

The most widely used argument that the Buddha employed against the idea of an unchanging ego is an empiricist one, based on the observation of the five aggregates that make up a person and the fact that these are always changing. This argument can be put in this way:

1. All psycho-physical processes (skandhas) are impermanent.
2. If there were a self, it would be permanent.
IP [There is no more to the person than the five skandhas.]
∴ There is no self.

This argument requires the implied premise (IP) that the five aggregates are an exhaustive account of what makes up a person, or else the self could exist outside of these aggregates. This premise is affirmed in other suttas, such as SN 22.47, which states: "whatever ascetics and brahmins regard various kinds of things as self, all regard the five grasping aggregates, or one of them."

This argument is famously expounded in the Anattalakkhana Sutta. According to this text, the apparently fixed self is merely the result of identification with the temporary aggregates, the changing processes making up an individual human being. In this view, a "person" is only a convenient nominal designation on a certain grouping of processes and characteristics, and an "individual" is a conceptual construction overlaid upon a stream of experiences, just as a chariot is merely a conventional designation for the parts of a chariot and how they are put together. The foundation of this argument is empiricist, for it is based on the fact that all we observe is subject to change, especially everything observed when looking inwardly in meditation.

Another argument for "non-self", the "argument from lack of control", is based on the fact that we often seek to change certain parts of ourselves, that the "executive function" of the mind is that which finds certain things unsatisfactory and attempts to alter them. Furthermore, it is also based on the Indian "anti-reflexivity principle", which states that an entity cannot operate on or control itself (a knife can cut other things but not itself, a finger can point at other things but not at itself, etc.). This means, then, that the self could never desire to change itself and could not do so (another reason for this is that in most Indian traditions besides Buddhism, the true self or Atman is perfectly blissful and does not suffer). The Buddha uses this idea to attack the concept of self. This argument could be structured thus:

1. If the self existed, it would be the part of the person that performs the executive function, the "controller."
2. The self could never desire that it be changed (anti-reflexivity principle).
3. Each of the five kinds of psycho-physical elements is such that one can desire that it be changed.
IP [There is no more to the person than the five skandhas.]
∴ There is no self.

This argument then denies that there is one permanent "controller" in the person. Instead, it views the person as a set of constantly changing processes which include volitional events seeking change and an awareness of that desire for change. According to Mark Siderits:

"What the Buddhist has in mind is that on one occasion one part of the person might perform the executive function, on another occasion another part might do so. This would make it possible for every part to be subject to control without there being any part that always fills the role of the controller (and so is the self). On some occasions, a given part might fall on the controller side, while on other occasions it might fall on the side of the controlled. This would explain how it's possible for us to seek to change any of the skandhas while there is nothing more to us than just those skandhas."

As noted by K. R. Norman and Richard Gombrich, the Buddha extended his anatta critique to the Brahmanical belief expounded in the Brihadaranyaka Upanishad that the Self (Atman) was indeed the whole world, or Brahman. This is shown by the Alagaddupama Sutta, where the Buddha argues that an individual cannot experience the suffering of the entire world. He used the example of someone carrying off and burning grass and sticks from the Jeta grove and how a monk would not sense or consider themselves harmed by that action. In this example, the Buddha is arguing that we do not have direct experience of the entire world, and hence the Self cannot be the whole world. In this sutta (as well as in the Soattā Sutta) the Buddha outlines six wrong views about Self:

"There are six wrong views: An unwise, untrained person may think of the body, 'This is mine, this is me, this is my self'; he may think that of feelings; of perceptions; of volitions; or of what has been seen, heard, thought, cognized, reached, sought or considered by the mind. The sixth is to identify the world and self, to believe: 'At death, I shall become permanent, eternal, unchanging, and so remain forever the same; and that is mine, that is me, that is my self.' A wise and well-trained person sees that all these positions are wrong, and so he is not worried about something that does not exist."

Furthermore, the Buddha argues that the world can be observed to be a cause of suffering (Brahman was held to be ultimately blissful) and that since we cannot control the world as we wish, the world cannot be the Self. The idea that "this cosmos is the self" is one of the views rejected by the Buddha, along with the related monistic theory that held that "everything is a Oneness" (SN 12.48 Lokayatika Sutta). The Buddha also held that understanding and seeing the truth of not-self led to un-attachment, and hence to the cessation of suffering, while ignorance about the true nature of personality led to further suffering.

Epistemology

All schools of Indian philosophy recognize various sets of valid justifications for knowledge, or pramana, and many see the Vedas as providing access to truth. The Buddha denied the authority of the Vedas, though, like his contemporaries, he affirmed the soteriological importance of having a proper understanding of reality (right view).
However, this understanding was not conceived primarily as metaphysical and cosmological knowledge, but as knowledge of the arising and cessation of suffering in human experience. Therefore, the Buddha's epistemic project is different from that of modern philosophy; it is primarily a solution to the fundamental human spiritual/existential problem.

The Buddha's epistemology has been compared to empiricism, in the sense that it was based on the experience of the world through the senses. The Buddha taught that empirical observation through the six sense fields (ayatanas) was the proper way of verifying any knowledge claims. Some suttas go further, stating that "the All", or everything that exists (sabbam), is just these six sense spheres (SN 35.23, Sabba Sutta) and that anyone who attempts to describe another "All" will be unable to do so because "it lies beyond range". This sutta seems to indicate that, for the Buddha, things in themselves, or noumena, are beyond our epistemological reach (avisaya).

Furthermore, in the Kalama Sutta the Buddha tells a group of confused villagers that the only proper reason for one's beliefs is verification in one's own personal experience (and the experience of the wise), and denies any verification which stems from personal authority, sacred tradition (anussava) or any kind of rationalism which constructs metaphysical theories (takka). In the Tevijja Sutta (DN 13), the Buddha rejects the personal authority of Brahmins because none of them can prove they have had personal experience of Brahman. The Buddha also stressed that experience is the only criterion for verification of the truth, in this passage from the Majjhima Nikaya (MN.I.265):

"Monks, do you only speak that which is known by yourselves, seen by yourselves, found by yourselves?" "Yes, we do, sir." "Good, monks. That is how you have been instructed by me in this timeless doctrine which can be realized and verified, that leads to the goal and can be understood by those who are intelligent."

Furthermore, the Buddha's standard for personal verification was a pragmatic and salvific one: for the Buddha, a belief counts as truth only if it leads to successful Buddhist practice (and hence, to the destruction of craving). In the "Discourse to Prince Abhaya" (MN.I.392–4) the Buddha states this pragmatic maxim by saying that a belief should only be accepted if it leads to wholesome consequences. This tendency of the Buddha to see what is true as what was useful or "what works" has been called by scholars such as Mrs Rhys Davids and Vallée-Poussin a form of pragmatism. However, K. N. Jayatilleke argues that the Buddha's epistemology can also be taken to be a form of correspondence theory (as per the Apannaka Sutta) with elements of coherentism, and that for the Buddha, it is causally impossible for something which is false to lead to the cessation of suffering and evil.

The Buddha discouraged his followers from indulging in intellectual disputation for its own sake, which is fruitless and distracts one from the goal of awakening. Only philosophy and discussion which has pragmatic value for liberation from suffering is seen as important. According to the scriptures, during his lifetime the Buddha remained silent when asked several metaphysical questions which he regarded as the basis for "unwise reflection".
These "unanswered questions" (avyākata) concerned issues such as whether the universe is eternal or non-eternal (or whether it is finite or infinite), the unity or separation of the body and the self, the complete nonexistence of a person after nirvana and death, and others. The Buddha stated that thinking about these imponderable (acinteyya) issues led to "a thicket of views, a wilderness of views, a contortion of views, a writhing of views, a fetter of views" (Aggi-Vacchagotta Sutta).

One explanation for this pragmatic suspension of judgment, or epistemic epoché, is that such questions contribute nothing to the practical methods of realizing awakening, and bring about the danger of substituting a conceptual understanding of the doctrine, or religious faith, for the experience of liberation. According to the Buddha, the Dharma is not an ultimate end in itself or an explanation of all metaphysical reality, but a pragmatic set of teachings. The Buddha used two parables to clarify this point, the parable of the raft and the parable of the poisoned arrow. The Dharma is like a raft in the sense that it is only a pragmatic tool for attaining nirvana ("for the purpose of crossing over, not for the purpose of holding onto", MN 22); once one has done this, one can discard the raft. It is also like medicine, in that the particulars of how one was injured by a poisoned arrow (i.e. metaphysics, etc.) do not matter in the act of removing and curing the arrow wound itself (removing suffering). In this sense, the Buddha was often called "the great physician" because his goal was to cure the human condition of suffering first and foremost, not to speculate about metaphysics.

Having said this, it is still clear that resisting (even refuting) a false or slanted doctrine can be useful to extricate the interlocutor, or oneself, from error; hence, to advance in the way of liberation. Witness the Buddha's confutation of several doctrines by Nigantha Nataputta and other purported sages who sometimes had large followings (e.g., Kula Sutta, Sankha Sutta, Brahmana Sutta). This shows that a virtuous and appropriate use of dialectics can take place. By implication, reasoning and argument should not be disparaged by Buddhists. After the Buddha's death, some Buddhists such as Dharmakirti went on to use the sayings of the Buddha as sound evidence equal to perception and inference.

Transcendence

Another possible reason why the Buddha refused to engage in metaphysics is that he saw ultimate reality and nirvana as devoid of sensory mediation and conception, and therefore language itself is a priori inadequate to explain it. Thus, the Buddha's silence does not indicate misology or disdain for philosophy. Rather, it indicates that he viewed the answers to these questions as not understandable by the unenlightened. Dependent arising provides a framework for analysis of reality that is not based on metaphysical assumptions regarding existence or non-existence, but instead on direct cognition of phenomena as they are presented to the mind in meditation. The Buddha of the earliest Buddhist texts describes Dharma (in the sense of "truth") as "beyond reasoning" or "transcending logic", in the sense that reasoning is a subjectively introduced aspect of the way unenlightened humans perceive things, and the conceptual framework which underpins their cognitive process, rather than a feature of things as they really are.
Going "beyond reasoning" means in this context penetrating the nature of reasoning from the inside, and removing the causes for experiencing any future stress as a result of it, rather than functioning outside the system as a whole. Meta-ethics The Buddha's ethics are based on the soteriological need to eliminate suffering and on the premise of the law of karma. Buddhist ethics have been termed eudaimonic (with their goal being well-being) and also compared to virtue ethics (this approach began with Damien Keown). Keown writes that Buddhist Nirvana is analogous to the Aristotelian Eudaimonia, and that Buddhist moral acts and virtues derive their value from how they lead us to or act as an aspect of the nirvanic life. The Buddha outlined five precepts (no killing, stealing, sexual misconduct, lying, or drinking alcohol) which were to be followed by his disciples, lay and monastic. There are various reasons the Buddha gave as to why someone should be ethical. First, the universe is structured in such a way that if someone intentionally commits a misdeed, a bad karmic fruit will be the result. Hence, from a pragmatic point of view, it is best to abstain from these negative actions which bring forth negative results. However, the important word here is intentionally: for the Buddha, karma is nothing else but intention/volition, and hence unintentionally harming someone does not create bad karmic results. Unlike the Jains who believed that karma was a quasi-physical element, for the Buddha karma was a volitional mental event, what Richard Gombrich calls 'an ethnicized consciousness'. This idea leads into the second moral justification of the Buddha: intentionally performing negative actions reinforces and propagates mental defilements which keep persons bound to the cycle of rebirth and interfere with the process of liberation, and hence intentionally performing good karmic actions is participating in mental purification which leads to nirvana, the highest happiness. This perspective sees immoral acts as unskillful (akusala) in our quest for happiness, and hence it is pragmatic to do good. The third meta-ethical consideration takes the view of not-self and our natural desire to end our suffering to its logical conclusion. Since there is no self, there is no reason to prefer our own welfare over that of others because there is no ultimate grounding for the differentiation of "my" suffering and someone else's. Instead, an enlightened person would just work to end suffering tout court, without thinking of the conventional concept of persons. According to this argument, anyone who is selfish does so out of ignorance of the true nature of personal identity and irrationality. Buddhist schools and Abhidharma The main Indian Buddhist philosophical schools practiced a form of analysis termed Abhidharma which sought to systematize the teachings of the early Buddhist discourses (sutras). Abhidharma analysis broke down human experience into momentary phenomenal events or occurrences called "dharmas". Dharmas are impermanent and dependent on other causal factors, they arise and pass as part of a web of other interconnected dharmas, and are never found alone. The Abhidharma schools held that the teachings of the Buddha in the sutras were merely conventional, while the Abhidharma analysis was ultimate truth (paramattha sacca), the way things really are when seen by an enlightened being. The Abhidharmic project has been likened as a form of phenomenology or process philosophy. 
Abhidharma philosophers not only outlined what they believed to be an exhaustive listing of dharmas, or phenomenal events, but also the causal relations between them. In the Abhidharmic analysis, the only thing which is ultimately real is the interplay of dharmas in a causal stream; everything else is merely conceptual (paññatti) and nominal. This view has been termed "mereological reductionism" by Mark Siderits because it holds that only impartite entities are real, not wholes. Abhidharmikas such as Vasubandhu argued that conventional things (tables, persons, etc.) "disappear under analysis" and that this analysis reveals only a causal stream of phenomenal events and their relations. The mainstream Abhidharmikas defended this view against their main Hindu rivals, the Nyaya school, who were substance theorists and posited the existence of universals. Some Abhidharmikas, such as the Prajñaptivāda, were also strict nominalists, and held that all things – even dharmas – were merely conceptual.

Competing Abhidharma schools

An important Abhidhamma work from the Theravāda school is the Kathāvatthu ("Points of controversy"), attributed to the Indian scholar-monk Moggaliputta-Tissa (d. 247 BCE). This text is important because it attempts to refute several philosophical views which had developed after the death of the Buddha, especially the theory that "all exists" (sarvāstivāda), the theory of momentariness (khāṇavāda) and the personalist view (pudgalavada). These were the major philosophical theories that divided the Buddhist Abhidharma schools in India. After being brought to Sri Lanka in the first century BCE, the Theravada Pali-language Abhidhamma tradition was heavily influenced by the works of Buddhaghosa (4th–5th century AD), the most important philosopher and commentator of the Theravada school. The Theravada philosophical enterprise was mostly carried out in the genre of Atthakatha, commentaries (as well as sub-commentaries) on the Pali Abhidhamma, but also included short summaries and compendiums.

The Sarvāstivāda was one of the major Buddhist philosophical schools in India, and they were so named because of their belief that dharmas exist in all three times: past, present and future. Though the Sarvāstivāda Abhidharma system began as a mere categorization of mental events, their philosophers and exegetes such as Dharmatrata and Katyāyāniputra (author of the Jñānaprasthāna, on which the Mahavibhasa, a central text of the school, is based) eventually refined this system into a robust realism, which also included a type of essentialism. This realism was based on a quality of dharmas called svabhava, or "intrinsic existence". Svabhava is a sort of essence, though it is not a completely independent essence, since all dharmas were said to be causally dependent. The Sarvāstivāda system extended this realism across time, effectively positing a type of eternalism with regard to time; hence, the name of their school means "the view that everything exists". Other Buddhist schools, such as the Prajñaptivadins ("nominalists"), the Purvasailas and the Vainasikas, refused to accept the concept of svabhava. The main topic of the Tattvasiddhi Śāstra by Harivarman (3rd–4th century AD), an influential Abhidharma text, is the emptiness (shunyata) of dharmas.

The Theravādins and other schools such as the Sautrāntikas attacked the realism of the Sarvāstivādins, especially their theory of time.
A major figure in this argument was the scholar Vasubandhu, an ex-Sarvāstivādin, who critiqued the theory that "all exists" and argued for philosophical presentism in his comprehensive treatise, the Abhidharmakosa. This work is the major Abhidharma text used in Tibetan and East Asian Buddhism today. The Theravāda also holds that dharmas only exist in the present, and are thus also presentists. The Theravādin presentation of Abhidharma is also not as concerned with ontology as the Sarvāstivādin view, but is more of a phenomenology, and hence the concept of svabhava for the Theravādins is more of a certain characteristic or dependent feature of a dharma than any sort of essence or metaphysical grounding. According to Y. Karunadasa:

In the Pali tradition it is only for the sake of definition and description that each dhamma is postulated as if it were a separate entity; but in reality, it is by no means a solitary phenomenon having an existence of its own... If this Abhidhammic view of existence, as seen from its doctrine of dhammas, cannot be interpreted as a radical pluralism, neither can it be interpreted as an out-and-out monism. For what are called dhammas – the component factors of the universe, both within us and outside us – are not fractions of an absolute unity but a multiplicity of co-ordinate factors. They are not reducible to, nor do they emerge from, a single reality, the fundamental postulate of monistic metaphysics. If they are to be interpreted as phenomena, this should be done with the proviso that they are phenomena with no corresponding noumena, no hidden underlying ground. For they are not manifestations of some mysterious metaphysical substratum, but processes taking place due to the interplay of a multitude of conditions.

Karunadasa also describes the Theravada system as a realist, rather than phenomenalist, system:

What emerges from this Abhidhammic doctrine of dhammas is a critical realism, one which (unlike idealism) recognises the distinctness of the world from the experiencing subject yet also distinguishes between those types of entities that truly exist independently of the cognitive act and those that owe their being to the act of cognition itself. What emerges from the dhamma theory is best described as dhamma realism, for, as we have seen, it recognizes only the ultimate reality of the dhammas. ...the dhammas are ultimate existents with no possibility of further reduction. Although the dhamma theory is an Abhidhammic innovation, the antecedent trends that led to its formulation and its basic ingredients can be traced to the early Buddhist scriptures which seek to analyse empiric individuality and its relation to the external world.

An important theory held by some Sarvāstivādins, Theravādins and Sautrāntikas was the theory of "momentariness" (Skt. kṣāṇavāda, Pali khāṇavāda). This theory held that dhammas only last for a minute moment (ksana) after they arise. The Sarvāstivādins saw these "moments" in an atomistic way, as the smallest length of time possible (they also developed a material atomism). Reconciling this theory with their eternalism regarding time was a major philosophical project of the Sarvāstivāda. The Theravādins initially rejected this theory, as evidenced by the Khaṇikakathā of the Kathavatthu, which attempts to refute the doctrine that "all phenomena (dhamma) are as momentary as a single mental entity."
However, momentariness with regard to mental dhammas (but not physical or rūpa dhammas) was later adopted by the Sri Lankan Theravādins, and it is possible that it was first introduced there by the scholar Buddhaghosa. All Abhidharma schools also developed complex theories of causation and conditionality to explain how dharmas interacted with each other.

Another major philosophical project of the Abhidharma schools was the explanation of perception. Some schools, such as the Sarvastivadins, explained perception as a type of phenomenalist realism, while others, such as the Sautrantikas, preferred representationalism and held that we only perceive objects indirectly. The major argument used for this view by the Sautrāntikas was the "time-lag argument." According to Mark Siderits: "The basic idea behind the argument is that since there is always a tiny gap between when the sense comes in contact with the external object and when there is sensory awareness, what we are aware of can't be the external object that the senses were in contact with, since it no longer exists." This is related to the theory of extreme momentariness.

One major philosophical view which was rejected by all the schools mentioned above was the view held by the Pudgalavadin or "personalist" schools. They seem to have held that there was a sort of "personhood" in some ultimately real sense which was not reducible to the five aggregates. This controversial claim was in contrast to the other Buddhists of the time, who held that a personality was a mere conceptual construction (prajñapti) and only conventionally real.

Indian Mahāyāna philosophy

From about the 1st century BCE, a new textual tradition began to arise in Indian Buddhist thought called Mahāyāna (Great Vehicle), which would slowly come to dominate Indian Buddhist philosophy. Buddhist philosophy thrived in large monastery-university complexes such as Nalanda and Vikramasila, which became centres of learning in North India. Mahāyāna philosophers continued the philosophical projects of Abhidharma while at the same time critiquing them and introducing new concepts and ideas. Since the Mahāyāna held to the pragmatic concept of truth, which states that doctrines are regarded as conditionally "true" in the sense of being spiritually beneficial, the new theories and practices were seen as "skillful means" (upaya). The Mahayana also promoted the bodhisattva ideal, which included an attitude of compassion for all sentient beings. The bodhisattva is someone who chooses to remain in samsara (the cycle of birth and death) to benefit all other beings who are suffering.

Major Mahayana philosophical schools and traditions include the Prajnaparamita, Madhyamaka, Tathagatagarbha, the epistemological school of Dignaga, Yogācāra, Huayan, Tiantai and the Chan/Zen schools.

Prajñāpāramitā and Madhyamaka

The earliest Prajñāpāramitā sutras ("perfection of insight" sutras) (circa 1st century BCE) emphasize the shunyata (emptiness) of phenomena and dharmas. The Prajñāpāramitā is said to be true knowledge of the nature of ultimate reality, which is illusory and empty of essence. This emphasis runs through texts such as the Diamond Sutra, and the Heart Sutra famously affirms the shunyata of phenomena: "Oh, Sariputra, form does not differ from shunyata, and shunyata does not differ from form. Form is shunyata and shunyata is form; the same is true for feelings, perceptions, volitions and consciousness".
The Prajñāpāramitā teachings are associated with the work of the Buddhist philosopher Nāgārjuna (c. 150 – c. 250 CE) and the Madhyamaka (Middle Way) school. Nāgārjuna was one of the most influential Indian Buddhist thinkers; he gave the classical arguments for the empty nature of phenomena and attacked the Sarvāstivāda and Pudgalavada schools' essentialism in his magnum opus, The Fundamental Verses on the Middle Way (Mūlamadhyamakakārikā). In the Mūlamadhyamakakārikā, Nagarjuna relies on reductio ad absurdum arguments to refute various theories which assume svabhava (an inherent essence or "own being"). In this work, he covers topics such as causation, motion, and the sense faculties. Nagarjuna asserted a direct connection between, even identity of, dependent origination, non-self (anatta), and emptiness (śūnyatā). He pointed out that implicit in the early Buddhist concept of dependent origination is the absence of any substantial being (anatta) underlying the participants in origination, so that they have no independent existence, a state identified as śūnyatā (i.e., emptiness of a nature or essence, svabhāva sunyam). Later philosophers of the Madhyamaka school built upon Nagarjuna's analysis and defended Madhyamaka against their opponents. These included Āryadeva (3rd century CE), Nāgārjuna's pupil; Candrakīrti (c. 600 – c. 650), who wrote an important commentary on the Mūlamadhyamakakārikā; and Shantideva (8th century). Buddhapālita (470–550) has been understood as the originator of the 'prāsaṅgika' approach, which is based on critiquing essentialism only through reductio ad absurdum arguments. He was criticized by Bhāvaviveka (c. 500 – c. 578), who argued for the use of syllogisms "to set one's own doctrinal stance". These two approaches were later termed the Prāsaṅgika and the Svātantrika approaches to Madhyamaka by Tibetan philosophers and commentators. Influenced by the work of Dignaga, Bhāvaviveka's Madhyamika philosophy makes use of Buddhist epistemology. Candrakīrti, on the other hand, critiqued Bhāvaviveka's adoption of the epistemological (pramana) tradition on the grounds that it contained subtle essentialism. He cited Nagarjuna's famous statement in the Vigrahavyavartani, "I have no thesis", in support of his rejection of positive epistemic Madhyamaka statements. Candrakīrti held that a true Madhyamika could only use "consequence" (prasanga), in which one points out the inconsistencies of the opponent's position without asserting an "autonomous inference" (svatantra), for no such inference can be ultimately true from the point of view of Madhyamaka. In China, the Madhyamaka school (known as Sānlùn) was founded by Kumārajīva (344–413 CE), who translated the works of Nagarjuna into Chinese. Other Chinese Mādhyamikas include Kumārajīva's pupil Sengzhao; Jizang (549–623), who wrote over 50 works on Madhyamaka; and Hyegwan, a Korean monk who brought Madhyamaka teachings to Japan. Yogācāra The Yogācāra school (Yoga practice) was a Buddhist philosophical tradition which arose between the 2nd and 4th centuries CE and is associated with the philosophers Asanga and Vasubandhu and with various sutras such as the Sandhinirmocana Sutra and the Lankavatara Sutra. The central feature of Yogācāra thought is the concept of vijñapti-mātra, often translated as "impressions only" or "appearance only", and this has been interpreted as a form of idealism or as a form of phenomenology. Other names for the Yogacara school are 'Vijñanavada' (the doctrine of consciousness) and 'Cittamatra' (mind-only). 
Yogacara thinkers like Vasubandhu argued against the existence of external objects by pointing out that we only ever have access to our own mental impressions, and hence our inference of the existence of external objects is based on faulty logic. Vasubandhu's Vijnaptimatratasiddhi, or "The Proof that There Are Only Impressions" (20 verses), begins thus: "I. This [world] is nothing but impressions, since it manifests itself as an unreal object, Just like the case of those with cataracts seeing unreal hairs in the moon and the like." According to Vasubandhu, then, all our experiences are like seeing hairs on the moon when we have cataracts; that is, we project our mental images into something "out there" when there are no such things. Vasubandhu then goes on to use the dream argument to argue that mental impressions do not require external objects to (1) seem to be spatio-temporally located, (2) seem to have an inter-subjective quality, and (3) seem to operate by causal laws. The fact that purely mental events can have causal efficacy and be intersubjective is proved by the event of a wet dream and by the mass or shared hallucinations created by the karma of certain types of beings. After having argued that impressions-only is a theory that can explain our everyday experience, Vasubandhu then appeals to parsimony: since we do not need the concept of external objects to explain reality, we can do away with those superfluous concepts altogether, as they are most likely just superimposed on our concepts of reality by the mind. Inter-subjective reality for Vasubandhu is then the causal interaction between various mental streams and their karma, and does not include any external physical objects. The soteriological importance of this theory is that, by removing the concept of an external world, it also weakens the 'internal' sense of self as an observer which is supposed to be separate from the external world. To dissolve the dualism of inner and outer is also to dissolve the sense of self and other. The later Yogacara commentator Sthiramati explains this thus: "There is a grasper if there is something to be grasped, but not in the absence of what is to be grasped. Where there is nothing to be grasped, the absence of a grasper also follows, there is not just the absence of the thing to be grasped. Thus there arises the extra-mundane non-conceptual cognition that is alike without object and without cognizer." Vasubandhu also attacked the realist theories of Buddhist atomism and the Abhidharma theory of svabhava. He argued that atoms as conceived by the atomists (indivisible entities) would not be able to come together to form larger aggregate entities, and hence that they were illogical concepts. Later Yogacara thinkers include Dharmapala of Nalanda, Sthiramati, Chandragomin (who debated Candrakirti), and Śīlabhadra. Yogacarins such as Paramartha and Guṇabhadra brought the school to China and translated Yogacara works there, where it is known as Wéishí-zōng or Fǎxiàng-zōng. An important contribution to East Asian Yogācāra is Xuanzang's Cheng Weishi Lun, or "Discourse on the Establishment of Consciousness Only". Yogācāra-Mādhyamika synthesis Jñānagarbha (8th century) and his student Śāntarakṣita (725–788) brought together Yogacara, Madhyamaka and the Dignaga school of epistemology into a philosophical synthesis known as the Yogācāra-Svatantrika-Mādhyamika. 
Śāntarakṣita was also instrumental in the introduction of Buddhism and the Sarvastivadin monastic ordination lineage to Tibet, which was conducted at Samye. Śāntarakṣita's disciples included Haribhadra and Kamalaśīla. This philosophical tradition is influential in Tibetan Buddhist thought. Tathāgatagarbha literature The tathāgatagarbha sutras, in a departure from mainstream Buddhist language, insist that the potential for awakening is inherent to every sentient being. They marked a shift from a largely apophatic (negative) philosophical trend within Buddhism to a decidedly more cataphatic (positive) modus. The main topic of this genre of literature is the tathāgata-garbha, which can mean the womb or embryo of a Tathāgata (i.e. a Buddha). Another similar term used for this idea is buddhadhātu (source of the Buddhas). Prior to the period of these scriptures, Mahāyāna metaphysics had been dominated by teachings on emptiness in the form of Madhyamaka philosophy. The language used by this approach is primarily negative, and the tathāgatagarbha genre of sutras can be seen as an attempt to state orthodox Buddhist teachings of dependent origination using positive language instead, to prevent people from being turned away from Buddhism by a false impression of nihilism. In these sutras, the perfection of the wisdom of not-self is stated to be the true self; the ultimate goal of the path is then characterized using a range of positive language that had been used previously in Indian philosophy by essentialist philosophers, but which was now transmuted into a new Buddhist vocabulary to describe a being who has successfully completed the Buddhist path. The word "self" (atman) is used in a way idiosyncratic to these sutras; the "true self" is described as the perfection of the wisdom of not-self in the Buddha-Nature Treatise, for example. Language that had previously been used by essentialist non-Buddhist philosophers was now adopted, with new definitions, by Buddhists to promote orthodox teachings. The tathāgatagarbha does not, according to some scholars, represent a substantial self; rather, it is a positive language expression of emptiness and represents the potentiality to realize Buddhahood through Buddhist practices. In this interpretation, the intention of the teaching of tathāgatagarbha is soteriological rather than theoretical. The tathāgatagarbha, the Theravāda doctrine of bhavaṅga, and the Yogācāra store consciousness were all identified at some point with the luminous mind of the Nikāyas. In the Mahayana Mahaparinirvana Sutra, the Buddha insists that while pondering upon Dharma is vital, one must then relinquish fixation on words and letters, as these are utterly divorced from liberation and the Buddha-nature. The Dignāga-Dharmakīrti tradition Dignāga (c. 480–540) and Dharmakīrti (c. 6th–7th century) were Buddhist philosophers who developed a system of epistemology (pramana) and logic in their debates with Brahminical philosophers in order to defend Buddhist doctrine. This tradition is called "those who follow reasoning" (Tibetan: rigs pa rjes su 'brang ba); in modern literature, it is sometimes known by the Sanskrit "pramāṇavāda", or "the Epistemological School". They were associated with the Yogacara and Sautrantika schools, and defended theories held by both. Dignaga's influence was profound and led to an "epistemological turn" among Buddhist and also Sanskrit-language philosophers in India after his death. 
In the centuries following Dignaga's work, Sanskrit philosophers became much more focused on defending all of their propositions with fully developed theories of knowledge. The "School of Dignāga" includes later philosophers and commentators like Santabhadra, Dharmottara (8th century), Jñanasrimitra (975–1025), Ratnakīrti (11th century) and Śaṅkaranandana (fl. c. 9th or 10th century). The epistemology they developed defends the view that there are only two 'instruments of knowledge' or 'valid cognitions' (pramana): "perception" (pratyaksa) and "inference" (anumāṇa). Perception is a non-conceptual awareness of particulars which is bound by causality, while inference is reasoned, linguistic and conceptual. These Buddhist philosophers argued in favor of the theory of momentariness, the Yogacara "awareness only" view, the reality of particulars (svalakṣaṇa), atomism, nominalism and the self-reflexive nature of consciousness (svasaṃvedana). They attacked Hindu theories of God (Isvara), universals, the authority of the Vedas, and the existence of a permanent soul (atman). Vajrayāna Buddhism Vajrayāna (also Mantrayāna, Secret Mantra, Tantrayāna and Esoteric Buddhism) is a Mahayana Buddhist tradition associated with a group of texts known as the Buddhist Tantras, which had developed into a major force in India by the eighth century. By this time Indian Tantric scholars were developing philosophical defenses, hermeneutics and explanations of the Buddhist tantric systems, especially through commentaries on key tantras such as the Guhyasamāja Tantra and the Guhyagarbha Tantra. While the view of the Vajrayāna was based on Madhyamaka, Yogacara and Buddha-nature theories, it saw itself as being a faster vehicle to liberation containing many skillful methods (upaya) of tantric ritual. The need for an explication and defense of the Tantras arose out of the unusual nature of the rituals associated with them, which included the use of secret mantras, alcohol, sexual yoga, complex visualizations of mandalas filled with wrathful deities and other practices and injunctions which were discordant with, or at least novel in comparison to, traditional Buddhist practice. The Guhyasamāja Tantra, for example, states: "you should kill living beings, speak lying words, take things that are not given and have sex with many women". Other features of tantra included a focus on the physical body as the means to liberation and a reaffirmation of feminine elements, feminine deities and sexuality. The defense of these practices is based on the theory of transformation, which states that negative mental factors and physical actions can be cultivated and transformed in a ritual setting. The Hevajra tantra states: Those things by which evil men are bound, others turn into means and gain thereby release from the bonds of existence. By passion the world is bound, by passion too it is released, but by heretical Buddhists this practice of reversals is not known. Another hermeneutic of Buddhist Tantric commentaries such as the Vimalaprabha of Pundarika (a commentary on the Kalacakra Tantra) is one of interpreting taboo or unethical statements in the Tantras as metaphorical statements about tantric practice. For example, in the Vimalaprabha, "killing living beings" refers to stopping the prana at the top of the head. 
In the Tantric Candrakirti's Pradipoddyotana, a commentary on the Guhyasamaja Tantra, killing living beings is glossed as "making them void" by means of a "special samadhi", which according to Bu-ston is associated with completion stage tantric practice. Douglas Duckworth notes that the Vajrayāna philosophical outlook is one of embodiment, which sees the physical and cosmological body as already containing wisdom and divinity. Liberation (nirvana) and Buddhahood are not seen as something outside or an event in the future, but as immanently present and accessible right now through unique tantric practices like deity yoga, and hence Vajrayāna is also called the "resultant vehicle". Duckworth names the philosophical view of Vajrayāna as a form of pantheism, by which he means the belief that every existing entity is in some sense divine and that all things express some form of unity. Major Indian Tantric Buddhist philosophers such as Buddhaguhya, Padmavajra (author of the Guhyasiddhi), Nagarjuna (7th-century disciple of Saraha), Indrabhuti (author of the Jñānasiddhi), Anangavajra, Dombiheruka, Durjayacandra, Ratnākaraśānti and Abhayakaragupta wrote tantric texts and commentaries systematizing the tradition. Others such as Vajrabodhi and Śubhakarasiṃha brought Tantra to Tang China (716 to 720), and tantric philosophy continued to be developed in Chinese and Japanese by thinkers such as Yi Xing (683–727) and Kūkai (774–835). In Tibet, philosophers such as Sakya Pandita (1182–1251), Longchenpa (1308–1364) and Tsongkhapa (1357–1419) continued the tradition of Buddhist Tantric philosophy in Classical Tibetan. Tibetan Buddhist philosophy Tibetan Buddhist philosophy is mainly a continuation and refinement of the Indian traditions of Madhyamaka, Yogacara and the Dignaga-Dharmakīrti school of epistemology or "reliable cognition" (Sanskrit: pramana, Tib. tshad ma). The initial efforts of Śāntarakṣita and Kamalaśīla brought their eclectic scholarly tradition to Tibet. Other influences include Buddhist Tantras and the Buddha nature texts. The initial work of early Tibetan Buddhist philosophers was the translation of classical Indian philosophical treatises and the writing of commentaries. This initial period was from the 8th to the 10th century. Early Tibetan commentator-philosophers were heavily influenced by the work of Dharmakirti, and these include Ngok Loden Sherab (1059–1109) and Chaba Chökyi Senge (1109–1169). Their works are now lost. The 12th and 13th centuries saw the translation of the works of Chandrakirti, the promulgation of his views in Tibet by scholars such as Patsab Nyima Drakpa, Kanakavarman and Jayananda (12th century), and the development of the Tibetan debate between the prasangika and svatantrika views, which continues to this day among Tibetan Buddhist schools. The main disagreement between these views concerns the use of reasoned argument. For Śāntarakṣita, Kamalaśīla and their defenders, reason is useful in establishing arguments that lead one to a correct understanding of emptiness; then, through the use of meditation, one can reach non-conceptual gnosis that does not rely on reason. For Chandrakirti, however, this is wrong, because meditation on emptiness cannot possibly involve any object. Reason's role here is to negate any essence or essentialist views, and then eventually negate itself along with any conceptual proliferation (prapañca). Another very influential figure from this early period is Mabja Jangchub Tsöndrü (d. 
1185), who wrote an important commentary on Nagarjuna's Mūlamadhyamakakārikā. Mabja studied under the Dharmakirtian Chaba and also the Candrakirti scholar Patsab. His work shows an attempt to steer a middle course between their views: he affirms the conventional usefulness of Buddhist pramāṇa but also accepts Candrakirti's prasangika views. Mabja's Madhyamaka scholarship was very influential on later Tibetan Madhyamikas such as Longchenpa, Tsongkhapa, Gorampa, and Mikyö Dorje. There are various Tibetan Buddhist schools or monastic orders. According to Georges B.J. Dreyfus, within Tibetan thought, the Sakya school holds a mostly anti-realist philosophical position (which sees saṁvṛtisatya / conventional truth as an illusion), while the Gelug school tends to defend a form of realism (which accepts that conventional truth is in some sense real and true, yet dependently originated). The Kagyu and Nyingma schools also tend to follow Sakya anti-realism (with some differences). Shentong and Buddha nature The 14th century saw increasing interest in the Buddha nature texts and doctrines. This can be seen in the work of the third Kagyu Karmapa Rangjung Dorje (1284–1339), especially his treatise "Profound Inner Meaning". This treatise describes ultimate nature or suchness as Buddha nature, which is the basis for nirvana and samsara, radiant in nature and empty in essence, surpassing thought. Dolpopa (Dol-bo-ba, 1292–1361), founder of the Jonang school, developed a view called shentong (Wylie: gzhan stong, 'other empty'), which is closely tied to Yogacara and Buddha-nature theories. This view holds that the qualities of Buddhahood or Buddha nature are already present in the mind, and that it is empty of all the conventional reality which occludes its own nature as Buddhahood or Dharmakaya. According to Dolpopa, all beings are said to have Buddha nature, which is real, unchanging, permanent, non-conditioned, eternal, blissful and compassionate. Dolpopa's shentong view taught that ultimate reality was truly a "Great Self" or "Supreme Self", referring to works such as the Mahāyāna Mahāparinirvāṇa Sūtra, the Aṅgulimālīya Sūtra and the Śrīmālādevī Siṃhanāda Sūtra. This view had an influence on philosophers of other schools, such as Nyingma and Kagyu thinkers, and was also criticized in some circles as being similar to the Hindu notions of Atman. The Shentong philosophy was also expounded in Tibet and Mongolia by the later Jonang scholar Tāranātha (1575–1634). In the late 17th century, the Jonang order and its teachings came under attack by the 5th Dalai Lama, who converted the majority of their monasteries in Tibet to the Gelug order, although several survived in secret. Gelug Je Tsongkhapa (Dzong-ka-ba) (1357–1419) founded the Gelug school of Tibetan Buddhism, which came to dominate the country through the office of the Dalai Lama and which is the major defender of the Prasaṅgika Madhyamaka view. His work is influenced by the philosophy of Candrakirti and Dharmakirti. Tsongkhapa's magnum opus is The Ocean of Reasoning, a commentary on Nagarjuna's Mulamadhyamakakarika. Gelug philosophy is based upon the study of Madhyamaka texts and Tsongkhapa's works as well as formal debate (rtsod pa). Tsongkhapa defended Prasangika Madhyamaka as the highest view and critiqued the Svatantrika. Tsongkhapa argued that, because the Svatantrikas conventionally establish things by their own characteristics, they fail to completely understand the emptiness of phenomena and hence do not achieve the same realization. 
Drawing on Chandrakirti, Tsongkhapa rejected the Yogacara teachings, even as a provisional stepping stone to the Madhyamaka view. Tsongkhapa was also critical of the Shentong view of Dolpopa, which he saw as dangerously absolutist and hence outside the middle way. Tsongkhapa identified two major flaws in interpretations of Madhyamika: under-negation (of svabhava, or own essence), which could lead to absolutism, and over-negation, which could lead to nihilism. Tsongkhapa's solution to this dilemma was the promotion of the use of inferential reasoning only within the conventional realm of the two truths framework, allowing reason to be used for ethics and conventional monastic rules, and promoting a conventional epistemic realism, while holding that, from the view of ultimate truth (paramarthika satya), all things (including Buddha nature and Nirvana) are empty of inherent existence (svabhava), and that true liberation is this realization of emptiness. Sakya scholars such as Rongtön and Gorampa disagreed with Tsongkhapa and argued that the prasangika-svatantrika distinction was merely pedagogical. Gorampa also critiqued Tsongkhapa's realism, arguing that the structures which allow an empty object to be presented as conventionally real eventually dissolve under analysis and are thus unstructured and non-conceptual (spros bral). Tsongkhapa's students Gyel-tsap, Kay-drup, and Ge-dun-drup set forth an epistemological realism against the Sakya scholars' anti-realism. Sakya Sakya Pandita (1182–1251) was a 13th-century head of the Sakya school and ruler of Tibet. He was also one of the most important Buddhist philosophers in the Tibetan tradition, writing works on logic and epistemology and promoting Dharmakirti's Pramanavarttika (Commentary on Valid Cognition) as central to scholastic study. Sakya Pandita's 'Treasury of Logic on Valid Cognition' (Tshad ma rigs pa'i gter) set forth the classic Sakya epistemic anti-realist position, arguing that concepts such as universals are not known through valid cognition and hence are not real objects of knowledge. Sakya Pandita was also critical of theories of sudden awakening, which were held by some teachers of the "Chinese Great Perfection" in Tibet. Later Sakyas such as Gorampa (1429–1489) and Sakya Chokden (1428–1507) would develop and defend Sakya anti-realism, and they are seen as the major interpreters and critics of Sakya Pandita's philosophy. Sakya Chokden also critiqued Tsongkhapa's interpretation of Madhyamaka and Dolpopa's Shentong. In his Definite Ascertainment of the Middle Way, Chokden criticized Tsongkhapa's view as being too logo-centric and still caught up in conceptualization about the ultimate reality, which is beyond language. Sakya Chokden's philosophy attempted to reconcile the views of the Yogacara and Madhyamaka, seeing them both as valid and complementary perspectives on ultimate truth. Madhyamaka is seen by Chokden as removing the fault of taking the unreal as being real, and Yogacara as removing the fault of the denial of reality. Likewise, the Shentong and Rangtong views are seen as complementary by Sakya Chokden: Rangtong negation is effective in cutting through all clinging to wrong views and conceptual reification, while Shentong is more amenable for describing and enhancing meditative experience and realization. Therefore, for Sakya Chokden, the same realization of ultimate reality can be accessed and described in two different but compatible ways. 
Nyingma and Rimé The Nyingma school is strongly influenced by the view of Dzogchen (Great Perfection) and the Dzogchen Tantric literature. Longchenpa (1308–1364) was a major philosopher of the Nyingma school and wrote a large number of works on the Tibetan practice of Dzogchen and on Buddhist Tantra. These include the Seven Treasures, the Trilogy of Natural Ease, and his Trilogy of Dispelling Darkness. Longchenpa's works provide a philosophical understanding of Dzogchen, a defense of Dzogchen in light of the sutras, as well as practical instructions. For Longchenpa, the ground of reality is luminous clarity, rigpa, or Buddha nature, and this ground is also the bridge between sutra and tantra. Longchenpa's philosophy sought to establish the positive aspects of Buddha nature thought against the totally negative theology of Madhyamika without straying into the absolutism of Dolpopa. For Longchenpa, the basis for Dzogchen and Tantric practice in Vajrayana is the "Ground" (gzhi), the immanent Buddha nature, "the primordially luminous reality that is unconditioned and spontaneously present", which is "free from all elaborated extremes". The 19th century saw the rise of the Rimé movement (non-sectarian, unbiased), which sought to push back against the politically dominant Gelug school's criticisms of the Sakya, Kagyu, Nyingma and Bon philosophical views and to develop a more eclectic or universal system of textual study. Jamyang Khyentse Wangpo (1820–1892) and Jamgön Kongtrül (1813–1899) were the founders of Rimé. The Rimé movement came to prominence at a point in Tibetan history when the religious climate had become partisan. The aim of the movement was "a push towards a middle ground where the various views and styles of the different traditions were appreciated for their individual contributions rather than being refuted, marginalized, or banned." Philosophically, Jamgön Kongtrül defended Shentong as being compatible with Madhyamaka, while another Rimé scholar, Jamgon Ju Mipham Gyatso (1846–1912), criticized Tsongkhapa from a Nyingma perspective. Mipham argued that the view of the middle way is Unity (zung 'jug), meaning that from the ultimate perspective the duality of sentient beings and Buddhas is also dissolved. Mipham also affirmed the view of rangtong (self emptiness). The later Nyingma scholar Botrul (1894–1959) classified the major Tibetan Madhyamaka positions as shentong (other emptiness), Nyingma rangtong (self emptiness) and Gelug bdentong (emptiness of true existence). The main difference between them is their "object of negation": shentong states that inauthentic experience is empty, rangtong negates any conceptual reference, and bdentong negates any true existence. The 14th Dalai Lama was also influenced by this eclectic approach. Having studied under teachers from all major Tibetan Buddhist schools, he tends to hold that the different perspectives on emptiness are complementary: There is a tradition of making a distinction between two different perspectives on the nature of emptiness: one is when emptiness is presented within a philosophical analysis of the ultimate reality of things, in which case it ought to be understood in terms of a non-affirming negative phenomena. 
On the other hand, when it is discussed from the point of view of experience, it should be understood more in terms of an affirming negation – 14th Dalai Lama East Asian Buddhism Tiantai The schools of Buddhism that had existed in China prior to the emergence of the Tiantai are generally believed to represent direct transplantations from India, with little modification to their basic doctrines and methods. The Tiantai school, founded by Zhiyi (538–597), was the first truly unique Chinese Buddhist philosophical school. The doctrine of Tiantai was based on the ekayana or "one vehicle" doctrine taught in the Lotus Sutra and sought to bring together all Buddhist teachings and texts into a comprehensively inclusive hierarchical system, which placed the Lotus Sutra at the top of this hierarchy. Tiantai's metaphysics is an immanent holism, which sees every phenomenon, moment or event as conditioned and manifested by the whole of reality. Every instant of experience is a reflection of every other, and hence, suffering and nirvana, good and bad, Buddhahood and evildoing, are all "inherently entailed" within each other. Each moment of consciousness is simply the Absolute itself, infinitely immanent and self-reflecting. This metaphysics is entailed in the Tiantai teaching of the "three truths", which is an extension of the Mādhyamaka two truths doctrine. The three truths are: the conventional truth of appearance, the truth of emptiness (shunyata) and the third truth of 'the exclusive Center' (但中, danzhong) or middle way, which is beyond conventional truth and emptiness. This third truth is the Absolute and is expressed by the claim that nothing is "Neither-Same-Nor-Different" than anything else; rather, each 'thing' is the absolute totality of all things manifesting as a particular, and everything is mutually contained within each thing. Everything is a reflection of 'The Ultimate Reality of All Appearances' (諸法實相, zhufashixiang), and each thought "contains three thousand worlds". This perspective allows the Tiantai school to state such seemingly paradoxical things as "evil is ineradicable from the highest good, Buddhahood." Moreover, in Tiantai, nirvana and samsara are ultimately the same; as Zhiyi writes, "A single, unalloyed reality is all there is – no entities whatever exist outside of it." Though Zhiyi did write "One thought contains three thousand worlds", this does not entail idealism. According to Zhiyi, "The objects of the [true] aspects of reality are not something produced by Buddhas, gods, or men. They exist inherently on their own and have no beginning" (The Esoteric Meaning, 210). This is then a form of realism, which sees the mind as real as the world, interconnected with and inseparable from it. In Tiantai thought, ultimate reality is simply the phenomenal world of interconnected events or dharmas. Other key figures of Tiantai thought are Zhanran (711–782) and Siming Zhili (960–1028). Zhanran developed the idea that non-sentient beings have Buddha nature, since they are also a reflection of the Absolute. In Japan, this school was known as Tendai and was first brought there by Saicho. Huayan The Huayan school developed the doctrine of "interpenetration" or "coalescence" (Wylie: zung-'jug; Sanskrit: yuganaddha), based on the Avataṃsaka Sūtra (Flower Garland Sutra), a Mahāyāna scripture. Huayan holds that all phenomena (Sanskrit: dharmas) are deeply interconnected and mutually arising, and that every phenomenon contains all other phenomena. 
Various metaphors and images are used to illustrate this idea. The first is known as Indra's net. The net is set with jewels which have the extraordinary property that they reflect all of the other jewels, while the reflections also contain every other reflection, ad infinitum. The second image is that of the world text. This image portrays the world as consisting of an enormous text which is as large as the universe itself. The words of the text are composed of the phenomena that make up the world. However, every atom of the world contains the whole text within it. It is the work of a Buddha to let out the text so that beings can be liberated from suffering. Fazang (Fa-tsang, 643–712), one of the most important Huayan thinkers, wrote 'Essay on the Golden Lion' and 'Treatise on the Five Teachings', which contain other metaphors for the interpenetration of reality. He also used the metaphor of a house of mirrors. Fazang introduced the distinction of "the Realm of Principle" and "the Realm of Things". This theory was further developed by Cheng-guan (738–839) into the major Huayan thesis of "the fourfold Dharmadhatu" (dharma realm): the Realm of Principle, the Realm of Things, the Realm of the Noninterference between Principle and Things, and the Realm of the Noninterference of All Things. The first two are the universal and the particular, the third is the interpenetration of universal and particular, and the fourth is the interpenetration of all particulars. The third realm was explained by the metaphor of a golden lion: the gold is the universal and the particular is the shape and features of the lion. While both Tiantai and Huayan hold to the interpenetration and interconnection of all things, their metaphysics have some differences. Huayan metaphysics is influenced by Yogacara thought and is closer to idealism. The Avatamsaka sutra compares the phenomenal world to a dream, an illusion, and a magician's conjuring. The sutra states nothing has true reality, location, beginning and end, or substantial nature. The Avatamsaka also states that "The triple world is illusory – it is only made by one mind", and Fazang echoes this by writing, "outside of mind there is not a single thing that can be apprehended." Furthermore, according to Huayan thought, each mind creates its own world "according to their mental patterns", and "these worlds are infinite in kind" and constantly arising and passing away. However, in Huayan, the mind is not real either, but also empty. The true reality in Huayan, the noumenon, or "Principle", is likened to a mirror, while phenomena are compared to reflections in the mirror. It is also compared to the ocean, and phenomena to waves. In Korea, this school was known as Hwaeom and is represented in the work of Wonhyo (617–686), who also wrote about the idea of essence-function, a central theme in Korean Buddhist thought. In Japan, Huayan is known as Kegon and one of its major proponents was Myōe, who also introduced Tantric practices. Chan and Japanese Buddhism The philosophy of Chinese Chan Buddhism and Japanese Zen is based on various sources; these include Chinese Madhyamaka (Sānlùn), Yogacara (Wéishí), the Laṅkāvatāra Sūtra, and the Buddha nature texts. An important issue in Chan is that of subitism or "sudden awakening", the idea that awakening happens all at once in a flash of insight. This view was promoted by Shenhui and is a central issue discussed in the Platform Sutra, a key Chan scripture composed in China. Huayan philosophy also had an influence on Chan. 
The theory of the fourfold Dharmadhatu influenced the Five Ranks of Dongshan Liangjie (806–869), the founder of the Caodong Chan lineage. Guifeng Zongmi, who was also a patriarch of Huayan Buddhism, wrote extensively on the philosophy of Chan and on the Avatamsaka sutra. Japanese Buddhism during the 6th and 7th centuries saw an increase in the proliferation of new schools and forms of thought, a period known as the six schools of Nara (Nanto Rokushū). The Kamakura Period (1185–1333) also saw another flurry of intellectual activity. During this period, the influential figure of Nichiren (1222–1282) made the practice and universal message of the Lotus Sutra more readily available to the population. He is of particular importance in the history of thought and religion, as his teachings constitute a separate sect of Buddhism, one of the only major sects to have originated in Japan. Also during the Kamakura period, the founder of Soto Zen, Dogen (1200–1253), wrote many works on the philosophy of Zen; the Shobogenzo is his magnum opus. In Korea, Chinul was an important exponent of Seon Buddhism at around the same time. Esoteric Buddhism Tantric Buddhism arrived in China in the 7th century, during the Tang Dynasty. In China, this form of Buddhism is known as Mìzōng (密宗), or "Esoteric School", and Zhenyan (true word, Sanskrit: Mantrayana). Kūkai (774–835) is a major Japanese Buddhist philosopher and the founder of the Tantric Shingon (true word) school in Japan. He wrote on a wide variety of topics such as public policy, language, the arts, literature, music and religion. After studying in China under Huiguo, Kūkai brought together various elements into a cohesive philosophical system of Shingon. Kūkai's philosophy is based on the Mahavairocana Tantra and the Vajrasekhara Sutra (both from the seventh century). His Benkenmitsu nikkyôron (Treatise on the Differences Between Esoteric and Exoteric Teachings) outlines the difference between exoteric, mainstream Mahayana Buddhism (kengyô) and esoteric Tantric Buddhism (mikkyô). Kūkai provided the theoretical framework for the esoteric Buddhist practices of Mantrayana, bridging the gap between the doctrine of the sutras and tantric practices. At the foundation of Kūkai's thought is the Trikaya doctrine, which holds that there are three "bodies of the Buddha". According to Kūkai, esoteric Buddhism has the Dharmakaya (Jpn: hosshin, embodiment of truth) as its source, which is associated with Vairocana Buddha (Dainichi). Hosshin is embodied absolute reality and truth. Hosshin is mostly ineffable but can be experienced through esoteric practices such as mudras and mantras. While Mahayana is taught by the historical Buddha (nirmāṇakāya), it does not have ultimate reality as its source or the practices to experience the esoteric truth. For Shingon, from an enlightened perspective, the whole phenomenal world itself is also the teaching of Vairocana. The body of the world, its sounds and movements, is the body of truth (dharma), and furthermore it is also identical with the personal body of the cosmic Buddha. For Kūkai, world, actions, persons and Buddhas are all part of the cosmic monologue of Vairocana; they are the truth being preached to its own self-manifestations. This is hosshin seppô (literally: "the dharmakâya's expounding of the Dharma"), which can be accessed through mantra, the cosmic language of Vairocana emanating through cosmic vibration concentrated in sound. 
In a broad sense, the universe itself is a huge text expressing ultimate truth (Dharma) which must be "read". Dainichi means "Great Sun" and Kūkai uses this as a metaphor for the great primordial Buddha, whose teaching and presence illuminates and pervades all, like the light of the sun. This immanent presence also means that every being already has access to the liberated state (hongaku) and Buddha nature, and that, because of this, there is the possibility of "becoming Buddha in this very embodied existence" (sokushinjôbutsu). This is achieved because of the non-dual relationship between the macrocosm of Hosshin and the microcosm of the Shingon practitioner. Kūkai's exposition of what has been called Shingon's "metaphysics" is based on the three aspects of the cosmic truth or Hosshin – body, appearance and function. The body is the physical and mental elements, which are the body and mind of the cosmic Buddha and which are also empty (shunyata). For Shingon, the physical universe contains interconnected mental and physical events. The appearance aspect is the form of the world, which appears as mandalas of interconnected realms and is depicted in mandala art such as the Womb Realm mandala. The function is the movement and change which happens in the world, which includes change in forms, sounds and thought. These forms, sounds and thoughts are expressed by the Shingon practitioner in various rituals and tantric practices which allow them to connect with and inter-resonate with Dainichi and hence attain liberation here and now. Modern philosophy In Sri Lanka, Buddhist modernists such as Anagarika Dharmapala (1864–1933) and the American convert Henry Steel Olcott sought to show that Buddhism was rational and compatible with modern scientific ideas such as the theory of evolution. Dharmapala also argued that Buddhism included a strong social element, interpreting it as liberal, altruistic and democratic. A later Sri Lankan philosopher, K. N. Jayatilleke (1920–1970), wrote the classic modern account of Buddhist epistemology (Early Buddhist Theory of Knowledge, 1963). His student David Kalupahana wrote on the history of Buddhist thought and psychology. Other important Sri Lankan Buddhist thinkers include Ven. Ñāṇananda (Concept and Reality), Walpola Rahula, Hammalawa Saddhatissa (Buddhist Ethics, 1987), Gunapala Dharmasiri (A Buddhist critique of the Christian concept of God, 1988), P. D. Premasiri and R. G. de S. Wettimuny. In 20th-century China, the modernist Taixu (1890–1947) advocated a reform and revival of Buddhism. He promoted an idea of a Buddhist Pure Land, not as a metaphysical place in Buddhist cosmology but as something possible to create here and now in this very world, which could be achieved through a "Buddhism for Human Life" that was free of supernatural beliefs. Taixu also wrote on the connections between modern science and Buddhism, ultimately holding that "scientific methods can only corroborate the Buddhist doctrine, they can never advance beyond it". Like Taixu, Yin Shun (1906–2005) advocated a form of Humanistic Buddhism grounded in concern for humanitarian issues, and his students and followers have been influential in promoting Humanistic Buddhism in Taiwan. This period also saw a revival of the study of Weishi (Yogachara) by Yang Renshan (1837–1911), Ouyang Jingwu (1871–1943) and Liang Shuming (1893–1988). One of Tibetan Buddhism's most influential modernist thinkers is Gendün Chöphel (1903–1951), who, according to Donald S. 
Lopez Jr., "was arguably the most important Tibetan intellectual of the twentieth century." Gendün Chöphel travelled throughout India with the Indian Buddhist Rahul Sankrityayan and wrote a wide variety of material, including works promoting the importance of modern science to his Tibetan countrymen and also Buddhist philosophical texts such as Adornment for Nagarjuna's Thought. Another very influential Tibetan Buddhist modernist was Chögyam Trungpa, whose Shambhala Training was meant to be more suitable to modern Western sensitivities by offering a vision of "secular enlightenment". In Southeast Asia, thinkers such as Buddhadasa, Thích Nhất Hạnh, Sulak Sivaraksa and Aung San Suu Kyi have promoted a philosophy of socially Engaged Buddhism and have written on the socio-political application of Buddhism. Likewise, Buddhist approaches to economic ethics (Buddhist economics) have been explored in the works of E. F. Schumacher, Prayudh Payutto, Neville Karunatilake and Padmasiri de Silva. The study of the Pali Abhidhamma tradition continued to be influential in Myanmar, where it was developed by monks such as Ledi Sayadaw and Mahasi Sayadaw. Japanese philosophy was heavily influenced by the work of the Kyoto School which included Kitaro Nishida, Keiji Nishitani, Hajime Tanabe and Masao Abe. These thinkers brought Buddhist ideas in dialogue with Western philosophy, especially European phenomenologists and existentialists. The most important trend in Japanese Buddhist thought after the formation of the Kyoto school is Critical Buddhism, which argues against several Mahayana concepts such as Buddha nature and original enlightenment. In Nichiren Buddhism, the work of Daisaku Ikeda has also been popular. The Japanese Zen Buddhist D.T. Suzuki (1870–1966) was instrumental in bringing Zen Buddhism to the West and his Buddhist modernist works were very influential in the United States. Suzuki's worldview was a Zen Buddhism influenced by Romanticism and Transcendentalism, which promoted spiritual freedom as "a spontaneous, emancipatory consciousness that transcends rational intellect and social convention." This idea of Buddhism influenced the Beat writers, and a contemporary representative of Western Buddhist Romanticism is Gary Snyder. The American Theravada Buddhist monk Thanissaro Bhikkhu has critiqued 'Buddhist Romanticism' in his writings. Western Buddhist monastics and priests such as Nanavira Thera, Bhikkhu Bodhi, Nyanaponika Thera, Robert Aitken, Taigen Dan Leighton, and Matthieu Ricard have written texts on Buddhist philosophy. A feature of Buddhist thought in the West has been a desire for dialogue and integration with modern science and psychology, and various modern Buddhists such as B. Alan Wallace, James H. Austin, Mark Epstein and the 14th Dalai Lama have worked and written on this issue. Another area of convergence has been Buddhism and environmentalism, which is explored in the work of Joanna Macy. Another Western Buddhist philosophical trend has been the project to secularize Buddhism, as seen in the works of Stephen Batchelor. In the West, Comparative philosophy between Buddhist and Western thought began with the work of Charles A. Moore, who founded the journal Philosophy East and West. Contemporary Western Academics such as Mark Siderits, Jan Westerhoff, Jonardon Ganeri, Miri Albahari, Owen Flanagan, Damien Keown, Tom Tillemans, David Loy, Evan Thompson and Jay Garfield have written various works which interpret Buddhist ideas through Western philosophy. 
Comparison with other philosophies Scholars such as Thomas McEvilley, Christopher I. Beckwith, and Adrian Kuzminski have identified cross-influences between ancient Buddhism and the ancient Greek philosophy of Pyrrhonism. The Greek philosopher Pyrrho spent 18 months in India as part of Alexander the Great's court during Alexander's conquest of western India, where ancient biographers say his contact with the gymnosophists caused him to create his philosophy. Because of the high degree of similarity between Nāgārjuna's philosophy and Pyrrhonism, particularly the surviving works of Sextus Empiricus, Thomas McEvilley suspects that Nāgārjuna was influenced by Greek Pyrrhonist texts imported into India. Baruch Spinoza, though he argued for the existence of a permanent reality, asserted that all phenomenal existence is transitory. In his opinion sorrow is conquered "by finding an object of knowledge which is not transient, not ephemeral, but is immutable, permanent, everlasting." The Buddha, by contrast, taught that the only thing which is eternal is Nirvana. David Hume, after a relentless analysis of the mind, concluded that consciousness consists of fleeting mental states. Hume's bundle theory is a very similar concept to the Buddhist skandhas, though his skepticism about causation led him to opposite conclusions in other areas. Arthur Schopenhauer's philosophy parallels Buddhism in his affirmation of asceticism and renunciation as a response to suffering and desire (cf. Schopenhauer's The World as Will and Representation, 1818). Ludwig Wittgenstein's concept of the "language-game" closely parallels the warning that intellectual speculation or papañca is an impediment to understanding, as found in the Buddhist Parable of the Poison Arrow. Friedrich Nietzsche, although himself dismissive of Buddhism as yet another nihilism, had a similarly impermanent view of the self. Heidegger's ideas on being and nothingness have been held by some to be similar to Buddhist ideas. An alternative approach to the comparison of Buddhist thought with Western philosophy is to use the concept of the Middle Way in Buddhism as a critical tool for the assessment of Western philosophies. In this way, Western philosophies can be classified in Buddhist terms as eternalist or nihilist. In a Buddhist view, all philosophies are considered non-essential views (ditthis) and not to be clung to.
See also
Buddhism and science
Buddhist ethics
Buddhist logic
Critical Buddhism
God in Buddhism
List of Buddhist terms and concepts
List of Buddhist topics
List of sutras
Madhyamaka
Mindstream
Reality in Buddhism
External links
Buddhism in a Nutshell
2500 Years of Buddhism by Prof. P.Y. Bapat (1956) at archive.org
BIOS
In computing, BIOS (Basic Input/Output System, also known as the System BIOS, ROM BIOS, BIOS ROM or PC BIOS) is firmware used to provide runtime services for operating systems and programs and to perform hardware initialization during the booting process (power-on startup). The BIOS firmware comes pre-installed on an IBM PC or IBM PC compatible's system board and exists in some UEFI-based systems to maintain compatibility with operating systems that do not support UEFI native operation. The name originates from the Basic Input/Output System used in the CP/M operating system in 1975. The BIOS originally proprietary to the IBM PC has been reverse engineered by some companies (such as Phoenix Technologies) looking to create compatible systems. The interface of that original system serves as a de facto standard. The BIOS in modern PCs initializes and tests the system hardware components (power-on self-test), and loads a boot loader from a mass storage device which then initializes a kernel. In the era of DOS, the BIOS provided BIOS interrupt calls for the keyboard, display, storage, and other input/output (I/O) devices that standardized an interface to application programs and the operating system. More recent operating systems do not use the BIOS interrupt calls after startup. Most BIOS implementations are specifically designed to work with a particular computer or motherboard model, by interfacing with various devices, especially the system chipset. Originally, BIOS firmware was stored in a ROM chip on the PC motherboard. In later computer systems, the BIOS contents are stored on flash memory so they can be rewritten without removing the chip from the motherboard. This allows easy, end-user updates to the BIOS firmware so new features can be added or bugs can be fixed, but it also creates a possibility for the computer to become infected with BIOS rootkits. Furthermore, a BIOS upgrade that fails could brick the motherboard. Windows 10 is the last version of Microsoft Windows that can boot PCs using legacy BIOS firmware. Unified Extensible Firmware Interface (UEFI) is a successor to the legacy PC BIOS, aiming to address its technical limitations. History The term BIOS (Basic Input/Output System) was created by Gary Kildall and first appeared in the CP/M operating system in 1975, describing the machine-specific part of CP/M loaded during boot time that interfaces directly with the hardware. (A CP/M machine usually has only a simple boot loader in its ROM.) Versions of MS-DOS, PC DOS or DR-DOS contain a file called variously "IO.SYS", "IBMBIO.COM", "IBMBIO.SYS", or "DRBIOS.SYS"; this file is known as the "DOS BIOS" (also known as the "DOS I/O System") and contains the lower-level hardware-specific part of the operating system. Together with the underlying hardware-specific but operating system-independent "System BIOS", which resides in ROM, it represents the analogue to the "CP/M BIOS". With the introduction of PS/2 machines, IBM divided the System BIOS into real- and protected-mode portions. The real-mode portion was meant to provide backward compatibility with existing operating systems such as DOS, and therefore was named "CBIOS" (for "Compatibility BIOS"), whereas the "ABIOS" (for "Advanced BIOS") provided new interfaces specifically suited for multitasking operating systems such as OS/2. 
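As a concrete illustration of the DOS-era BIOS interrupt calls mentioned above, a 16-bit real-mode DOS program could invoke the INT 10h video service through the int86() helper shipped with classic DOS C compilers such as Turbo C or Microsoft C. The following is only a minimal sketch of such a call, not part of any BIOS itself, and it builds only with one of those 16-bit compilers:

    #include <dos.h>   /* union REGS and int86(), from 16-bit DOS C compilers */

    /* Print one character via the BIOS "teletype output" service:
     * INT 10h with AH = 0Eh, AL = character, BH = video page. */
    void bios_putchar(char c)
    {
        union REGS regs;
        regs.h.ah = 0x0E;          /* function 0Eh: teletype output */
        regs.h.al = (unsigned char)c;
        regs.h.bh = 0;             /* video page 0 */
        int86(0x10, &regs, &regs); /* raise the BIOS video interrupt */
    }

    int main(void)
    {
        const char *msg = "Hello from the BIOS video service\r\n";
        while (*msg)
            bios_putchar(*msg++);
        return 0;
    }

As noted above, more recent operating systems bypass calls like this entirely after startup, talking to the hardware through their own drivers instead. 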
User interface The BIOS of the original IBM PC and XT had no interactive user interface. Error codes or messages were displayed on the screen, or coded series of sounds were generated to signal errors when the power-on self-test (POST) had not proceeded to the point of successfully initializing a video display adapter. Options on the IBM PC and XT were set by switches and jumpers on the main board and on expansion cards. Starting around the mid-1990s, it became typical for the BIOS ROM to include a "BIOS configuration utility" (BCU) or "BIOS setup utility", accessed at system power-up by a particular key sequence. This program allowed the user to set system configuration options, of the type formerly set using DIP switches, through an interactive menu system controlled through the keyboard. In the interim period, IBM-compatible PCs, including the IBM AT, held configuration settings in battery-backed RAM and used a bootable configuration program on floppy disk, not in the ROM, to set the configuration options contained in this memory. The floppy disk was supplied with the computer, and if it was lost the system settings could not be changed. The same applied in general to computers with an EISA bus, for which the configuration program was called an EISA Configuration Utility (ECU). A modern Wintel-compatible computer provides a setup routine essentially unchanged in nature from the ROM-resident BIOS setup utilities of the late 1990s; the user can configure hardware options using the keyboard and video display. The modern Wintel machine may store the BIOS configuration settings in flash ROM, perhaps the same flash ROM that holds the BIOS itself. Operation System startup Early Intel processors started at physical address 000FFFF0h. Systems with later processors provide logic to start running the BIOS from the system ROM. If the system has just been powered up or the reset button was pressed ("cold boot"), the full power-on self-test (POST) is run. If Ctrl+Alt+Delete was pressed ("warm boot"), a special flag value stored in nonvolatile BIOS memory ("CMOS") is tested by the BIOS, allowing it to bypass the lengthy POST and memory detection. The POST identifies, tests and initializes system devices such as the CPU, chipset, RAM, motherboard, video card, keyboard, mouse, hard disk drive, optical disc drive and other hardware, including integrated peripherals. Early IBM PCs had a routine in the POST that would download a program into RAM through the keyboard port and run it. This feature was intended for factory test or diagnostic purposes. Boot process After the option ROM scan is completed and all detected ROM modules with valid checksums have been called, or immediately after POST in a BIOS version that does not scan for option ROMs, the BIOS calls INT 19h to start boot processing. Programs loaded after boot can also call INT 19h to reboot the system, but they must be careful to disable interrupts and other asynchronous hardware processes that may interfere with the BIOS rebooting process, or else the system may hang or crash while it is rebooting. When INT 19h is called, the BIOS attempts to locate boot loader software on a "boot device", such as a hard disk, a floppy disk, CD, or DVD. It loads and executes the first boot software it finds, giving it control of the PC. The BIOS uses the boot devices set in nonvolatile BIOS memory (CMOS) or, in the earliest PCs, by DIP switches. The BIOS checks each device in order to see if it is bootable by attempting to load the first sector (boot sector). 
If the sector cannot be read, the BIOS proceeds to the next device. If the sector is read successfully, some BIOSes will also check for the boot sector signature 0x55 0xAA in the last two bytes of the sector (which is 512 bytes long) before accepting a boot sector and considering the device bootable. When a bootable device is found, the BIOS transfers control to the loaded sector. The BIOS does not interpret the contents of the boot sector other than to possibly check for the boot sector signature in the last two bytes. Interpretation of data structures like partition tables and BIOS Parameter Blocks is done by the boot program in the boot sector itself or by other programs loaded through the boot process. A non-disk device such as a network adapter attempts booting by a procedure that is defined by its option ROM or the equivalent integrated into the motherboard BIOS ROM. As such, option ROMs may also influence or supplant the boot process defined by the motherboard BIOS ROM. With the El Torito optical media boot standard, the optical drive actually emulates a 3.5" high-density floppy disk to the BIOS for boot purposes. Reading the "first sector" of a CD-ROM or DVD-ROM is not as simply defined an operation as it is on a floppy disk or a hard disk. Furthermore, the complexity of the medium makes it difficult to write a useful boot program in one sector. The bootable virtual floppy disk can contain software that provides access to the optical medium in its native format. Boot priority The user can select the boot priority implemented by the BIOS. For example, most computers have a hard disk that is bootable, but sometimes there is a removable-media drive that has higher boot priority, so the user can cause a removable disk to be booted. In most modern BIOSes, the boot priority order can be configured by the user. In older BIOSes, limited boot priority options were selectable; in the earliest BIOSes, a fixed priority scheme was implemented, with floppy disk drives first, fixed disks (i.e. hard disks) second, and typically no other boot devices supported, subject to modification of these rules by installed option ROMs. The BIOS in an early PC also would usually boot only from the first floppy disk drive or the first hard disk drive, even if there were two drives installed. Boot failure On the original IBM PC and XT, if no bootable disk was found, ROM BASIC was started by calling INT 18h. Since few programs used BASIC in ROM, clone PC makers left it out; then a computer that failed to boot from a disk would display "No ROM BASIC" and halt (in response to INT 18h). Later computers would display a message like "No bootable disk found"; some would prompt for a disk to be inserted and a key to be pressed to retry the boot process. A modern BIOS may display nothing or may automatically enter the BIOS configuration utility when the boot process fails. Boot environment The environment for the boot program is very simple: the CPU is in real mode and the general-purpose and segment registers are undefined, except SS, SP, CS, and DL. CS:IP always points to physical address 0x07C00. What values CS and IP actually have is not well defined. Some BIOSes use a CS:IP of 0x0000:0x7C00 while others may use 0x07C0:0x0000. Because boot programs are always loaded at this fixed address, there is no need for a boot program to be relocatable. DL may contain the drive number, as used with INT 13h, of the boot device. 
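The boot-device scan described above can be summarized in code. The following is a simplified sketch of the INT 19h logic in C, not any real BIOS's implementation: the boot_order list, read_first_sector() and jump_to_boot_sector() are hypothetical stand-ins for the CMOS-configured boot order and the BIOS's internal disk and control-transfer routines.

    #include <stdint.h>
    #include <stddef.h>

    #define SECTOR_SIZE    512
    #define BOOT_LOAD_ADDR ((uint8_t *)0x7C00) /* boot sectors load at 0x7C00 */

    /* Hypothetical helper: read sector 0 of a drive (0x00 = first floppy,
     * 0x80 = first hard disk, matching INT 13h numbering); 0 on success. */
    extern int read_first_sector(uint8_t drive, uint8_t *buf);
    /* Hypothetical helper: far-jump to 0x7C00 with DL = drive number. */
    extern void jump_to_boot_sector(uint8_t drive);

    /* INT 19h-style boot: try each configured device in priority order. */
    void int19_boot(const uint8_t *boot_order, size_t ndevices)
    {
        for (size_t i = 0; i < ndevices; i++) {
            uint8_t drive = boot_order[i];

            if (read_first_sector(drive, BOOT_LOAD_ADDR) != 0)
                continue; /* unreadable: try the next device */

            /* Some BIOSes also require the 0x55 0xAA signature in the
             * last two bytes of the 512-byte sector. */
            if (BOOT_LOAD_ADDR[SECTOR_SIZE - 2] != 0x55 ||
                BOOT_LOAD_ADDR[SECTOR_SIZE - 1] != 0xAA)
                continue;

            jump_to_boot_sector(drive); /* hands over control; never returns */
        }
        /* No bootable device: the original PC called INT 18h (ROM BASIC);
         * later BIOSes print an error or enter the setup utility instead. */
    }
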
SS:SP points to a valid stack that is presumably large enough to support hardware interrupts, but otherwise SS and SP are undefined. (A stack must already be set up in order for interrupts to be serviced, and interrupts must be enabled in order for the system timer-tick interrupt, which BIOS always uses at least to maintain the time-of-day count and which it initializes during POST, to be active and for the keyboard to work. The keyboard works even if the BIOS keyboard service is not called; keystrokes are received and placed in the 15-character type-ahead buffer maintained by BIOS.) The boot program must set up its own stack, because the size of the stack set up by BIOS is unknown and its location is likewise variable; although the boot program can investigate the default stack by examining SS:SP, it is easier and shorter to just unconditionally set up a new stack.

At boot time, all BIOS services are available, and the memory below address 0x00400 contains the interrupt vector table. BIOS POST has initialized the system timers, interrupt controller(s), DMA controller(s), and other motherboard/chipset hardware as necessary to bring all BIOS services to ready status. DRAM refresh for all system DRAM in conventional memory and extended memory, but not necessarily expanded memory, has been set up and is running. The interrupt vectors corresponding to the BIOS interrupts have been set to point at the appropriate entry points in the BIOS, hardware interrupt vectors for devices initialized by the BIOS have been set to point to the BIOS-provided ISRs, and some other interrupts, including ones that BIOS generates for programs to hook, have been set to a default dummy ISR that immediately returns. The BIOS maintains a reserved block of system RAM at addresses 0x00400–0x004FF with various parameters initialized during the POST. All memory at and above address 0x00500 can be used by the boot program; it may even overwrite itself.

Extensions (option ROMs)

Peripheral cards such as hard disk drive host bus adapters and video cards have their own firmware, and a BIOS extension option ROM may be part of that firmware, providing additional functionality to the BIOS. Code in option ROMs runs before the BIOS boots the operating system from mass storage. These ROMs typically test and initialize hardware, add new BIOS services, or replace existing BIOS services with their own services. For example, a SCSI controller usually has a BIOS extension ROM that adds support for hard drives connected through that controller. An extension ROM could in principle contain an entire operating system, or it could implement an entirely different boot process such as network booting. Operation of an IBM-compatible computer system can be completely changed by removing or inserting an adapter card (or a ROM chip) that contains a BIOS extension ROM.

The motherboard BIOS typically contains code for initializing and bootstrapping integrated display and integrated storage. In addition, plug-in adapter cards such as SCSI, RAID, network interface cards, and video cards often include their own BIOS (e.g. Video BIOS), complementing or replacing the system BIOS code for the given component. Even devices built into the motherboard can behave in this way; their option ROMs can be a part of the motherboard BIOS.
An add-in card requires an option ROM if the card is not supported by the motherboard BIOS and the card needs to be initialized or made accessible through BIOS services before the operating system can be loaded (usually this means it is required in the boot process). An additional advantage of ROM on some early PC systems (notably including the IBM PCjr) was that ROM was faster than main system RAM. (On modern systems, the case is very much the reverse of this, and BIOS ROM code is usually copied ("shadowed") into RAM so it will run faster.)

Boot procedure

If an expansion ROM wishes to change the way the system boots (such as from a network device or a SCSI adapter) in a cooperative way, it can use the BIOS Boot Specification (BBS) API to register its ability to do so. Once the expansion ROMs have registered using the BBS APIs, the user can select among the available boot options from within the BIOS's user interface. This is why most BBS-compliant PC BIOS implementations will not allow the user to enter the BIOS's user interface until the expansion ROMs have finished executing and registering themselves with the BBS API.

Also, if an expansion ROM wishes to change the way the system boots unilaterally, it can simply hook INT 19h or other interrupts normally called from interrupt 19h, such as INT 13h, the BIOS disk service, to intercept the BIOS boot process. Then it can replace the BIOS boot process with one of its own, or it can merely modify the boot sequence by inserting its own boot actions into it, by preventing the BIOS from detecting certain devices as bootable, or both. Before the BIOS Boot Specification was promulgated, this was the only way for expansion ROMs to implement boot capability for devices not supported for booting by the native BIOS of the motherboard.

Initialization

After the motherboard BIOS completes its POST, most BIOS versions search for option ROM modules, also called BIOS extension ROMs, and execute them. The motherboard BIOS scans for extension ROMs in a portion of the "upper memory area" (the part of the x86 real-mode address space at and above address 0xA0000) and runs each ROM found, in order. To discover memory-mapped option ROMs, a BIOS implementation scans the real-mode address space from 0x0C0000 to 0x0F0000 on 2 KB (2,048-byte) boundaries, looking for a two-byte ROM signature: 0x55 followed by 0xAA. In a valid expansion ROM, this signature is followed by a single byte indicating the number of 512-byte blocks the expansion ROM occupies in real memory, and execution begins at the next byte (offset 0x03), the option ROM's entry point (also known as its "entry offset"). If the ROM has a valid checksum, the BIOS transfers control to the entry address, which in a normal BIOS extension ROM should be the beginning of the extension's initialization routine.

At this point, the extension ROM code takes over, typically testing and initializing the hardware it controls and registering interrupt vectors for use by post-boot applications. It may use BIOS services (including those provided by previously initialized option ROMs) to provide a user configuration interface, to display diagnostic information, or to do anything else that it requires. It is possible that an option ROM will not return to the BIOS, pre-empting the BIOS's boot sequence altogether. An option ROM should normally return to the BIOS after completing its initialization process; the scan loop itself is sketched below.
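The scan-and-checksum procedure can be simulated on a host in a few lines of C. The synthetic image buffer and the planted ROM are assumptions for demonstration; real firmware walks physical upper memory instead, and a valid option ROM's bytes sum to zero modulo 256:

/* Host-side simulation of the option ROM scan described above. */
#include <stdint.h>
#include <stdio.h>

#define REGION_BASE 0xC0000u
#define REGION_SIZE 0x30000u   /* 0xC0000..0xEFFFF */

static int checksum_ok(const uint8_t *rom, size_t len) {
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum = (uint8_t)(sum + rom[i]);
    return sum == 0;           /* valid option ROMs sum to 0 mod 256 */
}

int main(void) {
    static uint8_t image[REGION_SIZE];       /* stands in for upper memory */

    /* Plant one fake 2 KB option ROM at 0xC8000 for demonstration. */
    uint8_t *rom = image + (0xC8000u - REGION_BASE);
    rom[0] = 0x55; rom[1] = 0xAA;            /* signature */
    rom[2] = 4;                              /* 4 * 512 = 2048 bytes */
    rom[2048 - 1] = (uint8_t)(0 - (0x55 + 0xAA + 4)); /* fix up checksum */

    for (size_t off = 0; off < REGION_SIZE; off += 2048) {
        const uint8_t *p = image + off;
        if (p[0] != 0x55 || p[1] != 0xAA)
            continue;                        /* no signature on this boundary */
        size_t len = (size_t)p[2] * 512;
        if (len == 0 || off + len > REGION_SIZE || !checksum_ok(p, len)) {
            printf("bad ROM header at 0x%05zX\n", REGION_BASE + off);
            continue;
        }
        printf("valid option ROM at 0x%05zX, %zu bytes; "
               "would far-call its entry point at offset 3\n",
               REGION_BASE + off, len);
    }
    return 0;
}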
Once (and if) an option ROM returns, the BIOS continues searching for more option ROMs, calling each as it is found, until the entire option ROM area in the memory space has been scanned.

Physical placement

Option ROMs normally reside on adapter cards. However, the original PC, and perhaps also the PC XT, have a spare ROM socket on the motherboard (the "system board" in IBM's terms) into which an option ROM can be inserted, and the four ROMs that contain the BASIC interpreter can also be removed and replaced with custom ROMs, which can be option ROMs. The IBM PCjr is unique among PCs in having two ROM cartridge slots on the front. Cartridges in these slots map into the same region of the upper memory area used for option ROMs, and the cartridges can contain option ROM modules that the BIOS would recognize. The cartridges can also contain other types of ROM modules, such as BASIC programs, that are handled differently. One PCjr cartridge can contain several ROM modules of different types, possibly stored together in one ROM chip.

Operating system services

The BIOS ROM is customized to the particular manufacturer's hardware, allowing low-level services (such as reading a keystroke or writing a sector of data to diskette) to be provided in a standardized way to programs, including operating systems. For example, an IBM PC might have either a monochrome or a color display adapter (using different display memory addresses and hardware), but a single, standard, BIOS system call may be invoked to display a character at a specified position on the screen in text mode or graphics mode.

The BIOS provides a small library of basic input/output functions to operate peripherals (such as the keyboard, rudimentary text and graphics display functions and so forth). When using MS-DOS, BIOS services could be accessed by an application program (or by MS-DOS) by executing an INT 13h interrupt instruction to access disk functions, or by executing one of a number of other documented BIOS interrupt calls to access video display, keyboard, cassette, and other device functions.

Operating systems and executive software that are designed to supersede this basic firmware functionality provide replacement software interfaces to application software. Applications can also provide these services to themselves. This began even in the 1980s under MS-DOS, when programmers observed that using the BIOS video services for graphics display was very slow. To increase the speed of screen output, many programs bypassed the BIOS and programmed the video display hardware directly. Other graphics programmers, particularly but not exclusively in the demoscene, observed that there were technical capabilities of the PC display adapters that were not supported by the IBM BIOS and could not be taken advantage of without circumventing it.

Since the AT-compatible BIOS ran in Intel real mode, operating systems that ran in protected mode on 286 and later processors required hardware device drivers compatible with protected mode operation to replace BIOS services. In modern PCs running modern operating systems (such as Windows and Linux), BIOS interrupt calls are used only during booting and initial loading of operating systems. Before the operating system's first graphical screen is displayed, input and output are typically handled through BIOS.
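As a concrete example of such a call, a DOS-era C program could invoke the video BIOS directly. This sketch assumes a 16-bit DOS compiler, such as Turbo C or Open Watcom, that supplies <dos.h> and int86(); it will not build as 32- or 64-bit code:

/* Print one character via the BIOS video service (INT 10h, AH=0Eh,
 * "teletype output").  16-bit DOS compilers only. */
#include <dos.h>

int main(void) {
    union REGS r;
    r.h.ah = 0x0E;          /* BIOS video service: teletype output */
    r.h.al = 'A';           /* character to print */
    r.h.bh = 0;             /* display page 0 */
    int86(0x10, &r, &r);    /* raise the software interrupt */
    return 0;
}

The same int86() mechanism reaches the other documented services, for example INT 13h for disk access or INT 16h for the keyboard.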
A boot menu such as the textual menu of Windows, which allows users to choose an operating system to boot, to boot into safe mode, or to use the last known good configuration, is displayed through BIOS and receives keyboard input through BIOS. Many modern PCs can still boot and run legacy operating systems such as MS-DOS or DR-DOS that rely heavily on BIOS for their console and disk I/O, provided that the system has a BIOS, or a CSM-capable UEFI firmware.

Processor microcode updates

Intel processors have had reprogrammable microcode since the P6 microarchitecture, and AMD processors since the K7 microarchitecture. The BIOS contains patches to the processor microcode that fix errors in the initial processor microcode. The microcode is loaded into the processor's SRAM, so the reprogramming is not persistent; loading of microcode updates is therefore performed each time the system is powered up. Without reprogrammable microcode, an expensive processor swap would be required; for example, the Pentium FDIV bug became an expensive fiasco for Intel as it required a product recall because the original Pentium processor's defective microcode could not be reprogrammed. Operating systems can also update the main processor's microcode.

Identification

Some BIOSes contain a software licensing description table (SLIC), a digital signature placed inside the BIOS by the original equipment manufacturer (OEM), for example Dell. The SLIC is inserted into the ACPI data table and contains no active code.

Computer manufacturers that distribute OEM versions of Microsoft Windows and Microsoft application software can use the SLIC to authenticate licensing to the OEM Windows installation disk and system recovery disc containing Windows software. Systems with a SLIC can be preactivated with an OEM product key, and they verify an XML-formatted OEM certificate against the SLIC in the BIOS as a means of self-activating (see System Locked Preinstallation, SLP). If a user performs a fresh install of Windows, they will need to have possession of both the OEM key (either SLP or COA) and the digital certificate for their SLIC in order to bypass activation. This can be achieved if the user performs a restore using a pre-customised image provided by the OEM. Power users can copy the necessary certificate files from the OEM image, decode the SLP product key, then perform SLP activation manually. Cracks for non-genuine Windows distributions usually edit the SLIC or emulate it in order to bypass Windows activation.

Overclocking

Some BIOS implementations allow overclocking, an action in which the CPU is adjusted to a higher clock rate than its manufacturer rating for guaranteed capability. Overclocking may, however, seriously compromise system reliability in insufficiently cooled computers and generally shorten component lifespan. Overclocking, when incorrectly performed, may also cause components to overheat so quickly that they mechanically destroy themselves.

Modern use

Some older operating systems, for example MS-DOS, rely on the BIOS to carry out most input/output tasks within the PC. Calling real mode BIOS services directly is inefficient for protected mode (and long mode) operating systems, so BIOS interrupt calls are not used by modern multitasking operating systems after they initially load.
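Traces of that initial handoff remain visible after boot, however. For example, on Linux the memory map the firmware reported through the e820 interface (mentioned below) is re-exported under /sys/firmware/memmap. A hedged sketch; the directory exists only when the kernel was built with CONFIG_FIRMWARE_MEMMAP and the machine booted through a BIOS-style path:

/* Dump the firmware-provided memory map from Linux sysfs.
 * Each entry directory holds the files "start", "end" and "type". */
#include <stdio.h>
#include <string.h>

static int read_line(const char *fmt, int i, char *buf, size_t n) {
    char path[64];
    snprintf(path, sizeof path, fmt, i);
    FILE *f = fopen(path, "r");
    if (!f)
        return 0;                       /* entry does not exist */
    if (!fgets(buf, (int)n, f))
        buf[0] = '\0';
    fclose(f);
    buf[strcspn(buf, "\n")] = '\0';     /* strip trailing newline */
    return 1;
}

int main(void) {
    char start[32], end[32], type[64];
    for (int i = 0;
         read_line("/sys/firmware/memmap/%d/start", i, start, sizeof start);
         i++) {
        read_line("/sys/firmware/memmap/%d/end", i, end, sizeof end);
        read_line("/sys/firmware/memmap/%d/type", i, type, sizeof type);
        printf("entry %2d: %s-%s  %s\n", i, start, end, type);
    }
    return 0;
}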
In the 1990s, the BIOS provided some protected mode interfaces for Microsoft Windows and Unix-like operating systems, such as Advanced Power Management (APM), Plug and Play BIOS, Desktop Management Interface (DMI), VESA BIOS Extensions (VBE), e820 and MultiProcessor Specification (MPS). Starting in the 2000s, most BIOSes provide ACPI, SMBIOS, VBE and e820 interfaces for modern operating systems. After operating systems load, the System Management Mode code is still running in SMRAM. Since 2010, BIOS technology has been in a transitional process toward UEFI.

Configuration

Setup utility

Historically, the BIOS in the IBM PC and XT had no built-in user interface. The BIOS versions in earlier PCs (XT-class) were not software-configurable; instead, users set the options via DIP switches on the motherboard. Later computers, including all IBM-compatibles with 80286 CPUs, had a battery-backed nonvolatile BIOS memory (CMOS RAM chip) that held BIOS settings. These settings, such as video-adapter type, memory size, and hard-disk parameters, could only be configured by running a configuration program from a disk, not built into the ROM. A special "reference diskette" was inserted in an IBM AT to configure settings such as memory size.

Early BIOS versions did not have passwords or boot-device selection options. The BIOS was hard-coded to boot from the first floppy drive, or, if that failed, the first hard disk. Access control in early AT-class machines was by a physical keylock switch (which was not hard to defeat if the computer case could be opened). Anyone who could switch on the computer could boot it.

Later, 386-class computers started integrating the BIOS setup utility in the ROM itself, alongside the BIOS code; these computers usually boot into the BIOS setup utility if a certain key or key combination is pressed, otherwise the BIOS POST and boot process are executed.

A modern BIOS setup utility has a text user interface (TUI) or graphical user interface (GUI) accessed by pressing a certain key on the keyboard when the PC starts. Usually, the key is advertised for a short time during the early startup, for example "Press DEL to enter Setup". The actual key depends on the specific hardware. Features present in the BIOS setup utility typically include:

Configuring, enabling and disabling the hardware components
Setting the system time
Setting the boot order
Setting various passwords, such as a password for securing access to the BIOS user interface and preventing malicious users from booting the system from unauthorized portable storage devices, or a password for booting the system

Hardware monitoring

A modern BIOS setup screen often features a "PC Health Status" or a "Hardware Monitoring" tab, which directly interfaces with a hardware monitor chip of the mainboard. This makes it possible to monitor CPU and chassis temperature and the voltage provided by the power supply unit, as well as to monitor and control the speed of the fans connected to the motherboard.

Once the system is booted, hardware monitoring and computer fan control are normally done directly by the hardware monitor chip itself, which can be a separate chip, interfaced through I2C or SMBus, or come as a part of a Super I/O solution, interfaced through Industry Standard Architecture (ISA) or Low Pin Count (LPC). Some operating systems, like NetBSD with envsys and OpenBSD with sysctl hw.sensors, feature integrated interfacing with hardware monitors.
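On Linux, the same sensors are typically exposed through the hwmon sysfs class. A minimal sketch, assuming a sensor node at /sys/class/hwmon/hwmon0/temp1_input (the exact path varies from board to board); hwmon temperature files report millidegrees Celsius:

/* Read one temperature from the Linux hwmon sysfs interface. */
#include <stdio.h>

int main(void) {
    const char *path = "/sys/class/hwmon/hwmon0/temp1_input"; /* assumed node */
    FILE *f = fopen(path, "r");
    long milli_c;

    if (!f) {
        perror(path);
        return 1;
    }
    if (fscanf(f, "%ld", &milli_c) != 1) {
        fclose(f);
        return 1;
    }
    fclose(f);
    /* hwmon temp*_input values are in millidegrees Celsius */
    printf("temp1 = %ld.%03ld degrees C\n", milli_c / 1000, milli_c % 1000);
    return 0;
}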
However, in some circumstances the BIOS also provides the underlying information about hardware monitoring through ACPI, in which case the operating system may use ACPI to perform hardware monitoring.

Reprogramming

In modern PCs the BIOS is stored in rewritable EEPROM or NOR flash memory, allowing the contents to be replaced and modified. This rewriting of the contents is sometimes termed flashing. It can be done by a special program, usually provided by the system's manufacturer, or at POST, with a BIOS image in a hard drive or USB flash drive. A file containing such contents is sometimes termed "a BIOS image". A BIOS might be reflashed in order to upgrade to a newer version to fix bugs, provide improved performance, or support newer hardware.

Hardware

The original IBM PC BIOS (and cassette BASIC) was stored on mask-programmed read-only memory (ROM) chips in sockets on the motherboard. ROMs could be replaced, but not altered, by users. To allow for updates, many compatible computers used re-programmable BIOS memory devices such as EPROM, EEPROM and later flash memory (usually NOR flash) devices. According to Robert Braver, the president of the BIOS manufacturer Micro Firmware, flash BIOS chips became common around 1995 because the electrically erasable PROM (EEPROM) chips are cheaper and easier to program than standard ultraviolet-erasable PROM (EPROM) chips. Flash chips are programmed (and re-programmed) in-circuit, while EPROM chips need to be removed from the motherboard for re-programming. BIOS versions are upgraded to take advantage of newer versions of hardware and to correct bugs in previous revisions of BIOSes.

Beginning with the IBM AT, PCs supported a hardware clock settable through BIOS. It had a century bit which allowed for manually changing the century when the year 2000 happened. Most BIOS revisions created in 1995 and nearly all BIOS revisions in 1997 supported the year 2000 by setting the century bit automatically when the clock rolled past midnight, 31 December 1999.

The first flash chips were attached to the ISA bus. Starting in 1998, the BIOS flash moved to the LPC bus, following a new standard implementation known as "firmware hub" (FWH). In 2005, the BIOS flash memory moved to the SPI bus.

The size of the BIOS, and the capacity of the ROM, EEPROM, or other media it may be stored on, has increased over time as new features have been added to the code; BIOS versions now exist with sizes up to 32 megabytes. For contrast, the original IBM PC BIOS was contained in an 8 KB mask ROM. Some modern motherboards include even larger NAND flash memory ICs on board which are capable of storing whole compact operating systems, such as some Linux distributions. For example, some ASUS notebooks included Splashtop OS embedded into their NAND flash memory ICs. However, the idea of including an operating system along with BIOS in the ROM of a PC is not new; in the 1980s, Microsoft offered a ROM option for MS-DOS, and it was included in the ROMs of some PC clones such as the Tandy 1000 HX.

Another type of firmware chip was found on the IBM PC AT and early compatibles. In the AT, the keyboard interface was controlled by a microcontroller with its own programmable memory. On the IBM AT, that was a 40-pin socketed device, while some manufacturers used an EPROM version of this chip.
This controller was also assigned the A20 gate function to manage memory above the one-megabyte range; occasionally an upgrade of this "keyboard BIOS" was necessary to take advantage of software that could use upper memory.

The BIOS may contain components such as the Memory Reference Code (MRC), which is responsible for memory initialization (e.g. SPD and memory timings initialization). The flash memory that holds a modern BIOS commonly also contains Intel Management Engine or AMD Platform Security Processor firmware.

Vendors and products

IBM published the entire listings of the BIOS for its original PC, PC XT, PC AT, and other contemporary PC models, in an appendix of the IBM PC Technical Reference Manual for each machine type. The effect of the publication of the BIOS listings is that anyone can see exactly what a definitive BIOS does and how it does it.

In May 1984 Phoenix Software Associates released its first ROM-BIOS, which enabled OEMs to build essentially fully compatible clones without having to reverse-engineer the IBM PC BIOS themselves, as Compaq had done for the Portable, helping fuel the growth in the PC-compatibles industry and sales of non-IBM versions of DOS. The first American Megatrends (AMI) BIOS was released in 1986.

New standards grafted onto the BIOS are usually without complete public documentation or any BIOS listings. As a result, it is not as easy to learn the intimate details about the many non-IBM additions to BIOS as about the core BIOS services.

Most PC motherboard suppliers licensed a BIOS "core" and toolkit from a commercial third party, known as an "independent BIOS vendor" or IBV. The motherboard manufacturer then customized this BIOS to suit its own hardware. For this reason, updated BIOSes are normally obtained directly from the motherboard manufacturer. Major IBVs have included American Megatrends (AMI), Insyde Software, Phoenix Technologies, and Byosoft. Microid Research and Award Software were acquired by Phoenix Technologies in 1998; Phoenix later phased out the Award brand name. General Software, which was also acquired by Phoenix in 2007, sold BIOS for embedded systems based on Intel processors.

The open-source community has increased its efforts to develop a replacement for proprietary BIOSes and their future incarnations with an open-sourced counterpart, through the libreboot, coreboot and OpenBIOS/Open Firmware projects. AMD provided product specifications for some chipsets, and Google is sponsoring the project. Motherboard manufacturer Tyan offers coreboot next to the standard BIOS with their Opteron line of motherboards.

Security

EEPROM and flash memory chips are advantageous because they can be easily updated by the user; it is customary for hardware manufacturers to issue BIOS updates to upgrade their products, improve compatibility and remove bugs. However, this advantage carries the risk that an improperly executed or aborted BIOS update could render the computer or device unusable. To avoid these situations, more recent BIOSes use a "boot block": a portion of the BIOS which runs first and must be updated separately. This code verifies that the rest of the BIOS is intact (using hash checksums or other methods) before transferring control to it. If the boot block detects any corruption in the main BIOS, it will typically warn the user that a recovery process must be initiated by booting from removable media (floppy, CD or USB flash drive), so the user can try flashing the BIOS again.
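The integrity test a boot block runs over the main BIOS image can be as simple as a checksum or CRC. The following sketch uses a plain CRC-32 purely as an illustration, not as any vendor's actual verification scheme:

/* Illustration of a boot-block style integrity check: hash the main
 * BIOS region and compare it against a stored expected value before
 * transferring control to it. */
#include <stdint.h>
#include <stdio.h>

static uint32_t crc32_of(const uint8_t *p, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;              /* standard CRC-32 */
    for (size_t i = 0; i < len; i++) {
        crc ^= p[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1)));
    }
    return ~crc;
}

int main(void) {
    /* Stand-in for the main BIOS region; real code would read flash. */
    static const uint8_t main_bios[4096] = { 0x90, 0xEA, 0xF0, 0xFF };
    uint32_t expected = crc32_of(main_bios, sizeof main_bios); /* stored value */
    uint32_t actual   = crc32_of(main_bios, sizeof main_bios);

    if (actual == expected)
        puts("main BIOS intact: transferring control");
    else
        puts("corruption detected: entering recovery (flash from removable media)");
    return 0;
}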
Some motherboards have a backup BIOS (sometimes referred to as DualBIOS boards) to recover from BIOS corruptions.

There are at least five known BIOS attack viruses, two of which were for demonstration purposes. The first one found in the wild was Mebromi, targeting Chinese users.

The first BIOS virus was BIOS Meningitis, which, instead of erasing BIOS chips, infected them. BIOS Meningitis was relatively harmless compared to a virus like CIH.

The second BIOS virus was CIH, also known as the "Chernobyl Virus", which was able to erase flash ROM BIOS content on compatible chipsets. CIH appeared in mid-1998 and became active in April 1999. Often, infected computers could no longer boot, and people had to remove the flash ROM IC from the motherboard and reprogram it. CIH targeted the then-widespread Intel i430TX motherboard chipset and took advantage of the fact that the Windows 9x operating systems, also widespread at the time, allowed direct hardware access to all programs.

Modern systems are not vulnerable to CIH because of the variety of chipsets in use, which are incompatible with the Intel i430TX chipset, and because of other flash ROM IC types. There is also extra protection from accidental BIOS rewrites in the form of boot blocks which are protected from accidental overwrite, and of dual- and quad-BIOS-equipped systems which may, in the event of a crash, use a backup BIOS. Also, all modern operating systems, such as FreeBSD, Linux, macOS, and the Windows NT-based versions of Windows such as Windows 2000, Windows XP and newer, use a hardware abstraction layer and do not allow user-mode programs to have direct hardware access. As a result, as of 2008, CIH has become essentially harmless, at worst causing annoyance by infecting executable files and triggering antivirus software. Other BIOS viruses remain possible, however; since most Windows home users without Windows Vista/7's UAC run all applications with administrative privileges, a modern CIH-like virus could in principle still gain access to hardware without first using an exploit. The operating system OpenBSD prevents all users from having this access, and the grsecurity patch for the Linux kernel also prevents this direct hardware access by default, the difference being that an attacker would need a much more difficult kernel-level exploit or a reboot of the machine.

The third BIOS virus was a technique presented by John Heasman, principal security consultant for UK-based Next-Generation Security Software. In 2006, at the Black Hat Security Conference, he showed how to elevate privileges and read physical memory, using malicious procedures that replaced normal ACPI functions stored in flash memory.

The fourth BIOS virus was a technique called "Persistent BIOS infection". It appeared in 2009 at the CanSecWest Security Conference in Vancouver, and at the SyScan Security Conference in Singapore. Researchers Anibal Sacco and Alfredo Ortega, from Core Security Technologies, demonstrated how to insert malicious code into the decompression routines in the BIOS, allowing for nearly full control of the PC at start-up, even before the operating system is booted. The proof-of-concept does not exploit a flaw in the BIOS implementation, but only involves the normal BIOS flashing procedures. Thus, it requires physical access to the machine, or for the user to be root. Despite these requirements, Ortega underlined the profound implications of his and Sacco's discovery: "We can patch a driver to drop a fully working rootkit. We even have a little code that can remove or disable antivirus."
Mebromi is a trojan which targets computers with AwardBIOS, Microsoft Windows, and antivirus software from two Chinese companies: Rising Antivirus and Jiangmin KV Antivirus. Mebromi installs a rootkit which infects the master boot record.

In a December 2013 interview with 60 Minutes, Deborah Plunkett, Information Assurance Director for the US National Security Agency, claimed the NSA had uncovered and thwarted a possible BIOS attack by a foreign nation state, targeting the US financial system. The program cited anonymous sources alleging it was a Chinese plot. However, follow-up articles in The Guardian, The Atlantic, Wired and The Register refuted the NSA's claims.

Newer Intel platforms have Intel Boot Guard (IBG) technology enabled. This technology checks the BIOS digital signature at startup, with the IBG public key fused into the PCH. End users cannot disable this function.

Alternatives and successors

Unified Extensible Firmware Interface (UEFI) supplements the BIOS in many new machines. Initially written for the Intel Itanium architecture, UEFI is now available for x86 and ARM architecture platforms; the specification development is driven by the Unified EFI Forum, an industry Special Interest Group. EFI booting has been supported only in Microsoft Windows versions supporting GPT, the Linux kernel 2.6.1 and later, and macOS on Intel-based Macs. New PC hardware predominantly ships with UEFI firmware. The architecture of the rootkit safeguard can also prevent the system from running the user's own software changes, which makes UEFI controversial as a legacy BIOS replacement in the open hardware community. Also, Windows 11 requires UEFI to boot.

Other alternatives to the functionality of the "Legacy BIOS" in the x86 world include coreboot and libreboot.

Some servers and workstations use a platform-independent Open Firmware (IEEE-1275) based on the Forth programming language; it is included with Sun's SPARC computers, IBM's RS/6000 line, and other PowerPC systems such as the CHRP motherboards, along with the x86-based OLPC XO-1.

As of at least 2015, Apple has removed legacy BIOS support from MacBook Pro computers. As such, the BIOS utility no longer supports the legacy option, and prints "Legacy mode not supported on this system". In 2017, Intel announced that it would remove legacy BIOS support by 2020. Since 2019, new Intel platform OEM PCs no longer support the legacy option.

See also

Double boot
Extended System Configuration Data (ESCD)
Input/Output Control System
Advanced Configuration and Power Interface (ACPI)
Ralf Brown's Interrupt List (RBIL): interrupts, calls, interfaces, data structures, memory and port addresses, and processor opcodes for the x86 architecture
System Management BIOS (SMBIOS)
Unified Extensible Firmware Interface (UEFI)

Further reading

BIOS Disassembly Ninjutsu Uncovered, 1st edition, a freely available book in PDF format
More Power To Firmware, free bonus chapter to the Mac OS X Internals: A Systems Approach book
BC
BC most often refers to:

Before Christ, a calendar era based on the traditionally reckoned year of the birth of Jesus of Nazareth
British Columbia, the westernmost province of Canada
Baja California, a state of Mexico

BC may also refer to:

Arts and entertainment

"B.C.", a song by Sparks from the 1974 album Propaganda
B.C. (comic strip) by Johnny Hart, and one of its characters
BC (video game) by Lionhead Studios
BC The Archaeology of the Bible Lands, a BBC television series
Bullet Club, a professional wrestling stable

Businesses and organizations

Basilian Chouerite Order of Saint John the Baptist, an order of the Greek Catholic Church
BC Card, a Korean credit card company
Bella Center, a conference center in Copenhagen, Denmark
Brasseries du Cameroun, a brewery in Cameroon (also known as SABC)
Brunswick Corporation (NYSE ticker symbol BC)

Education

United States

Bakersfield College, a college in Bakersfield, California
Bellevue College, a college in Bellevue, Washington
Benedictine College, a college in Atchison, Kansas
Benedictine Military School, a high school in Savannah, Georgia
Bergen Catholic High School, a high school in Oradell, New Jersey
Boston College, a university in Chestnut Hill, Massachusetts
Boston College Eagles, its athletic teams
Brazosport College, a college in Lake Jackson, Texas
Broward College, a college in Fort Lauderdale, Florida

Worldwide

Baccalaureus or bc, a Bachelor's degree in the Netherlands
Baghdad College, a high school in Baghdad, Iraq
British Council

Science and technology

Backcrossing, a crossing of a hybrid with one of its parents, or a genetically similar individual
Backward compatibility, the ability of new software to work similarly to its predecessor
Ballistic coefficient, a measure of air drag on a projectile
Base curve radius, a parameter of a contact lens
Battle command, a military discipline
Bayonet cap, a standard light bulb connection
bc (programming language), an arbitrary-precision calculator language
Black carbon, a carbonaceous component of soot
Bliss bibliographic classification, a library cataloguing system
× Brassocattleya or Bc., an orchid genus
Buoyancy compensator (diving), a piece of scuba diving equipment

Transportation

NZR BC class, a type of steam locomotive
Skymark Airlines (IATA airline code BC)

Other uses

Bullcrap, a phrase denoting something worthless
"B.C.", nickname of Burr Chamberlain (1877–1933), American football player and coach
Baguio, a city in the Philippines, locally abbreviated as "B.C."
BC Powder, a brand of pain reliever
BookCrossing, a website that encourages leaving books in public places to be found by others

See also

BC Cygni, a red supergiant star that is one of the largest stars
Belaruskaja Čyhunka (BCh), the national railway company of Belarus
Blind carbon copy (Bcc:), the practice of sending an e-mail to multiple recipients without disclosing the complete list of recipients
Beatrix Potter
Helen Beatrix Potter (28 July 1866 – 22 December 1943) was an English writer, illustrator, natural scientist, and conservationist. She is best known for her children's books featuring animals, such as The Tale of Peter Rabbit, which was her first published work in 1902. Her books, including the 23 Tales, have sold more than 250 million copies. Potter was also a pioneer of merchandising: in 1903, Peter Rabbit was the first fictional character to be made into a patented stuffed toy, making him the oldest licensed character.

Born into an upper-middle-class household, Potter was educated by governesses and grew up isolated from other children. She had numerous pets and spent holidays in Scotland and the Lake District, developing a love of landscape, flora and fauna, all of which she closely observed and painted. Potter's study and watercolours of fungi led to her being widely respected in the field of mycology. In her thirties, Potter self-published the highly successful children's book The Tale of Peter Rabbit. Following this, Potter began writing and illustrating children's books full-time.

Potter wrote over sixty books, with the best known being her twenty-three children's tales. With the proceeds from the books and a legacy from an aunt, in 1905 Potter bought Hill Top Farm in Near Sawrey, a village in the Lake District. Over the following decades, she purchased additional farms to preserve the unique hill country landscape. In 1913, at the age of 47, she married William Heelis, a respected local solicitor from Hawkshead. Potter was also a prize-winning breeder of Herdwick sheep and a prosperous farmer keenly interested in land preservation. She continued to write and illustrate, and to design spin-off merchandise based on her children's books for British publisher Warne, until the duties of land management and her diminishing eyesight made it difficult to continue.

Potter died of pneumonia and heart disease on 22 December 1943 at her home in Near Sawrey at the age of 77, leaving almost all her property to the National Trust. She is credited with preserving much of the land that now constitutes the Lake District National Park. Potter's books continue to sell throughout the world in many languages, with her stories being retold in songs, films, ballet, and animations, and her life is depicted in two films and a television series.

Biography

Early life

Potter's family on both sides were from the Manchester area. They were English Unitarians, associated with dissenting Protestant congregations, influential in 19th century England, that affirmed the oneness of God and rejected the doctrine of the Trinity.

Potter's paternal grandfather, Edmund Potter, from Glossop in Derbyshire, owned what was then the largest calico printing works in England, and later served as a Member of Parliament. Potter's father, Rupert William Potter (1832–1914), was educated at Manchester College by the Unitarian philosopher James Martineau. He then trained as a barrister in London. Rupert practised law, specialising in equity law and conveyancing. He married Helen Leech (1839–1932) on 8 August 1863 at Hyde Unitarian Chapel, Gee Cross. Helen was the daughter of Jane Ashton (1806–1884) and John Leech, a wealthy cotton merchant and shipbuilder from Stalybridge. Helen's first cousins were siblings Harriet Lupton (née Ashton) and Thomas Ashton, 1st Baron Ashton of Hyde.
It was reported in July 2014 that Potter had personally given a number of her own original hand-painted illustrations to the two daughters of Arthur and Harriet Lupton, who were cousins to both Beatrix Potter and Catherine, Princess of Wales. Potter's parents lived comfortably at 2 Bolton Gardens, West Brompton, London, where Helen Beatrix was born on 28 July 1866 and her brother Walter Bertram on 14 March 1872. The house was destroyed in the Blitz. Bousfield Primary School now stands where the house once was. A blue plaque on the school building testifies to the former site of the Potter home. Both parents were artistically talented, and Rupert was an adept amateur photographer. Rupert had invested in the stock market, and by the early 1890s, he was extremely wealthy. Beatrix Potter was educated by three governesses, the last of whom was Annie Moore (née Carter), just three years older than Potter, who tutored Potter in German as well as acting as lady's companion. She and Potter remained friends throughout their lives, and Annie's eight children were the recipients of many of Potter's picture letters. It was Annie who later suggested that these letters might make good children's books. She and her younger brother Walter Bertram (1872–1918) grew up with few friends outside their large extended family. Her parents were artistic, interested in nature, and enjoyed the countryside. As children, Potter and Bertram had numerous small animals as pets which they observed closely and drew endlessly. In their schoolroom, Potter and Bertram kept a variety of small pets—mice, rabbits, a hedgehog and some bats, along with collections of butterflies and other insects—which they drew and studied. Potter was devoted to the care of her small animals, often taking them with her on long holidays. In most of the first fifteen years of her life, Potter spent summer holidays at Dalguise, an estate on the River Tay in Perthshire, Scotland. There she sketched and explored an area that nourished her imagination and her observation. Her first sketchbook from those holidays, kept at age 8, and dated 1875, is held at and has been digitised by the Victoria & Albert Museum, London. Potter and her brother were allowed great freedom in the country, and both children became adept students of natural history. In 1882, when Dalguise was no longer available, the Potters took their first summer holiday in the Lake District, at Wray Castle near Lake Windermere. Here Potter met Hardwicke Rawnsley, vicar of Wray and later the founding secretary of the National Trust, whose interest in the countryside and country life inspired the same in Potter and who was to have a lasting impact on her life. At about the age of 14, Potter began to keep a diary, written in a simple substitution cipher of her own devising. Her Journal was important to the development of her creativity, serving as both sketchbook and literary experiment. In tiny handwriting, she reported on society, recorded her impressions of art and artists, recounted stories and observed life around her. The Journal, deciphered and transcribed by Leslie Linder in 1958, does not provide an intimate record of her personal life, but it is an invaluable source for understanding a vibrant part of British society in the late 19th century. It describes Potter's maturing artistic and intellectual interests, her often amusing insights into the places she visited, and her unusual ability to observe nature and to describe it. 
Started in 1881, her journal ends in 1897 when her artistic and intellectual energies were absorbed in scientific study and in efforts to publish her drawings. Precocious but reserved and often bored, she was searching for more independent activities and wished to earn some money of her own while dutifully taking care of her parents, dealing with her especially demanding mother, and managing their various households.

Scientific illustrations and work in mycology

Beatrix Potter's parents did not discourage higher education. As was common in the Victorian era, women of her class were privately educated and rarely went to university.

Beatrix Potter was interested in every branch of natural science except astronomy. Botany was a passion for most Victorians and nature study was a popular enthusiasm. She collected fossils, studied archaeological artefacts from London excavations, and was interested in entomology. In all these areas, she drew and painted her specimens with increasing skill. By the 1890s, her scientific interests centred on mycology. First drawn to fungi because of their colours and evanescence in nature and her delight in painting them, her interest deepened after meeting Charles McIntosh, a revered naturalist and amateur mycologist, during a summer holiday in Dunkeld in Perthshire in 1892. He helped improve the accuracy of her illustrations, taught her taxonomy, and supplied her with live specimens to paint during the winter.

Rebuffed by William Thiselton-Dyer, the Director at Kew, because of her sex and her amateur status, Potter wrote up her conclusions and submitted a paper, On the Germination of the Spores of the Agaricineae, to the Linnean Society in 1897. It was introduced by the Kew mycologist George Massee because, as a female, Potter could not attend proceedings or read her paper. She subsequently withdrew it, realising that some of her samples were contaminated, but continued her microscopic studies for several more years. Her paper has only recently been rediscovered, along with the rich, artistic illustrations and drawings that accompanied it, and her work is only now being properly evaluated. Potter later gave her other mycological and scientific drawings to the Armitt Museum and Library in Ambleside, where mycologists still refer to them to identify fungi. There is also a collection of her fungus paintings at the Perth Museum and Art Gallery in Perth, Scotland, donated by Charles McIntosh. In 1967, the mycologist W. P. K. Findlay included many of Potter's beautifully accurate fungus drawings in his Wayside & Woodland Fungi, thereby fulfilling her desire to one day have her fungus drawings published in a book. In 1997, the Linnean Society issued a posthumous apology to Potter for the sexism displayed in its handling of her research.

Artistic and literary career

Potter's artistic and literary interests were deeply influenced by fairy tales and fantasy. She was a student of the classic fairy tales of Western Europe. As well as stories from the Old Testament, John Bunyan's The Pilgrim's Progress and Harriet Beecher Stowe's Uncle Tom's Cabin, she grew up with Aesop's Fables, the fairy tales of the Brothers Grimm and Hans Christian Andersen, Charles Kingsley's The Water Babies, the folk tales and mythology of Scotland, the German Romantics, Shakespeare, and the romances of Sir Walter Scott.
As a young child, before the age of eight, Edward Lear's A Book of Nonsense, including the much loved The Owl and the Pussycat, and Lewis Carroll's Alice in Wonderland had made their impression, although she later said of Alice that she was more interested in Tenniel's illustrations than what they were about. The Brer Rabbit stories of Joel Chandler Harris had been family favourites, and she later studied his Uncle Remus stories and illustrated them. She studied book illustration from a young age and developed her own tastes, but the work of the picture book triumvirate Walter Crane, Kate Greenaway and Randolph Caldecott, the last an illustrator whose work was later collected by her father, was a great influence. When she started to illustrate, she chose first the traditional rhymes and stories, "Cinderella", "Sleeping Beauty", "Ali Baba and the Forty Thieves", "Puss-in-boots", and "Red Riding Hood". However, most often her illustrations were fantasies featuring her own pets: mice, rabbits, kittens, and guinea pigs. In her teenage years, Potter was a regular visitor to the art galleries of London, particularly enjoying the summer and winter exhibitions at the Royal Academy in London. Her Journal reveals her growing sophistication as a critic as well as the influence of her father's friend, the artist Sir John Everett Millais, who recognised Potter's talent of observation. Although Potter was aware of art and artistic trends, her drawing and her prose style were uniquely her own. As a way to earn money in the 1890s, Potter printed Christmas cards of her own design, as well as cards for special occasions. These were her first commercially successful works as an illustrator. Mice and rabbits were the most frequent subject of her fantasy paintings. In 1890, the firm of Hildesheimer and Faulkner bought several of the drawings of her rabbit Benjamin Bunny to illustrate verses by Frederic Weatherly titled A Happy Pair. In 1893, the same printer bought several more drawings for Weatherly's Our Dear Relations, another book of rhymes, and the following year Potter sold a series of frog illustrations and verses for Changing Pictures, a popular annual offered by the art publisher Ernest Nister. Potter was pleased by this success and determined to publish her own illustrated stories. Whenever Potter went on holiday to the Lake District or Scotland, she sent letters to young friends, illustrating them with quick sketches. Many of these letters were written to the children of her former governess Annie Carter Moore, particularly to Moore's eldest son Noel, who was often ill. In September 1893, Potter was on holiday at Eastwood in Dunkeld, Perthshire. She had run out of things to say to Noel, and so she told him a story about "four little rabbits whose names were Flopsy, Mopsy, Cottontail and Peter". It became one of the most famous children's letters ever written and the basis of Potter's future career as a writer-artist-storyteller. In 1900, Potter revised her tale about the four little rabbits, and fashioned a dummy book of it – it has been suggested, in imitation of Helen Bannerman's 1899 bestseller The Story of Little Black Sambo. Unable to find a buyer for the work, she published it for family and friends at her own expense in December 1901. It was drawn in black and white with a coloured frontispiece. Rawnsley had great faith in Potter's tale, recast it in didactic verse, and made the rounds of the London publishing houses. 
Frederick Warne & Co had previously rejected the tale but, eager to compete in the booming small-format children's book market, reconsidered and accepted the "bunny book" (as the firm called it) following the recommendation of their prominent children's book artist L. Leslie Brooke. The firm declined Rawnsley's verse in favour of Potter's original prose, and Potter agreed to colour her pen and ink illustrations, choosing the new Hentschel three-colour process to reproduce her watercolours. On 2 October 1902, The Tale of Peter Rabbit was published and was an immediate success. It was followed the next year by The Tale of Squirrel Nutkin and The Tailor of Gloucester, which had also first been written as picture letters to the Moore children. Working with Norman Warne as her editor, Potter published two or three little books each year: 23 books in all. The last book in this format was Cecily Parsley's Nursery Rhymes in 1922, a collection of favourite rhymes. Although The Tale of Little Pig Robinson was not published until 1930, it had been written much earlier. Potter continued creating her little books until after the First World War, when her energies were increasingly directed toward her farming, sheep-breeding and land conservation.

The immense popularity of Potter's books was based on the lively quality of her illustrations, the non-didactic nature of her stories, the depiction of the rural countryside, and the imaginative qualities she lent to her animal characters.

Potter was also a canny businesswoman. As early as 1903, she made and patented a Peter Rabbit doll. It was followed by other "spin-off" merchandise over the years, including painting books, board games, wall-paper, figurines, baby blankets and china tea-sets. All were licensed by Frederick Warne & Co and earned Potter an independent income, as well as immense profits for her publisher.

In 1905, Potter and Norman Warne became unofficially engaged. Potter's parents objected to the match because Warne was "in trade" and thus not socially suitable. The engagement lasted only one month; Warne died of pernicious anaemia at age 37. That same year, Potter used some of her income and a small inheritance from an aunt to buy Hill Top Farm in Near Sawrey in the English Lake District near Windermere. Potter and Warne may have hoped that Hill Top Farm would be their holiday home, but after Warne's death, Potter went ahead with its purchase as she had always wanted to own that farm and live in "that charming village".

Country life and marriage

The tenant farmer John Cannon and his family agreed to stay on to manage the farm for her while she made physical improvements and learned the techniques of fell farming and of raising livestock, including pigs, cows and chickens; the following year she added sheep. Realising she needed to protect her boundaries, she sought advice from W. H. Heelis & Son, a local firm of solicitors with offices in nearby Hawkshead. With William Heelis acting for her, she bought contiguous pasture, and in 1909 the Castle Farm across the road from Hill Top Farm. She visited Hill Top at every opportunity, and her books written during this period (such as The Tale of Ginger and Pickles, about the local shop in Near Sawrey, and The Tale of Mrs. Tittlemouse, a wood mouse) reflect her increasing participation in village life and her delight in country living. Owning and managing these working farms required routine collaboration with the widely respected William Heelis.
By the summer of 1912, Heelis had proposed marriage and Potter had accepted, although she did not immediately tell her parents, who once again disapproved because Heelis was only a country solicitor. Potter and Heelis were married on 15 October 1913 in London at St Mary Abbots in Kensington. The couple moved immediately to Near Sawrey, residing at Castle Cottage, the renovated farmhouse on the 34-acre Castle Farm. Hill Top remained a working farm but was now remodelled to allow for the tenant family and Potter's private studio and workshop. At last her own woman, Potter settled into the partnerships that shaped the rest of her life: her country solicitor husband and his large family, her farms, the Sawrey community and the predictable rounds of country life. The Tale of Jemima Puddle-Duck and The Tale of Tom Kitten are representative of Hill Top Farm and her farming life, and reflect her happiness with her country life.

Rupert Potter died in 1914 and, with the outbreak of World War I, Potter, now a wealthy woman, persuaded her mother to move to the Lake District and found a property for her to rent in Sawrey. Finding life in Sawrey dull, Helen Potter soon moved to Lindeth Howe (now a 34-bedroomed hotel), a large house the Potters had previously rented for the summer in Bowness, on the other side of Lake Windermere. Potter continued to write stories for Frederick Warne & Co and fully participated in country life. She established a Nursing Trust for local villages and served on various committees and councils responsible for footpaths and other rural issues.

Sheep farming

Soon after acquiring Hill Top Farm, Potter became keenly interested in the breeding and raising of Herdwick sheep, the indigenous fell sheep. In 1923 she bought a large sheep farm in the Troutbeck Valley called Troutbeck Park Farm, formerly a deer park, restoring its land with thousands of Herdwick sheep. This established her as one of the major Herdwick sheep farmers in the county. She was admired by her shepherds and farm managers for her willingness to experiment with the latest biological remedies for the common diseases of sheep, and for her employment of the best shepherds, sheep breeders, and farm managers.

By the late 1920s, Potter and her Hill Top farm manager Tom Storey had made a name for their prize-winning Herdwick flock, which took many prizes at the local agricultural shows, where Potter was often asked to serve as a judge. In 1942 she became President-elect of the Herdwick Sheepbreeders' Association, the first time a woman had been elected, but died before taking office.

Lake District conservation

Potter had been a disciple of the land conservation and preservation ideals of her long-time friend and mentor, Canon Hardwicke Rawnsley, the first secretary and founding member of the National Trust for Places of Historic Interest or Natural Beauty. According to the National Trust, "she supported the efforts of the National Trust to preserve not just the places of extraordinary beauty but also those heads of valleys and low grazing lands that would be irreparably ruined by development." Potter was also an authority on the traditional Lakeland crafts and period furniture, as well as local stonework. She restored and preserved the farms that she bought or managed, making sure that each farmhouse had in it a piece of antique Lakeland furniture. Potter was interested in preserving not only the Herdwick sheep but also the way of life of fell farming.
In 1930 the Heelises became partners with the National Trust in buying and managing the fell farms included in the large Monk Coniston Estate. The estate was composed of many farms spread over a wide area of north-western Lancashire, including the Tarn Hows. Potter was the de facto estate manager for the Trust for seven years until the National Trust could afford to repurchase most of the property from her. Potter's stewardship of these farms earned her high regard, but she was not without her critics, not the least of which were her contemporaries who felt she used her wealth and the position of her husband to acquire properties in advance of their being made public. She was notable in observing the problems of afforestation, preserving the intact grazing lands, and husbanding the quarries and timber on these farms. All her farms were stocked with Herdwick sheep and frequently with Galloway cattle.

Later life

Potter continued to write stories and to draw, although mostly for her own pleasure. Her books in the late 1920s included the semi-autobiographical The Fairy Caravan, a fanciful tale set in her beloved Troutbeck fells. It was published only in the US during Potter's lifetime, and not until 1952 in the UK. Sister Anne, Potter's version of the story of Bluebeard, was written for her American readers, but illustrated by Katharine Sturges. A final folktale, Wag by Wall, was published posthumously by The Horn Book Magazine in 1944.

Potter was a generous patron of the Girl Guides, whose troops she allowed to make their summer encampments on her land, and whose company she enjoyed as an older woman.

Potter and William Heelis enjoyed a happy marriage of thirty years, continuing their farming and preservation efforts throughout the hard days of World War II. Although they were childless, Potter played an important role in William's large family, particularly enjoying her relationship with several nieces whom she helped educate, and giving comfort and aid to her husband's brothers and sisters.

Potter died of complications from pneumonia and heart disease on 22 December 1943 at Castle Cottage, and her remains were cremated at Carleton Crematorium, Blackpool. She left nearly all her property to the National Trust, including over 4,000 acres of land, sixteen farms, cottages and herds of cattle and Herdwick sheep. Hers was the largest gift at that time to the National Trust, and it enabled the preservation of the land now included in the Lake District National Park and the continuation of fell farming. The central office of the National Trust in Swindon was named "Heelis" in 2005 in her memory. William Heelis continued his stewardship of their properties and of her literary and artistic work for the twenty months he survived her. When he died in August 1945, he left the remainder to the National Trust.

Legacy

Potter left almost all the original illustrations for her books to the National Trust. The copyright to her stories and merchandise was then given to her publisher Frederick Warne & Co, now a division of the Penguin Group. On 1 January 2014, the copyright expired in the UK and other countries with a 70-years-after-death limit. Hill Top Farm was opened to the public by the National Trust in 1946; her artwork was displayed there until 1985, when it was moved to William Heelis's former law offices in Hawkshead, also owned by the National Trust as the Beatrix Potter Gallery. Potter gave her folios of mycological drawings to the Armitt Library and Museum in Ambleside before her death.
The Tale of Peter Rabbit is owned by Frederick Warne and Company, The Tailor of Gloucester by the Tate Gallery, and The Tale of the Flopsy Bunnies by the British Museum. In 1903, Potter created the first Peter Rabbit soft toy and registered him at the Patent Office in London, making Peter the oldest licensed fictional character. Erica Wagner of The Times states, "Beatrix Potter was the first to recognise that content—as we now call the stuff that makes up a book or a film—was only the beginning. In 1903, Peter hopped outside his pages to become a patented soft toy, which gave him the distinction of being not only Mr. McGregor's mortal enemy, but also becoming the first licensed character". Nicholas Tucker in The Guardian writes, "she was the first author to license fictional characters to a range of toys and household objects still on sale today". The largest public collection of her letters and drawings is the Leslie Linder Bequest and Leslie Linder Collection at the Victoria and Albert Museum in London. (Linder was the collector who, after five years of work, finally transcribed Potter's early journal, originally written in code.) In the United States, the largest public collections are those in the Rare Book Department of the Free Library of Philadelphia and the Cotsen Children's Library at Princeton University. In 2015 a manuscript for an unpublished book was discovered by Jo Hanks, a publisher at Penguin Random House Children's Books, in the Victoria and Albert Museum archive. The book, The Tale of Kitty-in-Boots, with illustrations by Quentin Blake, was published on 1 September 2016 to mark the 150th anniversary of Potter's birth. Also in 2016, Peter Rabbit and other Potter characters featured on a series of UK postage stamps issued by the Royal Mail. In 2017, The Art of Beatrix Potter: Sketches, Paintings, and Illustrations by Emily Zach was published after San Francisco publisher Chronicle Books decided to mark the 150th anniversary of Potter's birth by showing that she was "far more than a 19th-century weekend painter. She was an artist of astonishing range." In December 2017, the asteroid 13975 Beatrixpotter, discovered by Belgian astronomer Eric Elst in 1992, was named in her memory. In 2022, the exhibition Beatrix Potter: Drawn to Nature was held at the Victoria and Albert Museum. Research for the exhibition identified the man's court waistcoat of c. 1780 that inspired Potter's sketch in The Tailor of Gloucester.

Analysis

There are many interpretations of Potter's literary work, the sources of her art, and her life and times. These include critical evaluations of her corpus of children's literature and Modernist interpretations by Humphrey Carpenter and Katherine Chandler. Judy Taylor's That Naughty Rabbit: Beatrix Potter and Peter Rabbit (rev. 2002) tells the story of the first publication and its many editions. Potter's country life and her farming have been discussed in the work of Susan Denyer and other authors in the publications of The National Trust, such as Beatrix Potter at Home in the Lake District (2004). Potter's work as a scientific illustrator and her work in mycology are discussed in Linda Lear's books Beatrix Potter: A Life in Nature (2006) and Beatrix Potter: The Extraordinary Life of a Victorian Genius (2008).
Adaptations

In 1971 the ballet film The Tales of Beatrix Potter was released, directed by Reginald Mills, set to music by John Lanchbery, with choreography by Frederick Ashton, and performed in character costume by members of the Royal Ballet and the Royal Opera House orchestra. The ballet of the same name has since been performed by other dance companies around the world. In 1992, Potter's children's book The Tale of Benjamin Bunny was featured in the film Lorenzo's Oil. Potter is also featured in Susan Wittig Albert's series of light mysteries, The Cottage Tales of Beatrix Potter. The first of the eight-book series, Tale of Hill Top Farm (2004), deals with Potter's life in the Lake District and the village of Near Sawrey between 1905 and 1913.

In film

In 1982, the BBC produced The Tale of Beatrix Potter. This dramatisation of her life was written by John Hawkesworth, directed by Bill Hayes, and starred Holly Aird and Penelope Wilton as the young and adult Potter, respectively. The World of Peter Rabbit and Friends, a TV series based on nine of her twenty-four stories, starred actress Niamh Cusack as Beatrix Potter. In 1993, Weston Woods Studios made an almost hour-long non-fiction film, Beatrix Potter: Artist, Storyteller, and Countrywoman, narrated by Lynn Redgrave. In 2006, Chris Noonan directed Miss Potter, a biographical film of Potter's life focusing on her early career and romance with her editor Norman Warne. The film stars Renée Zellweger, Ewan McGregor and Emily Watson. On 9 February 2018, Columbia Pictures released Peter Rabbit, directed by Will Gluck, based on the work by Potter. The character Bea, played by Rose Byrne, is a re-imagined version of Potter. A sequel, Peter Rabbit 2: The Runaway, was released in 2021. On 24 December 2020, Sky One premiered Roald & Beatrix: The Tail of the Curious Mouse, a made-for-television drama film inspired by the true story of a six-year-old Roald Dahl meeting his idol Potter. Set in 1922, the film was written by Abigail Wilson, directed by David Kerr, and starred Dawn French as Beatrix Potter, Rob Brydon as William Heelis and Jessica Hynes as Sofie Dahl. Filming took place in Wales (the birthplace of Dahl, French and Brydon) during the COVID-19 pandemic. The production incorporates live action, stop motion and puppetry. The DVD was released on 26 April 2021.

Publications

The 23 Tales

The Tale of Peter Rabbit (privately printed, 250 copies, 1901)
The Tale of Peter Rabbit (1902)
The Tale of Squirrel Nutkin (1903)
The Tailor of Gloucester (1903)
The Tale of Benjamin Bunny (1904)
The Tale of Two Bad Mice (1904)
The Tale of Mrs. Tiggy-Winkle (1905)
The Tale of the Pie and the Patty-Pan (1905)
The Tale of Mr. Jeremy Fisher (1906)
The Story of a Fierce Bad Rabbit (1906)
The Story of Miss Moppet (1906)
The Tale of Tom Kitten (1907)
The Tale of Jemima Puddle-Duck (1908)
The Tale of Samuel Whiskers or, The Roly-Poly Pudding (1908)
The Tale of the Flopsy Bunnies (1909)
The Tale of Ginger and Pickles (1909)
The Tale of Mrs. Tittlemouse (1910)
The Tale of Timmy Tiptoes (1911)
The Tale of Mr. Tod (1912)
The Tale of Pigling Bland (1913)
Appley Dapply's Nursery Rhymes (1917)
The Tale of Johnny Town-Mouse (1918)
Cecily Parsley's Nursery Rhymes (1922)
The Tale of Little Pig Robinson (1930)

Other books

Peter Rabbit's Painting Book (1911)
Tom Kitten's Painting Book (1917)
Jemima Puddle-Duck's Painting Book (1925)
Peter Rabbit's Almanac for 1929 (1928)
The Fairy Caravan (1929)
Sister Anne (illustrated by Katharine Sturges) (1932)
Wag-by-Wall (decorations by J. J. Lankes) (1944)
The Tale of the Faithful Dove (illustrated by Marie Angel) (1955, 1970)
The Sly Old Cat (written 1906; first published 1971)
The Tale of Tuppenny (illustrated by Marie Angel) (1973)
The Tale of Kitty-in-Boots (illustrated by Quentin Blake) (2016)
Red Riding Hood (illustrated by Helen Oxenbury) (2019)

Further reading

Potter, Beatrix (rev. 1989). The Journal of Beatrix Potter, 1881–1897, transcribed from her code writings by Leslie Linder. F. Warne & Co.

External links

Beatrix Potter's fossils and her interest in geology – B. G. Gardiner
Beatrix Potter at the Encyclopedia of Fantasy
Collection of Potter materials at Victoria and Albert Museum
Beatrix Potter online feature at the University of Pittsburgh School of Information Sciences
Beatrix Potter Society, UK
Exhibition of Beatrix Potter's Picture Letters at the Morgan Library
Beatrix Potter Collection (digitized images from the Free Library of Philadelphia)
https://en.wikipedia.org/wiki/Liberal%20Party%20%28UK%29
Liberal Party (UK)
The Liberal Party was one of the two major political parties in the United Kingdom, along with the Conservative Party, in the 19th and early 20th centuries. Beginning as an alliance of Whigs, free trade–supporting Peelites and reformist Radicals in the 1850s, by the end of the 19th century it had formed four governments under William Gladstone. Despite being divided over the issue of Irish Home Rule, the party returned to government in 1905 and won a landslide victory in the 1906 general election. Under prime ministers Henry Campbell-Bannerman (1905–1908) and H. H. Asquith (1908–1916), the Liberal Party passed reforms that created a basic welfare state. Although Asquith was the party leader, its dominant figure was David Lloyd George. Asquith was overwhelmed by the wartime role of coalition prime minister and Lloyd George replaced him in late 1916, but Asquith remained as Liberal Party leader. The split between Lloyd George's breakaway faction and Asquith's official Liberal Party badly weakened the party. The coalition government of Lloyd George was increasingly dominated by the Conservative Party, which finally deposed him in 1922. By the end of the 1920s, the Labour Party had replaced the Liberals as the Conservatives' main rival. The Liberal Party went into decline after 1918 and by the 1950s won as few as six seats at general elections. Apart from notable by-election victories, its fortunes did not improve significantly until it formed the SDP–Liberal Alliance with the newly formed Social Democratic Party (SDP) in 1981. At the 1983 general election, the Alliance won over a quarter of the vote, but only 23 of the 650 seats it contested. At the 1987 general election, its share of the vote fell below 23%, and the Liberals and the SDP merged in 1988 to form the Social and Liberal Democrats (SLD), which was renamed the Liberal Democrats the following year. A splinter group reconstituted the Liberal Party in 1989. Prominent intellectuals associated with the Liberal Party include the philosopher John Stuart Mill, the economist John Maynard Keynes and the social planner William Beveridge. Winston Churchill authored Liberalism and the Social Problem (1909), praised by Henry William Massingham as "an impressive and convincing argument" and widely considered the movement's bible.

History

Origins

The Liberal Party grew out of the Whigs, who had their origins in an aristocratic faction in the reign of Charles II, and the early-19th-century Radicals. The Whigs were in favour of reducing the power of the Crown and increasing the power of Parliament. Although their motives in this were originally to gain more power for themselves, the more idealistic Whigs gradually came to support an expansion of democracy for its own sake. The great figures of reformist Whiggery were Charles James Fox (died 1806) and his disciple and successor Earl Grey. After decades in opposition, the Whigs returned to power under Grey in 1830 and carried the First Reform Act in 1832. The Reform Act was the climax of Whiggism, but it also brought about the Whigs' demise. The admission of the middle classes to the franchise and to the House of Commons led eventually to the development of a systematic middle-class liberalism and the end of Whiggery, although for many years reforming aristocrats held senior positions in the party.
In the years after Grey's retirement, the party was led first by Lord Melbourne, a fairly traditional Whig; then by Lord John Russell, the son of a Duke but a crusading radical; and then by Lord Palmerston, a renegade Irish Tory and essentially a conservative, although one capable of radical gestures. As early as 1839, Russell had adopted the name "Liberals", but in reality his party was a loose coalition of Whigs in the House of Lords and Radicals in the Commons. The leading Radicals were John Bright and Richard Cobden, who represented the manufacturing towns which had gained representation under the Reform Act. They favoured social reform, personal liberty, reducing the powers of the Crown and the Church of England (many Liberals were Nonconformists), avoidance of war and foreign alliances (which were bad for business) and above all free trade. For a century, free trade remained the one cause which could unite all Liberals. In 1841, the Liberals lost office to the Conservatives under Sir Robert Peel, but their period in opposition was short because the Conservatives split over the repeal of the Corn Laws, a free trade issue, and a faction known as the Peelites (though not Peel himself, who died soon after) defected to the Liberal side. This allowed ministries led by Russell, Palmerston and the Peelite Lord Aberdeen to hold office for most of the 1850s and 1860s. A leading Peelite was William Gladstone, who was a reforming Chancellor of the Exchequer in most of these governments. The formal foundation of the Liberal Party is traditionally traced to 1859 and the formation of Palmerston's second government. However, the Whig-Radical amalgam could not become a true modern political party while it was dominated by aristocrats, and it was not until the departure of the "Two Terrible Old Men", Russell and Palmerston, that Gladstone could become the first leader of the modern Liberal Party. This was brought about by Palmerston's death in 1865 and Russell's retirement in 1868. After a brief Conservative government (during which the Second Reform Act was passed by agreement between the parties), Gladstone won a huge victory at the 1868 election and formed the first Liberal government. The establishment of the party as a national membership organisation came with the foundation of the National Liberal Federation in 1877. The philosopher John Stuart Mill was also a Liberal MP from 1865 to 1868.

Gladstone era

For the next thirty years Gladstone and Liberalism were synonymous. William Gladstone served as prime minister four times (1868–74, 1880–85, 1886 and 1892–94). His financial policies, based on the notion of balanced budgets, low taxes and laissez-faire, were suited to a developing capitalist society, but they could not respond effectively as economic and social conditions changed. Called the "Grand Old Man" later in life, Gladstone was always a dynamic popular orator who appealed strongly to the working class and to the lower middle class. Deeply religious, Gladstone brought a new moral tone to politics, with his evangelical sensibility and his opposition to aristocracy. His moralism often angered his upper-class opponents (including Queen Victoria), and his heavy-handed control split the Liberal Party. In foreign policy, Gladstone was in general against foreign entanglements, but he did not resist the realities of imperialism. For example, he ordered the occupation of Egypt by British forces in the 1882 Anglo-Egyptian War.
His goal was to create a European order based on co-operation rather than conflict and on mutual trust instead of rivalry and suspicion; the rule of law was to supplant the reign of force and self-interest. This Gladstonian concept of a harmonious Concert of Europe was opposed to, and ultimately defeated by, a Bismarckian system of manipulated alliances and antagonisms. As prime minister from 1868 to 1874, Gladstone headed a Liberal Party which was a coalition of Peelites like himself, Whigs and Radicals. He was now a spokesman for "peace, economy and reform". One major achievement was the Elementary Education Act of 1870, which provided England with an adequate system of elementary schools for the first time. He also secured the abolition of the purchase of commissions in the British Army and of religious tests for admission to Oxford and Cambridge; the introduction of the secret ballot in elections; the legalisation of trade unions; and the reorganisation of the judiciary in the Judicature Act. Regarding Ireland, the major Liberal achievements were land reform, where he ended centuries of landlord oppression, and the disestablishment of the (Anglican) Church of Ireland through the Irish Church Act 1869. In the 1874 general election, held during a sharp economic recession, Gladstone was defeated by the Conservatives under Benjamin Disraeli. He formally resigned as Liberal leader and was succeeded by the Marquess of Hartington, but he soon changed his mind and returned to active politics. He strongly disagreed with Disraeli's pro-Ottoman foreign policy, and in 1880 he conducted the first outdoor mass-election campaign in Britain, known as the Midlothian campaign. The Liberals won a large majority in the 1880 election. Hartington ceded his place and Gladstone resumed office.

Ireland and Home Rule

Among the consequences of the Third Reform Act (1884) was the extension of the vote to many Irish Catholics. In the 1885 general election the Irish Parliamentary Party held the balance of power in the House of Commons and demanded Irish Home Rule as the price of support for a continued Gladstone ministry. Gladstone personally supported Home Rule, but a strong Liberal Unionist faction led by Joseph Chamberlain, along with the last of the Whigs, Hartington, opposed it. The Irish Home Rule bill proposed to offer all owners of Irish land a chance to sell to the state at a price equal to 20 years' purchase of the rents and to allow tenants to purchase the land. Irish nationalist reaction was mixed, Unionist opinion was hostile, and the election addresses during the 1886 election revealed English radicals to be against the bill also. Among the Liberal rank and file, several Gladstonian candidates disowned the bill, reflecting fears at the constituency level that the interests of the working people were being sacrificed to finance a costly rescue operation for the landed élite. Further, Home Rule had not been promised in the Liberals' election manifesto, and so the impression was given that Gladstone was buying Irish support in a rather desperate manner to hold on to power. The result was a catastrophic split in the Liberal Party, and heavy defeat in the 1886 election at the hands of Lord Salisbury, who was supported by the breakaway Liberal Unionist Party. There was a final weak Gladstone ministry in 1892, but it also was dependent on Irish support and failed to get Irish Home Rule through the House of Lords.

Newcastle Programme

Historically, the aristocracy was divided between Conservatives and Liberals.
However, when Gladstone committed to Home Rule for Ireland, Britain's upper classes largely abandoned the Liberal Party, giving the Conservatives a large permanent majority in the House of Lords. Following the Queen's lead, high society in London largely ostracised Home Rulers, and Liberal clubs were badly split. Joseph Chamberlain took a major element of upper-class supporters out of the party over the Irish issue and into a third party, the Liberal Unionists, which collaborated with and eventually merged into the Conservative Party. The Gladstonian Liberals in 1891 adopted the Newcastle Programme, which included Home Rule for Ireland, disestablishment of the Church of England in Wales, tighter controls on the sale of liquor, major extension of factory regulation and various democratic political reforms. The Programme had a strong appeal to the Nonconformist middle-class Liberal element, which felt liberated by the departure of the aristocracy.

Relations with trade unions

A major long-term consequence of the Third Reform Act was the rise of Lib-Lab candidates. The Act split all county constituencies (which were represented by multiple MPs) into single-member constituencies, roughly corresponding to population patterns. With the foundation of the Labour Party not to come until 1906, many trade unions allied themselves with the Liberals. In areas with working-class majorities, in particular coal-mining areas, Lib-Lab candidates were popular, and they received sponsorship and endorsement from trade unions. In the first election after the Act was passed (1885), thirteen Lib-Lab MPs were elected, up from two in 1874. The Third Reform Act also facilitated the demise of the Whig old guard: in two-member constituencies, it had been common to pair a Whig and a Radical under the Liberal banner; after the Act, fewer former Whigs were selected as candidates.

Reform policies

A broad range of interventionist reforms were introduced by the 1892–1895 Liberal government. Amongst other measures, standards of accommodation and of teaching in schools were improved, factory inspection was made more stringent, and ministers used their powers to increase the wages and reduce the working hours of large numbers of male workers employed by the state. Historian Walter L. Arnstein concludes: "Notable as the Gladstonian reforms had been, they had almost all remained within the nineteenth-century Liberal tradition of gradually removing the religious, economic, and political barriers that prevented men of varied creeds and classes from exercising their individual talents in order to improve themselves and their society. As the third quarter of the century drew to a close, the essential bastions of Victorianism still held firm: respectability; a government of aristocrats and gentlemen now influenced not only by middle-class merchants and manufacturers but also by industrious working people; a prosperity that seemed to rest largely on the tenets of laissez-faire economics; and a Britannia that ruled the waves and many a dominion beyond."

After Gladstone

Gladstone finally retired in 1894. His support for Home Rule had deeply divided the party, which lost its upper- and upper-middle-class base while keeping support among Protestant Nonconformists and the Celtic fringe. Historian R. C. K. Ensor reports that after 1886, the main Liberal Party was deserted by practically the entire Whig peerage and the great majority of the upper-class and upper-middle-class members. High-prestige London clubs that had a Liberal base were deeply split.
Ensor notes that "London society, following the known views of the Queen, practically ostracized home rulers." The new Liberal leader was the ineffectual Lord Rosebery, who led the party to a heavy defeat in the 1895 general election.

Liberal factions

The Liberal Party lacked a unified ideological base in 1906. It contained numerous contradictory and hostile factions, such as imperialists and supporters of the Boers; near-socialists and laissez-faire classical liberals; suffragettes and opponents of women's suffrage; antiwar elements and supporters of the military alliance with France. Nonconformists, Protestants outside the Anglican fold, were a powerful element, dedicated to opposing the established church in terms of education and taxation. However, the Nonconformists were losing support in society at large and played a lesser role in party affairs after 1900. The party, furthermore, also included Irish Catholics and secularists from the labour movement. Many Conservatives (including Winston Churchill) had recently protested against their own party's moves towards high tariffs by switching to the anti-tariff Liberal camp, but it was unclear how many old Conservative traits they brought along, especially on military and naval issues. The middle-class business, professional and intellectual communities were generally Liberal strongholds, although some old aristocratic families played important roles as well. The working-class element was moving rapidly toward the newly emerging Labour Party. One uniting element was widespread agreement on the use of politics and Parliament as a device to upgrade and improve society and to reform politics. All Liberals were outraged when Conservatives used their majority in the House of Lords to block reform legislation. In the House of Lords, the Liberals had lost most of their members, who in the 1890s "became Conservative in all but name." The government could, however, threaten to have the unwilling king create enough new Liberal peers to overcome the Conservative majority, and that threat did prove decisive in the battle for the dominance of the Commons over the Lords in 1911.

Rise of New Liberalism

The late nineteenth century saw the emergence of New Liberalism within the Liberal Party, which advocated state intervention as a means of guaranteeing freedom and removing obstacles to it such as poverty and unemployment. The policies of the New Liberalism are now known as social liberalism. The New Liberals included intellectuals like L. T. Hobhouse and John A. Hobson. They saw individual liberty as something achievable only under favourable social and economic circumstances. In their view, the poverty, squalor and ignorance in which many people lived made it impossible for freedom and individuality to flourish. New Liberals believed that these conditions could be ameliorated only through collective action coordinated by a strong, welfare-oriented and interventionist state. After the historic 1906 victory, the Liberal Party introduced multiple reforms on a range of issues, including health insurance, unemployment insurance and pensions for elderly workers, thereby laying the groundwork for the future British welfare state. Some proposals failed, such as licensing fewer pubs or rolling back Conservative educational policies. The People's Budget of 1909, championed by David Lloyd George and fellow Liberal Winston Churchill, introduced unprecedented taxes on the wealthy in Britain and radical social welfare programmes to the country's policies.
In the Liberal camp, as one study notes, "the Budget was on the whole enthusiastically received." It was the first budget with the expressed intent of redistributing wealth among the public. It imposed increased taxes on luxuries, liquor, tobacco, high incomes and land – taxation that fell heavily on the rich. The new money was to be made available for new welfare programmes as well as new battleships. In 1911 Lloyd George succeeded in putting through Parliament his National Insurance Act, making provision for sickness and invalidity, and this was followed by his Unemployment Insurance Act.

Liberal zenith

The Liberals languished in opposition for a decade while the coalition of Salisbury and Chamberlain held power. The 1890s were marred by infighting between the three principal successors to Gladstone: party leader William Harcourt, former prime minister Lord Rosebery and Gladstone's personal secretary, John Morley. This intrigue finally led Harcourt and Morley to resign their positions in 1898, as they continued to be at loggerheads with Rosebery over Irish Home Rule and issues relating to imperialism. Replacing Harcourt as party leader was Sir Henry Campbell-Bannerman. Harcourt's resignation briefly muted the turmoil in the party, but the beginning of the Second Boer War soon nearly broke the party apart, with Rosebery and a circle of supporters, including the important future Liberal figures H. H. Asquith, Edward Grey and Richard Burdon Haldane, forming a clique dubbed the Liberal Imperialists that supported the government in the prosecution of the war. On the other side, more radical members of the party formed a Pro-Boer faction that denounced the conflict and called for an immediate end to hostilities. Quickly rising to prominence among the Pro-Boers was David Lloyd George, a relatively new MP and a master of rhetoric, who took advantage of having a national stage on which to speak out on a controversial issue and so made his name in the party. Harcourt and Morley also sided with this group, though with slightly different aims. Campbell-Bannerman tried to keep these forces together at the head of a moderate Liberal rump, but in 1901 he delivered a speech on the government's "methods of barbarism" in South Africa that pulled him further to the left and nearly tore the party in two. The party was saved after Salisbury's retirement in 1902, when his successor, Arthur Balfour, pushed a series of unpopular initiatives such as the Education Act 1902 and Joseph Chamberlain called for a new system of protectionist tariffs. Campbell-Bannerman was able to rally the party around the traditional Liberal platform of free trade and land reform and led it to the greatest election victory in its history. This would prove the last time the Liberals won a majority in their own right. Although he presided over a large majority, Sir Henry Campbell-Bannerman was overshadowed by his ministers, most notably H. H. Asquith at the Exchequer, Edward Grey at the Foreign Office, Richard Burdon Haldane at the War Office and David Lloyd George at the Board of Trade. Campbell-Bannerman retired in 1908 and died soon after. He was succeeded by Asquith, who stepped up the government's radicalism. Lloyd George succeeded Asquith at the Exchequer and was in turn succeeded at the Board of Trade by Winston Churchill, a recent defector from the Conservatives.
One observer, the leading American liberal politician William Jennings Bryan, was enthusiastic about the new Liberal administration, writing: "Great Britain has recently experienced one of the greatest political revolutions she has ever known. The conservative party, with Mr. Balfour, one of the ablest of modern scholars, at its head, and with Mr. Joseph Chamberlain, a powerful orator and a forceful political leader, as its most conspicuous champion, had won a sweeping victory after the Boer war, and this victory, following a long lease of power, led the Conservatives to believe themselves invincible. They assumed, as parties made confident by success often do, that they are indispensable to the nation and paid but little attention to the warnings and threats of the Liberals. One mistake after another, however, alienated the voters and the special elections two years ago began to show a falling off in the Conservative strength, and when the general election was held last fall the Liberals rolled up a majority of something like two hundred in the House of Commons. A new ministry was formed from among the ablest men of the party — a ministry of radical and progressive men seldom equaled in moral purpose and intellectual strength." The 1906 general election also represented a shift to the left by the Liberal Party. According to Rosemary Rees, almost half of the Liberal MPs elected in 1906 were supportive of the "New Liberalism" (which advocated government action to improve people's lives), while claims were made that "five-sixths of the Liberal party are left wing." Other historians have questioned the extent of this leftward shift; according to Robert C. Self, only between 50 and 60 of the 400 Liberal MPs in the parliamentary party after 1906 were Social Radicals, with a core of 20 to 30. Nevertheless, important junior offices were held in the cabinet by what Duncan Tanner has termed "genuine New Liberals, Centrist reformers, and Fabian collectivists," and much legislation was pushed through by the Liberals in government. This included the regulation of working hours, National Insurance and welfare. A political battle erupted over the People's Budget, which was rejected by the House of Lords and for which the government obtained an electoral mandate at the January 1910 election. The election resulted in a hung parliament, with the government left dependent on the Irish Nationalists. Although the Lords now passed the budget, the government wished to curtail their power to block legislation. Asquith was required by King George V to fight a second general election in December 1910 (whose result was little changed from that in January) before the king agreed, if necessary, to create hundreds of Liberal peers. Faced with that threat, the Lords voted to give up their veto power and allowed the passage of the Parliament Act 1911. As the price of Irish support, Asquith was now forced to introduce a third Home Rule bill in 1912. Since the House of Lords no longer had the power to block the bill, but only to delay it for two years, it was due to become law in 1914. The Unionist Ulster Volunteers, led by Sir Edward Carson, launched a campaign of opposition that included the threat of a provisional government and armed resistance in Ulster. The Ulster Protestants had the full support of the Conservatives, whose leader, Bonar Law, was of Ulster-Scots descent.
Government plans to deploy troops into Ulster had to be cancelled after the threat of mass resignation of their commissions by army officers in March 1914 (see the Curragh Incident). Ireland seemed to be on the brink of civil war when the First World War broke out in August 1914. Asquith had offered the Six Counties (later to become Northern Ireland) an opt-out from Home Rule for six years (i.e. until after two more general elections were likely to have taken place), but the Nationalists refused to agree to a permanent partition of Ireland. Historian George Dangerfield has argued that the multiplicity of crises between 1910 and 1914, political and industrial, so weakened the Liberal coalition before the war broke out that it marked the Strange Death of Liberal England. However, most historians date the collapse to the crisis of the First World War.

Decline

The Liberal Party might have survived a short war, but the totality of the Great War called for measures that the party had long rejected. The result was the permanent destruction of the ability of the Liberal Party to lead a government. Historian Robert Blake notes that it was the Liberals, not the Conservatives, who needed the moral outrage over Belgium to justify going to war, while the Conservatives called for intervention from the start of the crisis on the grounds of realpolitik and the balance of power. Lloyd George and Churchill were zealous supporters of the war and gradually forced the old peace-orientated Liberals out. Asquith was blamed for the poor British performance in the first year. Since the Liberals ran the war without consulting the Conservatives, there were heavy partisan attacks; however, even Liberal commentators, including the leading Liberal newspaper the Manchester Guardian, were dismayed by the lack of energy at the top. At the time, public opinion was intensely hostile, both in the media and in the street, to any young man in civilian garb, who risked being labelled a slacker. Asquith's Liberal government was brought down in May 1915, due in particular to a crisis over inadequate artillery shell production and the protest resignation of Admiral Fisher over the disastrous Gallipoli Campaign against Turkey. Reluctant to face an election, Asquith formed a new coalition government on 25 May, with the majority of the new cabinet coming from his own Liberal Party and the Unionist (Conservative) party, along with a token Labour representation. The new government lasted a year and a half and was the last time Liberals controlled the government. The analysis of historian A. J. P. Taylor is that the British people were deeply divided over numerous issues, but that on all sides there was growing distrust of the Asquith government. There was no agreement whatsoever on wartime issues. The leaders of the two parties realised that embittered debates in Parliament would further undermine popular morale, and so the House of Commons did not once discuss the war before May 1915. The 1915 coalition fell apart at the end of 1916, when the Conservatives withdrew their support from Asquith and gave it instead to Lloyd George, who became prime minister at the head of a new coalition largely made up of Conservatives. Asquith and his followers moved to the opposition benches in Parliament and the Liberal Party was deeply split once again.
Lloyd George as a Liberal heading a Conservative coalition

Lloyd George remained a Liberal all his life, but he abandoned many standard Liberal principles in his crusade to win the war at all costs. He insisted on strong government controls over business, as opposed to the laissez-faire attitudes of traditional Liberals. In 1915–16 he had insisted on conscription of young men into the Army, a position that deeply troubled his old colleagues. That brought him and a few like-minded Liberals into the new coalition on ground long occupied by Conservatives. There was no more planning for world peace or liberal treatment of Germany, nor discomfort with aggressive and authoritarian measures of state power. More deadly to the future of the party, says historian Trevor Wilson, was its repudiation by ideological Liberals, who decided sadly that it no longer represented their principles. Finally, the presence of the vigorous new Labour Party on the left gave a new home to voters disenchanted with the Liberal performance. The last majority Liberal government in Britain had been elected in 1906. The years preceding the First World War were marked by worker strikes and civil unrest and saw many violent confrontations between civilians and the police and armed forces. Other issues of the period included women's suffrage and the Irish Home Rule movement. After the carnage of 1914–1918, the democratic reforms of the Representation of the People Act 1918 instantly tripled the number of people entitled to vote in Britain, from seven to twenty-one million. The Labour Party benefited most from this huge change in the electorate, forming its first minority government in 1924. In the 1918 general election, a "khaki election", Lloyd George, hailed as "the Man Who Won the War", led his coalition to the polls. Lloyd George and the Conservative leader Bonar Law wrote a joint letter of support to candidates to indicate they were considered the official Coalition candidates; this "coupon", as it became known, was issued against many sitting Liberal MPs, often to devastating effect, though not against Asquith himself. The coalition won a massive victory: Labour increased its position slightly, but the Asquithian Liberals were decimated. Those remaining Liberal MPs who were opposed to the Coalition Government went into opposition under the parliamentary leadership of Sir Donald Maclean, who also became Leader of the Opposition. Asquith, who had appointed Maclean, remained as overall Leader of the Liberal Party even though he lost his seat in 1918. Asquith returned to Parliament in 1920 and resumed the leadership. Between 1919 and 1923, the anti-Lloyd George Liberals were called Asquithian Liberals, Wee Free Liberals or Independent Liberals. Lloyd George was increasingly under the influence of the rejuvenated Conservative Party, which numerically dominated the coalition. In 1922, the Conservative backbenchers rebelled against the continuation of the coalition, citing in particular Lloyd George's plan for war with Turkey in the Chanak Crisis and his corrupt sale of honours. He resigned as prime minister and was succeeded by Bonar Law. At the 1922 and 1923 elections the Liberals won barely a third of the vote and only a quarter of the seats in the House of Commons, as many radical voters abandoned the divided Liberals and went over to Labour. In 1922, Labour became the official opposition.
A reunion of the two warring factions took place in 1923, when the new Conservative prime minister Stanley Baldwin committed his party to protective tariffs, causing the Liberals to reunite in support of free trade. The party gained ground in the 1923 general election but made most of its gains from the Conservatives whilst losing ground to Labour – a sign of the party's direction for many years to come. The party remained the third largest in the House of Commons, but the Conservatives had lost their majority. There was much speculation and fear about the prospect of a Labour government and comparatively little about a Liberal one, even though the Liberals could plausibly have presented an experienced team of ministers, compared to Labour's almost complete lack of experience, as well as offering a middle ground that could obtain support from both Conservatives and Labour in crucial Commons divisions. However, rather than trying to force the opportunity to form a Liberal government, Asquith decided to allow Labour the chance of office, in the belief that it would prove incompetent and so set the stage for a revival of Liberal fortunes at Labour's expense. It was a fatal error. Labour was determined to destroy the Liberals and become the sole party of the left. Ramsay MacDonald was forced into a snap election in 1924 and, although his government was defeated, he achieved his objective of virtually wiping the Liberals out: many more radical voters now moved to Labour, whilst moderate middle-class Liberal voters concerned about socialism moved to the Conservatives. The Liberals were reduced to a mere forty seats in Parliament, only seven of which had been won against candidates from both parties, and none of these formed a coherent area of Liberal survival. The party seemed finished, and during this period some Liberals, such as Churchill, went over to the Conservatives while others went over to Labour. Several Labour ministers of later generations, such as Michael Foot and Tony Benn, were the sons of Liberal MPs. Asquith finally resigned as Liberal leader in 1926 (he died in 1928). Lloyd George, now party leader, began a drive to produce coherent policies on many key issues of the day. In the 1929 general election, he made a final bid to return the Liberals to the political mainstream, with an ambitious programme of state stimulation of the economy called We Can Conquer Unemployment!, largely written for him by the Liberal economist John Maynard Keynes. The Liberal Party stood in Northern Ireland for the first and only time in the 1929 general election, gaining 17% of the vote but winning no seats. Nationally the Liberals gained ground, but once again mostly at the Conservatives' expense, whilst also losing seats to Labour. Indeed, the urban areas of the country suffering heavily from unemployment, which might have been expected to respond the most to the radical economic policies of the Liberals, instead gave the party its worst results. By contrast, most of the party's seats were won either due to the absence of a candidate from one of the other parties or in rural areas on the Celtic fringe, where local evidence suggests that economic ideas were at best peripheral to the electorate's concerns. The Liberals now found themselves with 59 members, holding the balance of power in a Parliament where Labour was the largest party but lacked an overall majority.
Lloyd George offered a degree of support to the Labour government in the hope of winning concessions, including electoral reform to introduce the alternative vote, but this support was to prove bitterly divisive as the Liberals increasingly split among those seeking to gain what Liberal goals they could, those who preferred a Conservative government and those who preferred a Labour one.

Splits over the National Government

A group of Liberal MPs led by Sir John Simon opposed the Liberal Party's support for the minority Labour government. They preferred to reach an accommodation with the Conservatives. In 1931 MacDonald's Labour government fell apart in response to the Great Depression. MacDonald agreed to lead a National Government of all parties, which passed a budget to deal with the financial crisis. When few Labour MPs backed the National Government, it became clear that the Conservatives made up the clear majority of government supporters. They then forced MacDonald to call a general election. Lloyd George called for the party to leave the National Government, but only a few MPs and candidates followed him. The majority, led by Sir Herbert Samuel, decided to contest the election as part of the government. The bulk of Liberal MPs thus supported the government in two groups: the Liberal Nationals (officially the "National Liberals" after 1947), led by Simon and also known as "Simonites", and the "Samuelites" or "official Liberals", led by Samuel, who remained the official party. Both groups secured about 34 MPs but proceeded to diverge even further after the election, with the Liberal Nationals remaining supporters of the government throughout its life. There was to be a succession of discussions about them rejoining the Liberals, but these usually foundered on the issues of free trade and continued support for the National Government. The one significant reunification came in 1946, when the Liberal and Liberal National party organisations in London merged. The National Liberals, as they were called by then, were gradually absorbed into the Conservative Party, finally merging in 1968. The official Liberals found themselves a tiny minority within a government committed to protectionism, and slowly they found this to be a position they could not support. In early 1932 it was agreed to suspend the principle of collective responsibility to allow the Liberals to oppose the introduction of tariffs. Later in 1932 the Liberals resigned their ministerial posts over the introduction of the Ottawa Agreement on Imperial Preference. However, they remained sitting on the government benches supporting it in Parliament, though in the country local Liberal activists bitterly opposed the government. Finally, in late 1933, the Liberals crossed the floor of the House of Commons and went into complete opposition. By this point their number of MPs was severely depleted. In the 1935 general election, just 17 Liberal MPs were elected, along with Lloyd George and three followers as independent Liberals. Immediately after the election the two groups reunited, though Lloyd George declined to play much of a formal role in his old party. Over the next ten years there would be further defections as MPs deserted to either the Liberal Nationals or Labour. Yet there were a few recruits, such as Clement Davies, who had deserted to the National Liberals in 1931 but returned to the party during World War II and would lead it after the war.
Near extinction

Samuel had lost his seat in the 1935 election and the leadership of the party fell to Sir Archibald Sinclair. With many traditional domestic Liberal policies now regarded as irrelevant, he focused the party on opposition both to the rise of Fascism in Europe and to the appeasement foreign policy of the National Government, arguing that intervention was needed, in contrast to Labour's calls for pacifism. Despite the party's weaknesses, Sinclair gained a high profile as he sought to recall the Midlothian Campaign and once more revitalise the Liberals as the party of a strong foreign policy. In 1940, the Liberals joined Churchill's wartime coalition government, with Sinclair serving as Secretary of State for Air, the last British Liberal to hold Cabinet-rank office for seventy years. However, it was a sign of the party's lack of importance that it was not included in the War Cabinet; some leading party members founded Radical Action, a group which called for Liberal candidates to break the wartime electoral pact. At the 1945 general election, Sinclair and many of his colleagues lost their seats to both Conservatives and Labour, and the party returned just 12 MPs to Westminster; this was just the beginning of the decline. The 1950 general election saw the Liberals return just nine MPs. Another general election was called in 1951, and the Liberals were left with just six MPs, all but one of whom were aided by the fact that the Conservatives refrained from fielding candidates in those constituencies. In 1957, this total fell to five when one of the Liberal MPs died and the subsequent by-election was lost to the Labour Party, which had selected the former Liberal Deputy Leader Megan Lloyd George as its candidate. The Liberal Party seemed close to extinction. During this low period, it was often joked that Liberal MPs could hold meetings in the back of one taxi.

Liberal revival

Through the 1950s and into the 1960s the Liberals survived only because a handful of constituencies in rural Scotland and Wales clung to their Liberal traditions, whilst in two English towns, Bolton and Huddersfield, local Liberals and Conservatives agreed to each contest only one of the town's two seats. Jo Grimond, for example, who became Leader of the Liberal Party in 1956, was MP for the remote Orkney and Shetland islands. Under his leadership a Liberal revival began, marked by the Orpington by-election of March 1962, which was won by Eric Lubbock; there, the Liberals won a seat in the London suburbs for the first time since 1935. The Liberals became the first of the major British political parties to advocate British membership of the European Economic Community. Grimond also sought an intellectual revival of the party, seeking to position it as a non-socialist radical alternative to the Conservative government of the day. In particular he canvassed the support of young post-war university students and recent graduates, appealing to younger voters in a way that many of his recent predecessors had not and asserting a new strand of Liberalism for the post-war world. The new middle-class suburban generation began to find the Liberals' policies attractive again. Under Grimond (who retired in 1967) and his successor, Jeremy Thorpe, the Liberals regained the status of a serious third force in British politics, polling up to 20% of the vote but unable to break the duopoly of Labour and Conservative and win more than fourteen seats in the Commons.
An additional problem was competition in the Liberal heartlands of Scotland and Wales from the Scottish National Party and Plaid Cymru, both of which grew as electoral forces from the 1960s onwards. Although Emlyn Hooson held on to the seat of Montgomeryshire upon Clement Davies' death in 1962, the party lost five Welsh seats between 1950 and 1966. In September 1966, the Welsh Liberal Party formed its own state party, moving the Liberal Party into a fully federal structure. In local elections, Liverpool remained a Liberal stronghold, with the party taking a plurality of seats in the elections to the new Liverpool Metropolitan Borough Council in 1973. On 26 July 1973, the party won two by-elections on the same day, in the Isle of Ely (with Clement Freud) and Ripon (with David Austick). In the February 1974 general election, the Conservative government of Edward Heath won a plurality of votes cast, but the Labour Party gained a plurality of seats. The Conservatives were unable to form a government, as the Ulster Unionist MPs refused to support them after the Sunningdale Agreement on Northern Ireland. The Liberals obtained 6.1 million votes, the most the party would ever achieve, and now held the balance of power in the Commons. The Conservatives offered Thorpe the Home Office if he would join a coalition government with Heath. Thorpe was personally in favour, but the party insisted it would agree only on a clear government commitment to introducing proportional representation (PR) and a change of prime minister. The former was unacceptable to Heath's cabinet and the latter to Heath personally, so the talks collapsed. Instead, a minority Labour government was formed under Harold Wilson, but with no formal support from Thorpe. In the October 1974 general election, the Liberals' total vote slipped back slightly (and declined in each of the next three elections) and the Labour government won a wafer-thin majority. Thorpe was subsequently forced to resign after allegations that he had attempted to have his homosexual lover murdered by a hitman. The party's new leader, David Steel, negotiated the Lib–Lab pact with Wilson's successor as prime minister, James Callaghan. Under this pact, the Liberals would support the government in crucial votes in exchange for some influence over policy. The agreement lasted from 1977 to 1978, but it proved mostly fruitless, for two reasons: the Liberals' key demand of PR was rejected by most Labour MPs, whilst the contacts between Liberal spokespersons and Labour ministers often proved detrimental, such as between Treasury spokesperson John Pardoe and Chancellor of the Exchequer Denis Healey, who were mutually antagonistic.

Alliance, Liberal Democrats and reconstituted Liberal Party

The Conservative Party under the leadership of Margaret Thatcher won the 1979 general election, placing the Labour Party back in opposition and pushing the Liberals back to the margins. In 1981, defectors from a moderate faction of the Labour Party, led by the former Cabinet ministers Roy Jenkins, David Owen and Shirley Williams, founded the Social Democratic Party (SDP). The new party and the Liberals quickly formed the SDP–Liberal Alliance, which for a while polled as high as 50% in the opinion polls and appeared capable of winning the next general election. Indeed, Steel was so confident of an Alliance victory that he told the 1981 Liberal conference, "Go back to your constituencies, and prepare for government!".
However, the Alliance was overtaken in the polls by the Conservatives in the aftermath of the Falklands War, and at the 1983 general election the Conservatives were re-elected by a landslide, with Labour once again forming the opposition. While the SDP–Liberal Alliance came close to Labour in terms of votes (a share of more than 25%), it had only 23 MPs compared to Labour's 209. The Alliance's support was spread out across the country and was not concentrated in enough areas to translate into seats. In the 1987 general election, the Alliance's share of the vote fell slightly, leaving it with 22 MPs. In the election's aftermath, Steel proposed a merger of the two parties. Most SDP members voted in favour of the merger, but the SDP leader David Owen objected and continued to lead a "rump" SDP. In March 1988, the Liberal Party and the Social Democratic Party merged to create the Social and Liberal Democrats, renamed the Liberal Democrats in October 1989. Over two-thirds of Liberal members joined the merged party, along with all sitting MPs. Steel and the SDP leader Robert Maclennan served briefly as interim joint leaders of the merged party. A group of Liberal opponents of the merger with the Social Democrats, including Michael Meadowcroft (the former Liberal MP for Leeds West) and Paul Wiggin (who served on Peterborough City Council as a Liberal), continued with a new party organisation under the name of the "Liberal Party". Meadowcroft joined the Liberal Democrats in 2007, but the Liberal Party as reconstituted in 1989 continues to hold council seats and field candidates in Westminster parliamentary elections. None of the nineteen Liberal candidates at the 2019 general election achieved 5% of the vote, and all lost their deposits.

Ideology

During the 19th century, the Liberal Party was broadly in favour of what would today be called classical liberalism, supporting laissez-faire economic policies such as free trade and minimal government interference in the economy (a doctrine usually termed Gladstonian liberalism after the Victorian-era Liberal prime minister William Gladstone). The Liberal Party favoured social reform, personal liberty, reducing the powers of the Crown and the Church of England (many Liberals were Nonconformists) and an extension of the electoral franchise. Sir William Harcourt, a prominent Liberal politician in the Victorian era, said this about liberalism in 1872: "If there be any party which is more pledged than another to resist a policy of restrictive legislation, having for its object social coercion, that party is the Liberal party. (Cheers.) But liberty does not consist in making others do what you think right. (Hear, hear.) The difference between a free Government and a Government which is not free is principally this—that a Government which is not free interferes with everything it can, and a free Government interferes with nothing except what it must. A despotic Government tries to make everybody do what it wishes; a Liberal Government tries, as far as the safety of society will permit, to allow everybody to do as he wishes. It has been the tradition of the Liberal party consistently to maintain the doctrine of individual liberty. It is because they have done so that England is the place where people can do more what they please than in any other country in the world. [...]
It is this practice of allowing one set of people to dictate to another set of people what they shall do, what they shall think, what they shall drink, when they shall go to bed, what they shall buy, and where they shall buy it, what wages they shall get and how they shall spend them, against which the Liberal party have always protested. The political terms "modern", "progressive" or "new" Liberalism began to appear in the mid-to-late 1880s and became increasingly common to denote the tendency in the Liberal Party to regard an increased role for the state as more important than the classical liberal stress on self-help and freedom of choice. By the early 20th century, the Liberals' stance began to shift towards "New Liberalism", what would today be called social liberalism: a belief in personal liberty combined with support for government intervention to provide social welfare. This shift was best exemplified by the Liberal government of H. H. Asquith and his Chancellor David Lloyd George, whose Liberal reforms in the early 1900s created a basic welfare state. David Lloyd George adopted a programme at the 1929 general election entitled We Can Conquer Unemployment!, although by this stage the Liberals had declined to third-party status. The Liberal Party, as expressed in the Liberal Yellow Book, now regarded opposition to state intervention as a characteristic of right-wing extremists. After nearly becoming extinct in the 1940s and 1950s, the Liberal Party revived its fortunes somewhat under the leadership of Jo Grimond in the 1960s by positioning itself as a radical centrist, non-socialist alternative to the Conservative and Labour Party governments of the time. Religious alignment Since 1660, nonconformist Protestants have played a major role in English politics. Relatively few MPs were Dissenters, but the Dissenters were a major voting bloc in many areas, such as the East Midlands. They were very well organised and highly motivated, and they largely won over the Whigs and Liberals to their cause. Down to the 1830s, Dissenters demanded the removal of the political and civil disabilities that applied to them (especially those in the Test and Corporation Acts). The Anglican establishment strongly resisted until 1828. Numerous reforms of voting rights, especially that of 1832, increased the political power of Dissenters. They demanded an end to compulsory church rates, under which local taxes went only to Anglican churches. They finally achieved the end of religious tests for university degrees in 1905. Gladstone brought the majority of Dissenters around to supporting Home Rule for Ireland, putting the dissenting Protestants in league with the Irish Roman Catholics in an otherwise unlikely alliance. The Dissenters gave significant support to moralistic issues, such as temperance and sabbath enforcement. The nonconformist conscience, as it was called, was repeatedly called upon by Gladstone to support his moralistic foreign policy. In election after election, Protestant ministers rallied their congregations to the Liberal ticket. In Scotland, the Presbyterians played a similar role to the Nonconformist Methodists, Baptists and other groups in England and Wales. By the 1820s, the different Nonconformists, including Wesleyan Methodists, Baptists, Congregationalists and Unitarians, had formed the Committee of Dissenting Deputies and agitated for repeal of the highly restrictive Test and Corporation Acts.
These Acts excluded Nonconformists from holding civil or military office or attending Oxford or Cambridge, compelling them to set up their own Dissenting Academies privately. The Tories tended to be in favour of these Acts, and so the Nonconformist cause was linked closely to the Whigs, who advocated civil and religious liberty. After the Test and Corporation Acts were repealed in 1828, all the Nonconformists elected to Parliament were Liberals. Nonconformists were angered by the Education Act 1902, which integrated Church of England denominational schools into the state system and provided for their support from taxes. John Clifford formed the National Passive Resistance Committee, and by 1906 over 170 Nonconformists had gone to prison for refusing to pay school taxes. They included 60 Primitive Methodists, 48 Baptists, 40 Congregationalists and 15 Wesleyan Methodists. The political strength of Dissent faded sharply after 1920 with the secularisation of British society in the 20th century. The rise of the Labour Party reduced the Liberal Party's strongholds to the nonconformist and remote "Celtic Fringe", where the party survived by an emphasis on localism and historic religious identity, thereby neutralising much of the class pressure on behalf of the Labour movement. Meanwhile, the Anglican Church was a bastion of strength for the Conservative Party. On the Irish issue, the Anglicans strongly supported unionism. Increasingly after 1850, the Roman Catholic element in England and Scotland was composed of recent immigrants from Ireland, who largely voted for the Irish Parliamentary Party until its collapse in 1918.
Liberal leaders
Liberal Leaders in the House of Lords
Granville George Leveson-Gower, 2nd Earl Granville (1859–1865)
John Russell, 1st Earl Russell (1865–1868)
Granville George Leveson-Gower, 2nd Earl Granville (1868–1891)
John Wodehouse, 1st Earl of Kimberley (1891–1894)
Archibald Philip Primrose, 5th Earl of Rosebery (1894–1896)
John Wodehouse, 1st Earl of Kimberley (1896–1902)
John Spencer, 5th Earl Spencer (1902–1905)
George Robinson, 1st Marquess of Ripon (1905–1908)
Robert Crewe-Milnes, 1st Marquess of Crewe (1908–1923)
Edward Grey, 1st Viscount Grey of Fallodon (1923–1924)
William Lygon, 7th Earl Beauchamp (1924–1931)
Rufus Isaacs, 1st Marquess of Reading (1931–1936)
Robert Crewe-Milnes, 1st Marquess of Crewe (1936–1944)
Herbert Samuel, 1st Viscount Samuel (1944–1955)
Philip Rea, 2nd Baron Rea (1955–1967)
Frank Byers, Baron Byers (1967–1984)
Nancy Seear, Baroness Seear (1984–1989)
Liberal Leaders in the House of Commons
Henry John Temple, 3rd Viscount Palmerston (1859–1865)
William Gladstone (1865–1875)
Spencer Cavendish, 8th Duke of Devonshire (1875–1880)
William Gladstone (1880–1894)
Sir William Harcourt (1894–1898)
Sir Henry Campbell-Bannerman (1899–1908)
H. H. Asquith (1908–1916)
Leaders of the Liberal Party
H. H. Asquith, 1st Earl of Oxford and Asquith from 1925 (1916–1926)
Donald Maclean, Acting Leader (1919–1920)
David Lloyd George (1926–1931)
Sir Herbert Samuel (1931–1935)
Sir Archibald Sinclair (1935–1945)
Clement Davies (1945–1956)
Jo Grimond (1956–1967)
Jeremy Thorpe (1967–1976)
Jo Grimond, Interim Leader (1976)
David Steel (1976–1988)
Deputy Leaders of the Liberal Party in the House of Commons
Herbert Samuel (1929–1931)
Archibald Sinclair (1931–1935)
Post vacant (1935–1940)
Percy Harris (1940–1945)
Post vacant (1945–1949)
Megan Lloyd George (1949–1951)
Post vacant (1951–1962)
Donald Wade (1962–1964)
Post vacant (1964–1979)
John Pardoe (1976–1979)
Post vacant (1979–1985)
Alan Beith (1985–1988)
Deputy Leaders of the Liberal Party in the House of Lords
Eric Drummond, 16th Earl of Perth (1946–1951)
Walter Layton, 1st Baron Layton (1952–1955)
Post vacant (1955–1965)
Gladwyn Jebb, 1st Baron Gladwyn (1965–1988)
Liberal Party front bench team members: 1945–1956, 1956–1967, 1967–1976
Electoral performance
Notes
See also
:Category:Liberal Party (UK) MPs
List of Liberal Party (UK) MPs
Liberalism in the United Kingdom
Liberal Democrats
List of United Kingdom Liberal Party Leaders
List of United Kingdom Whig and allied party leaders (1801–59)
List of Liberal Chief Whips
President of the Liberal Party
List of UK Liberal Party general election manifestos
References
Further reading
Adelman, Paul. The Decline of the Liberal Party 1910–1931 (2nd ed. Routledge, 2014).
Bentley, Michael. The Climax of Liberal Politics: British Liberalism in Theory and Practice, 1868–1918 (1987).
Brack, Duncan; Ingham, Robert; Little, Tony, eds. British Liberal Leaders (2015).
Campbell, John. Lloyd George: The Goat in the Wilderness, 1922–31 (1977).
Clarke, P. F. "The Electoral Position of the Liberal and Labour Parties, 1910–1914." English Historical Review 90#357 (1975): 828–836, in JSTOR.
Cook, Chris. A Short History of the Liberal Party, 1900–2001 (6th ed. Basingstoke: Palgrave, 2002).
Cregier, Don M. "The Murder of the British Liberal Party." The History Teacher 3#4 (1970), pp. 27–36, online edition.
Cross, Colin. The Liberals in Power, 1905–1914 (1963).
David, Edward. "The Liberal Party Divided 1916–1918." Historical Journal 13#3 (1970), pp. 509–32, online: http://www.jstor.org/stable/2637886.
Dangerfield, George. The Strange Death of Liberal England (1935), a famous classic, online free.
Dutton, David. A History of the Liberal Party Since 1900 (2nd ed. Palgrave Macmillan, 2013).
Fairlie, Henry. "Oratory in Political Life." History Today (Jan 1960) 10#1, pp. 3–13. A survey of political oratory in Britain from 1730 to 1960.
Fahey, David M. "Temperance and the Liberal Party – Lord Peel's Report, 1899." Journal of British Studies 20#3 (1971), pp. 132–59, online.
Gilbert, Bentley Brinkerhoff. David Lloyd George: A Political Life: The Architect of Change 1863–1912 (1987); David Lloyd George: A Political Life: Organizer of Victory, 1912–1916 (1992).
Goodlad, Graham D. "The Liberal Party and Gladstone's Land Purchase Bill of 1886." Historical Journal 32#3 (1989), pp. 627–41, online.
Hammond, J. L. and M. R. D. Foot. Gladstone and Liberalism (1952).
Häusermann, Silja; Picot, Georg; Geering, Dominik. "Review article: Rethinking party politics and the welfare state – recent advances in the literature." British Journal of Political Science 43#1 (2013): 221–240, online.
Hazlehurst, Cameron. "Asquith as Prime Minister, 1908–1916." The English Historical Review 85#336 (1970), pp. 502–531, in JSTOR.
Heyck, Thomas William. "Home Rule, Radicalism, and the Liberal Party, 1886–1895." Journal of British Studies 13#2 (1974), pp. 66–91, online.
Hughes, K. M. "A Political Party and Education: Reflections on the Liberal Party's Educational Policy, 1867–1902." British Journal of Educational Studies 8#2 (1960), pp. 112–26, online.
Jenkins, Roy. "From Gladstone to Asquith: The Late Victorian Pattern of Liberal Leadership." History Today (July 1964) 14#7, pp. 445–452.
Jenkins, Roy. Asquith: Portrait of a Man and an Era (1964).
Jenkins, T. A. "Gladstone, the Whigs and the Leadership of the Liberal Party, 1879–1880." Historical Journal 27#2 (1984), pp. 337–60, online.
Jones, Thomas. Lloyd George (1951), short biography.
Kellas, James G. "The Liberal Party in Scotland 1876–1895." Scottish Historical Review 44#137 (1965), pp. 1–16, online.
Laybourn, Keith. "The rise of Labour and the decline of Liberalism: the state of the debate." History 80#259 (1995): 207–226, historiography.
Lubenow, W. C. "Irish Home Rule and the Social Basis of the Great Separation in the Liberal Party in 1886." Historical Journal 28#1 (1985), pp. 125–42, online.
Lynch, Patricia. The Liberal Party in Rural England, 1885–1910: Radicalism and Community (2003).
MacAllister, Iain, et al. "Yellow fever? The political geography of Liberal voting in Great Britain." Political Geography (2002) 21#4, pp. 421–447.
McEwen, John M. "The Liberal Party and the Irish Question during the First World War." Journal of British Studies 12#1 (1972), pp. 109–31, online.
McGill, Barry. "Francis Schnadhorst and Liberal Party Organization." Journal of Modern History 34#1 (1962), pp. 19–39, online.
Machin, G. I. T. "Gladstone and Nonconformity in the 1860s: The Formation of an Alliance." Historical Journal 17#2 (1974): 347–64, online.
McCready, H. W. "Home Rule and the Liberal Party, 1899–1906." Irish Historical Studies 13#52 (1963), pp. 316–48, online.
Mowat, Charles Loch. Britain between the Wars, 1918–1940 (1955), 694 pp., scholarly survey, online.
Packer, Ian. Liberal Government and Politics, 1905–15 (Springer, 2006).
Parry, Jonathan. The Rise and Fall of Liberal Government in Victorian Britain (Yale, 1993).
Poe, William A. "Conservative Nonconformists: Religious Leaders and the Liberal Party in Yorkshire/Lancashire." Nineteenth Century Studies 2 (1988), pp. 63–72, online.
Pugh, Martin D. "Asquith, Bonar Law and the First Coalition." Historical Journal 17#4 (1974): 813–836.
Pugh, Martin. "The Liberal Party and the Popular Front." English Historical Review 121#494 (2006), pp. 1327–50, online.
Rossi, John P. "The Transformation of the British Liberal Party: A Study of the Tactics of the Liberal Opposition, 1874–1880." Transactions of the American Philosophical Society 68#8 (1978), pp. 1–133, online.
Rossi, John P. "English Catholics, the Liberal Party, and the General Election of 1880." Catholic Historical Review 63#3 (1977), pp. 411–27, online: http://www.jstor.org/stable/25020158.
Russell, A. K. Liberal Landslide: The General Election of 1906 (David & Charles, 1973).
Searle, G. R. "The Edwardian Liberal Party and Business." English Historical Review 98#386 (1983), pp. 28–60, online.
Searle, G. R. A New England? Peace and War, 1886–1918 (Oxford University Press, 2004), wide-ranging scholarly survey.
Thorpe, Andrew. "Labour Leaders and the Liberals, 1906–1924." Cercles 21 (2011), pp. 39–54, online.
Tregidga, Garry. "Turning of the Tide? A Case Study of the Liberal Party in Provincial Britain in the Late 1930s." History 92#3 (2007), pp. 347–66, online.
Weiler, Peter. The New Liberalism: Liberal Social Theory in Great Britain, 1889–1914 (Routledge, 2016).
Wilson, Trevor. The Downfall of the Liberal Party: 1914–1935 (1966).
Historiography
St. John, Ian. The Historiography of Gladstone and Disraeli (Anthem Press, 2016), 402 pp., excerpt.
Thompson, J. A. "The Historians and the Decline of the Liberal Party." Albion 22#1 (1990), pp. 65–83, online.
Primary sources
Liberal Magazine 1901, in-depth coverage of 1900.
Liberal Magazine 1900, in-depth coverage of 1899.
Biographies and voting returns since the 1880s.
Craig, Frederick Walter Scott, ed. (1975). British General Election Manifestos, 1900–74. Springer.
External links
Liberal Democrat History Group.
Catalogue of the Liberal Party papers (mostly dating from after 1945) at LSE Archives.
The Liberal Magazine, Volume 2, 1895.
Liberal Magazine: A Periodical for the Use of Liberal Speakers, Writers and Canvassers, Volume 1, 1893.
Facts for Liberal Politicians, by John Noble, 1879.
Proceedings in Connection with the Annual Meeting of the National Liberal Federation, with the Annual Report, by National Liberal Federation, 1881.
Election Address and Speeches, by Samuel Smith, 1882.
Annual Report Presented at a Meeting of the Council, by National Liberal Federation, 1887.
Proceedings of the Annual Meeting of the Council, by National Liberal Federation, 1895.
Five Years of Liberal Policy and Conservative Opposition, by George Charles Brodrick, 1874.
Leaflets published by the Liberal Publication Department for the General Election of 1906, 1906.
The Liberal Year Book for 1908.
The Government's Record, 1906–1913: Seven Years of Liberal Legislation and Administration, by Liberal Publication Dept. (Great Britain).
The Yale Review, Volume 4, 1895.
The Age of Lloyd George: The Liberal Party and British Politics, 1890–1929, by Kenneth O. Morgan, 2021.
Classical liberal parties Social liberal parties Defunct political parties in the United Kingdom United Kingdom 1860s Political parties established in 1859 Political parties disestablished in 1988 1859 establishments in the United Kingdom 1988 disestablishments in the United Kingdom
1,956
4,484
https://en.wikipedia.org/wiki/Bank%20of%20England
Bank of England
The Bank of England is the central bank of the United Kingdom and the model on which most modern central banks have been based. Established in 1694 to act as the English Government's banker, and still one of the bankers for the Government of the United Kingdom, it is the world's eighth-oldest bank. It was privately owned by stockholders from its foundation in 1694 until it was nationalised in 1946 by the Attlee ministry. The bank became an independent public organisation in 1998, wholly owned by the Treasury Solicitor on behalf of the government, with a mandate to support the economic policies of the government of the day, but with independence in maintaining price stability. The bank is one of eight banks authorised to issue banknotes in the United Kingdom, has a monopoly on the issue of banknotes in England and Wales, and regulates the issue of banknotes by commercial banks in Scotland and Northern Ireland. The bank's Monetary Policy Committee has devolved responsibility for managing monetary policy. The Treasury has reserve powers to give orders to the committee "if they are required in the public interest and by extreme economic circumstances", but Parliament must endorse such orders within 28 days. In addition, the bank's Financial Policy Committee was set up in 2011 as a macroprudential regulator to oversee the UK's financial sector. The bank's headquarters have been in London's main financial district, the City of London, on Threadneedle Street, since 1734. It is sometimes known as "The Old Lady of Threadneedle Street", a name taken from a satirical cartoon by James Gillray in 1797. The road junction outside is known as Bank Junction. As a regulator and central bank, the Bank of England has not offered consumer banking services for many years, but it still manages some public-facing services, such as exchanging superseded banknotes. Until 2016, the bank provided personal banking services as a privilege for employees. History Founding England's crushing defeat by France, the dominant naval power, in engagements culminating in the 1690 Battle of Beachy Head became the catalyst for England to rebuild itself as a global power. William III's government wanted to build a naval fleet that would rival that of France; however, the ability to construct this fleet was hampered both by a lack of available public funds and by the low credit of the English government in London. This lack of credit made it impossible for the English government to borrow the £1,200,000 (at 8% per annum) that it wanted to construct the fleet. To induce subscription to the loan, the subscribers were to be incorporated by the name of the Governor and Company of the Bank of England. The bank was given exclusive possession of the government's balances and was the only limited-liability corporation allowed to issue banknotes. The lenders would give the government cash (bullion) and issue notes against the government bonds, which could be lent again. The £1.2 million was raised in 12 days; half of this was used to rebuild the navy. As a side effect, the huge industrial effort needed, which included establishing ironworks to make more nails and advances in agriculture to feed the quadrupled strength of the navy, began to transform the economy. This helped the new Kingdom of Great Britain – England and Scotland were formally united in 1707 – to become powerful. The power of the navy made Britain the dominant world power in the late 18th and early 19th centuries.
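From the figures given here, together with the £4,000 annual management charge mentioned in the next paragraph, the cost of servicing the founding loan works out as follows (a quick arithmetic check, not a quotation from the bank's accounts):

\[
\pounds 1{,}200{,}000 \times 8\% = \pounds 96{,}000 \text{ interest per annum}, \qquad \pounds 96{,}000 + \pounds 4{,}000 = \pounds 100{,}000 \text{ per annum in total}
\]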
The establishment of the bank was devised by Charles Montagu, 1st Earl of Halifax, in 1694. The plan of 1691, which had been proposed by William Paterson three years before, had not then been acted upon. Fifty-eight years earlier, in 1636, the financier to the king, Philip Burlamachi, had proposed exactly the same idea in a letter addressed to Francis Windebank. The scheme proposed a loan of £1.2 million to the government; in return the subscribers would be incorporated as The Governor and Company of the Bank of England, with long-term banking privileges including the issue of notes. The royal charter was granted on 27 July 1694 through the passage of the Tonnage Act 1694. Public finances were in such dire condition at the time that the terms of the loan required it to be serviced at a rate of 8% per annum, with an additional service charge of £4,000 per annum for the management of the loan. The first governor was John Houblon (who was later depicted on a £50 note). The bank initially did not have its own building, first opening on 1 August 1694 in Mercers' Hall on Cheapside. This, however, was found to be too small, and from 31 December 1694 the bank operated from Grocers' Hall, then located on Poultry, where it would remain for almost 40 years. 18th century In 1700, the Hollow Sword Blade Company was purchased by a group of businessmen who wished to establish a competing English bank (in an action that would today be considered a "back door listing"). The Bank of England's initial monopoly on English banking was due to expire in 1710. However, it was instead renewed, and the Sword Blade company failed to achieve its goal. The South Sea Company was established in 1711. In 1720 it became responsible for part of the national debt, becoming a major competitor to the Bank of England. While the "South Sea Bubble" disaster soon ensued, the company continued managing part of the national debt until 1853. The Bank of England moved to its current location in Threadneedle Street in 1734 and thereafter slowly acquired neighbouring land to create the site necessary for erecting the bank's original home at this location, under the direction of its chief architect John Soane, between 1790 and 1827. (Herbert Baker's rebuilding of the bank in the first half of the 20th century, demolishing most of Soane's masterpiece, was described by the architectural historian Nikolaus Pevsner as "the greatest architectural crime, in the City of London, of the twentieth century".) The bank's charter was again renewed in 1742 and 1764. The credit crisis of 1772 has been described as the first modern banking crisis faced by the Bank of England. The whole City of London was in uproar when Alexander Fordyce was declared bankrupt. In August 1773, the Bank of England assisted the East India Company (EIC) with a loan. The strain upon the reserves of the Bank of England was not eased until towards the end of the year. When the idea and reality of the national debt came about during the 18th century, this too was largely managed by the bank. During the American War of Independence, business for the bank was so good that George Washington remained a shareholder throughout the period. By the bank's charter renewal in 1781, it was also the bankers' bank – keeping enough gold to pay its notes on demand until 26 February 1797, when war had so diminished gold reserves that – following an invasion scare caused by the Battle of Fishguard days earlier – the government prohibited the bank from paying out in gold by passing the Bank Restriction Act 1797.
This prohibition lasted until 1821. 19th century In 1825–26 the bank was able to avert a liquidity crisis when Nathan Mayer Rothschild succeeded in supplying it with gold. The Bank Charter Act 1844 tied the issue of notes to the gold reserves and gave the bank sole rights with regard to the issue of banknotes in England. Private banks that had previously had that right retained it, provided that their headquarters were outside London and that they deposited security against the notes that they issued. The bank acted as lender of last resort for the first time in the panic of 1866. 20th century The last private bank in England to issue its own notes was Thomas Fox's Fox, Fowler and Company bank in Wellington, which rapidly expanded until it merged with Lloyds Bank in 1921. Its notes were legal tender until 1964. Nine of the notes remain in circulation; one is housed at Tone Dale House, Wellington. (Scottish and Northern Irish private banks continue to issue notes regulated by the bank.) Britain was on the gold standard, meaning that the value of sterling was fixed by the price of gold, until 1931, when the Bank of England had to take Britain off the gold standard as the effects of the Great Depression spread to Europe. During the governorship of Montagu Norman, from 1920 to 1944, the bank made deliberate efforts to move away from commercial banking and become a central bank. During the Second World War, over 10% of the face value of circulating pound sterling banknotes consisted of forgeries produced by Germany. In 1946, shortly after the end of Montagu Norman's tenure, the bank was nationalised by the Labour government. The bank pursued the multiple goals of Keynesian economics after 1945, especially "easy money" and low interest rates to support aggregate demand. It tried to keep a fixed exchange rate and attempted to deal with inflation and sterling weakness by credit and exchange controls. The bank's "10 bob note" was withdrawn from circulation in 1970 in preparation for Decimal Day in 1971. In 1977, the bank set up a wholly owned subsidiary called Bank of England Nominees Limited (BOEN), a now-defunct private limited company, with two of its hundred £1 shares issued. According to its memorandum of association, its objectives were: "To act as Nominee or agent or attorney either solely or jointly with others, for any person or persons, partnership, company, corporation, government, state, organisation, sovereign, province, authority, or public body, or any group or association of them". Bank of England Nominees Limited was granted an exemption by Edmund Dell, Secretary of State for Trade, from the disclosure requirements under Section 27(9) of the Companies Act 1976, because "it was considered undesirable that the disclosure requirements should apply to certain categories of shareholders". The Bank of England is also protected by its royal charter status and the Official Secrets Act. BOEN was a vehicle for governments and heads of state to invest in UK companies (subject to approval from the Secretary of State), provided they undertook "not to influence the affairs of the company". In its later years, BOEN was no longer exempt from company law disclosure requirements. Although it became a dormant company, dormancy does not preclude a company actively operating as a nominee shareholder. BOEN had two shareholders: the Bank of England, and the Secretary of the Bank of England.
The reserve requirement, under which banks had to hold a minimum fixed proportion of their deposits as reserves at the Bank of England, was abolished in 1981. The contemporary transition from Keynesian economics to Chicago economics was analysed by Nicholas Kaldor in The Scourge of Monetarism. The handing over of monetary policy to the bank became a key plank of the Liberal Democrats' economic policy for the 1992 general election. The Conservative MP Nicholas Budgen had also proposed this as a private member's bill in 1996, but the bill failed as it had the support of neither the government nor the opposition. The UK government left the expensive-to-maintain European Exchange Rate Mechanism in September 1992, in an episode that cost HM Treasury over £3 billion. This led to closer communication between the government and the bank. In 1993, the bank produced its first Inflation Report for the government, detailing inflationary trends and pressures. This quarterly report remains one of the bank's major publications. The success of inflation targeting in the United Kingdom has been attributed to the bank's focus on transparency. The Bank of England has been a leader in producing innovative ways of communicating information to the public, especially through its Inflation Report, which many other central banks have emulated. The bank celebrated its three-hundredth birthday in 1994. In 1996, the bank produced its first Financial Stability Review. This annual publication became known as the Financial Stability Report in 2006. Also that year, the bank set up its real-time gross settlement (RTGS) system to improve risk-free settlement between UK banks. On 6 May 1997, following the 1997 general election that brought a Labour government to power for the first time since 1979, the Chancellor of the Exchequer, Gordon Brown, announced that the bank would be granted operational independence over monetary policy. Under the terms of the Bank of England Act 1998 (which came into force on 1 June 1998), the bank's Monetary Policy Committee (MPC) was given sole responsibility for setting interest rates to meet the Government's Retail Prices Index (RPI) inflation target of 2.5%. The target changed to 2% when the Consumer Price Index (CPI) replaced the Retail Prices Index as the Treasury's inflation index. If inflation overshoots or undershoots the target by more than one percentage point, the Governor must write a letter to the Chancellor of the Exchequer explaining why, and how the situation will be remedied. Independent central banks that adopt an inflation target are known as Friedmanite central banks. This change in Labour's policy was described by Skidelsky in The Return of the Master as a mistake and as an adoption of the rational expectations hypothesis as promulgated by Alan Walters. Inflation targets combined with central bank independence have been characterised as a "starve the beast" strategy creating a lack of money in the public sector. 1913 attempted bombing A terrorist bombing was attempted outside the Bank of England building on 4 April 1913. A bomb was discovered smoking and ready to explode next to the railings outside the building. It had been planted as part of the suffragette bombing and arson campaign, in which the Women's Social and Political Union (WSPU) launched a series of politically motivated bombing and arson attacks nationwide as part of their campaign for women's suffrage.
The bomb was defused before it could detonate, in what was then one of the busiest public streets in the capital, which likely prevented many civilian casualties. The bomb had been planted the day after WSPU leader Emmeline Pankhurst was sentenced to three years' imprisonment for carrying out a bombing at the home of the politician David Lloyd George. The remains of the bomb, which was built into a milk churn, are now on display at the City of London Police Museum. 21st century Mervyn King became Governor of the Bank of England on 30 June 2003. In 2009, a request made to HM Treasury under the Freedom of Information Act sought details about the 3% Bank of England stock owned by unnamed shareholders whose identity the bank is not at liberty to disclose. In a letter of reply dated 15 October 2009, HM Treasury explained that "Some of the 3% Treasury stock which was used to compensate former owners of Bank stock has not been redeemed. However, interest is paid out twice a year and it is not the case that this has been accumulating and compounding." The Financial Services Act 2012 gave the bank additional functions and bodies, including an independent Financial Policy Committee (FPC), the Prudential Regulation Authority (PRA), and more powers to supervise financial market infrastructure providers. The Canadian Mark Carney assumed the post of Governor of the Bank of England on 1 July 2013. He served an initial five-year term rather than the typical eight. He became the first Governor not to be a United Kingdom citizen, but has since been granted citizenship. At the government's request, his term was extended to 2019, then again to 2020. The bank also had four Deputy Governors during this period. BOEN was dissolved, following liquidation, in July 2017. Andrew Bailey succeeded Carney as Governor of the Bank of England on 16 March 2020. Functions The bank tackles two main areas to ensure that it carries out its functions efficiently: Monetary stability Stable prices and confidence in the currency are the two main criteria for monetary stability. Stable prices are maintained by seeking to ensure that price increases meet the Government's inflation target. The bank aims to meet this target by adjusting the base interest rate, which is decided by the Monetary Policy Committee, and through its communications strategy, such as publishing yield curves. Maintaining financial stability involves protecting against threats to the whole financial system. Threats are detected by the bank's surveillance and market intelligence functions. They are then dealt with through financial and other operations, both at home and abroad. In exceptional circumstances, the bank may act as the lender of last resort by extending credit when no other institution will. The bank works together with other institutions to secure both monetary and financial stability, including HM Treasury, the Government department responsible for financial and economic policy, and other central banks and international organisations, with the aim of improving the international financial system. The 1997 memorandum of understanding describes the terms under which the bank, the Treasury, and the FSA work toward the common aim of increased financial stability. In 2010, the incoming Chancellor announced his intention to merge the FSA back into the bank. As of 2012, the director for financial stability was Andy Haldane. The bank acts as the government's banker, and it maintains the government's Consolidated Fund account.
It also manages the country's foreign exchange and gold reserves. The bank also acts as the bankers' bank, especially in its capacity as a lender of last resort. The bank has a monopoly on the issue of banknotes in England and Wales. Scottish and Northern Irish banks retain the right to issue their own banknotes, but they must be backed one-for-one with deposits at the bank, excepting a few million pounds representing the value of notes they had in circulation in 1845. The bank decided to sell its banknote-printing operations to De La Rue in December 2002, under the advice of Close Brothers Corporate Finance Ltd. Since 1998, the Monetary Policy Committee (MPC) has had the responsibility for setting the official interest rate. However, with the decision to grant the bank operational independence, responsibility for government debt management was transferred in 1998 to the new Debt Management Office, which also took over government cash management in 2000. Computershare took over as the registrar for UK Government bonds (gilt-edged securities, or 'gilts') from the bank at the end of 2004. The bank used to be responsible for the regulation and supervision of the banking and insurance industries. This responsibility was transferred to the Financial Services Authority in June 1998, but after the financial crisis of 2008, new banking legislation transferred the responsibility for regulation and supervision of the banking and insurance industries back to the bank. In 2011, the interim Financial Policy Committee (FPC) was created as a mirror committee to the MPC to spearhead the bank's new mandate on financial stability. The FPC is responsible for macro-prudential regulation of all UK banks and insurance companies. To help maintain economic stability, the bank attempts to broaden understanding of its role, both through regular speeches and publications by senior Bank figures, a semiannual Financial Stability Report, and through a wider education strategy aimed at the general public. It maintains a free museum and ran the Target Two Point Zero competition for A-level students, which closed in 2017. Asset purchase facility The bank has operated, since January 2009, an Asset Purchase Facility (APF) to buy "high-quality assets financed by the issue of Treasury bills and the DMO's cash management operations" and thereby improve liquidity in the credit markets. It has, since March 2009, also provided the mechanism by which the bank's policy of quantitative easing (QE) is achieved, under the auspices of the MPC. Along with managing the QE funds, which were £895 billion at their peak, the APF continues to operate its corporate facilities. Both are undertaken by a subsidiary company of the Bank of England, the Bank of England Asset Purchase Facility Fund Limited (BEAPFF). QE was primarily designed as an instrument of monetary policy. The mechanism required the Bank of England to purchase government bonds on the secondary market, financed by creating new central bank money. This would have the effect of increasing the asset prices of the bonds purchased, thereby lowering yields and dampening longer-term interest rates. The policy's aim was initially to ease liquidity constraints in the sterling reserves system, but it evolved into a wider policy to provide economic stimulus. QE was enacted in six tranches between 2009 and 2020. At its peak in 2020, the portfolio totalled £895 billion, comprising £875 billion of UK government bonds and £20 billion of high-grade commercial bonds.
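The inverse relationship between bond prices and yields that QE relies on can be illustrated with a short sketch (a minimal illustration using hypothetical figures, not Bank of England code or data):

# A minimal sketch of the price-yield relationship behind QE,
# using assumed figures for a gilt-style annual-coupon bond.

def bond_price(face, coupon_rate, years, ytm):
    """Present value of an annual-coupon bond at a given yield to maturity."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    pv_face = face / (1 + ytm) ** years
    return pv_coupons + pv_face

# A hypothetical 10-year bond paying a 2% coupon on £100 face value:
for ytm in (0.03, 0.02, 0.01):
    print(f"yield {ytm:.0%}: price = £{bond_price(100, 0.02, 10, ytm):.2f}")

# Prices rise as yields fall (about £91.47, £100.00 and £109.47 here), so
# central-bank purchases that bid bond prices up push market yields down.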
In February 2022, the Bank of England announced its intention to commence winding down the QE portfolio. Initially this would be achieved by not replacing tranches of maturing bonds, and it would later be accelerated through active bond sales. In August 2022, the Bank of England reiterated its intention to accelerate the QE wind-down through active bond sales. This policy was affirmed in an exchange of letters between the Bank of England and the UK Chancellor of the Exchequer in September 2022. Between February 2022 and September 2022, a total of £37.1bn of government bonds matured, reducing the outstanding stock from £875.0bn at the end of 2021 to £837.9bn. In addition, a total of £1.1bn of corporate bonds matured, reducing the stock from £20.0bn to £18.9bn, with sales of the remaining stock planned to begin on 27 September. Banknote issues The bank has issued banknotes since 1694. Notes were originally hand-written; although they were partially printed from 1725 onwards, cashiers still had to sign each note and make it payable to someone. Notes were fully printed from 1855. Until 1928 all notes were "White Notes", printed in black and with a blank reverse. In the 18th and 19th centuries, White Notes were issued in £1 and £2 denominations. During the 20th century, White Notes were issued in denominations between £5 and £1,000. Until the mid-19th century, commercial banks were allowed to issue their own banknotes, and notes issued by provincial banking companies were commonly in circulation. The Bank Charter Act 1844 began the process of restricting note issue to the bank; new banks were prohibited from issuing their own banknotes, and existing note-issuing banks were not permitted to expand their issue. As provincial banking companies merged to form larger banks, they lost their right to issue notes, and the English private banknote eventually disappeared, leaving the bank with a monopoly of note issue in England and Wales. The last private bank to issue its own banknotes in England and Wales was Fox, Fowler and Company in 1921. However, the limitations of the 1844 Act affected only banks in England and Wales, and today three commercial banks in Scotland and four in Northern Ireland continue to issue their own banknotes, regulated by the bank. At the start of the First World War, the Currency and Bank Notes Act 1914 was passed, which granted temporary powers to HM Treasury for issuing banknotes to the values of £1 and 10/- (ten shillings). Treasury notes had full legal tender status and were not convertible into gold through the bank; they replaced gold coin in circulation to prevent a run on sterling and to enable raw material purchases for armament production. These notes featured an image of King George V (Bank of England notes did not begin to display an image of the monarch until 1960). The wording on each note was: Treasury notes were issued until 1928, when the Currency and Bank Notes Act 1928 returned note-issuing powers to the bank. The Bank of England issued notes for ten shillings and one pound for the first time on 22 November 1928. During the Second World War, the German Operation Bernhard attempted to counterfeit denominations between £5 and £50, producing 500,000 notes each month in 1943. The original plan was to parachute the money into the UK in an attempt to destabilise the British economy, but it was found more useful to use the notes to pay German agents operating throughout Europe.
Although most fell into Allied hands at the end of the war, forgeries frequently appeared for years afterwards, which led banknote denominations above £5 to be removed from circulation. In 2006, over £53 million in banknotes belonging to the bank was stolen from a depot in Tonbridge, Kent. Modern banknotes are printed by contract with De La Rue Currency in Loughton, Essex. Gold vault The bank is custodian to the official gold reserves of the United Kingdom and around 30 other countries. The bank holds around of gold, worth £141 billion. These estimates suggest that the vault could hold as much as 3% of the 171,300 tonnes of gold mined throughout human history. Governance of the Bank of England Governors Following is a list of the governors of the Bank of England since the beginning of the 20th century: Court of Directors The Court of Directors is a unitary board that is responsible for setting the organisation's strategy and budget and making key decisions on resourcing and appointments. It consists of five executive members from the bank plus up to nine non-executive members, all of whom are appointed by the Crown. The Chancellor selects the Chairman of the Court from among the non-executive members. The Court is required to meet at least seven times a year. The Governor serves for a period of eight years, the Deputy Governors for five years, and the non-executive members for up to four years. Other staff Since 2013, the bank has had a chief operating officer (COO); the post is held by Joanna Place. The bank's chief economist is Huw Pill.
See also
List of British currencies
Bank of England Act
Bank of England club
Coins of the pound sterling
Financial Sanctions Unit
Fractional-reserve banking
Commonwealth banknote-issuing institutions
Bank of England Museum
Deputy Governor of the Bank of England
List of directors of the Bank of England
Notes
References
Further reading
Capie, Forrest. The Bank of England: 1950s to 1979 (Cambridge University Press, 2010), xxviii + 890 pp., excerpt and text search.
Fforde, John. The Role of the Bank of England, 1941–1958 (1992), excerpt and text search.
Francis, John. History of the Bank of England: Its Times and Traditions, excerpt and text search.
Hennessy, Elizabeth. A Domestic History of the Bank of England, 1930–1960 (2008), excerpt and text search.
Kynaston, David. Till Time's Last Sand: A History of the Bank of England, 1694–2013 (Bloomsbury, 2017).
Lane, Nicholas. "The Bank of England in the Nineteenth Century." History Today (Aug 1960) 10#8, pp. 535–541.
O'Brien, Patrick K.; Palma, Nuno (2022). "Not an ordinary bank but a great engine of state: The Bank of England and the British economy, 1694–1844." The Economic History Review.
Roberts, Richard, and David Kynaston. The Bank of England: Money, Power and Influence 1694–1994 (1995).
Sayers, R. S. The Bank of England, 1891–1944 (1986), excerpt and text search.
Schuster, F. The Bank of England and the State.
Wood, John H. A History of Central Banking in Great Britain and the United States (Cambridge University Press, 2005).
1694 establishments in England England Banks established in 1694 Grade I listed buildings in the City of London England Economy of the United Kingdom HM Treasury Organisations based in London with royal patronage Organisations based in the City of London Public corporations of the United Kingdom with a Royal Charter Herbert Baker buildings and structures John Soane buildings Georgian architecture in London Neoclassical architecture in London Grade I listed banks
1,957
4,485
https://en.wikipedia.org/wiki/Bakelite
Bakelite
Bakelite, formally Polyoxybenzylmethylenglycolanhydride, is a thermosetting phenol formaldehyde resin, formed from a condensation reaction of phenol with formaldehyde. The first plastic made from synthetic components, it was developed by Leo Baekeland in Yonkers, New York, in 1907, and patented on December 7, 1909. Because of its electrical nonconductivity and heat-resistant properties, it became a great commercial success. It was used in electrical insulators, radio and telephone casings, and such diverse products as kitchenware, jewelry, pipe stems, children's toys, and firearms. The "retro" appeal of old Bakelite products has made them collectible. The creation of a synthetic plastic was revolutionary for the chemical industry, which at the time made most of its income from cloth dyes and explosives. Bakelite's commercial success inspired the industry to develop other synthetic plastics. As the world's first commercial synthetic plastic, Bakelite was named a National Historic Chemical Landmark by the American Chemical Society. History Baekeland was already wealthy, due to his invention of Velox photographic paper, when he began to investigate the reactions of phenol and formaldehyde in his home laboratory. Chemists had begun to recognize that many natural resins and fibers were polymers. Baekeland's initial intent was to find a replacement for shellac, a material in limited supply because it was made naturally from the secretion of lac insects (specifically Kerria lacca). He produced a soluble phenol-formaldehyde shellac called "Novolak", but it was not a market success, even though it is still used to this day (e.g., as a photoresist). He then began experimenting with strengthening wood by impregnating it with a synthetic resin rather than coating it. By controlling the pressure and temperature applied to phenol and formaldehyde, he produced a hard moldable material that he named Bakelite, after himself. It was the first synthetic thermosetting plastic produced, and Baekeland speculated on "the thousand and one ... articles" it could be used to make. He considered the possibilities of using a wide variety of filling materials, including cotton, powdered bronze, and slate dust, but was most successful with wood and asbestos fibers, though asbestos was gradually abandoned by all manufacturers due to stricter environmental laws. Baekeland filed a substantial number of related patents. Bakelite, his "method of making insoluble products of phenol and formaldehyde", was filed on July 13, 1907, and granted on December 7, 1909. He also filed for patent protection in other countries, including Belgium, Canada, Denmark, Hungary, Japan, Mexico, Russia and Spain. He announced his invention at a meeting of the American Chemical Society on February 5, 1909. Baekeland started semi-commercial production of his new material in his home laboratory, marketing it as a material for electrical insulators. In the summer of 1909, he licensed the continental European rights to Rütger AG. The subsidiary formed at that time, Bakelite AG, was the first to produce Bakelite on an industrial scale. By 1910, Baekeland was producing enough material in the US to justify expansion. He formed the General Bakelite Company of Perth Amboy, NJ, as a U.S. company to manufacture and market his new industrial material, and made overseas connections to produce it in other countries. The Bakelite Company produced "transparent" cast resin (which did not include filler) for a small market during the 1910s and 1920s.
Blocks or rods of cast resin, also known as "artificial amber", were machined and carved to create items such as pipe stems, cigarette holders and jewelry. However, the demand for molded plastics led the company to concentrate on molding rather than on cast solid resins. The Bakelite Corporation was formed in 1922, after patent litigation favorable to Baekeland, from a merger of three companies: Baekeland's General Bakelite Company; the Condensite Company, founded by J. W. Aylesworth; and the Redmanol Chemical Products Company, founded by Lawrence V. Redman. Under director of advertising and public relations Allan Brown, who came to Bakelite from Condensite, Bakelite was aggressively marketed as "the material of a thousand uses". A filing for a trademark featuring the letter B above the mathematical symbol for infinity was made on August 25, 1925, and claimed the mark was in use as of December 1, 1924. A wide variety of uses were listed in their trademark applications. The first issue of Plastics magazine, October 1925, featured Bakelite on its cover and included the article "Bakelite – What It Is" by Allan Brown. The range of colors available included "black, brown, red, yellow, green, gray, blue, and blends of two or more of these". The article emphasized that Bakelite came in various forms: "Bakelite is manufactured in several forms to suit varying requirements. In all these forms the fundamental basis is the initial Bakelite resin. This variety includes clear material, for jewelry, smokers' articles, etc.; cement, used in sealing electric light bulbs in metal bases; varnishes, for impregnating electric coils, etc.; lacquers, for protecting the surface of hardware; enamels, for giving resistive coating to industrial equipment; Laminated Bakelite, used for silent gears and insulation; and molding material, from which are formed innumerable articles of utility and beauty. The molding material is prepared ordinarily by the impregnation of cellulose substances with the initial 'uncured' resin." In a 1925 report, the United States Tariff Commission hailed the commercial manufacture of synthetic phenolic resin as "distinctly an American achievement", and noted that "the publication of figures, however, would be a virtual disclosure of the production of an individual company". In England, Bakelite Limited, a merger of three British phenol formaldehyde resin suppliers (the Damard Lacquer Company Limited of Birmingham, Mouldensite Limited of Darley Dale, and the Redmanol Chemical Products Company of London), was formed in 1926. A new Bakelite factory opened in Tyseley, Birmingham, around 1928. It was the "heart of Bakelite production in the UK" until it closed in 1987. A new factory opened in Bound Brook, New Jersey, in 1931. In 1939, the companies were acquired by the Union Carbide and Carbon Corporation. In 2005, the German Bakelite manufacturer Bakelite AG was acquired by Borden Chemical of Columbus, Ohio, now Hexion Inc. In addition to the original Bakelite material, these companies eventually made a wide range of other products, many of which were marketed under the brand name "Bakelite plastics". These included other types of cast phenolic resins similar to Catalin, and urea-formaldehyde resins, which could be made in brighter colors than polyoxybenzylmethylenglycolanhydride. Once Baekeland's heat and pressure patents expired in 1927, the Bakelite Corporation faced serious competition from other companies.
Because molded Bakelite incorporated fillers to give it strength, it tended to be made in concealing dark colors. In 1927, beads, bangles and earrings were produced by the Catalin company through a different process, which enabled it to introduce 15 new colors. Translucent jewelry, poker chips and other items made of phenolic resins were introduced in the 1930s or 1940s by the Catalin company under the Prystal name. The creation of marbled phenolic resins may also be attributable to the Catalin company. Synthesis Making Bakelite is a multi-stage process. It begins with the heating of phenol and formaldehyde in the presence of a catalyst such as hydrochloric acid, zinc chloride, or ammonia (a base). This creates a liquid condensation product, referred to as Bakelite A, which is soluble in alcohol, acetone, or additional phenol. Heated further, the product becomes partially soluble and can still be softened by heat. Sustained heating results in an "insoluble hard gum". However, the high temperatures required to create this tend to cause violent foaming of the mixture when done at standard atmospheric pressure, which results in the cooled material being porous and breakable. Baekeland's innovative step was to put his "last condensation product" into an egg-shaped "Bakelizer". By heating it under pressure, at about , Baekeland was able to suppress the foaming that would otherwise occur. The resulting substance is extremely hard and both infusible and insoluble. Compression molding Molded Bakelite forms in a condensation reaction of phenol and formaldehyde, with wood flour or asbestos fiber as a filler, cured under high pressure and heat within a few minutes. The result is a hard plastic material. Asbestos was gradually abandoned as a filler because many countries banned the production of asbestos. Bakelite's molding process had a number of advantages. Bakelite resin could be provided either as a powder or as preformed, partially cured slugs, increasing the speed of the casting. Thermosetting resins such as Bakelite required heat and pressure during the molding cycle but could be removed from the molding process without being cooled, again making the molding process faster. Also, because of the smooth polished surface that resulted, Bakelite objects required less finishing. Millions of parts could be duplicated quickly and relatively cheaply. Phenolic sheet Another market for Bakelite resin was the creation of phenolic sheet materials. Phenolic sheet is a hard, dense material made by applying heat and pressure to layers of paper or glass cloth impregnated with synthetic resin. Paper, cotton fabrics, synthetic fabrics, glass fabrics and unwoven fabrics are all possible materials used in lamination. When heat and pressure are applied, polymerization transforms the layers into thermosetting industrial laminated plastic. Bakelite phenolic sheet is produced in many commercial grades and with various additives to meet diverse mechanical, electrical and thermal requirements. Some common types include:
Paper reinforced: NEMA XX per MIL-I-24768 PBG. Normal electrical applications, moderate mechanical strength, continuous operating temperature of .
Canvas reinforced: NEMA C per MIL-I-24768 TYPE FBM; NEMA CE per MIL-I-24768 TYPE FBG. Good mechanical and impact strength, with a continuous operating temperature of .
Linen reinforced: NEMA L per MIL-I-24768 TYPE FBI; NEMA LE per MIL-I-24768 TYPE FEI. Good mechanical and electrical strength. Recommended for intricate high-strength parts.
Continuous operating temperature of .
Nylon reinforced: NEMA N-1 per MIL-I-24768 TYPE NPG. Superior electrical properties under humid conditions, fungus resistant, continuous operating temperature of .
Properties Bakelite has a number of important properties. It can be molded very quickly, decreasing production time. Moldings are smooth, retain their shape and are resistant to heat, scratches, and destructive solvents. It is also an electrical insulator, prized for its low conductivity. It is not flexible. Phenolic resin products may swell slightly under conditions of extreme humidity or perpetual dampness. When rubbed or burnt, Bakelite has a distinctive, acrid, sickly-sweet or fishy odor. Applications and uses The characteristics of Bakelite made it particularly suitable as a molding compound, an adhesive or binding agent, a varnish, and a protective coating. Bakelite was particularly suitable for the emerging electrical and automobile industries because of its extraordinarily high resistance to electricity, heat, and chemical action. The earliest commercial use of Bakelite in the electrical industry was the molding of tiny insulating bushings, made in 1908 for the Weston Electrical Instrument Corporation by Richard W. Seabury of the Boonton Rubber Company. Bakelite was soon used for the non-conducting parts of telephones, radios and other electrical devices, including bases and sockets for light bulbs and electron tubes (vacuum tubes), supports for any type of electrical components, automobile distributor caps and other insulators. By 1912, it was being used to make billiard balls, since its elasticity and the sound it made were similar to ivory. During World War I, Bakelite was used widely, particularly in electrical systems. Important projects included the Liberty airplane engine, the wireless telephone and radio phone, and the use of Micarta-Bakelite propellers in the NBS-1 bomber and the DH-4B aeroplane. Bakelite's availability and ease and speed of molding helped to lower costs and increase product availability, so that telephones and radios became common household consumer goods. It was also very important to the developing automobile industry. It was soon found in myriad other consumer products, ranging from pipe stems and buttons to saxophone mouthpieces, cameras, early machine guns, and appliance casings. Bakelite was also very commonly used in making molded grip panels on handguns, as furniture for submachine guns and machine guns, and in the classic Bakelite magazines for Kalashnikov rifles, as well as in numerous knife handles and "scales" through the first half of the 20th century. Beginning in the 1920s, it became a popular material for jewelry. Designer Coco Chanel included Bakelite bracelets in her costume jewelry collections. Designers such as Elsa Schiaparelli used it for jewelry and also for specially designed dress buttons. Later, Diana Vreeland, editor of Vogue, was enthusiastic about Bakelite. Bakelite was also used to make presentation boxes for Breitling watches. By 1930, designer Paul T. Frankl considered Bakelite a "Materia Nova", "expressive of our own age". By the 1930s, Bakelite was used for game pieces like chessmen, poker chips, dominoes and mahjong sets. Kitchenware made with Bakelite, including canisters and tableware, was promoted for its resistance to heat and to chipping. In the mid-1930s, Northland marketed a line of skis with a black "Ebonite" base, a coating of Bakelite. By 1935, it was used in solid-body electric guitars.
Performers such as Jerry Byrd loved the tone of Bakelite guitars but found them difficult to keep in tune. Charles Plimpton patented BAYKO in 1933 and rushed out his first construction sets for Christmas 1934. He called the toy Bayko Light Constructional Sets, the words "Bayko Light" being a pun on the word "Bakelite." During World War II, Bakelite was used in a variety of wartime equipment including pilot's goggles and field telephones. It was also used for patriotic wartime jewelry. In 1943, the thermosetting phenolic resin was even considered for the manufacture of coins, due to a shortage of traditional material. Bakelite and other non-metal materials were tested for usage for the one cent coin in the US before the Mint settled on zinc-coated steel. During World War II, Bakelite buttons were part of British uniforms. These included brown buttons for the Army and black buttons for the RAF. In 1947, Dutch art forger Han van Meegeren was convicted of forgery, after chemist and curator Paul B. Coremans proved that a purported Vermeer contained Bakelite, which van Meegeren had used as a paint hardener. Bakelite was sometimes used in the pistol grip, hand guard, and butt stock of firearms. The AKM and some early AK-74 rifles are frequently mistakenly identified as using Bakelite, but most were made with AG-4S. By the late 1940s, newer materials were superseding Bakelite in many areas. Phenolics are less frequently used in general consumer products today due to their cost and complexity of production and their brittle nature. They still appear in some applications where their specific properties are required, such as small precision-shaped components, molded disc brake cylinders, saucepan handles, electrical plugs, switches and parts for electrical irons, as well as in the area of inexpensive board and tabletop games produced in China, Hong Kong and India. Items such as billiard balls, dominoes and pieces for board games such as chess, checkers, and backgammon are constructed of Bakelite for its look, durability, fine polish, weight, and sound. Common dice are sometimes made of Bakelite for weight and sound, but the majority are made of a thermoplastic polymer such as acrylonitrile butadiene styrene (ABS). Bakelite continues to be used for wire insulation, brake pads and related automotive components, and industrial electrical-related applications. Bakelite stock is still manufactured and produced in sheet, rod and tube form for industrial applications in the electronics, power generation and aerospace industries, and under a variety of commercial brand names. Phenolic resins have been commonly used in ablative heat shields. Soviet heatshields for ICBM warheads and spacecraft reentry consisted of asbestos textolite, impregnated with Bakelite. Bakelite is also used in the mounting of metal samples in metallography. Collectible status Bakelite items, particularly jewelry and radios, have become popular collectibles. The term Bakelite is sometimes used in the resale market to indicate various types of early plastics, including Catalin and Faturan, which may be brightly colored, as well as items made of Bakelite material. Patents The United States Patent and Trademark Office granted Baekeland a patent for a "Method of making insoluble products of phenol and formaldehyde" on December 7, 1909. Producing hard, compact, insoluble and infusible condensation products of phenols and formaldehyde marked the beginning of the modern plastics industry. 
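The chemistry behind Baekeland's patented process, as described in the Synthesis section above, can be summarized with a schematic overall equation. This is a sketch rather than exact stoichiometry: the cured resin is a three-dimensional cross-linked network, not the idealized linear repeat unit shown, and the catalyst, heat and pressure are those described earlier.

% Schematic condensation of phenol and formaldehyde into a
% methylene-bridged phenolic resin, releasing water. The linear
% repeat unit is an idealization of the real cross-linked network.
\[
n\,\mathrm{C_6H_5OH} \;+\; n\,\mathrm{CH_2O}
\;\xrightarrow{\text{catalyst, heat, pressure}}\;
\bigl[\mathrm{C_6H_3(OH)CH_2}\bigr]_n \;+\; n\,\mathrm{H_2O}
\]

The released water accounts for the violent foaming Baekeland had to suppress: at atmospheric pressure the condensation water boils out of the hot mixture, which is why curing under pressure in the "Bakelizer" yields a dense rather than porous product.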
Similar plastics Catalin is also a phenolic resin, similar to Bakelite, but contains different mineral fillers that allow the production of light colors. Condensites are similar thermoset materials having much the same properties, characteristics, and uses. Crystalate is an early plastic. Faturan is a phenolic resin, also similar to Bakelite, that turns red over time, regardless of its original color. Galalith is an early plastic derived from milk products. Micarta is an early composite insulating plate that used Bakelite as a binding agent. It was developed in 1910 by Westinghouse Elec. & Mfg Co. Novotext is a brand name for cotton textile-phenolic resin. See also Bakelite Museum, Williton, Somerset, England Ericsson DBH 1001 telephone Prodema, a construction material with a Bakelite core. References External links All Things Bakelite: The Age of Plastic—trailer for a film by John Maher, with additional video & resources Amsterdam Bakelite Collection Large Bakelite Collection Bakelite: The Material of a Thousand Uses Virtual Bakelite Museum of Ghent 1907–2007 1909 introductions Belgian inventions Composite materials Dielectrics Phenol formaldehyde resins Plastic brands Thermosetting plastics
https://en.wikipedia.org/wiki/Bean
Bean
A bean is the seed of several plants in the family Fabaceae, which are used as vegetables for human or animal food. They can be cooked in many different ways, including boiling, frying, and baking, and are used in many traditional dishes throughout the world. Terminology The word "bean" and its Germanic cognates (e.g. German Bohne) have existed in common use in West Germanic languages since before the 12th century, referring to broad beans, chickpeas, and other pod-borne seeds. This was long before the New World genus Phaseolus was known in Europe. After Columbian-era contact between Europe and the Americas, use of the word was extended to pod-borne seeds of Phaseolus, such as the common bean and the runner bean, and the related genus Vigna. The term has long been applied generally to many other seeds of similar form, such as Old World soybeans, peas, other vetches, and lupins, and even to those with slighter resemblances, such as coffee beans, vanilla beans, castor beans, and cocoa beans. Thus the term "bean" in general usage can refer to a host of different species. Seeds called "beans" are often included among the crops called "pulses" (legumes), although the words are not always interchangeable (usage varies by plant variety and by region). Both terms, beans and pulses, are usually reserved for grain crops and thus exclude those legumes that have tiny seeds and are used exclusively for non-grain purposes (forage, hay, and silage), such as clover and alfalfa. The United Nations Food and Agriculture Organization defines "BEANS, DRY" (item code 176) as applicable only to species of Phaseolus. This is one of various examples of how narrower word senses enforced in trade regulations or botany often coexist in natural language with broader senses in culinary use and general use; other common examples are the narrow sense of the word nut and the broader sense of the word nut, and the fact that tomatoes are fruit, botanically speaking, but are often treated as vegetables in culinary and general usage. Relatedly, another detail of usage is that several species of plants that are sometimes called beans, including Vigna angularis (azuki bean), mungo (black gram), radiata (green gram), and aconitifolia (moth bean), were once classified as Phaseolus but later reclassified—but the taxonomic revision does not entirely stop the use of well-established senses in general usage. Cultivation Unlike the closely related pea, beans are a summer crop that needs warm temperatures to grow. Legumes are capable of nitrogen fixation and hence need less fertiliser than most plants. Maturity is typically 55–60 days from planting to harvest. As the bean pods mature, they turn yellow and dry up, and the beans inside change from green to their mature colour. As a vine, bean plants need external support, which may take the form of special "bean cages" or poles. Native Americans customarily grew them along with corn and squash (the so-called Three Sisters), with the tall cornstalks acting as support for the beans. In more recent times, the so-called "bush bean" has been developed which does not require support and has all its pods develop simultaneously (as opposed to pole beans which develop gradually). This makes the bush bean more practical for commercial production. History Beans were an important source of protein throughout Old and New World history, and still are today. Beans are one of the longest-cultivated plants in history. 
Broad beans, also called fava beans, are in their wild state the size of a small fingernail, and were first gathered in Afghanistan and the Himalayan foothills. An early cultivated form was grown in Thailand from the early seventh millennium BCE, predating ceramics. Beans were deposited with the dead in ancient Egypt. Not until the second millennium BCE did cultivated, large-seeded broad beans appear in the Aegean region, Iberia, and transalpine Europe. In the Iliad (8th century BCE), there is a passing mention of beans and chickpeas cast on the threshing floor. The oldest-known domesticated beans in the Americas were found in Guitarrero Cave, an archaeological site in Peru, and dated to around the second millennium BCE. However, genetic analyses of the common bean Phaseolus show that it originated in Mesoamerica, and subsequently spread southward, along with maize and squash, traditional companion crops. Most of the kinds of beans commonly eaten today are part of the genus Phaseolus, which originated in the Americas. The first European to encounter them was Christopher Columbus, who saw them growing in fields while exploring what may have been the Bahamas. Five kinds of Phaseolus beans were domesticated by pre-Columbian peoples: common beans (P. vulgaris) grown from Chile to the northern part of what is now the United States; and lima and sieva beans (P. lunatus); as well as the less widely distributed teparies (P. acutifolius), scarlet runner beans (P. coccineus), and polyanthus beans. One well-documented use of beans by pre-Columbian people as far north as the Atlantic seaboard is the "Three Sisters" method of companion plant cultivation: Many tribes would grow beans together with maize (corn) and squash. The corn would not be planted in rows as in European agriculture, but in a checkerboard/hex fashion across a field, in separate patches of one to six stalks each. Beans would be planted around the base of the developing stalks, and would vine their way up as the stalks grew. All American beans at that time were vine plants; "bush beans" were cultivated more recently. The cornstalks would work as a trellis for the bean plants, and the beans would provide much-needed nitrogen for the corn. Squash would be planted in the spaces between the patches of corn in the field. They would be provided slight shelter from the sun by the corn, would shade the soil and reduce evaporation, and would deter many animals from attacking the corn and beans, because their coarse, hairy vines and broad, stiff leaves are difficult or uncomfortable for animals such as deer and raccoons to walk through and for crows to land on, and are a deterrent to other animals as well. Beans were cultivated across Chile in Pre-Hispanic times, likely as far south as Chiloé Archipelago. Dry beans come from both Old World varieties of broad beans (fava beans) and New World varieties (kidney, black, cranberry, pinto, navy/haricot). Common genera and species Currently, the world gene banks hold about 40,000 bean varieties, although only a fraction are mass-produced for regular consumption. Most of the foods we call "beans", "legumes", "lentils" and "pulses" belong to the same family, Fabaceae ("leguminous" plants), but are from different genera and species, native to different homelands and distributed worldwide depending on their adaptability. Many varieties are eaten both fresh (the whole pod, which may or may not contain immature beans) or shelled (immature seeds, mature and fresh seeds, or mature and dried seeds).
Numerous legumes look similar, and have become naturalized in locations across the world, which often leads to similar names for different species. Properties Nutrition Raw green beans are 90% water, 7% carbohydrates, 2% protein, and contain negligible fat (table). In a reference serving, raw green beans supply 31 calories of food energy, and are a moderate source (10–19% of the Daily Value, DV) of vitamin C (15% DV) and vitamin B6 (11% DV), with no other micronutrients in significant content (table). Antinutrients Many types of bean, such as the kidney bean, contain significant amounts of antinutrients that inhibit some enzyme processes in the body. Phytic acid and phytates, present in grains, nuts, seeds and beans, interfere with bone growth and interrupt vitamin D metabolism. Pioneering work on the effect of phytic acid was done by Edward Mellanby from 1939. Health concerns Toxins Some kinds of raw beans contain a harmful, tasteless toxin: the lectin phytohaemagglutinin, which must be removed by cooking. Red kidney beans are particularly toxic, but other types also pose risks of food poisoning. Many types of beans contain lectins, and kidney beans have the highest concentrations – especially red kidney beans. As few as 4 or 5 raw beans can cause severe stomachache, vomiting and diarrhoea. A recommended method is to boil the beans for at least ten minutes; under-cooked beans may be more toxic than raw beans. Cooking beans, without bringing them to a boil, in a slow cooker at a temperature well below boiling may not destroy toxins. A case of poisoning by butter beans used to make falafel was reported; the beans were used instead of traditional broad beans or chickpeas, soaked and ground without boiling, made into patties, and shallow fried. Bean poisoning is not well known in the medical community, and many cases may be misdiagnosed or never reported; figures appear not to be available. In the case of the UK National Poisons Information Service, available only to health professionals, the dangers of beans other than red beans were not flagged. Fermentation is used in some parts of Africa to improve the nutritional value of beans by removing toxins. Inexpensive fermentation improves the nutritional impact of flour from dry beans and improves digestibility, according to research co-authored by Emire Shimelis, from the Food Engineering Program at Addis Ababa University. Beans are a major source of dietary protein in Kenya, Malawi, Tanzania, Uganda and Zambia. Bacterial infection from bean sprouts It is common to make beansprouts by letting some types of bean, often mung beans, germinate in moist and warm conditions; beansprouts may be used as ingredients in cooked dishes, or eaten raw or lightly cooked. There have been many outbreaks of disease from bacterial contamination of beansprouts that were not thoroughly cooked, often by salmonella, listeria, and Escherichia coli, some causing significant mortality. Flatulence Many edible beans, including broad beans, navy beans, kidney beans and soybeans, contain oligosaccharides (particularly raffinose and stachyose), a type of sugar molecule also found in cabbage. An anti-oligosaccharide enzyme is necessary to properly digest these sugar molecules. As a normal human digestive tract does not contain any anti-oligosaccharide enzymes, consumed oligosaccharides are typically digested by bacteria in the large intestine. This digestion process produces gases, such as methane as a byproduct, which are then released as flatulence.
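The nutrition figures quoted above can be cross-checked with a short calculation. The sketch below is illustrative only: it assumes the standard Atwater factors (4 kcal per gram of protein and of carbohydrate) and a US adult Daily Value of 90 mg for vitamin C, neither of which is stated in the text.

# Cross-check of the green-bean figures quoted in the Nutrition section.
# Assumptions (not from the article): Atwater factors of 4 kcal/g for
# protein and carbohydrate; US adult Daily Value of 90 mg vitamin C.
PER_100G = {"water_g": 90, "carbohydrate_g": 7, "protein_g": 2}

# Energy estimate from macronutrients alone (fat is negligible):
energy_kcal = 4 * PER_100G["carbohydrate_g"] + 4 * PER_100G["protein_g"]
print(energy_kcal)  # 36 kcal, close to the 31 kcal cited; part of the
                    # carbohydrate is fiber, which yields less energy.

# Working back from "vitamin C (15% DV)" to an absolute amount:
VITAMIN_C_DV_MG = 90  # assumed adult Daily Value
print(0.15 * VITAMIN_C_DV_MG)  # ~13.5 mg vitamin C per serving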
Production The production data for legumes are published by FAO in three categories: Pulses dry: all mature and dry seeds of leguminous plants except soybeans and groundnuts. Oil crops: soybeans and groundnuts. Fresh vegetable: immature green fresh fruits of leguminous plants. The following is a summary of FAO data. Main crops of "Pulses, Total (dry)" are "Beans, dry [176]" 26.83 million tons, "Peas, dry [187]" 14.36 million tons, "Chick peas [191]" 12.09 million tons, "Cow peas [195]" 6.99 million tons, "Lentils [201]" 6.32 million tons, "Pigeon peas [197]" 4.49 million tons, "Broad beans, horse beans [181]" 4.46 million tons. In general, the consumption of pulses per capita has been decreasing since 1961. Exceptions are lentils and cowpeas. The world leader in production of dry beans (Phaseolus spp.) is India, followed by Myanmar (Burma) and Brazil. In Africa, the most important producer is Tanzania. No symbol = official figure, P = official figure, F = FAO estimate, * = unofficial/semi-official/mirror data, C = calculated figure, A = aggregate (may include official, semi-official or estimates) Source: UN Food and Agriculture Organization (FAO) See also Baked beans Jelly beans Mexican jumping bean List of bean soups Fassoulada – a bean soup List of edible seeds List of legume dishes References Bibliography External links Everett H. Bickley Collection, 1919–1980 Archives Center, National Museum of American History, Smithsonian Institution. Discovery Online: The Skinny On Why Beans Give You Gas Fermentation improves nutritional value of beans Cook's Thesaurus on Beans Edible legumes Pod vegetables Staple foods Vegan cuisine Vegetarian cuisine Crops Plant common names
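To make the Production figures above easier to compare, here is a minimal sketch that totals the listed "Pulses, Total (dry)" main crops. The tonnages and FAO item codes are copied directly from the text; no other data are assumed.

# FAO main "Pulses, Total (dry)" crops quoted above, in million tons.
pulses_mt = {
    "Beans, dry [176]": 26.83,
    "Peas, dry [187]": 14.36,
    "Chick peas [191]": 12.09,
    "Cow peas [195]": 6.99,
    "Lentils [201]": 6.32,
    "Pigeon peas [197]": 4.49,
    "Broad beans, horse beans [181]": 4.46,
}

total = sum(pulses_mt.values())
print(f"Listed main crops together: {total:.2f} million tons")  # 75.54
# Dry beans alone are roughly a third of the listed total:
print(f"Dry bean share: {pulses_mt['Beans, dry [176]'] / total:.0%}")  # 36%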
https://en.wikipedia.org/wiki/Breast
Breast
The breast is one of two prominences located on the upper ventral region of a primate's torso. Both females and males develop breasts from the same embryological tissues. In females, the breast serves as the mammary gland, which produces and secretes milk to feed infants. Subcutaneous fat covers and envelops a network of ducts that converge on the nipple, and these tissues give the breast its size and shape. At the ends of the ducts are lobules, or clusters of alveoli, where milk is produced and stored in response to hormonal signals. During pregnancy, the breast responds to a complex interaction of hormones, including estrogens, progesterone, and prolactin, that mediate the completion of its development, namely lobuloalveolar maturation, in preparation for lactation and breastfeeding. Humans are the only animals with permanent breasts. At puberty, estrogens, in conjunction with growth hormone, cause permanent breast growth in female humans. This happens to a much lesser extent in other primates—breast development in other primates generally only occurs with pregnancy. Along with their major function in providing nutrition for infants, female breasts have social and sexual characteristics. Breasts have been featured in ancient and modern sculpture, art, and photography. They can figure prominently in the perception of a woman's body and sexual attractiveness. A number of cultures associate breasts with sexuality and tend to regard bare breasts in public as immodest or indecent. Breasts, especially the nipples, are an erogenous zone. Etymology and terminology The English word breast derives from the Old English word ('breast, bosom') from Proto-Germanic (breast), from the Proto-Indo-European base (to swell, to sprout). The breast spelling conforms to the Scottish and North English dialectal pronunciations. The Merriam-Webster Dictionary states that "Middle English , [comes] from Old English ; akin to Old High German ..., Old Irish [belly], [and] Russian "; the first known usage of the term was before the 12th century. A large number of colloquial terms for breasts are used in English, ranging from fairly polite terms to vulgar or slang. Some vulgar slang expressions may be considered to be derogatory or sexist to women. Evolutionary development Humans are the only mammals whose breasts become permanently enlarged after sexual maturity (known in humans as puberty). The reason for this evolutionary change is unknown. Several hypotheses have been put forward: A link has been proposed to processes for synthesizing the endogenous steroid hormone precursor dehydroepiandrosterone, which take place in fat-rich regions of the body like the buttocks and breasts. These contributed to human brain development and played a part in increasing brain size. Breast enlargement for this purpose may have occurred as early as Homo ergaster (1.7–1.4 MYA). Other breast formation hypotheses may have then taken over as principal drivers. It has been suggested by zoologists Avishag and Amotz Zahavi that the size of the human breasts can be explained by the handicap theory of sexual dimorphism. On this view, larger breasts are an honest display of a woman's health and of her ability to grow and carry them throughout her life. Prospective mates can then evaluate the genes of a potential mate for their ability to sustain her health even with the additional energy-demanding burden she is carrying.
The zoologist Desmond Morris describes a sociobiological approach in his popular science book The Naked Ape. He suggests, by making comparisons with the other primates, that breasts evolved to replace swelling buttocks as a sex signal of ovulation. He notes how humans have, relatively speaking, large penises as well as large breasts. Furthermore, early humans adopted bipedalism and face-to-face coitus. He therefore suggested that enlarged sexual signals helped maintain the bond between a mated male and female even though they performed different duties and therefore were separated for lengths of time. The study The Evolution of the Human Breast (2001) proposed that the rounded shape of a woman's breast evolved to prevent the sucking infant offspring from suffocating while feeding at the teat; that is, because of the human infant's small jaw, which did not project from the face to reach the nipple, he or she might block the nostrils against the mother's breast if it were of a flatter form (cf. common chimpanzee). Theoretically, as the human jaw receded into the face, the woman's body compensated with round breasts. Ashley Montagu (1965) proposed that breasts came about as an adaptation for infant feeding for a different reason: the adoption of bipedalism by early human ancestors and the loss of body hair. Human upright stance meant infants had to be carried at the hip or shoulder instead of on the back as in the apes. This gives the infant less opportunity to find the nipple or the purchase to cling on to the mother's body hair. The mobility of the nipple on a large breast in most human females gives the infant more ability to find the nipple, grasp it, and feed. Other suggestions include simply that permanent breasts attracted mates, that "pendulous" breasts gave infants something to cling to, or that permanent breasts shared the function of a camel's hump, to store fat as an energy reserve. Anatomy In women, the breasts overlie the pectoralis major muscles and extend on average from the level of the second rib to the level of the sixth rib in the front of the rib cage; thus, the breasts cover much of the chest area and the chest walls. At the front of the chest, the breast tissue can extend from the clavicle (collarbone) to the middle of the sternum (breastbone). At the sides of the chest, the breast tissue can extend into the axilla (armpit), and can reach as far to the back as the latissimus dorsi muscle, extending from the lower back to the humerus bone (the bone of the upper arm). As a mammary gland, the breast is composed of differing layers of tissue, predominantly two types: adipose tissue and glandular tissue, which affects the lactation functions of the breasts. Morphologically the breast is tear-shaped. The superficial tissue layer (superficial fascia) is separated from the skin by 0.5–2.5 cm of subcutaneous fat (adipose tissue). The suspensory Cooper's ligaments are fibrous-tissue prolongations that radiate from the superficial fascia to the skin envelope. The female adult breast contains 14–18 irregular lactiferous lobes that converge at the nipple. The 2.0–4.5 mm milk ducts are immediately surrounded with dense connective tissue that supports the glands. Milk exits the breast through the nipple, which is surrounded by a pigmented area of skin called the areola. The size of the areola can vary widely among women. The areola contains modified sweat glands known as Montgomery's glands. These glands secrete an oily fluid that lubricates and protects the nipple during breastfeeding.
Volatile compounds in these secretions may also serve as an olfactory stimulus for the newborn's appetite. The dimensions and weight of the breast vary widely among women. A small-to-medium-sized breast weighs 500 grams (1.1 pounds) or less, and a large breast can weigh approximately 750 to 1,000 grams (1.7 to 2.2 pounds) or more. The tissue composition ratios of the breast also vary among women. Some women's breasts have a higher proportion of glandular tissue than of adipose or connective tissues. The fat-to-connective-tissue ratio determines the density or firmness of the breast. During a woman's life, her breasts change size, shape, and weight due to hormonal changes during puberty, the menstrual cycle, pregnancy, breastfeeding, and menopause. Glandular structure The breast is an apocrine gland that produces the milk used to feed an infant. The nipple of the breast is surrounded by the areola (nipple-areola complex). The areola has many sebaceous glands, and the skin color varies from pink to dark brown. The basic units of the breast are the terminal duct lobular units (TDLUs), which produce the fatty breast milk. They give the breast its offspring-feeding functions as a mammary gland. They are distributed throughout the body of the breast. Approximately two-thirds of the lactiferous tissue is within 30 mm of the base of the nipple. The terminal lactiferous ducts drain the milk from TDLUs into 4–18 lactiferous ducts, which drain to the nipple. The milk-glands-to-fat ratio is 2:1 in a lactating woman, and 1:1 in a non-lactating woman. In addition to the milk glands, the breast is also composed of connective tissues (collagen, elastin), white fat, and the suspensory Cooper's ligaments. Sensation in the breast is provided by the peripheral nervous system innervation by means of the front (anterior) and side (lateral) cutaneous branches of the fourth-, fifth-, and sixth intercostal nerves. The T-4 nerve (Thoracic spinal nerve 4), which innervates the dermatomic area, supplies sensation to the nipple-areola complex. Lymphatic drainage Approximately 75% of the lymph from the breast travels to the axillary lymph nodes on the same side of the body, whilst 25% of the lymph travels to the parasternal nodes (beside the sternum bone). A small amount of remaining lymph travels to the other breast and to the abdominal lymph nodes. The subareolar region has a lymphatic plexus known as the "subareolar plexus of Sappey". The axillary lymph nodes include the pectoral (chest), subscapular (under the scapula), and humeral (humerus-bone area) lymph-node groups, which drain to the central axillary lymph nodes and to the apical axillary lymph nodes. The lymphatic drainage of the breasts is especially relevant to oncology because breast cancer is common to the mammary gland, and cancer cells can metastasize (break away) from a tumour and be dispersed to other parts of the body by means of the lymphatic system. Shape, texture, and support The morphologic variations in the size, shape, volume, tissue density, pectoral locale, and spacing of the breasts determine their natural shape, appearance, and position on a woman's chest. Breast size and other characteristics do not predict the fat-to-milk-gland ratio or the potential for the woman to nurse an infant. The size and the shape of the breasts are influenced by normal-life hormonal changes (thelarche, menstruation, pregnancy, menopause) and medical conditions (e.g. virginal breast hypertrophy). 
The shape of the breasts is naturally determined by the support of the suspensory Cooper's ligaments, the underlying muscle and bone structures of the chest, and by the skin envelope. The suspensory ligaments sustain the breast from the clavicle (collarbone) and the clavico-pectoral fascia (collarbone and chest) by traversing and encompassing the fat and milk-gland tissues. The breast is positioned, affixed to, and supported upon the chest wall, while its shape is established and maintained by the skin envelope. In most women, one breast is slightly larger than the other. More obvious and persistent asymmetry in breast size occurs in up to 25% of women. While it is a common belief that breastfeeding causes breasts to sag, researchers have found that a woman's breasts sag due to four key factors: cigarette smoking, number of pregnancies, gravity, and weight loss or gain. The base of each breast is attached to the chest by the deep fascia over the pectoralis major muscles. The space between the breast and the pectoralis major muscle, called retromammary space, gives mobility to the breast. The chest (thoracic cavity) progressively slopes outwards from the thoracic inlet (atop the breastbone) and above to the lowest ribs that support the breasts. The inframammary fold, where the lower portion of the breast meets the chest, is an anatomic feature created by the adherence of the breast skin and the underlying connective tissues of the chest; the IMF is the lower-most extent of the anatomic breast. Normal breast tissue typically has a texture that feels nodular or granular, to an extent that varies considerably from woman to woman. Development The breasts are principally composed of adipose, glandular, and connective tissues. Because these tissues have hormone receptors, their sizes and volumes fluctuate according to the hormonal changes particular to thelarche (sprouting of breasts), menstruation (egg production), pregnancy (reproduction), lactation (feeding of offspring), and menopause (end of menstruation). Puberty The morphological structure of the human breast is identical in males and females until puberty. For pubescent girls in thelarche (the breast-development stage), the female sex hormones (principally estrogens) in conjunction with growth hormone promote the sprouting, growth, and development of the breasts. During this time, the mammary glands grow in size and volume and begin resting on the chest. These development stages of secondary sex characteristics (breasts, pubic hair, etc.) are illustrated in the five-stage Tanner Scale. During thelarche the developing breasts are sometimes of unequal size, and usually the left breast is slightly larger. This condition of asymmetry is transitory and statistically normal in female physical and sexual development. Medical conditions can cause overdevelopment (e.g., virginal breast hypertrophy, macromastia) or underdevelopment (e.g., tuberous breast deformity, micromastia) in girls and women. Approximately two years after the onset of puberty (a girl's first menstrual cycle), estrogen and growth hormone stimulate the development and growth of the glandular fat and suspensory tissues that compose the breast. This continues for approximately four years until the final shape of the breast (size, volume, density) is established at about the age of 21. Mammoplasia (breast enlargement) in girls begins at puberty, unlike all other primates in which breasts enlarge only during lactation. 
Changes during the menstrual cycle During the menstrual cycle, the breasts are enlarged by premenstrual water retention and temporary growth. Pregnancy and breastfeeding The breasts reach full maturity only when a woman's first pregnancy occurs. Changes to the breasts are among the first signs of pregnancy. The breasts become larger, the nipple-areola complex becomes larger and darker, the Montgomery's glands enlarge, and veins sometimes become more visible. Breast tenderness during pregnancy is common, especially during the first trimester. By mid-pregnancy, the breast is physiologically capable of lactation and some women can express colostrum, a form of breast milk. Pregnancy causes elevated levels of the hormone prolactin, which has a key role in the production of milk. However, milk production is blocked by the hormones progesterone and estrogen until after delivery, when progesterone and estrogen levels plummet. Menopause At menopause, breast atrophy occurs. The breasts can decrease in size when the levels of circulating estrogen decline. The adipose tissue and milk glands also begin to wither. The breasts can also become enlarged from adverse side effects of combined oral contraceptive pills. The size of the breasts can also increase and decrease in response to weight fluctuations. Physical changes to the breasts are often recorded in the stretch marks of the skin envelope; they can serve as historical indicators of the increments and the decrements of the size and volume of a woman's breasts throughout the course of her life. Breastfeeding The primary function of the breasts, as mammary glands, is the nourishing of an infant with breast milk. Milk is produced in milk-secreting cells in the alveoli. When the breasts are stimulated by the suckling of her baby, the mother's brain secretes oxytocin. High levels of oxytocin trigger the contraction of muscle cells surrounding the alveoli, causing milk to flow along the ducts that connect the alveoli to the nipple. Full-term newborns have an instinct and a need to suck on a nipple, and breastfed babies nurse for both nutrition and for comfort. Breast milk provides all necessary nutrients for the first six months of life, and then remains an important source of nutrition, alongside solid foods, until at least one or two years of age. Clinical significance The breast is susceptible to numerous benign and malignant conditions. The most frequent benign conditions are puerperal mastitis, fibrocystic breast changes and mastalgia. Lactation unrelated to pregnancy is known as galactorrhea. It can be caused by certain drugs (such as antipsychotic medications), extreme physical stress, or endocrine disorders. Lactation in newborns is caused by hormones from the mother that crossed into the baby's bloodstream during pregnancy. Breast cancer Breast cancer is the most common cause of cancer death among women and it is one of the leading causes of death among women. Factors that appear to be implicated in decreasing the risk of breast cancer are regular breast examinations by health care professionals, regular mammograms, self-examination of breasts, healthy diet, exercise to decrease excess body fat, and breastfeeding. Male breasts Both females and males develop breasts from the same embryological tissues. Normally, males produce lower levels of estrogens and higher levels of androgens, namely testosterone, which suppress the effects of estrogens in developing excessive breast tissue. 
In boys and men, abnormal breast development is manifested as gynecomastia, the consequence of a biochemical imbalance between the normal levels of estrogen and testosterone in the male body. Around 70% of boys temporarily develop breast tissue during adolescence. The condition usually resolves by itself within two years. When male lactation occurs, it is considered a symptom of a disorder of the pituitary gland. Plastic surgery Plastic surgery can be performed to augment or reduce the size of breasts, or reconstruct the breast in cases of deformative disease, such as breast cancer. Breast augmentation and breast lift (mastopexy) procedures are done only for cosmetic reasons, whereas breast reduction is sometimes medically indicated. In cases where a woman's breasts are severely asymmetrical, surgery can be performed to either enlarge the smaller breast, reduce the size of the larger breast, or both. Breast augmentation surgery generally does not interfere with future ability to breastfeed. Breast reduction surgery more frequently leads to decreased sensation in the nipple-areola complex, and to low milk supply in women who choose to breastfeed. Implants can interfere with mammography (breast X-ray images). Society and culture General In Christian iconography, some works of art depict women with their breasts in their hands or on a platter, signifying that they died as martyrs by having their breasts severed; one example of this is Saint Agatha of Sicily. Femen is a feminist activist group which uses topless protests as part of their campaigns against sex tourism, religious institutions, sexism, and homophobia. Femen activists have been regularly detained by police in response to their protests. There is a long history of female breasts being used by comedians as comedy fodder (e.g., British comic Benny Hill's burlesque/slapstick routines). Art history In European pre-historic societies, sculptures of female figures with pronounced or highly exaggerated breasts were common. A typical example is the so-called Venus of Willendorf, one of many Paleolithic Venus figurines with ample hips and bosom. Artifacts such as bowls, rock carvings and sacred statues with breasts have been recorded from 15,000 BC up to late antiquity all across Europe, North Africa and the Middle East. Many female deities representing love and fertility were associated with breasts and breast milk. Figures of the Phoenician goddess Astarte were represented as pillars studded with breasts. Isis, an Egyptian goddess who represented, among many other things, ideal motherhood, was often portrayed as suckling pharaohs, thereby confirming their divine status as rulers. Even certain male deities representing regeneration and fertility were occasionally depicted with breast-like appendices, such as the river god Hapy who was considered to be responsible for the annual overflowing of the Nile. Female breasts were also prominent in Minoan art in the form of the famous Snake Goddess statuettes, and a few other pieces, though most female breasts are covered. In Ancient Greece there were several cults worshipping the "Kourotrophos", the suckling mother, represented by goddesses such as Gaia, Hera and Artemis. The worship of deities symbolized by the female breast in Greece became less common during the first millennium. The popular adoration of female goddesses decreased significantly during the rise of the Greek city states, a legacy which was passed on to the later Roman Empire.
During the middle of the first millennium BC, Greek culture experienced a gradual change in the perception of female breasts. Women in art were covered in clothing from the neck down, including female goddesses like Athena, the patron of Athens who represented heroic endeavor. There were exceptions: Aphrodite, the goddess of love, was more frequently portrayed fully nude, though in postures that were intended to portray shyness or modesty, a portrayal that has been compared to modern pin ups by historian Marilyn Yalom. Although nude men were depicted standing upright, most depictions of female nudity in Greek art occurred "usually with drapery near at hand and with a forward-bending, self-protecting posture". A popular legend at the time was of the Amazons, a tribe of fierce female warriors who socialized with men only for procreation and even removed one breast to become better warriors (the idea being that the right breast would interfere with the operation of a bow and arrow). The legend was a popular motif in art during Greek and Roman antiquity and served as an antithetical cautionary tale. Body image Many women regard their breasts as important to their sexual attractiveness, as a sign of femininity that is important to their sense of self. A woman with smaller breasts may regard her breasts as less attractive. Clothing Because breasts are mostly fatty tissue, their shape can—within limits—be molded by clothing, such as foundation garments. Bras are commonly worn by about 90% of Western women, and are often worn for support. The social norm in most Western cultures is to cover breasts in public, though the extent of coverage varies depending on the social context. Some religions ascribe a special status to the female breast, either in formal teachings or through symbolism. Islam forbids free women from exposing their breasts in public. Many cultures, including Western cultures in North America, associate breasts with sexuality and tend to regard bare breasts as immodest or indecent. In some cultures, like the Himba in northern Namibia, bare-breasted women are normal. In some African cultures, for example, the thigh is regarded as highly sexualised and never exposed in public, but breast exposure is not taboo. In a few Western countries and regions female toplessness at a beach is acceptable, although it may not be acceptable in the town center. Social attitudes and laws regarding breastfeeding in public vary widely. In many countries, breastfeeding in public is common, legally protected, and generally not regarded as an issue. However, even though the practice may be legal or socially accepted, some mothers may nevertheless be reluctant to expose a breast in public to breastfeed due to actual or potential objections by other people, negative comments, or harassment. It is estimated that around 63% of mothers across the world have publicly breast-fed. Bare-breasted women are legal and culturally acceptable at public beaches in Australia and much of Europe. Filmmaker Lina Esco made a film entitled Free the Nipple, which is about "...laws against female toplessness or restrictions on images of female, but not male, nipples", which Esco states is an example of sexism in society. Sexual characteristic In some cultures, breasts play a role in human sexual activity. In Western culture, breasts have a "...hallowed sexual status, arguably more fetishized than either sex's genitalia". Breasts and especially the nipples are among the various human erogenous zones. 
They are sensitive to the touch as they have many nerve endings, and it is common to press or massage them with hands or orally before or during sexual activity. During sexual arousal, breast size increases, venous patterns across the breasts become more visible, and nipples harden. Compared to other primates, human breasts are proportionately large throughout adult females' lives. Some writers have suggested that they may have evolved as a visual signal of sexual maturity and fertility. Many people regard bare female breasts as aesthetically pleasing or erotic, and they can elicit heightened sexual desires in men in many cultures. In the ancient Indian work the Kama Sutra, light scratching of the breasts with nails and biting with teeth are considered erotic. Some people show a sexual interest in female breasts distinct from that of the person, which may be regarded as a breast fetish. A number of Western fashions include clothing which accentuates the breasts, such as the use of push-up bras and décolleté (plunging neckline) gowns and blouses which show cleavage. While U.S. culture prefers breasts that are youthful and upright, some cultures venerate women with drooping breasts, indicating mothering and the wisdom of experience. Research conducted at the Victoria University of Wellington showed that breasts are often the first thing men look at, and for a longer time than other body parts. The writers of the study had initially speculated that the reason for this was endocrinological, with larger breasts indicating higher levels of estrogen and signaling greater fertility, but the researchers said that "Men may be looking more often at the breasts because they are simply aesthetically pleasing, regardless of the size." Some women report achieving an orgasm from nipple stimulation, but this is rare. Research suggests that the orgasms are genital orgasms, and may also be directly linked to "the genital area of the brain". In these cases, it seems that sensation from the nipples travels to the same part of the brain as sensations from the vagina, clitoris and cervix. Nipple stimulation may trigger uterine contractions, which then produce a sensation in the genital area of the brain. Anthropomorphic geography There are many mountains named after the breast because they resemble it in appearance, and so are objects of religious and ancestral veneration as symbols of fertility and well-being. In Asia, there was "Breast Mountain", which had a cave where the Buddhist monk Bodhidharma (Da Mo) spent much time in meditation. Other such breast mountains are Mount Elgon on the Uganda–Kenya border; the Maiden Paps in Scotland; the "Maiden's breast mountains" on Talim Island in the Philippines; the twin hills known as the Paps of Anu ('the breasts of Anu') near Killarney in Ireland; a 2,086 m peak in Spain; peaks in Thailand and Puerto Rico; and the Breasts of Aphrodite in Mykonos, among many others. In the United States, the Teton Range is named after the French word for 'nipple'. See also Udder References Bibliography Morris, Desmond. The Naked Ape: A Zoologist's Study of the Human Animal. Bantam Books, Canada, 1967. External links Breastfeeding Human sexuality
https://en.wikipedia.org/wiki/Outline%20of%20biology
Outline of biology
Biology – The natural science that studies life. Areas of focus include structure, function, growth, origin, evolution, distribution, and taxonomy. History of biology History of anatomy History of biochemistry History of biotechnology History of ecology History of genetics History of evolutionary thought: The eclipse of Darwinism – Catastrophism – Lamarckism – Orthogenesis – Mutationism – Structuralism – Vitalism Modern (evolutionary) synthesis History of molecular evolution History of speciation History of medicine History of model organisms History of molecular biology Natural history History of plant systematics Overview Biology Science Life Properties: Adaptation – Energy processing – Growth – Order – Regulation – Reproduction – Response to environment Biological organization: atom – molecule – cell – tissue – organ – organ system – organism – population – community – ecosystem – biosphere Approach: Reductionism – emergent property – mechanistic Biology as a science: Natural science Scientific method: observation – research question – hypothesis – testability – prediction – experiment – data – statistics Scientific theory – scientific law Research method List of research methods in biology Scientific literature List of biology journals: peer review Chemical basis Outline of biochemistry Atoms and molecules matter – element – atom – proton – neutron – electron– Bohr model – isotope – chemical bond – ionic bond – ions – covalent bond – hydrogen bond – molecule Water: properties of water – solvent – cohesion – surface tension – Adhesion – pH Organic compounds: carbon – carbon-carbon bonds – hydrocarbon – monosaccharide – amino acids – nucleotide – functional group – monomer – adenosine triphosphate (ATP) – lipids – oil – sugar – vitamins – neurotransmitter – wax Macromolecules: polysaccharide: cellulose – carbohydrate – chitin – glycogen – starch proteins: primary structure – secondary structure – tertiary structure – conformation – native state – protein folding – enzyme – receptor – transmembrane receptor – ion channel – membrane transporter – collagen – pigments: chlorophyll – carotenoid – xanthophyll – melanin – prion lipids: cell membrane – fats – phospholipids nucleic acids: DNA – RNA Cells Outline of cell biology Cell structure: Cell coined by Robert Hooke Techniques: cell culture – microscope – light microscope – electron microscopy – SEM – TEM Organelles: Cytoplasm – Vacuole – Peroxisome – Plastid Cell nucleus Nucleoplasm – Nucleolus – Chromatin – Chromosome Endomembrane system Nuclear envelope – Endoplasmic reticulum – Golgi apparatus – Vesicles – Lysosome Energy creators: Mitochondrion and Chloroplast Biological membranes: Plasma membrane – Mitochondrial membrane – Chloroplast membrane Other subcellular features: Cell wall – pseudopod – cytoskeleton – mitotic spindle – flagellum – cilium Cell transport: Diffusion – Osmosis – isotonic – active transport – phagocytosis Cellular reproduction: cytokinesis – centromere – meiosis Nuclear reproduction: mitosis – interphase – prophase – metaphase – anaphase – telophase programmed cell death – apoptosis – cell senescence Metabolism: enzyme - activation energy - proteolysis – cooperativity Cellular respiration Glycolysis – Pyruvate dehydrogenase complex – Citric acid cycle – electron transport chain – fermentation Photosynthesis light-dependent reactions – Calvin cycle Cell cycle mitosis – chromosome – haploid – diploid – polyploidy – prophase – metaphase – anaphase – cytokinesis – meiosis Genetics Outline of Genetics Inheritance 
heredity – Mendelian inheritance – gene – locus – trait – allele – polymorphism – homozygote – heterozygote – hybrid – hybridization – dihybrid cross – Punnett square – inbreeding genotype–phenotype distinction – genotype – phenotype – dominant gene – recessive gene genetic interactions – Mendel's law of segregation – genetic mosaic – maternal effect – penetrance – complementation – suppression – epistasis – genetic linkage Model organisms: Drosophila – Arabidopsis – Caenorhabditis elegans – mouse – Saccharomyces cerevisiae – Escherichia coli – Lambda phage – Xenopus – chicken – zebrafish – Ciona intestinalis – amphioxus Techniques: genetic screen – linkage map – genetic map DNA Nucleic acid double helix Nucleobase: adenine (A) – cytosine (C) – guanine (G) – thymine (T) – uracil (U) DNA replication – mutation – mutation rate – proofreading – DNA mismatch repair – point mutation – crossover – recombination – plasmid – transposon Gene expression Central dogma of molecular biology: nucleosome – genetic code – codon – transcription factor – transcription – translation – RNA – histone – telomere heterochromatin – promoter – RNA polymerase Protein biosynthesis – ribosomes Gene regulation operon – activator – repressor – corepressor – enhancer – alternative splicing Genomes DNA sequencing – high throughput sequencing – bioinformatics Proteome – proteomics – metabolome – metabolomics DNA paternity testing Biotechnology (see also Outline of biochemical techniques and Molecular biology): DNA fingerprinting – genetic fingerprint – microsatellite – gene knockout – imprinting – RNA interference Genomics – computational biology – bioinformatics – gel electrophoresis – transformation – PCR – PCR mutagenesis – primer – chromosome walking – RFLP – restriction enzyme – sequencing – shotgun sequencing – cloning – culture – DNA microarray – electrophoresis – protein tag – affinity chromatography – x-ray diffraction – Proteomics – mass spectrometry Genes, development, and evolution Apoptosis French flag model Pattern formation Evo-devo gene toolkit Transcription factor Evolution Outline of evolution (see also evolutionary biology) Evolutionary processes evolution microevolution: adaptation – selection – natural selection – directional selection – sexual selection – genetic drift – sexual reproduction – asexual reproduction – colony – allele frequency – neutral theory of molecular evolution – population genetics – Hardy–Weinberg principle Speciation Species Phylogeny Lineage (evolution) – evolutionary tree – cladistics – species – taxon – clade – monophyletic – polyphyly – paraphyly – heredity – phenotypic trait – nucleic acid sequence – synapomorphy – homology – molecular clock – outgroup (cladistics) – maximum parsimony (phylogenetics) – Computational phylogenetics Linnaean taxonomy: Carl Linnaeus – domain (biology) – kingdom (biology) – phylum – class (biology) – order (biology) – family (biology) – genus – species Three-domain system: archaea – bacteria – eukaryote – protist – fungi – plant – animal Binomial nomenclature: scientific classification – Homo sapiens History of life Origin of life – hierarchy of life – Miller–Urey experiment Macroevolution: adaptive radiation – convergent evolution – extinction – mass extinction – fossil – taphonomy – geologic time – plate tectonics – continental drift – vicariance – Gondwana – Pangaea – endosymbiosis Diversity Bacteria and Archaea Protists Plant diversity Green algae Chlorophyta Charophyta Bryophytes Marchantiophyta Anthocerotophyta Moss Pteridophytes 
Lycopodiophyta Polypodiophyta Seed plants Cycadophyta Ginkgophyta Pinophyta Gnetophyta Magnoliophyta Fungi Yeast – mold (fungus) – mushroom Animal diversity Invertebrates: sponge – cnidarian – coral – jellyfish – Hydra (genus) – sea anemone flatworms – nematodes arthropods: crustacean – chelicerata – myriapoda – arachnids – insects – annelids – molluscs Vertebrates: fishes: – agnatha – chondrichthyes – osteichthyes Tiktaalik tetrapods amphibians reptiles birds flightless birds – Neognathae – dinosaurs mammals placental: primates marsupial monotreme Viruses DNA viruses – RNA viruses – retroviruses Plant form and function Plant body Organ systems: root – shoot – stem – leaf – flower Plant nutrition and transport Vascular tissue – bark (botany) – Casparian strip – turgor pressure – xylem – phloem – transpiration – wood – trunk (botany) Plant development tropism – taxis seed – cotyledon – meristem – apical meristem – vascular cambium – cork cambium alternation of generations – gametophyte – antheridium – archegonium – sporophyte – spore – sporangium Plant reproduction angiosperms – flower – reproduction – sperm – pollination – self-pollination – cross-pollination – nectar – pollen Plant responses Plant hormone – ripening – fruit – Ethylene as a plant hormone – toxin – pollinator – phototropism – skototropism – phototropin – phytochrome – auxin – photoperiodism – gravity Animal form and function General features: morphology (biology) – anatomy – physiology – biological tissues – organ (biology) – organ systems Water and salt balance Body fluids: osmotic pressure – ionic composition – volume Diffusion – osmosis) – Tonicity – sodium – potassium – calcium – chloride Excretion Nutrition and digestion Digestive system: stomach – intestine – liver – nutrition – primary nutritional groups metabolism – kidney – excretion Breathing Respiratory system: lungs Circulation Circulatory system: heart – artery – vein – capillary – Blood – blood cell Lymphatic system: lymph node Muscle and movement Skeletal system: bone – cartilage – joint – tendon Muscular system: muscle – actin – myosin – reflex Nervous system Neuron – dendrite – axon – nerve – electrochemical gradient – electrophysiology – action potential – signal transduction – synapse – receptor – Central nervous system: brain – spinal cord limbic system – memory – vestibular system Peripheral nervous system Sensory nervous system: eye – vision – audition – proprioception – olfaction – Integumentary system: skin cell Hormonal control Endocrine system: hormone Animal reproduction Reproductive system: testes – ovary – pregnancy Fish#Reproductive system Mammalian reproductive system Human reproductive system Mammalian penis Os penis Penile spines Genitalia of bottlenose dolphins Genitalia of marsupials Equine reproductive system Even-toed ungulate#Genitourinary system Bull#Reproductive anatomy Carnivora#Reproductive system Fossa (animal)#External genitalia Female genitalia of spotted hyenas Cat anatomy#Genitalia Genitalia of dogs Canine penis Bulbus glandis Animal development stem cell – blastula – gastrula – egg (biology) – fetus – placenta - gamete – spermatid – ovum – zygote – embryo – cellular differentiation – morphogenesis – homeobox Immune system antibody – host – vaccine – immune cell – AIDS – T cell – leucocyte Animal behavior Behavior: mating – animal communication – seek shelter – migration (ecology) Fixed action pattern Altruism (biology) Ecology Outline of ecology Ecosystems: Ecology – Biodiversity – habitat – plankton – thermocline – saprobe 
Abiotic component: water – light – radiation – temperature – humidity – atmosphere – acidity Microbe – biomass – organic matter – decomposer – decomposition – carbon – nutrient cycling – solar energy – topography – tilt – Windward and leeward – precipitation Temperature – biome Populations Population ecology: organism – geographical area – sexual reproduction – population density – population growth – birth rate – death Rate – immigration rate – exponential growth – carrying capacity – logistic function – natural environment – competition (biology) – mating – biological dispersal – endemic (ecology) – growth curve (biology) – habitat – drinking water – resource – human population – technology – Green revolution Communities Community (ecology) – ecological niche – keystone species – mimicry – symbiosis – pollination – mutualism – commensalism – parasitism – predation – invasive species – environmental heterogeneity – edge effect Consumer–resource interactions: food chain – food web – autotroph – heterotrophs – herbivore – carnivore – trophic level Biosphere lithosphere – atmosphere – hydrosphere biogeochemical cycle: nitrogen cycle – carbon cycle – water cycle Climate change: Fossil fuel – coal – oil – natural gas – World energy consumption – Climate change feedback – Albedo – water vapor Carbon sink Conservation Biodiversity – habitats – Ecosystem services – biodiversity loss – extinction – Sustainability – Holocene extinction Branches Anatomy – study of form in animals, plants and other organisms, or specifically in humans. Simply, the study of internal structure of living organisms. Comparative anatomy – the study of evolution of species through similarities and differences in their anatomy. Osteology – study of bones. Osteomyoarthrology – the study of the movement apparatus, including bones, joints, ligaments and muscles. Viscerology – the study of organs Neuroanatomy – the study of the nervous system. Histology – also known as microscopic anatomy or microanatomy, the branch of biology which studies the microscopic anatomy of biological tissues. Astrobiology – study of origin, early-evolution, distribution, and future of life in the universe. Also known as exobiology, and bioastronomy. Bioarchaeology – study of human remains from archaeological sites. Biochemistry – study of the chemical reactions required for life to exist and function, usually a focus on the cellular level. Biocultural anthropology – the study of the relations between human biology and culture. Biogeography – study of the distribution of species spatially and temporally. Biolinguistics – study of biology and the evolution of language. Biological economics – an interdisciplinary field in which the interaction of human biology and economics is studied. Biophysics – study of biological processes through the methods traditionally used in the physical sciences. Biomechanics – the study of the mechanics of living beings. Neurophysics – study of the development of the nervous system on a molecular level. Quantum biology – application of quantum mechanics and theoretical chemistry to biological objects and problems. Virophysics – study of mechanics and dynamics driving the interactions between virus and cells. Biotechnology – new and sometimes controversial branch of biology that studies the manipulation of living matter, including genetic modification and synthetic biology. Bioinformatics – use of information technology for the study, collection, and storage of genomic and other biological data. 
Bioengineering – study of biology through the means of engineering, with an emphasis on applied knowledge and especially related to biotechnology.
Synthetic biology – research integrating biology and engineering; construction of biological functions not found in nature.
Botany – study of plants.
Photobiology – scientific study of the interactions of light (technically, non-ionizing radiation) and living organisms; the field includes the study of photosynthesis, photomorphogenesis, visual processing, circadian rhythms, bioluminescence, and ultraviolet radiation effects.
Phycology – scientific study of algae.
Plant physiology – subdiscipline of botany concerned with the functioning, or physiology, of plants.
Cell biology – study of the cell as a complete unit, and the molecular and chemical interactions that occur within a living cell.
Histology – study of the anatomy of cells and tissues of plants and animals using microscopy.
Chronobiology – field of biology that examines periodic (cyclic) phenomena in living organisms and their adaptation to solar- and lunar-related rhythms.
Dendrochronology – study of tree rings, using them to date the exact year they were formed in order to analyze atmospheric conditions during different periods in natural history.
Developmental biology – study of the processes through which an organism forms, from zygote to full structure.
Embryology – study of the development of embryos (from fecundation to birth).
Gerontology – study of aging processes.
Ecology – study of the interactions of living organisms with one another and with the non-living elements of their environment.
Epidemiology – major component of public health research, studying factors affecting the health of populations.
Evolutionary biology – study of the origin and descent of species over time.
Evolutionary developmental biology – field of biology that compares the developmental processes of different organisms to determine the ancestral relationship between them, and to discover how developmental processes evolved.
Paleobiology – discipline which combines the methods and findings of the life sciences with the methods and findings of the earth science of paleontology.
Paleoanthropology – the study of fossil evidence for human evolution, mainly using remains from extinct hominin and other primate species to determine the morphological and behavioral changes in the human lineage, as well as the environment in which human evolution occurred.
Paleobotany – study of fossil plants.
Paleontology – study of fossils and sometimes geographic evidence of prehistoric life.
Paleopathology – the study of pathogenic conditions observable in bones or mummified soft tissue, and of nutritional disorders, variation in stature or morphology of bones over time, evidence of physical trauma, and evidence of occupationally derived biomechanic stress.
Genetics – study of genes and heredity.
Quantitative genetics – study of phenotypes that vary continuously (in characters such as height or mass), as opposed to discretely identifiable phenotypes and gene-products (such as eye colour, or the presence of a particular biochemical).
Geobiology – study of the interactions between the physical Earth and the biosphere.
Marine biology – study of ocean ecosystems, plants, animals, and other living beings.
Microbiology – study of microscopic organisms (microorganisms) and their interactions with other living things.
Bacteriology – study of bacteria.
Immunology – study of immune systems in all organisms.
Mycology – study of fungi.
Parasitology – study of parasites and parasitism.
Virology – study of viruses.
Molecular biology – study of biology and biological functions at the molecular level, with some crossover from biochemistry.
Structural biology – a branch of molecular biology, biochemistry, and biophysics concerned with the molecular structure of biological macromolecules.
Neuroscience – study of the nervous system, including anatomy, physiology and emergent properties.
Behavioral neuroscience – study of physiological, genetic, and developmental mechanisms of behavior in humans and other animals.
Cellular neuroscience – study of neurons at a cellular level.
Cognitive neuroscience – study of biological substrates underlying cognition, with a focus on the neural substrates of mental processes.
Computational neuroscience – study of the information processing functions of the nervous system, and the use of digital computers to study the nervous system.
Developmental neuroscience – study of the cellular basis of brain development and the underlying mechanisms.
Molecular neuroscience – studies the biology of the nervous system with molecular biology, molecular genetics, protein chemistry and related methodologies.
Neuroanatomy – study of the anatomy of nervous tissue and neural structures of the nervous system.
Neuroendocrinology – studies the interaction between the nervous system and the endocrine system, that is, how the brain regulates the hormonal activity in the body.
Neuroethology – study of animal behavior and its underlying mechanistic control by the nervous system.
Neuroimmunology – study of the interactions between the nervous system and the immune system.
Neuropharmacology – study of how drugs affect cellular function in the nervous system.
Neurophysiology – study of the function (as opposed to structure) of the nervous system.
Systems neuroscience – studies the function of neural circuits and systems; an umbrella term encompassing a number of areas of study concerned with how nerve cells behave when connected together to form neural networks.
Physiology – study of the internal workings of organisms.
Endocrinology – study of the endocrine system.
Oncology – study of cancer processes, including virus or mutation, oncogenesis, angiogenesis and tissue remodelling.
Systems biology – computational modeling of biological systems.
Theoretical biology – the mathematical modeling of biological phenomena.
Zoology – study of animals, including classification, physiology, development, and behavior. Subbranches include:
Arthropodology – biological discipline concerned with the study of arthropods, a phylum of animals that include the insects, arachnids, crustaceans and others that are characterized by the possession of jointed limbs.
Acarology – study of the taxon of arachnids that contains mites and ticks.
Arachnology – scientific study of spiders and related animals such as scorpions, pseudoscorpions and harvestmen, collectively called arachnids.
Entomology – study of insects.
Coleopterology – study of beetles.
Lepidopterology – study of a large order of insects that includes moths and butterflies (called lepidopterans).
Myrmecology – scientific study of ants.
Carcinology – study of crustaceans.
Myriapodology – study of centipedes, millipedes, and other myriapods.
Ethology – scientific study of animal behavior, usually with a focus on behavior under natural conditions.
Helminthology – study of worms, especially parasitic worms.
Herpetology – study of amphibians (including frogs, toads, salamanders, newts, and gymnophiona) and reptiles (including snakes, lizards, amphisbaenids, turtles, terrapins, tortoises, crocodilians, and the tuataras).
Batrachology – subdiscipline of herpetology concerned with the study of amphibians alone.
Ichthyology – study of fishes; this includes bony fishes (Osteichthyes), cartilaginous fishes (Chondrichthyes), and jawless fishes (Agnatha).
Malacology – branch of invertebrate zoology which deals with the study of the Mollusca (mollusks or molluscs), the second-largest phylum of animals in terms of described species after the arthropods.
Teuthology – branch of malacology which deals with the study of cephalopods.
Mammalogy – study of mammals, a class of vertebrates with characteristics such as homeothermic metabolism, fur, four-chambered hearts, and complex nervous systems; also known as "mastology", "theriology", and "therology". There are about 4,200 different species of animals which are considered mammals.
Cetology – branch of marine mammal science that studies the approximately eighty species of whales, dolphins, and porpoises in the scientific order Cetacea.
Primatology – scientific study of primates.
Human biology – interdisciplinary field studying the range of humans and human populations via biology/life sciences, anthropology/social sciences, and applied/medical sciences.
Biological anthropology – subfield of anthropology that studies the physical morphology, genetics and behavior of the human genus, other hominins and hominids across their evolutionary development.
Human behavioral ecology – the study of behavioral adaptations (foraging, reproduction, ontogeny) from evolutionary and ecological perspectives (see behavioral ecology); it focuses on human adaptive responses (physiological, developmental, genetic) to environmental stresses.
Nematology – scientific discipline concerned with the study of nematodes, or roundworms.
Ornithology – scientific study of birds.
Biologists
Lists of notable biologists
List of notable biologists
List of Nobel Prize winners in physiology or medicine
Lists of biologists by author abbreviation
List of authors of names published under the ICZN
Lists of biologists by subject
List of biochemists
List of ecologists
List of neuroscientists
List of physiologists
See also
Bibliography of biology
Earliest known life forms
Invasion biology terminology
List of omics topics in biology
Related outlines
Outline of life forms
Outline of zoology
Outline of engineering
Outline of technology
List of social sciences
Journals
Biology journals
References
External links
OSU's Phylocode
The Tree of Life: a multi-authored, distributed Internet project containing information about phylogeny and biodiversity.
MIT video lecture series on biology
A wiki site for protocol sharing run from MIT.
Biology and Bioethics.
Biology online wiki dictionary.
Biology video sharing community.
What is Biotechnology: a voluntary program, Biotech for Beginners.
1,962
4,495
https://en.wikipedia.org/wiki/British%20thermal%20unit
British thermal unit
The British thermal unit (BTU or Btu) is a measure of heat, which is measured in units of energy. It is defined as the amount of heat required to raise the temperature of one pound of water by one degree Fahrenheit. It is part of the United States customary system of units. The modern SI unit for energy is the joule (J); one BTU equals about 1055 J (varying within the range 1054–1060 J depending on the specific definition; see below). While units of heat are often supplanted by energy units in scientific work, they are still used in some fields. For example, in the United States the price of natural gas is quoted in dollars per the amount of natural gas that would give 1 million BTUs (1 "MMBtu") of heat energy if burned.
Definitions
A BTU was originally defined as the amount of heat required to raise the temperature of 1 avoirdupois pound of liquid water by 1 degree Fahrenheit at a constant pressure of one atmosphere. There are several different definitions of the BTU that differ slightly. This reflects the fact that the temperature change of a mass of water due to the addition of a specific amount of heat (calculated in energy units, usually joules) depends slightly upon the water's initial temperature. As seen in the table below, definitions of the BTU based on different water temperatures vary by up to 0.5%.
Prefixes
The unit kBtu, where "k" stands for 1,000, is used in building energy use tracking and heating system sizing. The Energy Use Index (EUI) represents kBtu per square foot of conditioned floor area. The unit Mbtu is used in the natural gas and other industries to indicate 1,000 BTUs. However, there is an ambiguity in that the metric system (SI) uses the prefix "M" to indicate 'Mega-', one million (1,000,000). Even so, "MMbtu" is often used to indicate one million BTUs, particularly in the oil and gas industry. Energy analysts accustomed to the metric "k" ('kilo-') for 1,000 are more likely to use MBtu to represent one million, especially in documents where M represents one million in other energy or cost units, such as MW, MWh and $. The unit 'therm' is used to represent 100,000 BTUs. A decatherm is 10 therms or one MMBtu (million Btu). The unit quad is commonly used to represent one quadrillion (10^15) BTUs.
Conversions
One Btu is approximately: 1.054–1.060 kJ (kilojoules); 0.293 W⋅h (watt hours); 252–253 cal (calories); 0.252–0.253 kcal (kilocalories); 25,031 to 25,160 ft⋅pdl (foot-poundal); 778–782 ft⋅lbf (foot-pounds-force); 5.40395 (lbf/in²)⋅ft³. A Btu can be approximated as the heat produced by burning a single wooden kitchen match, or as the amount of energy it takes to lift a one-pound weight about 778 feet.
For natural gas
In natural gas pricing, the Canadian definition fixes an exact equivalence between the gigajoule and the million Btu. The energy content (high or low heating value) of a volume of natural gas varies with the composition of the natural gas, which means there is no universal conversion factor for energy to volume. One standard cubic foot of average natural gas yields ≈ 1030 Btu (between 1010 Btu and 1070 Btu, depending on quality, when burned). As a coarse approximation, 1,000 cubic feet of natural gas yields ≈ 1 million Btu ≈ 1 GJ. For natural gas price conversion, 1,000 cubic metres ≈ 36.9 million Btu.
BTU/h
The SI unit of power for heating and cooling systems is the watt. Btu per hour (Btu/h) is sometimes used in North America and the United Kingdom (in the latter mainly for air conditioning), though "Btu/h" is sometimes abbreviated to just "Btu". MBH—thousands of Btus per hour—is also common. 1 W is approximately 3.412 Btu/h; 1,000 Btu/h is approximately 293 W; and 1 hp is approximately 2,544 Btu/h.
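The following is a minimal Python sketch of these conversions, not part of the original article. It assumes the ISO value 1 Btu = 1055.06 J (the other definitions above differ from this by up to about 0.5%), and all constant and function names are illustrative rather than taken from any standard library.

```python
# Hedged sketch of Btu conversions, assuming the ISO value 1 Btu = 1055.06 J.

BTU_J = 1055.06        # joules per Btu (ISO definition; others vary by up to ~0.5%)
THERM_BTU = 100_000    # 1 therm = 100,000 Btu
QUAD_BTU = 1e15        # 1 quad = 10^15 Btu

def btu_to_joules(btu: float) -> float:
    """Convert an energy in Btu to joules."""
    return btu * BTU_J

def btu_to_kwh(btu: float) -> float:
    """Convert an energy in Btu to kilowatt hours (1 kWh = 3.6 MJ)."""
    return btu * BTU_J / 3.6e6

def btu_per_hour_to_watts(btu_h: float) -> float:
    """Convert a power in Btu/h to watts (1 hour = 3600 s)."""
    return btu_h * BTU_J / 3600.0

if __name__ == "__main__":
    print(btu_to_joules(1))              # ~1055.06 J, i.e. ~1.055 kJ per Btu
    print(btu_to_kwh(THERM_BTU))         # 1 therm ~ 29.3 kWh
    print(btu_per_hour_to_watts(1_000))  # 1,000 Btu/h ~ 293 W
    # Cross-check of the quad-per-year figure quoted below: ~33.4 GW.
    seconds_per_year = 365.25 * 24 * 3600
    print(QUAD_BTU * BTU_J / seconds_per_year / 1e9)
```

Run directly, the script reproduces the approximate equivalences given in this article, which makes it easy to sanity-check any one definition of the Btu against the others.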
Associated units
1 ton of cooling, a common unit in North American refrigeration and air conditioning applications, is 12,000 Btu/h. It is the rate of heat transfer needed to freeze 1 short ton (2,000 lb) of water into ice in 24 hours.
In the United States and Canada, the R-value that describes the performance of thermal insulation is typically quoted in square foot degree Fahrenheit hours per British thermal unit (ft²⋅°F⋅h/Btu). Through one square foot of insulation rated at R-value R, one BTU per hour of heat flows for every R degrees Fahrenheit of temperature difference across it (a worked example follows at the end of this article).
1 therm is defined in the United States and European Union as 100,000 BTU—but the U.S. uses the BTU59 °F while the EU uses the BTUIT. United Kingdom regulations were amended to replace therms with joules with effect from 1 January 2000; nevertheless, the therm is still used in natural gas pricing in the United Kingdom.
1 quad (short for quadrillion Btu) is 10^15 Btu, which is about 1 exajoule (1.055 × 10^18 J). Quads are used in the United States for representing the annual energy consumption of large economies: for example, the U.S. economy used 99.75 quads in 2005. One quad per year is about 33.43 gigawatts.
The Btu should not be confused with the Board of Trade Unit (BTU), an obsolete UK synonym for the kilowatt hour (kW⋅h).
The Btu is often used to express the conversion efficiency of heat into electrical energy in power plants. Figures are quoted in terms of the quantity of heat in Btu required to generate 1 kW⋅h of electrical energy. A typical coal-fired power plant works at about 10,500 Btu/kW⋅h, an efficiency of 32–33%.
The centigrade heat unit (CHU) is the amount of heat required to raise the temperature of one pound of water by one Celsius degree. It is equal to 1.8 BTU or 1,899 joules. In 1974, this unit was "still sometimes used" in the United Kingdom as an alternative to BTU. Another legacy unit for energy in the metric system is the calorie, which is defined as the amount of heat required to raise the temperature of one gram of water by one degree Celsius.
See also
Conversion of units
Latent heat
Metrication
Ton of refrigeration
Notes
References
External links
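To make the R-value relationship above concrete, here is a small Python sketch (the worked example referred to earlier). The wall area, temperatures, and R-13 rating are invented example numbers, and the function name is illustrative, not a standard API.

```python
# Steady-state heat flow through insulation rated in US R-value units
# (ft^2 * F * h / Btu): Q = area * temperature_difference / R, in Btu/h.

def heat_flow_btu_per_hour(area_ft2: float, delta_t_f: float, r_value: float) -> float:
    """Heat flow in Btu/h through an insulator of the given area and R-value."""
    return area_ft2 * delta_t_f / r_value

# Example: a 200 ft^2 wall insulated to R-13, 70 F inside and 20 F outside.
q = heat_flow_btu_per_hour(area_ft2=200, delta_t_f=70 - 20, r_value=13)
print(q)                    # ~769 Btu/h
print(q * 1055.06 / 3600)   # ~225 W, using 1 Btu = 1055.06 J
```

Doubling the R-value halves the heat flow, which is why insulation performance is quoted this way.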
1,963
4,513
https://en.wikipedia.org/wiki/Banshee
Banshee
A banshee (Modern Irish bean sí, from Old Irish ben síde, "woman of the fairy mound" or "fairy woman") is a female spirit in Irish folklore who heralds the death of a family member, usually by screaming, wailing, shrieking, or keening. Her name is connected to the mythologically important tumuli or "mounds" that dot the Irish countryside, which are known as síde (singular síd) in Old Irish.
Description
Sometimes she has long streaming hair and wears a grey cloak over a green dress, and her eyes are red from continual weeping. She may be dressed in white with red hair and a ghastly complexion, according to a firsthand account by Ann, Lady Fanshawe in her Memoirs; Lady Wilde's books provide further descriptions. In John O'Brien's Irish-English dictionary, the entry for Síth-Bhróg states: "hence bean-síghe, plural mná-síghe, she-fairies or women-fairies, credulously supposed by the common people to be so affected to certain families that they are heard to sing mournful lamentations about their houses by night, whenever any of the family labours under a sickness which is to end by death, but no families which are not of an ancient & noble Stock, are believed to be honoured with this fairy privilege".
Keening
In Ireland and parts of Scotland, a traditional part of mourning is the keening woman (bean chaointe), who wails a lament—in Irish, caoineadh ('weeping'), a word whose pronunciation varies among the dialects of Munster and Southern Galway, of Connacht (except South Galway) and (particularly West) Ulster, and of North and East Ulster, including Louth. This keening woman may in some cases be a professional, and the best keeners would be in high demand. Irish legend speaks of a lament being sung by a fairy woman, or banshee. She would sing it when a family member died or was about to die, even if the person had died far away and news of their death had not yet come. In those cases, her wailing would be the first warning the household had of the death. The banshee is also a predictor of death: if someone is about to enter a situation where it is unlikely they will come out alive, she will warn people by screaming or wailing, giving rise to the banshee also being known as a wailing woman. It is often stated that the banshee laments only the descendants of the pure Milesian stock of Ireland, sometimes clarified as surnames prefixed with O' and Mac, and some accounts even state that each family has its own banshee. One account, however, also included the Geraldines, as they had apparently become "more Irish than the Irish themselves", countering the lore ascribing banshees exclusively to those of Milesian stock. Other exceptions were the Bunworth Banshee, which heralded the death of the Rev. Charles Bunworth, a name of Anglo-Saxon origin, and the Rossmore banshee, which supposedly heralded the death of a member of the family of Baron Rossmore, whose ancestry was predominantly Scottish and Dutch. When several banshees appear at once, it indicates the death of someone great or holy. The tales sometimes recounted that the woman, though called a fairy, was a ghost, often of a specific murdered woman, or a mother who died in childbirth.
Origin
Most, though not all, surnames associated with banshees have the Ó or Mc/Mac prefix – that is, surnames of Goidelic origin, indicating a family native to the Insular Celtic lands rather than those of Norse, Anglo-Saxon, or Norman descent. Accounts reach as far back as 1380, to the publication of the Cathreim Thoirdhealbhaigh (Triumphs of Torlough) by Sean mac Craith.
Mentions of banshees can also be found in Norman literature of that time. The Ua Briain banshee is thought to be named Aibell, ruler of 25 other banshees who would always be in attendance on her. It is possible that this particular story is the source of the idea that the wailing of numerous banshees signifies the death of a great person. In some parts of Leinster, she is referred to as the bean chaointe (keening woman) whose wail can be so piercing that it shatters glass. In Scottish folklore, a similar creature is known as the bean nighe or ban nigheachain (little washerwoman) or nigheag na h-àth (little washer at the ford) and is seen washing the bloodstained clothes or armour of those who are about to die. In Welsh folklore, a similar creature is known as the cyhyraeth.
In popular culture
Banshees, or creatures based upon them, have appeared in many forms in popular culture.
See also
Baobhan Sith
Cailleach
Caoineag
Clíodhna
La Llorona
Madam Koi Koi
Psychopomp
Siren
White Lady (ghost)
Devil Bird, a similar omen in Sri Lankan folklore
References
Further reading
External links
1,975
4,545
https://en.wikipedia.org/wiki/BDSM
BDSM
BDSM is a variety of often erotic practices or roleplaying involving bondage, discipline, dominance and submission, sadomasochism, and other related interpersonal dynamics. Given the wide range of practices, some of which may be engaged in by people who do not consider themselves to be practising BDSM, inclusion in the BDSM community or subculture often is said to depend on self-identification and shared experience. The initialism BDSM is first recorded in a Usenet post from 1991, and is interpreted as a combination of the abbreviations B/D (Bondage and Discipline), D/s (Dominance and submission), and S/M (Sadism and Masochism). BDSM is now used as a catch-all phrase covering a wide range of activities, forms of interpersonal relationships, and distinct subcultures. BDSM communities generally welcome anyone with a non-normative streak who identifies with the community; this may include cross-dressers, body modification enthusiasts, animal roleplayers, rubber fetishists, and others.
Activities and relationships in BDSM are often characterized by the participants' taking on roles that are complementary and involve inequality of power; thus, the idea of informed consent of both partners is essential. The terms submissive and dominant are often used to distinguish these roles: the dominant partner ("dom") takes psychological control over the submissive ("sub"). The terms top and bottom are also used; the top is the instigator of an action while the bottom is the receiver of the action. The two sets of terms are subtly different: someone may choose to act as bottom to another person (by being whipped, for instance) purely recreationally, without any implication of being psychologically dominated, and submissives may be ordered to massage their dominant partners. Although the bottom carries out the action and the top receives it, they have not necessarily switched roles. The abbreviations sub and dom are frequently used instead of submissive and dominant. Sometimes the female-specific terms mistress, domme, and dominatrix are used to describe a dominant woman, instead of the sometimes gender-neutral term dom. Individuals who change between top/dominant and bottom/submissive roles—whether from relationship to relationship or within a given relationship—are called switches. The precise definition of roles and self-identification is a common subject of debate among BDSM participants.
Fundamentals
BDSM is an umbrella term for certain kinds of erotic behavior between consenting adults, encompassing various subcultures. Terms for roles vary widely among the subcultures. Top and dominant are widely used for those partner(s) in the relationship or activity who are, respectively, the physically active or controlling participants. Bottom and submissive are widely used for those partner(s) in the relationship or activity who are, respectively, the physically receptive or controlled participants. The interaction between tops and bottoms—where physical or mental control of the bottom is surrendered to the top—is sometimes known as "power exchange", whether in the context of an encounter or a relationship.
BDSM actions can often take place during a specific period of time agreed to by both parties, referred to as "play", a "scene", or a "session". Participants usually derive pleasure from this, even though many of the practices—such as inflicting pain or humiliation or being restrained—would be unpleasant under other circumstances.
Explicit sexual activity, such as sexual penetration, may occur within a session, but is not essential. For legal reasons, such explicit sexual interaction is seen only rarely in public play spaces and is sometimes banned by the rules of a party or playspace. Public playspaces—ranging from a party at an established community dungeon to a hosted play "zone" at a nightclub or social event—vary in what they allow: some have a policy of panties and nipple stickers for women (underwear for men), while some allow full nudity with explicit sexual acts.
The fundamental principles for the exercise of BDSM require that it be performed with the informed consent of all parties. Since the 1980s, many practitioners and organizations have adopted the motto (originally from the statement of purpose of GMSMA—a gay SM activist organization) safe, sane and consensual (SSC), which means that everything is based on safe activities, that all participants are of sufficiently sound mind to consent, and that all participants do consent. Mutual consent makes a clear legal and ethical distinction between BDSM and such crimes as sexual assault and domestic violence.
Some BDSM practitioners prefer a code of behavior that differs from SSC. Described as "risk-aware consensual kink" (RACK), this code shows a preference for a style in which the individual responsibility of the involved parties is emphasized more strongly, with each participant being responsible for their own well-being. Advocates of RACK argue that SSC can hamper discussion of risk because no activity is truly "safe", and that discussion of even low-risk possibilities is necessary for truly informed consent. They further argue that setting a discrete line between "safe" and "not-safe" activities ideologically denies consenting adults the right to evaluate risks versus rewards for themselves; that some adults will be drawn to certain activities regardless of the risk; and that BDSM play—particularly higher-risk play or edgeplay—should be treated with the same regard as extreme sports, with both respect and the demand that practitioners educate themselves and practice the higher-risk activities to decrease risk. RACK may be seen as focusing primarily upon awareness and informed consent, rather than accepted safe practices.
Consent is the most important criterion. The consent and compliance for a sadomasochistic situation can be granted only by people who can judge the potential results. For their consent, they must have relevant information (the extent to which the scene will go, potential risks, whether a safeword will be used and what it is, and so on) at hand and the necessary mental capacity to judge. The resulting consent and understanding is occasionally summarized in a written "contract", which is an agreement of what can and cannot take place.
BDSM play is usually structured such that it is possible for the consenting partner to withdraw their consent at any point during a scene; for example, by using a safeword that was agreed on in advance. Use of the agreed safeword (or occasionally a "safe symbol" such as dropping a ball or ringing a bell, especially when speech is restricted) is seen by some as an explicit withdrawal of consent. Failure to honor a safeword is considered serious misconduct and could constitute a crime, depending on the relevant law, since the bottom or top has explicitly revoked their consent to any actions that follow the use of the safeword.
For other scenes, particularly in established relationships, a safeword may be agreed to signify a warning ("this is getting too intense") rather than explicit withdrawal of consent; and a few choose not to use a safeword at all.
Terminology and subtypes
The initialism BDSM stands for: Bondage and discipline (B&D); Dominance and submission (D&s); Sadomasochism (or S&M). These terms replaced sadomasochism, as they more broadly cover BDSM activities and focus on the roles involved rather than on psychological pain. The model is only an attempt at phenomenological differentiation; individual tastes and preferences in the area of human sexuality may overlap among these areas. Under the initialism BDSM, these psychological and physiological facets are also included: male dominance, male submission, female dominance and female submission.
The term bondage describes the practice of physical restraint. Bondage is usually, but not always, a sexual practice. While bondage is a very popular variation within the larger field of BDSM, it is nevertheless sometimes differentiated from the rest of this field. A 2015 study of over 1,000 Canadians showed that about half of all men held fantasies of bondage, and almost half of all women did as well. In a strict sense, bondage means binding the partner by tying their appendages together; for example, by the use of handcuffs or ropes, or by lashing their arms to an object. Bondage can also be achieved by spreading the appendages and fastening them with chains or ropes to a St. Andrew's cross or spreader bars.
The term discipline describes psychological restraining, with the use of rules and punishment to control overt behavior. Punishment can be pain caused physically (such as caning), humiliation caused psychologically (such as a public flagellation) or loss of freedom caused physically (for example, chaining the submissive partner to the foot of a bed). Another aspect is the structured training of the bottom.
Dominance and submission (also known as D&s, Ds or D/s) is a set of behaviors, customs and rituals relating to the giving and accepting of control of one individual over another in an erotic or lifestyle context. It explores the more mental aspect of BDSM. This is also the case in many relationships whose participants do not consider them sadomasochistic; it is considered to be a part of BDSM if it is practiced purposefully. The range of its individual characteristics is thereby wide.
Often, BDSM contracts are set out in writing to record the formal consent of the parties to the power exchange, stating their common vision of the relationship dynamic. The purpose of this kind of agreement is primarily to encourage discussion and negotiation in advance and then to document that understanding for the benefit of all parties. Such documents have not been recognized as being legally binding, nor are they intended to be. These agreements are binding in the sense that the parties have the expectation that the negotiated rules will be followed. Often other friends and community members may witness the signing of such a document in a ceremony, and so a party violating the agreement risks loss of face, respect or status with their friends in the community. In general, as compared to conventional relationships, BDSM participants go to greater lengths to negotiate the important aspects of their relationships in advance, and to contribute significant effort toward learning about and following safe practices. In D/s, the dominant is the top and the submissive is the bottom.
In S/M, the sadist is usually the top and the masochist the bottom, but these roles are frequently more complicated or jumbled (as in the case of a dominant masochist, who may arrange for their submissive to carry out S/M activities on them). As in B/D, the declaration of the top/bottom may be required, though sadomasochists may also play without any power exchange at all, with both partners equally in control of the play.
Etymology
The term sadomasochism is derived from the words sadism and masochism. These terms differ somewhat from the same terms used in psychology, since those require that the sadism or masochism cause significant distress or involve non-consenting partners. Sadomasochism refers to the aspects of BDSM surrounding the exchange of physical or emotional pain. Sadism describes sexual pleasure derived by inflicting pain, degradation or humiliation on another person, or causing another person to suffer. On the other hand, the masochist enjoys being hurt, humiliated, or suffering within the consensual scenario. Sadomasochistic scenes sometimes reach a level that appears more extreme or cruel than other forms of BDSM—for example, when a masochist is brought to tears or is severely bruised—and are occasionally unwelcome at BDSM events or parties. Sadomasochism does not imply enjoyment through causing or receiving pain in other situations (for example, accidental injury, medical procedures).
The terms sadism and masochism are derived from the names of the Marquis de Sade and Leopold von Sacher-Masoch, based on the content of the authors' works. Although the names of de Sade and Sacher-Masoch are attached to the terms sadism and masochism respectively, the scenes described in de Sade's works do not meet modern BDSM standards of informed consent. BDSM is based solely on consensual activities and on its own system of rules. The concepts presented by de Sade are not in accordance with the BDSM culture, even though they are sadistic in nature.
In 1843, the Ruthenian physician Heinrich Kaan published Psychopathia sexualis (Psychopathy of Sex), a work in which he converts the sin conceptions of Christianity into medical diagnoses. With his work, the originally theological terms perversion, aberration and deviation became part of the scientific terminology for the first time. The German psychiatrist Richard von Krafft-Ebing introduced the terms sadism and masochism to the medical community in his 1890 work Neue Forschungen auf dem Gebiete der Psychopathia sexualis (New research in the area of Psychopathy of Sex). In 1905, Sigmund Freud described sadism and masochism in his Three Essays on the Theory of Sexuality as diseases developing from an incorrect development of the child psyche, and laid the groundwork for the scientific perspective on the subject in the following decades. This led to the first use of the compound term sado-masochism (German Sadomasochismus) by the Viennese psychoanalyst Isidor Isaak Sadger in his 1913 work "Über den sado-masochistischen Komplex" ("Regarding the sadomasochistic complex").
In the later 20th century, BDSM activists protested against these conceptual models, as they were derived from the philosophies of two singular historical figures. Both Freud and Krafft-Ebing were psychiatrists; their observations on sadism and masochism were dependent on psychiatric patients, and their models were built on the assumption of psychopathology. BDSM activists argue that it is illogical to attribute human behavioural phenomena as complex as sadism and masochism to the "inventions" of two historic individuals.
Advocates of BDSM have sought to distinguish themselves from widely held notions of antiquated psychiatric theory by the adoption of the term BDSM as a distinction from the now common usage of those psychological terms, abbreviated as S&M.
Behavioral and physiological aspects
BDSM is commonly mistaken as being "all about pain". Freud was confounded by the complexity and counterintuitiveness of practitioners doing things that are self-destructive and painful. Rather than pain, BDSM practitioners are primarily concerned with power, humiliation, and pleasure. The aspects of D/s and B/D may not include physical suffering at all, but instead involve the sensations experienced by different emotions of the mind. Of the three categories of BDSM, only sadomasochism specifically requires pain, but this is typically a means to an end, as a vehicle for feelings of humiliation, dominance, etc. In psychology, this aspect becomes a deviant behavior once the act of inflicting or experiencing pain becomes a substitute for or the main source of sexual pleasure. At its most extreme, preoccupation with this kind of pleasure can lead participants to view humans as insensate means of sexual gratification. Dominance and submission of power is an entirely different experience, and is not always psychologically associated with physical pain. Many BDSM activities involve no pain or humiliation, but just the exchange of power and control. During the activities, the participants may feel endorphin effects comparable to "runner's high" and to the afterglow of orgasm. The corresponding trance-like mental state is also called subspace, for the submissive, and domspace, for the dominant. Some use body stress to describe this physiological sensation. The experience of algolagnia is important, but is not the only motivation for many BDSM practitioners. The philosopher Edmund Burke called the sensation of pleasure derived from pain "sublime". Couples engaging in consensual BDSM tend to show hormonal changes that indicate decreases in stress and increases in emotional bonding.
There is an array of BDSM practitioners who take part in sessions in which they do not receive any personal gratification. They enter such situations solely with the intention of allowing their partners to indulge their own needs or fetishes. Professional dominants do this in exchange for money, but non-professionals do it for the sake of their partners. In some BDSM sessions, the top exposes the bottom to a range of sensual experiences, such as pinching; biting; scratching with fingernails; erotic spanking; erotic electrostimulation; and the use of crops, whips, liquid wax, ice cubes, and Wartenberg wheels. Fixation by handcuffs, ropes, or chains may occur. The repertoire of possible "toys" is limited only by the imagination of both partners. To some extent, everyday items, such as clothespins, wooden spoons, and plastic wrap, are used in sex play. It is commonly considered that a pleasurable BDSM experience during a session depends strongly on the top's competence and experience and the bottom's physical and mental state. Trust and sexual arousal help the partners enter a shared mindset.
Types of play
Following are some of the types of BDSM play:
Animal roleplay
Breast torture
Cock and ball torture
Erotic electrostimulation
Edgeplay
Flogging
Urolagnia
Human furniture
Japanese bondage
Medical play
Paraphilic infantilism
Play piercing
Predicament bondage
Pussy torture
Salirophilia
Sexual roleplay
Spanking
Suspension
Tickle torture
Wax play
Safety
Besides safe sex, BDSM sessions often require a wider array of safety precautions than vanilla sex (sexual behaviour without BDSM elements). To ensure consent related to BDSM activity, pre-play negotiations are commonplace, especially among partners who do not know each other very well. In practice, pick-up scenes at clubs or parties may sometimes be low in negotiation (much as pick-up sex from singles bars may not involve much negotiation or disclosure). These negotiations concern the interests and fantasies of each partner and establish a framework of both acceptable and unacceptable activities. This kind of discussion is a typical "unique selling proposition" of BDSM sessions and quite commonplace.
Additionally, safewords are often arranged to provide for an immediate stop of any activity if any participant should so desire. Safewords are words or phrases that are called out when things are either not going as planned or have crossed a threshold one cannot handle. They are something both parties can remember and recognize and are, by definition, not words commonly used playfully during any kind of scene. Words such as no, stop, and don't are often inappropriate as safewords if the roleplaying aspect includes the illusion of non-consent. The traffic light system (TLS) is the most commonly used set of safewords:
Red – meaning: stop immediately and check the status of your partner.
Yellow – meaning: slow down, be careful.
Green – meaning: I'm all good, we can start. If used, it is normally uttered by everyone involved before the scene can start.
At most clubs and group-organized BDSM parties and events, dungeon monitors (DMs) provide an additional safety net for the people playing there, ensuring that house rules are followed and safewords respected.
BDSM participants are expected to understand practical safety aspects, such as the potential for harm to body parts. Contusion or scarring of the skin can be a concern. Using crops, whips, or floggers, the top's fine motor skills and anatomical knowledge can make the difference between a satisfying session for the bottom and a highly unpleasant experience that may even entail severe physical harm. The very broad range of BDSM "toys" and physical and psychological control techniques often requires a far-reaching knowledge of details related to the requirements of the individual session, such as anatomy, physics, and psychology. Despite these risks, BDSM activities usually result in far less severe injuries than sports like boxing and football, and BDSM practitioners do not visit emergency rooms any more often than the general population.
It is necessary to be able to identify each person's psychological "squicks" or triggers in advance to avoid them. Such losses of emotional balance due to sensory or emotional overload are a fairly commonly discussed issue. It is important to follow participants' reactions empathetically and continue or stop accordingly. For some players, sparking "freakouts" or deliberately using triggers may be the desired outcome. Safewords are one way for BDSM practices to protect both parties.
However, partners should be aware of each other's psychological states and behaviors to prevent instances where the "freakouts" prevent the use of safewords. After any BDSM activities, it is important that the participants go through sexual aftercare, to process and calm down from the activity. After the sessions, participants can need aftercare because their bodies have experienced trauma and they need to mentally come out of the role play.
Social aspects
Roles
Top and bottom
At one end of the spectrum are those who are indifferent to, or even reject, physical stimulation. At the other end of the spectrum are bottoms who enjoy discipline and erotic humiliation but are not willing to be subordinate to the person who applies it. The bottom is frequently the partner who specifies the basic conditions of the session and gives instructions, directly or indirectly, in the negotiation, while the top often respects this guidance. Other bottoms, often called "brats", try to incur punishment from their tops by provoking them or "misbehaving". Nevertheless, a purist "school" exists within the BDSM community, which regards such "topping from the bottom" as rude or even incompatible with the standards of BDSM relations.
Types of relationships
Play
BDSM practitioners sometimes regard the practice of BDSM in their sex life as roleplaying, and so often use the terms play and playing to describe activities in which they are in their roles. Play of this sort for a specified period of time is often called a session, and the contents and the circumstances of play are often referred to as the scene. It is also common in personal relationships to use the term kink play for BDSM activities, or more specific terms for the type of activity. The relationships can be of varied types.
Long term
Early writings on BDSM, by both the academic and the BDSM community, spoke little of long-term relationships, with some in the gay leather community suggesting short-term play relationships to be the only feasible relationship models, and recommending people to get married and "play" with BDSM outside of marriage. In recent times, though, writers on BDSM and sites for BDSM have been more focused on long-term relationships. A 2003 study, the first to look at these relationships, demonstrated that "quality long-term functioning relationships" exist among practitioners of BDSM, with either sex being the top or bottom (the study was based on 17 heterosexual couples). Respondents in the study expressed their BDSM orientation to be built into who they are, but considered exploring their BDSM interests an ongoing task, and showed flexibility and adaptability in order to match their interests with their partners'. The "perfect match", where both in the relationship shared the same tastes and desires, was rare, and most relationships required both partners to take up or put away some of their desires. The BDSM activities that the couples partook in varied from sexual to nonsexual significance for the partners, who reported doing certain BDSM activities for "couple bonding, stress release, and spiritual quests". The most reported issue amongst respondents was not finding enough time to be in role, with most adopting a lifestyle wherein both partners maintain their dominant or submissive role throughout the day. Amongst the respondents, it was typically the bottoms who wanted to play harder and be more restricted in their roles when there was a difference in desire to play in the relationship.
The author of the study, Bert Cutler, speculated that tops may be less often in the mood to play due to the increased demand for responsibility on their part: being aware of the safety of the situation and prepared to remove the bottom from a dangerous scenario, being conscious of the desires and limits of the bottom, and so on. The author of the study stressed that successful long-term BDSM relationships came after "early and thorough disclosure" from both parties of their BDSM interests. Many of those engaged in long-term BDSM relationships learned their skills from larger BDSM organizations and communities. There was a lot of discussion by the respondents on the amount of control the top possessed in the relationships, but "no discussion of being better, or smarter, or of more value" than the bottom. Couples were generally of the same mind as to whether or not they were in an ongoing relationship; in such cases the bottom was not locked up constantly, but their role in the context of the relationship was always present, even when the top was doing non-dominant activities such as household chores, or the bottom was in a more dominant position. In its conclusion, the study lists three aspects that made the successful relationships work: early disclosure of interests and continued transparency, a commitment to personal growth, and the use of the dominant/submissive roles as a tool to maintain the relationship. In closing remarks, the author of the study theorizes that, due to the serious potential for harm, couples in BDSM relationships develop a level of communication that may be higher than in mainstream relationships.
Professional services
A professional dominatrix or professional dominant, often referred to within the culture as a pro-dom(me), offers services encompassing the range of bondage, discipline, and dominance in exchange for money. The term dominatrix is little-used within the non-professional BDSM scene. A non-professional dominant woman is more commonly referred to simply as a domme, dominant, or femdom (short for female dominance). Professional submissives ("pro-subs"), although far more rare, do exist. A professional submissive consents to their client's dominant behavior within negotiated limits, and often works within a professional dungeon. Most of the people who work as subs normally have tendencies towards such activities, especially when sadomasochism is involved. Males also work as professional "tops" in BDSM, and are called masters or doms; however, it is much rarer to find a male in this profession.
Scenes
In BDSM, a "scene" is the stage or setting where BDSM activity takes place, as well as the activity itself. The physical place where a BDSM activity takes place is usually called a dungeon, though some prefer less dramatic terms, including playspace or club. A BDSM activity can, but need not, involve sexual activity or sexual roleplay. A characteristic of many BDSM relationships is the power exchange from the bottom to the dominant partner, and bondage features prominently in BDSM scenes and sexual roleplay. "The Scene" (including use of the definite article the) is also used in the BDSM community to refer to the BDSM community as a whole. Thus someone who is on "the Scene", and prepared to play in public, might take part in "a scene" at a public play party.
A scene can take place in private between two or more people and can involve a domestic arrangement, such as servitude or a casual or committed lifestyle master/slave relationship. BDSM elements may involve settings of slave training or punishment for breaches of instructions. A scene can also take place in a club, where the play can be viewed by others. When a scene takes place in a public setting, it may be because the participants enjoy being watched by others, or because of the equipment available, or because having third parties present adds safety for play partners who have only recently met. Etiquette Most standard social etiquette rules still apply when at a BDSM event, such as not intimately touching someone you do not know, not touching someone else's belongings (including toys), and abiding by dress codes. Many events open to the public also have rules addressing alcohol consumption, recreational drugs, cell phones, and photography. A specific scene takes place within the general conventions and etiquette of BDSM, such as requirements for mutual consent and agreement as to the limits of any BDSM activity. This agreement can be incorporated into a formal contract. In addition, most clubs have additional rules which regulate how onlookers may interact with the actual participants in a scene. As is common in BDSM, these are founded on the catchphrase "safe, sane, and consensual". Parties and clubs BDSM play parties are events in which BDSM practitioners and other similarly interested people meet in order to communicate, share experiences and knowledge, and to "play" in an erotic atmosphere. BDSM parties show similarities to ones in the dark culture, being based on a more or less strictly enforced dress code; often clothing made of latex, leather or vinyl/PVC, lycra and so on, emphasizing the body's shape and the primary and secondary sexual characteristics. The requirement for such dress codes differ. While some events have none, others have a policy in order to create a more coherent atmosphere and to prevent outsiders from taking part. At these parties, BDSM can be publicly performed on a stage, or more privately in separate "dungeons". A reason for the relatively fast spread of this kind of event is the opportunity to use a wide range of "playing equipment", which in most apartments or houses is unavailable. Slings, St. Andrew's crosses (or similar restraining constructs), spanking benches, and punishing supports or cages are often made available. The problem of noise disturbance is also lessened at these events, while in the home setting many BDSM activities can be limited by this factor. In addition, such parties offer both exhibitionists and voyeurs a forum to indulge their inclinations without social criticism. Sexual intercourse is not permitted within most public BDSM play spaces or not often seen in others, because it is not the emphasis of this kind of play. In order to ensure the maximum safety and comfort for the participants, certain standards of behavior have evolved; these include aspects of courtesy, privacy, respect and safewords. Today BDSM parties are taking place in most of the larger cities in the Western world. This scene appears particularly on the Internet, in publications, and in meetings such as at fetish clubs (like Torture Garden), SM parties, gatherings called munches, and erotic fairs like Venus Berlin. The annual Folsom Street Fair held in San Francisco is the world's largest BDSM event. It has its roots in the gay leather movement. 
The weekend-long festivities include a wide range of sadomasochistic erotica in a public, clothing-optional space between 8th and 13th streets, with nightly parties associated with the organization. There are also conventions such as Living in Leather and Black Rose.
Psychology
Research indicates that there is no evidence that a preference for BDSM is a consequence of childhood abuse. Some reports suggest that people abused as children may have more BDSM injuries and more difficulty with safewords being recognized as meaning that the previously consensual behavior should stop; thus, it is possible that people choosing BDSM as part of their lifestyle who also were previously abused may have had more police or hospital reports of injuries. There is also a link between transgender individuals who have been abused and violence occurring in BDSM activities. Joseph Merlino, author and psychiatry adviser to the New York Daily News, said in an interview that a sadomasochistic relationship, as long as it is consensual, is not a psychological problem.
Some psychologists agree that experiences during early sexual development can have a profound effect on the character of sexuality later in life. Sadomasochistic desires, however, seem to form at a variety of ages. Some individuals report having had them before puberty, while others do not discover them until well into adulthood. According to one study, the majority of male sadomasochists (53%) developed their interest before the age of 15, while the majority of females (78%) developed their interest afterward (Breslow, Evans, and Langley 1985). The prevalence of sadomasochism within the general population is unknown. Despite female sadists being less visible than males, some surveys have found comparable rates of sadistic fantasies among females and males. The results of such studies demonstrate that one's sex does not determine preference for sadism.
Following a phenomenological study of nine individuals involved in sexual masochistic sessions who regarded pain as central to their experience, sexual masochism was described as an addiction-like tendency, with several features resembling that of drug addiction: craving, intoxication, tolerance and withdrawal. It was also demonstrated how the first masochistic experience is placed on a pedestal, with subsequent use aiming at retrieving this lost sensation, much as described in the descriptive literature on addiction. The addictive pattern presented in this study suggests an association with behavioral spin as found in problem gamblers.
Prevalence
BDSM occurs among people of all genders and sexual orientations, and in varied occurrences and intensities. The spectrum ranges from couples with no connections to the subculture outside of their bedrooms or homes, without any awareness of the concept of BDSM, playing "tie-me-up games", to public scenes on St. Andrew's crosses at large events such as the Folsom Street Fair in San Francisco. Estimates of the overall percentage of BDSM-related sexual behaviour vary. Alfred Kinsey stated in his 1953 nonfiction book Sexual Behavior in the Human Female that 12% of females and 22% of males reported having an erotic response to a sadomasochistic story; erotic responses to being bitten were also reported in that book. A non-representative survey on the sexual behaviour of American students, published in 1997 and based on questionnaires, had a response rate of about 8–9%.
Its results showed that 15% of homosexual and bisexual males, 21% of lesbian and female bisexual students, 11% of heterosexual males and 9% of female heterosexual students reported BDSM-related fantasies. In all groups the level of practical BDSM experience was around 6%. Within the group of openly lesbian and bisexual females, the proportion was significantly higher, at 21%. Independent of their sexual orientation, about 12% of all questioned students, 16% of lesbians and female bisexuals and 8% of heterosexual males articulated an interest in spanking. Experience with this sexual behaviour was indicated by 30% of male heterosexuals, 33% of female bisexuals and lesbians, and 24% of gay and bisexual men and female heterosexual women. Even though this study was not considered representative, other surveys indicate similar dimensions in differing target groups. A representative study done from 2001 to 2002 in Australia found that 1.8% of sexually active people (2.2% of men, 1.3% of women, not a significant sex difference) had engaged in BDSM activity in the previous year. Of the entire sample, 1.8% of men and 1.3% of women had been involved in BDSM. BDSM activity was significantly more likely among bisexuals and homosexuals of both sexes. Among men in general, there was no relationship effect of age, education, language spoken at home or relationship status. Among women in this study, activity was most common for those between 16 and 19 years of age and least likely for females over 50 years. Activity was also significantly more likely for women who had a regular partner they did not live with, but was not significantly related to speaking a language other than English or to education. Another representative study, published in 1999 by the German Institut für rationale Psychologie, found that about two-thirds of the interviewed women stated a desire to be at the mercy of their sexual partners from time to time: 69% admitted to fantasies dealing with sexual submissiveness, 42% stated interest in explicit BDSM techniques, and 25% in bondage. A 1976 study of the general US population suggests three percent have had positive experiences with bondage or master-slave roleplaying; overall, 12% of the interviewed females and 18% of the males were willing to try it. A 1990 Kinsey Institute report stated that 5% to 10% of Americans occasionally engage in sexual activities related to BDSM; 11% of men and 17% of women reported trying bondage. Some elements of BDSM have been popularized through increased media coverage since the middle 1990s. Thus black leather clothing, sexual jewelry such as chains, and dominance roleplay appear increasingly outside of BDSM contexts. According to yet another survey, of 317,000 people in 41 countries, about 20% of those surveyed have used masks, blindfolds or other bondage utilities at least once, and 5% explicitly connected themselves with BDSM. In 2004, 19% mentioned spanking as one of their practices and 22% confirmed the use of blindfolds or handcuffs. A 1985 study found 52 out of 182 female respondents (28%) were involved in sadomasochistic activities.
Recent surveys
A 2009 study on two separate samples of male undergraduate students in Canada found that 62 to 65%, depending on the sample, had entertained sadistic fantasies, and 22 to 39% engaged in sadistic behaviors during sex. The figures were 62 and 52% for bondage fantasies, and 14 to 23% for bondage behaviors.
A 2014 study involving a mixed sample of Canadian college students and online volunteers, both male and female, reported that 19% of the male sample and 10% of the female sample rated the sadistic scenarios described in a questionnaire as being at least "slightly arousing" on a scale that ranged from "very repulsive" to "very arousing"; the difference was statistically significant. The corresponding figures for the masochistic scenarios were 15% for male students and 17% for female students, a non-significant difference. In a 2011 study of 367 middle-aged and elderly men recruited from the broader community in Berlin, 21.8% of the men self-reported sadistic fantasies and 15.5% sadistic behaviors; 24.8% self-reported any such fantasy and/or behavior. The corresponding figures for self-reported masochism were 15.8% for fantasy, 12.3% for behavior, and 18.5% for fantasy and/or behavior. In a 2008 study of gay men in Puerto Rico, 14.8% of the over 425 community volunteers reported any sadistic fantasy, desire or behavior in their lifetime; the corresponding figure for masochism was 15.7%. A 2017 cross-sectional representative survey of the general Belgian population demonstrated a substantial prevalence of BDSM fantasies and activities; 12.5% of the population performed one or more BDSM practices on a regular basis.

Medical categorization

Reflecting changes in social norms, modern medical opinion is now moving away from regarding BDSM activities as medical disorders, unless they are nonconsensual or involve significant distress or harm.

DSM

In the past, the Diagnostic and Statistical Manual of Mental Disorders (DSM), the American Psychiatric Association's manual, defined some BDSM activities as sexual disorders. Following campaigns from advocacy organizations including the National Coalition for Sexual Freedom, the current version of the DSM, DSM-5, excludes consensual BDSM from diagnosis when the sexual interests cause no harm or distress.

ICD

The World Health Organization's International Classification of Diseases (ICD) has made similar moves in recent years. Section F65 of the current revision, ICD-10, indicates that "mild degrees of sadomasochistic stimulation are commonly used to enhance otherwise normal sexual activity". The diagnostic guidelines for the ICD-10 state that this class of diagnosis should only be made "if sadomasochistic activity is the most important source of stimulation or necessary for sexual gratification". In Europe, an organization called ReviseF65 has worked to remove sadomasochism from the ICD. In 1995, Denmark became the first European Union country to have completely removed sadomasochism from its national classification of diseases. This was followed by Sweden in 2009, Norway in 2010 and Finland in 2011. Recent surveys on the spread of BDSM fantasies and practices show wide variation in their results; nonetheless, researchers assume that 5 to 25 percent of the population practices sexual behavior related to pain or dominance and submission, and the population with related fantasies is believed to be even larger. The ICD is in the process of revision, and recent drafts have reflected these changes in social norms.
The final advance preview of the ICD-11 de-pathologises most things listed in ICD-10 section F65, characterizing as pathological only those activities which are coercive, involve significant risk of injury or death, or are distressing to the individual engaging in them, and specifically excluding consensual sexual sadism and masochism from being regarded as pathological. The ICD-11 classification considers sadomasochism "a variant in sexual arousal and private behaviour without appreciable public health impact and for which treatment is neither indicated nor sought". According to the WHO ICD-11 Working Group on Sexual Disorders and Sexual Health, stigmatization and discrimination against fetish and BDSM individuals are inconsistent with human rights principles endorsed by the United Nations and the World Health Organization. The final advance text is to be officially presented to the members of the WHO in 2019, ready to come into effect in 2022.

Coming out

Some people who are interested in or curious about BDSM decide to come out of the closet, although many sadomasochists remain closeted. Depending on the survey, about 5 to 25 percent of the US population show affinity to the subject. Other than a few artists and writers, practically no celebrities are publicly known as sadomasochists. Public knowledge of one's BDSM lifestyle can have detrimental vocational and social effects for sadomasochists. Many face severe professional consequences or social rejection if they are exposed, either voluntarily or involuntarily, as sadomasochists. Within feminist circles, the discussion is split roughly into two camps: some who see BDSM as an aspect or reflection of oppression (for example, Alice Schwarzer) and, on the other side, pro-BDSM feminists, often grouped under the banner of sex-positive feminism (see Samois); both positions can be traced back to the 1970s. Some feminists have criticized BDSM for eroticizing power and violence and reinforcing misogyny. They argue that women who engage in BDSM are making a choice that is ultimately bad for women. Feminist defenders of BDSM argue that consensual BDSM activities are enjoyed by many women and validate the sexual inclinations of these women. They argue that there is no connection between consensual kinky activities and sex crimes, and that feminists should not attack other women's sexual desires as being "anti-feminist". They also state that the main point of feminism is to give an individual woman free choices in her life, which includes her sexual desires. While some feminists suggest connections between consensual BDSM scenes and non-consensual rape and sexual assault, other, sex-positive feminists find the notion insulting to women. Roles are not fixed to gender but reflect personal preferences. The dominant partner in a heterosexual relationship may be the woman rather than the man, and BDSM may be part of male/male or female/female sexual relationships. Finally, some people switch, taking either a dominant or submissive role on different occasions. Several studies investigating the possibility of a correlation between BDSM pornography and violence against women likewise indicate a lack of correlation: a 1991 comparative survey concluded that, between 1964 and 1984, despite the increase in the amount and availability of sadomasochistic pornography in the U.S., Germany, Denmark and Sweden, no correlation with the national number of rapes could be found. Operation Spanner in the U.K.
demonstrated that BDSM practitioners still run the risk of being stigmatized as criminals. In 2003, the media coverage of Jack McGeorge showed that simply participating and working in BDSM support groups poses risks to one's job, even in countries where no law restricts it; here a clear contrast with the situation of homosexuality can be seen. The psychological strain appearing in some individual cases is normally neither articulated nor acknowledged in public. Nevertheless, it leads to a difficult psychological situation in which the person concerned can be exposed to high levels of emotional stress. In the stage of "self-awareness", he or she realizes their desires related to BDSM scenarios or decides to be open to them; some authors call this an internal coming-out. Two separate surveys on this topic independently concluded that 58 percent and 67 percent of their respective samples had realized their disposition before their 19th birthday. Other surveys on this topic show comparable results. Independent of age, coming out can potentially result in a difficult life crisis, sometimes leading to thoughts or acts of suicide. While homosexuals have created support networks in recent decades, sadomasochistic support networks are just starting to develop in most countries; in German-speaking countries they are only moderately more developed. The Internet is the prime contact point for support groups today, allowing for local and international networking. In the U.S., Kink Aware Professionals (KAP), a privately funded non-profit service, provides the community with referrals to psychotherapeutic, medical, and legal professionals who are knowledgeable about and sensitive to the BDSM, fetish, and leather community. In the U.S. and the U.K., the Woodhull Freedom Foundation & Federation, National Coalition for Sexual Freedom (NCSF) and Sexual Freedom Coalition (SFC) have emerged to represent the interests of sadomasochists. The German Bundesvereinigung Sadomasochismus is committed to the same aim of providing information and driving press relations. In 1996, the website and mailing list Datenschlag went online in German and English, providing the largest bibliography as well as one of the most extensive historical collections of sources related to BDSM.

Social (non-medical) research

Richters et al. (2008) found that people who engaged in BDSM were more likely to have experienced a wider range of sexual practices (e.g., oral or anal sex, more than one partner, group sex, phone sex, viewing pornography, using a sex toy, fisting, or rimming). They were, however, not any more likely to have been coerced, unhappy, anxious, or experiencing sexual difficulties. On the contrary, men who had engaged in BDSM scored lower on a psychological distress scale than men who did not. There have been few studies on the psychological aspects of BDSM using modern scientific standards. Psychotherapist Charles Moser has said there is no evidence for the theory that BDSM has common symptoms or any common psychopathology, emphasizing that there is no evidence that BDSM practitioners have any special psychiatric or other problems based on their sexual preferences. Problems sometimes occur with self-classification. During the phase of coming out, self-questioning related to one's own "normality" is common. According to Moser, the discovery of BDSM preferences can result in fear of the destruction of one's current non-BDSM relationship.
This, combined with the fear of discrimination in everyday life, leads in some cases to a double life which can be highly burdensome. At the same time, the denial of BDSM preferences can induce stress and dissatisfaction with one's own "vanilla" lifestyle, feeding the fear of finding no partner. Moser states that BDSM practitioners who have problems finding BDSM partners would probably have problems finding a non-BDSM partner as well. The wish to remove BDSM preferences is another possible source of psychological problems, since removing them is not possible in most cases. Finally, Moser states that BDSM practitioners seldom commit violent crimes; from his point of view, crimes by BDSM practitioners usually have no connection with the BDSM components of their lives. Moser's study concludes that there is no scientific evidence that could justify refusing members of this group employment or security certifications, adoption, custody, or other social rights or privileges. The Swiss psychoanalyst Fritz Morgenthaler shares a similar perspective in his book Homosexuality, Heterosexuality, Perversion (1988). He states that possible problems result not necessarily from the non-normative behavior itself, but in most cases primarily from the real or feared reactions of the social environment toward one's preferences. In 1940, the psychoanalyst Theodor Reik implicitly reached the same conclusion in his standard work Aus Leiden Freuden. Masochismus und Gesellschaft. Moser's results are further supported by a 2008 Australian study by Richters et al. on the demographic and psychosocial features of BDSM participants. The study found that BDSM practitioners were no more likely to have experienced sexual assault than the control group, and were not more likely to feel unhappy or anxious. The BDSM males reported higher levels of psychological well-being than the controls. It was concluded that "BDSM is simply a sexual interest or subculture attractive to a minority, not a pathological symptom of past abuse or difficulty with 'normal' sex."

Gender differences in research

Several recent studies have been conducted on the gender differences and personality traits of BDSM practitioners. Wismeijer and van Assen (2013) found that "the association of BDSM role and gender was strong and significant", with only 8% of women in the study being dominant compared to 75% submissive. Hébert and Weaver (2014) found that 9% of women in their study were dominant compared to 88% submissive. Weierstall and Giebel (2017) likewise found a significant difference, with 19% of women in the study as dominant compared to 74% as submissive, and a study by Andrea Duarte Silva (2015) indicated that 61.7% of females who are active in BDSM expressed a preference for a submissive role, 25.7% consider themselves a switch, and 12.6% prefer the dominant role. In contrast, 46.6% of men prefer the submissive role, 24% consider themselves to be switches and 29.5% prefer the dominant role. They concluded that "men more often display an engagement in dominant practices, whereas females take on the submissive part. This result is in line with a recent study about mate preferences that has shown that women have a generally higher preference for a dominant partner than men do (Giebel, Moran, Schawohl, & Weierstall, 2015). Women also prefer dominant men, and even men who are aggressive, for a short-term relationship and for the purpose of sexual intercourse (Giebel, Weierstall, Schauer, & Elbert, 2013)".
Similarly, studies on differences in sexual fantasy between men and women show that the latter prefer submissive and passive fantasies over dominant and active ones, with rape and force being common.

Gender differences in masochistic scripts

One common belief about BDSM and kink is that women are more likely to take on masochistic roles than men. Roy Baumeister (2010) actually had more male masochists in his study than female, and fewer male dominants than female. The lack of statistical significance in these gender differences suggests that no assumptions should be made regarding gender and masochistic roles in BDSM. One explanation for why we might think otherwise lies in our social and cultural ideals about femininity; masochism may emphasize certain stereotypically feminine elements through activities like feminization of men and ultra-feminine clothing for women. But such tendencies of the submissive masochistic role should not be interpreted as a connection between it and the stereotypical female role—many masochistic scripts do not include any of these tendencies. Baumeister found that masochistic males experienced greater severity of pain, more frequent humiliation (status-loss, degrading, oral), more partner infidelity, more active participation by other persons, and more cross-dressing. Trends also suggested that male masochism included more bondage and oral sex than female (though the data was not significant). Female masochists, on the other hand, experienced greater frequency of pain, pain as punishment for "misdeeds" in the relationship context, display humiliation, genital intercourse, and presence of non-participating audiences. The exclusiveness of dominant males in a heterosexual relationship happens because, historically, men in power preferred multiple partners. Finally, Baumeister observes a contrast between the "intense sensation" focus of male masochism and a more "meaning and emotion"-centred female masochistic script. Prior argues that although some of these women may appear to be engaging in traditional subordinate or submissive roles, BDSM allows women in both dominant and submissive roles to express and experience personal power through their sexual identities. In a study that she conducted in 2013, she found that the majority of the women she interviewed identified as bottom, submissive, captive, or slave/sex slave. In turn, Prior was able to address whether these women found an incongruity between their sexual identities and their feminist identity. Her research found that these women saw little to no incongruity, and in fact felt that their feminist identity supported their identities as submissive and slave. For them, these are sexually and emotionally fulfilling roles and identities that, in some cases, feed other aspects of their lives. Prior contends that third-wave feminism provides a space for women in BDSM communities to express their sexual identities fully, even when those identities seem counter-intuitive to the ideals of feminism. Furthermore, women who do identify as submissive, sexually or otherwise, find a space within BDSM where they can fully express themselves as integrated, well-balanced, and powerful women.

Women in S/M culture

Levitt, Moser, and Jamison's 1994 study provides a general, if outdated, description of characteristics of women in the sadomasochistic (S/M) subculture. They state that women in S/M tend to have higher education, to become aware of their desires as young adults, and to be less likely to be married than the general population.
The researchers found that the majority of females identified as heterosexual and submissive, a substantial minority were versatile—able to switch between dominant and submissive roles—and a smaller minority identified exclusively with the dominant role. Oral sex, bondage and master-slave scripts were among the most popular activities, while feces/watersports were the least popular.

Orientation observances in research

BDSM is considered by some of its practitioners to be a sexual orientation. The BDSM and kink scene is more often seen as a diverse pansexual community. Often this is a non-judgmental community where gender, sexuality, orientation and preferences are accepted as they are or worked on until they become something a person can be happy with. In research, studies have focused on bisexuality and its parallels with BDSM, as well as on gay-straight differences between practitioners.

Comparison between gay and straight men in S/M

Demographically, Nordling et al.'s (2006) study found no differences in age, but 43% of gay male respondents, compared to 29% of straight males, had university-level education. The gay men also had higher incomes than the general population and tended to work in white-collar jobs, while straight men tended toward blue-collar ones. Because there were not enough female respondents (22), no conclusions could be drawn from them. Sexually speaking, the same 2006 study by Nordling et al. found that gay males were aware of their S/M preferences and took part in them at an earlier age, preferring leather, anal sex, rimming, dildos and special equipment or uniform scenes. In contrast, straight men preferred verbal humiliation, masks and blindfolds, gags, rubber/latex outfits, caning, vaginal sex, straitjackets, and cross-dressing, among other activities. From the questionnaire, the researchers were able to identify four separate sexual themes: hyper-masculinity, giving and receiving pain, physical restriction (i.e., bondage), and psychological humiliation. Gay men preferred activities that tended towards hyper-masculinity, while straight men showed greater preference for humiliation, with significantly higher rates of master/madame-slave role play (≈84%). Though there were not enough female respondents to draw a similar conclusion, the fact that there is a difference between gay and straight men strongly suggests that S/M (and BDSM in general) cannot be considered a homogeneous phenomenon. As Nordling et al. (2006) put it, "People who identify as sadomasochists mean different things by these identifications." (54)

Bisexuality

In Steve Lenius' original 2001 paper, he explored the acceptance of bisexuality in a supposedly pansexual BDSM community. The reasoning behind this is that coming out had become primarily the territory of gays and lesbians, with bisexuals feeling the push to be one or the other (and being right only half the time either way). What he found in 2001 was that people in BDSM were open to discussion about the topic of bisexuality and pansexuality and all the controversies they bring to the table, but personal biases and issues stood in the way of actively using such labels. A decade later, Lenius (2011) looked back on his study and considered whether anything had changed. He concluded that the standing of bisexuals in the BDSM and kink community was unchanged, and believed that positive shifts in attitude were moderated by society's changing views towards different sexualities and orientations.
But Lenius (2011) does emphasize that the pansexual-promoting BDSM community helped advance greater acceptance of alternative sexualities. Brandy Lin Simula (2012), on the other hand, argues that BDSM actively resists gender conformity, and she identified three different types of BDSM bisexuality: gender-switching, gender-based styles (taking on a different gendered style depending on the gender of one's partner when playing), and rejection of gender (resisting the idea that gender matters in one's play partners). Simula (2012) explains that practitioners of BDSM routinely challenge our concepts of sexuality by pushing the limits of pre-existing ideas of sexual orientation and gender norms. For some, BDSM and kink provide a platform for creating identities that are fluid and ever-changing.

History of psychotherapy and current recommendations

Psychiatry has an insensitive history in the area of BDSM, and institutions of political power have repeatedly been involved in marginalizing subgroups and sexual minorities. Mental health professionals have a long history of holding negative assumptions and stereotypes about the BDSM community. Beginning with the DSM-II, Sexual Sadism and Sexual Masochism have been listed as sexually deviant behaviours; sadism and masochism were also found in the personality disorder section. This negative assumption has not changed significantly, which is evident in the continued inclusion of Sexual Sadism and Sexual Masochism as paraphilias in the DSM-IV-TR. The DSM-5, however, has depathologized the language around paraphilias in a way that signifies "the APA's intent to not demand treatment for healthy consenting adult sexual expression". These biases and this misinformation can result in pathologizing and unintentional harm to clients who identify as sadists and/or masochists, and medical professionals who were trained under older editions of the DSM can be slow to change their ways of clinical practice. According to Kolmes et al. (2006), major themes of biased and inadequate care for BDSM clients are:
Considering BDSM to be unhealthy
Requiring a client to give up BDSM activities in order to continue in treatment
Confusing BDSM with abuse
Having to educate the therapist about BDSM
Assuming that BDSM interests are indicative of past family/spousal abuse
Therapists misrepresenting their expertise by stating that they are BDSM-positive when they are not actually knowledgeable about BDSM practices
These same researchers suggested that therapists should be open to learning more about BDSM, show comfort in talking about BDSM issues, and understand and promote "safe, sane, consensual" BDSM. There has also been research suggesting that BDSM can be a beneficial way for victims of sexual assault to deal with their trauma, most notably by Corie Hammers, but this work is limited in scope and, to date, has not undergone empirical testing as a treatment.

Clinical issues

Nichols (2006) compiled some common clinical issues: countertransference, non-disclosure, coming out, partners/families, and bleed-through. Countertransference is a common problem in clinical settings: despite having no evidence, therapists may find themselves believing that their client's pathology is "self-evident", and they may feel intense disgust and aversive reactions. Feelings of countertransference can interfere with therapy. Another common problem is clients concealing their sexual preferences from their therapists, which can compromise any therapy.
To avoid non-disclosure, therapists are encouraged to communicate their openness in indirect ways, such as with literature and artworks in the waiting room. Therapists can also deliberately bring up BDSM topics during the course of therapy. Less informed therapists sometimes over-focus on a client's sexuality, which detracts from the original issues, such as family relationships or depression. A special subgroup that needs counselling is the "newbie": individuals just coming out might have internalized shame, fear, and self-hatred about their sexual preferences. Therapists need to provide acceptance and care and to model a positive attitude; providing reassurance, psychoeducation, and bibliotherapy for these clients is crucial. The average age at which BDSM individuals realize their sexual preference is around 26 years. Many people hide their sexuality until they can no longer contain their desires; however, they may have married or had children by this point.

History

Origins

Practices of BDSM survive from some of the oldest textual records in the world, associated with rituals to the goddess Inanna (Ishtar in Akkadian). Cuneiform texts dedicated to Inanna incorporate domination rituals; in particular, ancient writings such as Inanna and Ebih (in which the goddess dominates Ebih) and the Hymn to Inanna describe cross-dressing transformations and rituals "imbued with pain and ecstasy, bringing about initiation and journeys of altered states of consciousness; punishment, moaning, ecstasy, lament and song, participants exhausting themselves in weeping and grief." During the 9th century BC, ritual flagellations were performed at the sanctuary of Artemis Orthia, one of the most important religious areas of ancient Sparta, where the Cult of Orthia, a pre-Olympic religion, was practiced. Here, a ritual flagellation called diamastigosis took place, in which young adolescent men were whipped in a ceremony overseen by the priestess. These rituals are referred to by a number of ancient authors, including Pausanias (III, 16: 10–11). One of the oldest graphical depictions of sadomasochistic activities is found in the Etruscan Tomb of the Whipping near Tarquinia, which dates to the 5th century BC. Inside the tomb, a fresco portrays two men flagellating a woman with a cane and a hand in an erotic situation. Another reference to flagellation is found in the sixth book of the Satires of the ancient Roman poet Juvenal (1st–2nd century A.D.); further reference can be found in Petronius's Satyricon, where a delinquent is whipped for sexual arousal. Anecdotal narratives of people who had themselves voluntarily bound, flagellated or whipped as a substitute for sex or as part of foreplay reach back to the 3rd and 4th centuries BC. In Pompeii, a whip-mistress figure with wings is depicted on the wall of the Villa of Mysteries, as part of an initiation of a young woman into the Mysteries. The whip-mistress role drove the sacred initiation of ceremonial death and rebirth. The archaic Greek Aphrodite, too, may once have been armed with an implement, possibly a whip, with archaeological evidence of armed Aphrodites known from a number of locations including Cythera, Acrocorinth and Sparta. The Kama Sutra of India describes four different kinds of hitting during lovemaking, the allowed regions of the human body to target, and different kinds of joyful "cries of pain" practiced by bottoms.
The collection of historic texts related to sensuous experiences explicitly emphasizes that impact play, biting and pinching during sexual activities should only be performed consensually, since only some women consider such behavior to be joyful. From this perspective, the Kama Sutra can be considered one of the first written resources dealing with sadomasochistic activities and safety rules. Further texts with sadomasochistic connotations appear worldwide during the following centuries on a regular basis. There are anecdotal reports of people willingly being bound or whipped, as a prelude to or substitute for sex, during the 14th century. The medieval phenomenon of courtly love, in all of its slavish devotion and ambivalence, has been suggested by some writers to be a precursor of BDSM. Some sources claim that BDSM as a distinct form of sexual behavior originated at the beginning of the 18th century, when Western civilization began medically and legally categorizing sexual behavior (see Etymology). Flagellation practiced within an erotic setting has been recorded from at least the 1590s, as evidenced by a John Davies epigram and references to "flogging schools" in Thomas Shadwell's The Virtuoso (1676) and Tim Tell-Troth's Knavery of Astrology (1680). Visual evidence such as mezzotints and print media has also been identified, revealing scenes of flagellation, such as "The Cully Flaug'd" from the British Museum collection. John Cleland's novel Fanny Hill, published in 1749, incorporates a flagellation scene between the novel's protagonist, Fanny Hill, and Mr Barville. A large number of flagellation publications followed, including Fashionable Lectures: Composed and Delivered with Birch Discipline, promoting the names of women offering the service in a lecture room with rods and cat o' nine tails. Other sources give a broader definition, citing BDSM-like behavior in earlier times and other cultures, such as the medieval flagellants and the physical ordeal rituals of some Native American societies. BDSM ideas and imagery have existed on the fringes of Western culture throughout the 20th century. Robert Bienvenu attributes the origins of modern BDSM to three sources, which he names as "European Fetish" (from 1928), "American Fetish" (from 1934), and "Gay Leather" (from 1950). Another source is the sexual games played in brothels, which go back to the 19th century, if not earlier. Charles Guyette was the first American to produce and distribute fetish-related material (costumes, footwear, photography, props and accessories) in the U.S. His successor, Irving Klaw, produced commercial sexploitation film and photography with a BDSM theme (most notably with Bettie Page) and issued fetish comics (known then as "chapter serials") by the now-iconic artists John Willie, Gene Bilbrew, and Eric Stanton. Stanton's model Bettie Page became at the same time one of the first successful models in the area of fetish photography and one of the most famous pin-up girls of American mainstream culture. The Italian author and designer Guido Crepax was deeply influenced by Stanton, shaping the style and development of European adult comics in the second half of the 20th century. The artists Helmut Newton and Robert Mapplethorpe are the most prominent examples of the increasing use of BDSM-related motifs in modern photography and of the public discussions still resulting from this.
Alfred Binet first coined the term erotic fetishism in his 1887 book Du fétichisme dans l'amour. Richard von Krafft-Ebing saw BDSM interests as the end of a continuum.

Leather movement

Leather has been a predominantly gay male term referring to one fetish, but it can stand for many more. Members of the gay male leather community may wear leathers such as motorcycle leathers, or may be attracted to men wearing leather. Leather and BDSM are seen as two parts of one whole. Much of the BDSM culture can be traced back to the gay male leather culture, which formalized itself out of the group of men who were soldiers returning home after World War II (1939–1945). World War II was the setting in which countless homosexual men and women first tasted life among homosexual peers. Post-war, homosexual individuals congregated in larger cities such as New York, Chicago, San Francisco, and Los Angeles. They formed leather clubs and bike clubs; some were fraternal services. The Mr. Leather Contest and Mr. Drummer Contest were established around this time. This was the genesis of the gay male leather community. Many of the members were attracted to extreme forms of sexuality, whose peak expression came in the pre-AIDS 1970s. This subculture is epitomized by the Leatherman's Handbook by Larry Townsend, published in 1972, which describes in detail the practices and culture of gay male sadomasochists in the late 1960s and early 1970s. In the early 1980s, lesbians also joined the leathermen as a recognizable element of the gay leather community. They also formed leather clubs, but there were some gender differences, such as the absence of leatherwomen's bars. In 1981, the publication of Coming to Power by the lesbian-feminist group Samois led to greater knowledge and acceptance of BDSM in the lesbian community. By the 1990s, the gay men's and women's leather communities were no longer underground and played an important role in the kink community. Today, the leather movement is generally seen as a part of BDSM culture rather than as a development deriving from gay subculture, even though a large part of the BDSM subculture was gay in the past. In the 1990s, the so-called New Guard leather subculture evolved; this new orientation started to integrate psychological aspects into its play. The San Francisco South of Market Leather History Alley consists of four works of art along Ringold Alley honoring leather culture; it opened in 2017. One of the works of art is a set of metal bootprints along the curb honoring 28 people (including Steve McEachern, owner of the Catacombs, a gay and lesbian S/M fisting club, and Cynthia Slater, a founder of the Society of Janus, the second-oldest BDSM organization in the United States) who were an important part of the leather communities of San Francisco.

Internet

In the late 1980s, the Internet provided a way of finding people with specialized interests around the world, as well as on a local level, and of communicating with them anonymously. This brought about an explosion of interest in and knowledge of BDSM, particularly on the usenet group alt.sex.bondage. When that group became too cluttered with spam, the focus moved to soc.subculture.bondage-bdsm. With an increased focus on forms of social media, FetLife was formed, which advertises itself as "a social network for the BDSM and fetish community". It operates similarly to other social media sites, with the ability to make friends with other users, attend events, and join pages of shared interests.
In addition to traditional sex shops, which sell sex paraphernalia, there has also been explosive growth of online adult toy companies that specialize in leather/latex gear and BDSM toys. Once a very niche market, there are now very few sex toy companies that do not offer some sort of BDSM or fetish gear in their catalog. Kinky elements seem to have worked their way into "vanilla" markets; the former niche has expanded into an important pillar of the business in adult accessories. Today practically all suppliers of sex toys offer items which originally found usage in the BDSM subculture. Padded handcuffs, latex and leather garments, as well as more exotic items like soft whips for fondling and TENS units for erotic electrostimulation, can be found in catalogs aimed at classic vanilla target groups, indicating that former boundaries increasingly seem to shift. In recent years, the Internet has also provided a central platform for networking among individuals who are interested in the subject. Besides countless private and commercial choices, an increasing number of local networks and support groups are emerging. These groups often offer comprehensive background and health-related information for people who have been unwillingly outed, as well as contact lists with information on psychologists, physicians and lawyers who are familiar with BDSM-related topics.

University clubs

Increasingly, American universities are seeing BDSM and kink education through student clubs, such as Columbia University's Conversio Virium and Iowa State University's Cuffs. University BDSM clubs are also found in the U.K., Canada, Belgium, and Taiwan. Some American universities—such as Indiana University and Michigan State University—have professors who research and teach classes on BDSM.

Legal status

Austria

Section 90 of the Austrian criminal code declares bodily injury (Sections 83–84) or the endangerment of physical security (Section 89) not subject to penalty in cases in which the victim has consented and the injury or endangerment does not offend moral sensibilities. Case law from the Austrian Supreme Court has consistently shown that bodily injury is offensive to moral sensibilities, and thus punishable, only when a "serious injury" (damage to health or an employment disability lasting more than 24 days) or the death of the "victim" results. A light injury is generally considered permissible when the "victim" has consented to it. In cases of threats to bodily well-being, the standard depends on the probability that an injury will actually occur. If serious injury or even death would be a likely result of a threat being carried out, then even the threat itself is considered punishable.

Canada

In 2004, a judge in Canada ruled that videos seized by the police featuring BDSM activities were not obscene and did not constitute violence, but a "normal and acceptable" sexual activity between two consenting adults. In 2011, the Supreme Court of Canada ruled in R. v. J.A. that a person must have an active mind during the specific sexual activity in order to legally consent. The Court ruled that it is a criminal offence to perform a sexual act on an unconscious person—whether or not that person consented in advance.

Germany

According to Section 194 of the German criminal code, the charge of insult (slander) can only be prosecuted if the defamed person chooses to press charges.
False imprisonment can be charged if the victim—viewed objectively—can be considered to be impaired in their rights of free movement. According to Section 228, a person inflicting a bodily injury on another person with that person's permission violates the law only in cases where the act can be considered to have violated good morals in spite of permission having been given. On 26 May 2004, the Criminal Panel No. 2 of the Bundesgerichtshof (German Federal Court) ruled that sado-masochistically motivated physical injuries are not per se indecent and thus not automatically subject to Section 228. Following cases in which sado-masochistic practices had been repeatedly used as pressure tactics against former partners in custody cases, the Appeals Court of Hamm ruled in February 2006 that sexual inclinations toward sado-masochism are no indication of a lack of capacity for successful child-raising.

Italy

In Italian law, BDSM sits right on the border between crime and legality, and everything lies in the judge's interpretation of the legal code. The relevant concept is that anyone willingly causing "injury" to another person is to be punished. In this context, though, "injury" is legally defined as "anything causing a condition of illness", and "illness" is itself defined in two different legal ways. The first is "any anatomical or functional alteration of the organism" (thus technically including little scratches and bruises too); the second is "a significant worsening of a previous condition relevant to organic and relational processes, requiring any kind of therapy". This could make it somewhat risky to play with someone, as the "victim" may later allege foul play, citing even an insignificant mark as evidence against the partner. Also, any injury requiring over 20 days of medical care must be reported by the medical professional who discovers it, leading to automatic indictment of the person who caused it.

Nordic countries

In September 2010, a Swedish court acquitted a 32-year-old man of assault for engaging in consensual BDSM play with a 16-year-old woman (the age of consent in Sweden is 15). Norway's legal system has likewise taken the position that safe and consensual BDSM play should not be subject to criminal prosecution. This parallels the stance of the mental health professions in the Nordic countries, which have removed sadomasochism from their respective lists of psychiatric illnesses.

Switzerland

The age of consent in Switzerland is 16 years, which also applies to BDSM play. Minors (i.e., those under 16) are not subject to punishment for BDSM play as long as the age difference between them is less than three years. Certain practices, however, require granting consent for light injuries, with only those over 18 permitted to give consent. On 1 April 2002, Articles 135 and 197 of the Swiss Criminal Code were tightened to make ownership of "objects or demonstrations [...] which depict sexual acts with violent content" a punishable offense. This law amounts to a general criminalization of sado-masochism, since nearly every sado-masochist will have some kind of media that fulfills this criterion. Critics also object to the wording of the law, which puts sado-masochists in the same category as pedophiles and pederasts.

United Kingdom

In British law, consent is an absolute defense to common assault, but not necessarily to actual bodily harm, where courts may decide that consent is not valid, as occurred in the case of R v Brown. Accordingly, consensual activities in the U.K.
are not automatically protected from constituting "assault occasioning actual or grievous bodily harm" in law. The Spanner Trust states that this is defined as activities which have caused injury "of a lasting nature", but that an injury of only slight duration or severity might still be considered "lasting" in law. The decision contrasts with the later case of R v Wilson, in which a conviction for non-sexual consensual branding within a marriage was overturned, the appeal court ruling that R v Brown was not an authority in all cases of consensual injury and criticizing the decision to prosecute. Following Operation Spanner, the European Court of Human Rights ruled in January 1999 in Laskey, Jaggard and Brown v. United Kingdom that no violation of Article 8 occurred, because the amount of physical or psychological harm that the law allows between any two people, even consenting adults, is to be determined by the jurisdiction in which the individuals live, as it is the State's responsibility to balance the concerns of public health and well-being with the amount of control a State should be allowed to exercise over its citizens. In the Criminal Justice and Immigration Bill 2007, the British Government cited the Spanner case as justification for criminalizing images of consensual acts, as part of its proposed criminalization of possession of "extreme pornography". Another contrasting case was that of Stephen Lock in 2013, who was cleared of actual bodily harm on the grounds that the woman consented; in this case, the act was deemed to be sexual.

United States

United States federal law does not list a specific criminal determination for consensual BDSM acts. Many BDSM practitioners cite the legal decision of People v. Jovanovic, 95 N.Y.2d 846 (2000), or the "Cybersex Torture Case", which was the first U.S. appellate decision to hold (in effect) that one does not commit assault if the victim consents. However, many individual states do criminalize specific BDSM actions within their state borders. Some states specifically address the idea of "consent to BDSM acts" within their assault laws, such as the state of New Jersey, which defines "simple assault" to be "a disorderly persons offense unless committed in a fight or scuffle entered into by mutual consent, in which case it is a petty disorderly persons offense". Oregon Ballot Measure 9 was a ballot measure in the U.S. state of Oregon in 1992, concerning sadism, masochism, gay rights, pedophilia, and public education, that drew widespread national attention and would have added text to this effect to the Oregon Constitution. It was defeated in the 3 November 1992 general election, with 638,527 votes in favor and 828,290 votes against. The National Coalition for Sexual Freedom collects reports about punishment for sexual activities engaged in by consenting adults, and about its use in child custody cases.

Cultural aspects

Today, BDSM culture exists in most Western countries. This offers BDSM practitioners the opportunity to discuss BDSM-relevant topics and problems with like-minded people. This culture is often viewed as a subculture, mainly because BDSM is often still regarded as "unusual" by some of the public. Many people hide their inclinations from society, fearing incomprehension and social exclusion.
In contrast to frameworks seeking to explain sadomasochism through psychological, psychoanalytic, medical or forensic approaches, which seek to categorize behaviour and desires and find a root "cause", Romana Byrne suggests that such practices can be seen as examples of "aesthetic sexuality", in which a founding physiological or psychological impulse is irrelevant. Rather, sadism and masochism may be practiced through choice and deliberation, driven by certain aesthetic goals tied to style, pleasure, and identity. These practices, in certain circumstances and contexts, can be compared with the creation of art.

Symbols

One of the most commonly used symbols of the BDSM community is a derivation of a triskelion shape within a circle. Various forms of the triskele have had many uses and meanings in many cultures; its BDSM usage derives from the Ring of O in the classic book Story of O. The BDSM Emblem Project claims copyright over one particular specified form of the triskelion symbol; other variants of the triskelion are free from such copyright claims. The leather pride flag is a symbol for the leather subculture and is also widely used within BDSM. In continental Europe, the Ring of O is widespread among BDSM practitioners. The triskelion as a BDSM symbol can easily be read as the three separate parts of the acronym BDSM: BD, DS, and SM (Bondage & Discipline, Dominance & Submission, Sadism & Masochism), three separate elements that are normally associated together. The BDSM rights flag is intended to represent the belief that people whose sexuality or relationship preferences include BDSM practices deserve the same human rights as everyone else, and should not be discriminated against for pursuing BDSM with consenting adults. The flag is inspired by the leather pride flag and the BDSM emblem, but is specifically intended to represent the concept of BDSM rights and to be without the other symbols' restrictions against commercial use. It is designed to be recognizable by people familiar with either the leather pride flag or the BDSM triskelion (or triskele) as "something to do with BDSM", and to be distinctive whether reproduced in full colour or in black and white (or another pair of colours). BDSM and fetish items and styles have spread widely into everyday life in Western societies through different channels, such as avant-garde fashion, heavy metal, goth subculture, and science fiction TV series, and are often not consciously connected with their BDSM roots by many people. While such imagery was mainly confined to the punk and BDSM subcultures in the 1990s, it has since spread into wider parts of Western societies.

Film and music

In music: the Romanian singer-songwriter Navi featured BDSM and shibari scenes in her music video "Picture Perfect" (2014); the video was banned in Romania for its explicit content. In 2010, Rihanna's song "S&M" and Christina Aguilera's single "Not Myself Tonight" appeared, both full of BDSM imagery. In movies: while BDSM activity initially appeared only in subtle form, in the 1960s famous works of literature like Story of O and Venus in Furs were filmed explicitly. With the release of the 1986 film 9½ Weeks, the topic of BDSM was transferred to mainstream cinema.
From the 1990s, cinematic representation of alternative sexualities, including BDSM, increased dramatically, as seen in documentary productions such as Graphic Sexual Horror (a 2009 film based on the website Insex), KinK (a 2013 film based on the website Kink.com), and movies such as Fifty Shades of Grey (2015) and its two sequels, Fifty Shades Darker (2017) and Fifty Shades Freed (2018). However, the Fifty Shades films have been criticized in the BDSM community for presenting as BDSM a relationship that is, rather, abusive: "A lot of what happens in the main relationship of Fifty Shades of Grey is domestic abuse, both physical and emotional, and for people whose entire understanding of BDSM now comes from jiggle balls and rooms of pain this is a dangerous misconception to foster."

Theater

Although it would be possible to identify certain elements related to BDSM in classical theater, not until the emergence of contemporary theater would some plays have BDSM as their main theme. Exemplifying this are two works, one Austrian and one German, in which BDSM is not only incorporated but integral to the storyline of the play. Worauf sich Körper kaprizieren, Austria: Peter Kern directed and wrote the script for this comedy, a present-day adaptation of Jean Genet's 1950 film Un chant d'amour. It is about a marriage in which the wife (film veteran Miriam Goldschmidt) subjects her husband (Heinrich Herkie) and the butler (Günter Bubbnik) to her sadistic treatment until two new characters take their places. Ach, Hilde (Oh, Hilda), Germany: this play by Anna Schwemmer premiered in Berlin. A young Hilde becomes pregnant, and after being abandoned by her boyfriend she decides to become a professional dominatrix to earn money. The play carefully crafts a playful and frivolous picture of the field of professional dominatrices.

Literature

Although examples of literature catering to BDSM and fetishistic tastes were created in earlier periods, BDSM literature as it exists today cannot be found much earlier than World War II. The word sadism originates from the works of Donatien Alphonse François, Marquis de Sade, and the word masochism from Leopold von Sacher-Masoch, the author of Venus in Furs. It is worth noting, however, that the Marquis de Sade describes non-consensual abuse in his works, such as Justine, whereas Venus in Furs describes a consensual dom-sub relationship. A central work in modern BDSM literature is undoubtedly Story of O (1954) by Anne Desclos, written under the pseudonym Pauline Réage. Other notable works include 9½ Weeks (1978) by Elizabeth McNeill, some works of the writer Anne Rice (Exit to Eden and her Claiming of Sleeping Beauty series of books), Jeanne de Berg's L'Image (1956, dedicated to Pauline Réage), the Gor series by John Norman, and naturally the works of Patrick Califia, Gloria Brame and the group Samois, many works of Georges Bataille (Histoire de l'oeil [Story of the Eye], Madame Edwarda, 1937), as well as those of Bob Flanagan (Slave Sonnets (1986), Fuck Journal (1987), A Taste of Honey (1990)). A common element of many of Pablo Neruda's poems is a reflection on feelings and sensations arising from relations of erotic power exchange (EPE). The Fifty Shades trilogy is a series of very popular erotic romance novels by E. L. James which involve BDSM; the novels have, however, been criticized for their inaccurate and harmful depiction of BDSM.
In the 21st century, a number of prestigious university presses, such as those of Duke University, Indiana University and the University of Chicago, have published books on BDSM written by professors, thereby lending academic legitimacy to this once-taboo topic.

Art

In photography: Eric Kroll and Irving Klaw (with Bettie Page, the first well-known bondage model), and the Japanese photographer Araki Nobuyoshi, whose works are exhibited in several major art museums, galleries and private collections, such as that of Baroness Marion Lambert, the world's largest holder of contemporary photographic art. Also Robert Mapplethorpe, whose most controversial work documents the underground BDSM scene of late-1960s and early-1970s New York; the homoeroticism of this work fuelled a national debate over the public funding of controversial artwork. In comic book drawings: Guido Crepax, with Histoire d'O (1975), Justine (1979) and Venere in Pelliccia (1984), inspired by the work of Pauline Réage, the Marquis de Sade and Leopold von Sacher-Masoch; and John Willie with The Adventures of Sweet Gwendoline (1984), which was the basis for the film The Perils of Gwendoline in the Land of the Yik-Yak. The Sunstone/Mercy (2011–ongoing) books by Stjepan Sejic have become very popular and are found in many conventional bookstores around the world. In graphic design: Eric Stanton and his work on dominance and female bondage, as well as Hajime Sorayama and Robert Bishop. In art deco sculpture: Bruno Zach produced perhaps his best-known sculpture, "The Riding Crop", which features a scantily clad dominatrix wielding a riding crop.

See also

Autosadism
Dominance hierarchy
Index of BDSM articles
Glossary of BDSM
List of BDSM equipment
List of BDSM organizations
List of bondage positions
Leather subculture
Outline of BDSM
Vulnerability and care theory of love

References

Further reading

Baldwin, Guy. Ties That Bind: SM/Leather/Fetish Erotic Style: Issues, Communication, and Advice, Daedalus Publishing, 1993.
Brame, Gloria G., Brame, William D., and Jacobs, Jon. Different Loving: An Exploration of the World of Sexual Dominance and Submission, Villard Books, New York, 1993.
Brame, Gloria. Come Hither: A Commonsense Guide to Kinky Sex, Fireside, 2000.
Califia, Pat. Sensuous Magic, Masquerade Books, New York, 1993.
Dollie Llama. Diary of an S&M Romance, PEEP! Press (California), 2006.
Henkin, William A., and Holiday, Sybil. Consensual Sadomasochism: How to Talk About It and How to Do It Safely, Daedalus Publishing, 1996.
Janus, Samuel S., and Janus, Cynthia L. The Janus Report on Sexual Behavior, John Wiley & Sons, 1994.
Masters, Peter. This Curious Human Phenomenon: An Exploration of Some Uncommonly Explored Aspects of BDSM, The Nazca Plains Corporation, 2008.
Phillips, Anita. A Defence of Masochism, Faber and Faber, new edition 1999.
Newmahr, Staci. Playing on the Edge: Sadomasochism, Risk and Intimacy, Indiana University Press, Bloomington, 2011.
Nomis, Anne O. The History & Arts of the Dominatrix, Mary Egan Publishing & Anna Nomis Ltd, U.K., 2013.
Rinella, Jack. The Complete Slave: Creating and Living an Erotic Dominant/submissive Lifestyle, Daedalus Publishing, 2002.
Saez, Fernando, and Viñuales, Olga. Armarios de Cuero, Ed. Bellaterra, 2007.
Townsend, Larry. Leatherman's Handbook, first edition 1972 (the first book to publicize BDSM to the general public; a paperback widely available on newsstands and at bookstores throughout the United States).
Wiseman, Jay. SM 101: A Realistic Introduction, 2nd ed., Greenery Press, 2000 (1st ed., 1992).
Byrne, Romana. Aesthetic Sexuality: A Literary History of Sadomasochism, Bloomsbury, New York, 2013.
Dominari, Rajan. Welcome to the Darkside: A BDSM Primer, AKO Publishing Company, 2019.

External links

"Pain and the erotic" by Lesley Hall
Braveheart
Braveheart is a 1995 American historical drama film directed by, produced by, and starring Mel Gibson. Gibson portrays Sir William Wallace, a late-13th-century Scottish warrior who led the Scots in the First War of Scottish Independence against King Edward I of England. The film also stars Sophie Marceau, Patrick McGoohan and Catherine McCormack. The story is inspired by Blind Harry's 15th-century epic poem The Actes and Deidis of the Illustre and Vallyeant Campioun Schir William Wallace and was adapted for the screen by Randall Wallace. Development on the film initially started at Metro-Goldwyn-Mayer (MGM) when producer Alan Ladd Jr. picked up the project from Wallace, but when MGM went through a change of management, Ladd left the studio and took the project with him. Despite initially declining, Gibson eventually decided to direct the film, as well as star as Wallace. Braveheart was filmed in Scotland and Ireland from June to October 1994 with a budget of around $65–70 million. The film, which was produced by Gibson's Icon Productions and The Ladd Company, was distributed by Paramount Pictures in North America and by 20th Century Fox internationally. Released on May 24, 1995, Braveheart was praised for its action, drama, and romance, though it was criticized for its historical inaccuracy. Nonetheless, the film was successful both critically and commercially. At the 68th Academy Awards, the film won five awards, including Best Picture and Best Director, from ten nominations. A legacy sequel, Robert the Bruce, was released on June 28, 2019, with Angus Macfadyen reprising his role.

Plot

In 1280, King Edward "Longshanks" invades and conquers Scotland following the death of Alexander III of Scotland, who left no heir to the throne. Young William Wallace witnesses Longshanks' execution of several Scottish nobles, suffers the deaths of his father and brother fighting against the English, and is taken abroad on a pilgrimage throughout Europe by his paternal uncle Argyle, who has Wallace educated. Years later, Longshanks grants his noblemen land and privileges in Scotland, including jus primae noctis. Meanwhile, a grown Wallace returns to Scotland and falls in love with his childhood friend Murron MacClannough, and the two marry in secret. Wallace rescues Murron from being raped by English soldiers, but as he fights off the soldiers, Murron is captured and publicly executed. In retribution, Wallace leads his clan to fight the English garrison in his hometown and sends the surviving garrison back to England with a message of rebellion for Longshanks. Longshanks orders his son Prince Edward to stop Wallace by any means necessary while he visits the French King to secure England's alliance with France. Alongside his friend Hamish, Wallace rebels against the English, and as his legend spreads, hundreds of Scots from the surrounding clans join him. Wallace leads his army to victory at the Battle of Stirling Bridge, where he decapitates the English commander Cheltham, and sacks York after Prince Edward fails to send reinforcements there, killing Longshanks' nephew, whose severed head is sent to the king. Wallace seeks the assistance of Robert the Bruce, the son of the nobleman Robert the Elder, a contender for the Scottish crown. Robert is dominated by his leper father, who wishes to secure the Scottish throne for his son by submitting to the English.
Worried by the threat of the rebellion, Longshanks sends his son's wife Isabella of France to try to negotiate with Wallace as a distraction for the landing of another invasion force in Scotland. After meeting him in person, Isabella becomes enamored of Wallace. She warns him of the coming invasion, and Wallace implores the Scottish nobility to take immediate action to counter the threat and take back their country, asking Robert the Bruce to lead.

Leading the English army himself, Longshanks confronts the Scots at Falkirk. During the battle, Scottish noblemen Mornay and Lochlan, having been bribed by Longshanks, withdraw their men, resulting in Wallace's army being routed and the death of Hamish's father, Campbell. Wallace is further betrayed when he discovers Robert the Bruce was fighting alongside Longshanks; after the battle, seeing the damage he helped do to his countrymen, Robert reprimands his father and vows never to be on the wrong side again.

Wallace kills Lochlan and Mornay for their betrayal and wages a guerrilla war against the English assisted by Isabella, with whom he eventually has an affair. Robert sets up a meeting with Wallace in Edinburgh, but Robert's father conspires with other nobles to capture and hand over Wallace to the English. Learning of his treachery, Robert disowns and banishes his father. Isabella exacts revenge on the now terminally ill Longshanks, who can no longer speak, by telling him that his bloodline will be destroyed upon his death as she is pregnant with Wallace's child and will ensure that Prince Edward spends as short a time as possible on the throne before Wallace's child replaces him.

In London, Wallace is brought before an English magistrate, tried for high treason, and condemned to public torture and beheading. Even whilst being disemboweled alive, Wallace refuses to submit to the king. The watching crowd, deeply moved by the Scotsman's valor, begin crying for mercy on Wallace's behalf. The magistrate offers him one final chance, asking him only to utter the word, "Mercy", and be granted a quick death. Wallace instead shouts, "Freedom!", and his cry rings through the square, the dying Longshanks hearing it. Before being beheaded, Wallace sees a vision of Murron in the crowd, smiling at him.

In 1314, Robert, now Scotland's king, leads a Scottish army before a ceremonial line of English troops on the fields of Bannockburn, where he is supposed to formally accept English rule. Instead, he invokes Wallace's memory, imploring his men to fight with him as they did with Wallace. Hamish throws Wallace's sword point-down in front of the English army, and he and the Scots chant Wallace's name as Robert leads them into battle against the English, winning the Scots their freedom.
Cast
Mel Gibson as William Wallace
James Robinson as young William Wallace
Sophie Marceau as Princess Isabella of France
Angus Macfadyen as Robert the Bruce
Patrick McGoohan as King Edward "Longshanks"
Catherine McCormack as Murron MacClannough
Mhairi Calvey as young Murron MacClannough
Brendan Gleeson as Hamish
Andrew Weir as young Hamish
Peter Hanly as Prince Edward
James Cosmo as Campbell
David O'Hara as Stephen of Ireland
Ian Bannen as Bruce's father
Seán McGinley as MacClannough
Brian Cox as Argyle Wallace
Sean Lawlor as Malcolm Wallace
Sandy Nelson as John Wallace
Stephen Billington as Phillip
John Kavanagh as Craig
Alun Armstrong as Mornay
John Murtagh as Lochlan
Tommy Flanagan as Morrison
Donal Gibson as Stewart
Jeanne Marine as Nicolette
Michael Byrne as Smythe
Malcolm Tierney as Magistrate
Bernard Horsfall as Balliol
Peter Mullan as Veteran
Gerard McSorley as Cheltham
Richard Leaf as Governor of York
Mark Lees as Old Crippled Scotsman
Tam White as MacGregor
Jimmy Chisholm as Faudron
David Gant as the Royal Magistrate

Production
Producer Alan Ladd Jr. initially had the project at MGM-Pathé Communications when he picked up the script from Wallace. When MGM was going through new management in 1993, Ladd left the studio and took some of its top properties, including Braveheart. Gibson came across the script and, even though he liked it, he initially passed on it. However, the thought of it kept coming back to him and he ultimately decided to take on the project. Terry Gilliam was offered the chance to direct the film but declined. Gibson was initially interested only in directing and considered Brad Pitt for the role of Sir William Wallace, but he reluctantly agreed to play Wallace as well. Gibson also considered Jason Patric for William Wallace. Sean Connery was approached to play King Edward, but he declined due to other commitments. Gibson said that Connery's pronunciation of "Goulash" helped him with the Scottish accent for the film.

Gibson and his production company, Icon Productions, had difficulty raising enough money for the film. Warner Bros. was willing to fund the project on the condition that Gibson sign for another Lethal Weapon sequel, which he refused. Gibson eventually gained enough financing for the film, with Paramount Pictures financing a third of the budget in exchange for North American distribution rights to the film, and 20th Century Fox putting up two-thirds of the budget in exchange for international distribution rights.

Principal photography on the film began on June 6, 1994. While the crew spent three weeks shooting on location in Scotland, the major battle scenes were shot in Ireland using members of the Irish Army Reserve as extras. To lower costs, Gibson had the same extras, up to 1,600 in some scenes, portray both armies. The reservists had been given permission to grow beards and swapped their military uniforms for medieval garb. Principal photography ended on October 28, 1994. The film was shot in the anamorphic format with Panavision C- and E-Series lenses.

Gibson had to tone down the film's battle scenes to avoid an NC-17 rating from the MPAA; the final version was rated R for "brutal medieval warfare". Gibson and editor Steven Rosenblum initially had a film at 195 minutes, but Sherry Lansing, who was the head of Paramount at the time, asked Gibson and Rosenblum to cut the film down to 177 minutes.
According to Gibson in a 2016 interview with Collider, there is a four-hour version of the film, and he would be interested in reassembling it if both Paramount and Fox were interested.

Soundtrack
The score was composed and conducted by James Horner and performed by the London Symphony Orchestra. It is Horner's second of three collaborations with Mel Gibson as director. The score has gone on to be one of the most commercially successful soundtracks of all time. It received considerable acclaim from film critics and audiences and was nominated for a number of awards, including the Academy Award, Saturn Award, BAFTA Award, and Golden Globe Award.

Release and reception
Braveheart premiered at the Seattle International Film Festival on May 18, 1995, and received its wide release in U.S. cinemas six days later.

Box office
On its opening weekend, Braveheart grossed $9,938,276 in the United States; it went on to gross $75.6 million in its U.S. and Canadian box office run. Worldwide, the film grossed $210,409,945 and was the thirteenth-highest-grossing film of 1995.

Critical response
On Rotten Tomatoes, the film has an approval rating of 75% and an average score of 7.20/10, based on 125 reviews. The site's consensus states: "Distractingly violent and historically dodgy, Mel Gibson's Braveheart justifies its epic length by delivering enough sweeping action, drama, and romance to match its ambition." On Metacritic, the film has a score of 68 out of 100 based on 20 critic reviews, indicating "generally favorable reviews". Audiences surveyed by CinemaScore gave the film a grade of A- on a scale of A to F.

Caryn James of The New York Times praised the film, calling it "one of the most spectacular entertainments in years." Roger Ebert gave the film three and a half out of four stars, calling it "An action epic with the spirit of the Hollywood swordplay classics and the grungy ferocity of The Road Warrior." In a positive review, Gene Siskel wrote that "in addition to staging battle scenes well, Gibson also manages to recreate the filth and mood of 700 years ago." Peter Travers of Rolling Stone felt that "though the film dawdles a bit with the shimmery, dappled love stuff involving Wallace with a Scottish peasant and a French princess, the action will pin you to your seat." The depiction of the Battle of Stirling Bridge was listed by CNN as one of the best battles in cinema history.

Not all reviews were positive: Richard Schickel of Time magazine argued that "everybody knows that a non-blubbering clause is standard in all movie stars' contracts. Too bad there isn't one banning self-indulgence when they direct." Peter Stack of the San Francisco Chronicle felt "at times the film seems an obsessive ode to Mel Gibson machismo." In a 2005 poll by British film magazine Empire, Braveheart was No. 1 on their list of "The Top 10 Worst Pictures to Win Best Picture Oscar". Empire readers had previously voted Braveheart the best film of 1995.

Effect on tourism
The European premiere was on September 3, 1995, in Stirling. In 1996, the year after the film was released, the annual three-day "Braveheart Conference" at Stirling Castle attracted fans of Braveheart, increasing the conference's attendance to 167,000 from 66,000 in the previous year. In the following year, research on visitors to the Stirling area indicated that 55% of the visitors had seen Braveheart. Of visitors from outside Scotland, 15% of those who saw Braveheart said it influenced their decision to visit the country.
Of all visitors who saw Braveheart, 39% said the film partly influenced their decision to visit Stirling, and 19% said the film was one of the main reasons for their visit. In the same year, a tourism report said that the "Braveheart effect" earned Scotland £7 million to £15 million in tourist revenue, and the report led to various national organizations encouraging international film productions to take place in Scotland. The film generated huge interest in Scottish history, not only around the world but also in Scotland itself. At a Braveheart Convention in 1997, held in Stirling the day after the Scottish Devolution vote and attended by 200 delegates from around the world, Braveheart author Randall Wallace, Seoras Wallace of the Wallace Clan, Scottish historian David Ross and Bláithín FitzGerald from Ireland gave lectures on various aspects of the film. Several of the actors also attended, including James Robinson (Young William), Andrew Weir (Young Hamish), Julie Austin (the young bride) and Mhairi Calvey (Young Murron).

Awards and honors
Braveheart was nominated for many awards during the 1995 awards season, though, unlike Apollo 13, Il Postino: The Postman, Leaving Las Vegas, Sense and Sensibility, and The Usual Suspects, it was not viewed by many as a major contender. It was not until after the film won the Golden Globe Award for Best Director at the 53rd Golden Globe Awards that it was viewed as a serious Oscar contender. When the nominations were announced for the 68th Academy Awards, Braveheart received ten Academy Award nominations, and a month later won five, including Best Picture, Best Director for Gibson, Best Cinematography, Best Sound Effects Editing, and Best Makeup. Braveheart became the ninth film to win Best Picture with no acting nominations and is one of only four films to win Best Picture without being nominated for the Screen Actors Guild Award for Outstanding Performance by a Cast in a Motion Picture, the others being The Shape of Water in 2017, Green Book in 2018, and Nomadland in 2020. The film also won the Writers Guild of America Award for Best Original Screenplay. In 2010, the Independent Film & Television Alliance selected the film as one of the 30 Most Significant Independent Films of the last 30 years.

American Film Institute lists:
AFI's 100 Years ... 100 Movies – Nominated
AFI's 100 Years ... 100 Thrills – No. 91
AFI's 100 Years ... 100 Heroes & Villains: William Wallace – Nominated Hero
AFI's 100 Years ... 100 Movie Quotes: "They may take away our lives, but they'll never take our freedom!" – Nominated
AFI's 100 Years of Film Scores – Nominated
AFI's 100 Years ... 100 Cheers – No. 62
AFI's 100 Years ... 100 Movies (10th Anniversary Edition) – Nominated
AFI's 10 Top 10 – Nominated Epic Film

Cultural effects and accusations of Anglophobia
Lin Anderson, author of Braveheart: From Hollywood To Holyrood, credits the film with playing a significant role in affecting the Scottish political landscape in the mid-to-late 1990s. Peter Jackson cited Braveheart as an influence in making the Lord of the Rings film trilogy. Sections of the English media accused the film of harbouring anti-English sentiment. The Economist called it "xenophobic", and John Sutherland, writing in The Guardian, stated: "Braveheart gave full rein to a toxic Anglophobia". In The Times, Colin McArthur said "the political effects are truly pernicious. It's a xenophobic film."
Ian Burrell of The Independent has said, "The Braveheart phenomenon, a Hollywood-inspired rise in Scottish nationalism, has been linked to a rise in anti-English prejudice".

Wallace Monument
In 1997, a sandstone statue depicting Mel Gibson as William Wallace in Braveheart was placed in the car park of the Wallace Monument near Stirling, Scotland. The statue, which was the work of Tom Church, a monumental mason from Brechin, included the word 'Braveheart' on Wallace's shield. The installation became the cause of much controversy; one local resident stated that it was wrong to "desecrate the main memorial to Wallace with a lump of crap". In 1998, someone wielding a hammer vandalized the statue's face. After repairs were made, the statue was encased in a cage every night to prevent further vandalism. This only incited more calls for the statue to be removed, as it then appeared that the Gibson/Wallace figure was imprisoned. The statue was described as "among the most loathed pieces of public art in Scotland". In 2008, the statue was returned to its sculptor to make room for a new visitor centre being built at the foot of the Wallace Monument.

Historical inaccuracy
Randall Wallace, who wrote the screenplay, has acknowledged Blind Harry's 15th-century epic poem The Acts and Deeds of Sir William Wallace, Knight of Elderslie as a major inspiration for the film. In defending his script, Randall Wallace has said, "Is Blind Harry true? I don't know. I know that it spoke to my heart and that's what matters to me, that it spoke to my heart." Blind Harry's poem is not regarded as historically accurate, and although some incidents in the film that are not historically accurate are taken from Blind Harry (e.g. the hanging of Scottish nobles at the start), there are large parts that are based neither on history nor Blind Harry (e.g. Wallace's affair with Princess Isabella).

Elizabeth Ewan describes Braveheart as a film that "almost totally sacrifices historical accuracy for epic adventure". It has been described as one of the most historically inaccurate modern films. Sharon Krossa noted that the film contains numerous historical inaccuracies, beginning with the wearing of belted plaid by Wallace and his men. In that period "no Scots [...] wore belted plaids (let alone kilts of any kind)." Moreover, when Highlanders finally did begin wearing the belted plaid, it was not "in the rather bizarre style depicted in the film". She compares the inaccuracy to "a film about Colonial America showing the colonial men wearing 20th century business suits, but with the jackets worn back-to-front instead of the right way around." In a previous essay about the film, she wrote, "The events aren't accurate, the dates aren't accurate, the characters aren't accurate, the names aren't accurate, the clothes aren't accurate—in short, just about nothing is accurate." The belted plaid (feileadh mór léine) was not introduced until the 16th century. Peter Traquair has referred to Wallace's "farcical representation as a wild and hairy highlander painted with woad (1,000 years too late) running amok in a tartan kilt (500 years too early)." Caroline White of The Times described the film as being made up of a "litany of fibs." Irish historian Seán Duffy remarked that "the battle of Stirling Bridge could have done with a bridge." In 2009, the film was ranked second on a list of "most historically inaccurate movies" in The Times.
In the humorous history An Utterly Impartial History of Britain (2007), author John O'Farrell claims that Braveheart could not have been more historically inaccurate even if a Plasticine dog had been inserted in the film and the title changed to "William Wallace and Gromit". In the DVD audio commentary of Braveheart, Mel Gibson acknowledges the historical inaccuracies but defends his choices as director, noting that the way events were portrayed in the film was much more "cinematically compelling" than the historical fact or conventional mythos.

Jus primae noctis
Edward Longshanks is shown invoking jus primae noctis in the film, allowing the lord of a medieval estate to take the virginity of his serfs' maiden daughters on their wedding nights. Critical medieval scholarship regards this supposed right as a myth: "the simple reason why we are dealing with a myth here rests in the surprising fact that practically all writers who make any such claims have never been able or willing to cite any trustworthy source, if they have any."

Occupation and independence
The film suggests Scotland had been under English occupation for some time, at least during Wallace's childhood, and in the run-up to the Battle of Falkirk Wallace says to the younger Bruce, "[W]e'll have what none of us have ever had before, a country of our own." In fact, Scotland had been invaded by England only the year before Wallace's rebellion; prior to the death of King Alexander III it had been a fully separate kingdom.

Portrayal of William Wallace
As John Shelton Lawrence and Robert Jewett write, "Because [William] Wallace is one of Scotland's most important national heroes and because he lived in the very distant past, much that is believed about him is probably the stuff of legend. But there is a factual strand that historians agree to", a strand they summarize from Scots scholar Matt Ewart. A. E. Christa Canitz writes further about the historical William Wallace: "[He] was a younger son of the Scottish gentry, usually accompanied by his own chaplain, well-educated, and eventually, having been appointed Guardian of the Kingdom of Scotland, engaged in diplomatic correspondence with the Hanseatic cities of Lübeck and Hamburg". She finds that in Braveheart, "any hint of his descent from the lowland gentry (i.e., the lesser nobility) is erased, and he is presented as an economically and politically marginalized Highlander and 'a farmer'—as one with the common peasant, and with a strong spiritual connection to the land which he is destined to liberate."

Colin McArthur writes that Braveheart "constructs Wallace as a kind of modern, nationalist guerrilla leader in a period half a millennium before the appearance of nationalism on the historical stage as a concept under which disparate classes and interests might be mobilised within a nation state." Writing about Braveheart's "omissions of verified historical facts", McArthur notes that Wallace made "overtures to Edward I seeking less severe treatment after his defeat at Falkirk", as well as "the well-documented fact of Wallace's having resorted to conscription and his willingness to hang those who refused to serve." Canitz posits that depicting "such lack of class solidarity" as the conscriptions and related hangings "would contaminate the movie's image of Wallace as the morally irreproachable primus inter pares among his peasant fighters."

Portrayal of Isabella of France
Isabella of France is shown having an affair with Wallace after the Battle of Falkirk.
She later tells Edward I she is pregnant, implying that her son, Edward III, was a product of the affair. In reality, Isabella was around three years old and living in France at the time of the Battle of Falkirk, was not married to Edward II until he was already king, and Edward III was born seven years after Wallace died. The breakdown of the couple's relationship over Edward's liaisons, and Isabella's menacing suggestion to the dying Longshanks that she would overthrow and destroy Edward II, mirror and foreshadow actual events: in 1326, over 20 years after Wallace's death, Isabella, her son Edward, and her lover Roger Mortimer invaded England to depose, and later murder, Edward II.

Portrayal of Robert the Bruce
Robert the Bruce did change sides between the Scots loyalists and the English more than once in the earlier stages of the Wars of Scottish Independence, but he probably did not fight on the English side at the Battle of Falkirk (although this claim does appear in a few medieval sources). Later, the Battle of Bannockburn was not a spontaneous battle; he had already been fighting a guerrilla campaign against the English for eight years. His title before becoming king was Earl of Carrick, not Earl of Bruce. Bruce's father is portrayed as an infirm leper, although it was Bruce himself who allegedly suffered from leprosy in later life. The actual Bruce's machinations around Wallace, in contrast to the meek idealist of the film, suggest that the film's father-son relationship represents different aspects of the historical Bruce's character. In the film, Bruce's father betrays Wallace to his son's disgust, acknowledging it as the price of his crown, although in real life Wallace was betrayed by the nobleman John de Menteith and delivered to the English.

Portrayal of Longshanks and Prince Edward
The actual Edward I was ruthless and temperamental, but the film exaggerates his negative aspects for effect. Edward enjoyed poetry and harp music, was a devoted and loving husband to his wife Eleanor of Castile, and as a religious man, he gave generously to charity. The film's scene where he scoffs cynically at Isabella for distributing gold to the poor after Wallace refuses it as a bribe would have been unlikely. Furthermore, Edward died on campaign two years after Wallace's execution, not in bed at his home.

The depiction of the future Edward II as an effeminate homosexual drew accusations of homophobia against Gibson. Gibson defended his depiction of Prince Edward as weak and ineffectual. In response to Longshanks' murder of the Prince's male lover Phillip, Gibson replied: "The fact that King Edward throws this character out a window has nothing to do with him being gay ... He's terrible to his son, to everybody." Gibson asserted that the reason Longshanks kills his son's lover is that the king is a "psychopath".

Wallace's military campaign
"MacGregors from the next glen" joining Wallace shortly after the action at Lanark is dubious, since it is questionable whether Clan Gregor existed at that stage, and when the clan did emerge its traditional home was Glen Orchy, some distance from Lanark. Wallace did win an important victory at the Battle of Stirling Bridge, but the version in Braveheart is highly inaccurate, as it was filmed without a bridge (and without Andrew Moray, joint commander of the Scots army, who was fatally injured in the battle). Later, Wallace did carry out a large-scale raid into the north of England, but he did not get as far south as York, nor did he kill Longshanks' nephew.
The "Irish conscripts" at the Battle of Falkirk are unhistorical; there were no Irish troops at Falkirk (although many of the English army were, in fact, Welsh). The two-handed long swords used by Gibson in the film were not in wide use in the period. A one-handed sword and shield would have been more accurate. The depiction of English cavalry and infantry soldiers using uniform dress and armor is historically inaccurate. In the feudal armies of the late 13th and early 14th century, cavalry would have been made up of nobility and knights all in their self-purchased armour and displaying their coat of arms on surcoats and shields. The armour depicted in the film, i.e. small metal plates sewn on a fabric did not exist and would have been ineffective since it could have been easily pierced by swords, spears, arrows etc. Indeed, knights of that time period would have worn mail chausses to protect their legs, a mail hauberk over a patted gambeson to protect the upper body and arms as well as a mail coif and a great helm to protect the head. Another layer of protection, the coat-of-plates would have been worn over the hauberk, but under the surcoat. Infantry would have looked very diverse utilizing any kind of armor they could obtain and afford. The Scottish fighters would have been dressed and armed in the same way as their English opponents. Kilts appeared only in the 16th century, so two centuries after the events in the film. However, the cavalry charge depicted at the battle of Stirling bridge (which did not take place at this battle) is a rare example where a movie maker correctly depicts the knights charging towards their enemies with laid in lances rather than drawn swords. Home media Braveheart was released on DVD on August 29, 2000. It was released on Blu-ray as part of the Paramount Sapphire Series on September 1, 2009. It was released on 4K UHD Blu-ray as part of the 4K upgrade of the Paramount Sapphire Series on May 15, 2018. Sequel A sequel, titled Robert the Bruce, was released in 2019. The film continues directly on from Braveheart and follows the widow Moira, portrayed by Anna Hutchison, and her family (portrayed by Gabriel Bateman and Talitha Bateman), who save Robert the Bruce, with Angus Macfadyen reprising his role from Braveheart. The cast includes Jared Harris, Patrick Fugit, Zach McGowan, Emma Kenney, Diarmaid Murtagh, Seoras Wallace, Shane Coffey, Kevin McNally, and Melora Walters. Richard Gray directed the film, with Macfadyen and Eric Belgau writing the script. Helmer Gray, Macfadyen, Hutchison, Kim Barnard, Nick Farnell, Cameron Nuggent, and Andrew Curry produced the film. Filming took place in 2019 and was completed with a limited cinematic release the same year. See also Outlaw King; although not a sequel, it depicts events that occurred immediately after the events in Braveheart Rob Roy; historical action drama film featuring Robert Roy MacGregor, an 18th-century Scottish clan chief. 
References Notes External links 1995 films 1995 drama films 1990s biographical drama films 1990s historical films 1990s war films 20th Century Fox films American biographical drama films American epic films American historical films American war drama films Anti-English sentiment BAFTA winners (films) Best Picture Academy Award winners Cultural depictions of William Wallace 1990s English-language films English-language Scottish films Biographical films about military leaders Drama films based on actual events Cultural depictions of Edward I of England Cultural depictions of Edward II of England Epic films based on actual events Rating controversies in film Films about nobility Films directed by Mel Gibson Films produced by Bruce Davey Films produced by Mel Gibson Films scored by James Horner Films set in the 13th century Films set in the 14th century Films set in Edinburgh Films set in London Films set in Yorkshire Films shot in County Kildare Films shot in County Meath Films shot in County Wicklow Films shot in Fingal Films shot in Highland (council area) Films that won the Academy Award for Best Makeup Films that won the Best Sound Editing Academy Award Films whose cinematographer won the Best Cinematography Academy Award Films whose director won the Best Directing Academy Award Films whose director won the Best Director Golden Globe Icon Productions films The Ladd Company films Paramount Pictures films War epic films War films based on actual events Robert the Bruce Films based on poems 1990s American films
2,001
4,563
https://en.wikipedia.org/wiki/Battle%20of%20Jutland
Battle of Jutland
The Battle of Jutland (known to the Germans as the Battle of the Skagerrak) was a naval battle fought between Britain's Royal Navy Grand Fleet, under Admiral Sir John Jellicoe, and the Imperial German Navy's High Seas Fleet, under Vice-Admiral Reinhard Scheer, during the First World War. The battle unfolded in extensive manoeuvring and three main engagements (the battlecruiser action, the fleet action and the night action), from 31 May to 1 June 1916, off the North Sea coast of Denmark's Jutland Peninsula. It was the largest naval battle and the only full-scale clash of battleships in that war. Jutland was the third fleet action between steel battleships, following the Battle of the Yellow Sea in 1904 and the Battle of Tsushima in 1905, during the Russo-Japanese War. Jutland was the last major battle in history fought primarily by battleships.

Germany's High Seas Fleet intended to lure out, trap, and destroy a portion of the British Grand Fleet, as the German naval force was insufficient to openly engage the entire British fleet. This formed part of a larger strategy to break the British blockade of Germany and to allow German naval vessels access to the Atlantic. Meanwhile, Great Britain's Royal Navy pursued a strategy of engaging and destroying the High Seas Fleet, thereby keeping German naval forces contained and away from Britain and her shipping lanes. The Germans planned to use Vice-Admiral Franz Hipper's fast scouting group of five modern battlecruisers to lure Vice-Admiral Sir David Beatty's battlecruiser squadrons into the path of the main German fleet. They stationed submarines in advance across the likely routes of the British ships. However, the British learned from signal intercepts that a major fleet operation was likely, so on 30 May Jellicoe sailed with the Grand Fleet to rendezvous with Beatty, passing over the locations of the German submarine picket lines while they were unprepared. The German plan had been delayed, causing further problems for their submarines, which had reached the limit of their endurance at sea.

On the afternoon of 31 May, Beatty encountered Hipper's battlecruiser force long before the Germans had expected. In a running battle, Hipper successfully drew the British vanguard into the path of the High Seas Fleet. By the time Beatty sighted the larger force and turned back towards the British main fleet, he had lost two battlecruisers from a force of six battlecruisers and four powerful battleships—though he had sped ahead of his battleships of the 5th Battle Squadron earlier in the day, effectively losing them as an integral component for much of this opening action against the five ships commanded by Hipper. Beatty's withdrawal at the sight of the High Seas Fleet, which the British had not known was in the open sea, would reverse the course of the battle by drawing the German fleet in pursuit towards the British Grand Fleet.

Between 18:30, when the sun was lowering on the western horizon, back-lighting the German forces, and nightfall at about 20:30, the two fleets—totalling 250 ships between them—directly engaged twice. Fourteen British and eleven German ships sank, with a total of 9,823 casualties. After sunset, and throughout the night, Jellicoe manoeuvred to cut the Germans off from their base, hoping to continue the battle the next morning, but under the cover of darkness Scheer broke through the British light forces forming the rearguard of the Grand Fleet and returned to port. Both sides claimed victory.
The British lost more ships and twice as many sailors but succeeded in containing the German fleet. The British press criticised the Grand Fleet's failure to force a decisive outcome, while Scheer's plan of destroying a substantial portion of the British fleet also failed. The British strategy of denying Germany access to both the United Kingdom and the Atlantic did succeed, which was the British long-term goal. The Germans' "fleet in being" continued to pose a threat, requiring the British to keep their battleships concentrated in the North Sea, but the battle reinforced the German policy of avoiding all fleet-to-fleet contact. At the end of 1916, after further unsuccessful attempts to reduce the Royal Navy's numerical advantage, the German Navy accepted that its surface ships had been successfully contained, subsequently turning its efforts and resources to unrestricted submarine warfare and the destruction of Allied and neutral shipping, which—along with the Zimmermann Telegram—by April 1917 triggered the United States of America's declaration of war on Germany.

Subsequent reviews commissioned by the Royal Navy generated strong disagreement between supporters of Jellicoe and Beatty concerning the two admirals' performance in the battle. Debate over their performance and the significance of the battle continues to this day.

Background and planning

German planning
With 16 dreadnought-type battleships, compared with the Royal Navy's 28, the German High Seas Fleet stood little chance of winning a head-to-head clash. The Germans therefore adopted a divide-and-conquer strategy. They would stage raids into the North Sea and bombard the English coast, with the aim of luring out small British squadrons and pickets, which could then be destroyed by superior forces or submarines. In January 1916, Admiral von Pohl, commander of the German fleet, fell ill. He was replaced by Scheer, who believed that the fleet had been used too defensively, had better ships and men than the British, and ought to take the war to them; this conviction shaped his view of what German naval strategy should be.

On 25 April 1916, a decision was made by the German Imperial Admiralty to halt indiscriminate attacks by submarines on merchant shipping. This followed protests from neutral countries, notably the United States, that their nationals had been the victims of attacks. Germany agreed that future attacks would only take place in accord with internationally agreed prize rules, which required an attacker to give a warning and allow the crews of vessels time to escape, and not to attack neutral vessels at all. Scheer believed that it would not be possible to continue attacks on these terms, which took away the advantage of secret approach by submarines and left them vulnerable to even relatively small guns on the target ships. Instead, he set about deploying the submarine fleet against military vessels.

It was hoped that, following a successful German submarine attack, fast British escorts, such as destroyers, would be tied down by anti-submarine operations. If the Germans could catch the British in the expected locations, good prospects were thought to exist of at least partially redressing the balance of forces between the fleets. After the British sortied in response to the raiding attack force, the Royal Navy's centuries-old instincts for aggressive action could be exploited to draw its weakened units towards the main German fleet under Scheer.
The hope was that Scheer would thus be able to ambush a section of the British fleet and destroy it.

Submarine deployments
A plan was devised to station submarines offshore from British naval bases, and then stage some action that would draw the British ships out to the waiting submarines. The battlecruiser Seydlitz had been damaged in a previous engagement but was due to be repaired by mid-May, so an operation was scheduled for 17 May 1916. At the start of May, difficulties with condensers were discovered on ships of the third battleship squadron, so the operation was put back to 23 May. Ten submarines, among them U-43, U-44, U-47, U-66 and U-32, were given orders first to patrol in the central North Sea between 17 and 22 May, and then to take up waiting positions. U-43 and U-44 were stationed in the Pentland Firth, which the Grand Fleet was likely to cross when leaving Scapa Flow, while the remainder proceeded to the Firth of Forth, awaiting battlecruisers departing Rosyth. Each boat had an allocated area, within which it could move around as necessary to avoid detection, but was instructed to keep within it. During the initial North Sea patrol the boats were instructed to sail only north–south, so that any enemy who chanced to encounter one would believe it was departing or returning from operations on the west coast (which required them to pass around the north of Britain). Once at their final positions, the boats were under strict orders to avoid premature detection that might give away the operation. It was arranged that a coded signal would be transmitted to alert the submarines exactly when the operation commenced: "Take into account the enemy's forces may be putting to sea".

Additionally, UB-27 was sent out on 20 May with instructions to work its way into the Firth of Forth past May Island. U-46 was ordered to patrol the coast of Sunderland, which had been chosen for the diversionary attack, but because of engine problems it was unable to leave port and U-47 was diverted to this task. On 13 May, U-72 was sent to lay mines in the Firth of Forth; on the 23rd, U-74 departed to lay mines in the Moray Firth; and on the 24th, U-75 was dispatched similarly west of the Orkney Islands. UB-21 and UB-22 were sent to patrol the Humber, where (incorrect) reports had suggested the presence of British warships. U-22, U-46 and U-67 were positioned north of Terschelling to protect against intervention by British light forces stationed at Harwich.

On 22 May 1916, it was discovered that Seydlitz was still not watertight after repairs and would not now be ready until the 29th. The ambush submarines were by then on station and experiencing difficulties of their own: visibility near the coast was frequently poor due to fog, and sea conditions were either so calm that the slightest ripple, as from a periscope, could give away a boat's position, or so rough as to make it very hard to keep the vessel at a steady depth. The British had become aware of unusual submarine activity and had begun counter-patrols that forced the submarines out of position.

UB-27 passed Bell Rock on the night of 23 May on its way into the Firth of Forth as planned, but was halted by engine trouble. After repairs it continued to approach, following behind merchant vessels, and reached Largo Bay on 25 May. There the boat became entangled in nets that fouled one of its propellers, forcing it to abandon the operation and return home. U-74 was detected by four armed trawlers on 27 May and sunk south-east of Peterhead.
U-75 laid its mines off the Orkney Islands; although they played no part in the battle, they were responsible later for sinking the cruiser HMS Hampshire, carrying Lord Kitchener, the Secretary of State for War, on 5 June, killing him and all but 12 of the crew. U-72 was forced to abandon its mission without laying any mines when an oil leak meant it was leaving a visible surface trail astern.

Zeppelins
The Germans maintained a fleet of Zeppelins that they used for aerial reconnaissance and occasional bombing raids. The planned raid on Sunderland intended to use Zeppelins to watch out for the British fleet approaching from the north, which might otherwise surprise the raiders. By 28 May, strong north-easterly winds meant that it would not be possible to send out the Zeppelins, so the raid again had to be postponed.

The submarines could only stay on station until 1 June before their supplies would be exhausted and they had to return, so a decision had to be made quickly about the raid. It was decided to use an alternative plan, abandoning the attack on Sunderland and instead sending a patrol of battlecruisers to the Skagerrak, where it was likely they would encounter merchant ships carrying British cargo and British cruiser patrols. It was felt this could be done without air support, because the action would now be much closer to Germany, relying instead on cruiser and torpedo boat patrols for reconnaissance.

Orders for the alternative plan were issued on 28 May, although it was still hoped that last-minute improvements in the weather would allow the original plan to go ahead. The German fleet assembled in the Jade River and at Wilhelmshaven and was instructed to raise steam and be ready for action from midnight on 28 May. By 14:00 on 30 May, the wind was still too strong and the final decision was made to use the alternative plan. The coded signal "31 May G.G.2490" was transmitted to the ships of the fleet to inform them that the Skagerrak attack would start on 31 May. The pre-arranged signal to the waiting submarines was transmitted throughout the day from the E-Dienst radio station at Bruges and from the U-boat tender Arcona, anchored at Emden. Only two of the waiting submarines, U-66 and U-32, received the order.

British response
Unfortunately for the German plan, the British had obtained a copy of the main German codebook from the light cruiser SMS Magdeburg, which had been boarded by the Russian Navy after the ship ran aground in Russian territorial waters in 1914. German naval radio communications could therefore often be quickly deciphered, and the British Admiralty usually knew about German activities. The British Admiralty's Room 40 maintained direction finding and interception of German naval signals. It had intercepted and decrypted a German signal on 28 May that provided "ample evidence that the German fleet was stirring in the North Sea". Further signals were intercepted, and although they were not decrypted, it was clear that a major operation was likely. At 11:00 on 30 May, Jellicoe was warned that the German fleet seemed prepared to sail the following morning. By 17:00, the Admiralty had intercepted the signal from Scheer, "31 May G.G.2490", making it clear something significant was imminent.
Not knowing the Germans' objective, Jellicoe and his staff decided to position the fleet to head off any attempt by the Germans to enter the North Atlantic or the Baltic through the Skagerrak, taking up a position off Norway from which they could cut off any German raid into the shipping lanes of the Atlantic or prevent the Germans from heading into the Baltic. A position further west was unnecessary, as that area of the North Sea could be patrolled by aircraft.

Consequently, Admiral Jellicoe led the sixteen dreadnought battleships of the 1st and 4th Battle Squadrons of the Grand Fleet and three battlecruisers of the 3rd Battlecruiser Squadron eastwards out of Scapa Flow at 22:30 on 30 May. He was to meet the 2nd Battle Squadron of eight dreadnought battleships, commanded by Vice-Admiral Martyn Jerram, coming from Cromarty. Beatty's force of six ships of the 1st and 2nd Battlecruiser Squadrons, plus the 5th Battle Squadron of four fast battleships, left the Firth of Forth at around the same time. Jellicoe intended to rendezvous with him west of the mouth of the Skagerrak off the coast of Jutland and wait for the Germans to appear or for their intentions to become clear. The planned position would give him the widest range of responses to likely German moves.

Hipper's raiding force did not leave the Outer Jade Roads until 01:00 on 31 May, heading west of Heligoland Island following a cleared channel through the minefields, and then turning north. The main German fleet of sixteen dreadnought battleships of the 1st and 3rd Battle Squadrons left the Jade at 02:30, and was joined off Heligoland at 04:00 by the six pre-dreadnoughts of the 2nd Battle Squadron coming from the Elbe River.

Naval tactics in 1916
The principle of concentration of force was fundamental to the fleet tactics of this time. As outlined by Captain Reginald Hall in 1914, tactical doctrine called for a fleet approaching battle to be in a compact formation of parallel columns, allowing relatively easy manoeuvring and giving shortened sight lines within the formation, which simplified the passing of the signals necessary for command and control. A fleet formed in several short columns could change its heading faster than one formed in a single long column. Since most command signals were made with flags or signal lamps between ships, the flagship was usually placed at the head of the centre column so that its signals might be more easily seen by the many ships of the formation. Wireless telegraphy was in use, though security (radio direction finding), encryption, and the limitations of the radio sets made their extensive use more problematic. Command and control of such huge fleets remained difficult.

It might thus take a very long time for a signal from the flagship to be relayed to the entire formation: a signal usually had to be confirmed by each ship before it could be relayed to other ships, and an order for a fleet movement had to be received and acknowledged by every ship before it could be executed. In a large single-column formation, a signal could take 10 minutes or more to be passed from one end of the line to the other, whereas in a formation of parallel columns, visibility across the diagonals was often better (and always shorter) than in a single long column, and the diagonals gave signal "redundancy", increasing the probability that a message would be quickly seen and correctly interpreted.
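The relationship between formation and signalling delay lends itself to a simple back-of-the-envelope model. The sketch below is purely illustrative and not drawn from any naval manual: the per-ship relay time, the fleet size, and the relay_delay helper are all assumed values, chosen only so that a single 24-ship column works out to roughly the "10 minutes or more" quoted above.

    # Illustrative model of flag-signal relay delay in fleet formations.
    # All figures are assumptions for illustration, not historical data.

    def relay_delay(ships_in_column: int, per_ship_seconds: float = 25.0) -> float:
        """Seconds for a hoist to pass from the lead ship to the rear of a column,
        assuming each ship must read, acknowledge and repeat it in turn."""
        hops = ships_in_column - 1      # the flagship originates the signal
        return hops * per_ship_seconds

    fleet_size = 24                     # e.g. a 24-battleship battle fleet

    # Single long column: the signal is relayed down the entire line.
    single_column = relay_delay(fleet_size)

    # Six parallel columns of four: the flagship is visible across the
    # diagonals, so the worst case is relaying down one short column.
    parallel = relay_delay(fleet_size // 6)

    print(f"single column : {single_column / 60:.1f} min")  # 9.6 min
    print(f"6 x 4 columns : {parallel / 60:.1f} min")       # 1.2 min

Under these assumed figures, splitting the fleet into short columns cuts the worst-case relay time by roughly a factor of the number of columns, which is the quantitative intuition behind the doctrine described above.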
However, before battle was joined the heavy units of the fleet would, if possible, deploy into a single column. To form the battle line in the correct orientation relative to the enemy, the commanding admiral had to know the enemy fleet's distance, bearing, heading, and speed. It was the task of the scouting forces, consisting primarily of battlecruisers and cruisers, to find the enemy and report this information in sufficient time, and, if possible, to deny the enemy's scouting forces the opportunity of obtaining the equivalent information. Ideally, the battle line would cross the intended path of the enemy column so that the maximum number of guns could be brought to bear, while the enemy could fire only with the forward guns of the leading ships, a manoeuvre known as "crossing the T". Admiral Tōgō, commander of the Japanese battleship fleet, had achieved this against Admiral Zinovy Rozhestvensky's Russian battleships in 1905 at the Battle of Tsushima, with devastating results. Jellicoe achieved this twice in one hour against the High Seas Fleet at Jutland, but on both occasions Scheer managed to turn away and disengage, thereby avoiding a decisive action.

Ship design
Within the existing technological limits, a trade-off had to be made between the weight and size of guns, the weight of armour protecting the ship, and the maximum speed. Battleships sacrificed speed for armour and heavy naval guns. British battlecruisers sacrificed weight of armour for greater speed, while their German counterparts were armed with lighter guns and heavier armour. These weight savings allowed them to escape danger or catch other ships. Generally, the larger guns mounted on British ships allowed an engagement at greater range. In theory, a lightly armoured ship could stay out of range of a slower opponent while still scoring hits. The fast pace of development in the pre-war years meant that every few years a new generation of ships rendered its predecessors obsolete. Thus, fairly young ships could still be obsolete compared with the newest ships, and fare badly in an engagement against them.

Admiral John Fisher, responsible for reconstruction of the British fleet in the pre-war period, favoured large guns, oil fuel, and speed. Admiral Tirpitz, responsible for the German fleet, favoured ship survivability and chose to sacrifice some gun size for improved armour. The German battlecruisers carried belt armour equivalent in thickness—though not as comprehensive—to that of British battleships, significantly better than on British battlecruisers such as Tiger. German ships had better internal subdivision and had fewer doors and other weak points in their bulkheads, but with the disadvantage that space for the crew was greatly reduced. As they were designed only for sorties in the North Sea, they did not need to be as habitable as the British vessels, and their crews could live in barracks ashore when in harbour.

Order of battle
Warships of the period were armed with guns firing projectiles of varying weights, bearing high-explosive warheads. The total weight of all the projectiles fired by all of a ship's broadside guns is referred to as the "weight of broadside". At Jutland, the British ships' combined weight of broadside was substantially greater than the German fleet's. This does not take into consideration the ability of some ships and their crews to fire more or less rapidly than others, which would increase or decrease the amount of fire that one combatant was able to bring to bear on an opponent for any length of time.
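Since the passage defines "weight of broadside" in words, a worked example may help. The gun counts and shell weights below are rough illustrative values, not figures taken from this article's order of battle.

    # "Weight of broadside": combined weight of shell that all of a ship's
    # broadside guns fire in one salvo. Gun counts and shell weights below
    # are rough illustrative values only.

    def weight_of_broadside(guns: int, shell_weight_lb: int) -> int:
        """Salvo weight for one ship, in pounds."""
        return guns * shell_weight_lb

    # A single battleship with eight heavy guns, each shell about 1,900 lb:
    print(weight_of_broadside(8, 1900))   # 15200 lb per salvo

    # A fleet total is simply the sum over every ship present.
    # A hypothetical three-ship squadron with mixed main batteries:
    squadron = [(8, 1900), (8, 1400), (10, 850)]
    print(sum(weight_of_broadside(g, w) for g, w in squadron))  # 34900 lb
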
Jellicoe's Grand Fleet was split into two sections. The dreadnought Battle Fleet, with which he sailed, formed the main force and was composed of 24 battleships and three battlecruisers. The battleships were formed into three squadrons of eight ships, further subdivided into divisions of four, each led by a flag officer. Accompanying them were eight armoured cruisers (classified by the Royal Navy since 1913 as "cruisers"), eight light cruisers, four scout cruisers, 51 destroyers, and one destroyer-minelayer.

The Grand Fleet sailed without three of its battleships: one in refit at Invergordon, one dry-docked at Rosyth and one in refit at Devonport. A fourth, brand-new battleship was also left behind; with only three weeks in service, her untrained crew was judged unready for battle. HMS Audacious had been sunk by a German mine on 27 October 1914. British reconnaissance was provided by the Battlecruiser Fleet under David Beatty: six battlecruisers, four fast Queen Elizabeth-class battleships, 14 light cruisers and 27 destroyers. Air scouting was provided by the attachment of the seaplane tender HMS Engadine, one of the first aircraft carriers in history to participate in a naval engagement.

The German High Seas Fleet under Scheer was also split into a main force and a separate reconnaissance force. Scheer's main battle fleet was composed of 16 battleships and six pre-dreadnought battleships arranged in an identical manner to the British. With them were six light cruisers and 31 torpedo-boats (the latter being roughly equivalent to a British destroyer). The only German battleship missing was SMS König Albert. The German scouting force, commanded by Franz Hipper, consisted of five battlecruisers, five light cruisers and 30 torpedo-boats. The Germans had no equivalent to Engadine and no heavier-than-air aircraft to operate with the fleet, but had the Imperial German Naval Airship Service's force of rigid airships available to patrol the North Sea.

All of the battleships and battlecruisers on both sides carried torpedoes of various sizes, as did the lighter craft. The British battleships carried three or four underwater torpedo tubes; the battlecruisers carried from two to five. All were either 18-inch or 21-inch diameter. The German battleships carried five or six underwater torpedo tubes in three sizes from 18 to 21 inches, and the battlecruisers carried four or five tubes.

The German battle fleet was hampered by the slow speed and relatively poor armament of the six pre-dreadnoughts of II Squadron, which limited the maximum German fleet speed to below that of the British fleet. On the British side, the eight armoured cruisers were deficient in both speed and armour protection. Both of these obsolete squadrons were notably vulnerable to attacks by more modern enemy ships.

Battlecruiser action
The route of the British battlecruiser fleet took it through the patrol sector allocated to U-32. After receiving the order to commence the operation, the U-boat moved to a position east of the Isle of May at dawn on 31 May. At 03:40, it sighted two cruisers leaving the Forth. It launched one torpedo at the leading cruiser at long range, but its periscope jammed 'up', giving away the position of the submarine as it manoeuvred to fire a second.
The lead cruiser turned away to dodge the torpedo, while the second turned towards the submarine, attempting to ram. U-32 crash-dived, and on raising its periscope at 04:10 saw two battlecruisers (the 2nd Battlecruiser Squadron) heading south-east. They were too far away to attack, but Kapitänleutnant von Spiegel reported the sighting of two battleships and two cruisers to Germany.

U-66 was also supposed to be patrolling off the Firth of Forth but had been forced north to a position off Peterhead by patrolling British vessels. This now brought it into contact with the 2nd Battle Squadron, coming from the Moray Firth. At 05:00, it had to crash-dive when a cruiser appeared from the mist heading toward it. It was followed by another cruiser and eight battleships. U-66 got within firing range of the battleships and prepared to attack, but was forced to dive by an approaching destroyer and missed the opportunity. At 06:35, it reported eight battleships and cruisers heading north.

The courses reported by both submarines were incorrect, because they reflected one leg of a zigzag being used by British ships to avoid submarines. Taken with a wireless intercept of more ships leaving Scapa Flow earlier in the night, they created the impression in the German High Command that the British fleet, whatever it was doing, was split into separate sections moving apart, which was precisely how the Germans wished to meet it.

Jellicoe's ships proceeded to their rendezvous undamaged and undiscovered. However, he was now misled by an Admiralty intelligence report advising that the German main battle fleet was still in port. The Director of Operations Division, Rear Admiral Thomas Jackson, had asked the intelligence division, Room 40, for the current location of German call sign DK, used by Admiral Scheer. They had replied that it was currently transmitting from Wilhelmshaven. It was known to the intelligence staff that Scheer deliberately used a different call sign when at sea, but no one asked for this information or explained the reason behind the query – to locate the German fleet.

The German battlecruisers cleared the minefields surrounding the Amrum swept channel by 09:00. They then proceeded north-west, passing west of the Horn's Reef lightship, heading for the Little Fisher Bank at the mouth of the Skagerrak. The High Seas Fleet followed some distance behind. The battlecruisers were in line ahead, with the four cruisers of the II Scouting Group plus supporting torpedo boats ranged in an arc ahead and to either side. The IX Torpedo Boat Flotilla formed close support immediately surrounding the battlecruisers. The High Seas Fleet similarly adopted a line-ahead formation, with close screening by torpedo boats to either side and a further screen of five cruisers surrounding the column some distance away. The wind had finally moderated so that Zeppelins could be used, and by 11:30 five had been sent out: L14 to the Skagerrak, L23 east of Noss Head in the Pentland Firth, L21 off Peterhead, L9 off Sunderland, and L16 east of Flamborough Head. Visibility, however, was still bad, with low cloud.

Contact
By around 14:00, Beatty's ships were proceeding eastward at roughly the same latitude as Hipper's squadron, which was heading north. Had the courses remained unchanged, Beatty would have passed between the two German fleets, south of the battlecruisers and north of the High Seas Fleet, at around 16:30, possibly trapping his ships just as the German plan envisioned.
His orders were to stop his scouting patrol when he reached a point east of Britain and then turn north to meet Jellicoe, which he did at this time. Beatty's ships were divided into three columns, with the two battlecruiser squadrons leading in parallel lines. The 5th Battle Squadron was stationed to the north-west, on the side furthest away from any expected enemy contact, while a screen of cruisers and destroyers was spread south-east of the battlecruisers. After the turn, the 5th Battle Squadron was leading the British ships in the westernmost column, and Beatty's squadron was centre and rearmost, with the 2nd BCS to the west.

At 14:20 on 31 May, despite heavy haze and scuds of fog giving poor visibility, scouts from Beatty's force reported enemy ships to the south-east; the British light units, investigating a neutral Danish steamer (N J Fjord), which was stopped between the two fleets, had found two German destroyers engaged on the same mission. The first shots of the battle were fired at 14:28, when Galatea and Phaeton of the British 1st Light Cruiser Squadron opened on the German torpedo boats, which withdrew toward their approaching light cruisers. At 14:36, the Germans scored the first hit of the battle when Elbing, of Rear-Admiral Friedrich Boedicker's Scouting Group II, hit her British counterpart Galatea at extreme range.

Beatty began to move his battlecruisers and supporting forces south-eastwards and then east to cut the German ships off from their base, and ordered Engadine to launch a seaplane to try to get more information about the size and location of the German forces. This was the first time in history that a carrier-based aeroplane was used for reconnaissance in naval combat. Engadine's aircraft did locate and report some German light cruisers just before 15:30 and came under anti-aircraft gunfire, but attempts to relay reports from the aeroplane failed.

Unfortunately for Beatty, his initial course changes at 14:32 were not received by Sir Hugh Evan-Thomas's 5th Battle Squadron (the distance being too great to read his flags), because the battlecruiser Tiger—the last ship in his column—was no longer in a position where she could relay signals by searchlight to Evan-Thomas, as she had previously been ordered to do. Whereas before the north turn Tiger had been the closest ship to Evan-Thomas, she was now further away than Beatty in Lion. Matters were aggravated because Evan-Thomas had not been briefed regarding standing orders within Beatty's squadron, as his squadron normally operated with the Grand Fleet. Fleet ships were expected to obey movement orders precisely and not deviate from them, whereas Beatty's standing instructions expected his officers to use their initiative and keep station with the flagship. As a result, the four Queen Elizabeth-class battleships—which were the fastest and most heavily armed in the world at that time—remained on the previous course for several minutes, ending up about ten miles behind rather than five. Beatty also had the opportunity during the previous hours to concentrate his forces, and no reason not to do so, whereas he steamed ahead at full speed, faster than the battleships could manage. Dividing the force had serious consequences for the British, costing them what would have been an overwhelming advantage in ships and firepower during the first half-hour of the coming battle.
With visibility favouring the Germans, Hipper's battlecruisers, steaming approximately north-west, sighted Beatty's squadron at long range at 15:22, while Beatty's forces did not identify Hipper's battlecruisers until 15:30 (position 1 on map). At 15:45, Hipper turned south-east to lead Beatty toward Scheer, who was south-east with the main force of the High Seas Fleet.

Run to the south

Beatty's conduct during the next 15 minutes has received a great deal of criticism, as his ships out-ranged and outnumbered the German squadron, yet he held his fire for over 10 minutes with the German ships in range. He also failed to use the time available to rearrange his battlecruisers into a fighting formation, with the result that they were still manoeuvring when the battle started. At 15:48, with the opposing forces roughly parallel and the British to the south-west of the Germans (i.e., on the right side), Hipper opened fire, followed by the British ships as their guns came to bear upon targets (position 2). Thus began the opening phase of the battlecruiser action, known as the Run to the South, in which the British chased the Germans while Hipper intentionally led Beatty toward Scheer. During the first minutes of the ensuing battle, all the British ships except Princess Royal fired far over their German opponents, owing to adverse visibility conditions, before finally getting the range. Only Lion and Princess Royal had settled into formation, so the other four ships were hampered in aiming by their own turning. Beatty was to windward of Hipper, so funnel and gun smoke from his own ships tended to obscure his targets, while Hipper's smoke blew clear. Also, the eastern sky was overcast, and the grey German ships were indistinct and difficult to range.

Beatty had ordered his ships to engage in a line, one British ship engaging each German ship, with his flagship doubling on the German flagship Lützow. However, due to another mistake with flag signalling, and possibly because Queen Mary and Tiger were unable to see the German lead ship because of smoke, the second German ship, Derfflinger, was left unengaged and free to fire without disruption. Moltke drew fire from two of Beatty's battlecruisers, but still fired with great accuracy during this time, hitting Tiger nine times in the first 12 minutes. The Germans drew first blood. Aided by superior visibility, Hipper's five battlecruisers quickly registered hits on three of the six British battlecruisers; seven minutes passed before the British managed to score their first hit.

The first near-kill of the Run to the South occurred at 16:00, when a shell from Lützow wrecked the "Q" turret amidships on Beatty's flagship Lion. Dozens of crewmen were instantly killed, but far greater destruction was averted when the mortally wounded turret commander – Major Francis Harvey of the Royal Marines – promptly ordered the magazine doors shut and the magazine flooded. This prevented a magazine explosion at 16:28, when a flash fire ignited ready cordite charges beneath the turret and killed everyone in the chambers outside "Q" magazine. Lion was saved. Indefatigable was not so lucky; at 16:02, just 14 minutes into the gunnery exchange, she was hit aft by three shells from Von der Tann, causing damage sufficient to knock her out of line and detonating "X" magazine aft. Soon after, despite the near-maximum range, Von der Tann put another shell on Indefatigable's "A" turret forward.
The plunging shells probably pierced the thin upper armour, and seconds later Indefatigable was ripped apart by another magazine explosion, sinking immediately and leaving only two survivors from her crew of 1,019 officers and men (position 3).

Hipper's position deteriorated somewhat by 16:15 as the 5th Battle Squadron finally came into range, so that he had to contend with gunfire from the four battleships astern as well as from Beatty's five remaining battlecruisers to starboard. But he knew his baiting mission was close to completion, as his force was rapidly closing with Scheer's main body. At 16:08, the lead battleship of the 5th Battle Squadron, Barham, caught up with Hipper and opened fire at extreme range, scoring a hit on Von der Tann within 60 seconds. Still, it was 16:15 before all the battleships of the squadron were able to fully engage at long range. At 16:25, the battlecruiser action intensified again when Queen Mary was hit by what may have been a combined salvo from Derfflinger and Seydlitz; she disintegrated when both forward magazines exploded, sinking with all but nine of her 1,275-man crew lost (position 4).

During the Run to the South, from 15:48 to 16:54, the German battlecruisers made an estimated total of forty-two hits on the British battlecruisers (nine on Lion, six on Princess Royal, seven on Queen Mary, fourteen on Tiger, one on New Zealand, five on Indefatigable), and two more on the battleship Barham, compared with only eleven hits by the British battlecruisers (four on Lützow, four on Seydlitz, two on Moltke, one on Von der Tann), and six hits by the battleships (one on Seydlitz, four on Moltke, one on Von der Tann).

Shortly after 16:26, a salvo struck on or around Princess Royal, which was obscured by spray and smoke from shell bursts. A signalman promptly leapt onto the bridge of Lion and announced "Princess Royal's blown up, Sir." Beatty famously turned to his flag captain, saying "Chatfield, there seems to be something wrong with our bloody ships today." (In popular legend, Beatty also immediately ordered his ships to "turn two points to port", i.e., two points nearer the enemy, but there is no official record of any such command or course change.) Princess Royal, as it turned out, was still afloat after the spray cleared.

At 16:30, Scheer's leading battleships sighted the distant battlecruiser action; soon after, Southampton of Beatty's 2nd Light Cruiser Squadron, led by Commodore William Goodenough, sighted the main body of Scheer's High Seas Fleet, dodging numerous heavy-calibre salvoes to report in detail the German strength: 16 dreadnoughts with six older battleships. This was the first news Beatty and Jellicoe had that Scheer and his battle fleet were even at sea.

Simultaneously, an all-out destroyer action raged in the space between the opposing battlecruiser forces, as British and German destroyers fought each other and attempted to torpedo the larger enemy ships. Each side fired many torpedoes, but both battlecruiser forces turned away from the attacks, and all escaped harm except Seydlitz, which was hit forward at 16:57 by a torpedo fired by the British destroyer Petard. Though taking on water, Seydlitz maintained speed. The destroyer Nestor, under the command of Captain Barry Bingham, led the British attacks. The British disabled the German torpedo boat V27, which the Germans soon abandoned and sank, and Petard then torpedoed and sank V29, her second score of the day. Other German torpedo boats rescued the crews of their sunken sister ships.
But Nestor and another British destroyer, Nomad, were immobilised by shell hits, and were later sunk by Scheer's passing dreadnoughts. Bingham was rescued, and awarded the Victoria Cross for his leadership in the destroyer action.

Run to the north

As soon as he himself sighted the vanguard of Scheer's distant battleship line at 16:40, Beatty turned his battlecruiser force 180°, heading north to draw the Germans toward Jellicoe (position 5). Beatty's withdrawal toward Jellicoe is called the "Run to the North", in which the tables turned and the Germans chased the British. Because Beatty once again failed to signal his intentions adequately, the battleships of the 5th Battle Squadron – which were too far behind to read his flags – found themselves passing the battlecruisers on an opposing course and heading directly toward the approaching main body of the High Seas Fleet. At 16:48, at extreme range, Scheer's leading battleships opened fire. Meanwhile, at 16:47, having received Goodenough's signal and knowing that Beatty was now leading the German battle fleet north to him, Jellicoe signalled to his own forces that the fleet action they had waited so long for was finally imminent; at 16:51, by radio, he informed the Admiralty in London.

The difficulties of the 5th Battle Squadron were compounded when Beatty gave the order to Evan-Thomas to "turn in succession" (rather than "turn together") at 16:48 as the battleships passed him. Evan-Thomas acknowledged the signal, but Lieutenant-Commander Ralph Seymour, Beatty's flag lieutenant, aggravated the situation when he did not haul down the flags (to execute the signal) for some minutes. At 16:55, when the 5BS had moved within range of the enemy battleships, Evan-Thomas issued his own flag command warning his squadron to expect sudden manoeuvres and to follow his lead, before starting to turn on his own initiative. The order to turn in succession would have resulted in all four ships turning in the same patch of sea as they reached it one by one, giving the High Seas Fleet repeated opportunity, with ample time, to find the proper range. However, the captain of the trailing ship, Malaya, turned early, mitigating the adverse results.

For the next hour, the 5th Battle Squadron acted as Beatty's rearguard, drawing fire from all the German ships within range, while by 17:10 Beatty had deliberately eased his own squadron out of range of Hipper's now-superior battlecruiser force. Since visibility and firepower now favoured the Germans, there was no incentive for Beatty to risk further battlecruiser losses when his own gunnery could not be effective. Illustrating the imbalance, Beatty's battlecruisers did not score any hits on the Germans in this phase until 17:45, but they had rapidly received five more hits themselves before Beatty opened the range (four on Lion, of which three were by Lützow, and one on Tiger by Seydlitz). Now the only targets the Germans could reach, the ships of the 5th Battle Squadron received simultaneous fire from Hipper's battlecruisers to the east (which HMS Barham and Valiant engaged) and Scheer's leading battleships to the south-east (which Warspite and Malaya engaged). Three took hits: Barham (four by Derfflinger), Warspite (two by Seydlitz), and Malaya (seven by the German battleships). Only Valiant was unscathed. The four battleships were far better suited to take this sort of pounding than the battlecruisers, and none were lost, though Malaya suffered heavy damage, an ammunition fire, and heavy crew casualties.
At the same time, the fire of the four British battleships was accurate and effective. As the two British squadrons headed north at top speed, eagerly chased by the entire German fleet, the 5th Battle Squadron scored 13 hits on the enemy battlecruisers (four on Lützow, three on Derfflinger, six on Seydlitz) and five on battleships (although only one did any serious damage) (position 6).

The fleets converge

Jellicoe was now aware that a full fleet engagement was nearing, but had insufficient information on the position and course of the Germans. To assist Beatty, early in the battle, at about 16:05, Jellicoe had ordered Rear-Admiral Horace Hood's 3rd Battlecruiser Squadron to speed ahead to find and support Beatty's force, and Hood was now racing SSE well in advance of Jellicoe's northern force. Rear-Admiral Arbuthnot's 1st Cruiser Squadron patrolled the van of Jellicoe's main battleship force as it advanced steadily to the south-east. At 17:33, the armoured cruiser Black Prince of Arbuthnot's squadron, on the far south-west flank of Jellicoe's force, came within view of Falmouth, which was stationed ahead of Beatty with the 3rd Light Cruiser Squadron, establishing the first visual link between the converging bodies of the Grand Fleet. At 17:38, the scout cruiser Chester, screening Hood's oncoming battlecruisers, was intercepted by the van of the German scouting forces under Rear-Admiral Boedicker. Heavily outnumbered by Boedicker's four light cruisers, Chester was pounded before being relieved by Hood's heavy units, which swung westward for that purpose. Hood's flagship Invincible disabled the light cruiser Wiesbaden shortly after 17:56. Wiesbaden became a sitting target for most of the British fleet during the next hour, but remained afloat and fired some torpedoes at the passing enemy battleships from long range. Meanwhile, Boedicker's other ships turned toward Hipper and Scheer in the mistaken belief that Hood was leading a larger force of British capital ships from the north and east. A chaotic destroyer action in mist and smoke ensued as German torpedo boats attempted to blunt the arrival of this new formation, but Hood's battlecruisers dodged all the torpedoes fired at them. In this action, after leading a torpedo counter-attack, the British destroyer Shark was disabled, but continued to return fire at numerous passing enemy ships for the next hour.

Fleet action

Deployment

In the meantime, Beatty and Evan-Thomas had resumed their engagement with Hipper's battlecruisers, this time with the visual conditions to their advantage. With several of his ships damaged, Hipper turned back toward Scheer at around 18:00, just as Beatty's flagship Lion was finally sighted from Jellicoe's flagship Iron Duke. Jellicoe twice demanded the latest position of the German battle fleet from Beatty, who could not see the German battleships and failed to respond until 18:14. Meanwhile, Jellicoe received confused sighting reports of varying accuracy and limited usefulness from light cruisers and battleships on the starboard (southern) flank of his force.

Jellicoe was in a worrying position. He needed to know the location of the German fleet to judge when and how to deploy his battleships from their cruising formation (six columns of four ships each) into a single battle line. The deployment could be on either the westernmost or the easternmost column, and had to be carried out before the Germans arrived; but early deployment could mean losing any chance of a decisive encounter.
Deploying to the west would bring his fleet closer to Scheer, gaining valuable time as dusk approached, but the Germans might arrive before the manoeuvre was complete. Deploying to the east would take the force away from Scheer, but Jellicoe's ships might be able to cross the "T", and visibility would strongly favour British gunnery – Scheer's forces would be silhouetted against the setting sun to the west, while the Grand Fleet would be indistinct against the dark skies to the north and east, and would be hidden by the reflection of the low sunlight off intervening haze and smoke. Deployment would take twenty irreplaceable minutes, and the fleets were closing at full speed. In one of the most critical and difficult tactical command decisions of the entire war, Jellicoe ordered deployment to the east at 18:15.

Windy Corner

Meanwhile, Hipper had rejoined Scheer, and the combined High Seas Fleet was heading north, directly toward Jellicoe. Scheer had no indication that Jellicoe was at sea, let alone that he was bearing down from the north-west, and was distracted by the intervention of Hood's ships to his north and east. Beatty's four surviving battlecruisers were now crossing the van of the British dreadnoughts to join Hood's three battlecruisers; at this time, Arbuthnot's flagship, the armoured cruiser Defence, and her squadron-mate Warrior both charged across Beatty's bows, and Lion narrowly avoided a collision with Warrior. Nearby, numerous British light cruisers and destroyers on the south-western flank of the deploying battleships were also crossing each other's courses in attempts to reach their proper stations, often barely escaping collisions, and under fire from some of the approaching German ships. This period of peril and heavy traffic attending the merger and deployment of the British forces later became known as "Windy Corner".

Arbuthnot was attracted by the drifting hull of the crippled Wiesbaden. With Warrior, Defence closed in for the kill, only to blunder right into the gun sights of Hipper's and Scheer's oncoming capital ships. Defence was deluged by heavy-calibre gunfire from many German battleships, which detonated her magazines in a spectacular explosion viewed by most of the deploying Grand Fleet. She sank with all hands (903 officers and men). Warrior was also hit badly, but was spared destruction by a mishap to the nearby battleship Warspite, whose steering gear overheated and jammed under heavy load at high speed as the 5th Battle Squadron made its turn to the north at 18:19. Steaming at top speed in wide circles, Warspite attracted the attention of the German dreadnoughts and took 13 hits, inadvertently drawing fire away from the hapless Warrior. Warspite was brought back under control and survived the onslaught, but was badly damaged, had to reduce speed, and withdrew northward; later (at 21:07), she was ordered back to port by Evan-Thomas. Warspite went on to a long and illustrious career, serving also in World War II. Warrior, on the other hand, was abandoned and sank the next day after her crew was taken off at 08:25 on 1 June by Engadine, which had towed the sinking armoured cruiser during the night.

As Defence sank and Warspite circled, at about 18:19, Hipper moved within range of Hood's 3rd Battlecruiser Squadron, but was still also within range of Beatty's ships.
At first, visibility favoured the British: Hood's battlecruisers hit Derfflinger three times and Seydlitz once, while Lützow quickly took 10 hits from Lion and Invincible, including two below-waterline hits forward by Invincible that would ultimately doom Hipper's flagship. But at 18:30, Invincible abruptly appeared as a clear target before Lützow and Derfflinger. The two German ships then fired three salvoes each at Invincible and sank her in 90 seconds: a shell from the third salvo struck her "Q" turret amidships, detonating the magazines below and causing her to blow up and sink. All but six of her crew of 1,032 officers and men, including Rear-Admiral Hood, were killed. Of the remaining British battlecruisers, only Princess Royal received heavy-calibre hits at this time (two by the battleship Markgraf). Lützow, flooding forward and unable to communicate by radio, was now out of action and began to attempt to withdraw; Hipper therefore left his flagship and transferred to the torpedo boat G39, hoping to board one of the other battlecruisers later.

Crossing the T

By 18:30, the main battle fleet action was joined for the first time, with Jellicoe effectively "crossing Scheer's T". The officers on the lead German battleships, and Scheer himself, were taken completely by surprise when they emerged from drifting clouds of smoky mist to find themselves suddenly facing the massed firepower of the entire Grand Fleet main battle line, which they did not know was even at sea. Jellicoe's flagship Iron Duke quickly scored seven hits on the lead German dreadnought, SMS König, but in this brief exchange, which lasted only minutes, as few as 10 of the Grand Fleet's 24 dreadnoughts actually opened fire. The Germans were hampered by poor visibility, in addition to being in an unfavourable tactical position, just as Jellicoe had intended. Realising he was heading into a death trap, Scheer ordered his fleet to turn and disengage at 18:33. Under a pall of smoke and mist, Scheer's forces succeeded in disengaging by an expertly executed 180° turn in unison ("battle about turn to starboard", German Gefechtskehrtwendung nach Steuerbord), which was a well-practised emergency manoeuvre of the High Seas Fleet.

Conscious of the risks to his capital ships posed by torpedoes, Jellicoe did not chase directly but headed south, determined to keep the High Seas Fleet west of him. Starting at 18:40, battleships at the rear of Jellicoe's line were in fact sighting and avoiding torpedoes, and at 18:54 Marlborough was hit by a torpedo (probably from the disabled Wiesbaden), which reduced her speed. Meanwhile, Scheer, knowing that it was not yet dark enough to escape and that his fleet would suffer terribly in a stern chase, doubled back to the east at 18:55. In his memoirs he wrote, "the manoeuvre would be bound to surprise the enemy, to upset his plans for the rest of the day, and if the blow fell heavily it would facilitate the breaking loose at night." But the turn to the east took his ships, again, directly towards Jellicoe's fully deployed battle line. Simultaneously, the disabled British destroyer HMS Shark fought desperately against a group of four German torpedo boats and disabled V48 with gunfire, but was eventually torpedoed and sunk at 19:02. Shark's captain, Loftus Jones, was awarded the Victoria Cross for his heroism in continuing to fight against all odds.
Turn of the battle

Commodore Goodenough's 2nd Light Cruiser Squadron dodged the fire of German battleships for a second time to re-establish contact with the High Seas Fleet shortly after 19:00. By 19:15, Jellicoe had crossed Scheer's "T" again. This time his arc of fire was tighter and deadlier, causing severe damage to the German battleships, particularly Rear-Admiral Behncke's leading 3rd Squadron (SMS König and Markgraf among those hit, along with a battleship of the 1st Squadron), while on the British side only the battleship Colossus was hit (twice, by Seydlitz, with little damage done). At 19:17, for the second time in less than an hour, Scheer turned his outnumbered and out-gunned fleet to the west using the "battle about turn" (German: Gefechtskehrtwendung), but this time it was executed only with difficulty, as the High Seas Fleet's lead squadrons began to lose formation under concentrated gunfire. To deter a British chase, Scheer ordered a major torpedo attack by his destroyers and a potentially sacrificial charge by Scouting Group I's four remaining battlecruisers. Hipper was still aboard the torpedo boat G39 and was unable to command his squadron for this attack. Therefore Derfflinger, under Captain Hartog, led the already badly damaged German battlecruisers directly into "the greatest concentration of naval gunfire any fleet commander had ever faced", at very close range. In what became known as the "death ride", all the battlecruisers except Moltke were hit and further damaged, as 18 of the British battleships fired at them simultaneously. Derfflinger had two main gun turrets destroyed. The crews of Scouting Group I suffered heavy casualties, but survived the pounding and veered away with the other battlecruisers once Scheer was out of trouble and the German destroyers were moving in to attack. In this brief but intense portion of the engagement, from about 19:05 to about 19:30, the Germans sustained a total of 37 heavy hits while inflicting only two; Derfflinger alone received 14.

While his battlecruisers drew the fire of the British fleet, Scheer slipped away, laying smoke screens. Meanwhile, from about 19:16 to about 19:40, the British battleships were also engaging Scheer's torpedo boats, which executed several waves of torpedo attacks to cover his withdrawal. Jellicoe's ships turned away from the attacks and successfully evaded all 31 of the torpedoes launched at them – though, in several cases, only barely – and sank the German destroyer S35, attributed to a salvo from Iron Duke. British light forces also sank V48, which had previously been disabled by HMS Shark. This action, and the turn away, cost the British critical time and range in the last hour of daylight – as Scheer intended, allowing him to get his heavy ships out of immediate danger.

The last major exchanges between capital ships in this battle – and in the war – took place just after sunset, from about 20:19 to about 20:35, as the surviving British battlecruisers caught up with their German counterparts, which were briefly relieved by Rear-Admiral Mauve's obsolete pre-dreadnoughts (the German 2nd Squadron). The British received one heavy hit on Princess Royal but scored five more on Seydlitz and three on other German ships. As twilight faded to night, the opposing lines exchanged a few final shots.

Night action and German withdrawal

At 21:00, Jellicoe, conscious of the Grand Fleet's deficiencies in night fighting, decided to try to avoid a major engagement until early dawn.
He placed a screen of cruisers and destroyers behind his battle fleet to patrol the rear as he headed south to guard Scheer's expected escape route. In reality, Scheer opted to cross Jellicoe's wake and escape via Horns Reef. Luckily for Scheer, most of the light forces in Jellicoe's rearguard failed to report the seven separate encounters with the German fleet during the night; the very few radio reports that were sent to the British flagship were never received, possibly because the Germans were jamming British frequencies. Many of the destroyers failed to make the most of their opportunities to attack discovered ships, despite Jellicoe's expectations that the destroyer forces would, if necessary, be able to block the path of the German fleet. Jellicoe and his commanders did not understand that the furious gunfire and explosions to the north (seen and heard for hours by all the British battleships) indicated that the German heavy ships were breaking through the screen astern of the British fleet. Instead, it was believed that the fighting was the result of night attacks by German destroyers. The most powerful British ships of all (the 15-inch-gun battleships of the 5th Battle Squadron) directly observed German battleships crossing astern of them in action with British light forces at close range, and the gunners on HMS Malaya made ready to fire, but her captain declined, deferring to the authority of Rear-Admiral Evan-Thomas – and neither commander reported the sightings to Jellicoe, assuming that he could see for himself and that revealing the fleet's position by radio signals or gunfire was unwise.

While the nature of Scheer's escape, and Jellicoe's inaction, indicate the overall German superiority in night fighting, the results of the night action were no more clear-cut than those of the battle as a whole. In the first of many surprise encounters by darkened ships at point-blank range, Southampton, Commodore Goodenough's flagship, which had scouted so proficiently, was heavily damaged in action with a German scouting group composed of light cruisers, but managed to torpedo the light cruiser Frauenlob, which went down at 22:23 with all but nine hands (320 officers and men).

From 23:20 to approximately 02:15, several British destroyer flotillas launched torpedo attacks on the German battle fleet in a series of violent and chaotic engagements at extremely short range. At the cost of five destroyers sunk and some others damaged, they managed to torpedo the light cruiser Rostock, which sank several hours later, and the pre-dreadnought Pommern, which blew up and sank with all hands (839 officers and men) at 03:10 during the last wave of attacks before dawn. Three of the British destroyers collided in the chaos, and the German battleship Nassau rammed the British destroyer Spitfire, blowing away most of the British ship's superstructure merely with the muzzle blast of her big guns, which could not be aimed low enough to hit the ship. Nassau was left with a large hole in her side, reducing her maximum speed, while the torn-away plating was left lying on Spitfire's deck. Spitfire survived and made it back to port. Another German cruiser, Elbing, was accidentally rammed by the dreadnought Posen and abandoned, sinking early the next day. Of the British destroyers, Tipperary, Ardent, Fortune and Turbulent were lost during the night fighting. Just after midnight on 1 June, Thüringen and other German battleships sank Black Prince of the ill-fated 1st Cruiser Squadron, which had blundered into the German battle line.
Deployed as part of a screening force several miles ahead of the main force of the Grand Fleet, Black Prince had lost contact in the darkness and taken a position near what she thought was the British line. The Germans soon identified the new addition to their line and opened fire. Overwhelmed by point-blank gunfire, Black Prince blew up with the loss of all 857 officers and men, as her squadron flagship Defence had done hours earlier. Lost in the darkness, the battlecruisers Moltke and Seydlitz had similar point-blank encounters with the British battle line and were recognised, but were spared the fate of Black Prince when the captains of the British ships, again, declined to open fire, reluctant to reveal their fleet's position.

At 01:45, the sinking battlecruiser Lützow – fatally damaged by Invincible during the main action – was torpedoed by the destroyer G38 on the orders of Lützow's captain, Viktor von Harder, after the surviving crew of 1,150 had transferred to destroyers that came alongside. At 02:15, the German torpedo boat V4 suddenly had her bow blown off; V2 and V6 came alongside and took off the remaining crew, and V2 then sank the hulk. Since there was no enemy nearby, it was assumed that she had hit a mine or been torpedoed by a submarine.

Also at 02:15, five British ships of the 13th Destroyer Flotilla under Captain James Uchtred Farie regrouped and headed south. At 02:25, they sighted the rear of the German line, and one of them asked the flotilla leader, Champion, whether he thought the ships were British or German. Answering that he thought they were German, Farie veered off to the east and away from the German line. All but Moresby, in the rear, followed; through the gloom she sighted what she thought were four pre-dreadnought battleships. She hoisted a flag signal indicating that the enemy was to the west, closed to firing range, let off a torpedo set for high running at 02:37, and then veered off to rejoin her flotilla. The four "pre-dreadnoughts" were in fact two pre-dreadnoughts, Schleswig-Holstein and Schlesien, and the battlecruisers Von der Tann and Derfflinger. Von der Tann sighted the torpedo and was forced to steer sharply to starboard to avoid it as it passed close to her bows. Moresby rejoined Champion convinced she had scored a hit.

Finally, at 05:20, as Scheer's fleet was safely on its way home, the battleship Ostfriesland struck a British mine on her starboard side, killing one man and wounding ten, but was able to make port. Seydlitz, critically damaged and very nearly sinking, barely survived the return voyage: after grounding and taking on even more water on the evening of 1 June, she had to be assisted stern-first into port, where she dropped anchor at 07:30 on the morning of 2 June.

The Germans were helped in their escape by the failure of the British Admiralty in London to pass on seven critical radio intercepts obtained by naval intelligence indicating the true position, course and intentions of the High Seas Fleet during the night. One message was transmitted to Jellicoe at 23:15 that accurately reported the German fleet's course and speed as of 21:14. However, the erroneous signal from earlier in the day that had reported the German fleet still in port, and an intelligence signal received at 22:45 giving another unlikely position for the German fleet, had reduced his confidence in intelligence reports.
Had the other messages, which confirmed the information received at 23:15, been forwarded, or had British ships accurately reported their sightings of and engagements with German destroyers, cruisers and battleships, Jellicoe could have altered course to intercept Scheer at Horns Reef. The unsent intercepted messages had been duly filed by the junior officer left on duty that night, who failed to appreciate their significance. By the time Jellicoe finally learned of Scheer's whereabouts at 04:15, the German fleet was too far away to catch, and it was clear that the battle could no longer be resumed.

Outcome

As both the Grand Fleet and the High Seas Fleet could claim to have at least partially satisfied their objectives, both Britain and Germany have at various points claimed victory in the Battle of Jutland. There is no consensus over which nation was victorious, or whether there was a victor at all.

Reporting

At midday on 2 June, German authorities released a press statement claiming a victory, including the destruction of a battleship, two battlecruisers, two armoured cruisers, a light cruiser, a submarine and several destroyers, for the loss of Pommern and Wiesbaden. News that Lützow, Elbing and Rostock had been scuttled was withheld, on the grounds that this information would not be known to the enemy. The victory of the Skagerrak was celebrated in the press, children were given a holiday, and the nation celebrated. The Kaiser announced a new chapter in world history. Post-war, the official German history hailed the battle as a victory, and it continued to be celebrated until after World War II.

In Britain, the first official news came from German wireless broadcasts. Ships began to arrive in port, their crews sending messages to friends and relatives both of their own survival and of the loss of some 6,000 others. The authorities considered suppressing the news, but it had already spread widely. Some crews coming ashore found that rumours had already reported them dead to relatives, while others were jeered for the defeat they were believed to have suffered. At 19:00 on 2 June, the Admiralty released a statement based on information from Jellicoe containing the bare news of losses on each side. The following day, British newspapers reported a German victory. The Daily Mirror described the German Director of the Naval Department telling the Reichstag: "The result of the fighting is a significant success for our forces against a much stronger adversary". The British population was shocked that the long-anticipated battle had been a victory for Germany. On 3 June, the Admiralty issued a further statement expanding on German losses, and another the following day with exaggerated claims. However, on 7 June the German admission of the losses of Lützow and Rostock started to redress the sense of the battle as a loss, and international perception of the battle began to change towards a qualified British victory, the German attempt to change the balance of power in the North Sea having been repulsed. In July, bad news from the Somme campaign swept concern over Jutland from the British consciousness.

Assessments

At Jutland, the Germans, with a 99-strong fleet, sank 113,300 tons of British ships, while the 151-strong British fleet sank 62,300 tons of German ships. The British lost 6,094 seamen; the Germans 2,551. Several other ships were badly damaged, such as Lion and Seydlitz.
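As a purely illustrative aside (not drawn from any cited source), the exchange ratios implied by the figures just given can be computed directly; the short snippet below simply restates the article's own loss totals.

```python
# Exchange ratios at Jutland, computed from the loss figures quoted above.
british_tons, german_tons = 113_300, 62_300   # tonnage sunk on each side
british_men, german_men = 6_094, 2_551        # seamen lost on each side

print(f"tonnage ratio:   {british_tons / german_tons:.2f} : 1")   # ~1.82 : 1
print(f"personnel ratio: {british_men / german_men:.2f} : 1")     # ~2.39 : 1
```

Both ratios ran roughly two to one in Germany's favour, which is the arithmetic underlying the competing "tactical victory" and "strategic defeat" readings discussed below.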
As of the summer of 1916, the High Seas Fleet's strategy was to whittle away the numerical advantage of the Royal Navy by bringing its full strength to bear against isolated squadrons of enemy capital ships whilst declining to be drawn into a general fleet battle until it had achieved something resembling parity in heavy ships. In tactical terms, the High Seas Fleet had clearly inflicted significantly greater losses on the Grand Fleet than it had suffered itself at Jutland, and the Germans never had any intention of attempting to hold the site of the battle, so some historians support the German claim of victory at Jutland. However, Scheer seems to have quickly realised that further battles with a similar rate of attrition would exhaust the High Seas Fleet long before they reduced the Grand Fleet. Further, after the 19 August advance was nearly intercepted by the Grand Fleet, he no longer believed that it would be possible to trap a single squadron of Royal Navy warships without having the Grand Fleet intervene before he could return to port. Therefore, the High Seas Fleet abandoned its forays into the North Sea and turned its attention to the Baltic for most of 1917 whilst Scheer switched tactics against Britain to unrestricted submarine warfare in the Atlantic. At a strategic level, the outcome has been the subject of a huge amount of literature with no clear consensus. The battle was widely viewed as indecisive in the immediate aftermath, and this view remains influential. Despite numerical superiority, the British had been disappointed in their hopes for a decisive battle comparable to Trafalgar and the objective of the influential strategic doctrines of Alfred Mahan. The High Seas Fleet survived as a fleet in being. Most of its losses were made good within a month – even Seydlitz, the most badly damaged ship to survive the battle, was repaired by October and officially back in service by November. However, the Germans had failed in their objective of destroying a substantial portion of the British Fleet, and no progress had been made towards the goal of allowing the High Seas Fleet to operate in the Atlantic Ocean. Subsequently, there has been considerable support for the view of Jutland as a strategic victory for the British. While the British had not destroyed the German fleet and had lost more ships and lives than their enemy, the Germans had retreated to harbour; at the end of the battle, the British were in command of the area. Britain enforced the blockade, reducing Germany's vital imports to 55%, affecting the ability of Germany to fight the war. The German fleet would only sortie into the North Sea thrice more, with a raid on 19 August, one in October 1916, and another in April 1918. All three were unopposed by capital ships and quickly aborted as neither side was prepared to take the risks of mines and submarines. Apart from these three abortive operations the High Seas Fleet – unwilling to risk another encounter with the British fleet – confined its activities to the Baltic Sea for the remainder of the war. Jellicoe issued an order prohibiting the Grand Fleet from steaming south of the line of Horns Reef owing to the threat of mines and U-boats. A German naval expert, writing publicly about Jutland in November 1918, commented, "Our Fleet losses were severe. On 1 June 1916, it was clear to every thinking person that this battle must, and would be, the last one". 
There is also significant support for viewing the battle as a German tactical victory, due to the much higher losses sustained by the British. The Germans declared a great victory immediately afterwards, while the British by contrast had reported only short and simple results. In response to public outrage, the First Lord of the Admiralty, Arthur Balfour, asked Winston Churchill to write a second report that was more positive and detailed. At the end of the battle, the British had maintained their numerical superiority and had 23 dreadnoughts ready and four battlecruisers still able to fight, while the Germans had only 10 dreadnoughts. One month after the battle, the Grand Fleet was stronger than it had been before sailing to Jutland. Warspite was dry-docked at Rosyth, returning to the fleet on 22 July, while Malaya was repaired in the floating dock at Invergordon, returning to duty on 11 July. Barham was docked for a month at Devonport before undergoing speed trials and returning to Scapa Flow on 8 July. Princess Royal stayed initially at Rosyth but transferred to dry dock at Portsmouth before returning to duty at Rosyth on 21 July. Tiger was dry-docked at Rosyth and ready for service by 2 July. Queen Elizabeth, Emperor of India and Australia, which had been undergoing maintenance at the time of the battle, returned to the fleet immediately, followed shortly after by Resolution and Ramillies. Lion initially remained ready for sea duty despite the damaged turret, then underwent a month's repairs in July, when "Q" turret was temporarily removed; it was replaced in September.

A third view, presented in a number of recent evaluations, is that Jutland, the last major fleet action between battleships, illustrated the irrelevance of battleship fleets following the development of the submarine, mine and torpedo. In this view, the most important consequence of Jutland was the decision of the Germans to engage in unrestricted submarine warfare. Although large numbers of battleships were constructed in the decades between the wars, it has been argued that this outcome reflected the social dominance among naval decision-makers of battleship advocates who constrained technological choices to fit traditional paradigms of fleet action. Battleships played a relatively minor role in World War II, in which the submarine and the aircraft carrier emerged as the dominant offensive weapons of naval warfare.

British self-critique

The official British Admiralty examination of the Grand Fleet's performance recognised two main problems:

British armour-piercing shells exploded outside the German armour rather than penetrating and exploding within. As a result, some German ships with comparatively thin armour survived hits from heavy projectiles. Had these shells penetrated the armour and then exploded, German losses would probably have been far greater.

Communication between ships and the British commander-in-chief was comparatively poor. For most of the battle, Jellicoe had no idea where the German ships were, even though British ships were in contact with them. They failed to report enemy positions, contrary to the Grand Fleet's Battle Plan. Some of the most important signalling was carried out solely by flag instead of by wireless or by redundant methods to ensure communications – a questionable procedure, given the mixture of haze and smoke that obscured the battlefield, and a foreshadowing of similar failures by habit-bound and conservatively minded professional officers of rank to take advantage of new technology in World War II.
Shell performance

German armour-piercing shells were far more effective than the British ones, which often failed to penetrate heavy armour. The issue particularly concerned shells striking at oblique angles, which became increasingly the case at long range. Germany had adopted trinitrotoluene (TNT) as the explosive filler for artillery shells in 1902, while the United Kingdom was still using a picric acid mixture (Lyddite). The shock of impact against armour often detonated Lyddite prematurely, before the fuze functioned, whereas TNT detonation could be delayed until after the shell had penetrated and the fuze had functioned in the vulnerable area behind the armour plate. Some 17 British shells hit the side armour of the German dreadnoughts or battlecruisers. Of these, four would not have penetrated under any circumstances. Of the remaining 13, only one penetrated the armour and exploded inside. This showed about a 7.5% chance of proper shell function on the British side, a result of overly brittle shells and Lyddite exploding too soon.

The issue of poorly performing shells had been known to Jellicoe, who as Third Sea Lord from 1908 to 1910 had ordered new shells to be designed. However, the matter had not been followed through after his posting to sea, and the new shells had never been thoroughly tested. Beatty discovered the problem at a party aboard Lion a short time after the battle, when a Swedish naval officer who had recently visited Berlin recounted how the German navy had scoffed at the way British shells had broken up on their ships' armour. The question of shell effectiveness had also been raised after the Battle of Dogger Bank, but no action had been taken. Hipper later commented, "It was nothing but the poor quality of their bursting charges which saved us from disaster." Admiral Dreyer, writing later about the battle, during which he had been captain of the British flagship Iron Duke, estimated that effective shells, as later introduced, would have led to the sinking of six more German capital ships, based upon the actual number of hits achieved in the battle. The system of testing shells, which remained in use up to 1944, meant that, statistically, a batch of shells of which 70% were faulty stood an even chance of being accepted. Indeed, even shells that failed this relatively mild test had still been issued to ships. Analysis of the test results afterwards by the Ordnance Board suggested the likelihood that 30–70% of shells would not have passed the standard penetration test specified by the Admiralty. Efforts to replace the shells were initially resisted by the Admiralty, and action was not taken until Jellicoe became First Sea Lord in December 1916. As an initial response, the worst of the existing shells were withdrawn from ships in early 1917 and replaced from reserve supplies. New shells were designed, but did not arrive until April 1918 and were never used in action.

Battlecruiser losses

British battlecruisers were designed to chase and destroy enemy cruisers from beyond the range of those ships. They were not designed to be ships of the line and exchange broadsides with the enemy. One German and three British battlecruisers were sunk – but none was destroyed by enemy shells penetrating the belt armour and detonating the magazines. Each of the British battlecruisers was penetrated through a turret roof, and her magazines were ignited by flash fires passing through the turret and shell-handling rooms. Lützow sustained 24 hits, and her flooding could not be contained.
She was eventually sunk by her escorts' torpedoes after most of her crew had been safely removed (though six trapped stokers died when the ship was scuttled). Derfflinger and Seydlitz sustained 22 hits each but reached port (although in Seydlitz's case only just).

Jellicoe and Beatty, as well as other senior officers, gave the impression that the loss of the battlecruisers was caused by weak armour, despite reports by two committees and earlier statements by Jellicoe and other senior officers that cordite and its management were to blame. This led to calls for armour to be increased, and additional plating was placed over the relatively thin decks above the magazines. To compensate for the increase in weight, ships had to carry correspondingly less fuel, water and other supplies. Whether or not thin deck armour was a potential weakness of British ships, the battle provided no evidence that it was the case: at least amongst the surviving ships, no enemy shell was found to have penetrated deck armour anywhere. The design of the new battlecruiser Hood (which was being built at the time of the battle) was altered to give her additional armour.

Ammunition handling

British and German propellant charges differed in packaging, handling, and chemistry. The British propellant was of two types, Mark 1 and MD. The Mark 1 cordite had a formula of 37% nitrocellulose, 58% nitroglycerine, and 5% petroleum jelly. It was a good propellant but burned hot and caused an erosion problem in gun barrels; the petroleum jelly served as both a lubricant and a stabiliser. Cordite MD was developed to reduce barrel wear, its formula being 65% nitrocellulose, 30% nitroglycerine, and 5% petroleum jelly. While cordite MD solved the gun-barrel erosion issue, it did nothing to improve its storage properties, which were poor. Cordite was very sensitive to variations of temperature, and acid propagation and cordite deterioration would then take place at a very rapid rate. Cordite MD also shed micro-dust particles of nitrocellulose and iron pyrite. While cordite propellant was manageable, it required a vigilant gunnery officer, strict cordite lot control, and frequent testing of the cordite lots in the ships' magazines.

British cordite propellant, when uncased and exposed in its silk bag, tended to burn violently, causing uncontrollable "flash fires" when ignited by nearby shell hits. In 1945, the U.S.N. Bureau of Ordnance conducted a test (Bulletin of Ordnance Information, No. 245, pp. 54–60) comparing the sensitivity of cordite with then-current U.S. naval propellant powders against a measurable and repeatable flash source. It found that cordite ignited at a far greater distance from the flash than either the current U.S. powder or the U.S. flashless powder, meaning that about 75 times as much propellant would immediately ignite when exposed to flash as with the U.S. powder. British ships had inadequate protection against these flash fires.

German propellant, RP C/12, was handled in brass cartridge cases – long standard in German artillery, since the traditional German sliding-wedge breech was otherwise difficult to obturate with smokeless powder – and was less vulnerable and less volatile in composition. German propellants were not that different in composition from cordite, with one major exception: centralite. This was symmetrical diethyl diphenyl urea, a stabiliser superior to the petroleum jelly used in British practice. It stored better and, when ignited, burned but did not explode. Stored and used in brass cases, it proved much less sensitive to flash.
RP C/12 was composed of 64.13% nitrocellulose, 29.77% nitroglycerine, 5.75% centralite, 0.25% magnesium oxide and 0.10% graphite.

The Royal Navy Battle Cruiser Fleet had also emphasised speed in ammunition handling over established safety protocol. In practice drills, cordite could not be supplied to the guns rapidly enough through the hoists and hatches. To bring up the propellant in good time to load for the next broadside, many safety doors that should have been shut to safeguard against flash fires were kept open, and bags of cordite were stocked and kept locally, creating a total breakdown of the safety design features. By staging charges in the chambers between the gun turret and the magazine, the Royal Navy enhanced its rate of fire but left its ships vulnerable to chain-reaction ammunition fires and magazine explosions (Campbell, pp. 371–372). This bad safety habit carried over into real battle practice. Furthermore, the doctrine of a high rate of fire had also led to the decision in 1913 to increase the supply of shells and cordite held on British ships by 50%, for fear of running out of ammunition; when this exceeded the capacity of the ships' magazines, cordite was stored in insecure places.

The British cordite charges were stored two silk bags to a metal cylindrical container, with a 16-oz gunpowder igniter charge covered with a thick paper wad, four charges being used on each projectile. The gun crews were removing the charges from their containers and removing the paper covering over the gunpowder igniter charges. The effect of having eight loads at the ready was to have a large quantity of exposed explosive, with each charge leaking small amounts of gunpowder from the igniter bags. In effect, the gun crews had laid an explosive train from the turret to the magazines, and one shell hit on a battlecruiser's turret was enough to end the ship.

A diving expedition during the summer of 2003 provided corroboration of this practice. It examined the wrecks of Invincible, Queen Mary, Defence, and Lützow to investigate the cause of the British ships' tendency to suffer from internal explosions. From this evidence, a major part of the blame may be laid on lax handling of the cordite propellant for the shells of the main guns. The wreck of Queen Mary revealed cordite containers stacked in the working chamber of the "X" turret instead of in the magazine.

There was a further difference in the propellant itself. While the German RP C/12 burned when exposed to fire, it did not explode, as opposed to cordite. RP C/12 was extensively studied by the British and, after World War I, formed the basis of the later Cordite SC.

The memoirs of Alexander Grant, Gunner on Lion, suggest that some British officers were aware of the dangers of careless handling of cordite: Grant had already introduced measures onboard Lion to limit the number of cartridges kept outside the magazine and to ensure doors were kept closed, probably contributing to her survival.

On 5 June 1916, the First Lord of the Admiralty advised Cabinet members that the three battlecruisers had been lost due to unsafe cordite management. After the battle, the B.C.F. Gunnery Committee issued a report (at the command of Admiral David Beatty) advocating immediate changes in flash protection and charge handling. It reported, among other things, that:

Some vent plates in magazines allowed flash into the magazines and should be retro-fitted to a new standard.
Bulkheads in HMS Lion's magazine showed buckling from fire under pressure (overpressure) – despite being flooded, and therefore supported by water pressure – and must be made stronger.

Doors opening inward to magazines were an extreme danger.

Current turret designs could not prevent flash from shell bursts in the turret from reaching the handling rooms.

Ignition pads must not be attached to charges, but instead placed just before ramming.

Better methods must be found for the safe storage of ready charges than the current method.

Some method must be devised for rapidly drowning charges already in the handling path.

Handling scuttles (special flash-proof fittings for moving propellant charges through a ship's bulkheads), designed to handle overpressure, must be fitted.

Gunnery

British gunnery control systems, based on Dreyer tables, were well in advance of the German ones, as demonstrated by the proportion of main-calibre hits made on the German fleet. Because of its demonstrated advantages, the Dreyer table was installed on ships progressively as the war went on; by May 1916 it had been fitted to a majority of British capital ships and to the main guns of all but two of the Grand Fleet's capital ships. The Royal Navy used centralised fire-control systems on its capital ships, directed from a point high up on the ship where the fall of shells could best be seen, utilising a director sight for both training and elevating the guns. In contrast, the German battlecruisers controlled the fire of their turrets using a training-only director, which also did not fire the guns at once; the rest of the German capital ships were without even this innovation.

German range-finding equipment was generally superior to the British FT24, as its operators were trained to a higher standard owing to the complexity of the Zeiss rangefinders. Their stereoscopic design meant that in certain conditions they could range on a target enshrouded by smoke. The German equipment was not superior in range to the British Barr & Stroud rangefinders found in the newest British capital ships, however, and, unlike the British instruments, the German rangefinders required their operators to be relieved as often as every thirty minutes, as their eyesight became impaired, affecting the ranges provided to the gunnery equipment.

The results of the battle confirmed the value of firing guns by centralised director, and prompted the Royal Navy to install director firing systems in cruisers and destroyers, where they had not thus far been used, and for the secondary armament of battleships. German ships were considered to have been quicker in determining the correct range to targets, thus obtaining an early advantage. The British used a "bracket system", whereby a salvo was fired at the best-guess range and, depending on where it landed, the range was progressively corrected up or down until successive shots were landing in front of and behind the enemy. The Germans used a "ladder system", whereby an initial volley of three salvoes at different ranges was fired, with the centre salvo at the best-guess range. The ladder system allowed the gunners to obtain ranging information from the three salvoes more quickly than the bracket system, which required waiting between shots to see how the last had landed; British ships subsequently adopted the German system. It was also determined that rangefinders of the sort issued to most British ships were not adequate at long range and did not perform as well as the rangefinders in some of the most modern ships.
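The difference between the two ranging procedures described above is easy to see in a toy simulation. The sketch below is purely illustrative: the true range, correction step, straddle tolerance and the "one spotting wait per observed fall of shot" timing model are invented assumptions rather than historical gunnery data, and it ignores refinements such as halving the bracket as the fall of shot closed on the target.

```python
# Toy comparison of the British "bracket" and German "ladder" ranging
# procedures. All constants are hypothetical, for illustration only.

TRUE_RANGE = 15_000   # yd, actual target range (unknown to the gunners)
STEP = 1_000          # yd, correction applied between salvoes
TOLERANCE = 300       # yd, within which a salvo is judged a straddle

def spot(setting: int) -> str:
    """Spotting report for a salvo fired with this range setting."""
    error = setting - TRUE_RANGE
    if abs(error) <= TOLERANCE:
        return "straddle"
    return "over" if error > 0 else "short"

def bracket(guess: int) -> int:
    """Bracket: fire one salvo, wait to see its fall, correct, repeat.
    Returns the number of spotting waits needed to straddle."""
    waits = 0
    while True:
        waits += 1
        report = spot(guess)
        if report == "straddle":
            return waits
        guess += -STEP if report == "over" else STEP

def ladder(guess: int) -> int:
    """Ladder: open with three salvoes at stepped ranges fired in quick
    succession, so a single spotting wait reports on all three rungs."""
    rungs = [guess - STEP, guess, guess + STEP]
    reports = [spot(r) for r in rungs]
    if "straddle" in reports:
        return 1
    if all(r == "short" for r in reports):      # target beyond the ladder
        guess = max(rungs) + STEP
    elif all(r == "over" for r in reports):     # target closer than the ladder
        guess = min(rungs) - STEP
    else:                                       # target between two rungs
        shorts = [r for r, rep in zip(rungs, reports) if rep == "short"]
        overs = [r for r, rep in zip(rungs, reports) if rep == "over"]
        guess = (max(shorts) + min(overs)) // 2
    return 1 + bracket(guess)                   # continue salvo by salvo

if __name__ == "__main__":
    for opening_guess in (11_000, 12_000, 17_000):
        print(f"opening at {opening_guess} yd -> "
              f"bracket: {bracket(opening_guess)} waits, "
              f"ladder: {ladder(opening_guess)} waits")
```

With a badly off opening estimate (say 11,000 yd against a true 15,000 yd), the bracket needs five spotting waits to straddle in this model while the ladder needs four, because the opening group of three salvoes buys three ranges' worth of information for a single wait – the advantage the text above describes.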
In 1917, rangefinders of longer base length were introduced in the battleships to improve accuracy.

Signalling

Throughout the battle, British ships experienced difficulties with communications, whereas the Germans did not suffer such problems. The British preferred signalling by ship-to-ship flag and lamp, avoiding wireless, whereas the Germans used wireless successfully. One conclusion drawn was that flag signals were not a satisfactory way to control the fleet. Experience with lamps, particularly at night when issuing challenges to other ships, demonstrated that they were an excellent way to advertise a ship's precise location to the enemy, inviting a reply by gunfire. Recognition signals made by lamp, once seen, could also easily be copied in future engagements.

British ships not only failed to report engagements with the enemy but, in the case of the cruisers and destroyers, also failed to actively seek out the enemy. A culture had arisen within the fleet of not acting without orders, which could prove fatal when circumstances prevented orders from being sent or received. Commanders failed to engage the enemy because they believed that other, more senior officers must also be aware of the enemy nearby and would have given orders to act if this were expected. Wireless, the most direct way to pass messages across the fleet (although it was being jammed by German ships), was avoided either for perceived reasons of not giving away the presence of ships or for fear of cluttering up the airwaves with unnecessary reports.

Fleet Standing Orders

Naval operations were governed by standing orders issued to all the ships. These attempted to set out what ships should do in all circumstances, particularly in situations where ships would have to react without referring to higher authority, or when communications failed. A number of changes were introduced as a result of experience gained in the battle.

A new signal was introduced instructing squadron commanders to act independently as they thought best while still supporting the main fleet, particularly for use when circumstances would make it difficult to send detailed orders. The description stressed that this was not intended to be the only time commanders might take independent action, but was intended to make plain the times when they definitely should. Similarly, instructions on what to do if the fleet was ordered to take evasive action against torpedoes were amended: commanders were given discretion, so that if their part of the fleet was not under immediate attack, they should continue engaging the enemy rather than turning away with the rest of the fleet. In this battle, when the fleet turned away from Scheer's destroyer attack covering his retreat, not all the British ships had been affected, and they could have continued to engage the enemy.

A number of opportunities to attack enemy ships by torpedo had presented themselves but had been missed. All ships, not just the destroyers armed principally with torpedoes but also the battleships, were reminded that they carried torpedoes intended to be used whenever an opportunity arose. Destroyers were instructed to close with the enemy fleet to fire torpedoes as soon as engagements between the main ships on either side kept the enemy guns busy with larger targets. Destroyers should also be ready to immediately engage enemy destroyers if they launched an attack, endeavouring to disrupt their chances of launching torpedoes and to keep them away from the main fleet.
To add some flexibility when deploying for attack, a new signal was provided for deploying the fleet to the centre, rather than, as previously, only to the left or right of the standard closed-up cruising formation. The fast and powerful 5th Battle Squadron was moved to the front of the cruising formation so it would have the option of deploying left or right depending upon the enemy position. In the event of engagements at night, although the fleet still preferred to avoid night fighting, a destroyer and cruiser squadron would be specifically detailed to seek out the enemy and launch destroyer attacks.

Controversy

At the time, Jellicoe was criticised for his caution and for allowing Scheer to escape. Beatty, in particular, was convinced that Jellicoe had missed a tremendous opportunity to annihilate the High Seas Fleet and win what would amount to another Trafalgar. Jellicoe was promoted away from active command to become First Sea Lord, the professional head of the Royal Navy, while Beatty replaced him as commander of the Grand Fleet. The controversy raged within the navy and in public for about a decade after the war. Criticism focused on Jellicoe's decision at 19:15. Scheer had ordered his cruisers and destroyers forward in a torpedo attack to cover the turning away of his battleships. Jellicoe chose to turn to the south-east, and so keep out of range of the torpedoes.

Supporters of Jellicoe, including the historian Cyril Falls, pointed to the folly of risking defeat in battle when one already has command of the sea. Jellicoe himself, in a letter to the Admiralty seventeen months before the battle, said that he intended to turn his fleet away from any mass torpedo attack (that being the universally accepted proper tactical response to such attacks, practised by all the major navies of the world). He said that, in the event of a fleet engagement in which the enemy turned away, he would assume they intended to draw him over mines or submarines, and he would decline to be so drawn. The Admiralty approved this plan and expressed full confidence in Jellicoe at the time (October 1914).

The stakes were high, the pressure on Jellicoe immense, and his caution certainly understandable. His judgement might have been that even 90% odds in favour were not good enough to bet the British Empire. Churchill said of the battle that Jellicoe "was the only man on either side who could have lost the war in an afternoon." The criticism of Jellicoe also fails to sufficiently credit Scheer, who was determined to preserve his fleet by avoiding the full British battle line, and who showed great skill in effecting his escape.

Beatty's actions

On the other hand, some of Jellicoe's supporters condemned the actions of Beatty for the British failure to achieve a complete victory. Although Beatty was undeniably brave, his mismanagement of the initial encounter with Hipper's squadron and the High Seas Fleet cost him a considerable advantage in the first hours of the battle. His most glaring failure was in not providing Jellicoe with periodic information on the position, course, and speed of the High Seas Fleet. Beatty, aboard the battlecruiser Lion, left behind the four fast battleships of the 5th Battle Squadron – the most powerful warships in the world at the time – and opened the engagement with only six ships, when better control would have given him ten against Hipper's five.
Though Beatty's larger guns out-ranged Hipper's by thousands of yards, Beatty held his fire for 10 minutes and closed the German squadron until within range of the Germans' superior gunnery, under lighting conditions that favoured the Germans. Most of the British losses in tonnage occurred in Beatty's force.

Death toll

The total loss of life on both sides was 9,823 personnel: the British losses numbered 6,784 and the German 3,039. Counted among the British losses were two members of the Royal Australian Navy and one member of the Royal Canadian Navy. Six Australian nationals serving in the Royal Navy were also killed.

British – 113,300 tons sunk:
Battlecruisers , ,
Armoured cruisers , ,
Flotilla leader
Destroyers , , , , , ,

German – 62,300 tons sunk:
Battlecruiser
Pre-dreadnought
Light cruisers , , ,
Destroyers (heavy torpedo-boats) , , , ,

Selected honours

The Victoria Cross is the highest military decoration awarded for valour "in the face of the enemy" to members of the British Empire armed forces. The Ordre pour le Mérite was the highest military order of the Kingdom of Prussia and consequently of the German Empire until the end of the First World War.

Pour le Mérite:
Franz Hipper ()
Reinhard Scheer ()

Victoria Cross:
The Hon. Edward Barry Stewart Bingham ()
John Travers Cornwell ()
Francis John William Harvey ()
Loftus William Jones ()

Status of the survivors and wrecks

In the years following the battle the wrecks were slowly discovered. Invincible was found by the Royal Navy minesweeper in 1919. After the Second World War some of the wrecks seem to have been commercially salvaged; for instance, the Hydrographic Office record for SMS Lützow (No. 32344) shows that salvage operations were taking place on the wreck in 1960.

During 2000–2016 a series of diving and marine survey expeditions involving veteran shipwreck historian and archaeologist Innes McCartney located all of the wrecks sunk in the battle. It was discovered that over 60 per cent of them had suffered from metal theft. In 2003 McCartney led a detailed survey of the wrecks for the Channel 4 documentary "Clash of the Dreadnoughts". The film examined the last minutes of the lost ships and revealed for the first time how both 'P' and 'Q' turrets of Invincible had been blasted out of the ship and tossed into the sea before she broke in half. This was followed by the Channel 4 documentary "Jutland: WWI's Greatest Sea Battle", broadcast in May 2016, which showed how several of the major losses at Jutland had actually occurred and how accurate the "Harper Record" actually was.

On the 90th anniversary of the battle, in 2006, the UK Ministry of Defence belatedly announced that the 14 British vessels lost in the battle were being designated as protected places under the Protection of Military Remains Act 1986. This legislation affects only British ships and citizens and in practical terms offers no real protection from non-British salvors of the wreck sites. In May 2016 a number of British newspapers named the Dutch salvage company "Friendship Offshore" as one of the main salvors of the Jutland wrecks in recent years and published leaked photographs revealing the extent of its activities on the wreck of Queen Mary.

The last surviving veteran of the battle, Henry Allingham, a British RAF (originally RNAS) airman, died on 18 July 2009, aged 113, by which time he was the oldest documented man in the world and one of the last surviving veterans of the whole war.
Also among the combatants was the then 20-year-old Prince Albert, serving as a junior officer aboard HMS Collingwood. He was second in the line to the throne, but would become king as George VI following his brother Edward's abdication in 1936.

One ship from the battle survives and is still (in 2023) afloat: the light cruiser . Decommissioned in 2011, she is docked at the Alexandra Graving Dock in Belfast, Northern Ireland, as a museum ship.

Remembrance

The Battle of Jutland was annually celebrated as a great victory by the right wing in Weimar Germany. This victory was used to repress the memory of the German navy's initiation of the German Revolution of 1918–1919, as well as the memory of the defeat in World War I in general. (The celebrations of the Battle of Tannenberg played a similar role.) This is especially true for the city of Wilhelmshaven, where wreath-laying ceremonies and torch-lit parades were performed until the end of the 1960s.

In 1916 Konteradmiral Friedrich von Kühlwetter (1865–1931) wrote a detailed analysis of the battle and published it in a book under the title Skagerrak (first published anonymously). The book was reprinted in large numbers until after WWII and had a huge influence in keeping the battle in public memory amongst Germans, as it was not tainted by the ideology of the Third Reich. Kühlwetter built the School for Naval Officers at Mürwik near Flensburg, where he is still remembered.

In May 2016, the 100th-anniversary commemoration of the Battle of Jutland was held. On 29 May, a commemorative service was held at St Mary's Church, Wimbledon, where the ensign from HMS Inflexible is on permanent display. On 31 May, the main service was held at St Magnus Cathedral in Orkney, attended by the British prime minister, David Cameron, and the German president, Joachim Gauck, along with Princess Anne and Vice Admiral Sir Tim Laurence. A centennial exhibition was held at the Deutsches Marinemuseum in Wilhelmshaven from 29 May 2016 to 28 February 2017.

Film

Wrath of the Seas (Die versunkene Flotte), 1926, director Manfred Noa

See also

List of the largest artificial non-nuclear explosions
Sea War Museum Jutland
Naval warfare of World War I

Notes

Citations

Bibliography

Black, Jeremy. "Jutland's Place in History", Naval History (June 2016) 30#3, pp. 16–21.
Corbett, Sir Julian (2015). Maritime Operations in the Russo-Japanese War 1904–1905. Vol. 1, originally published Jan 1914. Naval Institute Press.
Corbett, Sir Julian (2015). Maritime Operations in the Russo-Japanese War 1904–1905. Vol. 2, originally published Oct 1915. Naval Institute Press.
Costello, John (1976). Jutland 1916, with Terry Hughes.
Friedman, Norman (2013). Naval Firepower: Battleship Guns and Gunnery in the Dreadnought Era. Seaforth Publishing.

Further reading

H.W. Fawcett & G.W.W. Hooper, RN (editors), The Fighting at Jutland (abridged edition); the personal experiences of forty-five officers and men of the British fleet. London: Macmillan & Co, 1921.
Lambert, Andrew. "Writing the Battle: Jutland in Sir Julian Corbett's Naval Operations", Mariner's Mirror 103#2 (2017), pp. 175–95. Historiography.
External links

WW1 Centenary News – Battle of Jutland
Jutland Centenary Initiative
Jutland Commemoration Exhibition
Beatty's official report
Jellicoe's official despatch
Jellicoe, extract from The Grand Fleet, published 1919
World War I Naval Combat – Despatches
Scheer, Germany's High Seas Fleet in the World War, published 1920
Henry Allingham, last known survivor of the Battle of Jutland
Table of Jutland Casualties Listed by Ship
germannavalwarfare.info: some original documents from the British Admiralty, Room 40, regarding the Battle of Jutland; photocopies from The National Archives, Kew, Richmond, UK
Sailors, with biographies, plotted on the Jutland Interactive Map of the NMRN
Battle of Jutland Crew Lists Project
Battle of Jutland Crew Lists Project Wiki
Memorial park for the Battle of Jutland
Battle-of-Jutland.com – the website owner has a package of original documents
Transcript of post-battle correspondence between the Grand Fleet and the Admiralty concerning the loss of the battlecruisers

Notable accounts:
by Rudyard Kipling
by Alexander Grant, a gunner aboard HMS Lion
A North Sea Diary, 1914–1918, by Stephen King-Hall, a junior officer on the light cruiser
by Paul Berryman, a junior officer on
by Moritz von Egidy, captain of SMS Seydlitz
by Richard Foerster, gunnery officer on Seydlitz
by Georg von Hase, gunnery officer on Derfflinger
(Note: Due to the time difference, entries in some of the German accounts are one hour ahead of the times in this article.)
Boeing 747
The Boeing 747 is a large, long-range wide-body airliner designed and manufactured by Boeing Commercial Airplanes in the United States between 1968 and 2023. After introducing the 707 in October 1958, Pan Am wanted a jet times its size, to reduce its seat cost by 30%. In 1965, Joe Sutter left the 737 development program to design the 747, the first twin-aisle airliner. In April 1966, Pan Am ordered 25 Boeing 747-100 aircraft, and in late 1966, Pratt & Whitney agreed to develop the JT9D engine, a high-bypass turbofan. On September 30, 1968, the first 747 was rolled out of the custom-built Everett Plant, the world's largest building by volume. The first flight took place on February 9, 1969, and the 747 was certified in December of that year. It entered service with Pan Am on January 22, 1970. The 747 was the first airplane called a "Jumbo Jet" as the first wide-body airliner. The 747 is a four-engined jet aircraft, initially powered by Pratt & Whitney JT9D turbofan engines, then General Electric CF6 and Rolls-Royce RB211 engines for the original variants. With a ten-abreast economy seating, it typically accommodates 366 passengers in three travel classes. It has a pronounced 37.5° wing sweep, allowing a cruise speed, and its heavy weight is supported by four main landing gear legs, each with a four-wheel bogie. The partial double-deck aircraft was designed with a raised cockpit so it could be converted to a freighter airplane by installing a front cargo door, as it was initially thought that it would eventually be superseded by supersonic transports. Boeing introduced the -200 in 1971, with more powerful engines for a heavier maximum takeoff weight (MTOW) of from the initial , increasing the maximum range from . It was shortened for the longer-range 747SP in 1976, and the 747-300 followed in 1983 with a stretched upper deck for up to 400 seats in three classes. The heavier 747-400 with improved RB211 and CF6 engines or the new PW4000 engine (the JT9D successor), and a two-crew glass cockpit, was introduced in 1989 and is the most common variant. After several studies, the stretched 747-8 was launched on November 14, 2005, with new General Electric GEnx engines, and was first delivered in October 2011. The 747 is the basis for several government and military variants, such as the VC-25 (Air Force One), E-4 Emergency Airborne Command Post, Shuttle Carrier Aircraft, and some experimental testbeds such as the YAL-1 and SOFIA airborne observatory. Initial competition came from the smaller trijet widebodies: the Lockheed L-1011 (introduced in 1972), McDonnell Douglas DC-10 (1971) and later MD-11 (1990). Airbus competed with later variants with the heaviest versions of the A340 until surpassing the 747 in size with the A380, delivered between 2007 and 2021. Freighter variants of the 747 remain popular with cargo airlines. The final 747 was delivered to Atlas Air in January 2023 after a 54-year production run, with 1,574 aircraft built. , 64 Boeing 747s have been lost in accidents and incidents, in which a total of 3,746 people have died. Development Background In 1963, the United States Air Force started a series of study projects on a very large strategic transport aircraft. Although the C-141 Starlifter was being introduced, officials believed that a much larger and more capable aircraft was needed, especially to carry cargo that would not fit in any existing aircraft. 
These studies led to initial requirements for the CX-Heavy Logistics System (CX-HLS) in March 1964 for an aircraft with a load capacity of and a speed of Mach 0.75 (), and an unrefueled range of with a payload of . The payload bay had to be wide by high and long with access through doors at the front and rear. The desire to keep the number of engines to four required new engine designs with greatly increased power and better fuel economy. In May 1964, airframe proposals arrived from Boeing, Douglas, General Dynamics, Lockheed, and Martin Marietta; engine proposals were submitted by General Electric, Curtiss-Wright, and Pratt & Whitney. Boeing, Douglas, and Lockheed were given additional study contracts for the airframe, along with General Electric and Pratt & Whitney for the engines. The airframe proposals shared several features. As the CX-HLS needed to be able to be loaded from the front, a door had to be included where the cockpit usually was. All of the companies solved this problem by moving the cockpit above the cargo area; Douglas had a small "pod" just forward and above the wing, Lockheed used a long "spine" running the length of the aircraft with the wing spar passing through it, while Boeing blended the two, with a longer pod that ran from just behind the nose to just behind the wing. In 1965, Lockheed's aircraft design and General Electric's engine design were selected for the new C-5 Galaxy transport, which was the largest military aircraft in the world at the time. Boeing carried the nose door and raised cockpit concepts over to the design of the 747. Airliner proposal The 747 was conceived while air travel was increasing in the 1960s. The era of commercial jet transportation, led by the enormous popularity of the Boeing 707 and Douglas DC-8, had revolutionized long-distance travel. In this growing jet age, Juan Trippe, president of Pan Am, one of Boeing's most important airline customers, asked for a new jet airliner times size of the 707, with a 30% lower cost per unit of passenger-distance and the capability to offer mass air travel on international routes. Trippe also thought that airport congestion could be addressed by a larger new aircraft. In 1965, Joe Sutter was transferred from Boeing's 737 development team to manage the design studies for the new airliner, already assigned the model number 747. Sutter began a design study with Pan Am and other airlines to better understand their requirements. At the time, many thought that long-range subsonic airliners would eventually be superseded by supersonic transport aircraft. Boeing responded by designing the 747 so it could be adapted easily to carry freight and remain in production even if sales of the passenger version declined. In April 1966, Pan Am ordered 25 Boeing 747-100 aircraft for US$525 million (equivalent to $ billion in dollars). During the ceremonial 747 contract-signing banquet in Seattle on Boeing's 50th Anniversary, Juan Trippe predicted that the 747 would be "…a great weapon for peace, competing with intercontinental missiles for mankind's destiny". As launch customer, and because of its early involvement before placing a formal order, Pan Am was able to influence the design and development of the 747 to an extent unmatched by a single airline before or since. Design effort Ultimately, the high-winged CX-HLS Boeing design was not used for the 747, although technologies developed for their bid had an influence. 
The original design included a full-length double-deck fuselage with eight-across seating and two aisles on the lower deck and seven-across seating and two aisles on the upper deck. However, concern over evacuation routes and limited cargo-carrying capability caused this idea to be scrapped in early 1966 in favor of a wider single deck design. The cockpit was, therefore, placed on a shortened upper deck so that a freight-loading door could be included in the nose cone; this design feature produced the 747's distinctive "hump". In early models, what to do with the small space in the pod behind the cockpit was not clear, and this was initially specified as a "lounge" area with no permanent seating. (A different configuration that had been considered to keep the flight deck out of the way for freight loading had the pilots below the passengers, and was dubbed the "anteater".) One of the principal technologies that enabled an aircraft as large as the 747 to be drawn up was the high-bypass turbofan engine. This engine technology was thought to be capable of delivering double the power of the earlier turbojets while consuming one-third less fuel. General Electric had pioneered the concept but was committed to developing the engine for the C-5 Galaxy and did not enter the commercial market until later. Pratt & Whitney was also working on the same principle and, by late 1966, Boeing, Pan Am and Pratt & Whitney agreed to develop a new engine, designated the JT9D to power the 747. The project was designed with a new methodology called fault tree analysis, which allowed the effects of a failure of a single part to be studied to determine its impact on other systems. To address concerns about safety and flyability, the 747's design included structural redundancy, redundant hydraulic systems, quadruple main landing gear and dual control surfaces. Additionally, some of the most advanced high-lift devices used in the industry were included in the new design, to allow it to operate from existing airports. These included Krueger flaps running almost the entire length of the wing's leading edge, as well as complex three-part slotted flaps along the trailing edge of the wing. The wing's complex three-part flaps increase wing area by 21% and lift by 90% when fully deployed compared to their non-deployed configuration. Boeing agreed to deliver the first 747 to Pan Am by the end of 1969. The delivery date left 28 months to design the aircraft, which was two-thirds of the normal time. The schedule was so fast-paced that the people who worked on it were given the nickname "The Incredibles". Developing the aircraft was such a technical and financial challenge that management was said to have "bet the company" when it started the project. Production plant As Boeing did not have a plant large enough to assemble the giant airliner, they chose to build a new plant. The company considered locations in about 50 cities, and eventually decided to build the new plant some north of Seattle on a site adjoining a military base at Paine Field near Everett, Washington. It bought the site in June 1966. Developing the 747 had been a major challenge, and building its assembly plant was also a huge undertaking. Boeing president William M. Allen asked Malcolm T. Stamper, then head of the company's turbine division, to oversee construction of the Everett factory and to start production of the 747. To level the site, more than of earth had to be moved. 
Time was so short that the 747's full-scale mock-up was built before the factory roof above it was finished. The plant is the largest building by volume ever built, and has been substantially expanded several times to permit construction of other models of Boeing wide-body commercial jets. Development and testing Before the first 747 was fully assembled, testing began on many components and systems. One important test involved the evacuation of 560 volunteers from a cabin mock-up via the aircraft's emergency chutes. The first full-scale evacuation took two and a half minutes instead of the maximum of 90 seconds mandated by the Federal Aviation Administration (FAA), and several volunteers were injured. Subsequent test evacuations achieved the 90-second goal but caused more injuries. Most problematic was evacuation from the aircraft's upper deck; instead of using a conventional slide, volunteer passengers escaped by using a harness attached to a reel. Tests also involved taxiing such a large aircraft. Boeing built an unusual training device known as "Waddell's Wagon" (named for a 747 test pilot, Jack Waddell) that consisted of a mock-up cockpit mounted on the roof of a truck. While the first 747s were still being built, the device allowed pilots to practice taxi maneuvers from a high upper-deck position. In 1968, the program cost was US$1 billion (equivalent to $ billion in dollars). On September 30, 1968, the first 747 was rolled out of the Everett assembly building before the world's press and representatives of the 26 airlines that had ordered the airliner. Over the following months, preparations were made for the first flight, which took place on February 9, 1969, with test pilots Jack Waddell and Brien Wygle at the controls and Jess Wallick at the flight engineer's station. Despite a minor problem with one of the flaps, the flight confirmed that the 747 handled extremely well. The 747 was found to be largely immune to "Dutch roll", a phenomenon that had been a major hazard to the early swept-wing jets. During later stages of the flight test program, flutter testing showed that the wings suffered oscillation under certain conditions. This difficulty was partly solved by reducing the stiffness of some wing components. However, a particularly severe high-speed flutter problem was solved only by inserting depleted uranium counterweights as ballast in the outboard engine nacelles of the early 747s. This measure caused anxiety when these aircraft crashed, for example El Al Flight 1862 at Amsterdam in 1992 with of uranium in the tailplane (horizontal stabilizer). The flight test program was hampered by problems with the 747's JT9D engines. Difficulties included engine stalls caused by rapid throttle movements and distortion of the turbine casings after a short period of service. The problems delayed 747 deliveries for several months; up to 20 aircraft at the Everett plant were stranded while awaiting engine installation. The program was further delayed when one of the five test aircraft suffered serious damage during a landing attempt at Renton Municipal Airport, the site of Boeing's Renton factory. The incident happened on December 13, 1969, when a test aircraft was flown to Renton to have test equipment removed and a cabin installed. Pilot Ralph C. Cokely undershot the airport's short runway and the 747's right, outer landing gear was torn off and two engine nacelles were damaged. 
However, these difficulties did not prevent Boeing from taking a test aircraft to the 28th Paris Air Show in mid-1969, where it was displayed to the public for the first time. The 747 received its FAA airworthiness certificate in December 1969, clearing it for introduction into service.

The huge cost of developing the 747 and building the Everett factory meant that Boeing had to borrow heavily from a banking syndicate. During the final months before delivery of the first aircraft, the company had to repeatedly request additional funding to complete the project. Had this been refused, Boeing's survival would have been threatened. The firm's debt exceeded $2 billion, with the $1.2 billion owed to the banks setting a record for all companies. Allen later said, "It was really too large a project for us." Ultimately, the gamble succeeded, and Boeing held a monopoly in very large passenger aircraft production for many years.

Entry into service

On January 15, 1970, First Lady of the United States Pat Nixon christened Pan Am's first 747 at Dulles International Airport (later Washington Dulles International Airport) in the presence of Pan Am chairman Najeeb Halaby. Instead of champagne, red, white, and blue water was sprayed on the aircraft. The 747 entered service on January 22, 1970, on Pan Am's New York–London route; the flight had been planned for the evening of January 21, but engine overheating made the original aircraft unusable. Finding a substitute delayed the flight by more than six hours to the following day, when Clipper Victor was used.

The 747 enjoyed a fairly smooth introduction into service, overcoming concerns that some airports would not be able to accommodate an aircraft that large. Although technical problems occurred, they were relatively minor and quickly solved. After the aircraft's introduction with Pan Am, other airlines that had bought the 747 to stay competitive began to put their own 747s into service.

Boeing estimated that half of the early 747 sales were to airlines desiring the aircraft's long range rather than its payload capacity. While the 747 had the lowest potential operating cost per seat, this could be achieved only when the aircraft was fully loaded; costs per seat increased rapidly as occupancy declined. A moderately loaded 747, one with only 70 percent of its seats occupied, used more than 95 percent of the fuel needed by a fully occupied 747.
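Those two figures imply a sharp rise in fuel burned per passenger as the cabin empties. A quick back-of-envelope calculation makes the point; the trip-fuel quantity and seat count below are placeholders rather than airline data, and only the 70 percent and 95 percent figures come from the text above.

# Fuel burned per occupied seat at two load factors, using the stated
# relationship that a 70%-full 747 burns over 95% of the fuel of a full one.
SEATS = 366                # typical three-class capacity
TRIP_FUEL_FULL = 18_800.0  # US gallons for a full aircraft (placeholder figure)

def fuel_per_occupied_seat(load_factor, fuel_fraction):
    # fuel_fraction = share of the full-load trip fuel actually burned
    return TRIP_FUEL_FULL * fuel_fraction / (SEATS * load_factor)

full = fuel_per_occupied_seat(1.00, 1.00)   # ~51.4 gal per seat
part = fuel_per_occupied_seat(0.70, 0.95)   # ~69.7 gal per seat
print(f"full cabin: {full:.1f} gal/seat")
print(f"70% cabin:  {part:.1f} gal/seat ({part / full - 1:.0%} more)")

Whatever trip fuel is assumed, the ratio 0.95/0.70 means a 70 percent load factor costs roughly a third more fuel per passenger than a full cabin, which is why under-filled 747s were soon displaced by smaller wide-bodies.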
Nonetheless, many flag-carriers purchased the 747 due to its prestige, "even if it made no sense economically" to operate. During the 1970s and 1980s, over 30 regularly scheduled 747s could often be seen at John F. Kennedy International Airport.

The recession of 1969–1970, despite having been characterized as relatively mild, greatly affected Boeing. For the year and a half after September 1970, Boeing sold only two 747s in the world, both to the Irish flag carrier Aer Lingus. No 747s were sold to any American carrier for almost three years. When economic problems in the US and other countries after the 1973 oil crisis led to reduced passenger traffic, several airlines found that they did not have enough passengers to fly the 747 economically, and they replaced them with the smaller and recently introduced McDonnell Douglas DC-10 and Lockheed L-1011 TriStar trijet wide bodies (and later the 767 and A300/A310 twinjets). Having tried replacing coach seats on its 747s with piano bars in an attempt to attract more customers, American Airlines eventually relegated its 747s to cargo service and in 1983 exchanged them with Pan Am for smaller aircraft; Delta Air Lines also removed its 747s from service after several years. Later, Delta acquired 747s again in 2008 as part of its merger with Northwest Airlines, although it retired the Boeing 747-400 fleet in December 2017.

International flights bypassing traditional hub airports and landing at smaller cities became more common throughout the 1980s, thus eroding the 747's original market. Many international carriers continued to use the 747 on Pacific routes. In Japan, 747s on domestic routes were configured to carry nearly the maximum passenger capacity.

Improved 747 versions

After the initial 747-100, Boeing developed the 747-100B, a higher maximum takeoff weight (MTOW) variant, and the 747SR (Short Range), with higher passenger capacity. Increased maximum takeoff weight allows aircraft to carry more fuel and have longer range. The 747-200 model followed in 1971, featuring more powerful engines and a higher MTOW. Passenger, freighter and combination passenger-freighter versions of the -200 were produced. The shortened 747SP (special performance), with a longer range, was also developed, and entered service in 1976.

The 747 line was further developed with the launch of the 747-300 on June 11, 1980, followed by interest from Swissair a month later and the go-ahead for the project. The 300 series resulted from Boeing studies to increase the seating capacity of the 747, during which modifications such as fuselage plugs and extending the upper deck over the entire length of the fuselage were rejected. The first 747-300, completed in 1983, included a stretched upper deck, increased cruise speed, and increased seating capacity. The -300 variant was previously designated 747SUD for stretched upper deck, then 747-200 SUD, followed by 747EUD, before the 747-300 designation was used. Passenger, short range and combination freighter-passenger versions of the 300 series were produced.

In 1985, development of the longer range 747-400 began. The variant had a new glass cockpit, which allowed for a cockpit crew of two instead of three, new engines, lighter construction materials, and a redesigned interior. Development costs soared, and production delays occurred as new technologies were incorporated at the request of airlines. Insufficient workforce experience and reliance on overtime contributed to early production problems on the 747-400. The -400 entered service in 1989.

In 1991, a record-breaking 1,087 passengers were flown in a 747 during a covert operation to airlift Ethiopian Jews to Israel. Generally, the 747-400 held between 416 and 524 passengers. The 747 remained the heaviest commercial aircraft in regular service until the debut of the Antonov An-124 Ruslan in 1982; variants of the 747-400 surpassed the An-124's weight in 2000. The Antonov An-225 Mriya cargo transport, which debuted in 1988, remains the world's largest aircraft by several measures (including the most accepted measures of maximum takeoff weight and length); one aircraft was completed and was in service until 2022. The Scaled Composites Stratolaunch is currently the largest aircraft by wingspan.

Further developments

After the arrival of the 747-400, several stretching schemes for the 747 were proposed. Boeing announced the larger 747-500X and -600X preliminary designs in 1996.
The new variants would have cost more than US$5 billion to develop, and interest was not sufficient to launch the program. In 2000, Boeing offered the more modest 747X and 747X stretch derivatives as alternatives to the Airbus A3XX. However, the 747X family was unable to attract enough interest to enter production. A year later, Boeing switched from the 747X studies to pursue the Sonic Cruiser, and after the Sonic Cruiser program was put on hold, the 787 Dreamliner. Some of the ideas developed for the 747X were used on the 747-400ER, a longer range variant of the . After several variants were proposed but later abandoned, some industry observers became skeptical of new aircraft proposals from Boeing. However, in early 2004, Boeing announced tentative plans for the 747 Advanced that were eventually adopted. Similar in nature to the 747-X, the stretched 747 Advanced used technology from the 787 to modernize the design and its systems. The 747 remained the largest passenger airliner in service until the Airbus A380 began airline service in 2007. On November 14, 2005, Boeing announced it was launching the 747 Advanced as the Boeing 747-8. The last 747-400s were completed in 2009. , most orders of the 747-8 were for the freighter variant. On February 8, 2010, the 747-8 Freighter made its maiden flight. The first delivery of the 747-8 went to Cargolux in 2011. The first 747-8 Intercontinental passenger variant was delivered to Lufthansa on May 5, 2012. The 1,500th Boeing 747 was delivered in June 2014 to Lufthansa. In January 2016, Boeing stated it was reducing 747-8 production to six a year beginning in September 2016, incurring a $569 million post-tax charge against its fourth-quarter 2015 profits. At the end of 2015, the company had 20 orders outstanding. On January 29, 2016, Boeing announced that it had begun the preliminary work on the modifications to a commercial 747-8 for the next Air Force One presidential aircraft, then expected to be operational by 2020. On July 12, 2016, Boeing announced that it had finalized an order from Volga-Dnepr Group for 20 747-8 freighters, valued at $7.58 billion at list prices. Four aircraft were delivered beginning in 2012. Volga-Dnepr Group is the parent of three major Russian air-freight carriers – Volga-Dnepr Airlines, AirBridgeCargo Airlines and Atran Airlines. The new 747-8 freighters would replace AirBridgeCargo's current 747-400 aircraft and expand the airline's fleet and will be acquired through a mix of direct purchases and leasing over the next six years, Boeing said. End of production On July 27, 2016, in its quarterly report to the Securities and Exchange Commission, Boeing discussed the potential termination of 747 production due to insufficient demand and market for the aircraft. With a firm order backlog of 21 aircraft and a production rate of six per year, program accounting had been reduced to 1,555 aircraft. In October 2016, UPS Airlines ordered 14 -8Fs to add capacity, along with 14 options, which it took in February 2018 to increase the total to 28 -8Fs on order. The backlog then stood at 25 aircraft, though several of these were orders from airlines that no longer intended to take delivery. On July 2, 2020, it was reported that Boeing planned to end 747 production in 2022 upon delivery of the remaining jets on order to UPS and the Volga-Dnepr Group due to low demand. 
On July 29, 2020, Boeing confirmed that the final 747 would be delivered in 2022 as a result of "current market dynamics and outlook" stemming from the COVID-19 pandemic, according to CEO David Calhoun. The last aircraft, a 747-8F for Atlas Air, rolled off the production line on December 6, 2022, and was delivered on January 31, 2023. Boeing hosted an event at the Everett factory for thousands of workers as well as industry executives to commemorate the delivery. Design The Boeing 747 is a large, wide-body (two-aisle) airliner with four wing-mounted engines. Its wings have a high sweep angle of 37.5° for a fast, efficient cruise speed of Mach 0.84 to 0.88, depending on the variant. The sweep also reduces the wingspan, allowing the 747 to use existing hangars. Its seating capacity is over 366 with a 3–4–3 seat arrangement (a cross section of three seats, an aisle, four seats, another aisle, and three seats) in economy class and a 2–3–2 layout in first class on the main deck. The upper deck has a 3–3 seat arrangement in economy class and a 2–2 layout in first class. Raised above the main deck, the cockpit creates a hump. This raised cockpit allows front loading of cargo on freight variants. The upper deck behind the cockpit provides space for a lounge and/or extra seating. The "stretched upper deck" became available as an alternative on the variant and later as standard beginning on the 747-300. The upper deck was stretched more on the 747-8. The 747 cockpit roof section also has an escape hatch from which crew can exit during the events of an emergency if they cannot do so through the cabin. The 747's maximum takeoff weight ranges from for the -100 to for the -8. Its range has increased from on the -100 to on the -8I. The 747 has redundant structures along with four redundant hydraulic systems and four main landing gears each with four wheels; these provide a good spread of support on the ground and safety in case of tire blow-outs. The main gear are redundant so that landing can be performed on two opposing landing gears if the others are not functioning properly. The 747 also has split control surfaces and was designed with sophisticated triple-slotted flaps that minimize landing speeds and allow the 747 to use standard-length runways. For transportation of spare engines, the 747 can accommodate a non-functioning fifth-pod engine under the aircraft's port wing between the inner functioning engine and the fuselage. The fifth engine mount point is also used by Virgin Orbit's LauncherOne program to carry an orbital-class rocket to cruise altitude where it is deployed. Variants The 747-100 with a range of 4,620 nautical miles (8,556 km), was the original variant launched in 1966. The 747-200 soon followed, with its launch in 1968. The 747-300 was launched in 1980 and was followed by the in 1985. Ultimately, the 747-8 was announced in 2005. Several versions of each variant have been produced, and many of the early variants were in production simultaneously. The International Civil Aviation Organization (ICAO) classifies variants using a shortened code formed by combining the model number and the variant designator (e.g. "B741" for all -100 models). 747-100 The first 747-100s were built with six upper deck windows (three per side) to accommodate upstairs lounge areas. Later, as airlines began to use the upper deck for premium passenger seating instead of lounge space, Boeing offered an upper deck with ten windows on either side as an option. 
Some early -100s were retrofitted with the new configuration. The -100 was equipped with Pratt & Whitney JT9D-3A engines. No freighter version of this model was developed, but many 747-100s were converted into freighters as 747-100(SF). The first 747-100(SF) was delivered to Flying Tiger Line in 1974. A total of 168 747-100s were built; 167 were delivered to customers, while Boeing kept the prototype, City of Everett. In 1972, its unit cost was US$24M (M today). 747SR Responding to requests from Japanese airlines for a high-capacity aircraft to serve domestic routes between major cities, Boeing developed the 747SR as a short-range version of the with lower fuel capacity and greater payload capability. With increased economy class seating, up to 498 passengers could be carried in early versions and up to 550 in later models. The 747SR had an economic design life objective of 52,000 flights during 20 years of operation, compared to 24,600 flights in 20 years for the standard 747. The initial 747SR model, the -100SR, had a strengthened body structure and landing gear to accommodate the added stress accumulated from a greater number of takeoffs and landings. Extra structural support was built into the wings, fuselage, and the landing gear along with a 20% reduction in fuel capacity. The initial order for the -100SR – four aircraft for Japan Air Lines (JAL, later Japan Airlines) – was announced on October 30, 1972; rollout occurred on August 3, 1973, and the first flight took place on August 31, 1973. The type was certified by the FAA on September 26, 1973, with the first delivery on the same day. The -100SR entered service with JAL, the type's sole customer, on October 7, 1973, and typically operated flights within Japan. Seven -100SRs were built between 1973 and 1975, each with a MTOW and Pratt & Whitney JT9D-7A engines derated to of thrust. Following the -100SR, Boeing produced the -100BSR, a 747SR variant with increased takeoff weight capability. Debuting in 1978, the -100BSR also incorporated structural modifications for a high cycle-to-flying hour ratio; a related standard -100B model debuted in 1979. The -100BSR first flew on November 3, 1978, with first delivery to All Nippon Airways (ANA) on December 21, 1978. A total of 20 -100BSRs were produced for ANA and JAL. The -100BSR had a MTOW and was powered by the same JT9D-7A or General Electric CF6-45 engines used on the -100SR. ANA operated this variant on domestic Japanese routes with 455 or 456 seats until retiring its last aircraft in March 2006. In 1986, two -100BSR SUD models, featuring the stretched upper deck (SUD) of the -300, were produced for JAL. The type's maiden flight occurred on February 26, 1986, with FAA certification and first delivery on March 24, 1986. JAL operated the -100BSR SUD with 563 seats on domestic routes until their retirement in the third quarter of 2006. While only two -100BSR SUDs were produced, in theory, standard -100Bs can be modified to the SUD certification. Overall, 29 Boeing 747SRs were built. 747-100B The 747-100B model was developed from the -100SR, using its stronger airframe and landing gear design. The type had an increased fuel capacity of , allowing for a range with a typical 452-passenger payload, and an increased MTOW of was offered. The first -100B order, one aircraft for Iran Air, was announced on June 1, 1978. This version first flew on June 20, 1979, received FAA certification on August 1, 1979, and was delivered the next day. 
Nine -100Bs were built, one for Iran Air and eight for Saudi Arabian Airlines. Unlike the original -100, the -100B was offered with Pratt & Whitney JT9D-7A, CF6-50, or Rolls-Royce RB211-524 engines. However, only RB211-524 (Saudia) and JT9D-7A (Iran Air) engines were ordered. The last 747-100B, EP-IAM was retired by Iran Air in 2014, the last commercial operator of the 747-100 and -100B. 747SP The development of the 747SP stemmed from a joint request between Pan American World Airways and Iran Air, who were looking for a high-capacity airliner with enough range to cover Pan Am's New York–Middle Eastern routes and Iran Air's planned Tehran–New York route. The Tehran–New York route, when launched, was the longest non-stop commercial flight in the world. The 747SP is shorter than the . Fuselage sections were eliminated fore and aft of the wing, and the center section of the fuselage was redesigned to fit mating fuselage sections. The SP's flaps used a simplified single-slotted configuration. The 747SP, compared to earlier variants, had a tapering of the aft upper fuselage into the empennage, a double-hinged rudder, and longer vertical and horizontal stabilizers. Power was provided by Pratt & Whitney JT9D-7(A/F/J/FW) or Rolls-Royce RB211-524 engines. The 747SP was granted a type certificate on February 4, 1976, and entered service with launch customers Pan Am and Iran Air that same year. The aircraft was chosen by airlines wishing to serve major airports with short runways. A total of 45 747SPs were built, with the 44th 747SP delivered on August 30, 1982. In 1987, Boeing re-opened the 747SP production line after five years to build one last 747SP for an order by the United Arab Emirates government. In addition to airline use, one 747SP was modified for the NASA/German Aerospace Center SOFIA experiment. Iran Air is the last civil operator of the type; its final 747-SP (EP-IAC) was to be retired in June 2016. 747-200 While the 747-100 powered by Pratt & Whitney JT9D-3A engines offered enough payload and range for medium-haul operations, it was marginal for long-haul route sectors. The demand for longer range aircraft with increased payload quickly led to the improved -200, which featured more powerful engines, increased MTOW, and greater range than the -100. A few early -200s retained the three-window configuration of the -100 on the upper deck, but most were built with a ten-window configuration on each side. The 747-200 was produced in passenger (-200B), freighter (-200F), convertible (-200C), and combi (-200M) versions. The 747-200B was the basic passenger version, with increased fuel capacity and more powerful engines; it entered service in February 1971. In its first three years of production, the -200 was equipped with Pratt & Whitney JT9D-7 engines (initially the only engine available). Range with a full passenger load started at over and increased to with later engines. Most -200Bs had an internally stretched upper deck, allowing for up to 16 passenger seats. The freighter model, the 747-200F, had a hinged nose cargo door and could be fitted with an optional side cargo door, and had a capacity of 105 tons (95.3 tonnes) and an MTOW of up to . It entered service in 1972 with Lufthansa. The convertible version, the 747-200C, could be converted between a passenger and a freighter or used in mixed configurations, and featured removable seats and a nose cargo door. The -200C could also be outfitted with an optional side cargo door on the main deck. 
The combi aircraft model, the 747-200M (originally designated 747-200BC), could carry freight in the rear section of the main deck via a side cargo door. A removable partition on the main deck separated the cargo area at the rear from the passengers at the front. The -200M could carry up to 238 passengers in a three-class configuration with cargo carried on the main deck. The model was also known as the 747-200 Combi. As on the -100, a stretched upper deck (SUD) modification was later offered: a total of 10 747-200s operated by KLM were converted, and Union de Transports Aériens (UTA) also had two aircraft converted.

After launching the -200 with Pratt & Whitney JT9D-7 engines, Boeing announced on August 1, 1972, that it had reached an agreement with General Electric to certify the 747 with CF6-50 series engines to increase the aircraft's market potential. Rolls-Royce followed 747 engine production with a launch order from British Airways for four aircraft. The option of RB211-524B engines was announced on June 17, 1975. The -200 was the first 747 to provide a choice of powerplant from the three major engine manufacturers. In 1976, its unit cost was US$39M (M today).

A total of 393 of the 747-200 versions had been built when production ended in 1991. Of these, 225 were -200B, 73 were -200F, 13 were -200C, 78 were -200M, and 4 were military. Iran Air retired the last passenger 747-200 in May 2016, 36 years after it was delivered. Five 747-200s remain in service as freighters.

747-300

The 747-300 features a longer upper deck than the -200. The stretched upper deck (SUD) has two emergency exit doors and is the most visible difference between the -300 and previous models. After being made standard on the 747-300, the SUD was offered as a retrofit, and as an option for earlier variants still in production. Examples of the retrofit were two UTA -200 Combis converted in 1986; examples of the option were two brand-new JAL -100 aircraft (designated -100BSR SUD), the first of which was delivered on March 24, 1986. The 747-300 introduced a new straight stairway to the upper deck, instead of the spiral staircase on earlier variants, which creates room above and below for more seats. Minor aerodynamic changes allowed the -300's cruise speed to reach Mach 0.85, compared with Mach 0.84 on the -200 and -100 models, while retaining the same takeoff weight. The -300 could be equipped with the same Pratt & Whitney and Rolls-Royce powerplants as the -200, as well as updated General Electric CF6-80C2B1 engines.

Swissair placed the first order for the 747-300 on June 11, 1980. The variant revived the 747-300 designation, which had previously been used on a design study that did not reach production. The 747-300 first flew on October 5, 1982, and the type's first delivery went to Swissair on March 23, 1983. In 1982, its unit cost was US$83M (M today). Besides the passenger model, two other versions (-300M, -300SR) were produced. The 747-300M features cargo capacity on the rear portion of the main deck, similar to the -200M, but with the stretched upper deck it can carry more passengers. The 747-300SR, a short-range, high-capacity domestic model, was produced for Japanese markets with maximum seating for 584. No production freighter version of the 747-300 was built, but Boeing began modifying used passenger -300 models into freighters in 2000. A total of 81 747-300 series aircraft were delivered: 56 for passenger use, 21 -300M and 4 -300SR versions.
In 1985, just two years after the -300 entered service, the type was superseded by the announcement of the more advanced 747-400. The last 747-300 was delivered in September 1990 to Sabena. While some -300 customers continued operating the type, several large carriers replaced their 747-300s with 747-400s. Air France, Air India, Pakistan International Airlines, and Qantas were some of the last major carriers to operate the 747-300. On December 29, 2008, Qantas flew its last scheduled 747-300 service, operating from Melbourne to Los Angeles via Auckland. In July 2015, Pakistan International Airlines retired its final 747-300 after 30 years of service. Only two 747-300s remain in commercial service, with Mahan Air (1) and TransAVIAexport Airlines (1).

747-400

The 747-400 is an improved model with increased range. It has wingtip extensions of and winglets of , which improve the type's fuel efficiency by four percent compared to previous 747 versions. The 747-400 introduced a new glass cockpit designed for a flight crew of two instead of three, with the number of dials, gauges and knobs reduced from 971 to 365 through the use of electronics. The type also features tail fuel tanks, revised engines, and a new interior. The longer range has been used by some airlines to bypass traditional fuel stops, such as Anchorage. A 747-400 loaded with 126,000 lb of fuel flying 3,500 statute miles consumes an average of five gallons per mile.
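That consumption figure is easy to sanity-check, assuming a standard Jet A density of about 6.7 lb per US gallon (the density is a textbook approximation, not a number from the source):

# Sanity check of the quoted 747-400 fuel burn.
fuel_lb, miles, lb_per_gal = 126_000, 3_500, 6.7   # Jet A ~6.7 lb/US gal
gallons = fuel_lb / lb_per_gal                     # about 18,800 gallons
print(f"{gallons:,.0f} gal / {miles:,} mi = {gallons / miles:.2f} gal per mile")
# -> 18,806 gal / 3,500 mi = 5.37 gal per mile, i.e. roughly five gallons per mile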
Powerplants include the Pratt & Whitney PW4062, General Electric CF6-80C2, and Rolls-Royce RB211-524. Because the development of the Boeing 767 overlapped with that of the 747-400, both aircraft can use the same three powerplants, which are even interchangeable between the two aircraft models. The 747-400 was offered in passenger (-400), freighter (-400F), combi (-400M), domestic (-400D), extended range passenger (-400ER), and extended range freighter (-400ERF) versions. Passenger versions retain the same upper deck as the -300, while the freighter version does not have an extended upper deck. The 747-400D was built for short-range operations, with maximum seating for 624; winglets were not included, but they can be retrofitted. Cruising speed is up to Mach 0.855 on different versions of the 747-400.

The passenger version first entered service in February 1989 with launch customer Northwest Airlines on the Minneapolis to Phoenix route. The combi version entered service in September 1989 with KLM, while the freighter version entered service in November 1993 with Cargolux. The 747-400ERF entered service with Air France in October 2002, while the 747-400ER entered service with Qantas, its sole customer, in November 2002. In January 2004, Boeing and Cathay Pacific launched the Boeing 747-400 Special Freighter program, later referred to as the Boeing Converted Freighter (BCF), to modify passenger 747-400s for cargo use. The first 747-400BCF was redelivered in December 2005.

In March 2007, Boeing announced that it had no plans to produce further passenger versions of the -400. However, orders for 36 -400F and -400ERF freighters were already in place at the time of the announcement. The last passenger version of the 747-400 was delivered in April 2005 to China Airlines. Some of the last-built 747-400s were delivered with the Dreamliner livery along with the modern Signature interior from the Boeing 777. A total of 694 of the 747-400 series aircraft were delivered. At various times, the largest 747-400 operator has included Singapore Airlines, Japan Airlines, and British Airways. 331 Boeing 747-400s were in service; there were only 10 in passenger service as of September 2021.

747 LCF Dreamlifter

The 747-400 Dreamlifter (originally called the 747 Large Cargo Freighter or LCF) is a Boeing-designed modification of existing 747-400s into a larger outsize cargo freighter configuration, used to ferry 787 Dreamliner sub-assemblies. Evergreen Aviation Technologies Corporation of Taiwan was contracted to complete modifications of 747-400s into Dreamlifters in Taoyuan. The aircraft flew for the first time on September 9, 2006, in a test flight. Modification of four aircraft was completed by February 2010. The Dreamlifters have been placed into service transporting sub-assemblies for the 787 program to the Boeing plant in Everett, Washington, for final assembly. The aircraft is certified to carry only essential crew and not passengers.

747-8

Boeing announced a new 747 variant, the 747-8, on November 14, 2005. Referred to as the 747 Advanced prior to its launch, the 747-8 uses similar General Electric GEnx engines and cockpit technology to the 787. The variant is designed to be quieter, more economical, and more environmentally friendly. The 747-8's fuselage is lengthened from to , marking the first stretch variant of the aircraft. The 747-8 Freighter, or 747-8F, has 16% more payload capacity than its predecessor, allowing it to carry seven more standard air cargo containers, with a maximum payload capacity of 154 tons (140 tonnes) of cargo. As on previous 747 freighters, the 747-8F features a flip-up nose door, a side door on the main deck, and a side door on the lower deck ("belly") to aid loading and unloading. The 747-8F made its maiden flight on February 8, 2010. The variant received its amended type certificate jointly from the FAA and the European Aviation Safety Agency (EASA) on August 19, 2011. The -8F was first delivered to Cargolux on October 12, 2011.

The passenger version, named 747-8 Intercontinental or 747-8I, is designed to carry up to 467 passengers in a 3-class configuration and fly more than at Mach 0.855. As a derivative of the already common 747-400, the 747-8I has the economic benefit of similar training and interchangeable parts. The type's first test flight occurred on March 20, 2011. The 747-8 surpassed the Airbus A340-600 as the world's longest airliner, a record it held until the 777X, which first flew in 2020. The first -8I was delivered in May 2012 to Lufthansa. The 747-8 received 155 total orders, including 106 for the -8F and 47 for the -8I. The final 747-8F was delivered to Atlas Air on January 31, 2023.

Government, military, and other variants

VC-25 – This aircraft is the U.S. Air Force very important person (VIP) version of the 747-200B. The U.S. Air Force operates two of them in VIP configuration as the VC-25A. Tail numbers 28000 and 29000 are popularly known as Air Force One, which is technically the air-traffic call sign for any United States Air Force aircraft carrying the U.S. President. Partially completed aircraft from Everett, Washington, were flown to Wichita, Kansas, for final outfitting by Boeing Military Airplane Company. Two new aircraft, based around the 747-8, are being procured and will be designated VC-25B.
E-4B – This is an airborne command post designed for use in nuclear war. Three E-4As, based on the 747-200B, were delivered, followed by a fourth aircraft with more powerful engines and upgraded systems, delivered in 1979 as the E-4B; the three E-4As were subsequently upgraded to this standard.
Government, military, and other variants

VC-25 – This aircraft is the U.S. Air Force very important person (VIP) version of the 747-200B. The U.S. Air Force operates two of them in VIP configuration as the VC-25A. Tail numbers 28000 and 29000 are popularly known as Air Force One, which is technically the air-traffic call sign for any United States Air Force aircraft carrying the U.S. President. Partially completed aircraft from Everett, Washington, were flown to Wichita, Kansas, for final outfitting by Boeing Military Airplane Company. Two new aircraft, based on the 747-8, are being procured and will be designated VC-25B.
E-4B – This is an airborne command post designed for use in nuclear war. Three E-4As, based on the 747-200B, were delivered, followed in 1979 by a fourth aircraft with more powerful engines and upgraded systems, designated the E-4B; the three E-4As were later upgraded to this standard. Formerly known as the National Emergency Airborne Command Post (referred to colloquially as "Kneecap"), this type is now referred to as the National Airborne Operations Center (NAOC).
YAL-1 – This was the experimental Airborne Laser, a planned component of the U.S. National Missile Defense.
Shuttle Carrier Aircraft (SCA) – Two 747s were modified to carry the Space Shuttle orbiter. The first was a 747-100 (N905NA), and the other was a 747-100SR (N911NA). The first SCA carried the prototype Enterprise during the Approach and Landing Tests in the late 1970s. The two SCAs later carried all five operational Space Shuttle orbiters.
C-33 – This aircraft was a proposed U.S. military version of the 747-400F intended to augment the C-17 fleet. The plan was canceled in favor of additional C-17s.
KC-33 – A proposed 747-200F adapted as an aerial refueling tanker, bid against the DC-10-30 during the 1970s Advanced Cargo Transport Aircraft (ACTA) program that produced the KC-10 Extender. Before the 1979 Iranian Revolution, Iran bought four 747-100 aircraft with air-refueling boom conversions to support its fleet of F-4 Phantoms. There is a report of the Iranians using a 747 tanker in the H-3 airstrike during the Iran–Iraq War. It is unknown whether these aircraft remain usable as tankers. Since then there have been proposals to use a 747-400 in that role.
747 CMCA – This "Cruise Missile Carrier Aircraft" variant was considered by the U.S. Air Force during the development of the B-1 Lancer strategic bomber. It would have been equipped with 50 to 100 AGM-86 ALCM cruise missiles on rotary launchers. The plan was abandoned in favor of more conventional strategic bombers.
747 AAC – A Boeing study under contract from the USAF for an "airborne aircraft carrier" for up to 10 Boeing Model 985-121 "microfighters", with the ability to launch, retrieve, re-arm, and refuel them. Boeing believed that the scheme would be able to deliver a flexible and fast carrier platform with global reach, particularly where other bases were not available. Modified versions of the 747-200 and Lockheed C-5A were considered as the base aircraft. The concept, which included a complementary 747 AWACS version with two reconnaissance "microfighters", was considered technically feasible in 1973.
Evergreen 747 Supertanker – A Boeing 747-200 modified as an aerial application platform for firefighting, dispersing firefighting chemicals.
Stratospheric Observatory for Infrared Astronomy (SOFIA) – A former Pan Am Boeing 747SP modified to carry a large infrared-sensitive telescope, in a joint venture of NASA and DLR. High altitudes are needed for infrared astronomy, to rise above infrared-absorbing water vapor in the atmosphere.

A number of other governments also use the 747 as a VIP transport, including Bahrain, Brunei, India, Iran, Japan, Kuwait, Oman, Pakistan, Qatar, Saudi Arabia, and the United Arab Emirates. Several Boeing 747-8s have been ordered by Boeing Business Jet for conversion to VIP transports for several unidentified customers.

Undeveloped variants

Boeing has studied a number of 747 variants that have not gone beyond the concept stage.

747 trijet

During the late 1960s and early 1970s, Boeing studied the development of a shorter 747 with three engines, to compete with the smaller Lockheed L-1011 TriStar and McDonnell Douglas DC-10. The center engine would have been fitted in the tail with an S-duct intake similar to the L-1011's. Overall, the 747 trijet would have had more payload, range, and passenger capacity than both of them.
However, engineering studies showed that a major redesign of the 747 wing would be necessary. Maintaining the same 747 handling characteristics would be important to minimize pilot retraining. Boeing decided instead to pursue a shortened four-engine 747, resulting in the 747SP.

747-500

In January 1986, Boeing outlined preliminary studies to build a larger, ultra-long-haul version named the 747-500, which would enter service in the mid- to late 1990s. The derivative would use engines evolved from General Electric's unducted fan (UDF, or propfan) technology, but with shrouds, a bypass ratio of 15–20, and a large propfan diameter. The aircraft would be stretched (including the upper deck section) to a capacity of 500 seats, have a new wing to reduce drag, cruise at a faster speed to reduce flight times, and have enough range to allow airlines to fly nonstop between London, England, and Sydney, Australia.

747 ASB

Boeing announced the 747 ASB (Advanced Short Body) in 1986 as a response to the Airbus A340 and the McDonnell Douglas MD-11. The design would have combined the advanced technology used on the 747-400 with the foreshortened 747SP fuselage. The aircraft was to carry 295 passengers over long-haul ranges. However, airlines were not interested in the project, and it was canceled in 1988 in favor of the 777.

747-500X, -600X, and -700X

Boeing announced the 747-500X and -600X at the 1996 Farnborough Airshow. The proposed models would have combined the 747's fuselage with a new, larger wing derived from the 777. Other changes included adding more powerful engines and increasing the number of tires from two to four on the nose landing gear and from 16 to 20 on the main landing gear. The 747-500X concept featured a lengthened fuselage and was to carry 462 passengers over extended range, with a gross weight of over 1.0 Mlb (450 tonnes). The 747-600X concept featured a greater stretch, with seating for 548 passengers and a gross weight of 1.2 Mlb (540 tonnes). A third study concept, the 747-700X, would have combined the wing of the 747-600X with a widened fuselage, allowing it to carry 650 passengers over the same range as a 747-400. The cost of the changes from previous 747 models, in particular the new wing for the 747-500X and -600X, was estimated to be more than US$5 billion. Boeing was not able to attract enough interest to launch the aircraft.

747X and 747X Stretch

As Airbus progressed with its A3XX study, Boeing offered a 747 derivative as an alternative in 2000: a more modest proposal than the previous -500X and -600X, retaining the 747's overall wing design and adding a segment at the root to increase the span. Power would have been supplied by either the Engine Alliance GP7172 or the Rolls-Royce Trent 600, which were also proposed for the 767-400ERX. A new flight deck based on the 777's would be used. The 747X aircraft was to carry 430 passengers over increased ranges. The 747X Stretch would have been extended further, allowing it to carry 500 passengers. Both would feature an interior based on the 777's. Freighter versions of the 747X and 747X Stretch were also studied. Like its predecessor, the 747X family was unable to garner enough interest to justify production, and it was shelved along with the 767-400ERX in March 2001, when Boeing announced the Sonic Cruiser concept.
Though the 747X design was less costly than the 747-500X and -600X, it was criticized for not offering a sufficient advance over the existing 747-400. The 747X did not make it beyond the drawing board, but the 747-400X being developed concurrently moved into production to become the 747-400ER.

747-400XQLR

After the end of the 747X program, Boeing continued to study improvements that could be made to the 747. The 747-400XQLR (Quiet Long Range) was meant to have increased range, with improvements to boost efficiency and reduce noise. Improvements studied included raked wingtips similar to those used on the 767-400ER and a sawtooth engine nacelle for noise reduction. Although the 747-400XQLR did not move to production, many of its features were used for the 747 Advanced, which was launched as the 747-8 in 2005.

Operators

In 1979, Qantas became the first airline in the world to operate an all-747 fleet, with seventeen aircraft. In later years, 462 Boeing 747s were in airline service, with Atlas Air and British Airways the largest operators at 33 747-400s each.

The last US passenger Boeing 747 was retired from Delta Air Lines in December 2017, after the type had flown for every major American carrier since its 1970 introduction. Delta flew three of its last four aircraft on a farewell tour, from Seattle to Atlanta on December 19, then to Los Angeles and Minneapolis/St. Paul on December 20.

With IATA forecasting air freight growth of 4% to 5% in 2018, fueled by booming trade in time-sensitive goods from smartphones to fresh flowers, demand for freighters remained strong even as passenger 747s were phased out. Of the 1,544 produced, 890 had been retired; a small subset of those intended to be parted out instead received $3 million D-checks before flying again. Young -400s were sold for 320 million yuan ($50 million), and Boeing stopped converting freighters, a process which used to cost nearly $30 million. This comeback helped the airframer's financing arm, Boeing Capital, shrink its exposure to the 747-8 from $1.07 billion in 2017 to $481 million in 2018.

In July 2020, British Airways announced that it was retiring its 747 fleet. The final British Airways 747 flights departed London Heathrow on October 8, 2020.

Orders and deliveries

Boeing 747 orders and deliveries (cumulative, by year), through the end of January 2023.

Accidents and incidents

In total, the 747 has been involved in 173 aviation accidents and incidents, including 64 hull-loss accidents causing fatalities. There have been several hijackings of Boeing 747s, such as Pan Am Flight 73, a 747-100 hijacked by four terrorists, causing 20 deaths.

Few crashes have been attributed to 747 design flaws. The Tenerife airport disaster resulted from pilot error and communications failure, while the Japan Airlines Flight 123 and China Airlines Flight 611 crashes stemmed from improper aircraft repair. United Airlines Flight 811, which suffered an explosive decompression mid-flight on February 24, 1989, led the National Transportation Safety Board (NTSB) to recommend that Boeing 747-100 and 747-200 cargo doors similar to those on the Flight 811 aircraft be modified to the design featured on the Boeing 747-400. Korean Air Lines Flight 007 was shot down by a Soviet fighter aircraft in 1983 after it had strayed into Soviet territory, causing US President Ronald Reagan to authorize the then-strictly-military global positioning system (GPS) for civilian use.
Accidents due to design deficiencies included TWA Flight 800, where a 747-100 exploded in mid-air on July 17, 1996, probably due to sparking electrical wires inside the center fuel tank. This finding led the FAA, after years of research into solutions, to adopt a rule in July 2008 requiring installation of an inerting system in the center fuel tank of most large aircraft. At the time, the new safety system was expected to cost US$100,000 to $450,000 per aircraft and to add a modest amount of weight. El Al Flight 1862 crashed after the fuse pins for an engine broke off shortly after take-off due to metal fatigue. Instead of simply dropping away from the wing, the engine knocked off the adjacent engine and damaged the wing.

Aircraft on display

As increasing numbers of "classic" 747-100 and -200 series aircraft have been retired, some have found other uses, such as museum display. Some older 747-300s and 747-400s were later added to museum collections.

20235/001 – 747-121 registration N7470 City of Everett, the first 747 and prototype, at the Museum of Flight, Seattle, Washington, US.
19651/025 – 747-121 registration N747GE at the Pima Air & Space Museum, Tucson, Arizona, US.
19778/027 – 747-151 registration N601US, nose section, at the National Air and Space Museum, Washington, D.C., US.
19661/070 – 747-121(SF) registration N681UP, preserved at a plaza on Jungong Road, Shanghai, China.
19896/072 – 747-132(SF) registration N481EV at the Evergreen Aviation & Space Museum, McMinnville, Oregon, US.
20107/086 – 747-123 registration N905NA, a NASA Shuttle Carrier Aircraft, at the Johnson Space Center, Houston, Texas, US.
20269/150 – 747-136 registration G-AWNG, nose section, at the Hiller Aviation Museum, San Carlos, California, US.
20239/160 – 747-244B registration ZS-SAN, nicknamed Lebombo, at the South African Airways Museum Society, Rand Airport, Johannesburg, South Africa.
20541/200 – 747-128 registration F-BPVJ at the Musée de l'Air et de l'Espace, Paris, France.
20770/213 – 747-2B5B registration HL7463 at the Jeongseok Aviation Center, Jeju, South Korea.
20713/219 – 747-212B(SF) registration N482EV at the Evergreen Aviation & Space Museum, McMinnville, Oregon, US.
21134/288 – 747SP-44 registration ZS-SPC at the South African Airways Museum Society, Rand Airport, Johannesburg, South Africa.
21549/336 – 747-206B registration PH-BUK at the Aviodrome, Lelystad, Netherlands.
21588/342 – 747-230B(M) registration D-ABYM, preserved at the Technik Museum Speyer, Germany.
21650/354 – 747-2R7F/SCD registration G-MKGA, preserved at Cotswold Airport as an event space.
22145/410 – 747-238B registration VH-EBQ at the Qantas Founders Outback Museum, Longreach, Queensland, Australia.
23223/606 – 747-338 registration VH-EBU at Melbourne Avalon Airport, Avalon, Victoria, Australia; an ex-Qantas airframe formerly decorated in the Nalanji Dreaming livery, currently in use as a training aircraft and film set.
23719/696 – 747-451 registration N661US at the Delta Flight Museum, Atlanta, Georgia, US; this aircraft was the prototype 747-400 and the first to enter service.
24354/731 – 747-438 registration VH-OJA at Shellharbour Airport, Albion Park Rail, New South Wales, Australia.
21441/306 – 747SP-21 registration N747NA (SOFIA) at the Pima Air and Space Museum, Tucson, Arizona, US; a former Pan Am and United Airlines 747SP bought by NASA and converted into a flying infrared telescope for astronomy, named Clipper Lindbergh.
Other uses

Upon its retirement from service, the 747 that was number two in the production line was dismantled and shipped to Hopyeong, Namyangju, Gyeonggi-do, South Korea, where it was reassembled, repainted in a livery similar to that of Air Force One, and converted into a restaurant. Originally flown commercially by Pan Am as N747PA, Clipper Juan T. Trippe, and repaired for service following a tailstrike, it stayed with the airline until its bankruptcy. The restaurant closed by 2009, and the aircraft was scrapped in 2010.

A former British Airways 747-200B, G-BDXJ, is parked at the Dunsfold Aerodrome in Surrey, England, and has been used as a movie set for productions such as the 2006 James Bond film Casino Royale. The airplane also appears frequently in the television series Top Gear, which is filmed at Dunsfold.

The Jumbo Stay hostel, using a converted 747-200 formerly registered as 9V-SQE, opened at Arlanda Airport, Stockholm, in January 2009.

A former Pakistan International Airlines 747-300 was converted into a restaurant by Pakistan's Airports Security Force in 2017. It is located at Jinnah International Airport, Karachi.

The wings of a 747 have been repurposed as roofs of a house in Malibu, California.

Cultural impact

Following its debut, the 747 rapidly achieved iconic status. The aircraft entered the cultural lexicon as the original Jumbo Jet, a term coined by the aviation media to describe its size, and was also nicknamed Queen of the Skies. Test pilot David P. Davies described it as "a most impressive aeroplane with a number of exceptionally fine qualities", and praised its flight control system as "truly outstanding" because of its redundancy.

Appearing in over 300 film productions, the 747 is one of the most widely depicted civilian aircraft and is considered by many to be one of the most iconic aircraft in film history. It has appeared in productions such as the disaster films Airport 1975 and Airport '77, as well as Air Force One, Die Hard 2, and Executive Decision.
https://en.wikipedia.org/wiki/Burgundian
Burgundian
Burgundian can refer to any of the following:

Someone or something from Burgundy
Burgundians, an East Germanic tribe who first appear in history in South East Europe; later Burgundians colonised the area of Gaul that is now known as Burgundy (French Bourgogne)
The Old Burgundian language (Germanic), an East Germanic language spoken by the Burgundians
The Modern Burgundian language (Oïl), an Oïl language spoken in the region of Burgundy, France
Frainc-Comtou dialect, sometimes regarded as part of the Burgundian group of languages
Burgundian (party), a political faction in early 15th-century France during the Hundred Years' War

See also

Burgundian War (disambiguation)
https://en.wikipedia.org/wiki/Bronze%20Age
Bronze Age
The Bronze Age is a historic period, lasting approximately from 3300 BC to 1200 BC, characterized by the use of bronze, the presence of writing in some areas, and other early features of urban civilization. The Bronze Age is the second principal period of the three-age system proposed in 1836 by Christian Jürgensen Thomsen for classifying and studying ancient societies and history.

An ancient civilization is deemed to be part of the Bronze Age because it either produced bronze by smelting its own copper and alloying it with tin, arsenic, or other metals, or traded other items for bronze from production areas elsewhere. Bronze is harder and more durable than the other metals available at the time, allowing Bronze Age civilizations to gain a technological advantage.

While terrestrial iron is naturally abundant, the higher temperature required for smelting it, in addition to the greater difficulty of working with the metal, placed it out of reach of common use until the end of the second millennium BC. Tin's low melting point and copper's relatively moderate melting point placed them within the capabilities of the Neolithic pottery kilns, which date back to 6,000 BC and were able to produce the necessary temperatures. Copper and tin ores are rare, as reflected in the fact that there were no tin bronzes in Western Asia before trading in bronze began in the 3rd millennium BC.

Worldwide, the Bronze Age generally followed the Neolithic period, with the Chalcolithic serving as a transition. Bronze Age cultures differed in their development of writing. According to archaeological evidence, cultures in Mesopotamia (cuneiform script) and Egypt (hieroglyphs) developed the earliest practical writing systems.

Metal use

The period is characterized by the widespread use of bronze, even if only by elites in its early part, though the introduction and development of bronze technology were not universally synchronous. Human-made tin bronze technology requires set production techniques. Tin must be mined (mainly as the tin ore cassiterite) and smelted separately, then added to hot copper to make bronze alloy. The Bronze Age was a time of extensive use of metals and of developing trade networks (see Tin sources and trade in ancient times). A 2013 report suggests that the earliest tin-alloy bronze dates to the mid-5th millennium BC in a Vinča culture site in Pločnik (Serbia), although this culture is not conventionally considered part of the Bronze Age. The dating of the foil has been disputed.
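The introduction's reasoning about kiln temperatures can be made concrete with a small comparison. The figures below are standard reference values supplied here as assumptions, since the text itself does not give the numbers: tin melts at about 232 °C, copper at about 1,085 °C, iron at about 1,538 °C, and Neolithic kilns are generally credited with exceeding roughly 900 °C.

```python
# A minimal sketch of the introduction's kiln-temperature argument.
# All figures are standard reference values supplied as assumptions;
# the article text itself omits the specific numbers.
KILN_FLOOR_C = 900      # Neolithic kilns could exceed roughly this temperature
KILN_CEILING_C = 1200   # assumed practical limit with charcoal and forced draft

melting_points_c = {
    "tin": 232,      # far below kiln range: trivially workable
    "copper": 1085,  # near the top of kiln range: attainable
    "iron": 1538,    # well beyond it: out of common use until later furnaces
}

for metal, mp in melting_points_c.items():
    verdict = "within reach" if mp <= KILN_CEILING_C else "out of reach"
    print(f"{metal:>6}: {mp:>5} C -> {verdict} of early kilns "
          f"(>{KILN_FLOOR_C} C, up to ~{KILN_CEILING_C} C assumed)")
```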
Near East

Western Asia and the Near East were the first regions to enter the Bronze Age, which began with the rise of the Mesopotamian civilization of Sumer in the mid-4th millennium BC. Cultures in the ancient Near East (often called one of "the cradles of civilization") practiced intensive year-round agriculture, developed writing systems, invented the potter's wheel, created centralized governments (usually in the form of hereditary monarchies), written law codes, city-states, nation-states, and empires, embarked on advanced architectural projects, introduced social stratification, economic and civil administration, and slavery, and practiced organized warfare, medicine, and religion. Societies in the region laid the foundations for astronomy, mathematics, and astrology. (Dates are approximate; consult particular articles for details.)

Anatolia

The Hittite Empire was established in Hattusa in northern Anatolia from the 18th century BC. In the 14th century BC, the Hittite Kingdom was at its height, encompassing central Anatolia, southwestern Syria as far as Ugarit, and upper Mesopotamia. After 1180 BC, amid general turmoil in the Levant conjectured to have been associated with the sudden arrival of the Sea Peoples, the kingdom disintegrated into several independent "Neo-Hittite" city-states, some of which survived until as late as the 8th century BC.

Arzawa, in western Anatolia during the second half of the second millennium BC, likely extended along southern Anatolia in a belt reaching from near the Turkish Lakes Region to the Aegean coast. Arzawa was the western neighbor, sometimes a rival and sometimes a vassal, of the Middle and New Hittite Kingdoms.

The Assuwa league was a confederation of states in western Anatolia that was defeated by the Hittites under Tudhaliya I, around 1400 BC. Arzawa has been associated with the much more obscure Assuwa, generally located to its north. It probably bordered it, and may even be an alternative term for it (at least during some periods).

Egypt

Early Bronze dynasties

In Ancient Egypt, the Bronze Age begins in the Protodynastic period, around 3150 BC. The archaic Early Bronze Age of Egypt, known as the Early Dynastic Period of Egypt, immediately follows the unification of Lower and Upper Egypt, around 3100 BC. It is generally taken to include the First and Second Dynasties, lasting from the Protodynastic Period of Egypt until about 2686 BC, or the beginning of the Old Kingdom. With the First Dynasty, the capital moved from Abydos to Memphis, with a unified Egypt ruled by an Egyptian god-king. Abydos remained the major holy land in the south. The hallmarks of ancient Egyptian civilization, such as art, architecture, and many aspects of religion, took shape during the Early Dynastic Period. Memphis in the Early Bronze Age was the largest city of the time.

The Old Kingdom of the regional Bronze Age is the name given to the period in the 3rd millennium BC when Egypt attained its first continuous peak of civilization in complexity and achievement, the first of three "Kingdom" periods which mark the high points of civilization in the lower Nile Valley (the others being the Middle Kingdom and the New Kingdom).

The First Intermediate Period of Egypt, often described as a "dark period" in ancient Egyptian history, spanned about 100 years after the end of the Old Kingdom, from about 2181 to 2055 BC. Very little monumental evidence survives from this period, especially from its early part. The First Intermediate Period was a dynamic time when the rule of Egypt was roughly divided between two competing power bases: Heracleopolis in Lower Egypt and Thebes in Upper Egypt. These two kingdoms would eventually come into conflict, with the Theban kings conquering the north, resulting in the reunification of Egypt under a single ruler during the second part of the 11th Dynasty.

Nubia

The Bronze Age in Nubia started as early as 2300 BC. Copper smelting was introduced by Egyptians to the Nubian city of Meroë, in modern-day Sudan, around 2600 BC. A furnace for bronze casting, dated to 2300–1900 BC, has been found in Kerma.

Middle Bronze dynasties

The Middle Kingdom of Egypt lasted from 2055 to 1650 BC. During this period, the Osiris funerary cult rose to dominate Egyptian popular religion. The period comprises two phases: the 11th Dynasty, which ruled from Thebes, and the 12th and 13th Dynasties, centered on el-Lisht.
The unified kingdom was previously considered to comprise the 11th and 12th Dynasties, but historians now at least partially consider the 13th Dynasty to belong to the Middle Kingdom.

During the Second Intermediate Period, Ancient Egypt fell into disarray for a second time, between the end of the Middle Kingdom and the start of the New Kingdom. It is best known for the Hyksos, whose reign comprised the 15th and 16th Dynasties. The Hyksos first appeared in Egypt during the 11th Dynasty, began their climb to power in the 13th Dynasty, and emerged from the Second Intermediate Period in control of Avaris and the Delta. By the 15th Dynasty, they ruled Lower Egypt, and they were expelled at the end of the 17th Dynasty.

Late Bronze dynasties

The New Kingdom of Egypt, also referred to as the Egyptian Empire, lasted from the 16th to the 11th century BC. The New Kingdom followed the Second Intermediate Period and was succeeded by the Third Intermediate Period. It was Egypt's most prosperous time and marked the peak of Egypt's power. The later New Kingdom, i.e. the 19th and 20th Dynasties (1292–1069 BC), is also known as the Ramesside period, after the eleven pharaohs who took the name Ramesses.

Iranian Plateau

Elam was a pre-Iranian ancient civilization located to the east of Mesopotamia. In the Old Elamite period (Middle Bronze Age), Elam consisted of kingdoms on the Iranian Plateau, centered in Anshan; from the mid-2nd millennium BC, it was centered in Susa in the Khuzestan lowlands. Its culture played a crucial role in the Gutian Empire and especially during the Iranian Achaemenid dynasty that succeeded it.

The Oxus civilization was a Bronze Age Central Asian culture dated to c. 2300–1700 BC and centered on the upper Amu Darya (Oxus). In the Early Bronze Age, the culture of the Kopet Dag oases and Altyndepe developed a proto-urban society. This corresponds to level IV at Namazga-Tepe. Altyndepe was a major center even then. Pottery was wheel-turned. Grapes were grown. The height of this urban development was reached in the Middle Bronze Age, around 2300 BC, corresponding to level V at Namazga-Depe. This Bronze Age culture is called the Bactria–Margiana Archaeological Complex (BMAC).

The Kulli culture, similar to those of the Indus Valley civilisation, was located in southern Balochistan (Gedrosia) c. 2500–2000 BC. Agriculture was the economic base of these people. At several places, dams were found, providing evidence for a highly developed water management system.

Konar Sandal is associated with the hypothesized "Jiroft culture", a 3rd-millennium-BC culture postulated on the basis of a collection of artifacts confiscated in 2001.

Levant

In modern scholarship, the chronology of the Bronze Age Levant is divided into Early/Proto-Syrian, corresponding to the Early Bronze Age; Old Syrian, corresponding to the Middle Bronze Age; and Middle Syrian, corresponding to the Late Bronze Age. The term Neo-Syria is used to designate the early Iron Age.

The Old Syrian period was dominated by the Eblaite first kingdom, Nagar, and the Mariote second kingdom. The Akkadians conquered large areas of the Levant and were followed by the Amorite kingdoms, c. 2000–1600 BC, which arose in Mari, Yamhad, Qatna, and Assyria. From the 15th century BC onward, the term Amurru is usually applied to the region extending north of Canaan as far as Kadesh on the Orontes River.
The earliest-known Ugaritic contact with Egypt (and the first exact dating of Ugaritic civilization) comes from a carnelian bead identified with the Middle Kingdom pharaoh Senusret I, 1971–1926 BC. A stela and a statuette from the Egyptian pharaohs Senusret III and Amenemhet III have also been found; however, it is unclear when these monuments reached Ugarit. Among the Amarna letters, messages from Ugarit of c. 1350 BC, written by Ammittamru I, Niqmaddu II, and his queen, were discovered. From the 16th to the 13th century BC, Ugarit remained in constant touch with Egypt and Cyprus (named Alashiya).

The Mitanni was a loosely organized state in northern Syria and southeast Anatolia from c. 1500 to 1300 BC. Founded by an Indo-Aryan ruling class that governed a predominantly Hurrian population, Mitanni came to be a regional power after the Hittite destruction of Amorite Babylon created a power vacuum in Mesopotamia. At its beginning, Mitanni's major rival was Egypt under the Thutmosids. However, with the ascent of the Hittite empire, Mitanni and Egypt allied to protect their mutual interests from the threat of Hittite domination. At the height of its power, during the 14th century BC, it had outposts centered on its capital, Washukanni, which archaeologists have located on the headwaters of the Khabur River. Eventually, Mitanni succumbed to Hittite, and later Assyrian, attacks and was reduced to a province of the Middle Assyrian Empire.

The Israelites were an ancient Semitic-speaking people of the Ancient Near East who inhabited part of Canaan during the tribal and monarchic periods (15th to 6th centuries BC), and lived in the region in smaller numbers after the fall of the monarchy. The name "Israel" first appears c. 1209 BC, at the end of the Late Bronze Age and the very beginning of the Iron Age, on the Merneptah Stele raised by the Egyptian pharaoh Merneptah.

The Aramaeans were a Northwest Semitic, semi-nomadic and pastoralist people who originated in what is now modern Syria (Biblical Aram) during the Late Bronze Age and the early Iron Age. Large groups migrated to Mesopotamia, where they intermingled with the native Akkadian (Assyrian and Babylonian) population. The Aramaeans never had a unified empire; they were divided into independent kingdoms all across the Near East. After the Bronze Age collapse, their political influence was confined to a number of Syro-Hittite states, which were entirely absorbed into the Neo-Assyrian Empire by the 8th century BC.

Mesopotamia

The Mesopotamian Bronze Age began about 3500 BC and ended with the Kassite period (c. 1500 – c. 1155 BC). The usual tripartite division into an Early, Middle and Late Bronze Age is not used. Instead, a division primarily based on art-historical and historical characteristics is more common. The cities of the Ancient Near East housed several tens of thousands of people. Ur, Kish, Isin, Larsa, and Nippur in the Middle Bronze Age and Babylon, Calah, and Assur in the Late Bronze Age similarly had large populations. The Akkadian Empire (2335–2154 BC) became the dominant power in the region, and after its fall the Sumerians enjoyed a renaissance with the Neo-Sumerian Empire. Assyria became a regional power under the Amorite king Shamshi-Adad I with the Old Assyrian Empire (c. 1800–1600 BC). The earliest mention of Babylon (then a small administrative town) appears on a tablet from the reign of Sargon of Akkad in the 23rd century BC. The Amorite dynasty established the city-state of Babylon in the 19th century BC.
Over 100 years later, it briefly took over the other city-states and formed the short-lived First Babylonian Empire during what is also called the Old Babylonian Period. Akkad, Assyria, and Babylonia all used the written East Semitic Akkadian language for official use and as a spoken language. By that time, the Sumerian language was no longer spoken, but was still in religious use in Assyria and Babylonia, and would remain so until the 1st century AD. The Akkadian and Sumerian traditions played a major role in later Assyrian and Babylonian culture, even though Babylonia (unlike the more militarily powerful Assyria) was itself founded by non-native Amorites and often ruled by other non-indigenous peoples, such as Kassites, Aramaeans, and Chaldeans, as well as by its Assyrian neighbors.

Asia

Central Asia

Agropastoralism

For many decades, scholars made only superficial reference to Central Asia as the "pastoral realm" or, alternatively, the "nomadic world", in what researchers have come to call the "Central Asian void": a 5,000-year span that was neglected in studies of the origins of agriculture. Foothill regions and glacial melt streams supported Bronze Age agropastoralists who developed complex east–west trade routes between Central Asia and China that introduced wheat and barley to China and spread millet across Central Asia.

Bactria–Margiana Archaeological Complex

The Bactria–Margiana Archaeological Complex (BMAC), also known as the Oxus civilization, was a Bronze Age civilization in Central Asia, dated to c. 2400–1600 BC and located in present-day northern Afghanistan, eastern Turkmenistan, southern Uzbekistan, and western Tajikistan, centred on the upper Amu Darya (Oxus River). Its sites were discovered and named by the Soviet archaeologist Viktor Sarianidi (1976). Bactria was the Greek name for the area of Bactra (modern Balkh), in what is now northern Afghanistan, and Margiana was the Greek name for the Persian satrapy of Marguš, the capital of which was Merv, in modern-day southeastern Turkmenistan. A wealth of information indicates that the BMAC had close international relations with the Indus Valley, the Iranian Plateau, and possibly even, indirectly, with Mesopotamia, and all of these civilizations were familiar with lost-wax casting. According to recent studies, the BMAC was not a primary contributor to later South Asian genetics.

Seima-Turbino phenomenon

The Altai Mountains, in what is now southern Russia and central Mongolia, have been identified as the point of origin of a cultural enigma termed the Seima-Turbino phenomenon. It is conjectured that changes in climate in this region around 2000 BC, and the ensuing ecological, economic, and political changes, triggered a rapid and massive migration westward into northeast Europe, eastward into China, and southward into Vietnam and Thailand across a frontier of some 4,000 miles. This migration took place in just five to six generations and led to peoples from Finland in the west to Thailand in the east employing the same metalworking technology and, in some areas, horse breeding and riding. It is further conjectured that the same migrations spread the Uralic group of languages across Europe and Asia: some 39 languages of this group are still extant, including Hungarian, Finnish, and Estonian. However, recent genetic testing of sites in south Siberia and Kazakhstan (the Andronovo horizon) would rather support a spread of bronze technology via Indo-European migrations eastwards, as this technology had been well known for quite a while in western regions.
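To convey the pace implied by the Seima-Turbino conjecture above (some 4,000 miles crossed in five to six generations), the figures can be turned into a rough expansion rate; the 25 years per generation used below is a conventional assumption, not a number from the text.

```python
# Rough expansion rate implied by the Seima-Turbino migration conjecture.
YEARS_PER_GENERATION = 25   # conventional assumption, not from the text

frontier_miles = 4_000      # extent of the migration frontier cited above

for generations in (5, 6):
    years = generations * YEARS_PER_GENERATION
    rate = frontier_miles / years
    print(f"{generations} generations ~ {years} years -> ~{rate:.0f} miles/year")
# ~27-32 miles per year: an extremely fast spread by prehistoric standards.
```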
East Asia

China

In China, the earliest bronze artifacts have been found in the Majiayao culture site (between 3100 and 2700 BC). The term "Bronze Age" has been transferred to the archaeology of China from that of Western Eurasia, and there is no consensus or universally used convention delimiting the "Bronze Age" in the context of Chinese prehistory. By convention, the "Early Bronze Age" in China is sometimes taken as equivalent to the "Shang dynasty" period (16th to 11th centuries BC), and the "Later Bronze Age" as equivalent to the "Zhou dynasty" period (11th to 3rd centuries BC; from the 5th century also dubbed the "Iron Age"), although there is an argument to be made that the "Bronze Age" proper never ended in China, as there is no recognizable transition to an "Iron Age". Significantly, together with the jade art that precedes it, bronze was seen as a "fine" material for ritual art when compared with iron or stone.

Bronze metallurgy in China originated in what is referred to as the Erlitou period, which some historians argue places it within the range of dates controlled by the Shang dynasty. Others believe the Erlitou sites belong to the preceding Xia dynasty. The U.S. National Gallery of Art defines the Chinese Bronze Age as the "period between about 2000 BC and 771 BC", a period that begins with the Erlitou culture and ends abruptly with the disintegration of Western Zhou rule.

There is reason to believe that bronze work developed inside China separately from outside influence. However, the discovery of Europoid mummies in Xinjiang has caused some scholars, such as Johan Gunnar Andersson, Jan Romgard, and An Zhimin, to suggest a possible route of transmission from the West eastwards. According to An Zhimin, "It can be imagined that initially bronze and iron technology took its rise in West Asia, first influenced the Xinjiang region, and then reached the Yellow River valley, providing external impetus for the rise of the Shang and Zhou civilizations." According to Jan Romgard, "bronze and iron tools seems to have traveled from west to east as well as the use of wheeled wagons and the domestication of the horse." There are also possible links to the Seima-Turbino culture, "a transcultural complex across northern Eurasia," the Eurasian steppe, and the Urals. However, the oldest bronze objects found in China so far were discovered at the Majiayao site in Gansu rather than in Xinjiang.

The Shang dynasty (also known as the Yin dynasty) of the Yellow River Valley rose to power after the Xia dynasty around 1600 BC. While some direct information about the Shang dynasty comes from Shang-era inscriptions on bronze artifacts, most comes from oracle bones (turtle shells, cattle scapulae, or other bones) which bear glyphs that form the first significant corpus of recorded Chinese characters.

Bronze production at Erlitou in Henan represents the earliest large-scale metallurgical industry in the Central Plains of China. The influence of the Seima-Turbino metalworking tradition from the north is supported by a series of recent discoveries in China of many unique perforated spearheads with downward hooks and small loops on the same or opposite side of the socket, which can be associated with the Seima-Turbino visual vocabulary of southern Siberia. The metallurgical centers of northwestern China, especially Qijia in Gansu and the Kexingzhuang culture in Shaanxi, played an intermediary role in this process.

Iron has been found from the Zhou dynasty, but its use was minimal.
Chinese literature dating to the 6th century BC attests knowledge of iron smelting, yet bronze continued to occupy the seat of significance in the archaeological and historical record for some time after this. Historian W. C. White argues that iron did not supplant bronze "at any period before the end of the Zhou dynasty (256 BC)" and that bronze vessels made up the majority of metal vessels through the Later Han period, or to AD 221.

Chinese bronze artifacts generally are either utilitarian, like spear points or adze heads, or "ritual bronzes", which are more elaborate versions in precious materials of everyday vessels, as well as tools and weapons. Examples are the numerous large sacrificial tripods known as dings in Chinese; there are many other distinct shapes. Surviving identified Chinese ritual bronzes tend to be highly decorated, often with the taotie motif, which involves highly stylized animal faces. These appear in three main motif types: those of demons, of symbolic animals, and of abstract symbols. Many large bronzes also bear cast inscriptions that form the great bulk of the surviving body of early Chinese writing and have helped historians and archaeologists piece together the history of China, especially during the Zhou dynasty (1046–256 BC).

The bronzes of the Western Zhou dynasty document large portions of history not found in the extant texts, and were often composed by persons of varying rank and possibly even social class. Further, the medium of cast bronze lends the record they preserve a permanence not enjoyed by manuscripts. These inscriptions can commonly be subdivided into four parts: a reference to the date and place, the naming of the event commemorated, the list of gifts given to the artisan in exchange for the bronze, and a dedication. The relative points of reference these vessels provide have enabled historians to place most of them within a certain time frame of the Western Zhou period, allowing them to trace the evolution of the vessels and the events they record.

Korea

The beginning of the Bronze Age on the peninsula is around 1000–800 BC. Initially centered around Liaoning and southern Manchuria, Korean Bronze Age culture exhibits unique typology and styles, especially in ritual objects.

The Mumun pottery period is named after the Korean name for undecorated or plain cooking and storage vessels that form a large part of the pottery assemblage over the entire length of the period, but especially c. 850–550 BC. The Mumun period is known for the origins of intensive agriculture and complex societies in both the Korean Peninsula and the Japanese Archipelago.

The Middle Mumun pottery period culture of the southern Korean Peninsula gradually adopted bronze production (c. 700–600? BC) after a period when Liaoning-style bronze daggers and other bronze artifacts were exchanged as far as the interior of the southern peninsula (c. 900–700 BC). The bronze daggers lent prestige and authority to the personages who wielded and were buried with them in high-status megalithic burials at south-coastal centers such as the Igeum-dong site. Bronze was an important element in ceremonies and for mortuary offerings until c. 100 BC.

Japan

The Japanese archipelago saw the introduction of bronze at the beginning of the Early Yayoi period (c. 300 BC), along with metalworking and agricultural practices brought in by settlers arriving from the continent.
Bronze and iron smelting techniques spread to the Japanese archipelago through contact with other ancient East Asian civilizations, particularly immigration and trade from the ancient Korean Peninsula and ancient mainland China. Iron was mainly used for agricultural and other tools, whereas ritual and ceremonial artifacts were mainly made of bronze.

South Asia

(Dates are approximate; consult particular articles for details.)

Indus Valley

The Bronze Age on the Indian subcontinent began around 3300 BC with the beginning of the Indus Valley civilization. Inhabitants of the Indus Valley, the Harappans, developed new techniques in metallurgy and produced copper, bronze, lead, and tin. The Late Harappan culture, which dates from 1900 to 1400 BC, overlapped the transition from the Bronze Age to the Iron Age; thus it is difficult to date this transition accurately. It has been claimed that a 6,000-year-old copper amulet manufactured in Mehrgarh in the shape of a wheel spoke is the earliest example of lost-wax casting in the world.

The civilization's cities were noted for their urban planning, baked-brick houses, elaborate drainage systems, water supply systems, clusters of large non-residential buildings, and new techniques in handicraft (carnelian products, seal carving) and metallurgy (copper, bronze, lead, and tin). The large cities of Mohenjo-daro and Harappa very likely grew to contain between 30,000 and 60,000 individuals, and the civilization itself during its florescence may have contained between one and five million individuals.

Southeast Asia

The Vilabouly Complex in Laos is a significant archaeological site for dating the origin of bronze metallurgy in Southeast Asia.

Thailand

In Ban Chiang, Thailand, bronze artifacts dating to 2100 BC have been discovered. However, according to radiocarbon dating of human and pig bones at Ban Chiang, some scholars propose that the initial Bronze Age there began in the late 2nd millennium BC. In Nyaunggan, Burma, bronze tools have been excavated along with ceramics and stone artifacts; dating is still broad (3500–500 BC). Ban Non Wat, excavated by Charles Higham, was a rich site: over 640 excavated graves yielded many complex bronze items that may have had social value connected to them.

Ban Chiang, however, is the most thoroughly documented site in Southeast Asia and has the clearest evidence of metallurgy. With a rough date range of the late 3rd millennium BC to the first millennium AD, this site alone has yielded artifacts such as burial pottery (dating from 2100 to 1700 BC), fragments of bronze, copper-based bangles, and much more. Notably, the technology at this site suggests on-site casting from the very beginning. On-site casting supports the theory that bronze working was first introduced to Southeast Asia as an already fully developed technique, implying that it was innovated elsewhere. Some scholars believe that copper-based metallurgy was disseminated from northwest and central China via southern and southwestern areas such as Guangdong and Yunnan provinces, finally reaching Southeast Asia around 1000 BC. Archaeology also suggests that Bronze Age metallurgy may not have been as significant a catalyst in social stratification and warfare in Southeast Asia as in other regions, with social organization shifting away from chiefdom-states toward heterarchical networks.
Data analyses of sites such as Ban Lum Khao, Ban Na Di, Non Nok Tha, Khok Phanom Di, and Nong Nor have consistently led researchers to conclude that there was no entrenched hierarchy.

Vietnam

Dating back to the Neolithic Age, the first bronze drums, called Dong Son drums, were uncovered in and around the Red River Delta regions of northern Vietnam and southern China. These relate to the Dong Son culture of Vietnam.

Archaeological research in northern Vietnam indicates an increase in rates of infectious disease following the advent of metallurgy; skeletal fragments in sites dating to the early and mid-Bronze Age evidence a greater proportion of lesions than in sites of earlier periods. There are a few possible implications of this. One is increased contact with bacterial and/or fungal pathogens due to increased population density and land clearing/cultivation. Another is decreased levels of immunocompetence in the Metal Age due to changes in diet caused by agriculture. The last is that there may have been an emergence of infectious disease in the Da But period that evolved into a more virulent form in the Metal Age.

Europe

A few examples of named Bronze Age cultures in Europe, in roughly relative order, follow. (Dates are approximate; consult particular articles for details.) The chosen cultures overlapped in time, and the indicated periods do not fully correspond to their estimated extents.

Aegean

The Aegean Bronze Age began around 3200 BC, when civilizations first established a far-ranging trade network. This network imported tin and charcoal to Cyprus, where copper was mined and alloyed with the tin to produce bronze. Bronze objects were then exported far and wide and supported the trade. Isotopic analysis of tin in some Mediterranean bronze artifacts suggests that it may have originated from Great Britain. Knowledge of navigation was well developed at this time and reached a peak of skill not exceeded (except perhaps by Polynesian sailors) until 1730, when the invention of the chronometer enabled the precise determination of longitude.

The Minoan civilization, based in Knossos on the island of Crete, appears to have coordinated and defended its Bronze Age trade. Ancient empires valued luxury goods in contrast to staple foods, which could lead to famine.

Aegean collapse

Bronze Age collapse theories have described aspects of the end of the Bronze Age in this region. At the end of the Bronze Age in the Aegean region, the Mycenaean administration of the regional trade empire followed the decline of Minoan primacy. Several Minoan client states lost much of their population to famine and/or pestilence. This would indicate that the trade network may have failed, preventing the trade that would previously have relieved such famines and prevented illness caused by malnutrition. It is also known that in this era the breadbasket of the Minoan empire, the area north of the Black Sea, also suddenly lost much of its population, and thus probably some capacity to cultivate crops. Drought and famine in Anatolia may have also led to the Aegean collapse by disrupting trade networks and thereby preventing the Aegean from accessing bronze and luxury goods.

The Aegean collapse has been attributed to the exhaustion of the Cypriot forests, causing the end of the bronze trade.
These forests are known to have existed into later times, and experiments have shown that charcoal production on the scale necessary for the bronze production of the late Bronze Age would have exhausted them in less than fifty years. The Aegean collapse has also been attributed to the fact that as iron tools became more common, the main justification for the tin trade ended, and that trade network ceased to function as it did formerly. The colonies of the Minoan empire then suffered drought, famine, war, or some combination of the three, and had no access to the distant resources of an empire by which they could easily recover.

The Thera eruption occurred around 1600 BC, north of Crete. Speculation includes that a tsunami from Thera (more commonly known today as Santorini) destroyed Cretan cities. A tsunami may have destroyed the Cretan navy in its home harbor, which then lost crucial naval battles, so that in the LMIB/LMII event (c. 1450 BC) the cities of Crete burned and the Mycenaean civilization took over Knossos. If the eruption occurred in the late 17th century BC (as most chronologists now think), then its immediate effects belong to the Middle to Late Bronze Age transition, and not to the end of the Late Bronze Age, but it could have triggered the instability that led to the collapse first of Knossos and then of Bronze Age society overall.

One such theory highlights the role of Cretan expertise in administering the empire post-Thera. If this expertise was concentrated in Crete, then the Mycenaeans may have made political and commercial mistakes in administering the Cretan empire. Archaeological findings, including some on the island of Thera, suggest that the center of the Minoan civilization at the time of the eruption was actually on Thera rather than on Crete. According to this theory, the catastrophic loss of the political, administrative, and economic center due to the eruption, as well as the damage wrought by the tsunami to the coastal towns and villages of Crete, precipitated the decline of the Minoans. A weakened political entity with reduced economic and military capability and fabled riches would have then been more vulnerable to conquest. Indeed, the Santorini eruption is usually dated to c. 1630 BC, while the Mycenaean Greeks first enter the historical record a few decades later, c. 1600 BC. The later Mycenaean assaults on Crete (c. 1450 BC) and Troy (c. 1250 BC) would have been a continuation of the steady encroachment of the Greeks upon the weakened Minoan world.

Balkans

Radivojevic et al. (2013) reported the discovery of a tin bronze foil from the Pločnik archaeological site securely dated to c. 4650 BC, as well as 14 other artifacts from Serbia and Bulgaria dated to before 4000 BC; these finds show that early tin bronze was more common than previously thought, and that it developed independently in Europe 1500 years before the first tin bronze alloys in the Near East. The production of complex tin bronzes lasted for c. 500 years in the Balkans. The authors report that evidence for the production of such complex bronzes disappears at the end of the 5th millennium, coinciding with the "collapse of large cultural complexes in north-eastern Bulgaria and Thrace in the late fifth millennium BC". Tin bronzes using cassiterite tin would be reintroduced to the area some 1500 years later.

The Dabene Treasure was unearthed from 2004 to 2007 near Karlovo, Plovdiv Province, in central Bulgaria. The whole treasure consists of 20,000 gold jewelry items of 18 to 23 karats.
The most important of them was a dagger made of gold and platinum with an unusual edge. The treasure was dated to the end of the 3rd millennium BC. Scientists suggest that the Karlovo valley used to be a major crafts center that exported gold jewelry all over Europe. It is considered one of the largest prehistoric gold treasures in the world.

Central Europe

In Central Europe, the early Bronze Age Unetice culture (2300–1600 BC) includes numerous smaller groups like the Straubing, Adlerberg, and Hatvan cultures. Some very rich burials, such as the one located at Leubingen with grave gifts crafted from gold, point to an increase of social stratification already present in the Unetice culture. All in all, cemeteries of this period are rare and of small size. The Unetice culture is followed by the middle Bronze Age (1600–1200 BC) Tumulus culture, which is characterised by inhumation burials in tumuli (barrows). In the eastern Hungarian Körös tributaries, the early Bronze Age first saw the introduction of the Mako culture, followed by the Otomani and Gyulavarsand cultures.

The late Bronze Age Urnfield culture (1300–700 BC) is characterized by cremation burials. It includes the Lusatian culture in eastern Germany and Poland (1300–500 BC), which continues into the Iron Age. The Central European Bronze Age is followed by the Iron Age Hallstatt culture (700–450 BC).

Important sites include:

Biskupin (Poland)
Nebra (Germany)
Vráble (Slovakia)
Zug-Sumpf, Zug (Switzerland)

The Bronze Age in Central Europe has been described in the chronological schema of German prehistorian Paul Reinecke. He described the Bronze A1 (Bz A1) period (2300–2000 BC: triangular daggers, flat axes, stone wrist-guards, flint arrowheads), the Bronze A2 (Bz A2) period (1950–1700 BC: daggers with metal hilt, flanged axes, halberds, pins with perforated spherical heads, solid bracelets), and the phases Hallstatt A and B (Ha A and B).

South Europe

The Apennine culture (also called the Italian Bronze Age) is a technology complex of central and southern Italy spanning the Chalcolithic and Bronze Age proper. The Camuni were an ancient people of uncertain origin (according to Pliny the Elder, they were Euganei; according to Strabo, they were Rhaetians) who lived in Val Camonica, in what is now northern Lombardy, during the Iron Age, although human groups of hunters, shepherds, and farmers are known to have lived in the area since the Neolithic.

Located in Sardinia and Corsica, the Nuragic civilization lasted from the early Bronze Age (18th century BC) to the 2nd century AD, when the islands were already Romanized. It takes its name from the characteristic Nuragic towers, which evolved from the pre-existing megalithic culture, which built dolmens and menhirs. The nuraghe towers are unanimously considered the best-preserved and largest megalithic remains in Europe. Their effective use is still debated: some scholars consider them monumental tombs, others Houses of the Giants, others fortresses, ovens for metal fusion, prisons or, finally, temples for a solar cult.

Around the end of the 3rd millennium BC, Sardinia exported to Sicily a culture that built small dolmens, trilithic or polygonal in shape, that served as tombs, as has been ascertained in the Sicilian dolmen of "Cava dei Servi". From this region the builders reached the island of Malta and other countries of the Mediterranean basin.
The Terramare was an early Indo-European civilization in the area of what is now Pianura Padana (northern Italy) before the arrival of the Celts, and in other parts of Europe. They lived in square villages of wooden stilt houses. These villages were built on land, but generally near a stream, with roads that crossed each other at right angles. The whole complex denoted the nature of a fortified settlement. Terramare was widespread in the Pianura Padana (especially along the Panaro river, between Modena and Bologna) and in the rest of Europe. The civilization developed in the Middle and Late Bronze Age, between the 17th and the 13th centuries BC.

The Castellieri culture developed in Istria during the Middle Bronze Age. It lasted for more than a millennium, from the 15th century BC until the Roman conquest in the 3rd century BC. It takes its name from the fortified boroughs (Castellieri, Friulian: cjastelir) that characterized the culture.

The Canegrate culture developed from the mid-Bronze Age (13th century BC) until the Iron Age in the Pianura Padana, in what are now western Lombardy, eastern Piedmont and Ticino. It takes its name from the township of Canegrate where, in the 20th century, some fifty tombs with ceramics and metal objects were found. The Canegrate culture migrated from the northwest part of the Alps and descended to the Pianura Padana through the Swiss Alps passes and the Ticino.

The Golasecca culture developed starting from the late Bronze Age in the Po plain. It takes its name from Golasecca, a locality next to the Ticino where, in the early 19th century, abbot Giovanni Battista Giani excavated its first findings (some fifty tombs with ceramics and metal objects). Remains of the Golasecca culture span an area of c. 20,000 square kilometers south of the Alps, between the Po, Sesia and Serio rivers, dating from the 9th to the 4th century BC.

West Europe

Great Britain

In Great Britain, the Bronze Age is considered to have been the period from around 2100 to 750 BC. Migration brought new people to the islands from the continent. Recent tooth enamel isotope research on bodies found in early Bronze Age graves around Stonehenge indicates that at least some of the migrants came from the area of modern Switzerland. Another example site is Must Farm, near Whittlesey, which has recently yielded the most complete Bronze Age wheel ever found. The Beaker culture displayed different behaviors from the earlier Neolithic people, and cultural change was significant. Integration is thought to have been peaceful, as many of the early henge sites were seemingly adopted by the newcomers. The rich Wessex culture developed in southern Britain at this time. Additionally, the climate was deteriorating; where once the weather was warm and dry it became much wetter as the Bronze Age continued, forcing the population away from easily defended sites in the hills and into the fertile valleys. Large livestock farms developed in the lowlands and appear to have contributed to economic growth and inspired increasing forest clearances. The Deverel-Rimbury culture began to emerge in the second half of the Middle Bronze Age (c. 1400–1100 BC) to exploit these conditions. Devon and Cornwall were major sources of tin for much of western Europe, and copper was extracted from sites such as the Great Orme mine in northern Wales. Social groups appear to have been tribal, but with growing complexity and hierarchies becoming apparent.
The burial of the dead (which, until this period, had usually been communal) became more individual. For example, whereas in the Neolithic a large chambered cairn or long barrow housed the dead, Early Bronze Age people buried their dead in individual barrows (also commonly known and marked on modern British Ordnance Survey maps as tumuli), or sometimes in cists covered with cairns. The greatest quantities of bronze objects in England were discovered in East Cambridgeshire, where the most important finds were recovered in Isleham (more than 6,500 pieces). Alloying of copper with zinc or tin to make brass or bronze was practiced soon after the discovery of copper itself. One copper mine at Great Orme in North Wales extended to a depth of 70 meters. At Alderley Edge in Cheshire, carbon dates have established mining at around 2280 to 1890 BC (at 95% probability). The earliest identified metalworking site (Sigwells, Somerset) is much later, dated by Globular Urn style pottery to approximately the 12th century BC. The identifiable sherds from over 500 mould fragments included a perfect fit of the hilt of a sword in the Wilburton style held in Somerset County Museum.

Atlantic Bronze Age

The Atlantic Bronze Age is a cultural complex of the period of approximately 1300–700 BC that includes different cultures in Portugal, Andalusia, Galicia, and Britain and Ireland. It is marked by economic and cultural exchange. Commercial contacts extended to Denmark and the Mediterranean. The Atlantic Bronze Age was defined by many distinct regional centers of metal production, unified by a regular maritime exchange of some of their products.

Ireland

The Bronze Age in Ireland commenced around 2000 BC, when copper was alloyed with tin and used to manufacture Ballybeg type flat axes and associated metalwork. The preceding period is known as the Copper Age and is characterised by the production of flat axes, daggers, halberds and awls in copper. The period is divided into three phases: Early Bronze Age (2000–1500 BC), Middle Bronze Age (1500–1200 BC), and Late Bronze Age (1200–500 BC). Ireland is also known for a relatively large number of Early Bronze Age burials. One of the characteristic types of artifact of the Early Bronze Age in Ireland is the flat axe. There are five main types of flat axes: Lough Ravel (c. 2200 BC), Ballybeg (c. 2000 BC), Killaha (c. 2000 BC), Ballyvalley (c. 2000–1600 BC), Derryniggin (c. 1600 BC), and a number of metal ingots in the shape of axes.

North Europe

The Bronze Age in Northern Europe spans the entire 2nd millennium BC (Unetice culture, Urnfield culture, Tumulus culture, Terramare culture, Lusatian culture), lasting until c. 600 BC. The Northern Bronze Age was both a period and a Bronze Age culture in Scandinavian pre-history, c. 1700–500 BC, with sites that reached as far east as Estonia. Succeeding the Late Neolithic culture, its ethnic and linguistic affinities are unknown in the absence of written sources. It is followed by the Pre-Roman Iron Age. Even though Northern European Bronze Age cultures were relatively late, and came into existence via trade, sites present rich and well-preserved objects made of wool, wood and imported Central European bronze and gold. Many rock carvings depict ships, and the large stone burial monuments known as stone ships suggest that shipping played an important role. Thousands of rock carvings depict ships, most probably representing sewn plank built canoes used for warfare, fishing, and trade.
These may have a history as far back as the Neolithic period and continue into the Pre-Roman Iron Age, as shown by the Hjortspring boat. There are many mounds and rock carving sites from the period. Numerous artifacts of bronze and gold are found. No written language existed in the Nordic countries during the Bronze Age. The rock carvings have been dated through comparison with depicted artifacts.

Caucasus

Arsenical bronze artifacts of the Maykop culture in the North Caucasus have been dated to around the 4th millennium BC. This innovation resulted in the circulation of arsenical bronze technology over southern and eastern Europe.

Pontic–Caspian steppe

The Yamnaya culture is a Late Copper Age/Early Bronze Age culture of the Southern Bug/Dniester/Ural region (the Pontic steppe), dating to the 36th–23rd centuries BC. The name also appears in English as Pit-Grave Culture or Ochre-Grave Culture. The Catacomb culture, c. 2800–2200 BC, comprises several related Early Bronze Age cultures occupying what is presently Russia and Ukraine. The Srubnaya culture was a Late Bronze Age (18th–12th centuries BC) culture. It is a successor to the Yamnaya and the Poltavka culture.

Africa

Sub-Saharan Africa

Iron and copper smelting appeared around the same time in most parts of Africa. As such, most African civilizations outside of Egypt did not experience a distinct Bronze Age. Evidence for iron smelting appears earlier than or at the same time as copper smelting in Nigeria (c. 900–800 BC), Rwanda and Burundi (c. 700–500 BC) and Tanzania (c. 300 BC). There is a longstanding debate about whether copper and iron metallurgy were independently developed in sub-Saharan Africa or were introduced from the outside across the Sahara Desert from North Africa or via the Indian Ocean. Evidence for theories of independent development and of outside introduction is scarce and subject to active scholarly debate. Scholars have suggested that both the relative dearth of archeological research in sub-Saharan Africa and long-standing prejudices have limited or biased our understanding of pre-historic metallurgy on the continent. One scholar characterized the state of historical knowledge thus: "To say that the history of metallurgy in sub-Saharan Africa is complicated is perhaps an understatement."

West Africa

Copper smelting took place in West Africa prior to the appearance of iron smelting in the region. Evidence for copper smelting furnaces was found near Agadez, Niger, that has been dated as early as 2200 BC. However, evidence for copper production in this region before 1000 BC is debated. Evidence of copper mining and smelting has been found at Akjoujt, Mauritania, that suggests small-scale production from 800 to 400 BC.

Americas

The Moche civilization of South America independently discovered and developed bronze smelting. Bronze technology was developed further by the Incas and used widely both for utilitarian objects and sculpture. A later appearance of limited bronze smelting in West Mexico suggests either contact of that region with Andean cultures or separate discovery of the technology. The Calchaquí people of Northwest Argentina had bronze technology.

Trade

Trade and industry played a major role in the development of the ancient Bronze Age civilizations. With artifacts of the Indus Valley civilization being found in ancient Mesopotamia and Egypt, it is clear that these civilizations were not only in touch with each other but also trading with each other.
Early long-distance trade was limited almost exclusively to luxury goods like spices, textiles and precious metals. Not only did this make cities with ample amounts of these products extremely rich, but it also led to an intermingling of cultures for the first time in history. Trade routes were not only over land but also over water. The first and most extensive trade routes were over rivers such as the Nile, the Tigris and the Euphrates, which led to the growth of cities on the banks of these rivers. The later domestication of camels also helped encourage the use of trade routes over land, linking the Indus Valley with the Mediterranean. This further led to towns springing up wherever there was a pit-stop or caravan-to-ship port.

See also
Altyndepe
Dover Bronze Age Boat
Ferriby Boats
Hillfort
Human timeline
Langdon Bay hoard
Middle Bronze Age migrations (Ancient Near East)
Namazga
Oxhide ingot
Shropshire bulla
Tollense valley battlefield
https://en.wikipedia.org/wiki/Bilinear%20transform
Bilinear transform
The bilinear transform (also known as Tustin's method, after Arnold Tustin) is used in digital signal processing and discrete-time control theory to transform continuous-time system representations to discrete-time and vice versa. The bilinear transform is a special case of a conformal mapping (namely, a Möbius transformation), often used to convert a transfer function of a linear, time-invariant (LTI) filter in the continuous-time domain (often called an analog filter) to a transfer function of a linear, shift-invariant filter in the discrete-time domain (often called a digital filter, although there are analog filters constructed with switched capacitors that are discrete-time filters). It maps positions on the imaginary axis, Re(s) = 0, in the s-plane to the unit circle, |z| = 1, in the z-plane. Other bilinear transforms can be used to warp the frequency response of any discrete-time linear system (for example to approximate the non-linear frequency resolution of the human auditory system) and are implementable in the discrete domain by replacing a system's unit delays with first-order all-pass filters.

The transform preserves stability and maps every point of the frequency response of the continuous-time filter to a corresponding point in the frequency response of the discrete-time filter, although to a somewhat different frequency, as shown in the Frequency warping section below. This means that for every feature that one sees in the frequency response of the analog filter, there is a corresponding feature, with identical gain and phase shift, in the frequency response of the digital filter but, perhaps, at a somewhat different frequency. This is barely noticeable at low frequencies but is quite evident at frequencies close to the Nyquist frequency.

Discrete-time approximation

The bilinear transform is a first-order Padé approximant of the natural logarithm function that is an exact mapping of the z-plane to the s-plane. When the Laplace transform is performed on a discrete-time signal (with each element of the discrete-time sequence attached to a correspondingly delayed unit impulse), the result is precisely the Z transform of the discrete-time sequence with the substitution of

    z = e^{sT} ≈ (1 + sT/2) / (1 − sT/2),

where T is the numerical integration step size of the trapezoidal rule used in the bilinear transform derivation; or, in other words, the sampling period. The above bilinear approximation can be solved for s, or a similar approximation for s = (1/T) ln(z) can be performed. The inverse of this mapping (and its first-order bilinear approximation) is

    s = (1/T) ln(z) ≈ (2/T) · (z − 1)/(z + 1).

The bilinear transform essentially uses this first-order approximation and substitutes it into the continuous-time transfer function H_a(s). That is,

    H_d(z) = H_a(s) |_{s = (2/T)·(z − 1)/(z + 1)}.

Stability and minimum-phase property preserved

A continuous-time causal filter is stable if the poles of its transfer function fall in the left half of the complex s-plane. A discrete-time causal filter is stable if the poles of its transfer function fall inside the unit circle in the complex z-plane. The bilinear transform maps the left half of the complex s-plane to the interior of the unit circle in the z-plane. Thus, filters designed in the continuous-time domain that are stable are converted to filters in the discrete-time domain that preserve that stability. Likewise, a continuous-time filter is minimum-phase if the zeros of its transfer function fall in the left half of the complex s-plane. A discrete-time filter is minimum-phase if the zeros of its transfer function fall inside the unit circle in the complex z-plane. The same mapping property then assures that continuous-time filters that are minimum-phase are converted to discrete-time filters that preserve that property of being minimum-phase.
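The following is a minimal numerical sketch (not part of the article; the sampling period and test points are arbitrary choices for illustration) that pushes sample s-plane points through the first-order approximation z = (1 + sT/2)/(1 − sT/2) and confirms the two mapping properties just described: the imaginary axis lands on the unit circle, and a left-half-plane pole lands strictly inside it.

    import numpy as np

    T = 1e-3  # sampling period in seconds (arbitrary for this demo)

    def s_to_z(s, T):
        """First-order (bilinear) approximation of z = exp(sT),
        i.e. the map inverse to s = (2/T)(z - 1)/(z + 1)."""
        return (1 + s * T / 2) / (1 - s * T / 2)

    # Points s = j*omega on the imaginary axis should satisfy |z| = 1 exactly.
    for omega in [0.0, 100.0, 1_000.0, 10_000.0]:
        z = s_to_z(1j * omega, T)
        print(f"s = j*{omega:<8g} -> |z| = {abs(z):.12f}")

    # A stable continuous-time pole (negative real part) maps inside the circle.
    pole = -500.0 + 2_000.0j
    print(f"s = {pole} -> |z| = {abs(s_to_z(pole, T)):.6f} (< 1: stability preserved)")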
Transformation of a General LTI System

A general LTI system has the transfer function

    H_a(s) = (b_0 + b_1 s + ... + b_Q s^Q) / (a_0 + a_1 s + ... + a_P s^P).

The order of the transfer function, N, is the greater of P and Q (in practice this is most likely P, as the transfer function must be proper for the system to be stable). Applying the bilinear transform

    s = K · (z − 1)/(z + 1),

where K is defined as 2/T, or otherwise as ω_0 / tan(ω_0 T / 2) if using frequency warping, gives

    H_d(z) = H_a(K · (z − 1)/(z + 1)).

Multiplying the numerator and denominator by the largest power of (z + 1)^{−1} present, (z + 1)^{−N}, gives a ratio of polynomials in z^{−1}. It can be seen here that, after the transformation, the degree of the numerator and denominator are both N.

Consider then the pole-zero form of the continuous-time transfer function, in which the roots of the numerator and denominator polynomials, ξ_i and p_i, are the zeros and poles of the system. The bilinear transform is a one-to-one mapping, hence these can be transformed to the z-domain using

    z = (1 + s/K) / (1 − s/K),

yielding some of the discretized transfer function's zeros and poles,

    z_{ξ,i} = (1 + ξ_i/K) / (1 − ξ_i/K)   and   z_{p,i} = (1 + p_i/K) / (1 − p_i/K).

As described above, the degree of the numerator and denominator are now both N; in other words, there is now an equal number of zeros and poles. The multiplication by (z + 1)^{−N} means the additional zeros or poles are located at z = −1. Given the full set of zeros and poles, the z-domain transfer function is then the corresponding ratio of products over these zeros and poles, scaled by a constant gain.

Example

As an example, take a simple low-pass RC filter. This continuous-time filter has the transfer function

    H_a(s) = 1 / (1 + RC·s).

If we wish to implement this filter as a digital filter, we can apply the bilinear transform by substituting s = (2/T)·(z − 1)/(z + 1) into the formula above; after some reworking, we get the following filter representation:

    H_d(z) = 1 / (1 + RC·(2/T)·(z − 1)/(z + 1))
           = T·(z + 1) / (T·(z + 1) + 2RC·(z − 1))
           = (T + T z^{−1}) / ((T + 2RC) + (T − 2RC) z^{−1}).

The coefficients of the denominator are the 'feed-backward' coefficients and the coefficients of the numerator are the 'feed-forward' coefficients used to implement a real-time digital filter.

Transformation for a general first-order continuous-time filter

It is possible to relate the coefficients of a continuous-time, analog filter with those of a similar discrete-time digital filter created through the bilinear transform process. Transforming a general, first-order continuous-time filter with the given transfer function

    H_a(s) = (b_0 + b_1 s) / (a_0 + a_1 s),

using the bilinear transform (without prewarping any frequency specification) requires the substitution of

    s = K · (1 − z^{−1}) / (1 + z^{−1}),

where K = 2/T. However, if the frequency warping compensation as described below is used in the bilinear transform, so that both analog and digital filter gain and phase agree at frequency ω_0, then K = ω_0 / tan(ω_0 T / 2). This results in a discrete-time digital filter with coefficients expressed in terms of the coefficients of the original continuous-time filter:

    H_d(z) = ((b_0 + b_1 K) + (b_0 − b_1 K) z^{−1}) / ((a_0 + a_1 K) + (a_0 − a_1 K) z^{−1}).

Normally the constant term in the denominator must be normalized to 1 before deriving the corresponding difference equation; dividing through by (a_0 + a_1 K) gives the normalized coefficients. The difference equation (using the Direct form I) is

    y[n] = (b_0 + b_1 K)/(a_0 + a_1 K) · x[n] + (b_0 − b_1 K)/(a_0 + a_1 K) · x[n−1] − (a_0 − a_1 K)/(a_0 + a_1 K) · y[n−1].

General second-order biquad transformation

A similar process can be used for a general second-order filter with the given transfer function

    H_a(s) = (b_0 + b_1 s + b_2 s²) / (a_0 + a_1 s + a_2 s²).

This results in a discrete-time digital biquad filter with coefficients expressed in terms of the coefficients of the original continuous-time filter:

    H_d(z) = ((b_0 + b_1 K + b_2 K²) + (2 b_0 − 2 b_2 K²) z^{−1} + (b_0 − b_1 K + b_2 K²) z^{−2})
             / ((a_0 + a_1 K + a_2 K²) + (2 a_0 − 2 a_2 K²) z^{−1} + (a_0 − a_1 K + a_2 K²) z^{−2}).

Again, the constant term in the denominator is generally normalized to 1 before deriving the corresponding difference equation; dividing every coefficient by (a_0 + a_1 K + a_2 K²) puts the transfer function in the form H_d(z) = (B_0 + B_1 z^{−1} + B_2 z^{−2}) / (1 + A_1 z^{−1} + A_2 z^{−2}). The difference equation (using the Direct form I) is

    y[n] = B_0 x[n] + B_1 x[n−1] + B_2 x[n−2] − A_1 y[n−1] − A_2 y[n−2].
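As a concrete check of the first-order formulas above, here is a short sketch (not from the article; the 48 kHz sample rate and 1 kHz corner frequency are illustrative assumptions) that discretizes the RC low-pass H_a(s) = 1/(1 + RC·s) by hand and compares the result against scipy.signal.bilinear, which implements the same substitution.

    import numpy as np
    from scipy.signal import bilinear

    fs = 48_000.0            # sample rate (assumed for the demo)
    T = 1.0 / fs
    fc = 1_000.0             # desired corner frequency of the RC filter
    RC = 1.0 / (2 * np.pi * fc)

    # Manual substitution s = (2/T)(z - 1)/(z + 1) into H(s) = 1/(1 + RC*s)
    # gives H(z) = (T + T z^-1) / ((T + 2RC) + (T - 2RC) z^-1).
    b = np.array([T, T])
    a = np.array([T + 2 * RC, T - 2 * RC])
    b, a = b / a[0], a / a[0]        # normalize the constant denominator term to 1

    # Cross-check against SciPy's bilinear transform of the analog coefficients.
    b_ref, a_ref = bilinear([1.0], [RC, 1.0], fs=fs)
    b_ref, a_ref = b_ref / a_ref[0], a_ref / a_ref[0]   # normalize the same way
    print(np.allclose(b, b_ref), np.allclose(a, a_ref))  # expect: True True

    # Direct Form I difference equation for the resulting digital filter:
    #   y[n] = b[0]*x[n] + b[1]*x[n-1] - a[1]*y[n-1]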
Frequency warping

To determine the frequency response of a continuous-time filter, the transfer function H_a(s) is evaluated at s = jω_a, which is on the imaginary axis. Likewise, to determine the frequency response of a discrete-time filter, the transfer function H_d(z) is evaluated at z = e^{jωT}, which is on the unit circle, |z| = 1. The bilinear transform maps the imaginary axis of the s-plane (the domain of H_a(jω_a)) to the unit circle of the z-plane, |z| = 1 (the domain of H_d(e^{jωT})), but it is not the same mapping as z = e^{sT}, which also maps the jω axis to the unit circle. When the actual frequency ω is input to the discrete-time filter designed by use of the bilinear transform, it is desired to know at what frequency, ω_a, of the continuous-time filter this ω is mapped to. Evaluating the transform on the unit circle gives

    s = (2/T) · (e^{jωT} − 1) / (e^{jωT} + 1)
      = (2/T) · (e^{jωT/2} − e^{−jωT/2}) / (e^{jωT/2} + e^{−jωT/2})
      = (2/T) · (2j sin(ωT/2)) / (2 cos(ωT/2))
      = j (2/T) tan(ωT/2).

This shows that every point on the unit circle in the discrete-time filter z-plane, z = e^{jωT}, is mapped to a point on the jω axis of the continuous-time filter s-plane, s = jω_a. That is, the discrete-time to continuous-time frequency mapping of the bilinear transform is

    ω_a = (2/T) · tan(ωT/2),

and the inverse mapping is

    ω = (2/T) · arctan(ω_a T / 2).

The discrete-time filter behaves at frequency ω the same way that the continuous-time filter behaves at frequency (2/T)·tan(ωT/2). Specifically, the gain and phase shift that the discrete-time filter has at frequency ω is the same gain and phase shift that the continuous-time filter has at frequency (2/T)·tan(ωT/2). This means that every feature, every "bump" that is visible in the frequency response of the continuous-time filter is also visible in the discrete-time filter, but at a different frequency. For low frequencies (that is, when ω ≪ 2/T or ω_a ≪ 2/T), the features are mapped to only a slightly different frequency; ω_a ≈ ω.

One can see that the entire continuous frequency range −∞ < ω_a < +∞ is mapped onto the fundamental frequency interval −π/T < ω < +π/T. The continuous-time filter frequency ω_a = 0 corresponds to the discrete-time filter frequency ω = 0, and the continuous-time filter frequencies ω_a = ±∞ correspond to the discrete-time filter frequencies ω = ±π/T. One can also see that there is a nonlinear relationship between ω_a and ω. This effect of the bilinear transform is called frequency warping. The continuous-time filter can be designed to compensate for this frequency warping by setting ω_a = (2/T)·tan(ωT/2) for every frequency specification that the designer has control over (such as corner frequency or center frequency). This is called pre-warping the filter design.

It is possible, however, to compensate for the frequency warping by pre-warping a frequency specification ω_0 (usually a resonant frequency or the frequency of the most significant feature of the frequency response) of the continuous-time system. These pre-warped specifications may then be used in the bilinear transform to obtain the desired discrete-time system. When designing a digital filter as an approximation of a continuous-time filter, the frequency response (both amplitude and phase) of the digital filter can be made to match the frequency response of the continuous filter at a specified frequency ω_0, as well as matching at DC, if the following transform is substituted into the continuous filter transfer function:

    s = (ω_0 / tan(ω_0 T / 2)) · (z − 1)/(z + 1).

This is a modified version of Tustin's transform shown above. However, note that this transform becomes the original transform as ω_0 → 0, since ω_0 / tan(ω_0 T / 2) → 2/T. The main advantage of the warping phenomenon is the absence of aliasing distortion of the frequency response characteristic, such as observed with impulse invariance.
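A small sketch of the warping relation derived above (the sample rate and the 10 kHz match frequency are illustrative assumptions, not values from the article): it tabulates the continuous-time frequency ω_a = (2/T)·tan(ωT/2) that each discrete-time frequency corresponds to, showing that the distortion is negligible near DC and severe near Nyquist, and then forms the pre-warped constant K = ω_0/tan(ω_0 T/2) used by the modified transform.

    import numpy as np

    fs = 48_000.0            # sample rate (assumed)
    T = 1.0 / fs

    def warped_analog_freq(w_d, T):
        """Continuous-time frequency that the bilinear transform maps to
        the discrete-time frequency w_d (both in rad/s)."""
        return (2.0 / T) * np.tan(w_d * T / 2.0)

    for f in [100.0, 1_000.0, 10_000.0, 20_000.0]:   # Nyquist here is 24 kHz
        w_d = 2 * np.pi * f
        f_a = warped_analog_freq(w_d, T) / (2 * np.pi)
        print(f"digital {f:8.0f} Hz  <->  analog {f_a:10.1f} Hz")

    # Pre-warping: choosing K = w0 / tan(w0*T/2) instead of 2/T makes the
    # analog and digital responses agree exactly at w0 (here 10 kHz).
    w0 = 2 * np.pi * 10_000.0
    K = w0 / np.tan(w0 * T / 2.0)
    print(K, 2.0 / T)        # K approaches 2/T as w0 -> 0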
See also
Impulse invariance
Matched Z-transform method
https://en.wikipedia.org/wiki/Bogie
Bogie
A bogie (or truck in North American English) is a chassis or framework that carries a wheelset, attached to a vehicle—a modular subassembly of wheels and axles. Bogies take various forms in various modes of transport. A bogie may remain normally attached (as on many railroad cars and semi-trailers) or be quickly detachable (as the dolly in a road train or in railway bogie exchange); it may contain a suspension within it (as most rail and trucking bogies do), or be solid and in turn be suspended (as most bogies of tracked vehicles are); it may be mounted on a swivel, as traditionally on a railway carriage or locomotive, additionally jointed and sprung (as in the landing gear of an airliner), or held in place by other means (centreless bogies). In Scotland, the term is used for a child's (usually home-made) wooden cart. While bogie is the preferred spelling and first-listed variant in various dictionaries, bogey and bogy are also used.

Railway

A bogie in the UK, or a railroad truck, wheel truck, or simply truck in North America, is a structure underneath a railway vehicle (wagon, coach or locomotive) to which axles (and, hence, wheels) are attached through bearings. In Indian English, bogie may also refer to an entire railway carriage. In South Africa, the term bogie is often alternatively used to refer to a freight or goods wagon (shortened from bogie wagon). The bogie was invented by John B. Jervis, along with the 4-2-0 locomotive whose smokebox it supported, in the early 1830s, but it was not widely accepted for decades. The first standard gauge British railway to build coaches with bogies, instead of rigidly mounted axles, was the Midland Railway in 1874.

Purpose

Bogies serve a number of purposes:
Support of the rail vehicle body
Stability on both straight and curved track
Improving ride quality by absorbing vibration and minimizing the impact of centrifugal forces when the train runs on curves at high speed
Minimizing generation of track irregularities and rail abrasion

Usually, two bogies are fitted to each carriage, wagon or locomotive, one at each end. Another configuration is often used in articulated vehicles, which places the bogies (often Jacobs bogies) under the connection between the carriages or wagons. Most bogies have two axles, but some cars designed for heavy loads have more axles per bogie. Heavy-duty cars may have more than two bogies, using span bolsters to equalize the load and connect the bogies to the cars. Usually, the train floor is at a level above the bogies, but the floor of the car may be lower between bogies, such as for a bilevel rail car to increase interior space while staying within height restrictions, or in easy-access, stepless-entry, low-floor trains.

Components

Key components of a bogie include:
The bogie frame: this can be of inside frame type, where the main frame and bearings are between the wheels, or (more commonly) of outside frame type, where the main frame and bearings are outside the wheels.
Suspension to absorb shocks between the bogie frame and the rail vehicle body. Common types are coil springs, leaf springs and rubber airbags.
At least one wheelset, composed of an axle with bearings and a wheel at each end.
The bolster, the main crossmember, connected to the bogie frame through the secondary suspension. The railway car is supported at the pivot point on the bolster.
Axle box suspensions, which absorb shocks between the axle bearings and the bogie frame. The axle box suspension usually consists of a spring between the bogie frame and axle bearings to permit up-and-down movement, and sliders to prevent lateral movement. A more modern design uses solid rubber springs.
Brake equipment: two main types are used: brake shoes that are pressed against the tread of the wheel, and disc brakes and pads.
In powered vehicles, some form of transmission, usually electrically powered traction motors with a single-speed gearbox or a hydraulically powered torque converter.

The connections of the bogie with the rail vehicle allow a certain degree of rotational movement around a vertical axis pivot (bolster), with side bearers preventing excessive movement. More modern, bolsterless bogie designs omit these features, instead taking advantage of the sideways movement of the suspension to permit rotational movement.

Locomotives

Diesel and electric

Modern diesel and electric locomotives are mounted on bogies. Those commonly used in North America include Type A, Blomberg, HT-C and Flexicoil trucks.

Steam

On a steam locomotive, the leading and trailing wheels may be mounted on bogies like pony trucks or Bissel bogies. Articulated locomotives (e.g. Fairlie, Garratt or Mallet locomotives) have power bogies similar to those on diesel and electric locomotives.

Rollbock

A rollbock is a specialized type of bogie that is inserted under the wheels of a rail wagon/car, usually to convert it for another track gauge. Transporter wagons carry the same concept to the level of a flatcar specialized to take other cars as its load.

Archbar bogies

In archbar or diamond frame bogies, the side frames are fabricated rather than cast.

Tramway

Modern tram bogies are much simpler in design because of their lower axle load, and the tighter curves found on tramways mean tram bogies almost never have more than two axles. Furthermore, some tramways have steeper gradients and vertical, as well as horizontal, curves, which means tram bogies often need to pivot on the horizontal axis as well. Some articulated trams have bogies located under articulations, a setup referred to as a Jacobs bogie. Often, low-floor trams are fitted with nonpivoting bogies; many tramway enthusiasts see this as a retrograde step, as it leads to more wear of both track and wheels and also significantly reduces the speed at which a tram can round a curve.

Historic

In the past, many different types of bogie (truck) have been used under tramcars (e.g. Brill, Peckham, maximum traction). A maximum traction truck has one driving axle with large wheels and one nondriving axle with smaller wheels. The bogie pivot is located off-centre, so more than half the weight rests on the driving axle.

Hybrid systems

The retractable stadium roof on Toronto's Rogers Centre used modified off-the-shelf train bogies on a circular rail. The system was chosen for its proven reliability. Rubber-tyred metro trains use a specialised version of railway bogies. Special flanged steel wheels are behind the rubber-tired running wheels, with additional horizontal guide wheels in front of and behind the running wheels as well. The unusually large flanges on the steel wheels guide the bogie through standard railroad switches and, in addition, keep the train from derailing in case the tires deflate.

Variable gauge axles

To overcome breaks of gauge, some bogies are being fitted with variable gauge axles (VGA) so that they can operate on two different gauges. These include the SUW 2000 system from ZNTK Poznań.
Cleminson system

The Cleminson system is not a true bogie, but serves a similar purpose. It was based on a patent of 1883 by James Cleminson and was once popular on narrow-gauge rolling stock, e.g. on the Isle of Man and Manx Northern Railways. The vehicle would have three axles, and the outer two could pivot to adapt to the curvature of the track. The pivoting was controlled by levers attached to the third (centre) axle, which could slide sideways.

Tracked vehicles

Some tanks and other tracked vehicles have bogies as external suspension components (see armoured fighting vehicle suspension). This type of bogie usually has two or more road wheels and some type of sprung suspension to smooth the ride across rough terrain. Bogie suspensions keep much of their components on the outside of the vehicle, saving internal space. Although vulnerable to antitank fire, they can often be repaired or replaced in the field.

Articulated bogie

An articulated bogie is any one of a number of bogie designs that allow railway equipment to safely turn sharp corners, while reducing or eliminating the "screeching" normally associated with metal wheels rounding a bend in the rails. There are a number of such designs, and the term is also applied to train sets that incorporate articulation in the vehicle, as opposed to the bogies themselves. If one considers a single bogie "up close", it resembles a small rail car with axles at either end. On sharp curves, the same effect that causes a rigid-wheelbase car to rub against the rails causes each of the bogie's pairs of wheels to rub on the rails, producing the screeching. Articulated bogies add a second pivot point between the two axles (wheelsets) to allow them to rotate to the correct angle even in these cases.

Articulated lorries (tractor-trailers)

In trucking, a bogie is the subassembly of axles and wheels that supports a semi-trailer, whether permanently attached to the frame (as on a single trailer) or making up the dolly that can be hitched and unhitched as needed when hitching up a second or third semi-trailer (as when pulling doubles or triples).

Radial steering truck

Radial steering trucks, also known as radial bogies, allow the individual axles to align with curves, in addition to the bogie frame as a whole pivoting. For non-radial bogies, the more axles in the assembly, the more difficulty it has negotiating curves, due to wheel-flange-to-rail friction. For radial bogies, the wheelsets actively "steer" through curves, thus reducing wear at the wheel flange to rail interface and improving adhesion. In the USA, this has been implemented for locomotives by both EMD and GE. The EMD version, designated HTCR, was made standard equipment for the SD70 series, first sold in 1993. However, the HTCR in actual operation had mixed results and relatively high purchase and maintenance costs. Thus EMD introduced the HTSC truck in 2003, which is basically the HTCR stripped of radial components. GE introduced its version in 1995 as a buyer option for the AC4400CW and later Evolution Series locomotives. However, it also met with limited acceptance due to relatively high purchase and maintenance costs, and customers have generally chosen GE Hi-Ad standard trucks for newer and rebuilt locomotives.
See also

Articles on bogies and trucks:
Arnoux system
Bissel bogie
Blomberg B
Gölsdorf axle
ICF Bogie
Jacobs bogie
Krauss-Helmholtz bogie
Lateral motion device
Mason Bogie
Pony truck
Rocker-bogie
Scheffel bogie
Schwartzkopff-Eckhardt II bogie
Syntegra

Related topics:
Caster
Dolly
Flange
List of railroad truck parts
Luttermöller axle
Road–rail vehicle
Skateboard truck
Spring (device)
Timmis system, an early form of coil spring used on railway axles
Trailing wheel
Wheel arrangement
Wheelbase
Wheelset
https://en.wikipedia.org/wiki/Billy%20Crystal
Billy Crystal
William Edward Crystal (born March 14, 1948) is an American actor, comedian, and filmmaker. He gained prominence in the 1970s and 1980s for television roles as Jodie Dallas on the ABC sitcom Soap and as a cast member and frequent host of Saturday Night Live. Crystal then became a Hollywood film star during the late 1980s and 1990s, appearing in Running Scared (1986), The Princess Bride (1987), Throw Momma from the Train (1987), Memories of Me (1988), When Harry Met Sally... (1989), City Slickers (1991), Mr. Saturday Night (1992), Hamlet (1996), Analyze This (1999), and Parental Guidance (2012). He provided the voice of Mike Wazowski in the Monsters, Inc. franchise. He also starred on the Broadway stage in 700 Sundays in 2004 and again in 2014, and in Mr. Saturday Night in 2022.

Crystal has received numerous accolades, including six Primetime Emmy Awards (out of 21 nominations), a Tony Award, a Mark Twain Prize for American Humor, and a star on the Hollywood Walk of Fame in 1991. He has hosted the Academy Awards nine times, beginning in 1990 and most recently in 2012. In 2022, he was announced as the recipient of the Lifetime Achievement Award from the Critics Choice Awards.

Early life

Crystal was born at Doctors Hospital on the Upper East Side of Manhattan and initially raised in the Bronx. As a toddler, he moved with his family to 549 East Park Avenue in Long Beach, New York, on Long Island. He and his older brothers Joel, who later became an art teacher, and Richard, nicknamed Rip, were the sons of Helen (née Gabler), a housewife, and Jack Crystal, who owned and operated the Commodore Music Store, founded by Crystal's grandfather, Julius Gabler. Crystal's father was also a jazz promoter, a producer, and an executive for an affiliated jazz record label, Commodore Records, founded by Crystal's uncle, musician and songwriter Milt Gabler. Crystal is Jewish (his family emigrated from Austria, Russia, and Lithuania), and he grew up attending Temple Emanu-El (Long Beach, New York), where he had his bar mitzvah.

The three young brothers would entertain by reprising comedy routines from the Bob Newhart, Rich Little and Sid Caesar records their father would bring home. Jazz artists such as Arvell Shaw, Pee Wee Russell, Eddie Condon, and Billie Holiday were often guests in the home. With the decline of Dixieland jazz and the rise of discount record stores, in 1963, Crystal's father lost his business and died later that year at the age of 54 after having a heart attack. His mother died in 2001.

After graduating from Long Beach High School in 1965, Crystal attended Marshall University in Huntington, West Virginia, on a baseball scholarship, having learned the game from his father, who pitched for St. John's University. Crystal never played baseball at Marshall because the program was suspended during his first year. He did not return to Marshall as a sophomore, instead deciding to stay in New York to be close to his future wife. He studied acting at HB Studio. He attended Nassau Community College with her and later transferred to New York University, where he was a film and television directing major. He graduated from NYU in 1970 with a BFA from its then School of Fine Arts. One of his instructors was Martin Scorsese, while Oliver Stone and Christopher Guest were among his classmates.

Career

Television

Crystal returned to New York City. For four years, he was part of a comedy trio with two friends.
They played colleges and coffee houses, and Crystal worked as a substitute teacher on Long Island. He later became a solo act and performed regularly at The Improv and Catch a Rising Star. In 1976, Crystal appeared on an episode of All in the Family. He was on the dais for The Dean Martin Celebrity Roast of Muhammad Ali on February 19, 1976, where he did impressions of both Ali and sportscaster Howard Cosell. He was scheduled to appear on the first episode of NBC's Saturday Night on October 11, 1975 (the show was renamed Saturday Night Live on March 26, 1977), but his sketch was cut. He did perform on episode 17 of that first season, doing a monologue of an old jazz man capped by the line "Can you dig it? I knew that you could." Host Ron Nessen introduced him as "Bill Crystal". Crystal also made game show appearances such as The Hollywood Squares, All Star Secrets and The $20,000 Pyramid. To this day, he holds the Pyramid franchise's record for getting his contestant partner to the top of the pyramid in the winner's circle in the fastest time: 26 seconds.

Crystal's earliest prominent role was as Jodie Dallas on Soap, one of the first unambiguously gay characters in the cast of an American television series. He continued in the role during the series's entire 1977–1981 run. In 1982, Crystal hosted his own variety show, The Billy Crystal Comedy Hour, on NBC. When Crystal arrived to shoot the fifth episode, he learned it had been canceled after only the first two aired. After hosting Saturday Night Live twice, on March 17, 1984, and the show's ninth-season finale on May 5, he joined the regular cast for the 1984–85 season. His most famous recurring sketch was his parody of Fernando Lamas, a smarmy talk-show host whose catchphrase, "You look... mahvelous!", became a media sensation. Also in the 1980s, Crystal starred in an episode of Shelley Duvall's Faerie Tale Theatre as the smartest of the three little pigs.

Crystal was a guest on the first and the last episode of The Tonight Show with Jay Leno, which concluded February 6, 2014, after 22 seasons on the air. In 1996, Crystal was the guest star of the third episode of Muppets Tonight. He hosted three Grammy Awards telecasts: the 29th, 30th and 31st Grammys. In 2015, Crystal co-starred alongside Josh Gad on the FX comedy series The Comedians, which ran for just one season before being canceled.

Film career

Crystal's first film role was in Joan Rivers' 1978 film Rabbit Test, the story of the "world's first pregnant man". Crystal appeared briefly in the Rob Reiner "rockumentary" This Is Spinal Tap (1984) as Morty the Mime, a waiter dressed as a mime at one of Spinal Tap's parties. He shared the scene with a then-unknown, non-speaking Dana Carvey, stating famously that "Mime is money." He later starred in the action comedy Running Scared (1986) and was directed by Reiner again in The Princess Bride (1987), in a comedic supporting role as Miracle Max. Reiner got Crystal to accept the part by saying, "How would you like to play Mel Brooks?" Reiner also allowed Crystal to ad-lib, and his parting shot, "Have fun storming the castle!", is a frequently quoted line. Reiner directed Crystal for a third time in the romantic comedy When Harry Met Sally... (1989), in which Crystal starred alongside Meg Ryan and for which he was nominated for a Golden Globe. The film has since become an iconic classic of the genre and is Crystal's most celebrated film.
Crystal then starred in the award-winning buddy comedy City Slickers (1991), which proved very successful both commercially and critically and for which Crystal was nominated for his second Golden Globe. The film was followed by a sequel, which was less successful. In 1992, he narrated Dr. Seuss Video Classics: Horton Hatches the Egg. The name of his production company is Face Productions. Following the significant success of these films, Crystal wrote, directed, and starred in Mr. Saturday Night (1992) and Forget Paris (1995). In the former, Crystal played a serious role in aging makeup, as an egotistical comedian who reflects back on his career. Crystal starred in Woody Allen's critically acclaimed comedy ensemble film Deconstructing Harry (1997). Crystal had another success alongside Robert De Niro in Harold Ramis' mobster comedy Analyze This (1999). More recent performances include roles in America's Sweethearts (2001), the sequel Analyze That (2002), and Parental Guidance (2012). He directed the made-for-television movie 61* (2001), based on Roger Maris's and Mickey Mantle's race to break Babe Ruth's single-season home run record in 1961. This earned Crystal an Emmy nomination for Outstanding Directing for a Miniseries, Movie or a Special.

Crystal was originally asked to voice Buzz Lightyear in Toy Story (1995) but turned it down, a decision he later regretted due to the popularity of the series. Crystal went on to provide the voice of Mike Wazowski in the blockbuster Pixar film Monsters, Inc. (2001) and in the end-credits epilogue of Cars (2006), and he reprised the role in the prequel, Monsters University (2013). Crystal also provided the voice of Calcifer in the English version of Hayao Miyazaki's Howl's Moving Castle (2004).

Albums and music career

Due to the success of Crystal's standup and SNL career, in 1985 he released an album of his stand-up material titled Mahvelous!. The title track, "You Look Marvelous", written by Crystal and Paul Shaffer, had an accompanying music video that debuted on MTV. Both the song and the video feature Crystal in character as his SNL persona of talk show host Fernando Lamas. The video features Lamas cruising around in what was at the time the world's longest stretch limousine, built by custom-coach designer and builder Vini Bergeman, surrounded by models in bikinis. The single peaked at No. 58 on the Billboard Hot 100 in the US, and No. 17 in Canada. The album was nominated for a Grammy Award for Best Comedy Recording at the 1986 Grammy Awards. In 2013, Crystal released his autobiographical memoir Still Foolin' 'Em. The audiobook version was nominated for a Grammy Award for Best Spoken Word Album at the 2014 Grammy Awards.

Academy Awards host

Crystal hosted the Academy Awards broadcast a total of nine times: from 1990 to 1993, and in 1997, 1998, 2000, 2004 and 2012. His hosting was critically praised, resulting in two Primetime Emmy Award wins for hosting and writing the 63rd Academy Awards and an Emmy win for writing the 64th Academy Awards. He returned as the host for the 2012 Oscar ceremony after Eddie Murphy resigned from hosting. His nine times is second only to Bob Hope's 19 in most ceremonies hosted. At the 83rd Academy Awards ceremony in 2011, he appeared as a presenter for a digitally inserted Bob Hope, and before doing so was given a standing ovation. Film critic Roger Ebert said that when Crystal came onstage about two hours into the show, he got the first laughs of the broadcast.
Crystal's hosting gigs have regularly included an introductory video segment in which he comedically inserts himself into scenes of that year's nominees, in addition to a song following his opening monologue.

Broadway

Crystal won the 2005 Tony Award for Best Special Theatrical Event for 700 Sundays, a two-act, one-man play, which he conceived and wrote about his parents and his childhood growing up on Long Island. He toured throughout the US with the show in 2006 and then Australia in 2007. Following the initial success of the play, Crystal wrote the book 700 Sundays for Warner Books, which was published on October 31, 2005. In conjunction with the book and the play, which also paid tribute to his uncle, Milt Gabler, Crystal produced two CD compilations: Billy Crystal Presents: The Milt Gabler Story, which featured his uncle's most influential recordings, from Billie Holiday's "Strange Fruit" to "Rock Around the Clock" by Bill Haley & His Comets; and Billy Remembers Billie, featuring Crystal's favorite Holiday recordings.

In the fall of 2013, he brought the show back to Broadway for a two-month run at the Imperial Theatre. HBO filmed the January 3–4, 2014 performances for a special, which debuted on the network on April 19, 2014, entitled Billy Crystal: 700 Sundays. The televised special received three Primetime Emmy Award nominations, including Outstanding Variety Special and Outstanding Writing for a Variety Special.

In 2022, Crystal adapted his 1992 movie Mr. Saturday Night into a Broadway musical of the same name, starring in it and reprising his role from the film alongside David Paymer. The production began previews on Broadway at the Nederlander Theatre on March 29, 2022, prior to officially opening on April 27. Crystal earned the Drama League's Contribution to the Theater Award for "his extraordinary work on stages across the country and commitment to mentorship in the field". Crystal performed a number with the ensemble from his musical at the 75th Tony Awards. He also performed what he described as Yiddish scat singing, going into the crowd and teaching the routine to Lin-Manuel Miranda and Samuel L. Jackson as well as the rest of the audience. The New York Times praised Crystal for the bit, describing it as a highlight of the telecast and writing, "one of the few moments that broke through...is when [Crystal] brought it out into the audience, and threw it up to the balcony, he showed how precision delivery and command of a room can make even the oldest, silliest material impossibly compelling."

Other appearances

In 2014, Crystal paid tribute to his close friend Robin Williams at the 66th Primetime Emmy Awards. In his tribute he talked about their friendship, saying, "As genius as he was on stage, he was the greatest friend you could ever imagine. Supportive. Protective. Loving. It's very hard to talk about him in the past because he was so present in all of our lives. For almost 40 years, he was the brightest star in the comedy galaxy…[His] beautiful light will continue to shine on us forever. And the glow will be so bright, it'll warm your heart. It'll make your eyes glisten. And you'll think to yourselves: Robin Williams. What a concept." Crystal stated that paying tribute to Williams so publicly and so soon after Williams had died was one of "the hardest things I've had to do" and that "I was really worried that I wasn't going to get through it."
Crystal soon after appeared on The View, where he and Whoopi Goldberg shared stories about Williams, reminiscing about their friendship and their collaborations together on Comic Relief. In 2016, Crystal gave one of the eulogies at Muhammad Ali's funeral. In his remembrance of Ali, Crystal talked about his admiration for Ali as a boxer and humanitarian. He also shared stories of their unlikely friendship, which began after Crystal did a series of impersonations of him. Crystal said of Ali's legacy, "Only once in a thousand years or so, do we get to hear a Mozart, or see a Picasso, or read a Shakespeare. Ali was one of them. And yet, at his heart, he was still a kid from Louisville who ran with the gods and walked with the crippled and smiled at the foolishness of it all."

In the fall of 2021, Crystal reprised the role of Buddy Young Jr. in a theatrical musical staging of Mr. Saturday Night at the Barrington Stage Company in Pittsfield, MA.

Discography

Albums:
Mahvelous! (A&M Records, 1985) [#65 US]

Singles:
"You Look Marvelous" (A&M Records, 1985) [#58 US]
"I Hate When That Happens" (A&M Records, 1985)
"The Christmas Song" (A&M Records, 1985)

Personal life

On June 4, 1970, Crystal married his high school sweetheart, Janice Goldfinger. Crystal has long credited his parents, "who always looked like they loved being together," with setting an example for his own marriage. They have two daughters, actress Jennifer and producer Lindsay, and are grandparents. They live in the Pacific Palisades neighborhood of Los Angeles, California. Crystal received an honorary Doctor of Fine Arts degree from New York University in 2016 and spoke at the commencement at Yankee Stadium.

Philanthropy

In 1986, Crystal started hosting Comic Relief on HBO with Robin Williams and Whoopi Goldberg. Founded by Bob Zmuda, Comic Relief raises money for homeless people in the United States. On September 6, 2005, on The Tonight Show with Jay Leno, Crystal and Jay Leno were the first celebrities to sign a Harley-Davidson motorcycle to be auctioned off for Gulf Coast relief. Crystal has participated in the Simon Wiesenthal Center Museum of Tolerance in Los Angeles. Crystal's personal history is featured in the "Finding Our Families, Finding Ourselves" exhibit in the genealogy wing of the museum.

Sports

On March 12, 2008, Crystal signed a one-day minor league contract to play with the New York Yankees and was invited to the team's major league spring training. He wore uniform number 60 in honor of his upcoming 60th birthday. On March 13, in a spring training game against the Pittsburgh Pirates, Crystal led off as the designated hitter. He managed to make contact, fouling a fastball up the first base line, but was eventually struck out by Pirates pitcher Paul Maholm on six pitches and was later replaced in the batting order by Johnny Damon. He was released on March 14, his 60th birthday.

Crystal's boyhood idol was Yankee Hall of Fame legend Mickey Mantle, who had signed a program for him when Crystal attended a game at which Mantle hit a home run. Years later, on The Dinah Shore Show, in one of his first television appearances, Crystal met Mantle in person and had Mantle re-sign the same program. Crystal remained good friends with Mantle until Mantle's death in 1995. He and Bob Costas together wrote the eulogy Costas read at Mantle's funeral, and George Steinbrenner then invited Crystal to emcee the unveiling of Mantle's monument at Yankee Stadium.
In his 2013 memoir Still Foolin' 'Em, Crystal claimed that after the ceremony, near the Yankees clubhouse, he was punched in the stomach by Joe DiMaggio, who was angry at Crystal for not having introduced him to the crowd as the "Greatest living player". Crystal was also well known for his impressions of Yankees Hall of Famer turned broadcaster Phil Rizzuto. Rizzuto, known for his quirks calling games, did not travel to Anaheim, California, in 1996 to call the game for WPIX. Instead, Crystal joined the broadcasters in the booth and pretended to be Rizzuto for a few minutes during the August 31 game. Although a lifelong Yankees fan, he is a part-owner of the Arizona Diamondbacks, and he even earned a World Series ring in 2001 when the Diamondbacks beat his beloved Yankees.

In City Slickers, Crystal wore a New York Mets baseball cap. In the 1986 film Running Scared, his character is an avid Chicago Cubs fan, wearing a Cubs jersey in several scenes. In the 2012 film Parental Guidance, his character is the announcer for the Fresno Grizzlies, a Minor League Baseball team, who aspires to announce for their Major League affiliate, the San Francisco Giants. Crystal appeared in Ken Burns's 1994 documentary Baseball, telling personal stories about his lifelong love of baseball, including meeting Casey Stengel as a child and Ted Williams as an adult. Crystal is also a longtime Los Angeles Clippers fan and season ticket holder.
https://en.wikipedia.org/wiki/Beta%20decay
Beta decay
In nuclear physics, beta decay (β-decay) is a type of radioactive decay in which a beta particle (fast energetic electron or positron) is emitted from an atomic nucleus, transforming the original nuclide to an isobar of that nuclide. For example, beta decay of a neutron transforms it into a proton by the emission of an electron accompanied by an antineutrino; or, conversely, a proton is converted into a neutron by the emission of a positron with a neutrino in so-called positron emission. Neither the beta particle nor its associated (anti-)neutrino exist within the nucleus prior to beta decay, but are created in the decay process. By this process, unstable atoms obtain a more stable ratio of protons to neutrons. The probability of a nuclide decaying due to beta and other forms of decay is determined by its nuclear binding energy. The binding energies of all existing nuclides form what is called the nuclear band or valley of stability. For either electron or positron emission to be energetically possible, the energy release (see below) or Q value must be positive.

Beta decay is a consequence of the weak force, which is characterized by relatively lengthy decay times. Nucleons are composed of up quarks and down quarks, and the weak force allows a quark to change its flavour by emission of a W boson, leading to creation of an electron/antineutrino or positron/neutrino pair. For example, a neutron, composed of two down quarks and an up quark, decays to a proton composed of a down quark and two up quarks. Electron capture is sometimes included as a type of beta decay, because the basic nuclear process, mediated by the weak force, is the same. In electron capture, an inner atomic electron is captured by a proton in the nucleus, transforming it into a neutron, and an electron neutrino is released.

Description

The two types of beta decay are known as beta minus and beta plus. In beta minus (β−) decay, a neutron is converted to a proton, and the process creates an electron and an electron antineutrino; while in beta plus (β+) decay, a proton is converted to a neutron and the process creates a positron and an electron neutrino. β+ decay is also known as positron emission.

Beta decay conserves a quantum number known as the lepton number, or the number of electrons and their associated neutrinos (other leptons are the muon and tau particles). These particles have lepton number +1, while their antiparticles have lepton number −1. Since a proton or neutron has lepton number zero, β+ decay (a positron, or antielectron) must be accompanied by an electron neutrino, while β− decay (an electron) must be accompanied by an electron antineutrino.

An example of electron emission (β− decay) is the decay of carbon-14 into nitrogen-14 with a half-life of about 5,730 years:

    ¹⁴C → ¹⁴N + e⁻ + ν̄ₑ

In this form of decay, the original element becomes a new chemical element in a process known as nuclear transmutation. This new element has an unchanged mass number, but an atomic number that is increased by one. As in all nuclear decays, the decaying element (in this case ¹⁴C) is known as the parent nuclide while the resulting element (in this case ¹⁴N) is known as the daughter nuclide.

Another example is the decay of hydrogen-3 (tritium) into helium-3 with a half-life of about 12.3 years:

    ³H → ³He + e⁻ + ν̄ₑ

An example of positron emission (β+ decay) is the decay of magnesium-23 into sodium-23 with a half-life of about 11.3 s:

    ²³Mg → ²³Na + e⁺ + νₑ

β+ decay also results in nuclear transmutation, with the resulting element having an atomic number that is decreased by one.
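As a numeric illustration of the requirement that the Q value be positive, the following sketch (not from the article) computes Q for the β− decay of carbon-14 from the atomic mass difference. The tabulated masses are standard values quoted from memory here, so treat the exact digits as assumptions; with atomic rather than nuclear masses, the emitted electron's mass cancels for β− decay.

    # Q value for 14C -> 14N + e- + antineutrino, from atomic masses.
    U_TO_MEV = 931.494   # energy equivalent of one atomic mass unit, in MeV/c^2
    m_c14 = 14.003242    # atomic mass of carbon-14 in u (tabulated value)
    m_n14 = 14.003074    # atomic mass of nitrogen-14 in u (tabulated value)

    q_mev = (m_c14 - m_n14) * U_TO_MEV
    print(f"Q = {q_mev * 1000:.0f} keV")  # about 156 keV: positive, so the decay can proceed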
The beta spectrum, or distribution of energy values for the beta particles, is continuous. The total energy of the decay process is divided between the electron, the antineutrino, and the recoiling nuclide. Consider, for example, an electron emitted with 0.40 MeV of kinetic energy in the beta decay of ²¹⁰Bi. The total decay energy is 1.16 MeV, so the antineutrino carries away the remaining energy: 1.16 MeV − 0.40 MeV = 0.76 MeV. An electron at the high-energy end of the spectrum would have the maximum possible kinetic energy, leaving the neutrino only its small rest mass energy.

History

Discovery and initial characterization

Radioactivity was discovered in 1896 by Henri Becquerel in uranium, and subsequently observed by Marie and Pierre Curie in thorium and in the new elements polonium and radium. In 1899, Ernest Rutherford separated radioactive emissions into two types, alpha and beta (now beta minus), based on penetration of objects and ability to cause ionization. Alpha rays could be stopped by thin sheets of paper or aluminium, whereas beta rays could penetrate several millimetres of aluminium. In 1900, Paul Villard identified a still more penetrating type of radiation, which Rutherford identified as a fundamentally new type in 1903 and termed gamma rays. Alpha, beta, and gamma are the first three letters of the Greek alphabet.

In 1900, Becquerel measured the mass-to-charge ratio (e/m) for beta particles by the method J. J. Thomson had used to study cathode rays and identify the electron. He found that e/m for a beta particle is the same as for Thomson's electron, and therefore suggested that the beta particle is in fact an electron.

In 1901, Rutherford and Frederick Soddy showed that alpha and beta radioactivity involves the transmutation of atoms into atoms of other chemical elements. In 1913, after the products of more radioactive decays were known, Soddy and Kazimierz Fajans independently proposed their radioactive displacement law, which states that beta (i.e., β−) emission from one element produces another element one place to the right in the periodic table, while alpha emission produces an element two places to the left.

Neutrinos

The study of beta decay provided the first physical evidence for the existence of the neutrino. In both alpha and gamma decay, the resulting alpha or gamma particle has a narrow energy distribution, since the particle carries the energy from the difference between the initial and final nuclear states. However, the kinetic energy distribution, or spectrum, of beta particles measured by Lise Meitner and Otto Hahn in 1911 and by Jean Danysz in 1913 showed multiple lines on a diffuse background. These measurements offered the first hint that beta particles have a continuous spectrum. In 1914, James Chadwick used a magnetic spectrometer with one of Hans Geiger's new counters to make more accurate measurements, which showed that the spectrum was continuous. The distribution of beta particle energies was in apparent contradiction to the law of conservation of energy. If beta decay were simply electron emission, as assumed at the time, then the energy of the emitted electron should have a particular, well-defined value. For beta decay, however, the observed broad distribution of energies suggested that energy is lost in the beta decay process. This spectrum was puzzling for many years.

A second problem is related to the conservation of angular momentum.
Molecular band spectra showed that the nuclear spin of nitrogen-14 is 1 (i.e., equal to the reduced Planck constant) and, more generally, that the spin is integral for nuclei of even mass number and half-integral for nuclei of odd mass number. This was later explained by the proton-neutron model of the nucleus. Beta decay leaves the mass number unchanged, so the change of nuclear spin must be an integer. However, the electron spin is 1/2, hence angular momentum would not be conserved if beta decay were simply electron emission.

From 1920 to 1927, Charles Drummond Ellis (along with Chadwick and colleagues) further established that the beta decay spectrum is continuous. In 1933, Ellis and Nevill Mott obtained strong evidence that the beta spectrum has an effective upper bound in energy. Niels Bohr had suggested that the beta spectrum could be explained if conservation of energy were true only in a statistical sense, so that the principle might be violated in any given decay. However, the upper bound in beta energies determined by Ellis and Mott ruled out that notion. Now the problem of how to account for the variability of energy in known beta decay products, as well as for conservation of momentum and angular momentum in the process, became acute.

In a famous letter written in 1930, Wolfgang Pauli attempted to resolve the beta-particle energy conundrum by suggesting that, in addition to electrons and protons, atomic nuclei also contained an extremely light neutral particle, which he called the neutron. He suggested that this "neutron" was also emitted during beta decay (thus accounting for the known missing energy, momentum, and angular momentum), but that it had simply not yet been observed. In 1931, Enrico Fermi renamed Pauli's "neutron" the "neutrino" ('little neutral one' in Italian). In 1933, Fermi published his landmark theory for beta decay, in which he applied the principles of quantum mechanics to matter particles, supposing that they can be created and annihilated, just like the light quanta in atomic transitions. Thus, according to Fermi, neutrinos are created in the beta-decay process rather than contained in the nucleus; the same holds for the electrons. The neutrino interaction with matter was so weak that detecting it proved a severe experimental challenge. Further indirect evidence of the existence of the neutrino was obtained by observing the recoil of nuclei that emitted such a particle after absorbing an electron. Neutrinos were finally detected directly in 1956 by Clyde Cowan and Frederick Reines in the Cowan–Reines neutrino experiment. The properties of neutrinos were (with a few minor modifications) as predicted by Pauli and Fermi.

β+ decay and electron capture

In 1934, Frédéric and Irène Joliot-Curie bombarded aluminium with alpha particles to effect the nuclear reaction ²⁷Al + ⁴He → ³⁰P + n, and observed that the product isotope ³⁰P emits a positron identical to those found in cosmic rays (discovered by Carl David Anderson in 1932). This was the first example of β+ decay (positron emission), which they termed artificial radioactivity, since ³⁰P is a short-lived nuclide which does not exist in nature. In recognition of their discovery the couple were awarded the Nobel Prize in Chemistry in 1935.

The theory of electron capture was first discussed by Gian-Carlo Wick in a 1934 paper, and then developed by Hideki Yukawa and others. K-electron capture was first observed in 1937 by Luis Alvarez, in the nuclide ⁴⁸V. Alvarez went on to study electron capture in ⁶⁷Ga and other nuclides.
Non-conservation of parity

In 1956, Tsung-Dao Lee and Chen Ning Yang noticed that there was no evidence that parity was conserved in weak interactions, and so they postulated that this symmetry might not be preserved by the weak force. They sketched the design for an experiment to test conservation of parity in the laboratory. Later that year, Chien-Shiung Wu and coworkers conducted the Wu experiment, showing an asymmetrical beta decay of cobalt-60 at cryogenic temperatures, which proved that parity is not conserved in beta decay. This surprising result overturned long-held assumptions about parity and the weak force. In recognition of their theoretical work, Lee and Yang were awarded the Nobel Prize for Physics in 1957; Wu herself, however, was not awarded the prize.

β− decay

In β− decay, the weak interaction converts an atomic nucleus into a nucleus with atomic number increased by one, while emitting an electron (e⁻) and an electron antineutrino (ν̄ₑ). β− decay generally occurs in neutron-rich nuclei. The generic equation is:

X(A, Z) → X′(A, Z+1) + e⁻ + ν̄ₑ

where A and Z are the mass number and atomic number of the decaying nucleus, and X and X′ are the initial and final elements, respectively. Another example is the free neutron (n), which decays by β− decay into a proton (p):

n → p + e⁻ + ν̄ₑ

At the fundamental level, this is caused by the conversion of a negatively charged (−1/3 e) down quark into a positively charged (+2/3 e) up quark by emission of a W⁻ boson; the W⁻ boson subsequently decays into an electron and an electron antineutrino:

W⁻ → e⁻ + ν̄ₑ

β+ decay

In β+ decay, or positron emission, the weak interaction converts an atomic nucleus into a nucleus with atomic number decreased by one, while emitting a positron (e⁺) and an electron neutrino (νₑ). β+ decay generally occurs in proton-rich nuclei. The generic equation is:

X(A, Z) → X′(A, Z−1) + e⁺ + νₑ

This may be considered as the decay of a proton inside the nucleus to a neutron:

p → n + e⁺ + νₑ

However, β+ decay cannot occur in an isolated proton, because it requires energy: the mass of the neutron is greater than the mass of the proton. β+ decay can only happen inside nuclei when the daughter nucleus has a greater binding energy (and therefore a lower total energy) than the mother nucleus. The difference between these energies goes into the reaction of converting a proton into a neutron, a positron, and a neutrino, and into the kinetic energy of these particles. This process is the opposite of negative beta decay, in that the weak interaction converts a proton into a neutron by converting an up quark into a down quark, resulting in the emission of a W⁺ boson or the absorption of a W⁻ boson. When a W⁺ boson is emitted, it decays into a positron and an electron neutrino:

W⁺ → e⁺ + νₑ

Electron capture (K-capture)

In all cases where β+ decay (positron emission) of a nucleus is allowed energetically, electron capture is also allowed. This is a process during which a nucleus captures one of its atomic electrons, resulting in the emission of a neutrino:

X(A, Z) + e⁻ → X′(A, Z−1) + νₑ

An example of electron capture is one of the decay modes of krypton-81 into bromine-81:

⁸¹Kr + e⁻ → ⁸¹Br + νₑ

All emitted neutrinos are of the same energy. In proton-rich nuclei where the energy difference between the initial and final states is less than 2mₑc², β+ decay is not energetically possible, and electron capture is the sole decay mode.

If the captured electron comes from the innermost shell of the atom, the K-shell, which has the highest probability to interact with the nucleus, the process is called K-capture. If it comes from the L-shell, the process is called L-capture, and so on.

Electron capture is a competing (simultaneous) decay process for all nuclei that can undergo β+ decay. The converse, however, is not true: electron capture is the only type of decay that is allowed in proton-rich nuclides that do not have sufficient energy to emit a positron and neutrino.
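The bookkeeping common to the three modes just described can be illustrated with a short sketch. Only the (A, Z) arithmetic is taken from the text above; the helper names and the particle labels are illustrative:

```python
# Mass number A stays fixed; atomic number Z shifts by +/-1.

def beta_minus(A: int, Z: int):
    """n -> p inside the nucleus: emits e- and an electron antineutrino."""
    return (A, Z + 1), ("e-", "anti-nu_e")

def beta_plus(A: int, Z: int):
    """p -> n inside the nucleus: emits e+ and an electron neutrino."""
    return (A, Z - 1), ("e+", "nu_e")

def electron_capture(A: int, Z: int):
    """Nucleus absorbs an atomic e-: emits only an electron neutrino."""
    return (A, Z - 1), ("nu_e",)

# Examples from the text: 14C (Z=6) -> 14N, 23Mg (Z=12) -> 23Na, 81Kr (Z=36) -> 81Br
print(beta_minus(14, 6))         # ((14, 7), ('e-', 'anti-nu_e'))
print(beta_plus(23, 12))         # ((23, 11), ('e+', 'nu_e'))
print(electron_capture(81, 36))  # ((81, 35), ('nu_e',))
```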
Nuclear transmutation

If the proton and neutron are part of an atomic nucleus, the decay processes described above transmute one chemical element into another. For example:

X(A, Z) → X′(A, Z+1) + e⁻ + ν̄ₑ (beta minus decay)
X(A, Z) → X′(A, Z−1) + e⁺ + νₑ (beta plus decay)
X(A, Z) + e⁻ → X′(A, Z−1) + νₑ (electron capture)

Beta decay does not change the number A of nucleons in the nucleus; it changes only its charge Z. Thus the set of all nuclides with the same A can be introduced; these isobaric nuclides may turn into each other via beta decay. For a given A there is one nuclide that is most stable. It is said to be beta stable, because it presents a local minimum of the mass excess: if such a nucleus has numbers (A, Z), the neighbouring nuclei (A, Z−1) and (A, Z+1) have higher mass excess and can beta decay into (A, Z), but not vice versa. For all odd mass numbers A, there is only one known beta-stable isobar. For even A, there are up to three different beta-stable isobars experimentally known; for example, ¹²⁴Sn, ¹²⁴Te, and ¹²⁴Xe are all beta-stable. There are about 350 known beta-decay stable nuclides.

Competition of beta decay types

Usually unstable nuclides are clearly either "neutron rich" or "proton rich", with the former undergoing beta decay and the latter undergoing electron capture (or, more rarely, due to the higher energy requirements, positron decay). However, in a few cases of odd-proton, odd-neutron radionuclides, it may be energetically favorable for the radionuclide to decay to an even-proton, even-neutron isobar either by beta-positive or by beta-negative decay. An often-cited example is copper-64 (29 protons, 35 neutrons), which illustrates three types of beta decay in competition. Copper-64 has a half-life of about 12.7 hours. This isotope has one unpaired proton and one unpaired neutron, so either the proton or the neutron can decay. This particular nuclide (though not all nuclides in this situation) is almost equally likely to decay through proton decay, by positron emission or electron capture to ⁶⁴Ni, as through neutron decay, by electron emission to ⁶⁴Zn.

Stability of naturally occurring nuclides

Most naturally occurring nuclides on earth are beta stable. Nuclides that are not beta stable have half-lives ranging from under a second to periods of time significantly greater than the age of the universe. One common example of a long-lived isotope is the odd-proton, odd-neutron nuclide potassium-40 (⁴⁰K), which undergoes all three types of beta decay (β−, β+ and electron capture) with a half-life of about 1.25 × 10⁹ years.

Conservation rules for beta decay

Baryon number is conserved. The baryon number is B = (n_q − n_q̄)/3, where n_q is the number of constituent quarks and n_q̄ is the number of constituent antiquarks. Beta decay just changes a neutron to a proton or, in the case of positive beta decay or electron capture, a proton to a neutron, so the number of individual quarks does not change. Only the quark flavor changes, here labelled by the isospin: up and down quarks have total isospin I = 1/2, with isospin projections I₃ = +1/2 for the up quark and I₃ = −1/2 for the down quark; all other quarks have I = 0.

Lepton number is also conserved: all leptons are assigned a value of +1, antileptons −1, and non-leptonic particles 0.
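A minimal sketch of these two conservation rules, using the lepton-number assignments just given and baryon number +1 for each nucleon (the particle labels and the helper name are illustrative):

```python
# Lepton number L: leptons +1, antileptons -1, non-leptonic particles 0.
# Baryon number B: each nucleon carries +1.
L = {"e-": 1, "nu_e": 1, "e+": -1, "anti-nu_e": -1, "n": 0, "p": 0}
B = {"e-": 0, "nu_e": 0, "e+": 0, "anti-nu_e": 0, "n": 1, "p": 1}

def conserved(initial, final) -> bool:
    """True if both lepton number and baryon number balance."""
    lepton_ok = sum(L[x] for x in initial) == sum(L[x] for x in final)
    baryon_ok = sum(B[x] for x in initial) == sum(B[x] for x in final)
    return lepton_ok and baryon_ok

print(conserved(["n"], ["p", "e-", "anti-nu_e"]))  # beta-minus: True
print(conserved(["p"], ["n", "e+", "nu_e"]))       # beta-plus: True
print(conserved(["p", "e-"], ["n", "nu_e"]))       # electron capture: True
print(conserved(["n"], ["p", "e-", "nu_e"]))       # wrong neutrino: False
```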
Angular momentum

For allowed decays, the net orbital angular momentum is zero, hence only spin quantum numbers are considered. The electron and antineutrino are fermions, spin-1/2 objects; therefore they may couple to total spin S = 1 (parallel) or S = 0 (anti-parallel). For forbidden decays, orbital angular momentum must also be taken into consideration.

Energy release

The Q value is defined as the total energy released in a given nuclear decay. In beta decay, Q is therefore also the sum of the kinetic energies of the emitted beta particle, neutrino, and recoiling nucleus. (Because of the large mass of the nucleus compared to that of the beta particle and neutrino, the kinetic energy of the recoiling nucleus can generally be neglected.) Beta particles can therefore be emitted with any kinetic energy ranging from 0 to Q. A typical Q is around 1 MeV, but it can range from a few keV to a few tens of MeV. Since the rest mass energy of the electron is 511 keV, the most energetic beta particles are ultrarelativistic, with speeds very close to the speed of light. In the case of ¹⁸⁷Re, by contrast, the maximum speed of the beta particle is only 9.8% of the speed of light.

β− decay

Consider the generic equation for beta decay

X(A, Z) → X′(A, Z+1) + e⁻ + ν̄ₑ.

The Q value for this decay is

Q = [m_N(A, Z) − m_N(A, Z+1) − mₑ − m_ν̄] c²,

where m_N(A, Z) is the mass of the nucleus, mₑ is the mass of the electron, and m_ν̄ is the mass of the electron antineutrino. In other words, the total energy released is the mass energy of the initial nucleus, minus the mass energy of the final nucleus, electron, and antineutrino. The mass of the nucleus m_N is related to the standard atomic mass m by

m(A, Z) c² = m_N(A, Z) c² + Z mₑ c² − ΣBᵢ.

That is, the total atomic mass is the mass of the nucleus, plus the mass of the electrons, minus the sum ΣBᵢ of all electron binding energies of the atom. This equation is rearranged to find m_N(A, Z), and m_N(A, Z+1) is found similarly. Substituting these nuclear masses into the Q-value equation, while neglecting the nearly zero antineutrino mass and the difference in electron binding energies, which is very small for high-Z atoms, we have

Q = [m(A, Z) − m(A, Z+1)] c².

This energy is carried away as kinetic energy by the electron and antineutrino. Because the reaction will proceed only when the Q value is positive, β− decay can occur when the mass of atom X(A, Z) is greater than the mass of atom X′(A, Z+1).

β+ decay

The equations for β+ decay are similar, with the generic equation

X(A, Z) → X′(A, Z−1) + e⁺ + νₑ

giving

Q = [m_N(A, Z) − m_N(A, Z−1) − mₑ − m_ν] c².

However, in this equation the electron masses do not cancel, and we are left with

Q = [m(A, Z) − m(A, Z−1) − 2mₑ] c².

Because the reaction will proceed only when the Q value is positive, β+ decay can occur when the mass of atom X(A, Z) exceeds that of X′(A, Z−1) by at least twice the mass of the electron.

Electron capture

The analogous calculation for electron capture must take into account the binding energy of the electrons. This is because the atom will be left in an excited state after capturing the electron, and the binding energy of the captured innermost electron is significant. Using the generic equation for electron capture

X(A, Z) + e⁻ → X′(A, Z−1) + νₑ

we have

Q = [m_N(A, Z) + mₑ − m_N(A, Z−1) − m_ν] c²,

which simplifies to

Q = [m(A, Z) − m(A, Z−1)] c² − Bₙ,

where Bₙ is the binding energy of the captured electron. Because the binding energy of the electron is much less than the mass energy of the electron, nuclei that can undergo β+ decay can always also undergo electron capture, but the reverse is not true.
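The three Q-value formulas above can be evaluated directly from tabulated atomic masses. The following sketch uses nominal constants (1 u = 931.494 MeV/c², mₑc² = 0.511 MeV) and illustrative function names; the atomic masses in the example are standard tabulated values quoted for illustration:

```python
U_TO_MEV = 931.494   # 1 atomic mass unit in MeV/c^2
M_E_MEV = 0.511      # electron rest mass energy in MeV

def q_beta_minus(m_parent_u: float, m_daughter_u: float) -> float:
    # Q = [m(A,Z) - m(A,Z+1)] c^2 : with atomic masses the electron masses cancel
    return (m_parent_u - m_daughter_u) * U_TO_MEV

def q_beta_plus(m_parent_u: float, m_daughter_u: float) -> float:
    # Q = [m(A,Z) - m(A,Z-1) - 2 m_e] c^2 : electron masses do not cancel
    return (m_parent_u - m_daughter_u) * U_TO_MEV - 2 * M_E_MEV

def q_electron_capture(m_parent_u: float, m_daughter_u: float) -> float:
    # Q = [m(A,Z) - m(A,Z-1)] c^2 - B_n ; electron binding energy neglected here
    return (m_parent_u - m_daughter_u) * U_TO_MEV

# Illustrative atomic masses in u: 14C = 14.003242, 14N = 14.003074
m_c14, m_n14 = 14.003242, 14.003074
print(f"Q(14C -> 14N) ~ {q_beta_minus(m_c14, m_n14):.3f} MeV")  # ~0.156 MeV

# Note: q_beta_plus can be negative while q_electron_capture is still
# positive -- exactly the mass window where electron capture is the
# sole decay mode, as discussed in the text.
```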
Beta emission spectrum

Beta decay can be considered as a perturbation as described in quantum mechanics, and thus Fermi's Golden Rule can be applied. This leads to an expression for the kinetic energy spectrum N(T) of emitted betas:

N(T) = C_L(T) F(Z, E) p E (Q − T)²,

where T is the kinetic energy, C_L is a shape function that depends on the forbiddenness of the decay (it is constant for allowed decays), F(Z, E) is the Fermi function (see below) with Z the charge of the final-state nucleus, E = T + mₑc² is the total energy, p = √(E² − mₑ²c⁴)/c is the momentum, and Q is the Q value of the decay. The kinetic energy of the emitted neutrino is given approximately by Q minus the kinetic energy of the beta. A classic example is the beta decay spectrum of ²¹⁰Bi (originally called RaE).

Fermi function

The Fermi function that appears in the beta spectrum formula accounts for the Coulomb attraction or repulsion between the emitted beta and the final-state nucleus. Approximating the associated wavefunctions as spherically symmetric, the Fermi function can be analytically calculated to be:

F(Z, E) = [2(1 + S) / Γ(1 + 2S)²] (2pρ)^(2S−2) e^(πη) |Γ(S + iη)|²,

where p is the final momentum, Γ the Gamma function, and (with α the fine-structure constant and r_N the radius of the final-state nucleus) S = √(1 − α²Z²), η = ±αZE/(pc) (+ for electrons, − for positrons), and ρ = r_N/ħ. For non-relativistic betas (Q ≪ mₑc²), this expression can be approximated by:

F(Z, E) ≈ 2πη / (1 − e^(−2πη)).

Other approximations can be found in the literature.

Kurie plot

A Kurie plot (also known as a Fermi–Kurie plot) is a graph used in studying beta decay, developed by Franz N. D. Kurie, in which the square root of the number of beta particles whose momenta (or energy) lie within a certain narrow range, divided by the Fermi function, is plotted against beta-particle energy. It is a straight line for allowed transitions and some forbidden transitions, in accord with the Fermi beta-decay theory. The energy-axis (x-axis) intercept of a Kurie plot corresponds to the maximum energy imparted to the electron/positron (the decay's Q value). With a Kurie plot one can find the limit on the effective mass of a neutrino.
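Putting the allowed-decay shape N(T) ∝ F(Z, E) p E (Q − T)² together with the non-relativistic approximation of the Fermi function gives a quick way to sketch a beta spectrum numerically. The following is only a rough illustration (natural units with c = 1; function names are illustrative, and for a high-Z daughter such as the ²¹⁰Bi case the non-relativistic approximation is crude):

```python
import math

M_E = 0.511            # electron rest energy, MeV
ALPHA = 1 / 137.036    # fine-structure constant

def fermi_nonrel(Z: int, T: float, positron: bool = False) -> float:
    """Non-relativistic Fermi function F ~ 2*pi*eta / (1 - exp(-2*pi*eta)),
    with eta = +/- Z*alpha*E/(p*c) (+ for electrons, - for positrons)."""
    E = T + M_E                    # total energy
    p = math.sqrt(E**2 - M_E**2)   # momentum, units of MeV/c
    eta = (-1 if positron else 1) * Z * ALPHA * E / p
    return 2 * math.pi * eta / (1 - math.exp(-2 * math.pi * eta))

def spectrum(Z: int, Q: float, T: float) -> float:
    """Unnormalized allowed-decay shape N(T) ~ F(Z, E) * p * E * (Q - T)^2."""
    if not 0 < T < Q:
        return 0.0
    E = T + M_E
    p = math.sqrt(E**2 - M_E**2)
    return fermi_nonrel(Z, T) * p * E * (Q - T) ** 2

# Rough shape for a beta-minus decay with Q ~ 1.16 MeV and daughter Z = 84
# (the 210Bi example discussed in the text):
for T in (0.1, 0.4, 0.7, 1.0):
    print(f"T = {T:.1f} MeV  N ~ {spectrum(84, 1.16, T):.3f}")
```

Dividing out F, p and E and taking the square root of the result linearizes the distribution in T, which is exactly the construction behind the Kurie plot described above.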
Helicity (polarization) of neutrinos, electrons and positrons emitted in beta decay

After the discovery of parity non-conservation (see History above), it was found that, in beta decay, electrons are emitted mostly with negative helicity, i.e., they move, naively speaking, like left-handed screws driven into a material (they have negative longitudinal polarization). Conversely, positrons have mostly positive helicity, i.e., they move like right-handed screws. Neutrinos (emitted in positron decay) have negative helicity, while antineutrinos (emitted in electron decay) have positive helicity. The higher the energy of the particles, the higher their polarization.

Types of beta decay transitions

Beta decays can be classified according to the orbital angular momentum (L value) and total spin (S value) of the emitted radiation. Since total angular momentum must be conserved, including orbital and spin angular momentum, beta decay occurs by a variety of quantum state transitions to various nuclear angular momentum or spin states, known as "Fermi" or "Gamow–Teller" transitions. When the beta decay particles carry no orbital angular momentum (L = 0), the decay is referred to as "allowed"; otherwise it is "forbidden". Other decay modes, which are rare, are known as bound-state decay and double beta decay.

Fermi transitions

A Fermi transition is a beta decay in which the spins of the emitted electron (positron) and antineutrino (neutrino) couple to total spin S = 0, leading to an angular momentum change ΔJ = 0 between the initial and final states of the nucleus (assuming an allowed transition). In the non-relativistic limit, the nuclear part of the operator for a Fermi transition is given by

O_F = G_V Σₐ τ̂ₐ±,

with G_V the weak vector coupling constant, τ̂± the isospin raising and lowering operators, and a running over all protons and neutrons in the nucleus.

Gamow–Teller transitions

A Gamow–Teller transition is a beta decay in which the spins of the emitted electron (positron) and antineutrino (neutrino) couple to total spin S = 1, leading to an angular momentum change ΔJ = 0, ±1 between the initial and final states of the nucleus (assuming an allowed transition). In this case, the nuclear part of the operator is given by

O_GT = G_A Σₐ σ̂ₐ τ̂ₐ±,

with G_A the weak axial-vector coupling constant and σ̂ the spin Pauli matrices, which can produce a spin-flip in the decaying nucleon.

Forbidden transitions

When L > 0, the decay is referred to as "forbidden". Nuclear selection rules require high L values to be accompanied by changes in nuclear spin (J) and parity (π). The selection rules for the Lth forbidden transitions are:

ΔJ = L − 1, L, L + 1;  Δπ = (−1)^L,

where Δπ = +1 or −1 corresponds to no parity change or parity change, respectively. The special case of a transition between isobaric analogue states, where the structure of the final state is very similar to the structure of the initial state, is referred to as "superallowed" for beta decay, and proceeds very quickly. The following table lists the ΔJ and Δπ values for the first few values of L:

Forbiddenness       ΔJ         Δπ (parity change)
Superallowed        0          no
Allowed             0, 1       no
First forbidden     0, 1, 2    yes
Second forbidden    1, 2, 3    no
Third forbidden     2, 3, 4    yes
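The selection rules above lend themselves to a small classifier: given ΔJ and whether parity changes, find the lowest L consistent with the rules. A sketch (the labels and function name are illustrative):

```python
ORDINALS = ["allowed", "first forbidden", "second forbidden",
            "third forbidden", "fourth forbidden"]

def classify(delta_J: int, parity_change: bool) -> str:
    """Return the forbiddenness of a beta transition.

    For the L-th forbidden transition, delta_J may be L-1, L or L+1
    (for allowed decays, L = 0, delta_J is 0 or 1), and the parity
    change is (-1)**L. The lowest consistent L sets the classification.
    """
    for L, name in enumerate(ORDINALS):
        allowed_dJ = {0, 1} if L == 0 else {L - 1, L, L + 1}
        if delta_J in allowed_dJ and parity_change == (L % 2 == 1):
            return name
    return "higher-order forbidden"

print(classify(0, False))  # allowed (Fermi or Gamow-Teller)
print(classify(2, True))   # first forbidden
print(classify(3, False))  # second forbidden
```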
Rare decay modes

Bound-state β− decay

A very small minority of free neutron decays (about four per million) are so-called "two-body decays", in which the proton, electron and antineutrino are produced, but the electron fails to gain the 13.6 eV of energy necessary to escape the proton and therefore simply remains bound to it as a neutral hydrogen atom. In this type of beta decay, in essence all of the neutron decay energy is carried off by the antineutrino.

For fully ionized atoms (bare nuclei), it is likewise possible for electrons to fail to escape the atom and to be emitted from the nucleus into low-lying atomic bound states (orbitals). This cannot occur for neutral atoms, whose low-lying bound states are already filled by electrons. Bound-state β decays were predicted by Daudel, Jean, and Lecoin in 1947, and the phenomenon in fully ionized atoms was first observed for ¹⁶³Dy⁶⁶⁺ in 1992 by Jung et al. of the Darmstadt Heavy-Ion Research Center. Although neutral ¹⁶³Dy is a stable isotope, the fully ionized ¹⁶³Dy⁶⁶⁺ undergoes β− decay into the K and L shells with a half-life of 47 days. The resulting nucleus, ¹⁶³Ho, is stable only in the fully ionized state; in the neutral state it decays via electron capture back into ¹⁶³Dy, with a half-life of 4,750 years.

Another possibility is that a fully ionized atom undergoes greatly accelerated β decay, as observed for ¹⁸⁷Re by Bosch et al., also at Darmstadt. Neutral ¹⁸⁷Re does undergo β decay, with a half-life of about 42 × 10⁹ years, but for fully ionized ¹⁸⁷Re⁷⁵⁺ this is shortened to only 32.9 years. For comparison, the variation of decay rates of other nuclear processes due to chemical environment is less than 1%. Due to the difference in price between rhenium and osmium, and the high share of ¹⁸⁷Re in rhenium samples found on Earth, this could some day be of commercial interest in the synthesis of precious metals.

Double beta decay

Some nuclei can undergo double beta decay (ββ decay), in which the charge of the nucleus changes by two units. Double beta decay is difficult to study, as the process has an extremely long half-life. In nuclei for which both β decay and ββ decay are possible, the rarer ββ decay process is effectively impossible to observe. However, in nuclei where β decay is forbidden but ββ decay is allowed, the process can be seen and a half-life measured. Thus, ββ decay is usually studied only for beta-stable nuclei. Like single beta decay, double beta decay does not change A; thus, at least one of the nuclides with some given A has to be stable with regard to both single and double beta decay.

"Ordinary" double beta decay results in the emission of two electrons and two antineutrinos. If neutrinos are Majorana particles (i.e., they are their own antiparticles), then a decay known as neutrinoless double beta decay will occur. Most neutrino physicists believe that neutrinoless double beta decay has never been observed.

See also

Common beta emitters
Neutrino
Betavoltaics
Particle radiation
Radionuclide
Tritium illumination, a form of fluorescent lighting powered by beta decay
Pandemonium effect
Total absorption spectroscopy

External links

The Live Chart of Nuclides (IAEA), with filter on decay type
Beta decay simulation
Blitzkrieg
Blitzkrieg (German for 'lightning war', from Blitz 'lightning' + Krieg 'war') is a word used to describe a surprise attack using a rapid, overwhelming force concentration that may consist of armored and motorized or mechanized infantry formations, together with close air support. Its intent is to break through the opponent's lines of defense, dislocate the defenders, unbalance the enemy by making it difficult to respond to the continuously changing front, and defeat them in a decisive Vernichtungsschlacht: a battle of annihilation.

During the interwar period, aircraft and tank technologies matured and were combined with the systematic application of the traditional German tactic of Bewegungskrieg (maneuver warfare): deep penetrations and the bypassing of enemy strong points to encircle and destroy enemy forces in a Kesselschlacht (cauldron battle). During the invasion of Poland, Western journalists adopted the term blitzkrieg to describe this form of armored warfare. The term had appeared in 1935, in the German military periodical Deutsche Wehr (German Defence), in connection with quick or lightning warfare. German maneuver operations were successful in the campaigns of 1939–1941, and by 1940 the term blitzkrieg was extensively used in Western media. Blitzkrieg operations capitalized on surprise penetrations (e.g., the penetration of the Ardennes forest region), general enemy unreadiness and the enemy's inability to match the pace of the German attack. During the Battle of France, the French made attempts to re-form defensive lines along rivers but were frustrated when German forces arrived first and pressed on.

Despite being common in German and English-language journalism during World War II, the word was never used by the Wehrmacht as an official military term, except for propaganda. According to David Reynolds, Hitler himself called the term blitzkrieg "a completely idiotic word". Some senior officers, including Kurt Student, Franz Halder and Johann Adolf von Kielmansegg, even disputed the idea that it was a military concept. Kielmansegg asserted that what many regarded as blitzkrieg was nothing more than "ad hoc solutions that simply popped out of the prevailing situation". Student described it as ideas that "naturally emerged from the existing circumstances" as a response to operational challenges. The Wehrmacht never officially adopted it as a concept or doctrine.

In 2005, the historian Karl-Heinz Frieser summarized blitzkrieg as the result of German commanders using the latest technology in the most advantageous way according to traditional military principles and employing "the right units in the right place at the right time". Modern historians now understand blitzkrieg as the combination of the traditional German military principles, methods and doctrines of the 19th century with the military technology of the interwar period. Modern historians use the term casually, as a generic description for the style of maneuver warfare practiced by Germany during the early part of World War II, rather than as an explanation. According to Frieser, in the context of the thinking of Heinz Guderian on mobile combined-arms formations, blitzkrieg can be used as a synonym for modern maneuver warfare on the operational level.

Definition

Common interpretation

The traditional meaning of blitzkrieg is that of German tactical and operational methodology in the first half of the Second World War, which is often hailed as a new method of warfare.
The word, meaning "lightning war" or "lightning attack" in its strategic sense, describes a series of quick and decisive short battles intended to deliver a knockout blow to an enemy state before it could fully mobilize. Tactically, blitzkrieg is a coordinated military effort by tanks, motorized infantry, artillery and aircraft to create an overwhelming local superiority in combat power, to defeat the opponent and break through its defenses. Blitzkrieg as used by Germany had considerable psychological, or "terror", elements, such as the Jericho-Trompete, a noise-making siren on the Junkers Ju 87 dive-bomber, used to affect the morale of enemy forces. The devices were largely removed when the enemy became used to the noise after the Battle of France in 1940, and instead bombs sometimes had whistles attached. It is also common for historians and writers to include psychological warfare, in the form of fifth columnists spreading rumors and lies among the civilian population in the theater of operations.

Origin of the term

The origin of the term blitzkrieg is obscure. It was never used in the title of a military doctrine or handbook of the German army or air force, and no "coherent doctrine" or "unifying concept of blitzkrieg" existed. The term seems rarely to have been used in the German military press before 1939, and recent research at the German Militärgeschichtliches Forschungsamt at Potsdam found it in only two military articles from the 1930s. Both used the term to mean a swift strategic knock-out, rather than a radical new military doctrine or approach to war. The first article (1935) deals primarily with supplies of food and materiel in wartime. The term blitzkrieg is used with reference to German efforts to win a quick victory in the First World War, but it is not associated with the use of armored, mechanized or air forces. It argued that Germany must develop self-sufficiency in food, because it might again prove impossible to deal a swift knock-out to its enemies, leading to a long war. In the second article (1938), launching a swift strategic knock-out is described as an attractive idea for Germany but difficult to achieve on land under modern conditions (especially against systems of fortification like the Maginot Line), unless an exceptionally high degree of surprise could be achieved. The author vaguely suggests that a massive strategic air attack might hold out better prospects, but the topic is not explored in detail.

A third relatively early use of the term in German occurs in Die Deutsche Kriegsstärke (German War Strength) by Fritz Sternberg, a Jewish Marxist political economist and refugee from Nazi Germany, published in 1938 in Paris and in London as Germany and a Lightning War. Sternberg wrote that Germany was not prepared economically for a long war but might win a quick war ("Blitzkrieg"). He did not go into detail about tactics or suggest that the German armed forces had evolved a radically new operational method. His book offers scant clues as to how German lightning victories might be won.

In English and other languages, the term had been used since the 1920s. The term was first used in the publications of Ferdinand Otto Miksche, first in the magazine Army Quarterly and then in his 1941 book Blitzkrieg, where he defined the concept. In September 1939, Time magazine termed the German military action a "war of quick penetration and obliteration – Blitzkrieg, lightning war."
After the invasion of Poland, the British press commonly used the term to describe German successes in that campaign, something Harris called "a piece of journalistic sensationalism – a buzz-word with which to label the spectacular early successes of the Germans in the Second World War". It was later applied to the bombing of Britain, particularly London, hence "The Blitz". The German popular press followed suit nine months later, after the fall of France in 1940; hence, although the word had been used in German, it was first popularized by British journalism. Heinz Guderian referred to it as a word coined by the Allies: "as a result of the successes of our rapid campaigns our enemies ... coined the word Blitzkrieg". After the German failure in the Soviet Union in 1941, use of the term began to be frowned upon in Nazi Germany, and Hitler then denied ever using it, saying in a speech in November 1941, "I have never used the word Blitzkrieg, because it is a very silly word". In early January 1942, Hitler dismissed it as "Italian phraseology".

Military evolution, 1919–1939

Germany

In 1914, German strategic thinking derived from the writings of Carl von Clausewitz (1 June 1780 – 16 November 1831), Helmuth von Moltke the Elder (26 October 1800 – 24 April 1891) and Alfred von Schlieffen (28 February 1833 – 4 January 1913), who advocated maneuver, mass and envelopment to create the conditions for a decisive battle (Vernichtungsschlacht). During the war, officers such as Willy Rohr developed tactics to restore maneuver to the battlefield. Specialist light infantry (Stosstruppen, "storm troops") were to exploit weak spots to make gaps for larger infantry units to advance with heavier weapons and exploit the success, leaving isolated strong points to the troops following up. Infiltration tactics were combined with short "hurricane" artillery bombardments using massed guns, devised by Colonel Georg Bruchmüller. Attacks relied on speed and surprise rather than on weight of numbers. These tactics met with great success in Operation Michael, the German spring offensive of 1918, and temporarily restored the war of movement once the Allied trench system had been overrun. The German armies pushed on towards Amiens and then Paris, coming within striking distance of the capital before supply deficiencies and Allied reinforcements halted the advance.

Historian James Corum criticized the German leadership for failing to understand the technical advances of the First World War, having conducted no studies of the machine gun prior to the war and having given tank production the lowest priority during it.

Following Germany's defeat, the Treaty of Versailles limited the Reichswehr to a maximum of 100,000 men, making the deployment of mass armies impossible. The German General Staff was abolished by the treaty but continued covertly as the Truppenamt (Troop Office), disguised as an administrative body. Committees of veteran staff officers were formed within the Truppenamt to evaluate 57 issues of the war and revise German operational theories. By the time of the Second World War, their reports had led to doctrinal and training publications, including H. Dv. 487, Führung und Gefecht der verbundenen Waffen (Command and Battle of the Combined Arms), known as das Fug (1921–23), and Truppenführung (1933–34), containing standard procedures for combined-arms warfare.
The Reichswehr was influenced by its analysis of pre-war German military thought, in particular the infiltration tactics which at the end of the war had achieved some breakthroughs on the Western Front, and the maneuver warfare which dominated the Eastern Front. On the Eastern Front, the war did not bog down into trench warfare; German and Russian armies fought a war of maneuver over thousands of miles, which gave the German leadership unique experience not available to the trench-bound western Allies. Studies of operations in the east led to the conclusion that small, coordinated forces possessed more combat power than large, uncoordinated forces.

After the war, the Reichswehr expanded and improved infiltration tactics. The commander in chief, Hans von Seeckt, argued that there had been an excessive focus on encirclement and emphasized speed instead. Seeckt inspired a revision of Bewegungskrieg (maneuver warfare) thinking and its associated Auftragstaktik, in which the commander expressed his goals to subordinates and gave them discretion in how to achieve them; the governing principle was "the higher the authority, the more general the orders were", so it was the responsibility of the lower echelons to fill in the details. Implementation of higher orders remained within limits determined by the training doctrine of an elite officer corps. Delegation of authority to local commanders increased the tempo of operations, which had great influence on the success of German armies in the early war period. Seeckt, who believed in the Prussian tradition of mobility, developed the German army into a mobile force, advocating technical advances that would lead to a qualitative improvement of its forces and better coordination between motorized infantry, tanks and planes.

Britain

The British Army took lessons from the successful infantry and artillery offensives on the Western Front in late 1918. To obtain the best co-operation between all arms, emphasis was placed on detailed planning, rigid control and adherence to orders. Mechanization of the army was considered a means to avoid mass casualties and the indecisive nature of offensives, as part of a combined-arms theory of war. The four editions of Field Service Regulations published after 1918 held that only combined-arms operations could create enough firepower to enable mobility on a battlefield. This theory of war also emphasized consolidation, recommending caution against overconfidence and ruthless exploitation.

In the Sinai and Palestine campaign, operations involved some aspects of what would later be called blitzkrieg. The decisive Battle of Megiddo included concentration, surprise and speed; success depended on attacking only in terrain favoring the movement of large formations around the battlefield, and on tactical improvements in the British artillery and infantry attack. General Edmund Allenby used infantry to attack the strong Ottoman front line in co-operation with supporting artillery, augmented by the guns of two destroyers. Through constant pressure by infantry and cavalry, two Ottoman armies in the Judean Hills were kept off-balance and virtually encircled during the Battles of Sharon and Nablus (Battle of Megiddo). The British methods induced "strategic paralysis" among the Ottomans and led to their rapid and complete collapse. In the course of the advance, captures were estimated at many thousands of prisoners and 260 guns.
Liddell Hart considered that important aspects of the operation included the extent to which Ottoman commanders were denied intelligence on the British preparations for the attack, through British air superiority and air attacks on their headquarters and telephone exchanges, which paralyzed attempts to react to the rapidly deteriorating situation.

France

Norman Stone detects early blitzkrieg operations in offensives by the French generals Charles Mangin and Marie-Eugène Debeney in 1918. However, French doctrine in the interwar years became defense-oriented. Colonel Charles de Gaulle advocated concentration of armor and airplanes. His opinions appeared in his book Vers l'Armée de métier (Towards the Professional Army, 1933). Like von Seeckt, de Gaulle concluded that France could no longer maintain the huge armies of conscripts and reservists which had fought World War I, and he sought to use tanks, mechanized forces and aircraft to allow a smaller number of highly trained soldiers to have greater impact in battle. His views did little to endear him to the French high command, but they are claimed by some to have influenced Heinz Guderian.

Russia/USSR

In 1916, General Alexei Brusilov had used surprise and infiltration tactics during the Brusilov Offensive. Later, Marshal Mikhail Tukhachevsky (1893–1937), Georgii Isserson (1898–1976) and other members of the Red Army developed a concept of deep battle from the experience of the Polish–Soviet War of 1919–1920. These concepts would guide Red Army doctrine throughout World War II. Realizing the limitations of infantry and cavalry, Tukhachevsky advocated mechanized formations and the large-scale industrialization they required. Robert Watt (2008) wrote that blitzkrieg has little in common with Soviet deep battle. In 2002, H. P. Willmott had noted that deep battle contained two important differences: it was a doctrine of total war (not of limited operations), and it rejected decisive battle in favor of several large, simultaneous offensives.

The Reichswehr and the Red Army began a secret collaboration in the Soviet Union to evade the Treaty of Versailles and its enforcement agent, the Inter-Allied Commission. In 1926, war games and tests began at Kazan and Lipetsk in the RSFSR. The centers served to field-test aircraft and armored vehicles up to the battalion level and housed aerial- and armored-warfare schools, through which officers rotated.

Nazi Germany

After becoming Chancellor of Germany (head of government) in 1933, Adolf Hitler ignored the Versailles Treaty provisions. Within the Wehrmacht (established in 1935), the command for motorized armored forces was named the Panzerwaffe in 1936. The Luftwaffe (the German air force) was officially established in February 1935, and development began on ground-attack aircraft and doctrines. Hitler strongly supported this new strategy. He read Guderian's 1937 book Achtung – Panzer! and, upon observing armored field exercises at Kummersdorf, remarked, "That is what I want – and that is what I will have."

Guderian

Guderian summarized combined-arms tactics as the way to get the mobile and motorized armored divisions to work together and support each other to achieve decisive success, a view he elaborated in his 1950 book Panzer Leader. Guderian believed that developments in technology were required to support the theory; in particular, armored divisions, tanks foremost, had to be equipped with wireless communications. Guderian insisted in 1933 to the high command that every tank in the German armored force must be equipped with a radio.
At the start of World War II, only the German army was thus prepared, with all its tanks "radio-equipped". This proved critical in early tank battles, where German tank commanders exploited the organizational advantage over the Allies that radio communication gave them. All Allied armies would later copy this innovation. During the Polish campaign, the performance of armored troops, under the influence of Guderian's ideas, won over a number of skeptics who had initially expressed doubt about armored warfare, such as von Rundstedt and Rommel.

Rommel

According to David A. Grossman, by the Twelfth Battle of the Isonzo (October–November 1917), while conducting a light-infantry operation, Rommel had perfected his maneuver-warfare principles, which were the very same ones applied during the blitzkrieg against France in 1940 (and repeated in the Coalition ground offensive against Iraq in the 1991 Gulf War). During the Battle of France, and against the advice of his staff advisers, Hitler ordered that everything should be completed in a few weeks; fortunately for the Führer, Rommel and Guderian disobeyed the General Staff's orders (particularly those of General von Kleist) and forged ahead, making quicker progress than anyone expected, and on the way "inventing the idea of Blitzkrieg". It was Rommel who created the new archetype of blitzkrieg, leading his division far ahead of flanking divisions. MacGregor and Williamson remark that Rommel's version of blitzkrieg displayed a significantly better understanding of combined-arms warfare than that of Guderian. General Hoth submitted an official report in July 1940 which declared that Rommel had "explored new paths in the command of Panzer divisions".

Methods of operations

Schwerpunkt

Schwerpunktprinzip was a heuristic device (conceptual tool or thinking formula) used in the German army since the nineteenth century to make decisions, from tactics to strategy, about priority. Schwerpunkt has been translated as center of gravity, crucial point, focal point and point of main effort. None of these forms is sufficient to describe the universal importance of the term and the concept of Schwerpunktprinzip. Every unit in the army, from the company to the supreme command, decided on a Schwerpunkt through Schwerpunktbildung, as did the support services, which meant that commanders always knew what was most important and why. The German army was trained to support the Schwerpunkt, even when risks had to be taken elsewhere, and to attack the point of main effort with overwhelming firepower. Through Schwerpunktbildung, the German army could achieve superiority at the Schwerpunkt, whether attacking or defending, turning local success at the Schwerpunkt into the progressive disorganisation of the opposing force and creating more opportunities to exploit this advantage, even if the army was numerically and strategically inferior in general. In the 1930s, Guderian summarized this as "Klotzen, nicht kleckern!" ("Kick, don't spatter them!").

Pursuit

Having achieved a breakthrough of the enemy's line, units comprising the Schwerpunkt were not supposed to become decisively engaged with enemy front-line units to the right and left of the breakthrough area. Units pouring through the hole were to drive upon set objectives behind the enemy front line. In World War II, German Panzer forces used motorized mobility to paralyze the opponent's ability to react. Fast-moving mobile forces seized the initiative, exploited weaknesses and acted before opposing forces could respond.
Central to this was the decision cycle, or tempo. Through superior mobility and faster decision-making cycles, mobile forces could act more quickly than the forces opposing them. Directive control was a fast and flexible method of command: rather than receiving an explicit order, a commander would be told of his superior's intent and of the role which his unit was to fill in this concept. The method of execution was then a matter for the discretion of the subordinate commander. Staff burden was reduced at the top and spread among tiers of command with knowledge about their situation. Delegation and the encouragement of initiative aided implementation; important decisions could be taken quickly and communicated verbally or with brief written orders.

Mopping-up

The last part of an offensive operation was the destruction of unsubdued pockets of resistance, which had been enveloped earlier and bypassed by the fast-moving armored and motorized spearheads. The Kesselschlacht ('cauldron battle') was a concentric attack on such pockets. It was here that most losses were inflicted upon the enemy, primarily through the mass capture of prisoners and weapons. During Operation Barbarossa, huge encirclements in 1941 produced nearly 3.5 million Soviet prisoners, along with masses of equipment.

Air power

Close air support was provided in the form of the dive bomber and medium bomber, which would support the focal point of attack from the air. German successes are closely related to the extent to which the Luftwaffe was able to control the air war in the early campaigns in Western and Central Europe and the Soviet Union. However, the Luftwaffe was a broadly based force with no constricting central doctrine, other than that its resources should be used generally to support national strategy. It was flexible, and it was able to carry out both operational-tactical and strategic bombing. Flexibility was the Luftwaffe's strength in 1939–1941. Paradoxically, from that period onward it became its weakness. While Allied air forces were tied to the support of the army, the Luftwaffe deployed its resources in a more general, operational way. It switched from air superiority missions, to medium-range interdiction, to strategic strikes, to close support duties, depending on the needs of the ground forces. In fact, far from being a specialist panzer spearhead arm, less than 15 percent of the Luftwaffe was intended for close support of the army in 1939.

Stimulants

Amphetamine use is believed to have played a role in the speed of Germany's initial blitzkrieg, since military success employing combined arms demanded long hours of continuous operations with minimal rest.

Limitations and countermeasures

Environment

The concepts associated with the term blitzkrieg (deep penetrations by armor, large encirclements, and combined-arms attacks) were largely dependent upon terrain and weather conditions. Where the ability for rapid movement across "tank country" was lacking, armored penetrations were often avoided or ended in failure. Terrain would ideally be flat, firm, unobstructed by natural barriers or fortifications, and interspersed with roads and railways. If it were instead hilly, wooded, marshy or urban, armor would be vulnerable to infantry in close-quarters combat and unable to break out at full speed. Additionally, units could be halted by mud (thawing along the Eastern Front regularly slowed both sides) or extreme snow.
Operation Barbarossa helped confirm that armor effectiveness and the requisite aerial support were dependent on weather and terrain. The disadvantages of terrain could, however, be nullified if surprise was achieved over the enemy by an attack through areas considered natural obstacles, as occurred during the Battle of France, when the German blitzkrieg-style attack went through the Ardennes. Since the French thought the Ardennes unsuitable for massive troop movement, particularly for tanks, they left it with only light defenses, which were quickly overrun by the Wehrmacht. The Germans quickly advanced through the forest, knocking down the trees the French had thought would impede this tactic.

Air superiority

The influence of air forces over forces on the ground changed significantly over the course of the Second World War. Early German successes were achieved when Allied aircraft could not make a significant impact on the battlefield. In May 1940, there was near parity in numbers of aircraft between the Luftwaffe and the Allies, but the Luftwaffe had been developed to support Germany's ground forces, had liaison officers with the mobile formations, and operated a higher number of sorties per aircraft. In addition, German air parity or superiority allowed the unencumbered movement of ground forces, their unhindered assembly into concentrated attack formations, aerial reconnaissance, aerial resupply of fast-moving formations and close air support at the point of attack. The Allied air forces had no close air support aircraft, training or doctrine. The Allies flew 434 French and 160 British sorties a day, but methods of attacking ground targets had yet to be developed; therefore Allied aircraft caused negligible damage. Against these 600 sorties, the Luftwaffe flew on average 1,500 sorties a day. On 13 May, Fliegerkorps VIII flew 1,000 sorties in support of the crossing of the Meuse. The following day the Allies made repeated attempts to destroy the German pontoon bridges, but German fighter aircraft, ground fire and Luftwaffe flak batteries with the panzer forces destroyed 56 percent of the attacking Allied aircraft, while the bridges remained intact.

Allied air superiority became a significant hindrance to German operations during the later years of the war. By June 1944 the Western Allies had complete control of the air over the battlefield, and their fighter-bomber aircraft were very effective at attacking ground forces. On D-Day the Allies flew 14,500 sorties over the battlefield area alone, not including sorties flown over north-western Europe. Against this, on 6 June the Luftwaffe flew some 300 sorties. Though German fighter presence over Normandy increased over the following days and weeks, it never approached the numbers the Allies commanded. Fighter-bomber attacks on German formations made movement during daylight almost impossible. Subsequently, shortages soon developed in food, fuel and ammunition, severely hampering the German defenders. German vehicle crews and even flak units experienced great difficulty moving during daylight. Indeed, the final German offensive operation in the west, Operation Wacht am Rhein, was planned to take place during poor weather in order to minimize interference by Allied aircraft. Under these conditions it was difficult for German commanders to employ the "armored idea", if at all.
Counter-tactics

Blitzkrieg is vulnerable to an enemy that is robust enough to weather the shock of the attack and that does not panic at the idea of enemy formations in its rear area. This is especially true if the attacking formation lacks the reserves to keep funnelling forces into the spearhead, or lacks the mobility to provide infantry, artillery and supplies to the attack. If the defender can hold the shoulders of the breach, it has the opportunity to counter-attack into the flank of the attacker, potentially cutting off the van, as happened to Kampfgruppe Peiper in the Ardennes.

During the Battle of France in 1940, the 4th Armoured Division (Major-General Charles de Gaulle) and elements of the 1st Army Tank Brigade (British Expeditionary Force) made probing attacks on the German flank, at times pushing into the rear of the advancing armored columns. This may have been a reason for Hitler to call a halt to the German advance. Those attacks, combined with Maxime Weygand's hedgehog tactic, would become the major basis for responding to blitzkrieg attacks in the future: deployment in depth and the holding of the "shoulders" of a penetration were essential to channelling the enemy attack, and artillery, properly employed at the shoulders, could take a heavy toll of attackers. While Allied forces in 1940 lacked the experience to develop these strategies successfully, resulting in France's capitulation with heavy losses, they characterized later Allied operations. At the Battle of Kursk, the Red Army employed a combination of defense in great depth, extensive minefields, and tenacious defense of breakthrough shoulders. In this way they depleted German combat power even as German forces advanced. The reverse can be seen in the Russian summer offensive of 1944, Operation Bagration, which resulted in the destruction of Army Group Center. German attempts to weather the storm and fight out of encirclements failed due to the Russian ability to continue to feed armored units into the attack, maintaining the mobility and strength of the offensive and arriving in force deep in the rear areas faster than the Germans could regroup.

Logistics

Although effective in the quick campaigns against Poland and France, mobile operations could not be sustained by Germany in later years. Strategies based on maneuver carry the inherent danger of the attacking force overextending its supply lines, and they can be defeated by a determined foe who is willing and able to sacrifice territory for time in which to regroup and rearm, as the Soviets did on the Eastern Front (as opposed to, for example, the Dutch, who had no territory to sacrifice). Tank and vehicle production was a constant problem for Germany; indeed, late in the war many panzer "divisions" had no more than a few dozen tanks. As the end of the war approached, Germany also experienced critical shortages in fuel and ammunition stocks as a result of Anglo-American strategic bombing and the blockade. Although production of Luftwaffe fighter aircraft continued, they would be unable to fly for lack of fuel. What fuel there was went to the panzer divisions, and even then they were not able to operate normally. Of the Tiger tanks lost against the United States Army, nearly half were abandoned for lack of fuel.

Military operations

Spanish Civil War

German volunteers first used armor in live field conditions during the Spanish Civil War of 1936–1939.
The armor commitment consisted of Panzer Battalion 88, a force built around three companies of Panzer I tanks that functioned as a training cadre for Spain's Nationalists. The Luftwaffe deployed squadrons of fighters, dive-bombers and transport aircraft as the Condor Legion. Guderian said that the tank deployment was "on too small a scale to allow accurate assessments to be made"; the true test of his "armored idea" would have to wait for the Second World War. However, the Luftwaffe also provided volunteers to Spain to test both tactics and aircraft in combat, including the first combat use of the Stuka.

During the war, the Condor Legion undertook the 1937 bombing of Guernica, which had a tremendous psychological effect on the populations of Europe. The results were exaggerated, and the Western Allies concluded that the "city-busting" techniques were now a part of the German way in war. The targets of the German aircraft were actually the rail lines and bridges, but, lacking the ability to hit them with accuracy (only three or four Ju 87s saw action in Spain), the Luftwaffe chose a method of carpet bombing, resulting in heavy civilian casualties.

Poland, 1939

Although journalists popularized the term blitzkrieg during the September 1939 invasion of Poland, historians Matthew Cooper and J. P. Harris have written that German operations during this campaign were consistent with traditional methods. The Wehrmacht strategy was more in line with Vernichtungsgedanke: a focus on envelopment to create pockets in broad-front annihilation. The German generals dispersed Panzer forces among the three German concentrations with little emphasis on independent use; they deployed tanks to create or destroy close pockets of Polish forces and to seize operational-depth terrain in support of the largely unmotorized infantry which followed. While the Wehrmacht used available models of tanks, Stuka dive-bombers and concentrated forces in the Polish campaign, the majority of the fighting involved conventional infantry and artillery warfare, and most Luftwaffe action was independent of the ground campaign. John Ellis wrote that "…there is considerable justice in Matthew Cooper's assertion that the panzer divisions were not given the kind of strategic mission that was to characterize authentic armored blitzkrieg, and were almost always closely subordinated to the various mass infantry armies". Steven Zaloga wrote, "Whilst Western accounts of the September campaign have stressed the shock value of the panzer and Stuka attacks, they have tended to underestimate the punishing effect of German artillery on Polish units. Mobile and available in significant quantity, artillery shattered as many units as any other branch of the Wehrmacht."

Low Countries and France, 1940

The German invasion of France, with subsidiary attacks on Belgium and the Netherlands, consisted of two phases, Operation Yellow (Fall Gelb) and Operation Red (Fall Rot). Yellow opened with a feint conducted against the Netherlands and Belgium by two armored corps and paratroopers. Most of the German armored forces were placed in Panzer Group Kleist, which attacked through the Ardennes, a lightly defended sector that the French planned to reinforce, if need be, before the Germans could bring up heavy and siege artillery. There was no time for the French to send such reinforcement, for the Germans did not wait for siege artillery: they reached the Meuse and achieved a breakthrough at the Battle of Sedan in three days.
Panzer Group Kleist raced to the English Channel, reaching the coast at Abbeville, and cut off the BEF, the Belgian Army and some of the best-equipped divisions of the French Army in northern France. Armored and motorized units under Guderian, Rommel and others advanced far beyond the marching and horse-drawn infantry divisions, and far in excess of what Hitler and the German high command expected or wished. When the Allies counter-attacked at Arras using the heavily-armored British Matilda I and Matilda II tanks, a brief panic ensued in the German High Command. Hitler halted his armored and motorized forces outside the port of Dunkirk, which the Royal Navy had started using to evacuate the Allied forces. Hermann Göring promised that the Luftwaffe would complete the destruction of the encircled armies, but aerial operations failed to prevent the evacuation of the majority of the Allied troops. In Operation Dynamo some French and British troops escaped. Case Yellow surprised everyone, overcoming the Allies' 4,000 armored vehicles, many of which were better than their German equivalents in armor and gun-power. The French and British frequently used their tanks in the dispersed role of infantry support rather than concentrating force at the point of attack, to create overwhelming firepower. The French armies were much reduced in strength and the confidence of their commanders shaken. With much of their own armor and heavy equipment lost in Northern France, they lacked the means to fight a mobile war. The Germans followed their initial success with Operation Red, a triple-pronged offensive. The XV Panzer Corps attacked towards Brest, XIV Panzer Corps attacked east of Paris towards Lyon, and the XIX Panzer Corps encircled the Maginot Line. The French, hard pressed to organize any sort of counter-attack, were continually ordered to form new defensive lines and found that German forces had already by-passed them and moved on. An armored counter-attack organized by Colonel de Gaulle could not be sustained, and he had to retreat. Prior to the German offensive in May, Winston Churchill had said "Thank God for the French Army". That same French army collapsed after barely two months of fighting. This was in shocking contrast to the four years of trench warfare which French forces had engaged in during the First World War. The French president of the Ministerial Council, Reynaud, analyzed the collapse in a speech on 21 May 1940. The Germans had not used paratroop attacks in France and only made one big drop in the Netherlands, to capture three bridges; some small glider-landings were conducted in Belgium to take bottle-necks on routes of advance before the arrival of the main force (the most renowned being the landing on Fort Eben-Emael in Belgium).
Eastern Front, 1941–44
Use of armored forces was crucial for both sides on the Eastern Front. Operation Barbarossa, the German invasion of the Soviet Union in June 1941, involved a number of breakthroughs and encirclements by motorized forces. Its goal, according to Führer Directive 21 (18 December 1940), was "to destroy the Russian forces deployed in the West and to prevent their escape into the wide-open spaces of Russia". The Red Army was to be destroyed west of the Dvina and Dnieper rivers, which lay east of the Soviet border, to be followed by a mopping-up operation.
The surprise attack resulted in the near annihilation of the Voyenno-Vozdushnye Sily (VVS, Soviet Air Force) by simultaneous attacks on airfields, allowing the Luftwaffe to achieve total air supremacy over all the battlefields within the first week. On the ground, four German panzer groups outflanked and encircled disorganized Red Army units, while the marching infantry completed the encirclements and defeated the trapped forces. In late July, after 2nd Panzer Group (commanded by Guderian) captured the watersheds of the Dvina and Dnieper rivers near Smolensk, the panzers had to defend the encirclement, because the marching infantry divisions remained hundreds of kilometers to the west. The Germans conquered large areas of the Soviet Union, but their failure to destroy the Red Army before the winter of 1941–1942 was a strategic failure that made German tactical superiority and territorial gains irrelevant. The Red Army had survived enormous losses and regrouped with new formations far to the rear of the front line. During the Battle of Moscow (October 1941 to January 1942), the Red Army defeated the German Army Group Center and for the first time in the war seized the strategic initiative. In the summer of 1942, Germany launched another offensive, this time focusing on Stalingrad and the Caucasus in the southern USSR. The Soviets again lost tremendous amounts of territory, only to counter-attack once more during winter. German gains were ultimately limited because Hitler divided his forces, attacking Stalingrad and driving towards the Caucasus oilfields simultaneously. The Wehrmacht became overstretched: although winning operationally, it could not inflict a decisive defeat as the durability of the Soviet Union's manpower, resources, industrial base and aid from the Western Allies began to take effect. In July 1943 the Wehrmacht conducted Operation Zitadelle (Citadel) against a salient at Kursk which Soviet troops heavily defended. Soviet defensive tactics had by now hugely improved, particularly in the use of artillery and air support. By April 1943, the Stavka had learned of German intentions through intelligence supplied by front-line reconnaissance and Ultra intercepts. In the following months, the Red Army constructed deep defensive belts along the paths of the planned German attack. The Soviets made a concerted effort to disguise their knowledge of German plans and the extent of their own defensive preparations, and the German commanders still hoped to achieve operational surprise when the attack commenced. The Germans did not achieve surprise and were not able to outflank or break through into enemy rear-areas during the operation. Several historians assert that Operation Citadel was planned and intended to be a blitzkrieg operation. Many of the German participants who wrote about the operation after the war, including Manstein, make no mention of blitzkrieg in their accounts. In 2000, Niklas Zetterling and Anders Frankson characterized only the southern pincer of the German offensive as a "classical blitzkrieg attack". Pier Battistelli wrote that the operational planning marked a change in German offensive thinking away from blitzkrieg and that more priority was given to brute force and fire power than to speed and maneuver. In 1995, David Glantz stated that for the first time, blitzkrieg was defeated in summer and the opposing Soviet forces were able to mount a successful counter-offensive.
The Battle of Kursk ended with two Soviet counter-offensives and the revival of deep operations. In the summer of 1944, the Red Army destroyed Army Group Centre in Operation Bagration, using combined-arms tactics for armor, infantry and air power in a coordinated strategic assault, known as deep operations, which led to an advance of several hundred kilometres in six weeks.
Western Front, 1944–45
Allied armies began using combined-arms formations and deep-penetration strategies that Germany had used in the opening years of the war. Many Allied operations in the Western Desert and on the Eastern Front relied on firepower to establish breakthroughs by fast-moving armored units. These artillery-based tactics were also decisive in Western Front operations after Operation Overlord in 1944, and the British Commonwealth and American armies developed flexible and powerful systems for using artillery support. What the Soviets lacked in flexibility, they made up for in number of rocket launchers, guns and mortars. The Germans never achieved the kind of fire concentrations their enemies were capable of by 1944. After the Allied landings in Normandy (June 1944), the Germans began a counter-offensive to overwhelm the landing force with armored attacks, but these failed due to a lack of co-ordination and to Allied superiority in anti-tank defense and in the air. The most notable attempt to use deep-penetration operations in Normandy was Operation Lüttich at Mortain, which only hastened the formation of the Falaise Pocket and the destruction of German forces in Normandy. The Mortain counter-attack was defeated by the US 12th Army Group with little effect on its own offensive operations. The last German offensive on the Western front, the Battle of the Bulge (Operation Wacht am Rhein), was an offensive launched towards the port of Antwerp in December 1944. Launched in poor weather against a thinly-held Allied sector, it achieved surprise and initial success as Allied air-power was grounded due to cloud cover. Determined defense by US troops in places throughout the Ardennes, the lack of good roads and German supply shortages caused delays. Allied forces deployed to the flanks of the German penetration and, as soon as the skies cleared, Allied aircraft returned to the battlefield. Allied counter-attacks soon forced back the Germans, who abandoned much equipment for lack of fuel.
Post-war controversy
Blitzkrieg has been called a Revolution in Military Affairs (RMA), but many writers and historians have concluded that the Germans did not invent a new form of warfare but applied new technologies to traditional ideas of Bewegungskrieg (maneuver warfare) to achieve decisive victory.
Strategy
In 1965, Captain Robert O'Neill, Professor of the History of War at the University of Oxford, produced an example of the popular view in Doctrine and Training in the German Army 1919–1939. Other historians wrote that blitzkrieg was an operational doctrine of the German armed forces and a strategic concept on which the leadership of Nazi Germany based its strategic and economic planning. Military planners and bureaucrats in the war economy appear rarely, if ever, to have employed the term blitzkrieg in official documents. That the German army had a "blitzkrieg doctrine" was rejected in the late 1970s by Matthew Cooper. The concept of a blitzkrieg Luftwaffe was challenged by Richard Overy in the late 1970s and by Williamson Murray in the mid-1980s.
That Nazi Germany went to war on the basis of "blitzkrieg economics" was criticized by Richard Overy in the 1980s, and George Raudzens described the contradictory senses in which historians have used the word. The notion of a German blitzkrieg concept or doctrine survives in popular history and many historians still support the thesis. Frieser wrote that after the failure of the Schlieffen Plan in 1914, the German army concluded that decisive battles were no longer possible in the changed conditions of the twentieth century. Frieser wrote that the Oberkommando der Wehrmacht (OKW), which was created in 1938, had intended to avoid the decisive battle concepts of its predecessors and planned for a long war of exhaustion (Ermattungskrieg). It was only after the improvised plan for the Battle of France in 1940 was unexpectedly successful that the German General Staff came to believe that Vernichtungskrieg was still feasible. German thinking reverted to the possibility of a quick and decisive war for the Balkan campaign and Operation Barbarossa.
Doctrine
Most academic historians regard the notion of blitzkrieg as military doctrine to be a myth. Shimon Naveh wrote "The striking feature of the blitzkrieg concept is the complete absence of a coherent theory which should have served as the general cognitive basis for the actual conduct of operations". Naveh described it as an "ad hoc solution" to operational dangers, thrown together at the last moment. Overy disagreed with the idea that Hitler and the Nazi regime ever intended a blitzkrieg war, because the once popular belief that the Nazi state organized its economy to carry out its grand strategy in short campaigns was false. Hitler had intended for a rapid unlimited war to occur much later than 1939, but Germany's aggressive foreign policy forced the Nazi state into war before it was ready. Hitler and the Wehrmacht's planning in the 1930s did not reflect a blitzkrieg method but the opposite. John Harris wrote that the Wehrmacht never used the word, and it did not appear in German army or air force field manuals; the word was coined in September 1939 by a Times newspaper reporter. Harris also found no evidence that German military thinking developed a blitzkrieg mentality. Karl-Heinz Frieser and Adam Tooze reached similar conclusions to Overy and Naveh, that the notions of blitzkrieg-economy and strategy were myths. Frieser wrote that surviving German economists and General Staff officers denied that Germany went to war with a blitzkrieg strategy. Robert M. Citino argues along similar lines that blitzkrieg was not a doctrine but the traditional German way of war applied with new technology. Historian Victor Davis Hanson states that Blitzkrieg "played on the myth of German technological superiority and industrial dominance," adding that German successes, particularly that of its Panzer divisions, were "instead predicated on the poor preparation and morale of Germany's enemies." Hanson also reports that at a Munich public address in November 1941, Hitler had "disowned" the concept of Blitzkrieg by calling it an "idiotic word." Further, successful Blitzkrieg operations were predicated on superior numbers and air support, and were only possible for short periods of time without sufficient supply lines.
For all intents and purposes, Blitzkrieg ended at the Eastern Front once the German forces gave up Stalingrad, after they faced hundreds of new T-34 tanks, when the Luftwaffe became unable to assure air dominance, and following the stalemate at Kursk. To this end, Hanson concludes that German military success was not accompanied by the adequate provisioning of its troops with food and materiel far from the source of supply, which contributed to its ultimate failures. Despite its later disappointments as German troops extended their lines at too great a distance, the specter of armored Blitzkrieg forces initially proved victorious against the Polish, Dutch, Belgian, and French armies early in the war.
Economics
In the 1960s, Alan Milward developed a theory of blitzkrieg economics: that Germany could not fight a long war and chose to avoid comprehensive rearmament, arming in breadth to win quick victories. Milward described an economy positioned between a full war economy and a peacetime economy. The purpose of the blitzkrieg economy was to allow the German people to enjoy high living standards in the event of hostilities and avoid the economic hardships of the First World War. Overy wrote that blitzkrieg as a "coherent military and economic concept has proven a difficult strategy to defend in light of the evidence". Milward's theory was contrary to Hitler's and German planners' intentions. The Germans, aware of the errors of the First World War, rejected the concept of organizing their economy to fight only a short war. Therefore, focus was given to the development of armament in depth for a long war, instead of armament in breadth for a short war. Hitler claimed that relying on surprise alone was "criminal" and that "we have to prepare for a long war along with surprise attack". During the winter of 1939–40, Hitler demobilized many troops from the army to return as skilled workers to factories, because the war would be decided by production, not a quick "Panzer operation". In the 1930s, Hitler had ordered rearmament programs that cannot be considered limited. In November 1937 Hitler had indicated that most of the armament projects would be completed by 1943–45. The rearmament of the Kriegsmarine was to have been completed in 1949 and the Luftwaffe rearmament program was to have matured in 1942, with a force capable of strategic bombing with heavy bombers. The construction and training of motorized forces and a full mobilization of the rail networks would not begin until 1943 and 1944 respectively. Hitler needed to avoid war until these projects were complete, but his misjudgements in 1939 forced Germany into war before rearmament was complete. After the war, Albert Speer claimed that the German economy achieved greater armaments output, not because of diversions of capacity from civilian to military industry, but through streamlining of the economy. Richard Overy pointed out that some 23 percent of German output was military by 1939. Between 1937 and 1939, 70 percent of investment capital went into the rubber, synthetic fuel, aircraft and shipbuilding industries. Hermann Göring had consistently stated that the task of the Four Year Plan was to rearm Germany for total war. Hitler's correspondence with his economists also reveals that his intent was to wage war in 1943–1945, when the resources of central Europe had been absorbed into Nazi Germany. Living standards were not high in the late 1930s. Consumption of consumer goods had fallen from 71 percent in 1928 to 59 percent in 1938.
The demands of the war economy reduced the amount of spending in non-military sectors to satisfy the demand for the armed forces. On 9 September, Göring, as Head of the Reich Defense Council, called for the complete "employment" of the living and fighting power of the national economy for the duration of the war. Overy presents this as evidence that a "blitzkrieg economy" did not exist. Adam Tooze wrote that the German economy was being prepared for a long war. The expenditure for this war was extensive and put the economy under severe strain. The German leadership were concerned less with how to balance the civilian economy against the needs of civilian consumption than with how best to prepare the economy for total war. Once war had begun, Hitler urged his economic experts to abandon caution and expend all available resources on the war effort, but the expansion plans only gradually gained momentum in 1941. Tooze wrote that the huge armament plans in the pre-war period did not indicate any clear-sighted blitzkrieg economy or strategy.
Heer
Frieser wrote that the Heer (the German army) was not ready for blitzkrieg at the start of the war. A blitzkrieg method called for a young, highly skilled mechanized army. In 1939–40, 45 percent of the army was 40 years old and 50 percent of the soldiers had only a few weeks' training. The German army, contrary to the blitzkrieg legend, was not fully motorized and had only 120,000 vehicles, compared to the 300,000 of the French Army. The British also had an "enviable" contingent of motorized forces. Thus, "the image of the German 'Blitzkrieg' army is a figment of propaganda imagination". During the First World War the German army used 1.4 million horses for transport and in the Second World War used 2.7 million horses; only ten percent of the army was motorized in 1940. Half of the German divisions available in 1940 were combat ready, but less well-equipped than the British and French or the Imperial German Army of 1914. In the spring of 1940, the German army was semi-modern: a small number of well-equipped and "elite" divisions were offset by many second and third rate divisions. In 2003, John Mosier wrote that the French soldiers in 1940 were better trained than German soldiers, as were the Americans later, and that the German army was the least mechanized of the major armies, but that its leadership cadres were larger and better, and that the high standard of leadership was the main reason for the successes of the German army in World War II, as it had been in World War I.
Luftwaffe
James Corum wrote that it was a myth that the Luftwaffe had a doctrine of terror bombing in Blitzkrieg operations, in which civilians were attacked to break the will of an enemy or aid its collapse. After the bombing of Guernica in 1937 and the Rotterdam Blitz in 1940, it was commonly assumed that terror bombing was a part of Luftwaffe doctrine. During the interwar period the Luftwaffe leadership rejected the concept of terror bombing in favor of battlefield support and interdiction operations. Corum continues: General Walther Wever compiled a doctrine known as The Conduct of the Aerial War. This document, which the Luftwaffe adopted, rejected Giulio Douhet's theory of terror bombing. Terror bombing was deemed to be "counter-productive", increasing rather than destroying the enemy's will to resist. Such bombing campaigns were regarded as a diversion from the Luftwaffe's main operations: destruction of the enemy armed forces.
The bombings of Guernica, Rotterdam and Warsaw were tactical missions in support of military operations and were not intended as strategic terror attacks. J. P. Harris wrote that most Luftwaffe leaders from Goering through the general staff believed (as did their counterparts in Britain and the United States) that strategic bombing was the chief mission of the air force and that, given such a role, the Luftwaffe would win the next war, and that:
The Luftwaffe did end up with an air force consisting mainly of relatively short-range aircraft, but this does not prove that the German air force was solely interested in 'tactical' bombing. It happened because the German aircraft industry lacked the experience to build a long-range bomber fleet quickly, and because Hitler was insistent on the very rapid creation of a numerically large force. It is also significant that Germany's position in the center of Europe to a large extent obviated the need to make a clear distinction between bombers suitable only for 'tactical' and those necessary for strategic purposes in the early stages of a likely future war.
Fuller and Liddell Hart
British theorists John Frederick Charles Fuller and Captain Basil Henry Liddell Hart have often been associated with the development of blitzkrieg, though this is a matter of controversy. In recent years historians have uncovered that Liddell Hart distorted and falsified facts to make it appear as if his ideas were adopted. After the war Liddell Hart imposed his own perceptions, after the event, claiming that the mobile tank warfare practiced by the Wehrmacht was a result of his influence. By manipulation and contrivance, Liddell Hart distorted the actual circumstances of the blitzkrieg formation, and he obscured its origins. Through his indoctrinated idealization of an ostentatious concept, he reinforced the myth of blitzkrieg. By imposing, retrospectively, his own perceptions of mobile warfare upon the shallow concept of blitzkrieg, he "created a theoretical imbroglio that has taken 40 years to unravel." Blitzkrieg was not an official doctrine and historians in recent times have come to the conclusion that it did not exist as such. The early 1950s literature transformed blitzkrieg into a historical military doctrine, which carried the signature of Liddell Hart and Guderian. The main evidence of Liddell Hart's deceit and "tendentious" report of history can be found in his letters to Erich von Manstein, Heinz Guderian and the relatives and associates of Erwin Rommel. Liddell Hart, in letters to Guderian, "imposed his own fabricated version of blitzkrieg on the latter and compelled him to proclaim it as original formula". Kenneth Macksey found Liddell Hart's original letters to Guderian in the general's papers, requesting that Guderian give him credit for "impressing him" with his ideas of armored warfare. When Liddell Hart was questioned about this in 1968, and about the discrepancy between the English and German editions of Guderian's memoirs, "he gave a conveniently unhelpful though strictly truthful reply. ('There is nothing about the matter in my file of correspondence with Guderian himself except...that I thanked him...for what he said in that additional paragraph'.)" During World War I, Fuller had been a staff officer attached to the new tank corps. He developed Plan 1919 for massive, independent tank operations, which he claimed were subsequently studied by the German military.
It is variously argued that Fuller's wartime plans and post-war writings were an inspiration, or that his readership was low and German experiences during the war received more attention. The Germans' view of themselves as the losers of the war may be linked to the senior and experienced officers' undertaking a thorough review, studying and rewriting all their army doctrine and training manuals. Fuller and Liddell Hart were "outsiders": Liddell Hart was unable to serve as a soldier after 1916 after being gassed on the Somme, and Fuller's abrasive personality resulted in his premature retirement in 1933. Their views had limited impact in the British army; the War Office permitted the formation of an Experimental Mechanized Force on 1 May 1927, composed of tanks, motorized infantry, self-propelled artillery and motorized engineers, but the force was disbanded in 1928 on the grounds that it had served its purpose. A new experimental brigade was intended for the next year and became a permanent formation in 1933, during the cuts of the financial years.
Continuity
It has been argued that blitzkrieg was not new; the Germans did not invent something called blitzkrieg in the 1920s and 1930s. Rather, the German concepts of wars of movement and concentrated force had been seen in the wars of Prussia and the German wars of unification. The first European general to introduce rapid movement, concentrated power and integrated military effort was the Swedish King Gustavus Adolphus during the Thirty Years' War. The appearance of the aircraft and the tank in the First World War, called an RMA, offered the German military a chance to get back to the traditional war of movement as practiced by Moltke the Elder. The so-called "blitzkrieg campaigns" of 1939 to circa 1942 were well within that operational context. At the outbreak of war, the German army had no radically new theory of war. The operational thinking of the German army had not changed significantly since the First World War or since the late 19th century. J. P. Harris and Robert M. Citino point out that the Germans had always had a marked preference for short, decisive campaigns but were unable to achieve short-order victories in First World War conditions. The transformation from the stalemate of the First World War into tremendous initial operational and strategic success in the Second was partly the result of the employment of a relatively small number of mechanized divisions, most importantly the Panzer divisions, and the support of an exceptionally powerful air force.
Guderian
Heinz Guderian is widely regarded as being highly influential in developing the military methods of warfare used by Germany's tank men at the start of the Second World War. This style of warfare brought maneuver back to the fore, and placed an emphasis on the offensive. This style, along with the shockingly rapid collapse of the armies that opposed it, came to be branded as blitzkrieg warfare. Following Germany's military reforms of the 1920s, Heinz Guderian emerged as a strong proponent of mechanized forces. Within the Inspectorate of Transport Troops, Guderian and colleagues performed theoretical and field exercise work. Guderian met with opposition from some in the General Staff, who were distrustful of the new weapons and who continued to view the infantry as the primary weapon of the army. Among them, Guderian claimed, was Chief of the General Staff Ludwig Beck (1935–38), who he alleged was skeptical that armored forces could be decisive. This claim has been disputed by later historians.
James Corum wrote that, by Guderian's account, Guderian single-handedly created the German tactical and operational methodology. Between 1922 and 1928 Guderian wrote a number of articles concerning military movement. As the ideas of making use of the combustion engine in a protected encasement to bring mobility back to warfare developed in the German army, Guderian was a leading proponent of the formations that would be used for this purpose. He was later asked to write an explanatory book, which was titled Achtung Panzer! (1937). In it he explained the theories of the tank men and defended them. Guderian argued that the tank would be the decisive weapon of the next war. "If the tanks succeed, then victory follows", he wrote. In an article addressed to critics of tank warfare, he wrote "until our critics can produce some new and better method of making a successful land attack other than self-massacre, we shall continue to maintain our beliefs that tanks—properly employed, needless to say—are today the best means available for land attack." Addressing the faster rate at which defenders could reinforce an area than attackers could penetrate it during the First World War, Guderian wrote that "since reserve forces will now be motorized, the building up of new defensive fronts is easier than it used to be; the chances of an offensive based on the timetable of artillery and infantry co-operation are, as a result, even slighter today than they were in the last war." He continued, "We believe that by attacking with tanks we can achieve a higher rate of movement than has been hitherto obtainable, and—what is perhaps even more important—that we can keep moving once a breakthrough has been made." Guderian additionally required that tactical radios be widely used to facilitate coordination and command, by having one installed in all tanks. Guderian's leadership was supported, fostered and institutionalized by his supporters in the Reichswehr General Staff system, which worked the army to greater and greater levels of capability through massive and systematic Movement Warfare war games in the 1930s. Guderian's book incorporated the work of theorists such as Ludwig Ritter von Eimannsberger, whose book The Tank War (Der Kampfwagenkrieg, 1934) gained a wide audience in the German army. Another German theorist, Ernst Volckheim, wrote a huge amount on tank and combined arms tactics and was influential on German thinking on the use of armored formations, but his work was not acknowledged in Guderian's writings.
See also
AirLand Battle, blitzkrieg-like doctrine of the US Army in the 1980s
Armoured warfare
Maneuver warfare
Shock and awe, the 21st-century US military doctrine
Vernichtungsgedanke, or "annihilation concept"
Mission-type tactics
Deep Battle, Soviet Red Army military doctrine from the 1930s, often confused with blitzkrieg
Battleplan (documentary TV series)
Vernichtungsschlacht, battle of annihilation
Notes
References
Bibliography
Books
Conferences
Journals
Websites
Further reading
Raudzens, George. "Blitzkrieg Ambiguities: Doubtful Usage of a Famous Word." War & Society 7.2 (1989): 77–94. https://doi.org/10.1179/106980489790305551
External links
Armstrong, G. P. The Controversy over Tanks in the British Army 1919 to 1933 (PhD 1976)
Sinesi, Michael Patrick. Modern Bewegungskrieg: German Battle Doctrine, 1920–1940 (2001)
Vardi, Gil-Li. The Enigma of German Operational Theory: the Evolution of Military Thought in Germany, 1919–1938 (PhD 2008)
Spiegel Online: The Nazi Death Machine, Hitler's Drugged Soldiers
Words and phrases with no direct English translation Military strategy Military terminology Armoured warfare Military theory German words and phrases Warfare by type
https://en.wikipedia.org/wiki/Bill%20Holbrook
Bill Holbrook
Bill Holbrook (born 1958) is an American cartoonist and webcomic writer and artist, best known for his syndicated comic strip On the Fastrack. Born in Los Angeles, Holbrook grew up in Huntsville, Alabama, and began drawing at an early age. While majoring in illustration and visual design at Auburn University, Holbrook served as art director of the student newspaper, doing editorial cartoons and a weekly comic strip. At the same time, his work was being published in the Huntsville Times and the Monroe Journal. After graduation in 1980, he joined the Atlanta Constitution as an editorial staff artist. During a 1982 visit to relatives on the West Coast, Holbrook met Peanuts creator Charles Schulz. Following Schulz's advice and encouragement, Holbrook created a strip in the fall of that year about a college graduate working in a rundown diner. It did not stir syndicate interest, but what he learned on the strip helped him when he created On the Fastrack. Eleven days before On the Fastrack made its syndicated debut (March 19, 1984), Holbrook met Teri Peitso on a blind date. They were married on Pearl Harbor Day, 1985. They have two daughters, Chandler and Haviland. Peitso-Holbrook's novels have been nominated for both Edgar Awards and Agatha Awards. She is currently an assistant professor in literacy education at Georgia State University. The family lives in the Atlanta area. On October 3, 1988, Holbrook began his second strip, Safe Havens, and his third strip, Kevin and Kell, was launched in September 1995.
Comic strips
Every week Holbrook writes the story line for the next three weeks for one of his strips and draws the next three weeks' worth of strips for another. In 2010, characters from On the Fastrack and Safe Havens began appearing in both strips.
On the Fastrack - About the misadventures at Fastrack, Inc., On the Fastrack has been distributed by King Features Syndicate since 1984. It now appears in 75 newspapers nationwide.
Safe Havens - Initially about a day care center, this strip evolved into the adventures of Samantha Argus and her friends and is now syndicated nationally to over 50 newspapers.
Kevin and Kell - Originally an online-only strip, though it was also published in the Atlanta Journal-Constitution for some years, Kevin and Kell centers on the mixed marriage between a rabbit, Kevin, and a grey wolf, Kell Dewclaw. The plot revolves around species-related humor, satire, and interpersonal conflict.
Duel In The Somme - Holbrook illustrated a story by Ben Bova and Rob Balder in this strip about a romantic rivalry between a computer-simulation designer and his boss.
References
External links
1958 births American comic strip cartoonists American webcomic creators People from Los Angeles Artists from Alabama Auburn University alumni Living people Writers from Huntsville, Alabama
https://en.wikipedia.org/wiki/Bruce%20Campbell
Bruce Campbell
Bruce Lorne Campbell (born June 22, 1958) is an American actor and filmmaker. He is best known for his role as Ash Williams in Sam Raimi's Evil Dead horror franchise, beginning with the short film Within the Woods (1978). He has also starred in many low-budget cult films such as Crimewave (1985), Maniac Cop (1988), Sundown: The Vampire in Retreat (1989), and Bubba Ho-Tep (2002). On television, Campbell had leading roles in The Adventures of Brisco County, Jr. (1993–1994) and Jack of All Trades (2000), and a recurring role as Autolycus, King of Thieves, in Hercules: The Legendary Journeys (1995–1999) and Xena: Warrior Princess (1995–1999). He played Sam Axe on the USA Network series Burn Notice (2007–2013) and reprised his role as Ash on the Starz series Ash vs. Evil Dead (2015–2018). He also appeared in The Escort (2015). Campbell directed, produced, and starred in the documentaries Fanalysis (2002) and A Community Speaks (2004); co-wrote, directed, produced, and starred in the film Man with the Screaming Brain (2005); and directed, produced, and starred in a parody of his career, My Name Is Bruce (2007). Campbell is known for frequent collaborations with the aforementioned Raimi, his brother Ted, Josh Becker, and Scott Spiegel.
Early life
Bruce Lorne Campbell was born in Royal Oak, Michigan, on June 22, 1958, the son of homemaker Joanne Louise (née Pickens) and advertising executive and college professor Charles Newton Campbell. He is of English and Scottish descent, and has an older brother named Don and an older half-brother named Michael. His father was also an actor and director in the local theater scene. Campbell began acting and making short Super 8 movies with friends as a teenager. After meeting future filmmaker Sam Raimi while the two attended Wylie E. Groves High School, they became close friends and collaborators. Campbell attended Western Michigan University and continued to pursue an acting career.
Career
Film
Campbell and Raimi collaborated on a 30-minute Super 8 version of the first Evil Dead film, titled Within the Woods (1978), which was initially used to attract investors. He and Raimi got together with family and friends to begin working on The Evil Dead (1981). While starring in the lead role, Campbell also worked behind the camera, receiving a co-executive producer credit. Raimi wrote, directed, and edited the film, while Rob Tapert produced. Following an endorsement by horror author Stephen King, the film slowly began to receive attention and offers for distribution. Four years after its original release, it became the number one movie in the UK. It then received distribution in the United States, spawning the sequels Evil Dead II (1987) and Army of Darkness (1992). Campbell was also drawn in the Marvel Zombies comics as his character, Ash Williams. He is featured in five comics, all in the series Marvel Zombies vs. Army of Darkness. In them, he fights alongside the Marvel heroes against heroes and people who have turned into zombies (deadites) while in search of the Necronomicon (Book of the Names of the Dead). Campbell also played Coach Boomer in the film Sky High (2005). He has appeared in many of Raimi's films outside of the Evil Dead series, notably having cameo appearances in the director's Spider-Man film series. Campbell also joined the casts of Raimi's Darkman and The Quick and the Dead, though he had no actual screen time in the latter film's theatrical cut.
In March 2022, Campbell was announced to have a cameo in Raimi's Marvel Cinematic Universe film Doctor Strange in the Multiverse of Madness. Campbell often takes on quirky roles, such as Elvis Presley in the film Bubba Ho-Tep. Along with Bubba Ho-Tep, he played a supporting role in Maniac Cop and Maniac Cop 2, and spoofed his career in the self-directed My Name Is Bruce. Other mainstream films for Campbell include supporting or featured roles in the Coen Brothers film The Hudsucker Proxy, the Michael Crichton adaptation Congo, the film version of McHale's Navy, Escape From L.A. (the sequel to John Carpenter's Escape From New York), the Jim Carrey drama The Majestic and the 2005 Disney film Sky High. Campbell had a starring voice role in the hit 2009 animated adaptation of the children's book Cloudy with a Chance of Meatballs, and a supporting voice role in Pixar's Cars 2. Campbell produced the 2013 remake of The Evil Dead, along with Raimi and Rob Tapert, and made a cameo appearance in it, with the expectation that he would reprise the role of Ash in Army of Darkness 2. The following year, the comedy metal band Psychostick released a song titled "Bruce Campbell" on their album IV: Revenge of the Vengeance that pays a comedic tribute to his past roles. Campbell worked as an executive producer for the 2023 film Evil Dead Rise.
Television
Outside of film, Campbell has appeared in a number of television series. He starred in The Adventures of Brisco County, Jr., a boisterous science fiction comedy western created by Jeffrey Boam and Carlton Cuse that ran for one season. He played a lawyer turned bounty hunter who was trying to hunt down John Bly, the man who killed his father. He starred in the television series Jack of All Trades, set on a fictional island occupied by the French in 1801. Campbell was also credited as co-executive producer, among others. The show was directed by Eric Gruendemann, and was produced by various people, including Sam Raimi. The show aired for two seasons, from 2000 to 2001. He had a recurring role as "Bill Church Jr.", based upon the character of Morgan Edge from the Superman comics, on Lois & Clark: The New Adventures of Superman. From 1996 to 1997, Campbell was a recurring guest star on the show Ellen as Ed Billik, who becomes Ellen's boss when she sells her bookstore in season four. He is also known for his supporting role as the recurring character Autolycus ("King of Thieves") on both Hercules: The Legendary Journeys and Xena: Warrior Princess, which reunited him with producer Rob Tapert. Campbell played Hercules/Xena series producer Tapert in two episodes of Hercules set in the present. He directed a number of episodes of Hercules and Xena, including the Hercules series finale. Campbell also landed the lead role of race car driver Hank Cooper in the Disney made-for-television remake of The Love Bug. Campbell made a critically acclaimed dramatic guest appearance as a grief-stricken detective seeking revenge for his father's murder in a two-part episode of the fourth season of Homicide: Life on the Street. Campbell later played the part of a bigamous demon in The X-Files episode "Terms of Endearment". He also starred as Agent Jackman in the episode "Witch Way Now?" of the WB series Charmed, as well as playing a state police officer in an episode of the short-lived series American Gothic titled "Meet the Beetles". Campbell co-starred on the television series Burn Notice, which aired from 2007 to 2013 on USA Network.
He portrayed Sam Axe, a beer-chugging former Navy SEAL now working as an unlicensed private investigator and occasional mercenary with his old friend Michael Westen, the show's main character. When working undercover, his character frequently used the alias Chuck Finley, which Campbell later revealed was the name of one of his father's old co-workers. Campbell was the star of a 2011 Burn Notice made-for-television prequel focusing on Sam's Navy SEAL career, titled Burn Notice: The Fall of Sam Axe. In 2014, Campbell played Santa Claus in an episode of The Librarians. Campbell played Ronald Reagan in season 2 of the FX original series Fargo. More recently, Campbell reprised his role as Ashley "Ash" Williams in Ash vs Evil Dead, a series based upon the Evil Dead franchise that launched his career. Ash vs Evil Dead began airing on Starz on October 31, 2015, and was renewed by the cable channel for second and third seasons, before being cancelled. In January 2019, Travel Channel announced a reboot of the Ripley's Believe It or Not! reality series, with Campbell serving as host and executive producer. The 10-episode season debuted on June 9, 2019.
Voice acting
Campbell is featured as a voice actor in several video game titles. He provides the voice of Ash in the four games based on the Evil Dead film series: Evil Dead: Hail to the King, Evil Dead: A Fistful of Boomstick, Evil Dead: Regeneration and Evil Dead: The Game. He also provided voice talent in other titles such as Pitfall 3D: Beyond the Jungle, Spider-Man, Spider-Man 2, Spider-Man 3, The Amazing Spider-Man, and Dead by Daylight. He provided the voice of main character Jake Logan in the PC title Tachyon: The Fringe, the voice of main character Jake Burton in the PlayStation game Broken Helix and the voice of Magnanimous in Megas XLR. Campbell voiced the pulp adventurer Lobster Johnson in Hellboy: The Science of Evil and has done voice-over work for the Codemasters game Hei$t, a game which was announced on January 28, 2010, to have been "terminated". He also provided the voice of The Mayor in the 2009 film Cloudy with a Chance of Meatballs, the voice of Rod "Torque" Redline in Cars 2 and the voice of Fugax in the 2006 film The Ant Bully. Despite the inclusion of his character Ash Williams in Telltale Games' Poker Night 2, the character is voiced in that game by Danny Webber rather than by Campbell. In 2014, he voiced a character in the online MOBA game Tome: Immortal Arena. Campbell also provided voice-over and motion capture for Sgt. Lennox in the Exo Zombies mode of Call of Duty: Advanced Warfare.
Writing
In addition to acting and occasionally directing, Campbell has become a writer, starting with an autobiography, If Chins Could Kill: Confessions of a B Movie Actor, published in June 2001. The autobiography was a successful New York Times Best Seller. It follows Campbell's career to date as an actor in low-budget films and television, providing his insight into "Blue-Collar Hollywood". The paperback version of the book adds details about the reactions of fans at book signings: "Whenever I do mainstream stuff, I think they're pseudo-interested, but they're still interested in seeing weirdo, offbeat stuff, and that's what I'm attracted to." Campbell's next book, Make Love! The Bruce Campbell Way, was published on May 26, 2005. The book's plot involves him (depicted in a comical way) as the main character struggling to make it into the world of A-list movies.
He later recorded an audio play adaptation of Make Love! with fellow Michigan actors, including longtime collaborator Ted Raimi. This radio drama was released through the independent label Rykodisc and spans 6 discs with a 6-hour running time. In addition to his books, Campbell also wrote a column for X-Ray Magazine in 2001, an issue of the popular comic series The Hire, and comic book adaptations of his Man with the Screaming Brain. Most recently, he wrote the introduction to Josh Becker's The Complete Guide to Low-Budget Feature Filmmaking. In late 2016, Campbell announced that he would be releasing a third book, Hail to the Chin: Further Confessions of a B Movie Actor, detailing his life from where If Chins Could Kill left off. Hail to the Chin was released in August 2017, and was accompanied by a book tour across the United States and Europe. Campbell maintained a blog on his official website, where he posted mainly about politics and the film industry, though it has since been deleted.
Bruce Campbell Horror Film Festival
Since 2014, the Bruce Campbell Horror Film Festival, narrated and organized by Campbell, has been held in the Muvico Theater in Rosemont, Illinois. The first festival ran from August 21 to 25, 2014, presented by Wizard World as part of the Chicago Comicon. The second festival ran from August 20 to 23, 2015, with guests Tom Holland and Eli Roth. The third festival took place over four days in August 2016. Guests of the event were Sam Raimi, Robert Tapert and Doug Benson.
Personal life
Campbell married Christine Deveau in 1983, and they had two children before divorcing in 1989. He met costume designer Ida Gearon while working on Mindwarp, and they were married in 1992. They reside in Jacksonville, Oregon. Campbell is also ordained and has performed marriage ceremonies.
Filmography
Film
Television
Video games
Accolades
See also
Make Love! The Bruce Campbell Way
References
External links
Salon Interviews Bruce Campbell
"Not My Job" Bruce Campbell appears on Wait Wait... Don't Tell Me!
1958 births Living people American male film actors American male television actors American male video game actors American male voice actors American people of English descent American people of Scottish descent Male actors from Michigan People from Jacksonville, Oregon People from Royal Oak, Michigan Western Michigan University alumni 20th-century American male actors 21st-century American male actors
https://en.wikipedia.org/wiki/Berkeley%20DB
Berkeley DB
Berkeley DB (BDB) is an unmaintained embedded database software library for key/value data, historically significant in open source software. Berkeley DB is written in C with API bindings for many other programming languages. BDB stores arbitrary key/data pairs as byte arrays, and supports multiple data items for a single key. Berkeley DB is not a relational database, although it has advanced database features including database transactions, multiversion concurrency control and write-ahead logging. BDB runs on a wide variety of operating systems, including most Unix-like and Windows systems, and real-time operating systems. BDB was commercially supported and developed by Sleepycat Software from 1996 to 2006. Sleepycat Software was acquired by Oracle Corporation in February 2006, which continued to develop and sell the C Berkeley DB library. In 2013 Oracle re-licensed BDB under the AGPL license, and released new versions until May 2020. Bloomberg LP continues to develop a fork of the 2013 version of BDB within their Comdb2 database, under the original Sleepycat permissive license.
Origin
Berkeley DB originated at the University of California, Berkeley, as part of BSD, Berkeley's version of the Unix operating system. After 4.3BSD (1986), the BSD developers attempted to remove or replace all code originating in the original AT&T Unix from which BSD was derived. In doing so, they needed to rewrite the Unix database package. Margo Seltzer and Ozan Yigit created a new database, unencumbered by any AT&T patents: an on-disk hash table that outperformed the existing dbm libraries. Berkeley DB itself was first released in 1991 and later included with 4.4BSD. In 1996 Netscape requested that the authors of Berkeley DB improve and extend the library, then at version 1.86, to suit Netscape's requirements for an LDAP server and for use in the Netscape browser. That request led to the creation of Sleepycat Software. This company was acquired by Oracle Corporation in February 2006. Since its initial release, Berkeley DB has gone through various versions. Each major release cycle has introduced a single new major feature, generally layering on top of the earlier features to add functionality to the product. The 1.x releases focused on managing key/value data storage and are referred to as "Data Store" (DS). The 2.x releases added a locking system enabling concurrent access to data; this is what is known as "Concurrent Data Store" (CDS). The 3.x releases added a logging system for transactions and recovery, called "Transactional Data Store" (TDS). The 4.x releases added the ability to replicate log records and create a distributed highly available single-master multi-replica database; this is called the "High Availability" (HA) feature set. Berkeley DB's evolution has sometimes led to minor API changes or log format changes, but very rarely have database formats changed. Berkeley DB HA supports online upgrades from one version to the next by maintaining the ability to read and apply the prior release's log records. The FreeBSD and OpenBSD operating systems continue to use Berkeley DB 1.8x for compatibility reasons; Linux-based operating systems commonly include several versions to accommodate applications still using older interfaces/files. Starting with the 6.0.21 (Oracle 12c) release, all Berkeley DB products are licensed under the GNU AGPL.
Previously, Berkeley DB was redistributed under the 4-clause BSD license (before version 2.0), and the Sleepycat Public License, which is an OSI-approved open-source license as well as an FSF-approved free software license. The product ships with complete source code, build script, test suite, and documentation. The comprehensive feature set along with the licensing terms have led to its use in a multitude of free and open-source software. Those who do not wish to abide by the terms of the GNU AGPL, or use an older version with the Sleepycat Public License, have the option of purchasing another proprietary license for redistribution from Oracle Corporation. This technique is called dual licensing. Berkeley DB includes compatibility interfaces for some historic Unix database libraries: dbm, ndbm and hsearch (a System V and POSIX library for creating in-memory hash tables).
Architecture
Berkeley DB has an architecture notably simpler than that of other database systems like relational database management systems. For example, like SQLite and LMDB, it is not based on a server/client model, and does not provide support for network access; programs access the database using in-process API calls (a minimal usage sketch appears below). Oracle added support for SQL in the 11g R2 release, based on the popular SQLite API, by including a version of SQLite in Berkeley DB (it uses Berkeley DB for storage). A program accessing the database is free to decide how the data is to be stored in a record. Berkeley DB puts no constraints on the record's data. The record and its key can both be up to four gigabytes long. Despite having a simple architecture, Berkeley DB supports many advanced database features such as ACID transactions, fine-grained locking, hot backups and replication.
Oracle Corporation use of name "Berkeley DB"
The name "Berkeley DB" is used by Oracle Corporation for three different products, only one of which is BDB:
Berkeley DB, the C database library that is the subject of this article
Berkeley DB Java Edition, a pure Java library whose design is modelled after the C library but is otherwise unrelated
Berkeley DB XML, a C++ program that supports XQuery, and which includes a legacy version of the C database library
Open Source Programs still using Berkeley DB
BDB was once very widespread, but usage dropped steeply from 2013 (see licensing section). Notable software that still uses Berkeley DB for data storage includes:
Bogofilter – A free/open source spam filter that saves its wordlists using Berkeley DB by default
Citadel – A free/open source groupware platform that keeps all of its data stores, including the message base, in Berkeley DB. Citadel is licensed under the GPLv3, which is compatible with Oracle BDB licensing
Sendmail – A free/open source MTA, first released in 1983 for Unix systems and no longer widely used
Spamassassin – A free/open source anti-spam application
Licensing
Berkeley DB V2.0 and higher is available under a dual license:
Oracle commercial license
The GNU AGPL v3
Switching the open source license in 2013 from the Sleepycat license to the AGPL had a major effect on open source software. Since BDB is a library, any application linking to it must be under an AGPL-compatible license. Many open source applications and all closed source applications would need to be relicensed to become AGPL-compatible, which was not acceptable to many developers and open source operating systems.
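As a rough illustration of the in-process API mentioned in the Architecture section, the following minimal C sketch stores and retrieves one key/value pair with the classic DB handle interface. The file name example.db, the key/value strings, and the abbreviated error handling are illustrative assumptions rather than a canonical example; such a program is typically compiled with the system's Berkeley DB headers and linked with -ldb.

```c
#include <stdio.h>
#include <string.h>
#include <db.h>

int main(void) {
    DB *dbp;
    DBT key, data;
    int ret;

    /* Create a database handle and open (or create) a B-tree database file. */
    if ((ret = db_create(&dbp, NULL, 0)) != 0) {
        fprintf(stderr, "db_create: %s\n", db_strerror(ret));
        return 1;
    }
    if ((ret = dbp->open(dbp, NULL, "example.db", NULL,
                         DB_BTREE, DB_CREATE, 0664)) != 0) {
        dbp->err(dbp, ret, "open");
        dbp->close(dbp, 0);
        return 1;
    }

    /* Keys and values are opaque byte arrays described by DBT structures. */
    memset(&key, 0, sizeof(key));
    memset(&data, 0, sizeof(data));
    key.data = "fruit";
    key.size = sizeof("fruit");
    data.data = "apple";
    data.size = sizeof("apple");
    if ((ret = dbp->put(dbp, NULL, &key, &data, 0)) != 0)
        dbp->err(dbp, ret, "put");

    /* Read the record back by key. */
    memset(&data, 0, sizeof(data));
    if ((ret = dbp->get(dbp, NULL, &key, &data, 0)) == 0)
        printf("%s -> %s\n", (char *)key.data, (char *)data.data);
    else
        dbp->err(dbp, ret, "get");

    dbp->close(dbp, 0);
    return 0;
}
```

The sketch uses only the plain "Data Store" feature set; the locking, transaction and replication features described earlier are enabled by opening the database inside a DB_ENV environment instead.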
By 2013 there were many alternatives to BDB, and Debian Linux was typical in its decision to completely phase out Berkeley DB, with a preference for the Lightning Memory-Mapped Database (LMDB).
References
External links
Oracle Berkeley DB
Oracle Berkeley DB Downloads
Oracle Berkeley DB Documentation
Oracle Berkeley DB Licensing Information
Licensing pitfalls for Oracle Technology Products (Oracle Licensing Knowledge Net)
The Berkeley DB Book by Himanshu Yadava
Database engines Database-related software for Linux Embedded databases Free database management systems Free software programmed in C Key-value databases NoSQL Oracle software Structured storage Software using the GNU AGPL license
https://en.wikipedia.org/wiki/Boolean%20satisfiability%20problem
Boolean satisfiability problem
In logic and computer science, the Boolean satisfiability problem (sometimes called propositional satisfiability problem and abbreviated SATISFIABILITY, SAT or B-SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable. SAT is the first problem that was proved to be NP-complete; see Cook–Levin theorem. This means that all problems in the complexity class NP, which includes a wide range of natural decision and optimization problems, are at most as difficult to solve as SAT. There is no known algorithm that efficiently solves each SAT problem, and it is generally believed that no such algorithm exists; yet this belief has not been proved mathematically, and resolving the question of whether SAT has a polynomial-time algorithm is equivalent to the P versus NP problem, which is a famous open problem in the theory of computing. Nevertheless, as of 2007, heuristic SAT-algorithms are able to solve problem instances involving tens of thousands of variables and formulas consisting of millions of symbols, which is sufficient for many practical SAT problems from, e.g., artificial intelligence, circuit design, and automatic theorem proving.
Definitions
A propositional logic formula, also called Boolean expression, is built from variables, operators AND (conjunction, also denoted by ∧), OR (disjunction, ∨), NOT (negation, ¬), and parentheses. A formula is said to be satisfiable if it can be made TRUE by assigning appropriate logical values (i.e. TRUE, FALSE) to its variables. The Boolean satisfiability problem (SAT) is, given a formula, to check whether it is satisfiable. This decision problem is of central importance in many areas of computer science, including theoretical computer science, complexity theory, algorithmics, cryptography and artificial intelligence.
Conjunctive normal form
A literal is either a variable (in which case it is called a positive literal) or the negation of a variable (called a negative literal). A clause is a disjunction of literals (or a single literal). A clause is called a Horn clause if it contains at most one positive literal. A formula is in conjunctive normal form (CNF) if it is a conjunction of clauses (or a single clause). For example, x1 is a positive literal, ¬x2 is a negative literal, and x1 ∨ ¬x2 is a clause. The formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is in conjunctive normal form; its first and third clauses are Horn clauses, but its second clause is not. The formula is satisfiable, by choosing x1 = FALSE, x2 = FALSE, and x3 arbitrarily, since (FALSE ∨ ¬FALSE) ∧ (¬FALSE ∨ FALSE ∨ x3) ∧ ¬FALSE evaluates to (FALSE ∨ TRUE) ∧ (TRUE ∨ FALSE ∨ x3) ∧ TRUE, and in turn to TRUE ∧ TRUE ∧ TRUE (i.e. to TRUE). In contrast, the CNF formula a ∧ ¬a, consisting of two clauses of one literal, is unsatisfiable, since for a=TRUE or a=FALSE it evaluates to TRUE ∧ ¬TRUE (i.e., FALSE) or FALSE ∧ ¬FALSE (i.e., again FALSE), respectively.
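Since a formula over n variables has only 2^n possible assignments, satisfiability can always be decided by exhaustive search, albeit in exponential time. The following C sketch checks the example CNF formula above in this way; the clause encoding (signed, 1-based variable numbers with 0 terminating each clause, in the style of the common DIMACS convention) is an assumption chosen for the sketch, not part of the problem definition.

```c
#include <stdio.h>

/* Exhaustive satisfiability check for a CNF formula over num_vars < 32
 * variables. Each clause is an array of signed 1-based literals terminated
 * by 0: +i stands for xi and -i for NOT xi. */
static int satisfiable(int num_vars, int num_clauses, const int clauses[][4])
{
    for (unsigned a = 0; a < 1u << num_vars; a++) {    /* try every assignment */
        int all_clauses_true = 1;
        for (int c = 0; all_clauses_true && c < num_clauses; c++) {
            int clause_true = 0;
            for (int j = 0; clauses[c][j] != 0; j++) {
                int lit = clauses[c][j];
                int var = lit > 0 ? lit : -lit;        /* 1-based variable index */
                int val = (a >> (var - 1)) & 1;        /* its truth value under a */
                if ((lit > 0) == (val == 1)) { clause_true = 1; break; }
            }
            all_clauses_true = clause_true;
        }
        if (all_clauses_true)
            return 1;                                  /* satisfying assignment found */
    }
    return 0;
}

int main(void)
{
    /* (x1 OR NOT x2) AND (NOT x1 OR x2 OR x3) AND (NOT x1), the example above. */
    const int clauses[][4] = { {1, -2, 0}, {-1, 2, 3, 0}, {-1, 0} };
    printf("%s\n", satisfiable(3, 3, clauses) ? "satisfiable" : "unsatisfiable");
    return 0;
}
```

The search visits up to 2^n assignments, which is exactly the exponential behavior that practical SAT solvers, and the P versus NP question, are concerned with.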
For some versions of the SAT problem, it is useful to define the notion of a generalized conjunctive normal form formula, viz. as a conjunction of arbitrarily many generalized clauses, the latter being of the form R(l1,...,ln) for some Boolean function R and (ordinary) literals l1,...,ln. Different sets of allowed Boolean functions lead to different problem versions. As an example, R(¬x,a,b) is a generalized clause, and R(¬x,a,b) ∧ R(b,y,c) ∧ R(c,d,¬z) is a generalized conjunctive normal form. This formula is used below, with R being the ternary operator that is TRUE just when exactly one of its arguments is.

Using the laws of Boolean algebra, every propositional logic formula can be transformed into an equivalent conjunctive normal form, which may, however, be exponentially longer. For example, transforming the formula (x1∧y1) ∨ (x2∧y2) ∨ ... ∨ (xn∧yn) into conjunctive normal form yields (x1 ∨ x2 ∨ ... ∨ xn) ∧ (y1 ∨ x2 ∨ ... ∨ xn) ∧ ... ∧ (y1 ∨ y2 ∨ ... ∨ yn), the conjunction of all clauses obtained by choosing either xi or yi from each conjunct; while the former is a disjunction of n conjunctions of 2 variables, the latter consists of 2^n clauses of n variables. However, with use of the Tseytin transformation, we may find an equisatisfiable conjunctive normal form formula with length linear in the size of the original propositional logic formula.

Complexity

SAT was the first known NP-complete problem, as proved by Stephen Cook at the University of Toronto in 1971 and independently by Leonid Levin at the Russian Academy of Sciences in 1973. Until that time, the concept of an NP-complete problem did not even exist. The proof shows how every decision problem in the complexity class NP can be reduced to the SAT problem for CNF formulas, sometimes called CNFSAT. A useful property of Cook's reduction is that it preserves the number of accepting answers. For example, deciding whether a given graph has a 3-coloring is another problem in NP; if a graph has 17 valid 3-colorings, the SAT formula produced by the Cook–Levin reduction will have 17 satisfying assignments.

NP-completeness only refers to the run-time of the worst case instances. Many of the instances that occur in practical applications can be solved much more quickly. See Algorithms for solving SAT below.

3-satisfiability

Like the satisfiability problem for arbitrary formulas, determining the satisfiability of a formula in conjunctive normal form where each clause is limited to at most three literals is also NP-complete; this problem is called 3-SAT, 3CNFSAT, or 3-satisfiability. To reduce the unrestricted SAT problem to 3-SAT, transform each clause l1 ∨ ... ∨ ln to a conjunction of n − 2 clauses (l1 ∨ l2 ∨ x2) ∧ (¬x2 ∨ l3 ∨ x3) ∧ ... ∧ (¬xn-2 ∨ ln-1 ∨ ln), where x2, ..., xn-2 are fresh variables not occurring elsewhere. Although the two formulas are not logically equivalent, they are equisatisfiable. The formula resulting from transforming all clauses is at most 3 times as long as its original, i.e. the length growth is polynomial.

3-SAT is one of Karp's 21 NP-complete problems, and it is used as a starting point for proving that other problems are also NP-hard. This is done by polynomial-time reduction from 3-SAT to the other problem. An example of a problem where this method has been used is the clique problem: given a CNF formula consisting of c clauses, the corresponding graph consists of a vertex for each literal, and an edge between each two non-contradicting literals from different clauses. The graph has a c-clique if and only if the formula is satisfiable.

There is a simple randomized algorithm due to Schöning (1999) that runs in time (4/3)^n, where n is the number of variables in the 3-SAT proposition, and succeeds with high probability in correctly deciding 3-SAT.
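The clause-splitting step of the reduction to 3-SAT described above can be sketched in the same Python conventions as the earlier checker; the function name and the numbering of the fresh variables are choices of this sketch.

def to_3sat(num_vars, clauses):
    """Split clauses longer than 3 literals into 3-literal clauses.

    A clause (l1 v ... v ln) with n > 3 becomes
    (l1 v l2 v x2) ^ (~x2 v l3 v x3) ^ ... ^ (~x_{n-2} v l_{n-1} v ln).
    The result is equisatisfiable with the input, not equivalent.
    """
    new_clauses = []
    next_var = num_vars  # fresh variables are numbered num_vars+1, ...
    for clause in clauses:
        if len(clause) <= 3:
            new_clauses.append(list(clause))
            continue
        next_var += 1
        new_clauses.append([clause[0], clause[1], next_var])
        for lit in clause[2:-2]:
            next_var += 1
            new_clauses.append([-(next_var - 1), lit, next_var])
        new_clauses.append([-next_var, clause[-2], clause[-1]])
    return next_var, new_clauses

# (a v b v c v d v e) becomes (a v b v x6) ^ (~x6 v c v x7) ^ (~x7 v d v e):
print(to_3sat(5, [[1, 2, 3, 4, 5]]))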
The exponential time hypothesis asserts that no algorithm can solve 3-SAT (or indeed k-SAT for any k ≥ 3) in time 2^o(n) (i.e., fundamentally faster than exponential in n).

Selman, Mitchell, and Levesque (1996) give empirical data on the difficulty of randomly generated 3-SAT formulas, depending on their size parameters. Difficulty is measured in the number of recursive calls made by a DPLL algorithm. They identified a phase transition region from almost certainly satisfiable to almost certainly unsatisfiable formulas at a clauses-to-variables ratio of about 4.26.

3-satisfiability can be generalized to k-satisfiability (k-SAT, also k-CNF-SAT), when formulas in CNF are considered with each clause containing up to k literals. However, for any k ≥ 3, this problem can be neither easier than 3-SAT nor harder than SAT; since the latter two are NP-complete, so is k-SAT. Some authors restrict k-SAT to CNF formulas with exactly k literals. This doesn't lead to a different complexity class either, as each clause l1 ∨ ... ∨ lj with j < k literals can be padded with fixed dummy variables to l1 ∨ ... ∨ lj ∨ d1 ∨ ... ∨ dk-j. After padding all clauses, 2k-1 extra clauses have to be appended to ensure that only the assignment making all dummy variables FALSE can lead to a satisfying assignment. Since k doesn't depend on the formula length, the extra clauses lead to a constant increase in length. For the same reason, it does not matter whether duplicate literals are allowed in clauses, as in (x ∨ y ∨ y).

Special cases of SAT

Conjunctive normal form

Conjunctive normal form (in particular with 3 literals per clause) is often considered the canonical representation for SAT formulas. As shown above, the general SAT problem reduces to 3-SAT, the problem of determining satisfiability for formulas in this form.

Disjunctive normal form

SAT is trivial if the formulas are restricted to those in disjunctive normal form, that is, they are a disjunction of conjunctions of literals. Such a formula is indeed satisfiable if and only if at least one of its conjunctions is satisfiable, and a conjunction is satisfiable if and only if it does not contain both x and NOT x for some variable x. This can be checked in linear time. Furthermore, if they are restricted to being in full disjunctive normal form, in which every variable appears exactly once in every conjunction, they can be checked in constant time (each conjunction represents one satisfying assignment). But it can take exponential time and space to convert a general SAT problem to disjunctive normal form; for an example, exchange "∧" and "∨" in the above exponential blow-up example for conjunctive normal forms.

Exactly-1 3-satisfiability

A variant of the 3-satisfiability problem is the one-in-three 3-SAT (also known variously as 1-in-3-SAT and exactly-1 3-SAT). Given a conjunctive normal form with three literals per clause, the problem is to determine whether there exists a truth assignment to the variables so that each clause has exactly one TRUE literal (and thus exactly two FALSE literals). In contrast, ordinary 3-SAT requires that every clause has at least one TRUE literal. Formally, a one-in-three 3-SAT problem is given as a generalized conjunctive normal form with all generalized clauses using a ternary operator R that is TRUE just if exactly one of its arguments is. When all literals of a one-in-three 3-SAT formula are positive, the satisfiability problem is called one-in-three positive 3-SAT.
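As a small illustration of the exactly-one constraint, the ternary operator R is easy to state in code, together with a check that a given assignment is a one-in-three model; the helper names below are invented for this sketch, and the clause encoding is the same as in the earlier examples.

def R(a, b, c):
    # TRUE just when exactly one of the three arguments is TRUE.
    return (a + b + c) == 1

def is_one_in_three_model(assignment, clauses):
    """Check that every 3-literal clause has exactly one TRUE literal.

    `assignment` maps variable number -> bool; clauses use the
    signed-integer literal encoding from the previous sketches.
    """
    def value(lit):
        v = assignment[abs(lit)]
        return v if lit > 0 else not v

    return all(R(*(value(lit) for lit in clause)) for clause in clauses)

# (x v y v z) with x=TRUE, y=z=FALSE has exactly one TRUE literal:
print(is_one_in_three_model({1: True, 2: False, 3: False}, [[1, 2, 3]]))  # True
# ...but x=y=TRUE violates the one-in-three constraint:
print(is_one_in_three_model({1: True, 2: True, 3: False}, [[1, 2, 3]]))   # False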
One-in-three 3-SAT, together with its positive case, is listed as NP-complete problem "LO4" in the standard reference, Computers and Intractability: A Guide to the Theory of NP-Completeness by Michael R. Garey and David S. Johnson. One-in-three 3-SAT was proved to be NP-complete by Thomas Jerome Schaefer as a special case of Schaefer's dichotomy theorem, which asserts that any problem generalizing Boolean satisfiability in a certain way is either in the class P or is NP-complete.

Schaefer gives a construction allowing an easy polynomial-time reduction from 3-SAT to one-in-three 3-SAT. Let "(x or y or z)" be a clause in a 3CNF formula. Add six fresh Boolean variables a, b, c, d, e, and f, to be used to simulate this clause and no other. Then the formula R(x,a,d) ∧ R(y,b,d) ∧ R(a,b,e) ∧ R(c,d,f) ∧ R(z,c,FALSE) is satisfiable by some setting of the fresh variables if and only if at least one of x, y, or z is TRUE. Thus any 3-SAT instance with m clauses and n variables may be converted into an equisatisfiable one-in-three 3-SAT instance with 5m clauses and n+6m variables. Another reduction involves only four fresh variables and three clauses: R(¬x,a,b) ∧ R(b,y,c) ∧ R(c,d,¬z).

Not-all-equal 3-satisfiability

Another variant is the not-all-equal 3-satisfiability problem (also called NAE3SAT). Given a conjunctive normal form with three literals per clause, the problem is to determine if an assignment to the variables exists such that in no clause all three literals have the same truth value. This problem is NP-complete, too, even if no negation symbols are admitted, by Schaefer's dichotomy theorem.

Linear SAT

A 3-SAT formula is Linear SAT (LSAT) if each clause (viewed as a set of literals) intersects at most one other clause, and, moreover, if two clauses intersect, then they have exactly one literal in common. An LSAT formula can be depicted as a set of disjoint semi-closed intervals on a line. Deciding whether an LSAT formula is satisfiable is NP-complete.

2-satisfiability

SAT is easier if the number of literals in a clause is limited to at most 2, in which case the problem is called 2-SAT. This problem can be solved in polynomial time, and in fact is complete for the complexity class NL. If additionally all OR operations in clauses are changed to XOR operations, the result is called exclusive-or 2-satisfiability, which is a problem complete for the complexity class SL = L.

Horn-satisfiability

The problem of deciding the satisfiability of a given conjunction of Horn clauses is called Horn-satisfiability, or HORN-SAT. It can be solved in polynomial time by a single step of the unit propagation algorithm, which produces the single minimal model of the set of Horn clauses (w.r.t. the set of literals assigned to TRUE); a sketch of this procedure appears in code below. Horn-satisfiability is P-complete. It can be seen as P's version of the Boolean satisfiability problem. Also, deciding the truth of quantified Horn formulas can be done in polynomial time. Horn clauses are of interest because they are able to express implication of one variable from a set of other variables. Indeed, one such clause ¬x1 ∨ ... ∨ ¬xn ∨ y can be rewritten as x1 ∧ ... ∧ xn → y; that is, if x1,...,xn are all TRUE, then y needs to be TRUE as well.

A generalization of the class of Horn formulae is that of renameable-Horn formulae, which is the set of formulae that can be placed in Horn form by replacing some variables with their respective negation.
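A sketch of the unit-propagation procedure for plain Horn formulas referred to above, in the same encoding as before; this naive fixed-point loop re-scans all clauses and is therefore quadratic, whereas a careful implementation with appropriate bookkeeping runs in linear time.

def horn_sat(clauses):
    """Decide satisfiability of a conjunction of Horn clauses.

    Clauses use the signed-integer encoding from the earlier sketches
    and must each contain at most one positive literal.  Returns the
    minimal model (the set of variables forced TRUE) or None if
    unsatisfiable.
    """
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            positives = [l for l in clause if l > 0]
            premises = [-l for l in clause if l < 0]
            # The rule "x1 ^ ... ^ xn -> y" fires once all premises hold.
            if all(v in true_vars for v in premises):
                if not positives:
                    return None  # an all-negative clause is violated
                if positives[0] not in true_vars:
                    true_vars.add(positives[0])
                    changed = True
    return true_vars

# (x1) ^ (x1 -> x2) ^ (~x1 v ~x2), encoded as [[1], [-1, 2], [-1, -2]]:
print(horn_sat([[1], [-1, 2], [-1, -2]]))  # None (unsatisfiable)
print(horn_sat([[1], [-1, 2]]))            # {1, 2}, the minimal model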
As an example of such a renaming, (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is not a Horn formula, but can be renamed to the Horn formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ ¬y3) ∧ ¬x1 by introducing y3 as the negation of x3. In contrast, no renaming of (x1 ∨ ¬x2 ∨ ¬x3) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 leads to a Horn formula. Checking the existence of such a replacement can be done in linear time; therefore, the satisfiability of such formulae is in P, as it can be solved by first performing this replacement and then checking the satisfiability of the resulting Horn formula.

XOR-satisfiability

Another special case is the class of problems where each clause contains XOR (i.e. exclusive or) rather than (plain) OR operators. This is in P, since an XOR-SAT formula can also be viewed as a system of linear equations mod 2, and can be solved in cubic time by Gaussian elimination. This recast is based on the kinship between Boolean algebras and Boolean rings, and the fact that arithmetic modulo two forms a finite field. Since a XOR b XOR c evaluates to TRUE if and only if exactly 1 or 3 members of {a,b,c} are TRUE, each solution of the 1-in-3-SAT problem for a given CNF formula is also a solution of the XOR-3-SAT problem, and in turn each solution of XOR-3-SAT is a solution of 3-SAT. As a consequence, for each CNF formula, it is possible to solve the XOR-3-SAT problem defined by the formula, and based on the result infer either that the 3-SAT problem is solvable or that the 1-in-3-SAT problem is unsolvable.

Provided that the complexity classes P and NP are not equal, neither 2-, nor Horn-, nor XOR-satisfiability is NP-complete, unlike SAT.

Schaefer's dichotomy theorem

The restrictions above (CNF, 2CNF, 3CNF, Horn, XOR-SAT) bound the considered formulae to be conjunctions of subformulae; each restriction states a specific form for all subformulae: for example, only binary clauses can be subformulae in 2CNF. Schaefer's dichotomy theorem states that, for any restriction to Boolean functions that can be used to form these subformulae, the corresponding satisfiability problem is in P or NP-complete. The memberships in P of 2CNF, Horn, and XOR satisfiability are special cases of this theorem.

Extensions of SAT

An extension that has gained significant popularity since 2003 is satisfiability modulo theories (SMT), which can enrich CNF formulas with linear constraints, arrays, all-different constraints, uninterpreted functions, etc. Such extensions typically remain NP-complete, but very efficient solvers are now available that can handle many such kinds of constraints.

The satisfiability problem becomes more difficult if both "for all" (∀) and "there exists" (∃) quantifiers are allowed to bind the Boolean variables. An example of such an expression would be ∀x ∀y ∃z ((x ∨ y ∨ z) ∧ (¬x ∨ ¬y ∨ ¬z)); it is valid, since for all values of x and y, an appropriate value of z can be found, viz. z=TRUE if both x and y are FALSE, and z=FALSE else. SAT itself (tacitly) uses only ∃ quantifiers. If only ∀ quantifiers are allowed instead, the so-called tautology problem is obtained, which is co-NP-complete. If both quantifiers are allowed, the problem is called the quantified Boolean formula problem (QBF), which can be shown to be PSPACE-complete. It is widely believed that PSPACE-complete problems are strictly harder than any problem in NP, although this has not yet been proved. Using highly parallel P systems, QBF-SAT problems can be solved in linear time.
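A compact way to see the recursive flavor of QBF is an evaluator that tries both truth values at each quantifier, using only polynomial space but exponential time. The sketch below assumes a closed formula (every variable is quantified) whose matrix is a CNF in the clause encoding of the earlier examples; the function name and quantifier encoding are invented for this illustration.

def eval_qbf(quantifiers, clauses, assignment=None):
    """Evaluate a closed quantified Boolean formula by recursion.

    `quantifiers` is a list such as [('A', 1), ('A', 2), ('E', 3)],
    read left to right ('A' = for all, 'E' = there exists);
    `clauses` is the CNF matrix in signed-integer encoding.
    """
    assignment = assignment or {}
    if not quantifiers:
        return all(any(assignment[abs(l)] == (l > 0) for l in c)
                   for c in clauses)
    (q, var), rest = quantifiers[0], quantifiers[1:]
    results = (eval_qbf(rest, clauses, {**assignment, var: val})
               for val in (False, True))
    return all(results) if q == 'A' else any(results)

# The example above: for all x, y there exists z such that
# (x v y v z) ^ (~x v ~y v ~z) holds.
print(eval_qbf([('A', 1), ('A', 2), ('E', 3)],
               [[1, 2, 3], [-1, -2, -3]]))  # True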
Ordinary SAT asks if there is at least one variable assignment that makes the formula true. A variety of variants deal with the number of such assignments:

MAJ-SAT asks if the majority of all assignments make the formula TRUE. It is known to be complete for PP, a probabilistic class.
#SAT, the problem of counting how many variable assignments satisfy a formula, is a counting problem, not a decision problem, and is #P-complete.
UNIQUE SAT is the problem of determining whether a formula has exactly one satisfying assignment. It is complete for US, the complexity class describing problems solvable by a non-deterministic polynomial time Turing machine that accepts when there is exactly one nondeterministic accepting path and rejects otherwise.
UNAMBIGUOUS-SAT is the name given to the satisfiability problem when the input is restricted to formulas having at most one satisfying assignment. The problem is also called USAT. A solving algorithm for UNAMBIGUOUS-SAT is allowed to exhibit any behavior, including endless looping, on a formula having several satisfying assignments. Although this problem seems easier, Valiant and Vazirani have shown that if there is a practical (i.e. randomized polynomial-time) algorithm to solve it, then all problems in NP can be solved just as easily.
MAX-SAT, the maximum satisfiability problem, is an FNP generalization of SAT. It asks for the maximum number of clauses which can be satisfied by any assignment. It has efficient approximation algorithms, but is NP-hard to solve exactly. Worse still, it is APX-complete, meaning there is no polynomial-time approximation scheme (PTAS) for this problem unless P=NP.
WMSAT is the problem of finding an assignment of minimum weight that satisfies a monotone Boolean formula (i.e. a formula without any negation). Weights of propositional variables are given in the input of the problem. The weight of an assignment is the sum of weights of true variables. That problem is NP-complete (see Th. 1 of ).

Other generalizations include satisfiability for first- and second-order logic, constraint satisfaction problems, and 0-1 integer programming.

Finding a satisfying assignment

While SAT is a decision problem, the search problem of finding a satisfying assignment reduces to SAT. That is, each algorithm which correctly answers whether an instance of SAT is solvable can be used to find a satisfying assignment. First, the question is asked on the given formula Φ. If the answer is "no", the formula is unsatisfiable. Otherwise, the question is asked on the partly instantiated formula Φ{x1=TRUE}, i.e. Φ with the first variable x1 replaced by TRUE, and simplified accordingly. If the answer is "yes", then x1=TRUE, otherwise x1=FALSE. Values of other variables can be found subsequently in the same way. In total, n+1 runs of the algorithm are required, where n is the number of distinct variables in Φ; a code sketch of this self-reduction follows below.

This property is used in several theorems in complexity theory:
NP ⊆ P/poly ⇒ PH = Σ2 (Karp–Lipton theorem)
NP ⊆ BPP ⇒ NP = RP
P = NP ⇒ FP = FNP

Algorithms for solving SAT

Since the SAT problem is NP-complete, only algorithms with exponential worst-case complexity are known for it. In spite of this, efficient and scalable algorithms for SAT were developed during the 2000s and have contributed to dramatic advances in our ability to automatically solve problem instances involving tens of thousands of variables and millions of constraints (i.e. clauses).
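The search-to-decision self-reduction described under "Finding a satisfying assignment" can be sketched directly. Here the brute-force is_satisfiable from the first example stands in for the decision oracle, but any correct decision procedure could be plugged in; as in the text, n+1 oracle calls suffice.

def find_assignment(num_vars, clauses, sat_decide):
    """Recover a satisfying assignment from a SAT decision oracle."""
    if not sat_decide(num_vars, clauses):
        return None  # unsatisfiable
    assignment = {}
    for var in range(1, num_vars + 1):
        # Tentatively fix `var` to TRUE: drop the clauses it satisfies
        # and delete the now-FALSE literal ~var from the rest.
        simplified = [[l for l in c if l != -var]
                      for c in clauses if var not in c]
        if sat_decide(num_vars, simplified):
            assignment[var] = True
            clauses = simplified
        else:
            # The TRUE branch failed, so FALSE must work; simplify dually.
            assignment[var] = False
            clauses = [[l for l in c if l != var]
                       for c in clauses if -var not in c]
    return assignment

# Using is_satisfiable from the first sketch as the oracle:
print(find_assignment(3, [[1, -2], [-1, 2, 3], [-1]], is_satisfiable))
# e.g. {1: False, 2: False, 3: True}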
Examples of such practical problems in electronic design automation (EDA) include formal equivalence checking, model checking, formal verification of pipelined microprocessors, automatic test pattern generation, routing of FPGAs, and planning and scheduling problems. A SAT-solving engine is also considered to be an essential component in the electronic design automation toolbox.

Major techniques used by modern SAT solvers include the Davis–Putnam–Logemann–Loveland algorithm (or DPLL), conflict-driven clause learning (CDCL), and stochastic local search algorithms such as WalkSAT. Almost all SAT solvers include time-outs, so they will terminate in reasonable time even if they cannot find a solution. Different SAT solvers will find different instances easy or hard; some excel at proving unsatisfiability, others at finding solutions. Recent attempts have been made to learn an instance's satisfiability using deep learning techniques. SAT solvers are developed and compared in SAT-solving contests. Modern SAT solvers are also having a significant impact on the fields of software verification, constraint solving in artificial intelligence, and operations research, among others.

See also
Unsatisfiable core
Satisfiability modulo theories
Counting SAT
Planar SAT
Karloff–Zwick algorithm
Circuit satisfiability

Notes

External links
SAT Game: try solving a Boolean satisfiability problem yourself
The international SAT competition website
International Conference on Theory and Applications of Satisfiability Testing
Journal on Satisfiability, Boolean Modeling and Computation
SAT Live, an aggregate website for research on the satisfiability problem
Yearly evaluation of MaxSAT solvers

References

Further reading (by date of publication)

This article includes material from a column in the ACM SIGDA e-newsletter by Prof. Karem Sakallah. Original text is available here.

Boolean algebra
Electronic design automation
Formal methods
Logic in computer science
NP-complete problems
Satisfiability problems
https://en.wikipedia.org/wiki/Baroque%20dance
Baroque dance
Baroque dance is dance of the Baroque era (roughly 1600–1750), closely linked with Baroque music, theatre, and opera.

English country dance

The majority of surviving choreographies from the period are English country dances, such as those in the many editions of Playford's The Dancing Master. Playford only gives the floor patterns of the dances, with no indication of the steps. However, other sources of the period, such as the writings of the French dancing-masters Feuillet and Lorin, indicate that steps more complicated than simple walking were used at least some of the time. English country dance survived well beyond the Baroque era and eventually spread in various forms across Europe and its colonies, and to all levels of society.

The French noble style

The great innovations in dance in the 17th century originated at the French court under Louis XIV, and it is here that we see the first clear stylistic ancestor of classical ballet. The same basic technique was used both at social events and as theatrical dance in court ballets and at public theaters. The style of dance is commonly known to modern scholars as the French noble style or belle danse (French, literally "beautiful dance"); however, it is often referred to casually as baroque dance in spite of the existence of other theatrical and social dance styles during the baroque era. Primary sources include more than three hundred choreographies in Beauchamp–Feuillet notation, as well as manuals by Raoul Auger Feuillet and Pierre Rameau in France, Kellom Tomlinson, P. Siris, and John Weaver in England, and Gottfried Taubert in Germany (i.e. Leipzig, Saxony). This wealth of evidence has allowed modern scholars and dancers to recreate the style, although areas of controversy still exist. The standard modern introduction is Hilton.

French dance types include:
Allemande
Bourrée
Canarie (canary)
Chaconne
(French) courante
Entrée grave
Forlane (forlana)
Gavotte
Gigue
Loure (slow gigue)
Menuet (minuet)
Musette
Passacaille (passacaglia)
Passepied
Polonaise
Rigaudon
Sarabande
Tambourin

The English, working in the French style, added their own hornpipe to this list. Many of these dance types are familiar from baroque music, perhaps most spectacularly in the stylized suites of J. S. Bach. Note, however, that the allemandes that occur in these suites do not correspond to a French dance from the same period.

Theatrical dance

The French noble style was danced both at social events and by professional dancers in theatrical productions such as opera-ballets and court entertainments. However, 18th-century theatrical dance had at least two other styles: comic or grotesque, and semi-serious.

Other social dance styles

Other dance styles, such as the Italian and Spanish dances of the period, are much less well studied than either English country dance or the French style. The general picture seems to be that during most of the 17th century, a style of late Renaissance dance was widespread, but as time progressed, French ballroom dances such as the minuet were widely adopted at fashionable courts. Beyond this, the evolution and cross-fertilisation of dance styles is an area of ongoing research.

Modern reconstructions

The revival of baroque music in the 1960s and '70s sparked renewed interest in 17th- and 18th-century dance styles. While some 300 of these dances had been preserved in Beauchamp–Feuillet notation, it wasn't until the mid-20th century that serious scholarship commenced in deciphering the notation and reconstructing the dances.
Perhaps best known among these pioneers was Britain's Melusine Wood, who published several books on historical dancing in the 1950s. Wood passed her research on to her student Belinda Quirey, and also to Pavlova Company ballerina and choreographer Mary Skeaping (1902–1984). The latter became well known for her reconstructions of baroque ballets for London's "Ballet for All" company in the 1960s.

The leading figures of the second generation of historical dance research include Shirley Wynne and her Baroque Dance Ensemble, which was founded at Ohio State University in the early 1970s, and Wendy Hilton (1931–2002), a student of Belinda Quirey who supplemented the work of Melusine Wood with her own research into original sources. A native of Britain, Hilton arrived in the U.S. in 1969, joining the faculty of the Juilliard School in 1972 and establishing her own baroque dance workshop at Stanford University in 1974, which endured for more than 25 years.

Catherine Turocy (b. circa 1950) began her studies in Baroque dance in 1971 as a student of dance historian Shirley Wynne. She founded The New York Baroque Dance Company in 1976 with Ann Jacoby, and the company has since toured internationally. In 1982/83, as part of the French national celebration of Jean-Philippe Rameau's 300th birthday, Turocy choreographed the first production of Rameau's Les Boréades, which had never been performed during the composer's lifetime. This French-supported production, conducted by John Eliot Gardiner with his orchestra, was directed by Jean-Louis Martinoty. Turocy has been decorated as Chevalier in the Ordre des Arts et des Lettres by the French government.

In 1973, French dance historian Francine Lancelot (1929–2003) began her formal studies in ethnomusicology, which later led her to research French traditional dance forms and eventually Renaissance and Baroque dances. In 1980, at the invitation of the French Minister of Culture, she founded the baroque dance company "Ris et Danceries". Her work in choreographing the landmark 1986 production of Lully's 1676 tragédie lyrique Atys was part of the national celebration of the 300th anniversary of Lully's death. This production propelled the career of William Christie and his ensemble Les Arts Florissants. Since the Ris et Danceries company was disbanded circa 1993, choreographers from the company have continued with their own work. Béatrice Massin with her "Compagnie Fêtes Galantes", along with Marie-Geneviève Massé and her company "L'Eventail", are among the most prominent. In 1995, Francine Lancelot's catalogue raisonné of baroque dance, entitled La Belle Dance, was published.

References

External links
BaroqueDance.info - background information, period dancing manuals, and a large collection of links
The Calendar of Early Dance - information about upcoming baroque events, choreographies and photo galleries

Dance
European court festivities
https://en.wikipedia.org/wiki/British%20Standards
British Standards
British Standards (BS) are the standards produced by the BSI Group, which is incorporated under a royal charter and which is formally designated as the national standards body (NSB) for the UK. The BSI Group produces British Standards under the authority of the charter, which lays down as one of the BSI's objectives to:

Formally, as stated in a 2002 memorandum of understanding between the BSI and the United Kingdom Government, British Standards are defined as:

Products and services which BSI certifies as having met the requirements of specific standards within designated schemes are awarded the Kitemark.

History

BSI Group began in 1901 as the Engineering Standards Committee, led by James Mansergh, to standardize the number and type of steel sections, in order to make British manufacturers more efficient and competitive. Over time the standards developed to cover many aspects of tangible engineering, and then engineering methodologies including quality systems, safety and security.

British Standards creation

The BSI Group as a whole does not produce British Standards, as standards work within the BSI is decentralized. The governing board of BSI establishes a Standards Board. The Standards Board does little apart from setting up sector boards (a sector in BSI parlance being a field of standardization such as ICT, quality, agriculture, manufacturing, or fire). Each sector board, in turn, constitutes several technical committees. It is the technical committees that, formally, approve a British Standard, which is then presented to the secretary of the supervisory sector board for endorsement of the fact that the technical committee has indeed completed a task for which it was constituted.

Standards

The standards produced are titled British Standard XXXX[-P]:YYYY, where XXXX is the number of the standard, P is the number of the part of the standard (where the standard is split into multiple parts) and YYYY is the year in which the standard came into effect. BSI Group currently has over 27,000 active standards. Products are commonly specified as meeting a particular British Standard, and in general, this can be done without any certification or independent testing. The standard simply provides a shorthand way of claiming that certain specifications are met, while encouraging manufacturers to adhere to a common method for such a specification. The Kitemark can be used to indicate certification by BSI, but only where a Kitemark scheme has been set up around a particular standard. It is mainly applicable to safety and quality management standards. There is a common misunderstanding that Kitemarks are necessary to prove compliance with any BS standard, but in general, it is neither desirable nor possible that every standard be 'policed' in this way.

Following the move on harmonization of the standard in Europe, some British Standards are gradually being superseded or replaced by the relevant European Standards (EN).

Status of standards

Standards are continuously reviewed and developed and are periodically allocated one or more of the following status keywords.
Confirmed - the standard has been reviewed and confirmed as being current.
Current - the document is the current, most recently published one available.
Draft for public comment/DPC - a national stage in the development of a standard, where wider consultation is sought within the UK.
Obsolescent - indicating by amendment that the standard is not recommended for use for new equipment, but needs to be retained to provide for the servicing of equipment that is expected to have a long working life, or due to legislative issues.
Partially replaced - the standard has been partially replaced by one or more other standards.
Proposed for confirmation - the standard is being reviewed and it has been proposed that it is confirmed as the current standard.
Proposed for obsolescence - the standard is being reviewed and it has been proposed that it is made obsolescent.
Proposed for withdrawal - the standard is being reviewed and it has been proposed that it is withdrawn.
Revised - the standard has been revised.
Superseded - the standard has been replaced by one or more other standards.
Under review - the standard is under review.
Withdrawn - the document is no longer current and has been withdrawn.
Work in hand - there is work being undertaken on the standard and there may be a related draft for public comment available.

Examples

BS 0 A standard for standards specifies development, structure and drafting of standards.
BS 1 Lists of rolled sections for structural purposes
BS 2 Specification and sections of tramway rails and fishplates
BS 3 Report on influence of gauge length and section of test bar on the percentage of elongation
BS 4 Specification for structural steel sections
BS 5 Report on locomotives for Indian railways
BS 7 Dimensions of copper conductors insulated annealled, for electric power and light
BS 9 Specifications for Bull Head railway rails
BS 11 Specifications and sections of Flat Bottom railway rails
BS 12 Specification for Portland Cement
BS 15 Specification for structural steel for bridges, etc., and general building construction
BS 16 Specification for telegraph material (insulators, pole fittings, et cetera)
BS 17 Interim report on electrical machinery
BS 22 Report on effect of temperature on insulating materials
BS 24 Specifications for material used in the construction of standards for railway rolling stock
BS 26 Second report on locomotives for Indian Railways (Superseding No 5)
BS 27 Report on standard systems of limit gauges for running fits
BS 28 Report on nuts, bolt heads and spanners
BS 31 Specification for steel conduits for electrical wiring
BS 32 Specification for steel bars for use in automatic machines
BS 33 Carbon filament electric lamps
BS 34 Tables of BS Whitworth, BS Fine and BS Pipe Threads
BS 35 Specification for Copper Alloy Bars for use in Automatic Machines
BS 36 Report on British Standards for Electrical Machinery
BS 37 Specification for Electricity Meters
BS 38 Report on British Standards Systems for Limit Gauges for Screw Threads
BS 42 Report on reciprocating steam engines for electrical purposes
BS 43 Specification for charcoal iron lip-welded boiler tubes
BS 45 Report on Dimensions for Sparking Plugs (for Internal Combustion Engines)
BS 47 Steel Fishplates for Bullhead and Flat Bottom Railway Rails, Specification and Sections of
BS 49 Specification for Ammetres and Voltmetres
BS 50 Third Report on Locomotives for Indian Railways (Superseding No. 5 and 26)
BS 53 Specification for Cold Drawn Weldless Steel Boiler Tubes for Locomotive Boilers
BS 54 Report on Screw Threads, Nuts and Bolt Heads for use in Automobile Construction
BS 56 Definitions of Yield Point and Elastic Limit
BS 57 Report on heads for Small Screws
BS 70 Report on Pneumatic Tyre Rims for automobiles, motorcycles and bicycles
BS 72 British Standardisation Rules for Electrical Machinery
BS 73 Specification for Two-Pin Wall Plugs and Sockets (Five-, Fifteen- and Thirty-Ampere)
BS 76 Report of and Specifications for Tar and Pitch for Road Purposes
BS 77 Specification. Voltages for a.c. transmission and distribution systems
BS 80 Magnetos for automobile purposes
BS 81 Specification for Instrument Transformers
BS 82 Specification for Starters for Electric Motors
BS 84 Report on Screw Threads (British Standard Fine), and their Tolerances (Superseding parts of Reports Nos. 20 and 33)
BS 86 Report on Dimensions of Magnetos for Aircraft Purposes
BS 153 Specification for Steel Girder Bridges
BS 308 a now deleted standard for engineering drawing conventions, having been absorbed into BS 8888.
BS 317 for Hand-Shield and Side Entry Pattern Three-Pin Wall Plugs and Sockets (Two Pin and Earth Type)
BS 336 for fire hose couplings and ancillary equipment
BS 372 for Side-entry wall plugs and sockets for domestic purposes (Part 1 superseded BS 73 and Part 2 superseded BS 317)
BS 381 for colours used in identification, coding and other special purposes
BS 476 for fire resistance of building materials/elements
BS 499 Welding terms and symbols.
BS 546 for Two-pole and earthing-pin plugs, socket-outlets and socket-outlet adaptors for AC (50–60 Hz) circuits up to 250V
BS 857 for safety glass for land transport
BS 970 Specification for wrought steels for mechanical and allied engineering purposes
BS 987C Camouflage Colours
BS 1011 Recommendation for welding of metallic materials
BS 1088 for marine plywood
BS 1192 for Construction Drawing Practice. Part 5 (BS1192-5:1998) concerns Guide for structuring and exchange of CAD data.
BS 1361 for cartridge fuses for a.c. circuits in domestic and similar premises
BS 1362 for cartridge fuses for BS 1363 power plugs
BS 1363 for mains power plugs and sockets
BS 1377 Methods of test for soils for civil engineering.
BS 1572 Colours for Flat Finishes for Wall Decoration
BS 1881 Testing Concrete
BS 1852 Specification for marking codes for resistors and capacitors
BS 2979 Transliteration of Cyrillic and Greek characters
BS 3621 Thief resistant lock assembly. Key egress.
BS 3943 Specification for plastics waste traps
BS 4142 Methods for rating and assessing industrial and commercial sound
BS 4293 for residual current-operated circuit-breakers
BS 4343 for industrial electrical power connectors
BS 4573 Specification for 2-pin reversible plugs and shaver socket-outlets
BS 4960 for weighing instruments for domestic cookery
BS 5252 for colour-coordination in building construction
BS 5400 for steel, concrete and composite bridges.
BS 5499 for graphical symbols and signs in building construction; including shape, colour and layout
BS 5544 for anti-bandit glazing (glazing resistant to manual attack)
BS 5750 for quality management, the ancestor of ISO 9000
BS 5837 for protection of trees during construction work
BS 5839 for fire detection and alarm systems for buildings
BS 5930 for site investigations
BS 5950 for structural steel
BS 5993 for Cricket balls
BS 6008 for preparation of a liquor of tea for use in sensory tests
BS 6312 for telephone plugs and sockets
BS 6651 code of practice for protection of structures against lightning; replaced by BS EN 62305 (IEC 62305) series.
BS 6879 for British geocodes, a superset of ISO 3166-2:GB
BS 7430 code of practice for earthing
BS 7671 Requirements for Electrical Installations, The IEE Wiring Regulations, produced by the IET.
BS 7799 for information security, the ancestor of the ISO/IEC 27000 family of standards, including 27002 (formerly 17799)
BS 7901 for recovery vehicles and vehicle recovery equipment
BS 7909 Code of practice for temporary electrical systems for entertainment and related purposes
BS 7919 Electric cables. Flexible cables rated up to 450/750 V, for use with appliances and equipment intended for industrial and similar environments
BS 7910 guide to methods for assessing the acceptability of flaws in metallic structures
BS 7925 Software testing
BS 7971 Protective clothing and equipment for use in violent situations and in training
BS 8110 for structural concrete
BS 8233 Guidance on sound insulation and noise reduction in buildings
BS 8484 for the provision of lone worker device services
BS 8485 for the characterization and remediation from ground gas in affected developments
BS 8494 for detecting and measuring carbon dioxide in ambient air or extraction systems
BS 8546 Travel adaptors compatible with UK plug and socket system.
BS 8888 for engineering drawing and technical product specification
BS 15000 for IT Service Management, (ITIL), now ISO/IEC 20000
BS 3G 101 for general requirements for mechanical and electromechanical aircraft indicators
BS EN 12195 Load restraining on road vehicles.
BS EN 60204 Safety of machinery
BS EN ISO 4210 - Cycles. Safety Requirements for Bicycles

PAS documents

BSI also publishes a series of Publicly Available Specification (PAS) documents. PAS documents are a flexible and rapid standards development model open to all organizations. A PAS is a sponsored piece of work allowing organizations flexibility in the rapid creation of a standard while also allowing for a greater degree of control over the document's development. A typical development time frame for a PAS is around six to nine months. Once published by BSI, a PAS has all the functionality of a British Standard for the purposes of creating schemes such as management systems and product benchmarks as well as codes of practice. A PAS is a living document and after two years the document will be reviewed and a decision made with the client as to whether or not this should be taken forward to become a formal standard.

The term PAS was originally an abbreviation for "product approval specification", a name which was subsequently changed to "publicly available specification". However, according to BSI, not all PAS documents are structured as specifications and the term is now sufficiently well established not to require any further amplification.
Examples

PAS 78: Guide to good practice in commissioning accessible websites
PAS 440: Responsible Innovation – Guide
PAS 9017: Plastics – Biodegradation of polyolefins in an open-air terrestrial environment – Specification
PAS 1881: Assuring safety for automated vehicle trials and testing – Specification
PAS 1201: Guide for describing graphene material
PAS 4444: Hydrogen fired gas appliances – Guide

Availability

Copies of British Standards are sold at the BSI Online Shop or can be accessed via subscription to British Standards Online (BSOL). They can also be ordered via the publishing units of many other national standards bodies (ANSI, DIN, etc.) and from several specialized suppliers of technical specifications. British Standards, including European and international adoptions, are available in many university and public libraries that subscribe to the BSOL platform. Librarians and lecturers at UK-based subscribing universities have full access rights to the collection, while students can copy/paste and print but not download a standard. Up to 10% of the content of a standard can be copy/pasted for personal or internal use and up to 5% of the collection made available as a paper or electronic reference collection at the subscribing university. Because of their reference-material status, standards are not available for interlibrary loan. Public library users in the UK may have access to BSOL on a view-only basis if their library service subscribes to the BSOL platform. Users may also be able to access the collection remotely if they have a valid library card and the library offers secure access to its resources. The BSI Knowledge Centre in Chiswick, London can be contacted directly about viewing standards in their Members' Reading Room.

See also
Institute for Reference Materials and Measurements (EU)

References

External links
Official website

1901 establishments in the United Kingdom
International Electrotechnical Commission
Certification marks
Organizations established in 1901
https://en.wikipedia.org/wiki/Bob%20Young%20%28businessman%29
Bob Young (businessman)
Robert Young is a serial entrepreneur, best known for founding Red Hat Inc., the open source software company. He owns the franchises for Forge FC of the Canadian Premier League as well as the Hamilton Tiger-Cats of the Canadian Football League, for which he serves as the self-styled "Caretaker" of the team.

Early life

He was born in Hamilton, Ontario, Canada. He attended Trinity College School in Port Hope, Ontario. He received a Bachelor of Arts from Victoria College at the University of Toronto.

Career

Prior to Red Hat, Young built a couple of computer rental and leasing businesses, including founding Vernon Computer Rentals in 1984. Descendants of Vernon are still operating under that name. After leaving Vernon, Young founded ACC Corp Inc. in 1993.

Marc Ewing and Young co-founded the open-source software company Red Hat, which was a member of the S&P 500 Index before being purchased by IBM on July 9, 2019. Their partnership started in 1994, when ACC acquired the Red Hat trademarks from Ewing. In early 1995, ACC changed its name to Red Hat Software, which has subsequently been shortened to simply Red Hat, Inc. Young served as Red Hat's CEO until 1999.

In 2002, Young founded Lulu.com, a print-on-demand, self-publishing company, and served as CEO. In 2006, Young established the Lulu Blooker Prize, a book prize for books that began as blogs. He launched the prize partly as a means to promote Lulu.

Young served as CEO of PrecisionHawk, a commercial drone technology company, from 2015 to 2017. Prior to being named PrecisionHawk's CEO in 2015, he was an early investor in the company. He continues to serve on its board as Chairman.

Young also co-founded Linux Journal in 1994, and in 2003, he purchased the Hamilton Tiger-Cats of the Canadian Football League. In 2022, he sold minority stakes in the Tiger-Cats to Jim Lawson, team President Scott Mitchell, and American steel manufacturer Stelco.

Young focuses his philanthropic efforts on access to information and advancement of knowledge. In 1999, he co-founded The Center for the Public Domain. Young has supported the Creative Commons, Public Knowledge.org, the Dictionary of Old English, Loran Scholarship Foundation, ibiblio.org, and the NCSU eGames, among others.

References

Year of birth missing (living people)
Living people
Businesspeople from Ontario
Open source people
Red Hat people
University of Toronto alumni
People from Hamilton, Ontario
Hamilton Tiger-Cats owners
Forge FC non-playing staff
https://en.wikipedia.org/wiki/BeOS
BeOS
BeOS is an operating system for personal computers first developed by Be Inc. in 1990. It was first written to run on BeBox hardware. BeOS was positioned as a multimedia platform that could be used by a substantial population of desktop users and a competitor to Classic Mac OS and Microsoft Windows. It was ultimately unable to achieve a significant market share, and did not prove commercially viable for Be Inc. The company was acquired by Palm Inc. Today BeOS is mainly used, and derivatives developed, by a small population of enthusiasts. The open-source operating system Haiku is a continuation of BeOS concepts and most of the application-level compatibility; the latest version, Beta 4, released December 2022, still retains BeOS 5 compatibility in its x86 32-bit images.

History

Initially designed to run on AT&T Hobbit-based hardware, BeOS was later modified to run on PowerPC-based processors: first Be's own systems, later Apple Computer's PowerPC Reference Platform and Common Hardware Reference Platform, with the hope that Apple would purchase or license BeOS as a replacement for its aging Classic Mac OS.

Toward the end of 1996, Apple was still looking for a replacement to Copland in their operating system strategy. Amidst rumours of Apple's interest in purchasing BeOS, Be wanted to increase their user base, to try to convince software developers to write software for the operating system. Be courted Macintosh clone vendors to ship BeOS with their hardware. Apple CEO Gil Amelio started negotiations to buy Be Inc., but negotiations stalled when Be CEO Jean-Louis Gassée wanted $300 million; Apple was unwilling to offer any more than $125 million. Apple's board of directors decided NeXTSTEP was a better choice and purchased NeXT in 1996 for $429 million, bringing back Apple co-founder Steve Jobs.

In 1997, Power Computing began bundling BeOS (on a CD for optional installation) with its line of PowerPC-based Macintosh clones. These systems could dual boot either the Classic Mac OS or BeOS, with a start-up screen offering the choice. Motorola also announced in February 1997 that it would bundle BeOS with their Macintosh clones, the Motorola StarMax, along with the Mac OS.

Due to Apple's moves and the mounting debt of Be Inc., BeOS was soon ported to the Intel x86 platform with its R3 release in March 1998. Through the late 1990s, BeOS managed to create a niche of followers, but the company failed to remain viable. Be Inc. also released a stripped-down, but free, copy of BeOS R5 known as BeOS Personal Edition (BeOS PE). BeOS PE could be started from within Microsoft Windows or Linux, and was intended to nurture consumer interest in its product and give developers something to tinker with. Be Inc. also released a stripped-down version of BeOS for Internet appliances (BeIA), which soon became the company's business focus in place of BeOS.

In 2001, Be's copyrights were sold to Palm, Inc. for some $11 million. BeOS R5 is considered the last official version, but BeOS R5.1 "Dano", which was under development before Be's sale to Palm and included the BeOS Networking Environment (BONE) networking stack, was leaked to the public shortly after the company's demise.

In 2002, Be Inc. sued Microsoft claiming that Hitachi had been dissuaded from selling PCs loaded with BeOS, and that Compaq had been pressured not to market an Internet appliance in partnership with Be. Be also claimed that Microsoft acted to artificially depress Be Inc.'s initial public offering (IPO).
The case was eventually settled out of court for $23.25 million with no admission of liability on Microsoft's part. After the split from Palm, PalmSource used parts of BeOS's multimedia framework for its failed Palm OS Cobalt product. With the takeover of PalmSource, the BeOS rights now belong to Access Co.

Continuation and clones

In the years that followed the demise of Be Inc., a handful of projects formed to recreate BeOS or key elements of the OS, with the eventual goal of continuing where Be Inc. left off. This was facilitated by Be Inc. having released some components of BeOS under a free licence. Such projects include:

BlueEyedOS: It uses a modified version of the Linux kernel and reimplements the BeOS API over it (BeOS applications need to be recompiled). It is freely downloadable, but sources were never published. There have been no releases since 2003.
Cosmoe: A port of the Haiku userland over a Linux kernel. BeOS applications need to be recompiled. It is free and open source software. The last release was in 2004 and its website is no longer online.
E/OS: short for Emulator Operating System. A Linux and FreeBSD-based operating system that aimed to run Windows, DOS, AmigaOS and BeOS applications. It is free and open source software. Active development ended in July 2008.
Haiku: A complete reimplementation of BeOS not based on Linux. Unlike Cosmoe and BlueEyedOS, it is directly compatible with BeOS applications. It is open source software. As of 2022, it was the only BeOS clone still under development, with the fourth beta (December 2022) still keeping BeOS 5 compatibility in its x86 32-bit images, with an increased number of modern drivers and GTK apps ported.

Zeta was a commercially available operating system based on the BeOS R5.1 codebase. Originally developed by yellowTAB, the operating system was then distributed by magnussoft. During development by yellowTAB, the company received criticism from the BeOS community for refusing to discuss its legal position with regard to the BeOS codebase (perhaps for contractual reasons). Access Co. (which bought PalmSource, until then the holder of the intellectual property associated with BeOS) has since declared that yellowTAB had no right to distribute a modified version of BeOS, and magnussoft has ceased distribution of the operating system.

Version history

Features

BeOS was built for digital media work and was written to take advantage of modern hardware facilities such as symmetric multiprocessing by utilizing modular I/O bandwidth, pervasive multithreading, preemptive multitasking and a 64-bit journaling file system known as BFS. The BeOS GUI was developed on the principles of clarity and a clean, uncluttered design. The API was written in C++ for ease of programming. The GUI was largely multithreaded: each window ran in its own thread, relying heavily on sending messages to communicate between threads; and these concepts are reflected in the API. It has partial POSIX compatibility and access to a command-line interface through Bash, although internally it is not a Unix-derived operating system. Many Unix applications were ported to the BeOS command-line interface. BeOS used Unicode as the default encoding in the GUI, though support for input methods such as bidirectional text input was never realized.
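The per-window threading and message passing just described can be caricatured in a few lines. This is a toy model in Python for illustration only; the real BeOS API is C++ (classes such as BWindow and BMessage), and nothing here reproduces it.

import queue
import threading

class ToyWindow:
    """Toy model of a BeOS-style window: it owns a thread and a message
    port, and all communication with it happens by posting messages."""

    def __init__(self, title):
        self.title = title
        self.port = queue.Queue()
        self.thread = threading.Thread(target=self._message_loop)
        self.thread.start()

    def post_message(self, what, **fields):
        # Other threads never touch the window directly; they post.
        self.port.put((what, fields))

    def _message_loop(self):
        while True:
            what, fields = self.port.get()
            if what == 'QUIT':
                break
            print(f"[{self.title}] received {what}: {fields}")

w = ToyWindow("Demo")
w.post_message('MOUSE_DOWN', x=10, y=20)
w.post_message('QUIT')
w.thread.join()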
Products using BeOS

BeOS (and now Zeta) continue to be used in media appliances, such as the Edirol DV-7 video editors from Roland Corporation, which run on top of a modified BeOS, and the Tunetracker Radio Automation software that used to run on BeOS and Zeta and was also sold as a "Station-in-a-Box" with the Zeta operating system included. In 2015, Tunetracker released a Haiku distribution bundled with its broadcasting software.

The Tascam SX-1 digital audio recorder runs a heavily modified version of BeOS that will only launch the recording interface software. The RADAR 24, RADAR V and RADAR 6, hard disk-based, 24-track professional audio recorders from iZ Technology Corporation, were based on BeOS 5. Magicbox, a manufacturer of signage and broadcast display machines, uses BeOS to power their Aavelin product line. Final Scratch, a 12-inch vinyl timecode record-driven DJ software/hardware system, was first developed on BeOS. The "ProFS" version was sold to a few dozen DJs prior to the 1.0 release, which ran on a Linux virtual partition.

See also
Haiku (operating system)
Access Co.
BeIA
bootman
Comparison of operating systems
Gobe Productive
Hitachi Flora Prius
KDL
NetPositive
OpenTracker
Pe

References

Further reading

External links
The Dawn of Haiku, by Ryan Leavengood, IEEE Spectrum, May 2012, pp. 40–43, 51–54.
Mirror of the old www.be.com site
Other mirror of the old www.be.com site
BeOS Celebrating Ten Years
BeGroovy, a blog dedicated to all things BeOS
BeOS: The Mac OS X might-have-been, reghardware.co.uk
Programming the Be Operating System: An O'Reilly Open Book (out of print, but can be downloaded)

Discontinued operating systems
Object-oriented operating systems
PowerPC operating systems
X86 operating systems
https://en.wikipedia.org/wiki/Balance%20of%20trade
Balance of trade
The balance of trade, commercial balance, or net exports (sometimes symbolized as NX), is the difference between the monetary value of a nation's exports and imports over a certain time period. Sometimes a distinction is made between a balance of trade for goods versus one for services. The balance of trade measures a flow of exports and imports over a given period of time. The notion of the balance of trade does not mean that exports and imports are "in balance" with each other. If a country exports a greater value than it imports, it has a trade surplus or positive trade balance, and conversely, if a country imports a greater value than it exports, it has a trade deficit or negative trade balance. As of 2016, about 60 out of 200 countries have a trade surplus. The notion that bilateral trade deficits are bad in and of themselves is overwhelmingly rejected by trade experts and economists.

Explanation

The balance of trade forms part of the current account, which includes other transactions such as income from the net international investment position as well as international aid. If the current account is in surplus, the country's net international asset position increases correspondingly. Equally, a deficit decreases the net international asset position. The trade balance is identical to the difference between a country's output and its domestic demand (the difference between what goods a country produces and how many goods it buys from abroad; this does not include money re-spent on foreign stock, nor does it factor in the concept of importing goods to produce for the domestic market).

Measuring the balance of trade can be problematic because of problems with recording and collecting data. As an illustration of this problem, when official data for all the world's countries are added up, exports exceed imports by almost 1%; it appears the world is running a positive balance of trade with itself. This cannot be true, because all transactions involve an equal credit or debit in the account of each nation. The discrepancy is widely believed to be explained by transactions intended to launder money or evade taxes, smuggling, and other visibility problems. While the accuracy of developing countries' statistics might be suspect, most of the discrepancy actually occurs between developed countries with trusted statistics.

Factors that can affect the balance of trade include:
The cost of production (land, labor, capital, taxes, incentives, etc.) in the exporting economy vis-à-vis those in the importing economy;
The cost and availability of raw materials, intermediate goods and other inputs;
Currency exchange rate movements;
Multilateral, bilateral and unilateral taxes or restrictions on trade;
Non-tariff barriers such as environmental, health or safety standards;
The availability of adequate foreign exchange with which to pay for imports; and
Prices of goods manufactured at home (influenced by the responsiveness of supply)

In addition, the trade balance is likely to differ across the business cycle. In export-led growth (such as oil and early industrial goods), the balance of trade will shift towards exports during an economic expansion. However, with domestic demand-led growth (as in the United States and Australia) the trade balance will shift towards imports at the same stage in the business cycle.

The monetary balance of trade is different from the physical balance of trade (which is expressed in amount of raw materials, known also as Total Material Consumption).
Developed countries usually import a substantial amount of raw materials from developing countries. Typically, these imported materials are transformed into finished products and might be exported after adding value. Financial trade balance statistics conceal material flow. Most developed countries have a large physical trade deficit because they consume more raw materials than they produce.

Examples

Historical example

Many countries in early modern Europe adopted a policy of mercantilism, which theorized that a trade surplus was beneficial to a country. Mercantilist ideas also influenced how European nations regulated trade policies with their colonies, promoting the idea that natural resources and cash crops should be exported to Europe, with processed goods being exported back to the colonies in return. Ideas such as bullionism spurred the popularity of mercantilism in European governments.

An early statement concerning the balance of trade appeared in Discourse of the Common Wealth of this Realm of England, 1549: "We must always take heed that we buy no more from strangers than we sell them, for so should we impoverish ourselves and enrich them." Similarly, a systematic and coherent explanation of balance of trade was made public through Thomas Mun's 1630 "England's treasure by foreign trade, or, The balance of our foreign trade is the rule of our treasure".

Since the mid-1980s, the United States has had a growing deficit in tradeable goods, especially with Asian nations (China and Japan), which now hold large sums of U.S. debt that has in part funded the consumption. The U.S. has a trade surplus with nations such as Australia. The issue of trade deficits can be complex. Trade deficits generated in tradeable goods such as manufactured goods or software may impact domestic employment to different degrees than do trade deficits in raw materials.

Economies that have savings surpluses, such as Japan and Germany, typically run trade surpluses. China, a high-growth economy, has tended to run trade surpluses. A higher savings rate generally corresponds to a trade surplus. Correspondingly, the U.S. with its lower savings rate has tended to run high trade deficits, especially with Asian nations. Some have said that China pursues a mercantilist economic policy. Russia pursues a policy based on protectionism, according to which international trade is not a "win-win" game but a zero-sum game: surplus countries get richer at the expense of deficit countries.

Country example: Armenia

For the last two decades, the Armenian trade balance has been negative, reaching 203.9 million USD in March 2019, which was then considered the highest; however, the most recent value of the same indicator is -273.5 million USD in October 2021, which is evidently one of the consequences of the six-week war between Armenia and Azerbaijan in the autumn of 2020. The reason for the trade deficit is that Armenia's foreign trade is limited by its landlocked location and border disputes with Turkey and Azerbaijan, to the west and east respectively. The situation results in the country's typically reporting large trade deficits.

Views on economic impact

The notion that bilateral trade deficits are bad in and of themselves is overwhelmingly rejected by trade experts and economists. According to the IMF, trade deficits can cause a balance of payments problem, which can affect foreign exchange shortages and hurt countries.
On the other hand, Joseph Stiglitz points out that countries running surpluses exert a "negative externality" on trading partners, and pose a threat to global prosperity, far more than those in deficit. Ben Bernanke argues that "persistent imbalances within the euro zone are... unhealthy, as they lead to financial imbalances as well as to unbalanced growth. The fact that Germany is selling so much more than it is buying redirects demand from its neighbors (as well as from other countries around the world), reducing output and employment outside Germany."

According to Carla Norrlöf, there are three main benefits to trade deficits for the United States:
Greater consumption than production: the US enjoys the better side of the bargain by being able to consume more than it produces;
Usage of efficiently produced foreign-made intermediate goods, which is productivity-enhancing for US firms: the US makes the most effective use of the global division of labor;
A large market that other countries are reliant on for exports, which enhances American bargaining power in trade negotiations.

A 2018 National Bureau of Economic Research paper by economists at the International Monetary Fund and the University of California, Berkeley, found in a study of 151 countries over 1963–2014 that the imposition of tariffs had little effect on the trade balance.

Classical theory

Adam Smith on the balance of trade

Keynesian theory

In the last few years of his life, John Maynard Keynes was much preoccupied with the question of balance in international trade. He was the leader of the British delegation to the United Nations Monetary and Financial Conference in 1944 that established the Bretton Woods system of international currency management. He was the principal author of a proposal – the so-called Keynes Plan – for an International Clearing Union. The two governing principles of the plan were that the problem of settling outstanding balances should be solved by "creating" additional "international money", and that debtor and creditor should be treated almost alike as disturbers of equilibrium. In the event, though, the plans were rejected, in part because "American opinion was naturally reluctant to accept the principle of equality of treatment so novel in debtor-creditor relationships".

The new system would have been founded not on free trade (liberalisation of foreign trade) but rather on the regulation of international trade, in order to eliminate trade imbalances: the nations with a surplus would have a powerful incentive to get rid of it, and in doing so they would automatically clear other nations' deficits. Keynes proposed a global bank that would issue its own currency – the bancor – which was exchangeable with national currencies at fixed rates of exchange and would become the unit of account between nations, which means it would be used to measure a country's trade deficit or trade surplus. Every country would have an overdraft facility in its bancor account at the International Clearing Union (a toy sketch of this clearing mechanism follows below). He pointed out that surpluses lead to weak global aggregate demand – countries running surpluses exert a "negative externality" on trading partners, and pose, far more than those in deficit, a threat to global prosperity. In "National Self-Sufficiency" (The Yale Review, Vol. 22, no. 4, June 1933), he had already highlighted the problems created by free trade.
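To make the mechanism concrete, here is a minimal Python sketch of such a clearing ledger. It is an illustration only: the member names and the flat charge rate are invented, and Keynes's actual plan had a more elaborate schedule of charges and quotas.

class ClearingUnion:
    """Toy ledger: members settle trade in bancor; both surplus and
    deficit positions pay a charge, the 'equal treatment' of creditors
    and debtors that the plan proposed."""

    def __init__(self, charge_rate=0.01):   # hypothetical flat rate
        self.charge_rate = charge_rate
        self.balances = {}                  # bancor balance per member

    def settle_trade(self, exporter, importer, value):
        """Credit the exporter and debit the importer in bancor."""
        self.balances[exporter] = self.balances.get(exporter, 0.0) + value
        self.balances[importer] = self.balances.get(importer, 0.0) - value

    def apply_charges(self):
        """Charge every member on the absolute size of its position."""
        for member, bal in self.balances.items():
            self.balances[member] = bal - self.charge_rate * abs(bal)

icu = ClearingUnion()
icu.settle_trade(exporter="Surplusia", importer="Deficitia", value=100.0)
icu.apply_charges()
print(icu.balances)   # {'Surplusia': 99.0, 'Deficitia': -101.0}

Because holding either a large credit or a large debit is costly, both parties have an incentive to clear their positions – the surplus country by spending, the deficit country by exporting – which is precisely the symmetry that American opinion rejected.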
His view, supported by many economists and commentators at the time, was that creditor nations may be just as responsible as debtor nations for disequilibrium in exchanges and that both should be under an obligation to bring trade back into a state of balance. Failure to do so could have serious consequences. In the words of Geoffrey Crowther, then editor of The Economist, "If the economic relationships between nations are not, by one means or another, brought fairly close to balance, then there is no set of financial arrangements that can rescue the world from the impoverishing results of chaos."

These ideas were informed by events prior to the Great Depression when – in the opinion of Keynes and others – international lending, primarily by the U.S., exceeded the capacity of sound investment and so got diverted into non-productive and speculative uses, which in turn invited default and a sudden stop to the process of lending.

Influenced by Keynes, economics texts in the immediate post-war period put a significant emphasis on balance in trade. For example, the second edition of the popular introductory textbook An Outline of Money devoted the last three of its ten chapters to questions of foreign exchange management and in particular the "problem of balance". However, in more recent years, since the end of the Bretton Woods system in 1971, with the increasing influence of monetarist schools of thought in the 1980s, and particularly in the face of large sustained trade imbalances, these concerns – and particularly concerns about the destabilising effects of large trade surpluses – have largely disappeared from mainstream economics discourse, and Keynes' insights have slipped from view. They are receiving some attention again in the wake of the financial crisis of 2007–08.

Monetarist theory

Prior to 20th-century monetarist theory, the 19th-century economist and philosopher Frédéric Bastiat expressed the idea that trade deficits actually were a manifestation of profit, rather than a loss. He proposed as an example to suppose that he, a Frenchman, exported French wine and imported British coal, turning a profit. He supposed he was in France and sent a cask of wine which was worth 50 francs to England. The customhouse would record an export of 50 francs. If, in England, the wine sold for 70 francs (or the pound equivalent), which he then used to buy coal, which he imported into France, and was found to be worth 90 francs in France, he would have made a profit of 40 francs. But the customhouse would say that the value of imports exceeded that of exports and record a trade deficit in the ledger of France (the arithmetic is worked through below). By reductio ad absurdum, Bastiat argued that the national trade deficit was an indicator of a successful economy, rather than a failing one. Bastiat predicted that a successful, growing economy would result in greater trade deficits, and an unsuccessful, shrinking economy would result in lower trade deficits. This was later, in the 20th century, echoed by economist Milton Friedman.

In the 1980s, Friedman, a Nobel Memorial Prize-winning economist and a proponent of monetarism, contended that some of the concerns about trade deficits are unfair criticisms in an attempt to push macroeconomic policies favorable to exporting industries. Friedman argued that trade deficits are not necessarily important, as high exports raise the value of the currency, reducing said exports, and vice versa for imports, thus naturally removing trade deficits not due to investment.
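Bastiat's point can be checked in a few lines of Python; the figures are the ones quoted in his example above (in francs).

# Bastiat's wine-for-coal example, using the figures from the passage above.
wine_declared_on_export = 50    # recorded by French customs leaving France
wine_sale_price_abroad = 70     # realised in England
coal_declared_on_import = 90    # value of the coal arriving back in France

merchant_profit = coal_declared_on_import - wine_declared_on_export
customs_balance = wine_declared_on_export - coal_declared_on_import

print(f"Merchant's gain: {merchant_profit} francs")   # 40
print(f"Ledger balance: {customs_balance} francs")    # -40, a 'deficit'
# The same transaction is simultaneously a 40-franc private gain and a
# 40-franc recorded trade deficit: the deficit measures nothing harmful.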
Since 1971, when the Nixon administration decided to abolish fixed exchange rates, America's accumulated current account trade deficits have totaled $7.75 trillion as of 2010. This deficit exists because it is matched by investment coming into the United States – purely by the definition of the balance of payments, any current account deficit that exists is matched by an inflow of foreign investment.

In the late 1970s and early 1980s, the U.S. had experienced high inflation, and Friedman's policy positions tended to defend the stronger dollar at that time. He stated his belief that these trade deficits were not necessarily harmful to the economy at the time, since the currency comes back to the country (country A sells to country B, country B sells to country C, who buys from country A, but the trade deficit only includes A and B). However, it may return in one form or another, including the possible tradeoff of foreign control of assets. In his view, the "worst-case scenario" of the currency never returning to the country of origin was actually the best possible outcome: the country actually purchased its goods by exchanging them for pieces of cheaply made paper. As Friedman put it, this would be the same result as if the exporting country burned the dollars it earned, never returning them to market circulation.

This position is a more refined version of the theorem first discovered by David Hume. Hume argued that England could not permanently gain from exports, because hoarding gold (i.e., currency) would make gold more plentiful in England; therefore, the prices of English goods would rise, making them less attractive exports and making foreign goods more attractive imports. In this way, countries' trade balances would balance out.

Friedman presented his analysis of the balance of trade in Free to Choose, widely considered his most significant popular work.

Trade balance's effects upon a nation's GDP

Exports directly increase and imports directly reduce a nation's balance of trade (i.e. net exports). A trade surplus is a positive net balance of trade, and a trade deficit is a negative net balance of trade. Because the balance of trade is explicitly added in the calculation of a nation's gross domestic product using the expenditure method, trade surpluses are contributions to, and trade deficits are "drags" upon, their nation's GDP; however, foreign-made goods sold (e.g., at retail) contribute to total GDP.

Balance of trade vs. balance of payments

See also

Dutch disease
Transfer problem

References

External links

Where Do U.S. Dollars Go When the United States Runs a Trade Deficit? from Dollars & Sense magazine
OECD Trade balance statistics
U.S. Government Export Assistance
U.S. Trade Deficit Depicted in an Infographic

Balance of payments
International trade theory
International trade
International macroeconomics
https://en.wikipedia.org/wiki/Barrister
Barrister
A barrister is a type of lawyer in common law jurisdictions. Barristers mostly specialise in courtroom advocacy and litigation. Their tasks include taking cases in superior courts and tribunals, drafting legal pleadings, researching the law and giving expert legal opinions.

Barristers are distinguished from both solicitors and chartered legal executives, who have more direct access to clients and may do transactional legal work. It is mainly barristers who are appointed as judges, and they are rarely hired by clients directly. In some legal systems, including those of Scotland, South Africa, Scandinavia, Pakistan, India, Bangladesh, and the British Crown dependencies of Jersey, Guernsey and the Isle of Man, the word barrister is also regarded as an honorific title.

In a few jurisdictions, barristers are usually forbidden from "conducting" litigation, and can only act on the instructions of a solicitor or, increasingly, a chartered legal executive, who perform tasks such as corresponding with parties and the court and drafting court documents. In England and Wales, barristers may seek authorisation from the Bar Standards Board to conduct litigation. This allows a barrister to practise in a "dual capacity", fulfilling the role of both barrister and solicitor.

In some common law jurisdictions, such as New Zealand, Canada and some Australian states and territories, lawyers are entitled to practise both as barristers and solicitors, but it remains a separate system of qualification to practise exclusively as a barrister. In others, such as the United States, the barrister, solicitor and chartered legal executive distinction does not exist at all.

Differences between barristers and other lawyers

Differences

A barrister, who can be considered a jurist, is a lawyer who represents a litigant as an advocate before a court of appropriate jurisdiction. A barrister speaks in court and presents the case before a judge or jury. In some jurisdictions, a barrister receives additional training in evidence law, ethics, and court practice and procedure. In contrast, a solicitor or chartered legal executive generally meets with clients, does preparatory and administrative work and provides legal advice. In this role, they may draft and review legal documents, interact with the client as necessary, prepare evidence, and generally manage the day-to-day administration of a lawsuit. In England and Wales, solicitors and, to a steadily increasing degree, chartered legal executives can provide a crucial support role to a barrister when in court, such as managing large volumes of documents in the case or even negotiating a settlement outside the courtroom while the trial continues inside.

There are other essential differences. A barrister will usually have rights of audience in the higher courts, whereas other legal professionals will often have more limited access, or will need to acquire additional qualifications to have such access. As in common law countries in which there is a split between the roles of barrister and solicitor, the barrister in civil law jurisdictions is responsible for appearing in trials or pleading cases before the courts.

Barristers usually have particular knowledge of case law, precedent, and the skills to "build" a case. When a solicitor or chartered legal executive in, respectively, general and specific practice is confronted with an unusual point of law, they may seek the "opinion of counsel" on the issue.
In most countries, barristers operate as sole practitioners and are prohibited from forming partnerships or from working as a barrister as part of a corporation. (In 2004, the Clementi Report recommended the abolition of this restriction in England and Wales.) However, barristers normally band together into barristers' chambers to share clerks (administrators) and operating expenses. Some chambers grow to be large and sophisticated and have a distinctly corporate feel. In some jurisdictions, barristers may be employed by firms of solicitors and chartered legal executives, banks, or corporations as in-house legal advisers.

In contrast, solicitors, chartered legal executives and attorneys work directly with the clients and are responsible for engaging a barrister with the appropriate expertise for the case. Barristers generally have little or no direct contact with their "lay clients", particularly without the presence or involvement of the solicitor or chartered legal executive. All correspondence, inquiries, invoices, and so on will be addressed to the solicitor or to the chartered legal executive, who is primarily responsible for the barrister's fees.

In court, barristers are often visibly distinguished from solicitors and other legal practitioners by their apparel. For example, in Ireland, England, and Wales, a barrister usually wears a horsehair wig, stiff collar, bands, and a gown. Since January 2008, solicitor advocates have also been entitled to wear wigs, but wear different gowns.

In many countries the traditional divisions between barristers and solicitors and other legal representatives are breaking down. Barristers once enjoyed a monopoly on appearances before the higher courts, but in Great Britain this has now been abolished, and solicitor advocates, as well as chartered legal executives on whom rights of audience have been conferred, can generally appear for clients at trial. Increasingly, firms of solicitors, and their rapidly emerging and increasingly recognised counterparts the chartered legal executives, are keeping even the most advanced advisory and litigation work in-house for economic and client relationship reasons. Similarly, the prohibition on barristers taking instructions directly from the public has also been widely abolished. But, in practice, direct instruction is still a rarity in most jurisdictions, partly because barristers with narrow specializations, or who are only really trained for advocacy, are not prepared to provide general advice to members of the public.

Historically, barristers have had a major role in trial preparation, including drafting pleadings and reviewing evidence. In some areas of law, that is still the case. In other areas, it is relatively common for the barrister to receive the brief from the instructing solicitor to represent a client at trial only a day or two before the proceeding. Part of the reason for this is cost. A barrister is entitled to a "brief fee" when a brief is delivered, and this represents the bulk of his or her fee in relation to any trial. They are then usually entitled to a "refresher" for each day of the trial after the first, but if a case is settled before the trial, the barrister is not needed and the brief fee would be wasted. Some solicitors avoid this by delaying delivery of the brief until it is certain the case will go to trial.
Justification for a split profession

Some benefits of maintaining the split include:
Having an independent barrister reviewing a course of action gives the client a fresh and independent opinion from an expert in the field, distinct from solicitors who may maintain ongoing and long-term relationships with the client.
In many jurisdictions, judges are appointed from the bar (members of the profession of barrister within a given jurisdiction). Since barristers do not have long-term client relationships and are further removed from clients than solicitors, judicial appointees are more independent.
Having recourse to all of the specialist barristers at the bar can enable smaller firms, who could not maintain large specialist departments, to compete with larger firms.
A barrister acts as a check on the solicitor conducting the trial; if it becomes apparent that the claim or defense has not been properly conducted by the solicitor prior to trial, the barrister can (and usually has a duty to) advise the client of a separate possible claim against the solicitor.
Expertise in conducting trials, owing to the fact that barristers are specialist advocates.
In many jurisdictions, barristers must follow the cab-rank rule, which obliges them to accept a brief if it is in their area of expertise and if they are available, facilitating access to justice for the unpopular.

Some disadvantages of the split include:
A multiplicity of legal advisers can lead to less efficiency and higher costs, a concern raised by Sir David Clementi in his review of the English legal profession.
Because they are further removed from the client, barristers can be less familiar with the client's needs.

A detailed examination of the justifications for a split legal profession and of the arguments in favor of a fused profession can be found in English solicitor Peter Reeve's 1986 book, Are Two Legal Professions Necessary?

Regulation

Barristers are regulated by the Bar for the jurisdiction where they practise, and in some countries, by the Inn of Court to which they belong. In some countries, there is external regulation.

Inns of Court, where they exist, regulate admission to the profession. Inns of Court are independent societies that are titularly responsible for the training, admission (calling), and discipline of barristers. Where they exist, a person may only be called to the Bar by an Inn, of which they must first be a member. In fact, historically, call to and success at the Bar, to a large degree, depended upon social connections made early in life.

A Bar collectively describes all members of the profession of barrister within a given jurisdiction. While as a minimum the Bar is an association embracing all its members, it is usually the case, either de facto or de jure, that the Bar is invested with regulatory powers over the manner in which barristers practise.

Barristers around the world

In the common law tradition, the respective roles of a lawyer – that is, as legal adviser and advocate – were formally split into two separate, regulated sub-professions, the other being the office of solicitor. Historically, the distinction was absolute, but in the modern legal age, some countries that had a split legal profession now have a fused profession – anyone entitled to practise as a barrister may also practise as a solicitor, and vice versa, or, alternatively, as a chartered legal executive. In practice, the distinction may be non-existent, minor, or marked, depending on the jurisdiction.
In some jurisdictions, such as Australia, Scotland and Ireland, there is little overlap.

Australia

In the Australian states of New South Wales, Victoria and Queensland, there is a split profession. Nevertheless, subject to conditions, barristers can accept direct access work from clients. Each state Bar Association regulates the profession and essentially has the functions of the English Inns of Court. In the states of South Australia and Western Australia, as well as the Australian Capital Territory, the professions of barrister and solicitor are fused, but an independent bar nonetheless exists, regulated by the Legal Practice Board of the state or territory. In Tasmania and the Northern Territory, the profession is fused, although a very small number of practitioners operate as an independent bar.

Generally, counsel dress in the traditional English manner (wig, gown, bar jacket and jabot) before superior courts, although this is not usually done for interlocutory applications. Wigs and robes are still worn in the Supreme Court and the District Court in civil matters and are dependent on the judicial officer's attire. Robes and wigs are worn in all criminal cases. In Western Australia, wigs are no longer worn in any court.

Each year, the Bar Association appoints certain barristers of seniority and eminence to the rank of "Senior Counsel" (in most states and territories) or "King's Counsel" (in the Northern Territory, Queensland, Victoria and South Australia). Such barristers carry the title "SC" or "KC" after their name. The appointments are made after a process of consultation with members of the profession and the judiciary. Senior Counsel appear in particularly complex or difficult cases. They make up about 14 per cent of the bar in New South Wales.

Bangladesh

In Bangladesh, the law relating to barristers is the Bangladesh Legal Practitioners and Bar Council Order, 1972 (President's Order No. 46), as amended, which is administered and enforced by the Bangladesh Bar Council. The Bangladesh Bar Council is the supreme statutory body regulating the legal profession in Bangladesh, ensuring educational standards and regulatory compliance by advocates on the roll of the Bar Council. The Bar Council, with the help of the government, prescribes rules to regulate the profession. All law graduates, whether educated at home or abroad, have to sit and pass the Bar Council Examination to be enrolled and admitted as professional advocates to practise law as both barristers and solicitors. Newly enrolled advocates are permitted to start practice in the lower (district) courts after being admitted as members of the local (district) bar associations. After two years of practice in the lower courts, advocates are eligible for enrolment in the High Court Division of the Supreme Court of Bangladesh. By passing the Bar Council Examination, advocates are issued certificates of enrolment and permission, in prescribed form, to practise in the High Court Division of the Supreme Court as well. Only those advocates who became barristers in the U.K. maintain the honorific title of barrister. In Bangladesh, there is an association called the Barristers' Association of Bangladesh that represents such U.K.-based barristers.[10]

Canada

In Canada (except Quebec), the professions of barrister and solicitor are fused, and many lawyers refer to themselves with both names, even if they do not practise in both areas.
In colloquial parlance within the Canadian legal profession, lawyers often term themselves as "litigators" (or "barristers") or as "solicitors", depending on the nature of their law practice, though some may in effect practise as both litigators and solicitors. However, "litigators" would generally perform all litigation functions traditionally performed by barristers and solicitors; in contrast, those terming themselves "solicitors" would generally limit themselves to legal work not involving practice before the courts (not even in a preparatory manner as performed by solicitors in England), though some might practise before chambers judges. As is the practice in many other Commonwealth jurisdictions such as Australia, Canadian litigators are "gowned", but without a wig, when appearing before courts of "superior jurisdiction".

All law graduates from Canadian law schools, and holders of NCA Certificates of Qualification (internationally trained lawyers or graduates from other law schools in common-law jurisdictions outside Canada) from the Federation of Law Societies of Canada, can apply to the relevant provincial regulating body (law society) for admission (note that the Canadian provinces are technically each considered a different legal jurisdiction). Prerequisites to admission as a member of a law society involve the completion of a Canadian law degree (or completion of exams to recognize a foreign common law degree), a year of articling as a student supervised by a qualified lawyer, and passing the bar exams mandated by the province the student has applied for a licence in. Once these requirements are complete, the articling student may be "called to the bar", after review of their application and consideration of any "good character" issues, at which point they are presented to the court in a call ceremony. The applicant then becomes a member of the law society as a "barrister and solicitor".

The situation is somewhat different in Quebec as a result of its civil law tradition. The profession of solicitor, or avoué, never took hold in colonial Quebec, so attorneys (avocats) have traditionally been a fused profession, arguing and preparing cases in contentious matters, whereas Quebec's other type of lawyer, civil-law notaries (notaires), handle out-of-court non-contentious matters. However, a number of areas of non-contentious private law are not monopolized by notaries, so attorneys often specialise in handling either trials, cases, advising, or non-trial matters. The only disadvantage is that attorneys cannot draw up public instruments that have the same force of law as notarial acts. Most large law firms in Quebec offer the full range of legal services of law firms in common-law provinces. Intending Quebec attorneys must earn a bachelor's degree in civil law, pass the provincial bar examination, and successfully complete a legal internship to be admitted to practice. Attorneys are regulated by the Quebec Law Society (Barreau du Québec).

France

In France, avocats, or attorneys, were, until the 20th century, the equivalent of barristers. The profession included several grades ranked by seniority: avocat-stagiaire (trainee, who was already qualified but needed to complete two years (or more, depending on the period) of training alongside seasoned lawyers), avocat, and avocat honoraire (senior barrister).
Since the 14th century, and during the course of the 19th and 20th in particular, French barristers competed in territorial battles over respective areas of legal practice against the conseil juridique (legal advisor, transactional solicitor) and avoué (procedural solicitor), and expanded to become the generalist legal practitioner, with the notable exception of notaires (notaries), who are ministry-appointed lawyers (with a separate qualification) and who retain exclusivity over conveyancing and probate. After the 1971 and 1990 legal reforms, the avocat was fused with the avoué and the conseil juridique, making the avocat (or, if female, avocate) an all-purpose lawyer for matters of contentious jurisdiction, analogous to an American attorney.

French attorneys usually do not (although they are entitled to) act both as litigators (trial lawyers) and legal consultants (advising lawyers), known respectively as avocat plaidant and avocat-conseil. This distinction is, however, purely informal and does not correspond to any difference in qualification or admission to the roll. All intending attorneys must pass an examination to be able to enrol in a Centre régional de formation à la profession d'avocat (CRFPA) (regional centre for the training of lawyers). The CRFPA course has a duration of two years and is a mix between classroom teaching and internships. Its culmination is the stage final (final training), where the intending attorney spends six months in a law firm (generally in their favoured field of practice and in a firm in which they hope to be recruited afterwards). The intending attorney then needs to pass the Certificat d'Aptitude à la Profession d'Avocat (CAPA), which is the last professional examination allowing them to join a court's bar (barreau). It is generally recognised that the first examination is much more difficult than the CAPA and is dreaded by most law students. Each bar is regulated by a Bar Council (Ordre du barreau).

A separate body of barristers exists, called the avocats au Conseil d'État et à la Cour de cassation. Although their legal background, training and status are the same as those of the all-purpose avocats, these have a monopoly over litigation taken to the supreme courts, in civil, criminal or administrative matters.

Germany

In Germany, no distinction between barristers and solicitors is made. Lawyers may plead at all courts except the civil branch of the Federal Court of Justice (Bundesgerichtshof), to which fewer than fifty lawyers are admitted. Those lawyers, who deal almost exclusively with litigation, may not plead at other courts and are usually instructed by a lawyer who represented the client in the lower courts. However, these restrictions do not apply to criminal cases, nor to pleadings at courts of the other court systems, including labour, administrative, taxation, and social courts and the European Union court system.

Hong Kong

The legal profession in Hong Kong is also divided into two branches: barristers and solicitors. In the High Court (including both the Court of First Instance and the Court of Appeal) and the Court of Final Appeal, as a general rule, only barristers and solicitor-advocates are allowed to speak on behalf of any party in open court. This means that solicitors are restricted from doing so. In these two courts, save for hearings in chambers, barristers dress in the traditional English manner, as do the judges and other lawyers.
In Hong Kong, the rank of Queen's Counsel was granted prior to the handover of Hong Kong from the United Kingdom to China in 1997. After the handover, the rank has been replaced by Senior Counsel, with the post-nominal letters SC. Senior Counsel may still, however, style themselves as silks, like their British counterparts.

India

In India, the law relating to the barrister is the Advocates Act, 1961, introduced and conceived by Ashoke Kumar Sen, the then law minister of India, which is a law passed by the Parliament and is administered and enforced by the Bar Council of India. Under the act, the Bar Council of India is the supreme regulatory body for the legal profession in India, ensuring compliance with the laws and maintenance of professional standards by the legal profession in the country. For this purpose, the Bar Council of India is authorized to pass regulations and make orders in individual cases and also generally.

Each state has a Bar Council of its own, whose function is to enroll the barristers willing to practise predominantly within the territorial confines of that state and to perform the functions of the Bar Council of India within the territory assigned to it. Therefore, each law degree holder must be enrolled with a (single) State Bar Council to practise in India. However, enrollment with any State Bar Council does not restrict the barrister from appearing before any court in India, even though it is beyond the territorial jurisdiction of the State Bar Council in which they are enrolled. The advantage of having the State Bar Councils is that the workload of the Bar Council of India can be divided among the various State Bar Councils and that matters can be dealt with locally and in an expedited manner. However, for all practical and legal purposes, the Bar Council of India retains the final power to take decisions in any and all matters related to the legal profession as a whole or with respect to any individual advocate.

The process for being entitled to practise in India is twofold. First, the applicant must be a holder of a law degree from a recognized institution in India (or from one of the four recognised universities in the United Kingdom) and second, must pass the enrollment qualifications of the Bar Council of the state where they seek to be enrolled. For this purpose, the Bar Council of India has an internal committee whose function is to supervise and examine the various institutions conferring law degrees and to grant recognition to these institutions once they meet the required standards. In this manner, the Bar Council of India also ensures that the standard of education required for practising in India is met. As regards the qualification for enrollment with the State Bar Council, while the actual formalities may vary from one state to another, they predominantly ensure that the applicant has not been bankrupt or convicted of a crime and is generally fit to practise before the courts of India.

Enrollment with a Bar Council also means that the law degree holder is recognized as a barrister and is required to maintain a standard of conduct and professional demeanour at all times, both on and off the profession. The Bar Council of India also prescribes "Rules of Conduct" to be observed by barristers in the courts, while interacting with clients, and even otherwise.

Ireland

In the Republic of Ireland, admission to the Bar by the Chief Justice of Ireland is restricted to those on whom a Barrister-at-Law degree (BL) has first been conferred.
The Honorable Society of King's Inns is the only educational establishment which runs vocational courses for barristers in the Republic, and degrees of Barrister-at-Law can only be conferred by King's Inns. King's Inns is also the only body with the capacity to call individuals to the bar and to disbar them.

Most Irish barristers choose to be governed thereafter by the Bar of Ireland, a quasi-private entity. Senior members of the profession may be selected for elevation to the Inner Bar, when they may describe themselves as Senior Counsel ("SC"). All barristers who have not been called to the Inner Bar are known as Junior Counsel (and are identified by the postnominal initials "BL"), regardless of age or experience. Admission to the Inner Bar is made by declaration before the Supreme Court, patents of precedence having been granted by the Government. Irish barristers are sole practitioners and may not form chambers or partnerships if they wish to remain members of the Bar of Ireland's Law Library.

To practise under the Bar of Ireland's rules, a newly qualified barrister is apprenticed to an experienced barrister of at least seven years' experience. This apprenticeship is known as pupillage or devilling. Devilling is compulsory for those barristers who wish to be members of the Law Library and lasts for one legal year. It is common to devil for a second year in a less formal arrangement, but this is not compulsory. Devils are not generally paid for their work in their devilling year.

Israel

In Israel, there is no distinction between barristers and solicitors, even though the judicial system is based mostly on English common law, as a continuation of the British Mandate in Palestine. In practice, there are lawyers in Israel who do not appear in courts, and whose work is similar to that of a solicitor.

Japan

Japan adopts a unified system. However, there are certain classes of qualified professionals who are allowed to practise in certain limited areas of law, such as scriveners (shiho shoshi, qualified to handle title registration, deposit, and certain petty court proceedings with additional certification), tax accountants (zeirishi, qualified to prepare tax returns, provide advice on tax computation and represent a client in administrative tax appeals) and patent agents (benrishi, qualified to practise patent registration and represent a client in administrative patent appeals). Only lawyers (bengoshi) can appear before the court and are qualified to practise in any areas of law, including, but not limited to, the areas that the qualified law-related professionals above are allowed to practise in. Most attorneys still focus primarily on court practice, and only a very small number of attorneys give sophisticated and expert legal advice on a day-to-day basis to large corporations.

Netherlands

The Netherlands used to have a semi-separated legal profession comprising the lawyer and the procureur, the latter resembling, to some extent, the profession of barrister. Under that system, lawyers were entitled to represent their clients in law but were only able to file cases before the court at which they were registered. Cases falling under the jurisdiction of another court had to be filed by a procureur registered at that court, in practice often another lawyer exercising both functions.
Questions were raised about the necessity of the separation, given the fact that its main purpose – the preservation of the quality of the legal profession and observance of local court rules and customs – had become obsolete. For that reason, the procureur as a separate profession was abolished and its functions merged with the legal profession in 2008. Currently, lawyers can file cases before any court, regardless of where they are registered. The only notable exception concerns civil cases brought before the Supreme Court, which have to be handled by lawyers registered at the Supreme Court, thus gaining from it the title "lawyer at the Supreme Court".

New Zealand

In New Zealand, the professions are not formally fused, but practitioners are enrolled in the High Court as "Barristers and Solicitors". They may choose, however, to practise as barristers sole. About 15% practise solely as barristers, mainly in the larger cities and usually in "chambers" (following the British terminology). They receive "instructions" from other practitioners, at least nominally. They usually conduct the proceedings in their entirety.

Any lawyer may apply to become a King's Counsel (KC) in recognition of long-standing contributions to the legal profession, but this status is conferred on those practising as solicitors only in exceptional circumstances. This step, referred to as "being called to the inner bar" or "taking silk", is considered highly prestigious and has been a step in the career of many New Zealand judges. Unlike other jurisdictions, the term "junior barrister" is popularly used to refer to a lawyer who holds a practising certificate as a barrister but is employed by another, more senior barrister. Generally, junior barristers are within their first five years of practice and are not yet qualified to practise as barristers sole. Barristers sole (i.e. barristers who are not employed by another barrister) who are not King's Counsel are never referred to as junior barristers.

Nigeria

In Nigeria, there is no formal distinction between barristers and solicitors. All students who pass the bar examinations – offered exclusively by the Nigerian Law School – are called to the Nigerian bar by the Body of Benchers. Lawyers may argue in any Federal trial or appellate court, as well as any of the courts in Nigeria's 36 states and the Federal Capital Territory. The Legal Practitioners Act refers to Nigerian lawyers as legal practitioners and, following their call to the Bar, Nigerian lawyers enter their names in the register or Roll of Legal Practitioners kept at the Supreme Court. Perhaps for this reason, a Nigerian lawyer is also often referred to as a Barrister and Solicitor of the Supreme Court of Nigeria, and many Nigerian lawyers term themselves Barrister-at-Law, complete with the postnominal initials "B.L.".

The vast majority of Nigerian lawyers combine contentious and non-contentious work, although there is a growing tendency for practitioners in the bigger practices to specialise in one or the other. In colloquial parlance within the Nigerian legal profession, lawyers may, for this reason, be referred to as "litigators" or as "solicitors". Consistent with the practice in England and elsewhere in the Commonwealth, senior members of the profession may be selected for elevation to the Inner Bar by the conferment of the rank of Senior Advocate of Nigeria (SAN).

Pakistan

The profession in Pakistan is fused; an advocate works both as a barrister and a solicitor, with higher rights of audience being provided.
To practise as a barrister in Pakistan, a law graduate must complete three steps: pass the Bar Practice and Training Course (BPTC), be called to the Bar by an Inn of Court, and attain a licence to practise as an advocate in the courts of Pakistan from the relevant Bar Council, provincial or federal.

Poland

In Poland, there are two main types of legal professions: advocate and legal counsel. Both are regulated, and these professions are restricted to people who have completed five-year law studies, have at least three years of experience and have passed five difficult national exams (civil law, criminal law, company law, administrative law and ethics), or who hold a doctor of law degree. Before 2015, the only difference was that advocates had the right to represent clients before the court in all cases, while legal counsel could not represent clients before the court in criminal cases. Presently, legal counsel can also represent clients in criminal cases, so the differences between the professions are now of only historical significance.

South Africa

In South Africa the employment and practice of advocates (as barristers are known in South Africa) is consistent with the rest of the Commonwealth. Advocates carry the rank of Junior or Senior Counsel (SC) and are mostly briefed and paid by solicitors (known as attorneys). They are usually employed in the higher courts, particularly in the Appeal Courts, where they often appear as specialist counsel. South African solicitors (attorneys) follow a practice of referring cases to Counsel for an opinion before proceeding with a case, when Counsel in question practises as a specialist in the case law at stake. Aspiring advocates currently spend one year in pupillage (formerly only six months) before being admitted to the bar in their respective provincial or judicial jurisdictions. The term "Advocate" is sometimes used in South Africa as a title, e.g. "Advocate John Doe, SC" (Advokaat in Afrikaans), in the same fashion as "Dr. John Doe" for a medical doctor.

South Korea

In South Korea, there is no distinction between the judiciary and lawyers. Previously, a person who passed the national bar exam after two years of national education was able to become a judge, prosecutor, or "lawyer" in accordance with their grades upon graduation. As a result of changes from implementing an accommodated law school system, there are now two standard means of becoming a lawyer. Under the current legal system, to be a judge or a prosecutor, lawyers need to practise their legal knowledge. A "lawyer" does not have any limitation on their scope of practice.

Spain

Spain has a division, but it does not correspond to the division in Britain between barristers/advocates and solicitors. Procuradores represent the litigant procedurally in court, generally under the authority of a power of attorney executed by a civil law notary, while abogados represent the substantive claims of the litigant through trial advocacy. Abogados perform both transactional work and advise in connection with court proceedings, and they have full right of audience in front of the court. The court proceeding is carried out with abogados, not with procuradores. In a nutshell, procuradores are court agents that operate under the instructions of an abogado. Their practice is confined to the locality of the court to which they are admitted.

United Kingdom

Under EU law, barristers, along with advocates, chartered legal executives and solicitors, are recognised as lawyers.
England and Wales

Although with somewhat different laws, England and Wales are considered within the United Kingdom a single, unified legal jurisdiction for the purposes of both civil and criminal law, alongside Scotland and Northern Ireland, the other two legal jurisdictions within the United Kingdom. England and Wales are covered by a common bar (an organisation of barristers) and a single law society (an organisation of solicitors).

The profession of barrister in England and Wales is a separate profession from that of solicitor. It is, however, possible to hold the qualification of both barrister and solicitor, and/or chartered legal executive, at the same time. It is not necessary to leave the bar to qualify as a solicitor.

Barristers are regulated by the Bar Standards Board, a division of the General Council of the Bar. A barrister must be a member of one of the Inns of Court, which traditionally educated and regulated barristers. There are four Inns of Court: The Honourable Society of Lincoln's Inn, The Honourable Society of Gray's Inn, The Honourable Society of the Middle Temple, and The Honourable Society of the Inner Temple. All are situated in central London, near the Royal Courts of Justice. They perform scholastic and social roles, and in all cases provide financial aid to student barristers (subject to merit) through scholarships. It is the Inns that actually "call" the student to the Bar at a ceremony similar to a graduation. Social functions include dining with other members and guests and hosting other events.

Law graduates wishing to work and be known as barristers must take a course of professional training (known as the "vocational component") at one of the institutions approved by the Bar Council. Until late 2020 this course was exclusively the Bar Professional Training Course, but since then the approved training offer has been broadened to a number of different courses, such as the new Bar Vocational Course at the Inns of Court College of Advocacy. On successful completion of the vocational component, student barristers are "called" to the bar by their respective Inns and are elevated to the degree of "Barrister". However, before they can practise independently they must first undertake 12 months of pupillage. The first six months of this period are spent shadowing more senior practitioners, after which pupil barristers may begin to undertake some court work of their own. Following successful completion of this stage, most barristers then join a set of chambers, a group of counsel who share the costs of premises and support staff whilst remaining individually self-employed.

In December 2014 there were just over 15,500 barristers in independent practice, of whom about ten percent were King's Counsel and the remainder junior barristers. Many barristers (about 2,800) are employed in companies as "in-house" counsel, or by local or national government, or in academic institutions.

Certain barristers in England and Wales are now instructed directly by members of the public. Members of the public may engage the services of the barrister directly within the framework of the Public Access Scheme; a solicitor is not involved at any stage. Barristers undertaking public access work can provide legal advice and representation in court in almost all areas of law (see the Public Access Information on the Bar Council website) and are entitled to represent clients in any court or tribunal in England and Wales.
Once instructions from a client are accepted, it is the barrister (rather than the solicitor) who advises and guides the client through the relevant legal procedure or litigation. Before a barrister can undertake Public Access work, they must have completed a special course. At present, about one in 20 barristers has so qualified. There is also a separate scheme called "Licensed Access", available to certain nominated classes of professional client; it is not open to the general public.

Public access work is experiencing a huge surge at the bar, with barristers taking advantage of the new opportunity for the bar to make profit in the face of legal aid cuts elsewhere in the profession. The ability of barristers to accept such instructions is a recent development; it results from a change in the rules set down by the General Council of the Bar in July 2004. The Public Access Scheme has been introduced as part of the drive to open up the legal system to the public and to make it easier and cheaper to obtain access to legal advice. It further reduces the distinction between solicitors and barristers. The distinction remains, however, because there are certain aspects of a solicitor's role that a barrister is not able to undertake.

Historically, a barrister might use the honorific Esquire. Even though the term barrister-at-law is sometimes seen, and was once very common, it has never been formally correct in England and Wales. Barrister is the only correct nomenclature.

Barristers are expected to maintain very high standards of professional conduct. The objective of the barristers' code of conduct is to avoid dominance by either the barrister or the client, with the client enabled to make informed decisions in a supportive atmosphere; in turn, the client expects (implicitly and/or explicitly) the barrister to uphold their duties, namely by acting in the client's best interests (CD2), acting with honesty and integrity (CD3), keeping the client's affairs confidential (CD6) and working to a competent standard (CD7). These core duties (CDs) are a few, among others, that are enshrined in the BSB Handbook.

Northern Ireland

In April 2003 there were 554 barristers in independent practice in Northern Ireland. Sixty-six were Queen's Counsel (QCs), barristers who have earned a high reputation and are appointed by the Queen on the recommendation of the Lord Chancellor as senior advocates and advisers. Those barristers who are not QCs are called Junior Counsel and are styled "BL" or "Barrister-at-Law". The term junior is often misleading, since many members of the Junior Bar are experienced barristers with considerable expertise.

Benchers are, and have been for centuries, the governing bodies of the four Inns of Court in London and King's Inns, Dublin. The Benchers of the Inn of Court of Northern Ireland governed the Inn until the enactment of the Constitution of the Inn in 1983, which provides that the government of the Inn is shared between the Benchers, the Executive Council of the Inn and members of the Inn assembled in General Meeting. The Executive Council (through its Education Committee) is responsible for considering Memorials submitted by applicants for admission as students of the Inn and by Bar students of the Inn for admission to the degree of Barrister-at-Law, and for making recommendations to the Benchers. The final decisions on these Memorials are taken by the Benchers.
The Benchers also have the exclusive power of expelling or suspending a Bar student and of disbarring a barrister or suspending a barrister from practice. The Executive Council is also involved with education, fees of students, calling counsel to the Bar (although call to the Bar is performed by the Lord Chief Justice of Northern Ireland on the invitation of the Benchers), administration of the Bar Library (to which all practising members of the Bar belong), and liaising with corresponding bodies in other countries.

Scotland

In Scotland, an advocate is, in all respects except name, a barrister, but there are significant differences in professional practice. In Scotland, admission to and the conduct of the profession is regulated by the Faculty of Advocates (as opposed to an Inn).

Crown dependencies and UK Overseas Territories

Isle of Man, Jersey and Guernsey

In the Bailiwick of Jersey, there are solicitors (called écrivains) and advocates (French avocat). In the Bailiwicks of Jersey and Guernsey and on the Isle of Man, advocates perform the combined functions of both solicitors and barristers.

Gibraltar

Gibraltar is a British Overseas Territory with a legal profession based on the common law. The legal profession includes both barristers and solicitors, with most barristers also acting as solicitors. Admission and disciplinary matters in Gibraltar are dealt with by the Bar Council of Gibraltar and the Supreme Court of Gibraltar. In order for barristers or solicitors to be admitted as practising lawyers in Gibraltar, they must comply with the Supreme Court Act 1930, as amended by the Supreme Court Amendment Act 2015, which requires, amongst other things, all newly admitted lawyers as of 1 July 2015 to undertake a year's course in Gibraltar law at the University of Gibraltar. Solicitors also have right of audience in Gibraltar's courts.

United States

The United States does not distinguish between lawyers as pleaders (barristers) and agents (or solicitors). Any American lawyer who has passed a bar examination and has been admitted to practice law in a particular U.S. state or other jurisdiction may prosecute or defend in the courts of that jurisdiction. The barrister–solicitor distinction existed historically in some U.S. states, which had a separate label for barristers (called "counselors", hence the expression "attorney and counselor at law"), but both professions have long since been fused into the all-purpose "lawyer" or "attorney".

Additionally, some state appellate courts require attorneys to obtain a separate certificate of admission to plead and practice in the appellate court. Federal courts require specific admission to that court's bar to practice before it. At the state appellate level and in federal courts, there is generally no separate examination process, although some U.S. district courts require an examination on practices and procedures in their specific courts. Unless an examination is required, admission is usually granted as a matter of course to any licensed attorney in the state where the court is located. Some federal courts will grant admission to any attorney licensed in any U.S. jurisdiction.

Popular culture

Rumpole of the Bailey (UK) – classic courtroom series
Kavanagh Q.C. (1995–2001) (UK)
North Square (2000) (UK) – Channel 4 court drama series containing interactions between barristers and solicitors
Bridget Jones's Diary (1996 book and 2001 film) – a major character, Mark Darcy, is described as a "top barrister"
A Fish Called Wanda (1988) – John Cleese's character, Archie Leach, is a barrister who, in defending a hapless jewel thief, becomes entangled in the crime
Silk (2011–2014) (UK) – BBC court drama series
Rake (2010–2016) – Australian TV series based on the story of a colourful barrister
Sydney Carton – central character, a barrister, in Charles Dickens' A Tale of Two Cities
Witness for the Prosecution – the central character is the barrister Sir Wilfrid Robarts, QC
Arnold Timsh – target in The Knife of Dunwall, an expansion of the video game Dishonored
Barrister Babu (2020) – Indian social drama TV series on Colors TV

See also

Bar (law)
Barristers' Ball
Legal professions in England and Wales
Revising Barrister
Serjeant-at-law
Special Pleader

References

Further reading

Abel, Richard L. The Making of the English Legal Profession: 1800–1988 (1998), 576 pp.
Lemmings, David. Gentlemen and Barristers: The Inns of Court and the English Bar, 1680–1730 (Oxford, 1990)
Levack, Brian. The Civil Lawyers (Oxford, 1973)
Prest, Wilfrid. The Inns of Court (1972)
Prest, Wilfrid. The Rise of the Barristers (1986)

External links

Hong Kong Bar Association (barristers in Hong Kong)
Canadian Bar Association

Australia
Australian Bar Association (barristers in the Commonwealth of Australia)
New South Wales Bar Association
The Victorian Bar (Australia)
Queensland Bar Association (Australia)
South Australian Bar Association (Australia)
Western Australian Bar Association (Australia)
The Northern Territory Bar Association (Australia)

UK and Ireland
The Barrister magazine
The Inner Temple
Bar Council (barristers in England and Wales)
Bar Library of Northern Ireland
Faculty of Advocates in Scotland
The Bar of Ireland
The difference between barristers and solicitors
Advice on structure and training for the Bar

Common law
Law of the United Kingdom
Legal professions
https://en.wikipedia.org/wiki/Bermuda%20Triangle
Bermuda Triangle
The Bermuda Triangle, also known as the Devil's Triangle, is an urban legend focused on a loosely defined region in the western part of the North Atlantic Ocean where a number of aircraft and ships are said to have disappeared under mysterious circumstances. The idea of the area as uniquely prone to disappearances arose in the mid-20th century, but most reputable sources dismiss the idea that there is any mystery.

Origins

The earliest suggestion of unusual disappearances in the Bermuda area appeared in a September 17, 1950, article published in The Miami Herald (Associated Press) by Edward Van Winkle Jones. Two years later, Fate magazine published "Sea Mystery at Our Back Door", a short article by George Sand covering the loss of several planes and ships, including the loss of Flight 19, a group of five US Navy Grumman TBM Avenger torpedo bombers on a training mission. Sand's article was the first to lay out the now-familiar triangular area where the losses took place, as well as the first to suggest a supernatural element to the Flight 19 incident. Flight 19 alone would be covered again in the April 1962 issue of American Legion magazine. In it, author Allan W. Eckert wrote that the flight leader had been heard saying, "We are entering white water, nothing seems right. We don't know where we are, the water is green, no white." He also wrote that officials at the Navy board of inquiry stated that the planes "flew off to Mars."

In February 1964, Vincent Gaddis wrote an article called "The Deadly Bermuda Triangle" in the pulp magazine Argosy, saying that Flight 19 and other disappearances were part of a pattern of strange events in the region. The next year, Gaddis expanded this article into a book, Invisible Horizons. Other writers elaborated on Gaddis' ideas: John Wallace Spencer (Limbo of the Lost, 1969, repr. 1973); Charles Berlitz (The Bermuda Triangle, 1974); Richard Winer (The Devil's Triangle, 1974); and many others, all keeping to some of the same supernatural elements outlined by Eckert.

Triangle area

The Gaddis Argosy article delineated the boundaries of the triangle, giving its vertices as Miami; San Juan, Puerto Rico; and Bermuda. Subsequent writers did not necessarily follow this definition. Some writers gave different boundaries and vertices to the triangle, with the total area varying considerably from writer to writer; "Indeed, some writers even stretch it as far as the Irish coast." Consequently, the determination of which accidents occurred inside the triangle depends on which writer reported them.

Criticism of the concept

Larry Kusche

Larry Kusche, author of The Bermuda Triangle Mystery: Solved (1975), argued that many claims of Gaddis and subsequent writers were exaggerated, dubious or unverifiable. Kusche's research revealed a number of inaccuracies and inconsistencies between Berlitz's accounts and statements from eyewitnesses, participants, and others involved in the initial incidents. Kusche noted cases where pertinent information went unreported, such as the disappearance of round-the-world yachtsman Donald Crowhurst, which Berlitz had presented as a mystery despite clear evidence to the contrary. Another example was the ore-carrier recounted by Berlitz as lost without trace three days out of an Atlantic port, when it had in fact been lost three days out of a port with the same name in the Pacific Ocean. Kusche also argued that a large percentage of the incidents that sparked allegations of the Triangle's mysterious influence actually occurred well outside it.
Often his research was simple: he would review period newspapers from the dates of reported incidents and find reports on possibly relevant events, such as unusual weather, that were never mentioned in the disappearance stories. Kusche concluded that:

The number of ships and aircraft reported missing in the area was not significantly greater, proportionally speaking, than in any other part of the ocean.
In an area frequented by tropical cyclones, the number of disappearances that did occur were, for the most part, neither disproportionate, unlikely, nor mysterious.
Furthermore, Berlitz and other writers would often fail to mention such storms, or even represent a disappearance as having happened in calm conditions when meteorological records clearly contradict this.
The numbers themselves had been exaggerated by sloppy research. A boat's disappearance, for example, would be reported, but its eventual (if belated) return to port might not have been.
Some disappearances had, in fact, never happened. One plane crash was said to have taken place in 1937, off Daytona Beach, Florida, in front of hundreds of witnesses.
The legend of the Bermuda Triangle is a manufactured mystery, perpetuated by writers who either purposely or unknowingly made use of misconceptions, faulty reasoning, and sensationalism.

In a 2013 study, the World Wide Fund for Nature identified the world's 10 most dangerous waters for shipping, but the Bermuda Triangle was not among them.

Further responses

When the UK Channel 4 television program The Bermuda Triangle (1992) was being produced by John Simmons of Geofilms for the Equinox series, the marine insurance market Lloyd's of London was asked whether an unusually large number of ships had sunk in the Bermuda Triangle area. Lloyd's determined that large numbers of ships had not sunk there, and it does not charge higher rates for passing through this area. United States Coast Guard records confirm this conclusion. In fact, the number of supposed disappearances is relatively insignificant considering the number of ships and aircraft that pass through on a regular basis. The Coast Guard is also officially skeptical of the Triangle, noting that it collects and publishes, through its inquiries, much documentation contradicting many of the incidents written about by the Triangle authors. In one such incident involving the 1972 explosion and sinking of the tanker V. A. Fogg, the Coast Guard photographed the wreck and recovered several bodies, in contrast with one Triangle author's claim that all the bodies had vanished, with the exception of the captain, who was found sitting in his cabin at his desk, clutching a coffee cup. In addition, V. A. Fogg sank off the coast of Texas, nowhere near the commonly accepted boundaries of the Triangle.

The Nova/Horizon episode The Case of the Bermuda Triangle, aired on June 27, 1976, was highly critical, stating that "When we've gone back to the original sources or the people involved, the mystery evaporates. Science does not have to answer questions about the Triangle because those questions are not valid in the first place ... Ships and planes behave in the Triangle the same way they behave everywhere else in the world." Skeptical researchers, such as Ernest Taves and Barry Singer, have noted how mysteries and the paranormal are very popular and profitable. This has led to the production of vast amounts of material on topics such as the Bermuda Triangle.
They were able to show that some of the pro-paranormal material is often misleading or inaccurate, but its producers continue to market it. Accordingly, they have claimed that the market is biased in favor of books, TV specials, and other media that support the Triangle mystery, and against well-researched material if it espouses a skeptical viewpoint. Benjamin Radford, an author and scientific paranormal investigator, noted in an interview on the Bermuda Triangle that it could be very difficult to locate an aircraft lost at sea due to the vast search area, and that although the disappearance might be mysterious, that did not make it paranormal or unexplainable. Radford further noted the importance of double-checking information, as the mystery surrounding the Bermuda Triangle had been created by people who had neglected to do so.

Hypothetical explanation attempts

Persons accepting the Bermuda Triangle as a real phenomenon have offered a number of explanatory approaches.

Paranormal explanations

Triangle writers have used a number of supernatural concepts to explain the events. One explanation pins the blame on leftover technology from the mythical lost continent of Atlantis. Sometimes connected to the Atlantis story is the submerged rock formation known as the Bimini Road off the island of Bimini in the Bahamas, which is in the Triangle by some definitions. Followers of the purported psychic Edgar Cayce take his prediction that evidence of Atlantis would be found in 1968 as referring to the discovery of the Bimini Road. Believers describe the formation as a road, wall, or other structure, but the Bimini Road is of natural origin. Some hypothesize that a parallel universe exists in the Bermuda Triangle region, causing a time/space warp that sucks the objects around it into a parallel universe. Others attribute the events to UFOs. Charles Berlitz, author of various books on anomalous phenomena, lists several theories attributing the losses in the Triangle to anomalous or unexplained forces.

Natural explanations

Compass variations

Compass problems are among the most commonly cited phenomena in many Triangle incidents. While some have theorized that unusual local magnetic anomalies may exist in the area, such anomalies have not been found. Compasses have natural magnetic variations in relation to the magnetic poles, a fact which navigators have known for centuries. Magnetic (compass) north and geographic (true) north are exactly the same only for a small number of places; in the United States, for example, only those places on a line running from Wisconsin to the Gulf of Mexico. But the public may not be as informed, and may think there is something mysterious about a compass "changing" across an area as large as the Triangle, which it naturally will.

Gulf Stream

The Gulf Stream is a major surface current, primarily driven by thermohaline circulation, that originates in the Gulf of Mexico and then flows through the Straits of Florida into the North Atlantic. In essence, it is a river within an ocean, and, like a river, it can and does carry floating objects; its maximum surface velocity is considerable. A small plane making a water landing or a boat having engine trouble can be carried away from its reported position by the current.

Human error

One of the most cited explanations in official inquiries as to the loss of any aircraft or vessel is human error.
Human stubbornness may have caused businessman Harvey Conover to lose his sailing yacht, Revonoc, as he sailed into the teeth of a storm south of Florida on January 1, 1958.

Violent weather

Hurricanes are powerful storms that form in tropical waters and have historically cost thousands of lives and caused billions of dollars in damage. The sinking of Francisco de Bobadilla's Spanish fleet in 1502 was the first recorded instance of a destructive hurricane. These storms have in the past caused a number of incidents related to the Triangle. Many Atlantic hurricanes pass through the Triangle as they recurve off the Eastern Seaboard, and, before the advent of weather satellites, ships often had little to no warning of a hurricane's approach. A powerful downdraft of cold air was suspected to be a cause in the sinking of Pride of Baltimore on May 14, 1986; the crew of the sunken vessel noted that the wind suddenly shifted and sharply increased in velocity. A National Hurricane Center satellite specialist, James Lushine, stated "during very unstable weather conditions the downburst of cold air from aloft can hit the surface like a bomb, exploding outward like a giant squall line of wind and water." A similar event occurred to Concordia in 2010, off the coast of Brazil.

Methane hydrates

An explanation for some of the disappearances has focused on the presence of large fields of methane hydrates (a form of natural gas) on the continental shelves. Laboratory experiments carried out in Australia have proven that bubbles can, indeed, sink a scale model ship by decreasing the density of the water; any wreckage consequently rising to the surface would be rapidly dispersed by the Gulf Stream. It has been hypothesized that periodic methane eruptions (sometimes called "mud volcanoes") may produce regions of frothy water that are no longer capable of providing adequate buoyancy for ships. If this were the case, such an area forming around a ship could cause it to sink very rapidly and without warning. Publications by the USGS describe large stores of undersea hydrates worldwide, including the Blake Ridge area off the coast of the southeastern United States. However, according to the USGS, no large releases of gas hydrates are believed to have occurred in the Bermuda Triangle for the past 15,000 years.

Notable incidents

HMS Atalanta

The sail training ship HMS Atalanta (originally named HMS Juno) disappeared with her entire crew after setting sail from the Royal Naval Dockyard, Bermuda, for Falmouth, England, on 31 January 1880. It was presumed that she sank in a powerful storm which crossed her route a couple of weeks after she sailed, and that the inexperience of her crew, composed primarily of trainees, may have been a contributing factor. The search for evidence of her fate attracted worldwide attention at the time (connection is also often made to the 1878 loss of the training ship HMS Eurydice, which foundered after departing the Royal Naval Dockyard in Bermuda for Portsmouth on 6 March), and she was alleged decades later to have been a victim of the mysterious triangle, an allegation resoundingly refuted by the research of author David Francis Raine in 1997.

USS Cyclops

The incident resulting in the single largest loss of life in the history of the US Navy not related to combat occurred when the collier Cyclops, carrying a full load of manganese ore and with one engine out of action, went missing without a trace with a crew of 309 sometime after March 4, 1918, after departing the island of Barbados.
Although there is no strong evidence for any single theory, many independent theories exist, some blaming storms, some capsizing, and some suggesting that wartime enemy activity was to blame for the loss. In addition, two of Cyclops's sister ships, Proteus and Nereus, were subsequently lost in the North Atlantic during World War II. Both ships were transporting heavy loads of metallic ore similar to that which was loaded on Cyclops during her fatal voyage. In all three cases, structural failure due to overloading with a much denser cargo than designed is considered the most likely cause of sinking.

Carroll A. Deering

Carroll A. Deering, a five-masted schooner built in 1919, was found hard aground and abandoned at Diamond Shoals, near Cape Hatteras, North Carolina, on January 31, 1921. The FBI investigation into the Deering scrutinized, then ruled out, multiple theories as to why and how the ship was abandoned, including piracy, domestic Communist sabotage and the involvement of rum-runners.

Flight 19

Flight 19 was a training flight of five TBM Avenger torpedo bombers that disappeared on December 5, 1945, while over the Atlantic. The squadron's flight plan was scheduled to take them due east from Fort Lauderdale, then north, and then back over a final leg to complete the exercise. The flight never returned to base. The disappearance was attributed by Navy investigators to navigational error leading to the aircraft running out of fuel. One of the search and rescue aircraft deployed to look for them, a PBM Mariner with a 13-man crew, also disappeared. A tanker off the coast of Florida reported seeing an explosion and observing a widespread oil slick while fruitlessly searching for survivors. The weather was becoming stormy by the end of the incident. According to contemporaneous sources, the Mariner had a history of explosions due to vapour leaks when heavily loaded with fuel, as it might have been for a potentially long search-and-rescue operation.

Star Tiger and Star Ariel

G-AHNP Star Tiger disappeared on January 30, 1948, on a flight from the Azores to Bermuda; G-AGRE Star Ariel disappeared on January 17, 1949, on a flight from Bermuda to Kingston, Jamaica. Both were Avro Tudor IV passenger aircraft operated by British South American Airways. Both planes were operating at the very limits of their range, and the slightest error or fault in the equipment could keep them from reaching the small island.

Douglas DC-3

On December 28, 1948, a Douglas DC-3 aircraft, number NC16002, disappeared while on a flight from San Juan, Puerto Rico, to Miami. No trace of the aircraft, or the 32 people on board, was ever found. A Civil Aeronautics Board investigation found there was insufficient information available on which to determine the probable cause of the disappearance.

Connemara IV

A pleasure yacht, Connemara IV, was found adrift in the Atlantic south of Bermuda on September 26, 1955; it is usually stated in the stories (Berlitz, Winer) that the crew vanished while the yacht survived being at sea during three hurricanes. The 1955 Atlantic hurricane season shows Hurricane Ione passing nearby between 14 and 18 September, with Bermuda being affected by winds of almost gale force. In his second book on the Bermuda Triangle, Winer quoted from a letter he had received from Mr J. E. Challenor of Barbados.

KC-135 Stratotankers

On August 28, 1963, a pair of US Air Force KC-135 Stratotanker aircraft collided and crashed into the Atlantic west of Bermuda.
Some writers say that while the two aircraft did collide, there were two distinct crash sites, separated by a wide expanse of water. However, Kusche's research showed that the unclassified version of the Air Force investigation report revealed that the debris field defining the second "crash site" was examined by a search and rescue ship, and found to be a mass of seaweed and driftwood tangled in an old buoy.

See also

List of Bermuda Triangle incidents
List of topics characterized as pseudoscience
Nevada Triangle
Devil's Sea (or Dragon's Triangle)
Sargasso Sea
SS Cotopaxi
Vile vortex
Hurricane Alley

References

Citations

Bibliography

The incidents cited above, apart from the official documentation, come from the following works. Some incidents mentioned as having taken place within the Triangle are found only in these sources.

Further reading

Newspaper articles

ProQuest has newspaper source material for many incidents, archived in Portable Document Format (PDF). The newspapers include The New York Times, The Washington Post, and The Atlanta Constitution. To access this website, registration is required, usually through a library connected to a college or university.

Flight 19
"Great Hunt On For 27 Navy Fliers Missing In Five Planes Off Florida", The New York Times, December 7, 1945.
"Wide Hunt For 27 Men In Six Navy Planes", The Washington Post, December 7, 1945.
"Fire Signals Seen In Area Of Lost Men", The Washington Post, December 9, 1945.

SS Cotopaxi
"Lloyd's posts Cotopaxi As 'Missing'", The New York Times, January 7, 1926.
"Efforts To Locate Missing Ship Fail", The Washington Post, December 6, 1925.
"Lighthouse Keepers Seek Missing Ship", The Washington Post, December 7, 1925.
"53 On Missing Craft Are Reported Saved", The Washington Post, December 13, 1925.

USS Cyclops (AC-4)
"Cold High Winds Do $25,000 Damage", The Washington Post, March 11, 1918.
"Collier Overdue A Month", The New York Times, April 15, 1918.
"More Ships Hunt For Missing Cyclops", The New York Times, April 16, 1918.
"Haven't Given Up Hope For Cyclops", The New York Times, April 17, 1918.
"Collier Cyclops Is Lost; 293 Persons On Board; Enemy Blow Suspected", The Washington Post, April 15, 1918.
"U.S. Consul Gottschalk Coming To Enter The War", The Washington Post, April 15, 1918.
"Cyclops Skipper Teuton, 'Tis Said", The Washington Post, April 16, 1918.
"Fate Of Ship Baffles", The Washington Post, April 16, 1918.
"Steamer Met Gale On Cyclops' Course", The Washington Post, April 19, 1918.

Carroll A. Deering
"Piracy Suspected In Disappearance Of 3 American Ships", The New York Times, June 21, 1921.
"Bath Owners Skeptical", The New York Times, June 22, 1921.
"Deering Skipper's Wife Caused Investigation", The New York Times, June 22, 1921.
"More Ships Added To Mystery List", The New York Times, June 22, 1921.
"Hunt On For Pirates", The Washington Post, June 21, 1921.
"Comb Seas For Ships", The Washington Post, June 22, 1921.
"Port Of Missing Ships Claims 3000 Yearly", The Washington Post, July 10, 1921.

Wreckers
"'Wreckreation' Was The Name Of The Game That Flourished 100 Years Ago", The New York Times, March 30, 1969.

S.S. Suduffco
"To Search For Missing Freighter", The New York Times, April 11, 1926.
"Abandon Hope For Ship", The New York Times, April 28, 1926.

Star Tiger and Star Ariel
"Hope Wanes in Sea Search For 28 Aboard Lost Airliner", The New York Times, January 31, 1948.
"72 Planes Search Sea For Airliner", The New York Times, January 19, 1949.
DC-3 Airliner NC16002 disappearance
"30-Passenger Airliner Disappears In Flight From San Juan To Miami", The New York Times, December 29, 1948.
"Check Cuba Report Of Missing Airliner", The New York Times, December 30, 1948.
"Airliner Hunt Extended", The New York Times, December 31, 1948.

Harvey Conover and Revonoc
"Search Continuing For Conover Yawl", The New York Times, January 8, 1958.
"Yacht Search Goes On", The New York Times, January 9, 1958.
"Yacht Search Pressed", The New York Times, January 10, 1958.
"Conover Search Called Off", The New York Times, January 15, 1958.

KC-135 Stratotankers
"Second Area Of Debris Found In Hunt For Jets", The New York Times, August 31, 1963.
"Hunt For Tanker Jets Halted", The New York Times, September 3, 1963.
"Planes Debris Found In Jet Tanker Hunt", The Washington Post, August 30, 1963.

B-52 Bomber (Pogo 22)
"U.S.-Canada Test Of Air Defence A Success", The New York Times, October 16, 1961.
"Hunt For Lost B-52 Bomber Pushed In New Area", The New York Times, October 17, 1961.
"Bomber Hunt Pressed", The New York Times, October 18, 1961.
"Bomber Search Continuing", The New York Times, October 19, 1961.
"Hunt For Bomber Ends", The New York Times, October 20, 1961.

Charter vessel Sno'Boy
"Plane Hunting Boat Sights Body In Sea", The New York Times, July 7, 1963.
"Search Abandoned For 40 On Vessel Lost In Caribbean", The New York Times, July 11, 1963.
"Search Continues For Vessel With 55 Aboard In Caribbean", The Washington Post, July 6, 1963.
"Body Found In Search For Fishing Boat", The Washington Post, July 7, 1963.

SS Marine Sulphur Queen
"Tanker Lost In Atlantic; 39 Aboard", The Washington Post, February 9, 1963.
"Debris Sighted In Plane Search For Tanker Missing Off Florida", The New York Times, February 11, 1963.
"2.5 Million Is Asked In Sea Disaster", The Washington Post, February 19, 1963.
"Vanishing Of Ship Ruled A Mystery", The New York Times, April 14, 1964.
"Families Of 39 Lost At Sea Begin $20-Million Suit Here", The New York Times, June 4, 1969.
"10-Year Rift Over Lost Ship Near End", The New York Times, February 4, 1973.

SS Sylvia L. Ossa
"Ship And 37 Vanish In Bermuda Triangle On Voyage To U.S.", The New York Times, October 18, 1976.
"Ship Missing In Bermuda Triangle Now Presumed To Be Lost At Sea", The New York Times, October 19, 1976.
"Distress Signal Heard From American Sailor Missing For 17 Days", The New York Times, October 31, 1976.

Website links

The following websites have either online material that supports the popular version of the Bermuda Triangle, or documents published from official sources as part of hearings or inquiries, such as those conducted by the United States Navy or United States Coast Guard. Copies of some inquiries are not online and may have to be ordered; for example, the losses of Flight 19 or USS Cyclops can be ordered direct from the United States Naval Historical Center.

Text of the February 1964 Argosy Magazine article by Vincent Gaddis
United States Coast Guard database of selected reports and inquiries
U.S. Navy Historical Center Bermuda Triangle FAQ
The Bermuda Triangle: Startling New Secrets, Sci Fi Channel documentary (November 2005)
Navy Historical Center: The Loss Of Flight 19
On losses of heavy ships at sea
Bermuda Shipwrecks
Association of Underwater Explorers shipwreck listings page
Dictionary of American Naval Fighting Ships

Books

Most of the works listed here are largely out of print.
Copies may be obtained at your local library, purchased used at bookstores, or found through eBay or Amazon.com. These books are often the only source material for some of the incidents that have taken place within the Triangle.

Into the Bermuda Triangle: Pursuing the Truth Behind the World's Greatest Mystery by Gian J. Quasar, International Marine/Ragged Mountain Press (2003); contains a list of missing craft as researched in official records. Reprinted in paperback (2005).
The Bermuda Triangle, Charles Berlitz; out of print.
The Bermuda Triangle Mystery Solved (1975), Lawrence David Kusche
Limbo Of The Lost, John Wallace Spencer
The Evidence for the Bermuda Triangle (1984), David Group
The Final Flight (2006), Tony Blackman. This book is a work of fiction.
Bermuda Shipwrecks (2000), Daniel Berg
The Devil's Triangle (1974), Richard Winer; this book sold well over a million copies by the end of its first year; to date there have been at least 17 printings.
The Devil's Triangle 2 (1975), Richard Winer
From the Devil's Triangle to the Devil's Jaw (1977), Richard Winer
Ghost Ships: True Stories of Nautical Nightmares, Hauntings, and Disasters (2000), Richard Winer
The Bermuda Triangle (1975), Adi-Kent Thomas Jeffrey

External links

Updated version of Quasar's Bermuda Triangle information.

Earth mysteries
Geography of Miami
Paranormal triangles
Supernatural legends
Urban legends
https://en.wikipedia.org/wiki/Borough
Borough
A borough is an administrative division in various English-speaking countries. In principle, the term borough designates a self-governing walled town, although in practice, official use of the term varies widely.

History

In the Middle Ages, boroughs were settlements in England that were granted some self-government; burghs were the Scottish equivalent. In medieval England, boroughs were also entitled to elect members of parliament. The use of the word borough probably derives from the burghal system of Alfred the Great. Alfred set up a system of defensive strong points (burhs); in order to maintain these particular settlements, he granted them a degree of autonomy. After the Norman Conquest, when certain towns were granted self-governance, the concept of the burh/borough seems to have been reused to mean a self-governing settlement.

The concept of the borough has been used repeatedly (and often differently) throughout the world. Often, a borough is a single town with its own local government. However, in some cities it is a subdivision of the city (for example, New York City, London, and Montreal). In such cases, the borough will normally have either limited powers delegated to it by the city's local government, or no powers at all. In other places, such as the U.S. state of Alaska, borough designates a whole region; Alaska's largest borough, the North Slope Borough, is comparable in area to the entire United Kingdom, although its population, at around 9,600 inhabitants, is less than that of Swanage on England's south coast. In Australia, a borough was once a self-governing small town, but this designation has all but vanished, except for the only remaining borough in the country, the Borough of Queenscliffe.

Boroughs as administrative units are to be found in Ireland and the United Kingdom, more specifically in England and Northern Ireland. Boroughs also exist in the Canadian province of Quebec and formerly in Ontario, in some states of the United States, in Israel, formerly in New Zealand, and in only one place in Australia.

Etymology

The word borough derives from the Old English word burg, burh, meaning a fortified settlement; the word appears as modern English bury, -brough, Scots burgh, borg in Scandinavian languages, and Burg in German. A number of other European languages have cognate words that were borrowed from the Germanic languages during the Middle Ages, including brog in Irish, bwr or bwrc, meaning "wall, rampart", in Welsh, bourg in French, burg in Catalan (in Catalonia there is a town named Burg), borgo in Italian, burgo in Portuguese and Castilian (hence the place-name Burgos), the -bork of Lębork and Malbork in Polish, and the -bor of Maribor in Slovenian.

The 'burg' element, which means "castle" or "fortress", is often confused with 'berg', meaning "hill" or "mountain" (cf. iceberg, inselberg). Hence the 'berg' element in Bergen or Heidelberg relates to a hill rather than a fort. In some cases, the 'berg' element in place names has converged towards burg/borough; for instance Farnborough, from fernaberga (fern-hill).

Pronunciation

In many parts of England, "borough" is pronounced one way as an independent word and in a reduced form when it is a suffix of a place-name. As a suffix, it is sometimes spelled "-brough". The United States has its own pronunciation of the independent word, and the suffix "-burg(h)" in place-names is pronounced differently again.

Definitions

Australia

In Australia, the term "borough" is an occasionally used term for a local government area.
Currently there is only one borough in Australia, the Borough of Queenscliffe in Victoria, although there have been more in the past. However, in some cases the term can be integrated into the council's name instead of being used as an official title, such as the Municipality of Kingborough in Tasmania.

Canada

In Quebec, the term borough is generally used as the English translation of arrondissement, referring to an administrative division of a municipality, or a district. Eight municipalities are divided into boroughs; see List of boroughs in Quebec. In Ontario, the term was previously used to denote suburban municipalities in Metropolitan Toronto, including Scarborough, York, North York and Etobicoke, prior to their conversions to cities. The Borough of East York was the last Toronto municipality to hold this status, relinquishing it upon becoming part of the City of Toronto government on January 1, 1998.

Colombia

Colombian municipalities are subdivided into boroughs (an English translation of the Spanish term localidades), each with a local executive and an administrative board for local government. These boroughs are divided into neighborhoods. The principal cities have such subdivisions, including Soacha in the Bogotá area, and Bello, La Estrella, Sabaneta, Envigado and Itagüí in the Medellín area.

Ireland

There are four borough districts designated by the Local Government Reform Act 2014: Clonmel, Drogheda, Sligo, and Wexford. A local boundary review reporting in 2018 proposed granting borough status to any district containing a census town with a population over 30,000; this would have included the towns of Dundalk, Bray, and Navan. This requires an amendment to the 2014 act, promised for 2019 by minister John Paul Phelan.

Historically, there were 117 parliamentary boroughs in the Irish House of Commons, of which 80 were disfranchised by the Acts of Union 1800 and all but 11 abolished under the Municipal Corporations (Ireland) Act 1840. Under the Local Government (Ireland) Act 1898, six of these became county boroughs: Dublin, Belfast, Cork, Derry, Limerick and Waterford. From 1921, Belfast and Derry were part of Northern Ireland and stayed within the United Kingdom on the establishment of the Irish Free State in 1922. Galway was a borough from 1937 until upgraded to a county borough in 1985. The county boroughs in the Republic of Ireland were redesignated as "cities" under the Local Government Act 2001. Dún Laoghaire was a borough from 1930 until merged into Dún Laoghaire–Rathdown county in 1994. There were five borough councils in place at the time of the Local Government Reform Act 2014, which abolished all second-tier local government units of borough and town councils. Each local government authority outside of Dublin, Cork City and Galway City was divided into areas termed municipal districts. In four of the areas which had previously contained borough councils, as listed above, these were instead termed borough districts. Kilkenny had previously had a borough council, but its district was to be called the Municipal District of Kilkenny City, in recognition of its historic city status.

Israel

Under Israeli law, inherited from British Mandate municipal law, the possibility of creating a municipal borough exists.
However, no borough was actually created under law until 2005–2006, when Neve Monosson and Maccabim-Re'ut, both communal settlements (Heb: yishuv kehilati) founded in 1953 and 1984, respectively, were declared to be autonomous municipal boroughs (Heb: vaad rova ironi) within their mergers with the towns of Yehud and Modi'in. Similar structures have been created under different types of legal status over the years in Israel, notably Kiryat Haim in Haifa, Jaffa in Tel Aviv-Yafo, and Ramot and Gilo in Jerusalem. However, Neve Monosson is the first example of a full municipal borough actually declared under law by the Minister of the Interior, under a model subsequently adopted in Maccabim-Re'ut as well. It is the declared intention of the Interior Ministry to use the borough mechanism to facilitate municipal mergers in Israel, after a wide-reaching merger plan in 2003, which in general ignored the sensitivities of the communal settlements, largely failed.

Mexico

In Mexico, the word borough has been used as the English translation of delegación (delegation), referring to the 16 administrative areas of Mexico City, now called alcaldías (see Boroughs of Mexico and Boroughs of Mexico City).

Netherlands

In the Netherlands, the municipalities of Rotterdam and Amsterdam were divided into administrative boroughs, or deelgemeenten, which had their own borough council and a borough mayor. Other large cities are usually divided into districts, or stadsdelen, for census purposes. The deelgemeenten were abolished in 2014.

New Zealand

New Zealand formerly used the term borough to designate self-governing towns of more than 1,000 people, although 19th-century census records show many boroughs with populations as low as 200. A borough of more than 20,000 people could become a city by proclamation. Boroughs and cities were collectively known as municipalities, and were enclaves separate from their surrounding counties. Boroughs proliferated in the suburban areas of the larger cities: by the 1980s there were 19 boroughs and three cities in the area that is now the City of Auckland.

In the 1980s, some boroughs and cities began to be merged with their surrounding counties to form districts with a mixed urban and rural population. A nationwide reform of local government in 1989 completed the process. Counties and boroughs were abolished and all boundaries were redrawn. Under the new system, most territorial authorities cover both urban and rural land. The more populated councils are classified as cities, and the more rural councils are classified as districts. Only Kawerau District, an enclave within Whakatāne District, continues to follow the tradition of a small town council that does not include surrounding rural area.

Trinidad and Tobago

In Trinidad and Tobago, a borough is a unit of local government. There are three boroughs in the Republic of Trinidad and Tobago:

Chaguanas
Arima
Point Fortin

United Kingdom

England and Wales

Ancient and municipal boroughs

During the medieval period many towns were granted self-governance by the Crown, at which point they became referred to as boroughs. The formal status of borough came to be conferred by royal charter. These boroughs were generally governed by a self-selecting corporation (i.e., when a member died or resigned, his replacement would be by co-option). Sometimes boroughs were governed by bailiffs.
Debates on the Reform Bill (eventually the Reform Act 1832) lamented the diversity of governance among such town corporations, and a Royal Commission was set up to investigate this. This resulted in a regularisation of municipal government by the Municipal Corporations Act 1835. 178 of the ancient boroughs were re-formed as municipal boroughs, with all municipal corporations to be elected according to a standard franchise based on property ownership. The unreformed boroughs lapsed in borough status, or were reformed (or abolished) later. Several new municipal boroughs were formed in the new industrial cities after the bill was enacted, under its provisions.

As part of a large-scale reform of local government in England and Wales in 1974, municipal boroughs were finally abolished (having become increasingly irrelevant). However, the civic traditions of many were continued by the grant of a charter to their successor district councils. For the smallest boroughs, a town council was formed for the equivalent area, while charter trustees were formed for a few others. A successor body is allowed to use the regalia of the old corporation, and to appoint ceremonial office holders such as sword and mace bearers as provided in their original charters. The council, or trustees, may apply for an Order in Council or Royal Licence to use the coat of arms.

Parliamentary boroughs

From 1265, two burgesses from each borough were summoned to the Parliament of England, alongside two knights from each county. Thus parliamentary constituencies were derived from the ancient boroughs. Representation in the House of Commons was decided by the House itself, which resulted in boroughs being established in some small settlements for the purposes of parliamentary representation, despite their possessing no actual corporation. After the 1832 Reform Act, which disenfranchised many of the rotten boroughs (boroughs that had declined in importance, had only a small population, and had only a handful of eligible voters), parliamentary constituencies began to diverge from the ancient boroughs. While many ancient boroughs remained as municipal boroughs, they were disenfranchised by the Reform Act.

County boroughs

The Local Government Act 1888 established a new sort of borough, the county borough. These were designed to be 'counties-to-themselves': administrative divisions to sit alongside the new administrative counties. They allowed urban areas to be administered separately from the more rural areas. They therefore often contained pre-existing municipal boroughs, which thereafter became part of the second tier of local government, below the administrative counties and county boroughs. The county boroughs were, like the municipal boroughs, abolished in 1974, being reabsorbed into their parent counties for administrative purposes.

Metropolitan boroughs

In 1899, as part of a reform of local government in the County of London, the various parishes in London were reorganised as new entities, the 'metropolitan boroughs'. These were reorganised further when Greater London was formed out of Middlesex, parts of Surrey, Kent, Essex, Hertfordshire and the County of London in 1965. These council areas are now referred to as "London boroughs" rather than "metropolitan boroughs".
When the new metropolitan counties (Greater Manchester, Merseyside, South Yorkshire, Tyne and Wear, West Midlands, and West Yorkshire) were created in 1974, their sub-divisions also became metropolitan boroughs in many, but not all, cases; in many cases these metropolitan boroughs recapitulated abolished county boroughs (for example, Stockport). The metropolitan boroughs possessed slightly more autonomy from the metropolitan county councils than the shire county districts did from their county councils. With the abolition of the metropolitan county councils in 1986, these metropolitan boroughs became independent, and continue to be so at present.

Other current uses

Elsewhere in England a number of districts and unitary authority areas are called "borough". Until 1974, this was a status that denoted towns with a certain type of local government (a municipal corporation, or a self-governing body). Since 1974, it has been a purely ceremonial style granted by royal charter to districts which may consist of a single town or may include a number of towns or rural areas. Borough status entitles the council chairman to bear the title of mayor. Districts may apply to the British Crown for the grant of borough status upon advice of the Privy Council of the United Kingdom.

Northern Ireland

In Northern Ireland, local government was reorganised in 1973. Under the legislation that created the 26 districts of Northern Ireland, a district council whose area included an existing municipal borough could resolve to adopt the charter of the old municipality and thus continue to enjoy borough status. Districts that do not contain a former borough can apply for a charter in a similar manner to English districts.

Scotland

In Scotland, the equivalent unit is the burgh (see Burgh and List of burghs in Scotland).

United States

In the United States, a borough is a unit of local government or other administrative division below the level of the state. The term is currently used in seven states. The following states use, or have used, the word with the following meanings:

Alaska, as a county-equivalent (see List of boroughs and census areas in Alaska)
Connecticut, as an incorporated municipality within, or consolidated with, a town (see Borough (Connecticut))
Michigan, formerly applied to a village in the midst of forming a city. Also in Michigan is Mackinac Island, which was a borough from 1817 to 1847, when it became a village; it has been a city since 1899.
New Jersey, as a type of independent incorporated municipality (see Borough (New Jersey))
New York, as one of the five divisions of New York City, each coextensive with a county (see Boroughs of New York City)
Pennsylvania, as a type of municipality comparable to a town (see Borough (Pennsylvania))
Virginia, as a division of a city under certain circumstances
Wisconsin, which in the 19th century occasionally used the term "borough" for the type of civil township normally known as a town

See also

History of local government in England
Borough status in the United Kingdom
Boroughs incorporated in England and Wales 1835–1882 and 1882–1974
Burgh and List of burghs in Scotland
County borough
Ancient borough
Metropolitan borough
Municipal borough
Boroughs in New York City
Borough-English, a form of inheritance associated with the English boroughs

References

Citations

Sources

External links

Local government in Canada
Types of subdivision in the United Kingdom
Types of populated places
Types of administrative division
English words
https://en.wikipedia.org/wiki/Bodmin%20Moor
Bodmin Moor
Bodmin Moor is a granite moorland in north-eastern Cornwall, England, dating from the Carboniferous period of geological history. It includes Brown Willy, the highest point in Cornwall, and Rough Tor, a slightly lower peak. Many of Cornwall's rivers have their sources here. It has been inhabited since at least the Neolithic era, when primitive farmers started clearing trees and farming the land. They left their megalithic monuments, hut circles and cairns, and the Bronze Age culture that followed left further cairns, and more stone circles and stone rows. By medieval and modern times, nearly all the forest was gone and livestock rearing predominated.

The name Bodmin Moor is relatively recent. An early mention is in the Royal Cornwall Gazette of 28 November 1812. The upland area was formerly known as Fowey Moor after the River Fowey, which rises within it.

Geology

Bodmin Moor is one of five granite plutons in Cornwall that make up part of the Cornubian batholith. The intrusion dates from the Cisuralian epoch, the earliest part of the Permian period, and outcrops across about 190 square km. Around the pluton's margins where it intruded into slates, the country rock has been hornfelsed. Numerous peat deposits occur across the moor, whilst large areas are characterised by blockfields of granite boulders; both deposits are of Holocene age (see also Geology of Cornwall).

Geography

Dramatic granite tors rise from the rolling moorland: the best known are Brown Willy, the highest point in Cornwall, and Rough Tor. To the south-east, Kilmar Tor and Caradon Hill are the most prominent hills. Considerable areas of the moor are poorly drained and form marshes (in hot summers these can dry out). The rest of the moor is mostly rough pasture or covered with heather and other low vegetation.

The moor contains about 500 holdings with around 10,000 beef cows, 55,000 breeding ewes and 1,000 horses and ponies. Most of the moor is a Site of Special Scientific Interest (SSSI), Bodmin Moor, North, and has been designated an Area of Outstanding Natural Beauty (AONB), as part of Cornwall AONB. The moor has been identified by BirdLife International as an Important Bird Area (IBA) because it supports about 260 breeding pairs of European stonechats as well as a wintering population of 10,000 Eurasian golden plovers. The moor has also been recognised as a separate natural region and designated as national character area 153 by Natural England.

Rivers and inland waters

Bodmin Moor is the source of several of Cornwall's rivers; they are mentioned here anti-clockwise from the south. The River Fowey rises high on the moor and flows through Lostwithiel and into the Fowey estuary. The River Tiddy rises near Pensilva and flows southeast to its confluence with the River Lynher (the Lynher flows generally south-east until it joins the Hamoaze near Plymouth). The River Inny rises near Davidstow and flows southeast to its confluence with the River Tamar. The River Camel rises on Hendraburnick Down and flows onward to join the sea at Padstow. The River Camel and its tributary the De Lank River are an important habitat for the otter, and both have been proposed as Special Areas of Conservation (SACs). The De Lank River rises near Rough Tor and flows along an irregular course before joining the Camel south of Wenford. The River Warleggan rises near Temple and flows south to join the Fowey. On the southern slopes of the moor lies Dozmary Pool.
It is Cornwall's only natural inland lake and is glacial in origin. In the 20th century, three reservoirs were constructed on the moor: Colliford Lake, Siblyback Lake and Crowdy Reservoir, which supply water for a large part of the county's population. Various species of waterfowl are resident around these waters.

Parishes

The moor is covered by a number of civil parishes.

History and antiquities

Prehistoric times

10,000 years ago, in the Mesolithic period, hunter-gatherers wandered the area when it was wooded. There are several documented cases of flint scatters being discovered by archaeologists, indicating that these hunter-gatherers practised flint knapping in the region.

During the Neolithic era, from about 4,500 to 2,300 BC, people began clearing trees and farming the land. It was also in this era that the production of various megalithic monuments began, predominantly long cairns (three of which have currently been identified, at Louden, Catshole and Bearah) and stone circles (sixteen of which have been identified). It is also likely that the naturally forming tors were viewed in a similar manner to the man-made ceremonial sites.

In the following Bronze Age, the creation of monuments increased dramatically, with the production of over 300 further cairns, and more stone circles and stone rows. More than 200 Bronze Age settlements with enclosures and field patterns have been recorded, and many prehistoric stone barrows and circles lie scattered across the moor. In the late 1990s, a team of archaeologists and anthropologists from UCL (Barbara Bender, Sue Hamilton, Christopher Tilley and students) researched the Bronze Age landscapes of Leskernick over several seasons. In a programme shown in 2007, Channel 4's Time Team investigated a 500-metre cairn and the site of a Bronze Age village on the slopes of Rough Tor. King Arthur's Hall, thought to be a late Neolithic or early Bronze Age ceremonial site, can be found to the east of St Breward on the moor.

Medieval and modern times

Where practicable, areas of the moor were used for pasture by herdsmen from the parishes surrounding the moor. Granite boulders were also taken from the moor and used for stone posts and, to a certain extent, for building (such material is known as moorstone). Granite quarrying only became reasonably productive when gunpowder became available. The moor gave its name (Foweymore) to one of the medieval districts called stannaries which administered tin mining; the boundaries of these were never defined precisely. Until the establishment of a turnpike road through the moor (the present A30) in the 1770s, the size of the moorland area made travel within Cornwall very difficult. Its Cornish name, Goen Bren, is first recorded in the 12th century.

English Heritage monographs "Bodmin Moor: An Archaeological Survey" Volume 1 and Volume 2, covering the post-medieval and modern landscape, are publicly available through the Archaeology Data Service. Jamaica Inn is a traditional inn on the moor. Built as a coaching inn in 1750 and having an association with smuggling, it was used as a staging post for changing horses.

In 1988, the water supply serving Camelford was accidentally contaminated. Many people suffered health problems afterwards, and some deaths have been linked to the incident.

Monuments and ruins

Roughtor was the site of a medieval chapel of St Michael and is now designated as a memorial to the 43rd Wessex Division of the British Army. In 1844, the body of 18-year-old Charlotte Dymond was discovered on Bodmin Moor.
Local labourer Matthew Weeks was accused of the murder, and at noon on 12 August 1844 he was led from Bodmin Gaol and hanged. The murder site now has a monument erected with public money, and her grave is at Davidstow churchyard.

Legends and traditions

Dozmary Pool is identified by some people with the lake in which, according to Arthurian legend, Sir Bedivere threw Excalibur to the Lady of the Lake. Another legend relating to the pool concerns Jan Tregeagle. The Beast of Bodmin has been reported many times but never identified with certainty.

Film

Cornish Cowboy, a 2014 short documentary film screened at the 2015 Cannes Film Festival, was shot on Bodmin Moor. The film features the work of St Neot horse trainer Dan Wilson.

See also

List of topics related to Cornwall
Brown Willy effect
Jamaica Inn (novel)

References

Weatherhill, Craig (1995). Cornish Place Names & Language. Wilmslow: Sigma Leisure.

External links

Cornwall AONB

Hills of Cornwall
Important Bird Areas of England
Locations associated with Arthurian legend
Moorlands of Cornwall
Natural regions of England
Sites of Special Scientific Interest in Cornwall
Sites of Special Scientific Interest notified in 1951
https://en.wikipedia.org/wiki/Bengal
Bengal
Bengal is a historical geographical, ethnolinguistic and cultural term referring to the eastern part of the Indian subcontinent at the apex of the Bay of Bengal. The region of Bengal proper is divided between modern-day Bangladesh and the Indian state of West Bengal. The Indian state of Tripura and the Barak Valley in the Indian state of Assam are also considered part of the Bengali cultural region. The administrative jurisdiction of Bengal historically extended beyond the territory of Bengal proper. Bengal ceased to be a single unit after the partition of India in 1947.

Various Indo-Aryan, Dravidian, Austric and other peoples have inhabited the region since antiquity. The ancient Vanga Kingdom is widely regarded as the namesake of the Bengal region. The Bengali calendar dates back to the reign of Shashanka in the 7th century. The Pala Empire was founded in Bengal during the 8th century. The Sena dynasty ruled between the 11th and 13th centuries. By the 14th century, Bengal was absorbed by Muslim conquests in the Indian subcontinent. An independent Bengal Sultanate was formed and became the eastern frontier of the Islamic world. During this period, Bengal's rule and influence spread to Assam, Arakan, Tripura, Bihar, and Orissa. Mughal Bengal later emerged as a prosperous part of the Mughal Empire. The last independent Nawab of Bengal was defeated in 1757 at the Battle of Plassey by the British East India Company. The company's Bengal Presidency grew into the largest administrative unit of British India, with Calcutta as the capital of India. At its peak, the presidency stretched from Burma, Penang, Singapore and Malacca in the east to the Punjab and the Ceded and Conquered Provinces in the west. Bengal was gradually re-organized by the early 20th century. As a result of the first partition of Bengal, a short-lived province called Eastern Bengal and Assam existed between 1905 and 1911, with its capital in the former Mughal capital Dhaka. Following the Sylhet referendum and votes by the Bengal Legislative Council and Bengal Legislative Assembly, the region was again divided along religious lines in 1947.

Etymology

The name of Bengal is derived from the ancient kingdom of Banga (pronounced Bôngô), the earliest records of which date back to the Mahabharata epic in the first millennium BCE. The reference to 'Vangalam' is present in an inscription in the Vrihadeshwara temple at Tanjore, which is perhaps the earliest reference to Bengal as such. Theories on the origin of the term Banga point to Dravidian tribes, later known as the Bang, that settled in the area circa 1000 BCE, and to the Austric word Bong (Sun-god). The term Vangaladesa is used to describe the region in 11th-century South Indian records. The modern term Bangla is prominent from the 14th century, which saw the establishment of the Sultanate of Bengal, whose first ruler, Shamsuddin Ilyas Shah, was known as the Shah of Bangala. The Portuguese referred to the region as Bengala in the Age of Discovery.

History

Antiquity

Neolithic sites have been found in several parts of the region. In the second millennium BCE, rice-cultivating communities dotted the region. By the eleventh century BCE, people in Bengal lived in systematically aligned homes, produced copper objects, and crafted black and red pottery. Remnants of Copper Age settlements are located in the region. At the advent of the Iron Age, people in Bengal adopted iron-based weapons, tools and irrigation equipment.
From 600 BCE, the second wave of urbanisation engulfed the north Indian subcontinent as part of the Northern Black Polished Ware culture. Cities emerged at Mahasthangarh, Chandraketugarh and Wari-Bateshwar. The Ganges, Brahmaputra and Meghna rivers were natural arteries for communication and transportation. Estuaries on the Bay of Bengal allowed for maritime trade with distant lands in Southeast Asia and elsewhere. The ancient geopolitical divisions of Bengal included Varendra, Suhma, Anga, Vanga, Samatata and Harikela. These regions were often independent or under the rule of larger empires. The Mahasthan Brahmi Inscription indicates that Bengal was ruled by the Mauryan Empire in the 3rd century BCE; the inscription was an administrative order instructing relief for a distressed segment of the population. Punch-marked coins found in the region indicate that coins were used as currency during the Iron Age.

The namesake of Bengal is the ancient Vanga Kingdom, which was reputed as a naval power with overseas colonies. A prince from Bengal named Vijaya founded the first kingdom in Sri Lanka. The two most prominent pan-Indian empires of this period included the Mauryans and the Gupta Empire. The region was a center of artistic, political, social, spiritual and scientific thinking, including the invention of chess, Indian numerals, and the concept of zero. The region was known to the ancient Greeks and Romans as Gangaridai. The Greek ambassador Megasthenes chronicled its military strength and dominance of the Ganges delta. The invasion army of Alexander the Great was deterred by accounts of Gangaridai's power in 325 BCE, including its force of war elephants. Later Roman accounts noted maritime trade routes with Bengal. 1st-century Roman coins with images of Hercules were found in the region and point to trade links with Roman Egypt through the Red Sea. The Wari-Bateshwar ruins are believed to be the emporium (trading center) of Sounagoura mentioned by the Roman geographer Claudius Ptolemy. A Roman amphora, made in Aelana (present-day Aqaba, Jordan) between the 4th and 7th centuries AD, was found in the Purba Medinipur district of West Bengal.

The first unified Bengali polity can be traced to the reign of Shashanka, to which the origins of the Bengali calendar can also be traced. Shashanka founded the Gauda Kingdom. After Shashanka's death, Bengal experienced a period of civil war known as Matsyanyayam. The ancient city of Gauda later gave birth to the Pala Empire. The first Pala emperor, Gopala I, was chosen by an assembly of chieftains in Gauda. The Pala kingdom grew into one of the largest empires in the Indian subcontinent. The Pala period saw advances in linguistics, sculpture, painting, and education. The empire achieved its greatest territorial extent under Dharmapala and Devapala. The Palas vied for control of Kannauj with the rival Gurjara-Pratihara and Rashtrakuta dynasties. Pala influence also extended to Tibet and Sumatra due to the travels and preachings of Atisa. The university of Nalanda flourished under Pala patronage, and the Palas built the Somapura Mahavihara, which was the largest monastic institution in the subcontinent. The rule of the Palas eventually disintegrated. The Chandra dynasty ruled southeastern Bengal and Arakan. The Varman dynasty ruled parts of northeastern Bengal and Assam. The Sena dynasty emerged as the main successor of the Palas by the 11th century. The Senas were a resurgent Hindu dynasty which ruled much of Bengal.
The smaller Deva dynasty also ruled parts of the region. Ancient Chinese visitors like Xuanzang provided elaborate accounts of Bengal's cities and monastic institutions. Muslim trade with Bengal flourished after the fall of the Sasanian Empire and the Arab takeover of Persian trade routes. Much of this trade occurred with southeastern Bengal in areas east of the Meghna River. Bengal was probably used as a transit route to China by the earliest Muslims. Abbasid coins have been discovered in the archaeological ruins of Paharpur and Mainamati. A collection of Sasanian, Umayyad and Abbasid coins is preserved in the Bangladesh National Museum.

Sultanate period

In 1204, the Ghurid general Muhammad bin Bakhtiyar Khalji began the Islamic conquest of Bengal. The fall of Lakhnauti, the capital of the Sena dynasty, was recounted by historians circa 1243. According to historical accounts, Ghurid cavalry swept across the Gangetic plains towards Bengal. They entered the Bengali capital disguised as horse traders. Once inside the royal compound, Bakhtiyar and his horsemen swiftly overpowered the guards of the Sena king, who had just sat down to eat a meal. The king then hastily fled to the forest with his followers. The overthrow of the Sena king has been described as a coup d'état, which "inaugurated an era, lasting over five centuries, during which most of Bengal was dominated by rulers professing the Islamic faith. In itself this was not exceptional, since from about this time until the eighteenth century, Muslim sovereigns ruled over most of the Indian subcontinent. What was exceptional, however, was that among India's interior provinces only in Bengal—a region approximately the size of England and Scotland combined—did a majority of the indigenous population adopt the religion of the ruling class, Islam".

Bengal became a province of the Delhi Sultanate. A coin featuring a horseman was issued to celebrate the Muslim conquest of Lakhnauti, with inscriptions in Sanskrit and Arabic. An abortive Islamic invasion of Tibet was also mounted by Bakhtiyar. Bengal was under the formal rule of the Delhi Sultanate for approximately 150 years. Delhi struggled to consolidate control over Bengal, and rebel governors often sought to assert autonomy or independence. Sultan Iltutmish re-established control over Bengal in 1225 after suppressing the rebels. Due to the considerable overland distance, Delhi's authority in Bengal was relatively weak. It was left to local governors to expand territory and bring new areas under Muslim rule, such as through the Conquest of Sylhet in 1303. In 1338, new rebellions sprang up in Bengal's three main towns, and the governors in Lakhnauti, Satgaon and Sonargaon declared independence from Delhi. This allowed the ruler of Sonargaon, Fakhruddin Mubarak Shah, to annex Chittagong to the Islamic administration. By 1352, the ruler of Satgaon, Shamsuddin Ilyas Shah, had unified the region into an independent state. Ilyas Shah established his capital in Pandua. The new breakaway state emerged as the Bengal Sultanate, which developed into a territorial, mercantile and maritime empire. At the time, the Islamic world stretched from Muslim Spain in the west to Bengal in the east. Ilyas Shah's initial raids saw the first Muslim army enter Nepal, and his campaigns stretched from Varanasi in the west to Orissa in the south and Assam in the east. The Delhi army continued to fend off the new Bengali army, and the Bengal–Delhi War ended in 1359 when Delhi recognized the independence of Bengal.
Ilyas Shah's son Sikandar Shah defeated Delhi Sultan Firuz Shah Tughluq during the Siege of Ekdala Fort. A subsequent peace treaty recognized Bengal's independence, and Sikandar Shah was gifted a golden crown by the Sultan of Delhi. The ruler of Arakan sought refuge in Bengal during the reign of Ghiyasuddin Azam Shah. Jalaluddin Muhammad Shah later helped the Arakanese king to regain his throne in exchange for Arakan becoming a tributary state of the Bengal Sultanate, and Bengali influence in Arakan persisted for 300 years. Bengal also helped the king of Tripura regain his throne on similar tributary terms. The ruler of the Jaunpur Sultanate likewise sought refuge in Bengal. The vassal states of Bengal included Arakan, Tripura, Chandradwip and Pratapgarh. At its peak, the Bengal Sultanate's territory included parts of Arakan, Assam, Bihar, Orissa, and Tripura. The sultanate experienced its greatest military success under Alauddin Hussain Shah, who was proclaimed conqueror of Assam after his forces, led by Shah Ismail Ghazi, overthrew the Khen dynasty and annexed large parts of Assam.

In maritime trade, the Bengal Sultanate benefited from Indian Ocean trade networks and emerged as a hub of re-exports. A giraffe was brought by African envoys from Malindi to Bengal's court and was later gifted to Imperial China. Ship-owning merchants acted as envoys of the Sultan while travelling to different regions in Asia and Africa. Many rich Bengali merchants lived in Malacca, and Bengali ships transported embassies from Brunei, Aceh and Malacca to China. Bengal and the Maldives had a vast trade in shell currency. The Sultan of Bengal donated funds to build schools in the Hejaz region of Arabia.

The five dynastic periods of the Bengal Sultanate spanned from the Ilyas Shahi dynasty, to a period of rule by Bengali converts, to a period of rule by Abyssinian usurpers, to the Hussain Shahi dynasty; then an interruption by the Suri dynasty; and ended with the Karrani dynasty. The Battle of Raj Mahal and the capture of Daud Khan Karrani marked the end of the Bengal Sultanate during the reign of the Mughal Emperor Akbar. In the late 16th century, a confederation called the Baro-Bhuyan resisted Mughal invasions in eastern Bengal. The Baro-Bhuyan comprised twelve Muslim and Hindu leaders of the Zamindars of Bengal, led by Isa Khan, a former prime minister of the Bengal Sultanate. By the 17th century, the Mughals had fully absorbed the region into their empire.

Mughal period

Mughal Bengal had the richest elite and was the wealthiest region in the subcontinent. Bengal's trade and wealth impressed the Mughals so much that the Mughal emperors described it as the Paradise of the Nations. A new provincial capital was built in Dhaka. Members of the imperial family were appointed to positions in Mughal Bengal, including the position of governor (subedar), and Dhaka became a center of palace intrigue and politics. Some of the most prominent governors included the Rajput general Man Singh I, Emperor Shah Jahan's son Prince Shah Shuja, Emperor Aurangzeb's son and later Mughal emperor Azam Shah, and the influential aristocrat Shaista Khan. During the tenure of Shaista Khan, the Portuguese and Arakanese were expelled from the port of Chittagong in 1666. Bengal became the eastern frontier of the Mughal administration, and by the 18th century it was home to a semi-independent aristocracy led by the Nawabs of Bengal.
Bengal's premier, Murshid Quli Khan, managed to curtail the influence of the governor owing to his rivalry with Prince Azam Shah, and he controlled Bengal's finances since he was in charge of the treasury. He shifted the provincial capital from Dhaka to Murshidabad. In 1717, the Mughal court in Delhi recognized the hereditary monarchy of the Nawab of Bengal. The ruler was officially titled the "Nawab of Bengal, Bihar and Orissa", as the Nawab ruled over the three regions in the eastern subcontinent. The Nawabs began issuing their own coins but continued to pledge nominal allegiance to the Mughal emperor. The wealth of Bengal was vital for the Mughal court, because Delhi received its biggest share of revenue from the Nawab's court. The Nawabs presided over a period of unprecedented economic growth and prosperity, with growing organization in textiles and banking, a military-industrial complex, the production of fine-quality handicrafts, and other trades; a process of proto-industrialisation was underway. Under the Nawabs, the streets of Bengali cities were filled with brokers, workers, peons, naibs, wakils, and ordinary traders. The Nawab's state was a major exporter of Bengal muslin, silk, gunpowder and saltpetre. The Nawabs also permitted European trading companies to operate in Bengal, including the British East India Company, the French East India Company, the Danish East India Company, the Austrian East India Company, the Ostend Company, and the Dutch East India Company, though they remained suspicious of the companies' growing influence.

Under Mughal rule, Bengal was a center of the worldwide muslin and silk trades. During the Mughal era, the most important center of cotton production was Bengal, particularly around its capital city of Dhaka, leading to muslin being called "daka" in distant markets such as Central Asia. Domestically, much of India depended on Bengali products such as rice, silks and cotton textiles. Overseas, Europeans depended on Bengali products such as cotton textiles, silks and opium; Bengal accounted for 40% of Dutch imports from Asia, for example, including more than 50% of textiles and around 80% of silks. From Bengal, saltpetre was shipped to Europe, opium was sold in Indonesia, raw silk was exported to Japan and the Netherlands, cotton and silk textiles were exported to Europe, Indonesia and Japan, and cotton cloth was exported to the Americas and around the Indian Ocean. Bengal also had a large shipbuilding industry: in terms of shipbuilding tonnage during the 16th–18th centuries, economic historian Indrajit Ray estimates the annual output of Bengal at 223,250 tons, compared with 23,061 tons produced in nineteen colonies in North America from 1769 to 1771.

From the 16th century, European traders traversed the sea routes to Bengal, following the Portuguese conquests of Malacca and Goa. The Portuguese established a settlement in Chittagong with permission from the Bengal Sultanate in 1528, but were expelled by the Mughals in 1666. In the 18th century, the Mughal court rapidly disintegrated due to Nader Shah's invasion and internal rebellions, allowing European colonial powers to set up trading posts across the territory. The British East India Company eventually emerged as the foremost military power in the region, and defeated the last independent Nawab of Bengal at the Battle of Plassey in 1757.
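To put the shipbuilding estimate quoted earlier in this section in perspective, the following minimal sketch computes the ratio between Ray's figures for Bengal and the North American colonies; the numbers are simply those quoted above, and the calculation is illustrative rather than part of the source:

```python
# Comparing the shipbuilding estimates quoted in this section.
# Figures are Indrajit Ray's estimates as given in the text above.

bengal_annual_tons = 223_250      # Bengal, 16th-18th centuries (estimated)
colonies_annual_tons = 23_061     # nineteen North American colonies, 1769-1771

ratio = bengal_annual_tons / colonies_annual_tons
print(f"Bengal's estimated annual output was about {ratio:.1f}x larger.")
# Output: Bengal's estimated annual output was about 9.7x larger.
```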
Colonial era (1757–1947)

In Bengal, effective political and military power was transferred from the old regime to the British East India Company around 1757–65. Company rule in India began under the Bengal Presidency, and Calcutta was named the capital of British India in 1772. The presidency was run by a military-civil administration, including the Bengal Army, and had the world's sixth earliest railway network. The Governor of Bengal was concurrently the Viceroy of India for many years. Great famines struck Bengal several times during colonial rule, notably the Great Bengal famine of 1770 and the Bengal famine of 1943. Under British rule, Bengal experienced the deindustrialisation of its pre-colonial economy; Company policies led to the deindustrialisation of its textile industry in particular. The capital amassed by the East India Company in Bengal was invested in the emerging Industrial Revolution in Great Britain, in industries such as textile manufacturing. Economic mismanagement, alongside drought and a smallpox epidemic, directly led to the Great Bengal famine of 1770, which is estimated to have caused the deaths of between 1 million and 10 million people.

In 1862, the Bengal Legislative Council was set up as the first modern legislature in India. Elected representation was gradually introduced during the early 20th century, including with the Morley-Minto reforms and the system of dyarchy. In 1937, the council became the upper chamber of the Bengali legislature when the Bengal Legislative Assembly was created. Between 1937 and 1947, the chief executive of the government was the Prime Minister of Bengal.

The Bengal Presidency was the largest administrative unit in the British Empire. At its height, it covered large parts of present-day India, Pakistan, Bangladesh, Burma, Malaysia, and Singapore. In 1830, the British Straits Settlements on the coast of the Malacca Straits were made a residency of Bengal; the area included the erstwhile Prince of Wales Island, Province Wellesley, Malacca and Singapore. In 1867, Penang, Singapore and Malacca were separated from Bengal into the Straits Settlements. British Burma became a province of India, and later a Crown colony in its own right. Western areas, including the Ceded and Conquered Provinces and the Punjab, were further reorganized, and northeastern areas became Colonial Assam.

In 1876, about 200,000 people were killed in the Barisal region of Bengal by the Great Backerganj Cyclone. About 50 million people died in Bengal in the massive plague outbreaks and famines that occurred between 1895 and 1920, mostly in western Bengal. The Indian Rebellion of 1857 was initiated on the outskirts of Calcutta and spread to Dhaka, Chittagong, Jalpaiguri, Sylhet and Agartala, in solidarity with revolts in North India. The failure of the rebellion led to the abolition of Company rule in India and the establishment of direct British rule, commonly referred to as the British Raj. The late 19th and early 20th century Bengal Renaissance had a great impact on the cultural and economic life of Bengal and started a great advance in its literature and science. Between 1905 and 1911, an abortive attempt was made to divide the province of Bengal into two: Bengal proper and the short-lived province of Eastern Bengal and Assam, where the All India Muslim League was founded. In 1911, the Bengali poet and polymath Rabindranath Tagore became Asia's first Nobel laureate when he won the Nobel Prize in Literature.
Bengal played a major role in the Indian independence movement, in which revolutionary groups were dominant. Armed attempts to overthrow the British Raj began with the rebellion of Titumir and reached a climax when Subhas Chandra Bose led the Indian National Army against the British. Bengal was also central to the rising political awareness of the Muslim population—the All-India Muslim League was established in Dhaka in 1906. The Muslim homeland movement pushed for a sovereign state in eastern India with the Lahore Resolution of 1940. Hindu nationalism was also strong in Bengal, which was home to groups like the Hindu Mahasabha. In spite of a last-ditch effort by the politicians Huseyn Shaheed Suhrawardy and Sarat Chandra Bose to form a United Bengal, when India gained independence in 1947, Bengal was partitioned along religious lines. The western part joined India (and was named West Bengal), while the eastern part joined Pakistan as a province called East Bengal (later renamed East Pakistan, giving rise to Bangladesh in 1971). The circumstances of partition were bloody, with widespread religious riots in Bengal.

Partition of Bengal (1947)

On 27 April 1947, the last Prime Minister of Bengal, Huseyn Shaheed Suhrawardy, held a press conference in New Delhi where he outlined his vision for an independent Bengal. Suhrawardy said: "Let us pause for a moment to consider what Bengal can be if it remains united. It will be a great country, indeed the richest and the most prosperous in India capable of giving to its people a high standard of living, where a great people will be able to rise to the fullest height of their stature, a land that will truly be plentiful. It will be rich in agriculture, rich in industry and commerce and in course of time it will be one of the powerful and progressive states of the world. If Bengal remains united this will be no dream, no fantasy". On 2 June 1947, British Prime Minister Clement Attlee told the US Ambassador to the United Kingdom that there was a "distinct possibility Bengal might decide against partition and against joining either Hindustan or Pakistan". On 3 June 1947, the Mountbatten Plan outlined the partition of British India. On 20 June, the Bengal Legislative Assembly met to decide on the partition of Bengal. At the preliminary joint meeting, it was decided (120 votes to 90) that if the province remained united, it should join the Constituent Assembly of Pakistan. At a separate meeting of legislators from West Bengal, it was decided (58 votes to 21) that the province should be partitioned and West Bengal should join the Constituent Assembly of India. At another meeting of legislators from East Bengal, it was decided (106 votes to 35) that the province should not be partitioned and (107 votes to 34) that East Bengal should join the Constituent Assembly of Pakistan if Bengal was partitioned. On 6 July, the Sylhet district of Assam voted in a referendum to join East Bengal. The English barrister Cyril Radcliffe was instructed to draw the borders of Pakistan and India. The Radcliffe Line created the boundary between the Dominion of India and the Dominion of Pakistan, which later became the Bangladesh-India border. The Radcliffe Line awarded two-thirds of Bengal to the eastern wing of Pakistan, although the historic Bengali capitals of Gaur, Pandua, Murshidabad and Calcutta fell on the Indian side, close to the border with Pakistan. Dhaka's status as a capital was also restored.
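The voting procedure above can be summarised in a short sketch. This is illustrative only: the vote counts are those given in this section, and the decision rule (a majority of either regional bloc in favour of partition sufficed to divide the province) follows the Mountbatten Plan as described here:

```python
# Tallying the 20 June 1947 Bengal Legislative Assembly votes quoted
# above. Counts come from this article; the decision rule follows the
# Mountbatten Plan's partition procedure.

motions = [
    ("Joint sitting: join Pakistan if Bengal stays united", 120, 90),
    ("West Bengal bloc: partition the province",             58, 21),
    ("East Bengal bloc: partition the province",             35, 106),
    ("East Bengal bloc: join Pakistan if partitioned",      107, 34),
]

for text, ayes, noes in motions:
    verdict = "carried" if ayes > noes else "defeated"
    print(f"{text}: {ayes}-{noes} ({verdict})")

# The West Bengal bloc's vote for partition was decisive: the province
# was divided even though the East Bengal bloc voted to keep it whole.
```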
Geography

Most of the Bengal region lies in the Ganges-Brahmaputra delta, but there are highlands in its north, northeast and southeast. The Ganges Delta arises from the confluence of the Ganges, Brahmaputra and Meghna rivers and their respective tributaries. The total area of Bengal is 232,752 km², divided between West Bengal and Bangladesh. The flat and fertile Bangladesh Plain dominates the geography of Bangladesh, while the Chittagong Hill Tracts and the Sylhet region are home to most of the country's mountains. Most parts of Bangladesh lie at very low elevations above sea level, and it is believed that about 10% of the land would be flooded were the sea level to rise. Because of this low elevation, much of the region is exceptionally vulnerable to seasonal flooding during the monsoons. The highest point in Bangladesh is in the Mowdok range. A major part of the coastline comprises a marshy jungle, the Sundarbans, the largest mangrove forest in the world and home to diverse flora and fauna, including the royal Bengal tiger. In 1997, this region was declared endangered.

West Bengal is on the eastern bottleneck of India, stretching from the Himalayas in the north to the Bay of Bengal in the south. The Darjeeling Himalayan hill region in the northern extreme of the state belongs to the eastern Himalaya and contains Sandakfu, the highest peak of the state. The narrow Terai region separates this region from the plains, which in turn transition into the Ganges delta towards the south. The Rarh region intervenes between the Ganges delta in the east and the western plateau and high lands. A small coastal region lies on the extreme south, while the Sundarbans mangrove forests form a remarkable geographical landmark at the Ganges delta.

At least nine districts in West Bengal and 42 districts in Bangladesh have arsenic levels in groundwater above the World Health Organization maximum permissible limit of 50 µg/L, or 50 parts per billion, and the untreated water is unfit for human consumption. The water causes arsenicosis, skin cancer and various other complications in the body.

Historical, political and cultural geography

Geographic distinctions

North Bengal

North Bengal is a term used for the north-western part of Bangladesh and the northern part of West Bengal. The Bangladeshi part comprises Rajshahi Division and Rangpur Division. Generally, it is the area lying west of the Jamuna River and north of the Padma River, and includes the Barind Tract. West Bengal's part comprises Jalpaiguri Division (Alipurduar, Cooch Behar, Darjeeling, Jalpaiguri, North Dinajpur, South Dinajpur and Malda), while Bihar's part comprises Kishanganj district. The Darjeeling Hills are also part of North Bengal, although only the people of Jalpaiguri, Alipurduar and Cooch Behar identify themselves as North Bengali. North Bengal is divided into the Terai and Dooars regions. It is also noted for its rich cultural heritage, including two UNESCO World Heritage Sites. Aside from the Bengali majority, North Bengal is home to many other communities, including Nepalis, Santhal people, Lepchas and Rajbongshis.

Northeast Bengal

Northeast Bengal refers to the Sylhet region, comprising Sylhet Division of Bangladesh and the Karimganj district in the Indian state of Assam. The region is noted for its distinctive fertile highland terrain, extensive tea plantations, rainforests and wetlands. The Surma and Barak rivers are the geographic markers of the area.
The city of Sylhet is its largest urban center, and the region is known for its distinctive regional language, Sylheti. The ancient name of the region is Srihatta. The region was ruled by the Kamarupa and Harikela kingdoms as well as the Bengal Sultanate, and later became a district of the Mughal Empire. Alongside the predominant Bengali population reside small Bishnupriya Manipuri, Khasia and other tribal minorities. The region is the crossroads of Bengal and northeast India.

Central Bengal

Central Bengal refers to the Dhaka Division of Bangladesh. It includes the elevated Madhupur tract, with a large Sal tree forest. The Padma River cuts through the southern part of the region, separating the greater Faridpur region. In the north lie the greater Mymensingh and Tangail regions.

South Bengal

South Bengal covers southwestern Bangladesh and the southern part of the Indian state of West Bengal. The Bangladeshi part includes Khulna Division, Barisal Division and the proposed Faridpur Division. The Indian part of South Bengal includes 12 districts: Kolkata, Howrah, Hooghly, Burdwan, East Midnapur, West Midnapur, Purulia, Bankura, Birbhum, Nadia, South 24 Parganas and North 24 Parganas. The Sundarbans, a major biodiversity hotspot, is located in South Bengal; Bangladesh hosts 60% of the forest, with the remainder in India.

Southeast Bengal

Southeast Bengal refers to the hilly and coastal Bengali-speaking areas of Chittagong Division in southeastern Bangladesh, noted for their thalassocratic and seafaring heritage. The area was dominated by the Bengali Harikela and Samatata kingdoms in antiquity and was known to Arab traders as Harkand in the 9th century. During the medieval period, the region was ruled by the Sultanate of Bengal, the Kingdom of Tripura, the Kingdom of Mrauk U, the Portuguese Empire and the Mughal Empire, prior to the advent of British rule. The Chittagonian language, a sister language of Bengali, is prevalent in coastal areas of southeast Bengal. Along with its Bengali population, the region is also home to Tibeto-Burman ethnic groups, including the Chakma, Marma, Tanchangya and Bawm peoples. Southeast Bengal is considered a bridge to Southeast Asia, and the northern parts of Arakan are also historically considered a part of it.

Places of interest

There are four World Heritage Sites in the region: the Sundarbans, the Somapura Mahavihara, the Mosque City of Bagerhat and the Darjeeling Himalayan Railway. Other prominent places include the temple city of Bishnupur in Bankura, the Adina Mosque, the Caravanserai Mosque, numerous zamindar palaces (like Ahsan Manzil and Cooch Behar Palace), the Lalbagh Fort, the Great Caravanserai ruins, the Shaista Khan Caravanserai ruins, the Kolkata Victoria Memorial, the Dhaka Parliament Building, archaeologically excavated ancient fort cities in Mahasthangarh, Mainamati, Chandraketugarh and Wari-Bateshwar, the Jaldapara National Park, the Lawachara National Park, the Teknaf Game Reserve and the Chittagong Hill Tracts. Cox's Bazar in southeastern Bangladesh is home to the longest natural sea beach in the world, with an unbroken length of 120 km (75 mi); it is also a growing surfing destination. St. Martin's Island, off the coast of Chittagong Division, is home to the sole coral reef in Bengal.

Flora and fauna

The flat Bengal Plain, which covers most of Bangladesh and West Bengal, is one of the most fertile areas on Earth, with lush vegetation and farmland dominating its landscape.
Bengali villages are buried among groves of mango, jackfruit, betel nut and date palm. Rice, jute, mustard and sugarcane plantations are a common sight. Water bodies and wetlands provide a habitat for many aquatic plants in the Ganges-Brahmaputra delta. The northern part of the region features Himalayan foothills (the Dooars) with densely wooded Sal and other tropical evergreen trees. Above an elevation of 1,000 metres (3,300 ft), the forest becomes predominantly subtropical, dominated by temperate-forest trees such as oaks, conifers and rhododendrons. Sal woodland is also found across central Bangladesh, particularly in the Bhawal National Park. The Lawachara National Park is a rainforest in northeastern Bangladesh. The Chittagong Hill Tracts in southeastern Bangladesh are noted for their high degree of biodiversity. The littoral Sundarbans in the southwestern part of Bengal is the largest mangrove forest in the world and a UNESCO World Heritage Site. The region has over 89 species of mammals, 628 species of birds and numerous species of fish. For Bangladesh, the water lily, the oriental magpie-robin, the hilsa and the mango tree are national symbols. For West Bengal, the white-throated kingfisher, the chatim tree and the night-flowering jasmine are state symbols. The Bengal tiger is the national animal of Bangladesh and India. The fishing cat is the state animal of West Bengal.

Politics

Today, the region of Bengal proper is divided between the sovereign state of the People's Republic of Bangladesh and the Indian state of West Bengal. The Bengali-speaking Barak Valley forms part of the Indian state of Assam. The Indian state of Tripura has a Bengali-speaking majority and was formerly the princely state of Hill Tipperah. In the Bay of Bengal, St. Martin's Island is governed by Bangladesh, while the Andaman and Nicobar Islands, which have a plurality of Bengali speakers, are governed by India's federal government as a union territory.

Bangladeshi Republic

The state of Bangladesh is a parliamentary republic based on the Westminster system, with a written constitution and a President elected by parliament for mostly ceremonial purposes. The government is headed by a Prime Minister, who is appointed by the President from among the popularly elected 300 Members of Parliament in the Jatiyo Sangshad, the national parliament. The Prime Minister is traditionally the leader of the single largest party in the Jatiyo Sangshad. While the constitution recognises Islam as the country's established religion, it grants freedom of religion to non-Muslims. Between 1975 and 1990, Bangladesh had a presidential system of government. Since the 1990s, it has been administered by non-political technocratic caretaker governments on four occasions, the last being under military-backed emergency rule in 2007 and 2008. The Awami League and the Bangladesh Nationalist Party (BNP) are the two largest political parties in Bangladesh. Bangladesh is a member of the UN, WTO, IMF, the World Bank, ADB, OIC, IDB, SAARC, BIMSTEC and the IMCTC. Bangladesh has achieved significant strides in human development compared to its neighbours.

Indian Bengal

West Bengal is a constituent state of the Republic of India, with local executives and assemblies, features shared with other states in the Indian federal system. The president of India appoints a governor as the ceremonial representative of the union government. The governor appoints the chief minister on the nomination of the legislative assembly.
The chief minister is traditionally the leader of the party or coalition with the most seats in the assembly. President's rule is often imposed in Indian states as a direct intervention of the union government led by the prime minister of India. Each state has popularly elected members in the Indian lower house of parliament, the Lok Sabha, and nominates members to the Indian upper house of parliament, the Rajya Sabha. The state legislative assemblies also play a key role in electing the ceremonial president of India. The former president of India Pranab Mukherjee was a native of West Bengal and a leader of the Indian National Congress. The two major political forces in the Bengali-speaking zone of India are the Left Front and the Trinamool Congress, with the Bharatiya Janata Party (BJP) and the Indian National Congress being minor players.

Cross-border relations

India and Bangladesh are the world's second and eighth most populous countries respectively. Bangladesh-India relations began on a high note in 1971, when India played a major role in the liberation of Bangladesh, with the Indian Bengali populace and media providing overwhelming support to the independence movement in the former East Pakistan. The two countries had a twenty-five-year friendship treaty between 1972 and 1996. However, differences over river sharing, border security and access to trade have long plagued the relationship. In more recent years, a consensus has evolved in both countries on the importance of developing good relations, as well as a strategic partnership in South Asia and beyond. Commercial, cultural and defence co-operation have expanded since 2010, when Prime Ministers Sheikh Hasina and Manmohan Singh pledged to reinvigorate ties. The Bangladesh High Commission in New Delhi operates a Deputy High Commission in Kolkata and a consular office in Agartala. India has a High Commission in Dhaka with consulates in Chittagong and Rajshahi. Frequent international air, bus and rail services connect major cities in Bangladesh and Indian Bengal, particularly the three largest cities: Dhaka, Kolkata and Chittagong. Undocumented immigration of Bangladeshi workers is a controversial issue championed by right-wing nationalist parties in India but finds little sympathy in West Bengal. India has since fenced the border, a move that Bangladesh has criticised.

Economy

The Ganges Delta provided the advantages of fertile soil, ample water, and an abundance of fish, wildlife, and fruit. Living standards for Bengal's elite were relatively better than in other parts of the Indian subcontinent. Between 400 and 1200, Bengal had a well-developed economy in terms of land ownership, agriculture, livestock, shipping, trade, commerce, taxation, and banking. The apparent vibrancy of the Bengal economy at the beginning of the 15th century is attributed to the end of tribute payments to the Delhi Sultanate, which ceased after the creation of the Bengal Sultanate and stopped the outflow of wealth. Ma Huan's travelogue recorded a booming shipbuilding industry and significant international trade in Bengal. In 1338, Ibn Battuta noticed that the silver taka was the most popular currency in the region instead of the Islamic dinar. In 1415, members of Admiral Zheng He's entourage also noticed the dominance of the taka. The currency was the most important symbol of sovereignty for the Sultan of Bengal, and the Sultanate of Bengal established an estimated 27 mints in provincial capitals across the kingdom.
These provincial capitals were known as Mint Towns, which formed an integral aspect of governance and administration in Bengal. The taka continued to be issued in Mughal Bengal, which inherited the sultanate's legacy. As Bengal became more prosperous and integrated into the world economy under Mughal rule, the taka replaced shell currency in rural areas and became the standardized legal tender. It was also used in commerce with the Dutch East India Company, the French East India Company, the Danish East India Company and the British East India Company.

Under Mughal rule, Bengal was the center of the worldwide muslin trade, which was patronized by the Mughal imperial court. Muslin from Bengal was worn by aristocratic ladies in courts as far away as Europe, Persia and Central Asia. The treasury of the Nawab of Bengal was the biggest source of revenue for the imperial Mughal court in Delhi. Bengal had a large shipbuilding industry: its output during the 16th and 17th centuries stood at 223,250 tons annually, higher than the volume of shipbuilding in the nineteen colonies of North America between 1769 and 1771.

Historically, Bengal has been the industrial leader of the subcontinent. Mughal Bengal saw the emergence of a proto-industrial economy backed by textiles and gunpowder. This organized early modern economy flourished until the beginning of British rule in the mid-18th century, when the region underwent radical and revolutionary changes in government, trade, and regulation. The British displaced the indigenous ruling class and transferred much of the region's wealth back to the colonial metropole in Britain. In the 19th century, the British began investing in railways and limited industrialization. However, the Bengali economy was dominated by trade in raw materials during much of the colonial period, particularly the jute trade.

The partition of India changed the economic geography of the region. Calcutta in West Bengal inherited a thriving industrial base from the colonial period, particularly in jute processing. East Pakistan soon developed its own industrial base, including the world's largest jute mill. In 1972, the newly independent government of Bangladesh nationalized 580 industrial plants. These industries were later privatized in the late 1970s as Bangladesh moved towards a market-oriented economy. Liberal reforms in 1991 paved the way for a major expansion of Bangladesh's private-sector industry, including in telecoms, natural gas, textiles, pharmaceuticals, ceramics, steel and shipbuilding. In 2022, Bangladesh was the second largest economy in South Asia after India. The region is one of the largest rice-producing areas in the world, with West Bengal being India's largest rice producer and Bangladesh the world's fourth largest. Three Bengali economists have been Nobel laureates: Amartya Sen and Abhijit Banerjee, who won the Nobel Memorial Prize in Economics, and Muhammad Yunus, who won the Nobel Peace Prize.
Stock markets

Dhaka Stock Exchange
Chittagong Stock Exchange
Calcutta Stock Exchange

Ports and harbours

Port of Chittagong
Port of Kolkata
Port of Mongla
Haldia Port
Port of Payra
Port of Pangaon
Port of Narayanganj
Port of Ashuganj
Port of Barisal
Matarbari Port
Land port of Benapole-Petrapole

Chambers of commerce

Bengal Chamber of Commerce and Industry
Bengal National Chamber of Commerce & Industry
Federation of Bangladesh Chambers of Commerce and Industry (FBCCI)
Chittagong Chamber of Commerce & Industry
Dhaka Chamber of Commerce & Industry (DCCI)
Metropolitan Chamber of Commerce and Industry (MCCI)

Intra-Bengal trade

Bangladesh and India are the largest trading partners in South Asia, with two-way trade valued at an estimated US$16 billion. Most of this trade relationship is centered on some of the world's busiest land ports on the Bangladesh-India border. The Bangladesh Bhutan India Nepal Initiative seeks to boost trade through a Regional Motor Vehicles Agreement.

Demographics

The Bengal region is one of the most densely populated areas in the world. With a population of 300 million, Bengalis are the third largest ethnic group in the world after the Han Chinese and Arabs. According to the provisional results of the 2011 Bangladesh census, the population of Bangladesh was 149,772,364; however, the CIA's The World Factbook gives 163,654,860 as its population in a July 2013 estimate. According to the provisional results of the 2011 Indian national census, West Bengal has a population of 91,347,736. The Bengal region therefore has at least 241.1 million people, giving a population density of roughly 1,003.9/km² and making it among the most densely populated areas in the world.

Bengali is the main language spoken in Bengal. Many phonological, lexical, and structural differences from the standard variety occur in peripheral varieties of Bengali across the region. Other regional languages closely related to Bengali include Sylheti, Chittagonian, Chakma, Rangpuri/Rajbangshi, Hajong, Rohingya, and Tangchangya. English is often used for official work alongside Bengali. Other major Indo-Aryan languages such as Hindi, Urdu, Assamese, and Nepali are also familiar to Bengalis.

In addition, several minority ethnolinguistic groups are native to the region. These include speakers of other Indo-Aryan languages (e.g., Bishnupriya Manipuri, Oraon Sadri, various Bihari languages), Tibeto-Burman languages (e.g., A'Tong, Chak, Koch, Garo, Megam, Meitei (officially called "Manipuri"), Mizo, Mru, Pangkhua, Rakhine/Marma, Kok Borok, Riang, Tippera, Usoi, various Chin languages), Austroasiatic languages (e.g., Khasi, Koda, Mundari, Pnar, Santali, War), and Dravidian languages (e.g., Kurukh, Sauria Paharia).

Life expectancy is around 72.49 years for Bangladesh and 70.2 years for West Bengal. In terms of literacy, West Bengal leads with a 77% literacy rate, while in Bangladesh the rate is approximately 72.9%. The level of poverty in West Bengal is 19.98%, while in Bangladesh it stands at 12.9%. West Bengal has one of the lowest total fertility rates in India; its TFR of 1.6 roughly equals that of Canada.

About 20,000 people live on chars. Chars are temporary islands formed by the deposition of sediments eroded off the banks of the Ganges in West Bengal, which often disappear in the monsoon season. They are made of very fertile soil. The inhabitants of the chars are not recognised by the Government of West Bengal on the grounds that it is not known whether they are Indians or Bangladeshis.
Consequently, no identification documents are issued to char-dwellers, who cannot benefit from health care, barely survive because of very poor sanitation, and are prevented from emigrating to the mainland to find jobs once they have turned 14. On one particular char, it was reported that 13% of women died in childbirth.

Major cities

Culture

Language

The Bengali language developed between the 7th and 10th centuries from Apabhraṃśa and Magadhi Prakrit. It is written using the indigenous Bengali alphabet, a descendant of the ancient Brahmi script. Bengali is the fifth most spoken native language in the world. It is an eastern Indo-Aryan language and one of the easternmost branches of the Indo-European language family, and is part of the Bengali-Assamese languages. Bengali has greatly influenced other languages in the region, including Odia, Assamese, Chakma, Nepali and Rohingya. It is the sole state language of Bangladesh and the second most spoken language in India. It is also the seventh most spoken language by total number of speakers in the world. Bengali binds together a culturally diverse region and is an important contributor to regional identity. The 1952 Bengali Language Movement in East Pakistan is commemorated by UNESCO as International Mother Language Day, as part of global efforts to preserve linguistic identity.

Currency

In both Bangladesh and West Bengal, currency is commonly denominated as taka. The Bangladesh taka is an official standard bearer of this tradition, while the Indian rupee is also written as taka in Bengali script on all of its banknotes. The history of the taka dates back centuries; Bengal was home to one of the world's earliest coin currencies in the first millennium BCE. Under the Delhi Sultanate, the taka was introduced by Muhammad bin Tughluq in 1329, and Bengal became the stronghold of the taka. The silver currency was the most important symbol of sovereignty of the Sultanate of Bengal. It was traded on the Silk Road and replicated in Nepal and China's Tibetan protectorate. The Pakistani rupee was scripted in Bengali as taka on its banknotes until Bangladesh's creation in 1971.

Literature

Bengali literature has a rich heritage, with a history stretching back to the 3rd century BCE, when the main language was Sanskrit written in the Brahmi script. The Bengali language and script evolved circa 1000 CE from Magadhi Prakrit. Bengal has a long tradition in folk literature, evidenced by the Chôrjapôdô, Mangalkavya, Shreekrishna Kirtana, Maimansingha Gitika and Thakurmar Jhuli. Bengali literature in the medieval age was often either religious (e.g. Chandidas) or adapted from other languages (e.g. Alaol). During the Bengal Renaissance of the nineteenth and twentieth centuries, Bengali literature was modernised through the works of authors such as Michael Madhusudan Dutta, Ishwar Chandra Vidyasagar, Bankim Chandra Chattopadhyay, Rabindranath Tagore, Sarat Chandra Chattopadhyay, Kazi Nazrul Islam, Satyendranath Dutta and Jibanananda Das. In the 20th century, prominent modern Bengali writers included Syed Mujtaba Ali, Jasimuddin, Manik Bandopadhyay, Tarasankar Bandyopadhyay, Bibhutibhushan Bandyopadhyay, Buddhadeb Bose, Sunil Gangopadhyay and Humayun Ahmed. Prominent contemporary Bengali writers in English include Amitav Ghosh, Tahmima Anam, Jhumpa Lahiri and Zia Haider Rahman, among others.

Personification

The Bangamata is a female personification of Bengal which was created during the Bengali Renaissance and later adopted by the Bengali nationalists.
Hindu nationalists adopted a modified Bharat Mata as a national personification of India. The Mother Bengal represents not only biological motherness but its attributed characteristics as well – protection, never-ending love, consolation, care, the beginning and the end of life. In Amar Sonar Bangla, the national anthem of Bangladesh, Rabindranath Tagore used the word "Maa" (Mother) numerous times to refer to the motherland, i.e. Bengal.

Art

The Pala-Sena School of Art developed in Bengal between the 8th and 12th centuries and is considered a high point of classical Asian art, encompassing both sculpture and painting. Islamic Bengal was noted for its production of the finest cotton fabrics and saris, notably the Jamdani, which received warrants from the Mughal court. The Bengal School of painting flourished in Kolkata and Shantiniketan during the British Raj in the early 20th century; its practitioners were among the harbingers of modern painting in India. Zainul Abedin was the pioneer of modern Bangladeshi art, and the country has a thriving and internationally acclaimed contemporary art scene.

Architecture

Classical Bengali architecture features terracotta buildings. Ancient Bengali kingdoms laid the foundations of the region's architectural heritage through the construction of monasteries and temples (for example, the Somapura Mahavihara). During the sultanate period, a distinct and glorious Islamic style of architecture developed in the region. Most Islamic buildings were small and highly artistic terracotta mosques with multiple domes and no minarets. Bengal was also home to the largest mosque in South Asia, at Adina. Bengali vernacular architecture is credited with inspiring the popularity of the bungalow. The Bengal region also has a rich heritage of Indo-Saracenic architecture, including numerous zamindar palaces and mansions; the most prominent example of this style is the Victoria Memorial, Kolkata. In the 1950s, Muzharul Islam pioneered the modernist terracotta style of architecture in South Asia. This was followed by the design of the Jatiyo Sangshad Bhaban by the renowned American architect Louis Kahn in the 1960s, which was based on the aesthetic heritage of Bengali architecture and geography.

Sciences

The Gupta dynasty, which is believed to have originated in North Bengal, pioneered the invention of chess, the concept of zero, the theory of the Earth orbiting the Sun, and the study of solar and lunar eclipses, alongside the flourishing of Sanskrit literature and drama. Bengal was the leader of scientific endeavours in the subcontinent during the British Raj, and the educational reforms of this period gave birth to many distinguished scientists in the region. Sir Jagadish Chandra Bose pioneered the investigation of radio and microwave optics, made very significant contributions to plant science, and laid the foundations of experimental science in the Indian subcontinent. The IEEE named him one of the fathers of radio science, and he was the first person from the Indian subcontinent to receive a US patent, in 1904. In 1924–25, while researching at the University of Dhaka, Satyendra Nath Bose, well known for his work in quantum mechanics, provided the foundation for Bose–Einstein statistics and the theory of the Bose–Einstein condensate. Meghnad Saha was the first scientist to relate a star's spectrum to its temperature, developing thermal ionization equations (notably the Saha ionization equation) that have been foundational in the fields of astrophysics and astrochemistry.
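For reference, the Saha ionization equation mentioned above relates the ionization state of a gas to its temperature. The article does not quote the formula itself; a standard textbook form is:

```latex
\frac{n_{i+1}\, n_e}{n_i}
  = \frac{2\, g_{i+1}}{g_i}
    \left( \frac{2\pi m_e k T}{h^2} \right)^{3/2}
    e^{-\chi_i / (k T)}
```

Here n_i and n_{i+1} are the number densities of atoms in the i-th and (i+1)-th ionization states, n_e is the electron density, g_i and g_{i+1} are statistical weights, m_e is the electron mass, k is Boltzmann's constant, h is Planck's constant, T is the temperature, and χ_i is the ionization energy of the i-th state; it is this temperature dependence that lets a star's spectrum serve as a thermometer.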
Amal Kumar Raychaudhuri was a physicist known for his research in general relativity and cosmology. His most significant contribution is the eponymous Raychaudhuri equation, which demonstrates that singularities arise inevitably in general relativity and is a key ingredient in the proofs of the Penrose–Hawking singularity theorems. In the United States, the Bangladeshi-American engineer Fazlur Rahman Khan emerged as the "father of tubular designs" in skyscraper construction. Ashoke Sen is an Indian theoretical physicist whose main area of work is string theory. He was among the first recipients of the Fundamental Physics Prize, "for opening the path to the realisation that all string theories are different limits of the same underlying theory".

Music

The Baul tradition is a unique heritage of Bengali folk music. The 19th-century mystic poet Lalon Shah is the most celebrated practitioner of the tradition. Other folk music forms include Gombhira, Bhatiali and Bhawaiya. Hason Raja is a renowned folk poet of the Sylhet region. Folk music in Bengal is often accompanied by the ektara, a one-stringed instrument; other instruments include the dotara, dhol, flute, and tabla. The region also has a rich heritage in North Indian classical music.

Cuisine

Bengali cuisine is the only traditionally developed multi-course culinary tradition in the Indian subcontinent. Rice and fish are traditional favourite foods, leading to the saying that "fish and rice make a Bengali". Bengal's vast repertoire of fish-based dishes includes Hilsa preparations, a favourite among Bengalis. Bengalis make distinctive sweetmeats from milk products, including Rôshogolla, Chômchôm, and several kinds of Pithe. The old city of Dhaka is noted for its distinct Indo-Islamic cuisine, including biryani, bakarkhani and kebab dishes.

Boats

There are 150 types of Bengali country boats plying the 700 rivers of the Bengal delta, the vast floodplain and many oxbow lakes. They vary in design and size, and include the dinghy and sampan, among others. Country boats are a central element of Bengali culture and have inspired generations of artists and poets, including the ivory artisans of the Mughal era. The region has a long shipbuilding tradition dating back many centuries. Wooden boats are made of timber such as jarul (Dipterocarpus turbinatus), sal (Shorea robusta), sundari (Heritiera fomes), and Burma teak (Tectona grandis). Medieval Bengal was a shipbuilding hub for the Mughal and Ottoman navies. The British Royal Navy later utilised Bengali shipyards in the 19th century, including for the Battle of Trafalgar.

Attire

Bengali women commonly wear the shaŗi and the salwar kameez, often distinctly designed according to local cultural customs. In urban areas, many women and men wear Western-style attire. Among men, European dress has greater acceptance. Men also wear traditional costumes such as the kurta with dhoti or pyjama, often on religious occasions. The lungi, a kind of long skirt, is widely worn by Bangladeshi men.

Festivals

For Bengali Hindus, the major religious festivals include Durga Puja, Janmashtami and Rath Yatra. For Bengali Muslims, the major religious festivals are Eid al-Fitr, Eid al-Adha, Milad un Nabi, Muharram, and Shab-e-Barat. In honour of Bengali Buddhists and Bengali Christians, both Buddha's Birthday and Christmas are public holidays in the region. The Bengali New Year is the main secular festival of Bengali culture, celebrated by people regardless of religious and social backgrounds.
Other Bengali festivals include the first day of spring and the Nabanna harvest festival in autumn.

Media

Bangladesh has a diverse, outspoken and privately owned press, with the largest-circulated Bengali-language newspapers in the world. English-language titles are popular among urban readers. West Bengal had 559 published newspapers in 2005, of which 430 were in Bengali. Bengali cinema is divided between the media hubs of Dhaka and Kolkata.

Sports

Cricket and football are popular sports in the Bengal region. Local games include sports such as Kho Kho and Kabaddi, the latter being the national sport of Bangladesh. An Indo-Bangladesh Bengali Games has been organised among the athletes of the Bengali-speaking areas of the two countries.

See also

Bengali Renaissance
Bengalis
Greater Bengal
East India
Hindi Belt
List of Bengalis
North-East India
Punjab

Notes

References

External links
2,165
4,865
https://en.wikipedia.org/wiki/Roman%20Breviary
Roman Breviary
The Roman Breviary (Latin: Breviarium Romanum) is a breviary of the Roman Rite in the Catholic Church. A liturgical book, it contains public or canonical prayers, hymns, the Psalms, readings, and notations for everyday use, especially by bishops, priests, and deacons in the Divine Office (i.e., at the canonical hours, the Christians' daily prayer). The volume containing the daily hours of Catholic prayer was published as the Breviarium Romanum (Roman Breviary) from its editio princeps in 1568 under Pope Pius V until the reforms of Paul VI (1974), when it was largely supplanted by the Liturgy of the Hours. In the course of the Catholic Counter-Reformation, Pope Pius V (r. 1566–1572) imposed the use of the Roman Breviary, mainly based on the Breviarium secundum usum Romanae Curiae, on the Latin Church of the Catholic Church. Exceptions are the Benedictines and Dominicans, who have breviaries of their own, and two surviving local-use breviaries:

the Mozarabic Breviary, once in use throughout all Spain, but now confined to a single foundation at Toledo; it is remarkable for the number and length of its hymns, and for the fact that the majority of its collects are addressed to God the Son;
the Ambrosian Breviary, now confined to Milan, where it owes its retention to the attachment of the clergy and people to their traditionary usages, which they derive from St Ambrose.

Origin of name

The Latin word breviarium generally signifies "abridgement, compendium". This wider sense has often been used by Christian authors, e.g. Breviarium fidei, Breviarium in psalmos, Breviarium canonum, Breviarium regularum. In liturgical language specifically, "breviary" (breviarium) has a special meaning, indicating a book furnishing the regulations for the celebration of Mass or the canonical Office, and may be met with under the titles Breviarium Ecclesiastici Ordinis or Breviarium Ecclesiæ Romanæ. In the 9th century, Alcuin used the word to designate an office abridged or simplified for the use of the laity. Prudentius of Troyes, about the same period, composed a Breviarium Psalterii. In an ancient inventory occurs Breviarium Antiphonarii, meaning "Extracts from the Antiphonary". In the Vita Aldrici occurs sicut in plenariis et breviariis Ecclesiæ ejusdem continentur. Again, in the inventories in the catalogues, notes such as these may be met with: Sunt et duo cursinarii et tres benedictionales Libri; ex his unus habet obsequium mortuorum et unus Breviarius, or, Præter Breviarium quoddam quod usque ad festivitatem S. Joannis Baptistæ retinebunt, etc. In about 1100, Monte Cassino obtained a book titled Incipit Breviarium sive Ordo Officiorum per totam anni decursionem. From such references, and from others of a like nature, Quesnel gathers that the word Breviarium at first designated a book furnishing the rubrics, a sort of Ordo. The title Breviary, as we employ it—that is, a book containing the entire canonical office—appears to date from the 11th century. Pope Gregory VII (r. 1073–1085) having abridged the order of prayers, and having simplified the Liturgy as performed at the Roman Court, this abridgment received the name of Breviary, which was suitable, since, according to the etymology of the word, it was an abridgment. The name has been extended to books which contain in one volume, or at least in one work, liturgical books of different kinds, such as the Psalter, the Antiphonary, the Responsoriary, the Lectionary, etc.
In this connection it may be pointed out that in this sense the word, as it is used nowadays, is illogical; such a book should rather be named a Plenarium than a Breviarium, since, liturgically speaking, the word Plenarium exactly designates such books as contain several different compilations united under one cover.

History

Early history

The canonical hours of the Breviary owe their remote origin to the Old Covenant, when God commanded the Aaronic priests to offer morning and evening sacrifices. Other inspiration may have come from David's words in the Psalms, "Seven times a day I praise you" (Ps. 119:164) and "the just man meditates on the law day and night" (Ps. 1:2), and from what is said of Daniel: "Three times daily he was kneeling and offering prayers and thanks to his God" (Dan. 6:10). In the early days of Christian worship the Sacred Scriptures furnished all that was thought necessary, containing as they did the books from which the lessons were read and the psalms that were recited. The first step in the evolution of the Breviary was the separation of the Psalter into a choir-book. At first the president of the local church (bishop) or the leader of the choir chose a particular psalm as he thought appropriate. From about the 4th century certain psalms began to be grouped together, a process that was furthered by the monastic practice of daily reciting the 150 psalms. This took so much time that the monks began to spread it over a week, dividing each day into hours, and allotting to each hour its portion of the Psalter. St Benedict in the 6th century drew up such an arrangement, probably, though not certainly, on the basis of an older Roman division which, though not so skilful, is the one in general use. Gradually there were added to these psalter choir-books additions in the form of antiphons, responses, collects or short prayers, for the use of those not skilful at improvisation and metrical compositions. Jean Beleth, a 12th-century liturgical author, gives the following list of books necessary for the right conduct of the canonical office: the Antiphonarium, the Old and New Testaments, the Passionarius (liber) and the Legendarius (dealing respectively with martyrs and saints), the Homiliarius (homilies on the Gospels), the Sermologus (collection of sermons) and the works of the Fathers, besides the Psalterium and the Collectarium. To overcome the inconvenience of using such a library, the Breviary came into existence and use. Already in the 9th century Prudentius, bishop of Troyes, had in a Breviarium Psalterii made an abridgment of the Psalter for the laity, giving a few psalms for each day, and Alcuin had rendered a similar service by including a prayer for each day and some other prayers, but no lessons or homilies.

Medieval breviaries

The Breviary, rightly so called, only dates from the 11th century; the earliest manuscript containing the whole canonical office is of the year 1099 and is in the Mazarin library. Gregory VII (pope 1073–1085), too, simplified the liturgy as performed at the Roman court, and gave his abridgment the name of Breviary, which thus came to denote a work which from another point of view might be called a Plenary, involving as it did the collection of several works into one. There are several extant specimens of 12th-century Breviaries, all Benedictine, but under Innocent III (pope 1198–1216) their use was extended, especially by the newly founded and active Franciscan order. These preaching friars, with the authorization of Gregory IX, adopted (with some modifications, e.g.
the substitution of the "Gallican" for the "Roman" version of the Psalter) the Breviary hitherto used exclusively by the Roman court, and with it gradually swept out of Europe all the earlier partial books (Legendaries, Responsories, etc.) and to some extent the local Breviaries, like that of Sarum. Finally, Nicholas III (pope 1277–1280) adopted this version both for the curia and for the basilicas of Rome, and thus made its position secure. Before the rise of the mendicant orders (wandering friars) in the 13th century, the daily services were usually contained in a number of large volumes. The first single manuscript of the daily office was written by the Benedictine order at Monte Cassino in Italy in 1099. The Benedictines were not a mendicant order, but a stable, monastery-based order, and single-volume breviaries are rare from this early period. The arrangement of the Psalms in the Rule of St. Benedict had a profound impact upon the breviaries used by secular and monastic clergy alike, until 1911, when Pope Pius X introduced his reform of the Roman Breviary. In many places, every diocese, order or ecclesiastical province maintained its own edition of the breviary. However, mendicant friars travelled frequently and needed a shortened, or abbreviated, daily office contained in one portable book, and single-volume breviaries flourished from the thirteenth century onwards. These abbreviated volumes soon became very popular and eventually supplanted the Catholic Church's Curia office, previously said by non-monastic clergy.

Early printed editions

Before the advent of printing, breviaries were written by hand and were often richly decorated with initials and miniature illustrations telling stories in the lives of Christ or the saints, or stories from the Bible. Later printed breviaries usually have woodcut illustrations, interesting in their own right but bearing little relation to the beautifully illuminated breviaries. The beauty and value of many of the Latin Breviaries were brought to the notice of English churchmen by one of the numbers of the Oxford Tracts for the Times, since which time they have been much more studied, both for their own sake and for the light they throw upon the English Prayer-Book.

From a bibliographical point of view, some of the early printed Breviaries are among the rarest of literary curiosities, being merely local. The copies were not spread far, and were soon worn out by the daily use made of them. Doubtless many editions have perished without leaving a trace of their existence, while others are known by unique copies. In Scotland the only one which has survived the convulsions of the 16th century is the Aberdeen Breviary, a Scottish form of the Sarum Office (the Sarum Rite was much favoured in Scotland as a kind of protest against the jurisdiction claimed by the diocese of York), revised by William Elphinstone (bishop 1483–1514), and printed at Edinburgh by Walter Chapman and Androw Myllar in 1509–1510. Four copies of it have been preserved, of which only one is complete; but it was reprinted in facsimile in 1854 for the Bannatyne Club through the munificence of the Duke of Buccleuch. It is particularly valuable for the trustworthy notices of the early history of Scotland which are embedded in the lives of the national saints. Though enjoined by royal mandate in 1501 for general use within the realm of Scotland, it was probably never widely adopted.
The new Scottish Proprium sanctioned for the Catholic province of St Andrews in 1903 contains many of the old Aberdeen collects and antiphons. The Sarum or Salisbury Breviary itself was very widely used. The first edition was printed at Venice in 1483 by Raynald de Novimagio in folio; the latest at Paris, 1556, 1557. While modern Breviaries are nearly always printed in four volumes, one for each season of the year, the editions of the Sarum never exceeded two parts.

Early modern reforms

Until the Council of Trent (1545–1563) and the Catholic Counter-Reformation, every bishop had full power to regulate the Breviary of his own diocese, and this was acted upon almost everywhere. Each monastic community, also, had one of its own. Pope Pius V (r. 1566–1572), however, while sanctioning those which could show at least 200 years of existence, made the Roman obligatory in all other places. But the influence of the Roman rite has gradually gone much beyond this, and has superseded almost all the local uses. The Roman has thus become nearly universal, with the allowance only of additional offices for saints specially venerated in each particular diocese.

The Roman Breviary has undergone several revisions. The most remarkable of these is that by Francis Quignonez, cardinal of Santa Croce in Gerusalemme (1536), which, though not accepted by Rome (it was approved by Clement VII and Paul III, and permitted as a substitute for the unrevised Breviary, until Pius V in 1568 excluded it as too short and too modern, and issued a reformed edition of the old Breviary, the Breviarium Pianum or "Pian Breviary"), formed the model for the still more thorough reform made in 1549 by the Church of England, whose daily morning and evening services are but a condensation and simplification of the Breviary offices. Some parts of the prefaces at the beginning of the English Prayer-Book are free translations of those of Quignonez. The Pian Breviary was again altered by Sixtus V in 1588, who introduced the revised Vulgate; in 1602 by Clement VIII (through Baronius and Bellarmine), especially as concerns the rubrics; and by Urban VIII (1623–1644), a purist who altered the text of certain hymns.

In the 17th and 18th centuries a movement of revision took place in France, and succeeded in modifying about half the Breviaries of that country. Historically, this proceeded from the labours of Jean de Launoy (1603–1678), "le dénicheur des saints" (roughly, "the unearther of saints"), and Louis Sébastien le Nain de Tillemont, who had shown the falsity of numerous lives of the saints; theologically it was produced by the Port Royal school, which led men to dwell more on communion with God as contrasted with the invocation of the saints. This was mainly carried out by the adoption of a rule that all antiphons and responses should be in the exact words of Scripture, which cut out the whole class of appeals to created beings. The services were at the same time simplified and shortened, and the use of the whole Psalter every week (which had become a mere theory in the Roman Breviary, owing to its frequent supersession by saints' day services) was made a reality. These reformed French Breviaries—e.g. the Paris Breviary of 1680 by Archbishop François de Harlay (1625–1695) and that of 1736 by Archbishop Charles-Gaspard-Guillaume de Vintimille du Luc (1655–1746)—show a deep knowledge of Holy Scripture, and much careful adaptation of different texts.
Later modern reforms During the pontificate of Pius IX a strong Ultramontane movement arose against the French Breviaries of 1680 and 1736. This was inaugurated by Montalembert, but its literary advocates were chiefly Dom Guéranger, a learned Benedictine monk and abbot of Solesmes, and Louis Veuillot (1813–1883) of the Univers; it succeeded in suppressing these Breviaries everywhere, the last diocese to surrender being Orléans in 1875. The Jansenist and Gallican influence was also strongly felt in Italy and in Germany, where Breviaries based on the French models were published at Cologne, Münster, Mainz and other towns. Meanwhile, under the direction of Benedict XIV (pope 1740–1758), a special congregation collected much material for an official revision, but nothing was published. In 1902, under Leo XIII, a commission under the presidency of Monsignor Louis Duchesne was appointed to consider the Breviary, the Missal, the Pontifical and the Ritual. Significant changes came in 1911 with the reform of the Roman Breviary by Pope Pius X. This revision modified the traditional psalm scheme so that, while all 150 psalms were used in the course of the week, these were said without repetition. The psalms assigned to the Sunday office underwent the least revision, although noticeably fewer psalms are recited at Matins, and both Lauds and Compline are slightly shorter owing to psalms (or, in the case of Compline, the first few verses of a psalm) being removed. Pius X was probably influenced by earlier attempts to eliminate repetition in the psalter, most notably the liturgy of the Benedictine congregation of St. Maur. Ever since Cardinal Quignonez's attempt to reform the Breviary had employed this principle—albeit with no regard for the traditional scheme—such notions had floated around in the Western Church, and they can particularly be seen in the Paris Breviary. Pope Pius XII introduced the optional use of a new Latin translation of the Psalms, made from the Hebrew in a more classical style. Most breviaries published in the late 1950s and early 1960s used this "Pian Psalter". Pope John XXIII also revised the Breviary in 1960, introducing changes drawn up by his predecessor Pope Pius XII. The most notable alteration is the shortening of most feasts from nine to three lessons at Matins, keeping only the Scripture readings (the former lesson i, then lessons ii and iii together), followed by either the first part of the patristic reading (lesson vii) or, for most feasts, a condensed version of the former second Nocturn, which had formerly been used when a feast was reduced in rank and commemorated. Contents of the Roman Breviary At the beginning stands the usual introductory matter, such as the tables for determining the date of Easter, the calendar, and the general rubrics. The Breviary itself is divided into four seasonal parts—winter, spring, summer, autumn—and comprises under each part: the Psalter; Proprium de Tempore (the special office of the season); Proprium Sanctorum (special offices of saints); Commune Sanctorum (general offices for saints); Extra Services. These parts are often published separately. The Psalter This psalm book is the very backbone of the Breviary, the groundwork of the Catholic prayer-book; out of it have grown the antiphons, responsories and versicles. Until the 1911 reform, the psalms were arranged according to a disposition dating from the 8th century, as follows: Psalms 1–108, with some omissions, were recited at Matins, twelve each day from Monday to Saturday, and eighteen on Sunday. 
The omissions were said at Lauds, Prime and Compline. Psalms 109–147 (except 117, 118, and 142) were said at Vespers, five each day. Psalms 148–150 were always used at Lauds, and give that hour its name. The text of this Psalter is that commonly known as the Gallican. The name is misleading, for it is simply the second revision (A.D. 392) made by Jerome of the old Itala version originally used in Rome. Jerome's first revision of the Itala (A.D. 383), known as the Roman, is still used at St Peter's in Rome, but the "Gallican", thanks especially to St Gregory of Tours, who introduced it into Gaul in the 6th century, has ousted it everywhere else. The Antiphonary of Bangor proves that Ireland accepted the Gallican version in the 7th century, and the English Church did so in the 10th. Following the 1911 reform, Matins was reduced to nine psalms every day, with the other psalms redistributed throughout Prime, Terce, Sext, and Compline. For Sundays and special feasts Lauds and Vespers largely remained the same; Psalm 118 remained distributed among the Little Hours, and Psalms 4, 90, and 130 were kept at Compline. The Proprium de Tempore This contains the office of the seasons of the Christian year (Advent to Trinity), a conception that only gradually grew up. There is here given the whole service for every Sunday and weekday, the proper antiphons, responsories, hymns, and especially the course of daily Scripture reading, averaging about twenty verses a day, and (roughly) arranged thus:
Advent: Isaiah
Epiphany to Septuagesima: Pauline Epistles
Lent: patristic homilies (Genesis on Sundays)
Passiontide: Jeremiah
Easter to Pentecost: Acts, Catholic Epistles and Revelation
Pentecost to August: Samuel and Kings
August to Advent: Wisdom books, Maccabees, Prophets
The Proprium Sanctorum This contains the lessons, psalms and liturgical formularies for saints' festivals, and depends on the days of the secular month. The readings of the second Nocturn are mainly hagiological biography, with homilies or papal documents for certain major feasts, particularly those of Jesus and Mary. Some of this material was revised by Leo XIII in view of archaeological and other discoveries. The third Nocturn consists of a homily on the Gospel which is read at that day's Mass. Covering a great stretch of time and space, these readings do for the worshipper in the field of church history what the Scripture readings do in that of biblical history. The Commune Sanctorum This comprises psalms, antiphons, lessons, &c., for feasts of various groups or classes (twelve in all); e.g. apostles, martyrs, confessors, virgins, and the Blessed Virgin Mary. These offices are of very ancient date, and many of them were probably in origin proper to individual saints. They contain passages of great literary beauty. The lessons read at the third nocturn are patristic homilies on the Gospels, and together form a rough summary of theological instruction. Extra services Here are found the Little Office of the Blessed Virgin Mary, the Office for the Dead (obligatory on All Souls' Day), and offices peculiar to each diocese. Elements of the Hours It has already been indicated, by reference to Matins, Lauds, &c., that not only each day, but each part of the day, has its own office, the day being divided into liturgical "hours". A detailed account of these will be found in the article Canonical Hours. 
Each of the hours of the office is composed of the same elements, and something must be said now of the nature of these constituent parts, of which mention has here and there been already made. They are: psalms (including canticles), antiphons, responsories, hymns, lessons, little chapters, versicles and collects. Psalms Before the 1911 reform, the multiplication of saints' festivals, with practically the same festal psalms, meant that about one-third of the Psalter was repeated constantly, with a correspondingly rare recital of the remaining two-thirds. Following this reform, the entire Psalter is again generally recited each week, with the festal psalms restricted to only the highest-ranking feasts. As in the Greek usage and in the Benedictine, certain canticles, like the Song of Moses (Exodus xv.), the Song of Hannah (1 Sam. ii.), the prayer of Habakkuk (iii.), the prayer of Hezekiah (Isaiah xxxviii.) and other similar Old Testament passages, and, from the New Testament, the Magnificat, the Benedictus and the Nunc dimittis, are admitted as psalms. Antiphons The antiphons are short liturgical forms, sometimes of biblical, sometimes of patristic origin, used to introduce a psalm. The term originally signified a chant by alternate choirs, but has quite lost this meaning in the Breviary. Responsories The responsories are similar in form to the antiphons, but come at the end of the psalm, being originally the reply of the choir or congregation to the precentor who recited the psalm. Hymns The hymns are short poems going back in part to the days of Prudentius, Synesius, Gregory of Nazianzus and Ambrose (4th and 5th centuries), but mainly the work of medieval authors. Lessons The lessons, as has been seen, are drawn variously from the Bible, the Acts of the Saints and the Fathers of the Church. In the primitive church, books afterwards excluded from the canon were often read, e.g. the letters of Clement of Rome and the Shepherd of Hermas. In later days the churches of Africa, having rich memorials of martyrdom, used them to supplement the reading of Scripture. Monastic influence accounts for the practice of adding to the reading of a biblical passage some patristic commentary or exposition. Books of homilies were compiled from the writings of SS. Augustine, Hilary, Athanasius, Isidore, Gregory the Great and others, and formed part of the library of which the Breviary was the ultimate compendium. In the lessons, as in the psalms, the order for special days breaks in upon the normal order of ferial offices and dislocates the scheme for consecutive reading. The lessons are read at Matins (which is subdivided into three nocturns). Little chapters The little chapters are very short lessons read at the other "hours". Versicles The versicles are short responsories used after the little chapters in the minor hours. They appear after the hymns in Lauds and Vespers. Collects The collects come at the close of the office and are short prayers summing up the supplications of the congregation. They arise out of a primitive practice on the part of the bishop (local president), examples of which are found in the Didachē (Teaching of the Apostles) and in the letters of Clement of Rome and Cyprian. With the crystallization of church order, improvisation in prayer largely gave place to set forms, and collections of prayers were made which later developed into Sacramentaries and Orationals. 
The collects of the Breviary are largely drawn from the Gelasian and other Sacramentaries, and they serve to sum up the dominant idea of the festival on which they are used. Celebration Before the 1911 reform, the difficulty of harmonizing the Proprium de Tempore and the Proprium Sanctorum, to which reference has been made, was only partly met in the thirty-seven chapters of general rubrics. Additional help was given by a kind of Catholic Churchman's Almanack, called the Ordo Recitandi Divini Officii, published in different countries and dioceses, and giving, under every day, minute directions for proper reading. In 1960, John XXIII simplified the rubrics governing the Breviary in order to make it easier to use. Every cleric in Holy Orders, and many other members of religious orders, must publicly join in or privately read aloud (i.e. using the lips as well as the eyes—it takes about two hours in this way) the whole of the Breviary services allotted for each day. In large churches, where the services were celebrated, they were usually grouped: e.g. Matins and Lauds (about 7.30 A.M.); Prime, Terce (High Mass), Sext, and None (about 10 A.M.); Vespers and Compline (4 P.M.); and from four to eight hours (depending on the amount of music and the number of high masses) were thus spent in choir. Lay use of the Breviary has varied throughout the Church's history. In some periods laymen did not use the Breviary as a manual of devotion to any great extent. In the late Medieval period, the recitation of certain hours of the Little Office of the Blessed Virgin, which was based on the Breviary in form and content, became popular among those who could read, and Bishop Challoner did much to popularise the hours of Sunday Vespers and Compline (albeit in English translation) in his Garden of the Soul in the eighteenth century. The Liturgical Movement in the twentieth century saw renewed interest in the Offices of the Breviary, and several popular editions were produced, containing the vernacular as well as the Latin. The complete pre-Pius X Roman Breviary was translated into English (by the Marquess of Bute in 1879; new ed. with a translation of the Martyrology, 1908), French and German. Bute's version is noteworthy for its inclusion of the skilful renderings of the ancient hymns by J. H. Newman, J. M. Neale and others. Several editions of the Pius X Breviary were produced during the twentieth century, including a notable edition prepared with the assistance of the sisters of Stanbrook Abbey in the 1950s. Two editions in English and Latin, conforming to the rubrics of 1960, were produced in the following decade, published by Liturgical Press and Benziger in the United States. These used the Pius XII psalter. Baronius Press's revised edition of the Liturgical Press edition uses the older Gallican psalter of St. Jerome. This edition was published in 2012 for pre-orders only; in 2013 it returned to print and is available on Baronius' website. Under Pope Benedict XVI's motu proprio Summorum Pontificum, Catholic bishops, priests, and deacons are again permitted to use the 1961 edition of the Roman Breviary, promulgated by Pope John XXIII, to satisfy their obligation to recite the Divine Office every day. Online resources In 2008, i-breviary, a website containing the Divine Office (both Ordinary and Extraordinary) in various languages, was launched; it combines the modern and ancient breviaries with the latest computer technology. Editions
1482. Breviarium Romanum. Albi, Johann Neumeister. 
1494. Breviarium Romanum. Lyon, Perrinus Lathomi, Bonifacius Johannis & Johannes de Villa Veteri.
1502. Breviarium secundum comunem usus Romanum. Paris, Thielman Kerver.
1508. Breviarium secundum consuetudinem Romanam. Paris, Jean Philippe, Jean Botcholdic, Gherard Berneuelt.
1509. Brevarium secundum ritum sacronsancte Romane ecclesie. Lyon, Ettienne Baland, Martin Boillon.
1534. Breviarium Romanum. Paris, Yolande Bonhomme.
1535. Quignonius Breviary.
1535. Breviarium Romanum Ex Decreto Sancrosancti Concilii Tridentini Restitutum ... editum et recognitum iuxta editionem venetiis.
1536. Breviarium Romanum, nuper reformatum, in quo sacræ Scripturæ libri, probatæque Sanctorum historiæ eleganter beneque dispositæ leguntur; studio & labore Francisci Quignonii, Card. de licentia & facultate Pauli III. Pont. Max. Paris, Galliot du Pré, Jean Kerbriant, Jean Petit.
1537. Breviarium Romanum nuper reformatum. Paris, Yolande Bonhomme. The second recension of the Quignon breviary (ed. 1908).
1570. Pian Breviary (Pius V, Council of Trent): Breviarium Romanum, ex decreto sacrosancti Concilii Tridentini restitutum, Pii V pontificis maximi jussu editum. Rome, Paulus Manutius; Antwerp, Christophe Plantin.
1629. Urban VIII.
1698. Breviarium Romanum, ex decreto sacrosancti Concilii Tridentini restitutum, et Clementis VIII et Urbani VIII auctoritate recognitum, cum officiis sanctorum, novissime per Summos Pontifices usque ad hanc diem concessis; in quatuor Anni Tempora divisum. Pars Autumnalis (1697, 1698); pars Autumnalis (1719).
1740. Breviarium Romanum cum Psalterium, proprio, & Officiis Sanctorum ad usum cleri Basilicae Vaticanae. Pars Autumnalis (1740); pars Aestiva (1740).
1757. Breviarium Romanum, ex decreto sacrosancti Concilii Tridentini restitutum, et Clementis VIII et Urbani VIII auctoritate recognitum, novis Officiis ex Indulto Apostolico huc usque concessis auctum. Pars Aestiva (1757).
1799. Breviarium Romanum, ex decreto sacrosancti Concilii Tridentini restitutum, et Clementis VIII et Urbani VIII auctoritate recognitum, cum officiis sanctorum, novissime per Summos Pontifices usque ad hanc diem concessis, in quatuor Anni Tempora divisum. Pars Verna; pars Autumnalis; pars Hiemalis.
1828. Pars Autumnalis (1828); pars Aestiva (1828).
1861. Pars Autumnalis (1861).
1888. Pars Verna (1888).
1908. Reform of the Roman Breviary by Pope Pius X:
The 1908 Roman Breviary in English (Pre-Pius X Psalter), Winter (part 1)
The 1908 Roman Breviary in English (Pre-Pius X Psalter), Spring (part 2)
The 1908 Roman Breviary in English (Pre-Pius X Psalter), Summer (part 3)
The 1908 Roman Breviary in English (Pre-Pius X Psalter), Autumn/Fall (part 4)
Canonical Hours according to the 1911 Breviarium Romanum, without the festal propers of the Common of the Saints (traditio.com)
1960 (John XXIII). The Roman Breviary in English and Latin: A Bilingual Edition of the Breviarium Romanum with Rubrics in English Only. Baronius Press (2011), 3 vols. divinumofficium.com
1974. Universalis Online Breviary
See also
Book of Hours
Canonical Hours
Horologion
Latin psalters
Little Office of Our Lady
Liturgical books of the Roman Rite
Liturgy of the Hours
External links
Psalter Schemas (Catholic), from 1900 to the present
14th century breviary made in Paris for Marie de Saint Pol, Countess of Pembroke, Cambridge University Library
14th century breviary written in Gothic Textualis script, Center for Digital Initiatives, University of Vermont Libraries
https://en.wikipedia.org/wiki/Bill%20Macy
Bill Macy
Wolf Martin Garber (May 18, 1922 – October 17, 2019), known professionally as Bill Macy, was an American television, film and stage actor, best known for his role in the CBS television series Maude (1972–1978). Early life Bill Macy was born Wolf Martin Garber on May 18, 1922, in Revere, Massachusetts, the son of Mollie (née Friedopfer; 1889–1986) and Michael Garber (1884–1974), a manufacturer. He was raised Jewish in the East Flatbush section of Brooklyn, New York. After graduating from Samuel J. Tilden High School, he served in the United States Army from 1942 to 1946 with the 594th Engineer Boat and Shore Regiment, stationed in the Philippines, Japan and New Guinea. He worked as a cab driver for a decade before being cast as Walter Matthau's understudy in Once More, with Feeling on Broadway in 1958. He portrayed a cab driver on the soap opera The Edge of Night in 1966. Macy was an original cast member of the 1969–1972 Off-Broadway sensation Oh! Calcutta!, performing in the show from 1969 to 1971. He later appeared in the 1972 movie version of the musical. Of appearing fully nude with the rest of the cast in the stage show, he said, "The nudity didn't bother me. I'm from Brooklyn." Macy performed on the P.D.Q. Bach album The Stoned Guest (1970). Television Appreciating Macy's comedic skills off Broadway, Norman Lear brought him to Hollywood, where he first got a small part as a police officer in All in the Family. He was then cast in the role of Walter Findlay, the long-suffering husband of the title character on the 1970s television sitcom Maude, starring Bea Arthur. The show ran for six seasons, from 1972 to 1978. Strangers on the street often called him "Mr. Maude", consoling him for having such a difficult wife. "I used to tell them that people like that really existed," Macy explained. In 1975, Macy and Samantha Harper Macy appeared on the game show Tattletales. In 1986, Macy was a guest on the fourth episode of L.A. Law, playing an older man whose young wife wants a music career. Macy appeared in the television movie Perry Mason: The Case of the Murdered Madam (1987) as banker Richard Wilson. He occasionally appeared on Seinfeld as one of the residents of the Florida retirement community where Jerry Seinfeld's parents lived. Macy made guest appearances as a patient on Chicago Hope and as an aging gambler on the series Las Vegas. Macy's last television role was in a 2010 episode of Jada Pinkett Smith's series Hawthorne. Film Macy appeared as the jury foreman in The Producers in 1967, with the memorable sole line "We find the defendants incredibly guilty". Other memorable roles include the co-inventor of the "Opti-Grab" in the 1979 Steve Martin comedy The Jerk and the head television writer in My Favorite Year (1982). Other film credits included roles in Death at Love House (1976), The Late Show (1977), Serial (1980), Movers & Shakers (1985), Bad Medicine (1985), Tales from the Darkside (1985, "Lifebomb" episode), Sibling Rivalry (1990), The Doctor (1991), Me Myself & I (1992), Analyze This (1999), Surviving Christmas (2004), The Holiday (2006), and Mr. Woodcock (2007). Personal life Macy met his future wife, Samantha Harper, on the set of Oh! Calcutta! in 1969. They married in 1975. Macy died on October 17, 2019, at the age of 97; no cause was given. 
https://en.wikipedia.org/wiki/Bruin
Bruin
Bruin (from Dutch for "brown") is an English folk term for the brown bear. Bruin, Bruins or BRUIN may also refer to:
Places
Lake Bruin, an ox-bow lake of the Mississippi River located in northeastern Louisiana
Lake Bruin State Park
Bruin, Kentucky, United States
Bruin, Pennsylvania, United States
Bruin's Slave Jail, a building in Alexandria, Virginia
Sports team nicknames and mascots
Ayr Bruins, a defunct Scottish ice hockey team
Bellevue University, Bellevue, Nebraska
Belmont University, Nashville, Tennessee
Bob Jones University, Greenville, South Carolina
Boston Bruins, an American NHL hockey team
UCLA Bruins, collegiate sports teams located in Los Angeles, California
Chilliwack Bruins, a former Canadian major junior ice hockey team in Chilliwack, British Columbia
George Fox University, Newberg, Oregon
Kellogg Community College, Battle Creek, Michigan
New Westminster Bruins, a former Canadian major junior ice hockey team in New Westminster, British Columbia
Piedmont International University, Winston-Salem, North Carolina
Providence Bruins, an American AHL hockey team in Providence, Rhode Island
Salt Lake Community College, Salt Lake County, Utah
Other uses
Bruin (surname)
Oud bruin, a Belgian beer
Heineken Oud Bruin, a Dutch beer
Yamaha Bruin 350, a utility all-terrain vehicle
Brown University Interactive Language, a programming language
Rasmus Klump, a comic strip published as Bruin
Bruin, a brown bear in the Reynard cycle fables
See also
List of Bruin mascots
Ursidae
https://en.wikipedia.org/wiki/Beta%20sheet
Beta sheet
The beta sheet (β-sheet, also β-pleated sheet) is a common motif of regular protein secondary structure. Beta sheets consist of beta strands (β-strands) connected laterally by at least two or three backbone hydrogen bonds, forming a generally twisted, pleated sheet. A β-strand is a stretch of polypeptide chain, typically 3 to 10 amino acids long, with its backbone in an extended conformation. The supramolecular association of β-sheets has been implicated in the formation of the fibrils and protein aggregates observed in amyloidosis, notably Alzheimer's disease. History The first β-sheet structure was proposed by William Astbury in the 1930s. He proposed the idea of hydrogen bonding between the peptide bonds of parallel or antiparallel extended β-strands. However, Astbury did not have the necessary data on the bond geometry of the amino acids in order to build accurate models, especially since he did not then know that the peptide bond was planar. A refined version was proposed by Linus Pauling and Robert Corey in 1951. Their model incorporated the planarity of the peptide bond, which they had previously explained as resulting from keto-enol tautomerization. Structure and orientation Geometry The majority of β-strands are arranged adjacent to other strands and form an extensive hydrogen bond network with their neighbors, in which the N−H groups in the backbone of one strand establish hydrogen bonds with the C=O groups in the backbone of the adjacent strands. In the fully extended β-strand, successive side chains point straight up and straight down in an alternating pattern. Adjacent β-strands in a β-sheet are aligned so that their Cα atoms are adjacent and their side chains point in the same direction. The "pleated" appearance of β-strands arises from tetrahedral chemical bonding at the Cα atom; for example, if a side chain points straight up, then the bonds to the C′ must point slightly downwards, since the bond angle is approximately 109.5°. The pleating causes the distance between the Cα atoms of residues i and i+2 to be approximately 6 Å, rather than the 7.6 Å expected from two fully extended trans peptides. The "sideways" distance between adjacent Cα atoms in hydrogen-bonded β-strands is roughly 5 Å. However, β-strands are rarely perfectly extended; rather, they exhibit a twist. The energetically preferred dihedral angles near (φ, ψ) = (–135°, 135°) (broadly, the upper left region of the Ramachandran plot) diverge significantly from the fully extended conformation (φ, ψ) = (–180°, 180°). The twist is often associated with alternating fluctuations in the dihedral angles to prevent the individual β-strands in a larger sheet from splaying apart. A good example of a strongly twisted β-hairpin can be seen in the protein BPTI. The side chains point outwards from the folds of the pleats, roughly perpendicular to the plane of the sheet; successive amino acid residues point outwards on alternating faces of the sheet. Hydrogen bonding patterns Because peptide chains have a directionality conferred by their N-terminus and C-terminus, β-strands too can be said to be directional. They are usually represented in protein topology diagrams by an arrow pointing toward the C-terminus. Adjacent β-strands can form hydrogen bonds in antiparallel, parallel, or mixed arrangements. In an antiparallel arrangement, the successive β-strands alternate directions so that the N-terminus of one strand is adjacent to the C-terminus of the next. 
This is the arrangement that produces the strongest inter-strand stability, because it allows the inter-strand hydrogen bonds between carbonyls and amines to be planar, which is their preferred orientation. The peptide backbone dihedral angles (φ, ψ) are about (–140°, 135°) in antiparallel sheets. In this case, if the Cα atoms of two residues i and j are adjacent in two hydrogen-bonded β-strands, then the two residues form mutual backbone hydrogen bonds to each other's flanking peptide groups; this is known as a close pair of hydrogen bonds. In a parallel arrangement, all of the N-termini of successive strands are oriented in the same direction; this orientation may be slightly less stable because it introduces nonplanarity in the inter-strand hydrogen bonding pattern. The dihedral angles (φ, ψ) are about (–120°, 115°) in parallel sheets. It is rare to find fewer than five interacting parallel strands in a motif, suggesting that a smaller number of strands may be unstable. It is also fundamentally more difficult for parallel β-sheets to form, because strands with their N- and C-termini aligned must necessarily be very distant from each other in sequence. There is also evidence that parallel β-sheet may be more stable, since small amyloidogenic sequences appear to generally aggregate into β-sheet fibrils composed primarily of parallel β-sheet strands, where one would expect antiparallel fibrils if antiparallel were more stable. In parallel β-sheet structure, if the Cα atoms of two residues i and j are adjacent in two hydrogen-bonded β-strands, then they do not hydrogen bond to each other; rather, one residue forms hydrogen bonds to the residues that flank the other (but not vice versa). For example, residue i may form hydrogen bonds to residues j − 1 and j + 1; this is known as a wide pair of hydrogen bonds. By contrast, residue j may hydrogen-bond to different residues altogether, or to none at all. The hydrogen bond arrangement in parallel beta sheet resembles that in an amide ring motif with 11 atoms. Finally, an individual strand may exhibit a mixed bonding pattern, with a parallel strand on one side and an antiparallel strand on the other. Such arrangements are less common than a random distribution of orientations would suggest, which hints that this pattern is less stable than the antiparallel arrangement. However, bioinformatic analysis always struggles to extract structural thermodynamics, since numerous other structural features are present in whole proteins; proteins are also inherently constrained by folding kinetics as well as folding thermodynamics, so one must be careful in drawing conclusions about stability from bioinformatic analysis. The hydrogen bonding of β-strands need not be perfect, but can exhibit localized disruptions known as β-bulges. The hydrogen bonds lie roughly in the plane of the sheet, with the peptide carbonyl groups pointing in alternating directions with successive residues; for comparison, successive carbonyls point in the same direction in the alpha helix. Amino acid propensities Large aromatic residues (tyrosine, phenylalanine, tryptophan) and β-branched amino acids (threonine, valine, isoleucine) are favored in β-strands in the middle of β-sheets. Different types of residues (such as proline) are likely to be found in the edge strands of β-sheets, presumably to avoid the "edge-to-edge" association between proteins that might lead to aggregation and amyloid formation. 
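The dihedral-angle values quoted above can be restated as a quick computational check. The Python sketch below is only a loose Ramachandran-window test whose boundaries are illustrative assumptions chosen to bracket the numbers given in this article; it is not a standard assignment method such as DSSP, which relies on hydrogen-bond geometry rather than dihedrals alone.

def in_beta_region(phi, psi, phi_range=(-180.0, -45.0), psi_range=(90.0, 180.0)):
    # Broad upper-left region of the Ramachandran plot, per the text above.
    return phi_range[0] <= phi <= phi_range[1] and psi_range[0] <= psi <= psi_range[1]

# Typical (phi, psi) values quoted in the text, in degrees:
examples = {
    "ideal twisted strand": (-135.0, 135.0),
    "antiparallel sheet": (-140.0, 135.0),
    "parallel sheet": (-120.0, 115.0),
    "alpha helix (contrast)": (-60.0, -45.0),
}
for name, (phi, psi) in examples.items():
    print(f"{name:24s} in beta region: {in_beta_region(phi, psi)}")

Run as written, the three sheet conformations fall inside the window and the helical pair falls outside, which is all the sketch is meant to show.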
Common structural motifs β-hairpin motif A very simple structural motif involving β-sheets is the β-hairpin, in which two antiparallel strands are linked by a short loop of two to five residues, of which one is frequently a glycine or a proline, both of which can assume the dihedral-angle conformations required for a tight turn or a β-bulge loop. Individual strands can also be linked in more elaborate ways with longer loops that may contain α-helices. Greek key motif The Greek key motif consists of four adjacent antiparallel strands and their linking loops. Three of the strands are connected by hairpins, while the fourth is adjacent to the first and linked to the third by a longer loop. This type of structure forms easily during the protein folding process. It was named after a pattern common to Greek ornamental artwork (see meander). β-α-β motif Due to the chirality of their component amino acids, all strands exhibit a right-handed twist, which is evident in most higher-order β-sheet structures. In particular, the linking loop between two parallel strands almost always has a right-handed crossover chirality, which is strongly favored by the inherent twist of the sheet. This linking loop frequently contains a helical region, in which case it is called a β-α-β motif. A closely related motif called a β-α-β-α motif forms the basic component of the most commonly observed protein tertiary structure, the TIM barrel. β-meander motif A simple supersecondary protein topology composed of two or more consecutive antiparallel β-strands linked together by hairpin loops. This motif is common in β-sheets and can be found in several structural architectures including β-barrels and β-propellers. The vast majority of β-meander regions in proteins are found packed against other motifs or sections of the polypeptide chain, forming portions of the hydrophobic core that canonically drives formation of the folded structure. However, several notable exceptions include the Outer Surface Protein A (OspA) variants and the Single Layer β-sheet Proteins (SLBPs), which contain single-layer β-sheets in the absence of a traditional hydrophobic core. These β-rich proteins feature extended single-layer β-meander β-sheets that are primarily stabilized via inter-β-strand interactions and hydrophobic interactions present in the turn regions connecting individual strands. Psi-loop motif The psi-loop (Ψ-loop) motif consists of two antiparallel strands with one strand in between that is connected to both by hydrogen bonds. There are four possible strand topologies for single Ψ-loops. This motif is rare, as the process resulting in its formation seems unlikely to occur during protein folding. The Ψ-loop was first identified in the aspartic protease family. Structural architectures of proteins with β-sheets β-sheets are present in all-β, α+β and α/β domains, and in many peptides or small proteins with poorly defined overall architecture. All-β domains may form β-barrels, β-sandwiches, β-prisms, β-propellers, and β-helices. Structural topology The topology of a β-sheet describes the order of hydrogen-bonded β-strands along the backbone. For example, the flavodoxin fold has a five-stranded, parallel β-sheet with topology 21345; thus, the edge strands are β-strand 2 and β-strand 5 along the backbone. Spelled out explicitly, β-strand 2 is H-bonded to β-strand 1, which is H-bonded to β-strand 3, which is H-bonded to β-strand 4, which is H-bonded to β-strand 5, the other edge strand. 
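The numeric topology notation just introduced is easy to manipulate mechanically. The following Python sketch is illustrative only (the function name and data representation are assumptions, not a standard library API): it expands a topology string, read as the spatial order of strands across the sheet, into the hydrogen-bonded neighbor pairs and the two edge strands.

def sheet_neighbors(topology: str):
    # Spatially adjacent strands in the sheet are the hydrogen-bonded ones.
    strands = [int(c) for c in topology]
    pairs = list(zip(strands, strands[1:]))
    edges = (strands[0], strands[-1])
    return pairs, edges

pairs, edges = sheet_neighbors("21345")  # flavodoxin fold, per the text
print(pairs)   # [(2, 1), (1, 3), (3, 4), (4, 5)]
print(edges)   # (2, 5) -- the edge strands named above

Applied to "21345" this reproduces the flavodoxin description spelled out above, and the same function handles the 4123 topology of the Greek key motif discussed next.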
In the same system, the Greek key motif described above has a 4123 topology. The secondary structure of a β-sheet can be described roughly by giving the number of strands, their topology, and whether their hydrogen bonds are parallel or antiparallel. β-sheets can be open, meaning that they have two edge strands (as in the flavodoxin fold or the immunoglobulin fold), or they can be closed β-barrels (such as the TIM barrel). β-barrels are often described by their stagger or shear. Some open β-sheets are very curved and fold over on themselves (as in the SH3 domain) or form horseshoe shapes (as in the ribonuclease inhibitor). Open β-sheets can assemble face-to-face (such as the β-propeller domain or immunoglobulin fold) or edge-to-edge, forming one big β-sheet. Dynamic features β-pleated sheet structures are made from extended β-strand polypeptide chains, with strands linked to their neighbours by hydrogen bonds. Due to this extended backbone conformation, β-sheets resist stretching. β-sheets in proteins may carry out a low-frequency accordion-like motion, as observed by Raman spectroscopy and analyzed with the quasi-continuum model. Parallel β-helices A β-helix is formed from repeating structural units consisting of two or three short β-strands linked by short loops. These units "stack" atop one another in a helical fashion so that successive repetitions of the same strand hydrogen-bond with each other in a parallel orientation. See the β-helix article for further information. In left-handed β-helices, the strands themselves are quite straight and untwisted; the resulting helical surfaces are nearly flat, forming a regular triangular prism shape, as in the archaeal carbonic anhydrase 1QRE. Other examples are the lipid A synthesis enzyme LpxA and insect antifreeze proteins with a regular array of Thr sidechains on one face that mimic the structure of ice. Right-handed β-helices, typified by the pectate lyase enzyme or the P22 phage tailspike protein, have a less regular cross-section, longer and indented on one of the sides; of the three linker loops, one is consistently just two residues long and the others are variable, often elaborated to form a binding or active site. A two-sided β-helix (right-handed) is found in some bacterial metalloproteases; its two loops are each six residues long and bind stabilizing calcium ions to maintain the integrity of the structure, using the backbone and the Asp side chain oxygens of a GGXGXD sequence motif. This fold is called a β-roll in the SCOP classification. In pathology Some proteins that are disordered or helical as monomers, such as amyloid β (see amyloid plaque), can form β-sheet-rich oligomeric structures associated with pathological states. The amyloid β protein's oligomeric form is implicated as a cause of Alzheimer's. Its structure has yet to be determined in full, but recent data suggest that it may resemble an unusual two-strand β-helix. The side chains from the amino acid residues found in a β-sheet structure may also be arranged such that many of the adjacent sidechains on one side of the sheet are hydrophobic, while many of those adjacent to each other on the alternate side of the sheet are polar or charged (hydrophilic), which can be useful if the sheet is to form a boundary between polar/watery and nonpolar/greasy environments. 
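The alternating-faces property noted in the last paragraph can be made concrete: because successive side chains point to opposite faces of the sheet, splitting a strand's sequence into even- and odd-numbered positions separates its two faces. In the Python sketch below, both the crude hydrophobicity grouping and the eight-residue strand are made-up illustrations, not data from any real protein.

HYDROPHOBIC = set("AVILMFWYC")  # rough one-letter grouping; an assumption

def face_split(strand_seq: str):
    # Residues 0, 2, 4, ... face one side of the sheet; 1, 3, 5, ... the other.
    return strand_seq[0::2], strand_seq[1::2]

def fraction_hydrophobic(seq: str) -> float:
    return sum(r in HYDROPHOBIC for r in seq) / len(seq) if seq else 0.0

strand = "TVSIFAEL"  # hypothetical strand sequence
for label, face in zip(("face A", "face B"), face_split(strand)):
    print(label, face, f"{fraction_hydrophobic(face):.2f}")

For an amphipathic strand sitting at a polar/nonpolar boundary, one printed fraction would be high and the other low, as in this toy example (0.25 versus 1.00).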
See also
Collagen helix
Foldamers
Folding (chemistry)
Tertiary structure
α-helix
Structural motif
External links
Anatomy & Taxonomy of Protein Structures (survey)
NetSurfP (secondary structure and surface accessibility predictor)
https://en.wikipedia.org/wiki/Beryl
Beryl
Beryl is a mineral composed of beryllium aluminium silicate, with the chemical formula Be3Al2Si6O18. Well-known varieties of beryl include emerald and aquamarine. Naturally occurring, hexagonal crystals of beryl can be up to several meters in size, but terminated crystals are relatively rare. Pure beryl is colorless, but it is frequently tinted by impurities; possible colors are green, blue, yellow, pink, and red (the rarest). It is an ore source of beryllium. Etymology The word beryl is borrowed, via Old French and Latin, from Ancient Greek βήρυλλος bḗryllos, which referred to a 'precious blue-green color-of-sea-water stone'; from Prakrit veruḷiya, veḷuriya 'beryl' (compare the pseudo-Sanskritization वैडूर्य vaiḍūrya 'cat's eye; jewel; lapis lazuli', traditionally explained as '(brought) from (the city of) Vidūra'), which is ultimately of Dravidian origin, maybe from the name of Belur or Velur, a town in Karnataka, southern India. The term was later adopted for the mineral beryl more exclusively. When the first eyeglasses were constructed in 13th-century Italy, the lenses were made of beryl (or of rock crystal) as glass could not be made clear enough. Consequently, glasses were named Brillen in German (bril in Dutch and briller in Danish). Deposits Beryl is a common mineral, and it is widely distributed in nature. It is found most commonly in granitic pegmatites, but also occurs in mica schists, such as those of the Ural Mountains, and in limestone in Colombia. It is less common in ordinary granite and is only infrequently found in nepheline syenite. Beryl is often associated with tin and tungsten ore bodies formed as high-temperature hydrothermal veins. In granitic pegmatites, beryl is found in association with quartz, potassium feldspar, albite, muscovite, biotite, and tourmaline. Beryl is sometimes found in metasomatic contacts of igneous intrusions with gneiss, schist, or carbonate rocks. Common beryl, mined as beryllium ore, is found in small deposits in many countries, but the main producers are Russia, Brazil, and the United States. New England's pegmatites have produced some of the largest beryls found, including one massive crystal from the Bumpus Quarry in Albany, Maine, with a mass of around 18 metric tons; beryl is New Hampshire's state mineral. The world's largest known naturally occurring crystal of any mineral is a crystal of beryl from Malakialina, Madagascar. Crystal habit and structure Beryl belongs to the hexagonal crystal system. Normally beryl forms hexagonal columns but can also occur in massive habits. As a cyclosilicate, beryl incorporates rings of silicate tetrahedra that are arranged in columns along the c axis and as parallel layers perpendicular to the c axis, forming channels along the c axis. These channels permit a variety of ions, neutral atoms, and molecules to be incorporated into the crystal, thus disrupting the overall charge of the crystal and permitting further substitutions in the aluminium, silicon, and beryllium sites of the crystal structure. These impurities give rise to the variety of colors of beryl that can be found. Increasing alkali content within the silicate ring channels causes increases in the refractive indices and birefringence. Human health impact Beryl is a beryllium compound, and beryllium is a known carcinogen with acute toxic effects leading to pneumonitis when inhaled. Care must thus be used when mining, handling, and refining these gems. 
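As a back-of-the-envelope check on beryl's role as a beryllium ore, the formula Be3Al2Si6O18 given above fixes the beryllium mass fraction. A short Python sketch using standard rounded atomic masses:

ATOMIC_MASS = {"Be": 9.012, "Al": 26.982, "Si": 28.086, "O": 15.999}  # g/mol
FORMULA = {"Be": 3, "Al": 2, "Si": 6, "O": 18}  # Be3Al2Si6O18

molar_mass = sum(ATOMIC_MASS[e] * n for e, n in FORMULA.items())
be_fraction = FORMULA["Be"] * ATOMIC_MASS["Be"] / molar_mass
print(f"molar mass: {molar_mass:.1f} g/mol")  # ~537.5 g/mol
print(f"Be content: {be_fraction:.1%}")       # ~5.0% by mass

The result, roughly 537.5 g/mol and about 5% beryllium by mass, shows why ore-grade beryl must be processed in bulk to recover comparatively small amounts of the metal.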
Varieties Aquamarine and maxixe Aquamarine (from Latin aqua marina, "sea water") is a blue or cyan variety of beryl. It occurs at most localities which yield ordinary beryl. The gem-gravel placer deposits of Sri Lanka contain aquamarine. Green-yellow beryl, such as that occurring in Brazil, is sometimes called chrysolite aquamarine. The deep blue version of aquamarine is called maxixe. The pale blue color of aquamarine is attributed to Fe2+. Fe3+ ions produce golden-yellow color, and when both Fe2+ and Fe3+ are present, the color is a darker blue as in maxixe. Decoloration of maxixe by light or heat thus may be due to the charge transfer between Fe3+ and Fe2+. In the United States, aquamarines can be found at the summit of Mt. Antero in the Sawatch Range in central Colorado, and in the New England and North Carolina pegmatites. In Wyoming, aquamarine has been discovered in the Big Horn Mountains, near Powder River Pass. Another location within the United States is the Sawtooth Range near Stanley, Idaho, although the minerals lie within a wilderness area, which prevents collecting. In Brazil, there are mines in the states of Minas Gerais, Espírito Santo, and Bahia, and to a lesser extent in Rio Grande do Norte. The mines of Colombia, Madagascar, Russia, Namibia, Zambia, Malawi, Tanzania, and Kenya also produce aquamarine. Emerald Emerald is green beryl, colored by around 2% chromium and sometimes vanadium. Most emeralds are highly included, so their toughness (resistance to breakage) is classified as generally poor. The modern English word "emerald" comes via Middle English emeraude and Old French ésmeraude, from Medieval Latin esmaraldus, from Latin smaragdus, from Greek σμάραγδος smaragdos meaning 'green gem', from Hebrew ברקת bareket (one of the twelve stones in the Hoshen pectoral pendant of the Kohen HaGadol), meaning 'lightning flash', referring to 'emerald', relating to Akkadian baraqtu, meaning 'emerald', and possibly relating to the Sanskrit word मरकत marakata, meaning 'green'. The Semitic word אזמרגד izmargad, meaning 'emerald', is a back-loan, deriving from Greek smaragdos. Emeralds in antiquity were mined by the Egyptians and in what is now Austria, as well as Swat in contemporary Pakistan. A rare type of emerald known as a trapiche emerald is occasionally found in the mines of Colombia. A trapiche emerald exhibits a "star" pattern; it has raylike spokes of dark carbon impurities that give the emerald a six-pointed radial pattern. It is named for the trapiche, a grinding wheel used to process sugarcane in the region. Colombian emeralds are generally the most prized due to their transparency and fire. Some of the rarest emeralds come from the two main emerald belts in the Eastern Ranges of the Colombian Andes: Muzo and Coscuez west of the Altiplano Cundiboyacense, and Chivor and Somondoco to the east. Fine emeralds are also found in other countries, such as Zambia, Brazil, Zimbabwe, Madagascar, Pakistan, India, Afghanistan and Russia. In the US, emeralds can be found in Hiddenite, North Carolina. In 1998, emeralds were discovered in Yukon. Emerald is a rare and valuable gemstone and, as such, it has provided the incentive for developing synthetic emeralds. Both hydrothermal and flux-growth synthetics have been produced. The first commercially successful emerald synthesis process was that of Carroll Chatham. The other large producer of flux emeralds was Pierre Gilson Sr., whose product has been on the market since 1964. 
Gilson's emeralds are usually grown on natural colorless beryl seeds which become coated on both sides. Growth occurs at the rate of about 1 mm per month, a typical seven-month growth run producing emerald crystals 7 mm thick. The green color of emeralds is widely attributed to the presence of Cr3+ ions. Intensely green beryls from Brazil, Zimbabwe and elsewhere in which the color is attributed to vanadium have also been sold and certified as emeralds. Golden beryl and heliodor Golden beryl can range in color from pale yellow to a brilliant gold. Unlike emerald, golden beryl generally has very few flaws. The term "golden beryl" is sometimes synonymous with heliodor (from Greek hēlios – ἥλιος "sun" + dōron – δῶρον "gift"), but golden beryl refers to pure yellow or golden yellow shades, while heliodor refers to the greenish-yellow shades. The golden yellow color is attributed to Fe3+ ions. Both golden beryl and heliodor are used as gems. Probably the largest cut golden beryl is the flawless 2054-carat stone on display in the Hall of Gems, Washington, D.C., United States. Goshenite Colorless beryl is called goshenite. The name originates from Goshen, Massachusetts, where it was originally discovered. In the past, goshenite was used for manufacturing eyeglasses and lenses owing to its transparency. Nowadays, it is most commonly used for gemstone purposes. The gem value of goshenite is relatively low. However, goshenite can be colored yellow, green, pink, blue and in intermediate colors by irradiating it with high-energy particles. The resulting color depends on the content of Ca, Sc, Ti, V, Fe, and Co impurities. Morganite Morganite, also known as "pink beryl", "rose beryl", "pink emerald" (which is not a legal term according to the new Federal Trade Commission Guidelines and Regulations), and "cesian (or caesian) beryl", is a rare light pink to rose-colored gem-quality variety of beryl. Orange/yellow varieties of morganite can also be found, and color banding is common. It can be routinely heat treated to remove patches of yellow and is occasionally treated by irradiation to improve its color. The pink color of morganite is attributed to Mn2+ ions. Red beryl The red variety of beryl ("bixbite") was first described in 1904 from its type locality, Maynard's Claim (Pismire Knolls), Thomas Range, Juab County, Utah. The dark red color is attributed to Mn3+ ions. The old synonym "bixbite" is deprecated by the CIBJO because of the possibility of confusion with the mineral bixbyite (both named after the mineralogist Maynard Bixby). Red beryl was formerly marketed as "red emerald" or "scarlet emerald", but such "emerald" terminology is now prohibited in the US. Red beryl is very rare and has been reported from only a handful of North American locations: the Wah Wah Mountains, Beaver County, Utah; the Thomas Range, Juab County, Utah; and Paramount Canyon and Round Mountain, Sierra County, New Mexico, although the New Mexico locality does not often produce gem-grade stones. The bulk of gem-grade red beryl comes from the Ruby-Violet Claim in the Wah Wah Mountains of midwestern Utah, discovered in 1958 by Lamar Hodges, of Fillmore, Utah, while he was prospecting for uranium. Red beryl has been known to be confused with pezzottaite, a caesium analog of beryl, found in Madagascar and, more recently, Afghanistan; cut gems of the two varieties can be distinguished by their difference in refractive index, and the rough crystals easily by their differing crystal systems (pezzottaite trigonal, red beryl hexagonal). 
Synthetic red beryl is also produced. Like emerald, and unlike most other varieties of beryl, red beryls are usually highly included. While gem beryls are ordinarily found in pegmatites and certain metamorphic stones, red beryl occurs in topaz-bearing rhyolites. It is formed by crystallizing under low pressure and high temperature from a pneumatolytic phase along fractures or within near-surface miarolitic cavities of the rhyolite. Associated minerals include bixbyite, quartz, orthoclase, topaz, spessartine, pseudobrookite and hematite.
See also
List of emeralds by size
https://en.wikipedia.org/wiki/Basel
Basel
Basel, also known as Basle, is a city in northwestern Switzerland on the river Rhine. Basel is Switzerland's third-most-populous city (after Zürich and Geneva) with about 175,000 inhabitants. The official language of Basel is (the Swiss variety of Standard) German, but the main spoken language is the local Basel German dialect. Basel is commonly considered to be the cultural capital of Switzerland, and the city is famous for its many museums, including the Kunstmuseum, which is the first collection of art accessible to the public in the world (1661) and the largest museum of art in Switzerland, the Fondation Beyeler (located in Riehen), the Museum Tinguely and the Museum of Contemporary Art, which is the first public museum of contemporary art in Europe. Forty museums are spread throughout the city-canton, making Basel one of the largest cultural centres in relation to its size and population in Europe. The University of Basel, Switzerland's oldest university (founded in 1460), and the city's centuries-long commitment to humanism, have made Basel a safe haven at times of political unrest in other parts of Europe for such notable people as Erasmus of Rotterdam, the Holbein family, Friedrich Nietzsche, Carl Jung, and in the 20th century also Hermann Hesse and Karl Jaspers. Basel was the seat of a Prince-Bishopric starting in the 11th century, and joined the Swiss Confederacy in 1501. The city has been a commercial hub and an important cultural centre since the Renaissance, and has emerged as a centre for the chemical and pharmaceutical industries in the 20th century. In 1897, Basel was chosen by Theodor Herzl as the location for the first World Zionist Congress, and altogether the congress was held there ten times over a time span of 50 years, more than in any other location. The city is also home to the world headquarters of the Bank for International Settlements. The name of the city is internationally known through institutions like the Basel Accords, Art Basel and FC Basel. In 2019 Basel was ranked the tenth most liveable city in the world by Mercer. Name The name of Basel is first recorded as Basilia in the 3rd century (237/8), at the time referring to the Roman castle. This name is mostly interpreted as deriving from the personal name Basilius, from a toponym villa Basilia ("estate of Basilius") or similar. Another suggestion derives it from a name Basilia attested in northern France as a development of basilica, the term for a public or church building (as in Bazeilles), but all of these names reference early church buildings of the 4th or 5th century and cannot be adduced for the 3rd-century attestation of Basilia. By popular etymology, or simple assonance, the basilisk became closely associated with the city; it was used as a heraldic supporter from 1448, represented on coins minted by the city, and frequently found in ornaments. The Middle French form Basle was adopted into English, but this form gradually fell out of use, although it continues to be used in some sections of British English, including by the BBC. Currently, the spelling Basel is most often used, to match the official German spelling. In French, Basle was still in use in the 18th century, but was gradually replaced by the modern French spelling Bâle. In Icelandic, the city is recorded as Buslaraborg in the 12th-century itinerary Leiðarvísir og borgarskipan. History Early history There are traces of a settlement at the nearby Rhine knee from the early La Tène period (5th century BC). 
In the 2nd century BC, there was a village of the Raurici at the site of Basel-Gasfabrik (to the northwest of the Old City, and likely identical with the town of Arialbinnum that was mentioned on the Tabula Peutingeriana). The unfortified settlement was abandoned in the 1st century BC in favour of an oppidum on the site of Basel Minster, probably in reaction to the Roman invasion of Gaul. In Roman Gaul, Augusta Raurica was established some distance from Basel as the regional administrative centre, while a castrum (fortified camp) was built on the site of the Celtic oppidum. In AD 83, the area was incorporated into the Roman province of Germania Superior. The Roman senator Munatius Plancus has been regarded since the Renaissance as the traditional founder of Basel. Roman control over the area deteriorated in the 3rd century, and Basel became an outpost of the Provincia Maxima Sequanorum formed by Diocletian. Basilia is first named by Ammianus Marcellinus in his Res Gestae, as part of the Roman military fortifications along the Rhine in the late 4th century. The Germanic confederation of the Alemanni attempted to cross the Rhine several times in the 4th century but was repelled; one such event was the Battle of Solicinium (368). However, in the great invasion of AD 406, the Alemanni appear to have crossed the Rhine a final time, conquering and then settling what is today Alsace and a large part of the Swiss Plateau. The Duchy of Alemannia fell under Frankish rule in the 6th century. The Alemannic and Frankish settlement of Basel gradually grew around the old Roman castle in the 6th and 7th century. It appears that Basel surpassed the ancient regional capital of Augusta Raurica by the 7th century; based on the evidence of a gold tremissis (a small gold coin with the value of a third of a solidus) with the inscription Basilia fit, Basel seems to have minted its own coins in the 7th century. Basel at this time was part of the Archdiocese of Besançon. A separate bishopric of Basel, replacing the ancient bishopric of Augusta Raurica, was established in the 8th century. Under bishop Haito (r. 806–823), the first cathedral was built on the site of the Roman castle (replaced by a Romanesque structure consecrated in 1019). At the partition of the Carolingian Empire by the Treaty of Verdun in 843, Basel was first given to West Francia, of which it formed a German-speaking exclave; it passed to East Francia with the Treaty of Meerssen of 870. Basel was destroyed by the Magyars in 917. The rebuilt town became part of Upper Burgundy, and as such was incorporated into the Holy Roman Empire in 1032. Prince-Bishopric of Basel From the donation by Rudolph III of Burgundy of the Moutier-Grandval Abbey and all its possessions to Bishop Adalbero II of Metz in 999 until the Reformation, Basel was ruled by Prince-Bishops. In 1019, the construction of the cathedral of Basel (known locally as the Münster) began under Henry II, Holy Roman Emperor. In the 11th to 12th century, Basel gradually acquired the characteristics of a medieval city. The main market place is first mentioned in 1091. The first city walls were constructed around 1100 (with improvements made in the mid-13th and in the late 14th century). A city council of nobles and burghers is recorded for 1185, and the first mayor, Heinrich Steinlin of Murbach, for 1253. 
The first bridge across the Rhine was built in 1225 under bishop Heinrich von Thun (at the location of the modern Middle Bridge), and from this time the settlement of Kleinbasel gradually formed around the bridgehead on the far river bank. The bridge was largely funded by Basel's Jewish community, which had settled there a century earlier. For many centuries to come Basel possessed the only permanent bridge over the river "between Lake Constance and the sea". The first city guild, that of the furriers, was established in 1226. A total of about fifteen guilds were established in the course of the 13th century, reflecting the increasing economic prosperity of the city. The Crusade of 1267 set out from Basel. Political conflicts between the bishops and the burghers began in the mid-13th century and continued throughout the 14th century. By the late 14th century, the city was for all practical purposes independent, although it continued to nominally pledge fealty to the bishops. The House of Habsburg attempted to gain control over the city. This was not successful, but it caused a political split among the burghers of Basel into a pro-Habsburg faction, known as the Sterner, and an anti-Habsburg faction, the Psitticher. The Black Death reached Basel in 1348. The Jews were blamed, and an estimated 50 to 70 Jews were executed by burning on 16 January 1349 in what has become known as the Basel massacre. The Basel earthquake of 1356 destroyed much of the city along with a number of castles in the vicinity. A riot on 26 February 1376, known as the Böse Fasnacht ("Evil Carnival"), led to the killing of a number of men of Leopold III, Duke of Austria. This was seen as a serious breach of the peace, and the city council blamed "foreign ruffians" for it and executed twelve alleged perpetrators. Leopold nevertheless had the city placed under imperial ban, and in a treaty of 9 July, Basel was given a heavy fine and was placed under Habsburg control. To free itself from Habsburg hegemony, Basel joined the Swabian League of Cities in 1385, and many knights of the pro-Habsburg faction, along with Duke Leopold himself, were killed in the Battle of Sempach the following year. A formal treaty with Habsburg was made in 1393. Basel had gained its de facto independence from both the bishop and the Habsburgs and was free to pursue its own policy of territorial expansion, beginning around 1400. The unique representation of a bishop's crozier as the heraldic charge in the coat of arms of Basel first appears in the form of a gilded wooden staff in the 12th century. It is of unknown origin or significance (beyond its obvious status as a bishop's crozier), but it is assumed to have represented a relic, possibly attributed to Saint Germanus of Granfelden. This staff (known as the Baselstab) became a symbol representing the Basel diocese, and it is depicted in bishops' seals of the late medieval period. It is represented in a heraldic context in the early 14th century, not yet as a heraldic charge but as a kind of heraldic achievement flanked by the heraldic shields of the bishop. The use of the Baselstab in black as the coat of arms of the city was introduced in 1385. From this time, the Baselstab in red represented the bishop, and the same charge in black represented the city. The blazon of the municipal coat of arms is In Silber ein schwarzer Baselstab (Argent, a staff of Basel sable). In 1400, Basel was able to purchase the towns of Liestal, Homburg and Waldenburg with their surrounding territory. 
In 1412 (or earlier), the well-known Gasthof zum Goldenen Sternen was established. Basel became the focal point of western Christendom during the 15th-century Council of Basel (1431–1449), including the 1439 election of antipope Felix V. In 1459, Pope Pius II endowed the University of Basel, where such notables as Erasmus of Rotterdam and Paracelsus later taught. At the same time the new craft of printing was introduced to Basel by apprentices of Johann Gutenberg. In 1461, the land around Farnsburg became a part of Basel. The Schwabe publishing house, founded in 1488 by Johannes Petri, is the oldest publishing house still in business. Johann Froben also operated his printing house in Basel and was notable for publishing works by Erasmus. In 1495, Basel was incorporated into the Upper Rhenish Imperial Circle, and the Bishop of Basel was added to the Bench of the Ecclesiastical Princes of the Imperial Diet. In 1500 the construction of the Basel Münster was finished; in 1521, the bishop's rule over the city effectively came to an end as well. The council, under the supremacy of the guilds, declared that henceforth it would give allegiance only to the Swiss Confederation, to whom the bishop appealed, but in vain.

As a member state in the Swiss Confederacy

The city had remained neutral through the Swabian War of 1499, despite being plundered by soldiers on both sides. The Treaty of Basel ended the war and granted the Swiss confederates exemptions from Emperor Maximilian's taxes and jurisdictions, separating Switzerland de facto from the Holy Roman Empire. On 9 June 1501, Basel joined the Swiss Confederation as its eleventh canton. It was the only canton that was asked to join, rather than the other way round. Basel had a strategic location, good relations with Strasbourg and Mulhouse, and control of the corn imports from Alsace, whereas the Swiss lands were becoming overpopulated and had few resources. A provision of the charter accepting Basel required that in conflicts among the other cantons it was to stay neutral and offer its services for mediation. In 1503, the new bishop Christoph von Utenheim refused to give Basel a new constitution; whereupon, to show its power, the city began to build a new city hall. In 1529, the city became Protestant under Oecolampadius and the bishop's seat was moved to Porrentruy. The bishop's crook was, however, retained as the city's coat of arms. For centuries to come, a handful of wealthy families collectively referred to as the "Daig" played a pivotal role in city affairs as they gradually established themselves as a de facto city aristocracy. The first edition of Christianae religionis institutio (Institutes of the Christian Religion, John Calvin's great exposition of Calvinist doctrine) was published at Basel in March 1536. In 1544, Johann von Brugge, a rich Dutch Protestant refugee, was given citizenship and lived respectably until his death in 1556, when he was buried with honors. His body was exhumed and burnt at the stake in 1559 after it was discovered that he was the Anabaptist David Joris. In 1543, De humani corporis fabrica, the first book on human anatomy, was published and printed in Basel by Andreas Vesalius (1514–1564). There are indications that Joachim Meyer, author of the influential 16th-century martial arts text Kunst des Fechten ("The Art of Fencing"), came from Basel. In 1661, the Amerbachsches Kabinett, a vast collection of exotic artifacts, coins, medals and books, was purchased by Basel. It was to become the first public museum of art.
Its collection became the core of the later Basel Museum of Art. The Bernoulli family, which included important 17th- and 18th-century mathematicians such as Jakob Bernoulli, Johann Bernoulli and Daniel Bernoulli, was from Basel. The 18th-century mathematician Leonhard Euler was born in Basel and studied under Johann Bernoulli.

Modern history

In 1792, the Republic of Rauracia, a revolutionary French client republic, was created. It lasted until 1793. After three years of political agitation and a short civil war in 1833, the disadvantaged countryside seceded from the Canton of Basel, forming the half-canton of Basel-Landschaft. Between 1861 and 1878 the city walls were demolished. On 3 July 1874, Switzerland's first zoo, the Zoo Basel, opened its doors in the south of the city towards Binningen. In 1897 the first World Zionist Congress was held in Basel. Altogether, the World Zionist Congress was held in Basel ten times, more than in any other city in the world. On 16 November 1938, the psychedelic drug LSD was first synthesized by the Swiss chemist Albert Hofmann at Sandoz Laboratories in Basel. In 1967, the population of Basel voted in favor of buying three works of art by the painter Pablo Picasso which were at risk of being sold and taken out of the local museum of art due to a financial crisis on the part of the owner's family. Basel thereby became the first city in the world in which the population of a political community democratically decided to acquire works of art for a public institution. Picasso was so moved by the gesture that he subsequently gifted the city an additional three paintings.

Basel as a historical, international meeting place

Basel has often been the site of peace negotiations and other international meetings. The Treaty of Basel (1499) ended the Swabian War. Two years later Basel joined the Swiss Confederation. The Peace of Basel in 1795 between the French Republic and Prussia and Spain ended the First Coalition against France during the French Revolutionary Wars. In more recent times, the World Zionist Organization held its first congress in Basel from 29 August through 31 August 1897. Because of the Balkan Wars, the (Socialist) Second International held an extraordinary congress at Basel in 1912. In 1989, the Basel Convention was opened for signature with the aim of preventing the export of hazardous waste from wealthy to developing nations for disposal.

Geography and climate

Location

Basel is located in Northwestern Switzerland and is commonly considered to be the capital of that region. It is close to the point where the Swiss, French and German borders meet, and Basel also has suburbs in France and Germany. , the Swiss Basel agglomeration was the third-largest in the country, with a population of 541,000 in 74 municipalities (municipal count as of 2018). The initiative Trinational Eurodistrict Basel (TEB), comprising 62 suburban communes including municipalities in neighboring countries, counted 829,000 inhabitants in 2007.

Topography

Basel has an area, , of . Of this area, or 4.0% is used for agricultural purposes, while or 3.7% is forested. Of the rest of the land, or 86.4% is settled (buildings or roads) and or 6.1% is either rivers or lakes. Of the built-up area, industrial buildings made up 10.2% of the total area, housing and buildings made up 40.7%, and transportation infrastructure made up 24.0%.
Power and water infrastructure, as well as other special developed areas, made up 2.7% of the area, while parks, green belts and sports fields made up 8.9%. All of the forested land area is covered with heavy forests. Of the agricultural land, 2.5% is used for growing crops and 1.3% is pastures. All the water in the municipality is flowing water.

Climate

Under the Köppen system, Basel features an oceanic climate (Köppen: Cfb), although with notable continental influences due to its relatively far inland position, with cool to cold, overcast winters and warm to hot, humid summers. The city averages 118.2 days of rain or snow annually and on average receives of precipitation. The wettest month is May, during which Basel receives an average of of rain. The month with the most days of precipitation is also May, with an average of 11.7 days. The driest month of the year is February, with an average of of precipitation over 8.4 days.

Politics

The city of Basel functions as the capital of the Swiss half-canton of Basel-Stadt.

Canton

The canton of Basel-Stadt consists of three municipalities: Riehen, Bettingen, and the city of Basel itself. The political structure and agencies of the city and the canton are identical.

City Quarters

The city itself has 19 quarters:

Grossbasel (Greater Basel):
1 Altstadt Grossbasel
2 Vorstädte
3 Am Ring
4 Breite
5 St. Alban
6 Gundeldingen
7 Bruderholz
8 Bachletten
9 Gotthelf
10 Iselin
11 St. Johann

Kleinbasel (Lesser Basel):
12 Altstadt Kleinbasel
13 Clara
14 Wettstein
15 Hirzbrunnen
16 Rosental
17 Matthäus
18 Klybeck
19 Kleinhüningen

Government

The city's and canton's executive, the Executive Council (Regierungsrat), consists of seven members elected for a mandate period of four years. They are elected on the same day as the parliament by all inhabitants entitled to vote, but by means of a Majorz (majority) system, and the council operates as a collegiate authority. The president is elected as such by a public election, while the heads of the other departments are appointed by the collegiate. The current president is Beat Jans. The executive body holds its meetings in the red Town Hall () on the central Marktplatz. The building was built in 1504–14. , Basel's Executive Council is made up of three representatives of the SP (Social Democratic Party), including the president, two of the LDP (Liberal-Demokratische Partei of Basel), and one member each of the Green Liberals (glp) and the CVP (Christian Democratic Party). The last election was held on 25 October and 29 November 2020, and four new members were elected. Barbara Schüpbach-Guggenbühl has been State Chronicler (Staatsschreiberin) since 2009, and Marco Greiner has been Head of Communication (Regierungssprecher) and Vice State Chronicler (Vizestaatsschreiber) since 2007 for the Executive Council.

Parliament

The city's and canton's parliament, the Grand Council of Basel-Stadt (Grosser Rat), consists of 100 seats, with members (called in German Grossrat/Grossrätin) elected every four years. The sessions of the Grand Council are public. Unlike the members of the Executive Council, the members of the Grand Council are not politicians by profession; they are paid a fee based on their attendance. Any resident of Basel allowed to vote can be elected as a member of the parliament. The delegates are elected by means of a Proporz (proportional representation) system. The legislative body holds its meetings in the red Town Hall (Rathaus). The last election was held on 25 October 2020 for the mandate period (Legislatur) of 2021–2025.
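Proporz allocations of this kind are commonly computed with a highest-averages rule; the Hagenbach-Bischoff method, which assigns seats like D'Hondt, is a frequent choice in Swiss elections, though whether Basel-Stadt uses exactly this variant is not stated here. A minimal sketch in Python, with hypothetical party names and vote counts (not actual Basel-Stadt results):

# Minimal sketch of highest-averages (D'Hondt / Hagenbach-Bischoff) seat allocation.
# Party names and vote counts are hypothetical, for illustration only.
import heapq

def allocate_seats(votes: dict[str, int], seats: int) -> dict[str, int]:
    """Give each seat, one at a time, to the party whose quotient
    votes / (seats_won + 1) is currently the highest."""
    won = {party: 0 for party in votes}
    heap = [(-v, party) for party, v in votes.items()]  # max-heap via negation
    heapq.heapify(heap)
    for _ in range(seats):
        _, party = heapq.heappop(heap)
        won[party] += 1
        heapq.heappush(heap, (-votes[party] / (won[party] + 1), party))
    return won

print(allocate_seats({"A": 34000, "B": 25000, "C": 15000, "D": 7000}, 10))
# -> {'A': 4, 'B': 3, 'C': 2, 'D': 1}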
Following the 2020 election, the Grand Council consists of 30 (−5) members of the Social Democratic Party (SP), 18 (+5) of the Grün-Alternatives Bündnis (GAB) (a collaboration of the Green Party (GPS), its junior party, and Basels starke Alternative (BastA!)), 14 (−1) of the Liberal-Demokratische Partei (LDP), 11 (−4) members of the Swiss People's Party (SVP), 8 (+5) of the Green Liberal Party (glp), 7 (−3) of The Liberals (FDP), 7 (−) of the Christian Democratic People's Party (CVP), 3 (+2) of the Evangelical People's Party (EVP), and one representative each of the Aktive Bettingen (AB) and the Volks-Aktion gegen zuviele Ausländer und Asylanten in unserer Heimat (VA). The left parties missed an absolute majority by two seats.

Federal elections

National Council

In the 2019 federal election the most popular party was the Social Democratic Party (SP), which received two seats with 34% (−1) of the votes. The next five most popular parties were the Green Party (GPS) (19.4%, +7.3); the LPS (14.5%, +3.6) and the FDP (5.8%, −3.5), which ran a joint list totalling 20.3% (+0.1); the SVP (11.3%, ); the Green Liberal Party (GLP) (5%, +0.6); and the CVP (4.1%, −1.9). In that federal election, a total of 44,628 votes were cast, and the voter turnout was 49.4%. On 18 October 2015, in the federal election, the most popular party was the Social Democratic Party (SP), which received two seats with 35% of the votes. The next three most popular parties were the FDP (20.2%), the SVP (16.8%), and the Green Party (GPS) (12.2%), each with one seat. In that federal election, a total of 57,304 votes were cast, and the voter turnout was 50.4%.

Council of States

On 20 October 2019, in the federal election, Eva Herzog, member of the Social Democratic Party (SP), was elected for the first time as a State Councillor in the first round, as the single representative of the canton of Basel-Stadt and successor of Anita Fetz in the national Council of States, with an absolute majority of 37,210 votes. On 18 October 2015, in the federal election, State Councillor Anita Fetz, member of the Social Democratic Party (SP), was re-elected in the first round as the single representative of the canton of Basel-Stadt in the national Council of States with an absolute majority of 35,842 votes. She had been a member of it since 2003.

International relations

Twin towns, sister cities and partner regions

Basel is twinned or partnered with the following cities and regions:

US state of Massachusetts, since 2002
Shanghai, China, since 2007
Toyama Prefecture, Japan, since 2009
Miami Beach, US, since 2011
Abidjan, Ivory Coast, since 2021
Seoul, South Korea, since 2022

Partner cities

Rotterdam, Netherlands, since 1945

Demographics

Population

The canton of Basel (slightly more than the city itself) has a population of 201,971, of whom 36.9% are resident foreign nationals. Over the 10 years 1999–2009 the population changed at a rate of −0.3%: a rate of 3.2% due to migration and a rate of −3% due to births and deaths. Of the population in the municipality, 58,560 or about 35.2% were born in Basel and lived there in 2000. There were 1,396 or 0.8% who were born in the same canton, while 44,874 or 26.9% were born somewhere else in Switzerland, and 53,774 or 32.3% were born outside of Switzerland. In 2008 there were 898 live births to Swiss citizens and 621 births to non-Swiss citizens, and in the same time span there were 1,732 deaths of Swiss citizens and 175 deaths of non-Swiss citizens. Ignoring immigration and emigration, the population of Swiss citizens decreased by 834 while the foreign population increased by 446.
There were 207 Swiss men and 271 Swiss women who emigrated from Switzerland. At the same time, there were 1,756 non-Swiss men and 1,655 non-Swiss women who immigrated from another country to Switzerland. The total Swiss population change in 2008 (from all sources, including moves across municipal borders) was an increase of 278, and the non-Swiss population increased by 1,138 people. This represents a population growth rate of 0.9%. , there were 70,502 people who were single and never married in the municipality. There were 70,517 married individuals, 12,435 widows or widowers and 13,104 individuals who were divorced. , the average number of residents per living room was 0.59, about equal to the cantonal average of 0.58 per room. In this case, a room is defined as a space of a housing unit of at least a certain minimum area, such as normal bedrooms, dining rooms, living rooms, kitchens and habitable cellars and attics. About 10.5% of the total households were owner-occupied, or in other words did not pay rent (though they may have had a mortgage or a rent-to-own agreement). , there were 86,371 private households in the municipality, with an average of 1.8 persons per household. There were 44,469 households that consisted of only one person and 2,842 households with five or more people. Out of a total of 88,646 households that answered this question, 50.2% were households made up of just one person, and there were 451 adults who lived with their parents. Of the rest of the households, there were 20,472 married couples without children, 14,554 married couples with children and 4,318 single parents with a child or children. There were 2,107 households made up of unrelated people and 2,275 households made up of some sort of institution or other collective housing. , there were 5,747 single-family homes (or 30.8% of the total) out of a total of 18,631 inhabited buildings. There were 7,642 multi-family buildings (41.0%), along with 4,093 multi-purpose buildings that were mostly used for housing (22.0%) and 1,149 other-use buildings (commercial or industrial) that also had some housing (6.2%). Of the single-family homes, 1,090 were built before 1919, while 65 were built between 1990 and 2000. The greatest number of single-family homes (3,474) were built between 1919 and 1945. , there were 96,640 apartments in the municipality. The most common apartment size was 3 rooms, of which there were 35,958. There were 11,957 single-room apartments and 9,702 apartments with five or more rooms. Of these apartments, a total of 84,675 (87.6%) were permanently occupied, while 7,916 (8.2%) were seasonally occupied and 4,049 (4.2%) were empty. , the construction rate of new housing units was 2.6 new units per 1000 residents. , the average price to rent an average apartment in Basel was 1,118.60 Swiss francs (CHF) per month (US$890, £500, €720 at the approximate exchange rate from 2003). The average rate for a one-room apartment was 602.27 CHF (US$480, £270, €390), a two-room apartment was about 846.52 CHF (US$680, £380, €540), a three-room apartment was about 1,054.14 CHF (US$840, £470, €670) and a six-or-more-room apartment cost an average of 2,185.24 CHF (US$1,750, £980, €1,400). The average apartment price in Basel was 100.2% of the national average of 1,116 CHF. The vacancy rate for the municipality, , was 0.74%.

Historical population

Language

In 2000, most of the population spoke German (129,592 or 77.8%), with Italian being the second most common language (9,049 or 5.4%) and French the third (4,280 or 2.6%).
There were 202 people who spoke Romansh.

Religion

From the , 41,916 or 25.2% were Roman Catholic, while 39,180 or 23.5% belonged to the Swiss Reformed Church. Of the rest of the population, there were 4,567 members of an Orthodox church (about 2.74% of the population), 459 individuals (about 0.28%) who belonged to the Christian Catholic Church, and 3,464 individuals (about 2.08%) who belonged to another Christian church. There were 12,368 individuals (about 7.43%) who were Muslim and 1,325 individuals (about 0.80%) who were Jewish; however, only members of religious institutions are counted as such by the municipality, which makes the actual number of people of Jewish descent living in Basel considerably higher. There were 746 individuals who were Buddhist, 947 individuals who were Hindu and 485 individuals who belonged to another church. 52,321 (about 31.41% of the population) belonged to no church, were agnostic or atheist, and 8,780 individuals (about 5.27%) did not answer the question.

Infrastructure

Quarters

Basel is subdivided into 19 quarters (Quartiere). The municipalities of Riehen and Bettingen, outside the city limits of Basel, are included in the canton of Basel-Stadt as rural quarters (Landquartiere).

Transport

Basel's airport is set up for airfreight; heavy goods reach the city and the heart of continental Europe from the North Sea by ship along the Rhine. The main European routes for the highway and railway transport of freight cross in Basel. The outstanding location benefits logistics corporations, which operate globally from Basel. Trading firms are traditionally well represented in the Basel Region.

Port

Basel has Switzerland's only cargo port, through which goods pass along the navigable stretches of the Rhine and connect to ocean-going ships at the port of Rotterdam.

Air transport

EuroAirport Basel Mulhouse Freiburg is operated jointly by two countries, France and Switzerland, although the airport is located completely on French soil. The airport itself is split into two architecturally independent sectors, one half serving the French side and the other half serving the Swiss side; prior to Schengen there was an immigration inspection point in the middle of the airport so that people could "emigrate" to the other side of the airport.

Railways

Basel has long held an important place as a rail hub. Three railway stations—those of the German, French and Swiss networks—lie within the city (although the Swiss (Basel SBB) and French (Bâle SNCF) stations are actually in the same complex, separated by customs and immigration facilities, while Basel Badischer Bahnhof is on the opposite side of the city). Basel's local rail services are supplied by the Basel Regional S-Bahn. The largest goods railway complex of the country is located just outside the city, spanning the municipalities of Muttenz and Pratteln. The new high-speed ICE railway line from Karlsruhe to Basel was completed in 2008, while phase I of the TGV Rhin-Rhône line, opened in December 2011, reduced travel time from Basel to Paris to about three hours.

Roads

Basel is located on the A3 motorway.
Within the city limits, five bridges connect Greater and Lesser Basel (listed in downstream order):

Schwarzwaldbrücke (built 1972)
Wettsteinbrücke (current structure built 1998, original bridge built 1879)
Mittlere Rheinbrücke (current structure built 1905, original bridge built 1225 as the first bridge to cross the Rhine)
Johanniterbrücke (built 1967)
Dreirosenbrücke (built 2004, original bridge built 1935)

Ferries

A somewhat anachronistic yet still widely used system of reaction ferry boats links the two shores. There are four ferries, each situated approximately midway between two bridges. Each is attached by a cable to a block that rides along another cable spanning the river at a height of . To cross the river, the ferryman orients the boat at around 45° to the current, so that the current pushes the boat across the river. This form of transportation is thus entirely water-powered, requiring no outside energy source.

Public transport

Basel has an extensive public transportation network serving the city and connecting it to surrounding suburbs, including a large tram network. Basel today has the largest tram network in Switzerland measured in kilometres of track; historically, only Geneva's was at one time larger. The green-colored local trams and buses are operated by the Basler Verkehrs-Betriebe (BVB). The yellow-colored buses and trams are operated by Baselland Transport (BLT) and connect areas in the nearby half-canton of Baselland to central Basel. The BVB also shares commuter bus lines in cooperation with transit authorities in the neighboring Alsace region in France and the Baden region in Germany. The Basel Regional S-Bahn, the commuter rail network connecting to suburbs surrounding the city, is jointly operated by SBB, SNCF and DB.

Border crossings

Basel is located at the meeting point of France, Germany, and Switzerland; because it sits on the Swiss national border and beyond the Jura Mountains, many within the Swiss military reportedly believe that the city is indefensible during wartime. It has numerous road and rail crossings between Switzerland and the other two countries. With Switzerland joining the Schengen Area on 12 December 2008, immigration checks were no longer carried out at the crossings. However, Switzerland did not join the European Union Customs Union (though it did join the EU Single Market), and customs checks are still conducted at or near the crossings.

France-Switzerland (from east to west)

Road crossings (with French road name continuation):
Kohlenstrasse (Avenue de Bâle, Huningue). This crossing replaces the former crossing Hüningerstrasse further east.
Elsässerstrasse (Avenue de Bâle, Saint-Louis)
Autobahn A3 (A35 autoroute, Saint-Louis), continuing towards Mulhouse, Colmar and Strasbourg.
EuroAirport Basel-Mulhouse-Freiburg – pedestrian walkway between the French and Swiss sections on Level 3 (departures) of the airport.
Burgfelderstrasse (Rue du 1er Mars, Saint-Louis)

Railway crossing:
Basel SBB railway station

Germany-Switzerland (clockwise, from north to south)

Road crossings (with German road name continuation):
Hiltalingerstrasse (Zollstraße, Weil am Rhein). Tram 8 goes along this road to Weil am Rhein; the extension opened in 2014 and the line previously ended before the border.
Autobahn A2 (Autobahn A5, Weil am Rhein)
Freiburgerstrasse (Baslerstraße, Weil am Rhein)
Weilstrasse, Riehen (Hauptstraße, Weil am Rhein)
Lörracherstrasse, Riehen (Baslerstraße, Stetten, Lörrach)
Inzlingerstrasse, Riehen (Riehenstraße, Inzlingen)
Grenzacherstrasse (Hörnle, Grenzach-Wyhlen)

Railway crossing:
Between Basel SBB and Basel Badischer Bahnhof – Basel Badischer Bahnhof, and all other railway property and stations on the right bank of the Rhine, belong to DB and are classed as German customs territory. Immigration and customs checks are conducted at the platform exit tunnel for passengers leaving trains here.

Additionally, there are many footpaths and cycle tracks crossing the border between Basel and Germany.

Health

As the biggest town in the northwest of Switzerland, Basel is home to numerous public and private health centres, among them the Universitätsspital Basel and the Universitätskinderspital Basel. The anthroposophical health institute Klinik-Arlesheim (formerly known as the Lukas-Klinik and Ita-Wegman-Klinik) is located in the Basel area as well. Private health centres include the Bethesda Spital and the Merian Iselin Klinik. The Swiss Tropical and Public Health Institute is also located in Basel.

Energy

Basel is at the forefront of a national vision to more than halve energy use in Switzerland by 2050. To research, develop and commercialise the technologies and techniques required for the country to become a 2000-watt society, a number of projects have been set up in the Basel metropolitan area since 2001. These include demonstration buildings constructed to Minergie or Passivhaus standards, electricity generation from renewable energy sources, and vehicles using natural gas, hydrogen and biogas. A building construction law passed in 2002, driven by an energy-saving programme, requires all new flat roofs to be greened, which has made Basel the world's leading green-roof city. A hot dry rock geothermal energy project was cancelled in 2009 because it caused induced seismicity in Basel.

Economy

Basel, in the northwest of Switzerland, is the centre of one of the most dynamic economic regions of the country. , Basel had an unemployment rate of 3.7%. , 19.3% of the working population was employed in the secondary sector and 80.6% in the tertiary sector. There were 82,449 residents of the municipality employed in some capacity, of whom women made up 46.2% of the workforce. , the total number of full-time equivalent jobs was 130,988. The number of jobs in the primary sector was 13, of which 10 were in agriculture and 4 in forestry or lumber production. The number of jobs in the secondary sector was 33,171, of which 24,848 (74.9%) were in manufacturing, 10 in mining and 7,313 (22.0%) in construction. The number of jobs in the tertiary sector was 97,804. In the tertiary sector, 12,880 or 13.2% were in wholesale or retail sales or the repair of motor vehicles, 11,959 or 12.2% in the movement and storage of goods, 6,120 or 6.3% in hotels or restaurants, 4,186 or 4.3% in the information industry, 10,752 or 11.0% in the insurance or financial industry, 13,695 or 14.0% were technical professionals or scientists, 6,983 or 7.1% in education and 16,060 or 16.4% in health care. , there were 121,842 workers who commuted into the municipality and 19,263 workers who commuted away.
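As a quick, purely illustrative arithmetic check, the commuter ratio quoted in the next sentence follows directly from those two figures:

# Recomputing the commuter ratio cited in the text (illustration only).
incoming, outgoing = 121_842, 19_263
print(f"{incoming / outgoing:.1f} workers enter for every one leaving")  # -> 6.3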
The municipality is a net importer of workers, with about 6.3 workers entering the municipality for every one leaving. About 23.9% of the workforce coming into Basel comes from outside Switzerland, while 1.0% of the locals commute out of Switzerland for work. Of the working population, 49.2% used public transportation to get to work, and 18.7% used a private car. The Roche Tower, designed by Herzog & de Meuron, has 41 floors; upon its opening in 2015 it became the tallest building in Switzerland. Basel also has Switzerland's third-tallest building (Basler Messeturm, ) and Switzerland's tallest tower (St. Chrischona TV tower, ).

Chemical industry

The Swiss chemical industry operates largely from Basel, and Basel also has a large pharmaceutical industry. Novartis, Syngenta, Ciba Specialty Chemicals, Clariant, Hoffmann-La Roche, Basilea Pharmaceutica and Actelion are headquartered there. Pharmaceuticals and specialty chemicals have become the modern focus of the city's industrial production. Basel is also a major European hub for biotechnology and biopharmaceuticals, with many small and mid-sized start-ups supported by an active venture-capital scene.

Banking

Banking is important to Basel: UBS AG maintains central offices in the city. The Bank for International Settlements (BIS) is located within the city and is the central bankers' bank. The bank is controlled by a board of directors, which is composed of senior central bankers of 11 countries (the US, UK, Belgium, Canada, France, Germany, Italy, Japan, Switzerland, the Netherlands and Sweden). According to the BIS, "The choice of Switzerland for the seat of the BIS was a compromise by those countries that established the BIS: Belgium, France, Germany, Italy, Japan, the United Kingdom and the United States. When consensus could not be reached on locating the Bank in London, Brussels or Amsterdam, the choice fell on Switzerland. An independent, neutral country, Switzerland offered the BIS less exposure to undue influence from any of the major powers. Within Switzerland, Basel was chosen largely because of its location, with excellent railway connections in all directions, especially important at a time when most international travel was by train." Created in May 1930, the BIS is owned by its member central banks, which are private entities. No agent of the Swiss public authorities may enter the premises without the express consent of the bank. The bank exercises supervision and police power over its premises. The bank enjoys immunity from criminal and administrative jurisdiction, and its recommendations have become standard for the world's commercial banking system. Basel is also the location of the Basel Committee on Banking Supervision, which is distinct from the BIS and usually meets at the BIS premises in Basel. Responsible for the Basel Accords (Basel I, Basel II and Basel III), this organization fundamentally changed risk management within the banking industry. Basel also hosts the headquarters of the Global Infrastructure Basel Foundation, which is active in the field of sustainable infrastructure (financing).

Air

Swiss International Air Lines, the national airline of Switzerland, is headquartered on the grounds of EuroAirport Basel-Mulhouse-Freiburg in Saint-Louis, Haut-Rhin, France, near Basel. Prior to the formation of Swiss International Air Lines, the regional airline Crossair was headquartered near Basel.

Media

Basler Zeitung ("BaZ") and bz Basel are the local newspapers.
The local TV station is called Telebasel. The German-speaking Swiss Radio and Television (SRF) company, part of the Swiss Broadcasting Corporation SRG SSR, also has offices in Basel. The academic publishers Birkhäuser, Karger and MDPI are based in Basel.

Trade fairs

Important trade shows include Art Basel, the world's most important fair for modern and contemporary art, Baselworld (watches and jewelry), Swissbau (construction and real estate) and Igeho (hotels, catering, take-away, care). The Swiss Sample Fair ("Schweizer Mustermesse") was the largest and oldest consumer fair in Switzerland; it was held until 2019 in Kleinbasel on the right bank of the Rhine.

Education

Besides humanism, the city of Basel has also been well known for its achievements in the field of mathematics. Among others, the mathematician Leonhard Euler and the Bernoulli family did research and taught at the local institutions for centuries. In 1910 the Swiss Mathematical Society was founded in the city, and in the mid-twentieth century the Russian mathematician Alexander Ostrowski taught at the local university. In 2000, about 57,864 (34.7%) of the population had completed non-mandatory upper secondary education, and 27,603 (16.6%) had completed additional higher education (either university or a Fachhochschule). Of the 27,603 who completed tertiary schooling, 44.4% were Swiss men, 31.1% were Swiss women, 13.9% were non-Swiss men and 10.6% were non-Swiss women. In 2010, 11,912 students attended the University of Basel (55% female); 25% were foreign nationals and 16% were from the canton of Basel-Stadt. In 2006, 6,162 students studied at one of the nine academies of the FHNW (51% female). , there were 5,820 students in Basel who came from another municipality, while 1,116 residents attended schools outside the municipality.

Universities

Basel hosts Switzerland's oldest university, the University of Basel, dating from 1460. Erasmus, Paracelsus, Daniel Bernoulli, Leonhard Euler, Jacob Burckhardt, Friedrich Nietzsche, Tadeusz Reichstein, Karl Jaspers, Carl Gustav Jung and Karl Barth worked there. The University of Basel is currently counted among the 90 best educational institutions worldwide. In 2007, the ETH Zurich (Swiss Federal Institute of Technology Zürich) established the Department of Biosystems Science and Engineering (D-BSSE) in Basel. The creation of the D-BSSE was driven by the Swiss-wide research initiative SystemsX and was jointly supported by funding from the ETH Zürich, the Swiss government, the Swiss University Conference (SUC) and private industry. Basel also hosts several academies of the Fachhochschule Nordwestschweiz (FHNW): the FHNW Academy of Art and Design, the FHNW Academy of Music, and the FHNW School of Business. Basel is renowned for various scientific societies, such as the Entomological Society of Basel (Entomologische Gesellschaft Basel, EGB), which celebrated its 100th anniversary in 2005.

Volksschule

In 2005, 16,939 pupils and students attended the Volksschule (the obligatory school stage, comprising kindergartens (127), primary schools (Primarschule, 25) and lower secondary schools (Sekundarschule, 10)), of whom 94% attended public schools and 39.5% were foreign nationals. In 2010, 51.1% of all pupils already spoke a language other than German as their first language. In 2009, 3.1% of the pupils attended special classes for pupils with particular needs. The average amount of study in primary school in Basel is 816 teaching hours per year.
Upper secondary school

In 2010, 65% of the youth finished their upper secondary education with vocational training and education, 18% finished with a Federal Matura at one of the five gymnasiums, 5% completed a Fachmaturität at the FMS, 5% completed a Berufsmaturität alongside their vocational training, and 7% completed another kind of upper secondary qualification. 14.1% of all students at public gymnasiums were foreign nationals. The Matura quota in 2010 was at a record high of 28.8% (32.8% female, 24.9% male). Basel has five public gymnasiums, each with its own profile (a different focus on major subjects, such as visual design, biology and chemistry, Italian, Spanish or Latin languages, music, physics and applied mathematics, philosophy/education/psychology, and economics and law), that entitle students with a successful Matura graduation to attend universities. There is also one Fachmaturitätsschule, the FMS, with six different major subjects (health/natural sciences, education, social work, design/art, music/theatre/dance, and communication/media), which entitles students with a successful Fachmatura graduation to attend Fachhochschulen. Four different höhere Fachschulen (higher vocational schools, such as the Bildungszentrum Gesundheit Basel-Stadt (health), the Allgemeine Gewerbeschule Basel (trade), the Berufsfachschule Basel and the Schule für Gestaltung Basel (design)) allow vocational students to improve their knowledge and know-how.

International schools

As a city where more than thirty-five percent of the population are foreigners, and as one of the most important centres of the chemical and pharmaceutical industry in the world, Basel counts several international schools, including: Academia International School, École Française de Bâle, Freies Gymnasium Basel (private), Gymnasium am Münsterplatz (public), Schweizerisch-italienische Primarschule Sandro Pertini, International School Basel and SIS Swiss International School.

Libraries

Basel is home to at least 65 libraries. Some of the largest include the Universitätsbibliothek Basel (main university library), the special libraries of the University of Basel, the Allgemeine Bibliotheken der Gesellschaft für Gutes und Gemeinnütziges (GGG) Basel, the Library of the Pädagogische Hochschule, the Library of the Hochschule für Soziale Arbeit and the Library of the Hochschule für Wirtschaft. There was a combined total of 8,443,643 books or other media in the libraries, and in the same year a total of 1,722,802 items were loaned out.

Culture

Main sights

The red sandstone Münster, one of the foremost late-Romanesque/early Gothic buildings in the Upper Rhine, was badly damaged in the great earthquake of 1356, rebuilt in the 14th and 15th centuries, extensively reconstructed in the mid-19th century and further restored in the late 20th century. A memorial to Erasmus lies inside the Münster. The City Hall from the 16th century is located on the Market Square and is decorated with fine murals on the outer walls and on the walls of the inner court. Basel is also host to an array of buildings by internationally renowned architects. These include the Beyeler Foundation by Renzo Piano, and the Vitra complex in nearby Weil am Rhein, composed of buildings by architects such as Zaha Hadid (fire station), Frank Gehry (Design Museum), Álvaro Siza Vieira (factory building) and Tadao Ando (conference centre).
Basel also features buildings by Mario Botta (Jean Tinguely Museum and Bank for International Settlements) and Herzog & de Meuron (whose architectural practice is in Basel, and who are best known as the architects of the Tate Modern in London and the Bird's Nest in Beijing, the Olympic stadium designed for use throughout the 2008 Summer Olympics and Paralympics). The city received the Wakker Prize in 1996.

Heritage sites

Basel features a great number of heritage sites of national significance. These include the entire Old Town of Basel as well as the following buildings and collections:

Churches and monasteries

Old Catholic Prediger Kirche (church), Bischofshof with Collegiate church at Rittergasse 1, Domhof at Münsterplatz 10–12, former Carthusian House of St Margarethental, Catholic Church of St Antonius, Lohnhof (former Augustinian Collegiate Church), Mission 21, Archive of the Evangelisches Missionswerk Basel, Münster of Basel (cathedral), Reformed Elisabethenkirche (church), Reformed Johanneskirche (church), Reformed Leonhardskirche (church, former Augustinian Abbey), Reformed Martinskirche (church), Reformed Pauluskirche (church), Reformed Peterskirche (church), Reformed St. Albankirche (church) with cloister and cemetery, Reformed Theodorskirche (church), Synagoge at Eulerstrasse 2

Secular buildings

Badischer Bahnhof (German Baden's railway station) with fountain, Bank for International Settlements, Blaues Haus (Reichensteinerhof) at Rheinsprung 16, Bruderholzschule (school house) at Fritz-Hauser-Strasse 20, Brunschwiler Haus at Hebelstrasse 15, Bahnhof Basel SBB (Swiss railway station), Bürgerspital (hospital), Café Spitz (Merianflügel), Coop Schweiz company's central archive, Depot of the Archäologische Bodenforschung des Kantons Basel-Stadt, former Gallizian Paper Mill and Swiss Museum of Paper, former Klingental-Kaserne (barracks) with Klingentaler Kirche (church), Fasnachtsbrunnen (fountain), Feuerschützenhaus (guild house of the riflemen) at Schützenmattstrasse 56, Fischmarktbrunnen (fountain), Geltenzunft at Marktplatz 13, Gymnasium am Kohlenberg (St Leonhard) (school), Hauptpost (main post office), Haus zum Raben at Aeschenvorstadt 15, Hohenfirstenhof at Rittergasse 19, Holsteinerhof at Hebelstrasse 30, Markgräflerhof, a former palace of the margraves of Baden-Durlach, Mittlere Rheinbrücke (Central Rhine Bridge), Stadtcasino (music hall) at Steinenberg 14, Ramsteinerhof at Rittergasse 7 and 9, Rathaus (town hall), Rundhof building of the Schweizerische Mustermesse, Safranzunft at Gerbergasse 11, Sandgrube at Riehenstrasse 154, Schlösschen (manor house) Gundeldingen, Schönes Haus and Schöner Hof at Nadelberg 6, Wasgenring school house, Seidenhof with painting of Rudolf von Habsburg, Spalenhof at Spalenberg 12, Spiesshof at Heuberg 7, the city walls, Townhouse (former post office) at Stadthausgasse 13 / Totengässlein 6, Weisses Haus at Martinsgasse 3, Wildt'sches Haus at Petersplatz 13, Haus zum Neuen Singer at Speiserstrasse 98, Wolfgottesacker at Münchensteinerstrasse 99, Zerkindenhof at Nadelberg 10.

Archaeological sites

The Celtic settlement at Gasfabrik, and the Münsterhügel and Altstadt (historical city, late La Tène and medieval settlement).

Museums, archives and collections

Basel calls itself the Cultural Capital of Switzerland.
Among others, these include the Anatomical Museum of the University of Basel, the Berri-Villen and Museum of Ancient Art Basel and Ludwig Collection, the former Franciscan Barefoot Order Church and Basel Historical Museum, the Company Archive of Novartis, the Haus zum Kirschgarten (part of the Basel Historical Museum), the Historic Archive Roche and the Industrial Complex Hoffmann-La Roche, the Jewish Museum of Switzerland, the Caricature & Cartoon Museum Basel, the Karl Barth-Archive, the Kleines Klingental (Lower Klingen Valley) with the Museum Klingental, the Art Museum of Basel (hosting the world's oldest art collection accessible to the public), the Natural History Museum of Basel and the Museum of Cultures Basel, the Museum of Modern Art Basel with the E. Hoffmann collection, the Museum Jean Tinguely Basel, the Music Museum, the Pharmacy Historical Museum of the University of Basel, the Poster Collection of the School for Design (Schule für Gestaltung), the Swiss Business Archives, the Sculpture Hall, the Sports Museum of Switzerland, the Archives of the Canton of Basel-Stadt, the UBS AG Corporate Archives, the University Library with manuscripts and music collection, and the Zoological Garden (Zoologischer Garten).

Theatre and music

Basel is the home of the Schola Cantorum Basiliensis, founded in 1933, a worldwide centre for research on and performance of music from the Medieval through the Baroque eras. Theater Basel, chosen in 1999 as the best stage for German-language performances and in 2009 and 2010 as "Opera house of the year" by the German opera magazine Opernwelt, presents a busy schedule of plays in addition to being home to the city's opera and ballet companies. Basel is home to the largest orchestra in Switzerland, the Sinfonieorchester Basel. It is also the home of the Basel Sinfonietta and the Kammerorchester Basel, which recorded the complete symphonies of Ludwig van Beethoven for the Sony label, led by its music director Giovanni Antonini. The Schola Cantorum and the Basler Kammerorchester were both founded by the conductor Paul Sacher, who went on to commission works by many leading composers. The Paul Sacher Foundation, opened in 1986, houses a major collection of manuscripts, including the entire Igor Stravinsky archive. The baroque orchestras La Cetra and Capriccio Basel are also based in Basel. In May 2004, the fifth European Festival of Youth Choirs (Europäisches Jugendchorfestival, or EJCF) opened; this Basel tradition started in 1992. The host of the festival is the local Basel Boys Choir. In 1997, Basel contended to become the "European Capital of Culture", though the honor went to Thessaloniki.

Museums

The Basel museums cover a broad and diverse spectrum of collections with a marked concentration in the fine arts. They house numerous holdings of international significance. The over three dozen institutions yield an extraordinarily high density of museums compared to other cities of similar size and draw over one million visitors annually. Constituting an essential component of Basel culture and cultural policy, the museums are the result of closely interwoven private and public collecting activities and promotion of arts and culture going back to the 16th century. The public museum collection was first created back in 1661 and represents the oldest public collection in continuous existence in Europe. Since the late 1980s, various private collections have been made accessible to the public in new purpose-built structures that have been recognized as acclaimed examples of avant-garde museum architecture.
Antikenmuseum Basel und Sammlung Ludwig – museum of the ancient cultures of the Mediterranean
Augusta Raurica – Roman open-air museum
Basel Paper Mill
Beyeler Foundation and Beyeler Museum (Fondation Beyeler)
Botanical Garden Basel – one of the oldest botanical gardens in the world
Caricature & Cartoon Museum Basel
Dollhouse Museum – houses the largest teddy bear collection in Europe
Foundation Fernet Branca, in Saint-Louis, Haut-Rhin, near Basel – modern art collection
Historical Museum Basel
Kunsthalle Basel – modern and contemporary art museum
Kunstmuseum Basel – Upper Rhenish and Flemish paintings, drawings from 1400 to 1600, and 19th- to 21st-century art
Monteverdi Automuseum
Museum of Cultures Basel – large collections on European and non-European cultural life
Museum of Contemporary Art – art from the 1960s up to the present
Music Museum of the Basel Historical Museum
Natural History Museum of Basel
Pharmazie-Historisches Museum der Universität Basel
Schaulager – modern and contemporary art museum
Swiss Architecture Museum
Tinguely Museum – life and work of the major Swiss iron sculptor Jean Tinguely
Jewish Museum of Switzerland

Events

The city of Basel is a centre for numerous fairs and events all year round. One of the most important fairs for contemporary art worldwide is Art Basel, which was founded in 1970 by Ernst Beyeler and takes place in June each year. Baselworld, the watch and jewellery show (Uhren- und Schmuckmesse), one of the biggest fairs of its kind in Europe, is held every year as well and attracts a great number of tourists and dealers to the city. The live-marketing company and fair organizer MCH Group has its head office in Basel. The carnival of the city of Basel (Basler Fasnacht) is a major cultural event of the year. The carnival is the biggest in Switzerland and attracts large crowds every year, despite the fact that it starts at exactly four o'clock in the morning (Morgestraich) on a winter Monday. The Fasnacht asserts Basel's Protestant history by commencing the revelry five days after Ash Wednesday and continuing for exactly 72 hours. Almost all study and work in the old city cease. Dozens of fife and drum clubs parade in medieval guild tradition with fantastical masks and illuminated lanterns. Basel Tattoo, founded in 2006 by the local Top Secret Drum Corps, has grown to be the world's second-largest military tattoo in terms of performers and budget, after the Edinburgh Military Tattoo. The Basel Tattoo annual parade, with an estimated 125,000 visitors, is considered the largest event in Basel. The event is now sponsored by the Swiss Federal Department of Defence, Civil Protection and Sport (DDPS), making it the official military tattoo of Switzerland.

Cuisine

There are a number of culinary specialties originating in Basel, including Basler Läckerli cookies and Mässmogge candies. Lying at the meeting point of Switzerland, France and Germany, Basel has a varied and diverse culinary landscape and a great number of restaurants of all sorts.

Zoo

Zoo Basel is, with over 1.7 million visitors per year, the most visited tourist attraction in Basel and the second most visited tourist attraction in Switzerland. Established in 1874, Zoo Basel is the oldest zoo in Switzerland and, by number of animals, the largest. Through its history, Zoo Basel has achieved several breeding successes, such as the world's first zoo birth of an Indian rhinoceros and the first greater flamingo hatched in a zoo.
These and other achievements led Forbes Travel to rank Zoo Basel as one of the fifteen best zoos in the world in 2008. Despite its international fame, Basel's population remains attached to Zoo Basel, which is entirely surrounded by the city of Basel. Evidence of this is the millions in donations it receives each year, as well as Zoo Basel's unofficial name: locals lovingly call "their" zoo the "Zolli", by which it is known throughout Basel and most of Switzerland.

Sport

Basel has a reputation in Switzerland as a successful sporting city. The football club FC Basel continues to be successful, and in recognition of this the city was one of the Swiss venues for the 2008 European Championships, along with Geneva, Zürich and Bern. The championships were jointly hosted by Switzerland and Austria. BSC Old Boys and Concordia Basel are the other football teams in Basel. Among the most popular sports in Switzerland is ice hockey. Basel is home to the EHC Basel, which plays in the MySports League, the third tier of the Swiss ice hockey league system. The team plays its home games in the 6,700-seat St. Jakob Arena; it previously played in the National League and the Swiss League, but had to file for bankruptcy after the 2013–14 Swiss League season. Amongst its major sports venues, Basel features a large football stadium that has been awarded four stars by UEFA, a modern ice hockey arena, and a sports hall. A large indoor tennis event takes place in Basel every October. Some of the best ATP professionals play every year at the Swiss Indoors, including Switzerland's biggest sporting hero and frequent participant Roger Federer, a Basel native who describes the city as "one of the most beautiful cities in the world". The annual Basel Rhine Swim draws several thousand visitors to the city to swim in or float on the Rhine. While football and ice hockey are by far the most popular sports, basketball has a very small but faithful fan base. The top division, called the SBL, is a semi-professional league and has one team from the Basel region, the Birstal Starwings. Two players from Switzerland are currently active in the NBA: Thabo Sefolosha and Clint Capela. As in most European countries, and contrary to the US, Switzerland has a club-based rather than a school-based competition system. The Starwings Basel are the only first-division basketball team in German-speaking Switzerland. The headquarters of the IHF (International Handball Federation) is located in Basel. Basel Dragons AFC have been playing Australian Football in the AFL Switzerland league since 2019. In July 2022, the women's water polo players of the WSV Basel secured their 11th national championship title.

Notable people

Notable people who were born or grew up in Basel include:

Gaspard Bauhin (1560–1624), botanist and anatomist
Matthäus Merian the Elder (1593–1650), engraver
Johannes Buxtorf II (1599–1664), Protestant Christian Hebraist
Jacob Bernoulli (1654–1705), mathematician
Johann Bernoulli (1667–1748), mathematician
Johann Jakob Wettstein (1693–1754), theologian and New Testament critic
Maximilian Ulysses Browne (1705–1757), Austrian field marshal
Leonhard Euler (1707–1783), mathematician, physicist and astronomer
Johann Peter Hebel (1760–1826), German short story writer, poet and Lutheran theologian
Johann Jakob Herzog (1805–1882), Swiss-German Protestant theologian
Jacob Burckhardt (1818–1897), historian of art and culture
Arnold Böcklin (1827–1901), symbolist painter
Karl Barth (1886–1968), Swiss Reformed theologian, best known for his involvement with the Confessing Church and Christian resistance to Hitler
Rudy Burckhardt (1914–1999), American filmmaker and photographer
Peter Zumthor (born 1943), architect
Heidi Köpfer (born 1954), choreographer, dancer and video artist
Antoine Konrad (born 1975), known as DJ Antoine, DJ and record producer
Martina Gmür (born 1979), Swiss visual artist
Roger Federer (born 1981), professional tennis player
Granit Xhaka (born 1992), professional footballer with 100 caps for Switzerland
https://en.wikipedia.org/wiki/Bunsen%20burner
Bunsen burner
A Bunsen burner, named after Robert Bunsen, is a kind of ambient air gas burner used as laboratory equipment; it produces a single open gas flame, and is used for heating, sterilization, and combustion. The gas can be natural gas (which is mainly methane) or a liquefied petroleum gas, such as propane, butane, or a mixture. The combustion temperature achieved depends in part on the adiabatic flame temperature of the chosen fuel mixture.

History

In 1852, the University of Heidelberg hired Bunsen and promised him a new laboratory building. The city of Heidelberg had begun to install coal-gas street lighting, and so the university laid gas lines to the new laboratory. The designers of the building intended to use the gas not just for illumination, but also in burners for laboratory operations. For any burner lamp, it was desirable to maximize the temperature and minimize luminosity. However, existing laboratory burner lamps left much to be desired not just in terms of the heat of the flame, but also regarding economy and simplicity. While the building was still under construction in late 1854, Bunsen suggested certain design principles to the university's mechanic, Peter Desaga, and asked him to construct a prototype. Similar principles had been used in an earlier burner design by Michael Faraday, as well as in a device patented in 1856 by the gas engineer R. W. Elsner. The Bunsen/Desaga design succeeded in generating a hot, sootless, non-luminous flame by mixing the gas with air in a controlled fashion before combustion. Desaga created adjustable slits for air at the bottom of the cylindrical burner, with the flame igniting at the top. By the time the building opened early in 1855, Desaga had made 50 burners for Bunsen's students. Two years later Bunsen published a description, and many of his colleagues soon adopted the design. Bunsen burners are now used in laboratories all around the world.

Operation

The device in use today safely burns a continuous stream of a flammable gas such as natural gas (which is principally methane) or a liquefied petroleum gas such as propane, butane, or a mixture of both. The hose barb is connected to a gas nozzle on the laboratory bench with rubber tubing. Most laboratory benches are equipped with multiple gas nozzles connected to a central gas source, as well as vacuum, nitrogen, and steam nozzles. The gas flows up through the base through a small hole at the bottom of the barrel and is directed upward. There are open slots in the side of the tube bottom to admit air into the stream using the Venturi effect, and the gas burns at the top of the tube once ignited by a flame or spark. The most common methods of lighting the burner are using a match or a spark lighter.

The amount of air mixed with the gas stream affects the completeness of the combustion reaction. Less air yields an incomplete and thus cooler reaction, while a gas stream well mixed with air provides oxygen in a stoichiometric amount and thus a complete and hotter reaction.
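To make the stoichiometry concrete, the sketch below computes the theoretical air requirement for complete methane combustion; the reaction CH4 + 2 O2 -> CO2 + 2 H2O and the roughly 21% oxygen content of dry air are standard chemistry, but the function and figures are illustrative rather than taken from the original text:

# Minimal sketch (Python): stoichiometric air requirement for methane.
# Assumes complete combustion CH4 + 2 O2 -> CO2 + 2 H2O and dry air at ~20.9% O2 by volume.

O2_FRACTION_IN_AIR = 0.209  # volume fraction of oxygen in dry air

def stoichiometric_air_volume(fuel_volume: float, o2_per_fuel: float = 2.0) -> float:
    """Volumes of air needed to burn `fuel_volume` of fuel gas completely.
    o2_per_fuel is the moles of O2 per mole of fuel (2.0 for methane)."""
    return fuel_volume * o2_per_fuel / O2_FRACTION_IN_AIR

# One litre of methane needs roughly 9.6 litres of air; a burner whose slots
# admit less than this burns incompletely, giving the cooler yellow flame.
print(f"{stoichiometric_air_volume(1.0):.1f} L of air per L of methane")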
The air flow can be controlled by opening or closing the slot openings at the base of the barrel, similar in function to the choke in a carburettor. If the collar at the bottom of the tube is adjusted so more air can mix with the gas before combustion, the flame will burn hotter, appearing blue as a result. If the holes are closed, the gas will only mix with ambient air at the point of combustion, that is, only after it has exited the tube at the top. This reduced mixing produces an incomplete reaction, yielding a cooler but brighter yellow flame, which is often called the "safety flame" or "luminous flame". The yellow flame is luminous due to small soot particles in the flame, which are heated to incandescence. The yellow flame is considered "dirty" because it leaves a layer of carbon on whatever it is heating. When the burner is regulated to produce a hot, blue flame, it can be nearly invisible against some backgrounds. The hottest part of the flame is the tip of the inner flame, while the coolest is the whole inner flame. Increasing the amount of fuel gas flow through the tube by opening the needle valve will increase the size of the flame. However, unless the airflow is adjusted as well, the flame temperature will decrease, because an increased amount of gas is now mixed with the same amount of air, starving the flame of oxygen.

Generally, the burner is placed underneath a laboratory tripod, which supports a beaker or other container. The burner will often be placed on a suitable heatproof mat to protect the laboratory bench surface. A Bunsen burner is also used in microbiology laboratories to sterilise pieces of equipment and to produce an updraft that forces airborne contaminants away from the working area.

Variants

Other burners based on the same principle exist. The most important alternatives to the Bunsen burner are:

Teclu burner – The lower part of its tube is conical, with a round screw nut below its base. The gap, set by the distance between the nut and the end of the tube, regulates the influx of the air in a way similar to the open slots of the Bunsen burner. The Teclu burner provides better mixing of air and fuel and can achieve higher flame temperatures than the Bunsen burner.

Meker burner – The lower part of its tube has more openings with a larger total cross-section, admitting more air and facilitating better mixing of air and gas. The tube is wider and its top is covered with a wire grid. The grid separates the flame into an array of smaller flames with a common external envelope, and also prevents flashback to the bottom of the tube, which is a risk at high air-to-fuel ratios and limits the maximum rate of air intake in a conventional Bunsen burner. Flame temperatures of up to are achievable if properly used. The flame also burns without noise, unlike the Bunsen or Teclu burners.

Tirrill burner – The base of the burner has a needle valve which allows the regulation of gas intake directly at the burner, rather than at the gas source. The maximum flame temperature can reach 1560 °C.

See also

Alcohol burner
Heating mantle
Meker-Fisher burner
https://en.wikipedia.org/wiki/Blood%20libel
Blood libel
Blood libel or ritual murder libel (also blood accusation) is an antisemitic canard which falsely accuses Jews of murdering Christian boys in order to use their blood in the performance of religious rituals. Historically, echoing very old myths of secret cultic practices in many prehistoric societies, the claim as leveled against Jews was rarely attested in antiquity. It was, however, frequently attached to early communities of Christians in the Roman Empire, and it re-emerged as a European Christian accusation against Jews in the medieval period. This libel—alongside those of well poisoning and host desecration—became a major theme of the persecution of Jews in Europe from that period to the present day. Blood libels typically claim that Jews require human blood for the baking of matzos, an unleavened flatbread eaten during Passover, although this element of the accusation was allegedly absent in the earliest blood libels, in which then-contemporary Jews were accused of re-enacting the crucifixion. The accusations often assert that the blood of Christian children is especially coveted, and historically, blood libel claims have been made in order to account for the otherwise unexplained deaths of children. In some cases, the alleged victims of human sacrifice have become venerated as Christian martyrs. Three of these (William of Norwich, Little Saint Hugh of Lincoln, and Simon of Trent) became objects of local cults and veneration, and although Simon was never canonized, his veneration was added to the General Roman Calendar. One child who was allegedly murdered by Jews, Gabriel of Białystok, was canonized by the Russian Orthodox Church. In Jewish lore, blood libels served as the impetus for the creation of the Golem of Prague by Rabbi Judah Loew ben Bezalel in the 16th century. According to Walter Laqueur, there have been about 150 recorded cases of blood libel, not to mention thousands of rumors, that resulted in the arrest and killing of Jews throughout history, most of them in the Middle Ages. The term "blood libel" has also been used in reference to any unpleasant or damaging false accusation, and as a result, it has acquired a broader metaphoric meaning. However, this wider usage of the term remains controversial, because Jewish groups object to it. History The earliest versions of the accusation held that Jews crucified a Christian child at Easter/Passover in fulfilment of a supposed prophecy. At this stage there was no reference to the use of blood in unleavened matzo bread; that element developed later as a major motivation ascribed to the alleged crime. Possible precursors The earliest known antecedent is from a man named Democritus (not the philosopher) mentioned in the Suda, who alleged that "every seven years the Jews captured a stranger, brought him to the temple in Jerusalem, and sacrificed him, cutting his flesh into bits." The Greco-Egyptian author Apion claimed that the Jews sacrificed Greek victims in their temple. Here, the writer states that when Antiochus Epiphanes entered the temple in Jerusalem, he discovered a Greek captive who told him that he was being fattened for sacrifice. Every year, Apion claimed, the Jews would sacrifice a Greek and consume his flesh, at the same time swearing eternal hatred towards the Greeks. Apion's claim likely reflects attitudes towards Jews that were already circulating, as similar claims were made by Posidonius and Apollonius Molon in the 1st century BCE. A similar idea appears later in history, when Socrates Scholasticus (5th century) reported that, in a drunken frolic, a group of Jews bound a Christian child to a cross in mockery of the death of Christ and scourged him until he died.
Israel Yuval proposed that the blood libel may have originated in the 12th century from Christian views of Jewish behavior during the First Crusade. Some Jews committed suicide and killed their own children rather than expose them to forced conversion to Christianity. Yuval wrote that Christians may have argued that if Jews could kill their own children, they could also kill Christian children. Origins in England In England in 1144, the Jews of Norwich were falsely accused of ritual murder after a boy, William of Norwich, was found dead in the woods with stab wounds. William's hagiographer, Thomas of Monmouth, falsely claimed that every year there is an international council of Jews at which they choose the country in which a child will be killed during Easter, because of a Jewish prophecy that states that the killing of a Christian child each year will ensure that the Jews will be restored to the Holy Land. In 1144, England was chosen, and the leaders of the Jewish community delegated the Jews of Norwich to perform the killing. They then abducted and crucified William. The legend was turned into a cult, with William acquiring the status of a martyr and pilgrims bringing offerings to the local church. This was followed by similar accusations in Gloucester (1168), Bury St Edmunds (1181) and Bristol (1183). In 1189, the Jewish deputation attending the coronation of Richard the Lionheart was attacked by the crowd. Massacres of Jews at London and York soon followed. On 16 March 1190, some 150 Jews were attacked in York and then massacred when they took refuge in the royal castle, where Clifford's Tower now stands, with some committing suicide rather than being taken by the mob. The 17 bodies thrown into a well in Norwich between the 12th and 13th centuries (five of which were shown by DNA testing to be likely members of a single Jewish family) may very possibly have been victims of one of these pogroms. After the death of Little Saint Hugh of Lincoln, there were trials and executions of Jews. The case is mentioned by Matthew Paris and Chaucer, and has thus become well known. Its notoriety sprang from the intervention of the Crown, the first time an accusation of ritual killing had been given royal credibility. The eight-year-old Hugh disappeared at Lincoln on 31 July 1255. His body was probably discovered on 29 August, in a well. A Jew named Copin or Koppin confessed to John of Lexington, a servant of the crown and a relative of the Bishop of Lincoln, that he had been involved and that the boy had been crucified by the Jews, who had assembled at Lincoln for that purpose. King Henry III, who had reached Lincoln at the beginning of October, had Copin executed and 91 of the Jews of Lincoln seized and sent up to London, where 18 of them were executed. The rest were pardoned at the intercession of the Franciscans or Dominicans. A few decades later, in 1290, Jews were expelled from all of England, and they were not allowed to return until 1657. Continental Europe Much like the blood libel of England, the history of blood libel in continental Europe consists of unsubstantiated claims made about the corpses of Christian children. Supernatural events were frequently alleged to accompany these discoveries and corpses, events which contemporaries often attributed to miracles. Also, just as in England, these accusations in continental Europe typically resulted in the execution of numerous Jews – sometimes even all, or close to all, the Jews in one town.
These accusations and their effects also, in some cases, led to royal interference on behalf of the Jews. Thomas of Monmouth's story of the annual Jewish meeting to decide which local community would kill a Christian child also quickly spread to the continent. An early version appears in Bonum Universale de Apibus ii. 29, § 23, by Thomas of Cantimpré (a monastery near Cambrai). Thomas wrote, in around 1260, "It is quite certain that the Jews of every province annually decide by lot which congregation or city is to send Christian blood to the other congregations." Thomas of Cantimpré also believed that since the time when the Jews called out to Pontius Pilate, "His blood be on us, and on our children" (Matthew 27:25), they have been afflicted with hemorrhages, a condition equated with male menstruation: A very learned Jew, who in our day has been converted to the (Christian) faith, informs us that one enjoying the reputation of a prophet among them, toward the close of his life, made the following prediction: 'Be assured that relief from this secret ailment, to which you are exposed, can only be obtained through Christian blood ("solo sanguine Christiano").' This suggestion was followed by the ever-blind and impious Jews, who instituted the custom of annually shedding Christian blood in every province, in order that they might recover from their malady. Thomas added that the Jews had misunderstood the words of their prophet, who by his expression "solo sanguine Christiano" had meant not the blood of any Christian, but that of Jesus, the only true remedy for all physical and spiritual suffering. Thomas did not mention the name of the "very learned" proselyte, but it may have been Nicholas Donin of La Rochelle, who, in 1240, had a disputation on the Talmud with Yechiel of Paris, and who in 1242 caused the burning of numerous Talmudic manuscripts in Paris. It is known that Thomas was personally acquainted with Nicholas. Nicholas Donin and another Jewish convert, Theobald of Cambridge, are greatly credited with the adoption and spread of the blood libel myth in Europe. The first known case outside England was in Blois, France, in 1171. This was the site of a blood libel accusation against the town's entire Jewish community that led to around 31–33 Jews (17 of them women) being burned to death on 29 May of that year, the 20th of Sivan of 4931. The blood libel revolved around R. Isaac, a Jew whom a Christian servant reported had deposited a murdered Christian in the Loire. The child's body was never found. The count had about 40 adult Jews of Blois arrested, and they were eventually burned. The surviving members of the Blois Jewish community, as well as surviving holy texts, were ransomed. As a result of this case, the Jews garnered new promises from the king. The burned bodies of the condemned Jews were said to have remained unblemished through the burning, a well-known martyr-miracle motif for both Jews and Christians. There is significant primary source material from this case, including a letter revealing moves for Jewish protection under King Louis VII. Responding to the mass execution, Rabbenu Tam declared the 20th of Sivan a fast day. In this case at Blois, the myth that Jews needed the blood of Christians had not yet been proclaimed.
In 1235, after the dead bodies of five boys were found on Christmas Day in Fulda, the inhabitants of the town claimed that the Jews had killed them to consume their blood, and burned 34 Jews to death with the help of Crusaders assembled at the time. Even though Emperor Frederick II cleared the Jews of any wrongdoing after an investigation, blood libel accusations persisted in Germany. At Pforzheim, Baden, in 1267, a woman supposedly sold a girl to Jews who, according to the myth, then cut her open and dumped her in the Enz River, where boatmen found her; the girl cried for vengeance, and then died. The body was said to have bled as the Jews were brought to it. The woman and the Jews allegedly confessed and were subsequently killed. That a judicial execution was summarily committed in consequence of the accusation is evident from the manner in which the Nuremberg "Memorbuch" and the synagogal poems refer to the incident. In 1270, at Weissenburg in Alsace, a supposed miracle alone decided the charge against the Jews. A child's body had shown up in the Lauter River; it was claimed that Jews had cut into the child to acquire his blood, and that the child continued bleeding for five days. At Oberwesel, near Easter of 1287, alleged miracles again constituted the only evidence against the Jews. In this case, it was claimed that the corpse of the 16-year-old Werner of Oberwesel (also referred to as "Good Werner") landed at Bacharach and that the body performed miracles, particularly medicinal miracles. Light was also said to have been emitted by the body. Reportedly, the child had been hung upside down, forced to throw up the host, and cut open. In consequence, the Jews of Oberwesel and many other adjacent localities were severely persecuted during the years 1286–89. The Jews of Oberwesel were particularly targeted because no Jews remained in Bacharach following a 1283 pogrom. Additionally, there were further pogroms at and around Oberwesel following this case. Rudolph of Habsburg, to whom the Jews had appealed for protection, had the archbishop of Mainz declare, in order to quash the miracle story, that a great wrong had been done to the Jews. This declaration proved very limited in effect. A statement was made, in the Chronicle of Konrad Justinger of 1423, that at Bern in 1293 or 1294 the Jews tortured and murdered a boy called Rudolph (also referred to as Ruff or Ruof). The body was reportedly found by the house of Jöly, a Jew. The Jewish community was then implicated. The penalties imposed upon the Jews included torture, execution, expulsion, and steep financial fines. Justinger argued that Jews were out to harm Christianity. The historical impossibility of this widely credited story was demonstrated by Jakob Stammler, pastor of Bern, in 1888. There have been several explanations put forth as to why these blood libel accusations were made and perpetuated. For example, it has been argued that Thomas of Monmouth's account and other similar false accusations, as well as their perpetuation, largely had to do with the economic and political interests of the leaders who did, in fact, perpetuate these myths. Additionally, it was widely believed in Europe that Jews used Christian blood for medicinal and other purposes. Despite the unsubstantiated, mythical nature of these claims, as well as of their sources, they evidently had a material impact on the communities in which they occurred, affecting both the Jewish and non-Jewish populations.
Renaissance and Baroque Simon of Trent, aged two, disappeared in 1475, and his father alleged that he had been kidnapped and murdered by the local Jewish community. Fifteen local Jews were sentenced to death and burned. Simon was regarded locally as a saint, although he was never canonised by the church of Rome. He was removed from the Roman Martyrology in 1965 by Pope Paul VI. Christopher of Toledo, also known as Christopher of La Guardia or "the Holy Child of La Guardia", was a four-year-old Christian boy supposedly murdered in 1490 by two Jews and three conversos (converts to Christianity). In total, eight men were executed. It is now believed that this case was constructed by the Spanish Inquisition to facilitate the expulsion of Jews from Spain. In a case at Tyrnau (Nagyszombat, today Trnava, Slovakia), the absurdity, even the impossibility, of the statements forced by torture from women and children shows that the accused preferred death as a means of escape from the torture and admitted everything that was asked of them. They even said that Jewish men menstruated and that they therefore practiced the drinking of Christian blood as a remedy. At Bösing (Bazin, today Pezinok, Slovakia), it was charged that a nine-year-old boy had been bled to death after suffering cruel torture; thirty Jews confessed to the crime and were publicly burned. The true facts of the case were disclosed later, when the child was found alive in Vienna. He had been taken there by the accuser, Count Wolf of Bazin, as a means of ridding himself of his Jewish creditors at Bazin. In Rinn, near Innsbruck, a boy named Andreas Oxner (also known as Anderl von Rinn) was said to have been bought by Jewish merchants and cruelly murdered by them in a forest near the city, his blood being carefully collected in vessels. The accusation of drawing off the blood (without murder) was not made until the beginning of the 17th century, when the cult was founded. The older inscription in the church of Rinn, dating from 1575, is distorted by fabulous embellishments: for example, that the money paid for the boy to his godfather turned into leaves, and that a lily blossomed upon his grave. The cult continued until it was officially prohibited in 1994 by the Bishop of Innsbruck. On 17 January 1670, Raphael Levy, a member of the Jewish community of Metz, was executed on charges of the ritual murder of a peasant child who had gone missing in the woods outside the village of Glatigny on 25 September 1669, the eve of Rosh Hashanah. 19th century One of the child-saints in the Russian Orthodox Church is the six-year-old boy Gavriil Belostoksky from the village of Zverki. According to the legend supported by the church, the boy was kidnapped from his home during the holiday of Passover while his parents were away. Shutko, a Jew from Białystok, was accused of bringing the boy to Białystok, piercing him with sharp objects and draining his blood for nine days, then bringing the body back to Zverki and dumping it in a local field. A cult developed, and the boy was canonized in 1820. His relics are still the object of pilgrimage. On All Saints Day, 27 July 1997, Belarusian state TV showed a film alleging that the story is true. The revival of the cult in Belarus was cited as a dangerous expression of antisemitism in international reports on human rights and religious freedoms, which were passed to the UNHCR.
1823–35 Velizh blood libel: After a Christian child was found murdered outside this small Russian town in 1823, accusations by a drunk prostitute led to the imprisonment of many local Jews. Some were not released until 1835. 1840 Damascus affair: In February, at Damascus, a Catholic monk named Father Thomas and his servant disappeared. The accusation of ritual murder was brought against members of the Jewish community of Damascus. 1840 Rhodes blood libel: The Jews of Rhodes, under the Ottoman Empire, were accused of murdering a Greek Christian boy. The libel was supported by the local governor and the European consuls posted to Rhodes. Several Jews were arrested and tortured, and the entire Jewish quarter was blockaded for twelve days. An investigation carried out by the central Ottoman government found the Jews to be innocent. In 1844, David Paul Drach, the son of the Head Rabbi of Paris and a convert to Christianity, wrote in his book De L'harmonie Entre L'eglise et la Synagogue that a Catholic priest in Damascus had been ritually killed and the murder covered up by powerful Jews in Europe, referring to the 1840 Damascus affair (see above). In March 1879, ten Jewish men from a mountain village were brought to Kutaisi, Georgia, to stand trial for the alleged kidnapping and murder of a Christian girl. The case attracted a great deal of attention in Russia (of which Georgia was then a part): "While periodicals as diverse in tendency as Herald of Europe and Saint Petersburg Notices expressed their amazement that medieval prejudice should have found a place in the modern judiciary of a civilized state, New Times hinted darkly of strange Jewish sects with unknown practices." The trial ended in acquittal, and the orientalist Daniel Chwolson published a refutation of the blood libel. 1882 Tiszaeszlár blood libel: The Jews of the village of Tiszaeszlár, Hungary were accused of the ritual murder of a fourteen-year-old Christian girl, Eszter Solymosi. The case was one of the main causes of the rise of antisemitism in the country. The accused persons were eventually acquitted. 1899 Hilsner affair: Leopold Hilsner, a Czech Jewish vagabond, was accused of murdering a nineteen-year-old Christian woman, Anežka Hrůzová, with a slash to the throat. Despite the absurdity of the charge and the relatively progressive nature of society in Austria-Hungary, Hilsner was convicted and sentenced to death. He was later convicted of an additional unsolved murder, also involving a Christian woman. In 1901, the sentence was commuted to life imprisonment. Tomáš Masaryk, a prominent Austro-Czech philosophy professor and future president of Czechoslovakia, spearheaded Hilsner's defense. He was later blamed by Czech media because of this. In March 1918, Hilsner was pardoned by Austrian emperor Charles I. He was never exonerated, and the true guilty parties were never found. 20th century and beyond The 1903 Kishinev pogrom, an anti-Jewish riot, started when an antisemitic newspaper wrote that a Christian Russian boy, Mikhail Rybachenko, had been found murdered in the town of Dubossary, alleging that the Jews killed him in order to use his blood in the preparation of matzo. Some 49 Jews were killed and hundreds were wounded, with over 700 houses looted and destroyed. In the 1910 Shiraz blood libel, the Jews of Shiraz, Iran, were falsely accused of murdering a Muslim girl. The entire Jewish quarter was pillaged; the pogrom left 12 Jews dead and about 50 injured.
In Kyiv, a Jewish factory manager, Menahem Mendel Beilis, was accused of murdering 13-year-old Andriy Yushchinskyi, a Christian child, and using his blood to make matzos. He was acquitted by an all-Christian jury after a sensational trial in 1913. In 1928, the Jews of Massena, New York, were falsely accused of kidnapping and killing a Christian girl in the Massena blood libel. Jews were frequently accused of the ritual murder of Christians for their blood in Der Stürmer, an antisemitic newspaper published in Nazi Germany. The infamous May 1934 issue of the paper was later banned by the Nazi authorities, because it went so far as to compare alleged Jewish ritual murder with the Christian rite of communion. In 1938, the British fascist politician and veterinarian Arnold Leese published an antisemitic booklet in defense of the blood libel, which he titled My Irrelevant Defence: Meditations inside Gaol and Out on Jewish Ritual Murder. The 1944–1946 anti-Jewish violence in Poland, which according to some estimates killed as many as 1,000–2,000 Jews (237 documented cases), involved, among other elements, accusations of blood libel, especially in the case of the 1946 Kielce pogrom. King Faisal of Saudi Arabia (r. 1964–1975) made accusations against Parisian Jews that took the form of a blood libel. The Matzah of Zion was written by the Syrian defense minister, Mustafa Tlass, in 1986. The book concentrates on two issues: renewed ritual murder accusations against the Jews in the Damascus affair of 1840, and The Protocols of the Elders of Zion. The book was cited at a United Nations conference in 1991 by a Syrian delegate. On 21 October 2002, the London-based Arabic paper Al-Hayat reported that the book The Matzah of Zion was undergoing its eighth reprinting and was also being translated into English, French and Italian. Egyptian filmmaker Munir Radhi has announced plans to adapt the book into a film. In 2003, a private Syrian film company created a 29-part television series, Ash-Shatat ("The Diaspora"). This series originally aired in Lebanon in late 2003 and was subsequently broadcast by Al-Manar, a satellite television network owned by Hezbollah. This TV series, based on the antisemitic forgery The Protocols of the Learned Elders of Zion, shows the Jewish people engaging in a conspiracy to rule the world, and it also presents Jews as people who murder the children of Christians, drain their blood and use it to bake matzah. In early January 2005, some 20 members of the Russian State Duma publicly made a blood libel accusation against the Jewish people. They approached the Prosecutor General's Office and demanded that Russia "ban all Jewish organizations." They accused all Jewish groups of being extremist and "anti-Christian and inhumane", and even accused them of practices that include ritual murders. Alluding to previous antisemitic Russian court decrees that accused the Jews of ritual murder, they wrote that "Many facts of such religious extremism were proven in courts." The accusation included traditional antisemitic canards, such as the claim that "the whole democratic world today is under the financial and political control of international Jewry. And we do not want our Russia to be among such unfree countries". This demand was published as an open letter to the prosecutor general in Rus Pravoslavnaya (Русь православная, "Orthodox Russia"), a national-conservative newspaper.
This group consisted of members of the ultra-nationalist Liberal Democrats, the Communist faction, and the nationalist Motherland party, with some 500 supporters. The document in question is known as "The Letter of Five Hundred" ("Письмо пятисот"). Its supporters included editors of nationalist newspapers as well as journalists. By the end of the month, the group had been strongly criticized, and it retracted its demand in response. At the end of April 2005, five boys, ages 9 to 12, disappeared in Krasnoyarsk, Russia. In May 2005, their burnt bodies were found in the city sewage system. The crime was never solved, and in August 2007 the investigation was extended until 18 November 2007. Some Russian nationalist groups claimed that the children had been murdered by a Jewish sect for a ritual purpose. The nationalist M. Nazarov, one of the authors of "The Letter of Five Hundred", alleged "the existence of a 'Hasidic sect', whose members kill children before Passover to collect their blood", citing the Beilis case mentioned above as evidence. Nazarov also alleged that "the ritual murder requires throwing the body away rather than concealing it". "The Union of the Russian People" demanded that officials thoroughly investigate the Jews, not stopping at searches of synagogues, matzah bakeries and their offices. During a speech in 2007, Raed Salah, the leader of the northern branch of the Islamic Movement in Israel, referred to Jews in Europe having in the past used children's blood to bake holy bread. "We have never allowed ourselves to knead [the dough for] the bread that breaks the fast in the holy month of Ramadan with children's blood", he said. "Whoever wants a more thorough explanation, let him ask what used to happen to some children in Europe, whose blood was mixed in with the dough of the [Jewish] holy bread." In the 2000s, a Polish team of anthropologists and sociologists investigated the currency of the blood libel myth in Sandomierz, where a painting depicting the blood libel adorns the cathedral, and among Orthodox faithful in villages near Białystok; they discovered that these beliefs persist among some Catholic and Orthodox Christians. In an address that aired on Al-Aqsa TV, a Hamas-run TV station in Gaza, on 31 March 2010, Salah Eldeen Sultan (Arabic: صلاح الدين سلطان), founder of the American Center for Islamic Research in Columbus, Ohio, the Islamic American University in Southfield, Michigan, and the Sultan Publishing Co., and described in 2005 as "one of America's most noted Muslim scholars", alleged that Jews kidnap Christians and others in order to slaughter them and use their blood for making matzos. Sultan, who is currently a lecturer on Muslim jurisprudence at Cairo University, stated: "The Zionists kidnap several non-Muslims, Christians and others... this happened in a Jewish neighborhood in Damascus. They killed the French doctor, Toma, who used to treat the Jews and others for free, in order to spread Christianity. Even though he was their friend and they benefited from him the most, they took him on one of these holidays and slaughtered him, along with the nurse. Then they kneaded the matzos with the blood of Dr. Toma and his nurse. They do this every year. The world must know these facts about the Zionist entity and its terrible corrupt creed. The world should know this."
(Translation by the Middle East Media Research Institute.) During an interview aired on Rotana Khalijiya TV on 13 August 2012, the Saudi cleric Salman Al-Odeh stated (as translated by MEMRI) that "It is well known that the Jews celebrate several holidays, one of which is the Passover, or the Matzos Holiday. I read once about a doctor who was working in a laboratory. This doctor lived with a Jewish family. One day, they said to him: 'We want blood. Get us some human blood.' He was confused. He didn't know what this was all about. Of course, he couldn't betray his work ethics in such a way, but he began inquiring, and he found that they were making matzos with human blood." Al-Odeh also stated that "[Jews] eat it, believing that this brings them close to their false god, Yahweh" and that "They would lure a child in order to sacrifice him in the religious rite that they perform during that holiday." In April 2013, the Palestinian non-profit organization MIFTAH, founded by Hanan Ashrawi, apologized for publishing an article which criticized US President Barack Obama for holding a Passover Seder in the White House by saying "Does Obama, in fact, know the relationship, for example, between 'Passover' and 'Christian blood'...?! Or 'Passover' and 'Jewish blood rituals'?! Much of the chatter and gossip about historical Jewish blood rituals in Europe is real and not fake as they claim; the Jews used the blood of Christians in the Jewish Passover." MIFTAH's apology expressed its "sincerest regret". In an interview aired on Al-Hafez TV on 12 May 2013, Khaled Al-Zaafrani of the Egyptian Justice and Progress Party stated (as translated by MEMRI): "It's well known that during the Passover, they [the Jews] make matzos called the 'Blood of Zion.' They take a Christian child, slit his throat and slaughter him. Then they take his blood and make their [matzos]. This is a very important rite for the Jews, which they never forgo... They slice it and fight over who gets to eat Christian blood." In the same interview, Al-Zaafrani stated that "The French kings and the Russian czars discovered this in the Jewish quarters. All the massacring of Jews that occurred in those countries were because they discovered that the Jews had kidnapped and slaughtered children, in order to make the Passover matzos." In an interview aired on the Al-Quds TV channel on 28 July 2014 (as translated by MEMRI), Osama Hamdan, the top representative of Hamas in Lebanon, stated that "we all remember how the Jews used to slaughter Christians, in order to mix their blood in their holy matzos. This is not a figment of imagination or something taken from a film. It is a fact, acknowledged by their own books and by historical evidence." In a subsequent interview with CNN's Wolf Blitzer, Hamdan defended his comments, stating that he "has Jewish friends". In a sermon broadcast on the official Jordanian TV channel on 22 August 2014, Sheik Bassam Ammoush, a former Minister of Administrative Development who was appointed to Jordan's House of Senate ("Majlis al-Aayan") in 2011, stated (as translated by MEMRI): "In [the Gaza Strip] we are dealing with the enemies of Allah, who believe that the matzos that they bake on their holidays must be kneaded with blood. When the Jews were in the diaspora, they would murder children in England, in Europe, and in America. They would slaughter them and use their blood to make their matzos... They believe that they are God's chosen people.
They believe that the killing of any human being is a form of worship and a means to draw near to their god." In March 2020, the Italian painter Giovanni Gasparro unveiled a painting of the martyrdom of Simon of Trent, titled "Martirio di San Simonino da Trento (Simone Unverdorben), per omicidio rituale ebraico" ("The Martyrdom of St. Simon of Trent (Simone Unverdorben), by Jewish ritual murder"). The painting was condemned by the Italian Jewish community and the Simon Wiesenthal Center, among others. The QAnon conspiracy theory has been accused of advancing blood libel tropes through its belief that Hollywood elites are harvesting adrenochrome from children through Satanic ritual abuse in order to become immortal. In February 2022, a sculpture of Simon of Trent depicting the blood libel was used to promote the adrenochrome-harvesting conspiracy theory. Views of the Catholic Church The attitude of the Catholic Church towards these accusations and the cults venerating children supposedly killed by Jews has varied over time. The Papacy generally opposed them, although it had problems enforcing its opposition. In 1911, the Dictionnaire apologétique de la foi catholique, an important French Catholic encyclopedia, published an analysis of the blood libel accusations, which may be taken as broadly representative of educated Catholic opinion in continental Europe at that time. The article noted that the popes had generally refrained from endorsing the blood libel, and it concluded that the accusations were unproven in a general sense, but it left open the possibility that some Jews had committed ritual murders of Christians. Other contemporary Catholic sources (notably the Jesuit periodical La Civiltà Cattolica) promoted the blood libel as truth. Today, the accusations are rarer in Catholic circles. While Simon of Trent's local status as a saint was removed in 1965, several towns in Spain still commemorate the blood libel. Papal pronouncements Pope Innocent IV took action against the blood libel: "5 July 1247 Mandate to the prelates of Germany and France to annul all measures adopted against the Jews on account of the ritual murder libel, and to prevent the accusation of Arabs on similar charges" (The Apostolic See and the Jews, Documents: 492–1404; Simonsohn, Shlomo, pp. 188–189, 193–195, 208). In 1247, he also wrote that "Certain of the clergy, and princes, nobles and great lords of your cities and dioceses have falsely devised certain godless plans against the Jews, unjustly depriving them by force of their property, and appropriating it themselves;... they falsely charge them with dividing up among themselves on the Passover the heart of a murdered boy...In their malice, they ascribe every murder, wherever it chance to occur, to the Jews. And on the ground of these and other fabrications, they are filled with rage against them, rob them of their possessions without any formal accusation, without confession, and without legal trial and conviction, contrary to the privileges granted to them by the Apostolic See... Since it is our pleasure that they shall not be disturbed,... we ordain that ye behave towards them in a friendly and kind manner. Whenever any unjust attacks upon them come under your notice, redress their injuries, and do not suffer them to be visited in the future by similar tribulations." Pope Gregory X (1271–1276) issued a letter which criticized the practice of blood libels and forbade arrests and persecution of Jews based on a blood libel
unless (which we do not believe) they be caught in the commission of the crime. Pope Paul III, in a bull of 12 May 1540, made clear his displeasure at having learned, through the complaints of the Jews of Hungary, Bohemia, and Poland, that their enemies, looking for a pretext to lay their hands on the Jews' property, were falsely attributing terrible crimes to them, in particular that of killing children and drinking their blood. Pope Benedict XIV wrote the bull Beatus Andreas (22 February 1755) in response to an application for the formal canonization of the 15th-century Andreas Oxner, a folk saint alleged to have been murdered by Jews "out of hatred for the Christian faith". Benedict did not dispute the factual claim that Jews murdered Christian children, and, anticipating that further cases on this basis would be brought, he appears to have accepted it as accurate; but he decreed that in such cases beatification or canonization would be inappropriate. Blood libels in Muslim lands In late 1553 or 1554, Suleiman the Magnificent, the reigning Sultan of the Ottoman Empire, issued a firman (royal decree) which formally denounced blood libels against the Jews. In 1840, following the Western outrage arising from the Damascus affair, the British politician and leader of the British Jewish community Sir Moses Montefiore, backed by other influential Westerners, including Britain's Lord Palmerston and Damascus consul Charles Henry Churchill, the French lawyer Adolphe Crémieux, Austrian consul Giovanni Gasparo Merlato, Danish missionary John Nicolayson, and Salomon Munk, persuaded Sultan Abdulmecid I in Constantinople to issue a firman on 6 November 1840 intended to halt the spread of blood libel accusations in the Ottoman Empire. The edict declared that blood libel accusations were a slander against Jews and would be prohibited throughout the Ottoman Empire, and read in part: "... and for the love we bear to our subjects, we cannot permit the Jewish nation, whose innocence for the crime alleged against them is evident, to be worried and tormented as a consequence of accusations which have not the least foundation in truth...". In the remainder of the 19th century and into the 20th century, there were many instances of the blood libel in Ottoman lands, such as the 1881 Fornaraki affair. However, the libel almost always came from the Christian community, sometimes with the connivance of Greek or French diplomats. The Jews could usually count on the goodwill of the Ottoman authorities and, increasingly, on the support of British, Prussian and Austrian representatives. In the 1910 Shiraz blood libel, the Jews of Shiraz, Iran, were falsely accused of murdering a Muslim girl. The entire Jewish quarter was pillaged, with the pogrom leaving 12 Jews dead and about 50 injured. In 1983, Mustafa Tlass, the Syrian minister of defense, wrote and published The Matzah of Zion, a treatment of the Damascus affair of 1840 that repeats the ancient blood libel that Jews use the blood of murdered non-Jews in religious rituals such as baking matzah bread. In this book, he argues that the true religious beliefs of Jews are "black hatred against all humans and religions" and that no Arab country should ever sign a peace treaty with Israel. Tlass reprinted the book several times. Following the book's publication, Tlass told Der Spiegel that this accusation against Jews was valid, and he also claimed that his book is "an historical study ... based on documents from France, Vienna and the American University in Beirut."
In 2003, the Egyptian newspaper Al-Ahram published a series of articles by Osama El-Baz, a senior advisor to the then Egyptian President Hosni Mubarak. Among other things, Osama El-Baz explained the origins of the blood libel against the Jews. He said that Arabs and Muslims have never been antisemitic as a group, but he accepted the fact that a few Arab writers and media figures attack Jews "on the basis of the racist fallacies and myths that originated in Europe". He urged people not to succumb to "myths" such as the blood libel. Nevertheless, on many occasions in modern times, blood libel stories have appeared in the state-sponsored media of a number of Arab and Muslim nations, as well as on their television shows and websites, and books which allege instances of Jewish blood libels are not uncommon there. The blood libel was featured in a scene in the Syrian TV series Ash-Shatat, shown in 2003. In 2007, the Lebanese poet Marwan Chamoun, in an interview aired on Télé Liban, referred to the "... slaughter of the priest Tomaso de Camangiano ... in 1840 ... in the presence of two rabbis in the heart of Damascus, in the home of a close friend of this priest, Daud Al-Harari, the head of the Jewish community of Damascus. After he was slaughtered, his blood was collected, and the two rabbis took it." A novel based on the Damascus affair, Death of a Monk, was published in 2004. See also Blood atonement Blood curse Blood ritual Cake of Light Conspiracy theory Human cannibalism Kiddush#History of using white wine Moral panic OpIndia#Bihar human sacrifice claims Salem witch trials Satanic ritual abuse Sefer HaRazim References Notes Further reading Hsia, R. Po-chia (1998) The Myth of Ritual Murder: Jews and Magic in Reformation Germany. New Haven: Yale University Press. O'Brien, Darren (2011) The Pinnacle of Hatred: The Blood Libel and the Jews. Jerusalem: Vidal Sassoon International Center for the Study of Antisemitism, Hebrew University Magnes Press. Rose, E. M. (2015) The Murder of William of Norwich: The Origins of the Blood Libel in Medieval Europe. Oxford University Press. Yuval, Israel Jacob (2006) Two Nations in Your Womb: Perceptions of Jews and Christians in Late Antiquity and the Middle Ages. Berkeley: University of California Press. pp. 135–204. External links Antisemitic canards
https://en.wikipedia.org/wiki/Naive%20set%20theory
Naive set theory
Naive set theory is any of several theories of sets used in the discussion of the foundations of mathematics. Unlike axiomatic set theories, which are defined using formal logic, naive set theory is defined informally, in natural language. It describes the aspects of mathematical sets familiar in discrete mathematics (for example Venn diagrams and symbolic reasoning about their Boolean algebra), and suffices for the everyday use of set theory concepts in contemporary mathematics. Sets are of great importance in mathematics; in modern formal treatments, most mathematical objects (numbers, relations, functions, etc.) are defined in terms of sets. Naive set theory suffices for many purposes, while also serving as a stepping-stone towards more formal treatments. Method A naive theory in the sense of "naive set theory" is a non-formalized theory, that is, a theory that uses natural language to describe sets and operations on sets. The words and, or, if ... then, not, for some, for every are treated as in ordinary mathematics. As a matter of convenience, use of naive set theory and its formalism prevails even in higher mathematics – including in more formal settings of set theory itself. The first development of set theory was a naive set theory. It was created at the end of the 19th century by Georg Cantor as part of his study of infinite sets and developed by Gottlob Frege in his Grundgesetze der Arithmetik. Naive set theory may refer to several very distinct notions: an informal presentation of an axiomatic set theory, e.g. as in Naive Set Theory by Paul Halmos; early or later versions of Georg Cantor's theory and other informal systems; or decidedly inconsistent theories (whether axiomatic or not), such as a theory of Gottlob Frege that yielded Russell's paradox, and theories of Giuseppe Peano and Richard Dedekind. Paradoxes The assumption that any property may be used to form a set, without restriction, leads to paradoxes. One common example is Russell's paradox: there is no set consisting of "all sets that do not contain themselves". Thus consistent systems of naive set theory must include some limitations on the principles which can be used to form sets. Cantor's theory Some believe that Georg Cantor's set theory was not actually implicated in the set-theoretic paradoxes (see Frápolli 1991). One difficulty in determining this with certainty is that Cantor did not provide an axiomatization of his system. By 1899, Cantor was aware of some of the paradoxes following from unrestricted interpretation of his theory, for instance Cantor's paradox and the Burali-Forti paradox, and did not believe that they discredited his theory. Cantor's paradox can actually be derived from the above (false) assumption—that any property P(x) may be used to form a set—by taking for P(x) "x is a cardinal number". Frege explicitly axiomatized a theory in which a formalized version of naive set theory can be interpreted, and it is this formal theory which Bertrand Russell actually addressed when he presented his paradox, not necessarily a theory Cantor (who, as mentioned, was aware of several paradoxes) presumably had in mind. Axiomatic theories Axiomatic set theory was developed in response to these early attempts to understand sets, with the goal of determining precisely what operations were allowed and when. Consistency A naive set theory is not necessarily inconsistent, if it correctly specifies the sets allowed to be considered. This can be done by means of definitions, which are implicit axioms.
It is possible to state all the axioms explicitly, as in the case of Halmos' Naive Set Theory, which is actually an informal presentation of the usual axiomatic Zermelo–Fraenkel set theory. It is "naive" in that the language and notations are those of ordinary informal mathematics, and in that it does not deal with consistency or completeness of the axiom system. Likewise, an axiomatic set theory is not necessarily consistent: not necessarily free of paradoxes. It follows from Gödel's incompleteness theorems that a sufficiently complicated first-order logic system (which includes most common axiomatic set theories) cannot be proved consistent from within the theory itself – even if it actually is consistent. However, the common axiomatic systems are generally believed to be consistent; by their axioms they do exclude some paradoxes, like Russell's paradox. Based on Gödel's theorem, it is just not known – and never can be – if there are no paradoxes at all in these theories or in any first-order set theory. The term naive set theory is still today also used in some literature to refer to the set theories studied by Frege and Cantor, rather than to the informal counterparts of modern axiomatic set theory. Utility The choice between an axiomatic approach and other approaches is largely a matter of convenience. In everyday mathematics the best choice may be informal use of axiomatic set theory. References to particular axioms typically then occur only when demanded by tradition, e.g. the axiom of choice is often mentioned when used. Likewise, formal proofs occur only when warranted by exceptional circumstances. This informal usage of axiomatic set theory can have (depending on notation) precisely the appearance of naive set theory as outlined below. It is considerably easier to read and write (in the formulation of most statements, proofs, and lines of discussion) and is less error-prone than a strictly formal approach. Sets, membership and equality In naive set theory, a set is described as a well-defined collection of objects. These objects are called the elements or members of the set. Objects can be anything: numbers, people, other sets, etc. For instance, 4 is a member of the set of all even integers. Clearly, the set of even numbers is infinitely large; there is no requirement that a set be finite. The definition of sets goes back to Georg Cantor. He wrote in his 1895 article Beiträge zur Begründung der transfiniten Mengenlehre: “Unter einer 'Menge' verstehen wir jede Zusammenfassung M von bestimmten wohlunterschiedenen Objekten unserer Anschauung oder unseres Denkens (welche die 'Elemente' von M genannt werden) zu einem Ganzen.” – Georg Cantor “A set is a gathering together into a whole of definite, distinct objects of our perception or of our thought—which are called elements of the set.” – Georg Cantor Note on consistency It does not follow from this definition how sets can be formed, and what operations on sets again will produce a set. The term "well-defined" in "well-defined collection of objects" cannot, by itself, guarantee the consistency and unambiguity of what exactly constitutes and what does not constitute a set. Attempting to achieve this would be the realm of axiomatic set theory or of axiomatic class theory.
The problem, in this context, with informally formulated set theories, not derived from (and implying) any particular axiomatic theory, is that there may be several widely differing formalized versions, that have both different sets and different rules for how new sets may be formed, that all conform to the original informal definition. For example, Cantor's verbatim definition allows for considerable freedom in what constitutes a set. On the other hand, it is unlikely that Cantor was particularly interested in sets containing cats and dogs, but rather only in sets containing purely mathematical objects. An example of such a class of sets could be the von Neumann universe. But even when fixing the class of sets under consideration, it is not always clear which rules for set formation are allowed without introducing paradoxes. For the purpose of fixing the discussion below, the term "well-defined" should instead be interpreted as an intention, with either implicit or explicit rules (axioms or definitions), to rule out inconsistencies. The purpose is to keep the often deep and difficult issues of consistency away from the, usually simpler, context at hand. An explicit ruling out of all conceivable inconsistencies (paradoxes) cannot be achieved for an axiomatic set theory anyway, due to Gödel's second incompleteness theorem, so this does not at all hamper the utility of naive set theory as compared to axiomatic set theory in the simple contexts considered below. It merely simplifies the discussion. Consistency is henceforth taken for granted unless explicitly mentioned. Membership If x is a member of a set A, then it is also said that x belongs to A, or that x is in A. This is denoted by x ∈ A. The symbol ∈ derives from the lowercase Greek letter epsilon, "ε", introduced by Giuseppe Peano in 1889; it is the first letter of the word ἐστί (meaning "is"). The symbol ∉ is often used to write x ∉ A, meaning "x is not in A". Equality Two sets A and B are defined to be equal when they have precisely the same elements, that is, if every element of A is an element of B and every element of B is an element of A. (See axiom of extensionality.) Thus a set is completely determined by its elements; the description is immaterial. For example, the set with elements 2, 3, and 5 is equal to the set of all prime numbers less than 6. If the sets A and B are equal, this is denoted symbolically as A = B (as usual). Empty set The empty set, denoted ∅ and sometimes {}, is a set with no members at all. Because a set is determined completely by its elements, there can be only one empty set. (See axiom of empty set.) Although the empty set has no members, it can be a member of other sets. Thus ∅ ≠ {∅}, because the former has no members and the latter has one member. In mathematics, the only sets with which one needs to be concerned can be built up from the empty set alone. Specifying sets The simplest way to describe a set is to list its elements between curly braces (known as defining a set extensionally). Thus {a, b} denotes the set whose only elements are a and b. (See axiom of pairing.) Note the following points: The order of elements is immaterial; for example, {a, b} = {b, a}. Repetition (multiplicity) of elements is irrelevant; for example, {a, a, b} = {a, b}. (These are consequences of the definition of equality in the previous section.) This notation can be informally abused by saying something like {dogs} to indicate the set of all dogs, but this example would usually be read by mathematicians as "the set containing the single element dogs". An extreme (but correct) example of this notation is {}, which denotes the empty set.
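These conventions map directly onto the built-in sets of many programming languages. The following Python sketch is illustrative only and not part of the article; it uses frozenset so that sets can themselves be members of sets, and the sample elements are arbitrary:

```python
a = {2, 3, 5}
b = {5, 3, 2, 2}       # order and repetition are immaterial
print(a == b)          # True: sets with precisely the same elements are equal

print(3 in a)          # True: 3 is a member of a, i.e. 3 ∈ a
print(4 in a)          # False, i.e. 4 ∉ a

empty = frozenset()                  # the empty set
print(empty == frozenset([empty]))   # False: ∅ is not the same set as {∅}
print(empty in frozenset([empty]))   # True: but ∅ can be a member of {∅}
```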
The notation {x : P(x)}, or sometimes {x | P(x)}, is used to denote the set containing all objects for which the condition P(x) holds (known as defining a set intensionally). For example, {x : x is a real number} denotes the set of real numbers, and {x : x has blonde hair} denotes the set of everything with blonde hair. This notation is called set-builder notation (or "set comprehension", particularly in the context of functional programming). Some variants of set-builder notation are: {x ∈ A : P(x)} denotes the set of all x that are already members of A such that the condition P(x) holds for x. For example, if Z is the set of integers, then {x ∈ Z : x is even} is the set of all even integers. (See axiom of specification.) {F(x) : x ∈ A} denotes the set of all objects obtained by putting members of the set A into the formula F. For example, {2x : x ∈ Z} is again the set of all even integers. (See axiom of replacement.) {F(x) : P(x)} is the most general form of set-builder notation. For example, {x's owner : x is a dog} is the set of all dog owners. Subsets Given two sets A and B, A is a subset of B if every element of A is also an element of B. In particular, each set B is a subset of itself; a subset of B that is not equal to B is called a proper subset. If A is a subset of B, then one can also say that B is a superset of A, that A is contained in B, or that B contains A. In symbols, A ⊆ B means that A is a subset of B, and B ⊇ A means that B is a superset of A. Some authors use the symbols ⊂ and ⊃ for subsets, and others use these symbols only for proper subsets. For clarity, one can explicitly use the symbols ⊊ and ⊋ to indicate non-equality. As an illustration, let R be the set of real numbers, let Z be the set of integers, let O be the set of odd integers, and let P be the set of current or former U.S. Presidents. Then O is a subset of Z, Z is a subset of R, and (hence) O is a subset of R, where in all cases subset may even be read as proper subset. Not all sets are comparable in this way. For example, it is not the case that R is a subset of P, nor that P is a subset of R. It follows immediately from the definition of equality of sets above that, given two sets A and B, A = B if and only if A ⊆ B and B ⊆ A. In fact this is often given as the definition of equality. Usually when trying to prove that two sets are equal, one aims to show these two inclusions. The empty set is a subset of every set (the statement that all elements of the empty set are also members of any set A is vacuously true). The set of all subsets of a given set A is called the power set of A and is denoted by 2^A or P(A); the "P" is sometimes in a script font. If the set A has n elements, then P(A) will have 2^n elements. Universal sets and absolute complements In certain contexts, one may consider all sets under consideration as being subsets of some given universal set. For instance, when investigating properties of the real numbers R (and subsets of R), R may be taken as the universal set. A true universal set is not included in standard set theory (see Paradoxes below), but is included in some non-standard set theories. Given a universal set U and a subset A of U, the complement of A (in U) is defined as AC := {x ∈ U : x ∉ A}. In other words, AC ("A-complement"; sometimes simply A′, "A-prime") is the set of all members of U which are not members of A. Thus with R, Z and O defined as in the section on subsets, if Z is the universal set, then OC is the set of even integers, while if R is the universal set, then OC is the set of all real numbers that are either even integers or not integers at all.
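Set-builder notation corresponds closely to comprehensions, and the subset, power-set, and complement claims above can be checked mechanically. The Python sketch below is illustrative only; it uses a finite range as a stand-in for the integers, so the separation and replacement examples agree only on that range:

```python
from itertools import chain, combinations

Z = set(range(-20, 21))                          # finite stand-in for the integers
evens = {x for x in Z if x % 2 == 0}             # {x ∈ Z : x is even}, separation
doubles = {2 * x for x in range(-10, 11)}        # {2x : x ∈ A}, replacement
print(evens == doubles)                          # True on these finite ranges

A = {1, 2, 3}
print(A <= Z)                                    # True: A ⊆ Z
print(A < Z)                                     # True: A is a proper subset of Z

def power_set(s):
    """Return all subsets of s; a set with n elements has 2**n subsets."""
    items = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

print(len(power_set(A)))                         # 8 == 2**3

U = Z                                            # take Z as the universal set here
complement_of_odds = U - {x for x in U if x % 2}  # absolute complement of the odds
print(complement_of_odds == evens)               # True
```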
Unions, intersections, and relative complements Given two sets A and B, their union is the set consisting of all objects which are elements of A or of B or of both (see axiom of union). It is denoted by A ∪ B. The intersection of A and B is the set of all objects which are both in A and in B. It is denoted by A ∩ B. Finally, the relative complement of B relative to A, also known as the set-theoretic difference of A and B, is the set of all objects that belong to A but not to B. It is written as A \ B or A − B. Symbolically, these are respectively A ∪ B := {x : (x ∈ A) or (x ∈ B)}; A ∩ B := {x : (x ∈ A) and (x ∈ B)} = {x ∈ A : x ∈ B} = {x ∈ B : x ∈ A}; A \ B := {x : (x ∈ A) and not (x ∈ B)} = {x ∈ A : not (x ∈ B)}. The set B does not have to be a subset of A for A \ B to make sense; this is the difference between the relative complement and the absolute complement (AC = U \ A) from the previous section. To illustrate these ideas, let A be the set of left-handed people, and let B be the set of people with blond hair. Then A ∩ B is the set of all left-handed blond-haired people, while A ∪ B is the set of all people who are left-handed or blond-haired or both. A \ B, on the other hand, is the set of all people that are left-handed but not blond-haired, while B \ A is the set of all people who have blond hair but are not left-handed. Now let E be the set of all human beings, and let F be the set of all living things over 1000 years old. What is E ∩ F in this case? No living human being is over 1000 years old, so E ∩ F must be the empty set {}. For any set A, the power set P(A) is a Boolean algebra under the operations of union and intersection. Ordered pairs and Cartesian products Intuitively, an ordered pair is simply a collection of two objects such that one can be distinguished as the first element and the other as the second element, with the fundamental property that two ordered pairs are equal if and only if their first elements are equal and their second elements are equal. Formally, an ordered pair with first coordinate a and second coordinate b, usually denoted by (a, b), can be defined as the set {{a}, {a, b}}. It follows that two ordered pairs (a, b) and (c, d) are equal if and only if a = c and b = d. Alternatively, an ordered pair can be formally thought of as a set {a, b} with a total order. (The notation (a, b) is also used to denote an open interval on the real number line, but the context should make it clear which meaning is intended. Otherwise, the notation ]a, b[ may be used to denote the open interval, whereas (a, b) is used for the ordered pair.) If A and B are sets, then the Cartesian product (or simply product) is defined to be: A × B = {(a, b) : a is in A and b is in B}. That is, A × B is the set of all ordered pairs whose first coordinate is an element of A and whose second coordinate is an element of B. This definition may be extended to a set A × B × C of ordered triples, and more generally to sets of ordered n-tuples for any positive integer n. It is even possible to define infinite Cartesian products, but this requires a more recondite definition of the product. Cartesian products were first developed by René Descartes in the context of analytic geometry. If R denotes the set of all real numbers, then R² := R × R represents the Euclidean plane and R³ := R × R × R represents three-dimensional Euclidean space.
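Union, intersection, relative complement, the Kuratowski encoding of ordered pairs, and finite Cartesian products can all be demonstrated in a few lines. The Python sketch below is illustrative only; the sample elements are invented for the example:

```python
A = {"alice", "bob", "carol"}          # stand-in for the left-handed people
B = {"bob", "dave"}                    # stand-in for the blond-haired people

print(A | B)   # union: left-handed or blond (or both)
print(A & B)   # intersection: left-handed and blond -> {'bob'}
print(A - B)   # relative complement A \ B: left-handed but not blond

def kuratowski_pair(a, b):
    """The ordered pair (a, b) encoded as the set {{a}, {a, b}}."""
    return frozenset({frozenset({a}), frozenset({a, b})})

# The defining property: pairs are equal iff both coordinates agree.
print(kuratowski_pair(1, 2) == kuratowski_pair(1, 2))   # True
print(kuratowski_pair(1, 2) == kuratowski_pair(2, 1))   # False

product = {(a, b) for a in A for b in B}   # A × B as a set of ordered pairs
print(len(product))                        # 6 == |A| * |B|
```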
Some important sets

There are some ubiquitous sets for which the notation is almost universal. Some of these are listed below. In the list, a, b, and c refer to natural numbers, and r and s are real numbers.

Natural numbers are used for counting. A blackboard bold capital N (ℕ) often represents this set.

Integers appear as solutions for x in equations like x + a = b. A blackboard bold capital Z (ℤ) often represents this set (from the German Zahlen, meaning numbers).

Rational numbers appear as solutions to equations like a + bx = c. A blackboard bold capital Q (ℚ) often represents this set (for quotient, because R is used for the set of real numbers).

Algebraic numbers appear as solutions to polynomial equations (with integer coefficients) and may involve radicals and certain other irrational numbers. A Q with an overline (Q̄) often represents this set. The overline denotes the operation of algebraic closure.

Real numbers represent the "real line" and include all numbers that can be approximated by rationals. These numbers may be rational or algebraic but may also be transcendental numbers, which cannot appear as solutions to polynomial equations with rational coefficients. A blackboard bold capital R (ℝ) often represents this set.

Complex numbers are sums of a real and an imaginary number: a + bi. Here either a or b (or both) can be zero; thus, the set of real numbers and the set of strictly imaginary numbers are subsets of the set of complex numbers, which form an algebraic closure for the set of real numbers, meaning that every polynomial with coefficients in ℝ has at least one root in this set. A blackboard bold capital C (ℂ) often represents this set. Note that since a number a + bi can be identified with a point (a, b) in the plane, ℂ is basically "the same" as the Cartesian product ℝ × ℝ ("the same" meaning that any point in one determines a unique point in the other, and for the result of calculations it doesn't matter which one is used, as long as the multiplication rule is appropriate for ℂ).

Paradoxes in early set theory

The unrestricted formation principle of sets, referred to as the axiom schema of unrestricted comprehension, is the source of several paradoxes that appeared early:

{x : x is an ordinal} led, in the year 1897, to the Burali-Forti paradox, the first published antinomy.

{x : x is a cardinal} produced Cantor's paradox in 1897.

{x : true} yielded Cantor's second antinomy in the year 1899. Here the property "true" holds for all x, whatever x may be, so this would be a universal set, containing everything.

{x : x ∉ x}, i.e. the set of all sets that do not contain themselves as elements, gave Russell's paradox in 1902.

If the axiom schema of unrestricted comprehension is weakened to the axiom schema of specification or axiom schema of separation, then all the above paradoxes disappear. There is a corollary. With the axiom schema of separation as an axiom of the theory, it follows, as a theorem of the theory, that the set {x : x ∉ x} does not exist. Or, more spectacularly (Halmos' phrasing): There is no universe. Proof: Suppose that it exists and call it U. Now apply the axiom schema of separation with X = U and, for the condition P(x), use x ∉ x. This leads to Russell's paradox again. Hence U cannot exist in this theory.

Related to the above constructions is formation of the set Y = {x : (x ∈ x) → false}, where the statement following the implication certainly is false. It follows, from the definition of Y, using the usual inference rules (and some afterthought when reading the proof in the linked article below), both that Y ∈ Y → false and Y ∈ Y hold, hence false. This is Curry's paradox. It is (perhaps surprisingly) not the possibility of x ∈ x that is problematic. It is again the axiom schema of unrestricted comprehension allowing (x ∈ x) → false for P(x).
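The "no universe" corollary can be imitated computationally. The sketch below is an informal analogy rather than a formal proof: Python's frozensets are necessarily well-founded, and applying the separation condition x ∉ x to small hand-built "universes" confirms that none of them contains its own Russell subset.

```python
# Informal finite analogue of the separation argument against a universal set.
# Elements are frozensets of frozensets, so the test `x in x` is meaningful
# (and always False here, since a frozenset cannot be built to contain itself).

def russell_subset(s):
    """Separation applied to s with the condition 'x is not an element of x'."""
    return frozenset(x for x in s if x not in x)

def contains_own_russell_subset(u):
    # If this were ever True, u would reproduce Russell's paradox:
    # membership of russell_subset(u) in itself would be contradictory.
    return russell_subset(u) in u

empty = frozenset()
candidates = [
    empty,
    frozenset({empty}),
    frozenset({empty, frozenset({empty})}),
]
assert not any(contains_own_russell_subset(u) for u in candidates)
```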
With the axiom schema of specification instead of unrestricted comprehension, the conclusion Y ∈ Y does not hold, and hence false is not a logical consequence. Nonetheless, the possibility of x ∈ x is often removed explicitly or, e.g. in ZFC, implicitly, by demanding the axiom of regularity to hold. One consequence of it is x ∉ x for every set x; in other words, no set is an element of itself.

The axiom schema of separation is simply too weak (while unrestricted comprehension is a very strong axiom, too strong for set theory) to develop set theory with its usual operations and constructions outlined above. The axiom of regularity is of a restrictive nature as well. Therefore, one is led to the formulation of other axioms to guarantee the existence of enough sets to form a set theory. Some of these have been described informally above and many others are possible. Not all conceivable axioms can be combined freely into consistent theories. For example, the axiom of choice of ZFC is incompatible with the conceivable principle that every set of reals is Lebesgue measurable: the former implies the latter is false.

See also
Algebra of sets
Axiomatic set theory
Internal set theory
List of set identities and relations
Set theory
Set (mathematics)
Partially ordered set

References
Bourbaki, N., Elements of the History of Mathematics, John Meldrum (trans.), Springer-Verlag, Berlin, Germany, 1994.
Devlin, K.J., The Joy of Sets: Fundamentals of Contemporary Set Theory, 2nd edition, Springer-Verlag, New York, NY, 1993.
Frápolli, María J., "Is Cantorian set theory an iterative conception of set?", Modern Logic, v. 1 n. 4, 1991, 302–318.
Kelley, J.L., General Topology, Van Nostrand Reinhold, New York, NY, 1955.
van Heijenoort, J., From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931, Harvard University Press, Cambridge, MA, 1967. Reprinted with corrections, 1977.

External links
Beginnings of set theory page at St. Andrews
Earliest Known Uses of Some of the Words of Mathematics (S)

Set theory
Systems of set theory
https://en.wikipedia.org/wiki/Breathy%20voice
Breathy voice
Breathy voice (also called murmured voice, whispery voice, soughing and susurration) is a phonation in which the vocal folds vibrate, as they do in normal (modal) voicing, but are adjusted to let more air escape, which produces a sighing-like sound. A simple breathy phonation, [ɦ] (not actually a fricative consonant, as a literal reading of the IPA chart would suggest), can sometimes be heard as an allophone of English /h/ between vowels, such as in the word behind, for some speakers.

In the context of the Indo-Aryan languages like Sanskrit and Hindi and comparative Indo-European studies, breathy consonants are often called voiced aspirated, as in the Hindi and Sanskrit stops normally denoted bh, dh, ḍh, jh, and gh and the reconstructed Proto-Indo-European phoneme gʷʰ. That label is arguably a misnomer, as breathy voice is a different type of phonation from aspiration. However, breathy and aspirated stops are acoustically similar in that in both cases there is a delay in the onset of full voicing. In the history of several languages, like Greek and some varieties of Chinese, breathy stops have developed into aspirated stops.

Classification and terminology

There is some confusion as to the nature of murmured phonation. The International Phonetic Alphabet (IPA) and authors such as Peter Ladefoged equate phonemically contrastive murmur with breathy voice, in which the vocal folds are held with lower tension (and farther apart) than in modal voice, with a concomitant increase in airflow and slower vibration of the glottis. In that model, murmur is a point in a continuum of glottal aperture between modal voice and breath phonation (voicelessness). Others, such as Laver, Catford, Trask and the authors of the Voice Quality Symbols (VoQS), equate murmur with whispery voice, in which the vocal folds, or at least the anterior part of the vocal folds, vibrate as in modal voice, but the arytenoid cartilages are held apart to allow a large turbulent airflow between them. In that model, murmur is a compound phonation of approximately modal voice plus whisper. It is possible that the realization of murmur varies among individuals or languages. The IPA uses the term "breathy voice", but VoQS uses the term "whispery voice". Both accept the term "murmur", popularised by Ladefoged.

Transcription

A stop with breathy release or a breathy nasal is transcribed in the IPA either with superscript ⟨ʱ⟩, as in ⟨bʱ⟩, ⟨mʱ⟩, etc., or with the subscript diaeresis, as in ⟨b̤⟩, ⟨m̤⟩, etc. Breathy vowels are most often written with the subscript diaeresis as well, e.g. ⟨a̤⟩. Indication of breathy voice by using the subscript diaeresis was approved in or before June 1976 by members of the council of the International Phonetic Association. In VoQS, one bracketed voice-quality notation is used for whispery voice (or murmur), and another is used for breathy voice. Some authors, such as Laver, suggest a whispery-voice transcription (rather than the IPA breathy-voice one) as the correct analysis of the Gujarati sound, but such a transcription could be confused with the replacement of modal voicing in voiced segments by whispered phonation, which is conventionally transcribed with its own diacritic.

Methods of production

There are several ways to produce breathy sounds such as [ɦ]. One is to hold the vocal folds apart, so that they are lax as they are for [h], but to increase the volume of airflow so that they vibrate loosely. A second is to bring the vocal folds closer together along their entire length than in voiceless [h], but not as close as in modally voiced sounds such as vowels. This results in an airflow intermediate between [h] and vowels, and is the case with English intervocalic /h/. A third is to constrict the glottis, but separate the arytenoid cartilages that control one end.
This results in the vocal folds being drawn together for voicing in the back, but separated to allow the passage of large volumes of air in the front. This is the situation with Hindi. The distinction between the latter two of these realizations, vocal folds somewhat separated along their length (breathy voice) and vocal folds together with the arytenoids making an opening (whispery voice), is phonetically relevant in White Hmong (Hmong Daw).

Phonological property

A number of languages use breathy voicing in a phonologically contrastive way. Many Indo-Aryan languages, such as Hindi, typically have a four-way contrast among plosives and affricates (voiced, breathy, tenuis, aspirated) and a two-way contrast among nasals (voiced, breathy). The Nguni languages within the southern branch of the Bantu languages, including Phuthi, Xhosa, Zulu, Southern Ndebele and Swazi, also have contrastive breathy voice. In the case of Xhosa, there is a four-way contrast analogous to Indic in oral clicks, and similarly a two-way contrast among nasal clicks, but a three-way contrast among plosives and affricates (breathy, aspirated, and ejective), and two-way contrasts among fricatives (voiceless and breathy) and nasals (voiced and breathy). In some Bantu languages, historically breathy stops have been phonetically devoiced, but the four-way contrast in the system has been retained. In all five of the southeastern Bantu languages named, the breathy stops (even if they are realised phonetically as devoiced aspirates) have a marked tone-lowering (or tone-depressing) effect on the following tautosyllabic vowels. For this reason, such stop consonants are frequently referred to in the local linguistic literature as 'depressor' stops.

Swazi, and to a greater extent Phuthi, display good evidence that breathy voicing can be used as a morphological property independent of any consonant voicing value. For example, in both languages, the standard morphological mechanism for achieving the morphosyntactic copula is simply to execute the noun prefix syllable as breathy (or 'depressed').

In Portuguese, vowels after the stressed syllable can be pronounced with breathy voice. Gujarati is unusual in contrasting both breathy vowels and breathy consonants, in a minimal set of words glossed 'twelve', 'outside' and 'burden'. Tsumkwe Juǀ'hoan makes similarly rare distinctions by the placement of breathiness alone, contrasting, for example, /n|ʱoaᵑ/ 'greedy person' with /n|oaʱᵑ/ 'cat'; other such contrasting items include words for 'fall, land (of a bird etc.)', 'walk' and 'herb species'. Breathy stops in Punjabi lost their phonation, merging with voiceless and voiced stops in various positions, and a system of high and low tones developed in syllables that formerly had these sounds. Breathy voice can also be observed in place of a debuccalized coda /s/ in some dialects of colloquial Spanish.

See also
Aspirated consonant
Creaky voice
Guttural
Index of phonetics articles
Slack voice
Voiced glottal fricative
Whispering

References

Phonation
https://en.wikipedia.org/wiki/Blue%20Angels
Blue Angels
The Blue Angels is a flight demonstration squadron of the United States Navy. Formed in 1946, the unit is the second oldest formal aerobatic team in the world, after the French Patrouille de France, formed in 1931. The team, composed of six Navy and one Marine Corps demonstration pilots, flies Boeing F/A-18 Super Hornets. The Blue Angels typically perform aerial displays in at least 60 shows annually at 30 locations throughout the United States, plus two shows at one location in Canada. The "Blues" still employ many of the same practices and techniques used in the inaugural 1946 season. An estimated 11 million spectators view the squadron during air shows from March through November each year. Members of the Blue Angels team also visit more than 50,000 people in schools, hospitals, and community functions at air show cities. Since 1946, the Blue Angels have flown for more than 505 million spectators. As of November 2011, the Blue Angels received $37 million annually from the Department of Defense budget.

Mission

The mission of the United States Navy Flight Demonstration Squadron is to showcase the pride and professionalism of the United States Navy and Marine Corps by inspiring a culture of excellence and service to the country through flight demonstrations and community outreach.

Air shows

The "Blues" perform at both military and non-military airfields, often at major U.S. cities and capitals; locations in Canada are also often included in the air show schedule. During their aerobatic demonstration, the six-member team flies F/A-18E Super Hornets, split into the diamond formation (Blue Angels 1 through 4) and the Lead and Opposing Solos (Blue Angels 5 and 6). Most of the show alternates between maneuvers performed by the Diamond Formation and those performed by the Solos. The Diamond, in tight formation and usually at lower speeds (400 mph), performs maneuvers such as formation loops, rolls, and transitions from one formation to another. The Solos showcase the high-performance capabilities of their individual aircraft through the execution of high-speed passes, slow passes, fast rolls, slow rolls, and very tight turns. The highest speed flown during an air show is 700 mph (just under Mach 1), and the lowest is 126 mph (110 knots), during the Section High Alpha with the new Super Hornet (about 115 knots with the old "Legacy" Hornet). Some of the maneuvers include both solo aircraft performing at once, such as opposing passes (toward each other in what appears to be a collision course) and mirror formations (back-to-back, belly-to-belly, or wingtip-to-wingtip, with one jet flying inverted). The Solos join the Diamond Formation near the end of the show for a number of maneuvers in the Delta Formation.

The parameters of each show must be tailored to local weather conditions at showtime: in clear weather the high show is performed; in overcast conditions a low show is performed; and in limited visibility (weather permitting) the flat show is presented. The high show requires the highest ceiling and the greatest visibility from the show's center point; the minimum ceilings allowed for low and flat shows are 4,500 feet and 1,500 feet respectively.

Aircraft

The team flew the McDonnell Douglas F/A-18 Hornet for 34 years, from 1986 through 2020. The team currently flies the Boeing F/A-18 Super Hornet. In August 2018, Boeing was awarded a contract to convert nine single-seat F/A-18E Super Hornets and two F/A-18F two-seaters for Blue Angels use.
Modifications to each F/A-18E/F include removal of the weapons, which are replaced with a tank containing the smoke oil used in demonstrations, and outfitting of the control stick with a spring system for more precise aircraft control input. Control sticks are tensioned with added spring force to allow the pilot minimal room for non-commanded movement of the aircraft. Each modified F/A-18 remains in the fleet and can be returned to combat duty aboard an aircraft carrier within 72 hours. As converted aircraft were delivered, they were used for testing maneuvers starting in mid-2020. The team's Super Hornets became operational by the beginning of 2021, their 75th anniversary year.

The show's narrator flies Blue Angels No. 7, a two-seat F/A-18F Super Hornet, to show sites. The Blues use these jets as backups, and to give demonstration rides to civilian VIPs. Usually, two back-seat rides are available at each air show; one goes to a member of the press, and the other to a "Key Influencer". The No. 4 slot pilot often flies the No. 7 aircraft in Friday's "practice" so that pilots from the fleet and future team members can experience the show. The Blue Angels use a United States Marine Corps Lockheed C-130J Super Hercules, nicknamed "Fat Albert", for their logistics, carrying spare parts, equipment, and support personnel between shows.

Team members

There have been 272 demonstration pilots in the Blue Angels since their inception. All team members, both officer and enlisted, pilots and staff officers, come from the ranks of regular Navy and United States Marine Corps units. The demonstration pilots and narrator are Navy and USMC naval aviators. Pilots serve two to three years, and position assignments are made according to team needs, pilot experience levels, and career considerations for members. Other officers in the squadron include a naval flight officer who serves as the events coordinator, three USMC C-130 pilots, an executive officer, a maintenance officer, a supply officer, a public affairs officer, an administrative officer, and a flight surgeon. Enlisted members range from E-4 to E-9 and perform all maintenance, administrative, and support functions. They serve three to four years in the squadron. After serving with the squadron, members return to fleet assignments.

The officer selection process requires pilots and support officers (flight surgeon, events coordinator, maintenance officer, supply officer, and public affairs officer) wishing to become Blue Angels to apply formally via their chain of command, with a personal statement, letters of recommendation, and flight records. Navy and Marine Corps F/A-18 demonstration pilots and naval flight officers are required to have a minimum of 1,250 tactical jet hours and to be carrier-qualified. Marine Corps C-130 demonstration pilots are required to have 1,200 flight hours and to be an aircraft commander. Applicants "rush" the team at one or more airshows, at their own expense, and sit in on team briefs, post-show activities, and social events. It is critical that new officers fit the existing culture and team dynamics. The application and evaluation process runs from March through early July, culminating with extensive finalist interviews and team deliberations. Team members vote in secret on the next year's officers. Selections must be unanimous. There have been female and minority staff officers as Blue Angels members, including minority Blue Angels pilot Lt. Andre Webb on the 2018 team. Flight surgeons serve a two-year term.
The flight surgeon provides team medical services, evaluates demonstration maneuvers from the ground, and participates in each post-flight debrief. The first female Blue Angels flight surgeon was Lt. Tamara Schnurr, who was a member of the 2001 team.

The Flight Leader (No. 1) is the Commanding Officer and is always a Navy commander, who may be promoted to captain mid-tour if approved by the selection board. Pilots of numbers 2–7 are Navy lieutenant commanders or lieutenants, or Marine Corps majors or captains. The No. 7 pilot narrates for a year, and then typically flies Opposing and then Lead Solo the following two years, respectively. The No. 3 pilot moves to the No. 4 (slot) position for their second year. Blue Angels No. 4 serves as the demonstration safety officer, due largely to the perspective they are afforded from the slot position within the formation, as well as their status as a second-year demonstration pilot. The first woman named to the Blue Angels as an F/A-18 demonstration pilot was Lt. Amanda Lee, a member of the 2023 team.

Flight Leader/Commanding Officer

Commander Alexander P. Armatas is a native of Skaneateles, New York. He graduated from the United States Naval Academy in 2002 with a Bachelor of Science in aerospace engineering. Armatas joined the Blue Angels in August 2022. He has accumulated more than 4,100 flight hours and 911 carrier-arrested landings. His decorations include the Meritorious Service Medal, four Strike/Flight Air Medals, five Navy and Marine Corps Commendation Medals, one Navy and Marine Corps Achievement Medal, and various personal, unit and service awards.

Training and weekly routine

Annual winter training takes place at NAF El Centro, California, where new and returning pilots hone skills learned in the fleet. During winter training, the pilots fly two practice sessions per day, six days a week, in order to complete the 120 training missions needed to perform the demonstration safely. The separation between the aircraft in formation, and their maneuvering altitude, are gradually reduced over the course of about two months in January and February. The team then returns to its home base in Pensacola, Florida, in March, and continues to practice throughout the show season.

A typical week during the season has practices at NAS Pensacola on Tuesday and Wednesday mornings. The team then flies to its show venue for the upcoming weekend on Thursday, conducting "circle and arrival" orientation maneuvers upon arrival. The team flies a "practice" airshow at the show site on Friday. This show is attended by invited guests but is often open to the general public. The main airshows are conducted on Saturdays and Sundays, with the team returning home to NAS Pensacola on Sunday evenings after the show. Monday is an off day for the Blues' demonstration pilots and road crew. Extensive aircraft maintenance is performed on Sunday evening and Monday by maintenance team members.

Pilots maneuver the flight stick with their right hand and operate the throttle with their left. They do not wear G-suits, because the air bladders inside would repeatedly deflate and inflate, interfering with the stability of the pilot's arm on the control stick. Instead, Blue Angels pilots tense their muscles to prevent blood from pooling in their lower extremities, which could otherwise render them unconscious.

History

Overview

The Blue Angels were originally formed in April 1946 as the Navy Flight Exhibition Team.
They changed their name to the Blue Angels after seeing an advertisement for the New York nightclub The Blue Angel, also known as The Blue Angel Supper Club, in The New Yorker magazine. The team was first introduced as the Blue Angels during an air show in July 1946. The first Blue Angels demonstration aircraft wore navy blue (nearly black) with gold lettering. The current shades of blue and yellow were adopted when the first demonstration aircraft were transitioned from the Grumman F6F-5 Hellcat to the Grumman F8F-1 Bearcat in August 1946; the aircraft wore an all-yellow scheme with blue markings during the 1949 show season. The original Blue Angels insignia or crest was designed in 1949 by Lt. Commander Raleigh "Dusty" Rhodes, their third Flight Leader and the first to lead the team in jets. The aircraft silhouettes on the crest change as the team changes aircraft. The Blue Angels transitioned from propeller-driven aircraft to blue and gold jet aircraft (the Grumman F9F-2B Panther) in August 1949. The Blue Angels demonstration teams began wearing leather jackets and special colored flight suits with the Blue Angels insignia in 1952. In 1953, they began wearing gold-colored flight suits for the first show of the season, or to commemorate milestones for the flight demonstration squadron. The Navy Flight Exhibition Team was reorganized and commissioned as the United States Navy Flight Demonstration Squadron on 10 December 1973.

1946–1949

The Blue Angels were established as a Navy flight exhibition team on 24 April 1946 by order of Chief of Naval Operations Admiral Chester Nimitz, to generate greater public support for naval aviation. Beyond boosting Navy morale, demonstrating naval air power, and maintaining public interest in naval aviation, an underlying mission was to help the Navy generate public and political support for a larger allocation of the shrinking defense budget. Rear Admiral Ralph Davison personally selected Lieutenant Commander Roy Marlin "Butch" Voris, a World War II fighter ace, to assemble and train a flight demonstration team, naming him Officer-in-Charge and Flight Leader. Voris selected three fellow instructors to join him (Lt. Maurice "Wick" Wickendoll, Lt. Mel Cassidy, and Lt. Cmdr. Lloyd Barnard, veterans of the War in the Pacific), and they spent countless hours developing the show. The group perfected its initial maneuvers in secret over the Florida Everglades so that, in Voris' words, "if anything happened, just the alligators would know". The first four pilots, and those who followed them, were and are some of the best and most experienced aviators in the Navy.

The team's first demonstration with Grumman F6F-5 Hellcat aircraft took place before Navy officials on 10 May 1946 and was met with enthusiastic approval. The Blue Angels performed their first public flight demonstration from their first training base and team headquarters at Naval Air Station (NAS) Jacksonville, Florida, on 15 and 16 June 1946, with three F6F-5 Hellcats (a fourth F6F-5 was held in reserve). On 15 June, Voris led the three Hellcats (numbered 1–3), specially modified to reduce weight and painted sea blue with gold leaf trim, through their inaugural 15-minute-long performance. The team employed a North American SNJ Texan, painted and configured to simulate a Japanese Zero, to simulate aerial combat. This aircraft was later painted yellow and dubbed the "Beetle Bomb".
This aircraft is said to have been inspired by one of Spike Jones' Murdering the Classics series of musical satires, set (in part) to the tune of the William Tell Overture as a thoroughbred horse race scene, with "Beetle Bomb" being the "trailing horse" in the lyrics.

The team thrilled spectators with low-flying maneuvers performed in tight formations, and (according to Voris) by "keeping something in front of the crowds at all times. My objective was to beat the Army Air Corps. If we did that, we'd get all the other side issues. I felt that if we weren't the best, it would be my naval career." The Blue Angels' first public demonstration also netted the team its first trophy, which sits on display at the team's current home at NAS Pensacola.

During an air show at Omaha, Nebraska, on 19–21 July 1946, the Navy Flight Exhibition Team was introduced as the Blue Angels. The name had originated through a suggestion by Right Wing Pilot Lt. Maurice "Wick" Wickendoll, after he had read about the Blue Angel nightclub in The New Yorker magazine. After ten appearances with the Hellcats, the Hellcats were replaced by the lighter, faster, and more powerful F8F-1 Bearcats on 25 August. By the end of the year the team consisted of four Bearcats, numbered 1–4 on the tail sections.

In May 1947, flight leader Lt. Cmdr. Bob Clarke replaced Butch Voris as the leader of the team. The team, with an additional fifth pilot, relocated to Naval Air Station (NAS) Corpus Christi, Texas. On 7 June, at Birmingham, Alabama, four F8F-1 Bearcats (numbered 1–4) flew in diamond formation for the first time, in what is now considered the Blue Angels' trademark. A fifth Bearcat was also added that year, and an SNJ was used as a Japanese Zero for dogfights with the Bearcats in air shows.

In January 1948, Lt. Cmdr. Raleigh "Dusty" Rhodes took command of the Blue Angels team, which was flying four Bearcats and a yellow-painted SNJ with USN markings dubbed "Beetle Bomb"; the SNJ represented a Japanese Zero in the air show dogfights with the Bearcats. The name "Blue Angels" was also painted on the Bearcats.

In 1949, the team acquired a Douglas R4D Skytrain for logistics to and from show sites. The team's SNJ was also replaced by another Bearcat, painted yellow for the air combat routine and inheriting the "Beetle Bomb" nickname. In May, the team went to the west coast on temporary duty so the pilots and the rest of the team could become familiar with jet aircraft. On 13 July, the team acquired, and began flying between demonstration shows, the straight-wing Grumman F9F-2B Panther. On 20 August, the team debuted the Panther jets under Team Leader Lt. Commander Raleigh "Dusty" Rhodes during an air show at Beaumont, Texas, and added a sixth pilot. The F8F-1 "Beetle Bomb" was relegated to solo aerobatics before the main show, until it crashed on takeoff at a training show in Pensacola on 24 April 1950, killing "Blues" pilot Lt. Robert Longworth. Team headquarters shifted from NAS Corpus Christi, Texas, to NAAS Whiting Field, Florida, on 10 September 1949, a move announced on 14 July 1949.

1950–1959

The Blue Angels continued to perform nationwide in 1950. On 25 June, the Korean War started, and all Blue Angels pilots volunteered for combat duty. Due to a shortage of pilots and a lack of available planes, the squadron and its members were ordered to "combat-ready status" after an exhibition at Naval Air Station Dallas, Texas, on 30 July. The Blue Angels were disbanded, and their pilots were reassigned to a carrier.
Once aboard the aircraft carrier on 9 November, the group formed the core of Fighter Squadron 191 (VF-191), "Satan's Kittens", under the command of World War II fighter ace and 1950 Blue Angels Commander/Flight Leader Lt. Commander John Magda; he was killed in action on 8 March 1951.

On 25 October 1951, the Blues were ordered to re-activate as a flight demonstration team, and reported to NAS Corpus Christi, Texas. Lt. Cmdr. Voris was again tasked with assembling the team (he was the first of only two commanding officers to lead them twice). In May 1952, the Blue Angels began performing again, with F9F-5 Panthers, at an airshow in Memphis, Tennessee. In 1953, the team traded its Skytrain for a Curtiss R5C Commando. In August, "Blues" leader LCDR Ray Hawkins became the first naval aviator to survive an ejection at supersonic speeds, when a new F9F-6 he was piloting became uncontrollable on a cross-country flight. After that summer, the team began demonstrating with the F9F-6 Cougar.

In 1954, the first Marine Corps pilot, Captain Chuck Hiett, joined the Navy flight demonstration team, and the Blue Angels received special colored flight suits. In May, the Blue Angels performed at Bolling Air Force Base in Washington, D.C. with the Air Force Thunderbirds (activated 25 May 1953). The Blue Angels began relocating to their current home at Naval Air Station (NAS) Pensacola, Florida, that winter, and it was here that they progressed to the swept-wing Grumman F9F-8 Cougar. In December, the team left its home base for its first winter training facility at Naval Air Facility El Centro, California.

In September 1956, the team added a sixth aircraft to the flight demonstration, in the Opposing Solo position, and gave its first performance outside the United States, at the International Air Exposition in Toronto, Ontario, Canada. It also upgraded its logistics aircraft to the Douglas R5D Skymaster.

In 1957, the Blue Angels transitioned from the F9F-8 Cougar to the supersonic Grumman F11F-1 Tiger, first demonstrating the short-nosed version on 23 March at Barin Field, Pensacola, and later flying the long-nosed version. The demonstration team (with the added Angel 6) wore gold flight suits during the first air show of that season. In 1958, the first six-plane delta maneuvers were added to the show.

1960–1969

In July 1964, the Blue Angels participated in the Aeronaves de Mexico Anniversary Air Show over Mexico City, Mexico, before an estimated crowd of 1.5 million people. In 1965, the Blue Angels conducted a Caribbean island tour, flying at five sites. Later that year, they embarked on a European tour to a dozen sites, including the Paris Air Show, where they were the only team to receive a standing ovation. In 1967, the Blues toured Europe again, performing at six sites. In 1968, the C-54 Skymaster transport aircraft was replaced with a Lockheed VC-121J Constellation. The Blues transitioned to the two-seat McDonnell Douglas F-4J Phantom II in 1969, nearly always keeping the back seat empty for flight demonstrations. The Phantom was the only plane flown by both the "Blues" and the United States Air Force Thunderbirds (the "Birds"). That year they also upgraded to the Lockheed C-121 Super Constellation for logistics.

1970–1979

In 1970, the Blues received their first U.S. Marine Corps Lockheed KC-130F Hercules, manned by an all-Marine crew. That year, they went on their first South American tour.
In 1971, the team, which wore the gold flight suits for the first show, conducted its first Far East Tour, performing at a dozen locations in Korea, Japan, Taiwan, Guam, and the Philippines. In 1972, the Blue Angels were awarded the Navy's Meritorious Unit Commendation for the two-year period from 1 March 1970 to 31 December 1971. Another European tour followed in 1973, including air shows in Iran, England, France, Spain, Turkey, Greece, and Italy.

On 10 December 1973, the Navy Flight Exhibition Team was reorganized and commissioned as the United States Navy Flight Demonstration Squadron, and the Blues' mission was focused more on Navy recruiting. In 1974, the Blue Angels transitioned to the new Douglas A-4F Skyhawk II. Navy Commander Anthony Less became the squadron's first "commanding officer" and "flight leader". A permanent flight surgeon position and an administrative officer were added to the team, and the squadron's mission was redefined by Less to further improve the recruiting effort.

Beginning in 1975, "Fat Albert" was used for Jet Assisted Take Off (JATO) and short aerial demonstrations just prior to the main event at selected venues; the JATO demonstration ended in 2009 due to dwindling supplies of rockets. "Fat Albert Airlines" flies with an all-Marine crew of three officers and five enlisted personnel.

1980–1989

In 1986, LCDR Donnie Cochran joined the Blue Angels as the first African-American naval aviator to be selected. He served two more years with the squadron, flying the left wingman position in the No. 3 A-4F, and returned to command the Blue Angels in 1995 and 1996. On 8 November 1986, the Blue Angels completed their 40th anniversary year during ceremonies unveiling what would be their aircraft through their 75th anniversary year, the McDonnell Douglas F/A-18 Hornet. The power and aerodynamics of the Hornet allow it to perform a slow, high-angle-of-attack "tail sitting" maneuver and to fly a "dirty" (landing gear down) formation loop.

1990–1999

In 1992, the Blue Angels deployed for a month-long European tour, their first in 19 years, conducting shows in Sweden, Finland, Russia (becoming the first foreign flight demonstration team to perform there), Romania, Bulgaria, Italy, the United Kingdom, and Spain. In 1998, CDR Patrick Driscoll made the first "Blue Jet" landing on a "haze gray and underway" aircraft carrier, USS Harry S. Truman (CVN-75). On 28 October 1999, the Blue Angels lost two pilots: LCDR Kieron O'Connor and LT Kevin Colling were returning from a practice flight before an air show when their F/A-18B crashed in a wooded area of south Georgia.

2000–2009

In 2000, the Navy conducted investigations connected to the loss of the two Blue Angels pilots in October 1999. Blue Angels F/A-18 Hornet pilots were not required to wear, and do not wear, g-suits.

In 2006, the Blue Angels marked their 60th year of performing. On 30 October 2008, a spokesman for the team announced that the team would complete its last three performances of the year with five jets instead of six. The change was made because one pilot and another officer in the organization had been removed from duty for engaging in an "inappropriate relationship". The Navy said one of the individuals was a man and the other a woman, one a Marine and the other from the Navy, and that Rear Admiral Mark Guadagnini, chief of naval air training, was reviewing the situation. At the next performance following the announcement, at Lackland Air Force Base, the No. 4 (slot) pilot was absent from the formation.
A spokesman for the team would not confirm the identity of the pilot removed from the team. On 6 November 2008, both officers were found guilty at an admiral's mast on unspecified charges, but the resulting punishment was not disclosed. The names of the two members involved were later released on the Pensacola News Journal website/forum as the No. 4 pilot, USMC Maj. Clint Harris, and the administrative officer, Navy Lt. Gretchen Doane.

On 21 April 2007, pilot Kevin "Kojak" Davis was killed and eight people on the ground were injured when Davis lost control of the No. 6 jet due to G-force-induced loss of consciousness (G-LOC) and crashed during an air show at Marine Corps Air Station Beaufort in Beaufort, South Carolina.

The Fat Albert performed its final JATO demonstration at the 2009 Pensacola Homecoming show, expending the team's eight remaining JATO bottles. This was not only the squadron's last JATO performance, but also the final JATO use by the U.S. Marine Corps. In 2009, the Blue Angels were inducted into the International Air & Space Hall of Fame at the San Diego Air & Space Museum.

2010–2019

On 22 May 2011, the Blue Angels were performing at the Lynchburg Regional Airshow in Lynchburg, Virginia, when the Diamond formation flew the Barrel Roll Break maneuver at an altitude lower than the required minimum. The maneuver was aborted, the remainder of the demonstration was canceled, and all aircraft landed safely. The next day, the Blue Angels announced that they were initiating a safety stand-down, canceling their upcoming Naval Academy Airshow and returning to their home base in Pensacola, Florida, for additional training and airshow practice. On 26 May, the Blue Angels announced they would not be flying their traditional fly-over of the Naval Academy graduation ceremony, and that they were canceling their 28–29 May 2011 performances at the Millville Wings and Wheels Airshow in Millville, New Jersey.

On 27 May 2011, the Blue Angels announced that Commander Dave Koss, the squadron's commanding officer, would be stepping down. He was replaced by Captain Greg McWherter, the team's previous commanding officer. The squadron canceled performances at the Rockford, Illinois Airfest on 4–5 June and the Evansville, Indiana Freedom Festival Air Show on 11–12 June to allow additional practice and demonstration training under McWherter's leadership.

On 29 July 2011, a new Blue Angels Mustang GT was auctioned off for $400,000 at the Experimental Aircraft Association AirVenture Oshkosh air show, the annual summer gathering of aviation enthusiasts held from 25 to 31 July in Oshkosh, Wisconsin, which had an attendance of 541,000 persons and 2,522 show planes.

Over Labor Day weekend, 2–4 September 2011, the Blue Angels flew for the first time with a fifty-fifty blend of conventional JP-5 jet fuel and a camelina-based biofuel, at Naval Air Station Patuxent River, Maryland. McWherter had flown an F/A-18 test flight on 17 August and stated there were no noticeable differences in performance from inside the cockpit.

On 1 March 2013, the U.S. Navy announced that it was cancelling its remaining 2013 performances after 1 April 2013 due to sequestration budget constraints. In October 2013, Secretary of Defense Chuck Hagel, stating that "community and public outreach is a crucial Departmental activity", announced that the Blue Angels (along with the U.S. Air Force's Thunderbirds) would resume appearing at air shows starting in 2014, although the number of flyovers would continue to be severely reduced.
On 15 March 2014, the demonstration pilots, numbered 1–7, wore gold flight suits to celebrate the team's "return to the skies" during their first air show of the season; there had been only three air shows in 2013. In July 2014, Marine Corps C-130 pilot Capt. Katie Higgins, 27, became the first female pilot to join the Blue Angels, flying the support aircraft Fat Albert for the 2015 and 2016 show seasons. In July 2015, Cmdr. Bob Flynn became the Blue Angels' first executive officer.

On 2 June 2016, Capt. Jeff Kuss, the Opposing Solo, died just after takeoff while performing the Split-S maneuver in his Hornet during a practice run for The Great Tennessee Air Show in Smyrna, Tennessee. The Navy's investigation found that Capt. Kuss had performed the maneuver too low while failing to retard the throttle out of afterburner, causing him to descend too fast and begin his recovery too low above the ground. Capt. Kuss ejected, but his parachute was immediately engulfed in flames, and he fell to his death; his body was recovered just yards away from the crash site. The cause of death was blunt force trauma to the head. The investigation also cited weather and pilot fatigue as additional causes of the crash. In a strange twist, Capt. Kuss' fatal crash happened hours after the United States Air Force Thunderbirds suffered a crash of their own, following the United States Air Force Academy graduation ceremony earlier that day. Capt. Kuss was replaced by Cmdr. Frank Weisser to finish out the 2016 and 2017 seasons.

In July 2016, Boeing was awarded a $12 million contract to begin an engineering proposal for converting the Boeing F/A-18E/F Super Hornet for Blue Angels use, with the proposal to be completed by September 2017. The Fat Albert (BUNO 164763) was retired from service in May 2019 with 30,000 flight hours; the Blue Angels replaced it with an ex-RAF C-130J (BUNO 170000).

2020–present

In response to the COVID-19 pandemic, the Blue Angels flew over multiple U.S. cities as a tribute to healthcare and front-line workers. The Blues officially transitioned to the Boeing F/A-18E/F Super Hornet on 4 November 2020. In July 2022, Lt. Amanda Lee was announced as the first woman to serve as a demonstration pilot in the Blue Angels.

Aircraft timeline

The "Blues" have flown ten different demonstration aircraft and six support aircraft models:

Demonstration aircraft
Grumman F6F-5 Hellcat: June – August 1946
Grumman F8F-1 Bearcat: August 1946 – 1949
Grumman F9F-2 Panther: 1949 – June 1950 (first jet); F9F-5 Panther: 1951 – winter 1954/55
Grumman F9F-8 Cougar: winter 1954/55 – mid-season 1957 (swept wing)
Grumman F11F-1 (F-11) Tiger: mid-season 1957 – 1968 (first supersonic jet)
McDonnell Douglas F-4J Phantom II: 1969 – December 1974
Douglas A-4F Skyhawk: December 1974 – November 1986
McDonnell Douglas F/A-18 Hornet (F/A-18B as #7): November 1986 – 2010
Boeing F/A-18A/C (B/D as #7) Hornet: 2010 – 2020
Boeing F/A-18E Super Hornet (F/A-18F as #7): 2020–

Support aircraft
JRB Expeditor (Beech 18): 1949–?
Douglas R4D-6 Skytrain: 1949–1955
Curtiss R5C Commando: 1953
Douglas R5D Skymaster: 1956–1968
Lockheed C-121 Super Constellation: 1969–1973
Lockheed C-130 Hercules "Fat Albert": 1970–2019 (JATO usage was stopped in 2009)
Lockheed Martin C-130J Super Hercules "Fat Albert": 2020–present

Miscellaneous aircraft
North American SNJ Texan "Beetle Bomb" (used to simulate a Japanese A6M Zero aircraft in demonstrations during the late 1940s)
Lockheed T-33 Shooting Star (used during the 1950s as a VIP transport aircraft for the team)
Vought F7U Cutlass (two of the unusual F7Us were received in late 1952 and flown as a side demonstration during the 1953 season, but they were not a part of the regular formations, which at the time used the F9F Panther; pilots and ground crew found the type unsatisfactory, and a plan to use it as the team's primary aircraft was canceled)

Air show routine

The 2022 Blue Angels High Show Routine:
Fat Albert (C-130) – high-performance takeoff (Low Transition)
Fat Albert – Parade Pass (the plane banks around the front of the crowd)
Fat Albert – Flat Pass
Fat Albert – Head-on Pass
Fat Albert – Short-Field Assault Landing
F/A-18 Engine Start-Up and Taxi Out
Diamond Takeoff – either a low transition with turn, a loop on takeoff, a Half Cuban Eight takeoff, or a Half Squirrel Cage
Solos Take Off – No. 5 Dirty Roll on Takeoff; No. 6 Low Transition/Immelmann
Diamond 360 – aircraft 1–4 in their signature 18-inch wingtip-to-canopy diamond formation
Opposing Knife-Edge Pass – 5 and 6
Diamond Roll – the entire diamond formation rolls as a single entity
Opposing Inverted-to-Inverted Rolls – 5 and 6
Diamond Aileron Roll – all four diamond jets perform simultaneous aileron rolls
Fortus – Solos flying in carrier landing configuration with No. 5 inverted, establishing a "mirror image" effect
Diamond Dirty Loop – the diamond flies a loop with all four jets in carrier landing configuration
Minimum Radius Turn – highest-G maneuver (No. 5 flies a "horizontal loop", pulling seven Gs to maintain a tight radius)
Double Farvel – diamond formation flat pass with No. 1 and No. 4 inverted
Opposing Minimum Radius Turn
Echelon Parade
Opposing Horizontal Rolls
Changeover Roll – a left echelon barrel roll in which the echelon formation changes over to diamond formation after 90° of bank
Sneak Pass – the fastest speed of the show, just under Mach 1 (about 700 mph at sea level)
Line-Abreast Loop – the most difficult formation maneuver to do well (No. 5 joins the diamond as the five jets fly a loop in a straight line)
Opposing Four-Point Hesitation Roll
Vertical Break
Opposing Vertical Pitch
Barrel Roll Break
Tuck Over Roll
Low Break Cross
Section High-Alpha Pass – (tail sitting) the show's slowest maneuver
Diamond Burner 270
Delta Roll
Fleur de Lis
Solos Pass to Rejoin, Diamond flies a loop
Loop Break Cross – Delta Break (after the break the aircraft separate in six different directions, perform half Cuban Eights, then cross in the center of the performance area)
Delta Breakout
Delta Pitch Up Carrier Break to Land

Commanding officers

Notable commanding officers include:
Roy Marlin Voris – 1946, 1952
John J. Magda – 1950 (killed in action March 1951, Korean War)
Arthur Ray Hawkins – 1952 to 1953
Richard Cormier – 1954 to 1956
Edward B. Holley – 1957 to 1958
Zebulon V. Knott – 1959 to 1961
Kenneth R. Wallace – 1962 to 1963
Robert F. Aumack – 1964 to 1966
William V. Wheat – 1967 to 1969
Harley H. Hall – 1970 to 1971
Don Bently – 1972
Marvin F. "Skip" Umstead – 1973
Anthony A. Less – Oct 1973 to Jan 1976
Keith S. Jones – 1976 to 1978
William E. Newman – 1978 to 1979
Hugh D. Wisely – Dec 1979 to 1982
David Carroll – 1982 to 1983
Larry Pearson – 1983 to 1985
Gilman E. Rud – Nov 1985 to Nov 1988
Gregory Wooldridge – 1990 to 1992, 1996
Robert E. Stumpf – 1993 to 1994
Donnie Cochran – Nov 1994 to May 1996
George B. Dom – Nov 1996 to Oct 1998
Patrick Driscoll – Oct 1998 to 2000
Robert Field – 2000 to Sept 2002
Russell J. Bartlett – Sept 2002 to Sept 2004
Stephen R. Foley – Sept 2004 to Nov 2006
Kevin Mannix – Nov 2006 to 2008
Gregory McWherter – 2008 to 2010
David Koss – fall 2010 to spring 2011
Gregory McWherter – 2011 to 2012
Thomas Frosch – 2012 to 2015
Ryan Bernacchi – 2015 to 2017
Eric D. Doyle – 2017 to 2019
Brian C. Kesselring – 2019 to 2022
Alexander P. Armatas – 2022 to present

Notable members

Below are some of the more notable members of the Blue Angels squadron:
Capt. Roy "Butch" Voris – World War II fighter ace and first Flight Leader
Charles "Chuck" Brady Jr. – astronaut and physician
Donnie Cochran – first African-American Blue Angels aviator and commander
Edward L. Feightner – World War II fighter ace and Lead Solo
Arthur Ray Hawkins – World War II flying ace
Bob Hoover – World War II fighter pilot and flight instructor, honorary Blue Angels member
Anthony A. Less – first commanding officer of the Blue Angels squadron, with numerous other commands including Naval Air Forces Atlantic Fleet
Robert L. Rasmussen – aviation artist
Raleigh Rhodes – World War II and Korean War fighter pilot and third Flight Leader of the Blue Angels
Patrick M. Walsh – Left Wingman and Slot Pilot who later commanded the U.S. Pacific Fleet and became Vice Chief of Naval Operations and a White House Fellow

Team accidents and deaths

A total of 26 Blue Angels pilots and one crew member have died in Blue Angels history.

Deaths, 1946–present (20 pilots, one crew member)

Lt. Ross "Robby" Robinson – 29 September 1946: killed during a performance when a wingtip broke off his F8F-1 Bearcat, sending him into an unrecoverable spin.
Lt. Bud Wood – 7 July 1952: killed when his F9F-5 Panther collided with another Panther jet during a demonstration in Corpus Christi, Texas. The team resumed performances two weeks later.
Cmdr. Robert Nicholls Glasgow – 14 October 1958: died during an orientation flight just days after reporting for duty as the new Blue Angels leader.
Lt. Anton M. Campanella (No. 3, Left Wing) – 14 June 1960: killed when the Grumman F-11A Tiger he was flying crashed into the water near Fort Morgan, Alabama, during a test flight.
Lt. George L. Neale – 15 March 1964: killed during an attempted emergency landing at Apalach Airport near Apalachicola, Florida. Lt. Neale's F-11A Tiger had experienced mechanical difficulties during a flight from West Palm Beach to Naval Air Station Pensacola, causing him to attempt the emergency landing. Failing to reach the airport, he ejected from the aircraft on final approach, but his parachute did not have sufficient time to fully deploy.
Lt. Cmdr. Dick Oliver – 2 September 1966: crashed his F-11A Tiger and was killed at the Canadian International Air Show in Toronto.
Lt. Frank Gallagher – 1 February 1967: killed when his F-11A Tiger stalled during a practice Half Cuban Eight maneuver and spun into the ground.
Capt. Ronald Thompson – 18 February 1967: killed when his F-11A Tiger struck the ground during a practice formation loop.
Lt. Bill Worley (Opposing Solo) – 14 January 1968: killed when his Tiger crashed during a practice double Immelmann.
Lt. Larry Watters – 14 February 1972: killed when his F-4J Phantom II struck the ground, upright, while practicing inverted flight during winter training at NAF El Centro.
Lt. Cmdr. Skip Umstead (Team Leader), Capt. Mike Murphy, and ADJ1 Ron Thomas (Crew Chief) – 26 July 1973: all three were killed in a mid-air collision between two Phantoms over Lakehurst, New Jersey, during an arrival practice. The rest of the season was cancelled after this incident.
Lt. Nile Kraft (Opposing Solo) – 22 February 1977: killed when his Skyhawk struck the ground during practice.
Lt. Michael Curtin – 8 November 1978: one of the solo Skyhawks struck the ground after a low roll during arrival maneuvers at Naval Air Station Miramar, and Curtin was killed.
Lt. Cmdr. Stu Powrie (Lead Solo) – 22 February 1982: killed when his Skyhawk struck the ground during winter training at Naval Air Facility El Centro, California, just after a dirty loop.
Lt. Cmdr. Mike Gershon (Opposing Solo, No. 6) – 13 July 1985: his Skyhawk collided with that of Lt. Andy Caputi (Lead Solo, No. 5) during a show at Niagara Falls; Gershon was killed, and Caputi ejected and parachuted to safety.
Lt. Cmdr. Kieron O'Connor and Lt. Kevin Colling – 28 October 1999: flying in the back seat and front seat of a Hornet, both were killed after striking the ground during circle and arrival maneuvers in Valdosta, Georgia.
Lt. Cmdr. Kevin J. Davis – 21 April 2007: crashed his Hornet near the end of the Marine Corps Air Station Beaufort airshow in Beaufort, South Carolina, and was killed.
Capt. Jeff Kuss (Opposing Solo, No. 6) – 2 June 2016: died just after takeoff while performing the Split-S maneuver in his F/A-18 Hornet during a practice run for The Great Tennessee Air Show in Smyrna, Tennessee.

Other incidents, 1958–2010

Lt. John R. Dewenter – 2 August 1958: landed wheels-up at Buffalo Niagara International Airport after experiencing engine troubles during a show in Clarence, New York. The Grumman F-11 Tiger landed on Runway 23 but exited airport property, coming to rest in the intersection of Genesee Street and Dick Road, nearly hitting a filling station. Lt. Dewenter was uninjured, but the plane was a total loss.
Lt. Ernie Christensen – 30 August 1970: belly-landed his F-4J Phantom at The Eastern Iowa Airport in Cedar Rapids, Iowa, after he inadvertently left the landing gear in the up position. He ejected safely, while the aircraft slid off the runway.
Cmdr. Harley Hall – 4 June 1971: safely ejected after his F-4J Phantom jet caught fire during practice over NAS Quonset Point in North Kingstown, Rhode Island, and crashed in Narragansett Bay.
Capt. John Fogg, Lt. Marlin Wiita, and Lt. Cmdr. Don Bentley – 8 March 1973: all three survived a multi-aircraft mid-air collision during practice over Superstition Mountain, near El Centro, California.
Lt. Jim Ross (Lead Solo) – April 1980: unhurt when his Skyhawk suffered a fuel line fire during a show at Roosevelt Roads Naval Station, Puerto Rico. Lt. Ross stayed with the plane and landed, leaving the end of the runway and rolling into the woods after a total hydraulic failure upon landing.
Lt. Dave Anderson (Lead Solo) – 12 February 1987: ejected from his Hornet after a dual engine flame-out during practice near El Centro, California.
Marine Corps Maj. Charles Moseley and Cmdr. Pat Moneymaker – 23 January 1990: their Blue Angels Hornets suffered a mid-air collision during a practice at El Centro. Moseley ejected safely, and Moneymaker was able to land his airplane, which then required a complete right-wing replacement.
Lt. Ted Steelman – 1 December 2004: ejected from his F/A-18 approximately one mile off Perdido Key after his aircraft struck the water, suffering catastrophic engine and structural damage. He suffered minor injuries.

Combat casualties

Four former Blue Angels pilots have been killed in action or died after being captured, all having been downed by anti-aircraft fire.

Korean War

Commander John Magda – 8 March 1951 (Blue Angels 1949, 1950; Commander/Flight Leader 1950): Magda was killed after his F9F-2B Panther was hit by anti-aircraft fire while leading a low-level strike mission against North Korean and Chinese communist positions at Tanchon, a mission which earned him the Navy Cross. He was also a fighter ace in World War II.

Vietnam War

Commander Herbert P. Hunter – 19 July 1967 (Blue Angels 1957–1959; Lead Solo pilot): Hunter was hit by anti-aircraft fire over North Vietnam and crashed in his F-8E Crusader during the Vietnam War. He was posthumously awarded the Distinguished Flying Cross for actions on 16 July 1967. He was also a Korean War veteran.
Captain Clarence O. Tolbert – 6 November 1972 (Blue Angels 1968): Tolbert was flying an A-7B Corsair II during a mission in North Vietnam when he was hit by anti-aircraft fire and crashed, dying during his second tour in the Vietnam War. He was awarded the Silver Star and Distinguished Flying Cross for his service.
Captain Harley H. Hall – 27 January 1973 (Blue Angels 1970–1971; Commander/Team Leader 1971): Hall and his co-pilot were shot down by anti-aircraft fire over South Vietnam while flying their F-4J Phantom II on the last day of the Vietnam War, and both were officially listed as prisoners of war. In 1980, Hall was presumed to have died in captivity.

In the media

The Blue Angels was a dramatic television series, starring Dennis Cross and Don Gordon, inspired by the team's exploits and filmed with the cooperation of the Navy. It aired in syndication from 26 September 1960 to 3 July 1961.

Threshold: The Blue Angels Experience is a 1975 documentary film, written by Dune author Frank Herbert, featuring the team in practice and performance during their F-4J Phantom era; many of the aerial photography techniques pioneered in Threshold were later used in the film Top Gun.

To Fly!, a short IMAX film featured at the Smithsonian Air and Space Museum since its 1976 opening, includes footage from a camera on a Blue Angels A-4 Skyhawk tail as the pilot performs in a show.

In 2005, the Discovery Channel aired a documentary miniseries, Blue Angels: A Year in the Life, focusing on the intricate day-to-day details of that year's training and performance schedule.

In 2009, MythBusters enlisted the aid of the Blue Angels to help test the myth that a sonic boom could shatter glass.

Blue Angels and the Thunderbirds is a four-disc SkyTrax DVD set released in 2012 by TOPICS Entertainment, Inc. It features highlights from airshows performed in the United States, shot from inside and outside the cockpit, including interviews with squadron aviators; aerial combat footage taken during Desert Storm; histories of the two flying squadrons from 1947 through 2008, including on-screen notes on changes in Congressional budgeting and research program funding; a photo gallery slideshow; and two "forward-looking" sequences, Into the 21st Century, detailing developments of the F/A-18 Hornet's C, E and F models (10 min.) and footage of the F-22 with commentary (20 min.).
https://en.wikipedia.org/wiki/Rockwell%20B-1%20Lancer
Rockwell B-1 Lancer
The Rockwell B-1 Lancer is a supersonic, variable-sweep-wing heavy bomber used by the United States Air Force. It is commonly called the "Bone" (from "B-One"). It is one of three strategic bombers serving in the U.S. Air Force fleet, along with the B-2 Spirit and the B-52 Stratofortress. The B-1 was first envisioned in the 1960s as a platform that would combine the Mach 2 speed of the B-58 Hustler with the range and payload of the B-52, and was ultimately meant to replace both bombers. After a long series of studies, Rockwell International (now part of Boeing) won the design contest for what emerged as the B-1A. This version had a top speed of Mach 2.2 at high altitude and the ability to fly for long distances at Mach 0.85 at very low altitudes. The combination of the high cost of the aircraft, the introduction of the AGM-86 cruise missile, which flew at the same basic speed and distance, and early work on the B-2 stealth bomber reduced the need for the B-1. The program was canceled in 1977, after the B-1A prototypes had been built. The program was restarted in 1981, largely as an interim measure due to delays in the B-2 stealth bomber program. The B-1A design was altered, reducing top speed to Mach 1.25 at high altitude, increasing low-altitude speed to Mach 0.96, extensively improving electronic components, and upgrading the airframe to carry more fuel and weapons. Deliveries of the new variant, dubbed the B-1B, began in 1986, and the plane formally entered service with Strategic Air Command (SAC) as a nuclear bomber that same year. By 1988, all 100 aircraft had been delivered. With the disestablishment of SAC and the type's reassignment to Air Combat Command in 1992, the B-1B was converted for a conventional bombing role. It first served in combat during Operation Desert Fox in 1998 and again during the NATO action in Kosovo the following year. The B-1B has supported U.S. and NATO military forces in Afghanistan and Iraq. As of 2021, the Air Force has 45 B-1Bs. The Northrop Grumman B-21 Raider is to begin replacing the B-1B after 2025; all B-1s are planned to be retired by 2036.

Development

Background

In 1955, the USAF issued requirements for a new bomber combining the payload and range of the Boeing B-52 Stratofortress with the Mach 2 maximum speed of the Convair B-58 Hustler. In December 1957, the USAF selected North American Aviation's B-70 Valkyrie for this role, a six-engine bomber that could cruise at Mach 3 at high altitude. Soviet interceptor aircraft, the only effective anti-bomber weapon of the 1950s, were already unable to intercept the high-flying Lockheed U-2; the Valkyrie would fly at similar altitudes but much higher speeds, and was expected to simply fly past the fighters. By the late 1950s, however, anti-aircraft surface-to-air missiles (SAMs) could threaten high-altitude aircraft, as demonstrated by the 1960 downing of Gary Powers' U-2. The USAF Strategic Air Command (SAC) was aware of these developments and had begun moving its bombers to low-level penetration even before the U-2 incident. This tactic greatly reduces radar detection distances through the use of terrain masking: using features of the terrain like hills and valleys, the line of sight from the radar to the bomber can be broken, rendering the radar (and human observers) incapable of seeing it. Additionally, radars of the era were subject to "clutter" from stray returns from the ground and other objects, which meant a minimum angle existed above the horizon at which they could detect a target.
Bombers flying at low altitudes could remain under these angles simply by keeping their distance from the radar sites. This combination of effects made SAMs of the era ineffective against low-flying aircraft. The same effects also meant that low-flying aircraft were difficult to detect from higher-flying interceptors, since their radar systems could not readily pick out aircraft against the clutter from ground reflections (a lack of look-down/shoot-down capability). The switch from high-altitude to low-altitude flight profiles severely affected the B-70, whose design was tuned for high-altitude performance. Higher aerodynamic drag at low level limited the B-70 to subsonic speed while dramatically decreasing its range; the result would have been an aircraft with somewhat higher subsonic speed than the B-52, but less range. Because of this, and a growing shift to the intercontinental ballistic missile (ICBM) force, the B-70 bomber program was cancelled in 1961 by President John F. Kennedy, and the two XB-70 prototypes were used in a supersonic research program. Although never intended for the low-level role, the B-52's flexibility allowed it to outlast its intended successor as the nature of the air war environment changed. The B-52's huge fuel load allowed it to operate at lower altitudes for longer times, and the large airframe allowed the addition of improved radar jamming and deception suites to deal with radars. During the Vietnam War, the assumption that all future wars would be nuclear was turned on its head, and the "big belly" modifications greatly increased the B-52's total bomb load, turning it into a powerful tactical aircraft that could be used against ground troops as well as strategic targets from high altitudes. The much smaller bomb bay of the B-70 would have made it much less useful in this role.

Design studies and delays

Although effective, the B-52 was not ideal for the low-level role. This led to a number of aircraft designs known as penetrators, which were tuned specifically for long-range, low-altitude flight. The first of these designs to see operation was the supersonic F-111 fighter-bomber, which used variable-sweep wings for tactical missions. A number of studies on a strategic-range counterpart followed. The first post-B-70 strategic penetrator study was the Subsonic Low-Altitude Bomber (SLAB), completed in 1961. It produced a design that looked more like an airliner than a bomber, with a large swept wing, T-tail, and large high-bypass engines. This was followed by the similar Extended Range Strike Aircraft (ERSA), which added a variable-sweep wing, then in vogue in the aviation industry; ERSA envisioned a relatively small aircraft whose specified range included a substantial portion flown at low altitude. In August 1963, the similar Low-Altitude Manned Penetrator design was completed, calling for an aircraft with a somewhat shorter range. These all culminated in the October 1963 Advanced Manned Precision Strike System (AMPSS), which led to industry studies at Boeing, General Dynamics, and North American. In mid-1964, the USAF revised its requirements and retitled the project the Advanced Manned Strategic Aircraft (AMSA), which differed from AMPSS primarily in that it also demanded a high-speed, high-altitude capability similar to that of the existing Mach 2-class F-111. Given the lengthy series of design studies, Rockwell engineers joked that the new name actually stood for "America's Most Studied Aircraft".
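The radar-horizon geometry behind the terrain-masking tactic described above can be approximated with a standard rule of thumb. The sketch below is illustrative only: it uses the conventional 4/3-earth-radius model for atmospheric refraction, and the mast height and flight altitudes are assumptions chosen for the example, not figures from SAC doctrine.

```python
import math

def radar_horizon_km(radar_height_m: float, target_height_m: float) -> float:
    """Line-of-sight horizon between a radar and a target, 4/3-earth model.

    Beyond this distance the target is masked by the Earth's curvature
    (and, in practice, by terrain), which is why dropping to low altitude
    shrinks detection range so dramatically.
    """
    r_eff = (4.0 / 3.0) * 6_371_000  # effective Earth radius with refraction, m
    return (math.sqrt(2 * r_eff * radar_height_m)
            + math.sqrt(2 * r_eff * target_height_m)) / 1000.0

# A ground radar on a 30 m mast against the same bomber at two profiles:
print(f"bomber at 10,000 m: ~{radar_horizon_km(30, 10_000):.0f} km")  # ~435 km
print(f"bomber at    100 m: ~{radar_horizon_km(30, 100):.0f} km")     # ~64 km
```

Even before clutter is considered, the low-level profile cuts the purely geometric detection range by a factor of roughly seven in this toy example.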
The arguments that led to the cancellation of the B-70 program had led some to question the need for a new strategic bomber of any sort. The USAF was adamant about retaining bombers as part of the nuclear triad concept, which combined bombers, ICBMs, and submarine-launched ballistic missiles (SLBMs) in a package that complicated any potential defense. The Air Force argued that the bomber was needed to attack hardened military targets and to provide a safe counterforce option, because bombers could be quickly launched into safe loitering areas where they could not be attacked. However, the introduction of the SLBM rendered the mobility and survivability argument moot, and a newer generation of ICBMs, such as the Minuteman III, had the accuracy and speed needed to attack point targets. During this time, ICBMs were seen as the less costly option based on their lower unit cost, although their development costs were much higher. Secretary of Defense Robert McNamara preferred ICBMs over bombers for the Air Force portion of the deterrent force and felt a new, expensive bomber was not needed. McNamara limited the AMSA program to studies and component development beginning in 1964. Program studies continued; IBM and Autonetics were awarded AMSA advanced avionics study contracts in 1968. McNamara remained opposed to the program in favor of upgrading the existing B-52 fleet and adding nearly 300 FB-111s for the shorter-range roles then being filled by the B-58, and he again vetoed funding for AMSA aircraft development in 1968.

B-1A program

President Richard Nixon reestablished the AMSA program after taking office, in keeping with his administration's flexible response strategy, which required a broad range of options short of general nuclear war. Nixon's Secretary of Defense, Melvin Laird, reviewed the programs and decided to lower the numbers of FB-111s, since they lacked the desired range, and recommended that the AMSA design studies be accelerated. In April 1969, the program officially became the B-1A. This was the first entry in the new bomber designation series created in 1962. The Air Force issued a request for proposals in November 1969. Proposals were submitted by Boeing, General Dynamics, and North American Rockwell in January 1970, and in June 1970, North American Rockwell was awarded the development contract. The original program called for two test airframes, five flyable aircraft, and 40 engines; this was cut in 1971 to one ground-test and three flight-test aircraft. The company changed its name to Rockwell International and named its aircraft division North American Aircraft Operations in 1973. A fourth prototype, built to production standards, was ordered in the fiscal year 1976 budget. Plans called for 240 B-1As to be built, with initial operational capability set for 1979. Rockwell's design had features in common with the General Dynamics F-111 Aardvark and the North American XB-70 Valkyrie. It used a crew escape capsule that ejected as a unit, to improve crew survivability if the crew had to abandon the aircraft at high speed. Additionally, the design featured large variable-sweep wings, providing both more lift during takeoff and landing and lower drag during a high-speed dash phase. With the wings set to their widest position, the aircraft had much better airfield performance than the B-52, allowing it to operate from a wider variety of bases.
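The takeoff benefit of variable sweep noted above comes largely from aspect ratio: projected span shrinks roughly with the cosine of the sweep angle, and with it the wing's low-speed lifting efficiency. The sketch below is a crude toy model; the span and wing area are illustrative assumptions, not B-1 data, and only the 15 and 67.5 degree sweep limits come from this article (they are quoted in the design overview below).

```python
import math

def projected_aspect_ratio(span_m: float, area_m2: float, sweep_deg: float) -> float:
    """Crude swept-wing model: projected span falls with cos(sweep),
    so aspect ratio (and low-speed lift efficiency) falls with it."""
    projected_span = span_m * math.cos(math.radians(sweep_deg))
    return projected_span ** 2 / area_m2

# Illustrative planform only: a 42 m unswept span and 180 m^2 area are assumed.
for sweep in (15.0, 67.5):  # the B-1's sweep limits, per the design overview
    ar = projected_aspect_ratio(42.0, 180.0, sweep)
    print(f"sweep {sweep:4.1f} deg -> effective aspect ratio {ar:4.1f}")
```

In this toy model the effective aspect ratio collapses from about 9 to about 1.4 between the two extremes, which is exactly the trade a swing-wing design exploits: high lift for short fields when unswept, low drag for the dash when fully swept.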
Penetration of the Soviet Union's defenses would take place at supersonic speed, crossing them as quickly as possible before entering the more sparsely defended interior of the country, where speeds could be reduced again. The large size and fuel capacity of the design would allow this "dash" portion of the flight to be relatively long. In order to achieve the required Mach 2 performance at high altitudes, the exhaust nozzles and air intake ramps were variable. Initially, it had been expected that Mach 1.2 could be achieved at low altitude, which required that titanium be used in critical areas of the fuselage and wing structure. The low-altitude performance requirement was later lowered to Mach 0.85, reducing the amount of titanium, and therefore the cost. A pair of small vanes mounted near the nose are part of an active vibration damping system that smooths out the otherwise bumpy low-altitude ride. The first three B-1As featured the escape capsule that ejected the cockpit with all four crew members inside; the fourth B-1A was equipped with a conventional ejection seat for each crew member. The B-1A mockup review occurred in late October 1971 and resulted in 297 requests for alterations to the design, owing to failures to meet specifications and to desired improvements in ease of maintenance and operation. The first B-1A prototype (Air Force serial no. 74–0158) flew on 23 December 1974. As the program continued, the per-unit cost continued to rise, in part because of the high inflation of that period: in 1970, the estimated unit cost was $40 million, and by 1975 this figure had climbed to $70 million.

New problems and cancellation

In 1976, Soviet pilot Viktor Belenko defected to Japan with his MiG-25 "Foxbat". During debriefing he described a new "super-Foxbat" (almost certainly referring to the MiG-31) that had look-down/shoot-down radar in order to attack cruise missiles. Such a radar would also make any low-level penetration aircraft "visible" and easy to attack. Given that the B-1's armament suite was similar to the B-52's, and that it now appeared no more likely to survive Soviet airspace than the B-52, the program was increasingly questioned. In particular, Senator William Proxmire continually derided the B-1 in public, arguing it was an outlandishly expensive dinosaur. During the 1976 federal election campaign, Jimmy Carter made opposition to the bomber part of his platform, saying, "The B-1 bomber is an example of a proposed system which should not be funded and would be wasteful of taxpayers' dollars." When Carter took office in 1977, he ordered a review of the entire program. By this point the projected cost of the program had risen to over $100 million per aircraft, although this was lifetime cost over 20 years. He was informed of the relatively new work on stealth aircraft that had started in 1975, and he decided that this was a better approach than the B-1. Pentagon officials also stated that the AGM-86 Air-Launched Cruise Missile (ALCM) launched from the existing B-52 fleet would give the USAF equal capability to penetrate Soviet airspace. With its long range, the ALCM could be launched well outside the reach of any Soviet defenses and penetrate at low altitude like a bomber (with a much lower radar cross-section (RCS) due to its smaller size), and in much greater numbers at a lower cost. A small number of B-52s could launch hundreds of ALCMs, saturating the defense. A program to improve the B-52 and to develop and deploy the ALCM would cost at least 20% less than the planned 244 B-1As.
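The saturation logic in the preceding paragraph is simple arithmetic. The per-aircraft missile load below is a commonly cited B-52 ALCM load-out assumed here for illustration; it does not appear in this article.

```python
# Why "a small number of B-52s could launch hundreds of ALCMs":
alcms_per_b52 = 20   # assumed illustrative load-out, not a figure from this article
b52s_committed = 15
print(f"{b52s_committed} B-52s -> {alcms_per_b52 * b52s_committed} "
      "independent low-flying, low-RCS targets for the defense to stop")
```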
On 30 June 1977, Carter announced that the B-1A would be canceled in favor of ICBMs, SLBMs, and a fleet of modernized B-52s armed with ALCMs. Carter called it "one of the most difficult decisions that I've made since I've been in office." No mention of the stealth work was made public, as the program was top secret, but it is now known that in early 1978 he authorized the Advanced Technology Bomber (ATB) project, which eventually led to the B-2 Spirit. Domestically, the reaction to the cancellation was split along partisan lines. The Department of Defense was surprised by the announcement; it had expected that the number of B-1s ordered would merely be reduced to around 150. Congressman Robert Dornan (R-CA) claimed, "They're breaking out the vodka and caviar in Moscow." However, it appears the Soviets were more concerned by large numbers of ALCMs, which represented a much greater threat than a smaller number of B-1s. The Soviet news agency TASS commented that "the implementation of these militaristic plans has seriously complicated efforts for the limitation of the strategic arms race." Western military leaders were generally happy with the decision. NATO commander Alexander Haig described the ALCM as an "attractive alternative" to the B-1, and French General Georges Buis stated, "The B-1 is a formidable weapon, but not terribly useful. For the price of one bomber, you can have 200 cruise missiles." Flight tests of the four B-1A prototypes continued through April 1981. The program included 70 flights totaling 378 hours, and a top speed of Mach 2.22 was reached by the second B-1A. Engine testing also continued during this time, with the YF101 engines totaling almost 7,600 hours.

Shifting priorities

It was during this period that the Soviets started to assert themselves in several new theaters of action, in particular through Cuban proxies during the Angolan Civil War starting in 1975 and through the Soviet invasion of Afghanistan in 1979. U.S. strategy to this point had been focused on containing Communism and preparing for war in Europe, and the new Soviet actions revealed that the military lacked capability outside these narrow confines. The U.S. Department of Defense responded by accelerating its Rapid Deployment Forces concept, but suffered from major problems with airlift and sealift capability. Air power was critical to slowing an enemy invasion of other countries; however, the key Iran–Afghanistan border was outside the range of the U.S. Navy's carrier-based attack aircraft, leaving this role to the U.S. Air Force. During the 1980 presidential campaign, Ronald Reagan campaigned heavily on the platform that Carter was weak on defense, citing the cancellation of the B-1 program as an example, a theme he continued using into the 1980s. During this time Carter's defense secretary, Harold Brown, announced the stealth bomber project, apparently implying that this had been the reason for the B-1 cancellation.

B-1B program

On taking office, Reagan faced the same decision as Carter before him: whether to continue with the B-1 for the short term or to wait for the development of the ATB, a much more advanced aircraft. Studies suggested that the existing B-52 fleet with ALCM would remain a credible threat until 1985; it was predicted that 75% of the B-52 force would survive to attack its targets. After 1985, the introduction of the SA-10 missile, the MiG-31 interceptor, and the first effective Soviet airborne early warning and control (AWACS) systems would make the B-52 increasingly vulnerable.
During 1981, funds were allocated to a new study for a bomber for the 1990s time frame, which led to the Long-Range Combat Aircraft (LRCA) project. The LRCA evaluated the B-1, F-111, and ATB as possible solutions; an emphasis was placed on multi-role capabilities, as opposed to purely strategic operations. In 1981, it was believed the B-1 could be in operation before the ATB, covering the transitional period between the B-52's increasing vulnerability and the ATB's introduction. Reagan decided the best solution was to procure both the B-1 and the ATB, and on 2 October 1981 he announced that 100 B-1s were to be ordered to fill the LRCA role. In January 1982, the U.S. Air Force awarded two contracts to Rockwell, worth a combined $2.2 billion, for the development and production of 100 new B-1 bombers. Numerous changes were made to the design to make it better suited to the now-expected missions, resulting in the B-1B. These changes included a reduction in maximum speed, which allowed the variable-geometry intake ramps to be replaced by simpler fixed-geometry ones. This reduced the B-1B's radar cross-section, which was seen as a good trade-off for the decrease in speed. High subsonic speed at low altitude became a focus of the revised design, and low-level speed was increased from about Mach 0.85 to Mach 0.92; the B-1B has a maximum speed of Mach 1.25 at higher altitudes. The B-1B's maximum takeoff weight was also increased over the B-1A's, to allow for takeoff with a full internal fuel load and for external weapons to be carried. Rockwell engineers were able to reinforce critical areas and lighten non-critical areas of the airframe, so the increase in empty weight was minimal. To deal with the introduction of the MiG-31, equipped with the new Zaslon radar system, and other aircraft with look-down capability, the B-1B's electronic warfare suite was significantly upgraded.

Opposition to the plan was widespread within Congress. Critics pointed out that many of the original problems remained, in both performance and expense. In particular, it seemed a B-52 fitted with electronics similar to the B-1B's would be equally able to avoid interception, as the speed advantage of the B-1 was now minimal. It also appeared that the "interim" period served by the B-1B would be less than a decade, the type being rendered obsolete shortly after the introduction of the much more capable ATB design. The primary arguments in favor of the B-1 were its large conventional weapons payload and the takeoff performance that allowed it to operate with a credible bomb load from a much wider variety of airfields. Production subcontracts were spread across many congressional districts, making the aircraft more popular on Capitol Hill. B-1A No. 1 was disassembled and used for radar testing at the Rome Air Development Center at the former Griffiss Air Force Base, New York, while B-1As No. 2 and No. 4 were modified to include B-1B systems. The first B-1B was completed and began flight testing in March 1983. The first production B-1B was rolled out on 4 September 1984 and first flew on 18 October 1984. The 100th and final B-1B was delivered on 2 May 1988; before the last B-1B was delivered, the USAF had determined that the aircraft was vulnerable to Soviet air defenses.

Design

Overview

The B-1 has a blended wing body configuration, with a variable-sweep wing, four turbofan engines, triangular ride-control fins, and a cruciform tail.
The wings can sweep from 15 degrees to 67.5 degrees (full forward to full sweep). Forward-swept wing settings are used for takeoff, landings, and high-altitude economical cruise; aft-swept wing settings are used in high subsonic and supersonic flight. The B-1's variable-sweep wings and thrust-to-weight ratio provide it with improved takeoff performance, allowing it to use shorter runways than previous bombers. The length of the aircraft presented a flexing problem due to air turbulence at low altitude. To alleviate this, Rockwell included small triangular fin control surfaces, or vanes, near the nose of the B-1. The B-1's Structural Mode Control System moves the vanes and lower rudder to counteract the effects of turbulence and smooth out the ride. Unlike the B-1A, the B-1B cannot reach Mach 2+ speeds; its maximum speed is Mach 1.25 (about 950 mph or 1,530 km/h at altitude), but its low-level speed was increased to Mach 0.92 (700 mph, 1,130 km/h). The speed of the current version of the aircraft is limited by the need to avoid damage to its structure and air intakes. To help lower its radar cross-section, the B-1B uses serpentine air intake ducts (see S-duct) and fixed intake ramps, which limit its speed compared to the B-1A. Vanes in the intake ducts serve to deflect and shield radar returns from the highly reflective engine compressor blades. The B-1A's engine was modified slightly to produce the GE F101-102 for the B-1B, with an emphasis on durability and increased efficiency. The core of this engine was subsequently used in several other engines, including the GE F110, which powers the F-14 Tomcat, the F-15K/SG variants, and later versions of the General Dynamics F-16 Fighting Falcon. It is also the basis for the non-afterburning GE F118 used in the B-2 Spirit and the U-2S, and the F101 engine core is likewise used in the CFM56 civil engine. The nose-gear door houses the ground-crew controls for the auxiliary power unit (APU), which can be quick-started during a scramble.

Avionics

The B-1's main computer is the IBM AP-101, which was also used on the Space Shuttle orbiter and the B-52 bomber. The computer is programmed in the JOVIAL programming language. The Lancer's offensive avionics include the Westinghouse (now Northrop Grumman) AN/APQ-164 forward-looking offensive passive electronically scanned array radar set with electronic beam steering (and a fixed antenna pointed downward for reduced radar observability), synthetic aperture radar, ground moving target indication (GMTI), and terrain-following radar modes, as well as Doppler navigation, a radar altimeter, and an inertial navigation suite. The B-1B Block D upgrade added a Global Positioning System (GPS) receiver beginning in 1995. The B-1's defensive electronics include the Eaton AN/ALQ-161A radar warning and defensive jamming equipment, which has three sets of antennas: one at the front base of each wing and a third, rear-facing, in the tail radome. Also in the tail radome is the AN/ALQ-153 missile approach warning system (a pulse-Doppler radar). The ALQ-161 is linked to a total of eight AN/ALE-49 flare dispensers located on top behind the canopy, which are handled by the AN/ASQ-184 avionics management system. Each AN/ALE-49 dispenser has a capacity of 12 MJU-23A/B flares; the MJU-23A/B is one of the world's largest infrared countermeasure flares. The B-1 has also been equipped to carry the ALE-50 towed decoy system. Also aiding the B-1's survivability is its relatively low RCS.
Although not technically a stealth aircraft, the B-1, thanks to its structure, serpentine intake paths, and use of radar-absorbent material, has an RCS about 1/50th that of the similarly sized B-52: approximately 26 sq ft (2.4 m²), comparable to that of a small fighter aircraft. The B-1 holds 61 FAI world records for speed, payload, distance, and time-to-climb in different aircraft weight classes. In November 1993, three B-1Bs set a long-distance record for the aircraft, demonstrating its ability to fly extended missions, strike anywhere in the world, and return to base without any stops. The National Aeronautic Association recognized the B-1B for completing one of the 10 most memorable record flights of 1994.

Upgrades

The B-1 has been upgraded since production, beginning with the Conventional Mission Upgrade Program (CMUP), which added a new MIL-STD-1760 smart-weapons interface to enable the use of precision-guided conventional weapons. CMUP was delivered through a series of upgrades:

Block A was the standard B-1B with the capability to deliver non-precision gravity bombs.
Block B brought an improved synthetic aperture radar and upgrades to the defensive countermeasures system; it was fielded in 1995.
Block C provided an "enhanced capability" for delivery of up to 30 cluster bomb units (CBUs) per sortie, with modifications made to 50 bomb racks.
Block D added a "Near Precision Capability" via improved weapons and targeting systems, and added advanced secure communications capabilities. The first part of the electronic countermeasures upgrade added the Joint Direct Attack Munition (JDAM), the ALE-50 towed decoy system, and anti-jam radios.
Block E upgraded the avionics computers and incorporated the Wind Corrected Munitions Dispenser (WCMD), the AGM-154 Joint Standoff Weapon (JSOW), and the AGM-158 Joint Air-to-Surface Standoff Munition (JASSM), substantially improving the bomber's capability. Upgrades were completed in September 2006.
Block F was the Defensive Systems Upgrade Program (DSUP), intended to improve the aircraft's electronic countermeasures and jamming capabilities; it was canceled in December 2002 due to cost overruns and delays.

In 2007, the Sniper XR targeting pod was integrated on the B-1 fleet. The pod is mounted on an external hardpoint at the aircraft's chin near the forward bomb bay. Following accelerated testing, the Sniper pod was fielded in summer 2008. Future precision munitions include the Small Diameter Bomb. The USAF commenced the Integrated Battle Station (IBS) modification in 2012 as a combination of three separate upgrades, having realized the benefits of completing them concurrently: the Fully Integrated Data Link (FIDL), the Vertical Situational Display Unit (VSDU), and the Central Integrated Test System (CITS). FIDL enables electronic data sharing, eliminating the need to enter information between systems by hand. VSDU replaces existing flight instruments with multifunction color displays; a second display aids with threat evasion and targeting and acts as a back-up display. CITS is a new diagnostic system that allows the crew to monitor over 9,000 parameters on the aircraft. Other additions include replacing the two spinning-mass gyroscopic inertial navigation systems with ring-laser gyroscope systems and a GPS antenna, replacing the APQ-164 radar with the Scalable Agile Beam Radar – Global Strike (SABR-GS) active electronically scanned array, and fitting a new attitude indicator. The IBS upgrades were completed in 2020.
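The practical value of the RCS reduction quoted at the start of this section follows from the radar range equation, in which detection range scales with the fourth root of radar cross-section. The sketch below is a back-of-the-envelope check using only the 1/50 ratio from the text, all else held equal.

```python
# In the radar range equation, received power falls as 1/R**4, so the
# maximum detection range scales as sigma**0.25 (fourth root of RCS).
rcs_ratio = 1 / 50                 # B-1B RCS relative to the B-52, per the text
range_ratio = rcs_ratio ** 0.25
print(f"relative detection range: {range_ratio:.2f}")  # ~0.38
```

A 50-fold RCS cut thus buys roughly a 60% reduction in the range at which a given radar can detect the bomber, not a 50-fold one; combined with low-level flight, it still narrows the defenses' engagement window considerably.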
In August 2019, the Air Force unveiled a modification to the B-1B allowing it to carry more weapons internally and externally. Using the moveable forward bulkhead, space in the intermediate bay was increased from 180 to 269 in (457 to 683 cm). Expanding the internal bay to make use of the Common Strategic Rotary Launcher (CSRL), as well as utilizing six of the eight external hardpoints that had previously been kept out of use to comply with the New START Treaty, would increase the B-1B's weapons load from 24 to 40. The configuration also enables it to carry heavier weapons in the 5,000 lb (2,300 kg) range, such as hypersonic missiles; the AGM-183 ARRW is planned for integration onto the bomber. In the future, the bomber could also carry the HAWC, and combining internal and external weapon carriage could conceivably bring the total number of hypersonic weapons carried to 31.

Operational history

Strategic Air Command

The second B-1B, "The Star of Abilene", was the first B-1B delivered to SAC, in June 1985. Initial operational capability was reached on 1 October 1986, and the B-1B was placed on nuclear alert status. The B-1 received the official name "Lancer" on 15 March 1990. However, the bomber has been commonly called the "Bone", a nickname that appears to stem from an early newspaper article on the aircraft wherein its name was phonetically spelled out as "B-ONE" with the hyphen inadvertently omitted. In late 1990, engine fires in two Lancers led to a grounding of the fleet. The cause was traced back to problems in the first-stage fan, and the aircraft were placed on "limited alert"; in other words, they were grounded unless a nuclear war broke out. Following inspections and repairs, they were returned to duty beginning on 6 February 1991. By 1991, the B-1 had a fledgling conventional capability, with forty of them able to drop the Mk-82 general purpose (GP) bomb, although mostly from low altitude. Despite being cleared for this role, the problems with the engines prevented their use in Operation Desert Storm during the Gulf War. B-1s were primarily reserved for strategic nuclear strike missions at this time, serving as an airborne nuclear deterrent against the Soviet Union; the B-52 was better suited to conventional warfare, and it was used by coalition forces instead. Because it had originally been designed strictly for nuclear war, the B-1's development into an effective conventional bomber was delayed. The collapse of the Soviet Union brought the B-1's nuclear role into question, leading President George H. W. Bush to order a $3 billion conventional refit. After the inactivation of SAC and the establishment of Air Combat Command (ACC) in 1992, the B-1 developed a greater conventional weapons capability. Part of this development was the start-up of the U.S. Air Force Weapons School B-1 Division. In 1994, two additional B-1 bomb wings were created in the Air National Guard, with former fighter wings in the Kansas Air National Guard and the Georgia Air National Guard converting to the aircraft. By the mid-1990s, the B-1 could employ GP weapons as well as various CBUs, and by the end of the 1990s, with the advent of the Block D upgrade, it boasted a full array of guided and unguided munitions. The B-1B no longer carries nuclear weapons; its nuclear capability was disabled by 1995 with the removal of nuclear arming and fuzing hardware. Under provisions of the New START treaty with Russia, further conversions were performed.
These included modification of aircraft hardpoints to prevent nuclear weapon pylons from being attached, removal of the weapons bay wiring bundles used for arming nuclear weapons, and destruction of the nuclear weapon pylons themselves. The conversion process was completed in 2011, and Russian officials inspect the aircraft every year to verify compliance.

Air Combat Command

The B-1 was first used in combat in support of operations in Iraq during Operation Desert Fox in December 1998, employing unguided GP weapons. B-1s have subsequently been used in Operation Allied Force (Kosovo) and, most notably, in Operation Enduring Freedom in Afghanistan and the 2003 invasion of Iraq. The B-1 has deployed an array of conventional weapons in war zones, most notably the GBU-31 JDAM. In the first six months of Operation Enduring Freedom, eight B-1s dropped almost 40 percent of the aerial ordnance, including some 3,900 JDAMs. JDAM munitions were heavily used by the B-1 over Iraq, notably on 7 April 2003 in an unsuccessful attempt to kill Saddam Hussein and his two sons. During Operation Enduring Freedom, the B-1 was able to raise its mission-capable rate to 79%. Of the 100 B-1Bs built, 93 remained in 2000 after losses in accidents. In June 2001, the Pentagon sought to place one-third of its then fleet into storage; several U.S. Air National Guard officers and members of Congress lobbied against the move, including by drafting an amendment to prevent such cuts. The 2001 proposal was intended to allow money to be diverted to further upgrades of the remaining B-1Bs, such as computer modernization. In 2003, accompanied by the removal of B-1Bs from the two bomb wings in the Air National Guard, the USAF decided to retire 33 aircraft to concentrate its budget on maintaining the availability of the remaining B-1Bs. In 2004, a new appropriations bill called for some retired aircraft to return to service, and the USAF returned seven mothballed bombers to service, increasing the fleet to 67 aircraft. On 14 July 2007, the Associated Press reported on the growing USAF presence in Iraq, including the reintroduction of B-1Bs as a close-at-hand platform to support Coalition ground forces. Beginning in 2008, B-1s were used in Iraq and Afghanistan in an "armed overwatch" role, loitering for surveillance purposes while ready to deliver guided bombs in support of ground troops as required. The B-1B underwent a series of flight tests using a 50/50 mix of synthetic and petroleum fuel; on 19 March 2008, a B-1B from Dyess Air Force Base, Texas, became the first USAF aircraft to fly at supersonic speed using a synthetic fuel, during a flight over Texas and New Mexico. This was conducted as part of a USAF testing and certification program to reduce reliance on traditional oil sources. On 4 August 2008, a B-1B flew the first combat sortie with the Sniper Advanced Targeting Pod, during which the crew successfully targeted enemy ground forces and dropped a GBU-38 guided bomb in Afghanistan. In March 2011, B-1Bs from Ellsworth Air Force Base attacked undisclosed targets in Libya as part of Operation Odyssey Dawn. With upgrades to keep the B-1 viable, the USAF may keep it in service until approximately 2038. Despite the upgrades, each flight hour requires 48.4 hours of repair, and the fuel, repairs, and other needs for a 12-hour mission cost $720,000 as of 2010. The $63,000 cost per flight hour is, however, less than the $72,000 for the B-52 and the $135,000 for the B-2.
In June 2010, senior USAF officials met to consider retiring the entire fleet to meet budget cuts. The Pentagon plans to begin replacing the aircraft with the B-21 Raider after 2025. In the meantime, its "capabilities are particularly well-suited to the vast distances and unique challenges of the Pacific region, and we'll continue to invest in, and rely on, the B-1 in support of the focus on the Pacific" as part of President Obama's "Pivot to East Asia". In August 2012, the 9th Expeditionary Bomb Squadron returned from a six-month tour in Afghanistan. Its nine B-1Bs flew 770 sorties, the most of any B-1B squadron on a single deployment. The squadron spent 9,500 hours airborne, keeping one of its bombers in the air at all times. They accounted for a quarter of all combat aircraft sorties over the country during that time and fulfilled an average of two to three air support requests per day. On 4 September 2013, a B-1B participated in a maritime evaluation exercise, deploying munitions such as laser-guided 500 lb GBU-54 bombs, 500 lb and 2,000 lb JDAMs, and the Long Range Anti-Ship Missile (LRASM). The aim was to detect and engage several small craft using existing weapons and tactics developed for conventional warfare against ground targets; the B-1 is seen as a useful asset for maritime duties such as patrolling shipping lanes. Beginning in 2014, the B-1 was used against the Islamic State (IS) in the Syrian Civil War. From August 2014 to January 2015, the B-1 accounted for eight percent of USAF sorties during Operation Inherent Resolve (OIR). The 9th Bomb Squadron was deployed to Qatar in July 2014 to support missions in Afghanistan, but when the air campaign against IS began on 8 August, the aircraft were employed in Iraq. During the Battle of Kobane in Syria, the squadron's B-1s dropped 660 bombs over five months in support of Kurdish forces defending the city. This amounted to one-third of all bombs used during OIR in that period, and the strikes killed some 1,000 ISIL fighters. The 9th Bomb Squadron's B-1s went "Winchester" (dropping all weapons on board) 31 times during their deployment, and dropped over 2,000 JDAMs during the six-month rotation. B-1s from the 28th Bomb Wing flew 490 sorties, dropping 3,800 munitions on 3,700 targets during a six-month deployment. In February 2016, the B-1s were sent back to the U.S. for cockpit upgrades.

Air Force Global Strike Command

As part of a USAF reorganization announced in April 2015, all B-1s were reassigned from Air Combat Command to Global Strike Command in October 2015. On 8 July 2017, the USAF flew two B-1s near the North Korean border in a show of force amid increasing tensions, particularly in response to North Korea's 4 July test of an ICBM capable of reaching Alaska. On 14 April 2018, B-1s launched 19 JASSM missiles as part of the 2018 bombing of Damascus and Homs in Syria. In August 2019, only six B-1Bs met full mission capability; 15 were undergoing depot maintenance and 39 were under repair and inspection. In February 2021, the USAF announced that it would retire 17 B-1s, leaving 45 aircraft in service; four of these would be stored in a condition allowing their return to service if required. In March 2021, B-1s deployed to Norway's Ørland Main Air Station for the first time. During the deployment, they conducted bombing training with Norwegian and Swedish ground-force joint terminal attack controllers.
One B-1 also conducted a warm-pit refuel at Bodø Main Air Station, marking the type's first landing inside Norway's Arctic Circle, and integrated with four Swedish Air Force JAS 39 Gripen fighters.

Variants

B-1A

The B-1A was the original B-1 design, with variable engine intakes and a Mach 2.2 top speed. Four prototypes were built; no production units were manufactured.

B-1B

The B-1B is a revised B-1 design with reduced radar signature and a top speed of Mach 1.25. It is optimized for low-level penetration. A total of 100 B-1Bs were produced.

B-1R

The B-1R was a proposed 2004 upgrade of existing B-1B aircraft. The B-1R (R for "regional") would have been fitted with advanced radars, air-to-air missiles, and new Pratt & Whitney F119 engines (from the Lockheed Martin F-22 Raptor). This variant would have had a top speed of Mach 2.2, but 20% shorter range. Existing external hardpoints would have been modified to allow multiple conventional weapons to be carried, increasing the overall loadout. For air-to-air defense, an active electronically scanned array (AESA) radar would have been added and some existing hardpoints modified to carry air-to-air missiles.

Operators

The USAF had 62 B-1Bs in service as of August 2017.

Accidents and incidents

From 1984 to 2001, ten B-1s were lost in accidents, with 17 crew members or other people on board killed. In September 1987, a B-1B (serial number 84–0052) from the 96th Bomb Wing, 338th Combat Crew Training Squadron, Dyess AFB, crashed near La Junta, Colorado, while flying on a low-level training route. This was the only B-1B crash to occur with six crew members aboard; the two crew members in jump seats and one of the four crew members in ejection seats perished. The root cause of the accident was thought to be a bird strike on a wing's leading edge during the low-level flight. The impact was severe enough to sever fuel and hydraulic lines on one side of the aircraft, while the other side's engines functioned long enough to allow for ejection. The B-1B fleet was later modified to protect these supply lines. In October 1990, while flying a training route in eastern Colorado, a B-1B (86-0128) from the 384th Bomb Wing, 28th Bomb Squadron, McConnell AFB, experienced an explosion as the engines reached full power without afterburners, and a fire was spotted on the aircraft's left side. The No. 1 engine was shut down and its fire extinguisher was activated. The accident investigation determined that the engine had suffered a catastrophic failure: engine blades had cut through the engine mounts, and the engine had become detached from the aircraft. In December 1990, a B-1B (83-0071) from the 96th Bomb Wing, 337th Bomb Squadron, Dyess AFB, Texas, experienced a jolt that caused the No. 3 engine to shut down, with its fire extinguisher activating. This event, coupled with the October 1990 engine incident, led to a grounding of more than 50 days for the B-1Bs not on nuclear alert status. The problem was eventually traced back to the first-stage fan, and all B-1Bs were equipped with modified engines.
https://en.wikipedia.org/wiki/BSA
BSA
BSA may refer to:

Businesses and organizations
Basketball South Africa
Bearing Specialists Association
Belarusian Socialist Assembly
Bibliographical Society of America
Birmingham Small Arms Company, UK manufacturer of firearms and vehicles
Black Socialists in America
Boston Society of Architects
Botanical Society of America
Boy Scouts of America
Scouts BSA, the flagship program of the Boy Scouts of America
British Social Attitudes Survey
British Sandwich Association
British Science Association
British Sociological Association
British Speleological Association
British Stammering Association
Broadcasting Service Association, former name of the Australian radio network Macquarie Media
Broadcasting Standards Authority
BSA Company, motorcycle manufacturer
BSA motorcycles, made by the Birmingham Small Arms Company Limited
BSA (The Software Alliance), a trade group established by Microsoft, formerly called the Business Software Alliance
BSA Manufacturing, a Malaysian manufacturer of aluminum alloy wheels
Business Services Association, of UK service providers

Schools
Baltimore School for the Arts
Birmingham School of Acting
British School at Athens

Science and medicine
Behavioral systems analysis
Bis(trimethylsilyl)acetamide
Body surface area
Bovine serum albumin
Broad-spectrum antiviral drug

Other uses
Bank Secrecy Act, US
Bilateral Security Agreement, US umbrella for military cooperation
Bosnian Serb Army, the Army of Republika Srpska, the former armed forces of the Republika Srpska
Bachelor of Science and Arts (BSA)
Bachelor of Science in Accountancy (BSA)
Bachelor of Science in Agriculture (B.S.A.)
British Soap Awards, an awards ceremony in the UK
BSA, a brand of bicycles produced by Tube Investments of India
Business systems analyst
Blue Dragon Series Awards, an annual award ceremony in South Korea
https://en.wikipedia.org/wiki/Bohdan%20Khmelnytsky
Bohdan Khmelnytsky
Bohdan Zynovii Mykhailovych Khmelnytskyi (Ruthenian: Ѕѣнові Богданъ Хмелнiцкiи; c. 1595 – 6 August 1657) was a Ukrainian military commander and Hetman of the Zaporozhian Host, which was then under the suzerainty of the Polish–Lithuanian Commonwealth. He led an uprising against the Commonwealth and its magnates (1648–1654) that resulted in the creation of an independent Ukrainian Cossack state. In 1654, he concluded the Treaty of Pereyaslav with the Russian Tsar and allied the Cossack Hetmanate with the Tsardom of Russia, thus placing central Ukraine under Russian protection. During the uprising, in 1648–1649, the Cossacks massacred thousands of Jews, one of the most traumatic events in the history of the Jews in Ukraine and a fraught chapter in the history of Ukrainian nationalism.

Early life

Although there is no definite proof of the date of Khmelnytsky's birth, the Russian historian Mykhaylo Maksymovych suggested that it was likely 27 December 1595 Julian (St. Theodore's day). As was the custom in the Orthodox Church, he was baptized with one of his middle names, Theodor, translated into Ukrainian as Bohdan. A biography of Khmelnytsky by Smoliy and Stepankov, however, suggests that he was more likely born on 9 November (the feast day of St. Zenoby, 30 October in the Julian calendar) and baptized on 11 November (the feast day of St. Theodore in the Catholic Church). Khmelnytsky was probably born in the village of Subotiv, near Chyhyryn in the Crown of the Kingdom of Poland, at the estate of his father, Mykhailo Khmelnytsky. He was born into the Ukrainian lesser nobility. His father was a courtier of Great Crown Hetman Stanisław Żółkiewski, but later joined the court of Żółkiewski's son-in-law Jan Daniłowicz, who in 1597 became starosta of Korsuń and Chyhyryn and appointed Mykhailo as his deputy in Chyhyryn (pidstarosta). For his service, Mykhailo was granted a strip of land near the town, where he set up a khutor (farmstead) at Subotiv. There has been controversy as to whether Bohdan and his father belonged to the szlachta (the Polish term for the nobility). Some sources state that in 1590 his father Mykhailo was appointed as a sotnyk for the Korsun-Chyhyryn starosta Jan Daniłowicz, who continued to colonize the new Ukrainian lands near the Dnieper river. Khmelnytsky identified as a noble, and his father's status as pidstarosta of Chyhyryn helped him to be considered as such by others. During the Uprising, however, Khmelnytsky would stress his mother's Cossack roots and his father's exploits with the Cossacks of the Sich. Khmelnytsky attended a Jesuit college, possibly in Jarosław, but more likely in Lviv, in the school founded by Hetman Żółkiewski. He completed his schooling by 1617, acquiring a broad knowledge of world history and learning Polish and Latin; later he learned Turkish, Tatar, and French. Unlike many of the other Jesuit students, he did not embrace Roman Catholicism but remained Orthodox.

Marriage and family

Khmelnytsky married Hanna Somkivna, a sister of a rich Pereyaslavl Cossack; the couple settled in Subotiv. By the second half of the 1620s, they had three daughters: Stepanyda, Olena, and Kateryna. His first son, Tymish (Tymofiy), was born in 1632, and another son, Yuriy, was born in 1640.

Registered Cossack

Upon completion of his studies in 1617, Khmelnytsky entered into service with the Cossacks. As early as 1619, he was sent together with his father to Moldavia, when the Polish–Lithuanian Commonwealth went to war against the Ottoman Empire.
During the Battle of Cecora (Țuțora) on 17 September 1620, his father was killed, and young Khmelnytsky, among many others including the future hetman Stanisław Koniecpolski, was captured by the Turks. He spent the next two years in captivity in Constantinople as a prisoner of an Ottoman kapudan pasha (presumably Parlak Mustafa Pasha); other sources claim that he spent his captivity as an oarsman on Ottoman Navy galleys, where he picked up a knowledge of Turkic languages. While there is no concrete evidence as to his return to Ukraine, most historians believe Khmelnytsky either escaped or was ransomed. Sources vary as to his benefactor (his mother, friends, or the Polish king); it may have been Krzysztof Zbaraski, the Commonwealth's ambassador to the Ottomans, who in 1622 paid 30,000 thalers in ransom for all prisoners of war captured at the Battle of Cecora. Upon his return to Subotiv, Khmelnytsky took over operating his father's estate and became a registered Cossack in the Chyhyryn Regiment. He most likely did not take part in any of the Cossack uprisings that broke out in Ukraine at that time, and his loyal service earned him the rank of military clerk (pisarz wojskowy) of the registered Cossacks in 1637. This happened after the capitulation of the Pavlyuk uprising in the town of Borowica on 24 December 1637, when Field Hetman Mikołaj Potocki appointed a new Cossack eldership. He had to do so because some of the elders had either joined Pavlyuk or been killed by him (like the former military clerk, Teodor Onuszkowicz). Because of his new position, Khmelnytsky was the one who prepared and signed the act of capitulation. The fighting did not stop at Borowica: rebel Cossacks rose up again under the new command of Ostryanyn and Hunia in the spring of the following year. Mikołaj Potocki was successful again, and after a six-week siege the rebel Cossacks were forced to capitulate on 3 August 1638. As in the year before, some registered Cossacks joined the rebels while others remained loyal. Unlike the previous time, Potocki decided not to punish the rebel Cossacks, but forced all of them to swear loyalty to the king and the state, and to swear not to seek revenge against each other. The hetman also agreed to their request to send emissaries to the king to seek royal grace and to preserve Cossack rights. The emissaries were elected at a council on 9 September 1638 in Kyiv; Bohdan Khmelnytsky was one of them, the other three being Iwan Bojaryn, colonel of Kaniów, Roman Połowiec, and Jan Wołczenko. The emissaries did not achieve much, mostly because all the decisions had already been made by the Sejm earlier that year, when the deputies accepted the project presented by Grand Hetman Stanisław Koniecpolski. The Cossacks were forced to accept harsh new terms at the next council, at Masłowy Staw on the Ros river. Under the articles of the Ordynacya Woyska Zaporowskiego ("Ordinance of the Zaporozhian Army"), the registered Cossacks lost the right to elect their own officers and their commander, called the elder (starszy) or commissar. From then on, the elder was to be nominated by the Sejm on the Grand Hetman's recommendation, and the Grand Hetman also gained the right to appoint all officers. Commissars, colonels, and osauls had to be noblemen, while sotniks and atamans had to be Cossacks who were "distinguished in a service for Us and the Commonwealth". Khmelnytsky became one of the sotniks of the Chyhyryn regiment.
In 1663, Pierre Chevalier published in Paris a book about the Cossack uprising, Histoire de la guerre des Cosaques contre la Pologne, which he dedicated to Nicolas Léonor de Flesselles, Count de Brégy, who had been ambassador to Poland in 1645. In the dedication he described a meeting de Brégy had with Khmelnytsky in France and a group of Cossacks brought to France to fight against Spain in Flanders; Chevalier also claimed that he himself had commanded Cossacks in Flanders. Yet in the rest of the book Chevalier does not mention either the Cossacks in Flanders or Khmelnytsky's stay in France even once. In his other work published the same year, Relation des Cosaques (avec la vie de Kmielniski, tirée d'un Manuscrit), which also contains a biography of Khmelnytsky, there is no mention of his or any other Cossacks' stay in France or Flanders. Moreover, Chevalier's first book is the only source that mentions such an event; there is no trace of it even in the correspondence of Count de Brégy. It is true, however, that de Brégy was recruiting soldiers in Poland for the French army in the years 1646–1648. He succeeded, and about 3,000 of them travelled via Gdańsk to Flanders and took part in the fighting around Dunkirk. French sources describe them as infanterie tout Poulonnois qu'Allemand ("infantry as much Polish as German"). They were commanded by colonels Krzysztof Przyjemski, Andrzej Przyjemski, and Georges Cabray. A second levy, which shipped off in 1647, was commanded by Jan Pleitner, a Dutch military engineer in the service of Władysław IV, and Jan Denhoff, a colonel of the Royal Guard. The 17th-century French historian Jean-François Sarasin, describing the participation of Polish mercenaries in the fighting over Dunkirk in his Histoire du siège de Dunkerque, notes that they were commanded by a certain "Sirot". Some historians identify him as the Cossack otaman Ivan Sirko. Claims that Khmelnytsky and his Cossacks were actually in France are supported by some Ukrainian historians, while others, and most Polish scholars, find them unlikely.

Czapliński Affair

Upon the death of the magnate Stanisław Koniecpolski (March 1646), his successor, Aleksander, redrew the boundaries of his possessions and laid claim to Khmelnytsky's estate. Trying to find protection from this grab by the powerful magnate, Khmelnytsky wrote numerous appeals and letters to different representatives of the Polish crown, but to no avail. At the end of 1645, the Chyhyryn starosta Daniel Czapliński officially received authority from Koniecpolski to seize Khmelnytsky's Subotiv estate. In the summer of 1646, Khmelnytsky arranged an audience with King Władysław IV to plead his case, as he had favourable standing at the court. Władysław, who wanted the Cossacks on his side in the wars he planned, gave Khmelnytsky a royal charter protecting his rights to the Subotiv estate. But because of the structure of the Commonwealth at that time and the lawlessness of Ukraine, even the king was not able to prevent a confrontation with the local magnates. At the beginning of 1647, Daniel Czapliński started to harass Khmelnytsky in order to force him off the land. On two occasions he had Subotiv raided: considerable property damage was done, and Khmelnytsky's son Yuriy was badly beaten. Finally, in April 1647, Czapliński succeeded in evicting Khmelnytsky from the land, and he was forced to move with his large family to a relative's house in Chyhyryn. In May 1647, Khmelnytsky arranged a second audience with the king to plead his case, but found him unwilling to confront a powerful magnate.
In addition to losing the estate, Khmelnytsky suffered the loss of his wife Hanna, and was left alone with their children. He promptly remarried, to Motrona (Helena Czaplińska), by that time the wife of Daniel Czapliński and the so-called "Helen of the steppe". He was less successful in matters of property, being unable to regain the land and goods of his estate or to obtain financial compensation for them. During this time he met several senior Polish officials to discuss the Cossacks' war with the Tatars, and used the occasion to plead his case against Czapliński once more, again unsuccessfully. While Khmelnytsky found no support among the Polish officials, he found it in his Cossack friends and subordinates; his Chyhyryn regiment and others were on his side. All through the autumn of 1647, Khmelnytsky travelled from one regiment to another and had numerous consultations with Cossack leaders throughout Ukraine. His activity raised suspicion among the local Polish authorities, already used to Cossack revolts, and he was promptly arrested. Koniecpolski issued an order for his execution, but the Chyhyryn Cossack polkovnyk who held Khmelnytsky was persuaded to release him. Not willing to tempt fate any further, Khmelnytsky headed for the Zaporozhian Sich with a group of his supporters.

Uprising

While the Czapliński Affair is generally regarded as the immediate cause of the uprising, it was primarily a catalyst for actions representing rising popular discontent, into which religion, ethnicity, and economics all factored. While the Polish–Lithuanian Commonwealth remained a union of nations, its sizable population of Orthodox Ruthenians was ignored. Oppressed by the Polish magnates, they took their wrath out on Poles, as well as on the Jews, who often managed the estates of Polish nobles. The advent of the Counter-Reformation worsened relations between the Orthodox and Catholic Churches, and many Orthodox Ukrainians considered the Union of Brest a threat to their Orthodox faith.

Initial successes

At the end of 1647, Khmelnytsky reached the estuary of the Dnieper river. On 7 December, his small detachment (300–500 men), with the help of registered Cossacks who went over to his side, disarmed the small Polish detachment guarding the area and took over the Zaporozhian Sich. The Poles attempted to retake the Sich but were decisively defeated as more registered Cossacks joined the forces. At the end of January 1648, a Cossack Rada was called, and Khmelnytsky was unanimously elected hetman. A period of feverish activity followed: Cossacks were sent with the hetman's letters to many regions of Ukraine, calling on Cossacks and Orthodox peasants to join the rebellion; Khortytsia was fortified; efforts were made to acquire and make weapons and ammunition; and emissaries were sent to the Khan of Crimea, İslâm III Giray. Initially, the Polish authorities took the news of Khmelnytsky's arrival at the Sich and the reports about the rebellion lightly. The two sides exchanged lists of demands: the Poles asked the Cossacks to surrender their mutinous leader and disband, while Khmelnytsky and the Rada demanded that the Commonwealth restore the Cossacks' ancient rights, stop the advance of the Ukrainian Greek Catholic Church, yield the right to appoint Orthodox leaders of the Sich and of the registered Cossack regiments, and remove Commonwealth troops from Ukraine. The Polish magnates considered the demands an affront, and an army headed by Stefan Potocki moved in the direction of the Sich.
Had the Cossacks stayed at Khortytsia, they might have been defeated, as in many earlier rebellions. Instead, Khmelnytsky marched against the Poles. The two armies met on 16 May 1648 at Zhovti Vody, where, aided by the Tatars of Tugay Bey, the Cossacks inflicted their first crushing defeat on the Commonwealth. The success was repeated soon afterwards at the Battle of Korsuń on 26 May 1648. Khmelnytsky used his diplomatic and military skills: under his leadership the Cossack army moved to battle positions according to his plans, the Cossacks were proactive and decisive in their manoeuvres and attacks, and, most importantly, he gained the support of both large contingents of registered Cossacks and of the Crimean Khan, his crucial ally in the many battles to come.

Establishment of Cossack Hetmanate

The Patriarch of Jerusalem, Paisius, who was visiting Kyiv at this time, referred to Khmelnytsky as the prince of Rus, the head of an independent Ukrainian state, according to contemporaries. In February 1649, during negotiations in Pereiaslav with a Polish delegation headed by Senator Adam Kysil, Khmelnytsky declared that he was "the sole autocrat of Rus" and that he had "enough power in Ukraine, Podilia, and Volhynia... in his land and principality stretching as far as Lviv, Chełm, and Halych." After the period of initial military successes, the state-building process began. His leadership was demonstrated in all areas of state-building: military, administration, finance, economics, and culture. Khmelnytsky made the Zaporozhian Host the supreme power in the new Ukrainian state and unified all spheres of Ukrainian society under his authority. He built a new government system and developed military and civilian administration. A new generation of statesmen and military leaders came to the forefront: Ivan Vyhovsky, Pavlo Teteria, Danylo Nechai and Ivan Nechai, Ivan Bohun, and Hryhoriy Hulyanytsky. From the Cossack polkovnyks, officers, and military commanders a new elite within the Cossack Hetman state was born. Over the years this elite preserved and maintained the autonomy of the Cossack Hetmanate in the face of Russia's attempts to curb it. But it was also instrumental in the onset of the period of the Ruin that followed, which eventually destroyed most of the achievements of the Khmelnytsky era.

Complications

Khmelnytsky's initial successes were followed by a series of setbacks, as neither Khmelnytsky nor the Commonwealth had enough strength to stabilise the situation or inflict a decisive defeat on the other. What followed was a period of intermittent warfare and several peace treaties, which were seldom upheld. From spring 1649 onward the situation turned for the worse for the Cossacks; as Polish attacks increased in frequency, they became more successful. The resulting Treaty of Zboriv on 18 August 1649 was unfavourable for the Cossacks. It was followed by another defeat, at the Battle of Berestechko on 18 June 1651, in which the Tatars betrayed Khmelnytsky and held the hetman captive. The Cossacks suffered a crushing defeat, with an estimated 30,000 casualties, and were forced to sign the Treaty of Bila Tserkva, which favoured the Polish–Lithuanian Commonwealth. Warfare broke out again and, in the years that followed, the two sides were almost perpetually at war. Now the Crimean Tatars played a decisive role and did not allow either side to prevail.
It was in their interest to keep both Ukraine and the Polish–Lithuanian Commonwealth from growing too strong and becoming an effective power in the region. Khmelnytsky started looking for another foreign ally. Although the Cossacks had established their de facto independence from Poland, the new state needed legitimacy, which could be provided by a foreign monarch. In search of a protectorate, Khmelnytsky approached the Ottoman sultan in 1651, and formal embassies were exchanged. The Turks offered vassalage, like their other arrangements with contemporary Crimea, Moldavia, and Wallachia. However, the idea of union with a Muslim monarch was not acceptable to the general populace and most Cossacks. The other possible ally was the Orthodox tsar of Muscovy. That government remained quite cautious and stayed away from the hostilities in Ukraine. In spite of numerous envoys and calls for help from Khmelnytsky in the name of their shared Orthodox faith, the tsar preferred to wait, until the threat of a Cossack–Ottoman union in 1653 finally forced him to act. The idea that the tsar might be favourable to taking Ukraine under his hand was communicated to the hetman, and diplomatic activity intensified.

Treaty with the tsar

After a series of negotiations it was agreed that the Cossacks would accept overlordship by Tsar Alexei Mikhailovich. To finalize the treaty, a Russian embassy led by the boyar Vasily Buturlin came to Pereyaslav, where, on 18 January 1654, the Cossack Rada was called and the treaty concluded. Historians have not come to consensus in interpreting the intentions of the tsar and Khmelnytsky in signing this agreement. The treaty legitimized Russian claims to the capital of Kievan Rus' and strengthened the tsar's influence in the region. Khmelnytsky needed the treaty to gain a legitimate monarch's protection and the support of a friendly Orthodox power. Historians have differed in their reading of Khmelnytsky's goal for the union: whether it was to be a military alliance, a suzerainty, or a complete incorporation of Ukraine into the Tsardom of Russia. The differences were expressed during the ceremony of the oath of allegiance to the tsar: the Russian envoy refused to reciprocate with an oath from the ruler to his subjects, which the Cossacks and Ruthenians expected, as it was the custom of the Polish king. Khmelnytsky stormed out of the church and threatened to cancel the entire treaty. The Cossacks decided to rescind the demand and abide by the treaty.

Final years

As a result of the 1654 Treaty of Pereyaslav, the geopolitical map of the region changed. Russia entered the scene, and the Cossacks' former allies, the Tatars, switched sides, going over to the Poles and initiating warfare against Khmelnytsky and his forces. Tatar raids depopulated whole areas of Ukraine. The Cossacks, aided by the tsar's army, took revenge on Polish possessions in Belarus, and in the spring of 1654 drove the Poles from much of the country. Then Sweden, an old adversary of both Poland and Russia, entered the mêlée, occupying a share of Lithuania before the Russians could get there. The occupation displeased Russia, because the tsar sought to take over the Swedish Baltic provinces. In 1656, with the Commonwealth increasingly war-torn but also increasingly hostile and successful against the Swedes, the ruler of Transylvania, George II Rákóczi, also joined in; Charles X of Sweden had solicited his help because of the massive Polish popular opposition and resistance against the Swedes.
Under blows from all sides, the Commonwealth barely survived. Russia attacked Sweden in July 1656, while Swedish forces were deeply involved in Poland. That war ended in a status quo two years later, but it complicated matters for Khmelnytsky, as his ally was now fighting his overlord. In addition to diplomatic tensions between the tsar and Khmelnytsky, a number of other disagreements between the two surfaced, concerning in particular Russian officials' interference in the finances of the Cossack Hetmanate and in newly captured Belarus. The tsar concluded a separate treaty with the Poles at Vilnius in 1656, to whose negotiations the hetman's emissaries were not even admitted. Khmelnytsky wrote an irate letter to the tsar accusing him of breaking the Pereyaslav agreement, comparing the Swedes with the tsar and calling the former more honourable and trustworthy than the Russians. In Poland, the Cossack army and its Transylvanian allies suffered a number of setbacks, and as a result Khmelnytsky had to deal with a Cossack rebellion on the home front. Troubling news also came from Crimea, as the Tatars, in alliance with Poland, were preparing for a new invasion of Ukraine. Though already ill, Khmelnytsky continued to conduct diplomatic activity, at one point even receiving the tsar's envoys from his bed. On 22 July he suffered a cerebral hemorrhage and became paralysed after his audience with the Kyiv colonel Zhdanovich, whose expedition to Halychyna had failed because of mutiny within his army. Less than a week later, Bohdan Khmelnytsky died, at 5 a.m. on 27 July 1657. His funeral was held on 23 August, and his body was taken from his capital, Chyhyryn, to his estate at Subotiv for burial in his ancestral church. In 1664 the Polish hetman Stefan Czarniecki recaptured Subotiv and, according to some Ukrainian historians, ordered the bodies of the hetman and his son Tymish to be exhumed and desecrated, though others claim this was not the case.

Influences

Khmelnytsky had a crucial influence on the history of Ukraine. He not only shaped the future of Ukraine but affected the balance of power in Europe, as the weakening of Poland–Lithuania was exploited by Austria, Saxony, Prussia, and Russia. His actions and role in events were viewed differently by different contemporaries, and even now there are greatly differing perspectives on his legacy.

Ukrainian assessment

In Ukraine, Khmelnytsky is generally regarded as a national hero. A city and a region of the country bear his name. His image is prominently displayed on Ukrainian banknotes, and his monument in the centre of Kyiv is a focal point of the Ukrainian capital. There have also been several issues of the Order of Bohdan Khmelnytsky, one of the highest decorations in Ukraine and in the former Soviet Union. Yet for all this positive appreciation of his legacy, even in Ukraine it is far from unanimous. He is criticised for his union with Russia, which in the view of some proved disastrous for the future of the country. The prominent Ukrainian poet Taras Shevchenko was one of Khmelnytsky's most vocal and harsh critics. Others criticize him for his alliance with the Crimean Tatars, which permitted the latter to take a large number of Ukrainian peasants as slaves; the Cossacks, as a military caste, did not protect the kholopy, the lowest stratum of the Ukrainian people. Folk songs capture this.
On balance, the view of his legacy in present-day Ukraine is more positive than negative, with some critics acknowledging that the union with Russia was dictated by necessity and an attempt to survive in difficult times. In a 2018 poll by Ukraine's Rating Sociological Group, 73% of Ukrainian respondents had a positive attitude toward Khmelnytsky.

Polish assessment

Khmelnytsky's role in the history of the Polish state has been viewed mostly in a negative light. The rebellion of 1648 proved to be the end of the Golden Age of the Commonwealth and the beginning of its demise. Although the Commonwealth survived the rebellion and the war that followed, within a hundred years it was divided among Russia, Prussia, and Austria in the partitions of Poland, and many Poles blamed Khmelnytsky for the decline. Khmelnytsky was the subject of several works of fiction in 19th-century Polish literature, but the most notable treatment of him in Polish letters is found in Henryk Sienkiewicz's With Fire and Sword. Sienkiewicz's rather critical portrayal was moderated in Jerzy Hoffman's 1999 film adaptation.

Russian and Soviet history

Official Russian historiography stressed the fact that Khmelnytsky entered into union with Moscow's Tsar Alexei Mikhailovich out of an expressed desire to "re-unify" Ukraine with Russia. This view corresponded with the official theory of Moscow as the heir of Kievan Rus', which appropriately gathered in its former territories. Khmelnytsky was viewed as a national hero of Russia for bringing Ukraine into the "eternal union" of all the Russias – Great (Russia), Little (Ukraine) and White (Belarus) Russia. As such, he was much respected and venerated during the existence of the Russian Empire, and his role was presented as a model for all Ukrainians to follow: to aspire to closer ties with Great Russia. This view was expressed in a monument commissioned by the Russian nationalist Mikhail Yuzefovich and installed in the centre of Kyiv in 1888. The original version of the monument (created by the Russian sculptor Mikhail Mikeshin) was judged too xenophobic even by the Russian authorities; it was to depict a vanquished Pole, Jew, and Catholic priest under the hooves of the hetman's horse. The inscription on the monument reads "To Bohdan Khmelnytsky from one and indivisible Russia." Mikeshin also created the Monument to the Millennium of Russia in Novgorod, on which Khmelnytsky is shown as one of Russia's prominent figures. Soviet historiography followed the Imperial Russian theory of re-unification in many ways, while adding a class-struggle dimension to the story: Khmelnytsky was praised not only for re-unifying Ukraine with Russia, but also for organizing the class struggle of oppressed Ukrainian peasants against Polish exploiters.

Jewish history

The assessment of Khmelnytsky in Jewish history is overwhelmingly negative, because he blamed the Jews for assisting the Polish szlachta, who often employed them as tax collectors, and sought to eradicate Jews from Ukraine. Accordingly, under the Treaty of Zboriv all Jews were forbidden to live in the territory controlled by the Cossack rebels. The Khmelnytsky Uprising led to the deaths of at least several thousand Jews living in that territory. Given the lack of reliable data on the size of the local Jewish population and the number of victims, a more accurate estimate is a difficult if not impossible task for historians.
Atrocity stories about massacre victims who had been buried alive, cut to pieces, or forced to kill one another spread throughout Europe and beyond. The pogroms contributed to a revival of the ideas of Isaac Luria, who revered the Kabbalah, and to the identification of Sabbatai Zevi as the Messiah. Orest Subtelny writes:

Between 1648 and 1656, tens of thousands of Jews—given the lack of reliable data, it is impossible to establish more accurate figures—were killed by the rebels, and to this day the Khmelnytsky uprising is considered by Jews to be one of the most traumatic events in their history.

Commemoration

The Ukrainian city of Khmelnytskyi is named after Khmelnytsky. Most Ukrainian cities have a Bohdan Khmelnytskyi street, and the city of Dnipro has a Bohdan Khmelnytskyi Avenue. The Separate Presidential Brigade "Hetman Bohdan Khmelnytskyi", a unit of the Armed Forces of Ukraine tasked with protecting the president of Ukraine, is named in his honor.

See also

Bohdan Khmelnytsky Bridge in Moscow
List of Ukrainian rulers
Order of Bohdan Khmelnytsky, a state military award in Ukraine
Order of Bogdan Khmelnitsky (Soviet Union)
With Fire and Sword (1884), a historical novel by the Polish author Henryk Sienkiewicz about these events

Further reading

Władysław Andrzej Serczyk, Na dalekiej Ukrainie. Dzieje Kozaczyzny do 1648 roku, Kraków, 2008.
Władysław Andrzej Serczyk, Na płonącej Ukrainie. Dzieje Kozaczyzny 1648–1651, Kraków, 2009.
Valeriy Smoliy and Valeriy Stepankov, Bohdan Khmelnytsky. Sotsialno-politychnyi portret, 2nd ed., Lebid, Kyiv, 1995.
Orest Subtelny, Ukraine. A History, University of Toronto Press, 1994.
Zbigniew Wójcik, "Czy Kozacy Zaporoscy byli na służbie Mazarina?", Przegląd Historyczny, vol. 64/3 (1973).
Stephen Velychenko, The Influence of Historical, Political and Social Ideas on the Politics of Bohdan Khmelnytsky and the Cossack Officers between 1648 and 1657, Ph.D. dissertation, University of London, 1981.
Mykhailo Hrushevsky, History of Ukraine-Rus'. Volume 8, The Cossack Age, 1626 to 1650, trans. Marta Daria Olynyk, ed. Frank Sysyn with Myroslav Yurkevich, 2002.
Mykhailo Hrushevsky, History of Ukraine-Rus'. Volume 9, book 1, The Khmelnytsky Era, 1651 to 1653, trans. Bohdan Strumiński, ed. Serhii Plokhy and Frank E. Sysyn with Uliana M. Pasicznyk, 2005.
Mykhailo Hrushevsky, History of Ukraine-Rus'. Volume 9, book 2, The Khmelnytsky Era, 1654 to 1657, part 1, trans. Marta Daria Olynyk, ed. Serhii Plokhy and Frank E. Sysyn with Myroslav Yurkevich, 2008.
Mykhailo Hrushevsky, History of Ukraine-Rus'. Volume 9, book 2, The Khmelnytsky Era, 1654 to 1657, part 2, trans. Marta Daria Olynyk, ed. Yaroslav Fedoruk and Frank E. Sysyn with Myroslav Yurkevich, 2010.

External links

Oleksander Ohloblyn, "Khmelnytsky, Bohdan", article originally published in the Encyclopedia of Ukraine, vol. 2 (1989).
Cossack State after 1649 (map)
Biography of Bohdan Khmelnytsky
Mykola Mashchenko, film about Khmelnytsky (2008), Dovzhenko Film Studios
Dr. Henry Abramson, video on Nathan of Hanover and the Ukrainian Revolution of 1648–1649, 19 February 2013, Jewish Biography as History lecture series
Boeing B-17 Flying Fortress
The Boeing B-17 Flying Fortress is a four-engined heavy bomber developed in the 1930s for the United States Army Air Corps (USAAC). Relatively fast and high-flying for a bomber of its era, the B-17 was used primarily in the European Theater of Operations and dropped more bombs than any other aircraft during World War II. It is the third-most-produced bomber of all time, behind the four-engined Consolidated B-24 Liberator and the multirole, twin-engined Junkers Ju 88. It was also employed as a transport, antisubmarine aircraft, drone controller, and search-and-rescue aircraft. In a USAAC competition, Boeing's prototype Model 299/XB-17 outperformed two other entries but crashed, losing the initial 200-bomber contract to the Douglas B-18 Bolo. Still, the Air Corps ordered 13 more B-17s for further evaluation, then introduced the type into service in 1938. The B-17 evolved through numerous design advances, but from its inception the USAAC (later the USAAF) promoted the aircraft as a strategic weapon. It was a relatively fast, high-flying, long-range bomber with heavy defensive armament at the expense of bombload, and it developed a reputation for toughness based upon stories and photos of badly damaged B-17s safely returning to base. The B-17 saw early action in the Pacific War, where it conducted raids against Japanese shipping and airfields, but it was primarily employed by the USAAF in the daylight strategic bombing campaign over Europe, complementing RAF Bomber Command's night-time area bombing of German industrial, military and civilian targets. Of the bombs dropped on Nazi Germany and its occupied territories by U.S. aircraft, 42.6% were dropped from B-17s. As of November 2022, four aircraft remain airworthy, none of them flown in combat. Dozens more are in storage or on static display; the oldest of these is a D-series flown in combat in the Pacific on the first day of the United States' involvement in World War II.

Development

Origins

On 8 August 1934, the USAAC tendered a proposal for a multiengine bomber to replace the Martin B-10. The Air Corps was looking for a bomber capable of reinforcing the air forces in Hawaii, Panama, and Alaska. The requirements called for it to carry a "useful bombload" at altitude for 10 hours at a specified minimum top speed; a greater range and speed were desired but not required. The competition for the Air Corps contract was to be decided by a "fly-off" between Boeing's design, the Douglas DB-1, and the Martin Model 146 at Wilbur Wright Field in Dayton, Ohio. The prototype B-17, with the Boeing factory designation Model 299, was designed by a team of engineers led by E. Gifford Emery and Edward Curtis Wells, and was built at Boeing's own expense. It combined features of the company's experimental XB-15 bomber and 247 transport. The B-17's armament consisted of five .30 caliber (7.62 mm) machine guns, with a bomb payload carried on two racks in the bomb bay behind the cockpit. The aircraft was powered by four Pratt & Whitney R-1690 Hornet radial engines. The first flight of the Model 299 was in 1935, with Boeing chief test pilot Leslie Tower at the controls. The day before, Richard Williams, a reporter for The Seattle Times, had coined the name "Flying Fortress" when – observing the large number of machine guns sticking out from the new airplane – he described it as a "15-ton flying fortress" in a picture caption.
The most distinctive mount was in the nose, which allowed the single machine gun to be fired toward nearly all frontal angles. Boeing was quick to see the value of the name and had it trademarked for use. Boeing also claimed in some early press releases that the Model 299 was the first combat aircraft that could continue its mission if one of its four engines failed. The prototype later flew from Seattle to Wright Field in nine hours and three minutes at an average cruising speed much faster than the competition's. At the fly-off, the four-engined Boeing's performance was superior to those of the twin-engined DB-1 and Model 146. Major General Frank Maxwell Andrews of the GHQ Air Force believed that the capabilities of large four-engined aircraft exceeded those of shorter-ranged, twin-engined aircraft, and that the B-17 was better suited to new, emerging USAAC doctrine. His opinions were shared by the Air Corps procurement officers, and even before the competition had finished, they suggested buying 65 B-17s. On 30 October 1935, a test flight to determine the rate of climb and service ceiling was planned. The command pilot was Major Ployer Peter Hill, chief of the Flying Branch of the Wright Field Material Division, on his first flight in the Model 299. The copilot was Lieutenant Donald Putt, while Boeing chief test pilot Leslie R. Tower was behind the pilots in an advisory role. Also on board were Wright Field test observer John Cutting and mechanic Mark Koegler. The plane stalled and spun into the ground soon after takeoff, bursting into flames. Though he initially survived the impact, Hill died within a few hours; Tower died on 19 November. Post-accident interviews with Tower and Putt determined that the control-surface gust lock had not been released. Doyle notes, "The loss of Hill and Tower, and the Model 299, was directly responsible for the creation of the modern written checklist used by pilots to this day." The crashed Model 299 could not finish the evaluation, disqualifying it from the competition. While the Air Corps was still enthusiastic about the aircraft's potential, Army officials were daunted by its cost; Douglas quoted a unit price of $58,200 based on a production order of 220 aircraft, compared with $99,620 from Boeing. Army Chief of Staff Malin Craig cancelled the order for 65 YB-17s and ordered 133 of the twin-engined Douglas B-18 Bolo instead.

Initial orders

Regardless, the USAAC had been impressed by the prototype's performance, and in 1936, through a legal loophole, the Air Corps ordered 13 YB-17s (designated Y1B-17 after November 1936 to denote their special F-1 funding) for service testing. The YB-17 incorporated a number of significant changes from the Model 299, including more powerful Wright R-1820-39 Cyclone engines. Although the prototype was company-owned and never received a military serial (the B-17 designation itself did not appear officially until January 1936, nearly three months after the prototype crashed), the term "XB-17" was retroactively applied to the NX13372 airframe and has entered the lexicon to describe the first Flying Fortress. Between 1 March and 4 August 1937, 12 of the 13 Y1B-17s were delivered to the 2nd Bombardment Group at Langley Field in Virginia for operational development and flight tests. One suggestion adopted was the use of a preflight checklist to avoid accidents such as that which befell the Model 299.
In one of their first missions, three B-17s, directed by lead navigator Lieutenant Curtis LeMay, were sent by General Andrews to "intercept" and photograph the Italian ocean liner Rex off the Atlantic coast. The mission was successful and widely publicized. The 13th Y1B-17 was delivered to the Material Division at Wright Field, Ohio, to be used for flight testing. A 14th Y1B-17 (37-369), originally constructed for ground testing of the airframe's strength, was upgraded by Boeing with exhaust-driven General Electric turbo-superchargers and designated Y1B-17A. Designed by Dr. Sanford Moss, the installation used the engine exhaust gases to turn the turbine's steel-alloy blades, forcing high-pressure ram air into the Wright Cyclone GR-1820-39 engine supercharger. Scheduled to fly in 1937, the Y1B-17A encountered problems with the turbochargers, and its first flight was delayed until 1938. The aircraft was delivered to the Army in 1939. Once service testing was complete, the Y1B-17s and Y1B-17A were redesignated B-17 and B-17A, respectively, to signify the change to operational status. With the turbo-superchargers, the Y1B-17A had a higher maximum speed at its best operational altitude and a substantially higher service ceiling than the Y1B-17, and the turbo-superchargers were incorporated into the B-17B. Opposition to the Air Corps' ambitions for the acquisition of more B-17s faded, and in late 1937 ten more aircraft, designated B-17B, were ordered to equip two bombardment groups, one on each U.S. coast. Improved with larger flaps and rudder and a well-framed, 10-panel plexiglas nose, the B-17Bs were delivered in five small batches between July 1939 and March 1940. In July 1940 an order for 512 B-17s was issued, but at the time of the attack on Pearl Harbor fewer than 200 were in service with the Army. A total of 155 B-17s of all variants were delivered between 1937 and 1941, but production quickly accelerated, and the B-17 once held the record for the highest production rate for any large aircraft. The aircraft went on to serve in every World War II combat zone, and by the time production ended in May 1945, 12,731 aircraft had been built by Boeing, Douglas, and Vega (a subsidiary of Lockheed).

Design and variants

The aircraft went through several alterations in each of its design stages and variants. Of the 13 YB-17s ordered for service testing, 12 were used by the 2nd Bomb Group of Langley Field, Virginia, to develop heavy bombing techniques, and the 13th was used for flight testing at the Material Division at Wright Field, Ohio. Experiments on this aircraft led to the use of a quartet of General Electric turbo-superchargers, which later became standard on the B-17 line. A 14th aircraft, the YB-17A, originally destined for ground testing only and upgraded with the turbochargers, was redesignated B-17A after testing had finished. As the production line developed, Boeing engineers continued to improve upon the basic design. To enhance performance at slower speeds, the B-17B was altered to include larger rudders and flaps. The B-17C changed from three bulged, oval-shaped gun blisters to two flush, oval-shaped gun window openings, and added on the lower fuselage a single "bathtub" gun gondola housing, which resembled the similarly configured and located Bodenlafette "Bola" ventral defensive emplacement on the German Heinkel He 111P-series medium bomber. While models A through D of the B-17 were designed defensively, the large-tailed B-17E was the first model primarily focused on offensive warfare.
The B-17E was an extensive revision of the Model 299 design. The fuselage was lengthened; a much larger rear fuselage, vertical tailfin, rudder, and horizontal stabilizer were added; a gunner's position was added in the new tail; a Sperry electrically powered manned dorsal gun turret was added just behind the cockpit; and a similarly powered (also Sperry-built) manned ventral ball turret just aft of the bomb bay replaced the relatively hard-to-use Sperry model 645705-D remotely operated ventral turret fitted to the earliest examples of the E variant. The nose, especially the bombardier's framed, 10-panel nose glazing, remained much the same as on the earlier B through D versions. These modifications resulted in a 20% increase in aircraft weight. The B-17's turbocharged Wright R-1820 Cyclone 9 engines were upgraded to increasingly powerful versions of the same powerplant throughout production, and the number of machine gun emplacements was likewise increased. The B-17F variants were the primary versions flying for the Eighth Air Force against the Germans in 1943; they standardized the manned Sperry ball turret for ventral defense and replaced the earlier 10-panel framed bombardier's nose glazing with an enlarged, nearly frameless Plexiglas nose enclosure for improved forward vision. Two experimental versions of the B-17 were flown under different designations: the XB-38 Flying Fortress and the YB-40 Flying Fortress. The XB-38 was an engine testbed for Allison V-1710 liquid-cooled engines, in case the Wright engines normally used on the B-17 became unavailable. The only XB-38 prototype to fly crashed on its ninth flight, and the type was abandoned, the Allison V-1710 being allocated to fighter aircraft. The YB-40 was a heavily armed modification of the standard B-17, used before the North American P-51 Mustang, an effective long-range fighter, became available to act as escort. Additional armament included an extra dorsal turret in the radio room, a remotely operated and fired Bendix-built "chin turret" directly below the bombardier's station, and twin guns in each of the waist positions. The ammunition load was over 11,000 rounds. These modifications made the YB-40 far heavier than a fully loaded B-17F, and the YB-40s had trouble keeping up with the lighter bombers once the latter had dropped their bombs, so the project was abandoned and finally phased out in July 1943. The final production blocks of the B-17F from Douglas's plants did, however, adopt the YB-40's chin turret, giving them a much improved forward defense capability. By the time the definitive B-17G appeared, the number of guns had been increased from seven to 13, the designs of the gun stations had been finalized, and other adjustments had been completed. The B-17G was the final version of the Flying Fortress, incorporating all changes made to its predecessor, the B-17F; 8,680 were built in total, the last (by Lockheed) in 1945. Many B-17Gs were converted for other missions such as cargo hauling, engine testing, and reconnaissance. A number of B-17Gs were also converted for search-and-rescue duties, at first designated B-17H and later redesignated SB-17G. Late in World War II, at least 25 B-17s were fitted with radio controls and television cameras, loaded with high explosives, and dubbed BQ-7 "Aphrodite missiles" for Operation Aphrodite.
The operation, which involved remotely flying Aphrodite drones onto their targets from accompanying CQ-17 "mothership" control aircraft, was approved in 1944 and assigned to the 388th Bombardment Group stationed at RAF Fersfield, a satellite of RAF Knettishall. The first four drones were sent against Mimoyecques, the Siracourt V-1 bunker, Watten, and Wizernes on 4 August, causing little damage. The project came to a sudden end with the unexplained midair explosion over the Blyth estuary of a B-24 flown as part of the United States Navy's contribution, "Project Anvil", en route for Heligoland and piloted by Lieutenant Joseph P. Kennedy Jr., elder brother of future U.S. president John F. Kennedy. Blast damage was caused over a wide radius. British authorities were anxious that no similar accident should occur again, and the Aphrodite project was scrapped in early 1945.

Operational history

The B-17 began operations in World War II with the Royal Air Force (RAF) in 1941, and in the Southwest Pacific with the U.S. Army. The 19th Bombardment Group had deployed to Clark Field in the Philippines a few weeks before the Japanese attack on Pearl Harbor as the first of a planned heavy bomber buildup in the Pacific. Half of the group's B-17s were wiped out on 8 December 1941 when they were caught on the ground while refueling and rearming for a planned attack on Japanese airfields on Formosa. The small force of B-17s operated against the Japanese invasion force until it was withdrawn to Darwin, in Australia's Northern Territory. In early 1942 the 7th Bombardment Group began arriving in Java with a mixed force of B-17s and LB-30/B-24s. A squadron of B-17s from this force was detached to the Middle East to join the First Provisional Bombardment Group, becoming the first American B-17 squadron to go to war against the Germans. After the defeat in Java, the 19th withdrew to Australia, where it continued in combat until it was sent home by General George C. Kenney when he arrived in Australia in mid-1942. In July 1942 the first USAAF B-17s were sent to England to join the Eighth Air Force. Later that year, two groups moved to Algeria to join the Twelfth Air Force for operations in North Africa. The B-17s were primarily involved in the daylight precision strategic bombing campaign against German targets ranging from U-boat pens, docks, warehouses, and airfields to industrial targets such as aircraft factories. In the campaign against German aircraft forces in preparation for the invasion of France, B-17 and B-24 raids were directed against German aircraft production, while their presence drew the Luftwaffe fighters into battle with Allied fighters. During World War II the B-17 equipped 32 overseas combat groups, with inventory peaking in August 1944 at 4,574 USAAF aircraft worldwide. The British heavy bombers, the Avro Lancaster and Handley Page Halifax, dropped 608,612 long tons (681,645 short tons) and 224,207 long tons (251,112 short tons) of bombs respectively.

RAF use

The RAF entered World War II with no heavy bomber of its own in service; the biggest available were long-range medium bombers such as the Vickers Wellington. While the Short Stirling and Handley Page Halifax became its primary bombers by 1941, in early 1940 the RAF entered into an agreement with the U.S. Army Air Corps to acquire 20 B-17Cs, which were given the service name Fortress I. Their first operation, against Wilhelmshaven in 1941, was unsuccessful.
In a subsequent operation, three B-17s of 90 Squadron took part in a raid on the German capital ships Gneisenau and Prinz Eugen, anchored in Brest, from 30,000 ft (9,100 m). The objective was to draw German fighters away from 18 Handley Page Hampdens attacking at lower altitudes, timed so that 79 Vickers Wellingtons could attack later while the German fighters were refuelling. The operation did not work as expected: 90 Squadron's Fortresses went unopposed. By September, the RAF had lost eight B-17Cs in combat and had experienced numerous mechanical problems, and Bomber Command abandoned daylight bombing raids using the Fortress I because of the aircraft's poor performance. The experience showed both the RAF and USAAF that the B-17C was not ready for combat, and that improved defenses, larger bomb loads and more accurate bombing methods were required. The USAAF nevertheless continued using the B-17 as a day bomber, despite misgivings by the RAF that attempts at daylight bombing would be ineffective. As use by Bomber Command had been curtailed, the RAF transferred its remaining Fortress I aircraft to Coastal Command for use as long-range maritime patrol aircraft. These were augmented starting in July 1942 by 45 Fortress Mk IIA (B-17E), followed by 19 Fortress Mk II (B-17F) and three Fortress Mk III (B-17G). A Fortress IIA from No. 206 Squadron RAF sank U-627 in 1942, the first of 11 U-boat kills credited to RAF Fortress bombers during the war. As sufficient Consolidated Liberators finally became available, Coastal Command withdrew the Fortress from the Azores, transferring the type to the meteorological reconnaissance role. Three squadrons flew met profiles from airfields in Iceland, Scotland and England, gathering data for vital weather forecasting purposes. The RAF's No. 223 Squadron, as part of 100 Group, operated a number of Fortresses equipped with an electronic warfare system known as "Airborne Cigar" (ABC). This was operated by German-speaking radio operators who were to identify and jam German ground controllers' broadcasts to their night fighters. They could also pose as ground controllers themselves with the intention of steering the night fighters away from the bomber streams.

Initial USAAF operations over Europe

The Air Corps – renamed the United States Army Air Forces (USAAF) on 20 June 1941 – used the B-17 and other bombers to bomb from high altitudes with the aid of the then-secret Norden bombsight, known as the "Blue Ox", an optical electromechanical gyrostabilized analog computer. The device determined, from variables entered by the bombardier, the point at which the aircraft's bombs should be released to hit the target. The bombardier essentially took over flight control of the aircraft during the bomb run, maintaining level flight during the final moments before release. The USAAF began building up its air forces in Europe using B-17Es soon after entering the war. The first Eighth Air Force units arrived at High Wycombe, England, in 1942 to form the 97th Bomb Group. Later that year, on the first USAAF heavy bomber raid over Europe, 12 B-17Es of the 97th, with the lead aircraft piloted by Major Paul Tibbets and carrying Brigadier General Ira Eaker as an observer, were close-escorted by four squadrons of RAF Spitfire IXs (with a further five squadrons of Spitfire Vs to cover the withdrawal) in an attack on the large railroad marshalling yards at Rouen-Sotteville in France, while a further six aircraft flew a diversionary raid along the French coast.
The operation, carried out in good visibility, was a success, with only minor damage to one aircraft, unrelated to enemy action, and half the bombs landing in the target area. The raid helped allay British doubts about the capabilities of American heavy bombers in operations over Europe. Two additional groups arrived in Britain at the same time, bringing with them the first B-17Fs, which served as the primary AAF heavy bomber fighting the Germans until September 1943. As the raids of the American bombing campaign grew in number and frequency, German interception efforts grew in strength (as during the attempted bombing of Kiel on 13 June 1943), to the point that unescorted bombing missions came to be discouraged.

Combined offensive

The two different strategies of the American and British bomber commands were organized at the Casablanca Conference in January 1943. The resulting "Combined Bomber Offensive" weakened the Wehrmacht, destroyed German morale, and established air superiority through Operation Pointblank's destruction of German fighter strength in preparation for a ground offensive. The USAAF bombers attacked by day, with British operations – chiefly against industrial cities – by night. Operation Pointblank opened with attacks on targets in Western Europe. General Ira C. Eaker and the Eighth Air Force placed the highest priority on attacks on the German aircraft industry, especially fighter assembly plants, engine factories, and ball-bearing manufacturers. Attacks began in April 1943 on heavily fortified key industrial plants in Bremen and Recklinghausen. Since the airfield bombings were not appreciably reducing German fighter strength, additional B-17 groups were formed, and Eaker ordered major missions deeper into Germany against important industrial targets. The Eighth Air Force then targeted the ball-bearing factories in Schweinfurt, hoping to cripple the war effort there. The first raid, in 1943, did not result in critical damage to the factories; the 230 attacking B-17s were intercepted by an estimated 300 Luftwaffe fighters, which shot down 36 aircraft with the loss of 200 men. Coupled with a raid earlier in the day against Regensburg, a total of 60 B-17s were lost that day. A second attempt on Schweinfurt, on 14 October 1943, later came to be known as "Black Thursday". While the attack succeeded in disrupting the entire works, severely curtailing production there for the remainder of the war, it came at an extreme cost. Of the 291 attacking Fortresses, 60 were shot down over Germany, five crashed on approach to Britain, and 12 more were scrapped due to damage – a loss of 77 B-17s. A further 122 bombers were damaged and needed repairs before their next flights. Of the 2,900 men in the crews, about 650 did not return, although some survived as prisoners of war. Only 33 bombers landed without damage. These losses were the result of concentrated attacks by over 300 German fighters. Such high losses of aircrews could not be sustained, and the USAAF, recognizing the vulnerability of heavy bombers to interceptors when operating alone, suspended daylight bomber raids deep into Germany until an escort fighter became available that could protect the bombers all the way from the United Kingdom to Germany and back. At the same time, German night-fighting ability noticeably improved to counter the night-time strikes, challenging the conventional faith in the cover of darkness.
The Eighth Air Force alone lost 176 bombers in October 1943, and was to suffer similar casualties in 1944 on missions to Oschersleben, Halberstadt, and Brunswick. Lieutenant General James Doolittle, commander of the Eighth, had ordered the second Schweinfurt mission cancelled as the weather deteriorated, but the lead units had already entered hostile airspace and continued with the mission. Most of the escorts turned back or missed the rendezvous, and as a result 60 B-17s were destroyed. A third raid on Schweinfurt in 1944 highlighted what came to be known as "Big Week", during which the bombing missions were directed against German aircraft production. German fighters had to respond, and the North American P-51 Mustang and Republic P-47 Thunderbolt fighters (equipped with improved drop tanks to extend their range), which accompanied the American heavies all the way to and from the targets, engaged them. The escort fighters reduced the loss rate to below 7%, with a total of 247 B-17s lost in 3,500 sorties during the Big Week raids. By September 1944, 27 of the 42 bomb groups of the Eighth Air Force and six of the 21 groups of the Fifteenth Air Force used B-17s. Losses to flak continued to take a high toll of heavy bombers through 1944, but the war in Europe was being won by the Allies, and by 1945, two days after the last heavy bombing mission in Europe, the rate of aircraft loss was so low that replacement aircraft were no longer arriving and the number of bombers per bomb group was reduced. The Combined Bomber Offensive was effectively complete.

Pacific Theater

On 7 December 1941, a group of 12 B-17s of the 38th (four B-17C) and 88th (eight B-17E) Reconnaissance Squadrons, en route to reinforce the Philippines, was flown into Pearl Harbor from Hamilton Field, California, arriving while the surprise attack on Pearl Harbor was under way. Leonard "Smitty" Smith Humiston, co-pilot on First Lieutenant Robert H. Richards' B-17C, AAF S/N 40-2049, reported that he thought the U.S. Navy was giving the flight a 21-gun salute to celebrate the arrival of the bombers, after which he realized that Pearl Harbor was under attack. The Fortress came under fire from Japanese fighter aircraft, though the crew was unharmed except for one member who suffered an abrasion on his hand. Japanese activity forced them to divert from Hickam Field to Bellows Field, and on landing the aircraft overran the runway and ran into a ditch, where it was then strafed. Although initially deemed repairable, 40-2049 (11th BG / 38th RS) had received more than 200 bullet holes and never flew again. Ten of the 12 Fortresses survived the attack. By then, the Far East Air Force (FEAF) based at Clark Field in the Philippines had 35 B-17s, with the War Department eventually planning to raise that number to 165. When the FEAF received word of the attack on Pearl Harbor, General Lewis H. Brereton sent his bombers and fighters on various patrol missions to prevent them from being caught on the ground. Brereton planned B-17 raids on Japanese airfields in Formosa, in accordance with Rainbow 5 war plan directives, but this was overruled by General Douglas MacArthur. A series of disputed discussions and decisions, followed by several confusing and false reports of air attacks, delayed the authorization of the sortie. By the time the B-17s and their escorting Curtiss P-40 Warhawk fighters were about to get airborne, they were destroyed by Japanese bombers of the 11th Air Fleet.
The FEAF lost half its aircraft during the first strike and was all but destroyed over the next few days. In another early Pacific engagement, in December 1941, Colin Kelly reportedly crashed his B-17 into the Japanese battleship Haruna; the attack was later acknowledged as a near bomb miss on the heavy cruiser Ashigara. Nonetheless, the deed made him a celebrated war hero. Kelly's B-17C, AAF S/N 40-2045 (19th BG / 30th BS), crashed a short distance from Clark Field after he held the burning Fortress steady long enough for the surviving crew to bail out. Kelly was posthumously awarded the Distinguished Service Cross. The noted Japanese ace Saburō Sakai is credited with this kill, and in the process came to respect the ability of the Fortress to absorb punishment. B-17s were used in early battles of the Pacific with little success, notably the Battle of the Coral Sea and the Battle of Midway. The Fifth Air Force B-17s were tasked with disrupting the Japanese sea lanes. Air Corps doctrine dictated bombing runs from high altitude, but crews soon found that only 1% of their bombs hit their targets. However, B-17s were operating at heights too great for most A6M Zero fighters to reach. The B-17's greatest success in the Pacific was in the Battle of the Bismarck Sea, in which aircraft of this type were responsible for damaging and sinking several Japanese transport ships. On 2 March 1943, six B-17s of the 64th Squadron attacked a major Japanese troop convoy off New Guinea, using skip bombing to sink a transport carrying 1,200 army troops and to damage two other transports, Teiyo Maru and Nojima. On 3 March 1943, 13 B-17s bombed the convoy, forcing it to disperse and reducing the concentration of its anti-aircraft defenses. The B-17s attracted a number of Mitsubishi A6M Zero fighters, which were in turn attacked by the P-38 Lightning escorts. One B-17 broke up in the air, and its crew was forced to take to their parachutes. Japanese fighter pilots machine-gunned some of the B-17 crew members as they descended and attacked others in the water after they landed. Five of the Japanese fighters strafing the B-17 aircrew were promptly engaged and shot down by three Lightnings, though these were also then lost. The Allied fighter pilots claimed 15 Zeros destroyed, while the B-17 crews claimed five more; actual Japanese fighter losses for the day were seven destroyed and three damaged. The remaining seven transports and three of the eight destroyers were then sunk by a combination of low-level strafing runs by Royal Australian Air Force Beaufighters and skip bombing by USAAF North American B-25 Mitchells, while B-17s claimed five hits from higher altitudes. On the morning of 4 March 1943, a B-17 sank the destroyer Asashio with a bomb while she was picking up survivors from Arashio. At their peak, in September 1942, 168 B-17 bombers were in the Pacific theater, but already in mid-1942 General Arnold had decided that the B-17 was unsuitable for the kind of operations required in the Pacific and made plans to replace all of the B-17s in the theater with B-24s (and later B-29s) as soon as they became available. Although the conversion was not complete until mid-1943, B-17 combat operations in the Pacific theater came to an end after a little over a year. Surviving aircraft were reassigned to the 54th Troop Carrier Wing's special airdrop section and were used to drop supplies to ground forces operating in close contact with the enemy.
Special airdrop B-17s supported Australian commandos operating near the Japanese stronghold at Rabaul, which had been the primary B-17 target in 1942 and early 1943. B-17s were still used in the Pacific later in the war, however, mainly in the combat search-and-rescue role. A number of B-17Gs, redesignated B-17H and later SB-17G, were used in the Pacific during the final year of the war to carry and drop lifeboats to stranded bomber crews who had been shot down or crashed at sea. These aircraft were nicknamed Dumbos and remained in service for many years after the end of World War II.

Bomber defense

Before the advent of long-range fighter escorts, B-17s had only their .50 caliber M2 Browning machine guns to rely on for defense during the bombing runs over Europe. As the war intensified, Boeing used feedback from aircrews to improve each new variant with increased armament and armor. Defensive armament increased from four machine guns and one nose machine gun in the B-17C to thirteen machine guns in the B-17G. But because the bombers could not maneuver when attacked by fighters and had to be flown straight and level during the final bomb run, individual aircraft struggled to fend off a direct attack. A 1943 survey by the USAAF found that over half the bombers shot down by the Germans had left the protection of the main formation. To address this problem, the United States developed the bomb-group formation, which evolved into the staggered combat box formation, in which all the B-17s could safely cover the others in their formation with their machine guns. This made a formation of bombers a dangerous target for enemy fighters to engage. To form these formations more quickly, assembly ships – planes with distinctive paint schemes – were used to guide the bombers into formation. Luftwaffe fighter pilots likened attacking a B-17 combat box formation to encountering a fliegendes Stachelschwein, a "flying porcupine", with dozens of machine guns in a combat box aimed at them from almost every direction. However, the use of this rigid formation meant that individual aircraft could not engage in evasive maneuvers: they had to fly constantly in a straight line, which made them vulnerable to German flak. Moreover, German fighter aircraft later developed the tactic of high-speed strafing passes, rather than engaging with individual aircraft, to inflict damage with minimum risk. As a result, the B-17s' loss rate was up to 25% on some early missions. It was not until the advent of long-range fighter escorts (particularly the North American P-51 Mustang) and the resulting degradation of the Luftwaffe as an effective interceptor force between February and June 1944 that the B-17 became strategically potent. The B-17 was noted for its ability to absorb battle damage and still reach its target and bring its crew home safely. Wally Hoffman, a B-17 pilot with the Eighth Air Force during World War II, said, "The plane can be cut and slashed almost to pieces by enemy fire and bring its crew home." Martin Caidin reported one instance in which a B-17 suffered a midair collision with a Focke-Wulf Fw 190, losing an engine and suffering serious damage to both the starboard horizontal stabilizer and the vertical stabilizer, and being knocked out of formation by the impact. The B-17 was reported as shot down by observers, but it survived and brought its crew home without injury.
Its toughness was compensation for its shorter range and lighter bomb load compared with the B-24 and the British Avro Lancaster heavy bombers. Stories circulated of B-17s returning to base with tails shredded, engines destroyed and large portions of their wings torn away by flak. This durability, together with the type's large operational numbers in the Eighth Air Force and the fame achieved by the Memphis Belle, made the B-17 a key bomber aircraft of the war. Other factors, such as combat effectiveness and political issues, also contributed to the B-17's success.

Luftwaffe attacks

After examining wrecked B-17s and B-24s, Luftwaffe officers discovered that on average it took about 20 hits with 20 mm shells fired from the rear to bring them down. Pilots of average ability hit the bombers with only about two percent of the rounds they fired, so to obtain 20 hits, the average pilot had to fire one thousand rounds at a bomber. Early versions of the Fw 190, one of the best German interceptor fighters, were equipped with two MG FF cannon, which carried only 500 rounds when belt-fed (normally using 60-round drum magazines in earlier installations), and later with the better Mauser MG 151/20 cannon, which had a longer effective range than the MG FF. Later versions carried four or even six MG 151/20 cannon and twin 13 mm machine guns. The German fighters found that when attacking from the front, where fewer defensive guns were mounted (and where the pilot was exposed rather than protected by armor, as he was from the rear), it took only four or five hits to bring a bomber down. To rectify the Fw 190's shortcomings, the number of cannon fitted was doubled to four, with a corresponding increase in the amount of ammunition carried, creating the Sturmbock bomber-destroyer version. This type replaced the vulnerable twin-engined Zerstörer heavy fighters, which could not survive interception by the P-51 Mustangs that, starting very early in 1944, flew well ahead of the combat boxes in an air-supremacy role to clear the skies of Luftwaffe defensive fighters. By 1944, a further upgrade to Rheinmetall-Borsig MK 108 cannon, mounted either in the wing or in underwing conformal gun pods, was made for the Sturmbock Focke-Wulfs as the /R2 or /R8 field modification kits, enabling an aircraft to bring a bomber down with just a few hits. The adoption of the 21 cm Nebelwerfer-derived Werfer-Granate 21 (Wfr. Gr. 21) rocket mortar by the Luftwaffe in mid-August 1943 promised the introduction of a major "stand-off" style of offensive weapon: one strut-mounted tubular launcher was fixed under each wing panel on the Luftwaffe's single-engined fighters, and two under each wing panel of a few twin-engined Bf 110 daylight Zerstörer aircraft. However, because of the rocket's slow 715 mph velocity and characteristic ballistic drop (despite the launcher usually being mounted at about 15° upward orientation), and the small number of fighters fitted with the weapon, the Wfr. Gr. 21 never had a major effect on the combat box formations of Fortresses. The Luftwaffe also fitted heavy-calibre Bordkanone-series 37 mm, 50 mm and even larger cannon as anti-bomber weapons on twin-engined aircraft such as the special Ju 88P fighters, as well as one model of the Me 410 Hornisse, but these measures did not have much effect on the American strategic bomber offensive. The Me 262, however, had moderate success against the B-17 late in the war.
With its usual nose-mounted armament of four MK 108 cannon, and with some examples later equipped with R4M rockets launched from underwing racks, it could fire from outside the range of the bombers' defensive guns and bring an aircraft down with a single hit, as both the MK 108's shells and the R4M's warheads were filled with the "shattering" force of the strongly brisant Hexogen military explosive.

Luftwaffe-captured B-17s

During World War II, approximately 40 B-17s were captured and refurbished by Germany after crash-landing or being forced down, with about a dozen put back into the air. Given German Balkenkreuz national markings on their wings and fuselage sides, and Hakenkreuz swastika tail-fin flashes, the captured B-17s were used to determine the B-17's vulnerabilities and to train German interceptor pilots in attack tactics. Others, under the cover designations Dornier Do 200 and Do 288, were used as long-range transports by the Kampfgeschwader 200 special-duties unit, carrying out agent drops and supplying secret airstrips in the Middle East and North Africa. They were chosen specifically for these missions as being more suitable for the role than other available German aircraft; they never attempted to deceive the Allies and always wore full Luftwaffe markings. One B-17 of KG 200, bearing the Luftwaffe's KG 200 Geschwaderkennung (combat wing code) markings A3+FB, was interned by Spain when it landed at Valencia airfield in 1944, remaining there for the rest of the war. It has been alleged that some B-17s kept their Allied markings and were used by the Luftwaffe in attempts to infiltrate B-17 bombing formations and report on their positions and altitudes. According to these allegations, the practice was initially successful, but Army Air Force combat aircrews quickly developed and established standard procedures to first warn off, and then fire upon, any "stranger" trying to join a group's formation.

Soviet-interned B-17s

The U.S. did not offer B-17s to the Soviet Union as part of its war materiel assistance program, but at least 73 were acquired by the Soviet Air Force. These aircraft had landed with mechanical trouble during the shuttle bombing raids over Germany or had been damaged by a Luftwaffe raid on Poltava. The Soviets restored 23 to flying condition and concentrated them in the 890th Bomber Regiment of the 45th Bomber Aviation Division, but they never saw combat. In 1946 (or 1947, according to Holm) the regiment was assigned to the Kazan factory (moving from Baranovichi) to aid in the Soviet effort to reproduce the more advanced Boeing B-29 as the Tupolev Tu-4.

Swiss-interned B-17s

During the Allied bomber offensive, U.S. and British bombers sometimes flew into Swiss airspace, either because they were damaged or, on rare occasions, when they accidentally bombed Swiss cities. Swiss aircraft attempted to intercept individual aircraft and force them to land, interning their crews; one Swiss pilot was killed, shot down by a U.S. bomber crew, in September 1944. From then on, red and white neutrality bands were added to the wings of Swiss aircraft to stop accidental attacks by Allied planes. Official Swiss records identify 6,501 airspace violations during the course of the war, with 198 foreign aircraft landing on Swiss territory and 56 aircraft crashing there. In October 1943 the Swiss interned Boeing B-17F-25-VE, tail number 25841, and its U.S. flight crew after the Flying Fortress developed engine trouble following a raid over Germany and was forced to land.
The aircraft was turned over to the Swiss Air Force, which flew the bomber until the end of the war, using other interned but non-airworthy B-17s for spare parts. The bomber's topside surfaces were repainted dark olive drab, while the light gray finish of the underside of the wings and lower fuselage was retained. It carried the Swiss national white-cross insignia in red squares on both sides of its rudder and fuselage, and on the upper and lower wing surfaces. The B-17F also carried light gray flash letters "RD" and "I" on either side of the fuselage's Swiss national insignia.

Japanese-captured B-17s

Three damaged B-17s, one "D" and two "E" series, were rebuilt to flying status during 1942 by Japanese technicians and mechanics, using parts salvaged from abandoned B-17 wrecks in the Philippines and on Java in the East Indies. The three bombers, which still contained their top-secret Norden bombsights, were ferried to Japan, where they underwent extensive technical evaluation by the Giken, the Imperial Japanese Army Air Force's Air Technical Research Institute (Koku Gijutsu Kenkyujo), at Tachikawa's airfield. The "D" model, later deemed an obsolescent design, was used in Japanese training and propaganda films. The two "E"s were used to develop B-17 air combat counter-tactics and also appeared as enemy aircraft in pilot and crew training films. One of the two "E" Flying Fortresses was photographed late in the war by U.S. aerial reconnaissance. It was code-named "Tachikawa 105" after the mystery aircraft's measured wingspan (104 ft) but was never identified at the time; photo-reconnaissance analysts did not make the connection to a captured B-17 until after the war. No traces of the three captured Flying Fortresses were ever found in Japan by Allied occupation forces. The bombers were assumed either lost by various means or scrapped late in the war for their vital war materials.

Postwar history

U.S. Air Force

Following the end of World War II, the B-17 was quickly phased out of use as a bomber and the Army Air Forces retired most of its fleet. Flight crews ferried the bombers back across the Atlantic to the United States, where the majority were sold for scrap and melted down, although significant numbers remained in use in second-line roles such as VIP transport, air-sea rescue and photo-reconnaissance. Strategic Air Command (SAC), established in 1946, used reconnaissance B-17s (at first called F-9 [F for Fotorecon], later RB-17) until 1949. The USAF Air Rescue Service of the Military Air Transport Service (MATS) operated B-17s as so-called "Dumbo" air-sea rescue aircraft. Work on using B-17s to carry airborne lifeboats had begun in 1943, but they entered service in the European theater only in February 1945. They were also used to provide search-and-rescue support for B-29 raids against Japan. About 130 B-17s were converted to the air-sea rescue role, at first designated B-17H and later SB-17G. Some SB-17s had their defensive guns removed, while others retained their guns to allow use close to combat areas. The SB-17 served through the Korean War, remaining in service with the USAF until the mid-1950s. In 1946, surplus B-17s were chosen as drone aircraft for atmospheric sampling during the Operation Crossroads atomic bomb tests, being able to fly close to or even through the mushroom clouds without endangering a crew. This led to the more widespread conversion of B-17s as drones and drone control aircraft, both for further use in atomic testing and as targets for testing surface-to-air and air-to-air missiles.
The last operational mission flown by a USAF Fortress was conducted in 1959, when a DB-17P, serial 44-83684, directed a QB-17G out of Holloman Air Force Base, New Mexico, as a target for an AIM-4 Falcon air-to-air missile fired from a McDonnell F-101 Voodoo. A ceremony was held several days later at Holloman AFB, after which 44-83684 was retired. It was subsequently used in various films and in the 1960s television show 12 O'Clock High before being retired to the Planes of Fame aviation museum in Chino, California. Perhaps the most famous B-17, the Memphis Belle, has been restored to her World War II wartime appearance by the National Museum of the United States Air Force at Wright-Patterson Air Force Base, Ohio (a similar restoration of the B-17D The Swoose is under way).

U.S. Navy and Coast Guard

During the last year of World War II and shortly thereafter, the United States Navy (USN) acquired 48 ex-USAAF B-17s for patrol and air-sea rescue work. The first two ex-USAAF B-17s, a B-17F (later modified to B-17G standard) and a B-17G, were obtained by the Navy for various development programs. At first, these aircraft operated under their original USAAF designations, but on 31 July 1945 they were assigned the naval aircraft designation PB-1, a designation which had originally been used in 1925 for the Boeing Model 50 experimental flying boat.

Thirty-two B-17Gs were used by the Navy under the designation PB-1W, the suffix -W indicating an airborne early warning role. A large radome for an S-band AN/APS-20 search radar was fitted underneath the fuselage, and additional internal fuel tanks were added for longer range, with provision for additional underwing fuel tanks. Originally, the B-17 was also chosen because of its heavy defensive armament, but this was later removed. These aircraft were painted dark blue, the standard Navy paint scheme adopted in late 1944. PB-1Ws continued in USN service until 1955, gradually being phased out in favor of the Lockheed WV-2 (known in the USAF as the EC-121, a designation adopted by the USN in 1962), a military version of the Lockheed 1049 Constellation commercial airliner.

In July 1945, 16 B-17s were transferred to the Coast Guard via the Navy; these aircraft were initially assigned U.S. Navy Bureau Numbers (BuNo), but were delivered to the Coast Guard designated as PB-1Gs beginning in July 1946. Coast Guard PB-1Gs were stationed at a number of bases in the U.S. and Newfoundland, with five at Coast Guard Air Station Elizabeth City, North Carolina, two at CGAS San Francisco, two at NAS Argentia, Newfoundland, one at CGAS Kodiak, Alaska, and one in Washington state. They were used primarily in the "Dumbo" air-sea rescue role, but were also used for iceberg patrol duties and for photo mapping. The Coast Guard PB-1Gs served throughout the 1950s, the last example not being withdrawn from service until 14 October 1959.

Special operations

B-17s were used by the CIA front companies Civil Air Transport, Air America and Intermountain Aviation for special missions. These included B-17G 44-85531, registered as N809Z. These aircraft were primarily used for agent-drop missions over the People's Republic of China, flying from Taiwan with Taiwanese crews. Four B-17s were shot down in these operations. By 1957, the surviving B-17s had been stripped of all weapons and painted black. One of these Taiwan-based B-17s was flown to Clark Air Base in the Philippines in mid-September of that year, assigned for covert missions into Tibet.
On 28 May 1962, N809Z, piloted by Connie Seigrist and Douglas Price, flew Major James Smith, USAF, and Lieutenant Leonard A. LeSchack, USNR, to the abandoned Soviet arctic ice station NP 8 as part of Operation Coldfeet. Smith and LeSchack parachuted from the B-17 and searched the station for several days. On 1 June, Seigrist and Price returned and picked up Smith and LeSchack using a Fulton Skyhook system installed on the B-17. N809Z was also used to perform a Skyhook pickup in the 1965 James Bond film Thunderball. This aircraft, now restored to its original B-17G configuration, was on display at the Evergreen Aviation & Space Museum in McMinnville, Oregon, until it was sold to the Collings Foundation in 2015.

Operators

The B-17, a versatile aircraft, served in dozens of USAAF units in theaters of combat throughout World War II, and in other roles for the RAF. Its main use was in Europe, where its shorter range and smaller bombload relative to other aircraft did not hamper it as much as in the Pacific Theater. Peak USAAF inventory (in August 1944) was 4,574 worldwide. Captured examples were also flown by Germany as Beuteflugzeug (captured aircraft).

Surviving aircraft

Forty-five planes survive in complete form, 38 of them in the United States. Four are airworthy.

Fortresses as a symbol

The B-17 Flying Fortress became symbolic of the United States of America's air power. In a 1943 Consolidated Aircraft poll of 2,500 men in cities where Consolidated advertisements had been run in newspapers, 73% had heard of the B-24 and 90% knew of the B-17.

After the first Y1B-17s were delivered to the Army Air Corps 2nd Bombardment Group, they were used on flights to promote their long range and navigational capabilities. In January 1938, group commander Colonel Robert Olds flew a Y1B-17 from the U.S. east coast to the west coast, setting a transcontinental record of 13 hours 27 minutes. He also broke the west-to-east coast record on the return trip, completing it in 11 hours 1 minute. Six bombers of the 2nd Bombardment Group took off from Langley Field in 1938 as part of a goodwill flight to Buenos Aires, Argentina; after covering the long route they returned home, and three days later seven aircraft set off on a flight to Rio de Janeiro, Brazil. In a well-publicized mission on 12 May of the same year, three Y1B-17s "intercepted" and took photographs of the Italian ocean liner SS Rex off the Atlantic coast.

Many pilots who flew both the B-17 and the B-24 preferred the B-17 for its greater stability and ease in formation flying. The electrical systems were less vulnerable to damage than the B-24's hydraulics, and the B-17 was easier to fly than a B-24 when missing an engine. During the war, the largest offensive bombing force, the Eighth Air Force, had an open preference for the B-17. Lieutenant General Jimmy Doolittle wrote about his preference for equipping the Eighth with B-17s, citing the logistical advantage of keeping field forces down to a minimum number of aircraft types with their individual servicing and spares. For this reason, he wanted B-17 bombers and P-51 fighters for the Eighth. His views were supported by Eighth Air Force statisticians, whose mission studies showed that the Flying Fortress's utility and survivability were much greater than those of the B-24 Liberator. The B-17's durability became legendary, as it made it back to base on numerous occasions despite extensive battle damage; stories and photos of B-17s surviving battle damage were widely circulated during the war.
Despite its inferior performance and smaller bombload compared with the more numerous B-24 Liberator, a survey of Eighth Air Force crews showed a much higher rate of satisfaction with the B-17.

Notable B-17s

All American – This B-17F survived having her tail almost cut off in a mid-air collision with a Bf 109 over Tunisia, but returned safely to base in Algeria.
Chief Seattle – Sponsored by the city of Seattle, she disappeared (MIA) on 14 August 1942 while flying a reconnaissance mission for the 19th BG, 435th BS; the crew was declared dead on 7 December 1945.
Hell's Kitchen – B-17F 41-24392, one of only three early B-17Fs in the 414th BS to complete more than 100 combat missions.
Mary Ann – A B-17D that was part of an unarmed flight which left Hamilton Air Field, Novato, California, on 6 December 1941 en route to Hickam Field in Hawaii, arriving during the attack on Pearl Harbor. The plane and her crew were immediately forced into action on Wake Island and in the Philippines at the outbreak of World War II. She became famous when her exploits were featured in Air Force, one of the first of the patriotic war films, released in 1943.
Memphis Belle – One of the first B-17s to complete a tour of duty of 25 missions in the 8th Air Force and the subject of a feature film; now completely restored and on display since 17 May 2018 at the National Museum of the U.S. Air Force at Wright-Patterson AFB in Dayton, Ohio.
Miss Every Morning Fix'n – B-17C, previously named Pamela, stationed in Mackay, Queensland, Australia, during World War II. On 14 June 1943, she crashed shortly after takeoff from Mackay while ferrying U.S. forces personnel back to Port Moresby, with 40 of the 41 people on board killed. It remains the worst air disaster in Australian history. The sole survivor, Foye Roberts, married an Australian and returned to the States; he died in Wichita Falls, Texas, on 4 February 2004.
Murder Inc. – A B-17 bombardier wearing the name of the B-17 Murder Inc. on his jacket was used for propaganda in German newspapers.
Old 666 – B-17E flown by the most highly decorated crew in the Pacific Theater.
Royal Flush – B-17F 42-6087 of the 100th Bomb Group, commanded on one mission by the highly decorated USAAF officer Robert Rosenthal; she was the lone surviving 100th BG B-17 of the 10 October 1943 raid against Münster to return to the unit's base at RAF Thorpe Abbotts.
Sir Baboon McGoon – B-17F featured in the June 1944 issue of Popular Science magazine and a 1945 issue of Flying magazine; the articles discuss the mobile recovery crews that followed her October 1943 belly landing at Tannington, England.
The Swoose – Initially nicknamed Ole Betsy while in service, The Swoose is the only remaining intact B-17D, built in 1940, the oldest surviving Flying Fortress, and the only surviving B-17 to have seen action in the Philippines campaign (1941–1942); she is in the collection of the National Air and Space Museum and is being restored for final display at the National Museum of the U.S. Air Force at Wright-Patterson AFB in Dayton, Ohio. The Swoose was flown by Frank Kurtz, father of actress Swoosie Kurtz, who named his daughter after the bomber.
Ye Olde Pub – A badly damaged B-17 that Luftwaffe pilot Franz Stigler declined to shoot down, as memorialized in the painting A Higher Call by John D. Shaw.
5 Grand – The 5,000th B-17 made, emblazoned with the signatures of Boeing employees; served with the 333rd Bomb Squadron, 96th Bomb Group in Europe. Damaged and repaired after a gear-up landing, then transferred to the 388th Bomb Group.
She returned from duty following V-E Day, was flown on a war bonds tour, and was then stored at Kingman, Arizona. Following an unsuccessful bid for museum preservation, the aircraft was scrapped.

Accidents and incidents

Noted B-17 pilots and crew members

Medal of Honor recipients

Many B-17 crew members received military honors and 17 received the Medal of Honor, the highest military decoration awarded by the United States:

Brigadier General Frederick Castle (flying as co-pilot) – awarded posthumously for remaining at the controls so others could escape the damaged aircraft
2nd Lt Robert Femoyer (navigator) – awarded posthumously
1st Lt Donald J. Gott (pilot) – awarded posthumously
2nd Lt David R. Kingsley (bombardier) – awarded posthumously for tending to injured crew and giving up his parachute to another
1st Lt William R. Lawley Jr. – "heroism and exceptional flying skill"
Sgt Archibald Mathies (engineer-gunner) – awarded posthumously
1st Lt Jack W. Mathis (bombardier) – awarded posthumously; the first airman in the European theater to be awarded the Medal of Honor
2nd Lt William E. Metzger Jr. (co-pilot) – awarded posthumously
1st Lt Edward Michael
1st Lt John C. Morgan
Capt Harl Pease – awarded posthumously
2nd Lt Joseph Sarnoski – awarded posthumously
S/Sgt Maynard H. Smith (gunner)
1st Lt Walter E. Truemper – awarded posthumously
T/Sgt Forrest L. Vosler (radio operator)
Brigadier General Kenneth Walker (commanding officer of V Bomber Command) – awarded posthumously; killed while leading a small force in a raid on Rabaul
Maj Jay Zeamer Jr. (pilot) – earned on an unescorted reconnaissance mission in the Pacific, the same mission as Sarnoski

Other military achievements or events

Lincoln Broyhill (1925–2008): Tail gunner on a B-17 in the 483rd Bombardment Group. He received a Distinguished Unit Citation and set two individual records in a single day: (1) most German jets destroyed by a single gunner in one mission (two), and (2) most German jets destroyed by a single gunner during the entirety of World War II.
Allison C. Brooks (1917–2006): A B-17 pilot who was awarded numerous military decorations, was ultimately promoted to the rank of major general, and served on active duty until 1971.
1st Lt Eugene Emond (1921–1998): Lead pilot for Man O War II Horsepower Limited. Received the Distinguished Flying Cross, Air Medal with three oak leaf clusters, American Theater Ribbon and Victory Ribbon. Took part in D-Day and witnessed one of the first German jets when an Me 262A-1a flew through his formation over Germany. One of the youngest bomber pilots in the U.S. Army Air Forces.
Immanuel J. Klette (1918–1988): Second-generation German-American whose 91 combat missions were the most flown by any Eighth Air Force pilot in World War II.
Capt Colin Kelly (1915–1941): Pilot of the first U.S. B-17 lost in action.
Col Frank Kurtz (1911–1996): The USAAF's most decorated pilot of World War II. Commander of the 463rd Bombardment Group (Heavy), 15th Air Force, Celone Field, Foggia, Italy, 1944–1945. Survivor of the attack on Clark Field in the Philippines, and Olympic bronze medalist in diving (1932). Father of actress Swoosie Kurtz, herself named for the still-surviving B-17D mentioned above.
Gen Curtis LeMay (1906–1990): Became head of the Strategic Air Command and Chief of Staff of the USAF.
Lt Col Nancy Love (1914–1976) and Betty (Huyler) Gillies (1908–1998): The first women pilots to be certified to fly the B-17, in 1943, and to qualify for the Women's Auxiliary Ferrying Squadron.
SSgt Alan Magee (1919–2003): B-17 gunner who on 3 January 1943 survived a 22,000-foot (6,700-meter) freefall after his aircraft was shot down by the Luftwaffe over St. Nazaire.
Col Robert K. Morgan (1918–2004): Pilot of Memphis Belle.
Lt Col Robert Rosenthal (1917–2007): Commanded Royal Flush, the only surviving B-17 of the US 8th Air Force raid by the 100th Bomb Group on Münster on 10 October 1943. Completed 53 missions. Earned sixteen medals for gallantry (including one each from Britain and France), and led the raid on Berlin on 3 February 1945 that is thought to have ended the life of Roland Freisler, the infamous "hanging judge" of the People's Court.
1st Lt Bruce Sundlun (1920–2011): Pilot of Damn Yankee of the 384th Bomb Group; shot down over Belgium on 1 December 1943, he evaded capture until reaching Switzerland on 5 May 1944.

Specifications (B-17G)

Notable appearances in media

B-17 in popular culture

Hollywood featured the B-17 in its period films, such as director Howard Hawks' Air Force starring John Garfield and Twelve O'Clock High starring Gregory Peck. Both films were made with the full cooperation of the United States Army Air Forces and used USAAF aircraft and (for Twelve O'Clock High) combat footage. In 1964, the latter film was made into a television show of the same name and ran for three years on ABC TV. Footage from Twelve O'Clock High was also used, along with three restored B-17s, in the 1962 film The War Lover. An early model YB-17 also appeared in the 1938 film Test Pilot with Clark Gable and Spencer Tracy; B-17s later appeared with Clark Gable in Command Decision in 1948, in Tora! Tora! Tora! in 1970, and in Memphis Belle with Matthew Modine, Eric Stoltz, Billy Zane, and Harry Connick Jr. in 1990. The most famous B-17, the Memphis Belle, toured the U.S. with her crew to reinforce national morale (and to sell war bonds). She was featured in a USAAF documentary, Memphis Belle: A Story of a Flying Fortress.

The Flying Fortress has also been featured in artistic works expressing the physical and psychological stress of the combat conditions and the high casualty rates that crews suffered. Works such as Randall Jarrell's poem "The Death of the Ball Turret Gunner" and the "B-17" segment of the film Heavy Metal depict the nature of these missions. The ball turret itself has inspired works such as Steven Spielberg's The Mission. Artists who served with the bomber units also created paintings and drawings depicting the combat conditions of World War II.
https://en.wikipedia.org/wiki/Trieste%20%28bathyscaphe%29
Trieste (bathyscaphe)
Trieste is a Swiss-designed, Italian-built deep-diving research bathyscaphe which reached a record depth in the Challenger Deep of the Mariana Trench, near Guam in the Pacific. On 23 January 1960, Jacques Piccard (son of the boat's designer Auguste Piccard) and US Navy Lieutenant Don Walsh achieved the goal of Project Nekton. It was the first crewed vessel to reach the bottom of the Challenger Deep.

Design

Trieste consisted of a float chamber filled with gasoline (petrol) for buoyancy, with a separate pressure sphere to hold the crew. This configuration (dubbed a "bathyscaphe" by the Piccards) allowed for a free dive, rather than the previous bathysphere designs, in which a sphere was lowered to depth and raised again to the surface by a cable attached to a ship.

Trieste was designed by the Swiss scientist Auguste Piccard and originally built in Italy. His pressure sphere, composed of two sections, was built by Acciaierie Terni. The upper part was manufactured by the company Cantieri Riuniti dell'Adriatico in the Free Territory of Trieste (on the border between Italy and Yugoslavia, now in Italy); hence the name chosen for the bathyscaphe. The installation of the pressure sphere was done in the Cantiere navale di Castellammare di Stabia, near Naples.

Trieste was launched on 26 August 1953 into the Mediterranean Sea near the Isle of Capri. The design was based on previous experience with the bathyscaphe FNRS-2, which had been operated by the French Navy. After several years of operation in the Mediterranean Sea, Trieste was purchased by the United States Navy in 1958 for $250,000.

At the time of Project Nekton, Trieste was more than 15 m (50 ft) long. The majority of this was a series of floats filled with gasoline; water ballast tanks were included at either end of the vessel, as well as releasable iron ballast in two conical hoppers along the bottom, fore and aft of the crew sphere. The crew occupied the 2.16 m (7.09 ft) pressure sphere, attached to the underside of the float and accessed from the vessel's deck by a vertical shaft that penetrated the float and continued down to the sphere hatch.

The pressure sphere provided just enough room for two people. It provided completely independent life support, with a closed-circuit rebreather system similar to that used in modern spacecraft and spacesuits: oxygen was provided from pressure cylinders, and carbon dioxide was scrubbed from the breathing air by being passed through canisters of soda-lime. Batteries provided power.

Trieste was fitted with a new pressure sphere in the winter of 1958, manufactured by the Krupp Steel Works of Essen, Germany, in three finely machined sections (an equatorial ring and two caps), and by the Ateliers de Constructions Mécaniques de Vevey. To withstand the enormous pressure of 1.25 metric tons per square centimetre (110 MPa) at the bottom of the Challenger Deep, the sphere's walls were thick (it was overdesigned to withstand considerably more than the rated pressure). The sphere weighed 13 metric tons in air and 8 metric tons in water (giving it an average specific gravity of 13/(13−8) = 2.6 times that of seawater). The float was necessary because of the sphere's density: it was impossible to design a sphere large enough to hold a person that could withstand the necessary pressures and still have metal walls thin enough for the sphere to be neutrally buoyant.
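The specific gravity quoted above follows directly from Archimedes' principle. As a brief worked derivation (using the weights of 13 metric tons in air and 8 in water that are implicit in the 13/(13−8) expression): the submerged weight equals the weight in air minus the weight of displaced seawater, so

\[ \rho_{\text{sw}}\, g V = W_{\text{air}} - W_{\text{water}} = 13 - 8 = 5 \ \text{t}, \qquad \frac{\rho_{\text{sphere}}}{\rho_{\text{sw}}} = \frac{W_{\text{air}}}{\rho_{\text{sw}}\, g V} = \frac{13}{5} = 2.6 . \]

That is, the sphere displaced only 5 tonnes of seawater against its 13-tonne weight, leaving 8 tonnes of negative buoyancy for the gasoline-filled float to offset.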
Gasoline was chosen as the float fluid because it is less dense than water and also nearly incompressible, thus retaining its buoyant properties at depth and negating the need for thick, heavy walls for the float chamber.

Observation of the sea outside the craft was conducted directly by eye, via a single, very tapered, cone-shaped block of acrylic glass (Plexiglas), the only transparent substance identified that would withstand the external pressure. Outside illumination for the craft was provided by quartz arc-light bulbs, which proved able to withstand pressures of over 100 MPa without any modification.

Magnetic iron pellets were carried on the craft as ballast, both to speed the descent and to allow ascent, since the extreme water pressures would not have permitted compressed-air ballast-expulsion tanks to be used at great depths. This additional weight was held in place at the throats of two hopper-like ballast silos by electromagnets. In case of an electrical failure, the bathyscaphe would automatically rise to the surface.

Transported to the Naval Electronics Laboratory's facility in San Diego, California, Trieste was modified extensively by the Americans, and then used in a series of deep-submergence tests in the Pacific Ocean during the next few years, culminating in the dive to the bottom of the Challenger Deep during January 1960.

The Mariana Trench dives

Trieste departed San Diego on 5 October 1959 for Guam aboard the freighter Santa Maria to participate in Project Nekton, a series of very deep dives in the Mariana Trench. On 23 January 1960, she reached the ocean floor in the Challenger Deep (the deepest southern part of the Mariana Trench), carrying Jacques Piccard and Don Walsh. This was the first time a vessel, crewed or uncrewed, had reached the deepest known point of the Earth's oceans. The depth indicated by the onboard systems was later revised downward, and more recent, more accurate measurements have further refined the accepted depth of the Challenger Deep.

The descent to the ocean floor took 4 hours 47 minutes. During the descent, one of the outer Plexiglas window panes cracked, shaking the entire vessel. The two men spent twenty minutes on the ocean floor. The temperature in the cabin was 7 °C (45 °F) at the time. While at maximum depth, Piccard and Walsh unexpectedly regained the ability to communicate with the support ship, USS Wandank (ATA-204), using a sonar/hydrophone voice communications system. Although sound travels through seawater at about five times its speed in air, it still took about seven seconds for a voice message to travel from the craft to the support ship and another seven seconds for answers to return.

While at the bottom, Piccard and Walsh reported observing a number of sole and flounder (both flatfish). The accuracy of this observation has since been questioned, and recent authorities do not recognize it as valid: there is a theoretical maximum depth for fish, well above the depth of the Challenger Deep, beyond which they would become hyperosmotic. Invertebrates such as sea cucumbers, some of which could potentially be mistaken for flatfish, have been confirmed at far greater depths. Walsh later said that their original observation could have been mistaken, as their knowledge of biology was limited. Piccard and Walsh noted that the floor of the Challenger Deep consisted of "diatomaceous ooze". The ascent took 3 hours and 15 minutes.

The National Museum of the U.S. Navy commemorated the 60th anniversary of the dive in January 2020.
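The quoted seven-second one-way voice delay is consistent with the depth of the dive. As a rough check (assuming a speed of sound in seawater of about 1,500 m/s, roughly five times the ~340 m/s speed in air, and taking the commonly cited Challenger Deep depth of roughly 10,900 m, a figure assumed here rather than stated above):

\[ t = \frac{d}{v} \approx \frac{10\,900\ \text{m}}{1\,500\ \text{m/s}} \approx 7.3\ \text{s one way}, \]

or about 15 seconds for a question and its answer, matching the crew's experience.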
Other deep dives and retirement

Trieste performed a number of deep dives in the Mediterranean prior to being purchased by the U.S. Navy, conducting 48 deep dives between 1953 and 1957 as the "BATISCAFO TRIESTE". Beginning in April 1963, Trieste was modified and used in the Atlantic Ocean to search for the missing nuclear submarine USS Thresher. Trieste was delivered to Boston Harbor by USS Point Defiance (LSD-31) under the command of Captain H. H. Haisten. In August 1963, after several dives, Trieste found debris of the wreck on the sea floor off the coast of New England. Trieste's participation in the search earned her the Navy Unit Commendation.

Following the mission, Trieste was returned to San Diego and taken out of service in 1966. Between 1964 and 1966, Trieste was used to develop her replacement, Trieste II, with the original Terni pressure sphere reincorporated in her successor. In early 1980, she was transported to the Washington Navy Yard, where she remains on exhibit today in the National Museum of the U.S. Navy, along with the Krupp pressure sphere.

Awards

Navy Unit Commendation with star
Meritorious Unit Commendation with star
Navy E Ribbon
National Defense Service Medal with star

See also

Deep Submergence Rescue Vehicle
Deep Submergence Vehicle
Alvin (DSV-2)
Project Mohole
MIR (submersible)
https://en.wikipedia.org/wiki/Battle%20of%20Bouvines
Battle of Bouvines
The Battle of Bouvines was fought on 27 July 1214 near the town of Bouvines in the County of Flanders. It was the concluding battle of the Anglo-French War of 1213–1214. Although estimates of the number of troops vary considerably among modern historians, at Bouvines a French army commanded by King Philip Augustus routed a larger Allied army led by Holy Roman Emperor Otto IV in one of the rare pitched battles of the High Middle Ages and one of the most decisive medieval engagements.

In early 1214, a coalition was assembled against King Philip Augustus of France, consisting of Otto IV, King John of England, Count Ferrand of Flanders, Count Renaud of Boulogne, Duke Henry I of Brabant, Count William I of Holland, Duke Theobald I of Lorraine, and Duke Henry III of Limburg. Its objective was to reverse the conquests made by Philip earlier in his reign. After initial manoeuvring in late July, battle was offered near Bouvines on 27 July. The long allied column deployed slowly into battle order, leaving the Allies at a disadvantage. The superior discipline and training of the French knights allowed them to carry out a series of devastating charges, shattering the Flemish knights on the allied left wing. In the centre, the Allied knights and infantry under Otto enjoyed initial success, scattering the French urban infantry and nearly killing Philip. A counterattack by French knights smashed the isolated Allied infantry, and Otto's entire centre division fell back. Otto fled the battle and his knightly followers were defeated by the French knights, who went on to capture the Imperial eagle standard. With the Allied centre and left wing routed, only the soldiers of the right wing under Renaud of Boulogne and William de Longespée held on. They were killed, captured or driven from the field. A pursuit was not conducted, as it was nearly dark.

The crushing French victory dashed English and Flemish hopes of regaining their lost territories. Having lost all credibility as emperor following the battle, Otto IV was deposed by Pope Innocent III, leading to Frederick II's accession to the Imperial throne. King John was compelled to hand over Anjou, the ancient patrimony of the Angevin kings of England, to Philip in a peace settlement, confirming the collapse of the Angevin Empire. The disaster at Bouvines forever altered the political situation in England, as John was so weakened that his discontented barons forced him to agree to the Magna Carta in 1215. Counts Ferrand and Renaud, together with Longespée, were captured and imprisoned. The balance of power shifted, with the popes of the 13th century increasingly seeking the support of a powerful France. Philip had achieved remarkable success in the expansion of his realm and by the end of his reign, in 1223, had laid the foundations not only for the era of Capetian pre-eminence in Europe which followed and marked much of the Late Middle Ages, but also for the absolutism that came to define the Ancien Régime.

Prelude

In 1214, Ferrand, Infante of Portugal and Count of Flanders, desired the return of the cities of Aire-sur-la-Lys and Saint-Omer, which he had recently lost to Philip II, King of France, in the Treaty of Pont-à-Vendin. He thus broke allegiance with Philip and assembled a broad coalition including Emperor Otto IV, King John of England, Duke Henry I of Brabant, Count William I of Holland, Duke Theobald I of Lorraine, and Duke Henry III of Limburg.
The campaign was planned by John, who was the fulcrum of the alliance; his plan was to draw the French away from Paris southward towards his forces and keep them occupied, while the main army, under Emperor Otto IV, marched on Paris from the north. John's plan was followed initially, but the Allies in the north moved slowly. John, after two encounters with the French, retreated to Aquitaine on 3 July. On 23 July, having summoned his vassals, Philip had an army consisting of 7,000 knights and 15,000 infantry. The Emperor finally succeeded in concentrating his forces at Valenciennes, although these did not include John, and in the interval Philip had counter-marched northward and regrouped. Philip now took the offensive himself and, after manoeuvring to obtain good ground for his cavalry, offered battle on 27 July on the plain east of Bouvines and the river Marque. Otto was surprised by the speed of his enemy and was thought to have been caught unprepared by Philip, who probably deliberately lured Otto into his trap. Otto decided to launch an attack on what was then the French rearguard.

The Allied army drew up facing south-west towards Bouvines, the heavy cavalry on the wings, the infantry in one great mass in the centre, supported by a cavalry corps under Otto himself. The French army formed up opposite in a similar formation, cavalry on the wings, infantry, including the town militias, in the centre. Philip, with the cavalry reserve and the royal standard, the Oriflamme, positioned himself to the rear of the men on foot. William the Breton, chaplain to Philip at the battle, says that the soldiers stood in line across a space of 40,000 steps, which left very little clearance and predisposed the armies to hand-to-hand fighting. He also says in his chronicle that "the two lines of combatants were separated by a small space".

Order of battle

French

The French army contained 1,200–1,360 knights (of whom 765 were from the royal demesne) and 300 mounted sergeants. Philip had launched an appeal to the municipalities in northern France in order to obtain their support. Sixteen of the 39 municipalities of the royal demesne answered the call to arms. They provided 3,160 infantry, broken down as: Amiens 250, Arras 1,000, Beauvais 500, Compiègne 200, Corbie 200, Bruyères 120, Cerny and Crépy-en-Laonnais 80, Crandelain 40, Hesdin 80, Montreuil-sur-Mer 150, Noyon 150, Roye 100, Soissons 160, and Vailly 50. The balance of the infantry, possibly another 2,000 men, was composed of mercenaries. The other communes of the royal demesne were supposed to provide a further 1,980 infantry, but it is doubtful that they did. In total, the royal army numbered approximately 6,000–7,000 men. The royal standard-bearer of the fleur-de-lis was Galon de Montigny.

The royal army was divided into three parts, or "battles":

The right wing, composed of the knights of Champagne and Burgundy, was commanded by Eudes, Duke of Burgundy, and his lieutenants: Gaucher of Châtillon, Count of Saint-Pol, Count Guillaume I de Sancerre, the Count de Beaumont, Mathieu of Montmorency and Adam II, Viscount of Melun. In front of the right wing were men-at-arms and militia from Burgundy, Champagne, and Picardy, led by 150 mounted sergeants from Soissons.

The central battle was led by Philip Augustus and his chief knights – Guillaume des Barres, Bartholomé de Roye, Girard Scophe, Guillaume de Garland, Enguerrand of Coucy and Gautier of Nemours.
In front of the king and his 175 knights were 2,150 infantry from the towns of the Île-de-France and Normandy.

The left wing was led by Robert of Dreux, supported by Count William of Ponthieu. The main body of the left wing consisted of Bretons and militia from Dreux, Perche, Ponthieu, and Vimeux.

The bridge of Bouvines, the only means of retreat across the marshes, was guarded by 150 sergeants-at-arms, who also formed the French reserve.

Allied

Otto's army contained some 1,300–1,500 knights: 600–650 Flemish, 425–500 Hainaulter and 275–350 from elsewhere. He also fielded approximately 7,500 infantry, giving a total force of just under 9,000 men. The imperial army was also formed up in three battles:

The left flank, under the command of Ferrand of Flanders with his Flemish knights, was directed by Arnaud of Oudenaarde. The infantry were from Flanders and Hainaut.

The centre was under the command of Otto and of Theobald, Duke of Lorraine, Henry, Duke of Brabant, and Philip Courtenay, Marquis of Namur. It included many Saxons and infantry from Brabant and Germany. In the front of the battle stood German pike phalanxes; Saxon infantry formed the second line, and Otto stood between these with 50 German knights.

The right flank, under the command of Renaud de Dammartin, included Brabant infantry and English knights, the latter under the command of the Earl of Salisbury, William Longespée. On the extreme right, English archers supported the flank of both the Brabant infantry and the nobles of the two Lorraines (i.e. of the Duchy of Lorraine and the County of Bar).

Battle

Allied left

The battle opened with an attack by 150 light cavalrymen from the Abbey of Saint-Médard de Soissons against the Flemish knights on the allied left, aiming to throw them into confusion. The Flemish knights easily drove off the unarmoured horsemen, but some left their formations to chase the retreating light cavalry; 180 French knights from Champagne in turn attacked and killed or captured the over-aggressive Flemings. The Count of Flanders counter-attacked with his entire force of 600 knights and threw the French back. Gaucher de Châtillon launched his 30 knights at the Flemish force, followed by a further 250 knights. They carried out a continuous series of charges and halted the allied advance. Many knights on both sides fell from their horses in the first clash. The French were better ordered than the more loosely formed Flemish knights, and the Allied ranks grew thinner as they were assaulted by the compact French masses. Châtillon and Melun with their knights broke through the ranks of their Flemish counterparts, then wheeled and struck them from the rear, constantly switching targets. St. Pol's knights and the Burgundians engaged in an exhausting struggle against the Flemings, taking no prisoners. The Duke of Burgundy's horse was killed and the Duke thrown to the ground, but he was saved by his knights, who beat off the Flemish and found him a fresh horse. The Flemings fought on for three hours despite their increasingly desperate situation, driven by knightly honour. Finally, the wounded and unhorsed Count of Flanders was captured by two French knights, triggering the collapse of his knights' morale.

Centre

The French urban militia infantry, 2,150 strong, were gathered under the Oriflamme in the centre, in front of Philip's knights and the fleur-de-lis standard. Soon after deploying, they were attacked by Allied knights and infantry under Otto and thrown back.
Otto and his knights had nearly reached the French king when they were halted by French knights. The Allied infantrymen broke through to Philip and his handful of knightly companions, unhorsing him with their hooked pikes. The French king's armour deflected an enemy lance and saved his life. Galon de Montigny used the royal standard to signal for help and another knight gave Philip a fresh horse. The Allied infantry used daggers to stab unhorsed French knights through the openings in their helmets and other weak spots in their armour. The Norman knight Etienne de Longchamp was killed in this way, and the French suffered heavy losses. After repeated French counterattacks and a prolonged fight, the Allies were thrown back. The battle in the centre was now a mêlée between the two mounted reserves, led by the King and the Emperor in person. The French knight Pierre Mauvoisin nearly captured Otto and his horse, and Gérard la Truie stabbed the Emperor with a dagger, which bounced off his coat of mail and struck Otto's horse in the eye, killing it. Otto was saved by four German lords and their followers. As the French sent more knights to attack him personally, he fled the field. The German knights fought to the bitter end to save their emperor, all being killed or captured. The Imperial standard, with the eagle and dragon, was captured by the French knights, who brought it to their king. By this time, Allied resistance in the centre had ceased.

Allied right

Meanwhile, on the French left, Robert de Dreux's troops were at first hard pressed by men led by William Longespée. Longespée was unhorsed and taken prisoner by Philip of Dreux, the Bishop of Beauvais, and the English soldiers fled. Mathieu de Montmorency captured twelve enemy banners. (In memory of this feat, the shield of Montmorency gained an additional twelve eagles, for sixteen altogether instead of the previous four.)

Last stand

The day was already decided in favour of the French when their wings began to close inwards to cut off the retreat of the imperial centre. The battle closed with the celebrated stand of Reginald of Boulogne (Renaud de Dammartin), a former vassal of King Philip, who formed a ring of 400–700 Brabançon pikemen. They defied every attack by the French cavalry, while Reginald made repeated sorties with his small force of knights. Eventually, long after the imperial army had retreated, the Brabançon schiltron was overrun by a charge of 50 knights and 1,000–2,000 infantry under Thomas de St. Valery. Reginald was taken prisoner in the mêlée. A pursuit was not conducted, owing to the approaching nightfall and a fear that the prisoners might escape. The French formations were recalled using trumpets.

Aftermath

French knightly casualties are not recorded; the French infantry suffered heavily. The Allies had 169 knights killed and heavy but unquantified losses among the infantry, including between 400 and 700 Brabançon infantry killed. As well as Reginald of Boulogne, two other leaders were captured by the French, Ferrand, Count of Flanders and Hainaut, and William Longespée, along with twenty-five barons and over a hundred knights. The battle ended the threat from both Otto and John. According to Jean Favier, Bouvines is "one of the most decisive and symbolic battles in the history of France". For Philippe Contamine, "the Battle of Bouvines had both important consequences and a great impact". Ferdinand Lot called it a "medieval Austerlitz".
Philip returned to Paris triumphant, marching his prisoners behind him in a long procession, as his subjects lined the streets to greet the victorious king. In the aftermath of the battle, Otto retreated to his castle of Harzburg and was soon overthrown as Holy Roman Emperor by Frederick II, who had already been recognised as emperor in the south a year and a half earlier. Count Ferrand remained imprisoned following his defeat, while King John obtained a five-year truce on very lenient terms given the circumstances.

Philip's decisive victory was crucial to the political situation in England. The battle ended all hope of a restoration of the Angevin Empire. So weakened was the defeated King John that he soon had to submit to his barons' demands and agree to the Magna Carta in 1215, limiting the power of the crown and establishing the basis for common law.

Commemoration

In thanksgiving for the victory, Philip Augustus founded the Abbey of Notre Dame de la Victoire, between Senlis and Mont-l'Évêque. In 1914, to mark the seventh centenary, Félix Dehau had the parish church of Bouvines rebuilt with a number of stained-glass windows representing the history of the battle. In 2014, the eighth centenary was commemorated in Bouvines by an association called Bouvines 2014. A series of events, including an official ceremony and a show called "Bouvines la Bataille", attracted more than 6,000 viewers in Bouvines.

See also

Anglo-French Wars
https://en.wikipedia.org/wiki/Bela%20Lugosi
Bela Lugosi
Béla Ferenc Dezső Blaskó (October 20, 1882 – August 16, 1956), known professionally as Bela Lugosi, was a Hungarian and American actor best remembered for portraying Count Dracula in the 1931 horror classic Dracula, Ygor in Son of Frankenstein (1939) and his roles in many other horror films from 1931 through 1956.

Lugosi began acting on the Hungarian stage in 1902. After playing in 172 different productions in his native Hungary, Lugosi moved on to appearing in Hungarian silent films in 1917. He had to emigrate suddenly to Germany after the failed Hungarian Communist Revolution of 1919 because of his former socialist activities (organizing a stage actors' union), leaving his first wife in the process. He acted in several films in Weimar Germany before arriving in New Orleans as a seaman on a merchant ship, then making his way north to New York City and Ellis Island. In 1927, he starred as Count Dracula in a Broadway adaptation of Bram Stoker's novel, moving with the play to the West Coast in 1928 and settling down in Hollywood. He later starred in the 1931 film version of Dracula, directed by Tod Browning and produced by Universal Pictures.

Through the 1930s, he occupied an important niche in horror films, but his notoriety as "Dracula" and his ominous, thick Hungarian accent greatly limited the roles offered to him, and he tried unsuccessfully for years to avoid the typecasting. He co-starred in a number of films with Boris Karloff, who was able to demand top billing. To his frustration, Lugosi, a charter member of the American Screen Actors Guild, was increasingly restricted to mad scientist roles because of his inability to speak English more clearly. He was kept employed by the studios principally so that they could put his name on the posters. Among his teamings with Karloff, he performed major roles only in The Black Cat (1934), The Raven (1935), and Son of Frankenstein (1939); even in The Raven, Karloff received top billing despite Lugosi performing the lead role.

By this time, Lugosi had been receiving regular medication for sciatic neuritis, and he became addicted to doctor-prescribed morphine and methadone. This drug dependence (and his gradually worsening alcoholism) was becoming apparent to producers, and after 1948's Abbott and Costello Meet Frankenstein the offers dwindled to parts in low-budget films, some of them directed by Ed Wood, including a brief (posthumous) appearance in Wood's Plan 9 from Outer Space (1957).

Lugosi married five times and had one son, Bela G. Lugosi (with his fourth wife, Lillian).

Early life

Lugosi, the youngest of four children, was born Béla Ferenc Dezső Blaskó in 1882 in Lugos, Kingdom of Hungary (now Lugoj, Romania), to Hungarian father István Blaskó, a baker who later became a banker, and Serbian-born mother Paula de Vojnich. He was raised in a Roman Catholic family. At the age of 12, Lugosi dropped out of school and left home to work at a succession of manual labor jobs. His father died during his absence.

He began his stage acting career in 1902. His earliest known performances are from provincial theatres in the 1903–04 season, playing small roles in several plays and operettas. He took the last name "Lugosi" in 1903 to honor his birthplace, and went on to perform in Shakespeare plays. After moving to Budapest in 1911, he played dozens of roles with the National Theatre of Hungary between 1913 and 1919.
Although Lugosi would later claim that he "became the leading actor of Hungary's Royal National Theatre", many of his roles there were small or supporting parts, which led him to enter the Hungarian film industry. During World War I, he served as an infantryman in the Austro-Hungarian Army from 1914 to 1916, rising to the rank of lieutenant. He was awarded the Wound Medal for wounds he sustained while serving on the Russian front. Returning to civilian life, Lugosi became an actor in Hungarian silent films, appearing in many of them under the stage name "Arisztid Olt". Due to his activism in the actors' union in Hungary during the revolution of 1919 and his active participation in the Hungarian Soviet Republic, he was forced to flee his homeland when the government changed hands, initially accompanied by his first wife, Ilona Szmik. They escaped to Vienna before settling in Berlin (in the Langestrasse), where he began acting in German silent films. During these moves, Ilona lost her unborn child, after which she left Lugosi and returned home to her parents, where she filed for divorce and soon afterwards remarried. Lugosi eventually travelled to New Orleans, Louisiana, in December 1920, working as a crewman aboard a merchant ship, then made his way north to New York City, where he again took up acting in (and sometimes directing) stage plays in 1921–1922, then worked in the New York silent film industry from 1923 to 1926. In 1921, he met and married his second wife, Ilona von Montagh, a young Hungarian émigrée and stage actress with whom he had worked years before in Europe. They lived together for only a few weeks, but their divorce took until October 1925 to be finalized. He later moved to California in 1928 to tour in the Dracula stage play, and his Hollywood film career took off. Lugosi claimed he performed the Dracula play around 1,000 times during his lifetime. He eventually became a U.S. citizen in 1931, soon after the release of his film version of Dracula.

Career

Early films

Lugosi's first film appearance was in the 1917 Hungarian silent film Leoni Leo. When appearing in Hungarian silent films, he mostly used the stage name Arisztid Olt. Lugosi made at least 10 films in Hungary between 1917 and 1918 before leaving for Germany. Following the collapse of Béla Kun's Hungarian Soviet Republic in 1919, leftists and trade unionists became vulnerable, some being imprisoned or publicly executed. Lugosi was proscribed from acting due to his participation in the formation of an actors' union. Exiled in Weimar-era Germany, he co-starred in at least 14 German silent films in 1920, among them Hypnose: Sklaven fremden Willens (1920), Der Januskopf (1920) and an adaptation of the Karl May novel Caravan of Death (1920). Lugosi left Germany in October 1920, emigrating by ship to the United States, and entered the country at New Orleans in December 1920. He made his way to New York and was inspected by immigration officers at Ellis Island in March 1921. He declared his intention to become a US citizen only in 1928; on June 26, 1931, he was naturalized. On his arrival in America, Lugosi worked for some time as a laborer, and then entered the theater in New York City's Hungarian immigrant colony. With fellow expatriate Hungarian actors he formed a small stock company that toured Eastern cities, playing for immigrant audiences. Lugosi acted in several Hungarian-language plays before starring in his first English-language Broadway play, The Red Poppy, in 1922.
Three more parts came in 1925–26, including a five-month run in the comedy-fantasy The Devil in the Cheese. In 1925, he played an Arab sheik in Arabesque, which premiered in Buffalo, New York, at the Teck Theatre before moving to Broadway. His first American film role was in the silent melodrama The Silent Command (1923), which was filmed in New York. Four other silent roles followed, villains and continental types, all in productions made in the New York area. A rumor has circulated for decades among film historians that Lugosi played an uncredited bit part as a clown in the 1924 Lon Chaney Hollywood film He Who Gets Slapped, but this has been heavily disputed. The rumor originated from a publicity still from the film, found posthumously in Lugosi's scrapbook, which showed an unidentified clown in heavy makeup standing near Lon Chaney in one scene. The still was thought to be evidence that Lugosi appeared in the film, but historians agree that this is very unlikely, since Lugosi was in Chicago (appearing in a play called The Werewolf) and New York at the time the film was being made in Hollywood.

Dracula

Lugosi was approached in the summer of 1927 to star in a Broadway theatre production of Dracula, which had been adapted by Hamilton Deane and John L. Balderston from Bram Stoker's 1897 novel. The Horace Liveright production was successful, running in New York City for 261 performances before touring the United States to much fanfare and critical acclaim throughout 1928 and 1929. In 1928, Lugosi decided to stay in California when the play ended its first West Coast run. His performance had piqued the interest of Fox Film, and he was cast in the Hollywood studio's silent film The Veiled Woman (1929). He also appeared in the film Prisoners (also 1929), believed lost, which was released in both a silent and a partial-talkie version. In 1929, with no other film roles in sight, he returned to the stage as Dracula for a short West Coast tour of the play. Lugosi remained in California, where he resumed his film work under contract with Fox, appearing in early talkies often as a heavy or an "exotic sheik". He also continued to lobby for his prized role in the film version of Dracula. Despite his critically acclaimed stage performance, Lugosi was not Universal Pictures' first choice for the role of Dracula when the company optioned the rights to the Deane play and began production in 1930. Several prominent actors were considered, among them Paul Muni, Chester Morris, Ian Keith, John Wray, Joseph Schildkraut, Arthur Edmund Carewe, William Courtenay, John Carradine, and Conrad Veidt. Lew Ayres was eventually hired to play Dracula, only to be replaced by Robert Ames after Ayres was cast in another Universal Pictures film. Ames was in turn replaced by David Manners, who would instead come to play John Harker. Lugosi had played the role on Broadway, and remained under consideration until director Tod Browning finally cast him. The film was a major hit, but Lugosi was paid a salary of only $3,500, since he had accepted the role too eagerly.

Typecasting

Through his association with Dracula (in which he appeared with minimal makeup, using his natural, heavily accented voice), Lugosi found himself typecast as a horror villain in films such as Murders in the Rue Morgue (1932), The Black Cat (1934) and The Raven (1935) for Universal, and the independent White Zombie (1932). His accent, while a part of his image, limited the type of role he could play.
Lugosi did attempt to break type by auditioning for other roles. He lost out to Lionel Barrymore for the role of Grigori Rasputin in Rasputin and the Empress (also 1932), to C. Henry Gordon for the role of Surat Khan in The Charge of the Light Brigade (1936), and to Basil Rathbone for the role of Commissar Dimitri Gorotchenko in Tovarich (1937), a role Lugosi had played on stage. He played the elegant, somewhat hot-tempered General Nicholas Strenovsky-Petronovich in International House (1933). Controversy aside, eight films paired Lugosi with Boris Karloff: five at Universal – The Black Cat (1934), The Raven (1935), The Invisible Ray (1936), Son of Frankenstein (1939) and Black Friday (1940) – plus minor cameo performances by both in Gift of Gab (1934), and two at RKO Pictures, You'll Find Out (1940) and The Body Snatcher (1945). Despite the relative size of their roles, Lugosi inevitably received second billing, below Karloff. There are contradictory reports of Lugosi's attitude toward Karloff: some claim that he was openly resentful of Karloff's long-term success and ability to gain good roles beyond the horror arena, while others suggest the two actors were – for a time, at least – amicable. Karloff himself suggested in interviews that Lugosi was initially mistrustful of him when they acted together, believing that the Englishman would attempt to upstage him. When this proved not to be the case, according to Karloff, Lugosi settled down and they worked together amicably (though some have further commented that Karloff's on-set demand to break from filming for mid-afternoon tea annoyed Lugosi). Lugosi did get a few heroic leads – as in Universal's The Black Cat (after Karloff had been accorded the more colorful role of the villain) and The Invisible Ray, and a romantic role in producer Sol Lesser's adventure serial The Return of Chandu (1934) – but his typecasting problem appears to have been too entrenched to be alleviated by those films. Lugosi addressed his plea to be cast in non-horror roles directly to casting directors through his listing in the 1937 Players Directory, published by the Academy of Motion Picture Arts and Sciences, in which he (or his agent) calls the idea that he is only fit for horror films "an error."

Career decline

Historian John McElwee reports, in his 2013 book Showmen, Sell It Hot!, that Bela Lugosi's popularity received a much-needed boost in August 1938, when California theater owner Emil Umann revived Dracula and Frankenstein as a special double feature. The combination was so successful that Umann scheduled extra shows to accommodate the capacity crowds, and invited Lugosi to appear in person, which thrilled new audiences that had never seen Lugosi's classic performance. "I owe it all to that little man at the Regina Theatre," said Lugosi of exhibitor Umann. "I was dead, and he brought me back to life." Universal took notice of the tremendous business and launched its own national re-release of the same two horror favorites. The studio then rehired Lugosi to star in new films – fortunately, just as Lugosi's fourth wife had given birth to a son. Universal cast Lugosi in Son of Frankenstein (1939), appearing in the character role of Ygor, a mad blacksmith with a broken neck, in heavy makeup and beard. Lugosi was third-billed, with his name above the title, alongside Basil Rathbone as Dr.
Frankenstein's son and Boris Karloff reprising his role as Frankenstein's monster. Regarding Son of Frankenstein, the film's director, Rowland V. Lee, said his crew let Lugosi "work on the characterization; the interpretation he gave us was imaginative and totally unexpected ... when we finished shooting, there was no doubt in anyone's mind that he stole the show. Karloff's monster was weak by comparison." The same year saw Lugosi making a rare appearance in an A-list motion picture: he was a stern Soviet commissar in the Metro-Goldwyn-Mayer romantic comedy Ninotchka, starring Greta Garbo and directed by Ernst Lubitsch. Lugosi was quite effective in this small but prestigious character part, and he even received top billing among the film's supporting cast, all of whom had significantly larger roles. It might have been a turning point for the actor, but within the year he was back on Hollywood's Poverty Row, playing leads for Sam Katzman. These horror, comedy and mystery B-films were mostly released by Monogram Pictures. At Universal, he often received star billing for what amounted to a supporting part. Lugosi went to 20th Century-Fox for The Gorilla (1939), which had him playing straight man (a butler) to Patsy Kelly and the Ritz Brothers. When Lugosi's Black Friday premiered in 1940 on a double bill with the Vincent Price film The House of the Seven Gables, Lugosi and Price both appeared in person at the Chicago theatre where it opened on February 29, 1940, and remained for four performances. Lugosi developed severe, chronic sciatica, ostensibly aggravated by injuries received during his military service. Though at first he was treated with benign pain remedies such as asparagus juice, doctors escalated the medication to opiates. His dependence on opiates – particularly morphine and, after it became available in America in 1947, methadone – deepened as his screen offers dwindled. He was finally cast in the role of Frankenstein's monster for Universal's Frankenstein Meets the Wolf Man (1943). (At the end of the previous film in the series, The Ghost of Frankenstein (1942), Lugosi's voice had been dubbed over that of Lon Chaney Jr., since Ygor's brain was now in the Monster's skull.) But at the last minute, Lugosi's heavily accented dialogue was edited out after the film was completed, along with the idea of the Monster being blind, leaving his performance – all groping, outstretched arms and silently moving lips – seeming enigmatic (and funny) to audiences. Lugosi played Dracula for a second and final time on film in Abbott and Costello Meet Frankenstein (1948). It was Bela Lugosi's last "A" movie. For the remainder of his life, he appeared – less and less frequently – in obscure, forgettable, low-budget B features. From 1947 to 1950, he performed in summer stock, often in productions of Dracula or Arsenic and Old Lace, and during the rest of the year made personal appearances in a touring "spook show" and on early commercial television. In September 1949, Milton Berle invited Lugosi to appear in a sketch on Texaco Star Theatre. Lugosi memorized the script for the skit, but became confused on the air when Berle began to ad-lib. He also appeared on the anthology series Suspense on October 11, 1949, in a live adaptation of Edgar Allan Poe's "The Cask of Amontillado".
In 1951, while in England to play a six-month tour of Dracula, Lugosi co-starred in a lowbrow film comedy, Mother Riley Meets the Vampire (also known as Vampire Over London and My Son, the Vampire), released the following year. Following his return to the United States, he was interviewed for television, and reflected wistfully on his typecasting in horror parts: "Now I am the boogie man". In the same interview he expressed a desire to play more comedy, as he had in the Mother Riley farce. Independent producer Jack Broder took Lugosi at his word, casting him in a jungle-themed comedy, Bela Lugosi Meets a Brooklyn Gorilla (1952), starring nightclub comedians Duke Mitchell and Jerry Lewis look-alike Sammy Petrillo, whose act so closely resembled that of Martin and Lewis that Dean Martin and Jerry Lewis promptly sued the duo.

Stage and personal appearances

Lugosi enjoyed a lively career on stage, with plenty of personal appearances. As film offers declined, he became more and more dependent on live venues to support his family. Lugosi took over the role of Jonathan Brewster from Boris Karloff in Arsenic and Old Lace, and had also expressed interest in playing Elwood P. Dowd in Harvey, hoping the change of pace would help him professionally. He likewise made numerous personal live appearances to promote his horror image or an accompanying film. The Vincent Price film House of Wax premiered in Los Angeles at the Paramount Theatre on April 16, 1953. The film played at midnight with a number of celebrities in the audience that night (Judy Garland, Ginger Rogers, Rock Hudson, Broderick Crawford, Gracie Allen, Eddie Cantor, Shelley Winters and others). Producer Alex Gordon, knowing Lugosi was in dire need of cash, arranged for the actor to stand outside the theater wearing a cape and dark glasses, holding a man costumed as a gorilla on a leash. He later allowed himself to be photographed drinking a glass of milk at a Red Cross booth there; when Lugosi playfully attempted to bite the "nurse" in attendance, she overreacted and spilled the milk all over his shirt and cape. Afterward, Lugosi was interviewed by a female reporter who botched the interview by asking the prearranged questions out of order, thoroughly confusing the aging star. Embarrassed, Lugosi left abruptly, without attending the screening.

Ed Wood and final projects

Late in his life, Bela Lugosi again received star billing in films when the ambitious but financially limited filmmaker Ed Wood, a fan of Lugosi, found him living in obscurity and near-poverty and offered him roles in his films, such as an anonymous narrator in Glen or Glenda (1953) and a mad scientist in Bride of the Monster (1955). During post-production of the latter, Lugosi decided to seek treatment for his drug addiction, and the film's premiere was arranged to raise money for Lugosi's hospital expenses (though it raised only a paltry amount). According to Kitty Kelley's biography of Frank Sinatra, when the entertainer heard of Lugosi's problems, he visited Lugosi at the hospital and gave him a $1,000 check. Sinatra would recall Lugosi's amazement at his visit, since the two men had never met before. During an impromptu interview upon his release from the treatment center in 1955, Lugosi stated that he was about to begin work on a new Ed Wood film called The Ghoul Goes West. This was one of several projects proposed by Wood, including The Phantom Ghoul and Dr. Acula.
With Lugosi in his Dracula cape, Wood shot impromptu test footage, with no particular storyline in mind, in front of Tor Johnson's home, at a suburban graveyard, and in front of Lugosi's apartment building on Carlton Way. This footage ended up posthumously in Wood's Plan 9 from Outer Space (1957), which was filmed in 1956 soon after Lugosi died. Wood hired Tom Mason, his wife's chiropractor, to double for Lugosi in additional shots. Mason was noticeably taller and thinner than Lugosi, and had the lower half of his face covered with his cape in every shot, as Lugosi sometimes did in Abbott and Costello Meet Frankenstein. Following his treatment, Lugosi made one final film, in late 1955: The Black Sleep, for Bel-Air Pictures, which was released in the summer of 1956 through United Artists with a promotional campaign that included several personal appearances by Lugosi and his co-stars, as well as Maila Nurmi (TV's horror host "Vampira"). To Lugosi's disappointment, however, his role in this film was that of a mute butler, with no dialogue. Lugosi was intoxicated and very ill during the film's promotional campaign and had to return to Los Angeles earlier than planned; he never got to see the finished film. Tor Johnson said in interviews that, on the night they shared a hotel room, Lugosi kept screaming that he wanted to die. In 1959, a British film called Lock Up Your Daughters, composed of clips from Bela Lugosi's Monogram pictures of the 1940s, was theatrically released in the U.K. The film is lost today, but a March 16, 1959, critical review in Kinematograph Weekly mentioned that the movie contained new Lugosi footage – intriguing, since Lugosi had died in 1956. Back in 1950, however, Lugosi had appeared on a one-hour TV program called Murder and Bela Lugosi (which WPIX-TV broadcast on September 18, 1950), in which Lugosi was interviewed and provided commentary on a number of his old horror films while clips from them were shown; historian Gary Rhodes believes some of this Lugosi TV production found its way into the 1959 British film, which would finally explain the mystery. (Review of Lock Up Your Daughters, Kinematograph Weekly, March 16, 1959.)

Personal life

Lugosi was married five times. In June 1917, Lugosi married 19-year-old Ilona Szmik (1898–1991) in Hungary. The couple divorced after Lugosi was forced to flee his homeland for political reasons (risking execution if he stayed) and Ilona did not wish to leave her parents. The divorce became final on July 17, 1920, uncontested since Lugosi could not appear at the proceedings. (Szmik married the wealthy Hungarian architect Imre Francsek in December 1920, moved with him to Iran in 1930, had two children and died in 1991.) After living briefly in Germany, Lugosi left Europe by ship and arrived in New Orleans on October 27, 1920, and, after making his way north, underwent his primary alien inspection at Ellis Island, N.Y., on March 23, 1921. In September 1921, he married Hungarian actress Ilona von Montagh in New York City; she filed for divorce on November 11, 1924, charging him with adultery and complaining that he wanted her to abandon her acting career to keep house for him. The divorce became final in October 1925. (Lugosi learned in 1935 that von Montagh and a female friend had both been arrested for shoplifting in New York City, which was the last he heard of her.)
Lugosi took his place in Hollywood society and scandal when he married wealthy San Francisco resident Beatrice Woodruff Weeks (1897–1931), widow of architect Charles Peter Weeks, on July 27, 1929. Weeks filed for divorce on November 4, 1929, accusing Lugosi of infidelity, citing actress Clara Bow as the "other woman", and claiming Lugosi had tried to take her checkbook and the key to her safe deposit box away from her. Lugosi complained of her excessive drinking and dancing with other men at social gatherings; she claimed he slapped her in the face one night because she ate a pork chop he had hidden in their refrigerator. The divorce became official on December 9, 1929. (Weeks died 17 months later, at age 34, from alcoholism in Panama; Lugosi never received a penny of her fortune.) On June 26, 1931, Lugosi became a naturalized United States citizen. In 1933, the 50-year-old Lugosi married 22-year-old Lillian Arch (1911–1981), the daughter of Hungarian immigrants living in Hollywood. Lillian's father was at first against her marriage to Lugosi, since the actor was experiencing financial difficulties at the time, so Bela talked her into eloping with him to Las Vegas in January 1933. They remained married for twenty years and had a child, Bela G. Lugosi, in 1938. (Bela eventually had four grandchildren and six great-grandchildren, although he did not live long enough to meet any of them.) Lillian and Bela owned lakeshore property, comprising several lots, in Lake Elsinore, California (then called Elsinore), where they vacationed between 1944 and 1953. Lillian's parents lived on one of the properties, and Lugosi frequented the health spa there. Bela Lugosi Jr. was boarded at the Elsinore Naval and Military School in Lake Elsinore, and also lived with Lillian's parents while she and Bela were touring. After nearly breaking up in 1944, Lillian and Bela finally did divorce, on July 17, 1953, at least partially because of Bela's excessive drinking and his jealousy over Lillian taking a full-time job as an assistant to actor Brian Donlevy on Donlevy's radio and television series Dangerous Assignment. Lillian got custody of their son, Bela Jr. One night after Lillian left him, Lugosi called the police and threatened to commit suicide, but when the police showed up at his apartment, he denied making the call. (Lillian eventually married Brian Donlevy in 1966, by which time Donlevy had also become an alcoholic; she died in 1981.) Lugosi married Hope Lininger, his fifth wife, in 1955; she was 37 years his junior. She had been a fan, writing letters to him when he was in the hospital recovering from his drug addiction, and would sign her letters "A dash of Hope". They remained married until his death in 1956, although the two were discussing divorce around the time he died.

Death

Lugosi died of a heart attack on August 16, 1956, in the bedroom of his Los Angeles apartment while taking a nap. His wife Hope discovered him dead on his bed, dressed only in his underwear, when she came home from work that evening; according to the medical examiner, he had apparently died peacefully in his sleep around 6:45 PM. He was 73 and weighed 140 pounds. The rumor that Lugosi was clutching the script for The Final Curtain, a planned Ed Wood project, at the time of his death is not true. Lugosi was buried wearing one of his "Dracula" capes and his full costume, as well as his Dracula ring, at Holy Cross Cemetery in Culver City, California.
Contrary to popular belief, Lugosi never requested to be buried in his cloak; Bela G. Lugosi confirmed on numerous occasions that he and his mother, Lillian, made the decision, believing it was what his father would have wanted. The funeral was held on Saturday, August 18, at the Utter-McKinley funeral home in Hollywood. Attendees included Forrest J. Ackerman, Ed Wood (who was a pallbearer), Tor Johnson, Conrad Brooks, Richard Sheffield, his widow Hope and ex-wife Lillian, Bela Lugosi Jr., Norma McCarty, Loretta King, Paul Marco and actor George Becwar. Bela's fourth wife, Lillian, paid for the cemetery plot and stone (which was inscribed "Beloved Father"), while Hope Lugosi paid for the coffin and the funeral service. Lugosi's will left several inexpensive pieces of real estate in Elsinore and only $1,000 cash to his son, but since the will had been written on January 12, 1954 (before Lugosi's fifth marriage), Bela Jr. had to share the thousand dollars evenly with Hope Lugosi. Hope later gave most of Lugosi's personal belongings and memorabilia to Bela's young neighborhood friend Richard Sheffield, who gave Lugosi's duplicate Dracula cape to Bela Jr. and sold some of the other items to Forrest J. Ackerman. Hope told Sheffield she had searched the apartment for several days looking for $3,000 she suspected Lugosi had hidden there, but she never found it; Sheffield said years later, "Lugosi had probably spent it all on alcohol." Hope later moved to Hawaii, where she worked for many years as a caregiver in a leper colony (Arthur Lennig, The Immortal Count, University Press of Kentucky, 2003). She died in Hawaii in 1997, at age 78, having never remarried; before her death, she gave several rather downbeat interviews to the fan press.

California Supreme Court decision on personality rights

In 1979, the Lugosi v. Universal Pictures decision by the California Supreme Court held that Lugosi's personality rights could not pass to his heirs, as a copyright would have. The court ruled that under California law any rights of publicity, including the right to his image, terminated with Lugosi's death.

Legacy

In Tim Burton's Ed Wood, Bela Lugosi is portrayed by Martin Landau, who received the 1994 Academy Award for Best Supporting Actor for the performance. According to Bela G. Lugosi (his son), Forrest Ackerman, Dolores Fuller and Richard Sheffield, the film's portrayal of Lugosi is inaccurate: in real life, he never used profanity, did not hate Karloff, owned no small dogs, and did not sleep in a coffin. Also, Ed Wood did not meet Lugosi in a funeral parlor, but rather through his roommate Alex Gordon. An episode of Sledge Hammer! titled "Last of the Red Hot Vampires" was an homage to Bela Lugosi; at the end of the episode, it was dedicated to "Mr. Blasko". In 2001, BBC Radio 4 broadcast There Are Such Things by Steven McNicoll and Mark McDonnell. Focusing on Lugosi and his well-documented struggle to escape from the role that had typecast him, the play went on to receive the Hamilton Deane Award for best dramatic presentation from the Dracula Society in 2002. On July 19, 2003, German artist Hartmut Zech erected a bust of Lugosi on one of the corners of Vajdahunyad Castle in Budapest. The Ellis Island Immigration Museum in New York City features a live 30-minute play that focuses on Lugosi's illegal entry into the country via New Orleans and his arrival at Ellis Island months later to enter the country legally.
The cape Lugosi wore in Dracula (1931) was in the possession of his son until it was put up for auction in 2011. It was expected to sell for up to $2 million, but was listed again by Bonhams in 2018. In 2019, the Academy Museum of Motion Pictures announced its acquisition of the cape via a partial donation from the Lugosi family, with plans to put it on display the following year. Péter Müller's theatrical play Lugosi – the Shadow of the Vampire is based on Lugosi's life, telling the story of how he became typecast as Dracula while his drug addiction worsened. In the Hungarian production, directed by István Szabó, Lugosi was played by Ivan Darvas. Andy Warhol's 1963 silkscreen The Kiss depicts Lugosi from Dracula about to bite into the neck of co-star Helen Chandler, who played Mina Harker; a copy sold for $798,000 at Christie's in May 2000. In 1979, the song "Bela Lugosi's Dead" was released by UK post-punk band Bauhaus; it is widely considered a pioneering song of the goth music genre. On choosing the topic of the song, the band's bassist David J remarked: "It's so weird you should say that, because I've got this lyric about Bela Lugosi, the actor who played a vampire. There was a season of old horror films on TV and I was telling Daniel about how much I loved them. The one that had been on the night before was Dracula [1931]. I was saying how Bela Lugosi was the quintessential Dracula, the elegant depiction of the character." Bela Lugosi and Boris Karloff are referenced in Curtis Stigers' song "Sleeping with the Lights On", from the 1991 album Curtis Stigers. Lugosi's star on the Hollywood Walk of Fame is mentioned in "Celluloid Heroes", a song performed by The Kinks and written by their lead vocalist and principal songwriter, Ray Davies; it appeared on their 1972 album Everybody's in Show-Biz. In 2013, the Hungarian electronic music band Žagar recorded a song entitled "Mr. Lugosi", which contains a recording of Bela Lugosi's voice; the song was part of the record Light Leaks. According to Paru Itagaki, the creator of the Japanese manga/anime Beastars, the main character Legoshi was inspired by Bela Lugosi (hence the similar-sounding name). In 2020, Legendary Comics published an adaptation of Bram Stoker's 1897 Dracula novel which used the likeness of Lugosi. A 2021 hardcover graphic novel depicting the life of Bela Lugosi, written and drawn by Koren Shadmi, is entitled Lugosi: The Rise and Fall of Hollywood's Dracula.

Further reading

Ed Wood's Bride of the Monster by Gary D. Rhodes and Tom Weaver (2015), BearManor Media
Tod Browning's Dracula by Gary D. Rhodes (2015), Tomahawk Press
Bela Lugosi In Person by Bill Kaffenberger and Gary D. Rhodes (2015), BearManor Media
No Traveler Returns: The Lost Years of Bela Lugosi by Bill Kaffenberger and Gary D. Rhodes (2012), BearManor Media
Bela Lugosi: Dreams and Nightmares by Gary D. Rhodes, with Richard Sheffield (2007), Collectables/Alpha Video Publishers (hardcover)
Lugosi: His Life on Film, Stage, and in the Hearts of Horror Lovers by Gary D. Rhodes (2006), McFarland & Company
The Immortal Count: The Life and Films of Bela Lugosi by Arthur Lennig (2003) (hardcover)
Bela Lugosi (Midnight Marquee Actors Series) by Gary Svehla and Susan Svehla (1995) (paperback)
Bela Lugosi: Master of the Macabre by Larry Edwards (1997) (paperback)
Films of Bela Lugosi by Richard Bojarski (1980) (hardcover)
Sinister Serials of Boris Karloff, Bela Lugosi and Lon Chaney, Jr. by Leonard J. Kohl (2000) (paperback)
Vampire over London: Bela Lugosi in Britain by Frank J. Dello Stritto and Andi Brooks (2000) (hardcover)
Lugosi: The Man Behind the Cape by Robert Cremer (1976) (hardcover)
Bela Lugosi: Biografia di una metamorfosi by Edgardo Franzosini (1998)
Lugosi: The Rise and Fall of Hollywood's Dracula by Koren Shadmi (Life Drawn graphic novel) (2021)
2,263
5,132
https://en.wikipedia.org/wiki/Charlize%20Theron
Charlize Theron
Charlize Theron (born 7 August 1975) is a South African and American actress and producer. One of the world's highest-paid actresses, she is the recipient of various accolades, including an Academy Award and a Golden Globe Award. In 2016, Time named her one of the 100 most influential people in the world. Theron came to international prominence in the 1990s by playing the leading lady in the Hollywood films The Devil's Advocate (1997), Mighty Joe Young (1998), and The Cider House Rules (1999). She received critical acclaim for her portrayal of serial killer Aileen Wuornos in Monster (2003), for which she won the Silver Bear and the Academy Award for Best Actress, becoming the first South African to win an acting Oscar. She received another Academy Award nomination for playing a sexually abused woman seeking justice in the drama North Country (2005). Theron has starred in several commercially successful action films, including The Italian Job (2003), Hancock (2008), Snow White and the Huntsman (2012), Prometheus (2012), Mad Max: Fury Road (2015), The Fate of the Furious (2017), Atomic Blonde (2017), The Old Guard (2020) and F9 (2021). She received praise for playing troubled women in Jason Reitman's comedy-dramas Young Adult (2011) and Tully (2018), and for portraying Megyn Kelly in the biographical drama Bombshell (2019), receiving a third Academy Award nomination for the lattermost. Since the early 2000s, Theron has ventured into film production with her company Denver and Delilah Productions. She has produced numerous films, in many of which she had a starring role, including The Burning Plain (2008), Dark Places (2015), and Long Shot (2019). Theron became an American citizen in 2007, while retaining her South African citizenship. She has been honoured with a motion picture star on the Hollywood Walk of Fame.

Early life

Theron was born in Benoni, in the Transvaal Province (Gauteng Province since 1994) of South Africa, on 7 August 1975. She is the only child of road constructionists Gerda (née Maritz) and Charles Theron (27 November 1947 – 21 June 1991). The Second Boer War military leader Danie Theron was her great-grand-uncle. She is from an Afrikaner family, and her ancestry includes Dutch as well as French and German; her French forebears were early Huguenots in South Africa. "Theron" is an Occitan surname, originally spelled Théron. She grew up on her parents' farm in Benoni, near Johannesburg. On 21 June 1991, Theron's father, an alcoholic, threatened both the teenaged Charlize and her mother while drunk, physically attacking her mother and firing a gun at both of them. Theron's mother retrieved her own handgun, shot back and killed him. The shooting was legally adjudged to have been self-defense, and her mother faced no charges. Theron attended Putfontein Primary School (Laerskool Putfontein), a period during which she has said she was not "fitting in". She was frequently unwell with jaundice throughout childhood, and the antibiotics she was administered made her upper incisor milk teeth rot (they had to be surgically removed); her permanent teeth did not grow in until she was roughly ten years old. At 13, Theron was sent to boarding school and began her studies at the National School of the Arts in Johannesburg. She has said about her early life in her home country: "I grew up as an only child in South Africa, and there was turmoil in my family, but the surroundings were so great.
I was usually barefoot in the dirt: no Game Boys, no computers, and we had sanctions, so there were no concerts. This meant you had to entertain yourself." Although Theron is fluent in English, her first language is Afrikaans.

Career

Early work (1991–1996)

Although she saw herself as a dancer, at age 16 Theron won a one-year modelling contract at a local competition in Salerno and moved with her mother to Milan, Italy. After Theron spent a year modelling throughout Europe, she and her mother moved to the United States, living in both New York City and Miami. In New York, she attended the Joffrey Ballet School, where she trained as a ballet dancer until a knee injury closed this career path. In 1994, Theron flew to Los Angeles on a one-way ticket her mother had bought for her, intending to work in the film industry. During her initial months there, she lived in a motel on the $300 budget her mother had given her; she continued receiving cheques from New York and lived "from paycheck to paycheck", to the point of stealing bread from a basket in a restaurant to survive. One day, she went to a Hollywood Boulevard bank to cash a few cheques, including one her mother had sent to help with the rent, but it was rejected because it was out-of-state and she was not an American citizen. Theron argued and pleaded with the bank teller until talent agent John Crosby, the next customer in line behind her, cashed it for her and gave her his business card. Crosby introduced Theron to an acting school, and in 1995 she played her first, non-speaking role in the horror film Children of the Corn III: Urban Harvest. Her first speaking role was the hitwoman Helga Svelgen in 2 Days in the Valley (1996); despite the film's mixed reviews, Theron drew attention for her beauty and for the scene in which she fought Teri Hatcher's character. Theron feared being typecast as characters similar to Helga and recalled being asked to repeat her performance in the film during auditions: "A lot of people were saying, 'You should just hit while the iron's hot' [...] But playing the same part over and over doesn't leave you with any longevity. And I knew it was going to be harder for me, because of what I look like, to branch out to different kinds of roles". When auditioning for Showgirls, Theron was introduced to talent agent J. J. Harris by co-casting director Johanna Ray. She recalled being surprised at how much faith Harris had in her potential and referred to Harris as her mentor. Harris would find scripts and films for Theron in a variety of genres and encouraged her to become a producer. She remained Theron's agent for over 15 years, until Harris's death.

Breakthrough (1997–2002)

Larger roles in widely released Hollywood films followed, and her career expanded by the end of the 1990s. In the horror drama The Devil's Advocate (1997), credited as her break-out film, Theron starred alongside Keanu Reeves and Al Pacino as the haunted wife of an unusually successful lawyer. She subsequently starred in the adventure film Mighty Joe Young (1998) as the friend and protector of a giant mountain gorilla, and in the drama The Cider House Rules (1999) as a woman who seeks an abortion in World War II-era Maine. While Mighty Joe Young flopped at the box office, The Devil's Advocate and The Cider House Rules were commercially successful. She was on the cover of the January 1999 issue of Vanity Fair as the "White Hot Venus".
She appeared on the cover of the May 1999 issue of Playboy magazine, in photos taken several years earlier when she was an unknown model; Theron unsuccessfully sued the magazine for publishing them without her consent. In the early 2000s, Theron continued to take on roles steadily in films such as Reindeer Games (2000), The Yards (2000), The Legend of Bagger Vance (2000), Men of Honor (2000), Sweet November (2001), The Curse of the Jade Scorpion (2001), and Trapped (2002), all of which, despite achieving only limited commercial success, helped to establish her as an actress. On this period in her career, Theron remarked: "I kept finding myself in a place where directors would back me but studios didn't. [I began] a love affair with directors, the ones I really, truly admired. I found myself making really bad movies, too. Reindeer Games was not a good movie, but I did it because I loved [director] John Frankenheimer."

Worldwide recognition and critical success (2003–2008)

Theron starred as a safe and vault technician in the 2003 heist film The Italian Job, an American homage to and remake of the 1969 British film of the same name, directed by F. Gary Gray and co-starring Mark Wahlberg, Edward Norton, Jason Statham, Seth Green, and Donald Sutherland. The film was a box office success, grossing US$176 million worldwide. In Monster (2003), Theron portrayed serial killer Aileen Wuornos, a former prostitute executed in Florida in 2002 for killing six men in the late 1980s and early 1990s (she was not tried for a seventh murder); film critic Roger Ebert felt that Theron gave "one of the greatest performances in the history of the cinema". For her portrayal, she was awarded the Academy Award for Best Actress at the 76th Academy Awards in February 2004, as well as the Screen Actors Guild Award and the Golden Globe Award. She is the first South African to win an Oscar for Best Actress. The Oscar win pushed her onto The Hollywood Reporter's 2006 list of the highest-paid actresses in Hollywood, earning up to US$10 million for a film; she ranked seventh. AskMen named her the number one most desirable woman of 2003. For her role as Swedish actress and singer Britt Ekland in the 2004 HBO film The Life and Death of Peter Sellers, Theron garnered Golden Globe Award and Primetime Emmy Award nominations. In 2005, she portrayed Rita, the mentally challenged love interest of Michael Bluth (Jason Bateman), in the third season of Fox's television series Arrested Development, and starred in the financially unsuccessful science fiction thriller Æon Flux; for her voice-over work in the Æon Flux video game, she received a Spike Video Game Award for Best Performance by a Human Female. In the critically acclaimed drama North Country (2005), Theron played a single mother and iron mine worker experiencing sexual harassment. David Rooney of Variety wrote: "The film represents a confident next step for lead Charlize Theron. Though the challenges of following a career-redefining Oscar role have stymied actresses, Theron segues from Monster to a performance in many ways more accomplished [...] The strength of both the performance and character anchor the film firmly in the tradition of other dramas about working-class women leading the fight over industrial workplace issues, such as Norma Rae or Silkwood." Roger Ebert echoed the same sentiment, calling her "an actress who has the beauty of a fashion model but has found resources within herself for these powerful roles about unglamorous women in the world of men."
For her performance, she received Academy Award and Golden Globe Award nominations for Best Actress. Ms. magazine honoured her for this performance with a feature article in its Fall 2005 issue. On 30 September 2005, Theron received a star on the Hollywood Walk of Fame. In 2007, Theron played a police detective in the critically acclaimed crime film In the Valley of Elah, and produced and starred as a reckless, slatternly mother in the drama film Sleepwalking, alongside Nick Stahl and AnnaSophia Robb. The Christian Science Monitor praised the latter film, commenting that "Despite its deficiencies, and the inadequate screen time allotted to Theron (who's quite good), Sleepwalking has a core of feeling". In 2008, Theron starred as a woman who had faced a traumatic childhood in the drama The Burning Plain, directed by Guillermo Arriaga and co-starring Jennifer Lawrence and Kim Basinger, and played the ex-wife of an alcoholic superhero alongside Will Smith in the superhero film Hancock. The Burning Plain received only a limited release in US theatres but grossed $5,267,917 outside the US; Hancock made US$624.3 million worldwide. Also in 2008, Theron was named the Hasty Pudding Theatricals Woman of the Year, and was asked to be a UN Messenger of Peace by UN Secretary-General Ban Ki-moon.

Career hiatus and fluctuations (2009–2016)

Her film releases in 2009 were the post-apocalyptic drama The Road, in which she briefly appears in flashbacks, and the animated film Astro Boy, for which she voiced a character. On 4 December 2009, Theron co-presented the draw for the 2010 FIFA World Cup in Cape Town, South Africa, accompanied by several other celebrities of South African nationality or ancestry. During rehearsals she drew an Ireland ball instead of France as a joke at the expense of FIFA, referring to Thierry Henry's handball controversy in the play-off match between France and Ireland; the stunt alarmed FIFA enough for it to fear she might do it again in front of a live global audience. Following a two-year hiatus from films, Theron returned to the spotlight in 2011 with the black comedy Young Adult. Directed by Jason Reitman, the film earned critical acclaim, particularly for her performance as a depressed, divorced, alcoholic 37-year-old ghostwriter. Richard Roeper awarded the film an A grade, stating "Charlize Theron delivers one of the most impressive performances of the year". She was nominated for a Golden Globe Award and several other awards, and Roger Ebert called her one of the best actors working today. In 2019, Theron spoke about her method of working on roles. Creating a physical identity together with the emotional part of the character, she said, is "a great tool set that adds on to everything else you were already doing as an actor. It's a case-by-case thing, but there is, to me, this beautiful thing that happens when you can get both sides: the exterior and interior. It's a really powerful dynamic". When preparing for a role, "I almost treat it like studying. I will find space where I am alone, where I can be focused, where there's nobody in my house, and I can really just sit down and study and play and look at my face and hear my voice and walk around and be a fucking idiot and my dogs are the only ones who are seeing that". In 2012, Theron took on the role of the villain in two big-budget films.
She played the evil queen Ravenna, Snow White's stepmother, in Snow White and the Huntsman, opposite Kristen Stewart and Chris Hemsworth, and appeared as a crew member with a hidden agenda in Ridley Scott's Prometheus. Mick LaSalle of the San Francisco Chronicle found Snow White and the Huntsman to be "[a] slow, boring film that has no charm and is highlighted only by a handful of special effects and Charlize Theron's truly evil queen", while The Hollywood Reporter writer Todd McCarthy, describing her role in Prometheus, asserted: "Theron is in ice goddess mode here, with the emphasis on ice [...] but perfect for the role all the same". Both films were major box office hits, each grossing around US$400 million internationally. The following year, Vulture/NYMag named her the 68th Most Valuable Star in Hollywood, saying: "We're just happy that Theron can stay on the list in a year when she didn't come out with anything [...] any actress who's got that kind of skill, beauty, and ferocity ought to have a permanent place in Hollywood". On 10 May 2014, Theron hosted Saturday Night Live on NBC. In 2014, Theron took on the role of the wife of an infamous outlaw in the western comedy film A Million Ways to Die in the West, directed by Seth MacFarlane, which was met with mediocre reviews and moderate box office returns. In 2015, Theron played the sole survivor of the massacre of her family in the film adaptation of the Gillian Flynn novel Dark Places, directed by Gilles Paquet-Brenner, on which she had a producer credit, and starred as Imperator Furiosa opposite Tom Hardy in Mad Max: Fury Road (2015). Mad Max received widespread critical acclaim, with particular praise for the commanding nature of Theron's character; the film made US$378.4 million worldwide. She next reprised her role as Queen Ravenna in the 2016 film The Huntsman: Winter's War, a sequel to Snow White and the Huntsman, which was a critical and commercial failure. In 2016, Theron starred as a physician and activist working in West Africa in the little-seen romantic drama The Last Face, with Sean Penn, voiced a character in the 3D stop-motion fantasy film Kubo and the Two Strings, and produced the independent drama Brain on Fire. That year, Time named her in the Time 100 list of the most influential people in the world.

Resurgence (2017–present)

In 2017, Theron starred in The Fate of the Furious as the cyberterrorist Cipher, the franchise's main antagonist, and played a spy on the eve of the collapse of the Berlin Wall in 1989 in Atomic Blonde, an adaptation of the graphic novel The Coldest City, directed by David Leitch. The Fate of the Furious had a worldwide gross of US$1.2 billion, and Atomic Blonde was described by Richard Roeper of the Chicago Sun-Times as "a slick vehicle for the magnetic, badass charms of Charlize Theron, who is now officially an A-list action star on the strength of this film and Mad Max: Fury Road". In the black comedy Tully (2018), directed by Jason Reitman and written by Diablo Cody, Theron played an overwhelmed mother of three. The film was acclaimed by critics, who concluded it "delves into the modern parenthood experience with an admirably deft blend of humor and raw honesty, brought to life by an outstanding performance by Charlize Theron". She played the president of a pharmaceutical company in the crime film Gringo and produced the biographical war drama film A Private War, both released in 2018.
In 2019, Theron produced and starred in the romantic comedy film Long Shot, opposite Seth Rogen and directed by Jonathan Levine, portraying a U.S. Secretary of State who reconnects with a journalist she used to babysit. The film had its world premiere at South by Southwest in March 2019 and was released on 3 May 2019 to positive reviews from film critics. Theron next starred as Megyn Kelly in the drama Bombshell, which she co-produced. Directed by Jay Roach, the film revolves around the sexual harassment allegations made against Fox News CEO Roger Ailes by former female employees. For her work in the film, Theron was nominated for the Academy Award for Best Actress, the Golden Globe Award for Best Actress in a Motion Picture – Drama, the Critics' Choice Movie Award for Best Actress, the Screen Actors Guild Award for Outstanding Performance by a Female Actor in a Leading Role, and the BAFTA Award for Best Actress in a Leading Role. That same year, Forbes ranked her as the ninth highest-paid actress in the world, with an annual income of $23 million. In 2020, she produced and starred opposite KiKi Layne in The Old Guard, directed by Gina Prince-Bythewood. The following year, she reprised her role as Cipher in F9, originally set for release on 22 May 2020 before its delay to June 2021 due to the COVID-19 pandemic. Upon the release of Doctor Strange in the Multiverse of Madness in May 2022, it was revealed that Theron would be portraying the character Clea in the Marvel Cinematic Universe (MCU), beginning with her debut in that superhero film's mid-credits scene. She played Lady Lesso in the fantasy Netflix film The School for Good and Evil (2022), and made a cameo in the season 3 opener of The Boys as an actress playing Stormfront.

Other ventures

Activism

The Charlize Theron Africa Outreach Project (CTAOP) was created by Theron in 2007 in an effort to support African youth in the fight against HIV/AIDS. The project is committed to supporting community-engaged organizations that address the key drivers of the disease. Although the geographic scope of CTAOP is Sub-Saharan Africa, its work has concentrated primarily on Theron's home country of South Africa. By November 2017, CTAOP had raised more than $6.3 million to support African organizations working on the ground. In 2008, Theron was named a United Nations Messenger of Peace. In his citation, Ban Ki-moon said of Theron: "You have consistently dedicated yourself to improving the lives of women and children in South Africa, and to preventing and stopping violence against women and girls". In 2014, she recorded a public service announcement for the UN's Stop Rape Now program. In December 2009, CTAOP and Toms Shoes partnered to create a limited-edition unisex shoe. The shoe was made from vegan materials and inspired by the African baobab tree, the silhouette of which was embroidered on blue and orange canvas. Ten thousand pairs were given to destitute children, and a portion of the proceeds went to CTAOP. In 2020, CTAOP partnered with Parfums Christian Dior to create Dior Stands With Women, an initiative that includes Cara Delevingne, Yalitza Aparicio, Leona Bloom, Paloma Elsesser, and others, to encourage women to be assertive by documenting their journeys, challenges and accomplishments. Theron is involved in women's rights organizations and has marched in pro-choice rallies.
Theron is a supporter of same-sex marriage and attended a march and rally in support of it in Fresno, California, on 30 May 2009. She publicly stated that she refused to get married until same-sex marriage became legal in the United States, saying: "I don't want to get married because right now the institution of marriage feels very one-sided, and I want to live in a country where we all have equal rights. I think it would be exactly the same if we were married, but for me to go through that kind of ceremony, because I have so many friends who are gays and lesbians who would so badly want to get married, that I wouldn't be able to sleep with myself". Theron further elaborated on her stance in a June 2011 interview on Piers Morgan Tonight: "I do have a problem with the fact that our government hasn't stepped up enough to make this federal, to make [gay marriage] legal. I think everybody has that right". In March 2014, CTAOP was among the charities that benefited from the annual Fame and Philanthropy fundraising event on the night of the 86th Academy Awards, where Theron was an honoured guest along with Halle Berry and keynote speaker James Cameron. In 2015, Theron signed an open letter, for which the ONE Campaign had been collecting signatures, addressed to Angela Merkel and Nkosazana Dlamini-Zuma as heads of the G7 in Germany and the AU in South Africa respectively; it urged them to focus on women as they set the priorities in development funding ahead of a major UN summit in September 2015 that would establish new development goals for the generation. In August 2017, she visited South Africa with Trevor Noah and made a donation to the South African charity Life Choices. In 2018, she gave a speech about AIDS prevention at the 22nd International AIDS Conference in Amsterdam, organized by the International AIDS Society. Since 2008, Theron has served as a United Nations Messenger of Peace. On 22 June 2022, it was announced that Theron and Sheryl Lee Ralph would receive the Elizabeth Taylor Commitment to End AIDS Award for their commitment to raising awareness of HIV at the Elizabeth Taylor Ball to End AIDS fundraising gala.

Endorsements

Having signed a deal with John Galliano in 2004, Theron replaced Estonian model Tiiu Kuik as the spokeswoman in the J'Adore advertisements by Christian Dior. In 2018, she appeared in a new advertisement for Dior J'adore. From October 2005 to December 2006, Theron earned US$3 million for the use of her image in a worldwide print media advertising campaign for Raymond Weil watches. In February 2006, she and her production company were sued by Weil for breach of contract; the lawsuit was settled on 4 November 2008. In 2018, Theron joined Brad Pitt, Daniel Wu and Adam Driver as brand ambassadors for Breitling, dubbed the Breitling Cinema Squad.

Personal life

In 2007, Theron became a naturalised citizen of the United States, while retaining her South African citizenship. Theron has adopted two children: a daughter, Jackson, in March 2012, and another daughter in July 2015. She has been interested in adoption since childhood, when she became aware of orphanages and the overflowing numbers of children in them. In April 2019, Theron revealed that Jackson, then seven years old, is a transgender girl. She said of her daughters: "They were born who they are and exactly where in the world both of them get to find themselves as they grow up, and who they want to be, is not for me to decide".
She is inspired by actresses Susan Sarandon and Sigourney Weaver. She has described her admiration for Tom Hanks as a "love affair" and watched many of his films throughout her youth. Hollywood actors were not featured in magazines in South Africa, so she did not know how famous he was until she moved to the United States, which has been suggested as a factor in her "down-to-earth" attitude to fame. After filming for That Thing You Do! finished, Theron got Hanks' autograph on her script. She later presented him with his Cecil B. DeMille Award in 2020, at which Hanks revealed that he had admired Theron's career since the day he met her. Theron said in 2018 that she went to therapy in her thirties because of anger, discovering that it stemmed from her frustration at growing up during South Africa's apartheid era, which ended when she was 15. Theron is a longtime fan of the English band Depeche Mode, and was the presenter for their Rock and Roll Hall of Fame induction in 2020.
Relationships
Theron's first public relationship was with actor Craig Bierko, whom she dated from 1995 to 1997. Theron was in a three-year relationship with singer Stephan Jenkins until October 2001. Some of Third Eye Blind's third album, Out of the Vein, explores the emotions Jenkins experienced as a result of their breakup. Theron began a relationship with Irish actor Stuart Townsend in 2001 after meeting him on the set of Trapped. The couple lived together in Los Angeles and Ireland, and split up in late 2009. In December 2013, Theron began dating American actor Sean Penn. The relationship ended in June 2015.
Health concerns
Theron often quips that she suffers more injuries on the sets of films that are not action films; however, while filming Æon Flux in Berlin, Theron suffered a herniated disc in her neck, caused by a fall while filming a series of back handsprings. It required her to wear a neck brace for a month. She tore a thumb ligament during a fight scene in The Old Guard, when her thumb caught in another actor's jacket; the injury required three operations and six months in a thumb brace. She sustained no major injuries during the filming of Atomic Blonde but broke teeth from clenching her jaw and had dental surgery to remove them: "I had the removal and I had to put a donor bone in there to heal until I came back, and then I had another surgery to put a metal screw in there." Outside of action films, she suffered a herniated disc in her lower back while filming Tully and experienced a depression-like state, which she theorised was the result of the processed food she had to eat for her character's post-natal body. In July 2009, she was diagnosed with a serious stomach virus, thought to have been contracted while overseas. While filming The Road, Theron injured her vocal cords screaming during the labour scenes. When promoting Long Shot, she revealed that she had laughed so hard at Borat that her neck locked for five days. She added that on the set of Long Shot she "ended up in the ER" after knocking her head on a bench behind her while putting on knee pads.
Filmography and accolades
As of early 2020, Theron's film work had earned her 100 award nominations and 39 wins.
Communication
Communication is usually defined as the transmission of information. The term can also refer to the message itself or the field of inquiry studying these transmissions, also known as communication studies. There are some disagreements about the precise definition of communication, for example, whether unintentional or failed transmissions are also included and whether communication not only transmits meaning but also creates it. Models of communication aim to provide a simplified overview of its main components and their interaction. Many models include the idea that a source uses a coding system to express information in the form of a message. The source uses a channel to send the message to a receiver who has to decode it in order to understand its meaning. Channels are usually discussed in terms of the senses used to perceive the message, like hearing, sight, smell, touch, and taste. Communication can be classified based on whether information is exchanged between humans, members of other species, or non-living entities such as computers. For human communication, an important distinction is between verbal and non-verbal communication. Verbal communication involves the exchange of messages in linguistic form. This can happen through natural languages, like English or Japanese, or through artificial languages, like Esperanto. Verbal communication includes spoken and written messages as well as the use of sign language. Non-verbal communication happens without the use of a linguistic system. There are many forms of non-verbal communication, for example, using body language, body position, touch, and intonation. Another important distinction is between interpersonal and intrapersonal communication. Interpersonal communication happens between distinct individuals, such as greeting someone on the street or making a phone call. Intrapersonal communication, on the other hand, refers to communication with oneself. This can happen internally, as a form of inner dialog or daydreaming, or externally, for example, when writing down a shopping list or engaging in a monologue. Non-human forms of communication include animal and plant communication. Researchers in this field often formulate additional criteria for their definition of communicative behavior, like the requirement that the behavior serves a beneficial function for natural selection or that a response to the message is observed. Animal communication plays important roles for various species in the areas of courtship and mating, parent-offspring relations, social relations, navigation, self-defense, and territoriality. In the area of courtship and mating, for example, communication is used to identify and attract potential mates. An often-discussed example concerning navigational communication is the waggle dance used by bees to indicate to other bees where flowers are located. Due to the rigid cell walls of plants, their communication often happens through chemical means rather than movement. For example, various plants, like maple trees, release so-called volatile organic compounds into the air to warn other plants of a herbivore attack. Most communication takes place between members of the same species since its purpose is usually some form of cooperation, which is not as common between species. However, there are also forms of interspecies communication, mainly in cases of symbiotic relationships.
For example, many flowers use symmetrical shapes and colors that stand out from their surroundings in order to communicate to insects where nectar is located to attract them. Humans also practice interspecies communication, for example, when interacting with pets. The field of communication includes various other issues, like communicative competence and the history of communication. Communicative competence refers to the ability to communicate well and applies both to the capability to formulate messages and to understand them. Two central aspects are that the communicative behavior is effective, i.e. that it achieves the individual's goal, and that it is appropriate, i.e. that it follows social standards and expectations. Human communication has a long history and how people exchange information has changed over time. These changes were usually triggered by the development of new communication technologies, such as the invention of writing systems (first pictographic and later alphabetic), the development of mass printing, the use of radio and television, and the invention of the internet.
Definitions
The word "communication" has its root in the Latin verb "communicare", which means "to share" or "to make common". Communication is usually understood as the transmission of information. In this regard, a message is conveyed from a sender to a receiver using some form of medium, such as sound, paper, bodily movements, or electricity. In a different sense, the term "communication" can also refer just to the message that is being communicated or to the field of inquiry studying such transmissions. There is a lot of disagreement concerning the precise characterization of communication, and various scholars have raised doubts that any single definition can capture the term accurately. These difficulties come from the fact that the term is applied to diverse phenomena in different contexts, often with slightly different meanings. Despite these problems, the question of the right definition is of great theoretical importance since it affects the research process on all levels. This includes issues like which empirical phenomena are observed, how they are categorized, which hypotheses and laws are formulated, as well as how systematic theories based on these steps are articulated. Some theorists give very broad definitions of communication that encompass unconscious and non-human behavior. In this regard, many animals communicate within their own species and even plants like flowers may be said to communicate by attracting bees. Other researchers restrict communication to conscious interactions among human beings. Some definitions focus on the use of symbols and signs while others emphasize the role of understanding, interaction, power, or transmission of ideas. Various characterizations see the communicator's intent to send a message as a central component. On this view, the transmission of information is not sufficient for communication if it happens unintentionally. An important version of this view is given by Paul Grice, who identifies communication with actions that aim to make the recipient aware of the communicator's intention. One question in this regard is whether only the successful transmission of information should be regarded as communication. For example, distortion may interfere and change the actual message from what was originally intended. A closely related problem is whether acts of deliberate deception constitute communication.
According to an influential and broad definition by I. A. Richards, communication happens when one mind acts upon its environment in order to transmit its own experience to another mind. Another important characterization is due to Claude Shannon and Warren Weaver. On their view, communication involves the interaction of several components, such as a source, a message, an encoder, a channel, a decoder, and a receiver. Various contemporary scholars hold that communication is not just about the transmission of information but also about creating meaning. This way, communication shapes the participants' experience by conceptualizing the world and making sense of their environment and themselves. In regard to animal and plant communication, researchers focus less on meaning-making but often include additional requirements in their definition, for example, that the communicative behavior plays a beneficial role in natural selection or that some kind of response to the message is observed. The paradigmatic form of communication happens between two or several individuals. However, it can also take place on a larger level, for example, between organizations, social classes, or nations. Niklas Luhmann rejects the view that communication is, on its most fundamental level, an interaction between two distinct parties. Instead, he holds that "only communication can communicate" and tries to provide a conceptualization in terms of autopoietic systems without any reference to consciousness or life.
Models of communication
Models of communication are conceptual representations of the process of communication. Their goal is to provide a simplified overview of its main components. This makes it easier for researchers to formulate hypotheses, apply communication-related concepts to real-world cases, and test predictions. However, it is often argued that many models lack the conceptual complexity needed for a comprehensive understanding of all the essential aspects of communication. They are usually presented visually in the form of diagrams showing various basic components and their interaction. Models of communication are often categorized based on their intended applications and how they conceptualize communication. Some models are general in the sense that they are intended for all forms of communication. They contrast with specialized models, which aim to describe only certain forms of communication, like models of mass communication. An influential classification distinguishes between linear transmission models, interaction models, and transaction models. Linear transmission models focus on how a sender transmits information to a receiver. They are linear because this flow of information only goes in one direction. This view is rejected by interaction models, which include a feedback loop. Feedback is required to describe many forms of communication, such as a regular conversation, where the listener may respond by expressing their opinion on the issue or by asking for clarification. For interaction models, communication is a two-way process in which the communicators take turns in sending and receiving messages. Transaction models further refine this picture by allowing sending and responding to happen at the same time. This modification is needed, for example, to describe how the listener in a face-to-face conversation gives non-verbal feedback through their body posture and their facial expressions while the other person is talking.
Transaction models also hold that meaning is produced during communication and does not exist independent of it. All the early models, developed in the middle of the 20th century, are linear transmission models. Lasswell's model, for example, is based on five fundamental questions: "Who?", "Says What?", "In What Channel?", "To Whom?", and "With What Effect?". The goal of these questions is to identify the basic components involved in the communicative process: the sender, the message, the channel, the receiver, and the effect. Lasswell's model was initially only conceived as a model of mass communication, but it has been applied to various other fields as well. Some theorists have expanded it by including additional questions, like "Under What Circumstances?" and "For What Purpose?". The Shannon–Weaver model is another influential linear transmission model. It is based on the idea that a source creates a message, which is then translated into a signal by a transmitter. Noise may interfere and distort the signal. Once the signal reaches the receiver, it is translated back into a message and made available to the destination. For a landline telephone call, the person calling is the source and their telephone is the transmitter. It translates the message into an electrical signal that travels through the wire, which acts as the channel. The person taking the call is the destination and their telephone is the receiver. The Shannon–Weaver model includes an in-depth discussion of how noise can distort the signal and how successful communication can be achieved despite noise. This can happen, for example, by making the message partially redundant so that decoding is possible nonetheless. Other influential linear transmission models include Gerbner's model and Berlo's model. The earliest interaction model is due to Wilbur Schramm. For him, communication starts when a source has an idea and expresses it in the form of a message. This process is called encoding and happens using a code, i.e. a sign system that is able to express the idea, for example, through visual or auditory signs. The message is sent to a destination, who has to decode and interpret it in order to understand it. In response, they formulate their own idea, encode it into a message and send it back as a form of feedback. Another important innovation of Schramm's model is the idea that previous experience is necessary to be able to encode and decode messages. For communication to be successful, the fields of experience of source and destination have to overlap. The first transactional model was proposed by Dean Barnlund. He understands communication as "the production of meaning, rather than the production of messages". Its goal is to decrease uncertainty and arrive at a shared understanding. This happens in response to external and internal cues. Decoding is the process of ascribing meaning to them and encoding consists in producing new behavioral cues as a response.
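The Shannon–Weaver idea that redundancy makes decoding possible despite noise can be illustrated with a short simulation. The sketch below is a minimal illustration rather than part of the model itself: it encodes text as bits, flips each bit with a small probability to represent noise in the channel, and uses a threefold repetition code with majority voting as one simple redundancy scheme (the message, noise level, and repetition factor are arbitrary choices).

```python
import random

def encode(message: str) -> list[int]:
    """Source and transmitter: turn text into a bit sequence (the signal)."""
    return [int(b) for ch in message for b in format(ord(ch), "08b")]

def add_redundancy(bits: list[int], k: int = 3) -> list[int]:
    """Repeat every bit k times so the receiver can outvote random noise."""
    return [b for bit in bits for b in [bit] * k]

def channel(bits: list[int], noise: float) -> list[int]:
    """Channel: flip each bit with probability `noise` (the distortion)."""
    return [b ^ 1 if random.random() < noise else b for b in bits]

def decode(bits: list[int], k: int = 3) -> str:
    """Receiver: majority-vote each k-bit group, then rebuild the characters."""
    voted = [int(sum(bits[i:i + k]) > k // 2) for i in range(0, len(bits), k)]
    chars = ["".join(map(str, voted[i:i + 8])) for i in range(0, len(voted), 8)]
    return "".join(chr(int(c, 2)) for c in chars)

message = "hello"
received = decode(channel(add_redundancy(encode(message)), noise=0.05))
print(received)  # usually "hello": redundancy lets decoding succeed despite noise
```

Setting k to 1 removes the redundancy, and the same noise level then corrupts the decoded message far more often, which is exactly the trade-off the model describes.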
Human
There are many forms of human communication. Important distinctions concern whether language is used, as in the contrast between verbal and non-verbal communication, and whether one communicates with others or with oneself, as in the contrast between interpersonal and intrapersonal communication. The field studying human communication is known as anthroposemiotics.
Mediums
Verbal
Verbal communication refers to the exchange of messages in linguistic form or by means of language. Some of the difficulties in distinguishing verbal from non-verbal communication come from the difficulties in defining what exactly language means. Language is usually understood as a conventional system of symbols and rules used for communication. Important in this regard is that the system is based on a set of simple units of meaning that can be combined with each other to express more complex ideas. The rules for combining the units into compound expressions are called grammar. This way, words are combined to form sentences. One hallmark of human language, in contrast to animal communication, lies in its complexity and expressive power. For example, it can be used to refer not just to concrete objects in the here-and-now but also to spatially and temporally distant objects and to abstract ideas. The academic discipline studying language is called linguistics. Significant subfields include semantics (the study of meaning), morphology (the study of word formation), syntax (the study of sentence structure), pragmatics (the study of language use), and phonetics (the study of basic sounds). A central distinction among languages is between natural and artificial or constructed languages. Natural languages, like English, Spanish, and Japanese, developed naturally and for the most part unplanned in the course of history. Artificial languages, like Esperanto, the language of first-order logic, C++, and Quenya, are purposefully designed from the ground up. Most everyday verbal communication happens using natural languages. The most important forms of verbal communication are speech and writing together with their counterparts of listening and reading. Spoken languages use sounds to produce signs and transmit meaning while for writing, the signs are physically inscribed on a surface. Sign languages, like American Sign Language, are another form of verbal communication. They rely on visual means, mostly by using gestures with hands and arms, to form sentences and convey meaning. In colloquial usage, verbal communication is sometimes restricted to oral communication and may exclude writing and sign languages. However, in the academic sense, the term is usually used in a wider sense and encompasses any form of linguistic communication, independent of whether the language is expressed through speech, writing, or gestures. Humans have a natural tendency to acquire their native language in childhood. They are also able to learn other languages later in life, so-called second languages. But this process is less intuitive and often does not result in the same level of linguistic competence. Verbal communication serves various functions. One important function is to exchange information, i.e. an attempt by the speaker to make the audience aware of something, usually of an external event. But language can also be used to express the speaker's feelings and attitudes. A closely related role is to establish and maintain social relations with other people. Verbal communication is also utilized to coordinate one's behavior with others and influence them. In some cases, language is not employed for an external purpose but only for entertainment or because it is enjoyable. One aspect of verbal communication that stands out in comparison to non-verbal communication is that it helps the communicators conceptualize the world around them and themselves. This affects how perceptions of external events are interpreted, how things are categorized, and how ideas are organized and related to each other.
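The idea that a grammar combines a small stock of units into an open-ended set of sentences can be made concrete with a toy generator. The miniature rule set below is a hypothetical fragment invented for illustration, not a serious model of English grammar.

```python
import random

# A miniature context-free grammar: rules rewrite symbols until only words remain.
GRAMMAR = {
    "S":  [["NP", "VP"]],           # a sentence is a noun phrase plus a verb phrase
    "NP": [["the", "N"]],
    "VP": [["Vt", "NP"], ["Vi"]],
    "N":  [["dog"], ["child"], ["letter"]],
    "Vt": [["reads"], ["sees"]],    # transitive verbs take an object
    "Vi": [["sleeps"], ["waits"]],  # intransitive verbs stand alone
}

def generate(symbol: str = "S") -> list[str]:
    """Expand a symbol by recursively applying a randomly chosen rule."""
    if symbol not in GRAMMAR:       # terminal symbol: an actual word
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))  # e.g. "the child sees the letter"
```

Even these few rules already generate many distinct sentences, and adding a recursive rule (for example, allowing a noun phrase to contain another sentence) would make the set unbounded, which is the combinatorial point made above.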
Non-verbal
Non-verbal communication refers to the exchange of information through non-linguistic modes, like facial expressions, gestures, and postures. However, not every form of non-verbal behavior constitutes non-verbal communication and some theorists hold that the existence of a socially shared coding system for interpreting the meaning of the behavior is relevant for whether it should be regarded as non-verbal communication. A lot of non-verbal communication happens unintentionally and unconsciously, like sweating or blushing. But there are also conscious intentional forms, like shaking hands or raising a thumb. Traditionally, most research focused on verbal communication. However, this paradigm has shifted and a lot of importance is given to non-verbal communication in contemporary research. For example, many judgments about the nature and behavior of other people are based on non-verbal cues, like their facial expressions and tone of voice. Some theorists claim that the majority of the ideas and information conveyed happens this way. According to Ray Birdwhistell, for example, 65% of communication happens non-verbally. Other reasons for its importance are that it is present in almost every communicative act to some extent, that it is able to fulfill many different functions, and that certain parts of it are universally understood. It has also been suggested that human communication is at its core non-verbal and that words can only acquire meaning because of non-verbal communication. The earliest forms of human communication are non-verbal, like crying to indicate distress and later also babbling, which conveys information about the infant's health and well-being. Non-verbal communication is studied in various fields besides communication studies, like linguistics, semiotics, anthropology, and social psychology. Non-verbal communication has many functions. It frequently contains information about emotions, attitudes, personality, interpersonal relationships, and private thoughts. It often happens simultaneously with verbal communication and helps optimize the exchange through emphasis and illustration or by adding additional information. Non-verbal cues can also clarify the intent behind a verbal message. Communication is usually more effective if several modalities are used and their messages are consistent. But in some cases, the different modalities contain conflicting messages, for example, when a person verbally agrees with a statement but presses their lips together, thereby indicating disagreement non-verbally. There are many forms of non-verbal communication. They include kinesics, proxemics, haptics, paralanguage, chronemics, and physical appearance. Kinesics investigates the role of bodily behavior in conveying information. It is commonly referred to as body language, even though it is, strictly speaking, not a language but belongs to non-verbal communication. It includes many forms, like gestures, postures, walking styles, and dance. Facial expressions, like laughing, smiling, and frowning, are an important part of kinesics since they are both very expressive and highly flexible. Oculesics is another subcategory of kinesics, concerned with the eyes. It covers questions like how eye contact, gaze, blink rate, and pupil dilation form part of communication. Some kinesic patterns are inborn and involuntary, like blinking, while others are learned and voluntary, like giving a military salute. Proxemics studies how personal space is used in communication.
For example, the distance between the speakers reflects their degree of familiarity and intimacy with each other as well as their social status. Haptics investigates how information is conveyed using touching behavior, like handshakes, holding hands, kissing, or slapping. Many of the meanings associated with haptics reflect care, concern, anger, and violence. For example, handshaking is often seen as a symbol of equality and fairness, while refusing to shake hands can indicate aggressiveness. Kissing is another form often used to show affection and erotic closeness. Paralanguage, also known as vocalics, concerns the use of voice in communication. It depends on verbal communication in the form of speech but studies how something is said instead of what is said. It includes factors like articulation, lip control, rhythm, intensity, pitch, fluency, and loudness. In this regard, saying something loudly and in high pitch may convey a very different meaning than whispering the same words. Paralanguage is mainly concerned with spoken language but also includes aspects of written language, like the use of colors and fonts as well as the spatial arrangement in paragraphs and tables. Chronemics refers to the use of time, for example, what messages are sent by being on time or being late for a meeting. The physical appearance of the communicator also carries a lot of information, like height, weight, hair, skin color, gender, odors, clothing, tattooing, and piercing. It is an important factor for first impressions but is more limited as a mode of communication since it is less changeable. Some forms of non-verbal communication happen using artifacts, such as drums, smoke, batons, or traffic lights.
Channels
For communication to be successful, the message has to travel from the sender to the receiver. The term channel refers to the way this is accomplished. In this regard, the channel is not concerned with the meaning of the message but only with the technical means of how the meaning is conveyed. Channels are often understood in terms of the senses used to perceive the message, i.e. hearing, seeing, smelling, touching, and tasting. But in the widest sense, channels encompass any form of transmission, including technological means like books, cables, radio waves, telephones, or television. Naturally transmitted messages usually fade rapidly whereas many messages using artificial channels have a much longer lifespan, like books or sculptures. The physical characteristics of a channel have an important impact on the code and cues that can be used to express the information. For example, telephone calls are restricted to the use of verbal language and paralanguage but exclude facial expressions. It is often possible to translate messages from one code into another to make them available to a different channel, for example, by writing down words instead of speaking them or by using sign language. For many technical purposes, the choice of channels matters regarding the amount of information that can be transmitted. For example, a wired Ethernet connection may have a higher capacity for data transfer than a wireless Wi-Fi connection, making it more suitable for transferring large amounts of data. The same is true for fiber optic cables in contrast to copper cables.
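This difference in capacity has a classical quantitative expression in the Shannon–Hartley theorem, which bounds the error-free bit rate of a channel by its bandwidth and signal-to-noise ratio. The sketch below uses made-up illustrative values, not measurements of real Ethernet or Wi-Fi links.

```python
from math import log2

def channel_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley: maximum error-free bit rate of a noisy analog channel."""
    return bandwidth_hz * log2(1 + snr_linear)

# Hypothetical links: a clean wired channel versus a noisier wireless one.
wired = channel_capacity(bandwidth_hz=100e6, snr_linear=1000)   # ~997 Mbit/s
wireless = channel_capacity(bandwidth_hz=40e6, snr_linear=100)  # ~266 Mbit/s
print(f"wired: {wired / 1e6:.0f} Mbit/s, wireless: {wireless / 1e6:.0f} Mbit/s")
```

The formula makes the prose claim precise: a channel with more bandwidth or less noise can carry more information per second, whatever the physical medium.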
The transmission of information can occur through multiple channels at once. For example, regular face-to-face communication combines the auditory channel to convey verbal information with the visual channel transmitting non-verbal information using gestures and facial expressions. Employing multiple channels can enhance the effectiveness of communication by helping the audience better understand the subject matter. The choice of channels often matters since the receiver's ability to understand may vary depending on the chosen channel. For example, a teacher may decide to present some information orally and other information visually, depending on the content and the student's preferred learning style.
Interpersonal
Interpersonal communication refers to communication between distinct individuals. Its typical form is dyadic communication between two people but it can also refer to communication within groups. It can be planned or unplanned and occurs in many different forms, like when greeting someone, during salary negotiations, or when making a phone call. Some theorists understand interpersonal communication as a fuzzy concept that manifests in degrees. On this view, an exchange is more or less interpersonal depending on how many people are present, whether it happens face-to-face rather than through telephone or email, and whether it focuses on the relationship between the communicators. In this regard, group communication and mass communication are less typical forms of interpersonal communication and some theorists treat them as distinct types. Various theories of the function of interpersonal communication have been proposed. Some focus on how it helps people make sense of their world and create society while others hold that its primary purpose is to understand why other people act the way they do and to adjust one's behavior accordingly. A closely related approach is to focus on information and see interpersonal communication as an attempt to reduce uncertainty about others and external events. Other explanations understand it in terms of the needs it satisfies. This includes the needs of belonging somewhere, being included, being liked, maintaining relationships, and influencing the behavior of others. On a practical level, interpersonal communication is used to coordinate one's actions with the actions of others in order to get things done. Research on interpersonal communication concerns such topics as how people build, maintain, and dissolve relationships through communication, why they choose one message rather than another, what effects these messages have on the relationship and on the individual, and how to predict whether two people would like each other. Interpersonal communication can be synchronous or asynchronous. For asynchronous communication, the different parties take turns in sending and receiving messages. An example would be the exchange of letters or emails. For synchronous communication, both parties send messages at the same time. This happens, for example, when one person is talking while the other person sends non-verbal messages in response signaling whether they agree with what is being said. Some theorists distinguish between content messages and relational messages. Content messages express the speaker's feelings toward the topic of discussion. Relational messages, on the other hand, demonstrate the speaker's feelings toward their relationship with the other participants.
Intrapersonal
Intrapersonal communication refers to communication with oneself.
In some cases this manifests externally, like when engaged in a monologue, taking notes, highlighting a passage, or writing a diary or a shopping list. But many forms of intrapersonal communication happen internally in the form of inner dialog, like when thinking about something or daydreaming. Intrapersonal communication serves various functions. As a form of inner dialog, it is usually triggered by external events and may happen in the form of articulating a phrase before expressing it externally, planning for the future, or as an attempt to process emotions when trying to calm oneself down in stressful situations. It can help regulate one's own mental activity and outward behavior as well as internalize cultural norms and ways of thinking. External forms of intrapersonal communication can aid one's memory, like when making a shopping list, help unravel difficult problems, as when solving a complex mathematical equation line by line, and internalize new knowledge, like when repeating new vocabulary to oneself. Because of these functions, intrapersonal communication can be understood as "an exceptionally powerful and pervasive tool for thinking." Based on its role in self-regulation, some theorists have suggested that intrapersonal communication is more fundamental than interpersonal communication. This is based on the observation that young children sometimes use egocentric speech while playing in an attempt to direct their own behavior. On this view, interpersonal communication only develops later when the child moves from their early egocentric perspective to a more social perspective. Other theorists contend that interpersonal communication is more basic. They explain this by arguing that language is used first by parents to regulate what their child does. Once the child has learned this, it can apply the same technique on itself to get more control over its own behavior.
Contexts and purposes
There are countless other categorizations of communication besides the types discussed so far. They often focus on the context, purpose, and topic of communication. For example, organizational communication concerns communication between members of organizations such as corporations, nonprofits, or small businesses. Important in this regard is the coordination of the behavior of the different members as well as the interaction with customers and the general public. Closely related terms are business communication, corporate communication, professional communication, and workplace communication. Political communication refers to communication in relation to politics. It covers topics like electoral campaigns to influence voters and legislative communication, like letters to a congress or committee documents. Specific emphasis is often given to propaganda and the role of mass media. Intercultural communication is relevant to both organizational and political communication since they often involve attempts to exchange messages between communicators from different cultural backgrounds. In this context, it is crucial to avoid misunderstandings since the cultural background affects how messages are formulated and interpreted. This is also relevant for development communication, which is concerned with the use of communication for assisting in development, specifically concerning aid given by first-world countries to third-world countries. Another significant field is health communication, which is about communication in the field of healthcare and health promotion efforts.
A central topic in this field is how healthcare providers, like doctors and nurses, should communicate with their patients. Many other types of communication are discussed in the academic literature. They include international communication, non-violent communication, strategic communication, military communication, aviation communication, risk communication, defensive communication, upward communication, interdepartmental communication, scientific communication, environmental communication, and agricultural communication.
Other species
Besides human communication, there are many other forms of communication found, for example, in the animal kingdom and among plants. Sometimes, the term extrapersonal communication is used in this regard to contrast it with interpersonal and intrapersonal communication. The field of inquiry studying these forms of communication is called biosemiotics. There are additional difficulties in this field for judging whether communication has taken place between two individuals. For example, acoustic signals are often easy to notice and analyze for scientists but additional difficulties come when judging whether tactile or chemical changes should be understood as communicative signals rather than as other biological processes. For this reason, researchers often use slightly altered definitions of communication in order to facilitate their work. A common assumption in this regard comes from evolutionary biology and holds that communication should somehow benefit the communicators in terms of natural selection. In this regard, "communication can be defined as the exchange of information between individuals, wherein both the signaller and receiver may expect to benefit from the exchange." So the sender should benefit by influencing the receiver's behavior and the receiver should benefit by responding to the signal. It is often held that these benefits should exist on average but not necessarily in every single case. This way, deceptive signaling can also be understood as a form of communication. One problem with the evolutionary approach is that it is often very difficult to assess the influence of such behavior on natural selection. Another common pragmatic constraint is to hold that it is necessary to observe a response by the receiver following the signal when judging whether communication has occurred.
Animals
Animal communication refers to the process of giving and taking information among animals. The field studying animal communication is called zoosemiotics. There are many parallels to human communication. For example, humans and many animals express sympathy by synchronizing their movements and postures. Nonetheless, there are also important differences, like the fact that humans also engage in verbal communication while animal communication is restricted to non-verbal communication. Some theorists have tried to distinguish human from animal communication based on the claim that animal communication lacks a referential function and is thus not able to refer to external phenomena. However, this view is often rejected, especially for higher animals. A different approach is to draw the distinction based on the complexity of human language, especially its almost limitless ability to combine basic units of meaning into more complex meaning structures. For example, it has been argued that recursion is a property of human language that sets it apart from all non-human communicative systems.
Another difference is that human communication is frequently associated with a conscious intention to send information, which is often not discernible for animal communication. Animal communication can take a variety of forms, including visual, auditory, tactile, olfactory, and gustatory communication. Visual communication happens in the form of movements, gestures, facial expressions, and colors, like movements seen during mating rituals, the colors of birds, and the rhythmic light of fireflies. Auditory communication takes place through vocalizations by species like birds, primates, and dogs. It is frequently used to alert and warn. Lower animals often have very simple response patterns to auditory messages, reacting either by approach or avoidance. More complex response patterns are observed for higher species, which may use different signals for different types of predators and responses. For example, certain primates use different signals for airborne and land predators. Tactile communication occurs through touch, vibration, stroking, rubbing, and pressure. It is especially relevant for parent-young relations, courtship, social greetings, and defense. Olfactory and gustatory communication happens chemically through smells and tastes. There are huge differences between species concerning what functions communication plays, how much it is realized, and the behavior through which they communicate. Common functions include the fields of courtship and mating, parent-offspring relations, social relations, navigation, self-defense, and territoriality. An important part of courtship and mating consists in identifying and attracting potential mates. This can happen through songs, as with grasshoppers and crickets; chemically through pheromones, as with moths; or through visual messages like flashing light, as with fireflies. For many species, the offspring depends for its survival on the parent. One central function of parent-offspring communication is to recognize each other. In some cases, the parents are also able to guide the offspring's behavior. Social animals, like chimpanzees, bonobos, wolves, and dogs, engage in various forms of communication to express their feelings and build relations. Navigation concerns the movement through space in a purposeful manner, e.g. to locate food, avoid enemies, and follow others of the same group. In bats, this happens through echolocation, i.e. by sending auditory signals and processing the information from the echoes. Bees are another often-discussed case in this respect since they perform a dance to indicate to other bees where flowers are located. In regard to self-defense, communication is used to warn others and to assess whether a costly fight can be avoided. Another function of communication is to mark and claim certain territories used for food and mating. For example, some male birds claim a hedge or part of a meadow by using songs to keep other males away and attract females. Two competing theories in the study of animal communication are nature theory and nurture theory. Their conflict concerns to what extent animal communication is programmed into the genes as a form of adaptation rather than learned from previous experience as a form of conditioning. To the degree that it is learned, it usually happens through imprinting, i.e. as a form of learning that only happens in a certain phase and is then mostly irreversible.
Plants, fungi, and bacteria
Plant communication refers to plant processes involving the sending and receiving of information.
The field studying plant communication is called phytosemiotics. This field poses additional difficulties for researchers since plants are very different from humans and other animals: they lack a central nervous system and have rigid cell walls. These walls restrict movement and make it impossible for plants to send or receive signals that depend on rapid movement. However, there are important similarities as well since plants face many of the same challenges as animals, like finding resources, avoiding predators and pathogens as well as finding mates and ensuring that their offspring survives. Many of the evolutionary responses to these challenges are analogous to those in animals but are implemented using different means. One crucial difference is that chemical communication is much more prominent for plant communication in contrast to the importance of visual and auditory communication for animals. Communication is a form of behavior. In regard to plants, the term behavior is usually not defined in terms of physical movement, as is the case for animals, but as a biochemical response to a stimulus. This response has to be short relative to the plant's lifespan. Communication is a special form of behavior that involves conveying information from a sender to a receiver and is distinguished from other types of behavior, like defensive reactions and mere sensing. Theorists usually include additional requirements, like that there is some form of response in the receiver and that the communicative behavior benefits both sender and receiver in terms of natural selection. Richard Karban distinguishes three steps of plant communication: the emission of a cue by a sender, the perception of the cue by a receiver, and their response. It is not relevant to what extent the emission of a cue is intentional but it should be possible for the receiver to ignore the signal. Plant communication happens in various forms. It includes communication within plants, i.e. within plant cells and between plant cells, between plants of the same or related species, and between plants and non-plant organisms, especially in the root zone. Plant roots also communicate with rhizosphere bacteria, fungi, and insects within the soil. A prominent form of communication is airborne and happens through so-called volatile organic compounds (VOCs). For example, many plants, like maple trees, release VOCs when they are attacked by a herbivore to warn neighboring plants, which then react accordingly by adjusting their defenses. Another form of plant-to-plant communication happens through mycorrhizal fungi. These fungi form underground networks, sometimes referred to as the Wood-Wide Web, and connect the roots of different plants. The plants use the network to send messages to each other, specifically to warn other plants of a pest attack and to help prepare their defenses. Communication can also be observed for fungi and bacteria. Some fungal species communicate by releasing pheromones into the external environment. For example, they are used to promote sexual interaction (mating) in several aquatic fungal species, like Allomyces macrogynus, the Mucorales fungus Mucor mucedo, Neurospora crassa, and the yeasts Saccharomyces cerevisiae, Schizosaccharomyces pombe, and Rhodosporidium toruloides. An important form of communication between bacteria is called quorum sensing. It happens by releasing hormone-like molecules, which other bacteria detect and respond to. This process is used to monitor the environment for other bacteria and to coordinate population-wide responses, for example, by sensing the density of bacteria and regulating gene expression accordingly. Other possible responses include the induction of bioluminescence and the formation of biofilms.
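The density-threshold logic of quorum sensing can be sketched as a toy model. The numbers and the simple on/off switch below are arbitrary simplifications for illustration, not a biochemical simulation.

```python
def quorum_response(density: float, emission_rate: float = 1.0,
                    threshold: float = 50.0) -> bool:
    """Toy quorum sensing: a response (e.g. luminescence) switches on once the
    signal concentration, proportional to population density, passes a threshold."""
    concentration = density * emission_rate
    return concentration >= threshold

for density in (10, 40, 80):
    state = "on" if quorum_response(density) else "off"
    print(f"density {density}: luminescence {state}")
```

The point of the model is that no individual bacterium measures the population directly; each one only senses the accumulated signal, which happens to track density.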
Interspecies
Most communication happens between members of the same species as a form of intraspecies communication. This is because the purpose of communication is usually some form of cooperation, which happens mostly within a species while different species are often in conflict with each other in their competition over resources. However, there are also some forms of interspecies communication. This occurs especially in symbiotic relationships and significantly less often in parasitic or predator-prey relationships. Interspecies communication plays an important role for various plants that depend on external agents for their reproduction. For example, flowers need insects for pollination and provide resources like nectar and other rewards in return. They use various forms of communication to signal their benefits and attract visitors, for example, by using colors that stand out from their surroundings and by using symmetrical shapes. This form of advertisement is necessary since different flowers compete for potential visitors. Many fruit-bearing plants rely on plant-to-animal communication to disperse their seeds and move them to a favorable location. This happens by providing nutritious fruits to animals. The seeds are eaten together with the fruit and are later excreted at a different location. Communication is central to making the animals aware of where the fruits are and whether they are ripe. For many fruits, this happens through their color: they have an inconspicuous green color until they ripen and take on a new color that stands in visual contrast to the environment. Another example of interspecies communication is found in the ant-plant relationship. It concerns, for example, the selection of seeds by ants for their ant gardens and the pruning of exogenous vegetation as well as plant protection by ants. Several animal species also engage in interspecies communication, like apes, whales, dolphins, elephants, and dogs. For example, different species of monkeys use common signals to cooperate when threatened by a common predator. An example of interspecies communication involving humans is found in their relation to pets. For example, acoustic signals play a central role in communication with dogs. Dogs are able to learn to respond to various commands, like "sit" and "come". They can even learn short syntactic combinations, like "bring X" or "put X in a box". They also react to the pitch and frequency of the human voice by reading off information about emotions, dominance, and uncertainty. Humans can understand dog signals in the form of interpreting and reacting to their emotions, such as aggressiveness, fearfulness, and playfulness.
Computer
Computer communication refers to the exchange of data between computers and similar devices. For this to be possible, the devices have to be connected through a transmission system that forms a network between them. To access the transmission system, a transmitter is required to send messages and a receiver is required to receive them. For example, a personal computer may use a modem as a transmitter to send information to a server through the public telephone network as the transmission system. The server may use a modem as its receiver.
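This sender-receiver arrangement can be sketched with a minimal client and server. The example below uses Python's standard socket and threading modules; the local address, port number, and messages are arbitrary assumptions, and a real system would add error handling and a proper protocol.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007  # hypothetical local endpoint

def server() -> None:
    """Receiver: accept one connection, read a message, send a reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            data = conn.recv(1024)         # the incoming signal, as bytes
            conn.sendall(b"got: " + data)  # reply to the sender

def client() -> None:
    """Sender: connect through the transmission system and send a message."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello")              # the message, encoded as bytes
        print(cli.recv(1024).decode())     # prints "got: hello"

t = threading.Thread(target=server)
t.start()
time.sleep(0.2)  # crude wait for the server to start listening
client()
t.join()
```

As written, the two sides take turns sending and receiving, which corresponds to half-duplex operation in the terminology introduced below; a full-duplex version would read and write concurrently.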
To transmit the data, it has to be converted into an electric signal. Communication channels used for transmission are either analog or digital and are characterized by features like bandwidth and latency. There are many different forms of computer networks. The most commonly discussed ones are LANs and WANs. LAN stands for local area network and refers to computer networks within a limited area, usually with a distance of less than one kilometer. For example, connecting two computers within a home or an office building is a form of LAN. This can happen using a wired connection, like Ethernet, or a wireless connection, like Wi-Fi. WANs, on the other hand, are wide area networks that span large geographical regions, like the internet. They may use several intermediate connection nodes to link the different endpoints. Further types of computer networks include PANs (personal area networks), CANs (campus area networks), and MANs (metropolitan area networks). For computer communication to be successful, the involved devices have to follow a common set of conventions governing their exchange. These conventions are known as the communication protocol and concern various aspects of the exchange, like the format of the data exchanged, how to respond to transmission errors, and how the two systems are synchronized, for example, how the receiver identifies the start and end of a signal. A significant distinction in this regard is between simplex, half-duplex, and full-duplex systems. For simplex systems, signals flow only in one direction from the sender to the receiver, as in radio, television, or screens displaying arrivals and departures at airports. Half-duplex systems allow two-way exchanges but signals can only flow in one direction at a time, like walkie-talkies or police radios. In the case of full-duplex systems, signals can flow in both directions at the same time, like a regular telephone or internet connection. In either case, it is often important that the connection is secure to ensure that the transmitted data reaches only the intended destination and not an unauthorized third party. Human-computer communication is a closely related field that concerns the question of how humans interact with computers. This happens through a user interface, which includes the hardware used to interact with the computer, like the mouse, keyboard, and monitor, as well as the software used in the process. On the software side, most early user interfaces were command-line interfaces, in which the user had to type a command to interact with the computer. Most modern user interfaces are graphical user interfaces, like Microsoft Windows and macOS. They involve various graphical elements through which the user can interact with the computer, like icons representing files and folders as well as buttons used to trigger commands. They are usually much easier to use for non-experts. An important aim when designing user interfaces is to simplify the interaction with computers. This helps make them more user-friendly and accessible to a wider audience while also increasing productivity.
Communication studies
Communication studies, also referred to as communication science, is the academic discipline studying communication. It is closely related to semiotics, with one difference being that communication studies focuses more on technical questions of how messages are sent, received, and processed while semiotics tackles more abstract questions in relation to meaning and how signs acquire meaning.
Communication studies covers a wide area overlapping with many other disciplines, such as biology, anthropology, psychology, sociology, linguistics, media studies, and journalism. Many contributions in the field of communication studies focus on developing models and theories of communication. Models of communication aim to give a simplified overview of the main components involved in communication. Theories of communication, on the other hand, try to provide conceptual frameworks to accurately present communication in all its complexity. Other topics in communication studies concern the function and effects of communication, like satisfying physiological and psychological needs and building relationships as well as gathering information about the environment, others, and oneself. A further issue concerns the question of how communication systems change over time and how these changes correlate with other societal changes. A related question focuses on psychological principles underlying those changes and the effects they have on how people exchange ideas. Communication was already studied as early as Ancient Greece. Important early theories are due to Plato and Aristotle, who emphasized public speaking and the understanding of rhetoric. For example, Aristotle held that the goal of communication is to persuade the audience. However, the field of communication studies only became a separate research discipline in the 20th century, especially starting in the 1940s. The development of new communication technologies, such as telephone, radio, newspapers, television, and the internet, has had a big impact on communication and communication studies. Today, communication studies is a wide discipline that includes many subfields dedicated to topics like interpersonal and intrapersonal communication, verbal and non-verbal communication, group communication, organizational communication, political communication, intercultural communication, mass communication, persuasive communication, and health communication. Some works in communication studies try to provide a very general characterization of communication in the widest sense while others attempt to give a precise analysis of a specific form of communication.
Communicative competence
Communicative competence refers to the ability to communicate effectively or to choose the appropriate communicative behavior in a given situation. It concerns several aspects, like what to say and how to say it as well as when to say it. It includes both the capability to send messages as well as to receive and understand them. Competence is often used as a synonym for ability and contrasted with performance: competence can be present even if it is not exercised while performance consists in the realization of this competence. However, some theorists reject this distinction and hold instead that whether the behavior is actually performed is highly relevant for whether the competence is possessed. On this view, performance is the observable part and is used to infer competence in relation to future performances. Some researchers define communicative competence subjectively as the individual's perception of their performance, i.e. whether they managed to realize their own goals. A different approach is to understand it more objectively, judged from the perspective of an observer concerning whether a person meets certain social expectations.
These two perspectives are not mutually exclusive and can be combined by achieving one's personal goals while doing so in a socially appropriate manner. In this regard, there are two central components to communicative competence: effectiveness and appropriateness. Effectiveness is the degree to which the speaker achieves their desired outcomes or the degree to which preferred alternatives are realized. This means that whether a communicative behavior is effective does not just depend on the actual outcome but also on the speaker's intention, i.e. whether this outcome was what they intended to achieve. Because of this, some theorists additionally require that the speaker has a certain background knowledge of what they were doing and should therefore be able to give an explanation of why they engaged in one behavior rather than another. Effectiveness is closely related to efficiency but not identical to it. The difference is that effectiveness is about achieving goals while efficiency is about using few resources (such as time, effort, and money) in the process. Appropriateness means that the communicative behavior meets certain social standards and expectations. It is "the perceived legitimacy or acceptability of behavior or enactments in a given context". This means that the speaker is aware of the social and cultural context in order to adapt and express the message in a way that is considered acceptable in the given situation. For example, to bid farewell to their teacher, a student may use the expression "Goodbye, sir" but not the expression "I gotta split, man", which they may use when talking to a peer. To be both effective and appropriate means to achieve one's preferred outcomes in a way that follows social standards and expectations. Many additional components of communicative competence have been suggested, such as empathy, control, flexibility, sensitivity, and knowledge. It is often discussed in terms of the individual communication skills employed in the process, i.e. the specific behavioral components that make up communicative competence. They include nonverbal communication skills and conversation skills as well as message production and reception skills. Examples of message production skills are speaking and writing while listening and reading are the corresponding reception skills. On a purely linguistic level, communicative competence involves a proper understanding of a language, including its phonology, orthography, syntax, lexicon, and semantics. It is of central importance since many aspects of the individual's life depend on successful communication, like ensuring basic necessities of survival as well as building and maintaining relationships. Communicative competence is a key factor regarding whether a person is able to reach their goals in social life, like having a successful career or finding a suitable spouse. Because of this, it can have a big impact on the individual's well-being. The lack of communicative competence, on the other hand, can cause various problems both on the individual and the societal level, including professional, academic, and health problems. Barriers to effective communication Barriers to effective communication can distort the message. This may result in failed communication and cause undesirable effects. Potential sources of distortion include filtering, selective perception, information overload, emotions, communication apprehension, and gender differences. Noise is another negative factor.
It refers to influences that interfere with the message on its way to the receiver and distort it. For example, crackling sounds during a telephone call are one form of noise. Ambiguous expressions can also inhibit effective communication and make it necessary to disambiguate between the possible interpretations to discern the sender's intention. These interpretations depend also on the cultural background of the participants. Significant cultural differences constitute additional difficulties and make it more likely that messages are misinterpreted. History The history of communication investigates how communicative processes evolved and interacted with society, culture, and technology. Human communication has a long history and the way people communicate has changed a lot in the process. Many of these changes were triggered by the development of new communication technology and had important effects on how people exchanged ideas. In the academic literature, the history of communication is usually divided into different ages based on the dominant form of communication in that age. There are some disagreements about the number of ages and the precise periodization but they usually include ages for speaking, writing, and print as well as electronic mass communication and the internet. According to Marshall Poe, the different dominant media for each age can be characterized in relation to accessibility (cost of using the medium), privacy (cost of hiding data from third parties), fidelity (degree to which the medium can express information), volume (amount of data that can be transmitted), velocity (the time it takes to transmit), range (the maximum distance between sender and receiver), persistence (the time the data remains intact), and searchability (how easy it is to find data). Poe argues that subsequent ages usually involve some form of improvement in regard to these characteristics. In early societies, spoken language was the primary form of communication. Most knowledge was passed on through it, often in the form of stories or wise sayings. One problem with this form is that it does not produce stable knowledge since it depends on imperfect human memory. Because of this, many details differ from one telling to the next and are presented differently by distinct storytellers. As people started to settle and form agricultural communities, societies grew and there was an increased need for stable records of ownership of land and commercial transactions. This triggered the invention of writing, which is able to solve many of these problems of oral communication. It is much more efficient at preserving knowledge and passing it on between generations since it does not depend on human memory. Most early written communication happened through pictograms. Pictograms are graphical symbols that convey meaning by visually resembling real world objects. The first complex pictographic writing system was developed around 3500 BCE by the Sumerians and is called cuneiform. Pictograms are still in use today, like no-smoking signs and the symbols of male and female figures on bathroom doors. An important disadvantage of pictographic writing systems is that they require a huge number of symbols to refer to all the objects one wants to talk about. This problem was solved by the development of alphabetic writing systems, which dominate to this day. Their symbols do not stand for regular objects but for the basic units of sound used in spoken language, so-called phonemes.
Another drawback of early forms of writing, like the clay tablets used for cuneiform, was that they were not very portable. This made it difficult to transport the texts from one location to another to share the information. This changed with the invention of papyrus by the Egyptians around 2500 BCE; portability was further improved later by the development of parchment and paper. Until the 1400s, almost all written communication was done by hand. Because of this, the spread of writing within society was still rather limited since the cost of copying books by hand was relatively high. The introduction and popularization of mass printing in the middle of the 15th century by Johann Gutenberg resulted in rapid changes in this regard. It quickly increased the circulation of written media and also led to the dissemination of new forms of written documents, like newspapers and pamphlets. One important side effect was that the increased availability of written documents significantly improved the general literacy of the population. This development served as the foundation for revolutions in various fields, including science, politics, and religion. Scientific discoveries in the 19th and 20th centuries caused many further developments in the history of communication. They include the invention of telegraphs and telephones, which made it even easier and faster to transmit information from one location to another without the need to transport written documents. These communication forms were initially limited to cable connections, which had to be established first. Later developments found ways of wireless transmission using radio signals. They made it possible to reach wide audiences and radio soon became one of the central forms of mass communication. Various innovations in the field of photography enabled the recording of images on film, which led to the development of cinema and television. The reach of wireless communication was further enhanced with the development of satellites, which made it possible to broadcast radio and television signals to different stations all over the world. This way, information could be shared almost instantly everywhere around the globe. The development of the internet constitutes a further milestone in the history of communication. It made it easier than ever before for people to exchange ideas, collaborate, and access information from anywhere in the world by using a variety of means, such as websites, e-mail, social media, and video conferences.
https://en.wikipedia.org/wiki/Chemistry
Chemistry
Chemistry is the scientific study of the properties and behavior of matter. It is a physical science within the natural sciences that covers everything from the elements that make up matter to the compounds made of atoms, molecules and ions: their composition, structure, properties, behavior and the changes they undergo during a reaction with other substances. Chemistry also addresses the nature of chemical bonds in chemical compounds. In the scope of its subject, chemistry occupies an intermediate position between physics and biology. It is sometimes called the central science because it provides a foundation for understanding both basic and applied scientific disciplines at a fundamental level. For example, chemistry explains aspects of plant growth (botany), the formation of igneous rocks (geology), how atmospheric ozone is formed and how environmental pollutants are degraded (ecology), the properties of the soil on the moon (cosmochemistry), how medications work (pharmacology), and how to collect DNA evidence at a crime scene (forensics). Etymology The word chemistry comes from a modification during the Renaissance of the word alchemy, which referred to an earlier set of practices that encompassed elements of chemistry, metallurgy, philosophy, astrology, astronomy, mysticism and medicine. Alchemy is often associated with the quest to turn lead or other base metals into gold, though alchemists were also interested in many of the questions of modern chemistry. The modern word alchemy in turn is derived from the Arabic word al-kīmīā. This may have Egyptian origins since al-kīmīā is derived from the Ancient Greek khēmía (χημία), which is in turn derived from the word Kemet, the ancient name of Egypt in the Egyptian language. Alternately, al-kīmīā may derive from the Greek khēmeía (χημεία), meaning 'cast together'. Modern principles The current model of atomic structure is the quantum mechanical model. Traditional chemistry starts with the study of elementary particles, atoms, molecules, substances, metals, crystals and other aggregates of matter. Matter can be studied in solid, liquid, gas and plasma states, in isolation or in combination. The interactions, reactions and transformations that are studied in chemistry are usually the result of interactions between atoms, leading to rearrangements of the chemical bonds which hold atoms together. Such behaviors are studied in a chemistry laboratory. The chemistry laboratory stereotypically uses various forms of laboratory glassware. However glassware is not central to chemistry, and a great deal of experimental (as well as applied/industrial) chemistry is done without it. A chemical reaction is a transformation of some substances into one or more different substances. The basis of such a chemical transformation is the rearrangement of electrons in the chemical bonds between atoms. It can be symbolically depicted through a chemical equation, which usually involves atoms as subjects. The number of atoms on the left and the right in the equation for a chemical transformation is equal. For example, in the combustion of methane, CH4 + 2 O2 → CO2 + 2 H2O, each side of the equation contains one carbon atom, four hydrogen atoms, and four oxygen atoms. (When the number of atoms on either side is unequal, the transformation is referred to as a nuclear reaction or radioactive decay.) The type of chemical reactions a substance may undergo and the energy changes that may accompany it are constrained by certain basic rules, known as chemical laws. Energy and entropy considerations are invariably important in almost all chemical studies. Chemical substances are classified in terms of their structure, phase, as well as their chemical compositions. They can be analyzed using the tools of chemical analysis, e.g.
spectroscopy and chromatography. Scientists engaged in chemical research are known as chemists. Most chemists specialize in one or more sub-disciplines. Several concepts are essential for the study of chemistry; some of them are: Matter In chemistry, matter is defined as anything that has rest mass and volume (it takes up space) and is made up of particles. The particles that make up matter have rest mass as well – though not all particles have rest mass; the photon, for example, has none. Matter can be a pure chemical substance or a mixture of substances. Atom The atom is the basic unit of chemistry. It consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. The nucleus is made up of positively charged protons and uncharged neutrons (together called nucleons), while the electron cloud consists of negatively charged electrons which orbit the nucleus. In a neutral atom, the negatively charged electrons balance out the positive charge of the protons. The nucleus is dense; the mass of a nucleon is approximately 1,836 times that of an electron, yet the radius of an atom is about 10,000 times that of its nucleus. The atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state(s), coordination number, and preferred types of bonds to form (e.g., metallic, ionic, covalent). Element A chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol Z. The mass number is the sum of the number of protons and neutrons in a nucleus. Although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number; atoms of an element which have different mass numbers are known as isotopes. For example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13. The standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. The periodic table is arranged in groups, or columns, and periods, or rows. The periodic table is useful in identifying periodic trends. Compound A compound is a pure chemical substance composed of more than one element. The properties of a compound bear little similarity to those of its elements. The standard nomenclature of compounds is set by the International Union of Pure and Applied Chemistry (IUPAC). Organic compounds are named according to the organic nomenclature system. The names for inorganic compounds are created according to the inorganic nomenclature system. When a compound has more than one component, its components are divided into two classes, the electropositive and the electronegative components. In addition the Chemical Abstracts Service has devised a method to index chemical substances. In this scheme each chemical substance is identifiable by a number known as its CAS registry number. Molecule A molecule is the smallest indivisible portion of a pure chemical substance that has its unique set of chemical properties, that is, its potential to undergo a certain set of chemical reactions with other substances. However, this definition only works well for substances that are composed of molecules, which is not true of many substances (see below).
Molecules are typically a set of atoms bound together by covalent bonds, such that the structure is electrically neutral and all valence electrons are paired with other electrons either in bonds or in lone pairs. Thus, molecules exist as electrically neutral units, unlike ions. When this rule is broken, giving the "molecule" a charge, the result is sometimes named a molecular ion or a polyatomic ion. However, the discrete and separate nature of the molecular concept usually requires that molecular ions be present only in well-separated form, such as a directed beam in a vacuum in a mass spectrometer. Charged polyatomic collections residing in solids (for example, common sulfate or nitrate ions) are generally not considered "molecules" in chemistry. Some molecules contain one or more unpaired electrons, creating radicals. Most radicals are comparatively reactive, but some, such as nitric oxide (NO), can be stable. The "inert" or noble gas elements (helium, neon, argon, krypton, xenon and radon) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. Identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals. However, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the Earth are chemical compounds without molecules. These other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. Instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. Examples of such substances are mineral salts (such as table salt), solids like carbon and diamond, metals, and familiar silica and silicate minerals such as quartz and granite. One of the main characteristics of a molecule is its geometry, often called its structure. While the structure of diatomic, triatomic or tetra-atomic molecules may be trivial (linear, angular, pyramidal, etc.), the structure of polyatomic molecules, which are constituted of more than six atoms (of several elements), can be crucial for their chemical nature. Substance and mixture A chemical substance is a kind of matter with a definite composition and set of properties. A collection of substances is called a mixture. Examples of mixtures are air and alloys. Mole and amount of substance The mole is a unit of measurement that denotes an amount of substance (also called chemical amount). One mole is defined to contain exactly 6.02214076×10^23 particles (atoms, molecules, ions, or electrons), where the number of particles per mole is known as the Avogadro constant. For example, 18 grams of water contain roughly one mole of H2O molecules, since the molar mass of water is about 18 g/mol. Molar concentration is the amount of a particular substance per volume of solution, and is commonly reported in mol/dm3. Phase In addition to the specific chemical properties that distinguish different chemical classifications, chemicals can exist in several phases. For the most part, the chemical classifications are independent of these bulk phase classifications; however, some more exotic phases are incompatible with certain chemical properties. A phase is a set of states of a chemical system that have similar bulk structural properties, over a range of conditions, such as pressure or temperature.
Physical properties, such as density and refractive index, tend to fall within values characteristic of the phase. The phase of matter is defined by the phase transition, which is when energy put into or taken out of the system goes into rearranging the structure of the system, instead of changing the bulk conditions. Sometimes the distinction between phases can be continuous instead of having a discrete boundary; in this case the matter is considered to be in a supercritical state. When three states meet based on the conditions, it is known as a triple point and, since this is invariant, it is a convenient way to define a set of conditions. The most familiar examples of phases are solids, liquids, and gases. Many substances exhibit multiple solid phases. For example, there are three phases of solid iron (alpha, gamma, and delta) that vary based on temperature and pressure. A principal difference between solid phases is the crystal structure, or arrangement, of the atoms. Another phase commonly encountered in the study of chemistry is the aqueous phase, which is the state of substances dissolved in aqueous solution (that is, in water). Less familiar phases include plasmas, Bose–Einstein condensates and fermionic condensates, and the paramagnetic and ferromagnetic phases of magnetic materials. While most familiar phases deal with three-dimensional systems, it is also possible to define analogs in two-dimensional systems, which has received attention for its relevance to systems in biology. Bonding Atoms sticking together in molecules or crystals are said to be bonded with one another. A chemical bond may be visualized as the multipole balance between the positive charges in the nuclei and the negative charges oscillating about them. More than simple attraction and repulsion, the energies and distributions characterize the availability of an electron to bond to another atom. The chemical bond can be a covalent bond, an ionic bond, a hydrogen bond, or can arise simply from van der Waals forces. Each of these kinds of bonds is ascribed to some potential. These potentials create the interactions which hold atoms together in molecules or crystals. In many simple compounds, valence bond theory, the Valence Shell Electron Pair Repulsion model (VSEPR), and the concept of oxidation number can be used to explain molecular structure and composition. An ionic bond is formed when a metal loses one or more of its electrons, becoming a positively charged cation, and the electrons are then gained by the non-metal atom, becoming a negatively charged anion. The two oppositely charged ions attract one another, and the ionic bond is the electrostatic force of attraction between them. For example, sodium (Na), a metal, loses one electron to become an Na+ cation while chlorine (Cl), a non-metal, gains this electron to become Cl−. The ions are held together due to electrostatic attraction, and the compound sodium chloride (NaCl), or common table salt, is formed. In a covalent bond, one or more pairs of valence electrons are shared by two atoms: the resulting electrically neutral group of bonded atoms is termed a molecule. Atoms will share valence electrons in such a way as to create a noble gas electron configuration (eight electrons in their outermost shell) for each atom. Atoms that tend to combine in such a way that they each have eight electrons in their valence shell are said to follow the octet rule.
However, some elements like hydrogen and lithium need only two electrons in their outermost shell to attain this stable configuration; these atoms are said to follow the duet rule, and in this way they reach the electron configuration of the noble gas helium, which has two electrons in its outer shell. Similarly, theories from classical physics can be used to predict many ionic structures. With more complicated compounds, such as metal complexes, valence bond theory is less applicable and alternative approaches, such as the molecular orbital theory, are generally used. Energy In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structures, it is invariably accompanied by an increase or decrease of energy of the substances involved. Some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light; thus the products of a reaction may have more or less energy than the reactants. A reaction is said to be exergonic if the final state is lower on the energy scale than the initial state; in the case of endergonic reactions the situation is the reverse. A reaction is said to be exothermic if the reaction releases heat to the surroundings; in the case of endothermic reactions, the reaction absorbs heat from the surroundings. Chemical reactions are usually not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by Boltzmann's population factor – that is, the probability that a molecule has energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. A related concept, free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. A reaction is feasible only if the total change in the Gibbs free energy is negative, ΔG ≤ 0; if it is equal to zero the chemical reaction is said to be at equilibrium. There exist only limited possible states of energy for electrons, atoms and molecules. These are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. The atoms/molecules in a higher energy state are said to be excited. The molecules/atoms of substance in an excited energy state are often much more reactive; that is, more amenable to chemical reactions. The phase of a substance is invariably determined by its energy and the energy of its surroundings. When the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid, as is the case with water (H2O), a liquid at room temperature because its molecules are bound by hydrogen bonds. Hydrogen sulfide (H2S), by contrast, is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole-dipole interactions.
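The two relations mentioned above can be written out explicitly. The following is a standard textbook formulation sketched here for reference, using the usual symbols: k is the rate constant, A the pre-exponential factor, E_a the activation energy, R the gas constant, T the absolute temperature, and ΔG, ΔH, ΔS the changes in Gibbs free energy, enthalpy, and entropy.

```latex
% Arrhenius equation: exponential dependence of the rate constant on temperature.
k = A \, e^{-E_a / (R T)}

% Feasibility criterion from chemical thermodynamics: a reaction can proceed
% spontaneously only if the change in Gibbs free energy is non-positive.
\Delta G = \Delta H - T \Delta S \leq 0
```

The exponential factor in the first relation makes reaction rates extremely sensitive to temperature, which is why even modest heating often speeds up a reaction dramatically.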
The transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. However, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer. Thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances relative to light or other forms of electronic energy. For example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy. The existence of characteristic energy levels for different chemical substances is useful for their identification by the analysis of spectral lines. Different kinds of spectra are often used in chemical spectroscopy, e.g. IR, microwave, NMR, ESR, etc. Spectroscopy is also used to identify the composition of remote objects – like stars and distant galaxies – by analyzing their radiation spectra. The term chemical energy is often used to indicate the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances. Reaction When a chemical substance is transformed as a result of its interaction with another substance or with energy, a chemical reaction is said to have occurred. A chemical reaction is therefore a concept related to the "reaction" of a substance when it comes in close contact with another (whether as a mixture or a solution), exposure to some form of energy, or both. It results in some energy exchange between the constituents of the reaction as well as with the system environment, which may be designed vessels—often laboratory glassware. Chemical reactions can result in the formation or dissociation of molecules, that is, molecules breaking apart to form two or more molecules or rearrangement of atoms within or across molecules. Chemical reactions usually involve the making or breaking of chemical bonds. Oxidation, reduction, dissociation, acid–base neutralization and molecular rearrangement are some examples of common chemical reactions. A chemical reaction can be symbolically depicted through a chemical equation. While in a non-nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons. The sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. A chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. Many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. Reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. Many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. Several empirical rules, like the Woodward–Hoffmann rules, often come in handy while proposing a mechanism for a chemical reaction. According to the IUPAC Gold Book, a chemical reaction is "a process that results in the interconversion of chemical species." Accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction.
An additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. Such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities (i.e. 'microscopic chemical events'). Ions and salts An ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. When an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. When an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. Cations and anions can form a crystalline lattice of neutral salts, such as the Na+ and Cl− ions forming sodium chloride, or NaCl. Examples of polyatomic ions that do not split up during acid–base reactions are hydroxide (OH−) and phosphate (PO43−). Plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. Acidity and basicity A substance can often be classified as an acid or a base. There are several different theories which explain acid–base behavior. The simplest is Arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. According to Brønsted–Lowry acid–base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction; by extension, a base is the substance which receives that hydrogen ion. A third common theory is Lewis acid–base theory, which is based on the formation of new chemical bonds. Lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. There are several other ways in which a substance may be classified as an acid or a base, as is evident in the history of this concept. Acid strength is commonly measured by two methods. One measurement, based on the Arrhenius definition of acidity, is pH, which is a measurement of the hydronium ion concentration in a solution, as expressed on a negative logarithmic scale. Thus, solutions that have a low pH have a high hydronium ion concentration and can be said to be more acidic. For example, a solution with a hydronium ion concentration of 10^-3 mol/L has a pH of 3, while one with a concentration of 10^-7 mol/L (as in pure water) has a pH of 7. The other measurement, based on the Brønsted–Lowry definition, is the acid dissociation constant (Ka), which measures the relative ability of a substance to act as an acid under the Brønsted–Lowry definition of an acid. That is, substances with a higher Ka are more likely to donate hydrogen ions in chemical reactions than those with lower Ka values. Redox Redox (reduction-oxidation) reactions include all chemical reactions in which atoms have their oxidation state changed by either gaining electrons (reduction) or losing electrons (oxidation). Substances that have the ability to oxidize other substances are said to be oxidative and are known as oxidizing agents, oxidants or oxidizers. An oxidant removes electrons from another substance. Similarly, substances that have the ability to reduce other substances are said to be reductive and are known as reducing agents, reductants, or reducers. A reductant transfers electrons to another substance and is thus oxidized itself.
And because it "donates" electrons, it is also called an electron donor. Oxidation and reduction properly refer to a change in oxidation number—the actual transfer of electrons may never occur. Thus, oxidation is better defined as an increase in oxidation number, and reduction as a decrease in oxidation number. Equilibrium Although the concept of equilibrium is widely used across sciences, in the context of chemistry it arises whenever a number of different states of the chemical composition are possible, as for example, in a mixture of several chemical compounds that can react with one another, or when a substance can be present in more than one kind of phase. A system of chemical substances at equilibrium, even though having an unchanging composition, is most often not static; molecules of the substances continue to react with one another, thus giving rise to a dynamic equilibrium. Thus the concept describes the state in which parameters such as chemical composition remain unchanged over time. Chemical laws Chemical reactions are governed by certain laws, which have become fundamental concepts in chemistry. Some of them are:
Avogadro's law
Beer–Lambert law
Boyle's law (1662, relating pressure and volume)
Charles's law (1787, relating volume and temperature)
Fick's laws of diffusion
Gay-Lussac's law (1809, relating pressure and temperature)
Le Chatelier's principle
Henry's law
Hess's law
Law of conservation of energy, which leads to the important concepts of equilibrium, thermodynamics, and kinetics
Law of conservation of mass: mass continues to be conserved in isolated systems, even in modern physics. However, special relativity shows that due to mass–energy equivalence, whenever non-material "energy" (heat, light, kinetic energy) is removed from a non-isolated system, some mass will be lost with it. High energy losses result in loss of weighable amounts of mass, an important topic in nuclear chemistry.
Law of definite composition, although in many systems (notably biomacromolecules and minerals) the ratios tend to require large numbers, and are frequently represented as a fraction
Law of multiple proportions
Raoult's law
History The history of chemistry spans a period from very old times to the present. Since several millennia BC, civilizations were using technologies that would eventually form the basis of the various branches of chemistry. Examples include extracting metals from ores, making pottery and glazes, fermenting beer and wine, extracting chemicals from plants for medicine and perfume, rendering fat into soap, making glass, and making alloys like bronze. Chemistry was preceded by its protoscience, alchemy, which took a non-scientific approach to understanding the constituents of matter and their interactions. Despite being unsuccessful in explaining the nature of matter and its transformations, alchemists set the stage for modern chemistry by performing experiments and recording the results. Robert Boyle, although skeptical of elements and convinced of alchemy, played a key part in elevating the "sacred art" as an independent, fundamental and philosophical discipline in his work The Sceptical Chymist (1661). While both alchemy and chemistry are concerned with matter and its transformations, the crucial difference was given by the scientific method that chemists employed in their work.
Chemistry, as a body of knowledge distinct from alchemy, became an established science with the work of Antoine Lavoisier, who developed a law of conservation of mass that demanded careful measurement and quantitative observations of chemical phenomena. The history of chemistry afterwards is intertwined with the history of thermodynamics, especially through the work of Willard Gibbs. Definition The definition of chemistry has changed over time, as new discoveries and theories add to the functionality of the science. The term "chymistry", in the view of noted scientist Robert Boyle in 1661, meant the subject of the material principles of mixed bodies. In 1663, the chemist Christopher Glaser described "chymistry" as a scientific art, by which one learns to dissolve bodies, and draw from them the different substances of their composition, and how to unite them again, and exalt them to a higher perfection. The 1730 definition of the word "chemistry", as used by Georg Ernst Stahl, meant the art of resolving mixed, compound, or aggregate bodies into their principles; and of composing such bodies from those principles. In 1837, Jean-Baptiste Dumas considered the word "chemistry" to refer to the science concerned with the laws and effects of molecular forces. This definition further evolved until, in 1947, it came to mean the science of substances: their structure, their properties, and the reactions that change them into other substances – a characterization accepted by Linus Pauling. More recently, in 1998, Professor Raymond Chang broadened the definition of "chemistry" to mean the study of matter and the changes it undergoes. Background Early civilizations, such as the Egyptians, Babylonians, and Indians amassed practical knowledge concerning the arts of metallurgy, pottery and dyes, but did not develop a systematic theory. A basic chemical hypothesis first emerged in Classical Greece with the theory of four elements as propounded definitively by Aristotle, stating that fire, air, earth and water were the fundamental elements from which everything is formed as a combination. Greek atomism dates back to 440 BC, arising in works by philosophers such as Democritus and Epicurus. In 50 BCE, the Roman philosopher Lucretius expanded upon the theory in his book De rerum natura (On The Nature of Things). Unlike modern concepts of science, Greek atomism was purely philosophical in nature, with little concern for empirical observations and no concern for chemical experiments. An early form of the idea of conservation of mass is the notion that "Nothing comes from nothing" in Ancient Greek philosophy, which can be found in Empedocles (approx. 5th century BC): "For it is impossible for anything to come to be from what is not, and it cannot be brought about or heard of that what is should be utterly destroyed." and Epicurus (3rd century BC), who, describing the nature of the Universe, wrote that "the totality of things was always such as it is now, and always will be". In the Hellenistic world the art of alchemy first proliferated, mingling magic and occultism into the study of natural substances with the ultimate goal of transmuting elements into gold and discovering the elixir of eternal life. Work, particularly the development of distillation, continued in the early Byzantine period with the most famous practitioner being the 4th century Greek-Egyptian Zosimos of Panopolis.
Alchemy continued to be developed and practised throughout the Arab world after the Muslim conquests, and from there, and from the Byzantine remnants, diffused into medieval and Renaissance Europe through Latin translations. The Arabic works attributed to Jabir ibn Hayyan introduced a systematic classification of chemical substances, and provided instructions for deriving an inorganic compound (sal ammoniac or ammonium chloride) from organic substances (such as plants, blood, and hair) by chemical means. Some Arabic Jabirian works (e.g., the "Book of Mercy", and the "Book of Seventy") were later translated into Latin under the Latinized name "Geber", and in 13th-century Europe an anonymous writer, usually referred to as pseudo-Geber, started to produce alchemical and metallurgical writings under this name. Later influential Muslim philosophers, such as Abū al-Rayhān al-Bīrūnī and Avicenna, disputed the theories of alchemy, particularly the theory of the transmutation of metals. Under the influence of the new empirical methods propounded by Sir Francis Bacon and others, a group of chemists at Oxford – Robert Boyle, Robert Hooke and John Mayow – began to reshape the old alchemical traditions into a scientific discipline. Boyle in particular questioned some commonly held chemical theories and argued for chemical practitioners to be more "philosophical" and less commercially focused in The Sceptical Chymist. He formulated Boyle's law, rejected the classical "four elements" and proposed a mechanistic alternative of atoms and chemical reactions that could be subject to rigorous experiment. In the following decades, many important discoveries were made, such as the nature of 'air', which was discovered to be composed of many different gases. The Scottish chemist Joseph Black and the Flemish Jan Baptist van Helmont discovered carbon dioxide, or what Black called 'fixed air' in 1754; Henry Cavendish discovered hydrogen and elucidated its properties and Joseph Priestley and, independently, Carl Wilhelm Scheele isolated pure oxygen. The theory of phlogiston (a substance at the root of all combustion) was propounded by the German Georg Ernst Stahl in the early 18th century and was only overturned by the end of the century by the French chemist Antoine Lavoisier, the chemical analogue of Newton in physics. Lavoisier did more than any other to establish the new science on proper theoretical footing, by elucidating the principle of conservation of mass and developing a new system of chemical nomenclature used to this day. English scientist John Dalton proposed the modern theory of atoms; that all substances are composed of indivisible 'atoms' of matter and that different atoms have varying atomic weights. The development of the electrochemical theory of chemical combinations occurred in the early 19th century as the result of the work of two scientists in particular, Jöns Jacob Berzelius and Humphry Davy, made possible by the prior invention of the voltaic pile by Alessandro Volta. Davy discovered nine new elements including the alkali metals by extracting them from their oxides with electric current. The British chemist William Prout first proposed ordering all the elements by their atomic weight, arguing that all atoms had a weight that was an exact multiple of the atomic weight of hydrogen. J.A.R. Newlands devised an early table of elements, which was then developed into the modern periodic table of elements in the 1860s by Dmitri Mendeleev and independently by several other scientists including Julius Lothar Meyer.
The inert gases, later called the noble gases, were discovered by William Ramsay in collaboration with Lord Rayleigh at the end of the century, thereby filling in the basic structure of the table. At the turn of the twentieth century the theoretical underpinnings of chemistry were finally understood due to a series of remarkable discoveries that succeeded in probing and discovering the very nature of the internal structure of atoms. In 1897, J.J. Thomson of the University of Cambridge discovered the electron and soon after the French scientist Henri Becquerel as well as the couple Pierre and Marie Curie investigated the phenomenon of radioactivity. In a series of pioneering scattering experiments Ernest Rutherford at the University of Manchester discovered the internal structure of the atom and the existence of the proton, classified and explained the different types of radioactivity and successfully transmuted the first element by bombarding nitrogen with alpha particles. His work on atomic structure was improved on by his students, the Danish physicist Niels Bohr, the Englishman Henry Moseley and the German Otto Hahn, who went on to father the emerging nuclear chemistry and discovered nuclear fission. The electronic theory of chemical bonds and molecular orbitals was developed by the American scientists Linus Pauling and Gilbert N. Lewis. The year 2011 was declared by the United Nations as the International Year of Chemistry. It was an initiative of the International Union of Pure and Applied Chemistry and of the United Nations Educational, Scientific, and Cultural Organization; it involved chemical societies, academics, and institutions worldwide and relied on individual initiatives to organize local and regional activities. Organic chemistry was developed by Justus von Liebig and others, following Friedrich Wöhler's synthesis of urea. Other crucial 19th century advances were: an understanding of valence bonding (Edward Frankland in 1852) and the application of thermodynamics to chemistry (J. W. Gibbs and Svante Arrhenius in the 1870s). Practice Subdisciplines Chemistry is typically divided into several major sub-disciplines. There are also several main cross-disciplinary and more specialized fields of chemistry. Analytical chemistry is the analysis of material samples to gain an understanding of their chemical composition and structure. Analytical chemistry incorporates standardized experimental methods in chemistry. These methods may be used in all subdisciplines of chemistry, excluding purely theoretical chemistry. Biochemistry is the study of the chemicals, chemical reactions and interactions that take place in living organisms. Biochemistry and organic chemistry are closely related, as in medicinal chemistry or neurochemistry. Biochemistry is also associated with molecular biology and genetics. Inorganic chemistry is the study of the properties and reactions of inorganic compounds. The distinction between organic and inorganic disciplines is not absolute and there is much overlap, most importantly in the sub-discipline of organometallic chemistry. Materials chemistry is the preparation, characterization, and understanding of substances with a useful function. The field is a new breadth of study in graduate programs, and it integrates elements from all classical areas of chemistry with a focus on fundamental issues that are unique to materials. Primary systems of study include the chemistry of condensed phases (solids, liquids, polymers) and interfaces between different phases.
Neurochemistry is the study of neurochemicals – including transmitters, peptides, proteins, lipids, sugars, and nucleic acids – their interactions, and the roles they play in forming, maintaining, and modifying the nervous system. Nuclear chemistry is the study of how subatomic particles come together and make nuclei. Modern transmutation is a large component of nuclear chemistry, and the table of nuclides is an important result and tool for this field. Organic chemistry is the study of the structure, properties, composition, mechanisms, and reactions of organic compounds. An organic compound is defined as any compound based on a carbon skeleton. Physical chemistry is the study of the physical and fundamental basis of chemical systems and processes. In particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. Important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, statistical mechanics, spectroscopy, and more recently, astrochemistry. Physical chemistry has large overlap with molecular physics. Physical chemistry involves the use of infinitesimal calculus in deriving equations. It is usually associated with quantum chemistry and theoretical chemistry. Physical chemistry is a distinct discipline from chemical physics, but again, there is very strong overlap. Theoretical chemistry is the study of chemistry via fundamental theoretical reasoning (usually within mathematics or physics). In particular the application of quantum mechanics to chemistry is called quantum chemistry. Since the end of the Second World War, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. Theoretical chemistry has large overlap with (theoretical and experimental) condensed matter physics and molecular physics. Other subdivisions include electrochemistry, femtochemistry, flavor chemistry, flow chemistry, immunohistochemistry, hydrogenation chemistry, mathematical chemistry, molecular mechanics, natural product chemistry, organometallic chemistry, petrochemistry, photochemistry, physical organic chemistry, polymer chemistry, radiochemistry, sonochemistry, supramolecular chemistry, synthetic chemistry, and many others. Interdisciplinary Interdisciplinary fields include agrochemistry, astrochemistry (and cosmochemistry), atmospheric chemistry, chemical engineering, chemical biology, chemo-informatics, environmental chemistry, geochemistry, green chemistry, immunochemistry, marine chemistry, materials science, mechanochemistry, medicinal chemistry, molecular biology, nanotechnology, oenology, pharmacology, phytochemistry, solid-state chemistry, surface science, thermochemistry, and many others. Industry The chemical industry represents an important economic activity worldwide. The global top 50 chemical producers in 2013 had sales of US$980.5 billion with a profit margin of 10.3%.
Professional societies
American Chemical Society
American Society for Neurochemistry
Chemical Institute of Canada
Chemical Society of Peru
International Union of Pure and Applied Chemistry
Royal Australian Chemical Institute
Royal Netherlands Chemical Society
Royal Society of Chemistry
Society of Chemical Industry
World Association of Theoretical and Computational Chemists
List of chemistry societies
See also
Comparison of software for molecular mechanics modeling
Glossary of chemistry terms
International Year of Chemistry
List of chemists
List of compounds
List of important publications in chemistry
List of unsolved problems in chemistry
Outline of chemistry
Periodic systems of small molecules
Philosophy of chemistry
Science tourism
Further reading
Popular reading
Atkins, P.W. Galileo's Finger (Oxford University Press)
Atkins, P.W. Atkins' Molecules (Cambridge University Press)
Kean, Sam. The Disappearing Spoon – and Other True Tales from the Periodic Table (Black Swan) London, 2010
Levi, Primo. The Periodic Table (Penguin Books) [1975], translated from the Italian by Raymond Rosenthal (1984)
Stwertka, A. A Guide to the Elements (Oxford University Press)
Introductory undergraduate textbooks
Atkins, P.W., Overton, T., Rourke, J., Weller, M. and Armstrong, F. Shriver and Atkins Inorganic Chemistry (4th edition) 2006 (Oxford University Press)
Chang, Raymond. Chemistry, 6th ed. Boston: James M. Smith, 1998.
Voet and Voet. Biochemistry (Wiley)
Advanced undergraduate-level or graduate textbooks
Atkins, P. W. Physical Chemistry (Oxford University Press)
Atkins, P. W. et al. Molecular Quantum Mechanics (Oxford University Press)
McWeeny, R. Coulson's Valence (Oxford Science Publications)
Pauling, L. The Nature of the Chemical Bond (Cornell University Press)
Pauling, L., and Wilson, E.B. Introduction to Quantum Mechanics with Applications to Chemistry (Dover Publications)
Smart and Moore. Solid State Chemistry: An Introduction (Chapman and Hall)
Stephenson, G. Mathematical Methods for Science Students (Longman)
External links
General Chemistry principles, patterns and applications.
https://en.wikipedia.org/wiki/Christ%20%28title%29
Christ (title)
Christ, used by Christians as both a name and a title, unambiguously refers to Jesus. It is also used as a title, in the reciprocal use "Christ Jesus", meaning "the Messiah Jesus", and independently as "the Christ". The Pauline epistles, the earliest texts of the New Testament, often refer to Jesus as "Christ Jesus" or "Christ". The concept of the Christ in Christianity originated from the concept of the messiah in Judaism. Christians believe that Jesus is the messiah foretold in the Hebrew Bible and the Christian Old Testament. Although the conceptions of the messiah in each religion are similar, for the most part they are distinct from one another due to the split of early Christianity and Judaism in the 1st century. Although the original followers of Jesus believed Jesus to be the Jewish messiah, e.g. in the Confession of Peter, he was usually referred to as "Jesus of Nazareth" or "Jesus, son of Joseph". Jesus came to be called "Jesus Christ" (meaning "Jesus the Khristós", i.e. "Jesus the Messiah" or "Jesus the Anointed") by Christians, who believe that his crucifixion and resurrection fulfill the messianic prophecies of the Old Testament. Etymology Christ comes from the Greek word Χριστός (Christós), meaning "anointed one". The word is derived from the Greek verb χρίω (chríō), meaning "to anoint." In the Greek Septuagint, χριστός was a semantic loan used to translate the Hebrew māšîaḥ (מָשִׁיחַ, messiah), meaning "[one who is] anointed". Usage The word Christ (and similar spellings) appears in English and in most European languages. English-speakers now often use "Christ" as if it were a name, one part of the name "Jesus Christ", though it was originally a title ("the Messiah"). Its usage in "Christ Jesus" emphasizes its nature as a title. Compare the usage "the Christ". The spelling Christ in English became standardized in the 18th century, when, in the spirit of the Enlightenment, the spelling of certain words changed to fit their Greek or Latin origins. Prior to this, scribes writing in Old and Middle English usually used the spelling Crist – the i being pronounced either as a long vowel, preserved in the names of churches such as St Katherine Cree, or as a short one, preserved in the modern pronunciation of "Christmas". The spelling "Christ" in English is attested from the 14th century. In modern and ancient usage, even in secular terminology, "Christ" usually refers to Jesus, based on the centuries-old tradition of such usage. Since the Apostolic Age, the use of the definite article before the word Christ and its gradual development into a proper name show that the Christians identified the bearer with the promised Messias of the Jews. Background and New Testament references Pre-New Testament references In the Old Testament, anointing was a ceremonial act reserved for the Kings of Israel (1 Kings 19:16; 24:7; Psalms 17(18):51), for Cyrus the Great (Isaiah 45:1), for the High Priest of Israel, for the patriarchs (Psalms 104(105):15), and for the prophets. In the Septuagint text of the deuterocanonical books, the term "Christ" (Χριστός, translit. Christós) is found in 2 Maccabees 1:10 (referring to the anointed High Priest of Israel) and in the Book of Sirach 46:19, in relation to Samuel, prophet and institutor of the kingdom under Saul. At the time of Jesus, there was no single form of Second Temple Judaism, and there were significant political, social, and religious differences among the various Jewish groups. However, for centuries the Jews had used the term moshiach ("anointed") to refer to their expected deliverer.
Opening lines of Mark and Matthew The opening line of Mark ("The beginning of the gospel of Jesus Christ, the Son of God") identifies Jesus as both Christ and the Son of God. The opening of Matthew uses Christ as a name, and Matthew later explains it again with: "Jesus, who is called Christ". The use of the definite article before the word "Christ" and its gradual development into a proper name show that the Christians identified Jesus with the promised messiah of the Jews who fulfilled all the messianic predictions in a fuller and a higher sense than had been given them by the rabbis. Confession of Peter (Matthew, Mark and Luke) The so-called Confession of Peter, recorded in the Synoptic Gospels as Jesus's foremost apostle Peter saying that Jesus was the Messiah, has become a famous proclamation of faith among Christians since the first century. Martha's statement (John) In the Gospel of John, Martha told Jesus, "you are the Christ, the Son of God, who is coming into the world", signifying that both titles were generally accepted (yet considered distinct) among the followers of Jesus before the raising of Lazarus. Sanhedrin trial of Jesus (Matthew, Mark and Luke) During the Sanhedrin trial of Jesus, it might appear from the narrative of Matthew that Jesus at first refused a direct reply to the high priest Caiaphas's question: "Are you the Messiah, the Son of God?", where his answer is given merely as Σὺ εἶπας (Su eipas, "You [singular] have said it"). Similarly but differently in Luke, all those present are said to ask Jesus: 'Are you then the Son of God?', to which Jesus reportedly answered: Ὑμεῖς λέγετε ὅτι ἐγώ εἰμι (Hymeis legete hoti ego eimi, "You [plural] say that I am"). In the Gospel of Mark, however, when asked by Caiaphas 'Are you the Messiah, the Son of the Blessed One?', Jesus tells the Sanhedrin: Ἐγώ εἰμι (ego eimi, "I am"). There are instances from Jewish literature in which the expression "you have said it" is equivalent to "you are right". The Messianic claim was less significant than the claim to divinity, which caused the high priest's horrified accusation of blasphemy and the subsequent call for the death sentence. Before Pilate, on the other hand, it was merely the assertion of his royal dignity which gave grounds for his condemnation. Pauline epistles The word "Christ" is closely associated with Jesus in the Pauline epistles, which suggests that there was no need for the early Christians to claim that Jesus is Christ because it was considered widely accepted among them. Hence Paul can use the term Khristós with no confusion as to whom it refers, and he can use expressions such as "in Christ" to refer to the followers of Jesus. Paul proclaimed him as the Last Adam, who restored through obedience what Adam lost through disobedience. The Pauline epistles are a source of some key Christological connections; for example, one passage relates the love of Christ to the knowledge of Christ, and considers the love of Christ as a necessity for knowing him. There are also implicit claims to him being the Christ in the words and actions of Jesus. Use of Messias in John The Hellenized transliteration Μεσσίας (Messías) is used twice to mean "Messiah" in the New Testament: by the disciple Andrew at John 1:41, and by the Samaritan woman at the well at John 4:25. In both cases, the Greek text specifies immediately after that this means "the Christ". Christology Christology, literally "the understanding of Christ", is the study of the nature (person) and work (role in salvation) of Jesus in Christianity.
It studies Jesus Christ's humanity and divinity, and the relation between these two aspects; and the role he plays in salvation. From the second to the fifth centuries, the relation of the human and divine nature of Christ was a major focus of debates in the early church and at the first seven ecumenical councils. The Council of Chalcedon in 451 issued a formulation of the hypostatic union of the two natures of Christ, one human and one divine, "united with neither confusion nor division". Most of the major branches of Western Christianity and Eastern Orthodoxy subscribe to this formulation, while many branches of Oriental Orthodox Churches reject it, subscribing to miaphysitism. According to the Summa Theologica of Thomas Aquinas, in the singular case of Jesus, the word Christ has a twofold meaning, which stands for "both the Godhead anointing and the manhood anointed". It derives from the twofold human-divine nature of Christ (dyophysitism): the Son of man is anointed in consequence of His incarnate flesh, while the Son of God is anointing in consequence of the "Godhead which He has with the Father" (ST III, q. 16, a. 5). Symbols The use of "Χ" as an abbreviation for "Christ" derives from the Greek letter Chi (χ), in the word Χριστός (Christós). An early Christogram is the Chi Rho symbol, formed by superimposing the first two Greek letters in Christ, chi (Χ) and rho (Ρ), to produce ☧. The centuries-old English word Χmas (or, in earlier form, XPmas) is an English form of χ-mas, itself an abbreviation for Christ-mas. The Oxford English Dictionary (OED) and the OED Supplement have cited usages of "X-" or "Xp-" for "Christ-" as early as 1485. The terms "Xpian" and "Xren" have been used for "Christian", "Xst" for "Christ's", "Xρofer" for "Christopher", and "Xmas", "Xstmas", and "Xtmas" for "Christmas". The OED further cites usage of "Xtianity" for "Christianity" from 1634. According to Merriam-Webster's Dictionary of English Usage, most of the evidence for these words comes from "educated Englishmen who knew their Greek". The December 1957 News and Views published by the Church League of America, a conservative organization founded in 1937, attacked the use of "Xmas" in an article titled "X=The Unknown Quantity". Gerald L. K. Smith picked up the statements later, in December 1966, saying that Xmas was a "blasphemous omission of the name of Christ" and that "'X' is referred to as being symbolical of the unknown quantity." More recently, American evangelist Franklin Graham and former CNN contributor Roland S. Martin publicly raised concerns. Graham stated in an interview that the use of "Xmas" is taking "Christ out of Christmas" and called it a "war against the name of Jesus Christ." Roland Martin relates the use of "Xmas" to his growing concerns about the increasing commercialization and secularization of what he says is one of Christianity's highest holy days. See also Chrism Ichthys Dyophysitism Hypostatic union Kerygma Knowledge of Christ Masih Names and titles of Jesus in the Quran Perfection of Christ You are Christ Notes References Further reading Christian messianism Christian terminology Christology Religious titles Septuagint words and phrases Davidic line
https://en.wikipedia.org/wiki/Cheirogaleidae
Cheirogaleidae
The Cheirogaleidae are the family of strepsirrhine primates containing the various dwarf and mouse lemurs. Like all other lemurs, cheirogaleids live exclusively on the island of Madagascar. Characteristics Cheirogaleids are smaller than the other lemurs and, in fact, they are the smallest primates. They have soft, long fur, colored grey-brown to reddish on top, with a generally brighter underbelly. Typically, they have small ears, large, close-set eyes, and long hind legs. Like all strepsirrhines, they have fine claws at the second toe of the hind legs. They grow to a size of only 13 to 28 cm, with a tail that is very long, sometimes up to one and a half times as long as the body. They weigh no more than 500 grams, with some species weighing as little as 60 grams. Dwarf and mouse lemurs are nocturnal and arboreal. They are excellent climbers and can also jump far, using their long tails for balance. When on the ground (a rare occurrence), they move by hopping on their hind legs. They spend the day in tree hollows or leaf nests. Cheirogaleids are typically solitary, but sometimes live together in pairs. Their eyes possess a tapetum lucidum, a light-reflecting layer that improves their night vision. Some species, such as the lesser dwarf lemur, store fat at the hind legs and the base of the tail, and hibernate. Unlike lemurids, they have long upper incisors, although they do have the comb-like teeth typical of all strepsirrhines. They have the dental formula 2.1.3.3 / 2.1.3.3. Cheirogaleids are omnivores, eating fruits, flowers and leaves (and sometimes nectar), as well as insects, spiders, and small vertebrates. The females usually have three pairs of nipples. After a gestation of about 60 days, they will bear two to four (usually two or three) young. After five to six weeks, the young are weaned and become fully mature near the end of their first year or sometime in their second year, depending on the species. In human care, they can live for up to 15 years, although their life expectancy in the wild is probably significantly shorter. Classification The five genera of cheirogaleids contain 42 species. Infraorder Lemuriformes Family Cheirogaleidae Genus Cheirogaleus: dwarf lemurs Montagne d'Ambre dwarf lemur, Cheirogaleus andysabini Furry-eared dwarf lemur, Cheirogaleus crossleyi Groves' dwarf lemur, Cheirogaleus grovesi Lavasoa dwarf lemur, Cheirogaleus lavasoensis Greater dwarf lemur, Cheirogaleus major Fat-tailed dwarf lemur, Cheirogaleus medius Lesser iron-gray dwarf lemur, Cheirogaleus minusculus Ankarana dwarf lemur, Cheirogaleus shethi Sibree's dwarf lemur, Cheirogaleus sibreei Thomas' dwarf lemur, Cheirogaleus thomasi Genus Microcebus: mouse lemurs Arnhold's mouse lemur, Microcebus arnholdi Madame Berthe's mouse lemur, Microcebus berthae Bongolava mouse lemur, Microcebus bongolavensis Boraha mouse lemur, Microcebus boraha Danfoss' mouse lemur, Microcebus danfossi Ganzhorn's mouse lemur, Microcebus ganzhorni Gerp's mouse lemur, Microcebus gerpi Reddish-gray mouse lemur, Microcebus griseorufus Jolly's mouse lemur, Microcebus jollyae Jonah's mouse lemur, Microcebus jonahi Goodman's mouse lemur, Microcebus lehilahytsara MacArthur's mouse lemur, Microcebus macarthurii Claire's mouse lemur, Microcebus mamiratra, a synonym of M. 
lokobensis Bemanasy mouse lemur, Microcebus manitatra Margot Marsh's mouse lemur, Microcebus margotmarshae Marohita mouse lemur, Microcebus marohita Mittermeier's mouse lemur, Microcebus mittermeieri Gray mouse lemur, Microcebus murinus Pygmy mouse lemur, Microcebus myoxinus Golden-brown mouse lemur, Microcebus ravelobensis Brown mouse lemur, Microcebus rufus Sambirano mouse lemur, Microcebus sambiranensis Simmons' mouse lemur, Microcebus simmonsi Anosy mouse lemur, Microcebus tanosi Northern rufous mouse lemur, Microcebus tavaratra Genus Mirza: giant mouse lemurs Coquerel's giant mouse lemur or Coquerel's dwarf lemur, Mirza coquereli Northern giant mouse lemur, Mirza zaza Genus Allocebus Hairy-eared dwarf lemur, Allocebus trichotis Genus Phaner: fork-marked lemurs Masoala fork-marked lemur, Phaner furcifer Pale fork-marked lemur, Phaner pallescens Pariente's fork-marked lemur, Phaner parienti Amber Mountain fork-marked lemur, Phaner electromontis Footnotes According to the letter of the International Code of Zoological Nomenclature, the correct name for this family should be Microcebidae, but the name Cheirogaleidae has been retained for stability. In 2008, 7 new species of Microcebus were formally recognized, but Microcebus lokobensis (Lokobe mouse lemur) was not among the additions, even though it was described in 2006. Therefore, its status as a species is still questionable. References Lemurs Primate families Taxa named by John Edward Gray Taxa described in 1873
https://en.wikipedia.org/wiki/Cebidae
Cebidae
The Cebidae are one of the five families of New World monkeys now recognised. Extant members are the capuchin and squirrel monkeys. These species are found throughout tropical and subtropical South and Central America. Characteristics Cebid monkeys are arboreal animals that only rarely travel on the ground. They are generally small monkeys, ranging in size up to that of the brown capuchin, with a body length of 33 to 56 cm, and a weight of 2.5 to 3.9 kilograms. They are somewhat variable in form and coloration, but all have the wide, flat noses typical of New World monkeys. They are omnivorous, mostly eating fruit and insects, although the proportions of these foods vary greatly between species. They have the dental formula 2.1.3.3 / 2.1.3.3. Females give birth to one or two young after a gestation period of between 130 and 170 days, depending on species. They are social animals, living in groups of between five and forty individuals, with the smaller species typically forming larger groups. They are generally diurnal in habit. Classification Previously, New World monkeys were divided between Callitrichidae and this family. For a few years, marmosets, tamarins, and lion tamarins were placed as a subfamily (Callitrichinae) in Cebidae, while other genera were moved from Cebidae into the families Aotidae, Pitheciidae and Atelidae. The most recent classification of New World monkeys again splits the callitrichids off, leaving only the capuchins and squirrel monkeys in this family. Subfamily Cebinae (capuchin monkeys) Genus Cebus (gracile capuchin monkeys) Colombian white-faced capuchin or Colombian white-headed capuchin, Cebus capucinus Panamanian white-faced capuchin or Panamanian white-headed capuchin, Cebus imitator Marañón white-fronted capuchin, Cebus yuracus Shock-headed capuchin, Cebus cuscinus Spix's white-fronted capuchin, Cebus unicolor Humboldt's white-fronted capuchin, Cebus albifrons Guianan weeper capuchin, Cebus olivaceus Chestnut capuchin, Cebus castaneus Ka'apor capuchin, Cebus kaapori Venezuelan brown capuchin, Cebus brunneus Sierra de Perijá white-fronted capuchin, Cebus leucocephalus Río Cesar white-fronted capuchin, Cebus cesare Varied white-fronted capuchin, Cebus versicolor Santa Marta white-fronted capuchin, Cebus malitiosus Ecuadorian white-fronted capuchin, Cebus aequatorialis Genus Sapajus (robust capuchin monkeys) Tufted capuchin, Sapajus apella Blond capuchin, Sapajus flavius Black-striped capuchin, Sapajus libidinosus Azaras's capuchin, Sapajus cay Black capuchin, Sapajus nigritus Crested capuchin, Sapajus robustus Golden-bellied capuchin, Sapajus xanthosternos Subfamily Saimiriinae (squirrel monkeys) Genus Saimiri Bare-eared squirrel monkey, Saimiri ustus Black squirrel monkey, Saimiri vanzolinii Black-capped squirrel monkey, Saimiri boliviensis Central American squirrel monkey, Saimiri oerstedi Guianan squirrel monkey, Saimiri sciureus Humboldt's squirrel monkey, Saimiri cassiquiarensis Collins' squirrel monkey, Saimiri collinsi Extinct taxa Genus Panamacebus Panamacebus transitus Subfamily Cebinae Genus Acrecebus Acrecebus fraileyi Genus Killikaike Killikaike blakei Genus Dolichocebus Dolichocebus gaimanensis Subfamily Saimiriinae Genus Saimiri Saimiri fieldsi Saimiri annectens Genus Patasola Patasola magdalenae References New World monkeys Primate families Taxa named by Charles Lucien Bonaparte Taxa described in 1831
https://en.wikipedia.org/wiki/Crony%20capitalism
Crony capitalism
Crony capitalism, sometimes called cronyism, is an economic system in which businesses thrive not as a result of free enterprise, but rather as a return on money amassed through collusion between a business class and the political class. This is often achieved by business interests manipulating their relationships with state power rather than competing openly: in obtaining permits, government grants, tax breaks, or other forms of state intervention over resources, business interests exercise undue influence over the state's deployment of public goods, for example, mining concessions for primary commodities or contracts for public works. Money is then made not merely by making a profit in the market, but through profiteering by rent seeking using this monopoly or oligopoly. Entrepreneurship and innovative practices which seek to reward risk are stifled, since crony businesses add little value: hardly anything of significant value is created by them, with transactions taking the form of trading. Crony capitalism spills over into government, politics, and the media when this nexus distorts the economy and affects society to the extent that it corrupts public-serving economic, political, and social ideals. Historical usage The first extensive use of the term "crony capitalism" came about in the 1980s, to characterize the Philippine economy under the dictatorship of Ferdinand Marcos. Early uses of this term to describe the economic practices of the Marcos regime included that of Ricardo Manapat, who introduced it in his 1979 pamphlet "Some are Smarter than Others", which was later published in 1991; former Time magazine business editor George M. Taber, who used the term in a Time magazine article in 1980; and activist (and later Finance Minister) Jaime Ongpin, who used the term extensively in his writing and is sometimes credited with having coined it. The term crony capitalism made a significant impact on the public as an explanation of the Asian financial crisis. It is also used to describe governmental decisions favoring cronies of governmental officials. In this context, the term is often used comparatively with corporate welfare, a technical term often used to assess government bailouts and preferential monetary policy, as opposed to the economic theory described by crony capitalism. The difference between these terms is whether a government action can be said to benefit the individual (crony capitalism) rather than the industry (corporate welfare). In practice Crony capitalism exists along a continuum. In its lightest form, crony capitalism consists of collusion among market players which is officially tolerated or encouraged by the government. While perhaps lightly competing against each other, they will present a unified front (sometimes called a trade association or industry trade group) to the government in requesting subsidies or aid or regulation. Newcomers to a market then need to surmount significant barriers to entry in seeking loans, acquiring shelf space, or receiving official sanction. Some such systems are very formalized, such as sports leagues and the Medallion System of the taxicabs of New York City, but often the process is more subtle, such as expanding training and certification exams to make it more expensive for new entrants to enter a market and thereby limiting potential competition. 
In technological fields, there may evolve a system whereby new entrants may be accused of infringing on patents that the established competitors never assert against each other. In spite of this, some competitors may succeed when the legal barriers are light. The term crony capitalism is generally used when these practices either come to dominate the economy as a whole, or come to dominate the most valuable industries in an economy. Intentionally ambiguous laws and regulations are common in such systems. Taken strictly, such laws would greatly impede practically all business activity, but in practice they are only erratically enforced. The specter of having such laws suddenly brought down upon a business provides an incentive to stay in the good graces of political officials. Troublesome rivals who have overstepped their bounds can have these laws suddenly enforced against them, leading to fines or even jail time. Even in high-income democracies with well-established legal systems and freedom of the press in place, a larger state is generally associated with increased political corruption. The term crony capitalism was initially applied to states involved in the 1997 Asian financial crisis such as Indonesia, South Korea and Thailand. In these cases, the term was used to point out how family members of the ruling leaders became extremely wealthy with no non-political justification. Asian economies such as Hong Kong and Malaysia still score very poorly in rankings measuring this. The term has also been applied to the system of oligarchs in Russia. Other states to which the term has been applied include India, in particular the system after the 1990s liberalization, whereby land and other resources were given at throwaway prices in the name of public-private partnerships, the more recent Coalgate scam, and the cheap allocation of land and resources to Adani SEZ under the Congress and BJP governments. Similar references to crony capitalism have been made to other countries such as Argentina and Greece. Wu Jinglian, one of China's leading economists and a longtime advocate of its transition to free markets, says that China faces two starkly contrasting futures, namely a market economy under the rule of law or crony capitalism. A dozen years later, the prominent political scientist Pei Minxin concluded that the latter course had become deeply embedded in China. The anti-corruption campaign under Xi Jinping (2012–) has seen more than 100,000 high- and low-ranking Chinese officials indicted and jailed. Many prosperous nations have also had varying amounts of cronyism throughout their history, including the United Kingdom, especially in the 1600s and 1700s, the United States and Japan. Crony capitalism index The Economist benchmarks countries based on a crony-capitalism index calculated via how much economic activity occurs in industries prone to cronyism. Its 2014 Crony Capitalism Index ranking listed Hong Kong, Russia and Malaysia in the top three spots. In finance Crony capitalism in finance was found in the Second Bank of the United States. It was a private company, but its largest stockholder was the federal government, which owned 20%. It was an early bank regulator and grew to be one of the most powerful organizations in the country, due largely to its being the depository of the government's revenue. The Gramm–Leach–Bliley Act in 1999 completely removed Glass–Steagall's separation between commercial banks and investment banks. 
After this repeal, commercial banks, investment banks and insurance companies combined their lobbying efforts. Critics claim this was instrumental in the passage of the Bankruptcy Abuse Prevention and Consumer Protection Act of 2005. In sections of an economy More direct government involvement in a specific sector can also lead to specific areas of crony capitalism, even if the economy as a whole may be competitive. This is most common in natural resource sectors through the granting of mining or drilling concessions, but it is also possible through a process known as regulatory capture where the government agencies in charge of regulating an industry come to be controlled by that industry. Governments will often establish, in good faith, agencies to regulate an industry. However, the members of an industry have a very strong interest in the actions of that regulatory body, while the rest of the citizenry are only lightly affected. As a result, it is not uncommon for current industry players to gain control of the watchdog and to use it against competitors. This typically takes the form of making it very expensive for a new entrant to enter the market. A landmark 1824 United States Supreme Court ruling overturned a New York State-granted monopoly ("a veritable model of state munificence" facilitated by Robert R. Livingston, one of the Founding Fathers) for the then-revolutionary technology of steamboats. Leveraging the Supreme Court's establishment of Congressional supremacy over commerce, the Interstate Commerce Commission was established in 1887 with the intent of regulating railroad robber barons. President Grover Cleveland appointed Thomas M. Cooley, a railroad ally, as its first chairman, and a permit system was used to deny access to new entrants and legalize price fixing. The defense industry in the United States is often described as an example of crony capitalism in an industry. Connections with the Pentagon and lobbyists in Washington are described by critics as more important than actual competition due to the political and secretive nature of defense contracts. In the Airbus-Boeing WTO dispute, Airbus (which receives outright subsidies from European governments) has stated that Boeing receives similar subsidies, which are hidden as inefficient defense contracts. Other American defense companies were put under scrutiny for no-bid Iraq War and Hurricane Katrina related contracts, purportedly due to having cronies in the Bush administration. Gerald P. O'Driscoll, former vice president at the Federal Reserve Bank of Dallas, stated that Fannie Mae and Freddie Mac became examples of crony capitalism as government backing let Fannie and Freddie dominate mortgage underwriting, saying: "The politicians created the mortgage giants, which then returned some of the profits to the pols—sometimes directly, as campaign funds; sometimes as "contributions" to favored constituents". In developing economies In its worst form, crony capitalism can devolve into simple corruption where any pretense of a free market is dispensed with, bribes to government officials are considered de rigueur and tax evasion is common. This is seen in many parts of Africa and is sometimes called plutocracy (rule by wealth) or kleptocracy (rule by theft). Kenyan economist David Ndii has repeatedly brought to light how this system has manifested over time, occasioned by the reign of Uhuru Kenyatta as president. 
Corrupt governments may favor one set of business owners who have close ties to the government over others. This may also be done with religious or ethnic favoritism. For instance, Alawites in Syria have a disproportionate share of power in the government and business there (President Assad himself is an Alawite). This can be explained by considering personal relationships as a social network. As government and business leaders try to accomplish various things, they naturally turn to other powerful people for support in their endeavors. These people form hubs in the network. In a developing country those hubs may be very few, thus concentrating economic and political power in a small interlocking group. Normally, this will be untenable to maintain in business, as new entrants will affect the market. However, if business and government are entwined, then the government can maintain the small-hub network. Raymond Vernon, a specialist in economics and international affairs, wrote that the Industrial Revolution began in Great Britain because the British were the first to successfully limit the power of veto groups (typically cronies of those with power in government) to block innovations, writing: "Unlike most other national environments, the British environment of the early 19th century contained relatively few threats to those who improved and applied existing inventions, whether from business competitors, labor, or the government itself. In other European countries, by contrast, the merchant guilds ... were a pervasive source of veto for many centuries. This power was typically bestowed upon them by government". For example, a Russian inventor produced a steam engine in 1766 and disappeared without a trace. Vernon further stated that "a steam powered horseless carriage produced in France in 1769 was officially suppressed". James Watt began experimenting with steam in 1763, got a patent in 1769, and began commercial production in 1775. Raghuram Rajan, former governor of the Reserve Bank of India, has said: "One of the greatest dangers to the growth of developing countries is the middle income trap, where crony capitalism creates oligarchies that slow down growth. If the debate during the elections is any pointer, this is a very real concern of the public in India today". Tavleen Singh, columnist for The Indian Express, has disagreed. According to Singh, India's corporate success is not a product of crony capitalism, but has come because India is no longer under the influence of crony socialism. Political viewpoints While the problem is generally accepted across the political spectrum, ideology shades the view of the problem's causes and therefore its solutions. Political views mostly fall into two camps which might be called the socialist and capitalist critique. The socialist position is that crony capitalism is the inevitable result of any strictly capitalist system and thus broadly democratic government must regulate economic, or wealthy, interests to restrict monopoly. The capitalist position is that natural monopolies are rare, therefore governmental regulations generally abet established wealthy interests by restricting competition. Socialist critique Critics of crony capitalism including socialists and anti-capitalists often assert that crony capitalism is the inevitable result of any strictly capitalist system. 
Jane Jacobs described it as a natural consequence of collusion between those managing power and trade, while Noam Chomsky has argued that the word crony is superfluous when describing capitalism. Since businesses make money and money leads to political power, businesses will inevitably use their power to influence governments. Much of the impetus behind campaign finance reform in the United States and in other countries is an attempt to prevent economic power being used to take political power. Ravi Batra argues that "all official economic measures adopted since 1981 ... have devastated the middle class" and that the Occupy Wall Street movement should push for their repeal and thus end the influence of the super wealthy in the political process, which he considers a manifestation of crony capitalism. Socialist economists, such as Robin Hahnel, have criticized the term as an ideologically motivated attempt to cast what is in their view the fundamental problems of capitalism as avoidable irregularities. Socialist economists dismiss the term as an apologia for the failures of neoliberal policy and, more fundamentally, for what they perceive as the weaknesses of market allocation. Capitalist critique Supporters of capitalism also generally oppose crony capitalism. Further, supporters such as classical liberals, neoliberals and right-libertarians consider it an aberration brought on by governmental favors incompatible with a free market. Such proponents of capitalism tend to regard the term as an oxymoron, arguing that crony capitalism is not capitalism at all. In the capitalist view, cronyism is the result of an excess of interference in the market, which inevitably results in a toxic combination of corporations and government officials running sectors of the economy. For instance, the Financial Times observed that, in Vietnam during the 2010s, the primary beneficiaries of cronyism were Communist party officials, noting also the "common practice of employing only party members and their family members and associates to government jobs or to jobs in state-owned enterprises." Conservative commentator Ben Shapiro prefers to equate this problem with terms such as corporatocracy or corporatism, considered "a modern form of mercantilism", to emphasize that the only way to run a profitable business in such a system is to have help from corrupt government officials. Likewise, Hernando de Soto said that mercantilism "is also known as 'crony' or 'noninclusive' capitalism". Even if the initial regulation was well-intentioned (to curb actual abuses) and even if the initial lobbying by corporations was well-intentioned (to reduce illogical regulations), the mixture of business and government stifles competition, a collusive result called regulatory capture. Burton W. Folsom Jr. distinguishes those that engage in crony capitalism—designated by him political entrepreneurs—from those who compete in the marketplace without special aid from government, whom he calls market entrepreneurs. Market entrepreneurs such as James J. Hill, Cornelius Vanderbilt and John D. Rockefeller succeeded by producing a quality product at a competitive price. Political entrepreneurs such as Edward Collins in steamships and the leaders of the Union Pacific Railroad, by contrast, used the power of government to succeed. They tried to gain subsidies or in some way use government to stop competitors. 
See also Corporatocracy Cronies of Ferdinand Marcos Economic History of the Philippines under Ferdinand Marcos Government failure Government-owned corporation Inverted totalitarianism Iron triangle (US politics) Licence Raj (concept in Indian political-economics) Mercantilism Patrimonialism Political family Political machine Regulatory capture Rent-seeking Stamocap State capture Supercapitalism Zhao family Notes References Further reading Khatri, Naresh (2013). Anatomy of Indian Brand of Crony Capitalism. https://ssrn.com/abstract=2335201. http://mpra.ub.uni-muenchen.de/19626/1/WP0802.pdf Bribery Capitalism Political corruption Political terminology Public choice theory
https://en.wikipedia.org/wiki/Common%20descent
Common descent
Common descent is a concept in evolutionary biology applicable when one species is the ancestor of two or more species later in time. According to modern evolutionary biology, all living beings could be descendants of a unique ancestor commonly referred to as the last universal common ancestor (LUCA) of all life on Earth. Common descent is an effect of speciation, in which multiple species derive from a single ancestral population. The more recent the ancestral population two species have in common, the more closely they are related. The most recent common ancestor of all currently living organisms is the last universal ancestor, which lived about 3.9 billion years ago. The two earliest pieces of evidence for life on Earth are graphite found to be biogenic in 3.7 billion-year-old metasedimentary rocks discovered in western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. All currently living organisms on Earth share a common genetic heritage, though the suggestion of substantial horizontal gene transfer during early evolution has led to questions about the monophyly (single ancestry) of life. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian. Universal common descent through an evolutionary process was first proposed by the British naturalist Charles Darwin in the concluding sentence of his 1859 book On the Origin of Species. History The idea that all living things (including things considered non-living by science) are related is a recurring theme in many indigenous worldviews across the world. Later on, in the 1740s, the French mathematician Pierre Louis Maupertuis arrived at the idea that all organisms had a common ancestor, and had diverged through random variation and natural selection. In Essai de cosmologie (1750), Maupertuis noted: May we not say that, in the fortuitous combination of the productions of Nature, since only those creatures could survive in whose organizations a certain degree of adaptation was present, there is nothing extraordinary in the fact that such adaptation is actually found in all these species which now exist? Chance, one might say, turned out a vast number of individuals; a small proportion of these were organized in such a manner that the animals' organs could satisfy their needs. A much greater number showed neither adaptation nor order; these last have all perished.... Thus the species which we see today are but a small part of all those that a blind destiny has produced. In 1790, the philosopher Immanuel Kant wrote in Kritik der Urteilskraft (Critique of Judgment) that the similarity of animal forms implies a common original type, and thus a common parent. In 1794, Charles Darwin's grandfather, Erasmus Darwin, asked: [W]ould it be too bold to imagine, that in the great length of time, since the earth began to exist, perhaps millions of ages before the commencement of the history of mankind, would it be too bold to imagine, that all warm-blooded animals have arisen from one living filament, which endued with animality, with the power of acquiring new parts attended with new propensities, directed by irritations, sensations, volitions, and associations; and thus possessing the faculty of continuing to improve by its own inherent activity, and of delivering down those improvements by generation to its posterity, world without end? 
Charles Darwin's views about common descent, as expressed in On the Origin of Species, were that it was probable that there was only one progenitor for all life forms: Therefore I should infer from analogy that probably all the organic beings which have ever lived on this earth have descended from some one primordial form, into which life was first breathed. But he precedes that remark by, "Analogy would lead me one step further, namely, to the belief that all animals and plants have descended from some one prototype. But analogy may be a deceitful guide." And in the subsequent edition, he asserts rather, "We do not know all the possible transitional gradations between the simplest and the most perfect organs; it cannot be pretended that we know all the varied means of Distribution during the long lapse of years, or that we know how imperfect the Geological Record is. Grave as these several difficulties are, in my judgment they do not overthrow the theory of descent from a few created forms with subsequent modification". Common descent was widely accepted amongst the scientific community after Darwin's publication. In 1907, Vernon Kellogg commented that "practically no naturalists of position and recognized attainment doubt the theory of descent." In 2008, biologist T. Ryan Gregory noted that: No reliable observation has ever been found to contradict the general notion of common descent. It should come as no surprise, then, that the scientific community at large has accepted evolutionary descent as a historical reality since Darwin’s time and considers it among the most reliably established and fundamentally important facts in all of science. Evidence Common biochemistry All known forms of life are based on the same fundamental biochemical organization: genetic information encoded in DNA, transcribed into RNA, through the effect of protein- and RNA-enzymes, then translated into proteins by (highly similar) ribosomes, with ATP, NADPH and others as energy sources. Analysis of small sequence differences in widely shared substances such as cytochrome c further supports universal common descent. Some 23 proteins are found in all organisms, serving as enzymes carrying out core functions like DNA replication. The fact that only one such set of enzymes exists is convincing evidence of a single ancestry. 6,331 genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian. Common genetic code The genetic code (the "translation table" according to which DNA information is translated into amino acids, and hence proteins) is nearly identical for all known lifeforms, from bacteria and archaea to animals and plants. The universality of this code is generally regarded by biologists as definitive evidence in favor of universal common descent. The way that codons (DNA triplets) are mapped to amino acids seems to be strongly optimised. Richard Egel argues that in particular the hydrophobic (non-polar) side-chains are well organised, suggesting that these enabled the earliest organisms to create peptides with water-repelling regions able to support the essential electron exchange (redox) reactions for energy transfer. Selectively neutral similarities Similarities which have no adaptive relevance cannot be explained by convergent evolution, and therefore they provide compelling support for universal common descent. Such evidence has come from two areas: amino acid sequences and DNA sequences. 
Proteins with the same three-dimensional structure need not have identical amino acid sequences; any functionally irrelevant similarity between the sequences is evidence for common descent. In certain cases, there are several codons (DNA triplets) that code redundantly for the same amino acid. Since many species use the same codon at the same place to specify an amino acid that can be represented by more than one codon, that is evidence for their sharing a recent common ancestor. Had the amino acid sequences come from different ancestors, they would have been coded for by any of the redundant codons, and since the correct amino acids would already have been in place, natural selection would not have driven any change in the codons, however much time was available. Genetic drift could change the codons, but it would be extremely unlikely to make all the redundant codons in a whole sequence match exactly across multiple lineages. Similarly, shared nucleotide sequences, especially where these are apparently neutral such as the positioning of introns and pseudogenes, provide strong evidence of common ancestry. Other similarities Biologists often point to the universality of many aspects of cellular life as supportive evidence to the more compelling evidence listed above. These similarities include the energy carrier adenosine triphosphate (ATP), and the fact that all amino acids found in proteins are left-handed. It is, however, possible that these similarities arose from the laws of physics and chemistry rather than through universal common descent, and therefore represent convergent evolution. In contrast, there is evidence for homology of the central subunits of transmembrane ATPases throughout all living organisms, especially how the rotating elements are bound to the membrane. This supports the assumption of a LUCA as a cellular organism, although primordial membranes may have been semipermeable and evolved later to the membranes of modern bacteria, and on a second path to those of modern archaea also. Phylogenetic trees Another important piece of evidence is from detailed phylogenetic trees (i.e., "genealogic trees" of species) mapping out the proposed divisions and common ancestors of all living species. In 2010, Douglas L. Theobald published a statistical analysis of available genetic data, mapping them to phylogenetic trees, that gave "strong quantitative support, by a formal test, for the unity of life." Traditionally, these trees have been built using morphological methods, such as appearance, embryology, etc. Recently, it has been possible to construct these trees using molecular data, based on similarities and differences between genetic and protein sequences. All these methods produce essentially similar results, even though most genetic variation has no influence over external morphology. That phylogenetic trees based on different types of information agree with each other is strong evidence of a real underlying common descent. Objections Gene exchange clouds phylogenetic analysis Theobald noted that substantial horizontal gene transfer could have occurred during early evolution. Bacteria today remain capable of gene exchange between distantly-related lineages. This weakens the basic assumption of phylogenetic analysis, that similarity of genomes implies common ancestry, because sufficient gene exchange would allow lineages to share much of their genome whether or not they shared an ancestor (monophyly). This has led to questions about the single ancestry of life. 
However, biologists consider it very unlikely that completely unrelated proto-organisms could have exchanged genes, as their different coding mechanisms would have resulted only in garbled sequences rather than functioning systems. Later, however, many organisms all derived from a single ancestor could readily have shared genes that all worked in the same way, and it appears that they have. Convergent evolution If early organisms had been driven by the same environmental conditions to evolve similar biochemistry convergently, they might independently have acquired similar genetic sequences. Theobald's "formal test" was accordingly criticised by Takahiro Yonezawa and colleagues for not including consideration of convergence. They argued that Theobald's test was insufficient to distinguish between the competing hypotheses. Theobald has defended his method against this claim, arguing that his tests distinguish between phylogenetic structure and mere sequence similarity. Therefore, Theobald argued, his results show that "real universally conserved proteins are homologous." RNA world The possibility is mentioned above that all living organisms may be descended from an original single-celled organism with a DNA genome, and that this implies a single origin for life. Although such a universal common ancestor may have existed, such a complex entity is unlikely to have arisen spontaneously from non-life and thus a cell with a DNA genome cannot reasonably be regarded as the “origin” of life. To understand the “origin” of life, it has been proposed that DNA-based cellular life descended from relatively simple pre-cellular self-replicating RNA molecules able to undergo natural selection (see RNA world). During the course of evolution, this RNA world was replaced by the evolutionary emergence of the DNA world. A world of independently self-replicating RNA genomes apparently no longer exists (RNA viruses are dependent on host cells with DNA genomes). Because the RNA world is apparently gone, it is not clear how scientific evidence could be brought to bear on the question of whether there was a single “origin” of life event from which all life descended. See also The Ancestor's Tale Urmetazoan Bibliography The book is available from The Complete Work of Charles Darwin Online. Retrieved 2015-11-23. Notes References External links 29+ Evidences for Macroevolution: The Scientific Case for Common Descent from the TalkOrigins Archive. The Tree of Life Web Project Evolutionary biology Descent Last common ancestors
https://en.wikipedia.org/wiki/Character
Character
Character or Characters may refer to: Arts, entertainment, and media Literature Character (novel), a 1936 Dutch novel by Ferdinand Bordewijk Characters (Theophrastus), a classical Greek set of character sketches attributed to Theophrastus Music Characters (John Abercrombie album), 1977 Character (Dark Tranquillity album), 2005 Character (Julia Kent album), 2013 Character (Rachael Sage album), 2020 Characters (Stevie Wonder album), 1987 Types of entity Character (arts), an agent within a work of art, including literature, drama, cinema, opera, etc. Character sketch or character, a literary description of a character type Game character (disambiguation), various types of characters in a video game or role playing game Player character, as above but who is controlled or whose actions are directly chosen by a player Non-player character, as above but not player-controlled, frequently abbreviated as NPC Other uses in arts, entertainment, and media Character (film), a 1997 Dutch film based on Bordewijk's novel Charaktery, a monthly magazine in Poland Netflix Presents: The Characters, an improvised sketch comedy show on Netflix Sciences Character (biology), the abstraction of an observable physical or biochemical trait of an organism Mathematics Character (mathematics), a homomorphism from a group to a field Characterization (mathematics), the logical equivalency between objects of two different domains. Character theory, the mathematical theory of special kinds of characters associated to group representations Dirichlet character, a type of character in number theory Multiplicative character, a homomorphism from a group to the multiplicative subgroup of a field Morality and social science Character education, a US term for values education Character structure, a person's traits Moral character, an evaluation of a particular individual's durable moral qualities Symbols Character (symbol), a sign or symbol Character (computing), a unit of information roughly corresponding to a grapheme Chinese characters, a written language symbol (sinogram) used in Chinese, Japanese, and other languages Other uses Character (income tax), a type of income for tax purposes in the US Sacramental character, a Catholic teaching Neighbourhood character, the look and feel of a built environment See also Virtual character (disambiguation)
https://en.wikipedia.org/wiki/Character%20encoding
Character encoding
Character encoding is the process of assigning numbers to graphical characters, especially the written characters of human language, allowing them to be stored, transmitted, and transformed using digital computers. The numerical values that make up a character encoding are known as "code points" and collectively comprise a "code space", a "code page", or a "character map". Early character codes associated with the optical or electrical telegraph could only represent a subset of the characters used in written languages, sometimes restricted to upper case letters, numerals and some punctuation only. The low cost of digital representation of data in modern computer systems allows more elaborate character codes (such as Unicode) which represent most of the characters used in many written languages. Character encoding using internationally accepted standards permits worldwide interchange of text in electronic form. History The history of character codes illustrates the evolving need for machine-mediated character-based symbolic information over a distance, using once-novel electrical means. The earliest codes were based upon manual and hand-written encoding and cyphering systems, such as Bacon's cipher, Braille, International maritime signal flags, and the 4-digit encoding of Chinese characters for a Chinese telegraph code (Hans Schjellerup, 1869). With the adoption of electrical and electro-mechanical techniques these earliest codes were adapted to the new capabilities and limitations of the early machines. The earliest well-known electrically transmitted character code, Morse code, introduced in the 1840s, used a system of four "symbols" (short signal, long signal, short space, long space) to generate codes of variable length. Though some commercial use of Morse code was via machinery, it was often used as a manual code, generated by hand on a telegraph key and decipherable by ear, and persists in amateur radio and aeronautical use. Most codes are of fixed per-character length or variable-length sequences of fixed-length codes (e.g. Unicode). Common examples of character encoding systems include Morse code, the Baudot code, the American Standard Code for Information Interchange (ASCII) and Unicode. Unicode, a well-defined and extensible encoding system, has supplanted most earlier character encodings, but the path of code development to the present is fairly well known. The Baudot code, a five-bit encoding, was created by Émile Baudot in 1870, patented in 1874, modified by Donald Murray in 1901, and standardized by CCITT as International Telegraph Alphabet No. 2 (ITA2) in 1930. The name "baudot" has been erroneously applied to ITA2 and its many variants. ITA2 suffered from many shortcomings and was often "improved" by many equipment manufacturers, sometimes creating compatibility issues. In 1959 the U.S. military defined its Fieldata code, a six- or seven-bit code, introduced by the U.S. Army Signal Corps. While Fieldata addressed many of the then-modern issues (e.g. letter and digit codes arranged for machine collation), it fell short of its goals and was short-lived. In 1963 the first ASCII (American Standard Code for Information Interchange) code was released (X3.4-1963) by the ASCII committee (which contained at least one member of the Fieldata committee, W. F. Leubbert), and it addressed most of the shortcomings of Fieldata, using a simpler code. Many of the changes were subtle, such as collatable character sets within certain numeric ranges. 
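To make the fixed-length versus variable-length distinction above concrete, the following Python sketch encodes text with a hand-picked fragment of the Morse table. It is purely illustrative: only a small subset of the letter-to-code mapping is shown, and the spacing convention is simplified to a single space between letters.

    # A small, illustrative subset of Morse code: a variable-length code in
    # which frequent letters receive the shortest symbol sequences.
    MORSE = {
        "E": ".",      # the most frequent letters get the shortest codes
        "T": "-",
        "A": ".-",
        "N": "-.",
        "S": "...",
        "O": "---",
    }

    def encode_morse(text):
        # The space between letters stands in for the "short space" symbol.
        return " ".join(MORSE[ch] for ch in text.upper())

    print(encode_morse("eat"))  # prints ". .- -": three letters, three different lengths

A fixed-length code such as ASCII, by contrast, spends the same number of bits on every character, which simplifies machine handling at the cost of ignoring letter frequencies.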
ASCII63 was a success, widely adopted by industry, and with the follow-up issue of the 1967 ASCII code (which added lower-case letters and fixed some "control code" issues) ASCII67 was adopted fairly widely. ASCII67's American-centric nature was somewhat addressed in the European ECMA-6 standard. Herman Hollerith invented punch card data encoding in the late 19th century to analyze census data. Initially, each hole position represented a different data element, but later, numeric information was encoded by numbering the lower rows 0 to 9, with a punch in a column representing its row number. Later alphabetic data was encoded by allowing more than one punch per column. Electromechanical tabulating machines represented data internally by the timing of pulses relative to the motion of the cards through the machine. When IBM went to electronic processing, starting with the IBM 603 Electronic Multiplier, it used a variety of binary encoding schemes that were tied to the punch card code. IBM's Binary Coded Decimal (BCD) was a six-bit encoding scheme used by IBM as early as 1953 in its 702 and 704 computers, and in its later 7000 Series and 1400 series, as well as in associated peripherals. Since the punched card code then in use only allowed digits, upper-case English letters and a few special characters, six bits were sufficient. BCD extended existing simple four-bit numeric encoding to include alphabetic and special characters, mapping it easily to punch-card encoding which was already in widespread use. IBM's codes were used primarily with IBM equipment; other computer vendors of the era had their own character codes, often six-bit, but usually had the ability to read tapes produced on IBM equipment. BCD was the precursor of IBM's Extended Binary Coded Decimal Interchange Code (usually abbreviated as EBCDIC), an eight-bit encoding scheme developed in 1963 for the IBM System/360 that featured a larger character set, including lower case letters. The limitations of such sets soon became apparent, and a number of ad hoc methods were developed to extend them. The need to support more writing systems for different languages, including the CJK family of East Asian scripts, required support for a far larger number of characters and demanded a systematic approach to character encoding rather than the previous ad hoc approaches. In trying to develop universally interchangeable character encodings, researchers in the 1980s faced the dilemma that, on the one hand, it seemed necessary to add more bits to accommodate additional characters, but on the other hand, for the users of the relatively small character set of the Latin alphabet (who still constituted the majority of computer users), those additional bits were a colossal waste of then-scarce and expensive computing resources (as they would always be zeroed out for such users). In 1985, the average personal computer user's hard disk drive could store only about 10 megabytes, and it cost approximately US$250 on the wholesale market (and much higher if purchased separately at retail), so it was very important at the time to make every bit count. The compromise solution that was eventually found and developed into Unicode was to break the assumption (dating back to telegraph codes) that each character should always directly correspond to a particular sequence of bits. Instead, characters would first be mapped to a universal intermediate representation in the form of abstract numbers called code points. 
Code points would then be represented in a variety of ways and with various default numbers of bits per character (code units) depending on context. To encode code points higher than the length of the code unit, such as above 256 for eight-bit units, the solution was to implement variable-length encodings where an escape sequence would signal that subsequent bits should be parsed as a higher code point. Terminology A character is a minimal unit of text that has semantic value. A character set is a collection of characters that might be used by multiple languages. Example: The Latin character set is used by English and most European languages, though the Greek character set is used only by the Greek language. A coded character set is a character set in which each character corresponds to a unique number. A code point of a coded character set is any allowed value in the character set or code space. A code space is a range of integers whose values are code points. A code unit is the "word size" of the character encoding scheme, such as 7-bit, 8-bit, 16-bit. In some schemes, some characters are encoded using multiple code units, resulting in a variable-length encoding. A code unit is referred to as a code value in some documents. Character repertoire (the abstract set of characters) The character repertoire is an abstract set of more than one million characters found in a wide variety of scripts including Latin, Cyrillic, Chinese, Korean, Japanese, Hebrew, and Aramaic. Other symbols such as musical notation are also included in the character repertoire. Both the Unicode and GB 18030 standards have a character repertoire. As new characters are added to one standard, the other standard also adds those characters, to maintain parity. The code unit size is equivalent to the bit measurement for the particular encoding: A code unit in US-ASCII consists of 7 bits; A code unit in UTF-8, EBCDIC and GB 18030 consists of 8 bits; A code unit in UTF-16 consists of 16 bits; A code unit in UTF-32 consists of 32 bits. Example of a code unit Consider a string of the letters "ab̲c𐐀", that is, a string containing a Unicode combining character (U+0332 COMBINING LOW LINE) as well as a supplementary character (U+10400 DESERET CAPITAL LETTER LONG I). This string has several representations which are logically equivalent, yet each is suited to a different set of circumstances or range of requirements: Four composed characters: a, b̲, c, 𐐀 Five graphemes: a, b, _, c, 𐐀 Five Unicode code points: U+0061, U+0062, U+0332, U+0063, U+10400 Five UTF-32 code units (32-bit integer values): 0x00000061, 0x00000062, 0x00000332, 0x00000063, 0x00010400 Six UTF-16 code units (16-bit integers): 0x0061, 0x0062, 0x0332, 0x0063, 0xD801, 0xDC00 Nine UTF-8 code units (8-bit values, or bytes): 0x61, 0x62, 0xCC, 0xB2, 0x63, 0xF0, 0x90, 0x90, 0x80 Note in particular the last character, which is represented with either one 32-bit value, two 16-bit values, or four 8-bit values. Although each of those forms uses the same total number of bits (32) to represent the glyph, the actual numeric byte values and their arrangement appear entirely unrelated. Code point The convention to refer to a character in Unicode is to start with 'U+' followed by the codepoint value in hexadecimal. The range of valid code points for the Unicode standard is U+0000 to U+10FFFF, inclusive, divided into 17 planes, identified by the numbers 0 to 16. Characters in the range U+0000 to U+FFFF are in plane 0, called the Basic Multilingual Plane (BMP). This plane contains the most commonly used characters. Characters in the range U+10000 to U+10FFFF in the other planes are called supplementary characters. 
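The representations enumerated in the "ab̲c𐐀" example above can be checked directly. The following Python sketch rebuilds the string from its five code points and prints the code-unit counts for the UTF-32, UTF-16 and UTF-8 encoding forms; the byte values in the comments are what CPython (3.8 or later, for the hex separator argument) produces, and they match the lists above.

    # Rebuild "ab̲c𐐀" from its five code points.
    s = "a" + "b" + "\u0332" + "c" + "\U00010400"

    print([f"U+{ord(c):04X}" for c in s])
    # ['U+0061', 'U+0062', 'U+0332', 'U+0063', 'U+10400']

    print(len(s.encode("utf-32-be")) // 4)  # 5 UTF-32 code units
    print(len(s.encode("utf-16-be")) // 2)  # 6 UTF-16 code units (surrogate pair for U+10400)
    print(len(s.encode("utf-8")))           # 9 UTF-8 code units (bytes)
    print(s.encode("utf-8").hex(" "))       # 61 62 cc b2 63 f0 90 90 80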
A code point is represented by a sequence of code units. The mapping is defined by the encoding. Thus, the number of code units required to represent a code point depends on the encoding: UTF-8: code points map to a sequence of one, two, three or four code units. UTF-16: code units are twice as long as 8-bit code units. Therefore, any code point with a scalar value less than U+10000 is encoded with a single code unit. Code points with a value U+10000 or higher require two code units each. These pairs of code units have a unique term in UTF-16: "Unicode surrogate pairs" (a minimal sketch of this mapping appears below). UTF-32: the 32-bit code unit is large enough that every code point is represented as a single code unit. GB 18030: multiple code units per code point are common, because of the small code units. Code points are mapped to one, two, or four code units. Unicode encoding model Unicode and its parallel standard, the ISO/IEC 10646 Universal Character Set, together constitute a modern, unified character encoding. Rather than mapping characters directly to octets (bytes), they separately define what characters are available, corresponding natural numbers (code points), how those numbers are encoded as a series of fixed-size natural numbers (code units), and finally how those units are encoded as a stream of octets. The purpose of this decomposition is to establish a universal set of characters that can be encoded in a variety of ways. To describe this model correctly requires more precise terms than "character set" and "character encoding." The terms used in the modern model follow: A character repertoire is the full set of abstract characters that a system supports. The repertoire may be closed, i.e. no additions are allowed without creating a new standard (as is the case with ASCII and most of the ISO-8859 series), or it may be open, allowing additions (as is the case with Unicode and to a limited extent the Windows code pages). The characters in a given repertoire reflect decisions that have been made about how to divide writing systems into basic information units. The basic variants of the Latin, Greek and Cyrillic alphabets can be broken down into letters, digits, punctuation, and a few special characters such as the space, which can all be arranged in simple linear sequences that are displayed in the same order they are read. But even with these alphabets, diacritics pose a complication: they can be regarded either as part of a single character containing a letter and diacritic (known as a precomposed character), or as separate characters. The former allows a far simpler text handling system but the latter allows any letter/diacritic combination to be used in text. Ligatures pose similar problems. Other writing systems, such as Arabic and Hebrew, are represented with more complex character repertoires due to the need to accommodate things like bidirectional text and glyphs that are joined in different ways for different situations. A coded character set (CCS) is a function that maps characters to code points (each code point represents one character). For example, in a given repertoire, the capital letter "A" in the Latin alphabet might be represented by the code point 65, the character "B" to 66, and so on. Multiple coded character sets may share the same repertoire; for example ISO/IEC 8859-1 and IBM code pages 037 and 500 all cover the same repertoire but map them to different code points. 
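The surrogate-pair mapping mentioned under UTF-16 above can be written out in a few lines. This is a sketch of the standard algorithm rather than of any particular library's API: the 20 bits of (code point − 0x10000) are split between a high and a low surrogate.

    def utf16_code_units(cp):
        # BMP code points (below U+10000) occupy a single 16-bit code unit.
        if cp < 0x10000:
            return [cp]
        v = cp - 0x10000                 # 20 bits remain after the offset
        return [0xD800 | (v >> 10),      # high (lead) surrogate
                0xDC00 | (v & 0x3FF)]    # low (trail) surrogate

    print([hex(u) for u in utf16_code_units(0x10400)])
    # ['0xd801', '0xdc00'] -- the same two code units listed in the example above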
A character encoding form (CEF) is the mapping of code points to code units to facilitate storage in a system that represents numbers as bit sequences of fixed length (i.e. practically any computer system). For example, a system that stores numeric information in 16-bit units can only directly represent code points 0 to 65,535 in each unit, but larger code points (say, 65,536 up to the Unicode maximum of 1,114,111) could be represented by using multiple 16-bit units. This correspondence is defined by a CEF. Next, a character encoding scheme (CES) is the mapping of code units to a sequence of octets to facilitate storage on an octet-based file system or transmission over an octet-based network. Simple character encoding schemes include UTF-8, UTF-16BE, UTF-32BE, UTF-16LE and UTF-32LE; compound character encoding schemes, such as UTF-16, UTF-32 and ISO/IEC 2022, switch between several simple schemes by using a byte order mark or escape sequences; compressing schemes try to minimize the number of bytes used per code unit (such as SCSU, BOCU, and Punycode). Although UTF-32BE is a simpler CES, most systems working with Unicode use either UTF-8, which is backward compatible with fixed-length ASCII and maps Unicode code points to variable-length sequences of octets, or UTF-16BE, which is backward compatible with fixed-length UCS-2BE and maps Unicode code points to variable-length sequences of 16-bit words. See comparison of Unicode encodings for a detailed discussion. Finally, there may be a higher-level protocol which supplies additional information to select the particular variant of a Unicode character, particularly where there are regional variants that have been 'unified' in Unicode as the same character. An example is the XML attribute xml:lang. The Unicode model uses the term character map for historical systems which directly assign a sequence of characters to a sequence of bytes, covering all of the CCS, CEF and CES layers. Character sets, character maps and code pages Historically, the terms "character encoding", "character map", "character set" and "code page" were synonymous in computer science, as the same standard would specify a repertoire of characters and how they were to be encoded into a stream of code units – usually with a single character per code unit. But now the terms have related but distinct meanings, due to efforts by standards bodies to use precise terminology when writing about and unifying many different encoding systems. Regardless, the terms are still used interchangeably, with character set being nearly ubiquitous. A "code page" usually means a byte-oriented encoding, but with regard to some suite of encodings (covering different scripts) in which many characters share the same codes in most or all of those code pages. Well-known code page suites are "Windows" (based on Windows-1252) and "IBM"/"DOS" (based on code page 437); see Windows code page for details. Most, but not all, encodings referred to as code pages are single-byte encodings (but see octet on byte size). IBM's Character Data Representation Architecture (CDRA) designates entities with coded character set identifiers (CCSIDs), each of which is variously called a "charset", "character set", "code page", or "CHARMAP". The term "code page" does not occur in Unix or Linux, where "charmap" is preferred, usually in the larger context of locales. In contrast to a "coded character set", a "character encoding" is a map from abstract characters to code words.
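Returning to the encoding model itself, the CEF and CES layers can be sketched for the supplementary code point U+10400, assuming UTF-16 as the encoding form; the helper name utf16_code_units is illustrative, and the struct packing stands in for two of the simple byte-serialization schemes named above:

import struct

def utf16_code_units(cp: int) -> list:
    # CEF: map one code point to one or two 16-bit code units.
    if cp < 0x10000:
        return [cp]
    cp -= 0x10000                                         # 20 bits remain
    return [0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF)]   # surrogate pair

units = utf16_code_units(0x10400)
print([hex(u) for u in units])              # ['0xd801', '0xdc00']
print(struct.pack(">2H", *units).hex(" "))  # CES, UTF-16BE: d8 01 dc 00
print(struct.pack("<2H", *units).hex(" "))  # CES, UTF-16LE: 01 d8 00 dc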
A "character set" in HTTP (and MIME) parlance is the same as a character encoding (but not the same as CCS). "Legacy encoding" is a term sometimes used to characterize old character encodings, but with an ambiguity of sense. Most of its use is in the context of Unicodification, where it refers to encodings that fail to cover all Unicode code points, or, more generally, using a somewhat different character repertoire: several code points representing one Unicode character, or versa (see e.g. code page 437). Some sources refer to an encoding as legacy only because it preceded Unicode. All Windows code pages are usually referred to as legacy, both because they antedate Unicode and because they are unable to represent all 221 possible Unicode code points. Character encoding translation As a result of having many character encoding methods in use (and the need for backward compatibility with archived data), many computer programs have been developed to translate data between encoding schemes as a form of data transcoding. Some of these are cited below. Cross-platform: Web browsers – most modern web browsers feature automatic character encoding detection. On Firefox 3, for example, see the View/Character Encoding submenu. iconv – a program and standardized API to convert encodings luit – a program that converts encoding of input and output to programs running interactively convert_encoding.py – a Python-based utility to convert text files between arbitrary encodings and line endings decodeh.py – an algorithm and module to heuristically guess the encoding of a string International Components for Unicode – A set of C and Java libraries to perform charset conversion. uconv can be used from ICU4C. chardet – This is a translation of the Mozilla automatic-encoding-detection code into the Python computer language. The newer versions of the Unix file command attempt to do a basic detection of character encoding (also available on Cygwin). charset – C++ template library with simple interface to convert between C++/user-defined streams. charset defined many character-sets and allows you to use Unicode formats with support of endianness. Unix-like: cmv – a simple tool for transcoding filenames. convmv – converts a filename from one encoding to another. cstocs – converts file contents from one encoding to another for the Czech and Slovak languages. enca – analyzes encodings for given text files. recode – converts file contents from one encoding to another. utrac – converts file contents from one encoding to another. Windows: Encoding.Convert – .NET API MultiByteToWideChar/WideCharToMultiByte – to convert from ANSI to Unicode & Unicode to ANSI cscvt – a character set conversion tool enca – analyzes encodings for given text files. See also Percent-encoding Alt code Character encodings in HTML :Category:Character encoding – articles related to character encoding in general :Category:Character sets – articles detailing specific character encodings Hexadecimal representations Mojibake – character set mismap Mojikyō – a system ("glyph set") that includes over 100,000 Chinese character drawings, modern and ancient, popular and obscure Presentation layer TRON, part of the TRON project, is an encoding system that does not use Han Unification; instead, it uses "control codes" to switch between 16-bit "planes" of characters. 
Universal Character Set characters Charset sniffing – used in some applications when character encoding metadata is not available Common character encodings ISO 646 ASCII EBCDIC ISO 8859: ISO 8859-1 Western Europe ISO 8859-2 Western and Central Europe ISO 8859-3 Western Europe and South European (Turkish, Maltese plus Esperanto) ISO 8859-4 Western Europe and Baltic countries (Lithuania, Estonia, Latvia and Lapp) ISO 8859-5 Cyrillic alphabet ISO 8859-6 Arabic ISO 8859-7 Greek ISO 8859-8 Hebrew ISO 8859-9 Western Europe with amended Turkish character set ISO 8859-10 Western Europe with rationalised character set for Nordic languages, including complete Icelandic set ISO 8859-11 Thai ISO 8859-13 Baltic languages plus Polish ISO 8859-14 Celtic languages (Irish Gaelic, Scottish, Welsh) ISO 8859-15 Added the Euro sign and other rationalisations to ISO 8859-1 ISO 8859-16 Central, Eastern and Southern European languages (Albanian, Bosnian, Croatian, Hungarian, Polish, Romanian, Serbian and Slovenian, but also French, German, Italian and Irish Gaelic) CP437, CP720, CP737, CP850, CP852, CP855, CP857, CP858, CP860, CP861, CP862, CP863, CP865, CP866, CP869, CP872 MS-Windows character sets: Windows-1250 for Central European languages that use Latin script, (Polish, Czech, Slovak, Hungarian, Slovene, Serbian, Croatian, Bosnian, Romanian and Albanian) Windows-1251 for Cyrillic alphabets Windows-1252 for Western languages Windows-1253 for Greek Windows-1254 for Turkish Windows-1255 for Hebrew Windows-1256 for Arabic Windows-1257 for Baltic languages Windows-1258 for Vietnamese Mac OS Roman KOI8-R, KOI8-U, KOI7 MIK ISCII TSCII VISCII JIS X 0208 is a widely deployed standard for Japanese character encoding that has several encoding forms. Shift JIS (Microsoft Code page 932 is a dialect of Shift_JIS) EUC-JP ISO-2022-JP JIS X 0213 is an extended version of JIS X 0208. Shift_JIS-2004 EUC-JIS-2004 ISO-2022-JP-2004 Chinese Guobiao GB 2312 GBK (Microsoft Code page 936) GB 18030 Taiwan Big5 (a more famous variant is Microsoft Code page 950) Hong Kong HKSCS Korean KS X 1001 is a Korean double-byte character encoding standard EUC-KR ISO-2022-KR Unicode (and subsets thereof, such as the 16-bit 'Basic Multilingual Plane') UTF-8 UTF-16 UTF-32 ANSEL or ISO/IEC 6937 References Further reading External links Character sets registered by Internet Assigned Numbers Authority (IANA) Characters and encodings, by Jukka Korpela Unicode Technical Report #17: Character Encoding Model Decimal, Hexadecimal Character Codes in HTML Unicode – Encoding converter The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) by Joel Spolsky (Oct 10, 2003) Encoding
2,335
5,298
https://en.wikipedia.org/wiki/Control%20character
Control character
In computing and telecommunication, a control character or non-printing character (NPC) is a code point (a number) in a character set that does not represent a written symbol. They are used as in-band signaling to cause effects other than the addition of a symbol to the text. All other characters are mainly printing, printable, or graphic characters, except perhaps for the "space" character (see ASCII printable characters). History Procedural signs in Morse code are a form of control character. A form of control characters was introduced in the 1870 Baudot code: NUL and DEL. The 1901 Murray code added the carriage return (CR) and line feed (LF), and other versions of the Baudot code included other control characters. The bell character (BEL), which rang a bell to alert operators, was also an early teletype control character. Control characters have also been called "format effectors". In ASCII There were quite a few control characters defined (33 in ASCII, and the ECMA-48 standard adds 32 more). This was because early terminals had very primitive mechanical or electrical controls that made any kind of state-remembering API quite expensive to implement, thus a different code for each and every function looked like a requirement. It quickly became possible and inexpensive to interpret sequences of codes to perform a function, and device makers found a way to send hundreds of device instructions. Specifically, they used ASCII code 27 (escape), followed by a series of characters called a "control sequence" or "escape sequence". The mechanism was invented by Bob Bemer, the father of ASCII. For example, the sequence of code 27, followed by the printable characters "[2;10H", would cause a Digital Equipment Corporation VT100 terminal to move its cursor to the 10th cell of the 2nd line of the screen. Several standards exist for these sequences, notably ANSI X3.64. But the number of non-standard variations in use is large, especially among printers, where technology has advanced far faster than any standards body can possibly keep up with. All entries in the ASCII table below code 32 (technically the C0 control code set) are of this kind, including CR and LF used to separate lines of text. The code 127 (DEL) is also a control character. Extended ASCII sets defined by ISO 8859 added the codes 128 through 159 as control characters. This was done primarily so that if the high bit was stripped, it would not change a printing character to a C0 control code, but there have been some assignments here, in particular NEL. This second set is called the C1 set. These 65 control codes were carried over to Unicode. Unicode added more characters that could be considered controls, but it makes a distinction between these "Formatting characters" (such as the zero-width non-joiner) and the 65 control characters. The Extended Binary Coded Decimal Interchange Code (EBCDIC) character set contains 65 control codes, including all of the ASCII control codes plus additional codes which are mostly used to control IBM peripherals. The control characters in ASCII still in common use include: 0 (null, NUL, \0, ^@), originally intended to be an ignored character, but now used by many programming languages including C to mark the end of a string. 7 (bell, BEL, \a, ^G), which may cause the device to emit a warning such as a bell or beep sound or the screen flashing. 8 (backspace, BS, \b, ^H), may overprint the previous character. 9 (horizontal tab, HT, \t, ^I), moves the printing position right to the next tab stop.
10 (line feed, LF, \n, ^J), moves the print head down one line, or to the left edge and down. Used as the end-of-line marker in most UNIX systems and variants. 11 (vertical tab, VT, \v, ^K), vertical tabulation. 12 (form feed, FF, \f, ^L), to cause a printer to eject paper to the top of the next page, or a video terminal to clear the screen. 13 (carriage return, CR, \r, ^M), moves the printing position to the start of the line, allowing overprinting. Used as the end-of-line marker in Classic Mac OS, OS-9, FLEX (and variants). A CR+LF pair is used by CP/M-80 and its derivatives including DOS and Windows, and by Application Layer protocols such as FTP, SMTP, and HTTP. 26 (Control-Z, SUB, EOF, ^Z). Acts as an end-of-file marker for Windows text-mode file I/O. 27 (escape, ESC, \e (GCC only), ^[). Introduces an escape sequence. Control characters may be described as doing something when the user inputs them, such as code 3 (End-of-Text character, ETX, ^C) to interrupt the running process, or code 4 (End-of-Transmission character, EOT, ^D), used to end text input or to exit a Unix shell. These uses usually have little to do with their use when they are in text being output, and on modern systems usually do not involve the transmission of the code number at all (instead the program gets the fact that the user is holding down the Ctrl key and pushing the key marked with a 'C'). In Unicode In Unicode, "control characters" are U+0000–U+001F (C0 controls), U+007F (delete), and U+0080–U+009F (C1 controls). Their General Category is "Cc". Formatting codes are distinct, in General Category "Cf". The Cc control characters have no Name in Unicode, but are given labels such as "<control-001A>" instead. Display There are a number of techniques to display non-printing characters, which may be illustrated with the bell character in ASCII encoding: Code point: decimal 7, hexadecimal 0x07 An abbreviation, often three capital letters: BEL A special character condensing the abbreviation: Unicode U+2407 (␇), "symbol for bell" An ISO 2047 graphical representation: Unicode U+237E (⍾), "graphic for bell" Caret notation in ASCII, where code point 00xxxxx is represented as a caret followed by the capital letter at code point 10xxxxx: ^G An escape sequence, as in C/C++ character string codes: \a, \x07, \007, etc. How control characters map to keyboards ASCII-based keyboards have a key labelled "Control", "Ctrl", or (rarely) "Cntl" which is used much like a shift key, being pressed in combination with another letter or symbol key. In one implementation, the control key generates the code 64 places below the code for the (generally) uppercase letter it is pressed in combination with (i.e., subtract 64 from the decimal ASCII code of the (generally) uppercase letter). The other implementation is to take the ASCII code produced by the key and bitwise AND it with 31, forcing bits 6 and 7 to zero. For example, pressing "control" and the letter "g" or "G" (code 107 in octal or 71 in base 10, which is 01000111 in binary) produces the code 7 (Bell, 7 in base 10, or 00000111 in binary). The NULL character (code 0) is represented by Ctrl-@, "@" being the code immediately before "A" in the ASCII character set. For convenience, many terminals accept Ctrl-Space as an alias for Ctrl-@. In either case, this produces one of the 32 ASCII control codes between 0 and 31. This approach is not able to represent the DEL character because of its value (code 127), but Ctrl-? is often used for this character, as subtracting 64 from a '?'
gives −1, which if masked to 7 bits is 127. When the control key is held down, letter keys produce the same control characters regardless of the state of the shift or caps lock keys. In other words, it does not matter whether the key would have produced an upper-case or a lower-case letter. The interpretation of the control key with the space, graphics character, and digit keys (ASCII codes 32 to 63) varies between systems. Some will produce the same character code as if the control key were not held down. Other systems translate these keys into control characters when the control key is held down. The interpretation of the control key with non-ASCII ("foreign") keys also varies between systems. Control characters are often rendered into a printable form known as caret notation by printing a caret (^) and then the ASCII character that has a value of the control character plus 64. Control characters generated using letter keys are thus displayed with the upper-case form of the letter. For example, ^G represents code 7, which is generated by pressing the G key when the control key is held down. (Both conventions are sketched in the short code example below.) Keyboards also typically have a few single keys which produce control character codes. For example, the key labelled "Backspace" typically produces code 8, "Tab" code 9, "Enter" or "Return" code 13 (though some keyboards might produce code 10 for "Enter"). Many keyboards include keys that do not correspond to any ASCII printable or control character, for example cursor control arrows and word processing functions. The associated keypresses are communicated to computer programs by one of four methods: appropriating otherwise unused control characters; using some encoding other than ASCII; using multi-character control sequences; or using an additional mechanism outside of generating characters. "Dumb" computer terminals typically use control sequences. Keyboards attached to stand-alone personal computers made in the 1980s typically use one (or both) of the first two methods. Modern computer keyboards generate scancodes that identify the specific physical keys that are pressed; computer software then determines how to handle the keys that are pressed, including any of the four methods described above. The design purpose The control characters were designed to fall into a few groups: printing and display control, data structuring, transmission control, and miscellaneous. Printing and display control Printing control characters were first used to control the physical mechanism of printers, the earliest output device. An early example of this idea was the use of Figures (FIGS) and Letters (LTRS) in Baudot code to shift between two code pages. A later, but still early, example was the out-of-band ASA carriage control characters. Later, control characters were integrated into the stream of data to be printed. The carriage return character (CR), when sent to such a device, causes it to return the printing position to the edge of the paper at which writing begins (it may, or may not, also move the printing position to the next line). The line feed character (LF/NL) causes the device to put the printing position on the next line. It may (or may not), depending on the device and its configuration, also move the printing position to the start of the next line (which would be the leftmost position for left-to-right scripts, such as the alphabets used for Western languages, and the rightmost position for right-to-left scripts such as the Hebrew and Arabic alphabets).
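Both keyboard conventions described earlier, masking with 31 and the caret notation that adds 64, fit in a few lines (a minimal sketch; the helper names are hypothetical):

def control_code(key: str) -> int:
    # Bitwise AND with 31 clears the upper bits, as described above.
    return ord(key) & 31

def caret_notation(code: int) -> str:
    # DEL (127) is conventionally shown as ^? rather than code + 64.
    return "^?" if code == 127 else "^" + chr(code + 64)

print(control_code("G"), control_code("g"))  # 7 7  (BEL either way)
print(caret_notation(7))                     # ^G
print((ord("?") - 64) & 127)                 # 127: Ctrl-? yields DEL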
The vertical and horizontal tab characters (VT and HT/TAB) cause the output device to move the printing position to the next tab stop in the direction of reading. The form feed character (FF/NP) starts a new sheet of paper, and may or may not move to the start of the first line. The backspace character (BS) moves the printing position one character space backwards. On printers, including hard-copy terminals, this is most often used so the printer can overprint characters to make other, not normally available, characters. On video terminals and other electronic output devices, there are often software (or hardware) configuration choices that allow a destructive backspace (e.g., a BS, SP, BS sequence), which erases, or a non-destructive one, which does not. The shift in and shift out characters (SI and SO) selected alternate character sets, fonts, underlining, or other printing modes. Escape sequences were often used to do the same thing. With the advent of computer terminals that did not physically print on paper and so offered more flexibility regarding screen placement, erasure, and so forth, printing control codes were adapted. Form feeds, for example, usually cleared the screen, there being no new paper page to move to. More complex escape sequences were developed to take advantage of the flexibility of the new terminals, and indeed of newer printers. The concept of a control character had always been somewhat limiting, and was extremely so when used with new, much more flexible, hardware. Control sequences (sometimes implemented as escape sequences) could match the new flexibility and power and became the standard method. However, there was, and remains, a large variety of standard sequences to choose from. Data structuring The separators (File, Group, Record, and Unit: FS, GS, RS and US) were made to structure data, usually on a tape, in order to simulate punched cards. End of medium (EM) warns that the tape (or other recording medium) is ending. While many systems use CR/LF and TAB for structuring data, it is possible to encounter the separator control characters in data that needs to be structured. The separator control characters are not overloaded; there is no general use of them except to separate data into structured groupings. Their numeric values are contiguous with the space character, which can be considered a member of the group, as a word separator. For example, the RS separator is used by RFC 7464 (JSON Text Sequences) to encode a sequence of JSON elements. Each sequence item starts with a RS character and ends with a line feed. This makes it possible to serialize open-ended JSON sequences; it is one of the JSON streaming protocols.
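A minimal sketch of that RS-delimited framing (the helper names are illustrative; a production parser would read incrementally rather than splitting a whole string):

import json

RS, LF = "\x1e", "\n"  # record separator, line feed

def encode_seq(items) -> str:
    # Each item starts with RS and ends with a line feed.
    return "".join(RS + json.dumps(item) + LF for item in items)

def decode_seq(text: str) -> list:
    return [json.loads(chunk.rstrip(LF)) for chunk in text.split(RS) if chunk]

stream = encode_seq([{"id": 1}, {"id": 2}])
print(decode_seq(stream))  # [{'id': 1}, {'id': 2}]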
Transmission control The transmission control characters were intended to structure a data stream, and to manage re-transmission or graceful failure, as needed, in the face of transmission errors. The start of heading (SOH) character was to mark a non-data section of a data stream: the part of a stream containing addresses and other housekeeping data. The start of text character (STX) marked the end of the header, and the start of the textual part of a stream. The end of text character (ETX) marked the end of the data of a message. A widely used convention is to make the two characters preceding ETX a checksum or CRC for error-detection purposes. The end of transmission block character (ETB) was used to indicate the end of a block of data, where data was divided into such blocks for transmission purposes. The escape character (ESC) was intended to "quote" the next character; if it was another control character, it would be printed instead of performing the control function. It is almost never used for this purpose today. Various printable characters are used as visible "escape characters", depending on context. The substitute character (SUB) was intended to request a translation of the next character from a printable character to another value, usually by setting bit 5 to zero. This is handy because some media (such as sheets of paper produced by typewriters) can transmit only printable characters. However, on MS-DOS systems with files opened in text mode, "end of text" or "end of file" is marked by this Ctrl-Z character, instead of the Ctrl-C or Ctrl-D, which are common on other operating systems. The cancel character (CAN) signalled that the previous element should be discarded. The negative acknowledge character (NAK) is a flag used to indicate, usually, that reception was a problem and, often, that the current element should be sent again. The acknowledge character (ACK) is normally used as a flag to indicate that no problem was detected with the current element. When a transmission medium is half duplex (that is, it can transmit in only one direction at a time), there is usually a master station that can transmit at any time, and one or more slave stations that transmit when they have permission. The enquiry character (ENQ) is generally used by a master station to ask a slave station to send its next message. A slave station indicates that it has completed its transmission by sending the end of transmission character (EOT). The device control codes (DC1 to DC4) were originally generic, to be implemented as necessary by each device. However, a universal need in data transmission is to request the sender to stop transmitting when a receiver is temporarily unable to accept any more data. Digital Equipment Corporation invented a convention which used 19 (the device control 3 character (DC3), also known as control-S, or XOFF) to "S"top transmission, and 17 (the device control 1 character (DC1), a.k.a. control-Q, or XON) to start transmission. It has become so widely used that most don't realize it is not part of official ASCII. This technique, however implemented, avoids additional wires in the data cable devoted only to transmission management, which saves money. However, a sensible protocol for the use of such transmission flow control signals must be used to avoid potential deadlock conditions. The data link escape character (DLE) was intended to be a signal to the other end of a data link that the following character is a control character such as STX or ETX. For example, a packet may be structured in the following way: (DLE) <STX> <PAYLOAD> (DLE) <ETX>. Miscellaneous codes Code 7 (BEL) is intended to cause an audible signal in the receiving terminal. Many of the ASCII control characters were designed for devices of the time that are not often seen today. For example, code 22, "synchronous idle" (SYN), was originally sent by synchronous modems (which have to send data constantly) when there was no actual data to send. (Modern systems typically use a start bit to announce the beginning of a transmitted word; this is a feature of asynchronous communication. Synchronous communication links were more often seen with mainframes, where they were typically run over corporate leased lines to connect a mainframe to another mainframe or perhaps a minicomputer.) Code 0 (ASCII code name NUL) is a special case.
In paper tape, it is the code read when there are no holes. It is convenient to treat this as a fill character with no meaning otherwise. Since the position of a NUL character has no holes punched, it can be replaced with any other character at a later time, so it was typically used to reserve space, either for correcting errors or for inserting information that would be available at a later time or in another place. In computing it is often used for padding in fixed-length records and, more commonly, to mark the end of a string. Code 127 (DEL, a.k.a. "rubout") is likewise a special case. Its 7-bit code is all-bits-on in binary, which essentially erased a character cell on a paper tape when overpunched. Paper tape was a common storage medium when ASCII was developed, with a computing history dating back to WWII code breaking equipment at Biuro Szyfrów. Paper tape became obsolete in the 1970s, so this clever aspect of ASCII rarely saw any use after that. Some systems (such as the original Apples) converted it to a backspace. But because its code is in the range occupied by other printable characters, and because it had no official assigned glyph, many computer equipment vendors used it as an additional printable character (often an all-black "box" character useful for erasing text by overprinting with ink). Non-erasable programmable ROMs are typically implemented as arrays of fusible elements, each representing a bit, which can only be switched one way, usually from one to zero. In such PROMs, the DEL and NUL characters can be used in the same way that they were used on punched tape: one to reserve meaningless fill bytes that can be written later, and the other to convert written bytes to meaningless fill bytes. For PROMs that switch one to zero, the roles of NUL and DEL are reversed; also, DEL will only work with 7-bit characters, which are rarely used today; for 8-bit content, the character code 255, commonly defined as a nonbreaking space character, can be used instead of DEL. Many file systems do not allow control characters in filenames, as they may have reserved functions. See also Arrow keys#HJKL keys HJKL as arrow keys, used on ADM-3A terminal C0 and C1 control codes Escape sequence In-band signaling Whitespace character Notes and references External links ISO IR 1 C0 Set of ISO 646 (PDF)
2,336
5,313
https://en.wikipedia.org/wiki/Crouching%20Tiger%2C%20Hidden%20Dragon
Crouching Tiger, Hidden Dragon
Crouching Tiger, Hidden Dragon is a 2000 wuxia film directed by Ang Lee and written for the screen by Wang Hui-ling, James Schamus, and Tsai Kuo-jung. The film features a cast of actors of Chinese ethnicity, including Chow Yun-fat, Michelle Yeoh, Zhang Ziyi, and Chang Chen. It is based on the Chinese novel of the same name serialized between 1941 and 1942 by Wang Dulu, the fourth part of his Crane Iron pentalogy. A multinational venture, the film was made on a US$17 million budget, and was produced by Edko Films and Zoom Hunt Productions in collaboration with China Film Co-productions Corporation and Asian Union Film & Entertainment for Columbia Pictures Film Production Asia in association with Good Machine International. With dialogue in Standard Chinese, subtitled for various markets, Crouching Tiger, Hidden Dragon became a surprise international success, grossing $213.5 million worldwide. It grossed US$128 million in the United States, becoming the highest-grossing foreign-language film produced overseas in American history. The film was the first foreign-language film to break the $100 million mark in the United States. The film premiered at the Cannes Film Festival on 18 May 2000, and was theatrically released in the United States on 8 December. An overwhelming critical and commercial success, Crouching Tiger, Hidden Dragon won over 40 awards and was nominated for 10 Academy Awards in 2001, including Best Picture, and won Best Foreign Language Film, Best Art Direction, Best Original Score, and Best Cinematography, receiving the most nominations ever for a non-English-language film at the time, until 2018's Roma tied this record. The film also won four BAFTAs and two Golden Globe Awards, one for Best Foreign Film. Along with its numerous awards, Crouching Tiger is often cited as one of the finest wuxia films ever made. The film has been praised for its story, direction, cinematography, and martial arts sequences. Plot In 19th-century Qing dynasty China, Li Mu Bai is a renowned Wudang swordsman, and his friend Yu Shu Lien, a female warrior, heads a private security company. Shu Lien and Mu Bai have long had feelings for each other, but because Shu Lien had been engaged to Mu Bai's close friend Meng Sizhao before his death, they feel bound by loyalty to Meng Sizhao and have not revealed their feelings. Mu Bai, choosing to retire, asks Shu Lien to give his fabled 400-year-old sword "Green Destiny" to their benefactor Sir Te in Beijing. Long ago, Mu Bai's teacher was killed by Jade Fox, a woman who sought to learn Wudang skills. While at Sir Te's estate, Shu Lien meets Yu Jiaolong, or Jen, who is the daughter of the rich and powerful Governor Yu and is about to get married. One evening, a masked thief sneaks into Sir Te's estate and steals the Green Destiny. Sir Te's servant Master Bo and Shu Lien trace the theft to Governor Yu's compound, where Jade Fox had been posing as Jen's governess for many years. Soon after, Mu Bai arrives in Beijing and discusses the theft with Shu Lien. Master Bo makes the acquaintance of Inspector Tsai, a police investigator from the provinces, and his daughter May, who have come to Beijing in pursuit of Fox. Fox challenges the pair and Master Bo to a showdown that night. Following a protracted battle, the group is on the verge of defeat when Mu Bai arrives and outmaneuvers Fox.
She reveals that she killed Mu Bai's teacher because he would sleep with her but refused to take a woman as a disciple, and she felt it was poetic justice for him to die at a woman's hand. Just as Mu Bai is about to kill her, the masked thief reappears and helps Fox. Fox kills Tsai before fleeing with the thief (who is revealed to be Jen). After seeing Jen fight Mu Bai, Fox realizes Jen had been secretly studying the Wudang manual. Fox is illiterate and could only follow the diagrams, whereas Jen's ability to read the manual allowed her to surpass her teacher in martial arts. At night, a bandit named Lo breaks into Jen's bedroom and asks her to leave with him. A flashback reveals that in the past, when Governor Yu and his family were traveling in the western deserts, Lo and his bandits raided Jen's caravan and Lo stole her comb. She pursued him to his desert cave to get her comb back. However, the pair soon fell in love. Lo eventually convinced Jen to return to her family, though not before telling her a legend of a man who jumped off a cliff to make his wishes come true. Because the man's heart was pure, he did not die. Lo has now come to Beijing to persuade Jen not to go through with her arranged marriage. However, Jen refuses to leave with him. Later, Lo interrupts Jen's wedding procession, begging her to leave with him. Shu Lien and Mu Bai convince Lo to wait for Jen at Mount Wudang, where he will be safe from Jen's family, who are furious with him. Jen runs away from her husband on their wedding night before the marriage can be consummated. Disguised in male clothing, she is accosted at an inn by a large group of warriors; armed with the Green Destiny and her own superior combat skills, she emerges victorious. Jen visits Shu Lien, who tells her that Lo is waiting for her at Mount Wudang. After an angry exchange, the two women engage in a duel. Shu Lien is the superior fighter, but Jen wields the Green Destiny and is able to destroy each weapon that Shu Lien uses, until Shu Lien finally manages to defeat Jen with a broken sword. When Shu Lien shows mercy, Jen wounds Shu Lien in the arm. Mu Bai arrives and pursues Jen into a bamboo forest. Mu Bai confronts Jen and offers to take her as his student. She promises to accept him as her teacher if he can take the Green Destiny from her in three moves. Mu Bai is able to take the sword in one move, but Jen reneges, and Mu Bai throws the sword over a waterfall. Jen dives after it and is then rescued by Fox. Fox puts Jen into a drugged sleep and places her in a cavern, where Mu Bai and Shu Lien discover her. Fox suddenly appears and attacks the others with poisoned needles. Mu Bai blocks the needles with his sword and mortally wounds Fox, only to realize that one of the needles has hit him in the neck. With her last breath, Fox confesses that her goal had been to kill Jen because Jen had hidden the secrets of Wudang's fighting techniques from her. Contrite, Jen leaves to prepare an antidote for the poisoned needle. With his last breath, Mu Bai finally confesses his love for Shu Lien. He dies in her arms as Jen returns. Shu Lien forgives Jen, telling her to go to Lo and always be true to herself. The Green Destiny is returned to Sir Te. Jen later goes to Mount Wudang and spends the night with Lo. The next morning, Lo finds Jen standing on a bridge overlooking the edge of the mountain. In an echo of the legend that they spoke about in the desert, she asks him to make a wish. Lo wishes for them to be together again, back in the desert.
Jen then glides off the bridge and gently floats down into the mists. Cast Credits from British Film Institute: Chow Yun-fat as Li Mu Bai (C: 李慕白, P: Lǐ Mùbái) Michelle Yeoh as Yu Shu Lien (T: 俞秀蓮, S: 俞秀莲, P: Yú Xiùlián) Zhang Ziyi as Jen Yu (T: 玉嬌龍, S: 玉娇龙, P: Yù Jiāolóng) Chang Chen as Lo "Dark Cloud" Xiao Hou (T: 羅小虎, S: 罗小虎, P: Luó Xiǎohǔ) Lang Sihung as Sir Te (T: 貝勒爺, S: 贝勒爷, P: Bèi-lèyé) Cheng Pei-pei as Jade Fox (C: 碧眼狐狸, P: Bìyǎn Húli) Li Fazeng as Governor Yu (S: 玉大人, P: Yù Dàrén) Wang Deming as Inspector Tsai (S: 蔡九, P: Cài Jiǔ) Li Li as Tsai May (S: 蔡香妹, P: Cài Xiāng Mèi) Hai Yan as Madam Yu (S: 玉夫人, P: Yù Fūren) Gao Xi'an as Bo (S: 劉泰保, P: Liú Tàibǎo) Huang Suying as Aunt Wu (S: 吳媽, P: Wú Mā) Zhang JinTing as De Lu (S: 德祿, P: Dé Lù) Du ZhenXi as Uncle Jiao (S: 焦大爺, P: Jiāo Dà-Yé) Li Kai as Gou Jun Pei (S: 魯君佩, P: Lǔ Jūn Pèi) Feng Jianhua as Shining Phoenix Mountain Gou (S: 魯君雄, P: Lǔ Jūn Xióng) Ma Zhongxuan as Iron Arm Mi (S: 米大鏢, Mǐ-Dà Biāo) Li Bao-Cheng as Flying Machete Chang (S: 飛刀常, P: Fēi Dāo Cháng) Yang Yongde as Monk Jing (S: 法廣和尚, P: Fǎ Guǎng Héshang) Themes and interpretations Title The title "Crouching Tiger, Hidden Dragon" is a literal translation of the Chinese idiom "臥虎藏龍", which describes a place or situation that is full of unnoticed masters. It is from a poem by the ancient Chinese poet Yu Xin (513–581) that reads "暗石疑藏虎,盤根似臥龍", which means "behind the rock in the dark probably hides a tiger, and the coiling giant root resembles a crouching dragon". The title also has several other layers of meaning. On the most obvious level, the Chinese characters in the title connect to the narrative: the last characters in Xiaohu's and Jiaolong's names mean "tiger" and "dragon", respectively. On another level, the Chinese idiomatic phrase is an expression referring to the undercurrents of emotion, passion, and secret desire that lie beneath the surface of polite society and civil behavior, which alludes to the film's storyline. Gender roles The success of the Disney animated feature Mulan (1998) popularized the image of the Chinese woman warrior in the West. The storyline of Crouching Tiger, Hidden Dragon is mostly driven by the three female characters. In particular, Jen is driven by her desire to be free from the gender role imposed on her, while Shu Lien, herself oppressed by the gender role, tries to lead Jen back into the role deemed appropriate for her. Some prominent martial arts disciplines are traditionally held to have been originated by women, e.g., Wing Chun. The film's title refers to masters one does not notice, which necessarily includes mostly women, and therefore suggests the advantage of a female bodyguard. Poison Poison is also a significant theme in the film. The Chinese word "毒" (dú) means not only physical poison but also cruelty and sinfulness. In the world of martial arts, the use of poison is considered an act of one who is too cowardly and dishonorable to fight; and indeed, the only character who explicitly fits these characteristics is Jade Fox. The poison is a weapon of her bitterness and quest for vengeance: she poisons the master of Wudang, attempts to poison Jen, and succeeds in killing Mu Bai using a poisoned needle. In a further play on this theme by the director, Jade Fox, as she dies, refers to the poison from a young child, "the deceit of an eight-year-old girl", referring to what she considers her own spiritual poisoning by her young apprentice Jen. Li Mu Bai himself warns that, without guidance, Jen could become a "poison dragon".
China of the imagination The story is set during the Qing dynasty (1644–1912), but it does not specify an exact time. Lee sought to present a "China of the imagination" rather than an accurate vision of Chinese history. At the same time, Lee also wanted to make a film that Western audiences would want to see. Thus, the film is shot for a balance between Eastern and Western aesthetics. There are some scenes showing uncommon artistry for the typical martial arts film, such as an airborne battle among wispy bamboo plants. Production The film was adapted from the novel Crouching Tiger, Hidden Dragon by Wang Dulu, serialized between 1941 and 1942 in Qingdao Xinmin News. The novel is the fourth in a sequence of five. Under the contract between Columbia Pictures and Ang Lee and Hsu Li-kong, Columbia agreed to invest US$6 million in filming, with the stipulation that receipts had to exceed six times that amount before the two parties would begin to receive dividends. Casting Shu Qi was Ang Lee's first choice for the role of Jen, but she turned it down. Filming Although its Academy Award for Best Foreign Language Film was presented to Taiwan, Crouching Tiger, Hidden Dragon was in fact an international co-production between companies in four regions: the Chinese company China Film Co-production Corporation, the American companies Columbia Pictures Film Production Asia, Sony Pictures Classics, and Good Machine, the Hong Kong company Edko Films, and the Taiwanese Zoom Hunt Productions, as well as the unspecified United China Vision and Asia Union Film & Entertainment, created solely for this film. The film was made in Beijing, with location shooting in the Anhui, Hebei, Jiangsu, and Xinjiang provinces of China. The first phase of shooting was in the Gobi Desert, where it consistently rained. Director Ang Lee noted, "I didn't take one break in eight months, not even for half a day. I was miserable—I just didn't have the extra energy to be happy. Near the end, I could hardly breathe. I thought I was about to have a stroke." The stunt work was mostly performed by the actors themselves, and Ang Lee stated in an interview that computers were used "only to remove the safety wires that held the actors" aloft. "Most of the time you can see their faces," he added. "That's really them in the trees." Another complicating issue was the difference among the accents of the four lead actors: Chow Yun-fat is from Hong Kong and speaks Cantonese natively; Michelle Yeoh is from Malaysia and grew up speaking English and Malay, so she learned the Standard Chinese lines phonetically; Chang Chen is from Taiwan and speaks Standard Chinese with a Taiwanese accent. Only Zhang Ziyi spoke with the native Mandarin accent that Ang Lee wanted. Chow Yun-fat said that on "the first day [of shooting], I had to do 28 takes just because of the language. That's never happened before in my life." The film specifically targeted Western audiences rather than the domestic audiences who were already used to wuxia films. As a result, high-quality English subtitles were needed. Ang Lee, who was educated in the West, personally edited the subtitles to ensure they were satisfactory for Western audiences. Soundtrack The score was composed by Tan Dun in 1999. It was played for the movie by the Shanghai Symphony Orchestra, the Shanghai National Orchestra, and the Shanghai Percussion Ensemble. It features solo passages for cello played by Yo-Yo Ma. The last track, "A Love Before Time", features Coco Lee, who later sang it at the Academy Awards.
The composer Chen Yuanlin also collaborated on the project. The music for the entire film was produced in two weeks. The next year (2000), Tan adapted his film score into a cello concerto called simply Crouching Tiger. Release Marketing The film was adapted into a video game and a series of comics, and it led to the original novel being adapted into a 34-episode Taiwanese television series. The latter was released in North America in 2004 as New Crouching Tiger, Hidden Dragon. Home media The film was released on VHS and DVD on 5 June 2001 by Columbia TriStar Home Entertainment. It was also released on UMD on 26 June 2005. In the United Kingdom, it was broadcast on television in 2004 and became the year's most-watched foreign-language film on television. Restoration The film was re-released in a 4K restoration by Sony Pictures Classics in 2023. Reception Box office The film premiered in cinemas on 8 December 2000, in limited release within the United States. In its opening weekend, the film placed 15th, grossing $663,205 from 16 locations. On 12 January 2001, Crouching Tiger, Hidden Dragon went into wide release throughout the U.S., grossing $8,647,295 and ranking in sixth place. The film Save the Last Dance came in first place during that weekend, grossing $23,444,930. The film's revenue dropped by almost 30% in its second week of release, earning $6,080,357. For that particular weekend, the film fell to eighth place, screening in 837 theaters. Save the Last Dance remained unchanged in first place, grossing $15,366,047 in box-office revenue. During its final week in release, Crouching Tiger, Hidden Dragon finished in a distant 50th place with $37,233 in revenue. The film went on to top out domestically at $128,078,872 in total ticket sales through a 31-week theatrical run. Internationally, the film took in an additional $85,446,864 in box-office business for a combined worldwide total of $213,525,736. For 2000 as a whole, the film ranked 19th in worldwide box-office performance. Critical response Crouching Tiger, Hidden Dragon was very well received in the Western world and earned numerous awards. On Rotten Tomatoes, the film holds an approval rating of 98% based on 168 reviews, with an average rating of 8.6/10. The site's critical consensus states: "The movie that catapulted Ang Lee into the ranks of upper echelon Hollywood filmmakers, Crouching Tiger, Hidden Dragon features a deft mix of amazing martial arts battles, beautiful scenery, and tasteful drama." Metacritic reported the film had an average score of 94 out of 100, based on 32 reviews, indicating "universal acclaim". Some Chinese-speaking viewers were bothered by the accents of the leading actors. Neither Chow (a native Cantonese speaker) nor Yeoh (who was born and raised in Malaysia) spoke Mandarin Chinese as a mother tongue. All four main actors spoke Standard Chinese with vastly different accents: Chow speaks with a Cantonese accent, Yeoh with a Malaysian accent, Chang Chen with a Taiwanese accent, and Zhang Ziyi with a Beijing accent. Yeoh responded to this complaint in a 28 December 2000 interview with Cinescape. She argued, "My character lived outside of Beijing, and so I didn't have to do the Beijing accent." When the interviewer, Craig Reid, remarked, "My mother-in-law has this strange Sichuan-Mandarin accent that's hard for me to understand," Yeoh responded: "Yes, provinces all have their very own strong accents.
When we first started the movie, Cheng Pei Pei was going to have her accent, and Chang Zhen was going to have his accent, and this person would have that accent. And in the end nobody could understand what they were saying. Forget about us, even the crew from Beijing thought this was all weird." The film led to a boost in popularity of Chinese wuxia films in the Western world, where they were previously little known, and led to films such as Hero and House of Flying Daggers, both directed by Zhang Yimou, being marketed towards Western audiences. The film also provided the breakthrough role of Zhang Ziyi's career. Film Journal noted that Crouching Tiger, Hidden Dragon "pulled off the rare trifecta of critical acclaim, boffo box-office and gestalt shift", in reference to its ground-breaking success for a subtitled film in the American market. Accolades Gathering widespread critical acclaim at the Toronto and New York film festivals, the film also became a favorite when Academy Awards nominations were announced in 2001. The film was screened out of competition at the 2000 Cannes Film Festival. The film received ten Academy Award nominations, which was the highest ever for a non-English-language film, up until it was tied by Roma (2018). The film is ranked at number 497 on Empire's 2008 list of the 500 greatest movies of all time, and at number 66 in the magazine's 100 Best Films of World Cinema, published in 2010. In 2010, the Independent Film & Television Alliance selected the film as one of the 30 Most Significant Independent Films of the last 30 years. In 2016, it was voted the 35th-best film of the 21st century in a poll of 177 film critics from around the world conducted by the BBC. The film was included in the BBC's 2018 list of The 100 greatest foreign language films, ranked by 209 critics from 43 countries around the world. In 2019, The Guardian ranked the film 51st in its 100 best films of the 21st century list. Sequel A sequel to the film, Crouching Tiger, Hidden Dragon: Sword of Destiny, was released in 2016. It was directed by Yuen Wo-ping, who was the action choreographer for the first film. It is a co-production between Pegasus Media, China Film Group Corporation, and the Weinstein Company. Unlike the original film, the sequel was filmed in English for international release and dubbed into Chinese for Chinese releases. Sword of Destiny is based on Iron Knight, Silver Vase, the next (and last) novel in the Crane–Iron Pentalogy. It features a mostly new cast, headed by Donnie Yen. Michelle Yeoh reprised her role from the original. Zhang Ziyi was also approached to appear in Sword of Destiny but refused, stating that she would only appear in a sequel if Ang Lee were directing it. In the West, the sequel was for the most part not shown in theaters, and was instead distributed via the streaming service Netflix. Posterity MTV News related the theme of Janet Jackson's song "China Love" to the film: Jackson sings of the daughter of an emperor in love with a warrior, unable to sustain relations when forced to marry into royalty. The names of the pterosaur genus Kryptodrakon and the ceratopsian genus Yinlong (both meaning "hidden dragon" in Greek and Chinese respectively) allude to the film. The character of Lo, or "Dark Cloud" the desert bandit, influenced the development of the protagonist of the Prince of Persia series of video games.
References Further reading – Collection of articles External links 2000 films 2000 fantasy films 2000 martial arts films American martial arts films Martial arts fantasy films BAFTA winners (films) Best Film HKFA Best Foreign Language Film Academy Award winners Best Foreign Language Film BAFTA Award winners Best Foreign Language Film Golden Globe winners Chinese martial arts films Films based on Chinese novels Films directed by Ang Lee Films scored by Tan Dun Films set in 18th-century Qing dynasty Films set in Beijing Films set in the 1770s Films that won the Best Original Score Academy Award Films whose art director won the Best Art Direction Academy Award Films whose cinematographer won the Best Cinematography Academy Award Films whose director won the Best Direction BAFTA Award Films whose director won the Best Director Golden Globe Films with screenplays by James Schamus Georges Delerue Award winners Hong Kong martial arts films Hugo Award for Best Dramatic Presentation winning works Independent Spirit Award for Best Film winners Toronto International Film Festival People's Choice Award winners Magic realism films 2000s Mandarin-language films Nebula Award for Best Script-winning works Sony Pictures Classics films Taiwanese martial arts films Wuxia films 2000s American films 2000s Chinese films 2000s Hong Kong films
2,346
5,320
https://en.wikipedia.org/wiki/Carbon%20nanotube
Carbon nanotube
A carbon nanotube (CNT) is a tube made of carbon with diameters typically measured in nanometers. Single-wall carbon nanotubes (SWCNTs) are one of the allotropes of carbon, intermediate between fullerene cages and flat graphene, with diameters in the range of a nanometre. Although not made this way, single-wall carbon nanotubes can be idealized as cutouts from a two-dimensional hexagonal lattice of carbon atoms rolled up along one of the Bravais lattice vectors of the hexagonal lattice to form a hollow cylinder. In this construction, periodic boundary conditions are imposed over the length of this roll-up vector to yield a helical lattice of seamlessly bonded carbon atoms on the cylinder surface. Multi-wall carbon nanotubes (MWCNTs) consist of nested single-wall carbon nanotubes weakly bound together by van der Waals interactions in a tree-ring-like structure. These tubes are very similar, if not identical, to Oberlin, Endo, and Koyama's long straight and parallel carbon layers cylindrically arranged around a hollow tube. The term multi-wall carbon nanotube is also sometimes used to refer to double- and triple-wall carbon nanotubes. Carbon nanotube can also refer to tubes with an undetermined carbon-wall structure and diameters less than 100 nanometres. Such tubes were discovered in 1952 by Radushkevich and Lukyanovich. The length of a carbon nanotube produced by common production methods is often not reported, but is typically much larger than its diameter. Thus, for many purposes, end effects are neglected and the length of carbon nanotubes is assumed infinite. Some carbon nanotubes exhibit remarkable electrical conductivity, while others are semiconductors. They also have exceptional tensile strength and thermal conductivity because of their nanostructure and the strength of the bonds between carbon atoms. In addition, they can be chemically modified. These properties are expected to be valuable in many areas of technology, such as electronics, optics, composite materials (replacing or complementing carbon fibers), nanotechnology, and other applications of materials science. Rolling up a hexagonal lattice along different directions to form different infinitely long single-wall carbon nanotubes shows that all of these tubes not only have helical but also translational symmetry along the tube axis, and many also have nontrivial rotational symmetry about this axis. In addition, most are chiral, meaning the tube and its mirror image cannot be superimposed. This construction also allows single-wall carbon nanotubes to be labeled by a pair of integers. A special group of achiral single-wall carbon nanotubes are metallic, but all the rest are either small or moderate band gap semiconductors. These electrical properties, however, do not depend on whether the hexagonal lattice is rolled from its back to front or from its front to back, and hence are the same for the tube and its mirror image. The remarkable properties predicted for SWCNTs were tantalizing, but a path to creating them was lacking until 1993, when Iijima and Ichihashi at NEC and Bethune et al. at IBM independently discovered that co-vaporizing carbon and transition metals such as iron and cobalt could specifically catalyze SWCNT formation. These discoveries triggered research that succeeded in greatly increasing the efficiency of the catalytic production technique, and led to an explosion of work to characterize and find applications for SWCNTs.
Structure of SWNTs Basic details The structure of an ideal (infinitely long) single-walled carbon nanotube is that of a regular hexagonal lattice drawn on an infinite cylindrical surface, whose vertices are the positions of the carbon atoms. Since the length of the carbon-carbon bonds is fairly fixed, there are constraints on the diameter of the cylinder and the arrangement of the atoms on it. In the study of nanotubes, one defines a zigzag path on a graphene-like lattice as a path that turns 60 degrees, alternating left and right, after stepping through each bond. It is also conventional to define an armchair path as one that makes two left turns of 60 degrees followed by two right turns every four steps. On some carbon nanotubes, there is a closed zigzag path that goes around the tube. One says that the tube is of the zigzag type or configuration, or simply is a zigzag nanotube. If the tube is instead encircled by a closed armchair path, it is said to be of the armchair type, or an armchair nanotube. An infinite nanotube that is of the zigzag (or armchair) type consists entirely of closed zigzag (or armchair) paths, connected to each other. The zigzag and armchair configurations are not the only structures that a single-walled nanotube can have. To describe the structure of a general infinitely long tube, one should imagine it being sliced open by a cut parallel to its axis, that goes through some atom A, and then unrolled flat on the plane, so that its atoms and bonds coincide with those of an imaginary graphene sheet—more precisely, with an infinitely long strip of that sheet. The two halves of the atom A will end up on opposite edges of the strip, over two atoms A1 and A2 of the graphene. The line from A1 to A2 will correspond to the circumference of the cylinder that went through the atom A, and will be perpendicular to the edges of the strip. In the graphene lattice, the atoms can be split into two classes, depending on the directions of their three bonds. Half the atoms have their three bonds directed the same way, and half have their three bonds rotated 180 degrees relative to the first half. The atoms A1 and A2, which correspond to the same atom A on the cylinder, must be in the same class. It follows that the circumference of the tube and the angle of the strip are not arbitrary, because they are constrained to the lengths and directions of the lines that connect pairs of graphene atoms in the same class. Let u and v be two linearly independent vectors that connect the graphene atom A1 to two of its nearest atoms with the same bond directions. That is, if one numbers consecutive carbons around a graphene cell with C1 to C6, then u can be the vector from C1 to C3, and v be the vector from C1 to C5. Then, for any other atom A2 with the same class as A1, the vector from A1 to A2 can be written as a linear combination n u + m v, where n and m are integers. And, conversely, each pair of integers (n,m) defines a possible position for A2. Given n and m, one can reverse this theoretical operation by drawing the vector w = n u + m v on the graphene lattice, cutting a strip of the latter along lines perpendicular to w through its endpoints A1 and A2, and rolling the strip into a cylinder so as to bring those two points together. If this construction is applied to a pair (k,0), the result is a zigzag nanotube, with closed zigzag paths of 2k atoms. If it is applied to a pair (k,k), one obtains an armchair tube, with closed armchair paths of 4k atoms.
Types

The structure of the nanotube is not changed if the strip is rotated by 60 degrees clockwise around A1 before applying the hypothetical reconstruction above. Such a rotation changes the corresponding pair (n,m) to the pair (−2m,n+m). It follows that many possible positions of A2 relative to A1 — that is, many pairs (n,m) — correspond to the same arrangement of atoms on the nanotube. That is the case, for example, of the six pairs (1,2), (−2,3), (−3,1), (−1,−2), (2,−3), and (3,−1). In particular, the pairs (k,0) and (0,k) describe the same nanotube geometry. These redundancies can be avoided by considering only pairs (n,m) such that n > 0 and m ≥ 0; that is, where the direction of the vector w lies between those of u (inclusive) and v (exclusive). It can be verified that every nanotube has exactly one pair (n,m) that satisfies those conditions, which is called the tube's type. Conversely, for every type there is a hypothetical nanotube. In fact, two nanotubes have the same type if and only if one can be conceptually rotated and translated so as to match the other exactly.

Instead of the type (n,m), the structure of a carbon nanotube can be specified by giving the length of the vector w (that is, the circumference of the nanotube) and the angle α between the directions of u and w, which may range from 0 (inclusive) to 60 degrees clockwise (exclusive). If the diagram is drawn with u horizontal, the latter is the tilt of the strip away from the vertical.

Chirality and mirror symmetry

A nanotube is chiral if it has type (n,m), with m > 0 and m ≠ n; then its enantiomer (mirror image) has type (m,n), which is different from (n,m). This operation corresponds to mirroring the unrolled strip about the line L through A1 that makes an angle of 30 degrees clockwise from the direction of the u vector (that is, with the direction of the vector u+v). The only types of nanotubes that are achiral are the (k,0) "zigzag" tubes and the (k,k) "armchair" tubes. If two enantiomers are to be considered the same structure, then one may consider only types (n,m) with 0 ≤ m ≤ n and n > 0. Then the angle α between u and w, which may range from 0 to 30 degrees (inclusive both), is called the "chiral angle" of the nanotube.

Circumference and diameter

From n and m one can also compute the circumference c, which is the length of the vector w, which turns out to be

c = a√(n² + nm + m²) ≈ 246√(n² + nm + m²)

in picometres, where a ≈ 246 pm is the lattice constant of graphene (the common length of the vectors u and v). The diameter of the tube is then

d = c/π ≈ 78.3√(n² + nm + m²),

also in picometres. (These formulas are only approximate, especially for small n and m where the bonds are strained, and they do not take into account the thickness of the wall.)

The tilt angle α between u and w and the circumference c are related to the type indices n and m by

α = arg(2n + m, m√3),

where arg(x,y) is the clockwise angle between the X-axis and the vector (x,y), a function that is available in many programming languages as atan2(y,x). Conversely, given c and α, one can get the type (n,m) by the formulas

m = (2c/(a√3)) sin α,  n = (c/a) cos α − m/2,

which must evaluate to integers.
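To make these relations concrete, the following minimal Python sketch (illustrative code, not from any particular library) computes the circumference, diameter, and tilt angle of an ideal (n,m) tube and classifies it as zigzag, armchair, or chiral. Note that atan2(y,x) implements the arg(x,y) convention used above.

import math

A_PM = 246.0  # graphene lattice constant |u| = |v|, about 246 pm (0.246 nm)

def swcnt_geometry(n: int, m: int) -> dict:
    # Circumference, diameter, and tilt angle of an ideal (n, m) tube,
    # using the rolled-up construction above; wall thickness and the
    # bond strain of very narrow tubes are ignored.
    if n <= 0 or m < 0:
        raise ValueError("expected a canonical type with n > 0 and m >= 0")
    root = math.sqrt(n * n + n * m + m * m)
    c = A_PM * root                # circumference |w| in picometres
    d = c / math.pi                # diameter in picometres
    # alpha = arg(2n + m, m*sqrt(3)); atan2(y, x) implements arg(x, y)
    alpha = math.degrees(math.atan2(m * math.sqrt(3), 2 * n + m))
    kind = "zigzag" if m == 0 else ("armchair" if m == n else "chiral")
    return {"circumference_pm": c, "diameter_pm": d, "alpha_deg": alpha, "kind": kind}

print(swcnt_geometry(10, 0))  # zigzag: alpha = 0
print(swcnt_geometry(2, 2))   # armchair: alpha = 30; d ~ 271 pm by this ideal
                              # formula, versus the measured ~0.3 nm quoted
                              # for the (2,2) tube below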
Physical limits

Narrowest examples

If n and m are too small, the structure described by the pair (n,m) will describe a molecule that cannot be reasonably called a "tube", and may not even be stable. For example, the structure theoretically described by the pair (1,0) (the limiting "zigzag" type) would be just a chain of carbons. That is a real molecule, carbyne, which has some characteristics of nanotubes (such as orbital hybridization and high tensile strength) — but it has no hollow space, and may not be obtainable as a condensed phase.

The pair (2,0) would theoretically yield a chain of fused 4-cycles; and (1,1), the limiting "armchair" structure, would yield a chain of bi-connected 4-rings. These structures may not be realizable. The thinnest carbon nanotube proper is the armchair structure with type (2,2), which has a diameter of 0.3 nm. This nanotube was grown inside a multi-walled carbon nanotube. The carbon nanotube type was assigned by a combination of high-resolution transmission electron microscopy (HRTEM), Raman spectroscopy, and density functional theory (DFT) calculations. The thinnest freestanding single-walled carbon nanotube is about 0.43 nm in diameter. Researchers suggested that it can be either a (5,1) or a (4,2) SWCNT, but the exact type of the carbon nanotube remains questionable. (3,3), (4,3), and (5,1) carbon nanotubes (all about 0.4 nm in diameter) were unambiguously identified using aberration-corrected high-resolution transmission electron microscopy inside double-walled CNTs.

Length

The observation of the longest carbon nanotubes grown so far, around half a metre (550 mm) long, was reported in 2013. These nanotubes were grown on silicon substrates using an improved chemical vapor deposition (CVD) method and represent electrically uniform arrays of single-walled carbon nanotubes. The shortest carbon nanotube can be considered to be the organic compound cycloparaphenylene, which was synthesized in 2008 by Ramesh Jasti. Other small-molecule carbon nanotubes have been synthesized since.

Density

The highest density of CNTs was achieved in 2013: the tubes were grown on a conductive titanium-coated copper surface coated with the co-catalysts cobalt and molybdenum at 450 °C, lower than typical growth temperatures. The tubes averaged a height of 380 nm and a mass density of 1.6 g cm−3. The material showed ohmic conductivity (lowest resistance ~22 kΩ).

Variants

There is no consensus on some terms describing carbon nanotubes in the scientific literature: both "-wall" and "-walled" are used in combination with "single", "double", "triple", or "multi", and the letter C is often omitted in the abbreviation, for example, multi-walled carbon nanotube (MWNT). The International Organization for Standardization uses single-wall or multi-wall in its documents.

Multi-walled

Multi-walled nanotubes (MWNTs) consist of multiple rolled layers (concentric tubes) of graphene. There are two models that can be used to describe the structures of multi-walled nanotubes. In the Russian Doll model, sheets of graphite are arranged in concentric cylinders, e.g., a (0,8) single-walled nanotube (SWNT) within a larger (0,17) single-walled nanotube. In the Parchment model, a single sheet of graphite is rolled in around itself, resembling a scroll of parchment or a rolled newspaper. The interlayer distance in multi-walled nanotubes is close to the distance between graphene layers in graphite, approximately 3.4 Å. The Russian Doll structure is observed more commonly. Its individual shells can be described as SWNTs, which can be metallic or semiconducting. Because of statistical probability and restrictions on the relative diameters of the individual tubes, one of the shells, and thus the whole MWNT, is usually a zero-gap metal.

Double-walled carbon nanotubes (DWNTs) form a special class of nanotubes because their morphology and properties are similar to those of SWNTs but they are more resistant to attack by chemicals.
This is especially important when it is necessary to graft chemical functions to the surface of the nanotubes (functionalization) to add properties to the CNT. Covalent functionalization of SWNTs will break some C=C double bonds, leaving "holes" in the structure of the nanotube and thus modifying both its mechanical and electrical properties. In the case of DWNTs, only the outer wall is modified. Gram-scale DWNT synthesis by the CCVD technique was first proposed in 2003, based on the selective reduction of oxide solutions in methane and hydrogen.

The telescopic motion ability of inner shells and their unique mechanical properties will permit the use of multi-walled nanotubes as the main movable arms in upcoming nanomechanical devices. The retraction force that occurs during telescopic motion is caused by the Lennard-Jones interaction between shells, and its value is about 1.5 nN.

Junctions and crosslinking

Junctions between two or more nanotubes have been widely discussed theoretically. Such junctions are quite frequently observed in samples prepared by arc discharge as well as by chemical vapor deposition. The electronic properties of such junctions were first considered theoretically by Lambin et al., who pointed out that a connection between a metallic tube and a semiconducting one would represent a nanoscale heterojunction. Such a junction could therefore form a component of a nanotube-based electronic circuit. Junctions between nanotubes and graphene have been considered theoretically and studied experimentally. Nanotube-graphene junctions form the basis of pillared graphene, in which parallel graphene sheets are separated by short nanotubes. Pillared graphene represents a class of three-dimensional carbon nanotube architectures.

Recently, several studies have highlighted the prospect of using carbon nanotubes as building blocks to fabricate three-dimensional macroscopic (>100 nm in all three dimensions) all-carbon devices. Lalwani et al. have reported a novel radical-initiated thermal crosslinking method to fabricate macroscopic, free-standing, porous, all-carbon scaffolds using single- and multi-walled carbon nanotubes as building blocks. These scaffolds possess macro-, micro-, and nano-structured pores, and the porosity can be tailored for specific applications. These 3D all-carbon scaffolds/architectures may be used for the fabrication of the next generation of energy storage, supercapacitors, field emission transistors, high-performance catalysis, photovoltaics, and biomedical devices, implants, and sensors.

Other morphologies

Carbon nanobuds are a newly created material combining two previously discovered allotropes of carbon: carbon nanotubes and fullerenes. In this new material, fullerene-like "buds" are covalently bonded to the outer sidewalls of the underlying carbon nanotube. This hybrid material has useful properties of both fullerenes and carbon nanotubes. In particular, nanobuds have been found to be exceptionally good field emitters. In composite materials, the attached fullerene molecules may function as molecular anchors preventing slipping of the nanotubes, thus improving the composite's mechanical properties.

A carbon peapod is a novel hybrid carbon material which traps fullerenes inside a carbon nanotube. It can possess interesting magnetic properties with heating and irradiation. It can also be applied as an oscillator in theoretical investigations and predictions.
In theory, a nanotorus is a carbon nanotube bent into a torus (doughnut shape). Nanotori are predicted to have many unique properties, such as magnetic moments 1000 times larger than previously expected for certain specific radii. Properties such as magnetic moment and thermal stability vary widely depending on the radius of the torus and the radius of the tube.

Graphenated carbon nanotubes are a relatively new hybrid that combines graphitic foliates grown along the sidewalls of multiwalled or bamboo-style CNTs. The foliate density can vary as a function of deposition conditions (e.g., temperature and time), with their structure ranging from a few layers of graphene (< 10) to thicker, more graphite-like structures. The fundamental advantage of an integrated graphene-CNT structure is the high surface area three-dimensional framework of the CNTs coupled with the high edge density of graphene. Depositing a high density of graphene foliates along the length of aligned CNTs can significantly increase the total charge capacity per unit of nominal area as compared to other carbon nanostructures.

Cup-stacked carbon nanotubes (CSCNTs) differ from other quasi-1D carbon structures, which normally behave as quasi-metallic conductors of electrons. CSCNTs exhibit semiconducting behavior because of the stacking microstructure of graphene layers.

Properties

Many properties of single-walled carbon nanotubes depend significantly on the (n,m) type, and this dependence is non-monotonic (see Kataura plot). In particular, the band gap can vary from zero to about 2 eV and the electrical conductivity can show metallic or semiconducting behavior.

Mechanical

Carbon nanotubes are the strongest and stiffest materials yet discovered in terms of tensile strength and elastic modulus. This strength results from the covalent sp2 bonds formed between the individual carbon atoms. In 2000, a multiwalled carbon nanotube was tested to have a tensile strength of 63 GPa. (For illustration, this translates into the ability to endure the tension of a weight equivalent to about 6,400 kg on a cable with a cross-section of 1 mm2.) Further studies, such as one conducted in 2008, revealed that individual CNT shells have strengths of up to ≈100 GPa, which is in agreement with quantum/atomistic models. Because carbon nanotubes have a low density for a solid of 1.3 to 1.4 g/cm3, their specific strength of up to 48,000 kN·m·kg−1 is the best of known materials, compared to high-carbon steel's 154 kN·m·kg−1.

Although the strength of individual CNT shells is extremely high, weak shear interactions between adjacent shells and tubes lead to a significant reduction in the effective strength of multiwalled carbon nanotubes and carbon nanotube bundles, down to only a few GPa. This limitation has recently been addressed by applying high-energy electron irradiation, which crosslinks inner shells and tubes and effectively increases the strength of these materials to ≈60 GPa for multiwalled carbon nanotubes and ≈17 GPa for double-walled carbon nanotube bundles.

CNTs are not nearly as strong under compression. Because of their hollow structure and high aspect ratio, they tend to undergo buckling when placed under compressive, torsional, or bending stress. On the other hand, there is evidence that in the radial direction they are rather soft. The first transmission electron microscope observation of radial elasticity suggested that even van der Waals forces can deform two adjacent nanotubes.
Later, nanoindentations with an atomic force microscope were performed by several groups to quantitatively measure the radial elasticity of multiwalled carbon nanotubes, and tapping/contact mode atomic force microscopy was also performed on single-walled carbon nanotubes. Young's moduli on the order of several GPa showed that CNTs are in fact very soft in the radial direction.

In 2020, it was reported that for CNT-filled polymer nanocomposites, 4 wt% and 6 wt% loadings are the optimal concentrations, as they provide a good balance between mechanical properties and the resilience of those properties against UV exposure for the offshore umbilical sheathing layer.

Electrical

Unlike graphene, which is a two-dimensional semimetal, carbon nanotubes are either metallic or semiconducting along the tubular axis. For a given (n,m) nanotube, if n = m, the nanotube is metallic; if n − m is a multiple of 3 and n ≠ m, then the nanotube is quasi-metallic with a very small band gap; otherwise, the nanotube is a moderate semiconductor. Thus, all armchair (n = m) nanotubes are metallic, and nanotubes such as (6,4) and (9,1) are semiconducting. Carbon nanotubes are not semimetallic because the degenerate point (the point where the π [bonding] band meets the π* [anti-bonding] band, at which the energy goes to zero) is slightly shifted away from the K point in the Brillouin zone because of the curvature of the tube surface, causing hybridization between the σ* and π* anti-bonding bands and modifying the band dispersion.

The rule regarding metallic versus semiconductor behavior has exceptions because curvature effects in small-diameter tubes can strongly influence electrical properties. Thus, a (5,0) SWCNT that should be semiconducting is in fact metallic according to calculations. Likewise, zigzag and chiral SWCNTs with small diameters that should be metallic have a finite gap (armchair nanotubes remain metallic).

In theory, metallic nanotubes can carry an electric current density of 4 × 109 A/cm2, which is more than 1,000 times greater than those of metals such as copper, where for copper interconnects current densities are limited by electromigration. Carbon nanotubes are thus being explored as interconnects and conductivity-enhancing components in composite materials, and many groups are attempting to commercialize highly conducting electrical wire assembled from individual carbon nanotubes. There are significant challenges to be overcome, however, such as undesired current saturation under voltage, and the much more resistive nanotube-to-nanotube junctions and impurities, all of which lower the electrical conductivity of macroscopic nanotube wires by orders of magnitude compared to the conductivity of the individual nanotubes.

Because of its nanoscale cross-section, electrons propagate only along the tube's axis. As a result, carbon nanotubes are frequently referred to as one-dimensional conductors. The maximum electrical conductance of a single-walled carbon nanotube is 2G0, where G0 = 2e2/h is the conductance of a single ballistic quantum channel.
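A short sketch of the classification just described: the (n − m) mod 3 rule, the ideal two-channel ballistic resistance implied by G = 2G0, and a rough per-tube current ceiling from the quoted 4 × 109 A/cm2 density. The band-gap estimate Eg ≈ 2γ0·aCC/d uses textbook tight-binding values (γ0 ≈ 2.7 eV, aCC ≈ 0.142 nm) that are assumptions here, not figures from this article, and it is inaccurate for very narrow tubes.

import math

G0_SIEMENS = 7.748091729e-5  # conductance quantum G0 = 2e^2/h

def conduction_class(n: int, m: int) -> str:
    # The (n - m) mod 3 rule quoted above; curvature-driven exceptions
    # for very small diameters are not modeled.
    if n == m:
        return "metallic (armchair)"
    if (n - m) % 3 == 0:
        return "quasi-metallic (tiny curvature-induced gap)"
    return "semiconducting"

def bandgap_ev(diameter_nm: float) -> float:
    # First-order tight-binding estimate Eg ~ 2*gamma0*a_cc/d with the
    # textbook values gamma0 ~ 2.7 eV and a_cc ~ 0.142 nm (assumptions,
    # not figures from this article).
    return 2.0 * 2.7 * 0.142 / diameter_nm

print(conduction_class(5, 5), conduction_class(9, 1), conduction_class(9, 0))
# -> metallic, semiconducting, quasi-metallic

r_min_ohm = 1.0 / (2.0 * G0_SIEMENS)  # ideal ballistic tube: ~6.45 kOhm
area_cm2 = math.pi * (0.5e-7) ** 2    # 1 nm diameter -> radius 0.5e-7 cm
i_max_amp = 4e9 * area_cm2            # quoted 4e9 A/cm^2 -> ~31 uA per tube
print(f"R_min ~ {r_min_ohm / 1e3:.2f} kOhm, I_max ~ {i_max_amp * 1e6:.0f} uA")
print(f"Eg(1 nm) ~ {bandgap_ev(1.0):.2f} eV")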
Because of the role of the π-electron system in determining the electronic properties of graphene, doping in carbon nanotubes differs from that of bulk crystalline semiconductors from the same group of the periodic table (e.g., silicon). Graphitic substitution of carbon atoms in the nanotube wall by boron or nitrogen dopants leads to p-type and n-type behavior, respectively, as would be expected in silicon. However, some non-substitutional (intercalated or adsorbed) dopants introduced into a carbon nanotube, such as alkali metals and electron-rich metallocenes, result in n-type conduction because they donate electrons to the π-electron system of the nanotube. By contrast, π-electron acceptors such as FeCl3 or electron-deficient metallocenes function as p-type dopants because they draw π-electrons away from the top of the valence band. Intrinsic superconductivity has been reported, although other experiments found no evidence of it, leaving the claim a subject of debate.

In 2021, Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT, published department findings on the use of carbon nanotubes to create an electrical current. By immersing the structures in an organic solvent, the liquid drew electrons out of the carbon particles. Strano was quoted as saying, "This allows you to do electrochemistry, but with no wires," and described the work as a significant breakthrough. Future applications include powering micro- or nanoscale robots, as well as driving alcohol oxidation reactions, which are important in the chemicals industry.

Optical

Carbon nanotubes have useful absorption, photoluminescence (fluorescence), and Raman spectroscopy properties. Spectroscopic methods offer the possibility of quick and non-destructive characterization of relatively large amounts of carbon nanotubes. There is a strong demand for such characterization from the industrial point of view: numerous parameters of nanotube synthesis can be changed, intentionally or unintentionally, to alter the nanotube quality. As shown below, optical absorption, photoluminescence, and Raman spectroscopies allow quick and reliable characterization of this "nanotube quality" in terms of non-tubular carbon content, structure (chirality) of the produced nanotubes, and structural defects. These features determine nearly all other properties, such as the optical, mechanical, and electrical properties.

Carbon nanotubes are unique "one-dimensional systems" which can be envisioned as rolled single sheets of graphite (or, more precisely, graphene). This rolling can be done at different angles and curvatures, resulting in different nanotube properties. The diameter typically varies in the range 0.4–40 nm (comparable to X-ray wavelengths, a range of "only" ~100 times), but the length can vary by a factor of roughly four billion, from 0.14 nm to 55.5 cm. The nanotube aspect ratio, or length-to-diameter ratio, can be as high as 132,000,000:1, which is unequalled by any other material. Consequently, all the properties of the carbon nanotubes relative to those of typical semiconductors are extremely anisotropic (directionally dependent) and tunable.

Whereas the mechanical, electrical, and electrochemical (supercapacitor) properties of carbon nanotubes are well established and have immediate applications, the practical use of optical properties is as yet unclear. The aforementioned tunability of properties is potentially useful in optics and photonics. In particular, light-emitting diodes (LEDs) and photo-detectors based on a single nanotube have been produced in the lab. Their unique feature is not their efficiency, which is as yet relatively low, but the narrow selectivity in the wavelength of emission and detection of light and the possibility of its fine tuning through the nanotube structure. In addition, bolometer and optoelectronic memory devices have been realised on ensembles of single-walled carbon nanotubes.
Crystallographic defects also affect the tube's electrical properties. A common result is lowered conductivity through the defective region of the tube. A defect in armchair-type tubes (which can conduct electricity) can cause the surrounding region to become semiconducting, and single monatomic vacancies induce magnetic properties.

Thermal

All nanotubes are expected to be very good thermal conductors along the tube, exhibiting a property known as "ballistic conduction", but good insulators lateral to the tube axis. Measurements show that an individual SWNT has a room-temperature thermal conductivity along its axis of about 3500 W·m−1·K−1; compare this to copper, a metal well known for its good thermal conductivity, which transmits 385 W·m−1·K−1. An individual SWNT has a room-temperature thermal conductivity lateral to its axis (in the radial direction) of about 1.52 W·m−1·K−1, which is about as thermally conductive as soil. Macroscopic assemblies of nanotubes such as films or fibres have reached up to 1500 W·m−1·K−1 so far. Networks composed of nanotubes demonstrate a wide range of thermal conductivities, from the level of thermal insulation (around 0.1 W·m−1·K−1) up to such high values, depending on how much impurities, misalignment, and other factors contribute to the thermal resistance of the system. The temperature stability of carbon nanotubes is estimated to be up to 2800 °C in vacuum and about 750 °C in air.

Crystallographic defects strongly affect the tube's thermal properties. Such defects lead to phonon scattering, which in turn increases the relaxation rate of the phonons. This reduces the mean free path and reduces the thermal conductivity of nanotube structures. Phonon transport simulations indicate that substitutional defects such as nitrogen or boron will primarily lead to scattering of high-frequency optical phonons. However, larger-scale defects such as Stone–Wales defects cause phonon scattering over a wide range of frequencies, leading to a greater reduction in thermal conductivity.
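As a rough illustration of the axial figures quoted above, one can plug them into steady-state Fourier conduction, q = κAΔT/L. The geometry below (a 1.4 nm diameter, 1 μm long tube treated as a solid cylinder, with a 10 K drop) is purely illustrative, and the point is only the ~9× ratio between the two conductivities.

import math

def fourier_heat_flow_w(kappa: float, area_m2: float, dt_k: float, length_m: float) -> float:
    # Steady-state Fourier conduction along a rod: q = kappa * A * dT / L
    return kappa * area_m2 * dt_k / length_m

d_m = 1.4e-9                       # assumed tube diameter (~1.4 nm), illustrative
area = math.pi * (d_m / 2.0) ** 2  # tube treated as a solid cylinder
length, dt = 1e-6, 10.0            # 1 um long bridge, 10 K drop, illustrative

q_swnt = fourier_heat_flow_w(3500.0, area, dt, length)  # axial value quoted above
q_cu = fourier_heat_flow_w(385.0, area, dt, length)     # copper, same geometry
print(f"SWNT: {q_swnt:.2e} W vs Cu: {q_cu:.2e} W (ratio ~{q_swnt / q_cu:.0f}x)")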
Synthesis

Techniques have been developed to produce nanotubes in sizeable quantities, including arc discharge, laser ablation, chemical vapor deposition (CVD), and high-pressure carbon monoxide disproportionation (HiPCO). Among these, arc discharge, laser ablation, and CVD are batch processes, while HiPCO is a continuous gas-phase process. Most of these processes take place in a vacuum or with process gases. The CVD growth method is popular, as it yields high quantities and offers a degree of control over diameter, length, and morphology. Using particulate catalysts, large quantities of nanotubes can be synthesized by these methods, but achieving repeatability remains a major problem with CVD growth. Advances in catalysis and continuous growth are making CNTs produced by the HiPCO process more commercially viable. The HiPCO process helps to produce high-purity single-walled carbon nanotubes in higher quantities. The HiPCO reactor operates at high temperature (900–1100 °C) and high pressure (~30–50 bar). It uses carbon monoxide as the carbon source and iron pentacarbonyl or nickel tetracarbonyl as a catalyst. These catalysts provide nucleation sites for the nanotubes to grow.

Vertically aligned carbon nanotube arrays are also grown by thermal chemical vapor deposition. A substrate (quartz, silicon, stainless steel, etc.) is coated with a catalytic metal (Fe, Co, Ni) layer. Typically that layer is iron, deposited via sputtering to a thickness of 1–5 nm. A 10–50 nm underlayer of alumina is often also put down on the substrate first. This imparts controllable wetting and good interfacial properties. When the substrate is heated to the growth temperature (~700 °C), the continuous iron film breaks up into small islands, with each island then nucleating a carbon nanotube. The sputtered thickness controls the island size, and this in turn determines the nanotube diameter. Thinner iron layers drive down the diameter of the islands and hence the diameter of the nanotubes grown. The amount of time the metal islands can sit at the growth temperature is limited, as they are mobile and can merge into larger (but fewer) islands. Annealing at the growth temperature reduces the site density (number of CNTs/mm2) while increasing the catalyst diameter.

As-prepared carbon nanotubes always contain impurities such as other forms of carbon (amorphous carbon, fullerene, etc.) and non-carbonaceous impurities (metal used as catalyst). These impurities need to be removed before the carbon nanotubes can be used in applications.

Functionalization

CNTs are known to have weak dispersibility in many solvents such as water, as a consequence of strong intermolecular π–π interactions. This hinders the processability of CNTs in industrial applications. In order to tackle the issue, various techniques have been developed to modify the surface of CNTs in order to improve their stability and solubility in water. This enhances the processing and manipulation of insoluble CNTs, rendering them useful for synthesizing innovative CNT nanofluids with impressive properties that are tunable for a wide range of applications. Chemical routes such as covalent functionalization have been studied extensively; these involve the oxidation of CNTs via strong acids (e.g. sulfuric acid, nitric acid, or a mixture of both) in order to introduce carboxylic groups onto the surface of the CNTs as the final product or for further modification by esterification or amination. Free radical grafting is a promising technique among covalent functionalization methods, in which alkyl or aryl peroxides, substituted anilines, and diazonium salts are used as the starting agents. Free radical grafting of macromolecules (as the functional group) onto the surface of CNTs can improve the solubility of CNTs compared to common acid treatments, which attach small molecules such as hydroxyl groups onto the surface of CNTs. The solubility of CNTs can be improved significantly by free-radical grafting because the large functional molecules facilitate the dispersion of CNTs in a variety of solvents even at a low degree of functionalization. Recently, an innovative environmentally friendly approach has been developed for the covalent functionalization of multi-walled carbon nanotubes (MWCNTs) using clove buds. This approach is innovative and green because it does not use the toxic and hazardous acids typically employed in common carbon nanomaterial functionalization procedures. The MWCNTs are functionalized in one pot using a free radical grafting reaction. The clove-functionalized MWCNTs are then dispersed in water, producing a highly stable multi-walled carbon nanotube aqueous suspension (a nanofluid).

Modeling

Carbon nanotubes are modelled in a similar manner to traditional composites, in which a reinforcement phase is surrounded by a matrix phase. Ideal models such as cylindrical, hexagonal, and square models are common.
The appropriate size of the micromechanics model depends strongly on the mechanical property being studied. The concept of the representative volume element (RVE) is used to determine the appropriate size and configuration of the computer model needed to replicate the actual behavior of a CNT-reinforced nanocomposite. Depending on the material property of interest (thermal, electrical, modulus, creep), one RVE might predict the property better than the alternatives. While ideal models are computationally efficient, they do not represent the microstructural features observed in scanning electron microscopy of actual nanocomposites. To incorporate realistic modeling, computer models are also generated that incorporate variability such as waviness, orientation, and agglomeration of multi-wall or single-wall carbon nanotubes.
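The simplest idealized limit of such a micromechanics model is the Voigt rule of mixtures for perfectly aligned, straight, well-bonded reinforcement. The sketch below is exactly that upper bound; the moduli used (≈1 TPa for the CNT axial modulus, ≈3 GPa for a typical epoxy) are commonly quoted illustrative assumptions rather than values from this article.

def voigt_modulus_gpa(vf: float, e_cnt_gpa: float = 1000.0, e_matrix_gpa: float = 3.0) -> float:
    # Voigt (rule-of-mixtures) upper bound for the longitudinal modulus of a
    # composite with perfectly aligned, straight, well-bonded reinforcement.
    # E_cnt ~ 1 TPa and E_matrix ~ 3 GPa are illustrative assumptions.
    if not 0.0 <= vf <= 1.0:
        raise ValueError("volume fraction must lie in [0, 1]")
    return vf * e_cnt_gpa + (1.0 - vf) * e_matrix_gpa

for vf in (0.01, 0.05, 0.10):
    print(f"vf = {vf:.2f}: upper-bound E ~ {voigt_modulus_gpa(vf):.1f} GPa")
# Waviness, agglomeration, and weak interfaces push real composites far
# below this bound, which is why realistic RVE models add such features.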
Metrology

There are many metrology standards and reference materials available for carbon nanotubes. For single-wall carbon nanotubes, ISO/TS 10868 describes a measurement method for the diameter, purity, and fraction of metallic nanotubes through optical absorption spectroscopy, while ISO/TS 10797 and ISO/TS 10798 establish methods to characterize the morphology and elemental composition of single-wall carbon nanotubes, using transmission electron microscopy and scanning electron microscopy respectively, coupled with energy dispersive X-ray spectrometry analysis. NIST SRM 2483 is a soot of single-wall carbon nanotubes used as a reference material for elemental analysis, and was characterized using thermogravimetric analysis, prompt gamma activation analysis, induced neutron activation analysis, inductively coupled plasma mass spectrometry, resonant Raman scattering, UV-visible-near infrared fluorescence spectroscopy and absorption spectroscopy, scanning electron microscopy, and transmission electron microscopy. The Canadian National Research Council also offers a certified reference material, SWCNT-1, for elemental analysis using neutron activation analysis and inductively coupled plasma mass spectrometry. NIST RM 8281 is a mixture of three lengths of single-wall carbon nanotube. For multiwall carbon nanotubes, ISO/TR 10929 identifies the basic properties and the content of impurities, while ISO/TS 11888 describes morphology using scanning electron microscopy, transmission electron microscopy, viscometry, and light scattering analysis. ISO/TS 10798 is also valid for multiwall carbon nanotubes.

Chemical modification

Carbon nanotubes can be functionalized to attain desired properties that can be used in a wide variety of applications. The two main methods of carbon nanotube functionalization are covalent and non-covalent modifications. Because of their apparent hydrophobic nature, carbon nanotubes tend to agglomerate, hindering their dispersion in solvents or viscous polymer melts. The resulting nanotube bundles or aggregates reduce the mechanical performance of the final composite. The surface of the carbon nanotubes can be modified to reduce the hydrophobicity and improve interfacial adhesion to a bulk polymer through chemical attachment. The surface of carbon nanotubes can also be chemically modified by coating with spinel nanoparticles by hydrothermal synthesis, for use in water oxidation. In addition, the surface of carbon nanotubes can be fluorinated or halofluorinated by heating while in contact with a fluoroorganic substance, thereby forming partially fluorinated carbons (so-called Fluocar materials) with grafted (halo)fluoroalkyl functionality.

Applications

A primary obstacle for applications of carbon nanotubes has been their cost. Prices for single-walled nanotubes declined from around $1500 per gram as of 2000 to retail prices of around $50 per gram of as-produced 40–60% by weight SWNTs as of March 2010. As of 2016, the retail price of as-produced 75% by weight SWNTs was $2 per gram.

Current

Current use and application of nanotubes has mostly been limited to bulk nanotubes, which are a mass of rather unorganized fragments of nanotubes. Bulk nanotube materials may never achieve a tensile strength similar to that of individual tubes, but such composites may nevertheless yield strengths sufficient for many applications. Bulk carbon nanotubes have already been used as composite fibers in polymers to improve the mechanical, thermal, and electrical properties of the bulk product.

Easton-Bell Sports, Inc. has been in partnership with Zyvex Performance Materials, using CNT technology in a number of their bicycle components – including flat and riser handlebars, cranks, forks, seatposts, stems, and aero bars. Amroy Europe Oy manufactures Hybtonite carbon nanoepoxy resins in which carbon nanotubes have been chemically activated to bond to epoxy, resulting in a composite material that is 20% to 30% stronger than other composite materials. It has been used for wind turbines, marine paints, and a variety of sports gear such as skis, ice hockey sticks, baseball bats, hunting arrows, and surfboards. Surrey NanoSystems synthesises carbon nanotubes to create Vantablack.

"Gecko tape" (also called "nano tape") is often commercially sold as double-sided adhesive tape. It can be used to hang lightweight items such as pictures and decorative items on smooth walls without punching holes in the wall. The carbon nanotube arrays comprising the synthetic setae leave no residue after removal and can stay sticky in extreme temperatures. In tissue engineering, carbon nanotubes have been used as scaffolding for bone growth, and they serve as tips for atomic force microscope probes.

Under development

Current research for modern applications includes:

Utilizing carbon nanotubes as the channel material of carbon nanotube field-effect transistors.
Using carbon nanotubes as a scaffold for diverse microfabrication techniques.
Energy dissipation in self-organized nanostructures under the influence of an electric field.
Using carbon nanotubes for environmental monitoring due to their active surface area and their ability to absorb gases.

Jack Andraka used carbon nanotubes in his pancreatic cancer test. His method of testing won the Intel International Science and Engineering Fair Gordon E. Moore Award in the spring of 2012. The Boeing Company has patented the use of carbon nanotubes for structural health monitoring of composites used in aircraft structures; this technology is intended to greatly reduce the risk of an in-flight failure caused by structural degradation. Zyvex Technologies has also built a 54-foot maritime vessel, the Piranha Unmanned Surface Vessel, as a technology demonstrator for what is possible using CNT technology. CNTs help improve the structural performance of the vessel, resulting in a lightweight 8,000 lb boat that can carry a payload of 15,000 lb over a range of 2,500 miles. IMEC is using carbon nanotubes for pellicles in semiconductor lithography.

Carbon nanotubes can serve as additives to various structural materials.
For instance, nanotubes form a tiny portion of the material(s) in some (primarily carbon fiber) baseball bats, golf clubs, car parts, and damascus steel. IBM expected carbon nanotube transistors to be used in integrated circuits by 2020.

Potential

The strength and flexibility of carbon nanotubes makes them of potential use in controlling other nanoscale structures, which suggests they will have an important role in nanotechnology engineering. The highest tensile strength measured for an individual multi-walled carbon nanotube is 63 GPa. Carbon nanotubes were found in Damascus steel from the 17th century, possibly helping to account for the legendary strength of the swords made of it. Recently, several studies have highlighted the prospect of using carbon nanotubes as building blocks to fabricate three-dimensional macroscopic (>1 mm in all three dimensions) all-carbon devices. Lalwani et al. have reported a novel radical-initiated thermal crosslinking method to fabricate macroscopic, free-standing, porous, all-carbon scaffolds using single- and multi-walled carbon nanotubes as building blocks. These scaffolds possess macro-, micro-, and nano-structured pores, and the porosity can be tailored for specific applications. These 3D all-carbon scaffolds/architectures may be used for the fabrication of the next generation of energy storage, supercapacitors, field emission transistors, high-performance catalysis, photovoltaics, and biomedical devices and implants.

CNTs are potential candidates for future via and wire material in nano-scale VLSI circuits. Eliminating the electromigration reliability concerns that plague today's Cu interconnects, isolated (single- and multi-wall) CNTs can carry current densities in excess of 1000 MA/cm2 without electromigration damage.

Single-walled nanotubes are likely candidates for miniaturizing electronics. The most basic building block of these systems is an electric wire, and SWNTs with diameters on the order of a nanometre can be excellent conductors. One useful application of SWNTs is in the development of the first intermolecular field-effect transistors (FETs). The first intermolecular logic gate using SWCNT FETs was made in 2001. A logic gate requires both a p-FET and an n-FET. Because SWNTs are p-FETs when exposed to oxygen and n-FETs otherwise, it is possible to expose half of an SWNT to oxygen and protect the other half from it. The resulting SWNT acts as a NOT logic gate with both p- and n-type FETs in the same molecule.

Large quantities of pure CNTs can be made into a freestanding sheet or film by the surface-engineered tape-casting (SETC) fabrication technique, a scalable method to fabricate flexible and foldable sheets with superior properties. Another reported form factor is CNT fiber (a.k.a. filament) made by wet spinning. The fiber is either directly spun from the synthesis pot or spun from pre-made dissolved CNTs. Individual fibers can be turned into a yarn. Apart from its strength and flexibility, the main advantage is making an electrically conducting yarn. The electronic properties of individual CNT fibers (i.e. bundles of individual CNTs) are governed by the two-dimensional structure of the CNTs. The fibers were measured to have a resistivity only one order of magnitude higher than that of metallic conductors at 300 K. By further optimizing the CNTs and CNT fibers, CNT fibers with improved electrical properties could be developed.
CNT-based yarns are suitable for applications in energy and electrochemical water treatment when coated with an ion-exchange membrane. Also, CNT-based yarns could replace copper as a winding material. Pyrhönen et al. (2015) have built a motor using CNT winding.

Safety and health

The National Institute for Occupational Safety and Health (NIOSH) is the leading United States federal agency conducting research and providing guidance on the occupational safety and health implications and applications of nanomaterials. Early scientific studies have indicated that nanoscale particles may pose a greater health risk than bulk materials due to a relative increase in surface area per unit mass. Increases in the length and diameter of CNTs are correlated with increased toxicity and pathological alterations in the lung. The biological interactions of nanotubes are not well understood, and the field is open to continued toxicological studies. It is often difficult to separate confounding factors, and since carbon is relatively biologically inert, some of the toxicity attributed to carbon nanotubes may instead be due to residual metal catalyst contamination. In previous studies, only Mitsui-7 was reliably demonstrated to be carcinogenic, although for unclear/unknown reasons. Unlike many common mineral fibers (such as asbestos), most SWCNTs and MWCNTs do not fit the size and aspect-ratio criteria to be classified as respirable fibers.

In 2013, given that the long-term health effects had not yet been measured, NIOSH published a Current Intelligence Bulletin detailing the potential hazards and a recommended exposure limit for carbon nanotubes and fibers. NIOSH has determined non-regulatory recommended exposure limits (RELs) of 1 μg/m3 for carbon nanotubes and carbon nanofibers as background-corrected elemental carbon as an 8-hour time-weighted average (TWA) respirable mass concentration. Notably, although CNTs caused pulmonary inflammation and toxicity in mice, exposure to aerosols generated from the sanding of composites containing polymer-coated MWCNTs, representative of the actual end-product, did not exert such toxicity.
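To put the NIOSH REL in perspective, a worker inhaling roughly 10 m3 of air over an 8-hour shift (a standard industrial-hygiene assumption, not a figure from this article) would take in on the order of ten micrograms of material at the limit:

REL_UG_PER_M3 = 1.0       # NIOSH recommended exposure limit quoted above
AIR_PER_SHIFT_M3 = 10.0   # ~10 m^3 inhaled per 8-hour workday: a standard
                          # industrial-hygiene assumption, not from this article

dose_ug = REL_UG_PER_M3 * AIR_PER_SHIFT_M3
print(f"Inhaled mass at the REL: ~{dose_ug:.0f} ug per 8-hour shift")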
As of October 2016, single-wall carbon nanotubes have been registered through the European Union's Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) regulations, based on evaluation of the potentially hazardous properties of SWCNTs. Based on this registration, SWCNT commercialization is allowed in the EU up to 10 metric tons. Currently, the type of SWCNT registered through REACH is limited to the specific type of single-wall carbon nanotubes manufactured by OCSiAl, which submitted the application.

History

The true identity of the discoverers of carbon nanotubes is a subject of some controversy. A 2006 editorial written by Marc Monthioux and Vladimir Kuznetsov in the journal Carbon described the origin of the carbon nanotube. A large percentage of academic and popular literature attributes the discovery of hollow, nanometre-size tubes composed of graphitic carbon to Sumio Iijima of NEC in 1991. His paper initiated a flurry of excitement and could be credited with inspiring the many scientists now studying applications of carbon nanotubes. Though Iijima has been given much of the credit for discovering carbon nanotubes, it turns out that the timeline of carbon nanotubes goes back much further than 1991.

In 1952, L. V. Radushkevich and V. M. Lukyanovich published clear images of 50 nanometre diameter tubes made of carbon in the Journal of Physical Chemistry of Russia. This discovery was largely unnoticed, as the article was published in Russian, and Western scientists' access to the Soviet press was limited during the Cold War. Monthioux and Kuznetsov mentioned this in their Carbon editorial.

In 1976, Morinobu Endo of CNRS observed hollow tubes of rolled-up graphite sheets synthesised by a chemical vapour-growth technique. The first specimens observed would later come to be known as single-walled carbon nanotubes (SWNTs). Endo, in his early review of vapor-phase-grown carbon fibers (VPGCFs), also reminded us that he had observed a hollow tube, linearly extended with parallel carbon layer faces near the fiber core. This appears to be the observation of multi-walled carbon nanotubes at the center of the fiber. The mass-produced MWCNTs of today are strongly related to the VPGCF developed by Endo. In fact, they call it the "Endo-process", out of respect for his early work and patents.

In 1979, John Abrahamson presented evidence of carbon nanotubes at the 14th Biennial Conference of Carbon at Pennsylvania State University. The conference paper described carbon nanotubes as carbon fibers that were produced on carbon anodes during arc discharge. A characterization of these fibers was given, as well as hypotheses for their growth in a nitrogen atmosphere at low pressures.

In 1981, a group of Soviet scientists published the results of chemical and structural characterization of carbon nanoparticles produced by a thermocatalytical disproportionation of carbon monoxide. Using TEM images and XRD patterns, the authors suggested that their "carbon multi-layer tubular crystals" were formed by rolling graphene layers into cylinders. They speculated that via this rolling, many different arrangements of graphene hexagonal nets are possible. They suggested two such possible arrangements: a circular arrangement (armchair nanotube) and a spiral, helical arrangement (chiral tube).

In 1987, Howard G. Tennent of Hyperion Catalysis was issued a U.S. patent for the production of "cylindrical discrete carbon fibrils" with a "constant diameter between about 3.5 and about 70 nanometers..., length 10² times the diameter, and an outer region of multiple essentially continuous layers of ordered carbon atoms and a distinct inner core...."

Helping to create the initial excitement associated with carbon nanotubes were Iijima's 1991 discovery of multi-walled carbon nanotubes in the insoluble material of arc-burned graphite rods, and Mintmire, Dunlap, and White's independent prediction that if single-walled carbon nanotubes could be made, they would exhibit remarkable conducting properties. Nanotube research accelerated greatly following the independent discoveries by Iijima and Ichihashi at NEC and Bethune et al. at IBM of methods to specifically produce single-walled carbon nanotubes by adding transition-metal catalysts to the carbon in an arc discharge. Thess et al. refined this catalytic method by vaporizing the carbon/transition-metal combination in a high-temperature furnace, which greatly improved the yield and purity of the SWNTs and made them widely available for characterization and application experiments. The arc discharge technique, well known to produce the famed Buckminsterfullerene C60, thus played a role in the discoveries of both multi- and single-wall nanotubes, extending the run of serendipitous discoveries relating to fullerenes.
The discovery of nanotubes remains a contentious issue. Many believe that Iijima's report in 1991 is of particular importance because it brought carbon nanotubes into the awareness of the scientific community as a whole. In 2020, during archaeological excavation of Keezhadi in Tamil Nadu, India, ~2500-year-old pottery was discovered whose coatings appear to contain carbon nanotubes. According to the researchers, the robust mechanical properties of the nanotubes are partly why the coatings have lasted for so many years.

See also

Buckypaper
Carbide-derived carbon
Carbon nanocone
Carbon nanofibers
Carbon nanoscrolls
Carbon nanotube computer
Carbon nanotubes in photovoltaics
Colossal carbon tube
Diamond nanothread
Filamentous carbon
Molecular modelling
Nanoflower
Ninithi (nanotube modelling software)
Organic semiconductor

References

This article incorporates public domain text from the National Institute of Environmental Health Sciences (NIEHS) as quoted.

External links

Nanocarbon: From Graphene to Buckyballs. Interactive 3D models of cyclohexane, benzene, graphene, graphite, chiral & non-chiral nanotubes, and C60 Buckyballs - WeCanFigureThisOut.org.
The Nanotube site. Last updated 2013.04.12
EU Marie Curie Network CARBIO: Multifunctional carbon nanotubes for biomedical applications
C60 and Carbon Nanotubes: a short video explaining how nanotubes can be made from modified graphite sheets and the three different types of nanotubes that are formed
Learning module for Bandstructure of Carbon Nanotubes and Nanoribbons
Selection of free-download articles on carbon nanotubes
WOLFRAM Demonstrations Project: Electronic Band Structure of a Single-Walled Carbon Nanotube by the Zone-Folding Method
WOLFRAM Demonstrations Project: Electronic Structure of a Single-Walled Carbon Nanotube in Tight-Binding Wannier Representation
Electrospinning
Demographics of Chad
The people of Chad speak more than 100 different languages and divide themselves into many ethnic groups. However, language and ethnicity are not the same. Moreover, neither element can be tied to a particular physical type.

Although the possession of a common language shows that its speakers have lived together and have a common history, peoples also change languages. This is particularly so in Chad, where the openness of the terrain, marginal rainfall, frequent drought and famine, and low population densities have encouraged physical and linguistic mobility. Slave raids among non-Muslim peoples, internal slave trade, and exports of captives northward from the ninth to the twentieth centuries also have resulted in language changes.

Anthropologists view ethnicity as being more than genetics. Like language, ethnicity implies a shared heritage, partly economic, where people of the same ethnic group may share a livelihood, and partly social, taking the form of shared ways of doing things and organizing relations among individuals and groups. Ethnicity also involves a cultural component made up of shared values and a common worldview. Like language, ethnicity is not immutable. Shared ways of doing things change over time and alter a group's perception of its own identity.

Not only do the social aspects of ethnic identity change but the biological composition (or gene pool) also may change over time. Although most ethnic groups emphasize intermarriage, people are often proscribed from seeking partners among close relatives—a prohibition that promotes biological variation. In all groups, the departure of some individuals or groups and the integration of others also changes the biological component.

The Chadian government has avoided official recognition of ethnicity. With the exception of a few surveys conducted shortly after independence, little data were available on this important aspect of Chadian society. Nonetheless, ethnic identity was a significant component of life in Chad.

The peoples of Chad carry significant ancestry from Eastern, Central, Western, and Northern Africa. Chad's languages fall into ten major groups, each of which belongs to either the Nilo-Saharan, Afro-Asiatic, or Niger–Congo language family. These represent three of the four major language families in Africa; only the Khoisan languages of southern Africa are not represented. The presence of such different languages suggests that the Lake Chad Basin may have been an important point of dispersal in ancient times.

Population

According to the total population was in , compared to only 2 429 000 in 1950. The proportion of children below the age of 15 in 2010 was 45.4%, 51.7% was between 15 and 65 years of age, while 2.9% was 65 years or older. The country is projected to have a population of 34 million people in 2050 and 61 million in 2100.

Population by Sex and Age Group (Census 20.V.2009):

Vital statistics

Registration of vital events in Chad is not complete. The Population Department of the United Nations prepared the following estimates. Source: UN DESA, World Population Prospects, 2022

Fertility and births

Total Fertility Rate (TFR) (Wanted Fertility Rate) and Crude Birth Rate (CBR):

Fertility data as of 2014-2015 (DHS Program):

Religions

The separation of religion from social structure in Chad represents a false dichotomy, for they are perceived as two sides of the same coin. Three religious traditions coexist in Chad: classical African religions, Islam, and Christianity. None is monolithic.
The first tradition includes a variety of ancestor and/or place-oriented religions whose expression is highly specific. Islam, although characterized by an orthodox set of beliefs and observances, also is expressed in diverse ways. Christianity arrived in Chad much more recently with the arrival of Europeans. Its followers are divided into Roman Catholics and Protestants (including several denominations); as with Chadian Islam, Chadian Christianity retains aspects of pre-Christian religious belief.

The number of followers of each tradition in Chad is unknown. Estimates made in 1962 suggested that 35 percent of Chadians practiced classical African religions, 55 percent were Muslims, and 10 percent were Christians. In the 1970s and 1980s, this distribution undoubtedly changed. Observers report that Islam has spread among the Hajerai and among other non-Muslim populations of the Saharan and sahelian zones. However, the proportion of Muslims may have fallen because the birthrate among the followers of traditional religions and Christians in southern Chad is thought to be higher than that among Muslims. In addition, the upheavals since the mid-1970s have resulted in the departure of some missionaries; whether or not Chadian Christians have been numerous enough and organized enough to have attracted more converts since that time is unknown.

Other demographic statistics

Demographic statistics according to the World Population Review in 2022:

One birth every 45 seconds
One death every 3 minutes
One net migrant every 1440 minutes
Net gain of one person every 1 minute

The following demographic statistics are from the CIA World Factbook.

Population
17,963,211 (2022 est.)
15,833,116 (July 2018 est.)
12,075,985 (2017 est.)

Religions
Muslim 52.1%, Protestant 23.9%, Roman Catholic 20%, animist 0.3%, other Christian 0.2%, none 2.8%, unspecified 0.7% (2014-15 est.)

Age structure
0-14 years: 47.43% (male 4,050,505/female 3,954,413)
15-24 years: 19.77% (male 1,676,495/female 1,660,417)
25-54 years: 27.14% (male 2,208,181/female 2,371,490)
55-64 years: 3.24% (male 239,634/female 306,477)
65 years and over: 2.43% (2020 est.) (male 176,658/female 233,087)

0-14 years: 48.12% (male 3,856,001 /female 3,763,622)
15-24 years: 19.27% (male 1,532,687 /female 1,518,940)
25-54 years: 26.95% (male 2,044,795 /female 2,222,751)
55-64 years: 3.25% (male 228,930 /female 286,379)
65 years and over: 2.39% (male 164,257 /female 214,754) (2018 est.)

Median age
total: 16.1 years. Country comparison to the world: 223rd
male: 15.6 years
female: 16.5 years (2020 est.)

total: 15.8 years. Country comparison to the world: 226th
male: 15.3 years
female: 16.3 years (2018 est.)

Total: 17.8 years
Male: 16.8 years
Female: 18.8 years (2017 est.)

Population growth rate
3.09% (2022 est.) Country comparison to the world: 10th
3.23% (2018 est.) Country comparison to the world: 5th

Birth rate
40.45 births/1,000 population (2022 est.) Country comparison to the world: 6th
43 births/1,000 population (2018 est.) Country comparison to the world: 4th

Death rate
9.45 deaths/1,000 population (2022 est.) Country comparison to the world: 49th
10.5 deaths/1,000 population (2018 est.) Country comparison to the world: 26th

Net migration rate
-0.13 migrant(s)/1,000 population (2022 est.) Country comparison to the world: 105th
-3.2 migrant(s)/1,000 population (2017 est.) Country comparison to the world: 176th
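The 2022 figures above are internally consistent: the crude birth, death, and net-migration rates combine to the stated growth rate, as this small Python check (illustrative only) shows.

cbr = 40.45   # births per 1,000 population (2022 est., quoted above)
cdr = 9.45    # deaths per 1,000
nmr = -0.13   # net migrants per 1,000

growth_pct = (cbr - cdr + nmr) / 10.0  # per-1,000 rates combined, as a percent
print(f"Implied growth rate: {growth_pct:.2f}%")  # ~3.09%, matching the estimate above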
Total fertility rate
5.46 children born/woman (2022 est.) Country comparison to the world: 5th
5.9 children born/woman (2018 est.) Country comparison to the world: 4th

Mother's mean age at first birth
18.1 years (2014/15 est.)
note: median age at first birth among women 25-49

Dependency ratios
total dependency ratio: 100.2 (2015 est.)
youth dependency ratio: 95.2 (2015 est.)
elderly dependency ratio: 4.9 (2015 est.)
potential support ratio: 20.3 (2015 est.)

Contraceptive prevalence rate
8.1% (2019)
5.7% (2014/15)

Urbanization
urban population: 24.1% of total population (2022)
rate of urbanization: 4.1% annual rate of change (2020-25 est.)

urban population: 23.1% of total population (2018)
rate of urbanization: 3.88% annual rate of change (2015-20 est.)

Sex ratio
At birth: 1.04 male(s)/female
Under 15 years: 1.01 male(s)/female
15–64 years: 0.92 male(s)/female
65 years and over: 0.66 male(s)/female
Total population: 0.96 male(s)/female (2006 est.)

Life expectancy at birth
total population: 59.15 years. Country comparison to the world: 222nd
male: 57.32 years
female: 61.06 years (2022 est.)

total population: 57.5 years (2018 est.) Country comparison to the world: 214th
male: 55.7 years (2018 est.)
female: 59.3 years (2018 est.)

Total population: 50.6 years
Male: 49.4 years
Female: 51.9 years (2017 est.)

HIV/AIDS
Adult prevalence rate: 1.3% (2017 est.)
People living with HIV/AIDS: 110,000 (2017 est.)
Deaths: 3,100 (2017 est.)

Children under the age of 5 years underweight
28.8% (2015)

Major infectious diseases
degree of risk: very high (2020)
food or waterborne diseases: bacterial and protozoal diarrhea, hepatitis A and E, and typhoid fever
vectorborne diseases: malaria and dengue fever
water contact diseases: schistosomiasis
animal contact diseases: rabies
respiratory diseases: meningococcal meningitis
note: on 21 March 2022, the US Centers for Disease Control and Prevention (CDC) issued a Travel Alert for polio in Africa; Chad is currently considered a high risk to travelers for circulating vaccine-derived polioviruses (cVDPV); vaccine-derived poliovirus (VDPV) is a strain of the weakened poliovirus that was initially included in oral polio vaccine (OPV) and that has changed over time and behaves more like the wild or naturally occurring virus; this means it can be spread more easily to people who are unvaccinated against polio and who come in contact with the stool or respiratory secretions, such as from a sneeze, of an "infected" person who received oral polio vaccine; the CDC recommends that before any international travel, anyone unvaccinated, incompletely vaccinated, or with an unknown polio vaccination status should complete the routine polio vaccine series; before travel to any high-risk destination, CDC recommends that adults who previously completed the full, routine polio vaccine series receive a single, lifetime booster dose of polio vaccine

Child marriage
women married by age 15: 24.2% (2019)
women married by age 18: 60.6% (2019)
men married by age 18: 8.1% (2019 est.)

Nationality
Noun: Chadian(s)
Adjective: Chadian

Ethnic groups
The peoples of Chad carry significant ancestry from Eastern, Central, Western, and Northern Africa. There are about 200 distinct groups.
In the north and center: Arabs, Tubu (Daza, Teda), Zaghawa, Kanembu, Wadai, Baguirmi, Hadjarai, Fulani, Kotoko, Hausa, Bulala, and Maba, most of whom are Muslim
In the south: Sara (Ngambaye, Mbaye, Goulaye), Mundang, Mussei, Massa, most of whom are Christian or animist
About 5,000 French citizens live in Chad.
Religions
Islam 51.8%
Roman Catholic 20.3%
Protestant 23.5%
Animist 0.6%
Other Christians 0.3%
Unknown 0.6%
None 2.9%

Languages
Arabic (official), French (official), Sara (in the south); more than 120 different languages and dialects

Literacy
definition: age 15 and over can read and write French or Arabic
total population: 22.3% (2016 est.)
male: 31.3% (2016 est.)
female: 14% (2016 est.)

School life expectancy (primary to tertiary education)
total: 7 years (2015)
male: 9 years
female: 6 years
total: 8 years (2014)
male: 9 years
female: 6 years
2,357
5,333
https://en.wikipedia.org/wiki/Economy%20of%20Chad
Economy of Chad
The economy of Chad suffers from the landlocked country's geographic remoteness, drought, lack of infrastructure, and political turmoil. About 85% of the population depends on agriculture, including the herding of livestock. Of Africa's Francophone countries, Chad benefited least from the 50% devaluation of their currencies in January 1994. Financial aid from the World Bank, the African Development Bank, and other sources is directed largely at the improvement of agriculture, especially livestock production. Because of a lack of financing, the development of the oil fields near Doba, originally due to finish in 2000, was delayed until 2003. The fields were finally developed and are now operated by ExxonMobil. In terms of gross domestic product, Chad ranks 143rd globally, with a GDP of $11.051 billion as of 2018.

Agriculture
In 2018 Chad produced:
969 thousand tons of sorghum
893 thousand tons of peanuts (groundnuts)
756 thousand tons of millet
484 thousand tons of yam (8th largest producer in the world)
475 thousand tons of sugarcane
437 thousand tons of maize
284 thousand tons of cassava
259 thousand tons of rice
255 thousand tons of sweet potato
172 thousand tons of sesame seed
151 thousand tons of beans
120 thousand tons of cotton
as well as smaller quantities of other agricultural products.

Macro-economic trend
The following table shows the main economic indicators in 1980–2017.

Other statistics
GDP: purchasing power parity – $28.62 billion (2017 est.)
GDP – real growth rate: -3.1% (2017 est.)
GDP – per capita: $2,300 (2017 est.)
Gross national saving: 15.5% of GDP (2017 est.)
GDP – composition by sector:
agriculture: 52.3% (2017 est.)
industry: 14.7% (2017 est.)
services: 33.1% (2017 est.)
Population below poverty line: 46.7% (2011 est.)
Distribution of family income – Gini index: 43.3 (2011 est.)
Inflation rate (consumer prices): -0.9% (2017 est.)
Labor force: 5.654 million (2017 est.)
Labor force – by occupation: agriculture 80%, industry and services 20% (2006 est.)
Budget:
revenues: $1.337 billion (2017 est.)
expenditures: $1.481 billion (2017 est.)
Budget surplus (+) or deficit (-): -1.5% of GDP (2017 est.)
Public debt: 52.5% of GDP (2017 est.)
Industries: oil, cotton textiles, brewing, natron (sodium carbonate), soap, cigarettes, construction materials
Industrial production growth rate: -4% (2017 est.)
Electrification:
total population: 4% (2013)
urban areas: 14% (2013)
rural areas: 1% (2013)
Electricity – production: 224.3 million kWh (2016 est.)
Electricity – production by source:
fossil fuel: 98%
hydro: 0%
nuclear: 0%
other renewable: 3% (2017)
Electricity – consumption: 208.6 million kWh (2016 est.)
Electricity – exports: 0 kWh (2016 est.)
Electricity – imports: 0 kWh (2016 est.)
Agriculture – products: cotton, sorghum, millet, peanuts, sesame, corn, rice, potatoes, onions, cassava (manioc, tapioca), cattle, sheep, goats, camels
Exports: $2.464 billion (2017 est.)
Exports – commodities: oil, livestock, cotton, sesame, gum arabic, shea butter
Exports – partners: US 38.7%, China 16.6%, Netherlands 15.7%, UAE 12.2%, India 6.3% (2017)
Imports: $2.16 billion (2017 est.)
Imports – commodities: machinery and transportation equipment, industrial goods, foodstuffs, textiles
Imports – partners: China 19.9%, Cameroon 17.2%, France 17%, US 5.4%, India 4.9%, Senegal 4.5% (2017)
Debt – external: $1.724 billion (31 December 2017 est.)
Reserves of foreign exchange and gold: $22.9 million (31 December 2017 est.)
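The per-capita figure follows from the PPP aggregate and the population estimate given in the demographic data earlier; a rough illustrative check in Python (the Factbook rounds per-capita PPP values and uses its own population base, so the result is approximate):

    # Rough check of GDP per capita (PPP) from the 2017 estimates cited above.
    gdp_ppp = 28.62e9         # GDP at purchasing power parity, in US dollars
    population = 12_075_985   # 2017 population estimate

    per_capita = gdp_ppp / population
    print(round(per_capita))  # ~2370; reported rounded as $2,300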
See also
Chad
Economy of Africa
Petroleum industry in Chad
United Nations Economic Commission for Africa

External links
Chad latest trade data on ITC Trade Map
World Bank – Chad-Cameroon Pipeline Project
2,359
5,360
https://en.wikipedia.org/wiki/Card%20game
Card game
A card game is any game using playing cards as the primary device with which the game is played, be they traditional or game-specific. Countless card games exist, including families of related games (such as poker). A small number of card games played with traditional decks have formally standardized rules with international tournaments being held, but most are folk games whose rules may vary by region, culture, location or from circle to circle.

Traditional card games are played with a deck or pack of playing cards which are identical in size and shape. Each card has two sides, the face and the back. Normally the backs of the cards are indistinguishable. The faces of the cards may all be unique, or there can be duplicates. The composition of a deck is known to each player. In some cases several decks are shuffled together to form a single pack or shoe. Modern card games usually have bespoke decks, often with a vast number of cards, and can include number or action cards. This type of game is generally regarded as part of the board game hobby.

Games using playing cards exploit the fact that cards are individually identifiable from one side only, so that each player knows only the cards they hold and not those held by anyone else. For this reason card games are often characterized as games of chance or "imperfect information"—as distinct from games of strategy or perfect information, where the current position is fully visible to all players throughout the game.

Many games that are not generally placed in the family of card games do in fact use cards for some aspect of their gameplay. Some games that are placed in the card game genre involve a board. The distinction is that the gameplay of a card game chiefly depends on the use of the cards by players (the board is a guide for scorekeeping or for card placement), while board games (the principal non-card game genre to use cards) generally focus on the players' positions on the board, and use the cards for some secondary purpose.

Types

Trick-taking games
The object of a trick-taking game is based on the play of multiple rounds, or tricks, in each of which each player plays a single card from their hand, and based on the values of played cards one player wins or "takes" the trick. The specific object varies with each game and can include taking as many tricks as possible, taking as many scoring cards within the tricks won as possible, taking as few tricks (or as few penalty cards) as possible, taking a particular trick in the hand, or taking an exact number of tricks. Bridge, Whist and Spades, and the various Tarot card games, are popular examples.

Matching games
The object of a matching (or sometimes "melding") game is to acquire particular groups of matching cards before an opponent can do so. In Rummy, this is done through drawing and discarding, and the groups are called melds. Mahjong is a very similar game played with tiles instead of cards. Non-Rummy examples of match-type games generally fall into the "fishing" genre and include the children's games Go Fish and Old Maid.

Shedding games
In a shedding game, players start with a hand of cards, and the object of the game is to be the first player to discard all cards from one's hand. Common shedding games include Crazy Eights (commercialized by Mattel as Uno) and Daihinmin.
Some matching-type games are also shedding-type games; some variants of Rummy such as Paskahousu, Phase 10, Rummikub, the bluffing game I Doubt It, and the children's games Musta Maija and Old Maid, fall into both categories.

Catch and collect games
The object of an accumulating game is to acquire all cards in the deck. Examples include most War type games, and games involving slapping a discard pile such as Slapjack. Egyptian Ratscrew has both of these features.

Fishing games
In fishing games, cards from the hand are played against cards in a layout on the table, capturing table cards if they match. Fishing games are popular in many nations, including China, where there are many diverse fishing games. Scopa is considered one of the national card games of Italy. Cassino is the only fishing game to be widely played in English-speaking countries. Zwicker has been described as a "simpler and jollier version of Cassino", played in Germany. Tablanet (tablić) is a fishing-style game popular in the Balkans.

Comparing games
Comparing card games are those where hand values are compared to determine the winner, also known as "vying" or "showdown" games. Poker, blackjack, mus, and baccarat are examples of comparing card games. Nearly all of these games are designed as gambling games.

Patience and solitaire games
Solitaire games are designed to be played by one player. Most games begin with a specific layout of cards, called a tableau, and the object is then either to construct a more elaborate final layout, or to clear the tableau and/or the draw pile or stock by moving all cards to one or more "discard" or "foundation" piles.

Drinking card games
Drinking card games are drinking games using cards, in which the object in playing the game is either to drink or to force others to drink. Many games are ordinary card games with the establishment of "drinking rules"; President, for instance, is virtually identical to Daihinmin but with additional rules governing drinking. Poker can also be played using a number of drinks as the wager. Another game often played as a drinking game is Toepen, quite popular in the Netherlands. Some card games are designed specifically to be played as drinking games.

Compendium games
Compendium games consist of a sequence of different contracts played in succession. A common pattern is for a number of reverse deals to be played, in which the aim is to avoid certain cards, followed by a final contract which is a domino-type game. Examples include: Barbu, Herzeln, Lorum and Rosbiratschka. In other games, such as Quodlibet and Rumpel, there is a range of widely varying contracts.

Collectible card games (CCGs)
Collectible card games (CCG) are proprietary playing card games. CCGs are games of strategy between two or more players. Each player has their own deck constructed from a very large pool of unique cards in the commercial market. The cards have different effects, costs, and art. New card sets are released periodically and sold as starter decks or booster packs. Obtaining the different cards makes the game a collectible card game, and cards are sold or traded on the secondary market. Magic: The Gathering, Pokémon, and Yu-Gi-Oh! are well-known collectible card games.

Living card games (LCGs)
Living card games (LCGs) are similar to collectible card games (CCGs), with their most distinguishing feature being a fixed distribution method, which breaks away from the traditional collectible card game format.
While new cards for CCGs are usually sold in the form of starter decks or booster packs (the latter being often randomized), LCGs thrive on a model that requires players to acquire one core set in order to play the game, which players can further customize by acquiring extra sets or expansions featuring new content in the form of cards or scenarios. No randomization is involved in the process, so players who buy the same sets or expansions get exactly the same content. The term was popularized by Fantasy Flight Games (FFG) and mainly applies to its products; however, some other tabletop gaming companies use a very similar model.

Casino or gambling card games
These games revolve around wagers of money. Though virtually any game in which there are winning and losing outcomes can be wagered on, these games are specifically designed to make the betting process a strategic part of the game. Some of these games involve players betting against each other, such as poker, while in others, like blackjack, players wager against the house.

Poker games
Poker is a family of gambling games in which players bet into a communal pool, called the pot, whose value changes as the game progresses, wagering that the value of the hand they hold will beat all others according to the ranking system. Variants largely differ on how cards are dealt and the methods by which players can improve a hand. For many reasons, including its age and its popularity among Western militaries, it is one of the most universally known card games in existence.

Other card games
Many other card games have been designed and published on a commercial or amateur basis. In some cases, the game uses the standard 52-card deck, but the object is unique. In Eleusis, for example, players play single cards, and are told whether the play was legal or illegal, in an attempt to discover the underlying rules made up by the dealer. Most of these games, however, typically use a specially made deck of cards designed specifically for the game (or variations of it). The decks are thus usually proprietary, but may be created by the game's players. Uno, Phase 10, Set, and 1000 Blank White Cards are popular dedicated-deck card games; 1000 Blank White Cards is unique in that the cards for the game are designed by the players of the game while playing it; there is no commercially available deck advertised as such.

Simulation card games
A deck of either customised dedicated cards or a standard deck of playing cards with assigned meanings is used to simulate the actions of another activity, for example card football.

Fictional card games
Many games, including card games, are fabricated by science fiction authors and screenwriters to distance a culture depicted in the story from present-day Western culture. They are commonly used as filler to depict background activities in an atmosphere like a bar or rec room, but sometimes the drama revolves around the play of the game. Some of these games become real card games as the holder of the intellectual property develops and markets a suitable deck and ruleset for the game, while others lack sufficient descriptions of rules, or depend on cards or other hardware that are infeasible or physically impossible.

Typical structure of card games

Number and association of players
Any specific card game imposes restrictions on the number of players. The most significant dividing lines run between one-player games and two-player games, and between two-player games and multi-player games.
Card games for one player are known as solitaire or patience card games. (See list of solitaire card games.) Generally speaking, they are in many ways special and atypical, although some of them have given rise to two- or multi-player games such as Spite and Malice.

In card games for two players, usually not all cards are distributed to the players, as they would otherwise have perfect information about the game state. Two-player games have always been immensely popular and include some of the most significant card games such as piquet, bezique, sixty-six, klaberjass, gin rummy and cribbage. Many multi-player games started as two-player games that were adapted to a greater number of players. For such adaptations a number of non-obvious choices must be made, beginning with the choice of a game orientation.

One way of extending a two-player game to more players is by building two teams of equal size. A common case is four players in two fixed partnerships, sitting crosswise as in whist and contract bridge. Partners sit opposite each other and cannot see each other's hands. If communication between the partners is allowed at all, then it is usually restricted to a specific list of permitted signs and signals. 17th-century French partnership games such as triomphe were special in that partners sat next to each other and were allowed to communicate freely so long as they did not exchange cards or play out of order.

Another way of extending a two-player game to more players is as a cut-throat or individual game, in which all players play for themselves, and win or lose alone. Most such card games are round games, i.e. they can be played by any number of players starting from two or three, so long as there are enough cards for all.

For some of the most interesting games such as ombre, tarot and skat, the associations between players change from hand to hand. Ultimately players all play on their own, but for each hand, some game mechanism divides the players into two teams. Most typically these are solo games, i.e. games in which one player becomes the soloist and has to achieve some objective against the others, who form a team and win or lose all their points jointly. But in games for more than three players, there may also be a mechanism that selects two players who then have to play against the others.

Direction of play
The players of a card game normally form a circle around a table or other space that can hold cards. The game orientation or direction of play, which is only relevant for three or more players, can be either clockwise or counterclockwise. It is the direction in which various roles in the game proceed. (In real-time card games, there may be no need for a direction of play.) Most regions have a traditional direction of play, such as:

Counterclockwise in most of Asia and in Latin America.
Clockwise in North America and Australia.

Europe is roughly divided into a clockwise area in the north and a counterclockwise area in the south. The boundary runs between England, Ireland, the Netherlands, Germany, Austria (mostly), Slovakia, Ukraine and Russia (clockwise) and France, Switzerland, Spain, Italy, Slovenia, the Balkans, Hungary, Romania, Bulgaria, Greece and Turkey (counterclockwise).

Games that originate in a region with a strong preference are often initially played in the original direction, even in regions that prefer the opposite direction. For games that have official rules and are played in tournaments, the direction of play is often prescribed in those rules.
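In software terms, the direction of play is simply the sign of the step used to select the next seat around the table. A minimal sketch in Python (illustrative only, not taken from any card game library):

    def next_player(current, num_players, clockwise=True):
        """Return the index of the next player around the table.

        Seats are numbered 0..num_players-1 in clockwise order, so
        counterclockwise play steps through them in reverse.
        """
        step = 1 if clockwise else -1
        return (current + step) % num_players

    # Example: four players seated clockwise; after seat 3 comes seat 0.
    assert next_player(3, 4) == 0
    assert next_player(0, 4, clockwise=False) == 3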
Determining who deals
Most games have some form of asymmetry between players. The roles of players are normally expressed in terms of the dealer, i.e. the player whose task it is to shuffle the cards and distribute them to the players. Being the dealer can be a (minor or major) advantage or disadvantage, depending on the game. Therefore, after each played hand, the deal normally passes to the next player according to the game orientation. As it can still be an advantage or disadvantage to be the first dealer, there are some standard methods for determining who is the first dealer. A common method is by cutting, which works as follows. One player shuffles the deck and places it on the table. Each player lifts a packet of cards from the top, reveals its bottom card, and returns it to the deck. The player who reveals the highest (or lowest) card becomes dealer. In the case of a tie, the process is repeated by the tied players. For some games such as whist this process of cutting is part of the official rules, and the hierarchy of cards for the purpose of cutting (which need not be the same as that used otherwise in the game) is also specified. But in general, any method can be used, such as tossing a coin in case of a two-player game, drawing cards until one player draws an ace, or rolling dice.

Hands, rounds and games
A hand is a unit of the game that begins with the dealer shuffling and dealing the cards as described below, and ends with the players scoring and the next dealer being determined. The set of cards that each player receives and holds in his or her hands is also known as that player's hand. The hand is over when the players have finished playing their hands. Most often this occurs when one player (or all) has no cards left. The player who sits after the dealer in the direction of play is known as eldest hand (or in two-player games as elder hand) or forehand. A game round consists of as many hands as there are players. After each hand, the deal is passed on in the direction of play, i.e. the previous eldest hand becomes the new dealer. Normally players score points after each hand. A game may consist of a fixed number of rounds. Alternatively it can be played for a fixed number of points. In this case it is over with the hand in which a player reaches the target score.

Shuffling
Shuffling is the process of bringing the cards of a pack into a random order. There are a large number of techniques with various advantages and disadvantages. Riffle shuffling is a method in which the deck is divided into two roughly equal-sized halves that are bent and then released, so that the cards interlace. Repeating this process several times randomizes the deck well, but the method is harder to learn than some others and may damage the cards. The overhand shuffle and the Hindu shuffle are two techniques that work by taking batches of cards from the top of the deck and reassembling them in the opposite order. They are easier to learn but must be repeated more often. A method suitable for small children consists in spreading the cards on a large surface and moving them around before picking up the deck again. This is also the most common method for shuffling tiles such as dominoes. For casino games that are played for large sums it is vital that the cards be properly randomized, but for many games this is less critical, and in fact player experience can suffer when the cards are shuffled too well.
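The cutting procedure and the riffle shuffle described above can be made concrete in code. The following Python sketch is illustrative only; the idealized riffle follows the style of the Gilbert-Shannon-Reeds model, and none of it is an official procedure from any rulebook:

    import random

    # A standard 52-card pack; the rank index doubles as the cutting
    # hierarchy (2 low ... ace high) for this illustration.
    SUITS = ["clubs", "diamonds", "hearts", "spades"]
    RANKS = ["2", "3", "4", "5", "6", "7", "8", "9", "10",
             "jack", "queen", "king", "ace"]
    DECK = [(rank, suit) for suit in SUITS for rank in RANKS]

    def riffle(deck):
        """One idealized riffle: split near the middle, then interleave by
        repeatedly dropping a card from a half chosen in proportion to its
        remaining size."""
        cut = len(deck) // 2 + random.randint(-5, 5)
        left, right = deck[:cut], deck[cut:]
        out = []
        while left and right:
            source = left if random.random() < len(left) / (len(left) + len(right)) else right
            out.append(source.pop(0))
        return out + left + right

    def cut_for_deal(deck, num_players):
        """Each player lifts a packet and reveals its bottom card; the
        highest rank deals. (Real play re-cuts to break ties.)"""
        reveals = []
        for player in range(num_players):
            lifted = random.randint(1, len(deck) - 1)   # size of the lifted packet
            bottom_card = deck[lifted - 1]              # bottom of the lifted packet
            reveals.append((RANKS.index(bottom_card[0]), player))
        return max(reveals)[1]

    shuffled = list(DECK)
    for _ in range(7):    # several riffles are needed to randomize well
        shuffled = riffle(shuffled)
    print("First dealer: player", cut_for_deal(shuffled, 4))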
The official skat rules stipulate that the cards are shuffled well, but according to a decision of the German skat court, a one-handed player should ask another player to do the shuffling, rather than use a shuffling machine, as it would shuffle the cards too well. French belote rules go so far as to prescribe that the deck never be shuffled between hands.

Deal
The dealer takes all of the cards in the pack, arranges them so that they are in a uniform stack, and shuffles them. In strict play, the dealer then offers the deck to the previous player (in the sense of the game direction) for cutting. If the deal is clockwise, this is the player to the dealer's right; if counterclockwise, it is the player to the dealer's left. The invitation to cut is made by placing the pack, face downward, on the table near the player who is to cut, who then lifts the upper portion of the pack clear of the lower portion and places it alongside. (Normally the two portions have about equal size. Strict rules often indicate that each portion must contain a certain minimum number of cards, such as three or five.) The formerly lower portion is then replaced on top of the formerly upper portion. Instead of cutting, one may also knock on the deck to indicate that one trusts the dealer to have shuffled fairly.

The actual deal (distribution of cards) is done in the direction of play, beginning with eldest hand. The dealer holds the pack, face down, in one hand, and removes cards from the top of it with his or her other hand to distribute to the players, placing them face down on the table in front of the players to whom they are dealt. The cards may be dealt one at a time, or in batches of more than one card; and either the entire pack or a determined number of cards are dealt out. The undealt cards, if any, are left face down in the middle of the table, forming the stock (also called the talon, widow, skat or kitty depending on the game and region).

Throughout the shuffle, cut, and deal, the dealer should prevent the players from seeing the faces of any of the cards. The players should not try to see any of the faces. Should a player accidentally see a card, other than one's own, proper etiquette would be to admit this. It is also dishonest to try to see cards as they are dealt, or to take advantage of having seen a card. Should a card accidentally become exposed (visible to all), any player can demand a redeal (all the cards are gathered up, and the shuffle, cut, and deal are repeated) or that the card be replaced randomly into the deck ("burning" it) and a replacement dealt from the top to the player who was to receive the revealed card.

When the deal is complete, all players pick up their cards, or "hand", and hold them in such a way that the faces can be seen by the holder of the cards but not the other players, or vice versa depending on the game. It is helpful to fan one's cards out so that if they have corner indices all their values can be seen at once. In most games, it is also useful to sort one's hand, rearranging the cards in a way appropriate to the game. For example, in a trick-taking game it may be easier to have all one's cards of the same suit together, whereas in a rummy game one might sort them by rank or by potential combinations.

Rules
A new card game starts in a small way, either as someone's invention, or as a modification of an existing game. Those playing it may agree to change the rules as they wish. The rules that they agree on become the "house rules" under which they play the game.
A set of house rules may be accepted as valid by a group of players wherever they play, as it may also be accepted as governing all play within a particular house, café, or club. When a game becomes sufficiently popular, so that people often play it with strangers, there is a need for a generally accepted set of rules. This need is often met when a particular set of house rules becomes generally recognized. For example, when Whist became popular in 18th-century England, players in the Portland Club agreed on a set of house rules for use on its premises. Players in some other clubs then agreed to follow the "Portland Club" rules, rather than go to the trouble of codifying and printing their own sets of rules. The Portland Club rules eventually became generally accepted throughout England and Western cultures.

There is nothing static or "official" about this process. For the majority of games, there is no one set of universal rules by which the game is played, and the most common ruleset is no more or less than that. Many widely played card games, such as Canasta and Pinochle, have no official regulating body. The most common ruleset is often determined by the most popular distribution of rulebooks for card games. Perhaps the original compilation of popular playing card games was collected by Edmund Hoyle, a self-made authority on many popular parlor games. The U.S. Playing Card Company now owns the eponymous Hoyle brand, and publishes a series of rulebooks for various families of card games that have largely standardized the games' rules in countries and languages where the rulebooks are widely distributed. However, players are free to, and often do, invent "house rules" to supplement or even largely replace the "standard" rules.

If there is a sense in which a card game can have an "official" set of rules, it is when that card game has an "official" governing body. For example, the rules of tournament bridge are governed by the World Bridge Federation, and by local bodies in various countries such as the American Contract Bridge League in the U.S., and the English Bridge Union in England. The rules of skat are governed by The International Skat Players Association and, in Germany, by the Deutscher Skatverband which publishes the Skatordnung. The rules of French tarot are governed by the Fédération Française de Tarot. The rules of Poker's variants are largely traditional, but enforced by the World Series of Poker and the World Poker Tour organizations which sponsor tournament play. Even in these cases, the rules must only be followed exactly at games sanctioned by these governing bodies; players in less formal settings are free to implement agreed-upon supplemental or substitute rules at will.

Rule infractions
An infraction is any action which is against the rules of the game, such as playing a card when it is not one's turn to play or the accidental exposure of a card, informally known as "bleeding." In many official sets of rules for card games, the rules specifying the penalties for various infractions occupy more pages than the rules specifying how to play correctly. This is tedious but necessary for games that are played seriously. Players who intend to play a card game at a high level generally ensure before beginning that all agree on the penalties to be used. When playing privately, this will normally be a question of agreeing house rules. In a tournament, there will probably be a tournament director who will enforce the rules when required and arbitrate in cases of doubt.
If a player breaks the rules of a game deliberately, this is cheating. The rest of this section is therefore about accidental infractions, caused by ignorance, clumsiness, inattention, etc. As the same game is played repeatedly among a group of players, precedents build up about how a particular infraction of the rules should be handled. For example, "Sheila just led a card when it wasn't her turn. Last week when Jo did that, we agreed ... etc." Sets of such precedents tend to become established among groups of players, and to be regarded as part of the house rules. Sets of house rules may become formalized, as described in the previous section. Therefore, for some games, there is a "proper" way of handling infractions of the rules. But for many games, without governing bodies, there is no standard way of handling infractions.

In many circumstances, there is no need for special rules dealing with what happens after an infraction. As a general principle, the person who broke a rule should not benefit from it, and the other players should not lose by it. An exception to this may be made in games with fixed partnerships, in which it may be felt that the partner(s) of the person who broke a rule should also not benefit. The penalty for an accidental infraction should be as mild as reasonable, consistent with there being a possible benefit to the person responsible.

Playing cards
The oldest surviving reference to the card game in world history comes from 9th-century China, when the Collection of Miscellanea at Duyang, written by Tang-dynasty writer Su E, described Princess Tongchang (daughter of Emperor Yizong of Tang) playing the "leaf game" with members of the Wei clan (the family of the princess's husband) in 868. The Song dynasty statesman and historian Ouyang Xiu noted that paper playing cards arose in connection with an earlier development in the book format from scrolls to pages.

Playing cards first appeared in Europe in the last quarter of the 14th century. The earliest European references speak of a Saracen or Moorish game called naib, and in fact an almost complete Mamluk Egyptian deck of 52 cards in a distinct oriental design has survived from around the same time, with the four suits swords, polo sticks, cups and coins and the ranks king, governor, second governor, and ten to one.

The 1430s in Italy saw the invention of the tarot deck, a full Latin-suited deck augmented by suitless cards with painted motifs that played a special role as trumps. Tarot card games are still played with (subsets of) these decks in parts of Central Europe. A full tarot deck contains 14 cards in each suit: low cards labeled 1–10 and the court cards valet (jack), chevalier (cavalier/knight), dame (queen), and roi (king), plus the fool or excuse card, and 21 trump cards. In the 18th century the card images of the traditional Italian tarot decks became popular in cartomancy and evolved into "esoteric" decks used primarily for the purpose; today most tarot decks sold in North America are the occult type, and are closely associated with fortune telling. In Europe, "playing tarot" decks remain popular for games, and have evolved since the 18th century to use regional suits (spades, hearts, diamonds and clubs in France; leaves, hearts, bells and acorns in Germany) as well as other familiar aspects of the English-pattern pack such as corner card indices and "stamped" card symbols for non-court cards.
Decks differ regionally based on the number of cards needed to play the games; the French tarot consists of the "full" 78 cards, while Germanic, Spanish and Italian tarot variants remove certain values (usually low suited cards) from the deck, creating a deck with as few as 32 cards.

The French suits were introduced around 1480 and, in France, mostly replaced the earlier Latin suits of swords, clubs, cups and coins (which are still common in Spanish- and Portuguese-speaking countries as well as in some northern regions of Italy). The suit symbols, being very simple and single-color, could be stamped onto the playing cards to create a deck, thus only requiring special full-color card art for the court cards. This drastically simplifies the production of a deck of cards versus the traditional Italian deck, which used unique full-color art for each card in the deck. The French suits became popular in English playing cards in the 16th century (despite historic animosity between France and England), and from there were introduced to British colonies including North America. The rise of Western culture has led to the near-universal popularity and availability of French-suited playing cards even in areas with their own regional card art.

In Japan, a distinct 48-card hanafuda deck is popular. It is derived from 16th-century Portuguese decks, after undergoing a long evolution driven by laws enacted by the Tokugawa shogunate attempting to ban the use of playing cards.

The best-known deck internationally is the English pattern of the 52-card French deck, also called the International or Anglo-American pattern, used for such games as poker and contract bridge. It contains one card for each unique combination of thirteen ranks and the four French suits spades, hearts, diamonds, and clubs. The ranks (from highest to lowest in bridge and poker) are ace, king, queen, jack (or knave), and the numbers from ten down to two (or deuce). The trump cards and knight cards from the French playing tarot are not included.

Originally the term knave was more common than "jack"; the card had been called a jack as part of the terminology of All-Fours since the 17th century, but the word was considered vulgar. (Note the exclamation by Estella in Charles Dickens's novel Great Expectations: "He calls the knaves, Jacks, this boy!") However, because the card abbreviation for knave ("Kn") was so close to that of the king, it was very easy to confuse them, especially after suits and rankings were moved to the corners of the card in order to enable people to fan them in one hand and still see all the values. (The earliest known deck to place suits and rankings in the corner of the card is from 1693, but these cards did not become common until after 1864 when Hart reintroduced them along with the knave-to-jack change.) However, books of card games published in the third quarter of the 19th century evidently still referred to the "knave", and the term with this definition is still recognized in the United Kingdom.

In the 17th century, a French five-trick gambling game called Bête became popular and spread to Germany, where it was called La Bete, and to England, where it was named Beast. It was a derivative of Triomphe and was the first card game in history to introduce the concept of bidding.

Chinese handmade mother-of-pearl gaming counters were used in scoring and bidding of card games in the West during the approximate period of 1700–1840.
The gaming counters would bear an engraving such as a coat of arms or a monogram to identify a family or individual. Many of the gaming counters also depict Chinese scenes, flowers or animals. Queen Charlotte, wife of George III, is one prominent British individual who is known to have played with the Chinese gaming counters. Card games such as Ombre, Quadrille and Pope Joan were popular at the time and required counters for scoring. The production of counters declined after Whist, with its different scoring method, became the most popular card game in the West.

Based on the association of card games and gambling, Pope Benedict XIV banned card games on October 17, 1750.

See also
Game of chance
Game of skill
R. F. Foster (games)
Henry Jones (writer), who wrote under the pseudonym "Cavendish"
John Scarne
Dice game
List of card games by number of cards

External links
International Playing Card Society
Rules for historic card games
Collection of rules to many card games
2,369
5,363
https://en.wikipedia.org/wiki/Video%20game
Video game
A video game is an electronic game that involves interaction with a user interface or input device such as a joystick, controller, keyboard or motion sensing device to generate visual feedback from a display device, most commonly shown in a video format on a television set, computer monitor, flat-panel display/touchscreen on handheld devices or virtual reality headset, hence the name. However, not all video games are dependent on graphical outputs; for example, text adventure games and computer chess can be played through teletype printers. Most modern video games are audiovisual, with audio complement delivered through speakers or headphones, and sometimes also with other types of sensory feedback (e.g. haptic technology that provides tactile sensations), and some video games also allow microphone and/or webcam inputs for in-game chatting and livestreaming.

Video games are typically categorized according to their hardware platform, which traditionally includes arcade video games, console games and computer (PC) games; the latter also encompasses LAN games, online games and browser games. More recently, the video game industry has expanded onto mobile gaming through mobile devices (such as smartphones and tablet computers), virtual and augmented reality systems, and remote cloud gaming. Video games are also classified into a wide range of genres based on their style of gameplay and target audience.

The first video game prototypes in the 1950s and 1960s were simple extensions of electronic games using video-like output from large, room-sized mainframe computers. The first consumer video game was the arcade video game Computer Space in 1971. In 1972 came the iconic hit game Pong and the first home console, the Magnavox Odyssey. The industry grew quickly during the golden age of arcade video games from the late 1970s to early 1980s, but suffered from the crash of the North American video game market in 1983 due to loss of publishing control and saturation of the market. Following the crash, the industry matured, dominated by Japanese companies such as Nintendo, Sega and Sony, and established practices and methods around the development and distribution of video games to prevent a similar crash in the future, many of which continue to be followed. In the 2000s, the core industry centered on "AAA" games, leaving little room for riskier experimental games. Coupled with the availability of the Internet and digital distribution, this gave room for independent video game development (or indie games) to gain prominence into the 2010s.

Since then, the commercial importance of the video game industry has been increasing. The emerging Asian markets and the proliferation of smartphone games in particular are altering player demographics towards casual gaming and increasing monetization by incorporating games as a service. , the global video game market has estimated annual revenues of across hardware, software and services, which is three times the size of the global music industry and four times that of the film industry in 2019, making it a formidable heavyweight across the modern entertainment industry.
The video game market is also a major influence behind the electronics industry, where personal computer component, console and peripheral sales, as well as consumer demands for better game performance, have been powerful driving factors for hardware design and innovation.

Origins
Early video games use interactive electronic devices with various display formats. The earliest example is from 1947—a "cathode-ray tube amusement device" was filed for a patent on 25 January 1947, by Thomas T. Goldsmith Jr. and Estle Ray Mann, and issued on 14 December 1948, as U.S. Patent 2455992. Inspired by radar display technology, it consists of an analog device allowing a user to control the parabolic arc of a dot on the screen to simulate a missile being fired at targets, which are paper drawings fixed to the screen. Other early examples include Christopher Strachey's draughts game; the Nimrod computer at the 1951 Festival of Britain; OXO, a tic-tac-toe computer game by Alexander S. Douglas for the EDSAC in 1952; Tennis for Two, an electronic interactive game engineered by William Higinbotham in 1958; and Spacewar!, written by Massachusetts Institute of Technology students Martin Graetz, Steve Russell, and Wayne Wiitanen on a DEC PDP-1 computer in 1961. Each game has different means of display: NIMROD has a panel of lights to play the game of Nim, OXO has a graphical display to play tic-tac-toe, Tennis for Two has an oscilloscope to display a side view of a tennis court, and Spacewar! has the DEC PDP-1's vector display to have two spaceships battle each other. These preliminary inventions paved the way for the origins of video games today.

Ralph H. Baer, while working at Sanders Associates in 1966, devised a control system to play a rudimentary game of table tennis on a television screen. With the company's approval, Baer built the prototype "Brown Box". Sanders patented Baer's inventions and licensed them to Magnavox, which commercialized it as the first home video game console, the Magnavox Odyssey, released in 1972. Separately, Nolan Bushnell and Ted Dabney, inspired by seeing Spacewar! running at Stanford University, devised a similar version running in a smaller coin-operated arcade cabinet using a less expensive computer. This was released as Computer Space, the first arcade video game, in 1971. Bushnell and Dabney went on to form Atari, Inc., and with Allan Alcorn created their second arcade game in 1972, the hit ping pong-style Pong, which was directly inspired by the table tennis game on the Odyssey. Sanders and Magnavox sued Atari for infringement of Baer's patents, but Atari settled out of court, paying for perpetual rights to the patents. Following their agreement, Atari made a home version of Pong, which was released by Christmas 1975. The success of the Odyssey and Pong, both as an arcade game and home machine, launched the video game industry. Both Baer and Bushnell have been titled "Father of Video Games" for their contributions.

Terminology
The term "video game" was developed to distinguish this class of electronic games that were played on some type of video display rather than on a teletype printer or similar device. This also distinguished them from many handheld electronic games like Merlin, which commonly used LED lights for indicators but did not use these in combination for imaging purposes. "Computer game" may also be used as a descriptor, as all these types of games essentially require the use of a computer processor, and in some cases it is used interchangeably with "video game".
However, the term "computer game" can also be used to more specifically refer to games played primarily on personal computers or other types of flexible hardware systems (also known as a PC game), as a way to distinguish them from console games or mobile games. Other terms such as "television game" or "telegame" had been used in the 1970s and early 1980s, particularly for the home consoles that connect to a television set. In Japan, where consoles like the Odyssey were first imported and then made within the country by the large television manufacturers such as Toshiba and Sharp Corporation, such games are known as "TV games", or TV geemu or terebi geemu, and the term "TV game" is still commonly used into the 21st century. "Electronic game" may also be used to refer to video games, but this also incorporates devices like early handheld electronic games that lack any video output.

The first appearance of the term "video game" emerged around 1973. The Oxford English Dictionary cited a November 10, 1973 BusinessWeek article as the first printed use of the term. Though Bushnell believed the term came from a vending magazine review of Computer Space in 1971, a review of the major vending magazines Vending Times and Cashbox showed that the term came much earlier, appearing first around March 1973 in these magazines in mass usage, including by the arcade game manufacturers. As analyzed by video game historian Keith Smith, the sudden appearance suggested that the term had been proposed and readily adopted by those involved. This appeared to trace to Ed Adlum, who ran Cashbox's coin-operated section until 1972 and then later founded RePlay Magazine, covering the coin-op amusement field, in 1975. In a September 1982 issue of RePlay, Adlum is credited with first naming these games as "video games": "RePlay's Eddie Adlum worked at 'Cash Box' when 'TV games' first came out. The personalities in those days were Bushnell, his sales manager Pat Karns and a handful of other 'TV game' manufacturers like Henry Leyser and the McEwan brothers. It seemed awkward to call their products 'TV games', so borrowing a word from Billboard's description of movie jukeboxes, Adlum started to refer to this new breed of amusement machine as 'video games.' The phrase stuck." Adlum explained in 1985 that up until the early 1970s, amusement arcades typically had non-video arcade games such as pinball machines and electro-mechanical games. With the arrival of video games in arcades during the early 1970s, there was initially some confusion in the arcade industry over what term should be used to describe the new games. He "wrestled with descriptions of this type of game," alternating between "TV game" and "television game" but "finally woke up one day" and said, "what the hell... video game!"

For many years, the traveling Videotopia exhibit served as the closest representation of such a vital resource. In addition to collecting home video game consoles, the Electronics Conservancy organization set out to locate and restore 400 antique arcade cabinets after realizing that the majority of these games had been destroyed and feared the loss of their historical significance. Video games have increasingly been seen as a way to present history, prompting study of the methodology and terminology involved. Researchers have looked at how historical representations affect how the public perceives the past, and digital humanists encourage historians to use video games as primary materials.
The meaning of "video game" has evolved along with the medium itself. Whether played on a monitor, a TV, or a handheld device, there are many ways that video games can be displayed for users to enjoy. People have drawn comparisons between flow-state-engaged video gamers and pupils in conventional school settings: in traditional, teacher-led classrooms, students have little say in what they learn, are passive consumers of the information selected by teachers, are required to follow the pace and skill level of the group (group teaching), and receive brief, imprecise, normative feedback on their work. As video games continue to develop better graphics and new genres, they generate new terminology as the unfamiliar becomes known. New consoles are released regularly to compete against other brands with similar features, steering consumers toward one purchase or another, and companies have increasingly relied on games that only their specific console can play, whereas in the medium's early years there was little to no such variety. In 1989, a console war began between Nintendo and Sega; Sega's Master System failed to compete, helping the Nintendo Entertainment System become one of the most widely consumed products in the world. More technology continued to be created as the computer entered people's homes for more than just office and daily use; games were implemented on computers as well and have progressively grown since then, including computer-controlled opponents to play against. Early games like tic-tac-toe, solitaire, and Tennis for Two brought gaming to systems that were not specifically meant for it.

Definition
While many games readily fall into a clear, well-understood definition of video games, new genres and innovations in game development have raised the question of what are the essential factors of a video game that separate the medium from other forms of entertainment. The introduction of interactive films in the 1980s with games like Dragon's Lair featured games with full-motion video played off a form of media but only limited user interaction. This required a means to distinguish these games from more traditional board games that happen to also use external media, such as the Clue VCR Mystery Game, which required players to watch VCR clips between turns. To distinguish between these two, video games are considered to require some interactivity that affects the visual display.

Most video games tend to feature some type of victory or winning conditions, such as a scoring mechanism or a final boss fight. The introduction of walking simulators (adventure games that allow for exploration but lack any objectives) like Gone Home, and empathy games (video games that tend to focus on emotion) like That Dragon, Cancer brought the idea of games that did not have any such type of winning condition and raised the question of whether these were actually games. These are still commonly justified as video games as they provide a game world that the player can interact with by some means.

The lack of any industry definition for a video game by 2021 was an issue during the case Epic Games v. Apple, which dealt with video games offered on Apple's iOS App Store.
Among the concerns raised were games like Fortnite Creative and Roblox, which created metaverses of interactive experiences, and whether the larger game and the individual experiences themselves were games or not in relation to fees that Apple charged for the App Store. Judge Yvonne Gonzalez Rogers, recognizing that there was not yet an industry standard definition for a video game, established for her ruling that "At a bare minimum, videogames appear to require some level of interactivity or involvement between the player and the medium" compared to passive entertainment like film, music, and television, and "videogames are also generally graphically rendered or animated, as opposed to being recorded live or via motion capture as in films or television". Rogers still concluded that what is a video game "appears highly eclectic and diverse".

Video game terminology
The gameplay experience varies radically between video games, but many common elements exist. Most games will launch into a title screen and give the player a chance to review options such as the number of players before starting a game. Most games are divided into levels which the player must work the avatar through, scoring points, collecting power-ups to boost the avatar's innate attributes, all while either using special attacks to defeat enemies or moves to avoid them. This information is relayed to the player through a type of on-screen user interface such as a heads-up display atop the rendering of the game itself. Taking damage will deplete the avatar's health, and if that falls to zero or if the avatar otherwise falls into an impossible-to-escape location, the player will lose one of their lives. Should they lose all their lives without gaining an extra life or "1-UP", then the player will reach the "game over" screen. Many levels, as well as the game's finale, end with a type of boss character the player must defeat to continue on. In some games, intermediate points between levels will offer save points where the player can create a saved game on storage media to restart the game should they lose all their lives or need to stop the game and restart at a later time. These also may be in the form of a passage that can be written down and reentered at the title screen.

Product flaws include software bugs which can manifest as glitches which may be exploited by the player; this is often the foundation of speedrunning a video game. These bugs, along with cheat codes, Easter eggs, and other hidden secrets that were intentionally added to the game, can also be exploited. On some consoles, cheat cartridges allow players to execute these cheat codes, and user-developed trainers allow similar bypassing for computer software games. Either might make the game easier, give the player additional power-ups, or change the appearance of the game.

Components
To distinguish from electronic games, a video game is generally considered to require a platform, the hardware which contains computing elements, to process player interaction from some type of input device and display the results to a video output display.

Platform
Video games require a platform, a specific combination of electronic components or computer hardware and associated software, to operate. The term system is also commonly used. Games are typically designed to be played on one or a limited number of platforms, and exclusivity to a platform is used as a competitive edge in the video game market.
However, games may be developed for alternative platforms than intended, which are described as ports or conversions. These also may be remasters, where most of the original game's source code is reused and art assets, models, and game levels are updated for modern systems, and remakes, where in addition to asset improvements, significant reworking of the original game, possibly from scratch, is performed. The list below is not exhaustive and excludes other electronic devices capable of playing video games such as PDAs and graphing calculators.

Computer game
Most computer games are PC games, referring to those that involve a player interacting with a personal computer (PC) connected to a video monitor. Personal computers are not dedicated game platforms, so there may be differences running the same game on different hardware. Also, the openness allows some features to developers like reduced software cost, increased flexibility, increased innovation, emulation, creation of modifications or mods, open hosting for online gaming (in which a person plays a video game with people who are in a different household) and others. A gaming computer is a PC or laptop intended specifically for gaming, typically using high-performance, high-cost components. In addition to personal computer gaming, there also exist games that work on mainframe computers and other similarly shared systems, with users logging in remotely to use the computer.

Home console
A console game is played on a home console, a specialized electronic device that connects to a common television set or composite video monitor. Home consoles are specifically designed to play games using a dedicated hardware environment, giving developers a concrete hardware target for development and assurances of what features will be available, simplifying development compared to PC game development. Usually consoles only run games developed for them, or games from other platforms made by the same company, but never games developed by their direct competitor, even if the same game is available on different platforms. They often come with a specific game controller. Major console platforms include Xbox, PlayStation and Nintendo.

Handheld console
A handheld game console is a small, self-contained electronic device that is portable and can be held in a user's hands. It features the console, a small screen, speakers and buttons, joystick or other game controllers in a single unit. Like consoles, handhelds are dedicated platforms, and share almost the same characteristics. Handheld hardware usually is less powerful than PC or console hardware. Some handheld games from the late 1970s and early 1980s could only play one game. In the 1990s and 2000s, a number of handheld games used cartridges, which enabled them to be used to play many different games. The handheld console has waned in the 2010s as mobile device gaming has become a more dominant factor.

Arcade video game
An arcade video game generally refers to a game played on an even more specialized type of electronic device that is typically designed to play only one game and is encased in a special, large coin-operated cabinet which has one built-in console, controllers (joystick, buttons, etc.), a CRT screen, and audio amplifier and speakers. Arcade games often have brightly painted logos and images relating to the theme of the game.
While most arcade games are housed in a vertical cabinet, which the user typically stands in front of to play, some arcade games use a tabletop approach, in which the display screen is housed in a table-style cabinet with a see-through table top. With table-top games, the users typically sit to play. In the 1990s and 2000s, some arcade games offered players a choice of multiple games. In the 1980s, video arcades were businesses in which game players could use a number of arcade video games. In the 2010s, there are far fewer video arcades, but some movie theaters and family entertainment centers still have them.

Browser game

A browser game takes advantage of the standardization of web browser technologies across multiple devices, providing a cross-platform environment. These games may be identified based on the website on which they appear, such as with Miniclip games. Others are named based on the programming platform used to develop them, such as Java and Flash games.

Mobile game

With the introduction of smartphones and tablet computers standardized on the iOS and Android operating systems, mobile gaming has become a significant platform. These games may use unique features of mobile devices that are not necessarily present on other platforms, such as accelerometers, global positioning information, and camera devices to support augmented reality gameplay.

Cloud gaming

Cloud gaming requires a minimal hardware device, such as a basic computer, console, laptop, mobile phone, or even a dedicated hardware device, connected to a display with good Internet connectivity that connects to hardware systems run by the cloud gaming provider. The game is computed and rendered on the remote hardware, using a number of predictive methods to reduce the network latency between player input and output on their display device. For example, the Xbox Cloud Gaming and PlayStation Now platforms use dedicated custom server blade hardware in cloud computing centers.

Virtual reality

Virtual reality (VR) games generally require players to use a special head-mounted unit that provides stereoscopic screens and motion tracking to immerse the player within a virtual environment that responds to their head movements. Some VR systems include control units for the player's hands to provide a direct way to interact with the virtual world. VR systems generally require a separate computer, console, or other processing device that couples with the head-mounted unit.

Emulation

An emulator enables games from a console or otherwise different system to be run in a type of virtual machine on a modern system, simulating the hardware of the original and allowing old games to be played. While emulators themselves have been found to be legal in United States case law, the act of obtaining game software that one does not already own may violate copyrights. However, there are some official releases of emulated software from game manufacturers, such as Nintendo with its Virtual Console or Nintendo Switch Online offerings.

Backward compatibility

Backward compatibility is similar in nature to emulation in that older games can be played on newer platforms, but typically works directly through hardware and built-in software within the platform. For example, the PlayStation 2 is capable of playing original PlayStation games simply by inserting the original game media into the newer console, while Nintendo's Wii could play GameCube titles in the same manner.
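At its core, an emulator is an interpreter for another machine's instruction set: it repeatedly fetches an instruction from emulated memory, decodes it, and executes it against emulated registers and devices. The Python sketch below illustrates that fetch-decode-execute loop with a deliberately tiny, invented instruction set; the ToyEmulator class and its opcodes are hypothetical and do not correspond to any real console's hardware.

```python
# A minimal sketch of the fetch-decode-execute loop at the heart of an
# emulator. The instruction set here is invented for illustration; a real
# emulator implements the full opcode table of the original hardware and
# also emulates its video, audio, input, and timing behaviour.

class ToyEmulator:
    def __init__(self, program):
        self.memory = list(program)  # emulated RAM holding the "game"
        self.acc = 0                 # a single accumulator register
        self.pc = 0                  # program counter
        self.running = True

    def step(self):
        opcode = self.memory[self.pc]        # fetch
        operand = self.memory[self.pc + 1]
        self.pc += 2
        if opcode == 0x01:                   # decode and execute: LOAD n
            self.acc = operand
        elif opcode == 0x02:                 # ADD n
            self.acc += operand
        elif opcode == 0xFF:                 # HALT
            self.running = False

    def run(self):
        while self.running:
            self.step()
        return self.acc

# LOAD 5, ADD 3, HALT: the emulated machine computes 8.
print(ToyEmulator([0x01, 5, 0x02, 3, 0xFF, 0]).run())
```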
Game media

Early arcade games, home consoles, and handheld games were dedicated hardware units with the game's logic built into the electronic componentry of the hardware. Since then, most video game platforms are considered programmable, having means to read and play multiple games distributed on different types of media or formats. Physical formats include ROM cartridges, magnetic storage including magnetic tape data storage and floppy discs, optical media formats including CD-ROM and DVDs, and flash memory cards. Furthermore, digital distribution over the Internet or other communication methods, as well as cloud gaming, alleviates the need for any physical media. In some cases, the media serves as the direct read-only memory for the game, or it may be the form of installation media that is used to write the main assets to the player's platform's local storage for faster loading periods and later updates.

Games can be extended with new content and software patches, through expansion packs which are typically available as physical media, or through downloadable content nominally available via digital distribution. These can be offered freely or can be used to monetize a game following its initial release. Several games offer players the ability to create user-generated content to share with others to play. Other games, mostly those on personal computers, can be extended with user-created modifications or mods that alter or add onto the game; these often are unofficial, developed by players through reverse engineering of the game, though other games provide official support for modding.

Input device

Video games can use several types of input devices to translate human actions to a game. Most common are game controllers like gamepads and joysticks for most consoles, which also serve as accessories for personal computer systems alongside keyboard and mouse controls. Common controls on the most recent controllers include face buttons, shoulder triggers, analog sticks, and directional pads ("d-pads"). Consoles typically include standard controllers which are shipped or bundled with the console itself, while peripheral controllers are available as a separate purchase from the console manufacturer or third-party vendors. Similar control sets are built into handheld consoles and onto arcade cabinets. Newer technology improvements have incorporated additional technology into the controller or the game platform, such as touchscreens and motion detection sensors that give more options for how the player interacts with the game. Specialized controllers may be used for certain genres of games, including racing wheels, light guns, and dance pads. Digital cameras and motion detection can capture movements of the player as input into the game, which can, in some cases, effectively eliminate the controller, and on other systems such as virtual reality are used to enhance immersion into the game.

Display and output

By definition, all video games are intended to output graphics to a video display, such as cathode-ray tube televisions, newer liquid-crystal display (LCD) televisions and built-in screens, projectors, or computer monitors, depending on the type of platform the game is played on. Features such as color depth, refresh rate, frame rate, and screen resolution are a combination of the limitations of the game platform and display device and the program efficiency of the game itself.
The game's output can range from fixed displays using LED or LCD elements, to text-based games, two-dimensional and three-dimensional graphics, and augmented reality displays.

The game's graphics are often accompanied by sound produced by internal speakers on the game platform or external speakers attached to the platform, as directed by the game's programming. This often will include sound effects tied to the player's actions to provide audio feedback, as well as background music for the game. Some platforms support additional feedback mechanics to the player that a game can take advantage of. This is most commonly haptic technology built into the game controller, such as causing the controller to shake in the player's hands to simulate a shaking earthquake occurring in game.

Classifications

Video games are frequently classified by a number of factors related to how one plays them.

Genre

A video game, like most other forms of media, may be categorized into genres. However, unlike film or television, which use visual or narrative elements, video games are generally categorized into genres based on their gameplay interaction, since this is the primary means by which one interacts with a video game. The narrative setting does not impact gameplay; a shooter game is still a shooter game, regardless of whether it takes place in a fantasy world or in outer space. An exception is the horror game genre, used for games that are based on narrative elements of horror fiction, the supernatural, and psychological horror. Genre names are normally self-describing in terms of the type of gameplay, such as action game, role-playing game, or shoot 'em up, though some genres have derivations from influential works that have defined that genre, such as roguelikes from Rogue, Grand Theft Auto clones from Grand Theft Auto III, and battle royale games from the film Battle Royale. The names may shift over time as players, developers and the media come up with new terms; for example, first-person shooters were originally called "Doom clones" based on the 1993 game. A hierarchy of game genres exists, with top-level genres like "shooter game" and "action game" that broadly capture the game's main gameplay style, and several subgenres of specific implementation, such as, within the shooter game, the first-person shooter and third-person shooter. Some cross-genre types also exist that fall under multiple top-level genres, such as the action-adventure game.

Mode

A video game's mode describes how many players can use the game at the same time. This is primarily distinguished by single-player video games and multiplayer video games. Within the latter category, multiplayer games can be played in a variety of ways, including locally at the same device, on separate devices connected through a local network such as LAN parties, or online via separate Internet connections. Most multiplayer games are based on competitive gameplay, but many offer cooperative and team-based options as well as asymmetric gameplay. Online games use server structures that can also enable massively multiplayer online games (MMOs) to support hundreds of players at the same time.

A small number of video games are zero-player games, in which the player has very limited interaction with the game itself. These are most commonly simulation games where the player may establish a starting state and then let the game proceed on its own, watching the results as a passive observer, such as with many computerized simulations of Conway's Game of Life.
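Conway's Game of Life makes the zero-player idea concrete: the player's only input is the starting pattern, after which fixed rules determine every later generation with no further interaction. A minimal Python sketch of those rules follows (the function and pattern names are illustrative):

```python
# A zero-player game in miniature: Conway's Game of Life. The "player"
# only chooses the starting cells; every subsequent generation follows
# mechanically from the rules.
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Born with exactly 3 neighbours; survives with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": this pattern travels across the grid entirely on its own.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    print(sorted(cells))
    cells = step(cells)
```

Once the loop starts, the outcome is fully determined by the initial state, which is what places such simulations at the zero-player end of the mode spectrum.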
Intent

Most video games are created for entertainment purposes, a category otherwise called "core games", but there is a subset of games developed for additional purposes beyond entertainment. These include:

Casual games

Casual games are designed for ease of accessibility, with simple-to-understand gameplay and quick-to-grasp rule sets, and are aimed at a mass-market audience, as opposed to hardcore games. They frequently support the ability to jump in and out of play on demand, such as during commuting or lunch breaks. Numerous browser and mobile games fall into the casual game area, and casual games often are from genres with low-intensity game elements such as match three, hidden object, time management, and puzzle games. Casual games frequently use social-network game mechanics, where players can enlist the help of friends on their social media networks for extra turns or moves each day. Popular casual games include Tetris and Candy Crush Saga. More recently, starting in the late 2010s, hyper-casual games have used even more simplistic rules for short but infinitely replayable games, such as Flappy Bird.

Educational games

Education software has been used in homes and classrooms to help teach children and students, and video games have been similarly adapted for these reasons, all designed to provide a form of interactivity and entertainment tied to game design elements. There are a variety of differences in their designs and how they educate the user. These are broadly split between edutainment games, which tend to focus on entertainment value and rote learning but are unlikely to engage critical thinking, and educational video games, which are geared towards problem solving through motivation and positive reinforcement while downplaying the entertainment value. Examples of educational games include The Oregon Trail and the Carmen Sandiego series. Further, games not initially developed for educational purposes have found their way into the classroom after release, such as those that feature open worlds or virtual sandboxes like Minecraft, or that build critical thinking skills through puzzle video games like SpaceChem.

Serious games

Further extending from educational games, serious games are those where the entertainment factor may be augmented, overshadowed, or even eliminated by other purposes for the game. Game design is used to reinforce the non-entertainment purpose of the game, such as using video game technology for the game's interactive world, or gamification for reinforcement training. Educational games are a form of serious games, but other types of serious games include fitness games that incorporate significant physical exercise to help keep the player fit (such as Wii Fit), flight simulators that simulate piloting commercial and military aircraft (such as Microsoft Flight Simulator), advergames that are built around the advertising of a product (such as Pepsiman), and newsgames aimed at conveying a specific advocacy message (such as NarcoGuerra).

Art games

Though video games have been considered an art form on their own, games may be developed to try to purposely communicate a story or message, using the medium as a work of art. These art or arthouse games are designed to generate emotion and empathy from the player by challenging societal norms and offering critique through the interactivity of the video game medium. They may not have any type of win condition and are designed to let the player explore through the game world and scenarios.
Most art games are indie games in nature, designed based on personal experiences or stories through a single developer or small team. Examples of art games include Passage, Flower, and That Dragon, Cancer.

Content rating

Video games can be subject to national and international content rating requirements. As with film content ratings, video game ratings typically identify the target age group that the national or regional ratings board believes is appropriate for the player, ranging from all ages, to teenager-or-older, to mature, to the infrequent adults-only games. Most content review is based on the level of violence, both in the type of violence and how graphically it may be represented, and on sexual content, but other themes such as drug and alcohol use and gambling that can influence children may also be identified. A primary identifier based on a minimum age is used by nearly all systems, along with additional descriptors to identify specific content that players and parents should be aware of. The regulations vary from country to country but generally are voluntary systems upheld by vendor practices, with penalties and fines issued by the ratings body on the video game publisher for misuse of the ratings. The major content rating systems include:

Entertainment Software Rating Board (ESRB), which oversees games released in the United States. ESRB ratings are voluntary and rated along E (Everyone), E10+ (Everyone 10 and older), T (Teen), M (Mature), and AO (Adults Only). Attempts to mandate video game ratings in the U.S. subsequently led to the landmark Supreme Court case Brown v. Entertainment Merchants Association in 2011, which ruled that video games were a protected form of art, a key victory for the video game industry.

Pan European Game Information (PEGI), covering the United Kingdom, most of the European Union and other European countries, replacing previous national-based systems. The PEGI system rates content based on minimum recommended ages, which include 3, 7, 12, 16, and 18.

Australian Classification Board (ACB), which oversees the ratings of games and other works in Australia, using ratings of G (General), PG (Parental Guidance), M (Mature), MA15+ (Mature Accompanied), R18+ (Restricted), and X (Restricted for pornographic material). The ACB can also refuse to classify a game (RC – Refused Classification). The ACB's ratings are enforceable by law, and importantly, games cannot be imported or purchased digitally in Australia if they have failed to gain a rating or were given the RC rating, leading to a number of notable banned games.

Computer Entertainment Rating Organization (CERO), which rates games for Japan. Its ratings include A (all ages), B (12 and older), C (15 and over), D (17 and over), and Z (18 and over).

Additionally, the major content rating system providers have worked to create the International Age Rating Coalition (IARC), a means to streamline and align the content rating systems between different regions, so that a publisher would only need to complete the content ratings review for one provider and use the IARC process to affirm the content rating for all other regions.

Certain nations have even more restrictive rules related to political or ideological content. Within Germany, until 2018, the Unterhaltungssoftware Selbstkontrolle (Entertainment Software Self-Regulation) would refuse to classify, and thus prevent the sale of, any game depicting Nazi imagery, often requiring developers to replace such imagery with fictional equivalents.
This ruling was relaxed in 2018 to allow such imagery for "social adequacy" purposes, as applied to other works of art. China's video game segment is mostly isolated from the rest of the world due to the government's censorship, and all games published there must adhere to strict government review, disallowing content such as smearing the image of the Chinese Communist Party. Foreign games published in China often require modification by developers and publishers to meet these requirements.

Development

Video game development and authorship, much like any other form of entertainment, is frequently a cross-disciplinary field. Video game developers, as employees within this industry are commonly called, primarily include programmers and graphic designers. Over the years this has expanded to include almost every type of skill that one might see prevalent in the creation of any movie or television program, including sound designers, musicians, and other technicians, as well as skills that are specific to video games, such as the game designer. All of these are managed by producers.

In the early days of the industry, it was more common for a single person to manage all of the roles needed to create a video game. As platforms have become more complex and powerful in the type of material they can present, larger teams have been needed to generate all of the art, programming, cinematography, and more. This is not to say that the age of the "one-man shop" is gone, as this is still sometimes found in the casual gaming and handheld markets, where smaller games are prevalent due to technical limitations such as limited RAM or lack of dedicated 3D graphics rendering capabilities on the target platform (e.g., some PDAs).

Video games are programmed like any other piece of computer software. Prior to the mid-1970s, arcade and home consoles were programmed by assembling discrete electro-mechanical components on circuit boards, which limited games to relatively simple logic. By 1975, low-cost microprocessors were available in volume to be used for video game hardware, which allowed game developers to program more detailed games, widening the scope of what was possible. Ongoing improvements in computer hardware technology have expanded what has become possible to create in video games, coupled with convergence of common hardware between console, computer, and arcade platforms to simplify the development process. Today, game developers have a number of commercial and open source tools available to make games, which often work across multiple platforms to support portability, or they may still opt to create their own tools for more specialized features and direct control of the game.

Today, many games are built around a game engine that handles the bulk of the game's logic, gameplay, and rendering. These engines can be augmented with specialized engines for specific features, such as a physics engine that simulates the physics of objects in real-time. A variety of middleware exists to help developers access other features, such as playback of videos within games, network-oriented code for games that communicate via online services, matchmaking for online games, and similar features. These features can be used from a developer's programming language of choice, or they may opt to also use game development kits that minimize the amount of direct programming they have to do but can also limit the amount of customization they can add into a game.
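In practice, the central service an engine provides is the game loop, which repeatedly gathers input, advances game logic and physics, and renders a frame. The Python sketch below shows one common way to structure that loop, with a trivial stand-in for a physics engine; the function names and the fixed-timestep scheme are illustrative assumptions, not the API of any particular engine.

```python
# A schematic game loop: physics advances in fixed time steps while
# rendering happens once per loop iteration. Real engines follow the
# same shape, adding input handling, scene graphs, and GPU rendering.
import time

TIMESTEP = 1.0 / 60.0  # simulate physics 60 times per second

def update_physics(state, dt):
    # Stand-in "physics engine": integrate velocity under gravity.
    state["y"] += state["vy"] * dt
    state["vy"] -= 9.8 * dt

def render(state):
    print(f"object at height {state['y']:6.2f}")

def game_loop(duration=0.5):
    state = {"y": 10.0, "vy": 0.0}
    previous, accumulator = time.monotonic(), 0.0
    end = previous + duration
    while previous < end:
        now = time.monotonic()
        accumulator += now - previous   # real time since last frame
        previous = now
        while accumulator >= TIMESTEP:  # consume it in fixed steps
            update_physics(state, TIMESTEP)
            accumulator -= TIMESTEP
        render(state)
        time.sleep(1.0 / 30.0)          # cap the demo's frame rate

game_loop()
```

Decoupling the fixed-rate simulation from rendering in this way is a common design choice because it keeps the physics deterministic regardless of how quickly frames can be drawn.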
Like all software, video games usually undergo quality testing before release to assure there are no bugs or glitches in the product, though frequently developers will release patches and updates.

With the growth of the size of development teams in the industry, the problem of cost has increased. Development studios need the best talent, while publishers reduce costs to maintain profitability on their investment. Typically, a video game console development team ranges from 5 to 50 people, and some exceed 100. In May 2009, Assassin's Creed II was reported to have a development staff of 450. The growth of team size, combined with greater pressure to get completed projects into the market to begin recouping production costs, has led to a greater occurrence of missed deadlines, rushed games and the release of unfinished products.

While amateur and hobbyist game programming had existed since the late 1970s with the introduction of home computers, a newer trend since the mid-2000s is indie game development. Indie games are made by small teams outside any direct publisher control, their games being smaller in scope than those from the larger "AAA" game studios, and they often experiment in gameplay and art style. Indie game development is aided by the larger availability of digital distribution, including the newer mobile gaming market, and readily available, low-cost development tools for these platforms.

Game theory and studies

Although departments of computer science have been studying the technical aspects of video games for years, theories that examine games as an artistic medium are a relatively recent development in the humanities. The two most visible schools in this emerging field are ludology and narratology. Narrativists approach video games in the context of what Janet Murray calls "Cyberdrama". That is to say, their major concern is with video games as a storytelling medium, one that arises out of interactive fiction. Murray puts video games in the context of the Holodeck, a fictional piece of technology from Star Trek, arguing for the video game as a medium in which the player is allowed to become another person, and to act out in another world. This image of video games received early widespread popular support, and forms the basis of films such as Tron, eXistenZ and The Last Starfighter.

Ludologists break sharply and radically from this idea. They argue that a video game is first and foremost a game, which must be understood in terms of its rules, interface, and the concept of play that it deploys. Espen J. Aarseth argues that, although games certainly have plots, characters, and aspects of traditional narratives, these aspects are incidental to gameplay. For example, Aarseth is critical of the widespread attention that narrativists have given to the heroine of the game Tomb Raider, saying that "the dimensions of Lara Croft's body, already analyzed to death by film theorists, are irrelevant to me as a player, because a different-looking body would not make me play differently... When I play, I don't even see her body, but see through it and past it." Simply put, ludologists reject traditional theories of art because they claim that the artistic and socially relevant qualities of a video game are primarily determined by the underlying set of rules, demands, and expectations imposed on the player.

While many games rely on emergent principles, video games commonly present simulated story worlds where emergent behavior occurs within the context of the game.
The term "emergent narrative" has been used to describe how, in a simulated environment, storyline can be created simply by "what happens to the player." However, emergent behavior is not limited to sophisticated games. In general, any place where event-driven instructions occur for AI in a game, emergent behavior will exist. For instance, take a racing game in which cars are programmed to avoid crashing, and they encounter an obstacle in the track: the cars might then maneuver to avoid the obstacle causing the cars behind them to slow and/or maneuver to accommodate the cars in front of them and the obstacle. The programmer never wrote code to specifically create a traffic jam, yet one now exists in the game. Intellectual property for video games Most commonly, video games are protected by copyright, though both patents and trademarks have been used as well. Though local copyright regulations vary to the degree of protection, video games qualify as copyrighted visual-audio works, and enjoy cross-country protection under the Berne Convention. This typically only applies to the underlying code, as well as to the artistic aspects of the game such as its writing, art assets, and music. Gameplay itself is generally not considered copyrightable; in the United States among other countries, video games are considered to fall into the idea–expression distinction in that it is how the game is presented and expressed to the player that can be copyrighted, but not the underlying principles of the game. Because gameplay is normally ineligible for copyright, gameplay ideas in popular games are often replicated and built upon in other games. At times, this repurposing of gameplay can be seen as beneficial and a fundamental part of how the industry has grown by building on the ideas of others. For example Doom (1993) and Grand Theft Auto III (2001) introduced gameplay that created popular new game genres, the first-person shooter and the Grand Theft Auto clone, respectively, in the few years after their release. However, at times and more frequently at the onset of the industry, developers would intentionally create video game clones of successful games and game hardware with few changes, which led to the flooded arcade and dedicated home console market around 1978. Cloning is also a major issue with countries that do not have strong intellectual property protection laws, such as within China. The lax oversight by China's government and the difficulty for foreign companies to take Chinese entities to court had enabled China to support a large grey market of cloned hardware and software systems. The industry remains challenged to distinguish between creating new games based on refinements of past successful games to create a new type of gameplay, and intentionally creating a clone of a game that may simply swap out art assets. Industry History The early history of the video game industry, following the first game hardware releases and through 1983, had little structure. Video games quickly took off during the golden age of arcade video games from the late 1970s to early 1980s, but the newfound industry was mainly composed of game developers with little business experience. This led to numerous companies forming simply to create clones of popular games to try to capitalize on the market. Due to loss of publishing control and oversaturation of the market, the North American home video game market crashed in 1983, dropping from revenues of around in 1983 to by 1985. 
Many of the North American companies created in the prior years closed down. Japan's growing game industry was briefly shocked by this crash but had sufficient longevity to withstand the short-term effects, and Nintendo helped to revitalize the industry with the release of the Nintendo Entertainment System in North America in 1985. Along with it, Nintendo established a number of core industrial practices to prevent unlicensed game development and to control game distribution on its platform, methods that continue to be used by console manufacturers today.

The industry remained more conservative following the 1983 crash, forming around the concept of publisher-developer dichotomies, and by the 2000s this led to the industry centralizing around low-risk, triple-A games and studios with large development budgets. The advent of the Internet brought digital distribution as a viable means to distribute games, and contributed to the growth of riskier, experimental independent game development as an alternative to triple-A games in the late 2000s, which has continued to grow as a significant portion of the video game industry.

Industry roles

Video games have a large network effect that draws on many different sectors that tie into the larger video game industry. While video game developers are a significant portion of the industry, other key participants in the market include:

Publishers: Companies that generally oversee bringing the game from the developer to market. This often includes performing the marketing, public relations, and advertising of the game. Publishers frequently pay the developers ahead of time to make their games, are involved in critical decisions about the direction of the game's progress, and then pay the developers additional royalties or bonuses based on sales performance. Other smaller, boutique publishers may simply offer to perform the publishing of a game for a small fee and a portion of the sales, and otherwise leave the developer with the creative freedom to proceed. A range of other publisher-developer relationships exist between these points.

Distributors: Publishers often are able to produce their own game media and take the role of distributor, but there are also third-party distributors that can mass-produce game media and distribute it to retailers. Digital storefronts like Steam and the iOS App Store also serve as distributors and retailers in the digital space.

Retailers: Physical storefronts, which include large online retailers, department and electronics stores, and specialty video game stores, sell games, consoles, and other accessories to consumers. This has also included a trade-in market in certain regions, allowing players to turn in used games for partial refunds or credit towards other games. However, with the rise of digital marketplaces and e-commerce, retailers have been performing worse than in the past.

Hardware manufacturers: The video game console manufacturers produce console hardware, often through a value chain system that includes numerous component suppliers and contract manufacturers that assemble the consoles. Further, these console manufacturers typically require a license to develop for their platform and may control the production of some games, as Nintendo does with the use of game cartridges for its systems. In exchange, the manufacturers may help promote games for their system and may seek console exclusivity for certain games.
For games on personal computers, a number of manufacturers are devoted to high-performance "gaming computer" hardware, particularly in the graphics card area; several of the same companies overlap with component supply for consoles. A range of third-party manufacturers also exist to provide equipment and gear for consoles post-sale, such as additional controllers for consoles or carrying cases and gear for handheld devices.

Journalism: While journalism around video games used to be primarily print-based, and focused more on post-release reviews and gameplay strategy, the Internet has brought a more proactive press that uses web journalism, covering games in the months prior to release as well as beyond, helping to build excitement for games ahead of release.

Influencers: With the rising importance of social media, video game companies have found that the opinions of influencers using streaming media to play through their games have a significant impact on game sales, and have turned to using influencers alongside traditional journalism as a means to build up attention to their game before release.

Esports: Esports is a major function of several multiplayer games, with numerous professional leagues established since the 2000s and large viewership numbers, particularly out of southeast Asia since the 2010s.

Trade and advocacy groups: Trade groups like the Entertainment Software Association were established to provide a common voice for the industry in response to governmental and other advocacy concerns. They frequently set up the major trade events and conventions for the industry, such as E3.

Gamers: The players and consumers of video games, broadly. While their representation in the industry is primarily seen through game sales, many companies follow gamers' comments on social media or in user reviews and engage with them to improve their products, in addition to feedback from other parts of the industry. Demographics of the larger player community also impact parts of the market; while once dominated by younger men, the market shifted in the mid-2010s towards women and older players who generally preferred mobile and casual games, leading to further growth in those sectors. This feedback from gamers also influences how games are updated after release.

Major regional markets

The industry itself grew out from both the United States and Japan in the 1970s and 1980s before having a larger worldwide contribution. Today, the video game industry is predominantly led by major companies in North America (primarily the United States and Canada), Europe, and southeast Asia, including Japan, South Korea, and China. Hardware production remains an area dominated by Asian companies either directly involved in hardware design or part of the production process, but the digital distribution and indie game development of the late 2000s have allowed game developers to flourish nearly anywhere and diversify the field.

Game sales

According to the market research firm Newzoo, the global video game industry drew estimated revenues of over $159 billion in 2020. Mobile games accounted for the bulk of this, with a 48% share of the market, followed by console games at 28% and personal computer games at 23%. Sales of different types of games vary widely between countries due to local preferences. Japanese consumers tend to purchase many more handheld games than console games, and especially PC games, with a strong preference for games catering to local tastes.
Another key difference is that, though having declined in the West, arcade games remain an important sector of the Japanese gaming industry. In South Korea, computer games are generally preferred over console games, especially MMORPG games and real-time strategy games. Computer games are also popular in China.

Effects on society

Culture

Video game culture is a worldwide new media subculture formed around video games and game playing. As computer and video games have increased in popularity over time, they have had a significant influence on popular culture. Video game culture has also evolved over time, hand in hand with internet culture as well as the increasing popularity of mobile games. Many people who play video games identify as gamers, which can mean anything from someone who enjoys games to someone who is passionate about them. As video games become more social with multiplayer and online capability, gamers find themselves in growing social networks. Gaming can be entertainment as well as competition, as a trend known as electronic sports has become more widely accepted. In the 2010s, video games and discussions of video game trends and topics can be seen in social media, politics, television, film and music. The COVID-19 pandemic during 2020–2021 gave further visibility to video games as a pastime to enjoy with friends and family online as a means of social distancing.

Since the mid-2000s there has been debate over whether video games qualify as art, primarily because the form's interactivity interferes with the artistic intent of the work and because games are designed for commercial appeal. A significant debate on the matter came after film critic Roger Ebert published an essay, "Video Games can never be art", which challenged the industry to prove him and other critics wrong. The view that video games were an art form was cemented in 2011 when the U.S. Supreme Court ruled in the landmark case Brown v. Entertainment Merchants Association that video games were a protected form of speech with artistic merit. Since then, video game developers have come to use the form more for artistic expression, including the development of art games, and the cultural heritage of video games as works of art, beyond their technical capabilities, has been part of major museum exhibits, including The Art of Video Games at the Smithsonian American Art Museum, which toured other museums from 2012 to 2016.

Video games often inspire sequels and other video games within the same franchise, but have also influenced works outside of the video game medium. Numerous television shows (both animated and live-action), films, comics and novels have been created based on existing video game franchises. Because video games are an interactive medium, there has been trouble converting them to these passive forms of media, and typically such works have been critically panned or treated as children's media. For example, until 2019 no video game film had ever received a "Fresh" rating on Rotten Tomatoes, but the releases of Detective Pikachu (2019) and Sonic the Hedgehog (2020), both receiving "Fresh" ratings, show signs of the film industry having found an approach to adapt video games for the big screen. That said, some early video game-based films have been highly successful at the box office, such as 1995's Mortal Kombat and 2001's Lara Croft: Tomb Raider.
More recently, since the 2000s, there has also been a greater appreciation of video game music, which ranges from chiptunes composed for limited sound-output devices on early computers and consoles, to fully scored compositions for most modern games. Such music has frequently served as a platform for covers and remixes, and concerts featuring video game soundtracks performed by bands or orchestras, such as Video Games Live, have also become popular. Video games also frequently incorporate licensed music, particularly in the area of rhythm games, furthering the depth to which video games and music can work together.

Further, video games can serve as a virtual environment under the full control of a producer to create new works. With the capability to render 3D actors and settings in real-time, a new type of work, machinima (short for "machine cinema"), grew out of using video game engines to craft narratives. As video game engines gain higher fidelity, they have also become part of the tools used in more traditional filmmaking. Unreal Engine has been used as a backbone by Industrial Light & Magic for their StageCraft technology for shows like The Mandalorian.

Separately, video games are also frequently used as part of the promotion and marketing for other media, such as films, anime, and comics. However, these licensed games in the 1990s and 2000s often had a reputation for poor quality, developed without any input from the intellectual property rights owners, and several of them are considered among lists of games with notably negative reception, such as Superman 64. More recently, with these licensed games being developed by triple-A studios or through studios directly connected to the licensed property owner, there has been a significant improvement in the quality of these games, with an early trendsetting example being Batman: Arkham Asylum.

Beneficial uses

Besides their entertainment value, appropriately designed video games have been seen to provide value in education across several ages and comprehension levels. Learning principles found in video games have been identified as possible techniques with which to reform the U.S. education system. It has been noticed that gamers adopt an attitude while playing that is of such high concentration that they do not realize they are learning, and that if the same attitude could be adopted at school, education would enjoy significant benefits. Students are found to be "learning by doing" while playing video games while fostering creative thinking.

Video games are also believed to be beneficial to the mind and body. It has been shown that action video game players have better hand–eye coordination and visuo-motor skills, such as resistance to distraction, sensitivity to information in the peripheral vision, and ability to count briefly presented objects, than nonplayers. Researchers found that such enhanced abilities could be acquired by training with action games involving challenges that switch attention between different locations, but not with games requiring concentration on single objects. A 2018 systematic review found evidence that video game training had positive effects on cognitive and emotional skills in the adult population, especially in young adults. A 2019 systematic review also added support for the claim that video games are beneficial to the brain, although the beneficial effects of video gaming on the brain differed by video game type.
Organisers of video gaming events, such as the organisers of the D-Lux video game festival in Dumfries, Scotland, have emphasised the positive aspects video games can have on mental health. Organisers, mental health workers and mental health nurses at the event emphasised the relationships and friendships that can be built around video games, and how playing games can help people learn about others as a precursor to discussing the person's mental health. A study in 2020 from Oxford University also suggested that playing video games can be a benefit to a person's mental health. The study of 3,274 gamers, all over the age of 18, focused on the games Animal Crossing: New Horizons and Plants vs Zombies: Battle for Neighborville and used actual play-time data. It found that those who played more games tended to report greater "wellbeing". Also in 2020, computer science professor Regan Mandryk of the University of Saskatchewan said her research also showed that video games can have health benefits such as reducing stress and improving mental health. The university's research studied all age groups – "from pre-literate children through to older adults living in long term care homes" – with a main focus on 18 to 55-year-olds.

A study of gamers' attitudes towards gaming, reported in 2018, found that millennials use video games as a key strategy for coping with stress. In the study of 1,000 gamers, 55% said that gaming "helps them to unwind and relieve stress ... and half said they see the value in gaming as a method of escapism to help them deal with daily work pressures".

Controversies

Video games have attracted controversy since the 1970s, having emerged as one of the primary playthings used by youngsters all over the world. Parents and children's advocates have raised concerns that violent video games can influence young players into performing those violent acts in real life, and events such as the Columbine High School massacre in 1999, in which the perpetrators specifically alluded to using video games to plot out their attack, raised further fears. Medical experts and mental health professionals have also raised concerns that video games may be addictive, and the World Health Organization has included "gaming disorder" in the 11th revision of its International Statistical Classification of Diseases. Other health experts, including the American Psychiatric Association, have stated that there is insufficient evidence that video games can create violent tendencies or lead to addictive behavior, though they agree that video games typically use a compulsion loop in their core design that can trigger dopamine release, which can reinforce the desire to continue playing and potentially contribute to such behavior. Even with case law establishing that video games qualify as a protected art form, there has been pressure on the video game industry to keep their products in check to avoid excessive violence, particularly in games aimed at younger children. The potential for addictive behavior around games, coupled with the increased use of post-sale monetization of video games, has also raised concerns among parents, advocates, and government officials about gambling tendencies that may come from video games, such as the controversy around the use of loot boxes in many high-profile games.
Numerous other controversies around video games and the industry have arisen over the years. Among the more notable incidents are the 1993 United States Congressional hearings on violent games like Mortal Kombat, which led to the formation of the ESRB ratings system; numerous legal actions taken by attorney Jack Thompson over violent games such as Grand Theft Auto III and Manhunt from 2003 to 2007; the outrage over the "No Russian" level from Call of Duty: Modern Warfare 2 in 2009, which allowed the player to shoot a number of innocent non-player characters at an airport; and the Gamergate harassment campaign in 2014, which highlighted misogyny from a portion of the player demographic. The industry as a whole has also dealt with issues related to gender, racial, and LGBTQ+ discrimination and the mischaracterization of these minority groups in video games. A further issue in the industry is related to working conditions, as development studios and publishers frequently use "crunch time", periods of required extended working hours, in the weeks and months ahead of a game's release to assure on-time delivery.

Collecting and preservation

Players of video games often maintain collections of games. More recently there has been interest in retrogaming, focusing on games from the industry's first decades. Games in retail packaging in good shape have become collectors' items for the early days of the industry, with some rare publications having sold for over $100,000 as of 2020. Separately, there is also concern about the preservation of video games, as both game media and the hardware to play them degrade over time. Further, many of the game developers and publishers from the first decades no longer exist, so records of their games have disappeared. Archivists and preservationists have worked within the scope of copyright law to save these games as part of the cultural history of the industry.

There are many video game museums around the world, including the National Videogame Museum in Frisco, Texas, which serves as the largest museum wholly dedicated to the display and preservation of the industry's most important artifacts. Europe hosts video game museums such as the Computer Games Museum in Berlin and the Museum of Soviet Arcade Machines in Moscow and Saint Petersburg. The Museum of Art and Digital Entertainment in Oakland, California is a dedicated video game museum focusing on playable exhibits of console and computer games. The Video Game Museum of Rome is also dedicated to preserving video games and their history. The International Center for the History of Electronic Games at The Strong in Rochester, New York contains one of the largest collections of electronic games and game-related historical materials in the world, including an exhibit which allows guests to play their way through the history of video games. The Smithsonian Institution in Washington, DC has three video games on permanent display: Pac-Man, Dragon's Lair, and Pong. The Museum of Modern Art has added a total of 20 video games and one video game console to its permanent Architecture and Design Collection since 2012. In 2012, the Smithsonian American Art Museum ran an exhibition on "The Art of Video Games". However, the reviews of the exhibit were mixed, including questioning whether video games belong in an art museum.
See also

Lists of video games
List of accessories to video games by system
Outline of video games

Notes

References

Sources

Further reading

The Ultimate History of Video Games: From Pong to Pokemon – The Story Behind the Craze That Touched Our Lives and Changed the World by Steven L. Kent, Crown, 2001
The Ultimate History of Video Games, Volume 2: Nintendo, Sony, Microsoft, and the Billion-Dollar Battle to Shape Modern Gaming by Steven L. Kent, Crown, 2021

External links

Video games bibliography by the French video game research association Ludoscience
The Virtual Museum of Computing (VMoC)
Cambrian
The Cambrian Period (sometimes symbolized Ꞓ) is the first geological period of the Paleozoic Era, and of the Phanerozoic Eon. The Cambrian lasted 53.4 million years from the end of the preceding Ediacaran Period 538.8 million years ago (mya) to the beginning of the Ordovician Period 485.4 mya. Its subdivisions, and its base, are somewhat in flux. The period was established as the "Cambrian series" by Adam Sedgwick, who named it after Cambria, the Latin name for 'Cymru' (Wales), where Britain's Cambrian rocks are best exposed. Sedgwick identified the layer as part of his task, along with Roderick Murchison, to subdivide the large "Transition Series", although the two geologists disagreed for a while on the appropriate categorization. The Cambrian is unique in its unusually high proportion of lagerstätte sedimentary deposits, sites of exceptional preservation where "soft" parts of organisms are preserved as well as their more resistant shells. As a result, our understanding of Cambrian biology surpasses that of some later periods.

The Cambrian marked a profound change in life on Earth: prior to the Cambrian, most living organisms were small, unicellular and simple (the Ediacaran fauna being notable exceptions). Complex, multicellular organisms gradually became more common in the millions of years immediately preceding the Cambrian, but it was not until this period that mineralized – hence readily fossilized – organisms became common. The rapid diversification of lifeforms in the Cambrian, known as the Cambrian explosion, produced the first representatives of all modern animal phyla. Phylogenetic analysis has supported the view that before the Cambrian radiation, in the Cryogenian or Tonian, animals (metazoans) evolved monophyletically from a single common ancestor: flagellated colonial protists similar to modern choanoflagellates.

Although diverse life forms prospered in the oceans, the land is thought to have been comparatively barren, with nothing more complex than a microbial soil crust and a few molluscs and arthropods (albeit not terrestrial) that emerged to browse on the microbial biofilm. By the end of the Cambrian, myriapods, arachnids, and hexapods started adapting to the land, along with the first plants. Most of the continents were probably dry and rocky due to a lack of vegetation. Shallow seas flanked the margins of several continents created during the breakup of the supercontinent Pannotia. The seas were relatively warm, and polar ice was absent for much of the period.

Stratigraphy

The Cambrian Period followed the Ediacaran Period and was followed by the Ordovician Period. The base of the Cambrian lies atop a complex assemblage of trace fossils known as the Treptichnus pedum assemblage. The use of Treptichnus pedum, a reference ichnofossil, to mark the lower boundary of the Cambrian is problematic because very similar trace fossils belonging to the Treptichnids group are found well below T. pedum in Namibia, Spain and Newfoundland, and possibly in the western USA. The stratigraphic range of T. pedum overlaps the range of the Ediacaran fossils in Namibia, and probably in Spain.

Subdivisions

The Cambrian is divided into four epochs (series) and ten ages (stages). Currently only three series and six stages are named and have a GSSP (an internationally agreed-upon stratigraphic reference point). Because the international stratigraphic subdivision is not yet complete, many local subdivisions are still widely used.
In some of these subdivisions the Cambrian is divided into three epochs with locally differing names – the Early Cambrian (Caerfai or Waucoban), Middle Cambrian (St Davids or Albertan) and Late Cambrian (Merioneth or Croixan). Trilobite zones allow biostratigraphic correlation in the Cambrian. Rocks of these epochs are referred to as belonging to the Lower, Middle, or Upper Cambrian. Each of the local series is divided into several stages. The Cambrian is also divided into several regional faunal stages, of which the Russian-Kazakhian system is most used in international parlance. Most Russian paleontologists define the lower boundary of the Cambrian at the base of the Tommotian Stage, characterized by the diversification and global distribution of organisms with mineral skeletons and the appearance of the first archaeocyath bioherms.

Dating the Cambrian

The International Commission on Stratigraphy lists the Cambrian Period as beginning 538.8 million years ago and ending 485.4 million years ago. The lower boundary of the Cambrian was originally held to represent the first appearance of complex life, represented by trilobites. The recognition of small shelly fossils before the first trilobites, and Ediacara biota substantially earlier, led to calls for a more precisely defined base to the Cambrian Period.

Despite the long recognition of its distinction from younger Ordovician rocks and older Precambrian rocks, it was not until 1994 that the Cambrian system/period was internationally ratified. After decades of careful consideration, a continuous sedimentary sequence at Fortune Head, Newfoundland was settled upon as a formal base of the Cambrian Period, which was to be correlated worldwide by the earliest appearance of Treptichnus pedum. Discovery of this fossil a few metres below the GSSP led to the refinement of this statement, and it is the T. pedum ichnofossil assemblage that is now formally used to correlate the base of the Cambrian.

This formal designation allowed radiometric dates to be obtained from samples across the globe that corresponded to the base of the Cambrian. Early dates quickly gained favour, though the methods used to obtain them are now considered to be unsuitable and inaccurate. More precise modern radiometric dating has yielded a date of 542.0 ± 0.3 Ma. The ash horizon in Oman from which this date was recovered corresponds to a marked fall in the abundance of carbon-13 that correlates to equivalent excursions elsewhere in the world, and to the disappearance of distinctive Ediacaran fossils (Namacalathus, Cloudina). Nevertheless, there are arguments that the dated horizon in Oman does not correspond to the Ediacaran-Cambrian boundary, but represents a facies change from marine to evaporite-dominated strata – which would mean that dates from other sections, ranging from 544 to 542 Ma, are more suitable.

Paleogeography

Plate reconstructions suggest a global supercontinent, Pannotia, was in the process of breaking up early in the Cambrian, with Laurentia (North America), Baltica, and Siberia having separated from the main supercontinent of Gondwana to form isolated land masses. Most continental land was clustered in the Southern Hemisphere at this time, but was drifting north. Large, high-velocity rotational movement of Gondwana appears to have occurred in the Early Cambrian. With a lack of sea ice – the great glaciers of the Marinoan Snowball Earth were long melted – the sea level was high, which led to large areas of the continents being flooded in warm, shallow seas ideal for sea life.
The sea levels fluctuated somewhat, suggesting there were "ice ages", associated with pulses of expansion and contraction of a south polar ice cap. In Baltoscandia a Lower Cambrian transgression transformed large swathes of the Sub-Cambrian peneplain into an epicontinental sea.

Climate

Glaciers likely existed during the earliest Cambrian at high and possibly even at middle palaeolatitudes, possibly due to the ancient continent of Gondwana covering the South Pole and cutting off polar ocean currents. Middle Terreneuvian deposits, corresponding to the boundary between the Fortunian and Stage 2, show evidence of glaciation. However, other authors believe these very early, pretrilobitic glacial deposits may not even be of Cambrian age at all, but instead date back to the Neoproterozoic, an era characterised by numerous severe icehouse periods. The beginning of Stage 3 was relatively cool, with the period between 521 and 517 Ma being known as the Cambrian Arthropod Radiation Cool Event (CARCE). The Earth was generally very warm during Stage 4; its climate was comparable to the hot greenhouse of the Late Cretaceous and Early Palaeogene, as evidenced by a maximum in continental weathering rates over the last 900 million years and the presence of tropical, lateritic palaeosols at high palaeolatitudes during this time. The Archaeocyathid Extinction Warm Event (AEWE), lasting from 511 to 510.5 Ma, was particularly warm. Another warm event, the Redlichiid-Olenid Extinction Warm Event, occurred at the beginning of Stage 5. It became even warmer towards the end of the period, and sea levels rose dramatically. This warming trend continued into the Early Ordovician, the start of which was characterised by an extremely hot global climate.

Flora

The Cambrian flora was little different from that of the Ediacaran. The principal taxa were the marine macroalgae Fuxianospira, Sinocylindra, and Marpolia. No calcareous macroalgae are known from the period. No land plant (embryophyte) fossils are known from the Cambrian. However, biofilms and microbial mats were well developed on Cambrian tidal flats and beaches 500 mya, and microbes formed microbial Earth ecosystems, comparable with the modern soil crusts of desert regions, contributing to soil formation. Although molecular clock estimates suggest terrestrial plants may have first emerged during the Middle or Late Cambrian, the consequent large-scale removal of the greenhouse gas CO2 from the atmosphere through sequestration did not begin until the Ordovician.

Oceanic life

The Cambrian explosion was a period of rapid multicellular growth. Most animal life during the Cambrian was aquatic. Trilobites were once assumed to be the dominant life form at that time, but this has proven to be incorrect. Arthropods were by far the most dominant animals in the ocean, but trilobites were only a minor part of the total arthropod diversity. What made them so apparently abundant was their heavy armor reinforced by calcium carbonate (CaCO3), which fossilized far more easily than the fragile chitinous exoskeletons of other arthropods, leaving numerous preserved remains.

The period marked a steep change in the diversity and composition of Earth's biosphere. The Ediacaran biota suffered a mass extinction at the start of the Cambrian Period, which corresponded with an increase in the abundance and complexity of burrowing behaviour. This behaviour had a profound and irreversible effect on the substrate, which transformed the seabed ecosystems.
Before the Cambrian, the sea floor was covered by microbial mats. By the end of the Cambrian, burrowing animals had destroyed the mats in many areas through bioturbation. As a consequence, many of those organisms that were dependent on the mats became extinct, while the other species adapted to the changed environment, which now offered new ecological niches. Around the same time there was a seemingly rapid appearance of representatives of all the mineralized phyla, including the Bryozoa, which were once thought to have appeared only in the Lower Ordovician. However, many of those phyla were represented only by stem-group forms; and since mineralized phyla generally have a benthic origin, they may not be a good proxy for (more abundant) non-mineralized phyla.

While the early Cambrian showed such diversification that it has been named the Cambrian Explosion, this changed later in the period, when a sharp drop in biodiversity occurred. About 515 million years ago, the number of species going extinct exceeded the number of new species appearing. Five million years later, the number of genera had dropped from an earlier peak of about 600 to just 450. Also, the speciation rate in many groups was reduced to between a fifth and a third of previous levels. About 500 million years ago, oxygen levels fell dramatically in the oceans, leading to hypoxia, while the level of poisonous hydrogen sulfide simultaneously increased, causing another extinction. The latter half of the Cambrian was surprisingly barren and showed evidence of several rapid extinction events; the stromatolites, which had been replaced by reef-building sponges known as Archaeocyatha, returned once more as the archaeocyathids became extinct. This declining trend did not change until the Great Ordovician Biodiversification Event.

Some Cambrian organisms ventured onto land, producing the trace fossils Protichnites and Climactichnites. Fossil evidence suggests that euthycarcinoids, an extinct group of arthropods, produced at least some of the Protichnites. Fossils of the track-maker of Climactichnites have not been found; however, fossil trackways and resting traces suggest a large, slug-like mollusc.

In contrast to later periods, the Cambrian fauna was somewhat restricted; free-floating organisms were rare, with the majority living on or close to the sea floor; and mineralizing animals were rarer than in future periods, in part due to the unfavourable ocean chemistry. Many modes of preservation are unique to the Cambrian, and some preserve soft body parts, resulting in an abundance of Lagerstätten. These include Sirius Passet, the Sinsk Algal Lens, the Maotianshan Shales, the Emu Bay Shale, and the Burgess Shale.

Symbol

The United States Federal Geographic Data Committee uses a "barred capital C" character to represent the Cambrian Period. The Unicode character is U+A792 Ꞓ LATIN CAPITAL LETTER C WITH BAR.
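A minimal Python sketch of emitting this symbol from its code point (the variable name is illustrative, and the sketch assumes the character is U+A792 as stated above):

```python
# Emit the Cambrian "barred capital C" from its Unicode code point.
# Note: fonts lacking Latin Extended-D coverage may render a placeholder box.
cambrian_symbol = "\uA792"
print(cambrian_symbol)                   # Ꞓ
print(f"U+{ord(cambrian_symbol):04X}")   # U+A792
```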
See also

Cambrian–Ordovician extinction event – circa 488 mya
Dresbachian extinction event – circa 499 mya
End Botomian extinction event – circa 513 mya
List of fossil sites (with link directory)
Type locality (geology), the locality where a particular rock type, stratigraphic unit, fossil or mineral species is first identified

References

Further reading

External links

Biostratigraphy – includes information on Cambrian trilobite biostratigraphy
Dr. Sam Gon's trilobite pages (contains numerous Cambrian trilobites)
Examples of Cambrian Fossils
Paleomap Project
Report on the web on Amthor and others from Geology vol. 31
Weird Life on the Mats
Chronostratigraphy scale v.2018/08
https://en.wikipedia.org/wiki/Theory%20of%20Categories
Theory of Categories
In ontology, the theory of categories concerns itself with the categories of being: the highest genera or kinds of entities, according to Amie Thomasson. To investigate the categories of being, or simply categories, is to determine the most fundamental and the broadest classes of entities. A distinction between such categories, in making the categories or applying them, is called an ontological distinction. Various systems of categories have been proposed; they often include categories for substances, properties, relations, states of affairs or events. A representative question within the theory of categories is, for example: "Are universals prior to particulars?"

Early development

The process of abstraction required to discover the number and names of the categories of being has been undertaken by many philosophers since Aristotle and involves the careful inspection of each concept to ensure that there is no higher category or categories under which that concept could be subsumed. The scholars of the twelfth and thirteenth centuries developed Aristotle's ideas. For example, Gilbert of Poitiers divides Aristotle's ten categories into two sets, primary and secondary, according to whether they inhere in the subject or not:

Primary categories: Substance, Relation, Quantity and Quality
Secondary categories: Place, Time, Situation, Condition, Action, Passion

Furthermore, following Porphyry's likening of the classificatory hierarchy to a tree, they concluded that the major classes could be subdivided to form subclasses; for example, Substance could be divided into Genus and Species, and Quality could be subdivided into Property and Accident, depending on whether the property was necessary or contingent.

An alternative line of development was taken by Plotinus in the third century, who by a process of abstraction reduced Aristotle's list of ten categories to five: Substance, Relation, Quantity, Motion and Quality. Plotinus further suggested that the latter three categories of his list, namely Quantity, Motion and Quality, correspond to three different kinds of relation and that these three categories could therefore be subsumed under the category of Relation. This was to lead to the supposition that there were only two categories at the top of the hierarchical tree, namely Substance and Relation. Many supposed that relations only exist in the mind. Substance and Relation, then, are closely commutative with Mind and Matter; this is expressed most clearly in the dualism of René Descartes.

Aristotle

One of Aristotle's early interests lay in the classification of the natural world, how for example the genus "animal" could be first divided into "two-footed animal" and then into "wingless, two-footed animal". He realised that the distinctions were being made according to the qualities the animal possesses, the quantity of its parts and the kind of motion that it exhibits. To fully complete the proposition "this animal is ...", Aristotle stated in his work on the Categories that there were ten kinds of predicate, where "... each signifies either substance or quantity or quality or relation or where or when or being-in-a-position or having or acting or being acted upon". He realised that predicates could be simple or complex. The simple kinds consist of a subject and a predicate linked together by the "categorical" or inherent type of relation.
For Aristotle the more complex kinds were limited to propositions where the predicate is compounded of two of the above categories, for example "this is a horse running". More complex kinds of proposition were only discovered after Aristotle by the Stoic Chrysippus, who developed the "hypothetical" and "disjunctive" types of syllogism, terms which were to be developed through the Middle Ages and were to reappear in Kant's system of categories.

The term category came into use with Aristotle's essay Categories, in which he discussed univocal and equivocal terms, predication, and ten categories:

Substance, essence (ousia) – examples of primary substance: this man, this horse; secondary substance (species, genera): man, horse
Quantity (poson, how much), discrete or continuous – examples: two cubits long, number, space, (length of) time
Quality (poion, of what kind or description) – examples: white, black, grammatical, hot, sweet, curved, straight
Relation (pros ti, toward something) – examples: double, half, large, master, knowledge
Place (pou, where) – examples: in a marketplace, in the Lyceum
Time (pote, when) – examples: yesterday, last year
Position, posture, attitude (keisthai, to lie) – examples: sitting, lying, standing
State, condition (echein, to have or be) – examples: shod, armed
Action (poiein, to make or do) – examples: to lance, to heat, to cool (something)
Affection, passion (paschein, to suffer or undergo) – examples: to be lanced, to be heated, to be cooled

Plotinus

Plotinus, in writing his Enneads around AD 250, recorded that "philosophy at a very early age investigated the number and character of the existents ... some found ten, others less .... to some the genera were the first principles, to others only a generic classification of existents". He realised that some categories were reducible to others, saying "why are not Beauty, Goodness and the virtues, Knowledge and Intelligence included among the primary genera?" He concluded that such transcendental categories, and even the categories of Aristotle, were in some way posterior to the three Eleatic categories first recorded in Plato's dialogue Parmenides, which comprised the following three coupled terms:

Unity/Plurality
Motion/Stability
Identity/Difference

Plotinus called these "the hearth of reality", deriving from them not only the three categories of Quantity, Motion and Quality but also what came to be known as "the three moments of the Neoplatonic world process":

First, there existed the "One", and his view that "the origin of things is a contemplation"
The Second "is certainly an activity ... a secondary phase ... life streaming from life ... energy running through the universe"
The Third is some kind of Intelligence concerning which he wrote "Activity is prior to Intellection ... and self knowledge"

Plotinus likened the three to the centre, the radii and the circumference of a circle, and clearly thought that the principles underlying the categories were the first principles of creation. "From a single root all being multiplies". Similar ideas were to be introduced into Early Christian thought by, for example, Gregory of Nazianzus, who summed it up saying "Therefore, Unity, having from all eternity arrived by motion at duality, came to rest in trinity".

Modern development

This early modern dualism of Mind and Matter, or Substance and Relation, as reflected in the writings of Descartes, underwent a substantial revision in the late 18th century.
The first objections to this stance were formulated in the eighteenth century by Immanuel Kant, who realised that we can say nothing about Substance except through the relation of the subject to other things. For example: in the sentence "This is a house", the substantive subject "house" only gains meaning in relation to human use patterns or to other similar houses. The category of Substance disappears from Kant's tables, and under the heading of Relation, Kant lists inter alia the three relationship types of Disjunction, Causality and Inherence. The three older concepts of Quantity, Motion and Quality, as Peirce discovered, could be subsumed under these three broader headings in that Quantity relates to the subject through the relation of Disjunction; Motion relates to the subject through the relation of Causality; and Quality relates to the subject through the relation of Inherence.

Sets of three continued to play an important part in the nineteenth-century development of the categories, most notably in G.W.F. Hegel's extensive tabulation of categories and in C.S. Peirce's categories set out in his work on the logic of relations. One of Peirce's contributions was to call the three primary categories Firstness, Secondness and Thirdness, which both emphasises their general nature and avoids the confusion of having the same name for both the category itself and for a concept within that category.

In a separate development, and building on the notion of primary and secondary categories introduced by the Scholastics, Kant introduced the idea that secondary or "derivative" categories could be derived from the primary categories through the combination of one primary category with another. This would result in the formation of three secondary categories: the first, "Community", was an example that Kant gave of such a derivative category; the second, "Modality", introduced by Kant, was a term which Hegel, in developing Kant's dialectical method, showed could also be seen as a derivative category; and the third, "Spirit" or "Will", were terms that Hegel and Schopenhauer were developing separately for use in their own systems. Karl Jaspers in the twentieth century, in his development of existential categories, brought the three together, allowing for differences in terminology, as Substantiality, Communication and Will. This pattern of three primary and three secondary categories was used most notably in the nineteenth century by Peter Mark Roget to form the six headings of his Thesaurus of English Words and Phrases. The headings used were the three objective categories of Abstract Relation, Space (including Motion) and Matter, and the three subjective categories of Intellect, Feeling and Volition, and he found that under these six headings all the words of the English language, and hence any possible predicate, could be assembled.

Kant

In the Critique of Pure Reason (1781), Immanuel Kant argued that the categories are part of our own mental structure and consist of a set of a priori concepts through which we interpret the world around us. These concepts correspond to twelve logical functions of the understanding which we use to make judgements, and there are therefore two tables given in the Critique, one of the Judgements and a corresponding one for the Categories. To give an example, the logical function behind our reasoning from ground to consequence (based on the Hypothetical relation) underlies our understanding of the world in terms of cause and effect (the Causal relation).
In each table the number twelve arises from, firstly, an initial division into two: the Mathematical and the Dynamical; a second division of each of these headings into a further two: Quantity and Quality, and Relation and Modality respectively; and, thirdly, each of these then divides into a further three subheadings as follows.

Table of Judgements
Mathematical
Quantity: Universal, Particular, Singular
Quality: Affirmative, Negative, Infinite
Dynamical
Relation: Categorical, Hypothetical, Disjunctive
Modality: Problematic, Assertoric, Apodictic

Table of Categories
Mathematical
Quantity: Unity, Plurality, Totality
Quality: Reality, Negation, Limitation
Dynamical
Relation: Inherence and Subsistence (substance and accident); Causality and Dependence (cause and effect); Community (reciprocity)
Modality: Possibility, Existence, Necessity

Criticism of Kant's system followed: firstly, from Arthur Schopenhauer, who amongst other things was unhappy with the term "Community" and declared that the tables "do open violence to truth, treating it as nature was treated by old-fashioned gardeners"; and secondly, from W. T. Stace, who in his book The Philosophy of Hegel suggested that in order to make Kant's structure completely symmetrical a third category would need to be added to the Mathematical and the Dynamical. This, he said, Hegel was to do with his category of the Notion.

Hegel

G.W.F. Hegel in his Science of Logic (1812) attempted to provide a more comprehensive system of categories than Kant and developed a structure that was almost entirely triadic. So important were the categories to Hegel that he claimed "the first principle of the world, the Absolute, is a system of categories ... the categories must be the reason of which the world is a consequent". Using his own logical method of combination, later to be called the Hegelian dialectic, of arguing from thesis through antithesis to synthesis, he arrived, as shown in W. T. Stace's work cited, at a hierarchy of some 270 categories. The three very highest categories were Logic, Nature and Spirit. The three highest categories of Logic, however, he called Being, Essence and Notion, which he explained as follows:

Being was differentiated from Nothing by containing within it the concept of the "Other", an initial internal division that can be compared with Kant's category of Disjunction. Stace called the category of Being the sphere of common sense, containing concepts such as consciousness, sensation, quantity, quality and measure.
Essence. The "Other" separates itself from the "One" by a kind of motion, reflected in Hegel's first synthesis of "Becoming". For Stace this category represented the sphere of science, containing within it firstly the thing, its form and properties; secondly cause, effect and reciprocity; and thirdly the principles of classification, identity and difference.
Notion. Having passed over into the "Other" there is an almost Neoplatonic return into a higher unity that, in embracing the "One" and the "Other", enables them to be considered together through their inherent qualities. This, according to Stace, is the sphere of philosophy proper, where we find not only the three types of logical proposition: Disjunctive, Hypothetical and Categorical, but also the three transcendental concepts of Beauty, Goodness and Truth.

Schopenhauer's category that corresponded with Notion was that of Idea, which in his "Fourfold Root of Sufficient Reason" he complemented with the category of the Will. The title of his major work was "The World as Will and Idea".
The two other complementary categories, reflecting one of Hegel's initial divisions, were those of Being and Becoming. At around the same time, Goethe was developing his colour theories in the Farbenlehre of 1810, and introduced similar principles of combination and complementation, symbolising, for Goethe, "the primordial relations which belong both to nature and vision". Hegel in his Science of Logic accordingly asks us to see his system not as a tree but as a circle.

Twentieth-century development

In the twentieth century the primacy of the division between the subjective and the objective, or between mind and matter, was disputed by, among others, Bertrand Russell and Gilbert Ryle. Philosophy began to move away from the metaphysics of categorisation towards the linguistic problem of trying to differentiate between, and define, the words being used. Ludwig Wittgenstein's conclusion was that there were no clear definitions which we can give to words and categories but only a "halo" or "corona" of related meanings radiating around each term. Gilbert Ryle thought the problem could be seen in terms of dealing with "a galaxy of ideas" rather than a single idea, and suggested that category mistakes are made when a concept (e.g. "university"), understood as falling under one category (e.g. abstract idea), is used as though it falls under another (e.g. physical object).

With regard to the visual analogies being used, Peirce and Lewis, just like Plotinus earlier, likened the terms of propositions to points, and the relations between the terms to lines. Peirce, taking this further, talked of univalent, bivalent and trivalent relations linking predicates to their subject, and it is just the number and types of relation linking subject and predicate that determine the category into which a predicate might fall. Primary categories contain concepts where there is one dominant kind of relation to the subject. Secondary categories contain concepts where there are two dominant kinds of relation. Examples of the latter were given by Heidegger in his two propositions "the house is on the creek", where the two dominant relations are spatial location (Disjunction) and cultural association (Inherence), and "the house is eighteenth century", where the two relations are temporal location (Causality) and cultural quality (Inherence). A third example may be inferred from Kant in the proposition "the house is impressive or sublime", where the two relations are spatial or mathematical disposition (Disjunction) and dynamic or motive power (Causality).

Both Peirce and Wittgenstein introduced the analogy of colour theory in order to illustrate the shades of meanings of words. Primary categories, like primary colours, are analytical, representing the furthest we can go in terms of analysis and abstraction, and include Quantity, Motion and Quality. Secondary categories, like secondary colours, are synthetic and include concepts such as Substance, Community and Spirit. Apart from these, the categorial schemes of Alfred North Whitehead (in his Process Philosophy) and Nicolai Hartmann (in his Critical Realism) remain among the most detailed and advanced systems in categorial research in metaphysics.

Peirce

Charles Sanders Peirce, who had read Kant and Hegel closely, and who also had some knowledge of Aristotle, proposed a system of merely three phenomenological categories: Firstness, Secondness, and Thirdness, which he repeatedly invoked in his subsequent writings.
Like Hegel, C.S. Peirce attempted to develop a system of categories from a single indisputable principle, in Peirce's case the notion that in the first instance he could only be aware of his own ideas. "It seems that the true categories of consciousness are first, feeling ... second, a sense of resistance ... and third, synthetic consciousness, or thought". Elsewhere he called the three primary categories Quality, Reaction and Meaning, and even Firstness, Secondness and Thirdness, saying, "perhaps it is not right to call these categories conceptions, they are so intangible that they are rather tones or tints upon conceptions":

Firstness (Quality): "The first is predominant in feeling ... we must think of a quality without parts, e.g. the colour of magenta ... When I say it is a quality I do not mean that it "inheres" in a subject ... The whole content of consciousness is made up of qualities of feeling, as truly as the whole of space is made up of points, or the whole of time by instants".
Secondness (Reaction): "This is present even in such a rudimentary fragment of experience as a simple feeling ... an action and reaction between our soul and the stimulus ... The idea of second is predominant in the ideas of causation and of statical force ... the real is active; we acknowledge it by calling it the actual".
Thirdness (Meaning): "Thirdness is essentially of a general nature ... ideas in which thirdness predominate [include] the idea of a sign or representation ... Every genuine triadic relation involves meaning ... the idea of meaning is irreducible to those of quality and reaction ... synthetical consciousness is the consciousness of a third or medium".

Although Peirce's three categories correspond to the three concepts of relation given in Kant's tables, the sequence is now reversed and follows that given by Hegel, and indeed, before Hegel, that of the three moments of the world-process given by Plotinus. Later, Peirce gave a mathematical reason for there being three categories: although monadic, dyadic and triadic nodes are irreducible, every node of a higher valency is reducible to a "compound of triadic relations". Ferdinand de Saussure, who was developing "semiology" in France just as Peirce was developing "semiotics" in the US, likened each term of a proposition to "the centre of a constellation, the point where other coordinate terms, the sum of which is indefinite, converge".

Others

Edmund Husserl (1962, 2000) wrote extensively about categorial systems as part of his phenomenology. For Gilbert Ryle (1949), a category (in particular a "category mistake") is an important semantic concept, but one having only loose affinities to an ontological category. Contemporary systems of categories have been proposed by John G. Bennett (The Dramatic Universe, 4 vols., 1956–65), Wilfrid Sellars (1974), Reinhardt Grossmann (1983, 1992), Ingvar Johansson (1989), Hoffman and Rosenkrantz (1994), Roderick Chisholm (1996), Barry Smith (2003), and Jonathan Lowe (2006).

See also

Categories (Aristotle)
Categories (Peirce)
Categories (Stoic)
Category (Kant)
Metaphysics
Modal logic
Ontology
Schema (Kant)
Similarity (philosophy)

References

Selected bibliography

Aristotle, 1953. Metaphysics. Ross, W. D., trans. Oxford University Press.
Aristotle, 2004. Categories. Edghill, E. M., trans. Uni. of Adelaide library.
John G. Bennett, 1956–1965. The Dramatic Universe. London: Hodder & Stoughton.
Gustav Bergmann, 1992. New Foundations of Ontology. Madison: Uni. of Wisconsin Press.
Browning, Douglas, 1990. Ontology and the Practical Arena. Pennsylvania State Uni.
Butchvarov, Panayot, 1979. Being qua Being: A Theory of Identity, Existence, and Predication. Indiana Uni. Press.
Roderick Chisholm, 1996. A Realistic Theory of Categories. Cambridge Uni. Press.
Feibleman, James Kern, 1951. Ontology. The Johns Hopkins Press (reprinted 1968, Greenwood Press, New York).
Grossmann, Reinhardt, 1983. The Categorial Structure of the World. Indiana Uni. Press.
Grossmann, Reinhardt, 1992. The Existence of the World: An Introduction to Ontology. Routledge.
Haaparanta, Leila and Koskinen, Heikki J., 2012. Categories of Being: Essays on Metaphysics and Logic. New York: Oxford University Press.
Hoffman, J., and Rosenkrantz, G. S., 1994. Substance among other Categories. Cambridge Uni. Press.
Edmund Husserl, 1962. Ideas: General Introduction to Pure Phenomenology. Boyce Gibson, W. R., trans. Collier.
Edmund Husserl, 2000. Logical Investigations, 2nd ed. Findlay, J. N., trans. Routledge.
Johansson, Ingvar, 1989. Ontological Investigations. Routledge; 2nd ed., Ontos Verlag, 2004.
Kahn, Charles H., 2009. Essays on Being. Oxford University Press.
Immanuel Kant, 1998. Critique of Pure Reason. Guyer, Paul, and Wood, A. W., trans. Cambridge Uni. Press.
Charles Sanders Peirce, 1992, 1998. The Essential Peirce, vols. 1–2. Houser, Nathan et al., eds. Indiana Uni. Press.
Gilbert Ryle, 1949. The Concept of Mind. Uni. of Chicago Press.
Wilfrid Sellars, 1974. "Toward a Theory of the Categories" in Essays in Philosophy and Its History. Reidel.
Barry Smith, 2003. "Ontology" in Blackwell Guide to the Philosophy of Computing and Information. Blackwell.

External links

Aristotle's Categories at MIT.
"Ontological Categories and How to Use Them" – Amie Thomasson.
"Recent Advances in Metaphysics" – E. J. Lowe.
Theory and History of Ontology – Raul Corazzon.
https://en.wikipedia.org/wiki/Condom
Condom
A condom is a sheath-shaped barrier device used during sexual intercourse to reduce the probability of pregnancy or a sexually transmitted infection (STI). There are both male and female condoms. With proper use, and use at every act of intercourse, women whose partners use male condoms experience a 2% per-year pregnancy rate. With typical use, the rate of pregnancy is 18% per-year. Their use greatly decreases the risk of gonorrhea, chlamydia, trichomoniasis, hepatitis B, and HIV/AIDS. To a lesser extent, they also protect against genital herpes, human papillomavirus (HPV), and syphilis.

The male condom is rolled onto an erect penis before intercourse and works by forming a physical barrier which blocks semen from entering the body of a sexual partner. Male condoms are typically made from latex and, less commonly, from polyurethane, polyisoprene, or lamb intestine. Male condoms have the advantages of ease of use, ease of access, and few side effects. Individuals with latex allergy should use condoms made from a material other than latex, such as polyurethane. Female condoms are typically made from polyurethane and may be used multiple times.

Condoms as a method of preventing STIs have been used since at least 1564. Rubber condoms became available in 1855, followed by latex condoms in the 1920s. The condom is on the World Health Organization's List of Essential Medicines. As of 2019, globally around 21% of those using birth control use the condom, making it the second-most common method after female sterilization (24%). Rates of condom use are highest in East and Southeast Asia, Europe and North America. About six to nine billion are sold a year.

Medical uses

Birth control

The effectiveness of condoms, as of most forms of contraception, can be assessed two ways. Perfect use or method effectiveness rates include only people who use condoms properly and consistently. Actual use, or typical use, effectiveness rates cover all condom users, including those who use condoms incorrectly or do not use condoms at every act of intercourse. Rates are generally presented for the first year of use. Most commonly the Pearl Index is used to calculate effectiveness rates, but some studies use decrement tables. The typical use pregnancy rate among condom users varies depending on the population being studied, ranging from 10 to 18% per year. The perfect use pregnancy rate of condoms is 2% per year. Condoms may be combined with other forms of contraception (such as spermicide) for greater protection.

Sexually transmitted infections

Condoms are widely recommended for the prevention of sexually transmitted infections (STIs). They have been shown to be effective in reducing infection rates in both men and women. While not perfect, the condom is effective at reducing the transmission of organisms that cause AIDS, genital herpes, cervical cancer, genital warts, syphilis, chlamydia, gonorrhea, and other diseases. Condoms are often recommended as an adjunct to more effective birth control methods (such as an IUD) in situations where STD protection is also desired. For this reason, condoms are frequently used by those in the swinging community. According to a 2000 report by the National Institutes of Health (NIH), consistent use of latex condoms reduces the risk of HIV transmission by approximately 85% relative to risk when unprotected, putting the seroconversion rate (infection rate) at 0.9 per 100 person-years with condoms, down from 6.7 per 100 person-years.
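The arithmetic behind these figures is simple enough to sketch. The following Python fragment (function names are illustrative, not from any cited source) computes a Pearl Index and the relative risk reduction implied by the two seroconversion rates quoted above; it is a worked example under the stated assumptions, not a clinical tool:

```python
def pearl_index(pregnancies: int, woman_months_of_exposure: int) -> float:
    """Pregnancies per 100 woman-years; 100 woman-years = 1200 woman-months."""
    return pregnancies * 1200 / woman_months_of_exposure

def relative_risk_reduction(rate_with: float, rate_without: float) -> float:
    """Fractional reduction in incidence attributable to the intervention."""
    return 1 - rate_with / rate_without

# 100 couples relying on condoms for one year with 18 pregnancies observed
# reproduces the 18% per-year typical-use figure:
print(pearl_index(pregnancies=18, woman_months_of_exposure=1200))  # 18.0

# NIH seroconversion rates: 0.9 per 100 person-years with condoms versus
# 6.7 without; 1 - 0.9/6.7 is about 0.87, matching the "approximately 85%"
# risk reduction quoted above:
print(round(relative_risk_reduction(0.9, 6.7), 2))  # 0.87
```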
Analysis published in 2007 from the University of Texas Medical Branch and the World Health Organization found similar risk reductions of 80–95%. The 2000 NIH review concluded that condom use significantly reduces the risk of gonorrhea for men. A 2006 study reported that proper condom use decreases the risk of transmission of human papillomavirus (HPV) to women by approximately 70%. Another study in the same year found consistent condom use was effective at reducing transmission of herpes simplex virus-2, also known as genital herpes, in both men and women. Although a condom is effective in limiting exposure, some disease transmission may occur even with a condom. Infectious areas of the genitals, especially when symptoms are present, may not be covered by a condom, and as a result, some diseases like HPV and herpes may be transmitted by direct contact. The primary effectiveness issue with using condoms to prevent STDs, however, is inconsistent use.

Condoms may also be useful in treating potentially precancerous cervical changes. Exposure to human papillomavirus, even in individuals already infected with the virus, appears to increase the risk of precancerous changes. The use of condoms helps promote regression of these changes. In addition, researchers in the UK suggest that a hormone in semen can aggravate existing cervical cancer; condom use during sex can prevent exposure to the hormone.

Causes of failure

Condoms may slip off the penis after ejaculation, break due to improper application or physical damage (such as tears caused when opening the package), or break or slip due to latex degradation (typically from usage past the expiration date, improper storage, or exposure to oils). The rate of breakage is between 0.4% and 2.3%, while the rate of slippage is between 0.6% and 1.3%. Even if no breakage or slippage is observed, 1–3% of women will test positive for semen residue after intercourse with a condom. Failure rates are higher for anal sex, and until 2022, condoms were approved by the FDA only for vaginal sex. The One Male Condom received FDA approval for anal sex on February 23, 2022. "Double bagging", using two condoms at once, is often believed to cause a higher rate of failure due to the friction of rubber on rubber. This claim is not supported by research. The limited studies that have been done found that the simultaneous use of multiple condoms decreases the risk of condom breakage.

Different modes of condom failure result in different levels of semen exposure. If a failure occurs during application, the damaged condom may be disposed of and a new condom applied before intercourse begins – such failures generally pose no risk to the user. One study found that semen exposure from a broken condom was about half that of unprotected intercourse; semen exposure from a slipped condom was about one-fifth that of unprotected intercourse.

Standard condoms will fit almost any penis, with varying degrees of comfort or risk of slippage. Many condom manufacturers offer "snug" or "magnum" sizes. Some manufacturers also offer custom sized-to-fit condoms, with claims that they are more reliable and offer improved sensation/comfort. Some studies have associated larger penises and smaller condoms with increased breakage and decreased slippage rates (and vice versa), but other studies have been inconclusive. It is recommended that condom manufacturers avoid very thick or very thin condoms, because both are considered less effective.
Some authors encourage users to choose thinner condoms "for greater durability, sensation, and comfort", but others warn that "the thinner the condom, the smaller the force required to break it". Experienced condom users are significantly less likely to have a condom slip or break compared to first-time users, although users who experience one slippage or breakage are more likely to suffer a second such failure. An article in Population Reports suggests that education on condom use reduces behaviors that increase the risk of breakage and slippage. A Family Health International publication also offers the view that education can reduce the risk of breakage and slippage, but emphasizes that more research needs to be done to determine all of the causes of breakage and slippage.

Among people who intend condoms to be their form of birth control, pregnancy may occur when the user has sex without a condom. The person may have run out of condoms, or be traveling and not have a condom with them, or dislike the feel of condoms and decide to "take a chance". This behavior is the primary cause of typical use failure (as opposed to method or perfect use failure).

Another possible cause of condom failure is sabotage. One motive is to have a child against a partner's wishes or consent. Some commercial sex workers from Nigeria reported clients sabotaging condoms in retaliation for being coerced into condom use. Using a fine needle to make several pinholes at the tip of a condom is believed to significantly impair its effectiveness. Cases of such condom sabotage have occurred.

Side effects

The use of latex condoms by people with an allergy to latex can cause allergic symptoms, such as skin irritation. In people with severe latex allergies, using a latex condom can potentially be life-threatening. Repeated use of latex condoms can also cause the development of a latex allergy in some people. Irritation may also occur due to spermicides that may be present.

Use

Male condoms are usually packaged inside a foil or plastic wrapper, in a rolled-up form, and are designed to be applied to the tip of the penis and then unrolled over the erect penis. It is important that some space be left in the tip of the condom so that semen has a place to collect; otherwise it may be forced out of the base of the device. Most condoms have a teat end for this purpose. After use, it is recommended the condom be wrapped in tissue or tied in a knot, then disposed of in a trash receptacle. Condoms are used to reduce the likelihood of pregnancy during intercourse and to reduce the likelihood of contracting sexually transmitted infections (STIs). Condoms are also used during fellatio to reduce the likelihood of contracting STIs.

Some couples find that putting on a condom interrupts sex, although others incorporate condom application as part of their foreplay. Some men and women find the physical barrier of a condom dulls sensation. Advantages of dulled sensation can include prolonged erection and delayed ejaculation; disadvantages might include a loss of some sexual excitement. Advocates of condom use also cite their advantages of being inexpensive, easy to use, and having few side effects.

Adult film industry

In 2012 proponents gathered 372,000 voter signatures through a citizens' initiative in Los Angeles County to put Measure B on the 2012 ballot. As a result, Measure B, a law requiring the use of condoms in the production of pornographic films, was passed.
This requirement has received much criticism and is said by some to be counter-productive, merely forcing companies that make pornographic films to relocate to other places without this requirement. Producers claim that condom use depresses sales.

Sex education

Condoms are often used in sex education programs, because they have the capability to reduce the chances of pregnancy and the spread of some sexually transmitted diseases when used correctly. A recent American Psychological Association (APA) press release supported the inclusion of information about condoms in sex education, saying "comprehensive sexuality education programs ... discuss the appropriate use of condoms", and "promote condom use for those who are sexually active." In the United States, teaching about condoms in public schools is opposed by some religious organizations. Planned Parenthood, which advocates family planning and sex education, argues that no studies have shown abstinence-only programs to result in delayed intercourse, and cites surveys showing that 76% of American parents want their children to receive comprehensive sexuality education including condom use.

Infertility treatment

Common procedures in infertility treatment such as semen analysis and intrauterine insemination (IUI) require collection of semen samples. These are most commonly obtained through masturbation, but an alternative to masturbation is use of a special collection condom to collect semen during sexual intercourse. Collection condoms are made from silicone or polyurethane, as latex is somewhat harmful to sperm. Some men prefer collection condoms to masturbation, and some religions prohibit masturbation entirely. Also, compared with samples obtained from masturbation, semen samples from collection condoms have higher total sperm counts, sperm motility, and percentage of sperm with normal morphology. For this reason, they are believed to give more accurate results when used for semen analysis, and to improve the chances of pregnancy when used in procedures such as intracervical or intrauterine insemination. Adherents of religions that prohibit contraception, such as Catholicism, may use collection condoms with holes pricked in them.

For fertility treatments, a collection condom may be used to collect semen during sexual intercourse where the semen is provided by the woman's partner. Private sperm donors may also use a collection condom to obtain samples through masturbation or by sexual intercourse with a partner and will transfer the ejaculate from the collection condom to a specially designed container. The sperm is transported in such containers, in the case of a donor, to a recipient woman to be used for insemination, and in the case of a woman's partner, to a fertility clinic for processing and use. However, transportation may reduce the fecundity of the sperm. Collection condoms may also be used where semen is produced at a sperm bank or fertility clinic.

Condom therapy is sometimes prescribed to infertile couples when the female has high levels of antisperm antibodies. The theory is that preventing exposure to her partner's semen will lower her level of antisperm antibodies, and thus increase her chances of pregnancy when condom therapy is discontinued. However, condom therapy has not been shown to increase subsequent pregnancy rates.

Other uses

Condoms excel as multipurpose containers and barriers because they are waterproof, elastic, durable, and (for military and espionage uses) will not arouse suspicion if found.
Ongoing military utilization began during World War II, and includes covering the muzzles of rifle barrels to prevent fouling, the waterproofing of firing assemblies in underwater demolitions, and storage of corrosive materials and garrotes by paramilitary agencies. Condoms have also been used to smuggle alcohol, cocaine, heroin, and other drugs across borders and into prisons by filling the condom with drugs, tying it in a knot and then either swallowing it or inserting it into the rectum. These methods are very dangerous and potentially lethal; if the condom breaks, the drugs inside become absorbed into the bloodstream and can cause an overdose. Medically, condoms can be used to cover endovaginal ultrasound probes, or in field chest needle decompressions they can be used to make a one-way valve. Condoms have also been used to protect scientific samples from the environment, and to waterproof microphones for underwater recording.

Types

Most condoms have a reservoir tip or teat end, making it easier to accommodate the man's ejaculate. Condoms come in different sizes and shapes. They also come in a variety of surfaces intended to stimulate the user's partner. Condoms are usually supplied with a lubricant coating to facilitate penetration, while flavored condoms are principally used for oral sex. As mentioned above, most condoms are made of latex, but polyurethane and lambskin condoms also exist.

Female condom

Male condoms have a tight ring to form a seal around the penis, while female condoms usually have a large stiff ring to prevent them from slipping into the body orifice. The Female Health Company produced a female condom that was initially made of polyurethane, but newer versions are made of nitrile rubber. Medtech Products produces a female condom made of latex.

Materials

Natural latex

Latex has outstanding elastic properties: its tensile strength exceeds 30 MPa, and latex condoms may be stretched in excess of 800% before breaking. In 1990 the ISO set standards for condom production (ISO 4074, Natural latex rubber condoms), and the EU followed suit with its CEN standard (Directive 93/42/EEC concerning medical devices). Every latex condom is tested for holes with an electric current. If the condom passes, it is rolled and packaged. In addition, a portion of each batch of condoms is subject to water leak and air burst testing.

While the advantages of latex have made it the most popular condom material, it does have some drawbacks. Latex condoms are damaged when used with oil-based substances as lubricants, such as petroleum jelly, cooking oil, baby oil, mineral oil, skin lotions, suntan lotions, cold creams, butter or margarine. Contact with oil makes latex condoms more likely to break or slip off due to loss of elasticity caused by the oils. Additionally, latex allergy precludes use of latex condoms and is one of the principal reasons for the use of other materials. In May 2009, the U.S. Food and Drug Administration (FDA) granted approval for the production of condoms composed of Vytex, latex that has been treated to remove 90% of the proteins responsible for allergic reactions. An allergen-free condom made of synthetic latex (polyisoprene) is also available.

Synthetic

The most common non-latex condoms are made from polyurethane. Condoms may also be made from other synthetic materials, such as AT-10 resin and, most recently, polyisoprene. Polyurethane condoms tend to be the same width and thickness as latex condoms, with most polyurethane condoms between 0.04 mm and 0.07 mm thick.
Polyurethane can be considered better than latex in several ways: it conducts heat better than latex, is not as sensitive to temperature and ultraviolet light (and so has less rigid storage requirements and a longer shelf life), can be used with oil-based lubricants, is less allergenic than latex, and does not have an odor. Polyurethane condoms have gained FDA approval for sale in the United States as an effective method of contraception and HIV prevention, and under laboratory conditions have been shown to be just as effective as latex for these purposes. However, polyurethane condoms are less elastic than latex ones, and may be more likely to slip or break than latex, lose their shape or bunch up more than latex, and are more expensive.

Polyisoprene is a synthetic version of natural rubber latex. While significantly more expensive, it has the advantages of latex (such as being softer and more elastic than polyurethane condoms) without the protein which is responsible for latex allergies. Unlike polyurethane condoms, polyisoprene condoms cannot be used with an oil-based lubricant.

Lambskin

Condoms made from sheep intestines, labeled "lambskin", are also available. Although they are generally effective as a contraceptive by blocking sperm, it is presumed that they are less effective than latex in preventing the transmission of sexually transmitted infections because of pores in the material. This is based on the idea that intestines, by their nature, are porous, permeable membranes, and while sperm are too large to pass through the pores, viruses such as HIV, herpes simplex virus, and human papillomavirus (the cause of genital warts) are small enough to pass. However, there are to date no clinical data confirming or denying this theory. As a result of laboratory data on condom porosity, in 1989, the FDA began requiring lambskin condom manufacturers to indicate that the products were not to be used for the prevention of sexually transmitted infections. This was based on the presumption that lambskin condoms would be less effective than latex in preventing HIV transmission, rather than a conclusion that lambskin condoms lack efficacy in STI prevention altogether. An FDA publication in 1992 states that lambskin condoms "provide good birth control and a varying degree of protection against some, but not all, sexually transmitted diseases" and that the labelling requirement was decided upon because the FDA "cannot expect people to know which STDs they need to be protected against", and since "the reality is that you don't know what your partner has, we wanted natural-membrane condoms to have labels that don't allow the user to assume they're effective against the small viral STDs." Some believe that lambskin condoms provide a more "natural" sensation and lack the allergens inherent to latex. Still, because of their lesser protection against infection, other hypoallergenic materials such as polyurethane are recommended for latex-allergic users and partners. Lambskin condoms are also significantly more expensive than other types, and as slaughter by-products they are not vegetarian.

Spermicide

Some latex condoms are lubricated at the manufacturer with a small amount of nonoxynol-9, a spermicidal chemical. According to Consumer Reports, condoms lubricated with spermicide have no additional benefit in preventing pregnancy, have a shorter shelf life, and may cause urinary tract infections in women. In contrast, application of separately packaged spermicide is believed to increase the contraceptive efficacy of condoms.
Nonoxynol-9 was once believed to offer additional protection against STDs (including HIV), but recent studies have shown that, with frequent use, nonoxynol-9 may increase the risk of HIV transmission. The World Health Organization says that spermicidally lubricated condoms should no longer be promoted. However, it recommends using a nonoxynol-9 lubricated condom over no condom at all. Nine condom manufacturers have since stopped manufacturing condoms with nonoxynol-9, and Planned Parenthood has discontinued the distribution of condoms so lubricated.

Ribbed and studded

Textured condoms include studded and ribbed condoms which can provide extra sensations to both partners. The studs or ribs can be located on the inside, outside, or both; alternatively, they are located in specific sections to provide directed stimulation to either the G-spot or frenulum. Many textured condoms which advertise "mutual pleasure" are also bulb-shaped at the top, to provide extra stimulation to the penis. Some women experience irritation during vaginal intercourse with studded condoms.

Other

The anti-rape condom is another variation designed to be worn by women. It is designed to cause pain to the attacker, hopefully allowing the victim a chance to escape.

A collection condom is used to collect semen for fertility treatments or sperm analysis. These condoms are designed to maximize sperm life.

Some condom-like devices are intended for entertainment only, such as glow-in-the-dark condoms. These novelty condoms may not provide protection against pregnancy and STDs.

In February 2022, the U.S. Food and Drug Administration (FDA) approved the first condoms specifically indicated to help reduce transmission of sexually transmitted infections (STIs) during anal intercourse.

Prevalence

The prevalence of condom use varies greatly between countries. Most surveys of contraceptive use are among married women, or women in informal unions. Japan has the highest rate of condom usage in the world: in that country, condoms account for almost 80% of contraceptive use by married women. On average, in developed countries, condoms are the most popular method of birth control: 28% of married contraceptive users rely on condoms. In the average less-developed country, condoms are less common: only 6–8% of married contraceptive users choose condoms.

History

Before the 19th century

Whether condoms were used in ancient civilizations is debated by archaeologists and historians. In ancient Egypt, Greece, and Rome, pregnancy prevention was generally seen as a woman's responsibility, and the only well documented contraception methods were female-controlled devices. In Asia before the 15th century, some use of glans condoms (devices covering only the head of the penis) is recorded. Condoms seem to have been used for contraception, and to have been known only by members of the upper classes. In China, glans condoms may have been made of oiled silk paper, or of lamb intestines. In Japan, condoms called Kabuto-gata (甲形) were made of tortoise shell or animal horn.

In 16th-century Italy, anatomist and physician Gabriele Falloppio wrote a treatise on syphilis. The earliest documented strain of syphilis, first appearing in Europe in a 1490s outbreak, caused severe symptoms and often death within a few months of contracting the disease. Falloppio's treatise is the earliest uncontested description of condom use: it describes linen sheaths soaked in a chemical solution and allowed to dry before use.
The cloths he described were sized to cover the glans of the penis, and were held on with a ribbon. Falloppio claimed that an experimental trial of the linen sheath demonstrated protection against syphilis. After this, the use of penis coverings to protect from disease is described in a wide variety of literature throughout Europe. The first indication that these devices were used for birth control, rather than disease prevention, is the 1605 theological publication De iustitia et iure (On justice and law) by Catholic theologian Leonardus Lessius, who condemned them as immoral. In 1666, the English Birth Rate Commission attributed a recent decline in the birth rate to the use of "condons", the first documented use of that word or any similar spelling. Other early spellings include "condam" and "quondam", from which the Italian derivation guantone has been suggested, from guanto, "a glove".

In addition to linen, condoms during the Renaissance were made out of intestines and bladder. In the late 16th century, Dutch traders introduced condoms made from "fine leather" to Japan. Unlike the horn condoms used previously, these leather condoms covered the entire penis. Casanova in the 18th century was one of the first reported to use "assurance caps" to prevent impregnating his mistresses.

From at least the 18th century, condom use was opposed in some legal, religious, and medical circles for essentially the same reasons that are given today: condoms reduce the likelihood of pregnancy, which some thought immoral or undesirable for the nation; they do not provide full protection against sexually transmitted infections, while belief in their protective powers was thought to encourage sexual promiscuity; and they are not used consistently due to inconvenience, expense, or loss of sensation.

Despite some opposition, the condom market grew rapidly. In the 18th century, condoms were available in a variety of qualities and sizes, made from either linen treated with chemicals, or "skin" (bladder or intestine softened by treatment with sulfur and lye). They were sold at pubs, barbershops, chemist shops, open-air markets, and at the theater throughout Europe and Russia. They later spread to America, although in every place they were generally used only by the middle and upper classes, due to both expense and lack of sex education.

1800 through 1920s

The early 19th century saw contraceptives promoted to the poorer classes for the first time. Writers on contraception tended to prefer other birth control methods to the condom. By the late 19th century, many feminists expressed distrust of the condom as a contraceptive, as its use was controlled and decided upon by men alone. They advocated instead for methods controlled by women, such as diaphragms and spermicidal douches. Other writers cited both the expense of condoms and their unreliability (they were often riddled with holes and often fell off or tore). Still, they discussed condoms as a good option for some, and as the only contraceptive that also protects from disease.

Many countries passed laws impeding the manufacture and promotion of contraceptives. In spite of these restrictions, condoms were promoted by traveling lecturers and in newspaper advertisements, using euphemisms in places where such ads were illegal. Instructions on how to make condoms at home were distributed in the United States and Europe. Despite social and legal opposition, at the end of the 19th century the condom was the Western world's most popular birth control method.
Beginning in the second half of the 19th century, American rates of sexually transmitted diseases skyrocketed. Causes cited by historians include the effects of the American Civil War and the ignorance of prevention methods fostered by the Comstock laws. To fight the growing epidemic, sex education classes were introduced to public schools for the first time, teaching about venereal diseases and how they were transmitted. They generally taught that abstinence was the only way to avoid sexually transmitted diseases. Condoms were not promoted for disease prevention because the medical community and moral watchdogs considered STDs to be punishment for sexual misbehavior. The stigma against people with these diseases was so significant that many hospitals refused to treat people with syphilis.

The German military was the first to promote condom use among its soldiers, in the later 19th century. Early 20th century experiments by the American military concluded that providing condoms to soldiers significantly lowered rates of sexually transmitted diseases. During World War I, the United States and (at the beginning of the war only) Britain were the only countries with soldiers in Europe who did not provide condoms and promote their use.

In the decades after World War I, there remained social and legal obstacles to condom use throughout the U.S. and Europe. Founder of psychoanalysis Sigmund Freud opposed all methods of birth control on the grounds that their failure rates were too high. Freud was especially opposed to the condom because he thought it cut down on sexual pleasure. Some feminists continued to oppose male-controlled contraceptives such as condoms. In 1920 the Church of England's Lambeth Conference condemned all "unnatural means of conception avoidance". The Bishop of London, Arthur Winnington-Ingram, complained of the huge number of condoms discarded in alleyways and parks, especially after weekends and holidays. However, European militaries continued to provide condoms to their members for disease protection, even in countries where they were illegal for the general population.

Through the 1920s, catchy names and slick packaging became an increasingly important marketing technique for many consumer items, including condoms and cigarettes. Quality testing became more common, involving filling each condom with air followed by one of several methods intended to detect loss of pressure. Worldwide, condom sales doubled in the 1920s.

Rubber and manufacturing advances

In 1839, Charles Goodyear discovered a way of processing natural rubber, which is too stiff when cold and too soft when warm, in such a way as to make it elastic. This proved to have advantages for the manufacture of condoms; unlike the sheep's gut condoms, they could stretch and did not tear quickly when used. The rubber vulcanization process was patented by Goodyear in 1844. The first rubber condom was produced in 1855. The earliest rubber condoms had a seam and were as thick as a bicycle inner tube. Besides this type, small rubber condoms covering only the glans were often used in England and the United States. There was more risk of losing them, and if the rubber ring was too tight, it would constrict the penis. This type of condom was the original "capote" (French for condom), perhaps because of its resemblance to a woman's bonnet worn at that time, also called a capote.
For many decades, rubber condoms were manufactured by wrapping strips of raw rubber around penis-shaped molds, then dipping the wrapped molds in a chemical solution to cure the rubber. In 1912, Polish-born inventor Julius Fromm developed a new, improved manufacturing technique for condoms: dipping glass molds into a raw rubber solution. Called cement dipping, this method required adding gasoline or benzene to the rubber to make it liquid. Around 1920, patent lawyer and vice-president of the United States Rubber Company Ernest Hopkinson invented a new technique of converting latex into rubber without a coagulant (demulsifier), which featured using water as a solvent and warm air to dry the solution, as well as optionally preserving liquid latex with ammonia. Condoms made this way, commonly called "latex" ones, required less labor to produce than cement-dipped rubber condoms, which had to be smoothed by rubbing and trimming. The use of water to suspend the rubber instead of gasoline and benzene eliminated the fire hazard previously associated with all condom factories. Latex condoms also performed better for the consumer: they were stronger and thinner than rubber condoms, and had a shelf life of five years (compared to three months for rubber). Until the 1920s, all condoms were individually hand-dipped by semi-skilled workers. Throughout the 1920s, advances were made in the automation of the condom assembly line. The first fully automated line was patented in 1930. Major condom manufacturers bought or leased conveyor systems, and small manufacturers were driven out of business. The skin condom, now significantly more expensive than the latex variety, became restricted to a niche high-end market.

1930 to present

In 1930 the Anglican Church's Lambeth Conference sanctioned the use of birth control by married couples. In 1931 the Federal Council of Churches in the U.S. issued a similar statement. The Roman Catholic Church responded by issuing the encyclical Casti connubii affirming its opposition to all contraceptives, a stance it has never reversed. In the 1930s, legal restrictions on condoms began to be relaxed. During this period, however, Fascist Italy and Nazi Germany increased restrictions on condoms (limited sales as disease preventatives were still allowed). During the Depression, condom lines by Schmid gained in popularity. Schmid still used the cement-dipping method of manufacture, which had two advantages over the latex variety. Firstly, cement-dipped condoms could be safely used with oil-based lubricants. Secondly, while less comfortable, these older-style rubber condoms could be reused and so were more economical, a valued feature in hard times. More attention was brought to quality issues in the 1930s, and the U.S. Food and Drug Administration began to regulate the quality of condoms sold in the United States. Throughout World War II, condoms were not only distributed to male U.S. military members, but also heavily promoted with films, posters, and lectures. European and Asian militaries on both sides of the conflict also provided condoms to their troops throughout the war, even Germany, which outlawed all civilian use of condoms in 1941. In part because condoms were readily available, soldiers found a number of non-sexual uses for the devices, many of which continue to this day. After the war, condom sales continued to grow. From 1955 to 1965, 42% of Americans of reproductive age relied on condoms for birth control.
In Britain from 1950 to 1960, 60% of married couples used condoms. The birth control pill became the world's most popular method of birth control in the years after its 1960 debut, but condoms remained a strong second. The U.S. Agency for International Development pushed condom use in developing countries to help solve the "world population crises": by 1970 hundreds of millions of condoms were being used each year in India alone. (This number has grown in recent decades: in 2004, the government of India purchased 1.9 billion condoms for distribution at family planning clinics.) In the 1960s and 1970s quality regulations tightened, and more legal barriers to condom use were removed. In Ireland, legal condom sales were allowed for the first time in 1978. Advertising, however, was one area that continued to have legal restrictions. In the late 1950s, the American National Association of Broadcasters banned condom advertisements from national television; this policy remained in place until 1979. After it was discovered in the early 1980s that AIDS can be a sexually transmitted infection, the use of condoms was encouraged to prevent transmission of HIV. Despite opposition by some political, religious, and other figures, national condom promotion campaigns occurred in the U.S. and Europe. These campaigns increased condom use significantly. Due to increased demand and greater social acceptance, condoms began to be sold in a wider variety of retail outlets, including in supermarkets and in discount department stores such as Walmart. Condom sales increased every year until 1994, when media attention to the AIDS pandemic began to decline. The phenomenon of decreasing use of condoms as disease preventatives has been called prevention fatigue or condom fatigue. Observers have cited condom fatigue in both Europe and North America. As one response, manufacturers have changed the tone of their advertisements from scary to humorous. New developments continued to occur in the condom market, with the first polyurethane condom (branded Avanti and produced by the manufacturer of Durex) introduced in the 1990s. Worldwide condom use is expected to continue to grow: one study predicted that developing nations would need 18.6 billion condoms by 2015. Condoms are available inside prisons in Canada, most of the European Union, Australia, Brazil, Indonesia, South Africa, and the US state of Vermont. (On September 17, 2013, the California Senate approved a bill for condom distribution inside the state's prisons, but the bill was not yet law at the time of approval.) The global condom market was estimated at US$9.2 billion in 2020.

Etymology and other terms

The term condom first appears in the early 18th century: early forms include condum (1706 and 1717), condon (1708) and cundum (1744). The word's etymology is unknown. In popular tradition, the invention and naming of the condom came to be attributed to an associate of England's King Charles II, one "Dr. Condom" or "Earl of Condom". There is, however, no evidence of the existence of such a person, and condoms had been used for over one hundred years before King Charles II ascended to the throne. A variety of unproven Latin etymologies have been proposed, including terms meaning "receptacle", "house", and "scabbard or case". It has also been speculated to be from the Italian word guantone, derived from guanto, meaning glove. William E.
Kruck wrote an article in 1981 concluding that, "As for the word 'condom', I need state only that its origin remains completely unknown, and there ends this search for an etymology." Modern dictionaries may also list the etymology as "unknown". Other terms are also commonly used to describe condoms. In North America condoms are also commonly known as prophylactics, or rubbers. In Britain they may be called French letters or rubber johnnies. Additionally, condoms may be referred to using the manufacturer's name.

Society and culture

Some moral and scientific criticism of condoms exists despite the many benefits agreed on by scientific consensus and sexual health experts. Condom usage is typically recommended for new couples who have yet to develop full trust in their partner with regard to STDs. Established couples, on the other hand, have few concerns about STDs and can use other methods of birth control such as the pill, which does not act as a barrier to intimate sexual contact. The polarized debate over condom usage is moderated by the target group at which a given argument is directed. Notably, age and the question of a stable partner are factors, as is the distinction between heterosexuals and homosexuals, who have different kinds of sex and face different risk factors and consequences. Among the prime objections to condom usage is the blocking of erotic sensation, or the intimacy that barrier-free sex provides. As the condom is held tightly to the skin of the penis, it diminishes the delivery of stimulation through rubbing and friction. Condom proponents claim this has the benefit of making sex last longer, by diminishing sensation and delaying male ejaculation. Those who promote condom-free heterosexual sex (slang: "bareback") claim that the condom puts a barrier between partners, diminishing what is normally a highly sensual, intimate, and spiritual connection between partners.

Religious

The United Church of Christ (UCC), a Reformed denomination of the Congregationalist tradition, promotes the distribution of condoms in churches and faith-based educational settings. Michael Shuenemeyer, a UCC minister, has stated that "The practice of safer sex is a matter of life and death. People of faith make condoms available because we have chosen life so that we and our children may live." On the other hand, the Roman Catholic Church opposes all kinds of sexual acts outside of marriage, as well as any sexual act in which the chance of successful conception has been reduced by direct and intentional acts (for example, surgery to prevent conception) or foreign objects (for example, condoms). The use of condoms to prevent STI transmission is not specifically addressed by Catholic doctrine, and is currently a topic of debate among theologians and high-ranking Catholic authorities. A few, such as Belgian Cardinal Godfried Danneels, believe the Catholic Church should actively support condoms used to prevent disease, especially serious diseases such as AIDS. However, the majority view, including all statements from the Vatican, is that condom-promotion programs encourage promiscuity, thereby actually increasing STI transmission. This view was most recently reiterated in 2009 by Pope Benedict XVI. The Roman Catholic Church is the largest organized body of any world religion. The church has hundreds of programs dedicated to fighting the AIDS epidemic in Africa, but its opposition to condom use in these programs has been highly controversial.
In a November 2010 interview, Pope Benedict XVI discussed for the first time the use of condoms to prevent STI transmission. He said that the use of a condom can be justified in a few individual cases if the purpose is to reduce the risk of an HIV infection. He gave male prostitutes as an example. There was some confusion at first whether the statement applied only to homosexual prostitutes and thus not to heterosexual intercourse at all. However, Federico Lombardi, spokesman for the Vatican, clarified that it applied to heterosexual and transsexual prostitutes, whether male or female, as well. He did, however, also clarify that the Vatican's principles on sexuality and contraception had not been changed.

Scientific and environmental

More generally, some scientific researchers have expressed objective concern over certain ingredients sometimes added to condoms, notably talc and nitrosamines. Dry dusting powders are applied to latex condoms before packaging to prevent the condom from sticking to itself when rolled up. Previously, talc was used by most manufacturers, but cornstarch is currently the most popular dusting powder. Although rare during normal use, talc is known to be a potential irritant to mucous membranes (such as in the vagina). Cornstarch is generally believed to be safe; however, some researchers have raised concerns over its use as well. Nitrosamines, which are potentially carcinogenic in humans, are believed to be present in a substance used to improve elasticity in latex condoms. A 2001 review stated that humans regularly receive 1,000 to 10,000 times greater nitrosamine exposure from food and tobacco than from condom use and concluded that the risk of cancer from condom use is very low. However, a 2004 study in Germany detected nitrosamines in 29 out of 32 condom brands tested, and concluded that exposure from condoms might exceed the exposure from food by 1.5- to 3-fold. In addition, the large-scale use of disposable condoms has resulted in concerns over their environmental impact via littering and in landfills, where they can eventually wind up in wildlife environments if not incinerated or otherwise permanently disposed of first. Polyurethane condoms in particular, given they are a form of plastic, are not biodegradable, and latex condoms take a very long time to break down. Experts, such as AVERT, recommend condoms be disposed of in a garbage receptacle, as flushing them down the toilet (which some people do) may cause plumbing blockages and other problems. Furthermore, the plastic and foil wrappers condoms are packaged in are also not biodegradable. However, the benefits condoms offer are widely considered to offset their small landfill mass. Frequent condom or wrapper disposal in public areas such as parks has been seen as a persistent litter problem. Although latex condoms are ultimately biodegradable, they damage the environment when disposed of improperly. According to the Ocean Conservancy, condoms, along with certain other types of trash, cover coral reefs and smother sea grass and other bottom dwellers. The United States Environmental Protection Agency has also expressed concerns that many animals might mistake the litter for food.

Cultural barriers to use

In much of the Western world, the introduction of the pill in the 1960s was associated with a decline in condom use. In Japan, oral contraceptives were not approved for use until September 1999, and even then access was more restricted than in other industrialized nations.
Perhaps because of this restricted access to hormonal contraception, Japan has the highest rate of condom usage in the world: in 2008, 80% of contraceptive users relied on condoms. Cultural attitudes toward gender roles, contraception, and sexual activity vary greatly around the world, and range from extremely conservative to extremely liberal. But in places where condoms are misunderstood, mischaracterised, demonised, or looked upon with overall cultural disapproval, the prevalence of condom use is directly affected. In less-developed countries and among less-educated populations, misperceptions about how disease transmission and conception work negatively affect the use of condoms; additionally, in cultures with more traditional gender roles, women may feel uncomfortable demanding that their partners use condoms. As an example, Latino immigrants in the United States often face cultural barriers to condom use. A study on female HIV prevention published in the Journal of Sex Health Research asserts that Latino women often lack the attitudes needed to negotiate safe sex due to traditional gender-role norms in the Latino community, and may be afraid to bring up the subject of condom use with their partners. Women who participated in the study often reported that because of the general machismo subtly encouraged in Latino culture, their male partners would be angry or possibly violent at the woman's suggestion that they use condoms. A similar phenomenon has been noted in a survey of low-income American black women; the women in this study also reported a fear of violence at the suggestion to their male partners that condoms be used. A telephone survey conducted by the Rand Corporation and Oregon State University, and published in the Journal of Acquired Immune Deficiency Syndromes, showed that belief in AIDS conspiracy theories among United States black men is linked to rates of condom use: as conspiracy beliefs about AIDS grow in a given sector of these black men, consistent condom use drops in that same sector. Female use of condoms was not similarly affected. On the African continent, condom promotion in some areas has been impeded by anti-condom campaigns by some Muslim and Catholic clerics. Among the Maasai in Tanzania, condom use is hampered by an aversion to "wasting" sperm, which is given sociocultural importance beyond reproduction. Sperm is believed to be an "elixir" to women and to have beneficial health effects. Maasai women believe that, after conceiving a child, they must have sexual intercourse repeatedly so that the additional sperm aids the child's development. Frequent condom use is also considered by some Maasai to cause impotence. Some women in Africa believe that condoms are "for prostitutes" and that respectable women should not use them. A few clerics even promote the falsehood that condoms are deliberately laced with HIV. In the United States, possession of many condoms has been used by police to accuse women of engaging in prostitution. The Presidential Advisory Council on HIV/AIDS has condemned this practice and there are efforts to end it. Because of the strong desire and social pressure to establish fertility as soon as possible within marriage, Middle-Eastern couples who have not yet had children rarely use condoms. In 2017, India restricted TV advertisements for condoms to between 10 pm and 6 am. Family planning advocates were against this, saying it was liable to "undo decades of progress on sexual and reproductive health".
Major manufacturers

One analyst described the size of the condom market as something that "boggles the mind". Numerous small manufacturers, nonprofit groups, and government-run manufacturing plants exist around the world. Within the condom market, there are several major contributors, among them both for-profit businesses and philanthropic organizations. Most large manufacturers have ties to the business that reach back to the end of the 19th century.

Economics

In the United States condoms usually cost less than US$1.00.

Research

A spray-on condom made of latex is intended to be easier to apply and more successful in preventing the transmission of diseases. As of its most recent reports, the spray-on condom was not going to market because the drying time could not be reduced below two to three minutes. The Invisible Condom, developed at Université Laval in Quebec, Canada, is a gel that hardens upon increased temperature after insertion into the vagina or rectum. In the lab, it has been shown to effectively block HIV and herpes simplex virus. The barrier breaks down and liquefies after several hours. The invisible condom remains in the clinical trial phase and has not yet been approved for use. Also developed in 2005 is a condom treated with an erectogenic compound. The drug-treated condom is intended to help the wearer maintain his erection, which should also help reduce slippage. If approved, the condom would be marketed under the Durex brand. At last report, it was still in clinical trials. In 2009, Ansell Healthcare, the makers of Lifestyle condoms, introduced the X2 condom lubricated with "Excite Gel", which contains the amino acid L-arginine and is intended to improve the strength of the erectile response. In March 2013, philanthropist Bill Gates offered US$100,000 grants through his foundation for a condom design that "significantly preserves or enhances pleasure" to encourage more males to adopt the use of condoms for safer sex. The grant information stated: "The primary drawback from the male perspective is that condoms decrease pleasure as compared to no condom, creating a trade-off that many men find unacceptable, particularly given that the decisions about use must be made just prior to intercourse. Is it possible to develop a product without this stigma, or better, one that is felt to enhance pleasure?" In November of the same year, 11 research teams were selected to receive the grant money.

External links

"Sheathing Cupid's Arrow: the Oldest Artificial Contraceptive May Be Ripe for a Makeover", The Economist, February 2014.
2,377
5,377
https://en.wikipedia.org/wiki/Calendar
Calendar
A calendar is a system of organizing days. This is done by giving names to periods of time, typically days, weeks, months and years. A date is the designation of a single and specific day within such a system. A calendar is also a physical record (often paper) of such a system. A calendar can also mean a list of planned events, such as a court calendar, or a partly or fully chronological list of documents, such as a calendar of wills. Periods in a calendar (such as years and months) are usually, though not necessarily, synchronized with the cycle of the sun or the moon. The most common type of pre-modern calendar was the lunisolar calendar, a lunar calendar that occasionally adds one intercalary month to remain synchronized with the solar year over the long term.

Etymology

The term calendar is taken from calendae, the term for the first day of the month in the Roman calendar, related to the verb calare, 'to call out', referring to the "calling" of the new moon when it was first seen. Latin calendarium meant 'account book, register' (as accounts were settled and debts were collected on the calends of each month). The Latin term was adopted in Old French as calendier and from there in Middle English as calender by the 13th century (the spelling calendar is early modern).

History

The course of the sun and the moon are the most salient regularly recurring natural events useful for timekeeping, and in pre-modern societies around the world lunation and the year were most commonly used as time units. Nevertheless, the Roman calendar contained remnants of a very ancient pre-Etruscan 10-month solar year. The first recorded physical calendars, dependent on the development of writing in the Ancient Near East, are the Bronze Age Egyptian and Sumerian calendars. During the Vedic period India developed a sophisticated timekeeping methodology and calendars for Vedic rituals. According to Yukio Ohashi, the Vedanga calendar in ancient India was based on astronomical studies during the Vedic Period and was not derived from other cultures. A large number of calendar systems in the Ancient Near East were based on the Babylonian calendar dating from the Iron Age, among them the calendar system of the Persian Empire, which in turn gave rise to the Zoroastrian calendar and the Hebrew calendar. A great number of Hellenic calendars were developed in Classical Greece, and during the Hellenistic period they gave rise to the ancient Roman calendar and to various Hindu calendars. Calendars in antiquity were lunisolar, depending on the introduction of intercalary months to align the solar and the lunar years. This was mostly based on observation, but there may have been early attempts to model the pattern of intercalation algorithmically, as evidenced in the fragmentary 2nd-century Coligny calendar. The Roman calendar was reformed by Julius Caesar in 46 BC. His "Julian" calendar was no longer dependent on the observation of the new moon, but followed an algorithm of introducing a leap day every four years. This created a dissociation of the calendar month from lunation. The Islamic calendar is based on the prohibition of intercalation (nasi') by Muhammad, in Islamic tradition dated to a sermon given on 9 Dhu al-Hijjah AH 10 (Julian date: 6 March 632). This resulted in an observation-based lunar calendar that shifts relative to the seasons of the solar year.
Modern reforms

The first calendar reform of the early modern era resulted in the Gregorian calendar, introduced in 1582 and based on the observation of a long-term shift between the Julian calendar and the solar year. There have been several modern proposals for reform of the modern calendar, such as the World Calendar, the International Fixed Calendar, the Holocene calendar, and the Hanke-Henry Permanent Calendar. Such ideas are mooted from time to time, but have failed to gain traction because of the loss of continuity and the massive upheaval that implementing them would involve, as well as their effect on cycles of religious activity.

Systems

A full calendar system has a different calendar date for every day. Thus the week cycle is by itself not a full calendar system; neither is a system to name the days within a year without a system for identifying the years. The simplest calendar system just counts time periods from a reference date. This applies to the Julian day number or Unix time. Virtually the only possible variation is using a different reference date, in particular, one less distant in the past to make the numbers smaller. Computations in these systems are just a matter of addition and subtraction (see the code sketch below). Other calendars have one (or multiple) larger units of time.

Calendars that contain one level of cycles:
- week and weekday – this system (without a year, the week number keeps on increasing) is not very common
- year and ordinal date within the year, e.g., the ISO 8601 ordinal date system

Calendars with two levels of cycles:
- year, month, and day – most systems, including the Gregorian calendar (and its very similar predecessor, the Julian calendar), the Islamic calendar, the Solar Hijri calendar and the Hebrew calendar
- year, week, and weekday – e.g., the ISO week date

Cycles can be synchronized with periodic phenomena:
- Lunar calendars are synchronized to the motion of the Moon (lunar phases); an example is the Islamic calendar.
- Solar calendars are based on perceived seasonal changes synchronized to the apparent motion of the Sun; an example is the Persian calendar.
- Lunisolar calendars are based on a combination of both solar and lunar reckonings; examples include the traditional calendar of China, the Hindu calendar in India and Nepal, and the Hebrew calendar.

The week cycle is an example of one that is not synchronized to any external phenomenon (although it may have been derived from lunar phases, beginning anew every month). Very commonly a calendar includes more than one type of cycle or has both cyclic and non-cyclic elements. Most calendars incorporate more complex cycles. For example, the vast majority of them track years, months, weeks and days. The seven-day week is practically universal, though its use varies. It has run uninterrupted for millennia.

Solar

Solar calendars assign a date to each solar day. A day may consist of the period between sunrise and sunset, with a following period of night, or it may be a period between successive events such as two sunsets. The length of the interval between two such successive events may be allowed to vary slightly during the year, or it may be averaged into a mean solar day. Other types of calendar may also use a solar day.

Lunar

Not all calendars use the solar year as a unit. A lunar calendar is one in which days are numbered within each lunar phase cycle.
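(As an aside to the reference-date systems described under Systems above: the following is a minimal sketch, added here for illustration and not part of the original article, using Python's standard library. It shows that computations in a pure reference-date system reduce to addition and subtraction, and how the same day appears in the ISO ordinal and week-date cyclic systems.)

```python
from datetime import datetime, timezone, timedelta

# A pure reference-date system: Unix time counts seconds from the epoch
# 1970-01-01 00:00:00 UTC, so date computations are addition/subtraction.
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
moment = epoch + timedelta(seconds=1_000_000_000)  # one billion seconds later
print(moment.isoformat())      # 2001-09-09T01:46:40+00:00

# The same day expressed in two of the cyclic systems listed above:
d = moment.date()
print(d.timetuple().tm_yday)   # ISO 8601 ordinal date: day 252 of the year
print(d.isocalendar())         # ISO week date: year 2001, week 36, weekday 7 (Sunday)
```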
Because the length of the lunar month is not an even fraction of the length of the tropical year, a purely lunar calendar quickly drifts against the seasons, which do not vary much near the equator. It does, however, stay constant with respect to other phenomena, notably tides. An example is the Islamic calendar. Alexander Marshack, in a controversial reading, believed that marks on a bone baton (c. 25,000 BC) represented a lunar calendar. Other marked bones may also represent lunar calendars. Similarly, Michael Rappenglueck believes that marks on a 15,000-year-old cave painting represent a lunar calendar.

Lunisolar

A lunisolar calendar is a lunar calendar that compensates by adding an extra month as needed to realign the months with the seasons. Prominent examples of lunisolar calendars are the Hindu and Buddhist calendars, which are popular in South Asia and Southeast Asia. Another example is the Hebrew calendar, which uses a 19-year cycle.

Subdivisions

Nearly all calendar systems group consecutive days into "months" and also into "years". In a solar calendar a year approximates Earth's tropical year (that is, the time it takes for a complete cycle of seasons), traditionally used to facilitate the planning of agricultural activities. In a lunar calendar, the month approximates the cycle of the moon phase. Consecutive days may be grouped into other periods such as the week. Because the number of days in the tropical year is not a whole number, a solar calendar must have a different number of days in different years. This may be handled, for example, by adding an extra day in leap years. The same applies to months in a lunar calendar and also to the number of months in a year in a lunisolar calendar. This is generally known as intercalation. Even if a calendar is solar, but not lunar, the year cannot be divided entirely into months that never vary in length. Cultures may define other units of time, such as the week, for the purpose of scheduling regular activities that do not easily coincide with months or years. Many cultures use different baselines for their calendars' starting years. Historically, several countries have based their calendars on regnal years, a calendar based on the reign of their current sovereign. For example, the year 2006 in Japan is year 18 Heisei, with Heisei being the era name of Emperor Akihito.

Other types

Arithmetical and astronomical

An astronomical calendar is based on ongoing observation; examples are the religious Islamic calendar and the old religious Jewish calendar in the time of the Second Temple. Such a calendar is also referred to as an observation-based calendar. The advantage of such a calendar is that it is perfectly and perpetually accurate. The disadvantage is that working out when a particular date would occur is difficult. An arithmetic calendar is one that is based on a strict set of rules; an example is the current Jewish calendar. Such a calendar is also referred to as a rule-based calendar. The advantage of such a calendar is the ease of calculating when a particular date occurs. The disadvantage is imperfect accuracy. Furthermore, even if the calendar is very accurate, its accuracy diminishes slowly over time, owing to changes in Earth's rotation. This limits the lifetime of an accurate arithmetic calendar to a few thousand years. After that, the rules would need to be modified from observations made since the invention of the calendar.

Complete and incomplete

Calendars may be either complete or incomplete.
Complete calendars provide a way of naming each consecutive day, while incomplete calendars do not. The early Roman calendar, which had no way of designating the days of the winter months other than to lump them together as "winter", is an example of an incomplete calendar, while the Gregorian calendar is an example of a complete calendar.

Usage

The primary practical use of a calendar is to identify days: to be informed about or to agree on a future event and to record an event that has happened. Days may be significant for agricultural, civil, religious, or social reasons. For example, a calendar provides a way to determine when to start planting or harvesting, which days are religious or civil holidays, which days mark the beginning and end of business accounting periods, and which days have legal significance, such as the day taxes are due or a contract expires. Also, a calendar may, by identifying a day, provide other useful information about the day such as its season. Calendars are also used to help people manage their personal schedules, time, and activities, particularly when individuals have numerous work, school, and family commitments. People frequently use multiple systems and may keep both a business and family calendar to help prevent them from overcommitting their time. Calendars are also used as part of a complete timekeeping system: date and time of day together specify a moment in time. In the modern world, timekeepers can show time, date, and weekday. Some may also show the lunar phase.

Gregorian

The Gregorian calendar is the de facto international standard and is used almost everywhere in the world for civil purposes. Its widely used solar aspect is a cycle of leap days in a 400-year cycle designed to keep the duration of the year aligned with the solar year. There is a lunar aspect which approximates the position of the moon during the year, and is used in the calculation of the date of Easter. Each Gregorian year has either 365 or 366 days (the leap day being inserted as 29 February), amounting to an average Gregorian year of 365.2425 days (compared to a solar year of 365.2422 days); see the sketch below. The calendar was introduced in 1582 as a refinement to the Julian calendar, which had been in use throughout the European Middle Ages, amounting to a 0.002% correction in the length of the year. During the Early Modern period, its adoption was mostly limited to Roman Catholic nations, but by the 19th century it had become widely adopted for the sake of convenience in international trade. The last European country to adopt it was Greece, in 1923. The calendar epoch used by the Gregorian calendar is inherited from the medieval convention established by Dionysius Exiguus and associated with the Julian calendar. The year number is variously given as AD (for Anno Domini) or CE (for Common Era or Christian Era).

Religious

The most important use of pre-modern calendars is keeping track of the liturgical year and the observance of religious feast days. While the Gregorian calendar is itself historically motivated by the calculation of the Easter date, it is now in worldwide secular use as the de facto standard. Alongside the use of the Gregorian calendar for secular matters, there remain several calendars in use for religious purposes. Western Christian liturgical calendars are based on the cycle of the Roman Rite of the Catholic Church and generally include the liturgical seasons of Advent, Christmas, Ordinary Time (Time after Epiphany), Lent, Easter, and Ordinary Time (Time after Pentecost).
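(A brief aside, added for illustration and not part of the original article: the Gregorian leap-day rule described under Gregorian above is easily expressed in code, and counting the leap days in one 400-year cycle reproduces the 365.2425-day average year.)

```python
def is_gregorian_leap(year: int) -> bool:
    # Leap years are divisible by 4, except century years, which must
    # also be divisible by 400 (so 1900 is not a leap year, but 2000 is).
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

leap_days = sum(is_gregorian_leap(y) for y in range(1, 401))
print(leap_days)              # 97 leap days in each 400-year cycle
print(365 + leap_days / 400)  # 365.2425, the average Gregorian year in days
```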
Some Christian calendars do not include Ordinary Time and every day falls into a denominated season. Eastern Christians, including the Orthodox Church, use the Julian calendar. The Islamic calendar or Hijri calendar is a lunar calendar consisting of 12 lunar months in a year of 354 or 355 days. It is used to date events in most of the Muslim countries (concurrently with the Gregorian calendar) and used by Muslims everywhere to determine the proper day on which to celebrate Islamic holy days and festivals. Its epoch is the Hijra (corresponding to AD 622). With an annual drift of 11 or 12 days, the seasonal relation is repeated approximately every 33 Islamic years. Various Hindu calendars remain in use in the Indian subcontinent, including the Nepali calendars, Bengali calendar, Malayalam calendar, Tamil calendar, Vikrama Samvat used in Northern India, and Shalivahana calendar in the Deccan states. The Buddhist calendar and the traditional lunisolar calendars of Cambodia, Laos, Myanmar, Sri Lanka and Thailand are also based on an older version of the Hindu calendar. Most of the Hindu calendars are inherited from a system first enunciated in Vedanga Jyotisha of Lagadha, standardized in the Sūrya Siddhānta and subsequently reformed by astronomers such as Āryabhaṭa (AD 499), Varāhamihira (6th century) and Bhāskara II (12th century). The Hebrew calendar is used by Jews worldwide for religious and cultural affairs; it also influences civil matters in Israel (such as national holidays) and can be used in business dealings (such as the dating of cheques). Followers of the Baháʼí Faith use the Baháʼí calendar. The Baháʼí Calendar, also known as the Badi Calendar, was first established by the Bab in the Kitab-i-Asma. The Baháʼí Calendar is also purely a solar calendar and comprises 19 months, each having nineteen days.

National

The Chinese, Hebrew, Hindu, and Julian calendars are widely used for religious and social purposes. The Iranian (Persian) calendar is used in Iran and some parts of Afghanistan. The Assyrian calendar is in use by the members of the Assyrian community in the Middle East (mainly Iraq, Syria, Turkey, and Iran) and the diaspora. The first year of the calendar is exactly 4750 years prior to the start of the Gregorian calendar. The Ethiopian calendar or Ethiopic calendar is the principal calendar used in Ethiopia and Eritrea, with the Oromo calendar also in use in some areas. In neighboring Somalia, the Somali calendar co-exists alongside the Gregorian and Islamic calendars. In Thailand, where the Thai solar calendar is used, the months and days have adopted the western standard, although the years are still based on the traditional Buddhist calendar.

Fiscal

A fiscal calendar generally means the accounting year of a government or a business. It is used for budgeting, keeping accounts, and taxation. It is a set of 12 months that may start at any date in a year. The US government's fiscal year starts on 1 October and ends on 30 September. The government of India's fiscal year starts on 1 April and ends on 31 March. Small traditional businesses in India start the fiscal year on the Diwali festival and end it the day before the next year's Diwali festival. In accounting (and particularly accounting software), a fiscal calendar (such as a 4/4/5 calendar) fixes each month at a specific number of weeks to facilitate comparisons from month to month and year to year. January always has exactly 4 weeks (Sunday through Saturday), February has 4 weeks, March has 5 weeks, etc.
Note that this calendar will normally need to add a 53rd week to every 5th or 6th year, which might be added to December or might not be, depending on how the organization uses those dates. There exists an international standard way to do this (the ISO week). The ISO week starts on a Monday and ends on a Sunday. Week 1 is always the week that contains 4 January in the Gregorian calendar (see the sketch at the end of this article).

Formats

The term calendar applies not only to a given scheme of timekeeping but also to a specific record or device displaying such a scheme, for example, an appointment book in the form of a pocket calendar (or personal organizer), desktop calendar, a wall calendar, etc. In a paper calendar, one or two sheets can show a single day, a week, a month, or a year. If a sheet is for a single day, it easily shows the date and the weekday. If a sheet is for multiple days it shows a conversion table to convert from weekday to date and back. With a special pointing device, or by crossing out past days, it may indicate the current date and weekday. This is the most common usage of the word. In the US, Sunday is considered the first day of the week and so appears on the far left, with Saturday, the last day of the week, appearing on the far right. In Britain, the weekend may appear at the end of the week, so the first day is Monday and the last day is Sunday. The US calendar display is also used in Britain. It is common to display the Gregorian calendar in separate monthly grids of seven columns (from Monday to Sunday, or Sunday to Saturday depending on which day is considered to start the week – this varies according to country) and five to six rows (or rarely, four rows when the month of February contains 28 days in common years beginning on the first day of the week), with the day of the month numbered in each cell, beginning with 1. The sixth row is sometimes eliminated by marking 23/30 and 24/31 together as necessary. When working with weeks rather than months, a continuous format is sometimes more convenient, where no blank cells are inserted to ensure that the first day of a new month begins on a fresh row.

Software

Calendaring software provides users with an electronic version of a calendar, and may additionally provide an appointment book, address book, or contact list. Calendaring is a standard feature of many PDAs, EDAs, and smartphones. The software may be a local package designed for individual use (e.g., Lightning extension for Mozilla Thunderbird, Microsoft Outlook without Exchange Server, or Windows Calendar) or may be a networked package that allows for the sharing of information between users (e.g., Mozilla Sunbird, Windows Live Calendar, Google Calendar, or Microsoft Outlook with Exchange Server).

See also

- List of calendars
- Advent calendar
- Calendar reform
- Calendrical calculation
- Docket (court)
- History of calendars
- Horology
- List of international common standards
- List of unofficial observances by date
- Real-time clock (RTC), which underlies the Calendar software on modern computers
- Unit of time

External links

Calendar converter, including all major civil, religious and technical calendars.
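(A closing aside, added for illustration and not part of the original article: the ISO week rule stated above, that week 1 is the Monday-to-Sunday week containing 4 January, matches the behavior of Python's standard-library isocalendar().)

```python
from datetime import date

# Week 1 is the Monday-to-Sunday week that contains 4 January.
print(date(2005, 1, 1).isocalendar())  # (2004, 53, 6): 1 Jan 2005 still falls in week 53 of 2004
print(date(2005, 1, 4).isocalendar())  # (2005, 1, 2): 4 January is always in week 1
print(date(2026, 1, 4).isocalendar())  # (2026, 1, 7): here 4 January is the Sunday ending week 1
```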
2,380
5,387
https://en.wikipedia.org/wiki/Condensed%20matter%20physics
Condensed matter physics
Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter, especially the solid and liquid phases which arise from electromagnetic forces between atoms. More generally, the subject deals with "condensed" phases of matter: systems of many constituents with strong interactions between them. More exotic condensed phases include the superconducting phase exhibited by certain materials at low temperature, the ferromagnetic and antiferromagnetic phases of spins on crystal lattices of atoms, and the Bose–Einstein condensate found in ultracold atomic systems. Condensed matter physicists seek to understand the behavior of these phases by experiments to measure various material properties, and by applying the physical laws of quantum mechanics, electromagnetism, statistical mechanics, and other theories to develop mathematical models. The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists, and the Division of Condensed Matter Physics is the largest division of the American Physical Society. The field overlaps with chemistry, materials science, engineering and nanotechnology, and relates closely to atomic physics and biophysics. The theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics. A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas until the 1940s, when they were grouped together as solid-state physics. Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the more comprehensive specialty of condensed matter physics. Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. According to the founding director of the Max Planck Institute for Solid State Research, physics professor Manuel Cardona, it was Albert Einstein who created the modern field of condensed matter physics, starting with his seminal 1905 article on the photoelectric effect and photoluminescence, which opened the fields of photoelectron spectroscopy and photoluminescence spectroscopy, and later his 1907 article on the specific heat of solids, which introduced, for the first time, the effect of lattice vibrations on the thermodynamic properties of crystals, in particular the specific heat. Deputy Director of the Yale Quantum Institute A. Douglas Stone makes a similar priority case for Einstein in his work on the synthetic history of quantum mechanics.

Etymology

According to physicist Philip Warren Anderson, the use of the term "condensed matter" to designate a field of study was coined by him and Volker Heine, when they changed the name of their group at the Cavendish Laboratories, Cambridge, from Solid state theory to Theory of Condensed Matter in 1967, as they felt it better included their interest in liquids, nuclear matter, and so on. Although Anderson and Heine helped popularize the name "condensed matter", it had been used in Europe for some years, most prominently in the Springer-Verlag journal Physics of Condensed Matter, launched in 1963.
The name "condensed matter physics" emphasized the commonality of scientific problems encountered by physicists working on solids, liquids, plasmas, and other complex matter, whereas "solid state physics" was often associated with restricted industrial applications of metals and semiconductors. In the 1960s and 70s, some physicists felt the more comprehensive name better fit the funding environment and Cold War politics of the time. References to "condensed" states can be traced to earlier sources. For example, in the introduction to his 1947 book Kinetic Theory of Liquids, Yakov Frenkel proposed that "The kinetic theory of liquids must accordingly be developed as a generalization and extension of the kinetic theory of solid bodies. As a matter of fact, it would be more correct to unify them under the title of 'condensed bodies'". History of condensed matter physics Classical physics One of the first studies of condensed states of matter was by English chemist Humphry Davy, in the first decades of the nineteenth century. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre, ductility and high electrical and thermal conductivity. This indicated that the atoms in John Dalton's atomic theory were not indivisible as Dalton claimed, but had inner structure. Davy further claimed that elements that were then believed to be gases, such as nitrogen and hydrogen could be liquefied under the right conditions and would then behave as metals. In 1823, Michael Faraday, then an assistant in Davy's lab, successfully liquefied chlorine and went on to liquefy all known gaseous elements, except for nitrogen, hydrogen, and oxygen. Shortly after, in 1869, Irish chemist Thomas Andrews studied the phase transition from a liquid to a gas and coined the term critical point to describe the condition where a gas and a liquid were indistinguishable as phases, and Dutch physicist Johannes van der Waals supplied the theoretical framework which allowed the prediction of critical behavior based on measurements at much higher temperatures. By 1908, James Dewar and Heike Kamerlingh Onnes were successfully able to liquefy hydrogen and then newly discovered helium, respectively. Paul Drude in 1900 proposed the first theoretical model for a classical electron moving through a metallic solid. Drude's model described properties of metals in terms of a gas of free electrons, and was the first microscopic model to explain empirical observations such as the Wiedemann–Franz law. However, despite the success of Drude's free electron model, it had one notable problem: it was unable to correctly explain the electronic contribution to the specific heat and magnetic properties of metals, and the temperature dependence of resistivity at low temperatures. In 1911, three years after helium was first liquefied, Onnes working at University of Leiden discovered superconductivity in mercury, when he observed the electrical resistivity of mercury to vanish at temperatures below a certain value. The phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades. Albert Einstein, in 1922, said regarding contemporary theories of superconductivity that "with our far-reaching ignorance of the quantum mechanics of composite systems we are very far from being able to compose a theory out of these vague ideas." 
Advent of quantum mechanics

Drude's classical model was augmented by Wolfgang Pauli, Arnold Sommerfeld, Felix Bloch and other physicists. Pauli realized that the free electrons in metal must obey the Fermi–Dirac statistics. Using this idea, he developed the theory of paramagnetism in 1926. Shortly after, Sommerfeld incorporated the Fermi–Dirac statistics into the free electron model and made it better able to explain the heat capacity. Two years later, Bloch used quantum mechanics to describe the motion of an electron in a periodic lattice. The mathematics of crystal structures developed by Auguste Bravais, Yevgraf Fyodorov and others was used to classify crystals by their symmetry group, and tables of crystal structures were the basis for the series International Tables of Crystallography, first published in 1935. Band structure calculations were first used in 1930 to predict the properties of new materials, and in 1947 John Bardeen, Walter Brattain and William Shockley developed the first semiconductor-based transistor, heralding a revolution in electronics. In 1879, Edwin Herbert Hall, working at Johns Hopkins University, discovered a voltage developed across conductors transverse to an electric current in the conductor and a magnetic field perpendicular to the current. This phenomenon, arising due to the nature of charge carriers in the conductor, came to be termed the Hall effect, but it was not properly explained at the time, since the electron was not experimentally discovered until 18 years later. After the advent of quantum mechanics, Lev Landau in 1930 developed the theory of Landau quantization and laid the foundation for the theoretical explanation for the quantum Hall effect discovered half a century later. Magnetism as a property of matter has been known in China since 4000 BC. However, the first modern studies of magnetism only started with the development of electrodynamics by Faraday, Maxwell and others in the nineteenth century, which included classifying materials as ferromagnetic, paramagnetic and diamagnetic based on their response to magnetization. Pierre Curie studied the dependence of magnetization on temperature and discovered the Curie point phase transition in ferromagnetic materials. In 1906, Pierre Weiss introduced the concept of magnetic domains to explain the main properties of ferromagnets. The first attempt at a microscopic description of magnetism was by Wilhelm Lenz and Ernst Ising through the Ising model, which described magnetic materials as consisting of a periodic lattice of spins that collectively acquired magnetization. The Ising model was solved exactly to show that spontaneous magnetization cannot occur in one dimension but is possible in higher-dimensional lattices (see the sketch below). Further research, such as by Bloch on spin waves and Néel on antiferromagnetism, led to the development of new magnetic materials with applications to magnetic storage devices.

Modern many-body physics

The Sommerfeld model and spin models for ferromagnetism illustrated the successful application of quantum mechanics to condensed matter problems in the 1930s. However, there still were several unsolved problems, most notably the description of superconductivity and the Kondo effect. After World War II, several ideas from quantum field theory were applied to condensed matter problems. These included recognition of collective excitation modes of solids and the important notion of a quasiparticle.
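(An aside added for illustration, not part of the original article: the following is a minimal Metropolis Monte Carlo sketch of the two-dimensional Ising model; the lattice size, temperatures, and step counts are arbitrary illustrative choices. It shows the behavior the exact solutions establish: a net magnetization appears below the critical temperature, roughly T_c ≈ 2.27 J for the square lattice, and disappears above it.)

```python
import math
import random

def ising_mean_magnetization(L=16, T=1.5, J=1.0, sweeps=2000, seed=1):
    """Metropolis sampling of a 2D Ising model (H = -J * sum of s_i s_j over
    nearest neighbors) on an L x L periodic lattice; returns mean |m| per spin."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    total, samples = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(L * L):  # one sweep = L*L attempted single-spin flips
            i, j = rng.randrange(L), rng.randrange(L)
            nn = (s[(i + 1) % L][j] + s[(i - 1) % L][j] +
                  s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2.0 * J * s[i][j] * nn  # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
        if sweep >= sweeps // 2:  # crude equilibration: sample the second half only
            total += abs(sum(map(sum, s))) / (L * L)
            samples += 1
    return total / samples

print(ising_mean_magnetization(T=1.5))  # well below T_c: |m| close to 1 (ordered)
print(ising_mean_magnetization(T=3.5))  # well above T_c: |m| is small (disordered)
```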
Russian physicist Lev Landau used the idea for the Fermi liquid theory, wherein low energy properties of interacting fermion systems were given in terms of what are now termed Landau quasiparticles. Landau also developed a mean-field theory for continuous phase transitions, which described ordered phases as spontaneous breakdown of symmetry. The theory also introduced the notion of an order parameter to distinguish between ordered phases. Eventually in 1957, John Bardeen, Leon Cooper and John Schrieffer developed the so-called BCS theory of superconductivity, based on the discovery that arbitrarily small attraction between two electrons of opposite spin mediated by phonons in the lattice can give rise to a bound state called a Cooper pair. The study of phase transitions and the critical behavior of observables, termed critical phenomena, was a major field of interest in the 1960s. Leo Kadanoff, Benjamin Widom and Michael Fisher developed the ideas of critical exponents and Widom scaling. These ideas were unified by Kenneth G. Wilson in 1972, under the formalism of the renormalization group in the context of quantum field theory. The quantum Hall effect was discovered by Klaus von Klitzing, Dorda and Pepper in 1980, when they observed the Hall conductance to be integer multiples of a fundamental constant, e²/h. The effect was observed to be independent of parameters such as system size and impurities. In 1981, theorist Robert Laughlin proposed a theory explaining the unanticipated precision of the integral plateau. It also implied that the Hall conductance is proportional to a topological invariant, called the Chern number, whose relevance for the band structure of solids was formulated by David J. Thouless and collaborators. Shortly after, in 1982, Horst Störmer and Daniel Tsui observed the fractional quantum Hall effect, where the conductance was a rational multiple of the constant e²/h. Laughlin, in 1983, realized that this was a consequence of quasiparticle interaction in the Hall states and formulated a variational method solution, named the Laughlin wavefunction. The study of topological properties of the fractional Hall effect remains an active field of research. Decades later, the aforementioned topological band theory advanced by David J. Thouless and collaborators was further expanded, leading to the discovery of topological insulators. In 1986, Karl Müller and Johannes Bednorz discovered the first high temperature superconductor, a material which was superconducting at temperatures as high as 50 kelvins. It was realized that the high temperature superconductors are examples of strongly correlated materials where the electron–electron interactions play an important role. A satisfactory theoretical description of high-temperature superconductors is still not known and the field of strongly correlated materials continues to be an active research topic. In 2009, David Field and researchers at Aarhus University discovered spontaneous electric fields when creating prosaic films of various gases. This has more recently expanded to form the research area of spontelectrics. In 2012, several groups released preprints which suggest that samarium hexaboride has the properties of a topological insulator, in accord with the earlier theoretical predictions. Since samarium hexaboride is an established Kondo insulator, i.e.
a strongly correlated electron material, it is expected that the existence of a topological Dirac surface state in this material would lead to a topological insulator with strong electronic correlations.

Theoretical

Theoretical condensed matter physics involves the use of theoretical models to understand properties of states of matter. These include models to study the electronic properties of solids, such as the Drude model, the band structure and the density functional theory. Theoretical models have also been developed to study the physics of phase transitions, such as the Ginzburg–Landau theory, critical exponents and the use of mathematical methods of quantum field theory and the renormalization group. Modern theoretical studies involve the use of numerical computation of electronic structure and mathematical tools to understand phenomena such as high-temperature superconductivity, topological phases, and gauge symmetries.

Emergence

Theoretical understanding of condensed matter physics is closely related to the notion of emergence, wherein complex assemblies of particles behave in ways dramatically different from their individual constituents. For example, a range of phenomena related to high temperature superconductivity are understood poorly, although the microscopic physics of individual electrons and lattices is well known. Similarly, models of condensed matter systems have been studied where collective excitations behave like photons and electrons, thereby describing electromagnetism as an emergent phenomenon. Emergent properties can also occur at the interface between materials: one example is the lanthanum aluminate-strontium titanate interface, where two band insulators are joined to create conductivity and superconductivity.

Electronic theory of solids

The metallic state has historically been an important building block for studying properties of solids. The first theoretical description of metals was given by Paul Drude in 1900 with the Drude model, which explained electrical and thermal properties by describing a metal as an ideal gas of the then-newly discovered electrons. He was able to derive the empirical Wiedemann–Franz law and get results in close agreement with the experiments. This classical model was then improved by Arnold Sommerfeld, who incorporated the Fermi–Dirac statistics of electrons and was able to explain the anomalous behavior of the specific heat of metals in the Wiedemann–Franz law. In 1912, the structure of crystalline solids was studied by Max von Laue and Paul Knipping, when they observed the X-ray diffraction pattern of crystals, and concluded that crystals get their structure from periodic lattices of atoms. In 1928, Swiss physicist Felix Bloch provided a wave function solution to the Schrödinger equation with a periodic potential, known as Bloch's theorem. Calculating electronic properties of metals by solving the many-body wavefunction is often computationally hard, and hence, approximation methods are needed to obtain meaningful predictions. The Thomas–Fermi theory, developed in the 1920s, was used to estimate system energy and electronic density by treating the local electron density as a variational parameter. Later in the 1930s, Douglas Hartree, Vladimir Fock and John Slater developed the so-called Hartree–Fock wavefunction as an improvement over the Thomas–Fermi model. The Hartree–Fock method accounted for exchange statistics of single particle electron wavefunctions. In general, it is very difficult to solve the Hartree–Fock equation.
Only the free electron gas case can be solved exactly. Finally in 1964–65, Walter Kohn, Pierre Hohenberg and Lu Jeu Sham proposed the density functional theory (DFT), which gave realistic descriptions for bulk and surface properties of metals. The density functional theory has been widely used since the 1970s for band structure calculations of a variety of solids.

Symmetry breaking

Some states of matter exhibit symmetry breaking, where the relevant laws of physics possess some form of symmetry that is broken. A common example is crystalline solids, which break continuous translational symmetry. Other examples include magnetized ferromagnets, which break rotational symmetry, and more exotic states such as the ground state of a BCS superconductor, which breaks U(1) phase rotational symmetry.

Goldstone's theorem in quantum field theory states that in a system with broken continuous symmetry there exist excitations with arbitrarily low energy, called Goldstone bosons. For example, in crystalline solids, these correspond to phonons, which are quantized versions of lattice vibrations.

Phase transition

Phase transition refers to the change of phase of a system, which is brought about by a change in an external parameter such as temperature. A classical phase transition occurs at finite temperature when the order of the system is destroyed. For example, when ice melts and becomes water, the ordered crystal structure is destroyed. In quantum phase transitions, the temperature is set to absolute zero, and a non-thermal control parameter, such as pressure or magnetic field, causes the phase transition when order is destroyed by quantum fluctuations originating from the Heisenberg uncertainty principle. Here, the different quantum phases of the system refer to distinct ground states of the Hamiltonian matrix. Understanding the behavior of quantum phase transitions is important in the difficult task of explaining the properties of rare-earth magnetic insulators, high-temperature superconductors, and other substances.

Two classes of phase transitions occur: first-order transitions and second-order or continuous transitions. For the latter, the two phases involved do not co-exist at the transition temperature, also called the critical point. Near the critical point, systems undergo critical behavior, wherein several of their properties, such as correlation length, specific heat, and magnetic susceptibility, diverge according to power laws. These critical phenomena present serious challenges to physicists because normal macroscopic laws are no longer valid in the region, and novel ideas and methods must be invented to find the new laws that can describe the system.

The simplest theory that can describe continuous phase transitions is the Ginzburg–Landau theory, which works in the so-called mean-field approximation. However, it can only roughly explain continuous phase transitions for ferroelectrics and type I superconductors, which involve long-range microscopic interactions. For other types of systems, which involve short-range interactions near the critical point, a better theory is needed. Near the critical point, fluctuations happen over a broad range of size scales, while the whole system is scale invariant. Renormalization group methods successively average out the shortest-wavelength fluctuations in stages while retaining their effects in the next stage. Thus, the changes of a physical system as viewed at different size scales can be investigated systematically. The methods, together with powerful computer simulation, contribute greatly to the explanation of the critical phenomena associated with continuous phase transitions.
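The mean-field picture behind the Ginzburg–Landau theory can be made concrete with a minimal Landau free-energy expansion, written here as a textbook sketch in terms of a generic scalar order parameter m rather than the parameters of any particular material:

\[
f(m) = f_0 + a_0\,(T - T_c)\,m^2 + b\,m^4, \qquad a_0, b > 0.
\]

Minimizing f over m gives m = 0 for T > T_c, while for T < T_c

\[
m = \pm\sqrt{\frac{a_0\,(T_c - T)}{2b}} \;\propto\; (T_c - T)^{1/2},
\]

i.e. the mean-field critical exponent β = 1/2; the same expansion yields a susceptibility diverging as χ ∝ |T − T_c|⁻¹ (γ = 1). These mean-field exponents are precisely the predictions that renormalization group methods correct below the upper critical dimension.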
Experimental

Experimental condensed matter physics involves the use of experimental probes to try to discover new properties of materials. Such probes include the effects of electric and magnetic fields, measurements of response functions, transport properties, and thermometry. Commonly used experimental methods include spectroscopy, with probes such as X-rays, infrared light and inelastic neutron scattering, and the study of thermal response, such as specific heat and the measurement of transport via thermal and heat conduction.

Scattering

Several condensed matter experiments involve scattering of an experimental probe, such as X-rays, optical photons, neutrons, etc., off constituents of a material. The choice of scattering probe depends on the observation energy scale of interest. Visible light has energy on the scale of 1 electron volt (eV) and is used as a scattering probe to measure variations in material properties such as the dielectric constant and refractive index. X-rays have energies of the order of 10 keV and hence are able to probe atomic length scales; they are used to measure variations in electron charge density (a worked estimate of this energy–length relation appears below). Neutrons can also probe atomic length scales and are used to study scattering off nuclei and electron spins and magnetization (as neutrons have spin but no charge). Coulomb and Mott scattering measurements can be made by using electron beams as scattering probes. Similarly, positron annihilation can be used as an indirect measurement of local electron density. Laser spectroscopy is an excellent tool for studying the microscopic properties of a medium, for example, to study forbidden transitions in media with nonlinear optical spectroscopy.

External magnetic fields

In experimental condensed matter physics, external magnetic fields act as thermodynamic variables that control the state, phase transitions and properties of material systems. Nuclear magnetic resonance (NMR) is a method by which external magnetic fields are used to find resonance modes of individual nuclei, thus giving information about the atomic, molecular, and bond structure of their neighborhood. NMR experiments can be made in magnetic fields with strengths up to 60 tesla. Higher magnetic fields can improve the quality of NMR measurement data. Quantum oscillations are another experimental method in which high magnetic fields are used to study material properties such as the geometry of the Fermi surface. High magnetic fields will be useful in experimentally testing various theoretical predictions, such as the quantized magnetoelectric effect, the image magnetic monopole, and the half-integer quantum Hall effect.

Nuclear spectroscopy

The local structure of condensed matter, that is, the structure of the nearest neighbour atoms, can be investigated with methods of nuclear spectroscopy, which are very sensitive to small changes. Using specific and radioactive nuclei, the nucleus becomes the probe that interacts with its surrounding electric and magnetic fields (hyperfine interactions). The methods are suitable for studying defects, diffusion, phase changes, and magnetism. Common methods include NMR, Mössbauer spectroscopy, and perturbed angular correlation (PAC). PAC in particular is ideal for the study of phase changes at extreme temperatures above 2000 °C, since the method itself has no temperature dependence.
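As a rough consistency check on the probe energies quoted above (standard photon kinematics, with illustrative numbers rather than values tied to any particular experiment), the photon wavelength follows from

\[
\lambda = \frac{hc}{E} \approx \frac{1239.8\ \text{eV·nm}}{E}.
\]

Visible light at E ≈ 2 eV thus has λ ≈ 620 nm, vastly larger than interatomic spacings, whereas an X-ray photon at E ≈ 10 keV has λ ≈ 0.124 nm ≈ 1.24 Å, comparable to the distance between atoms, which is why X-rays, and not visible light, can resolve atomic-scale structure.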
Cold atomic gases

Ultracold atom trapping in optical lattices is an experimental tool commonly used in condensed matter physics, and in atomic, molecular, and optical physics. The method involves using optical lasers to form an interference pattern, which acts as a lattice in which ions or atoms can be placed at very low temperatures. Cold atoms in optical lattices are used as quantum simulators; that is, they act as controllable systems that can model the behavior of more complicated systems, such as frustrated magnets. In particular, they are used to engineer one-, two- and three-dimensional lattices for a Hubbard model with pre-specified parameters, and to study phase transitions for antiferromagnetic and spin liquid ordering. In 1995, a gas of rubidium atoms cooled down to a temperature of 170 nK was used to experimentally realize the Bose–Einstein condensate, a novel state of matter originally predicted by S. N. Bose and Albert Einstein, wherein a large number of atoms occupy one quantum state.

Applications

Research in condensed matter physics has given rise to several device applications, such as the development of the semiconductor transistor, laser technology, and several phenomena studied in the context of nanotechnology. Methods such as scanning tunneling microscopy can be used to control processes at the nanometer scale, and have given rise to the study of nanofabrication. Molecular machines of this kind were developed, for example, by the Nobel laureate in chemistry Ben Feringa, whose team built a molecular car, a molecular windmill, and many other such machines.

In quantum computation, information is represented by quantum bits, or qubits. The qubits may decohere quickly before useful computation is completed. This serious problem must be solved before quantum computing may be realized. To solve this problem, several promising approaches have been proposed in condensed matter physics, including Josephson junction qubits, spintronic qubits using the spin orientation of magnetic materials, and the topological non-Abelian anyons of fractional quantum Hall effect states. Condensed matter physics also has important uses for biophysics, for example, the experimental method of magnetic resonance imaging, which is widely used in medical diagnosis.

Further reading

Anderson, Philip W. (2018). Basic Notions of Condensed Matter Physics. CRC Press.
Girvin, Steven M.; Yang, Kun (2019). Modern Condensed Matter Physics. Cambridge University Press.
Coleman, Piers (2015). Introduction to Many-Body Physics. Cambridge University Press.
Chaikin, P. M.; Lubensky, T. C. (2000). Principles of Condensed Matter Physics. Cambridge University Press, 1st edition.
Altland, Alexander; Simons, Ben (2006). Condensed Matter Field Theory. Cambridge University Press.
Marder, Michael P. (2010). Condensed Matter Physics, second edition. John Wiley and Sons.
Hoddeson, Lillian; Braun, Ernest; Teichmann, Jürgen; Weart, Spencer, eds. (1992). Out of the Crystal Maze: Chapters from the History of Solid State Physics. Oxford University Press.
City
A city is a human settlement of notable size. It can be defined as a permanent and densely settled place with administratively defined boundaries whose members work primarily on non-agricultural tasks. Cities generally have extensive systems for housing, transportation, sanitation, utilities, land use, production of goods, and communication. Their density facilitates interaction between people, government organisations and businesses, sometimes benefiting different parties in the process, such as improving the efficiency of goods and service distribution.

Historically, city-dwellers have been a small proportion of humanity overall, but following two centuries of unprecedented and rapid urbanization, more than half of the world population now lives in cities, which has had profound consequences for global sustainability. Present-day cities usually form the core of larger metropolitan areas and urban areas—creating numerous commuters traveling towards city centres for employment, entertainment, and education. However, in a world of intensifying globalization, all cities are to varying degrees also connected globally beyond these regions. This increased influence means that cities also have significant influences on global issues, such as sustainable development, global warming, and global health. Because of these major influences on global issues, the international community has prioritized investment in sustainable cities through Sustainable Development Goal 11. Due to the efficiency of transportation and the smaller land consumption, dense cities hold the potential to have a smaller ecological footprint per inhabitant than more sparsely populated areas. Therefore, compact cities are often referred to as a crucial element of fighting climate change. However, this concentration can also have significant negative consequences, such as forming urban heat islands, concentrating pollution, and stressing water supplies and other resources.

Other important traits of cities besides population include the capital status and relative continued occupation of the city. For example, country capitals such as Beijing, London, Mexico City, Moscow, Nairobi, New Delhi, Paris, Rome, Athens, Seoul, Singapore, Tokyo, Jakarta, Manila, and Washington, D.C. reflect the identity and apex of their respective nations. Some historic capitals, such as Kyoto and Xi'an, maintain their reflection of cultural identity even without modern capital status. Religious holy sites offer another example of capital status within a religion: Jerusalem, Mecca, Varanasi, Ayodhya, Haridwar and Prayagraj each hold significance.

Meaning

A city can be distinguished from other human settlements by its relatively great size, but also by its functions and its special symbolic status, which may be conferred by a central authority. The term can also refer either to the physical streets and buildings of the city or to the collection of people who dwell there, and can be used in a general sense to mean urban rather than rural territory. National censuses use a variety of definitions – invoking factors such as population, population density, number of dwellings, economic function, and infrastructure – to classify populations as urban. Typical working definitions for small-city populations start at around 100,000 people. Common population definitions for an urban area (city or town) range between 1,500 and 50,000 people, with most U.S. states using a minimum between 1,500 and 5,000 inhabitants. Some jurisdictions set no such minima.
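To illustrate how such threshold-based census definitions operate in practice, the following is a minimal sketch in Python; the cutoffs and category names are hypothetical stand-ins drawn from the ranges quoted above, not any jurisdiction's actual rules.

# Minimal sketch: classify a settlement by population thresholds.
# The cutoffs below are illustrative stand-ins; real census rules
# differ by jurisdiction and weigh additional factors such as
# density, dwellings, economic function, and infrastructure.

def classify_settlement(population: int,
                        urban_minimum: int = 1_500,
                        city_minimum: int = 100_000) -> str:
    """Return a coarse settlement category for a given population."""
    if population >= city_minimum:
        return "city"
    if population >= urban_minimum:
        return "town"  # urban, but below the small-city threshold
    return "rural settlement"

# Example: populations of 800, 5,000 and 250,000 classify as
# "rural settlement", "town" and "city" respectively.
for pop in (800, 5_000, 250_000):
    print(pop, "->", classify_settlement(pop))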
In the United Kingdom, city status is awarded by the Crown and then remains permanently. (Historically, the qualifying factor was the presence of a cathedral, resulting in some very small cities such as Wells, with a population of 12,000, and St Davids, with a population of 1,841.) According to the "functional definition", a city is not distinguished by size alone, but also by the role it plays within a larger political context. Cities serve as administrative, commercial, religious, and cultural hubs for their larger surrounding areas. The presence of a literate elite is sometimes included in the definition. A typical city has professional administrators, regulations, and some form of taxation (food and other necessities or means to trade for them) to support the government workers. (This arrangement contrasts with the more typically horizontal relationships in a tribe or village accomplishing common goals through informal agreements between neighbors, or through leadership of a chief.) The governments may be based on heredity, religion, military power, work systems such as canal-building, food distribution, land ownership, agriculture, commerce, manufacturing, finance, or a combination of these. Societies that live in cities are often called civilizations.

The degree of urbanization is a modern metric to help define what comprises a city: "a population of at least 50,000 inhabitants in contiguous dense grid cells (>1,500 inhabitants per square kilometer)". This metric was "devised over years by the European Commission, OECD, World Bank and others, and endorsed in March [2021] by the United Nations... largely for the purpose of international statistical comparison".

Etymology

The word city and the related civilization come from the Latin root civitas, originally meaning 'citizenship' or 'community member' and eventually coming to correspond with urbs, meaning 'city' in a more physical sense. The Roman civitas was closely linked with the Greek polis—another common root appearing in English words such as metropolis. In toponymic terminology, names of individual cities and towns are called astionyms (from Ancient Greek ἄστυ 'city or town' and ὄνομα 'name').

Geography

Urban geography deals both with cities in their larger context and with their internal structure. Cities are estimated to cover about 3% of the land surface of the Earth.

Site

Town siting has varied through history according to natural, technological, economic, and military contexts. Access to water has long been a major factor in city placement and growth, and despite exceptions enabled by the advent of rail transport in the nineteenth century, through the present most of the world's urban population lives near the coast or on a river. Urban areas as a rule cannot produce their own food and therefore must develop some relationship with a hinterland which sustains them. Only in special cases, such as mining towns which play a vital role in long-distance trade, are cities disconnected from the countryside which feeds them. Thus, centrality within a productive region influences siting, as economic forces would in theory favor the creation of market places in optimal, mutually reachable locations.

Center

The vast majority of cities have a central area containing buildings with special economic, political, and religious significance. Archaeologists refer to this area by the Greek term temenos or, if fortified, as a citadel. These spaces historically reflect and amplify the city's centrality and importance to its wider sphere of influence.
Today cities have a city center or downtown, sometimes coincident with a central business district. Public space Cities typically have public spaces where anyone can go. These include privately owned spaces open to the public as well as forms of public land such as public domain and the commons. Western philosophy since the time of the Greek agora has considered physical public space as the substrate of the symbolic public sphere. Public art adorns (or disfigures) public spaces. Parks and other natural sites within cities provide residents with relief from the hardness and regularity of typical built environments. Urban green spaces are another component of public space that provide the benefit of mitigating the urban heat island effect, especially with cities that are in warmer climates. These spaces prevent carbon imbalances, extreme habitat losses, electricity and water consumption and human health risks. Internal structure Urban structure generally follows one or more basic patterns: geomorphic, radial, concentric, rectilinear, and curvilinear. Physical environment generally constrains the form in which a city is built. If located on a mountainside, urban structure may rely on terraces and winding roads. It may be adapted to its means of subsistence (e.g. agriculture or fishing). And it may be set up for optimal defense given the surrounding landscape. Beyond these "geomorphic" features, cities can develop internal patterns, due to natural growth or to city planning. In a radial structure, main roads converge on a central point. This form could evolve from successive growth over a long time, with concentric traces of town walls and citadels marking older city boundaries. In more recent history, such forms were supplemented by ring roads moving traffic around the outskirts of a town. Dutch cities such as Amsterdam and Haarlem are structured as a central square surrounded by concentric canals marking every expansion. In cities such as Moscow, this pattern is still clearly visible. A system of rectilinear city streets and land plots, known as the grid plan, has been used for millennia in Asia, Europe, and the Americas. The Indus Valley civilisation built Mohenjo-Daro, Harappa and other cities on a grid pattern, using ancient principles described by Kautilya, and aligned with the compass points. The ancient Greek city of Priene exemplifies a grid plan with specialized districts used across the Hellenistic Mediterranean. Urban areas Urban-type settlement extends far beyond the traditional boundaries of the city proper in a form of development sometimes described critically as urban sprawl. Decentralization and dispersal of city functions (commercial, industrial, residential, cultural, political) has transformed the very meaning of the term and has challenged geographers seeking to classify territories according to an urban-rural binary. Metropolitan areas include suburbs and exurbs organized around the needs of commuters, and sometimes edge cities characterized by a degree of economic and political independence. (In the US these are grouped into metropolitan statistical areas for purposes of demography and marketing.) Some cities are now part of a continuous urban landscape called urban agglomeration, conurbation, or megalopolis (exemplified by the BosWash corridor of the Northeastern United States.) History The cities of Jericho, Aleppo, Faiyum, Yerevan, Athens, Damascus and Argos are among those laying claim to the longest continual inhabitation. 
Cities, characterized by population density, symbolic function, and urban planning, have existed for thousands of years. In the conventional view, civilization and the city both followed from the development of agriculture, which enabled production of surplus food, and thus a social division of labour (with concomitant social stratification) and trade. Early cities often featured granaries, sometimes within a temple. A minority viewpoint considers that cities may have arisen without agriculture, due to alternative means of subsistence (fishing), to use as communal seasonal shelters, to their value as bases for defensive and offensive military organization, or to their inherent economic function. Cities played a crucial role in the establishment of political power over an area, and ancient leaders such as Alexander the Great founded and created them with zeal.

Ancient times

Jericho and Çatalhöyük, dated to the eighth millennium BC, are among the earliest proto-cities known to archaeologists. However, the Mesopotamian city of Uruk from the mid-fourth millennium BC (ancient Iraq) is considered by some to be the first true city, and it gives its name to the Uruk period. In the fourth and third millennia BC, complex civilizations flourished in the river valleys of Mesopotamia, India, China, and Egypt. Excavations in these areas have found the ruins of cities geared variously towards trade, politics, or religion. Some had large, dense populations, but others carried out urban activities in the realms of politics or religion without having large associated populations. Among the early Old World cities, Mohenjo-daro of the Indus Valley civilization in present-day Pakistan, existing from about 2600 BC, was one of the largest, with a population of 50,000 or more and a sophisticated sanitation system. China's planned cities were constructed according to sacred principles to act as celestial microcosms. The Ancient Egyptian cities known physically by archaeologists are not extensive. They include (known by their Arab names) El Lahun, a workers' town associated with the pyramid of Senusret II, and the religious city Amarna, built by Akhenaten and abandoned. These sites appear planned in a highly regimented and stratified fashion, with a minimalistic grid of rooms for the workers and increasingly more elaborate housing available for higher classes.

In Mesopotamia, the civilization of Sumer, followed by Assyria and Babylon, gave rise to numerous cities, governed by kings and fostering multiple languages written in cuneiform. The Phoenician trading empire, flourishing around the turn of the first millennium BC, encompassed numerous cities extending from Tyre, Sidon, and Byblos to Carthage and Cádiz. In the following centuries, independent city-states of Greece, especially Athens, developed the polis, an association of male landowning citizens who collectively constituted the city. The agora, meaning "gathering place" or "assembly", was the center of athletic, artistic, spiritual and political life of the polis. Rome was the first city that surpassed one million inhabitants. Under the authority of its empire, Rome transformed and founded many cities (coloniae), and with them brought its principles of urban architecture, design, and society.

In the ancient Americas, early urban traditions developed in the Andes and Mesoamerica. In the Andes, the first urban centers developed in the Norte Chico civilization, Chavin and Moche cultures, followed by major cities in the Huari, Chimu and Inca cultures.
The Norte Chico civilization included as many as 30 major population centers in what is now the Norte Chico region of north-central coastal Peru. It is the oldest known civilization in the Americas, flourishing between the 30th and 18th centuries BC. Mesoamerica saw the rise of early urbanism in several cultural regions, beginning with the Olmec and spreading to the Preclassic Maya, the Zapotec of Oaxaca, and Teotihuacan in central Mexico. Later cultures such as the Aztec, Andean civilization, Mayan, Mississippians, and Pueblo peoples drew on these earlier urban traditions. Many of their ancient cities continue to be inhabited, including major metropolitan cities such as Mexico City, in the same location as Tenochtitlan, while ancient continuously inhabited Pueblos are near modern urban areas in New Mexico, such as Acoma Pueblo near the Albuquerque metropolitan area and Taos Pueblo near Taos; others, like Lima, are located near ancient Peruvian sites such as Pachacamac.

Jenné-Jeno, located in present-day Mali and dating to the third century BC, lacked monumental architecture and a distinctive elite social class—but nevertheless had specialized production and relations with a hinterland. Pre-Arabic trade contacts probably existed between Jenné-Jeno and North Africa. Other early urban centers in sub-Saharan Africa, dated to around 500 AD, include Awdaghust, Kumbi-Saleh, the ancient capital of Ghana, and Maranda, a center located on a trade route between Egypt and Gao.

Middle Ages

In the remnants of the Roman Empire, cities of late antiquity gained independence but soon lost population and importance. The locus of power in the West shifted to Constantinople and to the ascendant Islamic civilization with its major cities Baghdad, Cairo, and Córdoba. From the 9th through the end of the 12th century, Constantinople, capital of the Eastern Roman Empire, was the largest and wealthiest city in Europe, with a population approaching 1 million. The Ottoman Empire gradually gained control over many cities in the Mediterranean area, including Constantinople in 1453.

In the Holy Roman Empire, beginning in the 12th century, free imperial cities such as Nuremberg, Strasbourg, Frankfurt, Basel, Zurich and Nijmegen became a privileged elite among towns, having won self-governance from their local lord or having been granted self-governance by the emperor and being placed under his immediate protection. By 1480, these cities, as far as still part of the empire, became part of the Imperial Estates governing the empire with the emperor through the Imperial Diet.

By the 13th and 14th centuries, some cities became powerful states, taking surrounding areas under their control or establishing extensive maritime empires. In Italy, medieval communes developed into city-states, including the Republic of Venice and the Republic of Genoa. In Northern Europe, cities including Lübeck and Bruges formed the Hanseatic League for collective defense and commerce. Their power was later challenged and eclipsed by the commercial cities of the Low Countries, such as Ghent, Ypres, and Amsterdam. Similar phenomena existed elsewhere, as in the case of Sakai, which enjoyed considerable autonomy in late medieval Japan.

In the first millennium AD, the Khmer capital of Angkor in Cambodia grew into the most extensive preindustrial settlement in the world by area, covering over 1,000 km2 and possibly supporting up to one million people.
Early modern

In the West, nation-states became the dominant unit of political organization following the Peace of Westphalia in the seventeenth century. Western Europe's larger capitals (London and Paris) benefited from the growth of commerce following the emergence of an Atlantic trade. However, most towns remained small. During the Spanish colonization of the Americas, the old Roman city concept was extensively used. Cities were founded in the middle of the newly conquered territories and were bound to several laws regarding administration, finances and urbanism.

Industrial age

The growth of modern industry from the late 18th century onward led to massive urbanization and the rise of new great cities, first in Europe and then in other regions, as new opportunities brought huge numbers of migrants from rural communities into urban areas. England led the way as London became the capital of a world empire and cities across the country grew in locations strategic for manufacturing. In the United States from 1860 to 1910, the introduction of railroads reduced transportation costs, and large manufacturing centers began to emerge, fueling migration from rural to city areas. These industrializing cities confronted health challenges associated with overcrowding, occupational hazards of industry, contaminated water and air, poor sanitation, and communicable diseases such as typhoid and cholera. Factories and slums emerged as regular features of the urban landscape.

Post-industrial age

In the second half of the 20th century, deindustrialization (or "economic restructuring") in the West led to poverty, homelessness, and urban decay in formerly prosperous cities. America's "Steel Belt" became a "Rust Belt", and cities such as Detroit, Michigan, and Gary, Indiana began to shrink, contrary to the global trend of massive urban expansion. Such cities have shifted with varying success into the service economy and public-private partnerships, with concomitant gentrification, uneven revitalization efforts, and selective cultural development. Under the Great Leap Forward and subsequent five-year plans continuing today, China has undergone concomitant urbanization and industrialization to become the world's leading manufacturer. Amidst these economic changes, high technology and instantaneous telecommunication enable select cities to become centers of the knowledge economy. A new smart city paradigm, supported by institutions such as the RAND Corporation and IBM, is bringing computerized surveillance, data analysis, and governance to bear on cities and city-dwellers. Some companies are building brand new masterplanned cities from scratch on greenfield sites.

Urbanization

Urbanization is the process of migration from rural into urban areas, driven by various political, economic, and cultural factors. Until the 18th century, an equilibrium existed between the rural agricultural population and towns featuring markets and small-scale manufacturing. With the agricultural and industrial revolutions, urban population began its unprecedented growth, both through migration and through demographic expansion. In England, the proportion of the population living in cities jumped from 17% in 1801 to 72% in 1891. In 1900, 15% of the world population lived in cities. The cultural appeal of cities also plays a role in attracting residents. Urbanization rapidly spread across Europe and the Americas and, since the 1950s, has taken hold in Asia and Africa as well.
The Population Division of the United Nations Department of Economic and Social Affairs reported in 2014 that for the first time more than half of the world population lives in cities. Latin America is the most urban continent, with four-fifths of its population living in cities, including one fifth of the population said to live in shantytowns (favelas, poblaciones callampas, etc.). Batam, Indonesia, Mogadishu, Somalia, Xiamen, China and Niamey, Niger, are considered among the world's fastest-growing cities, with annual growth rates of 5–8%. In general, the more developed countries of the "Global North" remain more urbanized than the less developed countries of the "Global South"—but the difference continues to shrink because urbanization is happening faster in the latter group. Asia is home to by far the greatest absolute number of city-dwellers: over two billion and counting. The UN predicts an additional 2.5 billion city-dwellers (and 300 million fewer country-dwellers) worldwide by 2050, with 90% of urban population expansion occurring in Asia and Africa.

Megacities, cities with populations in the multi-millions, have proliferated into the dozens, arising especially in Asia, Africa, and Latin America. Economic globalization fuels the growth of these cities, as new torrents of foreign capital arrange for rapid industrialization, as well as relocation of major businesses from Europe and North America, attracting immigrants from near and far. A deep gulf divides rich and poor in these cities, which usually contain a super-wealthy elite living in gated communities and large masses of people living in substandard housing with inadequate infrastructure and otherwise poor conditions. Cities around the world have expanded physically as they grow in population, with increases in their surface extent, with the creation of high-rise buildings for residential and commercial use, and with development underground. Urbanization can create rapid demand for water resources management, as formerly good sources of freshwater become overused and polluted, and the volume of sewage begins to exceed manageable levels.

Government

Local government of cities takes different forms, including prominently the municipality (especially in England, in the United States, in India, and in other British colonies; legally, the municipal corporation; municipio in Spain and in Portugal, and, along with municipalidad, in most former parts of the Spanish and Portuguese empires) and the commune (in France and in Chile; or comune in Italy). The chief official of the city has the title of mayor. Whatever their true degree of political authority, the mayor typically acts as the figurehead or personification of their city. City governments have authority to make laws governing activity within cities, while their jurisdiction is generally considered subordinate (in ascending order) to state/provincial, national, and perhaps international law. This hierarchy of law is not enforced rigidly in practice—for example, in conflicts between municipal regulations and national principles such as constitutional rights and property rights. Legal conflicts and issues arise more frequently in cities than elsewhere due to the bare fact of their greater density. Modern city governments thoroughly regulate everyday life in many dimensions, including public and personal health, transport, burial, resource use and extraction, recreation, and the nature and use of buildings.
Technologies, techniques, and laws governing these areas—developed in cities—have become ubiquitous in many other contexts. Municipal officials may be appointed from a higher level of government or elected locally.

Municipal services

Cities typically provide municipal services such as education, through school systems; policing, through police departments; and firefighting, through fire departments; as well as the city's basic infrastructure. These are provided more or less routinely, in a more or less equal fashion. Responsibility for administration usually falls on the city government, though some services may be operated by a higher level of government, while others may be privately run. Armies may assume responsibility for policing cities in states of domestic turmoil such as America's King assassination riots of 1968.

Finance

The traditional basis for municipal finance is local property tax levied on real estate within the city. Local government can also collect revenue for services, or by leasing land that it owns. However, financing municipal services, as well as urban renewal and other development projects, is a perennial problem, which cities address through appeals to higher governments, arrangements with the private sector, and techniques such as privatization (selling services into the private sector), corporatization (formation of quasi-private municipally owned corporations), and financialization (packaging city assets into tradable financial public contracts and other related rights). This situation has become acute in deindustrialized cities and in cases where businesses and wealthier citizens have moved outside of city limits and therefore beyond the reach of taxation. Cities in search of ready cash increasingly resort to the municipal bond, essentially a loan with interest and a repayment date. City governments have also begun to use tax increment financing, in which a development project is financed by loans based on the future tax revenues which it is expected to yield (a simplified numerical sketch of this mechanism appears below). Under these circumstances, creditors and consequently city governments place a high importance on city credit ratings.

Governance

Governance includes government but refers to a wider domain of social control functions implemented by many actors including non-governmental organizations. The impact of globalization and the role of multinational corporations in local governments worldwide has led to a shift in perspective on urban governance, away from the "urban regime theory", in which a coalition of local interests functionally govern, toward a theory of outside economic control, widely associated in academics with the philosophy of neoliberalism. In the neoliberal model of governance, public utilities are privatized, industry is deregulated, and corporations gain the status of governing actors—as indicated by the power they wield in public-private partnerships and over business improvement districts, and in the expectation of self-regulation through corporate social responsibility. The biggest investors and real estate developers act as the city's de facto urban planners. The related concept of good governance places more emphasis on the state, with the purpose of assessing urban governments for their suitability for development assistance. The concepts of governance and good governance are especially invoked in the emergent megacities, where international organizations consider existing governments inadequate for their large populations.
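As a rough illustration of the tax-increment mechanism described above, here is a short sketch in Python. All figures are hypothetical, and real instruments involve interest schedules, assessment rules, and repayment risk that this sketch ignores.

# Hypothetical sketch of tax increment financing (TIF) arithmetic.
# A project is financed by borrowing against the *increase* in
# property-tax revenue that the project is expected to generate;
# every number below is an illustrative stand-in.

base_assessed_value = 100_000_000   # district value before the project ($)
projected_value     = 160_000_000   # expected value after development ($)
tax_rate            = 0.015         # property-tax rate (1.5% per year)
years               = 20            # life of the TIF district

# Only the increment over the frozen base repays the project debt.
annual_increment = (projected_value - base_assessed_value) * tax_rate
total_increment  = annual_increment * years

print(f"Annual tax increment: ${annual_increment:,.0f}")          # $900,000
print(f"Increment over {years} years: ${total_increment:,.0f}")   # $18,000,000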
Urban planning Urban planning, the application of forethought to city design, involves optimizing land use, transportation, utilities, and other basic systems, in order to achieve certain objectives. Urban planners and scholars have proposed overlapping theories as ideals for how plans should be formed. Planning tools, beyond the original design of the city itself, include public capital investment in infrastructure and land-use controls such as zoning. The continuous process of comprehensive planning involves identifying general objectives as well as collecting data to evaluate progress and inform future decisions. Government is legally the final authority on planning but in practice the process involves both public and private elements. The legal principle of eminent domain is used by government to divest citizens of their property in cases where its use is required for a project. Planning often involves tradeoffs—decisions in which some stand to gain and some to lose—and thus is closely connected to the prevailing political situation. The history of urban planning dates to some of the earliest known cities, especially in the Indus Valley and Mesoamerican civilizations, which built their cities on grids and apparently zoned different areas for different purposes. The effects of planning, ubiquitous in today's world, can be seen most clearly in the layout of planned communities, fully designed prior to construction, often with consideration for interlocking physical, economic, and cultural systems. Society Social structure Urban society is typically stratified. Spatially, cities are formally or informally segregated along ethnic, economic and racial lines. People living relatively close together may live, work, and play in separate areas, and associate with different people, forming ethnic or lifestyle enclaves or, in areas of concentrated poverty, ghettoes. While in the US and elsewhere poverty became associated with the inner city, in France it has become associated with the banlieues, areas of urban development which surround the city proper. Meanwhile, across Europe and North America, the racially white majority is empirically the most segregated group. Suburbs in the West, and, increasingly, gated communities and other forms of "privatopia" around the world, allow local elites to self-segregate into secure and exclusive neighborhoods. Landless urban workers, contrasted with peasants and known as the proletariat, form a growing stratum of society in the age of urbanization. In Marxist doctrine, the proletariat will inevitably revolt against the bourgeoisie as their ranks swell with disenfranchised and disaffected people lacking all stake in the status quo. The global urban proletariat of today, however, generally lacks the status as factory workers which in the nineteenth century provided access to the means of production. Economics Historically, cities rely on rural areas for intensive farming to yield surplus crops, in exchange for which they provide money, political administration, manufactured goods, and culture. Urban economics tends to analyze larger agglomerations, stretching beyond city limits, in order to reach a more complete understanding of the local labor market. As hubs of trade cities have long been home to retail commerce and consumption through the interface of shopping. 
In the 20th century, department stores using new techniques of advertising, public relations, decoration, and design transformed urban shopping areas into fantasy worlds encouraging self-expression and escape through consumerism. In general, the density of cities expedites commerce and facilitates knowledge spillovers, helping people and firms exchange information and generate new ideas. A thicker labor market allows for better skill matching between firms and individuals. Population density also enables the sharing of common infrastructure and production facilities; however, in very dense cities, increased crowding and waiting times may lead to some negative effects. Although manufacturing fueled the growth of cities, many now rely on a tertiary or service economy. The services in question range from tourism, hospitality, entertainment, housekeeping, and prostitution to grey-collar work in law, finance, and administration. According to a scientific model of cities by professor Geoffrey West, with the doubling of a city's size, salaries per capita will generally increase by 15% (a sketch of the corresponding scaling law appears below).

Culture and communications

Cities are typically hubs for education and the arts, supporting universities, museums, temples, and other cultural institutions. They feature impressive displays of architecture ranging from small to enormous and ornate to brutal; skyscrapers, providing thousands of offices or homes within a small footprint, and visible from miles away, have become iconic urban features. Cultural elites tend to live in cities, bound together by shared cultural capital, and themselves playing some role in governance. By virtue of their status as centers of culture and literacy, cities can be described as the locus of civilization, human history, and social change.

Density makes for effective mass communication and transmission of news, through heralds, printed proclamations, newspapers, and digital media. These communication networks, though still using cities as hubs, penetrate extensively into all populated areas. In the age of rapid communication and transportation, commentators have described urban culture as nearly ubiquitous or as no longer meaningful.

Today, a city's promotion of its cultural activities dovetails with place branding and city marketing, public diplomacy techniques used to inform development strategy; to attract businesses, investors, residents, and tourists; and to create a shared identity and sense of place within the metropolitan area. Physical inscriptions, plaques, and monuments on display physically transmit a historical context for urban places. Some cities, such as Jerusalem, Mecca, and Rome, have indelible religious status and for hundreds of years have attracted pilgrims. Patriotic tourists visit Agra to see the Taj Mahal, or New York City to visit the World Trade Center. Elvis lovers visit Memphis to pay their respects at Graceland. Place brands (which include place satisfaction and place loyalty) have great economic value (comparable to the value of commodity brands) because of their influence on the decision-making process of people thinking about doing business in—"purchasing" (the brand of)—a city. Bread and circuses, among other forms of cultural appeal, attract and entertain the masses. Sports also play a major role in city branding and local identity formation. Cities go to considerable lengths in competing to host the Olympic Games, which bring global attention and tourism.
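Taking the quoted 15%-per-doubling figure at face value, the claim corresponds to a superlinear power law. This is a sketch of the scaling arithmetic implied by that sentence, not a statement of West's full model: if per-capita salary scales as

\[
s(N) \propto N^{\alpha}, \qquad \frac{s(2N)}{s(N)} = 2^{\alpha} = 1.15 \;\Rightarrow\; \alpha = \log_2 1.15 \approx 0.20,
\]

then the total salaries paid in a city of population N grow as N^{1+α} ≈ N^{1.20}, i.e. faster than linearly in city size.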
Warfare

Cities play a crucial strategic role in warfare due to their economic, demographic, symbolic, and political centrality. For the same reasons, they are targets in asymmetric warfare. Many cities throughout history were founded under military auspices, a great many have incorporated fortifications, and military principles continue to influence urban design. Indeed, war may have served as the social rationale and economic basis for the very earliest cities.

Powers engaged in geopolitical conflict have established fortified settlements as part of military strategies, as in the case of garrison towns, America's Strategic Hamlet Program during the Vietnam War, and Israeli settlements in Palestine. While occupying the Philippines, the US Army ordered local people concentrated into cities and towns, in order to isolate committed insurgents and battle freely against them in the countryside.

During World War II, national governments on occasion declared certain cities open, effectively surrendering them to an advancing enemy in order to avoid damage and bloodshed. Urban warfare proved decisive, however, in the Battle of Stalingrad, where Soviet forces repulsed German occupiers, with extreme casualties and destruction. In an era of low-intensity conflict and rapid urbanization, cities have become sites of long-term conflict waged both by foreign occupiers and by local governments against insurgency. Such warfare, known as counterinsurgency, involves techniques of surveillance and psychological warfare as well as close combat, and functionally extends modern urban crime prevention, which already uses concepts such as defensible space.

Although capture is the more common objective, warfare has in some cases spelt complete destruction for a city. Mesopotamian tablets and ruins attest to such destruction, as does the Latin motto Carthago delenda est. Since the atomic bombings of Hiroshima and Nagasaki and throughout the Cold War, nuclear strategists continued to contemplate the use of "countervalue" targeting: crippling an enemy by annihilating its valuable cities, rather than aiming primarily at its military forces.

Climate change

Infrastructure

Urban infrastructure involves various physical networks and spaces necessary for transportation, water use, energy, recreation, and public functions. Infrastructure carries a high initial cost in fixed capital but lower marginal costs, and thus positive economies of scale (see the illustrative cost formula below). Because of the higher barriers to entry, these networks have been classified as natural monopolies, meaning that economic logic favors control of each network by a single organization, public or private.

Infrastructure in general plays a vital role in a city's capacity for economic activity and expansion, underpinning the very survival of the city's inhabitants, as well as technological, commercial, industrial, and social activities. Structurally, many infrastructure systems take the form of networks with redundant links and multiple pathways, so that the system as a whole continues to operate even if parts of it fail. The particulars of a city's infrastructure systems have historical path dependence because new development must build from what exists already.

Megaprojects such as the construction of airports, power plants, and railways require large upfront investments and thus tend to require funding from national government or the private sector. Privatization may also extend to all levels of infrastructure construction and maintenance.
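The economies-of-scale point can be stated compactly. In this illustrative textbook formula, the symbols F and c are generic stand-ins rather than data for any particular network: with a large fixed cost F and a small constant marginal cost c, the average cost of serving q users is

\[
AC(q) = \frac{F}{q} + c,
\]

which declines monotonically as q grows. A single network serving everyone is therefore cheaper per user than several parallel networks, which is the textbook condition for a natural monopoly.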
Urban infrastructure ideally serves all residents equally but in practice may prove uneven—with, in some cities, clear first-class and second-class alternatives.

Utilities

Public utilities (literally, useful things with general availability) include basic and essential infrastructure networks, chiefly concerned with the supply of water, electricity, and telecommunications capability to the populace. Sanitation, necessary for good health in crowded conditions, requires water supply and waste management as well as individual hygiene. Urban water systems include principally a water supply network and a network (sewerage system) for sewage and stormwater. Historically, either local governments or private companies have administered urban water supply, with a tendency toward government water supply in the 20th century and a tendency toward private operation at the turn of the twenty-first. The market for private water services is dominated by two French companies, Veolia Water (formerly Vivendi) and Engie (formerly Suez), said to hold 70% of all water contracts worldwide.

Modern urban life relies heavily on the energy transmitted through electricity for the operation of electric machines (from household appliances to industrial machines to now-ubiquitous electronic systems used in communications, business, and government) and for traffic lights, street lights, and indoor lighting. Cities rely to a lesser extent on hydrocarbon fuels, such as gasoline and natural gas, for transportation, heating, and cooking. Telecommunications infrastructure such as telephone lines and coaxial cables also traverses cities, forming dense networks for mass and point-to-point communications.

Transportation

Because cities rely on specialization and an economic system based on wage labour, their inhabitants must have the ability to regularly travel between home, work, commerce, and entertainment. City-dwellers travel on foot or by wheel on roads and walkways, or use special rapid transit systems based on underground, overground, and elevated rail. Cities also rely on long-distance transportation (truck, rail, and airplane) for economic connections with other cities and rural areas.

City streets historically were the domain of horses and their riders and pedestrians, who only sometimes had sidewalks and special walking areas reserved for them. In the West, bicycles (or velocipedes), efficient human-powered machines for short- and medium-distance travel, enjoyed a period of popularity at the beginning of the twentieth century before the rise of automobiles. Soon after, they gained a more lasting foothold in Asian and African cities under European influence. In Western cities, industrializing, expanding, and electrifying public transit systems, and especially streetcars, enabled urban expansion as new residential neighborhoods sprang up along transit lines and workers rode to and from work downtown. Since the mid-20th century, cities have relied heavily on motor vehicle transportation, with major implications for their layout, environment, and aesthetics. (This transformation occurred most dramatically in the US—where corporate and governmental policies favored automobile transport systems—and to a lesser extent in Europe.) The rise of personal cars accompanied the expansion of urban economic areas into much larger metropolises, subsequently creating ubiquitous traffic issues with accompanying construction of new highways, wider streets, and alternative walkways for pedestrians.
However, severe traffic jams still occur regularly in cities around the world, as private car ownership and urbanization continue to increase, overwhelming existing urban street networks. The urban bus system, the world's most common form of public transport, uses a network of scheduled routes to move people through the city, alongside cars, on the roads. Economic function itself also became more decentralized as concentration became impractical and employers relocated to more car-friendly locations (including edge cities). Some cities have introduced bus rapid transit systems which include exclusive bus lanes and other methods for prioritizing bus traffic over private cars. Many big American cities still operate conventional public transit by rail, as exemplified by the ever-popular New York City Subway system. Rapid transit is widely used in Europe and has increased in Latin America and Asia. Walking and cycling ("non-motorized transport") enjoy increasing favor (more pedestrian zones and bike lanes) in American and Asian urban transportation planning, under the influence of such trends as the Healthy Cities movement, the drive for sustainable development, and the idea of a carfree city. Techniques such as road space rationing and road use charges have been introduced to limit urban car traffic. Housing Housing of residents presents one of the major challenges every city must face. Adequate housing entails not only physical shelters but also the physical systems necessary to sustain life and economic activity. Home ownership represents status and a modicum of economic security, compared to renting which may consume much of the income of low-wage urban workers. Homelessness, or lack of housing, is a challenge currently faced by millions of people in countries rich and poor. Ecology Urban ecosystems, influenced as they are by the density of human buildings and activities, differ considerably from those of their rural surroundings. Anthropogenic buildings and waste, as well as cultivation in gardens, create physical and chemical environments which have no equivalents in wilderness, in some cases enabling exceptional biodiversity. They provide homes not only for immigrant humans but also for immigrant plants, bringing about interactions between species which never previously encountered each other. They introduce frequent disturbances (construction, walking) to plant and animal habitats, creating opportunities for recolonization and thus favoring young ecosystems with r-selected species dominant. On the whole, urban ecosystems are less complex and productive than others, due to the diminished absolute amount of biological interactions. Typical urban fauna include insects (especially ants), rodents (mice, rats), and birds, as well as cats and dogs (domesticated and feral). Large predators are scarce. Cities generate considerable ecological footprints, locally and at longer distances, due to concentrated populations and technological activities. From one perspective, cities are not ecologically sustainable due to their resource needs. From another, proper management may be able to ameliorate a city's ill effects. Air pollution arises from various forms of combustion, including fireplaces, wood or coal-burning stoves, other heating systems, and internal combustion engines. Industrialized cities, and today third-world megacities, are notorious for veils of smog (industrial haze) which envelop them, posing a chronic threat to the health of their millions of inhabitants. 
Urban soil contains higher concentrations of heavy metals (especially lead, copper, and nickel) and has lower pH than soil in comparable wilderness. Modern cities are known for creating their own microclimates, due to concrete, asphalt, and other artificial surfaces, which heat up in sunlight and channel rainwater into underground ducts. The temperature in New York City exceeds nearby rural temperatures by an average of 2–3 °C, and at times 5–10 °C differences have been recorded. This effect varies nonlinearly with population changes (independently of the city's physical size). Aerial particulates increase rainfall by 5–10%. Thus, urban areas experience unique climates, with earlier flowering and later leaf dropping than in the nearby countryside.

Poor and working-class people face disproportionate exposure to environmental risks (known as environmental racism when intersecting also with racial segregation). For example, within the urban microclimate, less-vegetated poor neighborhoods bear more of the heat (but have fewer means of coping with it).

One of the main methods of improving urban ecology is including in the cities more urban green space: parks, gardens, lawns, and trees. These areas improve the health and well-being of the human, animal, and plant populations of the cities. Well-maintained urban trees can provide many social, ecological, and physical benefits to the residents of the city.

A study published in Nature's Scientific Reports journal in 2019 found that people who spent at least two hours per week in nature were 23 percent more likely to be satisfied with their life and were 59 percent more likely to be in good health than those who had zero exposure. The study used data from almost 20,000 people in the UK. Benefits increased for up to 300 minutes of exposure. The benefits applied to men and women of all ages, as well as across different ethnicities, socioeconomic statuses, and even those with long-term illnesses and disabilities. People who did not get at least two hours, even if they surpassed an hour per week, did not get the benefits. The study is the latest addition to a compelling body of evidence for the health benefits of nature. Many doctors already give nature prescriptions to their patients. The study didn't count time spent in a person's own yard or garden as time in nature, but the majority of nature visits in the study took place within two miles from home. "Even visiting local urban green spaces seems to be a good thing," Dr. White said in a press release. "Two hours a week is hopefully a realistic target for many people, especially given that it can be spread over an entire week to get the benefit."

World city system

As the world becomes more closely linked through economics, politics, technology, and culture (a process called globalization), cities have come to play a leading role in transnational affairs, exceeding the limitations of international relations conducted by national governments. This phenomenon, resurgent today, can be traced back to the Silk Road, Phoenicia, and the Greek city-states, through the Hanseatic League and other alliances of cities. Today the information economy based on high-speed internet infrastructure enables instantaneous telecommunication around the world, effectively eliminating the distance between cities for the purposes of the international markets and other high-level elements of the world economy, as well as personal communications and mass media.
Global city
A global city, also known as a world city, is a prominent centre of trade, banking, finance, innovation, and markets. Saskia Sassen used the term "global city" in her 1991 work, The Global City: New York, London, Tokyo, to refer to a city's power, status, and cosmopolitanism, rather than to its size. Following this view of cities, it is possible to rank the world's cities hierarchically. Global cities form the capstone of the global hierarchy, exerting command and control through their economic and political influence. Global cities may have reached their status due to an early transition to post-industrialism or through inertia which has enabled them to maintain their dominance from the industrial era. This type of ranking exemplifies an emerging discourse in which cities, considered variations on the same ideal type, must compete with each other globally to achieve prosperity. Critics of the notion point to the different realms of power and interchange. The term "global city" is heavily influenced by economic factors and, thus, may not account for places that are otherwise significant. Paul James, for example, argues that the term is "reductive and skewed" in its focus on financial systems.

Multinational corporations and banks make their headquarters in global cities and conduct much of their business within this context. American firms dominate the international markets for law and engineering and maintain branches in the biggest foreign global cities. Global cities feature concentrations of extremely wealthy and extremely poor people. Their economies are lubricated by their capacity (limited by the national government's immigration policy, which functionally defines the supply side of the labor market) to recruit low- and high-skilled immigrant workers from poorer areas. More and more cities today draw on this globally available labor force.

Transnational activity
Cities increasingly participate in world political activities independently of their enclosing nation-states. Early examples of this phenomenon are the sister city relationship and the promotion of multi-level governance within the European Union as a technique for European integration. Cities including Hamburg, Prague, Amsterdam, The Hague, and the City of London maintain their own embassies to the European Union at Brussels. New urban dwellers are increasingly transmigrants, keeping one foot each (through telecommunications if not travel) in their old and their new homes.

Global governance
Cities participate in global governance by various means, including membership in global networks which transmit norms and regulations. At the general, global level, United Cities and Local Governments (UCLG) is a significant umbrella organization for cities; regionally and nationally, Eurocities, the Asian Network of Major Cities 21, the Federation of Canadian Municipalities, the National League of Cities, and the United States Conference of Mayors play similar roles. UCLG took responsibility for creating Agenda 21 for culture, a program for cultural policies promoting sustainable development, and has organized various conferences and reports for its furtherance. Networks have become especially prevalent in the arena of environmentalism, and specifically climate change, following the adoption of Agenda 21.
Environmental city networks include the C40 Cities Climate Leadership Group, the United Nations Global Compact Cities Programme, the Carbon Neutral Cities Alliance (CNCA), the Covenant of Mayors and the Compact of Mayors, ICLEI – Local Governments for Sustainability, and the Transition Towns network. Cities gain world political status as meeting places for advocacy groups, non-governmental organizations, lobbyists, educational institutions, intelligence agencies, military contractors, information technology firms, and other groups with a stake in world policymaking. They are consequently also sites for symbolic protest.

United Nations System
The United Nations System has been involved in a series of events and declarations dealing with the development of cities during this period of rapid urbanization. The Habitat I conference in 1976 adopted the "Vancouver Declaration on Human Settlements", which identifies urban management as a fundamental aspect of development and establishes various principles for maintaining urban habitats. Citing the Vancouver Declaration, the UN General Assembly in December 1977 authorized the United Nations Commission on Human Settlements and the Habitat Centre for Human Settlements, intended to coordinate UN activities related to housing and settlements. The 1992 Earth Summit in Rio de Janeiro resulted in a set of international agreements including Agenda 21, which establishes principles and plans for sustainable development. The Habitat II conference in 1996 called for cities to play a leading role in this program, which subsequently advanced the Millennium Development Goals and Sustainable Development Goals. In January 2002 the UN Commission on Human Settlements became an umbrella agency called the United Nations Human Settlements Programme, or UN-Habitat, a member of the United Nations Development Group.

The Habitat III conference of 2016 focused on implementing these goals under the banner of a "New Urban Agenda". The four mechanisms envisioned for effecting the New Urban Agenda are (1) national policies promoting integrated sustainable development, (2) stronger urban governance, (3) long-term integrated urban and territorial planning, and (4) effective financing frameworks. Just before this conference, the European Union concurrently approved an "Urban Agenda for the European Union", known as the Pact of Amsterdam.

UN-Habitat coordinates the UN urban agenda, working with the UN Environment Programme, the UN Development Programme, the Office of the High Commissioner for Human Rights, the World Health Organization, and the World Bank. The World Bank, a UN specialized agency, has been a primary force in promoting the Habitat conferences, and since the first Habitat conference has used their declarations as a framework for issuing loans for urban infrastructure. The bank's structural adjustment programs contributed to urbanization in the Third World by creating incentives to move to cities. The World Bank and UN-Habitat in 1999 jointly established the Cities Alliance (based at the World Bank headquarters in Washington, D.C.) to guide policymaking, knowledge sharing, and grant distribution around the issue of urban poverty. (UN-Habitat plays an advisory role in evaluating the quality of a locality's governance.) The Bank's policies have tended to focus on bolstering real estate markets through credit and technical assistance.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) has increasingly focused on cities as key sites for influencing cultural governance. It has developed various city networks, including the International Coalition of Cities against Racism and the Creative Cities Network. UNESCO's capacity to select World Heritage Sites gives the organization significant influence over cultural capital, tourism, and historic preservation funding.

Representation in culture
Cities figure prominently in traditional Western culture, appearing in the Bible in both evil and holy forms, symbolized by Babylon and Jerusalem. Cain and Nimrod are the first city builders in the Book of Genesis. In Sumerian mythology Gilgamesh built the walls of Uruk. Cities can be perceived in terms of extremes or opposites: at once liberating and oppressive, wealthy and poor, organized and chaotic. The name anti-urbanism refers to various types of ideological opposition to cities, whether because of their culture or their political relationship with the country. Such opposition may result from identification of cities with oppression and the ruling elite. This and other political ideologies strongly influence narratives and themes in discourse about cities. In turn, cities symbolize their home societies.

Writers, painters, and filmmakers have produced innumerable works of art concerning the urban experience. Classical and medieval literature includes a genre of descriptiones which treat of city features and history. Modern authors such as Charles Dickens and James Joyce are famous for evocative descriptions of their home cities. Fritz Lang conceived the idea for his influential 1927 film Metropolis while visiting Times Square and marveling at the nighttime neon lighting. Other early cinematic representations of cities in the twentieth century generally depicted them as technologically efficient spaces with smoothly functioning systems of automobile transport. By the 1960s, however, traffic congestion began to appear in such films as The Fast Lady (1962) and Playtime (1967). Literature, film, and other forms of popular culture have supplied visions of future cities both utopian and dystopian. The prospect of expanding, communicating, and increasingly interdependent world cities has given rise to images such as Nylonkong (New York, London, Hong Kong) and visions of a single world-encompassing ecumenopolis.

See also
Lists of cities
List of adjectivals and demonyms for cities
Lost city
Metropolis
Compact city
Megacity
Settlement hierarchy
Urbanization
Chris Morris (satirist)
Christopher J Morris (born 15 June 1962) is an English comedian, radio presenter, actor, and filmmaker. Known for his deadpan, dark humour, surrealism, and controversial subject matter, he has been praised by the British Film Institute for his "uncompromising, moralistic drive". In the early 1990s, Morris teamed up with his radio producer Armando Iannucci to create On the Hour, a satire of news programmes. This was expanded into a television spin-off, The Day Today, which launched the career of comedian Steve Coogan and has since been hailed as one of the most important satirical shows of the 1990s.

Morris further developed the satirical news format with Brass Eye, which lampooned celebrities whilst focusing on themes such as crime and drugs. For many, the apotheosis of Morris' career was a Brass Eye special, which dealt with the moral panic surrounding paedophilia. It quickly became one of the most complained-about programmes in British television history, leading the Daily Mail to describe him as "the most loathed man on TV". Meanwhile, Morris' postmodern sketch comedy and ambient music radio show Blue Jam, which had seen controversy similar to Brass Eye, helped him to gain a cult following. Blue Jam was adapted into the TV series Jam, which some hailed as "the most radical and original television programme broadcast in years", and he went on to win the BAFTA Award for Best Short Film after expanding a Blue Jam sketch into My Wrongs 8245–8249 & 117, which starred Paddy Considine. This was followed by Nathan Barley, a sitcom written in collaboration with a then little-known Charlie Brooker that satirised hipsters, which had low ratings but found success upon its DVD release. Morris followed this by joining the cast of the sitcom The IT Crowd, his first project in which he did not have writing or producing input.

In 2010, Morris directed his first feature-length film, Four Lions, which satirised Islamic terrorism through a group of inept British Muslims. Reception of the film was largely positive, earning Morris his second BAFTA Film Award, this time for Outstanding Debut. Since 2012, he has directed four episodes of Iannucci's political comedy Veep and appeared onscreen in The Double and Stewart Lee's Comedy Vehicle. His second feature-length film, The Day Shall Come, was released in 2019.

Early life
Christopher J Morris was born on 15 June 1962 in Colchester, Essex, the son of Rosemary Parrington and Paul Michael Morris. His father was a GP. Morris has a large red birthmark almost completely covering the left side of his face and neck, which he disguises with makeup when acting. He grew up in a Victorian farmhouse in the village of Buckden, Cambridgeshire, which he described as "very dull". He has two younger brothers, including theatre director Tom Morris. From an early age, he was a prankster and had a passion for radio. From the age of 10, he was educated at the independent Jesuit boarding school Stonyhurst College in Stonyhurst, Lancashire. He went on to study zoology at the University of Bristol, where he gained a 2:1.

Career
Radio
On graduating, Morris pursued a career as a musician in various bands, for which he played the bass guitar. He then went to work for Radio West, a local radio station in Bristol. He then took up a news traineeship with BBC Radio Cambridgeshire, where he took advantage of access to editing and recording equipment to create elaborate spoofs and parodies.
He also spent time in early 1987 hosting a 2–4pm afternoon show, and finally ended up presenting the Saturday morning show I.T. In July 1987, he moved on to BBC Radio Bristol to present his own show, No Known Cure, broadcast on Saturday and Sunday mornings. The show was surreal and satirical, with odd interviews conducted with unsuspecting members of the public. He was fired from Bristol in 1990 after "talking over the news bulletins and making silly noises". In 1988 he also joined, from its launch, Greater London Radio (GLR). He presented The Chris Morris Show on GLR until 1993, when the show was suspended after a broadcast sketch involving a child "outing" celebrities.

In 1991, Morris joined Armando Iannucci's spoof news project On the Hour. Broadcast on BBC Radio 4, it saw him work alongside Iannucci, Steve Coogan, Stewart Lee, Richard Herring and Rebecca Front. In 1992, Morris hosted Danny Baker's Radio 5 Morning Edition show for a week whilst Baker was on holiday. In 1994, Morris began a weekly evening show, the Chris Morris Music Show, on BBC Radio 1 alongside Peter Baynham and 'man with a mobile phone' Paul Garner. In the shows, Morris perfected the spoof interview style that would become a central component of his Brass Eye programme. In the same year, Morris teamed up with Peter Cook (as Sir Arthur Streeb-Greebling) in a series of improvised conversations for BBC Radio 3 entitled Why Bother?.

Move into television and film
In 1994, a BBC 2 television series based on On the Hour was broadcast under the name The Day Today. The Day Today made a star of Morris, and marked the television debut of Steve Coogan's Alan Partridge character. The programme ended on a high after just one series, with Morris winning the 1994 British Comedy Award for Best Newcomer for his lead role as the Paxmanesque news anchor. In 1996, Morris appeared on the daytime programme The Time, The Place, posing as an academic, Thurston Lowe, in a discussion entitled "Are British Men Lousy Lovers?", but was found out when a producer alerted the show's host, John Stapleton.

In 1997, the black humour which had featured in On the Hour and The Day Today became more prominent in Brass Eye, another spoof current affairs television documentary, shown on Channel 4. All three series satirised and exaggerated the issues expected of news shows. The second episode of Brass Eye, for example, satirised drugs and the political rhetoric surrounding them. To help convey the satire, Morris invented a fictional drug by the name of "cake". In the episode, British celebrities and politicians describe the supposed symptoms in detail; David Amess mentioned the fictional drug in Parliament. In 2001, Morris satirised the moral panic regarding paedophilia in the most controversial episode of Brass Eye, "Paedogeddon". Channel 4 apologised for the episode after receiving criticism from tabloids and around 3,000 complaints from viewers, which, at the time, was the most complaints received for an episode of British television.

From 1997 to 1999, Morris created Blue Jam for BBC Radio 1, a surreal, taboo-breaking radio show set to an ambient soundtrack. In 2000, this was followed by Jam, a television reworking. Morris released a 'remix' version of this, entitled Jaaaaam. In 2002, Morris ventured into film, directing the short My Wrongs 8245–8249 & 117, adapted from a Blue Jam monologue about a man led astray by a sinister talking dog. It was the first film project of Warp Films, a branch of Warp Records. In 2002 it won the BAFTA for Best Short Film.
In 2005 Morris worked on a sitcom entitled Nathan Barley, based on the character created by Charlie Brooker for his website TVGoHome (Morris had contributed to TVGoHome on occasion, under the pseudonym 'Sid Peach'). Co-written by Brooker and Morris, the series was broadcast on Channel 4 in early 2005.

The IT Crowd and Comedy Vehicle
Morris was a cast member in The IT Crowd, a Channel 4 sitcom which focused on the information technology department of the fictional company Reynholm Industries. The series was written and directed by Graham Linehan (with whom Morris collaborated on The Day Today, Brass Eye and Jam) and produced by Ash Atalla. Morris played Denholm Reynholm, the eccentric managing director of the company. This marked the first time Morris had acted in a substantial role in a project which he had not developed himself. Morris' character appeared to leave the series during episode two of the second series, and made a brief return in the first episode of the third series.

In November 2007, Morris wrote an article for The Observer in response to Ronan Bennett's article published six days earlier in The Guardian. Bennett's article, "Shame on us", accused the novelist Martin Amis of racism. Morris' response, "The absurd world of Martin Amis", was also highly critical of Amis; although he did not accede to Bennett's accusation of racism, Morris likened Amis to the Muslim cleric Abu Hamza (who was jailed for inciting racial hatred in 2006), suggesting that both men employ "mock erudition, vitriol and decontextualised quotes from the Qu'ran" to incite hatred.

Morris served as script editor for the 2009 series Stewart Lee's Comedy Vehicle, working with former colleagues Stewart Lee, Kevin Eldon and Armando Iannucci. He maintained this role for the second (2011) and third (2014) series, also appearing as a mock interviewer dubbed the "hostile interrogator" in the third and fourth series.

Four Lions, Veep, and other appearances
Morris completed his debut feature film Four Lions in late 2009, a satire based on a group of Islamist terrorists in Sheffield. It premiered at the Sundance Film Festival in January 2010 and was short-listed for the festival's World Cinema Narrative prize. The film (working title Boilerhouse) was picked up by Film4. Morris told The Sunday Times that the film sought to do for Islamic terrorism what Dad's Army, the classic BBC comedy, did for the Nazis by showing them as "scary but also ridiculous".

In 2012, Morris directed the seventh and penultimate episode of the first season of Veep, an Armando Iannucci-devised American version of The Thick of It. In 2013, he returned to direct two episodes for the second season of Veep, and a further episode for season three in 2014. In 2013, Morris appeared briefly in Richard Ayoade's The Double, a black comedy film based on the Fyodor Dostoyevsky novella of the same name. Morris had previously worked with Ayoade on Nathan Barley and The IT Crowd.

In February 2014, Morris made a surprise appearance at the beginning of a Stewart Lee live show, introducing the comedian with fictional anecdotes about their work together. The following month, Morris appeared in the third series of Stewart Lee's Comedy Vehicle as a "hostile interrogator", a role previously occupied by Armando Iannucci. In December 2014, it was announced that a short radio collaboration with Noel Fielding and Richard Ayoade would be broadcast on BBC Radio 6 Music. According to Fielding, the work had been in progress since around 2006.
However, in January 2015 it was decided, 'in consultation with [Morris]', that the project was not yet complete, and so the intended broadcast did not go ahead.

The Day Shall Come
A statement released by Film4 in February 2016 made reference to funding what would be Morris' second feature film. In November 2017 it was reported that Morris had shot the movie, starring Anna Kendrick, in the Dominican Republic, but the title was not made public. It was later reported in January 2018 that Jim Gaffigan and Rupert Friend had joined the cast of the still-untitled film, and that the plot would revolve around an FBI hostage situation gone wrong. The completed film, titled The Day Shall Come, had its world premiere at South by Southwest on 11 March 2019.

Music
Morris often co-writes and performs incidental music for his television shows, notably with Jam and the 'extended remix' version, Jaaaaam. In the early 1990s Morris contributed a Pixies parody track entitled "Motherbanger" to a flexi-disc given away with an edition of the music magazine Select. Morris supplied sketches for British band Saint Etienne's 1993 single "You're in a Bad Way" (the sketch 'Spongbake' appears at the end of the fourth track on the CD single). In 2000, he collaborated by mail with Amon Tobin to create the track "Bad Sex", which was released as a B-side on the Tobin single "Slowly". British band Stereolab's song "Nothing to Do with Me", from their 2001 album Sound-Dust, featured various lines from Chris Morris sketches as lyrics.

Style
Ramsey Ess of Vulture described Morris' comedy style as "crass" and "shocking", but noted an "underlying morality" and integrity, as well as humour being Morris' priority.

Recognition
In 2003, Morris was listed in The Observer as one of the 50 funniest acts in British comedy. In 2005, Channel 4 aired a show called The Comedian's Comedian, in which foremost writers and performers of comedy ranked their 50 favourite acts; Morris was at number eleven. Morris won the BAFTA for Outstanding Debut with his film Four Lions. Adeel Akhtar and Nigel Lindsay collected the award in his absence; Lindsay stated that Morris had sent him a text message before they collected the award reading, 'Doused in petrol, Zippo at the ready'. In June 2012 Morris was placed at number 16 in the Top 100 People in UK Comedy. In 2010, a biography, Disgusting Bliss: The Brass Eye of Chris Morris, was published. Written by Lucian Randall, the book depicted Morris as "brilliant but uncompromising", and a "frantic-minded perfectionist". In November 2014, a three-hour retrospective of Morris' radio career was broadcast on BBC Radio 4 Extra under the title 'Raw Meat Radio', presented by Mary Anne Hobbs and featuring interviews with Armando Iannucci, Peter Baynham, Paul Garner, and others.

Awards
Morris won the Best TV Comedy Newcomer award from the British Comedy Awards in 1994 for his performance in The Day Today. He has won two BAFTA awards: the BAFTA Award for Best Short Film in 2002 for My Wrongs 8245–8249 & 117, and the BAFTA Award for Outstanding Debut by a British director, writer or producer in 2011 for Four Lions.

Personal life
Morris and his wife, actress-turned-literary agent Jo Unwin, live in the Brixton district of London. The pair met in 1984 at the Edinburgh Festival, when he was playing bass guitar for the Cambridge Footlights Revue and she was in a comedy troupe called the Millies. They have two sons, Charles and Frederick, both of whom were born in Lambeth in south London.
Giving very few interviews and avoiding all social media, Morris has been described as a recluse. He can be heard in a 2008 podcast for CERN, being taken on a tour of the facility by the physicist Brian Cox. In 2010, he made numerous media appearances to promote and support Four Lions in both the UK and US, at one point appearing as a guest on Late Night with Jimmy Fallon. In 2019, two lengthy interviews conducted with Morris on The Adam Buxton Podcast were released in the run-up to the release of Morris' film The Day Shall Come.

Works
Various works at BBC Radio Cambridgeshire (1986–1987) (presenter)
No Known Cure (July 1987 – March 1990, BBC Radio Bristol) (presenter)
Chris Morris (1988–1993, BBC GLR) (presenter)
Morning Edition (July 1990, BBC Radio 5) (guest presenter)
The Chris Morris Christmas Show (25 December 1990, BBC Radio 1)
On the Hour (1991–1992, BBC Radio 4) (co-writer, performer)
It's Only TV (September 1992, LWT) (unbroadcast pilot)
Why Bother? (1994, BBC Radio 3) (performer, editor)
The Chris Morris Music Show (1994, BBC Radio 1) (presenter)
Blue Jam (1997–1999, BBC Radio 1) (writer, director, performer, editor)
Second Class Male/Time To Go (1999, newspaper column for The Observer)
The Smokehammer (2002, website)
Absolute Atrocity Special (2002, newspaper pullout for The Observer)
Cucurbitales
The Cucurbitales are an order of flowering plants, included in the rosid group of dicotyledons. The order is mostly tropical, with a limited presence in subtropical and temperate regions. It includes shrubs and trees, together with many herbs and climbers. One major characteristic of the Cucurbitales is the presence of unisexual flowers, mostly pentacyclic, with thick pointed petals (whenever present). Pollination is usually performed by insects, but wind pollination is also present (in Coriariaceae and Datiscaceae).

The order consists of roughly 2600 species in eight families. The largest families are Begoniaceae (the begonia family), with around 1500 species, and Cucurbitaceae (the gourd family), with around 900 species. These two families include the only economically important plants. Specifically, the Cucurbitaceae include some food species, such as squash and pumpkin (both from Cucurbita), watermelon (Citrullus vulgaris), and cucumber and melons (Cucumis). The Begoniaceae are known for their horticultural species, of which there are over 130, with many more varieties.

Overview
The Cucurbitales are an order of plants with a cosmopolitan distribution, particularly diverse in the tropics. Most are herbs, climbing herbs, woody lianas or shrubs, but some genera include canopy-forming evergreen lauroid trees. Members of the Cucurbitales form an important component of lowland to montane tropical forest, where they are strongly represented in terms of the number of species. Although the total number of species in the order is not known with certainty, conservative estimates indicate about 2600 species worldwide, distributed in 109 genera. Compared to other flowering plant orders, the taxonomy is poorly understood due to their great diversity, difficulty in identification, and limited study.

The order Cucurbitales in the eurosid I clade comprises almost 2600 species in 109 or 110 genera in eight families, tropical and temperate, of very different sizes, morphology, and ecology. It is a case of divergent evolution; there is also convergent evolution with unrelated groups, where ecological or physical drivers push toward similar solutions, including analogous structures. Some species are trees that have similar foliage to the true laurels due to convergent evolution.

Speciation patterns in the Cucurbitales have produced a high number of species. They have a pantropical distribution with centers of diversity in Africa, South America, and Southeast Asia. They most likely originated in West Gondwana 67–107 million years ago, so the oldest split could relate to the break-up of Gondwana in the middle Eocene to late Oligocene, 45–24 million years ago. The group reached its current distribution by multiple intercontinental dispersal events. One factor was aridification; other groups responded to favorable climatic periods and expanded across the available habitat, occurring as opportunistic species across wide distributions, while other groups diverged over long periods within isolated areas.

The Cucurbitales comprise the families Apodanthaceae, Anisophylleaceae, Begoniaceae, Coriariaceae, Corynocarpaceae, Cucurbitaceae, Tetramelaceae, and Datiscaceae. Some of the synapomorphies of the order are: leaves in spirals with palmate secondary veins, a valvate calyx or perianth, and an elevated stomatal calyx/perianth bearing separate styles. The two whorls are similar in texture.
Tetrameles nudiflora is a tree of immense height and width; Tetramelaceae, Anisophylleaceae, and Corynocarpaceae are tall canopy trees in temperate and tropical forests. The genus Dendrosicyos, whose only species is the cucumber tree, is adapted to the arid semidesert island of Socotra. Deciduous perennial Cucurbitales lose all of their leaves for part of the year, depending on variations in rainfall. The leaf loss coincides with the dry season in tropical, subtropical and arid regions. In temperate or polar climates, the dry season is due to the inability of the plant to absorb water available in the form of ice. Apodanthaceae are obligate endoparasites that emerge only once a year in the form of small flowers that develop into small berries; however, taxonomists have not agreed on the exact placement of this family within the Cucurbitales. Over half of the known members of this order belong to the greatly diverse begonia family Begoniaceae, with around 1500 species in two genera.

Before modern DNA-based molecular classifications, some Cucurbitales species were assigned to orders as diverse as Ranunculales, Malpighiales, Violales, and Rafflesiales. Early molecular studies revealed several surprises, such as the non-monophyly of the traditional Datiscaceae, including Tetrameles and Octomeles, but the exact relationships among the families remain unclear. The lack of knowledge about the order in general is due to many species being found in countries with limited economic means or unstable political environments, factors unsuitable for plant collection and detailed study. Thus the vast majority of species remain poorly determined, and a future increase in the number of species is expected.

Classification
Under the Cronquist system, the families Begoniaceae, Cucurbitaceae, and Datiscaceae were placed in the order Violales, within the subclass Dilleniidae, with the Tetramelaceae subsumed into the Datiscaceae. Corynocarpaceae was placed in the order Celastrales, and Anisophylleaceae in the order Rosales, both under subclass Rosidae. Coriariaceae was placed in the order Ranunculales, subclass Magnoliidae. Apodanthaceae was not recognised as a family, its genera being assigned to another parasitic plant family, the Rafflesiaceae. The present classification is due to APG III (2009).

Systematics
Modern molecular phylogenetics suggests the following relationships among the families:
History of Cambodia
The history of Cambodia, a country in mainland Southeast Asia, can be traced back to Indian civilization. Detailed records of a political structure on the territory of what is now Cambodia first appear in Chinese annals in reference to Funan, a polity that encompassed the southernmost part of the Indochinese peninsula during the 1st to 6th centuries. Centered at the lower Mekong, Funan is noted as the oldest regional Hindu culture, which suggests prolonged socio-economic interaction with maritime trading partners of the Indosphere in the west. By the 6th century a civilization called Chenla, or Zhenla in Chinese annals, had firmly replaced Funan, as it controlled larger, more undulating areas of Indochina and maintained more than a singular centre of power.

The Khmer Empire was established by the early 9th century. Sources refer here to a mythical initiation and consecration ceremony to claim political legitimacy by founder Jayavarman II at Mount Kulen (Mount Mahendra) in 802 CE. A succession of powerful sovereigns, continuing the Hindu devaraja cult tradition, reigned over the classical era of Khmer civilization until the 11th century. A new dynasty of provincial origin introduced Buddhism, which according to some scholars resulted in royal religious discontinuities and general decline. The royal chronology ends in the 14th century. Great achievements in administration, agriculture, architecture, hydrology, logistics, urban planning and the arts are testimony to a creative and progressive civilisation – in its complexity a cornerstone of Southeast Asian cultural legacy.

The decline continued through a transitional period of approximately 100 years, followed by the Middle Period of Cambodian history, also called the Post-Angkor Period, beginning in the mid-15th century. Although the Hindu cults had by then been all but replaced, the monument sites at the old capital remained an important spiritual centre. Since the mid-15th century, however, the core population steadily moved to the east and – with brief exceptions – settled at the confluence of the Mekong and Tonle Sap rivers at Chaktomuk, Longvek and Oudong.

Maritime trade was the basis for a very prosperous 16th century. But as a result, foreigners – Muslim Malays and Cham, Christian European adventurers and missionaries – increasingly disturbed and influenced government affairs. Ambiguous fortunes, a robust economy on the one hand and a disturbed culture and compromised royalty on the other, were constant features of the Longvek era.

By the 15th century, the Khmers' traditional neighbours, the Mon people in the west and the Cham people in the east, had gradually been pushed aside or replaced by the resilient Siamese/Thai and Annamese/Vietnamese, respectively. These powers had perceived, understood and increasingly followed the imperative of controlling the lower Mekong basin as the key to controlling all Indochina. A weak Khmer kingdom only encouraged the strategists in Ayutthaya (later in Bangkok) and in Huế. Attacks on and conquests of Khmer royal residences left sovereigns without a ceremonial and legitimate power base. Interference in succession and marriage policies added to the decay of royal prestige. Oudong was established in 1601 as the last royal residence of the Middle Period.
The 19th-century arrival of technologically more advanced and ambitious European colonial powers, with concrete policies of global control, put an end to regional feuds. Siam/Thailand, although humiliated and on the retreat, escaped colonisation as a buffer state, while Vietnam was to be the focal point of French colonial ambition. Cambodia, although largely neglected, had entered the Indochinese Union as a perceived entity and was able to carry and reclaim its identity and integrity into modernity. After 80 years of colonial hibernation, the brief episode of Japanese occupation during World War II, which coincided with the investiture of King Sihanouk, was the opening act for the irreversible process towards re-emancipation and modern Cambodian history. The Kingdom of Cambodia (1953–70), independent since 1953, struggled to remain neutral in a world shaped by the polarisation of the nuclear powers, the USA and the Soviet Union. As the Indochinese war escalated, Cambodia became increasingly involved; the Khmer Republic of 1970 was one result, civil war another. In 1975, abandoned and in the hands of the Khmer Rouge, Cambodia endured its darkest hour – Democratic Kampuchea – and its long aftermath: Vietnamese occupation, the People's Republic of Kampuchea, and the UN mandate leading towards modern Cambodia since 1993.

Prehistory and early history
Radiocarbon dating of a cave at Laang Spean in Battambang Province, northwest Cambodia, confirmed the presence of Hoabinhian stone tools from 6000–7000 BCE and pottery from 4200 BCE. Starting in 2009, archaeological research of the Franco-Cambodian Prehistoric Mission has documented a complete cultural sequence from 71,000 years BP to the Neolithic period in the cave. Finds since 2012 have led to the common interpretation that the cave contains the archaeological remains of a first occupation by hunter-gatherer groups, followed by Neolithic people with highly developed hunting strategies and stone tool making techniques, as well as highly artistic pottery making and design, and with elaborate social, cultural, symbolic and exequial practices. Cambodia participated in the Maritime Jade Road, which was in place in the region for 3,000 years, from 2000 BCE to 1000 CE.

Skulls and human bones found at Samrong Sen in Kampong Chhnang Province date from 1500 BCE. Heng Sophady (2007) has drawn comparisons between Samrong Sen and the circular earthwork sites of eastern Cambodia. These people may have migrated from south-eastern China to the Indochinese Peninsula. Scholars trace the first cultivation of rice and the first bronze making in Southeast Asia to these people. A 2010 examination of skeletal material from graves at Phum Snay in north-west Cambodia revealed an exceptionally high number of injuries, especially to the head, likely to have been caused by interpersonal violence. The graves also contain a quantity of swords and other offensive weapons used in conflict.

The Iron Age period of Southeast Asia begins around 500 BCE and lasts until the end of the Funan era – around 500 CE – as it provides the first concrete evidence for sustained maritime trade and socio-political interaction with India and South Asia. By the 1st century settlers had developed complex, organised societies and a varied religious cosmology, which required advanced spoken languages very much related to those of the present day.
The most advanced groups lived along the coast and in the lower Mekong River valley and the delta regions, in houses on stilts, where they cultivated rice, fished and kept domesticated animals.

Funan Kingdom (1st century – 550/627)
Chinese annals contain detailed records of the first known organised polity, the Kingdom of Funan, on Cambodian and Vietnamese territory, characterised by "high population and urban centers, the production of surplus food...socio-political stratification [and] legitimized by Indian religious ideologies". It was centered around the lower Mekong and Bassac rivers from the first to sixth century CE, with "walled and moated cities" such as Angkor Borei in Takeo Province and Óc Eo in modern An Giang Province, Vietnam. Early Funan was composed of loose communities, each with its own ruler, linked by a common culture and a shared economy of rice farming people in the hinterland and traders in the coastal towns, who were economically interdependent, as surplus rice production found its way to the ports.

By the second century CE Funan controlled the strategic coastline of Indochina and the maritime trade routes. Cultural and religious ideas reached Funan via the Indian Ocean trade route. Trade with India had commenced well before 500 BCE, as Sanskrit had not yet replaced Pali. Funan's language has been determined to have been an early form of Khmer, and its written form was Sanskrit. In the period 245–250 CE dignitaries of the Chinese Kingdom of Wu visited the Funan city Vyadharapura. Envoys Kang Tai and Zhu Ying defined Funan as a distinct Hindu culture. Trade with China had begun after the southward expansion of the Han Dynasty, around the 2nd century BCE. Effectively Funan "controlled strategic land routes in addition to coastal areas" and occupied a prominent position as an "economic and administrative hub" between the Indian Ocean trade network and China, collectively known as the Maritime Silk Road. Trade routes that eventually ended in distant Rome are corroborated by Roman and Persian coins and artefacts, unearthed at archaeological sites of 2nd and 3rd century settlements.

Funan is associated with myths, such as the Kattigara legend and the Khmer founding legend, in which an Indian Brahman or prince named Preah Thaong in Khmer, Kaundinya in Sanskrit and Hun-t'ien in Chinese records marries the local ruler, a princess named Nagi Soma (Lieu-Ye in Chinese records), thus establishing the first Cambodian royal dynasty. Scholars debate how deeply the narrative is rooted in actual events, as well as Kaundinya's origin and status. A Chinese document that underwent four alterations and a 3rd-century epigraphic inscription of Champa are the contemporary sources. Some scholars consider the story to be simply an allegory for the diffusion of Indic Hindu and Buddhist beliefs into ancient local cosmology and culture, whereas some historians dismiss it chronologically.

Chinese annals report that Funan reached its territorial climax in the early 3rd century under the rule of king Fan Shih-man, extending as far south as Malaysia and as far west as Burma. A system of mercantilism in commercial monopolies was established. Exports ranged from forest products to precious metals and commodities such as gold, elephants, ivory, rhinoceros horn, kingfisher feathers, wild spices like cardamom, lacquer, hides and aromatic wood.
Under Fan Shih-man, Funan maintained a formidable fleet and was administered by an advanced bureaucracy, based on a "tribute-based economy, that produced a surplus which was used to support foreign traders along its coasts and ostensibly to launch expansionist missions to the west and south". Historians maintain contradictory ideas about Funan's political status and integrity. Miriam T. Stark calls it simply Funan: the "notion of Fu Nan as an early 'state'...has been built largely by historians using documentary and historical evidence", and Michael Vickery remarks: "Nevertheless, it is...unlikely that the several ports constituted a unified state, much less an 'empire'". Other sources, though, imply imperial status: "Vassal kingdoms spread to southern Vietnam in the east and to the Malay peninsula in the west" and "Here we will look at two empires of this period...Funan and Srivijaya".

The question of how Funan came to an end is, in the face of almost universal scholarly disagreement, impossible to pin down. Chenla is the name of Funan's successor in Chinese annals, first appearing in 616/617 CE. The archaeological approach to, and interpretation of, the entire early historic period is considered a decisive supplement for future research. The "Lower Mekong Archaeological Project" focuses on the development of political complexity in this region during the early historic period. LOMAP survey results of 2003 to 2005, for example, have helped to determine that "...the region's importance continued unabated throughout the pre-Angkorian period...and that at least three [surveyed areas] bear Angkorian-period dates and suggest the continued importance of the delta."

Chenla Kingdom (6th century – 802)
The History of the Chinese Sui dynasty contains records that a state called Chenla sent an embassy to China in 616 or 617 CE. It says that Chenla was a vassal of Funan but, under its ruler Citrasena-Mahendravarman, conquered Funan and gained independence. Most of the Chinese recordings on Chenla, including that of Chenla conquering Funan, have been contested since the 1970s, as they are generally based on single remarks in the Chinese annals; author Claude Jacques emphasised the very vague character of the Chinese terms 'Funan' and 'Chenla', while more domestic epigraphic sources have become available. Claude Jacques summarises: "Very basic historical mistakes have been made" because "the history of pre-Angkorean Cambodia was reconstructed much more on the basis of Chinese records than on that of [Cambodian] inscriptions", and as new inscriptions were discovered, researchers "preferred to adjust the newly discovered facts to the initial outline rather than to call the Chinese reports into question". The notion of Chenla's centre being in modern Laos has also been contested: "All that is required is that it be inland from Funan."

The most important political record of pre-Angkor Cambodia, the inscription K53 from Ba Phnom, dated 667 CE, does not indicate any political discontinuity, either in the royal succession of kings Rudravarman, Bhavavarman I, Mahendravarman [Citrasena], Īśānavarman, and Jayavarman I or in the status of the family of officials who produced the inscription. Another inscription of a few years later, K44, 674 CE, commemorating a foundation in Kampot province under the patronage of Jayavarman I, refers to an earlier foundation in the time of King Raudravarma, presumably Rudravarman of Funan, and again there is no suggestion of political discontinuity.
The History of the T'ang asserts that shortly after 706 the country was split into Land Chenla and Water Chenla. The names signify a northern and a southern half, which may conveniently be referred to as Upper and Lower Chenla. By the late 8th century Water Chenla had become a vassal of the Sailendra dynasty of Java – the last of its kings were killed and the polity incorporated into the Javanese monarchy around 790 CE. Land Chenla acquired independence under Jayavarman II in 802 CE.

The Khmers, vassals of Funan, had reached the Mekong river from the northern Menam River via the Mun River Valley. Chenla, their first independent state, developed out of Funanese influence. Ancient Chinese records mention two kings, Shrutavarman and Shreshthavarman, who ruled at the capital Shreshthapura, located in modern-day southern Laos. The immense influence on the identity of Cambodia to come was wrought by the Khmer Kingdom of Bhavapura, in the modern-day Cambodian city of Kampong Thom. Its legacy was its most important sovereign, Ishanavarman, who completely conquered the kingdom of Funan during 612–628. He chose as his new capital Sambor Prei Kuk, naming it Ishanapura.

Khmer Empire (802–1431)
The six centuries of the Khmer Empire are characterised by unparalleled technical and artistic progress and achievements, political integrity and administrative stability. The empire represents the cultural and technical apogee of the Cambodian and Southeast Asian pre-industrial civilisation. The Khmer Empire was preceded by Chenla, a polity with shifting centres of power, which was split into Land Chenla and Water Chenla in the early 8th century. By the late 8th century Water Chenla was absorbed by the Malays of the Srivijaya Empire and the Javanese of the Shailandra Empire and eventually incorporated into Java and Srivijaya.

Jayavarman II, ruler of Land Chenla, initiated a mythical Hindu consecration ceremony at Mount Kulen (Mount Mahendra) in 802 CE, intended to proclaim political autonomy and royal legitimacy. As he declared himself devaraja – god-king, divinely appointed and uncontested – he simultaneously declared independence from Shailandra and Srivijaya. He established Hariharalaya, the first capital of the Angkorean area, near the modern town of Roluos.

Indravarman I (877–889) and his son and successor Yasovarman I (889–900), who established the capital Yasodharapura, ordered the construction of huge water reservoirs (barays) north of the capital. The water management network depended on elaborate configurations of channels, ponds, and embankments built from huge quantities of clayey sand, the available bulk material on the Angkor plain. Dikes of the East Baray, which are more than long and wide, still exist today. The largest component is the West Baray, a reservoir about long and across, containing approximately 50 million m³ of water.

Royal administration was based on the religious idea of the Shivaite Hindu state and the central cult of the sovereign as warlord and protector – the "Varman". This centralised system of governance appointed royal functionaries to provinces. The Mahidharapura dynasty – whose first king was Jayavarman VI (1080 to 1107) and which originated west of the Dângrêk Mountains in the Mun river valley – discontinued the old "ritual policy" and genealogical traditions and, crucially, Hinduism as the exclusive state religion. Some historians relate the empire's decline to these religious discontinuities.
The area that comprises the various capitals, nowadays commonly called Angkor, was spread out over around . The combination of sophisticated wet-rice agriculture, based on an engineered irrigation system, and the Tonlé Sap's spectacular abundance in fish and aquatic fauna as a protein source guaranteed a regular food surplus. Recent geo-surveys have confirmed that Angkor maintained the largest pre-industrial settlement complex worldwide during the 12th and 13th centuries – some three quarters of a million people lived there. Sizeable contingents of the public workforce were redirected to monument building and infrastructure maintenance. A growing number of researchers relates the progressive over-exploitation of the delicate local eco-system and its resources, alongside large scale deforestation and resulting erosion, to the empire's eventual decline.

Under king Suryavarman II (1113–1150) the empire reached its greatest geographic extent, as it directly or indirectly controlled Indochina, the Gulf of Thailand and large areas of northern maritime Southeast Asia. Suryavarman II commissioned the temple of Angkor Wat, built in a period of 37 years; with its five towers representing Mount Meru, it is considered to be the most accomplished expression of classical Khmer architecture. However, territorial expansion ended when Suryavarman II was killed in battle attempting to invade Đại Việt. It was followed by a period of dynastic upheaval and a Cham invasion that culminated in the sack of Angkor in 1177.

King Jayavarman VII (reigned 1181–1219) is generally considered to be Cambodia's greatest king. A Mahayana Buddhist, he initiated his reign by striking back against Champa in a successful campaign. During his nearly forty years in power he became the most prolific monument builder, establishing the city of Angkor Thom with its central temple, the Bayon. Further outstanding works are attributed to him – Banteay Kdei, Ta Prohm, Neak Pean and Sra Srang. The construction of an impressive number of utilitarian and secular projects and edifices, such as the maintenance of the extensive road network of Suryavarman I – in particular the royal road to Phimai – and the many rest houses, bridges and hospitals, makes Jayavarman VII unique among all imperial rulers.

In August 1296, the Chinese diplomat Zhou Daguan arrived at Angkor and remained at the court of king Srindravarman until July 1297. He wrote a detailed report, The Customs of Cambodia, on life in Angkor. His portrayal is one of the most important sources for understanding historical Angkor, as the text offers valuable information on the everyday life and habits of the inhabitants of Angkor. The last Sanskrit inscription is dated 1327, and records the succession of Indrajayavarman by Jayavarman IX Parameshwara (1327–1336).

The empire was an agrarian state that consisted essentially of three social classes: the elite, workers and slaves. The elite included advisers, military leaders, courtiers, priests, religious ascetics and officials. Workers included agricultural labourers and also a variety of craftsmen for construction projects. Slaves were often captives from military campaigns or distant villages. Coinage did not exist, and the barter economy was based on agricultural produce, principally rice, with regional trade an insignificant part of the economy.
Post-Angkor Period of Cambodia (1431–1863)

The term "Post-Angkor Period of Cambodia", also the "Middle Period", refers to the historical era from the early 15th century to 1863, the beginning of the French Protectorate of Cambodia. Reliable sources – particularly for the 15th and 16th centuries – are very rare. A conclusive explanation that relates to concrete events manifesting the decline of the Khmer Empire has not yet been produced. However, most modern historians contend that several distinct and gradual changes of religious, dynastic, administrative and military nature, environmental problems and ecological imbalance coincided with shifts of power in Indochina, and must all be taken into account to make an interpretation. In recent years, focus has notably shifted towards studies on climate change, human–environment interactions and their ecological consequences.

Temple epigraphy ends in the third decade of the fourteenth century and does not resume until the mid-16th century. Recording of the royal chronology discontinues with King Jayavarman IX Parameshwara (or Jayavarma-Paramesvara) – there exists not a single contemporary record of even a king's name for over 200 years. Construction of monumental temple architecture had come to a standstill after Jayavarman VII's reign. According to author Michael Vickery, only external sources exist for Cambodia's 15th century: the Chinese Ming Shilu annals and the earliest Royal Chronicle of Ayutthaya. Wang Shi-zhen (王世貞), a Chinese scholar of the 16th century, remarked: "The official historians are unrestrained and are skilful at concealing the truth; but the memorials and statutes they record and the documents they copy cannot be discarded."

The central reference point for the entire 15th century is a Siamese intervention of some undisclosed nature at the capital Yasodharapura (Angkor Thom) around the year 1431. Historians relate the event to the shift of Cambodia's political centre southward to the region of Phnom Penh, Longvek and later Oudong.

Sources for the 16th century are more numerous. The kingdom, centred on the Mekong, prospered as an integral part of the Asian maritime trade network, via which the first contact with European explorers and adventurers occurred. Wars with the Siamese resulted in loss of territory and eventually the conquest of the capital Longvek in 1594. The Vietnamese, on their "Southward March", reached Prei Nokor/Saigon at the Mekong Delta in the 17th century. This event initiated the slow process of Cambodia losing access to the seas and independent marine trade. Siamese and Vietnamese dominance intensified during the 17th and 18th centuries, resulting in frequent displacements of the seat of power as the Khmer royal authority decreased to the state of a vassal. In the early 19th century, with dynasties in Vietnam and Siam firmly established, Cambodia was placed under joint suzerainty, having lost its national sovereignty. British agent John Crawfurd stated: "...the King of that ancient Kingdom is ready to throw himself under the protection of any European nation..." To save Cambodia from being incorporated into Vietnam and Siam, the Cambodians entreated the aid of the Luzones/Lucoes (Filipinos from Luzon, Philippines) who had previously participated in the Burmese–Siamese wars as mercenaries.
When the embassy arrived in Luzon, the rulers there were now Spaniards, so the Cambodians asked them for aid too, together with their Latin American troops brought over from Mexico, in order to restore the then-Christianised king, Satha II, as monarch of Cambodia once a Thai/Siamese invasion had been repelled. That restoration, however, was only temporary. Nevertheless, the future king, Ang Duong, also enlisted the aid of the French, who were allied with the Spanish (Spain being ruled by a French royal dynasty, the Bourbons). The Cambodian king agreed to colonial France's offers of protection in order to restore the existence of the Cambodian monarchy, which took effect with King Norodom Prohmbarirak signing and officially recognising the French protectorate on 11 August 1863.

French colonial period (1863–1953)

In August 1863 King Norodom signed an agreement with the French placing the kingdom under the protection of France. The original treaty left Cambodian sovereignty intact, but French control gradually increased, with important landmarks in 1877, 1884, and 1897, until by the end of the century the king's authority no longer existed outside the palace. Norodom died in 1904, and his two successors, Sisowath and Monivong, were content to allow the French to control the country, but in 1940 France was defeated in a brief border war with Thailand and forced to surrender the provinces of Battambang and Angkor (the ancient site of Angkor itself was retained). King Monivong died in April 1941, and the French placed the obscure Prince Sihanouk on the throne as king, believing that the inexperienced 18-year-old would be more pliable than Monivong's middle-aged son, Prince Monireth.

Cambodia's situation at the end of the war was chaotic. The Free French, under General Charles de Gaulle, were determined to recover Indochina, though they offered Cambodia and the other Indochinese protectorates a carefully circumscribed measure of self-government. Convinced that they had a "civilizing mission", they envisioned Indochina's participation in a French Union of former colonies that shared the common experience of French culture.

Administration of Sihanouk (1953–70)

On 9 March 1945, during the Japanese occupation of Cambodia, young king Norodom Sihanouk proclaimed an independent Kingdom of Kampuchea, following a formal request by the Japanese. Shortly thereafter the Japanese government nominally ratified the independence of Cambodia and established a consulate in Phnom Penh. The new government did away with the romanisation of the Khmer language that the French colonial administration had begun to enforce and officially reinstated the Khmer script. This measure taken by the short-lived governmental authority would prove popular and long-lasting, for since then no government in Cambodia has tried to romanise the Khmer language again. After Allied military units entered Cambodia, the Japanese military forces present in the country were disarmed and repatriated. The French were able to reimpose the colonial administration in Phnom Penh in October of the same year. Sihanouk's "royal crusade for independence" resulted in grudging French acquiescence to his demands for a transfer of sovereignty. A partial agreement was struck in October 1953. Sihanouk then declared that independence had been achieved and returned in triumph to Phnom Penh.
As a result of the Geneva Conference on Indochina, Cambodia was able to bring about the withdrawal of the Viet Minh troops from its territory and to withstand any residual impingement upon its sovereignty by external powers. Neutrality was the central element of Cambodian foreign policy during the 1950s and 1960s. By the mid-1960s, parts of Cambodia's eastern provinces were serving as bases for North Vietnamese Army and National Liberation Front (NVA/NLF) forces operating against South Vietnam, and the port of Sihanoukville was being used to supply them. As NVA/VC activity grew, the United States and South Vietnam became concerned, and in 1969, the United States began a 14-month-long series of bombing raids targeted at NVA/VC elements, contributing to destabilisation. The bombing campaign was at first confined to a narrow strip inside the Cambodian border and later extended deeper into the country, into areas from which the Cambodian population had been evicted by the NVA.

Prince Sihanouk, fearing that the conflict between communist North Vietnam and South Vietnam might spill over to Cambodia, publicly opposed the idea of a bombing campaign by the United States along the Vietnam–Cambodia border and inside Cambodian territory. However, Peter Rodman claimed, "Prince Sihanouk complained bitterly to us about these North Vietnamese bases in his country and invited us to attack them". In December 1967 Washington Post journalist Stanley Karnow was told by Sihanouk that if the US wanted to bomb the Vietnamese communist sanctuaries, he would not object, unless Cambodians were killed. The same message was conveyed to US President Johnson's emissary Chester Bowles in January 1968. So the US had no real motivation to overthrow Sihanouk. However, Prince Sihanouk wanted Cambodia to stay out of the North Vietnam–South Vietnam conflict and was very critical of the United States government and its allies (the South Vietnamese government). Prince Sihanouk, facing internal struggles of his own due to the rise of the Khmer Rouge, did not want Cambodia to be involved in the conflict. Sihanouk wanted the United States and its allies (South Vietnam) to keep the war away from the Cambodian border. Sihanouk did not allow the United States to use Cambodian air space and airports for military purposes. This upset the United States greatly and contributed to its view of Prince Sihanouk as a North Vietnamese sympathiser and a thorn in the side of the United States. However, declassified documents indicate that, as late as March 1970, the Nixon administration was hoping to garner "friendly relations" with Sihanouk.

Throughout the 1960s, domestic Cambodian politics became polarised. Opposition to the government grew within the middle class and among leftists, including Paris-educated leaders like Son Sen, Ieng Sary, and Saloth Sar (later known as Pol Pot), who led an insurgency under the clandestine Communist Party of Kampuchea (CPK). Sihanouk called these insurgents the Khmer Rouge, literally the "Red Khmer". But the 1966 national assembly elections showed a significant swing to the right, and General Lon Nol formed a new government, which lasted until 1967. During 1968 and 1969, the insurgency worsened. However, members of the government and army, who resented Sihanouk's ruling style as well as his tilt away from the United States, did have a motivation to overthrow him.
Khmer Republic and the War (1970–75)

While visiting Beijing in 1970, Sihanouk was ousted by a military coup led by Prime Minister General Lon Nol and Prince Sisowath Sirik Matak in the early hours of 18 March 1970. However, as early as 12 March 1970, the CIA Station Chief had told Washington that, based on communications from Sirik Matak, Lon Nol's cousin, "the (Cambodian) army was ready for a coup". Lon Nol assumed power after the military coup and immediately allied Cambodia with the United States. Son Ngoc Thanh, an opponent of Pol Pot, announced his support for the new government. On 9 October, the Cambodian monarchy was abolished, and the country was renamed the Khmer Republic. The new regime immediately demanded that the Vietnamese communists leave Cambodia.

Hanoi rejected the new republic's request for the withdrawal of NVA troops. In response, the United States moved to provide material assistance to the new government's armed forces, which were engaged against both CPK insurgents and NVA forces. The North Vietnamese and Viet Cong forces, desperate to retain their sanctuaries and supply lines from North Vietnam, immediately launched armed attacks on the new government. The North Vietnamese quickly overran large parts of eastern Cambodia, reaching to within a short distance of Phnom Penh. The North Vietnamese turned the newly won territories over to the Khmer Rouge. Sihanouk urged his followers to help in overthrowing this government, hastening the onset of civil war.

In April 1970, US President Richard Nixon announced to the American public that US and South Vietnamese ground forces had entered Cambodia in a campaign aimed at destroying NVA base areas in Cambodia (see Cambodian Incursion). The US had already been bombing Vietnamese positions in Cambodia for well over a year by that point. Although a considerable quantity of equipment was seized or destroyed by US and South Vietnamese forces, containment of North Vietnamese forces proved elusive.

The Khmer Republic's leadership was plagued by disunity among its three principal figures: Lon Nol, Sihanouk's cousin Sirik Matak, and National Assembly leader In Tam. Lon Nol remained in power in part because none of the others was prepared to take his place. In 1972, a constitution was adopted, a parliament elected, and Lon Nol became president. But disunity, the problems of transforming a 30,000-man army into a national combat force of more than 200,000 men, and spreading corruption weakened the civilian administration and army.

The Khmer Rouge insurgency inside Cambodia continued to grow, aided by supplies and military support from North Vietnam. Pol Pot and Ieng Sary asserted their dominance over the Vietnamese-trained communists, many of whom were purged. At the same time, the Khmer Rouge (CPK) forces became stronger and more independent of their Vietnamese patrons. By 1973, the CPK were fighting battles against government forces with little or no North Vietnamese troop support, and they controlled nearly 60% of Cambodia's territory and 25% of its population. The government made three unsuccessful attempts to enter into negotiations with the insurgents, but by 1974, the CPK was operating openly as divisions, and some of the NVA combat forces had moved into South Vietnam. Lon Nol's control was reduced to small enclaves around the cities and main transportation routes. More than two million refugees from the war lived in Phnom Penh and other cities.
On New Year's Day 1975, Communist troops launched an offensive which, in 117 days of the hardest fighting of the war, caused the collapse of the Khmer Republic. Simultaneous attacks around the perimeter of Phnom Penh pinned down Republican forces, while other CPK units overran fire bases controlling the vital lower Mekong resupply route. A US-funded airlift of ammunition and rice ended when Congress refused additional aid for Cambodia. The Lon Nol government in Phnom Penh surrendered on 17 April 1975, just five days after the US mission evacuated Cambodia.

Foreign involvement in the rise of the Khmer Rouge

The relationship between the massive carpet bombing of Cambodia by the United States and the growth of the Khmer Rouge, in terms of recruitment and popular support, has been a matter of interest to historians. Some historians, including Michael Ignatieff, Adam Jones and Greg Grandin, have cited the United States intervention and bombing campaign (spanning 1965–1973) as a significant factor which led to increased support for the Khmer Rouge among the Cambodian peasantry. According to Ben Kiernan, the Khmer Rouge "would not have won power without U.S. economic and military destabilization of Cambodia. ... It used the bombing's devastation and massacre of civilians as recruitment propaganda and as an excuse for its brutal, radical policies and its purge of moderate communists and Sihanoukists." Pol Pot biographer David P. Chandler writes that the bombing "had the effect the Americans wanted – it broke the Communist encirclement of Phnom Penh", but it also accelerated the collapse of rural society and increased social polarization. Peter Rodman and Michael Lind claimed that the United States intervention saved the Lon Nol regime from collapse in 1970 and 1973. Craig Etcheson acknowledged that U.S. intervention increased recruitment for the Khmer Rouge but disputed that it was a primary cause of the Khmer Rouge victory. William Shawcross wrote that the United States bombing and ground incursion plunged Cambodia into the chaos that Sihanouk had worked for years to avoid.

By 1973, Vietnamese support of the Khmer Rouge had largely disappeared. China "armed and trained" the Khmer Rouge both during the civil war and in the years afterward. Owing to Chinese, U.S., and Western support, the Khmer Rouge-dominated Coalition Government of Democratic Kampuchea (CGDK) held Cambodia's UN seat until 1993, long after the Cold War had ended. China has defended its ties with the Khmer Rouge. Chinese Foreign Ministry spokeswoman Jiang Yu said that "the government of Democratic Kampuchea had a legal seat at the United Nations, and had established broad foreign relations with more than 70 countries".

Democratic Kampuchea (Khmer Rouge era) (1975–79)

Immediately after its victory, the CPK ordered the evacuation of all cities and towns, sending the entire urban population into the countryside to work as farmers, as the CPK was trying to reshape society into a model that Pol Pot had conceived. The new government sought to completely restructure Cambodian society. Remnants of the old society were abolished and religion was suppressed. Agriculture was collectivised, and the surviving part of the industrial base was abandoned or placed under state control. Cambodia had neither a currency nor a banking system. Democratic Kampuchea's relations with Vietnam and Thailand worsened rapidly as a result of border clashes and ideological differences.
While communist, the CPK was fiercely nationalistic, and most of its members who had lived in Vietnam were purged. Democratic Kampuchea established close ties with the People's Republic of China, and the Cambodian–Vietnamese conflict became part of the Sino-Soviet rivalry, with Moscow backing Vietnam. Border clashes worsened when the Democratic Kampuchea military attacked villages in Vietnam. The regime broke off relations with Hanoi in December 1977, protesting Vietnam's alleged attempt to create an Indochina Federation. In mid-1978, Vietnamese forces invaded Cambodia, advancing some distance into the country before the arrival of the rainy season.

China's reasons for supporting the CPK were to prevent a pan-Indochina movement and to maintain Chinese military superiority in the region. The Soviet Union supported a strong Vietnam to maintain a second front against China in case of hostilities and to prevent further Chinese expansion. Since Stalin's death, relations between Mao-controlled China and the Soviet Union had been lukewarm at best. From February to March 1979, China and Vietnam fought the brief Sino-Vietnamese War over the issue.

In December 1978, Vietnam announced the formation of the Kampuchean United Front for National Salvation (KUFNS) under Heng Samrin, a former DK division commander. It was composed of Khmer Communists who had remained in Vietnam after 1975 and officials from the eastern sector – like Heng Samrin and Hun Sen – who had fled to Vietnam from Cambodia in 1978. In late December 1978, Vietnamese forces launched a full invasion of Cambodia, capturing Phnom Penh on 7 January 1979 and driving the remnants of Democratic Kampuchea's army westward toward Thailand.

Within the CPK, the Paris-educated leadership – Pol Pot, Ieng Sary, Nuon Chea, and Son Sen – were in control. A new constitution in January 1976 established Democratic Kampuchea as a Communist People's Republic, and a 250-member Assembly of the Representatives of the People of Kampuchea (PRA) was selected in March to choose the collective leadership of a State Presidium, the chairman of which became the head of state. Prince Sihanouk resigned as head of state on 2 April. On 14 April, after its first session, the PRA announced that Khieu Samphan would chair the State Presidium for a five-year term. It also picked a 15-member cabinet headed by Pol Pot as prime minister. Prince Sihanouk was put under virtual house arrest.

Destruction and deaths caused by the regime

Some 20,000 people died of exhaustion or disease during the evacuation of Phnom Penh and its aftermath. Many of those forced to evacuate the cities were resettled in newly created villages, which lacked food, agricultural implements, and medical care. Many who had lived in cities had lost the skills necessary for survival in an agrarian environment. Thousands starved before the first harvest. Hunger and malnutrition – bordering on starvation – were constant during those years. Most military and civilian leaders of the former regime who failed to disguise their pasts were executed. Some of the ethnic groups in Cambodia, such as the Cham and the Vietnamese, suffered specific, targeted and violent persecution, to the point that some international sources refer to it as the "Cham genocide". Entire families and towns were targeted and attacked with the goal of significantly diminishing their numbers and eventually eliminating them. Life in "Democratic Kampuchea" was strict and brutal.
In many areas of the country people were rounded up and executed for speaking a foreign language, wearing glasses, scavenging for food, being absent from government-assigned work, and even for crying for dead loved ones. Former businessmen and bureaucrats were hunted down and killed along with their entire families; the Khmer Rouge feared that they held beliefs that could lead them to oppose the regime. A few Khmer Rouge loyalists were even killed for failing to find enough "counter-revolutionaries" to execute. When Cambodian socialists began to rebel in the eastern zone of Cambodia, Pol Pot ordered his armies to exterminate 1.5 million eastern Cambodians, whom he branded "Cambodians with Vietnamese minds", in the area. The purge was carried out speedily and efficiently: Pol Pot's soldiers killed at least 100,000 to 250,000 eastern Cambodians shortly after deporting them to execution sites in the Central, North and North-Western Zones, all within a month's time, making it the bloodiest single episode of mass murder under Pol Pot's regime. Religious institutions were not spared by the Khmer Rouge either; religion was persecuted to such a terrifying extent that the vast majority of Cambodia's historic architecture – 95% of Cambodia's Buddhist temples – was completely destroyed.

Ben Kiernan estimates that 1.671 million to 1.871 million Cambodians died as a result of Khmer Rouge policy, or between 21% and 24% of Cambodia's 1975 population. A study by French demographer Marek Sliwinski calculated slightly fewer than 2 million unnatural deaths under the Khmer Rouge out of a 1975 Cambodian population of 7.8 million; 33.5% of Cambodian men died under the Khmer Rouge, compared to 15.7% of Cambodian women. According to a 2001 academic source, the most widely accepted estimates of excess deaths under the Khmer Rouge range from 1.5 million to 2 million, although figures as low as 1 million and as high as 3 million have been cited; conventionally accepted estimates of deaths due to Khmer Rouge executions range from 500,000 to 1 million, "a third to one half of excess mortality during the period." However, a 2013 academic source (citing research from 2009) indicates that execution may have accounted for as much as 60% of the total, with 23,745 mass graves containing approximately 1.3 million suspected victims of execution. While this is considerably higher than earlier and more widely accepted estimates of Khmer Rouge executions, Craig Etcheson of the Documentation Center of Cambodia (DC-Cam) defended such estimates of over one million executions as "plausible, given the nature of the mass grave and DC-Cam's methods, which are more likely to produce an under-count of bodies rather than an over-estimate." Demographer Patrick Heuveline estimated that between 1.17 million and 3.42 million Cambodians died unnatural deaths between 1970 and 1979, with between 150,000 and 300,000 of those deaths occurring during the civil war. Heuveline's central estimate is 2.52 million excess deaths, of which 1.4 million were the direct result of violence.
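The percentage ranges quoted above can be cross-checked against the absolute figures. A minimal sketch, assuming that the 7.8 million 1975 population figure cited from Sliwinski's study is the appropriate denominator for Kiernan's estimates:

```python
# Consistency check of the mortality shares quoted above, using the
# 7.8 million 1975 population figure cited from Sliwinski's study.
population_1975 = 7.8e6

for deaths in (1.671e6, 1.871e6):   # Kiernan's estimate range
    share = deaths / population_1975
    print(f"{deaths / 1e6:.3f} million deaths -> {share:.1%} of the 1975 population")
# Prints roughly 21.4% and 24.0%, matching the 21%-24% range given above.
```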
Despite being based on a house-to-house survey of Cambodians, the estimate of 3.3 million deaths promulgated by the Khmer Rouge's successor regime, the People's Republic of Kampuchea (PRK), is generally considered to be an exaggeration; among other methodological errors, the PRK authorities added the estimated number of victims found in the partially exhumed mass graves to the raw survey results, meaning that some victims would have been double-counted. An estimated 300,000 Cambodians starved to death between 1979 and 1980, largely as a result of the after-effects of Khmer Rouge policies.

Vietnamese occupation and the PRK (1979–93)

On 10 January 1979, after the Vietnamese army and the KUFNS (Kampuchean United Front for National Salvation) invaded Cambodia and overthrew the Khmer Rouge, the new People's Republic of Kampuchea (PRK) was established with Heng Samrin as head of state. Pol Pot's Khmer Rouge forces retreated rapidly to the jungles near the Thai border. The Khmer Rouge and the PRK began a costly struggle that played into the hands of the larger powers – China, the United States and the Soviet Union. The Khmer People's Revolutionary Party's rule gave rise to a guerrilla movement of three major resistance groups – the FUNCINPEC (Front Uni National pour un Cambodge Indépendant, Neutre, Pacifique, et Coopératif), the KPNLF (Khmer People's National Liberation Front) and the PDK (Party of Democratic Kampuchea, the Khmer Rouge under the nominal presidency of Khieu Samphan). "All held dissenting perceptions concerning the purposes and modalities of Cambodia's future". Civil war displaced 600,000 Cambodians, who fled to refugee camps along the border with Thailand, and tens of thousands of people were murdered throughout the country. Peace efforts began in Paris in 1989 under the State of Cambodia, culminating two years later, in October 1991, in a comprehensive peace settlement. The United Nations was given a mandate – the United Nations Transitional Authority in Cambodia (UNTAC) – to enforce a ceasefire and deal with refugees and disarmament.

Modern Cambodia (1993–present)

On 23 October 1991, the Paris Conference reconvened to sign a comprehensive settlement giving the UN full authority to supervise a cease-fire, repatriate the displaced Khmer along the border with Thailand, disarm and demobilise the factional armies, and prepare the country for free and fair elections. Prince Sihanouk, President of the Supreme National Council of Cambodia (SNC), and other members of the SNC returned to Phnom Penh in November 1991 to begin the resettlement process in Cambodia. The UN Advance Mission for Cambodia (UNAMIC) was deployed at the same time to maintain liaison among the factions and begin demining operations to expedite the repatriation of approximately 370,000 Cambodians from Thailand. The UN Transitional Authority in Cambodia (UNTAC) arrived in Cambodia to begin implementation of the UN settlement plan and became operational on 15 March 1992 under Yasushi Akashi, the Special Representative of the UN Secretary-General. UNTAC grew into a 22,000-strong civilian and military peacekeeping force tasked with ensuring the conduct of free and fair elections for a constituent assembly. Over 4 million Cambodians (about 90% of eligible voters) participated in the May 1993 elections.
Pre-election violence and intimidation were widespread, caused by SOC (State of Cambodia – made up largely of former PDK cadre) security forces, mostly against the FUNCINPEC and BLDP parties, according to UNTAC. The Khmer Rouge or Party of Democratic Kampuchea (PDK), whose forces were never actually disarmed or demobilised, blocked local access to polling places. The royalist FUNCINPEC Party of Prince Ranariddh (son of Norodom Sihanouk) was the top vote recipient with 45.5% of the vote, followed by Hun Sen's Cambodian People's Party and the Buddhist Liberal Democratic Party, respectively. FUNCINPEC then entered into a coalition with the other parties that had participated in the election. A coalition government resulted between the Cambodian People's Party and FUNCINPEC, with two co-prime ministers – Hun Sen, since 1985 the prime minister in the Communist government, and Norodom Ranariddh.

The parties represented in the 120-member assembly proceeded to draft and approve a new constitution, which was promulgated on 24 September 1993. It established a multiparty liberal democracy in the framework of a constitutional monarchy, with the former Prince Sihanouk elevated to King. Prince Ranariddh and Hun Sen became First and Second Prime Ministers, respectively, in the Royal Cambodian Government (RGC). The constitution provides for a wide range of internationally recognised human rights.

Hun Sen and his government have seen much controversy. Hun Sen was a former Khmer Rouge commander who was originally installed by the Vietnamese and who, after the Vietnamese left the country, maintained his strongman position through violence and oppression when deemed necessary. In 1997, fearing the growing power of his co-prime minister, Prince Norodom Ranariddh, Hun Sen launched a coup, using the army to purge Ranariddh and his supporters. Ranariddh was ousted and fled to Paris, while other opponents of Hun Sen were arrested, tortured, and in some cases summarily executed.

On 4 October 2004, the Cambodian National Assembly ratified an agreement with the United Nations on the establishment of a tribunal to try senior leaders responsible for the atrocities committed by the Khmer Rouge. International donor countries pledged a US$43 million share of the three-year tribunal budget, while Cambodia contributed US$13.3 million. The tribunal has sentenced several senior Khmer Rouge leaders since 2008. Cambodia is still infested with countless land mines, indiscriminately planted by all warring parties during the decades of war and upheaval.

The Cambodia National Rescue Party was dissolved ahead of the 2018 Cambodian general election, and the ruling Cambodian People's Party also enacted tighter curbs on mass media. The CPP won every seat in the National Assembly without a major opposition, effectively solidifying de facto one-party rule in the country. Cambodia's longtime Prime Minister Hun Sen, one of the world's longest-serving leaders, has a very firm grip on power. He has been accused of cracking down on opponents and critics. His Cambodian People's Party (CPP) has been in power since 1979. In December 2021, Prime Minister Hun Sen announced his support for his son Hun Manet to succeed him after the next election, which is expected to take place in 2023.

Further reading
Chanda, Nayan. "China and Cambodia: In the mirror of history." Asia Pacific Review 9.2 (2002): 1–11.
Chandler, David. A History of Cambodia (4th ed., Westview Press, 2009).
Corfield, Justin. The History of Cambodia (ABC-CLIO, 2009).
Herz, Martin F.
Short History of Cambodia (1958).
Slocomb, Margaret. An Economic History of Cambodia in the Twentieth Century (National University of Singapore Press, 2010).
Strangio, Sebastian. Cambodia: From Pol Pot to Hun Sen and Beyond (2020).

External links
Records of the United Nations Advance Mission in Cambodia (UNAMIC) (1991–1992) at the United Nations Archives
Constitution of Cambodia
State Department Background Note: Cambodia
Summary of UNTAC mission
History of Cambodian Civil War from the Dean Peter Krogh Foreign Affairs Digital Archives
Cambodia under Sihanouk, 1954–70
Selective Mortality During the Khmer Rouge Period in Cambodia
Crossroads in Cambodia: The United Nations' responsibility to withdraw involvement from the establishment of a Cambodian Tribunal to prosecute the Khmer Rouge
BBC article
Khmer architecture
Khmer architecture, also known as Angkorian architecture, is the architecture produced by the Khmers during the Angkor period of the Khmer Empire, from approximately the latter half of the 8th century CE to the first half of the 15th century CE.

The architecture of the Indian rock-cut temples, particularly in sculpture, had an influence on Southeast Asia and was widely adopted into the Indianised architecture of Cambodian (Khmer), Annamese and Javanese temples (of Greater India). Evolved from Indian influences, Khmer architecture became clearly distinct from that of the Indian subcontinent as it developed its own special characteristics, some of which were created independently and others of which were incorporated from neighboring cultural traditions, resulting in a new artistic style in Asian architecture unique to the Angkorian tradition. The development of Khmer architecture as a distinct style is particularly evident in artistic depictions of divine and royal figures with facial features representative of the local Khmer population, including rounder faces, broader brows, and other physical characteristics.

In any study of Angkorian architecture, the emphasis is necessarily on religious architecture, since all the remaining Angkorian buildings are religious in nature. During the period of Angkor, only temples and other religious buildings were constructed of stone. Non-religious buildings such as dwellings were constructed of perishable materials such as wood, and so have not survived. The religious architecture of Angkor has characteristic structures, elements, and motifs, which are identified in the glossary below. Since a number of different architectural styles succeeded one another during the Angkorean period, not all of these features were equally in evidence throughout the period. Indeed, scholars have referred to the presence or absence of such features as one source of evidence for dating the remains.

Periodization

Many temples had been built before Cambodia became the powerful Khmer Empire which dominated most of the Indochina region. At that time, Cambodia was known as the Chenla kingdom, the predecessor state of the Khmer Empire. There are three pre-Angkorean architectural styles:

Sambor Prei Kuk style (610–650): Sambor Prei Kuk, also known as Isanapura, was the capital of the Chenla Kingdom. Temples of Sambor Prei Kuk were built with rounded, plain colonettes with capitals that include a bulb.
Prei Khmeng style (635–700): Structures reveal masterpieces of sculpture, but examples are scarce. Colonettes are larger than those of previous styles. Buildings were more heavily decorated but show a general decline in standards.
Kompong Preah style (700–800): Temples with more decorative rings on colonettes, which remain cylindrical. Brick construction continued.

Scholars have worked to develop a periodization of Angkorean architectural styles. The following periods and styles may be distinguished. Each is named for a particular temple regarded as paradigmatic for the style.

Kulen style (825–875): A continuation of the pre-Angkorean style, but also a period of innovation and borrowing, such as from Cham temples. Towers are mainly square and relatively high, built of brick with laterite walls and stone door surrounds; square and octagonal colonettes begin to appear.
Preah Ko style (877–886): Hariharalaya was the first capital city of the Khmer Empire located in the area of Angkor; its ruins are in the area now called Roluos, some fifteen kilometers southeast of the modern city of Siem Reap. The earliest surviving temple of Hariharalaya is Preah Ko; the others are Bakong and Lolei. The temples of the Preah Ko style are known for their small brick towers and for the great beauty and delicacy of their lintels.
Bakheng style (889–923): Bakheng was the first temple mountain constructed in the area of Angkor proper, north of Siem Reap. It was the state temple of King Yasovarman, who built his capital of Yasodharapura around it. Located on a hill (phnom), it is currently one of the most endangered of the monuments, having become a favorite perch for tourists eager to witness a glorious sundown at Angkor.
Koh Ker style (921–944): During the reign of King Jayavarman IV, the capital of the Khmer Empire was moved from the Angkor region north to Koh Ker. In the architectural style of the Koh Ker temples, the scale of buildings diminishes toward the center. Brick was still the main material, but sandstone was also used.
Pre Rup style (944–968): Under King Rajendravarman, the Angkorian Khmer built the temples of Pre Rup, East Mebon and Phimeanakas. Their common style is named after the state temple mountain of Pre Rup.
Banteay Srei style (967–1000): Banteay Srei is the only major Angkorian temple constructed not by a monarch, but by a courtier. It is known for its small scale and the extreme refinement of its decorative carvings, including several famous narrative bas-reliefs dealing with scenes from Indian mythology.
Khleang style (968–1010): The Khleang temples show the first use of galleries. Cruciform gopuras. Octagonal colonettes. Restrained decorative carving. Temples built in this style include Ta Keo and Phimeanakas.
Baphuon style (1050–1080): Baphuon, the massive temple mountain of King Udayadityavarman II, was apparently the temple that most impressed the Chinese traveller Zhou Daguan, who visited Angkor toward the end of the 13th century. Its unique relief carvings have a naive dynamic quality that contrasts with the rigidity of the figures typical of some other periods. As of 2008, Baphuon is under restoration and cannot currently be appreciated in its full magnificence.
Classical or Angkor Wat style (1080–1175): Angkor Wat, the temple and perhaps the mausoleum of King Suryavarman II, is the greatest of the Angkorian temples and defines what has come to be known as the classical style of Angkorian architecture. Other temples in this style are Banteay Samre and Thommanon in the area of Angkor, and Phimai in modern Thailand.
Bayon style (1181–1243): In the final quarter of the 12th century, King Jayavarman VII freed the country of Angkor from occupation by an invasion force from Champa. Thereafter, he began a massive program of monumental construction, paradigmatic for which was the state temple called the Bayon. The king's other foundations participated in the style of the Bayon, and included Ta Prohm, Preah Khan, Angkor Thom, and Banteay Chhmar. Though grandiose in plan and elaborately decorated, the temples exhibit a hurriedness of construction that contrasts with the perfection of Angkor Wat.
Post-Bayon style (1243–1431): Following the period of frantic construction under Jayavarman VII, Angkorian architecture entered the period of its decline. The 13th-century Terrace of the Leper King is known for its dynamic relief sculptures of demon kings, dancers, and nāgas.
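Because each style above carries an approximate date range, the periodization can be read as a simple lookup table: given the style attributed to a temple, one reads off an approximate date. The sketch below encodes the Angkorian styles exactly as listed; the dictionary layout and helper function are purely illustrative.

```python
# Minimal sketch: the Angkorian periodization above as a lookup table,
# mapping style name -> (approximate start year, end year). Data are taken
# directly from the list above; the helper function is illustrative only.

ANGKORIAN_STYLES = {
    "Kulen":        (825, 875),
    "Preah Ko":     (877, 886),
    "Bakheng":      (889, 923),
    "Koh Ker":      (921, 944),
    "Pre Rup":      (944, 968),
    "Banteay Srei": (967, 1000),
    "Khleang":      (968, 1010),
    "Baphuon":      (1050, 1080),
    "Angkor Wat":   (1080, 1175),
    "Bayon":        (1181, 1243),
    "Post-Bayon":   (1243, 1431),
}

def approximate_date_range(style: str) -> str:
    """Return the approximate date range for a named Angkorian style."""
    start, end = ANGKORIAN_STYLES[style]
    return f"{style} style: c. {start}-{end} CE"

print(approximate_date_range("Angkor Wat"))  # Angkor Wat style: c. 1080-1175 CE
```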
Materials

Angkorian builders used brick, sandstone, laterite and wood as their materials. The ruins that remain are of brick, sandstone and laterite, the wood elements having been lost to decay and other destructive processes.

Brick

The earliest Angkorian temples were made mainly of brick. Good examples are the temple towers of Preah Ko, Lolei and Bakong at Hariharalaya, and Chóp Mạt in Tay Ninh. Decorations were usually carved into a stucco applied to the brick, rather than into the brick itself. This was because brick is a softer material and does not lend itself to sculpting, as opposed to stones of different kinds such as sandstone or granite. However, the tenets of sacred architecture as enunciated in the Vedas and the Shastras require that no adhesives be used when building blocks are assembled one over the other to create temples; as such, brick was used only in relatively smaller temples such as Lolei and Preah Ko. Moreover, brick is much weaker than stone and degrades with age.

Angkor's neighboring state of Champa was also home to numerous brick temples that are similar in style to those of Angkor. The most extensive ruins are at Mỹ Sơn in Vietnam. A Cham story tells of the time that the two countries settled an armed conflict by means of a tower-building contest proposed by the Cham king Po Klaung Garai. While the Khmer built a standard brick tower, Po Klaung Garai directed his people to build an impressive replica of paper and wood. In the end, the Cham replica was more impressive than the real brick tower of the Khmer, and the Cham won the contest.

Sandstone

The only stone used by Angkorian builders was sandstone, obtained from the Kulen mountains. Since obtaining it was considerably more expensive than brick, sandstone only gradually came into use, and at first was used for particular elements such as door frames. The 10th-century temple of Ta Keo is the first Angkorian temple to have been constructed more or less entirely of sandstone.

Laterite

Angkorian builders used laterite, a clay that is soft when taken from the ground but that hardens when exposed to the sun, for foundations and other hidden parts of buildings. Because the surface of laterite is uneven, it was not suitable for decorative carvings unless first dressed with stucco. Laterite was more commonly used in the Khmer provinces than at Angkor itself. Because the water table in this entire region is high, laterite was used in the underlying layers of Angkor Wat and other temples (especially the larger ones), because it can absorb water and contribute to the stability of the temple.

Structures

Central sanctuary

The central sanctuary of an Angkorian temple was home to the temple's primary deity, the one to whom the site was dedicated: typically Shiva or Vishnu in the case of a Hindu temple, Buddha or a bodhisattva in the case of a Buddhist temple. The deity was represented by a statue (or, in the case of Shiva, most commonly by a linga). Since the temple was not considered a place of worship for use by the population at large, but rather a home for the deity, the sanctuary needed only to be large enough to hold the statue or linga; it was never more than a few metres across.
Its importance was instead conveyed by the height of the tower (prasat) rising above it, by its location at the centre of the temple, and by the greater decoration on its walls. Symbolically, the sanctuary represented Mount Meru, the legendary home of the Hindu gods.

Prang

The prang is the tall finger-like spire, usually richly carved, common to much Khmer religious architecture.

Enclosure

Khmer temples were typically enclosed by a concentric series of walls, with the central sanctuary in the middle; this arrangement represented the mountain ranges surrounding Mount Meru, the mythical home of the gods. Enclosures are the spaces between these walls, and between the innermost wall and the temple itself. By modern convention, enclosures are numbered from the centre outwards. The walls defining the enclosures of Khmer temples are frequently lined by galleries, while passage through the walls is by way of gopuras located at the cardinal points.

Gallery

A gallery is a passageway running along the wall of an enclosure or along the axis of a temple, often open to one or both sides. Historically, the form of the gallery evolved during the 10th century from the increasingly long hallways which had earlier been used to surround the central sanctuary of a temple. During the period of Angkor Wat in the first half of the 12th century, additional half galleries on one side were introduced to buttress the structure of the temple.

Gopura

A gopura is an entrance building. At Angkor, passage through the enclosure walls surrounding a temple compound is frequently accomplished by means of an impressive gopura, rather than just an aperture in the wall or a doorway. Enclosures surrounding a temple are often constructed with a gopura at each of the four cardinal points. In plan, gopuras are usually cross-shaped and elongated along the axis of the enclosure wall. If the wall is constructed with an accompanying gallery, the gallery is sometimes connected to the arms of the gopura. Many Angkorian gopuras have a tower at the centre of the cross. The lintels and pediments are often decorated, and guardian figures (dvarapalas) are often placed or carved on either side of the doorways.

Hall of Dancers

A Hall of Dancers is a structure of a type found in certain late 12th-century temples constructed under King Jayavarman VII: Ta Prohm, Preah Khan, Banteay Kdei and Banteay Chhmar. It is a rectangular building elongated along the temple's east axis and divided into four courtyards by galleries. Formerly it had a roof made of perishable materials; now only the stone walls remain. The pillars of the galleries are decorated with carved designs of dancing apsaras; hence scholars have suggested that the hall itself may have been used for dancing.

House of Fire

House of Fire, or Dharmasala, is the name given to a type of building found only in temples constructed during the reign of the late 12th-century monarch Jayavarman VII: Preah Khan, Ta Prohm and Banteay Chhmar. A House of Fire has thick walls, a tower at the west end and south-facing windows. Scholars theorize that the House of Fire functioned as a "rest house with fire" for travellers. An inscription at Preah Khan tells of 121 such rest houses lining the highways into Angkor. The Chinese traveller Zhou Daguan expressed his admiration for these rest houses when he visited Angkor in 1296 CE. Another theory is that the House of Fire had a religious function as the repository of the sacred flame used in sacred ceremonies.
Library

Structures conventionally known as "libraries" are a common feature of Khmer temple architecture, but their true purpose remains unknown. Most likely they functioned broadly as religious shrines rather than strictly as repositories of manuscripts. Freestanding buildings, they were normally placed in pairs on either side of the entrance to an enclosure, opening to the west.

Srah and baray

Srahs and barays were reservoirs, generally created by excavation and embankment, respectively. It is not clear whether the significance of these reservoirs was religious, agricultural, or a combination of the two. The two largest reservoirs at Angkor were the West Baray and the East Baray, located on either side of Angkor Thom. The East Baray is now dry. The West Mebon is an 11th-century temple standing at the center of the West Baray, and the East Mebon is a 10th-century temple standing at the center of the East Baray. The baray associated with Preah Khan is the Jayataka, in the middle of which stands the 12th-century temple of Neak Pean. Scholars have speculated that the Jayataka represents the Himalayan lake of Anavatapta, known for its miraculous healing powers.

Temple mountain

The dominant scheme for the construction of state temples in the Angkorian period was that of the temple mountain, an architectural representation of Mount Meru, the home of the gods in Hinduism. Enclosures represented the mountain chains surrounding Mount Meru, while a moat represented the ocean. The temple itself took shape as a pyramid of several levels, and the home of the gods was represented by the elevated sanctuary at the center of the temple. The first great temple mountain was the Bakong, a five-level pyramid dedicated in 881 by King Indravarman I. The structure of Bakong took the shape of a stepped pyramid, popularly identified as the temple mountain of early Khmer temple architecture. The striking similarity of the Bakong to Borobudur in Java, extending to architectural details such as the gateways and stairs to the upper terraces, strongly suggests that Borobudur might have served as the prototype of Bakong. There must have been exchanges of travellers, if not missions, between the Khmer kingdom and the Sailendras in Java, transmitting to Cambodia not only ideas but also technical and architectural details of Borobudur, including arched gateways built by the corbelling method. Other Khmer temple mountains include Baphuon, Pre Rup, Ta Keo, Koh Ker, the Phimeanakas, and most notably the Phnom Bakheng at Angkor. According to Charles Higham, "A temple was built for the worship of the ruler, whose essence, if a Saivite, was embodied in a linga... housed in the central sanctuary which served as a temple-mausoleum for the ruler after his death...these central temples also contained shrines dedicated to the royal ancestors and thus became centres of ancestor worship."

Elements

Bas-relief

Bas-reliefs are individual figures, groups of figures, or entire scenes cut into stone walls, not as drawings but as sculpted images projecting from a background. Sculpture in bas-relief is distinguished from sculpture in haut-relief, in that the latter projects farther from the background, in some cases almost detaching itself from it. The Angkorian Khmer preferred to work in bas-relief, while their neighbors the Cham were partial to haut-relief. Narrative bas-reliefs are bas-reliefs depicting stories from mythology or history.
Until about the 11th century, the Angkorian Khmer confined their narrative bas-reliefs to the space on the tympana above doorways. The most famous early narrative bas-reliefs are those on the tympana at the 10th-century temple of Banteay Srei, depicting scenes from Hindu mythology as well as scenes from the great works of Indian literature, the Ramayana and the Mahabharata. By the 12th century, however, the Angkorian artists were covering entire walls with narrative scenes in bas-relief. At Angkor Wat, the external gallery wall is covered with some 12,000 or 13,000 square meters of such scenes, some of them historical, some mythological. Similarly, the outer gallery at the Bayon contains extensive bas-reliefs documenting the everyday life of the medieval Khmer as well as historical events from the reign of King Jayavarman VII.

The following is a listing of the motifs illustrated in some of the more famous Angkorian narrative bas-reliefs:

bas-reliefs in the tympana at Banteay Srei (10th century):
the duel of the monkey princes Vali and Sugriva, and the intervention of the human hero Rama on behalf of the latter
the duel of Bhima and Duryodhana at the Battle of Kurukshetra
the Rakshasa king Ravana shaking Mount Kailasa, upon which sit Shiva and his shakti
Kama firing an arrow at Shiva as the latter sits on Mount Kailasa
the burning of Khandava Forest by Agni and Indra's attempt to extinguish the flames

bas-reliefs on the walls of the outer gallery at Angkor Wat (mid-12th century):
the Battle of Lanka between the Rakshasas and the vanaras or monkeys
the court and procession of King Suryavarman II, the builder of Angkor Wat
the Battle of Kurukshetra between Pandavas and Kauravas
the judgment of Yama and the tortures of Hell
the Churning of the Ocean of Milk
a battle between devas and asuras
a battle between Vishnu and a force of asuras
the conflict between Krishna and the asura Bana
the story of the monkey princes Vali and Sugriva

bas-reliefs on the walls of the outer and inner galleries at the Bayon (late 12th century):
battles on land and sea between Khmer and Cham troops
scenes from the everyday life of Angkor
civil strife among the Khmer
the legend of the Leper King
the worship of Shiva
groups of dancing apsaras

Blind door and window

Angkorean shrines frequently opened in only one direction, typically to the east. The other three sides featured fake or blind doors to maintain symmetry. Blind windows were often used along otherwise blank walls.

Colonnette

Colonnettes were narrow decorative columns that served as supports for the beams and lintels above doorways or windows. Depending on the period, they were round, rectangular, or octagonal in shape. Colonnettes were often circled with molded rings and decorated with carved leaves.

Corbelling

Angkorian engineers tended to use the corbel arch in order to construct rooms, passageways and openings in buildings. A corbel arch is constructed by adding layers of stones to the walls on either side of an opening, with each successive layer projecting further towards the centre than the one supporting it from below, until the two sides meet in the middle. The corbel arch is structurally weaker than the true arch. The use of corbelling prevented the Angkorian engineers from constructing large openings or spaces in buildings roofed with stone, and made such buildings particularly prone to collapse once they were no longer maintained. These difficulties did not, of course, exist for buildings constructed with stone walls surmounted by a light wooden roof. The problem of preventing the collapse of corbelled structures at Angkor remains a serious one for modern conservation.
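The geometric constraint described above can be made concrete with a little arithmetic: if each course of stone steps a fixed distance toward the centre, the number of courses (and hence the height of the vault) grows in direct proportion to the width of the opening, which is one reason stone-roofed Khmer interiors are narrow and tall. The sketch below is a simplified geometric model, not a structural analysis; the course height and projection values are illustrative assumptions, not measurements from any Angkorian building.

```python
import math

# Simplified geometric model of a corbel arch as described above: each course
# on both sides steps a fixed distance toward the centre until the two sides
# meet. All three input values are illustrative assumptions.

course_height_m = 0.35   # assumed height of one stone course
projection_m = 0.15      # assumed inward projection per course, per side
opening_width_m = 2.4    # assumed clear width of the opening at its base

# Each course narrows the remaining gap by 2 * projection (one step per side).
courses = math.ceil(opening_width_m / (2 * projection_m))
vault_height_m = courses * course_height_m

print(f"Courses needed to close a {opening_width_m} m opening: {courses}")
print(f"Height of the corbelled vault above its springing: {vault_height_m:.2f} m")
# -> 8 courses rising 2.80 m; doubling the span doubles the height, so wide
#    stone-roofed rooms quickly become impractical.
```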
Lintel, pediment, and tympanum

A lintel is a horizontal beam connecting two vertical columns between which runs a door or passageway. Because the Angkorean Khmer lacked the ability to construct a true arch, they constructed their passageways using lintels or corbelling. A pediment is a roughly triangular structure above a lintel. A tympanum is the decorated surface of a pediment. The styles employed by Angkorean artists in the decoration of lintels evolved over time; as a result, the study of lintels has proven a useful guide to the dating of temples. Some scholars have endeavored to develop a periodization of lintel styles. The most beautiful Angkorean lintels are thought to be those of the Preah Ko style from the late 9th century.

Common motifs in the decoration of lintels include the kala, the nāga and the makara, as well as various forms of vegetation. Also frequently depicted are the Hindu gods associated with the four cardinal directions, with the identity of the god depicted on a given lintel or pediment depending on the direction faced by that element. Indra, the god of the sky, is associated with east; Yama, the god of judgment and Hell, with south; Varuna, the god of the ocean, with west; and Kubera, god of wealth, with north.

List of Khmer lintel styles

Sambor Prei Kuk style: Inward-facing makaras with tapering bodies. Four arches joined by three medallions, the central one carved with Indra. A small figure on each makara. A variation replaces the makaras with figures and places a scene with figures below the arch.
Prei Khmeng style: Continuation of Sambor Prei Kuk, but the makaras disappear, being replaced by incurving ends and figures. Arches are more rectilinear. Large figures sometimes appear at each end. A variation is a central scene below the arch, usually Vishnu Reclining.
Kompong Preah style: High-quality carving. Arches are replaced by a garland of vegetation (like a wreath), more or less segmented. Medallions disappear, the central one sometimes replaced by a knot of leaves. Leafy pendants spray out above and below the garland.
Kulen style: Great diversity, with influences from Champa and Java, including the kala and outward-facing makaras.
Preah Ko style: Some of the most beautiful of all Khmer lintels – rich, well-carved and imaginative. A kala in the centre issues a garland on either side. Distinct loops of vegetation curl down from the garland. Outward-facing makaras sometimes appear at the ends. Vishnu on Garuda is common.
Bakheng style: Continuation of Preah Ko, but less fanciful, and the tiny figures disappear. Loops of vegetation below the naga form tight circular coils. The garland begins to dip in the centre.
Koh Ker style: The centre is occupied by a prominent scene, taking up almost the entire height of the lintel. Usually no lower border. The dress of figures shows a curved line to the sampot tucked in below the waist.
Pre Rup style: Tendency to copy earlier styles, especially Preah Ko and Bakheng. Central figures. Reappearance of the lower border.
Banteay Srei style: Increase in complexity and detail. The garland sometimes makes a pronounced loop on either side, with a kala at the top of each loop. Central figure.
Khleang style: Less ornate than Banteay Srei. A central kala with triangular tongue, its hands holding the garland, which is bent at the centre. The kala is sometimes surmounted by a divinity. Loops of garland on either side are divided by a floral stalk and pendant. Vigorous treatment of vegetation.
Baphuon style: The central kala is surmounted by a divinity, usually riding a steed, or by a Vishnu scene, typically from the life of Krishna. The loops of garland are no longer cut. Another type is a scene with many figures and little vegetation.
Angkor Wat style: Centered, framed and linked by garlands. A second type is a narrative scene filled with figures. When nagas appear, their curls are tight and prominent. Dress mirrors that of devatas and apsaras in bas-reliefs. No empty spaces.
Bayon style: Most figures disappear; usually only a kala at the bottom of the lintel, surmounted by a small figure. Mainly Buddhist motifs. In the middle of the period the garland is cut into four parts, while later a series of whorls of foliage replaces the four divisions.

Stairs

Angkorean stairs are notoriously steep. Frequently, the length of the riser exceeds that of the tread, producing an angle of ascent somewhere between 45 and 70 degrees. The reasons for this peculiarity appear to be both religious and monumental. From the religious perspective, a steep stairway can be interpreted as a "stairway to heaven," the realm of the gods. "From the monumental point of view," according to Angkor scholar Maurice Glaize, "the advantage is clear – the square of the base not having to spread in surface area, the entire building rises to its zenith with a particular thrust."

Motifs

Apsara and devata

Apsaras, divine nymphs or celestial dancing girls, are characters from Indian mythology. Their origin is explained in the story of the churning of the Ocean of Milk, or samudra manthan, found in the Vishnu Purana. Other stories in the Mahabharata detail the exploits of individual apsaras, who were often used by the gods as agents to persuade or seduce mythological demons, heroes and ascetics. The widespread use of apsaras as a motif for decorating the walls and pillars of temples and other religious buildings, however, was a Khmer innovation. In modern descriptions of Angkorian temples, the term "apsara" is sometimes used to refer not only to dancers but also to other minor female deities, though minor female deities who are depicted standing rather than dancing are more commonly called "devatas". Apsaras and devatas are ubiquitous at Angkor, but are most common in the foundations of the 12th century. Depictions of true (dancing) apsaras are found, for example, in the Hall of Dancers at Preah Khan, in the pillars that line the passageways through the outer gallery of the Bayon, and in the famous bas-relief of Angkor Wat depicting the churning of the Ocean of Milk. The largest population of devatas (around 2,000) is at Angkor Wat, where they appear individually and in groups.

Dvarapala

Dvarapalas are human or demonic temple guardians, generally armed with lances and clubs. They are presented either as stone statues or as relief carvings in the walls of temples and other buildings, generally close to entrances or passageways. Their function is to protect the temples. Dvarapalas may be seen, for example, at Preah Ko, Lolei, Banteay Srei, Preah Khan and Banteay Kdei.

Gajasimha and Reachisey

The gajasimha is a mythical animal with the body of a lion and the head of an elephant. At Angkor, it is portrayed as a guardian of temples and as a mount for some warriors. The gajasimha may be found at Banteay Srei and at the temples belonging to the Roluos group. The reachisey is another mythical animal, similar to the gajasimha, with the head of a lion, a short elephantine trunk, and the scaly body of a dragon.
It occurs at Angkor Wat in the epic bas-reliefs of the outer gallery.

Garuda
Garuda is a divine being that is part man and part bird. He is the lord of birds, the mythological enemy of nāgas, and the battle steed of Vishnu. Depictions of Garuda at Angkor number in the thousands, and though Indian in inspiration, they exhibit a style that is uniquely Khmer. They may be classified as follows:
As part of a narrative bas-relief, Garuda is shown as the battle steed of Vishnu or Krishna, bearing the god on his shoulders and simultaneously fighting against the god's enemies. Numerous such images of Garuda may be observed in the outer gallery of Angkor Wat.
Garuda serves as an atlas supporting a superstructure, as in the bas-relief at Angkor Wat that depicts heaven and hell. Garudas and stylized mythological lions are the most common atlas figures at Angkor.
Garuda is depicted in the pose of a victor, often dominating a nāga, as in the gigantic relief sculptures on the outer wall of Preah Khan. In this context, Garuda symbolizes the military power of the Khmer kings and their victories over their enemies. Not coincidentally, the city of Preah Khan was built on the site of King Jayavarman VII's victory over invaders from Champa.
In free-standing nāga sculptures, such as in nāga bridges and balustrades, Garuda is often depicted in relief against the fan of nāga heads. The relationship between Garuda and the nāga heads is ambiguous in these sculptures: it may be one of cooperation, or it may again be one of domination of the nāga by Garuda.

Indra
In the ancient religion of the Vedas, Indra the sky-god reigned supreme. In the medieval Hinduism of Angkor, however, he had no religious status and served only as a decorative motif in architecture. Indra is associated with the East; since Angkorian temples typically open to the East, his image is sometimes encountered on lintels and pediments facing that direction. Typically, he is mounted on the three-headed elephant Airavata and holds his trusty weapon, the thunderbolt or vajra. The numerous adventures of Indra documented in the Hindu epic Mahabharata are not depicted at Angkor.

Kala
The kala is a ferocious monster symbolic of time in its all-devouring aspect and associated with the destructive side of the god Shiva. In Khmer temple architecture, the kala serves as a common decorative element on lintels, tympana and walls, where it is depicted as a monstrous head with a large upper jaw lined by large carnivorous teeth, but with no lower jaw. Some kalas are shown disgorging vine-like plants, and some serve as the base for other figures. Scholars have speculated that the origin of the kala as a decorative element in Khmer temple architecture may be found in an earlier period, when the skulls of human victims were incorporated into buildings as a kind of protective magic or apotropaism. Such skulls tended to lose their lower jaws when the ligaments holding them together dried out. Thus, the kalas of Angkor may represent the Khmer civilization's adoption into its decorative iconography of elements derived from long-forgotten primitive antecedents.

Krishna
Scenes from the life of Krishna, a hero and avatar of the god Vishnu, are common in the relief carvings decorating Angkorian temples, but unknown in Angkorian sculpture in the round. The literary sources for these scenes are the Mahabharata, the Harivamsa, and the Bhagavata Purana.
The following are some of the most important Angkorian depictions of the life of Krishna:
A series of bas-reliefs at the 11th-century temple pyramid called Baphuon depicts scenes of the birth and childhood of Krishna.
Numerous bas-reliefs in various temples show Krishna subduing the nāga Kaliya. In Angkorian depictions, Krishna is shown effortlessly stepping on and pushing down his opponent's multiple heads.
Also common is the depiction of Krishna as he lifts Mount Govardhana with one hand in order to provide the cowherds with shelter from the deluge caused by Indra.
Krishna is frequently depicted killing or subduing various demons, including his evil uncle Kamsa. An extensive bas-relief in the outer gallery of Angkor Wat depicts Krishna's battle with the asura Bana. In battle, Krishna is shown riding on the shoulders of Garuda, the traditional mount of Vishnu.
In some scenes, Krishna is depicted in his role as charioteer, advisor and protector of Arjuna, the hero of the Mahabharata. A well-known bas-relief from the 10th-century temple of Banteay Srei depicts Krishna and Arjuna helping Agni to burn down the Khandava forest.

Linga
The linga is a phallic post or cylinder symbolic of the god Shiva and of creative power. As a religious symbol, the function of the linga is primarily that of worship and ritual, and only secondarily that of decoration. In the Khmer empire, certain lingas were erected as symbols of the king himself and were housed in royal temples in order to express the king's consubstantiality with Shiva. The lingas that survive from the Angkorean period are generally made of polished stone. The lingas of the Angkorian period are of several different types:
Some lingas are implanted in a flat square base called a yoni, symbolic of the womb.
On the surface of some lingas is engraved the face of Shiva. Such lingas are called mukhalingas.
Some lingas are segmented into three parts: a square base symbolic of Brahma, an octagonal middle section symbolic of Vishnu, and a round tip symbolic of Shiva.

Makara
A makara is a mythical sea monster with the body of a serpent, the trunk of an elephant, and a head that can have features reminiscent of a lion, a crocodile, or a dragon. In Khmer temple architecture, the motif of the makara is generally part of a decorative carving on a lintel, tympanum, or wall. Often the makara is depicted with some other creature, such as a lion or serpent, emerging from its gaping maw. The makara is a central motif in the design of the famously beautiful lintels of the Roluos group of temples: Preah Ko, Bakong, and Lolei. At Banteay Srei, carvings of makaras disgorging other monsters may be observed on many of the corners of the buildings.

Nāga
Mythical serpents, or nāgas, represent an important motif in Khmer architecture as well as in free-standing sculpture. They are frequently depicted as having multiple heads, always uneven in number, arranged in a fan. Each head has a flared hood, in the manner of a cobra. Nāgas are frequently depicted in Angkorian lintels. The composition of such lintels characteristically consists of a dominant image at the center of a rectangle, from which issue swirling elements that reach to the far ends of the rectangle. These swirling elements may take shape as either vinelike vegetation or as the bodies of nāgas. Some such nāgas are depicted wearing crowns, and others are depicted serving as mounts for human riders.
To the Angkorian Khmer, nāgas were symbols of water and figured in the myths of origin for the Khmer people, who were said to be descended from the union of an Indian Brahman and a serpent princess from Cambodia. Nāgas were also characters in other well-known legends and stories depicted in Khmer art, such as the churning of the Ocean of Milk, the legend of the Leper King as depicted in the bas-reliefs of the Bayon, and the story of Mucalinda, the serpent king who protected the Buddha from the elements.

Nāga Bridge
Nāga bridges are causeways or true bridges lined by stone balustrades shaped as nāgas. In some Angkorian nāga bridges, as for example those located at the entrances to the 12th-century city of Angkor Thom, the nāga-shaped balustrades are supported not by simple posts but by stone statues of gigantic warriors. These giants are the devas and asuras who used the nāga king Vasuki in order to churn the Ocean of Milk in quest of the amrita, or elixir of immortality. The story of the Churning of the Ocean of Milk, or samudra manthan, has its source in Indian mythology.

Quincunx
A quincunx is a spatial arrangement of five elements, with four elements placed at the corners of a square and the fifth placed in the center. The five peaks of Mount Meru were taken to exhibit this arrangement, and Khmer temples were arranged accordingly in order to convey a symbolic identification with the sacred mountain. The five brick towers of the 10th-century temple known as East Mebon, for example, are arranged in the shape of a quincunx. The quincunx also appears elsewhere in designs of the Angkorian period, as in the riverbed carvings of Kbal Spean.

Shiva
Most temples at Angkor are dedicated to Shiva. In general, the Angkorian Khmer represented and worshipped Shiva in the form of a lingam, though they also fashioned anthropomorphic statues of the god. Anthropomorphic representations are also found in Angkorian bas-reliefs. A famous tympanum from Banteay Srei depicts Shiva sitting on Mount Kailasa with his consort, while the demon king Ravana shakes the mountain from below. At Angkor Wat and Bayon, Shiva is depicted as a bearded ascetic. His attributes include the mystical eye in the middle of his forehead, the trident, and the rosary. His vahana, or mount, is the bull Nandi.

Vishnu
Angkorian representations of Vishnu include anthropomorphic representations of the god himself, as well as representations of his incarnations, or avatars, especially Rama and Krishna. Depictions of Vishnu are prominent at Angkor Wat, the 12th-century temple that was originally dedicated to Vishnu. Bas-reliefs depict Vishnu battling against asura opponents, or riding on the shoulders of his vahana, or mount, the gigantic eagle-man Garuda. Vishnu's attributes include the discus, the conch shell, the baton, and the orb.

Ordinary housing
The nuclear family, in rural Cambodia, typically lives in a rectangular house that may vary in size from four by six meters to six by ten meters. It is constructed of a wooden frame with a gabled thatch roof and walls of woven bamboo. Khmer houses are typically raised on stilts as much as three meters for protection from annual floods. Two ladders or wooden staircases provide access to the house. The steep thatch roof overhanging the house walls protects the interior from rain. Typically a house contains three rooms separated by partitions of woven bamboo.
The front room serves as a living room used to receive visitors, the next room is the parents' bedroom, and the third is for unmarried daughters. Sons sleep anywhere they can find space. Family members and neighbors work together to build the house, and a house-raising ceremony is held upon its completion. The houses of poorer persons may contain only a single large room. Food is prepared in a separate kitchen located near the house but usually behind it. Toilet facilities consist of simple pits in the ground, located away from the house, that are covered up when filled. Any livestock is kept below the house. Chinese and Vietnamese houses in Cambodian towns and villages are typically built directly on the ground and have earthen, cement, or tile floors, depending upon the economic status of the owner. Urban housing and commercial buildings may be of brick, masonry, or wood.

See also
New Khmer Architecture
Rural Khmer house
Khmer sculpture
Indian influence:
Influence of Indian Hindu temple architecture on Southeast Asia
History of Indian influence on Southeast Asia

Footnotes

References
Coedès, George. Pour mieux comprendre Angkor. Hanoi: Imprimerie d'Extrême-Orient, 1943.
Forbes, Andrew; Henley, David. Angkor, Eighth Wonder of the World. Chiang Mai: Cognoscenti Books, 2011.
Freeman, Michael and Jacques, Claude. Ancient Angkor. Bangkok: River Books, 1999.
Glaize, Maurice. The Monuments of the Angkor Group. 1944. A translation from the original French into English is available online at theangkorguide.com.
Jessup, Helen Ibbitson. Art & Architecture of Cambodia. London: Thames & Hudson, 2004.
Ngô Vǎn Doanh. Champa: Ancient Towers. Hanoi: The Gioi Publishers, 2006.
Roveda, Vittorio. Images of the Gods: Khmer Mythology in Cambodia, Laos & Thailand. Bangkok: River Books, 2005.
Sthapatyakam. The Architecture of Cambodia. Phnom Penh: Department of Media and Communication, Royal University of Phnom Penh, 2012.

External links
"Churning the Sea of Time" Film

Khmer Empire
Hindu temple architecture
Buddhist temples
Archaeological sites in Cambodia
2,415
5,438
https://en.wikipedia.org/wiki/Capricorn
Capricorn
Capricorn (pl. capricorni or capricorns) may refer to:

Places
Capricorn and Bunker Group, islands of the southern Great Barrier Reef, Australia
Capricorn District Municipality, Limpopo province, South Africa

Animals
Capricorn, an animal from the ibex family, particularly the Alpine ibex
Capricornis, a genus of goat-like or antelope-like animals

Astronomy and astrology
Capricornus, one of the constellations of the zodiac
Capricorn (astrology)

Arts, entertainment, and media

Fictional characters
Capricorn (comics), several Marvel Comics characters
Capricorn (Inkworld), Inkheart character

Music

Groups and labels
Capricorn Records, an American record label active 1969–1979

Albums
Capricorn (Jay Chou album), 2008
Capricorn (Trevor Powers album), 2020
Capricorn (Mike Tramp album), 1997
"Capricorn (A Brand New Name)", a 2002 single by 30 Seconds to Mars from their self-titled album

Songs
"Capricorn", a song by IQ from their 1997 concept album Subterranea
"Capricorn", a song by Barclay James Harvest from the album Eyes of the Universe
"Capricorn", a song by Motörhead from the album Overkill

Other uses in arts, entertainment, and media
Capricorn (manga), a 1988 manga series created by Johji Manabe
Capricorn One, a 1978 thriller film

Brands and enterprises
Capricorn (microprocessor), a family of microprocessors used in the HP series 80 scientific microcomputers
Capricorn, one of the names for the Virgin Atlantic GlobalFlyer
Capricorn Investment Holdings, an umbrella for the Capricorn group of companies

Other uses
Capricorn, a ship that on January 28, 1980 collided with and sank the USCGC Blackthorn (WLB-391)
Capricorn Africa Society, a pressure group in British African colonies

See also
Caprica (disambiguation), various meanings in the science fiction franchise Battlestar Galactica
Capricornia (disambiguation)
Tropic of Capricorn (disambiguation)
2,416
5,447
https://en.wikipedia.org/wiki/Cameroon
Cameroon
Cameroon, officially the Republic of Cameroon, is a country in west-central Africa. It is bordered by Nigeria to the west and north; Chad to the northeast; the Central African Republic to the east; and Equatorial Guinea, Gabon and the Republic of the Congo to the south. Its coastline lies on the Bight of Biafra, part of the Gulf of Guinea and the Atlantic Ocean. Due to its strategic position at the crossroads between West Africa and Central Africa, it has been categorized as being in both camps. Its nearly 27 million people speak 250 native languages.

Early inhabitants of the territory included the Sao civilisation around Lake Chad and the Baka hunter-gatherers in the southeastern rainforest. Portuguese explorers reached the coast in the 15th century and named the area Rio dos Camarões (Shrimp River), which became Cameroon in English. Fulani soldiers founded the Adamawa Emirate in the north in the 19th century, and various ethnic groups of the west and northwest established powerful chiefdoms and fondoms. Cameroon became a German colony in 1884, known as Kamerun. After World War I, it was divided between France and the United Kingdom as League of Nations mandates. The Union des Populations du Cameroun (UPC) political party advocated independence but was outlawed by France in the 1950s, leading to a national liberation insurgency fought between French and UPC militant forces until early 1971. In 1960, the French-administered part of Cameroon became independent, as the Republic of Cameroun, under President Ahmadou Ahidjo. The southern part of British Cameroons federated with it in 1961 to form the Federal Republic of Cameroon. The federation was abandoned in 1972, when the country was renamed the United Republic of Cameroon; it was renamed back to the Republic of Cameroon in 1984 by a presidential decree of President Paul Biya. Biya, the incumbent president, has led the country since 1982, following Ahidjo's resignation; he previously served as prime minister from 1975 to 1982. Cameroon is governed as a unitary presidential republic.

The official languages of Cameroon are French and English, the official languages of the former French Cameroons and British Cameroons. Its religious population is predominantly Christian, with a significant minority practising Islam and others following traditional faiths. The country has experienced tensions in its English-speaking territories, where politicians have advocated for greater decentralisation and even complete separation or independence (as in the Southern Cameroons National Council). In 2017, tensions over the creation of an Ambazonian state in the English-speaking territories escalated into open warfare.

Large numbers of Cameroonians live as subsistence farmers. The country is often referred to as "Africa in miniature" for its geological, linguistic and cultural diversity. Its natural features include beaches, deserts, mountains, rainforests, and savannas. Its highest point, at almost 4,100 metres, is Mount Cameroon in the Southwest Region. Its most populous cities are Douala on the Wouri River, its economic capital and main seaport; Yaoundé, its political capital; and Garoua. Limbe in the Southwest has a natural seaport. Cameroon is well known for its native music styles, particularly Makossa, Njang and Bikutsi, and for its successful national football team.
It is a member state of the African Union, the United Nations, the Organisation Internationale de la Francophonie (OIF), the Commonwealth of Nations, the Non-Aligned Movement and the Organisation of Islamic Cooperation.

Etymology
Originally, Cameroon was the exonym given by the Portuguese to the Wouri River, which they called Rio dos Camarões, meaning "river of shrimps" or "shrimp river", referring to the then-abundant Cameroon ghost shrimp. Today the country's name in Portuguese remains Camarões.

History

Early history
Present-day Cameroon was first settled in the Neolithic Era. The longest continuous inhabitants are groups such as the Baka (Pygmies). From there, Bantu migrations into eastern, southern and central Africa are believed to have occurred about 2,000 years ago. The Sao culture arose around Lake Chad, c. 500 AD, and gave way to the Kanem and its successor state, the Bornu Empire. Kingdoms, fondoms, and chiefdoms arose in the west.

Portuguese sailors reached the coast in 1472. They noted an abundance of the ghost shrimp Lepidophthalmus turneranus in the Wouri River and named it Rio dos Camarões (Shrimp River), which became Cameroon in English. Over the following few centuries, European interests regularised trade with the coastal peoples, and Christian missionaries pushed inland.

In 1896, Sultan Ibrahim Njoya created the Bamum script, or Shu Mom, for the Bamum language. It is taught in Cameroon today by the Bamum Scripts and Archives Project.

German rule
Germany began to establish roots in Cameroon in 1868, when the Woermann Company of Hamburg built a warehouse on the estuary of the Wouri River. Later, Gustav Nachtigal made a treaty with one of the local kings to annex the region for the German emperor. The German Empire claimed the territory as the colony of Kamerun in 1884 and began a steady push inland, which the natives resisted. Under the aegis of Germany, commercial companies acted as the local administrations. These concessions used forced labour to run profitable banana, rubber, palm oil, and cocoa plantations. Even infrastructure projects relied on a regime of forced labour. This economic policy was much criticised by the other colonial powers.

French and British rule
With the defeat of Germany in World War I, Kamerun became a League of Nations mandate territory and was split into French Cameroon and British Cameroon in 1919. France integrated the economy of Cameroon with that of France and improved the infrastructure with capital investments and skilled workers, modifying the colonial system of forced labour.

The British administered their territory from neighbouring Nigeria. Natives complained that this made them a neglected "colony of a colony". Nigerian migrant workers flocked to Southern Cameroons, ending forced labour altogether but angering the local natives, who felt swamped. The League of Nations mandates were converted into United Nations Trusteeships in 1946, and the question of independence became a pressing issue in French Cameroon.

France outlawed the pro-independence political party, the Union of the Peoples of Cameroon (Union des Populations du Cameroun; UPC), on 13 July 1955. This prompted a long guerrilla war waged by the UPC and the assassination of several of the party's leaders, including Ruben Um Nyobè, Félix-Roland Moumié and Ernest Ouandié. In the British Cameroons, the question was whether to reunify with French Cameroon or join Nigeria; the British ruled out the option of independence.
Independence
On 1 January 1960, French Cameroun gained independence from France under President Ahmadou Ahidjo. On 1 October 1961, the formerly British Southern Cameroons gained independence from the United Kingdom by a vote of the UN General Assembly and joined with French Cameroun to form the Federal Republic of Cameroon, a date which is now observed as Unification Day, a public holiday.

Ahidjo used the ongoing war with the UPC to concentrate power in the presidency, continuing to do so even after the suppression of the UPC in 1971. His political party, the Cameroon National Union (CNU), became the sole legal political party on 1 September 1966, and on 20 May 1972 a referendum was passed to abolish the federal system of government in favour of a United Republic of Cameroon, headed from Yaoundé. This day is now the country's National Day, a public holiday. Ahidjo pursued an economic policy of planned liberalism, prioritising cash crops and petroleum development. The government used oil money to create a national cash reserve, pay farmers, and finance major development projects; however, many initiatives failed when Ahidjo appointed unqualified allies to direct them. The national flag was changed on 20 May 1975: two stars were removed and replaced with a large central star as a symbol of national unity.

Ahidjo stepped down on 4 November 1982 and left power to his constitutional successor, Paul Biya. However, Ahidjo remained in control of the CNU and tried to run the country from behind the scenes until Biya and his allies pressured him into resigning. Biya began his administration by moving toward a more democratic government, but a failed coup d'état nudged him toward the leadership style of his predecessor.

An economic crisis took effect from the mid-1980s to the late 1990s as a result of international economic conditions, drought, falling petroleum prices, and years of corruption, mismanagement, and cronyism. Cameroon turned to foreign aid, cut government spending, and privatised industries. With the reintroduction of multi-party politics in December 1990, pressure groups in the former British Southern Cameroons called for greater autonomy, and the Southern Cameroons National Council advocated complete secession as the Republic of Ambazonia. The 1992 Labour Code of Cameroon gives workers the freedom to join a trade union or not to join any at all; since each occupation has more than one trade union, a worker may choose which union in his occupation to join.

In June 2006, talks concerning a territorial dispute over the Bakassi peninsula were resolved. The talks involved President Paul Biya of Cameroon, then President Olusegun Obasanjo of Nigeria and then UN Secretary-General Kofi Annan, and resulted in Cameroonian control of the oil-rich peninsula. The northern portion of the territory was formally handed over to the Cameroonian government in August 2006, and the remainder of the peninsula was left to Cameroon two years later, in 2008. The boundary change triggered a local separatist insurgency, as many Bakassians refused to accept Cameroonian rule. While most militants laid down their arms in November 2009, some carried on fighting for years.

In February 2008, Cameroon experienced its worst violence in 15 years when a transport union strike in Douala escalated into violent protests in 31 municipal areas.
In May 2014, in the wake of the Chibok schoolgirls kidnapping, Presidents Paul Biya of Cameroon and Idriss Déby of Chad announced they were waging war on Boko Haram and deployed troops to the Nigerian border. Boko Haram launched several attacks into Cameroon, killing 84 civilians in a December 2014 raid but suffering a heavy defeat in a raid in January 2015. Cameroon declared victory over Boko Haram on Cameroonian territory in September 2018.

Since November 2016, protesters from the predominantly English-speaking Northwest and Southwest regions of the country have been campaigning for continued use of the English language in schools and courts. People were killed and hundreds jailed as a result of these protests. In 2017, Biya's government blocked the regions' access to the Internet for three months. In September, separatists started a guerrilla war for the independence of the Anglophone region as the Federal Republic of Ambazonia. The government responded with a military offensive, and the insurgency spread across the Northwest and Southwest regions. Fighting between separatist guerrillas and government forces continues. During 2020, numerous terrorist attacks, many of them carried out without claims of credit, and government reprisals led to bloodshed throughout the country. Since 2016, more than 450,000 people have fled their homes. The conflict indirectly led to an upsurge in Boko Haram attacks, as the Cameroonian military largely withdrew from the north to focus on fighting the Ambazonian separatists.

More than 30,000 people in northern Cameroon fled to Chad after ethnic clashes over access to water between Musgum fishermen and ethnic Arab Choa herders in December 2021.

Politics and government
The President of Cameroon is elected and creates policy, administers government agencies, commands the armed forces, negotiates and ratifies treaties, and declares a state of emergency. The president appoints government officials at all levels, from the prime minister (considered the official head of government) to the provincial governors and divisional officers. The president is selected by popular vote every seven years. There have been two presidents since the independence of Cameroon.

The National Assembly makes legislation. The body consists of 180 members who are elected for five-year terms and meet three times per year. Laws are passed on a majority vote. The 1996 constitution establishes a second house of parliament, the 100-seat Senate. The government recognises the authority of traditional chiefs, fons, and lamibe to govern at the local level and to resolve disputes as long as such rulings do not conflict with national law.

Cameroon's legal system is a mixture of civil law, common law, and customary law. Although nominally independent, the judiciary falls under the authority of the executive's Ministry of Justice. The president appoints judges at all levels. The judiciary is officially divided into tribunals, the court of appeal, and the supreme court. The National Assembly elects the members of a nine-member High Court of Justice that judges high-ranking members of government in the event they are charged with high treason or harming national security.

Political culture
Cameroon is viewed as rife with corruption at all levels of government. In 1997, Cameroon established anti-corruption bureaus in 29 ministries, but only 25% became operational, and in 2012, Transparency International placed Cameroon at number 144 on a list of 176 countries ranked from least to most corrupt.
On 18 January 2006, Biya initiated an anti-corruption drive under the direction of the National Anti-Corruption Observatory. There are several high-corruption-risk areas in Cameroon, for instance customs, the public health sector and public procurement. However, corruption has worsened despite the existing anti-corruption bureaus: Transparency International ranked Cameroon 152nd on a list of 180 countries in 2018.

President Biya's Cameroon People's Democratic Movement (CPDM) was the only legal political party until December 1990. Numerous regional political groups have since formed. The primary opposition is the Social Democratic Front (SDF), based largely in the Anglophone region of the country and headed by John Fru Ndi.

Biya and his party have maintained control of the presidency and the National Assembly in national elections, which rivals contend were unfair. Human rights organisations allege that the government suppresses the freedoms of opposition groups by preventing demonstrations, disrupting meetings, and arresting opposition leaders and journalists. In particular, English-speaking people are discriminated against; protests often escalate into violent clashes and killings. In 2017, President Biya shut down the Internet in the English-speaking region for 94 days, hampering five million people, including Silicon Mountain startups.

Freedom House ranks Cameroon as "not free" in terms of political rights and civil liberties. The last parliamentary elections were held on 9 February 2020.

Foreign relations
Cameroon is a member of both the Commonwealth of Nations and La Francophonie. Its foreign policy closely follows that of its main ally, France (one of its former colonial rulers). Cameroon relies heavily on France for its defence, although military spending is high in comparison to other sectors of government.

President Biya has engaged in a decades-long clash with the government of Nigeria over possession of the oil-rich Bakassi peninsula. Cameroon and Nigeria share a 1,000-mile (1,600 km) border and have disputed the sovereignty of the Bakassi peninsula. In 1994, Cameroon petitioned the International Court of Justice to resolve the dispute. The two countries attempted to establish a cease-fire in 1996; however, fighting continued for years. In 2002, the ICJ ruled that the Anglo-German Agreement of 1913 gave sovereignty to Cameroon. The ruling called for a withdrawal by both countries and denied the request by Cameroon for compensation due to Nigeria's long-term occupation. By 2004, Nigeria had failed to meet the deadline to hand over the peninsula. A UN-mediated summit in June 2006 facilitated an agreement for Nigeria to withdraw from the region, and both leaders signed the Greentree Agreement. The withdrawal and handover of control was completed by August 2006.

In July 2019, UN ambassadors of 37 countries, including Cameroon, signed a joint letter to the UNHRC defending China's treatment of Uyghurs in the Xinjiang region.

Military
The Cameroon Armed Forces (French: Forces armées camerounaises, FAC) consist of the country's army (Armée de Terre), its navy (Marine Nationale de la République, MNR, which includes naval infantry), the Cameroonian Air Force (Armée de l'Air du Cameroun, AAC), and the Gendarmerie. Men and women aged 18 to 23 who have graduated from high school are eligible for military service. Those who join are obliged to complete four years of service.
There is no conscription in Cameroon, but the government makes periodic calls for volunteers.

Human rights
Human rights organisations accuse police and military forces of mistreating and even torturing criminal suspects, ethnic minorities, homosexuals, and political activists. United Nations figures indicate that more than 21,000 people have fled to neighbouring countries, while 160,000 have been internally displaced by the violence, many reportedly hiding in forests. Prisons are overcrowded, with little access to adequate food and medical facilities, and prisons run by traditional rulers in the north are charged with holding political opponents at the behest of the government. However, since the first decade of the 21st century, an increasing number of police and gendarmes have been prosecuted for improper conduct.

On 25 July 2018, UN High Commissioner for Human Rights Zeid Ra'ad Al Hussein expressed deep concern about reports of violations and abuses in the English-speaking Northwest and Southwest regions of Cameroon. Same-sex sexual acts are banned by section 347-1 of the penal code, with a penalty of six months to five years' imprisonment.

Since December 2020, Human Rights Watch has claimed that the Islamist armed group Boko Haram has stepped up attacks and killed at least 80 civilians in towns and villages in the Far North region of Cameroon.

Administrative divisions
The constitution divides Cameroon into 10 semi-autonomous regions, each under the administration of an elected Regional Council. Each region is headed by a presidentially appointed governor. These leaders are charged with implementing the will of the president, reporting on the general mood and conditions of the regions, administering the civil service, keeping the peace, and overseeing the heads of the smaller administrative units. Governors have broad powers: they may order propaganda in their area and call in the army, gendarmes, and police. All local government officials are employees of the central government's Ministry of Territorial Administration, from which local governments also get most of their budgets.

The regions are subdivided into 58 divisions (French: départements). These are headed by presidentially appointed divisional officers (préfets). The divisions are further split into sub-divisions (arrondissements), headed by assistant divisional officers (sous-préfets). The districts, administered by district heads (chefs de district), are the smallest administrative units.

The three northernmost regions are the Far North (Extrême-Nord), North (Nord), and Adamawa (Adamaoua). Directly south of them are the Centre and East (Est). The South Region (Sud) lies on the Gulf of Guinea and the southern border. Cameroon's western region is split into four smaller regions: the Littoral and South-West (Sud-Ouest) regions are on the coast, and the North-West (Nord-Ouest) and West (Ouest) regions are in the western grassfields.

Geography
At 475,442 square kilometres (183,569 sq mi), Cameroon is the world's 53rd-largest country. The country is located in Central and West Africa, known as the hinge of Africa, on the Bight of Bonny, part of the Gulf of Guinea and the Atlantic Ocean. Cameroon lies between latitudes 1° and 13°N, and longitudes 8° and 17°E. Cameroon controls 12 nautical miles of the Atlantic Ocean.

Tourist literature describes Cameroon as "Africa in miniature" because it exhibits all major climates and vegetation of the continent: coast, desert, mountains, rainforest, and savanna.
The country's neighbours are Nigeria and the Atlantic Ocean to the west; Chad to the northeast; the Central African Republic to the east; and Equatorial Guinea, Gabon and the Republic of the Congo to the south.

Cameroon is divided into five major geographic zones distinguished by dominant physical, climatic, and vegetative features. The coastal plain extends inland from the Gulf of Guinea and has a low average elevation. Exceedingly hot and humid with a short dry season, this belt is densely forested and includes some of the wettest places on earth, part of the Cross-Sanaga-Bioko coastal forests.

The South Cameroon Plateau rises from the coastal plain. Equatorial rainforest dominates this region, although its alternation between wet and dry seasons makes it less humid than the coast. This area is part of the Atlantic Equatorial coastal forests ecoregion.

An irregular chain of mountains, hills, and plateaus known as the Cameroon range extends from Mount Cameroon on the coast (Cameroon's highest point, at almost 4,100 metres) almost to Lake Chad at Cameroon's northern border at 13°05'N. This region has a mild climate, particularly on the Western High Plateau, although rainfall is high. Its soils are among Cameroon's most fertile, especially around volcanic Mount Cameroon. Volcanism here has created crater lakes. On 21 August 1986, one of these, Lake Nyos, belched carbon dioxide and killed between 1,700 and 2,000 people. This area has been delineated by the World Wildlife Fund as the Cameroonian Highlands forests ecoregion.

The southern plateau rises northward to the grassy, rugged Adamawa Plateau. This feature stretches from the western mountain area and forms a barrier between the country's north and south. Rainfall is high between April and October, peaking in July and August. The northern lowland region extends from the edge of the Adamawa to Lake Chad. Its characteristic vegetation is savanna scrub and grass. This is an arid region with sparse rainfall and high median temperatures.

Cameroon has four patterns of drainage. In the south, the principal rivers are the Ntem, Nyong, Sanaga, and Wouri. These flow southwestward or westward directly into the Gulf of Guinea. The Dja and Kadéï drain southeastward into the Congo River. In northern Cameroon, the Bénoué River runs north and west and empties into the Niger. The Logone flows northward into Lake Chad, which Cameroon shares with three neighbouring countries.

Economy and infrastructure
Cameroon's per capita GDP (purchasing power parity) was estimated at US$3,700 in 2017. Major export markets include the Netherlands, France, China, Belgium, Italy, Algeria, and Malaysia. Cameroon has had a decade of strong economic performance, with GDP growing at an average of 4% per year. During the 2004–2008 period, public debt was reduced from over 60% of GDP to 10%, and official reserves quadrupled to over US$3 billion. Cameroon is part of the Bank of Central African States (of which it is the dominant economy), the Customs and Economic Union of Central Africa (UDEAC) and the Organization for the Harmonization of Business Law in Africa (OHADA). Its currency is the CFA franc.

Unemployment was estimated at 3.38% in 2019, and 23.8% of the population was living below the international poverty threshold of US$1.90 a day in 2014.
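The growth and reserve figures above are straightforward compound-interest arithmetic. The following Python sketch is illustrative only: it assumes exactly 4% annual growth sustained for ten years and an exact quadrupling of reserves over the four years 2004–2008, simplifications of the estimates quoted above.

```python
# Compound-growth arithmetic for the figures quoted above.
# Assumptions (simplified from the text): GDP grows 4% per year for a
# decade; official reserves quadrupled over 2004-2008 (4 years).

def compound_factor(rate: float, years: int) -> float:
    """Total growth factor after `years` of constant annual `rate`."""
    return (1 + rate) ** years

# A decade of 4% annual growth expands GDP by about 48%.
gdp_factor = compound_factor(0.04, 10)
print(f"GDP after 10 years at 4%/yr: x{gdp_factor:.2f}")  # ~1.48

# Quadrupling reserves over 4 years implies roughly 41% annual growth:
# (1 + r)^4 = 4  =>  r = 4**(1/4) - 1
implied_rate = 4 ** (1 / 4) - 1
print(f"Implied annual reserve growth, 2004-2008: {implied_rate:.1%}")  # ~41.4%
```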
Since the late 1980s, Cameroon has been following programmes advocated by the World Bank and International Monetary Fund (IMF) to reduce poverty, privatise industries, and increase economic growth. The government has taken measures to encourage tourism in the country.

An estimated 70% of the population farms, and agriculture comprised an estimated 16.7% of GDP in 2017. Most agriculture is done at the subsistence scale by local farmers using simple tools. They sell their surplus produce, and some maintain separate fields for commercial use. Urban centres are particularly reliant on peasant agriculture for their foodstuffs. Soils and climate on the coast encourage extensive commercial cultivation of bananas, cocoa, oil palms, rubber, and tea. Inland on the South Cameroon Plateau, cash crops include coffee, sugar, and tobacco. Coffee is a major cash crop in the western highlands, and in the north, natural conditions favour crops such as cotton, groundnuts, and rice. Production of Fairtrade cotton was initiated in Cameroon in 2004. Livestock are raised throughout the country. Fishing employs 5,000 people and provides over 100,000 tons of seafood each year. Bushmeat, long a staple food for rural Cameroonians, is today a delicacy in the country's urban centres. The commercial bushmeat trade has now surpassed deforestation as the main threat to wildlife in Cameroon.

The southern rainforest has vast timber reserves, estimated to cover 37% of Cameroon's total land area. However, large areas of the forest are difficult to reach. Logging, largely handled by foreign-owned firms, provides the government US$60 million a year in taxes, and laws mandate the safe and sustainable exploitation of timber. Nevertheless, in practice, the industry is one of the least regulated in Cameroon.

Factory-based industry accounted for an estimated 26.5% of GDP in 2017. More than 75% of Cameroon's industrial strength is located in Douala and Bonabéri. Cameroon possesses substantial mineral resources, but these are not extensively mined (see Mining in Cameroon). Petroleum exploitation has fallen since 1986, but this is still a substantial sector, such that dips in prices have a strong effect on the economy. Rapids and waterfalls obstruct the southern rivers, but these sites offer opportunities for hydroelectric development and supply most of Cameroon's energy. The Sanaga River powers the largest hydroelectric station, located at Edéa. The rest of Cameroon's energy comes from oil-powered thermal engines. Much of the country remains without reliable power supplies.

Transport in Cameroon is often difficult. Only 6.6% of the roadways are tarred. Roadblocks often serve little other purpose than to allow police and gendarmes to collect bribes from travellers. Road banditry has long hampered transport along the eastern and western borders, and since 2005, the problem has intensified in the east as the Central African Republic has further destabilised. Intercity bus services run by multiple private companies connect all major cities. They are the most popular means of transportation, followed by the rail service Camrail. Rail service runs from Kumba in the west to Bélabo in the east and north to Ngaoundéré. International airports are located in Douala and Yaoundé, with a third under construction in Maroua. Douala is the country's principal seaport. In the north, the Bénoué River is seasonally navigable from Garoua across into Nigeria.
Although press freedoms have improved since the first decade of the 21st century, the press is corrupt and beholden to special interests and political groups. Newspapers routinely self-censor to avoid government reprisals. The major radio and television stations are state-run, and other communications, such as land-based telephones and telegraphs, are largely under government control. However, cell phone networks and Internet providers have increased dramatically since the first decade of the 21st century and are largely unregulated.

Demographics
The population of Cameroon is nearly 27 million. The life expectancy was 62.3 years (60.6 years for males and 64 years for females).

Cameroon has slightly more women (50.5%) than men (49.5%). Over 60% of the population is under age 25. People over 65 years of age account for only 3.11% of the total population. Cameroon's population is almost evenly divided between urban and rural dwellers. Population density is highest in the large urban centres, the western highlands, and the northeastern plain. Douala, Yaoundé, and Garoua are the largest cities. In contrast, the Adamawa Plateau, the southeastern Bénoué depression, and most of the South Cameroon Plateau are sparsely populated.

According to the World Health Organization, the fertility rate was 4.8 in 2013, with a population growth rate of 2.56%. People from the overpopulated western highlands and the underdeveloped north are moving to the coastal plantation zone and urban centres for employment. Smaller movements are occurring as workers seek employment in lumber mills and plantations in the south and east. Although the national sex ratio is relatively even, these out-migrants are primarily males, which leads to unbalanced ratios in some regions.

Both monogamous and polygamous marriage are practised, and the average Cameroonian family is large and extended. In the north, women tend to the home, and men herd cattle or work as farmers. In the south, women grow the family's food, and men provide meat and grow cash crops. Cameroonian society is male-dominated, and violence and discrimination against women is common.

The number of distinct ethnic and linguistic groups in Cameroon is estimated to be between 230 and 282. The Adamawa Plateau broadly bisects these into northern and southern divisions. The northern peoples are Sudanic groups, who live in the central highlands and the northern lowlands, and the Fulani, who are spread throughout northern Cameroon. A small number of Shuwa Arabs live near Lake Chad. Southern Cameroon is inhabited by speakers of Bantu and Semi-Bantu languages. Bantu-speaking groups inhabit the coastal and equatorial zones, while speakers of Semi-Bantu languages live in the western grassfields. Some 5,000 Gyele and Baka Pygmy peoples roam the southeastern and coastal rainforests or live in small, roadside settlements. Nigerians make up the largest group of foreign nationals.

Refugees
In 2007, Cameroon hosted approximately 97,400 refugees and asylum seekers. Of these, 49,300 were from the Central African Republic (many driven west by war), 41,600 from Chad, and 2,900 from Nigeria. Kidnappings of Cameroonian citizens by Central African bandits have increased since 2005. In the first months of 2014, thousands of refugees fleeing the violence in the Central African Republic arrived in Cameroon.

Languages
Both English and French are official languages, although French is by far the most understood language (more than 80%).
German, the language of the original colonisers, has long since been displaced by French and English. Cameroonian Pidgin English is the lingua franca in the formerly British-administered territories. A mixture of English, French, and Pidgin called Camfranglais has been gaining popularity in urban centres since the mid-1970s. In addition to the colonial languages, there are approximately 250 other languages spoken by nearly 20 million Cameroonians. It is because of this that Cameroon is considered one of the most linguistically diverse countries in the world.

In 2017, there were language protests by the anglophone population against perceived oppression by francophone speakers. The military was deployed against the protesters; people were killed, hundreds were imprisoned, and thousands fled the country. This culminated in the declaration of an independent Republic of Ambazonia, and the conflict has since evolved into the Anglophone Crisis. It is estimated that by June 2020, 740,000 people had been internally displaced as a result of this crisis.

Religion
Cameroon has a high level of religious freedom and diversity. The predominant faith is Christianity, practised by about two-thirds of the population, while Islam is a significant minority faith, adhered to by about one-fourth. In addition, traditional faiths are practised by many. Muslims are most concentrated in the north, while Christians are concentrated primarily in the southern and western regions, but practitioners of both faiths can be found throughout the country. Large cities have significant populations of both groups. Muslims in Cameroon are divided into Sufis, Salafis, Shias, and non-denominational Muslims.

People from the North-West and South-West provinces, which used to be a part of British Cameroons, have the highest proportion of Protestants. The French-speaking southern and western regions are largely Catholic. Southern ethnic groups predominantly follow Christian or traditional African animist beliefs, or a syncretic combination of the two. People widely believe in witchcraft, and the government outlaws such practices. Suspected witches are often subject to mob violence. The Islamist jihadist group Ansar al-Islam has been reported as operating in North Cameroon.

In the northern regions, the locally dominant Fulani ethnic group is mostly Muslim, but the overall population is fairly evenly divided among Muslims, Christians, and followers of indigenous religious beliefs (called Kirdi, "pagan", by the Fulani). The Bamum ethnic group of the West Region is largely Muslim. Native traditional religions are practised in rural areas throughout the country but rarely are practised publicly in cities, in part because many indigenous religious groups are intrinsically local in character.

Education and health
In 2013, the total adult literacy rate of Cameroon was estimated to be 71.3%. Among youths aged 15–24, the literacy rate was 85.4% for males and 76.4% for females. Most children have access to state-run schools that are cheaper than private and religious facilities. The educational system is a mixture of British and French precedents, with most instruction in English or French. Cameroon has one of the highest school attendance rates in Africa. Girls attend school less regularly than boys do because of cultural attitudes, domestic duties, early marriage, pregnancy, and sexual harassment.
Although attendance rates are higher in the south, a disproportionate number of teachers are stationed there, leaving northern schools chronically understaffed. In 2013, the primary school enrollment rate was 93.5%.

School attendance in Cameroon is also affected by child labour. The United States Department of Labor's Findings on the Worst Forms of Child Labor reported that 56% of children aged 5 to 14 were working children and that almost 53% of children aged 7 to 14 combined work and school. In December 2014, the List of Goods Produced by Child Labor or Forced Labor issued by the Bureau of International Labor Affairs mentioned Cameroon among the countries that resorted to child labour in the production of cocoa.

The quality of health care is generally low. Life expectancy at birth was estimated at 56 years in 2012, with 48 healthy life years expected. The fertility rate remains high in Cameroon, with an average of 4.8 births per woman and an average maternal age of 19.7 years at first birth. In Cameroon, there is only one doctor for every 5,000 people, according to the World Health Organization. In 2014, just 4.1% of total GDP expenditure was allocated to healthcare. Because of financial cuts in the health care system, there are few medical professionals: doctors and nurses trained in Cameroon often emigrate because pay is poor while the workload is high. Nurses are unemployed even though their help is needed; some help out voluntarily so they will not lose their skills. Outside the major cities, facilities are often dirty and poorly equipped.

In 2012, the top three deadly diseases were HIV/AIDS, lower respiratory tract infection, and diarrheal diseases. Endemic diseases include dengue fever, filariasis, leishmaniasis, malaria, meningitis, schistosomiasis, and sleeping sickness. The HIV/AIDS prevalence rate in 2016 was estimated at 3.8% for those aged 15–49, although a strong stigma against the illness keeps the number of reported cases artificially low. 46,000 children under age 14 were estimated to be living with HIV in 2016. In Cameroon, 58% of those living with HIV know their status, and just 37% receive ARV treatment. In 2016, 29,000 deaths due to AIDS occurred among both adults and children.

Breast ironing, a traditional practice that is prevalent in Cameroon, may affect girls' health. Female genital mutilation (FGM), while not widespread, is practised among some populations; according to a 2013 UNICEF report, 1% of women in Cameroon have undergone FGM. Also affecting women's and girls' health, the contraceptive prevalence rate was estimated at just 34.4% in 2014. Traditional healers remain a popular alternative to evidence-based medicine.

Culture

Music and dance
Music and dance are integral parts of Cameroonian ceremonies, festivals, social gatherings, and storytelling. Traditional dances are highly choreographed and separate men and women or forbid participation by one sex altogether. The dances' purposes range from pure entertainment to religious devotion. Traditionally, music is transmitted orally. In a typical performance, a chorus of singers echoes a soloist. Musical accompaniment may be as simple as clapping hands and stamping feet, but traditional instruments include bells worn by dancers, clappers, drums and talking drums, flutes, horns, rattles, scrapers, stringed instruments, whistles, and xylophones; combinations of these vary by ethnic group and region. Some performers sing complete songs alone, accompanied by a harplike instrument.
Popular music styles include ambasse bey of the coast, assiko of the Bassa, mangambeu of the Bangangte, and tsamassi of the Bamileke. Nigerian music has influenced Anglophone Cameroonian performers, and Prince Nico Mbarga's highlife hit "Sweet Mother" is the top-selling African record in history.

The two most popular music styles are makossa and bikutsi. Makossa developed in Douala and mixes folk music, highlife, soul, and Congo music. Performers such as Manu Dibango, Francis Bebey, Moni Bilé, and Petit-Pays popularised the style worldwide in the 1970s and 1980s. Bikutsi originated as war music among the Ewondo. Artists such as Anne-Marie Nzié developed it into a popular dance music beginning in the 1940s, and performers such as Mama Ohandja and Les Têtes Brulées popularised it internationally during the 1960s, 1970s and 1980s.

Holidays
The most notable holiday associated with patriotism in Cameroon is National Day, also called Unity Day. Among the most notable religious holidays are Assumption Day and Ascension Day, which is typically 39 days after Easter. In the Northwest and Southwest provinces, collectively called Ambazonia, October 1 is considered a national holiday, as Ambazonians regard it as the day of their independence from Cameroon.

Cuisine
Cuisine varies by region, but a large, one-course, evening meal is common throughout the country. A typical dish is based on cocoyams, maize, cassava (manioc), millet, plantains, potatoes, rice, or yams, often pounded into dough-like fufu. This is served with a sauce, soup, or stew made from greens, groundnuts, palm oil, or other ingredients. Meat and fish are popular but expensive additions, with chicken often reserved for special occasions. Dishes are often quite spicy; seasonings include salt, red pepper sauce, and maggi. Cutlery is common, but food is traditionally manipulated with the right hand. Breakfast consists of leftovers of bread and fruit with coffee or tea. Generally, breakfast is made from wheat flour in a variety of foods such as puff-puff (doughnuts), accra banana (made from bananas and flour), bean cakes, and many more. Snacks are popular, especially in larger towns, where they may be bought from street vendors.

Fashion
Cameroon's relatively large and diverse population is likewise diverse in its fashions. Climate; religious, ethnic and cultural beliefs; and the influences of colonialism, imperialism, and globalization are all factors in contemporary Cameroonian dress. Notable articles of clothing include: pagnes, sarongs worn by Cameroonian women; the chechia, a traditional hat; the kwa, a male handbag; and the gandura, male custom attire. Wrappers and loincloths are used extensively by both women and men, but their use varies by region, with influences from Fulani styles more present in the north and Igbo and Yoruba styles more often in the south and west. Imane Ayissi is one of Cameroon's top fashion designers and has received international recognition.

Local arts and crafts
Traditional arts and crafts are practised throughout the country for commercial, decorative, and religious purposes. Woodcarvings and sculptures are especially common. The high-quality clay of the western highlands is used for pottery and ceramics. Other crafts include basket weaving, beadworking, brass and bronze working, calabash carving and painting, embroidery, and leather working. Traditional housing styles use local materials and vary from the temporary wood-and-leaf shelters of the nomadic Mbororo to the rectangular mud-and-thatch homes of southern peoples.
Dwellings of materials such as cement and tin are increasingly common. Contemporary art is mainly promoted by independent cultural organizations (Doual'art, Africréa) and artist-run initiatives (Art Wash, Atelier Viking, ArtBakery).

Literature
Cameroonian literature has concentrated on both European and African themes. Colonial-era writers such as Louis-Marie Pouka and Sankie Maimo were educated by European missionary societies and advocated assimilation into European culture to bring Cameroon into the modern world. After World War II, writers such as Mongo Beti and Ferdinand Oyono analysed and criticised colonialism and rejected assimilation.

Films and literature
Shortly after independence, filmmakers such as Jean-Paul Ngassa and Thérèse Sita-Bella explored similar themes. In the 1960s, Mongo Beti, Ferdinand Léopold Oyono and other writers explored postcolonialism, problems of African development, and the recovery of African identity. In the mid-1970s, filmmakers such as Jean-Pierre Dikongué Pipa and Daniel Kamwa dealt with the conflicts between traditional and postcolonial society. Literature and films during the next two decades focused more on wholly Cameroonian themes.

Sports
National policy strongly advocates sport in all forms. Traditional sports include canoe racing and wrestling, and several hundred runners participate in the Mount Cameroon Race of Hope each year. Cameroon is one of the few tropical countries to have competed in the Winter Olympics.

Sport in Cameroon is dominated by football. Amateur football clubs abound, organised along ethnic lines or under corporate sponsors. The national team has been one of the most successful in Africa since its strong showing in the 1982 and 1990 FIFA World Cups. Cameroon has won five African Cup of Nations titles and the gold medal at the 2000 Olympics. Cameroon was the host country of the Women Africa Cup of Nations in November–December 2016, the 2020 African Nations Championship and the 2021 Africa Cup of Nations. The women's football team is known as the "Indomitable Lionesses" and, like its male counterpart, has been successful on the international stage, although it has not won a major trophy.

Cricket has also entered Cameroon as an emerging sport, with the Cameroon Cricket Federation participating in international matches. Cameroon has produced multiple National Basketball Association players, including Pascal Siakam, Joel Embiid, D. J. Strawberry, Ruben Boumtje-Boumtje, Christian Koloko, and Luc Mbah a Moute. The current UFC Heavyweight Champion Francis Ngannou hails from Cameroon.

See also
Index of Cameroon-related articles
Outline of Cameroon
Telephone numbers in Cameroon

Notes

References

Further reading
Reporters without Borders. Retrieved 6 April 2007.
Human Development Report 2006. United Nations Development Programme. Retrieved 6 April 2007.
Fonge, Fuabeh P. (1997). Modernization without Development in Africa: Patterns of Change and Continuity in Post-Independence Cameroonian Public Service. Trenton, New Jersey: Africa World Press, Inc.
MacDonald, Brian S. (1997). "Case Study 4: Cameroon", Military Spending in Developing Countries: How Much Is Too Much? McGill-Queen's University Press.
Njeuma, Dorothy L. (no date). "Country Profiles: Cameroon". The Boston College Center for International Higher Education. Retrieved 11 April 2008.
Rechniewski, Elizabeth. "1947: Decolonisation in the Shadow of the Cold War: the Case of French Cameroon." Australian & New Zealand Journal of European Studies 9.3 (2017). Available online.
External links

Cameroon. The World Factbook. Central Intelligence Agency.
Cameroon Corruption Profile from the Business Anti-Corruption Portal
Cameroon from UCB Libraries GovPubs
Cameroon profile from BBC News
Key Development Forecasts for Cameroon from International Futures

Government

Presidency of the Republic of Cameroon
Prime Minister's Office
National Assembly of Cameroon
Global Integrity Report: Cameroon, with reporting on anti-corruption in Cameroon
Chief of State and Cabinet Members

Trade

Summary Trade Statistics from the World Bank
Economy of Cameroon
The economy of Cameroon was one of the most prosperous in Africa for a quarter of a century after independence. The drop in commodity prices for its principal exports – petroleum, cocoa, coffee, and cotton – in the mid-1980s, combined with an overvalued currency and economic mismanagement, led to a decade-long recession. Real per capita GDP fell by more than 60% from 1986 to 1994, the current account and fiscal deficits widened, and foreign debt grew. Yet because of its oil reserves and favorable agricultural conditions, Cameroon still has one of the best-endowed primary commodity economies in sub-Saharan Africa.
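To give a feel for the magnitude of that contraction, the sketch below converts the cumulative fall into an implied average annual rate. It is a back-of-the-envelope illustration only: it assumes the decline was exactly 60% (the lower bound quoted above) spread over the eight years from 1986 to 1994.

```python
# Implied average annual change in real per capita GDP, assuming a
# cumulative fall of exactly 60% over 1986-1994 (the article says
# "more than 60%", so this is a lower-bound estimate).
cumulative_fall = 0.60
years = 1994 - 1986  # 8 years

annual_rate = (1 - cumulative_fall) ** (1 / years) - 1
print(f"Implied average annual change: {annual_rate:.1%}")  # about -10.8% per year
```

In other words, the recession implied that real per capita GDP shrank by roughly a tenth every year for eight consecutive years.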
Agriculture

In 2018, Cameroon produced:

5 million tons of cassava (13th largest producer in the world);
3.9 million tons of plantain (3rd largest producer in the world, behind only Congo and Ghana);
2.6 million tons of palm oil (7th largest producer in the world);
2.3 million tons of maize;
1.9 million tons of taro (3rd largest producer in the world, behind only Nigeria and China);
1.4 million tons of sorghum;
1.2 million tons of banana;
1.2 million tons of sugarcane;
1 million tons of tomato (19th largest producer in the world);
674,000 tons of yam (7th largest producer in the world);
594,000 tons of peanut;
410,000 tons of sweet potato;
402,000 tons of beans;
332,000 tons of rice;
310,000 tons of pineapple;
307,000 tons of cocoa (5th largest producer in the world, after Ivory Coast, Ghana, Indonesia, and Nigeria);
302,000 tons of potato;
301,000 tons of onion;
249,000 tons of cotton.

It also produced smaller quantities of other agricultural products, such as coffee (33,000 tons) and natural rubber (55,000 tons).

Finance and banking

Cameroon's financial system is the largest in the CEMAC region. Access to financial services is limited, particularly for SMEs. Aside from banks' traditional preference for dealing with large, established companies, lending to SMEs is discouraged by the 15 percent cap on loan interest rates and by heavy taxation of such loans. As of 2006, bank loans to SMEs hardly reached 15 percent of total outstanding loans (Molua, 2002).

Less than 5 percent of Cameroonians have access to a bank account. The microfinance sector is consequently becoming increasingly important, but its development is hampered by a loose regulatory and supervisory framework for microfinance institutions (MFIs). The banking sector is highly concentrated and dominated by foreign commercial banks: six of the eleven largest commercial banks are foreign-owned, and the three largest banks hold more than 50 percent of total financial system assets. While foreign banks generally display good solvency ratios, small domestic banks are in a much weaker position. Their capitalization is well below the average for banks in the CEMAC region, and their profitability is close to 2 percent, compared to 20 percent for foreign banks in the country. This is partially explained by high levels of non-performing loans, which reached 12 percent in 2007 and led most banks to hold large excess reserves relative to deposits and large amounts of unutilized liquidity.

In 2018, the International Monetary Fund (IMF) asked Cameroon to broaden its tax base to cover the losses caused by instability in the North-West and South-West regions, the loss of oil revenue, the failure to deliver on port facilities, and the decline in oil production from mature oil fields.

Macro-economic trend

Cameroon became an oil-producing country in 1977. Claiming to be setting aside reserves for difficult times, the authorities managed "off-budget" oil revenues in total opacity, placing the funds in accounts in Paris, Switzerland, and New York. Several billion dollars were thus diverted to the benefit of oil companies and regime officials. The influence of France and its 9,000 nationals in Cameroon remains considerable. African Affairs magazine noted in the early 1980s that they "continue to dominate almost all key sectors of the economy, much as they did before independence. French nationals control 55% of the modern sector of the Cameroonian economy and their control over the banking system is total."

The government embarked upon a series of economic reform programs supported by the World Bank and International Monetary Fund (IMF) beginning in the late 1980s. Many of these measures have been painful: the government slashed civil service salaries by 65% in 1993, and the CFA franc – the common currency of Cameroon and 13 other African states – was devalued by 50% in January 1994. The government failed to meet the conditions of the first four IMF programs.

Recent signs, however, are encouraging. As of March 1998, Cameroon's fifth IMF program – a 3-year enhanced structural adjustment program approved in August 1997 – is on track. Cameroon has rescheduled its Paris Club debt at favorable terms, GDP has grown by about 5% a year beginning in 1995, and inflation has been brought back under control. There is cautious optimism that Cameroon is emerging from its long period of economic hardship. The Enhanced Structural Adjustment Facility (ESAF) signed by the IMF and the Government of Cameroon calls for greater macroeconomic planning and financial accountability; privatization of most of Cameroon's nearly 100 remaining non-financial parastatal enterprises; elimination of state marketing board monopolies on the export of cocoa, certain coffees, and cotton; privatization and price competition in the banking sector; implementation of the 1992 labor code; a vastly improved judicial system; and political liberalization to boost investment.

France is Cameroon's main trading partner and source of private investment and foreign aid. Cameroon has an investment guaranty agreement and a bilateral accord with the United States; US investment in Cameroon is about $1 million, most of it in the oil sector. Cameroon aims to become an emerging economy by 2035.

See also

Cameroon
Transport in Cameroon
United Nations Economic Commission for Africa

External links

Cameroon latest trade data on ITC Trade Map
World Bank Summary Trade Statistics for Cameroon
Telecommunications in Cameroon
Telecommunications in Cameroon include radio, television, fixed and mobile telephones, and the Internet.

History

Under German rule, the protectorate of Kamerun received its first telegraph line, its first telephone line, and its first wireless telegraph; even so, telecommunications in the territory remained undeveloped. During the First World War, the Germans followed a scorched-earth policy that meant the destruction of communication lines, including telephone and telegraph.

In British Cameroon, from 1916 into the 1950s, communications relied on flag post runners, who have been described as "human telephone lines". The paths followed by the runners served as the basis for the development of telegraph lines in the territory; for instance, the line from Buea-Kumba to Ossidinge used the same paths as the mail runners. In the mid-1930s, the wiring of British Cameroon received more support.

Radio and television

Radio stations: state-owned Cameroon Radio Television (CRTV); one private radio broadcaster; about 70 privately owned, unlicensed radio stations operating, but subject to closure at any time; foreign news services are required to partner with a state-owned national station (2007); 2 AM, 9 FM, and 3 shortwave stations (2001).
Television stations: state-owned Cameroon Radio Television (CRTV); 2 private TV broadcasters (2007); one station (2001).

BBC World Service radio is available via local relays (98.4 FM in Yaoundé, the capital).

The government maintains tight control over broadcast media. State-owned Cameroon Radio Television (CRTV) operates both a TV and a radio network. It was the only officially recognized and fully licensed broadcaster until August 2007, when the government issued licenses to two private TV broadcasters and one private radio broadcaster. Approximately 375 privately owned radio stations were operating in 2012, three-fourths of them in Yaoundé and Douala. The government requires nonprofit rural radio stations to submit applications to broadcast, but they are exempt from licensing fees. Commercial radio and television broadcasters must submit a licensing application, pay an application fee, and thereafter pay a high annual licensing fee. Several rural community radio stations function with foreign funding; the government prohibits these stations from discussing politics.

In spite of the government's tight control, Reporters Without Borders reported in its 2011 field survey that "[i]t is clear from the diversity of the media and the outspoken reporting style that press freedom is a reality".

Telephones

Calling code: +237
International call prefix: 00
Main lines: 737,400 lines in use, 88th in the world (2012); 130,700 lines in use (2006).
Mobile cellular: 13.1 million lines, 64th in the world (2012); 4.5 million lines (2007).

Telephone system: the system includes cable, microwave radio relay, and tropospheric scatter. Camtel, the monopoly provider of fixed-line service, provides connections for only about 3 per 100 persons; equipment is old and outdated, and connections with many parts of the country are unreliable. Mobile-cellular usage, in part a reflection of the poor condition and general inadequacy of the fixed-line network, has increased sharply, reaching a subscriber base of 50 per 100 persons (2011); the rough calculation below shows how such densities follow from the line counts above.
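As a sanity check on the reported teledensities, this hypothetical sketch recomputes lines per 100 inhabitants from the 2012 line counts. The population figure is an assumption (a round number in the range of Cameroon's early-2010s estimates), not a value taken from this article.

```python
# Teledensity from the 2012 figures above; the population is an assumed
# round figure (~21.7 million), not stated in this article.
fixed_lines = 737_400
mobile_lines = 13_100_000
population = 21_700_000  # assumption

fixed_per_100 = fixed_lines / population * 100
mobile_per_100 = mobile_lines / population * 100
print(f"Fixed: {fixed_per_100:.1f} per 100 persons")    # ~3.4 per 100
print(f"Mobile: {mobile_per_100:.1f} per 100 persons")  # ~60 per 100
```

The fixed-line result is consistent with the "about 3 per 100 persons" quoted for Camtel, and the mobile result is consistent with continued growth beyond the 50 per 100 reported for 2011.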
Communications cables: the South Atlantic 3/West Africa Submarine Cable (SAT-3/WASC) fiber-optic cable system provides connectivity to Europe and Asia (2011); the Africa Coast to Europe (ACE) cable system, connecting countries along the west coast of Africa to each other and to Portugal and France, is planned.

Satellite earth stations: 2 Intelsat (Atlantic Ocean) (2011).

Internet

Top-level domain: .cm
Internet users: 1.1 million users, 113th in the world; 5.7% of the population, 184th in the world (2012); 985,565 users (2011); 749,600 users, 106th in the world (2009).
Fixed broadband: 1,006 subscriptions, 180th in the world; less than 0.05% of the population, 190th in the world (2012).
Wireless broadband: unknown (2012).
Internet hosts: 10,207 hosts, 134th in the world (2012); 69 hosts (2008).
IPv4: 137,728 addresses allocated, less than 0.05% of the world total, 6.8 addresses per 1000 people (2012).
Internet service providers (ISPs): Creolink Communications.

A number of projects are underway that will improve Internet access, telecommunications, and information and communications technology (ICT) in general:

Implementation of the e-post project, connecting 234 post offices throughout the country;
Extension of the national optical fiber network: installation of the initial 3,200 km of fiber is complete, and studies for the installation of an additional 3,400 km are underway;
Construction of multipurpose community telecentres: some 115 telecentres are operating, with an additional 205 under construction;
Construction of metropolitan optical loops: the urban optical loop of Douala is complete, and construction of the Yaoundé loop is underway;
Construction of submarine cable landing points;
Establishment of a public key infrastructure (PKI);
Construction of a regional technology park to support the development of ICTs.

Internet censorship and surveillance

There are no government restrictions on access to the Internet, nor reports that the government monitors e-mail or Internet chat rooms. Although the law provides for freedom of speech and press, it also criminalizes media offenses, and the government restricts freedoms of speech and press. Government officials threaten, harass, arrest, and deny equal treatment to individuals or organizations that criticize government policies or express views at odds with government policy. Individuals who criticize the government publicly or privately sometimes face reprisals. Press freedom is constrained by strict libel laws that suppress criticism. These laws authorize the government, at its discretion and at the request of the plaintiff, to criminalize a civil libel suit or to initiate a criminal libel suit in cases of alleged libel against the president and other high government officials. Such crimes are punishable by prison terms and heavy fines.

Although the constitution and law prohibit arbitrary interference with privacy, family, home, or correspondence, these rights are subject to restriction for the "higher interests of the state", and there are credible reports that police and gendarmes harass citizens, conduct searches without warrants, and open or seize mail with impunity.

See also

Cameroon Radio Television, the government-controlled national broadcaster
Commonwealth Telecommunications Organisation
List of terrestrial fibre optic cable projects in Africa
Media of Cameroon

External links

Antic.cm, top-level domain registry for Cameroon (.cm)
Ministry of Posts and Telecommunications, Cameroon (MINPOSTEL) (English translation available)
Transport in Cameroon
This article provides a breakdown of the transportation options available in Cameroon, which include railways, roadways, waterways, pipelines, and airlines. These are used by citizens for personal transportation, by businesses for transporting goods, and by tourists for both accessing the country and traveling while there.

Railways

Railways in Cameroon are operated by Camrail, a subsidiary of the French investment group Bolloré. As of May 2014, Camrail operated regular daily services on three routes:

Douala - Kumba
Douala - Yaoundé
Yaoundé - Ngaoundéré

Two further lines are under development:

Kribi - Mbalam, with an extension into the Republic of the Congo - under construction in 2022.
Edéa - Kribi - proposed connection to the deep water port.

There are no operating rail links with neighboring countries.

Roadways

Total highways: 50,000 km
Paved: 5,000 km
Unpaved: 45,000 km (2004)

Cameroon lies at a key point in the Trans-African Highway network, with three routes crossing its territory:

Dakar-N'Djamena Highway, connecting just over the Cameroon border with the N'Djamena-Djibouti Highway
Lagos-Mombasa Highway
Tripoli-Cape Town Highway

Cameroon's central location in the network means that efforts to close the gaps in the network across Central Africa depend on Cameroon's participation in maintaining it, and the network has the potential to have a profound influence on Cameroon's regional trade. It is likely, for instance, that within a decade a great deal of trade between West Africa and Southern Africa will be moving on the network through Yaoundé.

Except for the several relatively good toll roads that connect major cities (all of them one-lane), roads are poorly maintained and subject to inclement weather; only 10% of the roadways are tarred.

National highways in Cameroon:

N1: Yaoundé - Bertoua - Ngaoundéré - Garoua - Maroua - Kouséri, border with Chad.
N2: Yaoundé - Mbalmayo - Ebolowa - Woleu Ntem, border with Gabon.
N3: Yaoundé - Edéa - Douala - Idenau.
N4: Yaoundé - Bafia - Bafoussam.
N5: Douala - Nkongsamba - Bafang - Bafoussam.
N6: Ejagham, border with Nigeria - Bamenda - Bafoussam - Tibati - Lokoti.
N7: Edéa - Kribi.
N8: Mutengene - Kumba - Mamfé.
N9: Mbalmayo - Nki, border with Congo.
N10: Yaoundé - Bertoua - Batouri - Kenzou, border with the Central African Republic.
N11: Bamenda Ring Road, linking Mezam, Ngoketunjia, Bui, Boyo, and Menchum.

Prices of petrol rose steadily in 2007 and 2008, leading to a transport union strike in Douala on 25 February 2008. The strike quickly escalated into violent protests and spread to other major cities. The uprising finally subsided on 29 February.

Waterways

2,090 km, of decreasing importance. Navigation, mainly on the Benue River, is limited to the rainy season.

Seaports and harbors

Douala - main port, railhead, and largest city.
Bonaberi - railhead to the northwest.
Garoua
Kribi - terminus of the oil pipeline from Chad.
Kribi South - proposed iron ore export port, about 40 km south of Kribi.
Tiko

Pipelines

888 km of oil line (2008)

Airports

The main international airport is Douala International Airport, with a secondary international airport at Yaoundé Nsimalen International Airport. As of May 2014, Cameroon had regular international air connections with nearly every major international airport in West and Southwest Africa, as well as several connections to Europe and East Africa. In 2008 there were 34 airports, only 10 of which had paved runways.
List of airports in Cameroon

Airports - with paved runways:
total: 10
over 3,047 m: 2
2,438 to 3,047 m: 4
1,524 to 2,437 m: 3
914 to 1,523 m: 1 (2008)

Airports - with unpaved runways:
total: 24
1,524 to 2,437 m: 4
914 to 1,523 m: 14
under 914 m: 6 (2008)

See also

Camrail
Cameroon Transport News
African Integrated High Speed Railway Network (AIHSRN)
Railway stations in Cameroon
Foreign relations of Cameroon
Cameroon's noncontentious, low-profile approach to foreign relations puts it squarely in the mainstream of African and developing-country positions on major issues. It supports the principles of non-interference in the affairs of third-world countries and of increased assistance to underdeveloped countries. Cameroon is an active participant in the United Nations, where its voting record demonstrates its commitment to causes that include international peacekeeping, the rule of law, environmental protection, and Third World economic development. In the UN and other human rights fora, Cameroon's non-confrontational approach has generally led it to avoid criticizing other countries. Cameroon enjoys good relations with the United States and other developed countries, and generally good relations with its African neighbors. It supports UN peacekeeping activities in Central Africa.

International disputes

Delimitation of the international boundaries in the vicinity of Lake Chad, the lack of which led to border incidents in the past, is complete and awaits ratification by Cameroon, Chad, Niger, and Nigeria. A dispute with Nigeria over land and maritime boundaries around the Bakassi Peninsula and Lake Chad was taken to the International Court of Justice (ICJ), as was a dispute with Equatorial Guinea over the exclusive maritime economic zone. As of 10 October 2012, the Bakassi dispute has been resolved in Cameroon's favour, with Cameroon holding sovereignty over the peninsula.

Cameroon also faces a complaint filed with the African Commission on Human Rights by the Southern Cameroons National Council (SCNC) and the Southern Cameroons Peoples Organisation (SCAPO) against the Government of the Republic of Cameroon, in which the complainants allege that the Republic of Cameroon is illegally occupying the territory of Southern Cameroons. The SCNC and SCAPO ultimately seek the independence of the territory of Southern Cameroons. As of 2008, both parties had submitted briefs and responded to the Commission's inquiries; a ruling by the African Commission on Human Rights is awaited.

Multilateral relations

In addition to the United Nations, Cameroon is very active in other multilateral organisations and global institutions such as the Organisation internationale de la Francophonie, the Commonwealth, the Organisation of Islamic Cooperation, the Group of 77, the Non-Aligned Movement, the African Union, and the Economic Community of Central African States.

Refugees and internally displaced persons

Refugees (by country of origin): 20,000-30,000 (Chad); 3,000 (Nigeria); 24,000 (Central African Republic) (2007)

See also

List of diplomatic missions in Cameroon
List of diplomatic missions of Cameroon