dual space of $X$, whose topology is a generalization of the dual norm-induced topology on the continuous dual space $X'$; see this footnote for more details). If $X$ is a metrizable locally convex TVS, then $X$ is normable if and only if its strong dual space is a Fréchet–Urysohn space. This shows that in the category of locally convex TVSs, Banach spaces are exactly those complete spaces that are both metrizable and have metrizable strong dual spaces.

Completions Every normed space can be isometrically embedded onto a dense vector subspace of a Banach space, and this Banach space is called a completion of the normed space. This Hausdorff completion is unique up to isometric isomorphism. More precisely, for every normed space $X$ there exist a Banach space $Y$ and a mapping $T : X \to Y$ such that $T$ is an isometric mapping and $T(X)$ is dense in $Y$. If $Z$ is another Banach space such that there is an isometric isomorphism from $X$ onto a dense subset of $Z$, then $Z$ is isometrically isomorphic to $Y$. This Banach space $Y$ is the Hausdorff completion of the normed space $X$. The underlying metric space for $Y$ is the same as the metric completion of $X$, with the vector space operations extended from $X$ to $Y$. The completion of $X$ is sometimes denoted by $\widehat{X}$.

General theory Linear operators, isomorphisms If $X$ and $Y$ are normed spaces over the same ground field $\mathbb{K}$, the set of all continuous $\mathbb{K}$-linear maps $T : X \to Y$ is denoted by $B(X, Y)$. In infinite-dimensional spaces, not all linear maps are continuous. A linear mapping from a normed space $X$ to another normed space is continuous if and only if it is bounded on the closed unit ball of $X$. Thus, the vector space $B(X, Y)$ can be given the operator norm $\|T\| = \sup \{ \|Tx\|_Y : x \in X, \ \|x\|_X \leq 1 \}$. For $Y$ a Banach space, the space $B(X, Y)$ is a Banach space with respect to this norm. If $X$ is a Banach space, the space $B(X) = B(X, X)$ forms a unital Banach algebra; the multiplication operation is given by the composition of linear maps.

If $X$ and $Y$ are normed spaces, they are isomorphic normed spaces if there exists a linear bijection $T : X \to Y$ such that $T$ and its inverse $T^{-1}$ are continuous. If one of the two spaces $X$ or $Y$ is complete (or reflexive, separable, etc.) then so is the other space. Two normed spaces $X$ and $Y$ are isometrically isomorphic if, in addition, $T$ is an isometry, that is, $\|T(x)\| = \|x\|$ for every $x$ in $X$. The Banach–Mazur distance $d(X, Y)$ between two isomorphic but not isometric spaces $X$ and $Y$ gives a measure of how much the two spaces differ.

Continuous and bounded linear functions and seminorms Every continuous linear operator is a bounded linear operator, and if dealing only with normed spaces then the converse is also true. That is, a linear operator between two normed spaces is bounded if and only if it is a continuous function. So in particular, because the scalar field (which is $\mathbb{R}$ or $\mathbb{C}$) is a normed space, a linear functional on a normed space is a bounded linear functional if and only if it is a continuous linear functional. This allows continuity-related results (like those below) to be applied to Banach spaces. Although boundedness is the same as continuity for linear maps between normed spaces, the term "bounded" is more commonly used when dealing primarily with Banach spaces.
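As a concrete illustration of the operator norm just defined, the following Python sketch (an illustration added here with made-up matrix data, not part of the original article) estimates $\|T\|$ for a linear map between finite-dimensional Euclidean spaces by sampling the unit sphere, and compares the estimate with the exact value, which for the Euclidean norm is the largest singular value:

```python
import numpy as np

# A linear map T : R^3 -> R^2 represented by a matrix (hypothetical example data).
T = np.array([[2.0, 0.0, 1.0],
              [0.0, -3.0, 1.0]])

# Monte Carlo estimate of the operator norm: sup of ||Tx|| over the unit sphere.
rng = np.random.default_rng(0)
samples = rng.normal(size=(100_000, 3))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)  # project onto the unit sphere
estimate = np.max(np.linalg.norm(samples @ T.T, axis=1))

# For the Euclidean norm, the operator norm equals the largest singular value.
exact = np.linalg.svd(T, compute_uv=False)[0]
print(f"sampled estimate: {estimate:.4f}, largest singular value: {exact:.4f}")
```

In infinite dimensions no such exhaustive search over the ball is possible, which is one reason the abstract theory works with the supremum directly.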
If $f : X \to \mathbb{R}$ is a subadditive function (such as a norm, a sublinear function, or a real linear functional), then $f$ is continuous at the origin if and only if $f$ is uniformly continuous on all of $X$; and if in addition $f(0) = 0$, then $f$ is continuous if and only if its absolute value $|f|$ is continuous, which happens if and only if $\{x \in X : |f(x)| < 1\}$ is an open subset of $X$. And very importantly for applying the Hahn–Banach theorem, a linear functional $f$ is continuous if and only if this is true of its real part $\operatorname{Re} f$; moreover, $\|f\| = \|\operatorname{Re} f\|$ and the real part completely determines $f$, which is why the Hahn–Banach theorem is often stated only for real linear functionals. Also, a linear functional $f$ on $X$ is continuous if and only if the seminorm $|f|$ is continuous, which happens if and only if there exists a continuous seminorm $p$ such that $|f| \leq p$; this last statement involving the linear functional $f$ and the seminorm $p$ is encountered in many versions of the Hahn–Banach theorem.

Basic notions The Cartesian product $X \times Y$ of two normed spaces is not canonically equipped with a norm. However, several equivalent norms are commonly used, such as $\|(x, y)\|_1 = \|x\| + \|y\|$ and $\|(x, y)\|_\infty = \max(\|x\|, \|y\|)$, which give rise to isomorphic normed spaces. In this sense, the product $X \times Y$ (or the direct sum $X \oplus Y$) is complete if and only if the two factors are complete.

If $M$ is a closed linear subspace of a normed space $X$, there is a natural norm on the quotient space $X / M$, namely $\|x + M\| = \inf_{m \in M} \|x + m\|$. The quotient $X / M$ is a Banach space when $X$ is complete. The quotient map from $X$ onto $X / M$, sending $x \in X$ to its class $x + M$, is linear, onto, and has norm $1$, except when $M = X$, in which case the quotient is the null space.

The closed linear subspace $M$ of $X$ is said to be a complemented subspace of $X$ if $M$ is the range of a surjective bounded linear projection $P : X \to M$. In this case, the space $X$ is isomorphic to the direct sum of $M$ and the kernel of the projection $P$.

Suppose that $X$ and $Y$ are Banach spaces and that $T \in B(X, Y)$. There exists a canonical factorization of $T$ as $T = T_1 \circ \pi$, where the first map $\pi$ is the quotient map from $X$ onto $X / \ker T$, and the second map $T_1$ sends every class $x + \ker T$ in the quotient to the image $T(x)$ in $Y$. This is well defined because all elements in the same class have the same image. The mapping $T_1$ is a linear bijection from $X / \ker T$ onto the range $T(X)$, whose inverse need not be bounded.

Classical spaces Basic examples of Banach spaces include: the $L^p$ spaces and their special cases, the sequence spaces $\ell^p$ that consist of scalar sequences indexed by the natural numbers $\mathbb{N}$; among them, the space $\ell^1$ of absolutely summable sequences and the space $\ell^2$ of square summable sequences; the space $c_0$ of sequences tending to zero and the space $\ell^\infty$ of bounded sequences; and the space $C(K)$ of continuous scalar functions on a compact Hausdorff space $K$, equipped with the max norm $\|f\|_{C(K)} = \max \{ |f(x)| : x \in K \}$. According to the Banach–Mazur theorem, every Banach space is isometrically isomorphic to a subspace of some $C(K)$. For every separable Banach space $X$, there is a closed subspace $M$ of $\ell^1$ such that $X$ is isometrically isomorphic to $\ell^1 / M$.

Any Hilbert space serves as an example of a Banach space. A Hilbert space $H$ over $\mathbb{K} = \mathbb{R}, \mathbb{C}$ is complete for a norm of the form $\|x\|_H = \sqrt{\langle x, x \rangle}$, where $\langle \cdot, \cdot \rangle : H \times H \to \mathbb{K}$ is the inner product, linear in its first argument, that satisfies $\langle y, x \rangle = \overline{\langle x, y \rangle}$ for all $x, y \in H$ and $\langle x, x \rangle \geq 0$, with $\langle x, x \rangle = 0$ if and only if $x = 0$. For example, the space $L^2$ is a Hilbert space.

The Hardy spaces and the Sobolev spaces are examples of Banach spaces that are related to $L^p$ spaces and have additional structure. They are important in different branches of analysis, harmonic analysis and partial differential equations among others.

Banach algebras A Banach algebra is a Banach space $A$ over $\mathbb{K} = \mathbb{R}$ or $\mathbb{C}$, together with a structure of algebra over $\mathbb{K}$, such that the product map $A \times A \ni (a, b) \mapsto ab \in A$ is continuous. An equivalent norm on $A$ can be found so that $\|ab\| \leq \|a\| \|b\|$ for all $a, b \in A$.

Examples The Banach space $C(K)$, with the pointwise product, is a Banach algebra.
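The quotient norm defined above can be computed explicitly in a Hilbert space, where the infimum is attained at the orthogonal projection onto the subspace. A minimal numpy sketch (the matrix and vector are illustrative assumptions, not data from the article):

```python
import numpy as np

# Quotient norm ||x + M|| = inf_{m in M} ||x - m|| in Euclidean R^4.
# M is the column span of A; in a Hilbert space the infimum is attained at the
# orthogonal projection of x onto M, which least squares computes.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [2.0, -1.0]])   # basis vectors of the subspace M, as columns
x = np.array([1.0, 2.0, 3.0, 4.0])

coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)  # best approximation to x from M
quotient_norm = np.linalg.norm(x - A @ coeffs)  # ||x + M|| in the quotient space
print(f"||x + M|| = {quotient_norm:.4f}  (<= ||x|| = {np.linalg.norm(x):.4f})")
```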
The disk algebra $A(\mathbf{D})$ consists of functions holomorphic in the open unit disk $\mathbf{D} \subseteq \mathbb{C}$ and continuous on its closure $\overline{\mathbf{D}}$. Equipped with the max norm on $\overline{\mathbf{D}}$, the disk algebra $A(\mathbf{D})$ is a closed subalgebra of $C(\overline{\mathbf{D}})$.

The Wiener algebra $A(\mathbf{T})$ is the algebra of functions on the unit circle $\mathbf{T}$ with absolutely convergent Fourier series. Via the map associating a function on $\mathbf{T}$ to the sequence of its Fourier coefficients, this algebra is isomorphic to the Banach algebra $\ell^1(\mathbb{Z})$, where the product is the convolution of sequences.

For every Banach space $X$, the space $B(X)$ of bounded linear operators on $X$, with the composition of maps as product, is a Banach algebra.

A C*-algebra is a complex Banach algebra $A$ with an antilinear involution $a \mapsto a^*$ such that $\|a^* a\| = \|a\|^2$. The space $B(H)$ of bounded linear operators on a Hilbert space $H$ is a fundamental example of a C*-algebra. The Gelfand–Naimark theorem states that every C*-algebra is isometrically isomorphic to a C*-subalgebra of some $B(H)$. The space $C(K)$ of complex continuous functions on a compact Hausdorff space $K$ is an example of a commutative C*-algebra, where the involution associates to every function $f$ its complex conjugate $\overline{f}$.

Dual space If $X$ is a normed space and $\mathbb{K}$ the underlying field (either the real or the complex numbers), the continuous dual space is the space of continuous linear maps from $X$ into $\mathbb{K}$, also called continuous linear functionals. The notation for the continuous dual is $X'$ in this article. Since $\mathbb{K}$ is a Banach space (using the absolute value as norm), the dual $X'$ is a Banach space, for every normed space $X$. The main tool for proving the existence of continuous linear functionals is the Hahn–Banach theorem. In particular, every continuous linear functional on a subspace of a normed space can be continuously extended to the whole space, without increasing the norm of the functional. An important special case is the following: for every vector $x$ in a normed space $X$, there exists a continuous linear functional $f$ on $X$ such that $f(x) = \|x\|$ and $\|f\|_{X'} \leq 1$. When $x$ is not equal to the $0$ vector, the functional $f$ must have norm one, and is called a norming functional for $x$.

The Hahn–Banach separation theorem states that two disjoint non-empty convex sets in a real Banach space, one of them open, can be separated by a closed affine hyperplane. The open convex set lies strictly on one side of the hyperplane; the second convex set lies on the other side but may touch the hyperplane.

A subset $S$ in a Banach space $X$ is total if the linear span of $S$ is dense in $X$. The subset $S$ is total in $X$ if and only if the only continuous linear functional that vanishes on $S$ is the $0$ functional: this equivalence follows from the Hahn–Banach theorem.

If $X$ is the direct sum of two closed linear subspaces $M$ and $N$, then the dual $X'$ of $X$ is isomorphic to the direct sum of the duals of $M$ and $N$. If $M$ is a closed linear subspace in $X$, one can associate the orthogonal of $M$ in the dual, $M^{\perp} = \{ x' \in X' : x'(m) = 0 \text{ for all } m \in M \}$. The orthogonal $M^{\perp}$ is a closed linear subspace of the dual. The dual of $M$ is isometrically isomorphic to $X' / M^{\perp}$. The dual of $X / M$ is isometrically isomorphic to $M^{\perp}$.

The dual of a separable Banach space need not be separable, but if the dual $X'$ is separable, then $X$ is separable. When $X'$ is separable, the above criterion for totality can be used for proving the existence of a countable total subset in $X$.

Weak topologies The weak topology on a Banach space $X$ is the coarsest topology on $X$ for which all elements $x'$ in the continuous dual space $X'$ are continuous. The norm topology is therefore finer than the weak topology. It follows from the Hahn–Banach separation theorem that the weak topology is Hausdorff, and that a norm-closed convex subset of a Banach space is also weakly closed.
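The isomorphism between the Wiener algebra and $\ell^1(\mathbb{Z})$ can be checked numerically for trigonometric polynomials: multiplying two functions pointwise corresponds to convolving their coefficient sequences. A small sketch with arbitrary example coefficients (an illustration, not part of the original article):

```python
import numpy as np

# Wiener algebra illustration: pointwise multiplication of two trigonometric
# polynomials corresponds to convolution of their Fourier coefficient sequences.
# Coefficients are indexed by n = -2..2 here (finitely supported sequences).
a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])   # coefficients of f, n = -2..2
b = np.array([0.5, 0.0, 1.0, 0.0, 0.5])   # coefficients of g, n = -2..2

c = np.convolve(a, b)                      # coefficients of f*g, n = -4..4

def evaluate(coeffs, theta):
    """Evaluate sum_n coeffs[n] * exp(i n theta), with indices centered at 0."""
    offset = (len(coeffs) - 1) // 2
    n = np.arange(len(coeffs)) - offset
    return np.sum(coeffs * np.exp(1j * n * theta))

theta = 0.7                                    # arbitrary test point on the circle
lhs = evaluate(a, theta) * evaluate(b, theta)  # f(theta) * g(theta)
rhs = evaluate(c, theta)                       # (f g)(theta) from convolved coefficients
print(np.allclose(lhs, rhs))                   # True
```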
A norm-continuous linear map between two Banach spaces $X$ and $Y$ is also weakly continuous, that is, continuous from the weak topology of $X$ to that of $Y$. If $X$ is infinite-dimensional, there exist linear maps which are not continuous. The space of all linear maps from $X$ to the underlying field (this space is called the algebraic dual space, to distinguish it from
the maximal ideals are precisely the kernels of Dirac measures on $K$. More generally, by the Gelfand–Mazur theorem, the maximal ideals of a unital commutative Banach algebra can be identified with its characters—not merely as sets but as topological spaces: the former with the hull-kernel topology and the latter with the w*-topology. In this identification, the maximal ideal space can be viewed as a w*-compact subset of the unit ball in the dual $A'$.

Not every unital commutative Banach algebra is of the form $C(K)$ for some compact Hausdorff space $K$. However, this statement holds if one places $C(K)$ in the smaller category of commutative C*-algebras. Gelfand's representation theorem for commutative C*-algebras states that every commutative unital C*-algebra $A$ is isometrically isomorphic to a $C(K)$ space. The Hausdorff compact space $K$ here is again the maximal ideal space, also called the spectrum of $A$ in the C*-algebra context.

Bidual If $X$ is a normed space, the (continuous) dual $X''$ of the dual $X'$ is called the bidual, or second dual, of $X$. For every normed space $X$ there is a natural map $F_X : X \to X''$ defined by $F_X(x)(f) = f(x)$ for all $x \in X$ and $f \in X'$. This defines $F_X(x)$ as a continuous linear functional on $X'$, that is, an element of $X''$. The map $F_X : x \mapsto F_X(x)$ is a linear map from $X$ to $X''$. As a consequence of the existence of a norming functional for every $x \in X$, this map $F_X$ is isometric, thus injective.

For example, the dual of $X = c_0$ is identified with $\ell^1$, and the dual of $\ell^1$ is identified with $\ell^\infty$, the space of bounded scalar sequences. Under these identifications, $F_X$ is the inclusion map from $c_0$ to $\ell^\infty$. It is indeed isometric, but not onto.

If $F_X$ is surjective, then the normed space $X$ is called reflexive (see below). Being the dual of a normed space, the bidual $X''$ is complete; therefore, every reflexive normed space is a Banach space. Using the isometric embedding $F_X$, it is customary to consider a normed space $X$ as a subset of its bidual. When $X$ is a Banach space, it is viewed as a closed linear subspace of $X''$. If $X$ is not reflexive, the unit ball of $X$ is a proper subset of the unit ball of $X''$. The Goldstine theorem states that the unit ball of a normed space is weakly*-dense in the unit ball of the bidual. In other words, for every $x''$ in the bidual, there exists a net $(x_i)$ in $X$ with $\sup_i \|x_i\| \leq \|x''\|$ such that $x''(f) = \lim_i f(x_i)$ for every $f \in X'$. The net may be replaced by a weakly*-convergent sequence when the dual $X'$ is separable. On the other hand, elements of the bidual of $\ell^1$ that are not in $\ell^1$ cannot be weak*-limits of sequences in $\ell^1$, since $\ell^1$ is weakly sequentially complete.

Banach's theorems Here are the main general results about Banach spaces that go back to the time of Banach's book (1932) and are related to the Baire category theorem. According to this theorem, a complete metric space (such as a Banach space, a Fréchet space or an F-space) cannot be equal to a union of countably many closed subsets with empty interiors. Therefore, a Banach space cannot be the union of countably many closed subspaces, unless it is already equal to one of them; in particular, a Banach space with a countable Hamel basis is finite-dimensional.

The Banach–Steinhaus theorem is not limited to Banach spaces. It can be extended, for example, to the case where $X$ is a Fréchet space, provided the conclusion is modified as follows: under the same hypothesis, there exists a neighborhood $U$ of $0$ in $X$ such that all the operators of the family are uniformly bounded on $U$.

This result is a direct consequence of the preceding Banach isomorphism theorem and of the canonical factorization of bounded linear maps. This is another consequence of Banach's isomorphism theorem, applied to the continuous bijection from $M \oplus N$ onto $X$ sending $(m, n)$ to the sum $m + n$.

Reflexivity The normed space $X$ is called reflexive when the natural map $F_X : X \to X''$ is surjective. Reflexive normed spaces are Banach spaces.
This is a consequence of the Hahn–Banach theorem. Further, by the open mapping theorem, if there is a bounded linear operator from the reflexive Banach space $X$ onto the Banach space $Y$, then $Y$ is reflexive. Indeed, if the dual $Y'$ of a Banach space $Y$ is separable, then $Y$ is separable. If $X$ is reflexive and separable, then the dual of $X'$ is separable, so $X'$ is separable.

Hilbert spaces are reflexive. The $L^p$ spaces are reflexive when $1 < p < \infty$. More generally, uniformly convex spaces are reflexive, by the Milman–Pettis theorem. The spaces $c_0$, $\ell^1$, $L^1([0, 1])$, and $C([0, 1])$ are not reflexive. In these examples of non-reflexive spaces $X$, the bidual $X''$ is "much larger" than $X$. Namely, under the natural isometric embedding of $X$ into $X''$ given by the Hahn–Banach theorem, the quotient $X'' / X$ is infinite-dimensional, and even nonseparable. However, Robert C. James has constructed an example of a non-reflexive space, usually called "the James space" and denoted by $J$, such that the quotient $J'' / J$ is one-dimensional. Furthermore, this space $J$ is isometrically isomorphic to its bidual.

When $X$ is reflexive, it follows that all closed and bounded convex subsets of $X$ are weakly compact. In a Hilbert space $H$, the weak compactness of the unit ball is very often used in the following way: every bounded sequence in $H$ has weakly convergent subsequences. Weak compactness of the unit ball provides a tool for finding solutions in reflexive spaces to certain optimization problems. For example, every convex continuous function on the unit ball $B$ of a reflexive space attains its minimum at some point in $B$. As a special case of the preceding result, when $X$ is a reflexive space over $\mathbb{R}$, every continuous linear functional $f$ in $X'$ attains its maximum $\|f\|$ on the unit ball of $X$. The following theorem of Robert C. James provides a converse statement. The theorem can be extended to give a characterization of weakly compact convex sets. On every non-reflexive Banach space $X$, there exist continuous linear functionals that are not norm-attaining. However, the Bishop–Phelps theorem states that norm-attaining functionals are norm dense in the dual $X'$ of $X$.

Weak convergences of sequences A sequence $(x_n)$ in a Banach space $X$ is weakly convergent to a vector $x \in X$ if $f(x_n)$ converges to $f(x)$ for every continuous linear functional $f$ in the dual $X'$. The sequence $(x_n)$ is a weakly Cauchy sequence if $f(x_n)$ converges to a scalar limit $L(f)$ for every $f$ in $X'$. A sequence $(f_n)$ in the dual $X'$ is weakly* convergent to a functional $f \in X'$ if $f_n(x)$ converges to $f(x)$ for every $x$ in $X$. Weakly Cauchy sequences, weakly convergent and weakly* convergent sequences are norm bounded, as a consequence of the Banach–Steinhaus theorem.

When the sequence $(x_n)$ in $X$ is a weakly Cauchy sequence, the limit $L$ above defines a bounded linear functional on the dual $X'$, that is, an element $L$ of the bidual of $X$, and $L$ is the limit of $(x_n)$ in the weak*-topology of the bidual. The Banach space $X$ is weakly sequentially complete if every weakly Cauchy sequence is weakly convergent in $X$. It follows from the preceding discussion that reflexive spaces are weakly sequentially complete.

An orthonormal sequence in a Hilbert space is a simple example of a weakly convergent sequence, with limit equal to the $0$ vector. The unit vector basis of $\ell^p$ for $1 < p < \infty$, or of $c_0$, is another example of a weakly null sequence, that is, a sequence that converges weakly to $0$. For every weakly null sequence in a Banach space, there exists a sequence of convex combinations of vectors from the given sequence that is norm-converging to $0$.

The unit vector basis of $\ell^1$ is not weakly Cauchy. Weakly Cauchy sequences in $\ell^1$ are weakly convergent, since $L^1$-spaces are weakly sequentially complete. Actually, weakly convergent sequences in $\ell^1$ are norm convergent. This means that $\ell^1$ satisfies Schur's property.
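The weakly null orthonormal sequence mentioned above can be seen numerically. In $\ell^2$, the pairing of the $n$-th unit vector $e_n$ with a fixed $y \in \ell^2$ is just the coordinate $y_n$, which tends to $0$ even though $\|e_n\| = 1$ for all $n$. A truncated illustration (the choice of $y$ below is an arbitrary example):

```python
import numpy as np

# Weak convergence illustration: in l^2, the orthonormal basis (e_n) converges
# weakly to 0, since <e_n, y> = y_n -> 0 for every y in l^2, while ||e_n|| = 1.
# We truncate l^2 to its first N coordinates for the demonstration.
N = 100_000
y = 1.0 / np.arange(1, N + 1)     # y = (1, 1/2, 1/3, ...) lies in l^2

for n in [1, 10, 100, 1000, 10_000]:
    # e_n has a single 1 in coordinate n, so the pairing <e_n, y> is just y[n-1].
    pairing = y[n - 1]
    print(f"<e_{n}, y> = {pairing:.6f}")   # tends to 0, although ||e_n|| = 1
```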
Results involving the $\ell^1$ basis Weakly Cauchy sequences and the $\ell^1$ basis are the opposite cases of the dichotomy established in the following deep result of H. P. Rosenthal. A complement to this result is due to Odell and Rosenthal (1975). By the Goldstine theorem, every element of the unit ball of $X''$ is the weak*-limit of a net in the unit ball of $X$. When $X$ does not contain $\ell^1$, every element of $X''$ is the weak*-limit of a sequence in the unit ball of $X$.

When the Banach space $X$ is separable, the unit ball of the dual $X'$, equipped with the weak*-topology, is a metrizable compact space $K$, and every element $x''$ in the bidual $X''$ defines a bounded function on $K$: $x' \in K \mapsto x''(x')$, with $|x''(x')| \leq \|x''\|$. This function is continuous for the compact topology of $K$ if and only if $x''$ is actually in $X$, considered as a subset of $X''$. Assume in addition for the rest of the paragraph that $X$ does not contain $\ell^1$. By the preceding result of Odell and Rosenthal, the function $x''$ is the pointwise limit on $K$ of a sequence of continuous functions on $K$; it is therefore a first Baire class function on $K$. The unit ball of the bidual is a pointwise compact subset of the first Baire class on $K$.

Sequences, weak and weak* compactness When $X$ is separable, the unit ball of the dual is weak*-compact by the Banach–Alaoglu theorem and metrizable for the weak* topology; hence every bounded sequence in the dual has weakly* convergent subsequences. This applies to separable reflexive spaces, but more is true in this case, as stated below.

The weak topology of a Banach space $X$ is metrizable if and only if $X$ is finite-dimensional. If the dual $X'$ is separable, the weak topology of the unit ball of $X$ is metrizable. This applies in particular to separable reflexive Banach spaces. Although the weak topology of the unit ball is not metrizable in general, one can characterize weak compactness using sequences. A Banach space $X$ is reflexive if and only if each bounded sequence in $X$ has a weakly convergent subsequence.

A weakly compact subset $A$ in $\ell^1$ is norm-compact. Indeed, every sequence in $A$ has weakly convergent subsequences by Eberlein–Šmulian, and these are norm convergent by the Schur property of $\ell^1$.

Schauder bases A Schauder basis in a Banach space $X$ is a sequence $(e_n)_{n \geq 0}$ of vectors in $X$ with the property that for every vector $x \in X$, there exist uniquely defined scalars $(x_n)_{n \geq 0}$ depending on $x$, such that $x = \sum_{n=0}^{\infty} x_n e_n$. Banach spaces with a Schauder basis are necessarily separable, because the countable set of finite linear combinations with rational coefficients (say) is dense. It follows from the Banach–Steinhaus theorem that the linear mappings $P_n : x \mapsto \sum_{k=0}^{n} x_k e_k$ are uniformly bounded by some constant $C$. Let $(e_n^*)$ denote the coordinate functionals which assign to every $x$ in $X$ the coordinate $x_n$ of $x$ in the above expansion. They are called biorthogonal functionals. When the basis vectors have norm $1$, the coordinate functionals $(e_n^*)$ have norm $\leq 2C$ in the dual of $X$.

Most classical separable spaces have explicit bases. The Haar system $(h_n)$ is a basis for $L^p([0, 1])$, $1 \leq p < \infty$. The trigonometric system is a basis in $L^p(\mathbf{T})$ when $1 < p < \infty$. The Schauder system is a basis in the space $C([0, 1])$. The question of whether the disk algebra $A(\mathbf{D})$ has a basis remained open for more than forty years, until Bočkarev showed in 1974 that $A(\mathbf{D})$ admits a basis constructed from the Franklin system.

Since every vector $x$ in a Banach space $X$ with a basis is the limit of $P_n(x)$, with $P_n$ of finite rank and uniformly bounded, the space $X$ satisfies the bounded approximation property. The first example by Enflo of a space failing the approximation property was at the same time the first example of a separable Banach space without a Schauder basis. Robert C.
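The convergence $P_n(x) \to x$ of the partial-sum projections can be illustrated with the unit vector basis of $\ell^2$, truncated to finitely many coordinates (the vector $x$ below is an arbitrary example, not from the article):

```python
import numpy as np

# Schauder basis illustration with the unit vector basis of l^2 (truncated to N
# coordinates): the partial-sum projections P_n x = sum_{k<=n} x_k e_k converge
# to x in norm; here each P_n has norm exactly 1.
N = 100_000
x = 1.0 / np.arange(1, N + 1)               # x = (1, 1/2, 1/3, ...) in l^2

for n in [10, 100, 1000, 10_000]:
    tail = np.linalg.norm(x[n:])            # ||x - P_n x|| = norm of the tail
    print(f"||x - P_{n} x|| = {tail:.6f}")  # decreases toward 0
```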
James characterized reflexivity in Banach spaces with a basis: the space $X$ with a Schauder basis is reflexive if and only if the basis is both shrinking and boundedly complete. In this case, the biorthogonal functionals form a basis of the dual of $X$.

Tensor product Let $X$ and $Y$ be two $\mathbb{K}$-vector spaces. The tensor product $X \otimes Y$ of $X$ and $Y$ is a $\mathbb{K}$-vector space $Z$ with a bilinear mapping $T : X \times Y \to Z$ which has the following universal property: if $T_1 : X \times Y \to Z_1$ is any bilinear mapping into a $\mathbb{K}$-vector space $Z_1$, then there exists a unique linear mapping $f : Z \to Z_1$ such that $T_1 = f \circ T$. The image under $T$ of a couple $(x, y)$ in $X \times Y$ is denoted by $x \otimes y$ and called a simple tensor. Every element $z$ in $X \otimes Y$ is a finite sum of such simple tensors.

There are various norms that can be placed on the tensor product of the underlying vector spaces, amongst others the projective cross norm and the injective cross norm introduced by A. Grothendieck in 1955. In general, the tensor product of complete spaces is not complete again. When working with Banach spaces, it is customary to say that the projective tensor product of two Banach spaces $X$ and $Y$ is the completion $X \widehat{\otimes}_{\pi} Y$ of the algebraic tensor product $X \otimes Y$ equipped with the projective tensor norm, and similarly for the injective tensor product $X \widehat{\otimes}_{\varepsilon} Y$. Grothendieck proved in particular that $C(K) \widehat{\otimes}_{\varepsilon} Y \cong C(K, Y)$ and $L^1([0, 1]) \widehat{\otimes}_{\pi} Y \cong L^1([0, 1], Y)$, where $K$ is a compact Hausdorff space, $C(K, Y)$ the Banach space of continuous functions from $K$ to $Y$, and $L^1([0, 1], Y)$ the space of Bochner-measurable and integrable functions from $[0, 1]$ to $Y$, and where the isomorphisms are isometric. The two isomorphisms above are the respective extensions of the map sending the tensor $f \otimes y$ to the vector-valued function $s \mapsto f(s) y$.

Tensor products and the approximation property Let $X$ be a Banach space. The tensor product $X' \widehat{\otimes}_{\varepsilon} X$ is identified isometrically with the closure in $B(X)$ of the set of finite rank operators. When $X$ has the approximation property, this closure coincides with the space of compact operators on $X$. For every Banach space $Y$, there is a natural norm $1$ linear map $Y \widehat{\otimes}_{\pi} X \to Y \widehat{\otimes}_{\varepsilon} X$ obtained by extending the identity map of the algebraic tensor product. Grothendieck related the approximation problem to the question of whether this map is one-to-one when $Y$ is the dual of $X$. Precisely, for every Banach space $X$, the map $X' \widehat{\otimes}_{\pi} X \to X' \widehat{\otimes}_{\varepsilon} X$ is one-to-one if and only if $X$ has the approximation property.

Grothendieck conjectured that $X \widehat{\otimes}_{\pi} Y$ and $X \widehat{\otimes}_{\varepsilon} Y$ must be different whenever $X$ and $Y$ are infinite-dimensional Banach spaces. This was disproved by Gilles Pisier in 1983. Pisier constructed an infinite-dimensional Banach space $X$ such that $X \widehat{\otimes}_{\pi} X$ and $X \widehat{\otimes}_{\varepsilon} X$ are equal. Furthermore, just as with Enflo's example, this space $X$ is a "hand-made" space that fails to have the approximation property. On the other hand, Szankowski proved that the classical space $B(\ell^2)$ does not have the approximation property.

Some classification results Characterizations of Hilbert space among Banach spaces A necessary and sufficient condition for the norm of a Banach space $X$ to be associated to an inner product is the parallelogram identity: $\|x + y\|^2 + \|x - y\|^2 = 2 \left( \|x\|^2 + \|y\|^2 \right)$ for all $x, y \in X$. It follows, for example, that the Lebesgue space $L^p([0, 1])$ is a Hilbert space only when $p = 2$. If this identity is satisfied, the associated inner product is given by the polarization identity. In the case of real scalars, this gives: $\langle x, y \rangle = \tfrac{1}{4} \left( \|x + y\|^2 - \|x - y\|^2 \right)$. For complex scalars, defining the inner product so as to be $\mathbb{R}$-linear in $x$, antilinear in $y$, the polarization identity gives: $\langle x, y \rangle = \tfrac{1}{4} \left( \|x + y\|^2 - \|x - y\|^2 + i \left( \|x + iy\|^2 - \|x - iy\|^2 \right) \right)$.

To see that the parallelogram law is sufficient, one observes in the real case that $\langle x, y \rangle$ is symmetric, and in the complex case, that it satisfies the Hermitian symmetry property and $\langle ix, y \rangle = i \langle x, y \rangle$. The parallelogram law implies that $\langle x, y \rangle$ is additive in $x$. It follows that it is linear over the rationals, thus linear by continuity.

Several characterizations of spaces isomorphic (rather than isometric) to Hilbert spaces are available.
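The complex polarization identity above can be verified numerically: the inner product is recovered from norms alone. A short sanity check with random vectors (an illustration added here, not part of the original article):

```python
import numpy as np

# Numeric check of the complex polarization identity: the inner product of a
# Hilbert space is recovered from its norm alone.
rng = np.random.default_rng(2)
x = rng.normal(size=5) + 1j * rng.normal(size=5)
y = rng.normal(size=5) + 1j * rng.normal(size=5)

def nsq(v):
    return np.linalg.norm(v) ** 2

# <x, y> is taken linear in x, antilinear in y: <x, y> = sum_k x_k * conj(y_k).
polarized = 0.25 * (nsq(x + y) - nsq(x - y) + 1j * (nsq(x + 1j * y) - nsq(x - 1j * y)))
direct = np.vdot(y, x)                 # np.vdot conjugates its first argument
print(np.allclose(polarized, direct))  # True
```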
The parallelogram law can be extended to more than two vectors, and weakened by the introduction of a two-sided inequality with a constant $c \geq 1$: Kwapień proved that if $c^{-2} \sum_{k=1}^{n} \|x_k\|^2 \leq \operatorname{Ave}_{\pm} \Big\| \sum_{k=1}^{n} \pm x_k \Big\|^2 \leq c^2 \sum_{k=1}^{n} \|x_k\|^2$ for every integer $n$ and all families of vectors $x_1, \ldots, x_n \in X$, then the Banach space $X$ is isomorphic to a Hilbert space. Here, $\operatorname{Ave}_{\pm}$ denotes the average over the $2^n$ possible choices of signs $\pm 1$. In the same article, Kwapień proved that the validity of a Banach-valued Parseval's theorem for the Fourier transform characterizes Banach spaces isomorphic to Hilbert spaces.

Lindenstrauss and Tzafriri proved that a Banach space in which every closed linear subspace is complemented (that is, is the range of a bounded linear projection) is isomorphic to a Hilbert space. The proof rests upon Dvoretzky's theorem about Euclidean sections of high-dimensional centrally symmetric convex bodies. In other words, Dvoretzky's theorem states that for every integer $n$, any finite-dimensional normed space, with dimension sufficiently large compared to $n$, contains subspaces nearly isometric to the $n$-dimensional Euclidean space.

The next result gives the solution of the so-called homogeneous space problem. An infinite-dimensional Banach space $X$ is said to be homogeneous if it is isomorphic to all its infinite-dimensional closed subspaces. A Banach space isomorphic to $\ell^2$ is homogeneous, and Banach asked for the converse. An infinite-dimensional Banach space is hereditarily indecomposable when no subspace of it can be isomorphic to the direct sum of two infinite-dimensional Banach spaces. The Gowers dichotomy theorem asserts that every infinite-dimensional Banach space $X$ contains either a subspace $Y$ with an unconditional basis, or a hereditarily indecomposable subspace $Z$; in the latter case, $Z$ is not isomorphic to its closed hyperplanes. If $X$ is homogeneous, it must therefore have an unconditional basis. It follows then from the partial solution obtained by Komorowski and Tomczak–Jaegermann, for spaces with an unconditional basis, that $X$ is isomorphic to $\ell^2$.

Metric classification If $T : X \to Y$ is an isometry from the Banach space $X$ onto the Banach space $Y$ (where both $X$ and $Y$ are vector spaces over $\mathbb{R}$), then the Mazur–Ulam theorem states that $T$ must be an affine transformation. In particular, if $T$ maps the zero of $X$ to the zero of $Y$, then $T$ must be linear. This result implies that the metric in Banach spaces, and more generally in normed spaces, completely captures their linear structure.

Topological classification Finite-dimensional Banach spaces are homeomorphic as topological spaces if and only if they have the same dimension as real vector spaces. The Anderson–Kadec theorem (1965–66) proves that any two infinite-dimensional separable Banach spaces are homeomorphic as topological spaces. Kadec's theorem was extended by Toruńczyk, who proved that any two Banach spaces are homeomorphic if and only if they have the same density character, the minimum cardinality of a dense subset.

Spaces of continuous functions When two compact Hausdorff spaces $K_1$ and $K_2$ are homeomorphic, the Banach spaces $C(K_1)$ and $C(K_2)$ are isometric. Conversely, when $K_1$ is not homeomorphic to $K_2$, the (multiplicative) Banach–Mazur distance between $C(K_1)$ and $C(K_2)$ must be greater than or equal to $2$; see above the results by Amir and Cambern. Although uncountable compact metric spaces can have different homeomorphy types, one has the following result due to Milutin: The situation is different for countably infinite compact Hausdorff spaces. Every countably infinite compact $K$ is homeomorphic to some closed interval of ordinal numbers $\langle 1, \alpha \rangle = \{ \gamma : 1 \leq \gamma \leq \alpha \}$ equipped with the order topology, where $\alpha$ is a countably infinite ordinal. The Banach space $C(K)$ is then isometric to $C(\langle 1, \alpha \rangle)$.
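In a Hilbert space, Kwapień's two-sided inequality holds with $c = 1$, because averaging $\|\sum_k \pm x_k\|^2$ over all $2^n$ sign choices cancels the cross terms exactly, leaving $\sum_k \|x_k\|^2$. A brute-force check of this identity in $\mathbb{R}^4$ (example data only, an illustration added to this edit):

```python
import itertools
import numpy as np

# In a Hilbert space the sign-average identity holds exactly: the average of
# ||sum_k eps_k x_k||^2 over all 2^n sign patterns equals sum_k ||x_k||^2,
# so Kwapien's condition is satisfied with c = 1.
rng = np.random.default_rng(3)
n = 6
xs = rng.normal(size=(n, 4))          # n random vectors x_1..x_n in R^4

total = 0.0
for signs in itertools.product([1.0, -1.0], repeat=n):  # all 2^n sign patterns
    total += np.linalg.norm(np.array(signs) @ xs) ** 2
average = total / 2 ** n

print(np.isclose(average, np.sum(np.linalg.norm(xs, axis=1) ** 2)))  # True
```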
When $\alpha$ and $\beta$ are two countably infinite ordinals with $\alpha \leq \beta$, the spaces $C(\langle 1, \alpha \rangle)$ and $C(\langle 1, \beta \rangle)$ are isomorphic if and only if $\beta < \alpha^{\omega}$. For example, the Banach spaces $C(\langle 1, \omega \rangle)$, $C(\langle 1, \omega^{\omega} \rangle)$, $C(\langle 1, \omega^{\omega^2} \rangle), \ldots$ are mutually non-isomorphic.

Examples Derivatives Several concepts of a derivative may be defined on a Banach space. See the articles on the Fréchet derivative and the Gateaux derivative for details. The Fréchet derivative allows for an extension of the concept of a total derivative to Banach spaces. The Gateaux derivative allows for an extension of a directional derivative to locally convex topological vector spaces. Fréchet differentiability is a stronger condition than Gateaux differentiability. The quasi-derivative is another generalization of the directional derivative that implies a stronger condition than Gateaux differentiability, but a weaker condition than Fréchet differentiability.

Generalizations Several important spaces in functional analysis, for instance the space of all infinitely differentiable functions $\mathbb{R} \to \mathbb{R}$ or the space of all distributions on $\mathbb{R}$, are complete but are not normed vector spaces and hence not Banach spaces. In Fréchet spaces one still has a complete metric, while LF-spaces are complete uniform vector spaces arising as limits
created by Elizabeth Miller and Robert Eighteen-Bisang in 1998. Stoker at The London Library Stoker was a member of The London Library and it is here that he conducted much of the research for Dracula. In 2018, the Library discovered some of the books that Stoker used for his research, complete with notes and marginalia. Death After suffering a number of strokes, Stoker died at No. 26 St George's Square, London on 20 April 1912. Some biographers attribute the cause of death to overwork, others to tertiary syphilis. His death certificate named the cause of death as "Locomotor ataxia 6 months", presumed to be a reference to syphilis. He was cremated, and his ashes were placed in a display urn at Golders Green Crematorium in north London. The ashes of Irving Noel Stoker, the author's son, were added to his father's urn following his death in 1961. The original plan had been to keep his parents' ashes together, but after Florence Stoker's death, her ashes were scattered at the Gardens of Rest. Beliefs and philosophy Stoker was raised a Protestant in the Church of Ireland. He was a strong supporter of the Liberal Party and took a keen interest in Irish affairs. As a "philosophical home ruler", he supported Home Rule for Ireland brought about by peaceful means. He remained an ardent monarchist who believed that Ireland should remain within the British Empire, an entity that he saw as a force for good. He was an admirer of Prime Minister William Ewart Gladstone, whom he knew personally, and supported his plans for Ireland. Stoker believed in progress and took a keen interest in science and science-based medicine. Some of Stoker's novels represent early examples of science fiction, such as The Lady of the Shroud (1909). He had a writer's interest in the occult, notably mesmerism, but despised fraud and believed in the superiority of the scientific method over superstition. Stoker counted among his friends J.W. Brodie-Innis, a member of the Hermetic Order of the Golden Dawn, and hired member Pamela Colman Smith as an artist for the Lyceum Theatre, but no evidence suggests that Stoker ever joined the Order himself. Although Irving was an active Freemason, no evidence has been found of Stoker taking part in Masonic activities in London. The Grand Lodge of Ireland also has no record of his membership. Posthumous The short story collection Dracula's Guest and Other Weird Stories was published in 1914 by Stoker's widow, Florence Stoker, who was also his literary executrix. The first film adaptation of Dracula was F. W. Murnau's Nosferatu, released in 1922, with Max Schreck starring as Count Orlok. Florence Stoker eventually sued the filmmakers, and was represented by the attorneys of the British Incorporated Society of Authors. Her chief legal complaint was that she had neither been asked for permission for the adaptation nor paid any royalty. The case dragged on for some years, with Mrs. Stoker demanding the destruction of the negative and all prints of the film. The suit was finally resolved in the widow's favour in July 1925. A single print of the film survived, however, and it has become well known. The first authorised film version of Dracula did not come about until almost a decade later when Universal Studios released Tod Browning's Dracula starring Bela Lugosi. 
Dacre Stoker Canadian writer Dacre Stoker, a great-grandnephew of Bram Stoker, decided to write "a sequel that bore the Stoker name" to "reestablish creative control over" the original novel, with encouragement from screenwriter Ian Holt, because of the Stokers' frustrating history with Dracula's copyright. In 2009, Dracula: The Un-Dead was released, written by Dacre Stoker and Ian Holt. Both writers "based [their work] on Bram Stoker's own handwritten notes for characters and plot threads excised from the original edition" along with their own research for the sequel. This also marked Dacre Stoker's writing debut. In spring 2012, Dacre Stoker (in collaboration with Elizabeth Miller) presented the "lost" Dublin Journal written by Bram Stoker, which had been kept by his great-grandson Noel Dobbs. Stoker's diary entries shed light on the issues that concerned him before his London years. A remark about a boy who caught flies in a bottle might be a clue for the later development of the Renfield character in Dracula. Commemorations On 8 November 2012, Stoker was honoured with a Google Doodle on Google's homepage commemorating the 165th anniversary of his birth. An annual festival takes place in Dublin, the birthplace of Bram Stoker, in honour of his literary achievements. The 2014 Bram Stoker Festival encompassed literary, film, family, street, and outdoor events, and ran from 24 to 27 October in Dublin. The festival is supported by the Bram Stoker Estate and funded by Dublin City Council and Fáilte Ireland. Bibliography Novels The Primrose Path (1875) The Snake's Pass (1890) The Watter's Mou' (1895) The Shoulder of Shasta (1895) Dracula (1897) Miss Betty (1898) The Mystery of the Sea (1902) The Jewel of Seven Stars (1903, revised 1912) The Man (1905); issued also as The Gates of Life Lady Athlyne (1908) The Lady of the Shroud (1909) The Lair of the White Worm (1911, posthumously abridged 1925); issued also as The Garden of Evil Seven Golden Buttons (written in 1891, much material reused in Miss Betty; posthumously published in 2015) Short story collections Under the Sunset (1881) – eight fairy tales for children Snowbound: The Record of a Theatrical Touring Party (1908) Dracula's Guest and Other Weird Stories (1914) Uncollected stories Non-fiction The Duties of Clerks of Petty Sessions in Ireland (1879) A Glimpse of America (1886) Personal Reminiscences of Henry Irving (1906) Famous Impostors (1910) Great Ghost Stories (1998) (Compiled by Peter Glassman, Illustrated by Barry Moser) Bram Stoker's Notes for Dracula: A Facsimile Edition (2008) Bram Stoker Annotated and Transcribed by Robert Eighteen-Bisang and Elizabeth Miller, Foreword by Michael Barsanti. Jefferson, NC & London: McFarland. Articles "Recollections of the Late W. G. Wills", The Graphic, 19 December 1891 "The Art of Ellen Terry", The Playgoer, October 1901 "The Question of a National Theatre", The Nineteenth Century and After, Vol. LXIII, January/June 1908 "Mr. De Morgan's Habits of Work", The World's Work, Vol. XVI, May/October 1908 "The Censorship of Fiction", The Nineteenth Century and After, Vol. LXIV, July/December 1908 "The Censorship of Stage Plays", The Nineteenth Century and After, Vol. LXVI, July/December 1909 "Irving and Stage Lightning", The Nineteenth Century and After, Vol. LXIX, January/June 1911 Critical works on Stoker William Hughes, Beyond Dracula: Bram Stoker's Fiction and Its Cultural Context (Palgrave, 2000) Belford, Barbara. Bram Stoker: A Biography of the Author of Dracula.
London: Weidenfeld and Nicolson, 1996. Hopkins, Lisa. Bram Stoker: A Literary Life. Basingstoke, England: Palgrave Macmillan, 2007. Murray, Paul. From the Shadow of Dracula: A Life of Bram Stoker. London: Jonathan Cape, 2004. Senf, Carol. Science and Social Science in Bram Stoker's Fiction (Greenwood, 2002). Senf, Carol. Dracula: Between Tradition and Modernism (Twayne, 1998). Senf, Carol A. Bram Stoker (University of Wales Press, 2010). Shepherd, Mike. When Brave Men Shudder: The Scottish Origins of Dracula (Wild Wolf Publishing, 2018). Skal, David J. Something in the Blood: The Untold Story of Bram Stoker (Liveright, 2016). Bibliographies William Hughes, Bram Stoker – Victorian Fiction Research Guide References External links h2g2 article on Bram Stoker Archival material at
series Billions (film), a 1920 silent comedy Billion (company), a Taiwanese modem manufacturer Jack Billion (born 1939), 2006 Democratic Party candidate for governor of South Dakota Mr. Billion, a 1977 film by Jonathan Kaplan "Billions" (song), a song on Russell Dickerson's album Yours "Billion", a song
many tricks the partnership receiving the contract (the declaring side) needs to take to receive points for the deal. During the auction, partners endeavor to exchange information about their hands, including overall strength and distribution of the suits. The cards are then played, the declaring side trying to fulfill the contract, and the defenders trying to stop the declaring side from achieving its goal. The deal is scored based on the number of tricks taken, the contract, and various other factors which depend to some extent on the variation of the game being played. Rubber bridge is the most popular variation for casual play, but most club and tournament play involves some variant of duplicate bridge, in which the cards are not re-dealt on each occasion, but the same deal is played by two or more sets of players (or "tables") to enable comparative scoring.

History and etymology Bridge is a member of the family of trick-taking games and is a derivative of whist, which had become the dominant such game and enjoyed a loyal following for centuries. The idea of a trick-taking 52-card game has its first documented origins in Italy and France. The French physician and author Rabelais (1493–1553) mentions a game called "La Triomphe" in one of his works. In 1526 the Italian Francesco Berni wrote the oldest known (as of 1960) textbook on a game very similar to whist, known as "Triomfi". Also, a Spanish textbook in Latin from the first half of the 16th century, "Triumphens Historicus", deals with the same subject.

Bridge departed from whist with the creation of "Biritch" in the 19th century, and evolved through the late 19th and early 20th centuries to form the present game. The first rule book for bridge, dated 1886, is Biritch, or Russian Whist, written by John Collinson, an English financier working in Ottoman Constantinople (now Istanbul). It and his subsequent letter to The Saturday Review dated 28 May 1906 document the origin of Biritch as being the Russian community in Constantinople. The word biritch is thought to be a transliteration of the Russian word Бирюч (бирчий, бирич), an occupation of a diplomatic clerk or an announcer. Another theory is that British soldiers invented the game bridge while serving in the Crimean War, and named it after the Galata Bridge, which they crossed on their way to a coffeehouse to play cards.

Biritch had many significant bridge-like developments: dealer chose the trump suit, or nominated his partner to do so; there was a call of no trumps (biritch); dealer's partner's hand became dummy; points were scored above and below the line; game was 3NT, 4 hearts and 5 diamonds (although 8 club odd tricks and 15 spade odd tricks were needed); the score could be doubled and redoubled; and there were slam bonuses. It has some features in common with solo whist. This game, and variants of it known as "bridge" and "bridge whist", became popular in the United States and the United Kingdom in the 1890s despite the long-established dominance of whist. Its breakthrough was its acceptance in 1894 by Lord Brougham at London's Portland Club.

In 1904 auction bridge was developed, in which the players bid in a competitive auction to decide the contract and declarer. The object became to make at least as many tricks as were contracted for, and penalties were introduced for failing to do so. In auction bridge, bidding beyond winning the auction is pointless: if taking all 13 tricks, there is no difference in score between a 1-level and a 7-level final bid, as no bonus for game, small slam or grand slam exists.
The modern game of contract bridge was the result of innovations to the scoring of auction bridge by Harold Stirling Vanderbilt and others. The most significant change was that only the tricks contracted for were scored below the line toward game or a slam bonus, a change that resulted in bidding becoming much more challenging and interesting. Also new was the concept of "vulnerability", making sacrifices to protect the lead in a rubber more expensive. The various scores were adjusted to produce a more balanced and interesting game. Vanderbilt set out his rules in 1925, and within a few years contract bridge had so supplanted other forms of the game that "bridge" became synonymous with "contract bridge".

The form of bridge played in clubs and tournaments is duplicate bridge, which is played at clubs, in tournaments and online. The number of people playing contract bridge has declined since its peak in the 1940s, when a survey found it was played in 44% of US households. The game is still widely played, especially amongst retirees, and in 2005 the ACBL estimated there were 25 million players in the US.

Rules Overview Bridge is a four-player partnership trick-taking game with thirteen tricks per deal. The dominant variations of the game are rubber bridge, more common in social play, and duplicate bridge, which enables comparative scoring in tournament play. Each player is dealt thirteen cards from a standard 52-card deck. A trick starts when a player leads, i.e. plays the first card. The leader to the first trick is determined by the auction; the leader to each subsequent trick is the player who won the preceding trick. Each player, in a clockwise order, plays one card on the trick. Players must play a card of the same suit as the original card led, unless they have none (said to be "void"), in which case they may play any card. The player who played the highest-ranked card wins the trick, as sketched in code at the end of this passage. Within a suit, the ace is ranked highest, followed by the king, queen and jack, and then the ten through to the two. In a deal where the auction has determined that there is no trump suit, the trick must be won by a card of the suit led. However, in a deal where there is a trump suit, cards of that suit are superior in rank to the cards of any other suit. If one or more players plays a trump to a trick when void in the suit led, the highest trump wins. For example, if the trump suit is spades and a player is void in the suit led and plays a spade card, they win the trick if no other player plays a higher spade. If a trump suit is led, the usual rule for trick-taking applies.

Unlike its predecessor, whist, the goal of bridge is not simply to take the most tricks in a deal. Instead, the goal is to successfully estimate how many tricks one's partnership can take. To illustrate this, the simpler partnership trick-taking game of spades has a similar mechanism: the usual trick-taking rules apply with the trump suit being spades, but at the beginning of the game, players bid or estimate how many tricks they can win, and the numbers of tricks bid by both players in a partnership are added. If a partnership takes at least that many tricks, they receive points for the round; otherwise, they receive penalty points. Bridge extends the concept of bidding into an auction, where partnerships compete to take a contract, specifying how many tricks they will need to take in order to receive points, and also specifying the trump suit (or no trump, meaning that there will be no trump suit).
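The trick-winning rule described above is mechanical enough to state as code. The following Python sketch is an illustration only; the card representation and function name are assumptions of this edit, not taken from any standard bridge software:

```python
# Card ranks run 2..14 with 14 = ace; suits and players are single letters.
RANKS = {str(n): n for n in range(2, 10)} | {"10": 10, "J": 11, "Q": 12, "K": 13, "A": 14}

def trick_winner(cards, trump=None):
    """cards: list of (player, suit, rank) in play order; returns the winner.

    The suit led is the suit of the first card. A card can win only if it
    follows the led suit or is a trump; trumps beat all plain-suit cards.
    """
    led_suit = cards[0][1]
    def power(card):
        _, suit, rank = card
        if trump is not None and suit == trump:
            return (2, RANKS[rank])      # trumps outrank everything else
        if suit == led_suit:
            return (1, RANKS[rank])      # led suit beats discards
        return (0, RANKS[rank])          # a discard can never win
    return max(cards, key=power)[0]

# South leads the heart king; East, void in hearts, ruffs with a small spade and wins.
trick = [("S", "H", "K"), ("W", "H", "3"), ("N", "H", "A"), ("E", "S", "2")]
print(trick_winner(trick, trump="S"))    # E
```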
Players take turns to call in a clockwise order: each player in turn passes, doubles (which increases the penalties for not making the contract specified by the opposing partnership's last bid, but also increases the reward for making it), redoubles, or states a contract that their partnership will adopt, which must be higher than the previous highest bid, if any; this bid-ordering rule is sketched in code below. Eventually, the player who bid the highest contract (which is determined by the contract's level as well as the trump suit or no trump) wins the contract for their partnership. In the example auction below, the east–west pair secures the contract of 6; the auction concludes when there have been three successive passes. Note that six tricks are added to contract values, so the six-level contract would actually be a contract of twelve tricks. In practice, establishing a contract without enough information on the other partner's hand is difficult, so there exist many bidding systems assigning meanings to bids, with common ones including Standard American, Acol, and 2/1 game forcing. Contrast this with spades, where players only have to bid their own hand.

After the contract is decided, and the first lead is made, the declarer's partner (dummy) lays their cards face up on the table, and the declarer plays the dummy's cards as well as their own. The opposing partnership is called the defenders, and their goal is to stop the declarer from fulfilling his contract. Once all the cards have been played, the hand is scored: if the declaring side makes their contract, they receive points based on the level of the contract, with some trump suits being worth more points than others and no trump being the highest, as well as bonus points for overtricks. But if the declarer fails to fulfil the contract, the defenders receive points depending on the declaring side's undertricks (the number of tricks short of the contract) and whether the contract was doubled by the defenders.

Setup and dealing The four players sit in two partnerships with players sitting opposite their partners. A cardinal direction is assigned to each seat, so that one partnership sits in North and South, while the other sits in West and East. The cards may be freshly dealt or, in duplicate bridge games, pre-dealt. All that is needed in basic games are the cards and a method of keeping score, but there is often other equipment on the table, such as a board containing the cards to be played (in duplicate bridge), bidding boxes, or screens.

In rubber bridge, each player draws a card at the start of the game, with the player who drew the highest card dealing first. The player who drew the second highest card becomes the dealer's partner and takes the chair on the opposite side of the table. They play against the other two. The deck is shuffled and cut, usually by the player to the left of the dealer, before dealing. Players take turns to deal, in a clockwise order. The dealer deals the cards clockwise, one card at a time. Normally rubber bridge is played with two packs of cards, and whilst one pack is being dealt, the dealer's partner shuffles the other pack. After shuffling, the pack is placed on the right, ready for the next dealer. ("If you're not an idiot quite, put the cards on the right.") Before dealing, the next dealer passes the cards to the previous dealer, who cuts them. In duplicate bridge, the cards are pre-dealt, either by hand or by a computerized dealing machine, in order to allow for competitive scoring.
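The bid-ordering rule from the auction description can likewise be sketched in a few lines; the representation of bids here is an illustrative assumption:

```python
# A bid beats the previous one if its level is higher, or the level is equal
# and its denomination ranks higher in the order clubs < diamonds < hearts <
# spades < no trump.
DENOMINATIONS = ["C", "D", "H", "S", "NT"]   # ascending order

def bid_key(bid):
    """bid: (level, denomination), e.g. (1, 'NT') or (2, 'H')."""
    level, denom = bid
    return (level, DENOMINATIONS.index(denom))

def is_sufficient(new_bid, last_bid):
    """True if new_bid may legally be made over last_bid."""
    return last_bid is None or bid_key(new_bid) > bid_key(last_bid)

print(is_sufficient((2, "C"), (1, "NT")))   # True: higher level
print(is_sufficient((1, "S"), (1, "NT")))   # False: NT outranks spades at the same level
```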
Once dealt, the cards are placed in a device called a "board", having slots designated for each player's cardinal direction seating position. After a deal has been played, players return their cards to the appropriate slot in the board, ready to be played by the next table.

Auction The dealer opens the auction and can make the first call, and the auction proceeds clockwise. When it is their turn to call, a player may pass (but can enter into the bidding later) or bid a contract, specifying the level of their contract and either the trump suit or no trump (the denomination), provided that it is higher than the last bid by any player, including their partner. All bids promise to take a number of tricks in excess of six, so a bid must be between one (seven tricks) and seven (thirteen tricks). A bid is higher than another bid if either the level is greater (e.g., 2 over 1NT) or the denomination is higher, with the denominations ranking in ascending order: clubs, diamonds, hearts, spades, and NT (no trump). Calls may be made orally, or with a bidding box, or digitally in online bridge. If the last bid was by the opposing partnership, one may also double the opponents' bid, increasing the penalties for undertricks, but also increasing the reward for making the contract. Doubling does not carry to future bids by the opponents unless future bids are doubled again. A player on the opposing partnership being doubled may also redouble, which increases the penalties and rewards further. Players may not see their partner's hand during the auction, only their own. There exist many bidding conventions that assign agreed meanings to various calls to assist players in reaching an optimal contract (or obstructing the opponents). The auction ends when, after a player bids, doubles, or redoubles, every other player has passed, in which case the action proceeds to the play; or every player has passed and no bid has been made, in which case the round is considered to be "passed out" and not played.

Play The player from the declaring side who first bid the denomination named in the final contract becomes declarer. The player to the left of the declarer leads to the first trick. Dummy then lays his or her cards face up on the table, organized in columns by suit. Play proceeds clockwise, with each player required to follow suit if possible. Tricks are won by the highest trump, or if none were played, the highest card of the led suit. The player who won the previous trick leads to the next trick. The declarer has control of the dummy's cards and tells his partner which card to play at dummy's turn. There also exist conventions that communicate further information between defenders about their hands during the play.

At any time, a player may claim, stating that their side will win a specific number of the remaining tricks. The claiming player lays his cards down on the table and explains the order in which he intends to play the remaining cards. The opponents can either accept the claim, in which case the round is scored accordingly, or dispute it. If the claim is disputed, play continues with the claiming player's cards face up in rubber games; in duplicate games, play ceases and the tournament director is called to adjudicate the hand.

Scoring At the end of the hand, points are awarded to the declaring side if they make the contract, or else to the defenders. Partnerships can be vulnerable, increasing the rewards for making the contract, but also increasing the penalties for undertricks.
In rubber bridge, if a side has won 100 contract points, they have won a and are vulnerable for the remaining rounds, but in duplicate bridge, vulnerability is predetermined based on the number of each board. If the declaring side makes their contract, they receive points for , or tricks bid and made in excess of six. In both rubber and duplicate bridge, the declaring side is awarded 20 points per odd trick for a contract in clubs or diamonds, and 30 points per odd trick for a contract in hearts or spades. For a contract in notrump, the declaring side is awarded 40 points for the first odd trick and 30 points for the remaining odd tricks. Contract points are doubled or quadrupled if the contract is respectively doubled or redoubled. In rubber bridge, a partnership wins one game once it has accumulated 100 contract points; excess contract points do not carry over to the next game. A partnership that wins two games wins the rubber, receiving a bonus of 500 points if the opponents have won a game, and 700 points if they have not. Overtricks score the same number of points per odd trick, although their doubled and redoubled values differ. Bonuses vary between the two bridge variations both in score and in type (for example, rubber bridge awards a bonus for holding a certain combination of high cards), although some are common between the two. A larger bonus is awarded if the declaring side makes a small slam or grand slam, a contract of 12 or 13 tricks respectively. If the declaring side is not vulnerable, a small slam gets 500 points, and a grand slam 1000 points. If the declaring side is vulnerable, a small slam is 750 points and a grand slam is 1,500. In rubber bridge, the rubber finishes when a partnership has won two games, but the partnership receiving the most overall points wins the rubber. Duplicate bridge is scored comparatively, meaning that the score for the hand is compared to other tables playing the same cards and match points are scored according to the comparative results: usually either "matchpoint scoring", where each partnership receives 2 points (or 1 point) for each pair that they beat, and 1 point (or point) for each tie; or IMPs (international matchpoint) scoring, where the number of IMPs varies (but less than proportionately) with the points difference between the teams. Undertricks are scored in both variations as follows: Rules The rules of the game are referred to as the laws as promulgated by various bridge organizations. Laws of duplicate bridge The official rules of duplicate bridge are promulgated by the WBF as "The Laws of Duplicate Bridge 2017". The Laws Committee of the WBF, composed of world experts, updates the Laws every 10 years; it also issues a Laws Commentary advising on interpretations it has rendered. In addition to the basic rules of play, there are many additional rules covering playing conditions and the rectification of irregularities, which are primarily for use by tournament directors who act as referees and have overall control of procedures during competitions. But various details of procedure are left to the discretion of the zonal bridge organisation for tournaments under their aegis and some (for example, the choice of movement) to
void in the suit led, the highest trump wins. For example, if the trump suit is spades and a player is void in the suit led and plays a spade card, they win the trick if no other player plays a higher spade. If a trump suit is led, the usual rule for trick-taking applies. Unlike its predecessor, whist, the goal of bridge is not simply to take the most tricks in a deal. Instead, the goal is to successfully estimate how many tricks one's partnership can take. To illustrate this, the simpler partnership trick-taking game of spades has a similar mechanism: the usual trick-taking rules apply with the trump suit being spades, but at the beginning of the game, players bid or estimate how many tricks they can win, and the tricks bid by both players in a partnership are added together. If a partnership takes at least that many tricks, they receive points for the round; otherwise, they receive penalty points. Bridge extends the concept of bidding into an auction, where partnerships compete to take a contract, specifying how many tricks they will need to take in order to receive points, and also specifying the trump suit (or no trump, meaning that there will be no trump suit). Players take turns to call in a clockwise order: each player in turn either passes, doubles (which increases the penalties for not making the contract specified by the opposing partnership's last bid, but also increases the reward for making it) or redoubles, or states a contract that their partnership will adopt, which must be higher than the previous highest bid (if any). Eventually, the player who bid the highest contract (which is determined by the contract's level as well as the trump suit or no trump) wins the contract for their partnership. In an example auction, the east–west pair secures the contract of 6; the auction concludes when there have been three successive passes. Note that six tricks are added to contract values, so the six-level contract would actually be a contract of twelve tricks. In practice, establishing a contract without enough information on the other partner's hand is difficult, so there exist many bidding systems assigning meanings to bids, with common ones including Standard American, Acol, and 2/1 game forcing. This contrasts with spades, where players only have to bid their own hands. After the contract is decided, and the first lead is made, the declarer's partner (dummy) lays their cards face up on the table, and the declarer plays the dummy's cards as well as their own. The opposing partnership is called the defenders, and their goal is to stop the declarer from fulfilling his contract. Once all the cards have been played, the hand is scored: if the declaring side makes their contract, they receive points based on the level of the contract, with some trump suits being worth more points than others and no trump being the highest, as well as bonus points for overtricks. But if the declarer fails to fulfil the contract, the defenders receive points depending on the declaring side's undertricks (the number of tricks short of the contract) and whether the contract was doubled by the defenders. Setup and dealing The four players sit in two partnerships with players sitting opposite their partners. A cardinal direction is assigned to each seat, so that one partnership sits in North and South, while the other sits in West and East. The cards may be freshly dealt or, in duplicate bridge games, pre-dealt.
All that is needed in basic games are the cards and a method of keeping score, but there is often other equipment on the table, such as a board containing the cards to be played (in duplicate bridge), bidding boxes, or screens. In rubber bridge, each player draws a card at the start of the game, with the player who drew the highest card dealing first. The player who drew the second-highest card becomes the dealer's partner and takes the chair on the opposite side of the table. They play against the other two. The deck is shuffled and cut, usually by the player to the left of the dealer, before dealing. Players take turns to deal, in a clockwise order. The dealer deals the cards clockwise, one card at a time. Normally rubber bridge is played with two packs of cards, and whilst one pack is being dealt, the dealer's partner shuffles the other pack. After shuffling, the pack is placed on the right, ready for the next dealer. ("If you're not an idiot quite, put the cards on the right.") Before dealing, the next dealer passes the cards to the previous dealer, who cuts them. In duplicate bridge, the cards are pre-dealt, either by hand or by a computerized dealing machine, in order to allow for competitive scoring. Once dealt, the cards are placed in a device called a "board", which has slots designated for each player's cardinal direction seating position. After a deal has been played, players return their cards to the appropriate slot in the board, ready to be played by the next table. Auction The dealer opens the auction and can make the first call, and the auction proceeds clockwise. When it is their turn to call, a player may pass (but can enter the bidding later) or bid a contract, specifying the level of their contract and either the trump suit or no trump (the denomination), provided that it is higher than the last bid by any player, including their partner. All bids promise to take a number of tricks in excess of six, so a bid must be between one (seven tricks) and seven (thirteen tricks). A bid is higher than another bid if either the level is greater (e.g., any bid at the two level over 1NT) or the denomination is higher, the denominations ranking in ascending order: clubs, diamonds, hearts, spades, and NT (no trump). Calls may be made orally, or with a bidding box, or digitally in online bridge. If the last bid was by the opposing partnership, one may also double the opponents' bid, increasing the penalties for undertricks, but also increasing the reward for making the contract. Doubling does not carry over to future bids by the opponents unless those bids are doubled again. A player on the partnership being doubled may also redouble, which increases the penalties and rewards further. Players may not see their partner's hand during the auction, only their own. There exist many bidding conventions that assign agreed meanings to various calls to assist players in reaching an optimal contract (or obstruct the opponents). The auction ends when, after a player bids, doubles, or redoubles, every other player has passed, in which case the action proceeds to the play; or every player has passed and no bid has been made, in which case the round is considered to be "passed out" and not played. Play The player from the declaring side who first bid the denomination named in the final contract becomes declarer. The player to the left of the declarer leads to the first trick. Dummy then lays his or her cards face up on the table, organized in columns by suit. Play proceeds clockwise, with each player required to follow suit if possible.
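The bid-ranking rule just described is mechanical enough to express in a few lines of code. The following is a minimal illustrative sketch (the names and data layout are ours, not from any bridge software): a bid is a level plus a denomination, bids compare first by level and then by denomination rank, and a contract at level L promises L + 6 tricks.

```python
# Minimal sketch of bridge bid ordering (illustrative only).
# Denominations rank clubs < diamonds < hearts < spades < no trump;
# a bid beats another if its level is higher, or the levels are equal
# and its denomination ranks higher.

DENOMINATIONS = ["C", "D", "H", "S", "NT"]  # ascending order

def bid_key(level: int, denom: str) -> tuple:
    """Sort key: compare by level first, then by denomination rank."""
    return (level, DENOMINATIONS.index(denom))

def is_higher(bid_a: tuple, bid_b: tuple) -> bool:
    """True if bid_a outranks bid_b, e.g. (2, 'C') outranks (1, 'NT')."""
    return bid_key(*bid_a) > bid_key(*bid_b)

def tricks_required(level: int) -> int:
    """A contract at a given level promises level + 6 tricks."""
    return level + 6

assert is_higher((2, "C"), (1, "NT"))  # a higher level always wins
assert is_higher((1, "NT"), (1, "S"))  # same level: NT outranks spades
assert tricks_required(6) == 12        # a six-level contract is twelve tricks
```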
Tricks are won by the highest trump, or if none were played, the highest card of the led suit. The player who won the previous trick leads to the next trick. The declarer has control of the dummy's cards and tells his partner which card to play at dummy's turn. There also exist conventions that communicate further information between defenders about their hands during the play. At any time, a player may claim, stating that their side will win a specific number of the remaining tricks. The claiming player lays his cards down on the table and explains the order in which he intends to play the remaining cards. The opponents can either accept the claim, in which case the round is scored accordingly, or dispute the claim. If the claim is disputed, play continues with the claiming player's cards face up in rubber games, or in duplicate games, play ceases and the tournament director is called to adjudicate the hand. Scoring At the end of the hand, points are awarded to the declaring side if they make the contract, or else to the defenders. Partnerships can be vulnerable, increasing the rewards for making the contract, but also increasing the penalties for undertricks. In rubber bridge, if a side has won 100 contract points, they have won a game and are vulnerable for the remaining rounds, but in duplicate bridge, vulnerability is predetermined based on each board's number. If the declaring side makes their contract, they receive points for odd tricks, or tricks bid and made in excess of six. In both rubber and duplicate bridge, the declaring side is awarded 20 points per odd trick for a contract in clubs or diamonds, and 30 points per odd trick for a contract in hearts or spades. For a contract in notrump, the declaring side is awarded 40 points for the first odd trick and 30 points for each remaining odd trick. Contract points are doubled or quadrupled if the contract is respectively doubled or redoubled. In rubber bridge, a partnership wins one game once it has accumulated 100 contract points; excess contract points do not carry over to the next game. A partnership that wins two games wins the rubber, receiving a bonus of 500 points if the opponents have won a game, and 700 points if they have not. Overtricks score the same number of points per odd trick, although their doubled and redoubled values differ. Bonuses vary between the two bridge variations both in score and in type (for example, rubber bridge awards a bonus for holding a certain combination of high cards), although some are common between the two. A larger bonus is awarded if the declaring side makes a small slam or grand slam, a contract of 12 or 13 tricks respectively. If the declaring side is not vulnerable, a small slam earns 500 points and a grand slam 1,000 points. If the declaring side is vulnerable, a small slam earns 750 points and a grand slam 1,500 points. In rubber bridge, the rubber finishes when a partnership has won two games, but the partnership receiving the most overall points wins the rubber. Duplicate bridge is scored comparatively, meaning that the score for the hand is compared to other tables playing the same cards and match points are scored according to the comparative results: usually either "matchpoint scoring", where each partnership receives 2 points (or 1 point) for each pair that they beat, and 1 point (or ½ point) for each tie; or IMP (international match point) scoring, where the number of IMPs varies (but less than proportionately) with the points difference between the teams.
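The contract-point arithmetic above (20 per odd trick in the minors, 30 in the majors, 40 then 30 in notrump, doubled values multiplied by two and redoubled by four, and slam bonuses by vulnerability) can be made concrete with a short sketch. This is not a complete bridge scorer; game and part-score bonuses are omitted, and the function names are ours.

```python
# Illustrative partial scorer for a made contract, using the values
# described in the text above.

def contract_points(level: int, denom: str, doubled: int = 0) -> int:
    """Contract points for level odd tricks; doubled=1 or 2 for x2 / x4."""
    if denom in ("C", "D"):          # clubs or diamonds: 20 per odd trick
        base = 20 * level
    elif denom in ("H", "S"):        # hearts or spades: 30 per odd trick
        base = 30 * level
    else:                            # notrump: 40 for the first, 30 thereafter
        base = 40 + 30 * (level - 1)
    return base * (2 ** doubled)

def slam_bonus(level: int, vulnerable: bool) -> int:
    """500/1,000 not vulnerable, 750/1,500 vulnerable, for small/grand slams."""
    if level == 6:
        return 750 if vulnerable else 500
    if level == 7:
        return 1500 if vulnerable else 1000
    return 0

assert contract_points(3, "NT") == 100            # 40 + 30 + 30: a game in one deal
assert contract_points(2, "H", doubled=1) == 120  # 60 points, doubled
assert slam_bonus(7, vulnerable=True) == 1500     # vulnerable grand slam
```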
Undertricks are scored in both variations as follows: undoubled undertricks cost the declaring side 50 points each when not vulnerable and 100 points each when vulnerable; doubled undertricks cost 100 for the first, 200 each for the second and third, and 300 each thereafter when not vulnerable, or 200 for the first and 300 each thereafter when vulnerable; redoubled undertricks cost twice the doubled values. Rules The rules of the game are referred to as the laws, as promulgated by various bridge organizations. Laws of duplicate bridge The official rules of duplicate bridge are promulgated by the WBF as "The Laws of Duplicate Bridge 2017". The Laws Committee of the WBF, composed of world experts, updates the Laws every 10 years; it also issues a Laws Commentary advising on interpretations it has rendered. In addition to the basic rules of play, there are many additional rules covering playing conditions and the rectification of irregularities, which are primarily for use by tournament directors, who act as referees and have overall control of procedures during competitions. However, various details of procedure are left to the discretion of the zonal bridge organisation for tournaments under its aegis, and some (for example, the choice of movement) to the sponsoring organisation (for example, the club). Some zonal organisations of the WBF also publish editions of the Laws. For example, the American Contract Bridge League (ACBL) publishes the Laws of Duplicate Bridge and additional documentation for club and tournament directors. Rules of rubber bridge There are no universally accepted rules for rubber bridge, but some zonal organisations have published their own. An example for those wishing to abide by a published standard is The Laws of Rubber Bridge as published by the American Contract Bridge League. The majority of rules mirror those of duplicate bridge in the bidding and play, and differ primarily in procedures for dealing and scoring. Laws of online play In 2001, the WBF promulgated a set of Laws for online play. Tournaments Bridge is a game of skill played with randomly dealt cards, which makes it also a game of chance, or more exactly, a tactical game with inbuilt randomness, imperfect knowledge and restricted communication. The chance element is in the deal of the cards; in duplicate bridge some of the chance element is eliminated by comparing results of multiple pairs in identical situations. This is achievable when there are eight or more players, sitting at two or more tables, and the deals from each table are preserved and passed to the next table, thereby duplicating them for the other table(s) of players. At the end of a session, the scores for each deal are compared, and the most points are awarded to the players doing the best with each particular deal. This measures relative skill (but still with an element of luck) because each pair or team is being judged only on the ability to bid with, and play, the same cards as other players. Duplicate bridge is played in clubs and tournaments, which can gather as many as several hundred players. Duplicate bridge is a mind sport, and its popularity gradually became comparable to that of chess, with which it is often compared for its complexity and the mental skills required for high-level competition. Bridge and chess are the only "mind sports" recognized by the International Olympic Committee, although they were not found eligible for the main Olympic program. In October 2017 the British High Court ruled against the English Bridge Union, finding that bridge is not a sport under a definition of sport as involving physical activity, but did not rule on the "broad, somewhat philosophical question" as to whether or not bridge is a sport. The basic premise of duplicate bridge had previously been used for whist matches as early as 1857.
Initially, bridge was not thought to be suitable for duplicate competition; it was not until the 1920s that (auction) bridge tournaments became popular. In 1925, when contract bridge first evolved, bridge tournaments were becoming popular, but the rules were somewhat in flux, and several different organizing bodies were involved in tournament sponsorship: the American Bridge League (formerly the American Auction Bridge League, which changed its name in 1929), the American Whist League, and the United States Bridge Association. In 1935, the first officially recognized world championship was held. In 1958, the World Bridge Federation (WBF) was founded to promote bridge worldwide, coordinate periodic revision to the Laws (every ten years; next in 2027) and conduct world championships. Bidding boxes and bidding screens In tournaments, "bidding boxes" are frequently used, as noted above. These avoid the possibility of players at other tables hearing any spoken bids. The bidding cards are laid out in sequence as the auction progresses. Although it is not a formal rule, many clubs adopt a protocol that the bidding cards stay revealed until the first playing card is tabled, after which point the bidding cards are put away. Bidding pads are an alternative to bidding boxes. A bidding pad is a block of 100 mm square tear-off sheets. Players write their bids on the top sheet. When the first trick is complete, the sheet is torn off and discarded. In top national and international events, "bidding screens" are used. These are placed diagonally across the table, preventing partners from seeing each other during the game; often the screen is removed after the auction is complete. Game strategy Bidding Much of the complexity in bridge arises from the difficulty of arriving at a good final contract in the auction (or deciding to let the opponents declare the contract). This is a difficult problem: the two players in a partnership must try to communicate enough information about their hands to arrive at a makeable contract, but the information they can exchange is restricted – information may be passed only by the calls made and later by the cards played, not by other means; in addition, the agreed-upon meaning of each call and play must be available to the opponents. Since a partnership that has freedom to bid gradually at leisure can exchange more information, and since a partnership that can interfere with the opponents' bidding (as by raising the bidding level rapidly) can cause difficulties for their opponents, bidding systems are both informational and strategic. It is this mixture of information exchange and evaluation, deduction, and tactics that is at the heart of bidding in bridge. A number of basic rules of thumb in bridge bidding and play are summarized as bridge maxims. Bidding systems and conventions A bidding system is a set of partnership agreements on the meanings of bids. A partnership's bidding system is usually made up of a core system, modified and complemented by specific conventions (optional customizations incorporated into the main system for handling specific bidding situations) which are pre-chosen between the partners prior to play. The line between a well-known convention and a part of a system is not always clear-cut: some bidding systems include specified conventions by default. Bidding systems can be divided into mainly natural systems such as Acol and Standard American, and mainly artificial systems such as the Precision Club and Polish Club.
Calls are usually considered to be either natural or conventional (artificial). A natural call carries a meaning that reflects the call: a natural bid intuitively shows hand or suit strength based on the level or suit of the bid, and a natural double expresses that the player believes that the opposing partnership will not make their contract. By contrast, a conventional (artificial) call offers and/or asks for information by means of pre-agreed coded interpretations, in which some calls convey very specific information or requests that are not part of the natural meaning of the call. Thus in response to 4NT, a "natural" bid of 5 diamonds would state a preference towards the diamond suit or a desire to play the contract in 5 diamonds, whereas if the partners have agreed to use the common Blackwood convention, a bid of 5 diamonds in the same situation would say nothing about the diamond suit, but tell the partner that the hand in question contains exactly one ace. Conventions are valuable in bridge because of the need to pass information beyond a simple like or dislike of a particular suit, and because the limited bidding space can be used more efficiently by adopting a conventional (artificial) meaning for a given call where a natural meaning would have less utility, either because the information it would convey is not valuable or because the desire to convey that information would arise only rarely. The conventional meaning conveys more useful (or more frequently useful) information. There are a very large number of conventions from which players can choose; many books have been written detailing bidding conventions.
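The natural-versus-conventional distinction can be illustrated with a tiny lookup. The sketch below uses the standard Blackwood responses to 4NT (5 clubs = 0 or 4 aces, 5 diamonds = 1, 5 hearts = 2, 5 spades = 3); the dictionary structure and function name are ours, purely for illustration.

```python
# Sketch of how a conventional call replaces a natural meaning, using the
# standard Blackwood responses to a 4NT ace-ask as the example. When the
# convention applies, a 5-level response encodes an ace count and says
# nothing about the suit actually bid.

BLACKWOOD_RESPONSES = {
    "5C": "0 or 4 aces",
    "5D": "1 ace",
    "5H": "2 aces",
    "5S": "3 aces",
}

def interpret(call: str, blackwood_active: bool) -> str:
    """Conventional meaning if Blackwood applies; otherwise a natural reading."""
    if blackwood_active and call in BLACKWOOD_RESPONSES:
        return BLACKWOOD_RESPONSES[call]
    return f"natural: willingness to play in {call}"

print(interpret("5D", blackwood_active=False))  # natural: willingness to play in 5D
print(interpret("5D", blackwood_active=True))   # 1 ace
```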
boat seen in Ancient Egypt, the birch bark canoe, the animal hide-covered kayak and coracle and the dugout canoe made from a single log. By the mid-19th century, many boats had been built with iron or steel frames but still planked in wood. In 1855 ferro-cement boat construction was patented by the French, who coined the name "ferciment". This is a system by which a steel or iron wire framework is built in the shape of a boat's hull and covered over with cement. Reinforced with bulkheads and other internal structure, it is strong but heavy, easily repaired, and, if sealed properly, will not leak or corrode. As the forests of Britain and Europe continued to be over-harvested to supply the keels of larger wooden boats, and the Bessemer process (patented in 1855) cheapened the cost of steel, steel ships and boats began to be more common. By the 1930s, boats built entirely of steel, from frames to plating, were seen replacing wooden boats in many industrial uses and fishing fleets. Private recreational boats of steel remain uncommon. In 1895 WH Mullins produced steel boats of galvanized iron, and by 1930 became the world's largest producer of pleasure boats. Mullins also offered boats in aluminum from 1895 through 1899 and once again in the 1920s, but it was not until the mid-20th century that aluminum gained widespread popularity. Though much more expensive than steel, aluminum alloys exist that do not corrode in salt water, allowing a similar load-carrying capacity to steel at much less weight. Around the mid-1960s, boats made of fiberglass (aka "glassfibre") became popular, especially for recreational boats. Fiberglass is also known as "GRP" (glass-reinforced plastic) in the UK, and "FRP" (fiber-reinforced plastic) in the US. Fiberglass boats are strong, and do not rust, corrode, or rot. Instead, they are susceptible to structural degradation from sunlight and extremes in temperature over their lifespan. Fiberglass structures can be made stiffer with sandwich panels, where the fiberglass encloses a lightweight core such as balsa or foam. Cold moulding is a modern construction method, using wood as the structural component. In cold moulding, very thin strips of wood are layered over a form. Each layer is coated with resin, followed by another directionally alternating layer laid on top. Subsequent layers may be stapled or otherwise mechanically fastened to the previous, or weighted or vacuum bagged to provide compression and stabilization until the resin sets. Propulsion The most common means of boat propulsion are as follows:
Engine: inboard motor, stern drive (inboard/outboard), outboard motor, paddle wheel, water jet (jetboat, personal water craft), fan (hovercraft, air boat)
Man: rowing, paddling, setting pole, etc.
Wind: sailing
Buoyancy A boat displaces its weight in water, regardless of whether it is made of wood, steel, fiberglass, or even concrete. If weight is added to the boat, the volume of the hull drawn below the waterline will increase to keep the balance above and below the surface equal. Boats have a natural or designed level of buoyancy. Exceeding it will cause the
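The displacement idea in the buoyancy paragraph above lends itself to a back-of-the-envelope calculation: a floating hull displaces water equal to its own weight, so an added load sinks it deeper by roughly the added mass divided by water density times waterplane area. The sketch below is a rough approximation, assuming vertical hull sides near the waterline; the parameter names are ours.

```python
# Back-of-the-envelope sketch of Archimedes' principle for a floating hull.
# Assumes roughly vertical hull sides near the waterline, so the waterplane
# area stays constant as the boat sinks slightly deeper.

WATER_DENSITY = 1000.0  # kg/m^3 for fresh water (seawater is about 1025)

def draft_increase(added_mass_kg: float, waterplane_area_m2: float) -> float:
    """Extra depth (m) the hull sinks: added mass / (density * waterplane area)."""
    return added_mass_kg / (WATER_DENSITY * waterplane_area_m2)

# Loading 200 kg onto a dinghy with a 4 m^2 waterplane sinks it about 5 cm:
print(f"{draft_increase(200.0, 4.0) * 100:.1f} cm")  # 5.0 cm
```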
been used since prehistoric times. The earliest boats are thought to have been dugouts, and the oldest boats found by archaeological excavation date from around 7,000–10,000 years ago. The oldest recovered boat in the world, the Pesse canoe, found in the Netherlands, is a dugout made from the hollowed tree trunk of a Pinus sylvestris that was constructed somewhere between 8200 and 7600 BC. This canoe is exhibited in the Drents Museum in Assen, Netherlands. Other very old dugout boats have also been recovered. Rafts have operated for at least 8,000 years. A 7,000-year-old seagoing reed boat has been found at site H3 in Kuwait. Boats were used between 4000 and 3000 BC in Sumer, ancient Egypt and in the Indian Ocean. Boats played an important role in the commerce between the Indus Valley Civilization and Mesopotamia. Evidence of varying models of boats has also been discovered at various Indus Valley archaeological sites. Uru craft originate in Beypore, a village in south Calicut, Kerala, in southwestern India. This type of mammoth wooden ship was constructed solely of teak, with a transport capacity of 400 tonnes. The ancient Arabs and Greeks used such boats as trading vessels. The historians Herodotus, Pliny the Elder and Strabo record the use of boats for commerce, travel, and military purposes. Types Boats can be categorized into three main types:
Unpowered or human-powered. Unpowered craft include rafts meant for one-way downstream travel. Human-powered boats include canoes, kayaks, gondolas and boats propelled by poles like a punt.
Sailboats, propelled mainly by means of sails.
Motorboats, propelled by mechanical means, such as engines.
A number of large vessels are usually referred to as boats. Submarines are a prime example. Other types of large vessels which are traditionally called boats include Great Lakes freighters, riverboats, and ferryboats. Though large enough to carry their own boats and heavy cargoes, these vessels are designed for operation on inland or protected coastal waters. Terminology The hull is the main, and in some cases only, structural component of a boat. It provides both capacity and buoyancy. The keel is a boat's "backbone", a lengthwise structural member to which the perpendicular frames are fixed. On most boats a deck covers the hull, in part or whole. While a ship often has several decks, a boat is unlikely to have more than one. Above the deck are often lifelines connected to stanchions, bulwarks perhaps topped by gunnels, or some combination of the two. A cabin may protrude above the deck forward, aft, along the centerline, or covering much of the length of the boat. Vertical structures dividing the internal spaces are known as bulkheads. The forward end of a boat is called the bow, the aft end the stern. Facing forward, the right side is referred to as starboard and the left side as port. Building materials Until the mid-19th century most boats were made of natural materials, primarily wood, although reed, bark and
heme groups, and their interaction with various molecules alters the exact color. In vertebrates and other hemoglobin-using creatures, arterial blood and capillary blood are bright red, as oxygen imparts a strong red color to the heme group. Deoxygenated blood is a darker shade of red; this is present in veins, and can be seen during blood donation and when venous blood samples are taken. This is because the spectrum of light absorbed by hemoglobin differs between the oxygenated and deoxygenated states. Blood in carbon monoxide poisoning is bright red, because carbon monoxide causes the formation of carboxyhemoglobin. In cyanide poisoning, the body cannot utilize oxygen, so the venous blood remains oxygenated, increasing the redness. There are some conditions affecting the heme groups present in hemoglobin that can make the skin appear blue – a symptom called cyanosis. If the heme is oxidized, methemoglobin, which is more brownish and cannot transport oxygen, is formed. In the rare condition sulfhemoglobinemia, arterial hemoglobin is partially oxygenated, and appears dark red with a bluish hue. Veins close to the surface of the skin appear blue for a variety of reasons. However, the factors that contribute to this alteration of color perception are related to the light-scattering properties of the skin and the processing of visual input by the visual cortex, rather than the actual color of the venous blood. Skinks in the genus Prasinohaema have green blood due to a buildup of the waste product biliverdin. Hemocyanin The blood of most mollusks – including cephalopods and gastropods – as well as some arthropods, such as horseshoe crabs, is blue, as it contains the copper-containing protein hemocyanin at concentrations of about 50 grams per liter. Hemocyanin is colorless when deoxygenated and dark blue when oxygenated. The blood in the circulation of these creatures, which generally live in cold environments with low oxygen tensions, is grey-white to pale yellow, and it turns dark blue when exposed to the oxygen in the air, as seen when they bleed. This is due to the change in color of hemocyanin when it is oxidized. Hemocyanin carries oxygen in extracellular fluid, which is in contrast to the intracellular oxygen transport in mammals by hemoglobin in RBCs. Chlorocruorin The blood of most annelid worms and some marine polychaetes uses chlorocruorin to transport oxygen. It is green in color in dilute solutions. Hemerythrin Hemerythrin is used for oxygen transport in the marine invertebrates sipunculids, priapulids, brachiopods, and the annelid worm Magelona. Hemerythrin is violet-pink when oxygenated. Hemovanadin The blood of some species of ascidians and tunicates, also known as sea squirts, contains proteins called vanadins. These proteins are based on vanadium, and give the creatures a concentration of vanadium in their bodies 100 times higher than that of the surrounding seawater. Unlike hemocyanin and hemoglobin, hemovanadin is not an oxygen carrier. When exposed to oxygen, however, vanadins turn a mustard yellow. Disorders General medical Disorders of volume Injury can cause blood loss through bleeding. A healthy adult can lose almost 20% of blood volume (1 L) before the first symptom, restlessness, begins, and 40% of volume (2 L) before shock sets in. Thrombocytes are important for blood coagulation and the formation of blood clots, which can stop bleeding. Trauma to the internal organs or bones can cause internal bleeding, which can sometimes be severe.
Dehydration can reduce the blood volume by reducing the water content of the blood. This would rarely result in shock (apart from very severe cases) but may result in orthostatic hypotension and fainting. Disorders of circulation Shock is the ineffective perfusion of tissues, and can be caused by a variety of conditions including blood loss, infection, and poor cardiac output. Atherosclerosis reduces the flow of blood through arteries, because atheroma lines arteries and narrows them. Atheroma tends to increase with age, and its progression can be compounded by many causes including smoking, high blood pressure, excess circulating lipids (hyperlipidemia), and diabetes mellitus. Coagulation can form a thrombosis, which can obstruct vessels. Problems with blood composition, the pumping action of the heart, or narrowing of blood vessels can have many consequences including hypoxia (lack of oxygen) of the tissues supplied. The term ischemia refers to tissue that is inadequately perfused with blood, and infarction refers to tissue death (necrosis), which can occur when the blood supply has been blocked (or is very inadequate). Hematological Anemia Insufficient red cell mass (anemia) can be the result of bleeding, blood disorders like thalassemia, or nutritional deficiencies, and may require one or more blood transfusions. Anemia can also be due to a genetic disorder in which the red blood cells simply do not function effectively. Anemia can be confirmed by a blood test if the hemoglobin value is less than 13.5 g/dL in men or less than 12.0 g/dL in women. Several countries have blood banks to fill the demand for transfusable blood. A person receiving a blood transfusion must have a blood type compatible with that of the donor. Sickle-cell anemia Disorders of cell proliferation Leukemia is a group of cancers of the blood-forming tissues and cells. Non-cancerous overproduction of red cells (polycythemia vera) or platelets (essential thrombocytosis) may be premalignant. Myelodysplastic syndromes involve ineffective production of one or more cell lines. Disorders of coagulation Hemophilia is a genetic illness that causes dysfunction in one of the blood's clotting mechanisms. This can allow otherwise inconsequential wounds to be life-threatening, but more commonly results in hemarthrosis, or bleeding into joint spaces, which can be crippling. Ineffective or insufficient platelets can also result in coagulopathy (bleeding disorders). Hypercoagulable state (thrombophilia) results from defects in regulation of platelet or clotting factor function, and can cause thrombosis. Infectious disorders of blood Blood is an important vector of infection. HIV, the virus that causes AIDS, is transmitted through contact with blood, semen or other body secretions of an infected person. Hepatitis B and C are transmitted primarily through blood contact. Owing to blood-borne infections, bloodstained objects are treated as a biohazard. Bacterial infection of the blood is bacteremia or sepsis. Viral infection is viremia. Malaria and trypanosomiasis are blood-borne parasitic infections. Carbon monoxide poisoning Substances other than oxygen can bind to hemoglobin; in some cases, this can cause irreversible damage to the body.
Carbon monoxide, for example, is extremely dangerous when carried into the blood via the lungs by inhalation, because carbon monoxide irreversibly binds to hemoglobin to form carboxyhemoglobin, so that less hemoglobin is free to bind oxygen, and fewer oxygen molecules can be transported throughout the blood. This can cause suffocation insidiously. A fire burning in an enclosed room with poor ventilation presents a very dangerous hazard, since it can create a build-up of carbon monoxide in the air. Some carbon monoxide binds to hemoglobin when smoking tobacco. Treatments Transfusion Blood for transfusion is obtained from human donors by blood donation and stored in a blood bank. There are many different blood types in humans, with the ABO blood group system and the Rhesus blood group system being the most important. Transfusion of blood of an incompatible blood group may cause severe, often fatal, complications, so crossmatching is done to ensure that a compatible blood product is transfused. Other blood products administered intravenously are platelets, blood plasma, cryoprecipitate, and specific coagulation factor concentrates. Intravenous administration Many forms of medication (from antibiotics to chemotherapy) are administered intravenously, as they are not readily or adequately absorbed by the digestive tract. After severe acute blood loss, liquid preparations, generically known as plasma expanders, can be given intravenously, either solutions of salts (NaCl, KCl, CaCl2, etc.) at physiological concentrations, or colloidal solutions, such as dextrans, human serum albumin, or fresh frozen plasma. In these emergency situations, a plasma expander is a more effective life-saving procedure than a blood transfusion, because the metabolism of transfused red blood cells does not restart immediately after a transfusion. Letting In modern evidence-based medicine, bloodletting is used in the management of a few rare diseases, including hemochromatosis and polycythemia. However, bloodletting and leeching were common unvalidated interventions used until the 19th century, as many diseases were incorrectly thought to be due to an excess of blood, according to Hippocratic medicine. Etymology English blood (Old English blod) derives from Germanic and has cognates with a similar range of meanings in all other Germanic languages (e.g. German Blut, Swedish blod, Gothic blōþ). There is no accepted Indo-European etymology. History Classical Greek medicine Robin Fåhræus (a Swedish physician who devised the erythrocyte sedimentation rate) suggested that the ancient Greek system of humorism, wherein the body was thought to contain four distinct bodily fluids (associated with different temperaments), was based upon the observation of blood clotting in a transparent container. When blood is drawn in a glass container and left undisturbed for about an hour, four different layers can be seen. A dark clot forms at the bottom (the "black bile"). Above the clot is a layer of red blood cells (the "blood"). Above this is a whitish layer of white blood cells (the "phlegm"). The top layer is clear yellow serum (the "yellow bile"). Types The ABO blood group system was discovered in 1900 by Karl Landsteiner. Jan Janský is credited with the first classification of blood into the four types (A, B, AB, and O) in 1907, which remains in use today. In 1907 the first blood transfusion using the ABO system to predict compatibility was performed. The first non-direct transfusion was performed on 27 March 1914.
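The ABO compatibility rule behind the crossmatching mentioned above can be summarized in a small table. The sketch below covers ABO red-cell compatibility only; real practice also checks the Rhesus factor and many other antigens, so this is illustrative, and the names are ours.

```python
# Simplified sketch of ABO red-cell donor/recipient compatibility.
# Ignores the Rhesus factor and all other antigens checked in practice.

CAN_DONATE_TO = {
    "O":  {"O", "A", "B", "AB"},  # group O: universal red-cell donor
    "A":  {"A", "AB"},
    "B":  {"B", "AB"},
    "AB": {"AB"},                 # group AB: universal recipient only
}

def abo_compatible(donor: str, recipient: str) -> bool:
    """True if the donor's ABO group is compatible with the recipient's."""
    return recipient in CAN_DONATE_TO[donor]

assert abo_compatible("O", "AB")       # O can donate to anyone
assert not abo_compatible("A", "O")    # O can receive only from O
```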
The Rhesus factor was discovered in 1937. Culture and religion Due to its importance to life, blood is associated with a large number of beliefs. One of the most basic is the use of blood as a symbol for family relationships through birth/parentage; to be "related by blood" is to be related by ancestry or descent, rather than marriage. This relates closely to bloodlines, and to sayings such as "blood is thicker than water" and "bad blood", as well as "blood brother". Blood is given particular emphasis in the Jewish and Christian religions, because Leviticus 17:11 says "the life of a creature is in the blood." This phrase is part of the Levitical law forbidding the drinking of blood or eating meat with the blood still intact, instead of being poured off. Mythic references to blood can sometimes be connected to the life-giving nature of blood, seen in such events as childbirth, as contrasted with the blood of injury or death. Indigenous Australians In many indigenous Australian Aboriginal peoples' traditions, ochre (particularly red) and blood, both high in iron content and considered Maban, are applied to the bodies of dancers for ritual. As Lawlor states: In many Aboriginal rituals and ceremonies, red ochre is rubbed all over the naked bodies of the dancers. In secret, sacred male ceremonies, blood extracted from the veins of the participants' arms is exchanged and rubbed on their bodies. Red ochre is used in similar ways in less-secret ceremonies. Blood is also used to fasten the feathers of birds onto people's bodies. Bird feathers contain a protein that is highly magnetically sensitive. Lawlor comments that blood employed in this fashion is held by these peoples to attune the dancers to the invisible energetic realm of the Dreamtime. Lawlor then connects these invisible energetic realms with magnetic fields, because iron is magnetic. European paganism Among the Germanic tribes, blood was used during their sacrifices, the Blóts. The blood was considered to have the power of its originator, and, after the butchering, the blood was sprinkled on the walls, on the statues of the gods, and on the participants themselves. This act of sprinkling blood was called blóedsian in Old English, and the terminology was borrowed by the Roman Catholic Church, becoming to bless and blessing. The Hittite word for blood, ishar, was a cognate of words for "oath" and "bond"; see Ishara. The Ancient Greeks believed that the blood of the gods, ichor, was a substance that was poisonous to mortals. As a relic of Germanic law, the cruentation, an ordeal in which the corpse of the victim was supposed to start bleeding in the presence of the murderer, was used until the early 17th century. Christianity In Genesis 9:4, God prohibited Noah and his sons from eating blood (see Noahide Law). This command continued to be observed by the Eastern Orthodox Church. The Bible also recounts that when the Angel of Death came to the Hebrew houses, the first-born child would not die if the angel saw lamb's blood wiped across the doorway. At the Council of Jerusalem, the apostles prohibited certain Christians from consuming blood – this is documented in Acts 15:20 and 29. This chapter specifies a reason (especially in verses 19–21): it was to avoid offending Jews who had become Christians, because the Mosaic Law Code prohibited the practice. Christ's blood is the means for the atonement of sins. Also, "... the blood of Jesus Christ his [God] Son cleanseth us from all sin" (1 John 1:7), "...
Unto him [God] that loved us, and washed us from our sins in his own blood" (Revelation 1:5), and "And they overcame him (Satan) by the blood of the Lamb [Jesus the Christ], and by the word of their testimony ..." (Revelation 12:11). Some Christian churches, including Roman Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, and the Assyrian Church of the East, teach that, when consecrated, the Eucharistic wine actually becomes the blood of Jesus for worshippers to drink. Thus in the consecrated wine, Jesus becomes spiritually and physically present. This teaching is rooted in the Last Supper, as written in the four gospels of the Bible, in which Jesus stated to his disciples that the bread that they ate was his body, and the wine was his blood: "This cup is the new testament in my blood, which is shed for you" (Luke 22:20). Most forms of Protestantism, especially those of a Methodist or Presbyterian lineage, teach that the wine is no more than a symbol of the blood of Christ, who is spiritually but not physically present. Lutheran theology teaches that the body and blood are present together "in, with, and under" the bread and wine of the Eucharistic feast. Judaism In Judaism, animal blood may not be consumed even in the smallest quantity (Leviticus 3:17 and elsewhere); this is reflected in Jewish dietary laws (Kashrut). Blood is purged from meat by rinsing and soaking in water (to loosen clots), salting, and then rinsing with water again several times. Eggs must also be checked and any blood spots removed before consumption. Although blood from fish is biblically kosher, it is rabbinically forbidden to consume fish blood, to avoid the appearance of breaking the Biblical prohibition. Another ritual involving blood is the covering of the blood of fowl and game after slaughtering (Leviticus 17:13); the reason given by the Torah is: "Because the life of the animal is [in] its blood" (ibid 17:14). In relation to human beings, Kabbalah expounds on this verse that the animal soul of a person is in the blood, and that physical desires stem from it. Likewise, the mystical reason for salting temple sacrifices and slaughtered meat is to remove the blood of animal-like passions from the person. By removing the animal's blood, the animal energies and life-force contained in the blood are removed, making the meat fit for human consumption. Islam Consumption of food containing blood is forbidden by Islamic dietary laws. This is derived from the statement in the Qur'an, sura Al-Ma'ida (5:3): "Forbidden to you (for food) are: dead meat, blood, the flesh of swine, and that on which has been invoked the name of other than Allah." Blood is considered unclean, hence there are specific methods to obtain physical and ritual cleanliness once bleeding has occurred. Specific rules and prohibitions apply to menstruation, postnatal bleeding and irregular vaginal bleeding. When an animal has been slaughtered, the animal's neck is cut in a way that ensures that the spine is not severed, so that the brain may send commands to the heart to pump blood to it for oxygen. In this way, blood is removed from the body, and the meat is generally now safe to cook and eat. In modern times, blood transfusions are generally not considered against the rules. Jehovah's Witnesses Based on their interpretation of scriptures such as Acts 15:28, 29 ("Keep abstaining...from blood."), many Jehovah's Witnesses neither consume
heme groups, and their interaction with various molecules alters the exact color. In vertebrates and other hemoglobin-using creatures, arterial blood and capillary blood are bright red, as oxygen imparts a strong red color to the heme group. Deoxygenated blood is a darker shade of red; this is present in veins, and can be seen during blood donation and when venous blood samples are taken. This is because the spectrum of light absorbed by hemoglobin differs between the oxygenated and deoxygenated states. Blood in carbon monoxide poisoning is bright red, because carbon monoxide causes the formation of carboxyhemoglobin. In cyanide poisoning, the body cannot utilize oxygen, so the venous blood remains oxygenated, increasing the redness. There are some conditions affecting the heme groups present in hemoglobin that can make the skin appear blue – a symptom called cyanosis. If the heme is oxidized, methemoglobin, which is more brownish and cannot transport oxygen, is formed. In the rare condition sulfhemoglobinemia, arterial hemoglobin is partially oxygenated, and appears dark red with a bluish hue. Veins close to the surface of the skin appear blue for a variety of reasons. However, the factors that contribute to this alteration of color perception are related to the light-scattering properties of the skin and the processing of visual input by the visual cortex, rather than the actual color of the venous blood. Skinks in the genus Prasinohaema have green blood due to a buildup of the waste product biliverdin. Hemocyanin The blood of most mollusks – including cephalopods and gastropods – as well as some arthropods, such as horseshoe crabs, is blue, as it contains the copper-containing protein hemocyanin at concentrations of about 50 grams per liter. Hemocyanin is colorless when deoxygenated and dark blue when oxygenated. The blood in the circulation of these creatures, which generally live in cold environments with low oxygen tensions, is grey-white to pale yellow, and it turns dark blue when exposed to the oxygen in the air, as seen when they bleed. This is due to change in color of hemocyanin when it is oxidized. Hemocyanin carries oxygen in extracellular fluid, which is in contrast to the intracellular oxygen transport in mammals by hemoglobin in RBCs. Chlorocruorin The blood of most annelid worms and some marine polychaetes use chlorocruorin to transport oxygen. It is green in color in dilute solutions. Hemerythrin Hemerythrin is used for oxygen transport in the marine invertebrates sipunculids, priapulids, brachiopods, and the annelid worm, magelona. Hemerythrin is violet-pink when oxygenated. Hemovanadin The blood of some species of ascidians and tunicates, also known as sea squirts, contains proteins called vanadins. These proteins are based on vanadium, and give the creatures a concentration of vanadium in their bodies 100 times higher than the surrounding seawater. Unlike hemocyanin and hemoglobin, hemovanadin is not an oxygen carrier. When exposed to oxygen, however, vanadins turn a mustard yellow. Disorders General medical Disorders of volume Injury can cause blood loss through bleeding. A healthy adult can lose almost 20% of blood volume (1 L) before the first symptom, restlessness, begins, and 40% of volume (2 L) before shock sets in. Thrombocytes are important for blood coagulation and the formation of blood clots, which can stop bleeding. Trauma to the internal organs or bones can cause internal bleeding, which can sometimes be severe. 
Dehydration can reduce the blood volume by reducing the water content of the blood. This would rarely result in shock (apart from the very severe cases) but may result in orthostatic hypotension and fainting. Disorders of circulation Shock is the ineffective perfusion of tissues, and can be caused by a variety of conditions including blood loss, infection, poor cardiac output. Atherosclerosis reduces the flow of blood through arteries, because atheroma lines arteries and narrows them. Atheroma tends to increase with age, and its progression can be compounded by many causes including smoking, high blood pressure, excess circulating lipids (hyperlipidemia), and diabetes mellitus. Coagulation can form a thrombosis, which can obstruct vessels. Problems with blood composition, the pumping action of the heart, or narrowing of blood vessels can have many consequences including hypoxia (lack of oxygen) of the tissues supplied. The term ischemia refers to tissue that is inadequately perfused with blood, and infarction refers to tissue death (necrosis), which can occur when the blood supply has been blocked (or is very inadequate). Hematological Anemia Insufficient red cell mass (anemia) can be the result of bleeding, blood disorders like thalassemia, or nutritional deficiencies, and may require one or more blood transfusions. Anemia can also be due to a genetic disorder in which the red blood cells simply do not function effectively. Anemia can be confirmed by a blood test if the hemoglobin value is less than 13.5 gm/dl in men or less than 12.0 gm/dl in women. Several countries have blood banks to fill the demand for transfusable blood. A person receiving a blood transfusion must have a blood type compatible with that of the donor. Sickle-cell anemia Disorders of cell proliferation Leukemia is a group of cancers of the blood-forming tissues and cells. Non-cancerous overproduction of red cells (polycythemia vera) or platelets (essential thrombocytosis) may be premalignant. Myelodysplastic syndromes involve ineffective production of one or more cell lines. Disorders of coagulation Hemophilia is a genetic illness that causes dysfunction in one of the blood's clotting mechanisms. This can allow otherwise inconsequential wounds to be life-threatening, but more commonly results in hemarthrosis, or bleeding into joint spaces, which can be crippling. Ineffective or insufficient platelets can also result in coagulopathy (bleeding disorders). Hypercoagulable state (thrombophilia) results from defects in regulation of platelet or clotting factor function, and can cause thrombosis. Infectious disorders of blood Blood is an important vector of infection. HIV, the virus that causes AIDS, is transmitted through contact with blood, semen or other body secretions of an infected person. Hepatitis B and C are transmitted primarily through blood contact. Owing to blood-borne infections, bloodstained objects are treated as a biohazard. Bacterial infection of the blood is bacteremia or sepsis. Viral Infection is viremia. Malaria and trypanosomiasis are blood-borne parasitic infections. Carbon monoxide poisoning Substances other than oxygen can bind to hemoglobin; in some cases, this can cause irreversible damage to the body. 
Carbon monoxide, for example, is extremely dangerous when carried to the blood via the lungs by inhalation, because carbon monoxide irreversibly binds to hemoglobin to form carboxyhemoglobin, so that less hemoglobin is free to bind oxygen, and fewer oxygen molecules can be transported throughout the blood. This can cause suffocation insidiously. A fire burning in an enclosed room with poor ventilation presents a very dangerous hazard, since it can create a build-up of carbon monoxide in the air. Some carbon monoxide binds to hemoglobin when smoking tobacco. Treatments Transfusion Blood for transfusion is obtained from human donors by blood donation and stored in a blood bank. There are many different blood types in humans, the ABO blood group system, and the Rhesus blood group system being the most important. Transfusion of blood of an incompatible blood group may cause severe, often fatal, complications, so crossmatching is done to ensure that a compatible blood product is transfused. Other blood products administered intravenously are platelets, blood plasma, cryoprecipitate, and specific coagulation factor concentrates. Intravenous administration Many forms of medication (from antibiotics to chemotherapy) are administered intravenously, as they are not readily or adequately absorbed by the digestive tract. After severe acute blood loss, liquid preparations, generically known as plasma expanders, can be given intravenously, either solutions of salts (NaCl, KCl, CaCl2 etc.) at physiological concentrations, or colloidal solutions, such as dextrans, human serum albumin, or fresh frozen plasma. In these emergency situations, a plasma expander is a more effective life-saving procedure than a blood transfusion, because the metabolism of transfused red blood cells does not restart immediately after a transfusion. Letting In modern evidence-based medicine, bloodletting is used in management of a few rare diseases, including hemochromatosis and polycythemia. However, bloodletting and leeching were common unvalidated interventions used until the 19th century, as many diseases were incorrectly thought to be due to an excess of blood, according to Hippocratic medicine. Etymology English blood (Old English blod) derives from Germanic and has cognates with a similar range of meanings in all other Germanic languages (e.g. German Blut, Swedish blod, Gothic blōþ). There is no accepted Indo-European etymology. History Classical Greek medicine (a Swedish physician who devised the erythrocyte sedimentation rate) suggested that the Ancient Greek system of humorism, wherein the body was thought to contain four distinct bodily fluids (associated with different temperaments), were based upon the observation of blood clotting in a transparent container. When blood is drawn in a glass container and left undisturbed for about an hour, four different layers can be seen. A dark clot forms at the bottom (the "black bile"). Above the clot is a layer of red blood cells (the "blood"). Above this is a whitish layer of white blood cells (the "phlegm"). The top layer is clear yellow serum (the "yellow bile"). Types The ABO blood group system was discovered in the year 1900 by Karl Landsteiner. Jan Janský is credited with the first classification of blood into the four types (A, B, AB, and O) in 1907, which remains in use today. In 1907 the first blood transfusion was performed that used the ABO system to predict compatibility. The first non-direct transfusion was performed on 27 March 1914. 
The Rhesus factor was discovered in 1937. Culture and religion Due to its importance to life, blood is associated with a large number of beliefs. One of the most basic is the use of blood as a symbol for family relationships through birth/parentage; to be "related by blood" is to be related by ancestry or descendence, rather than marriage. This bears closely to bloodlines, and sayings such as "blood is thicker than water" and "bad blood", as well as "Blood brother". Blood is given particular emphasis in the Jewish and Christian religions, because Leviticus 17:11 says "the life of a creature is in the blood." This phrase is part of the Levitical law forbidding the drinking of blood or eating meat with the blood still intact instead of being poured off. Mythic references to blood can sometimes be connected to the life-giving nature of blood, seen in such events as childbirth, as contrasted with the blood of injury or death. Indigenous Australians In many indigenous Australian Aboriginal peoples' traditions, ochre (particularly red) and blood, both high in iron content and considered Maban, are applied to the bodies of dancers for ritual. As Lawlor states:In many Aboriginal rituals and ceremonies, red ochre is rubbed all over the naked bodies of the dancers. In secret, sacred male ceremonies, blood extracted from the veins of the participant's arms is exchanged and rubbed on their bodies. Red ochre is used in similar ways in less-secret ceremonies. Blood is also used to fasten the feathers of birds onto people's bodies. Bird feathers contain a protein that is highly magnetically sensitive. Lawlor comments that blood employed in this fashion is held by these peoples to attune the dancers to the invisible energetic realm of the Dreamtime. Lawlor then connects these invisible energetic realms and magnetic fields, because iron is magnetic. European paganism Among the Germanic tribes, blood was used during their sacrifices; the Blóts. The blood was considered to have the power of its originator, and, after the butchering, the blood was sprinkled on the walls, on the statues of the gods, and on the participants themselves. This act of sprinkling blood was called blóedsian in Old English, and the terminology was borrowed by the Roman Catholic Church becoming to bless and blessing. The Hittite word for blood, ishar was a cognate to words for "oath" and "bond", see Ishara. The Ancient Greeks believed that the blood of the gods, ichor, was a substance that was poisonous to mortals. As a relic of Germanic Law, the cruentation, an ordeal where the corpse of the victim was supposed to start bleeding in the presence of the murderer, was used until the early 17th century. Christianity In Genesis 9:4, God prohibited Noah and his sons from eating blood (see Noahide Law). This command continued to be observed by the Eastern Orthodox Church. It is also found in the Bible that when the Angel of Death came around to the Hebrew house that the first-born child would not die if the angel saw lamb's blood wiped across the doorway. At the Council of Jerusalem, the apostles prohibited certain Christians from consuming blood – this is documented in Acts 15:20 and 29. This chapter specifies a reason (especially in verses 19–21): It was to avoid offending Jews who had become Christians, because the Mosaic Law Code prohibited the practice. Christ's blood is the means for the atonement of sins. Also, ″... the blood of Jesus Christ his [God] Son cleanseth us from all sin." (1 John 1:7), “... 
Unto him [God] that loved us, and washed us from our sins in his own blood." (Revelation 1:5), and "And they overcame him (Satan) by the blood of the Lamb [Jesus the Christ], and by the word of their testimony ..." (Revelation 12:11). Some Christian churches, including Roman Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, and the Assyrian Church of the East, teach that, when consecrated, the Eucharistic wine actually becomes the blood of Jesus for worshippers to drink. Thus in the consecrated wine, Jesus becomes spiritually and physically present. This teaching is rooted in the Last Supper, as written in the four gospels of the Bible, in which Jesus stated to his disciples that the bread that they ate was his body, and the wine was his blood. "This cup is the new testament in my blood, which is shed for you." (Luke 22:20). Most forms of Protestantism, especially those of a Methodist or Presbyterian lineage, teach that the wine is no more than a symbol of the blood of Christ, who is spiritually but not physically present. Lutheran theology teaches that the body and blood are present together "in, with, and under" the bread and wine of the Eucharistic feast. Judaism In Judaism, animal blood may not be consumed even in the smallest quantity (Leviticus 3:17 and elsewhere); this is reflected in Jewish dietary laws (Kashrut). Blood is purged from meat by rinsing and soaking in water (to loosen clots), salting, and then rinsing with water again several times. Eggs must also be checked and any blood spots removed before consumption. Although blood from fish is biblically kosher, it is rabbinically forbidden to consume fish blood, to avoid the appearance of breaking the Biblical prohibition. Another blood ritual involves the covering of the blood of fowl and game after slaughtering (Leviticus 17:13); the reason given by the Torah is: "Because the life of the animal is [in] its blood" (ibid., 17:14). In relation to human beings, Kabbalah expounds on this verse that the animal soul of a person is in the blood, and that physical desires stem from it. Likewise, the mystical reason for salting temple sacrifices and slaughtered meat is to remove the blood of animal-like passions from the person. By removing the animal's blood, the animal energies and life-force contained in the blood are removed, making the meat fit for human consumption. Islam Consumption of food containing blood is forbidden by Islamic dietary laws. This is derived from the statement in the Qur'an, sura Al-Ma'ida (5:3): "Forbidden to you (for food) are: dead meat, blood, the flesh of swine, and that on which has been invoked the name of other than Allah." Blood is considered unclean, hence there are specific methods to restore physical and ritual cleanliness once bleeding has occurred. Specific rules and prohibitions apply to menstruation, postnatal bleeding, and irregular vaginal bleeding. When an animal is slaughtered, its neck is cut in a way that ensures the spine is not severed, so that the brain may continue to send commands to the heart to pump blood, allowing as much blood as possible to drain from the body.
uncle who despised rote learning: "Most of my time was spent playing chess, reading maps and learning how to open my eyes to everything around me." In 1936, when he was 11, the family emigrated from Poland to France. The move, World War II, and the influence of his father's brother, the mathematician Szolem Mandelbrojt (who had moved to Paris around 1920), further prevented a standard education. "The fact that my parents, as economic and political refugees, joined Szolem in France saved our lives," he writes. Mandelbrot attended the Lycée Rollin (now the Collège-lycée Jacques-Decour) in Paris until the start of World War II, when his family moved to Tulle, France. He was helped by Rabbi David Feuerwerker, the Rabbi of Brive-la-Gaillarde, to continue his studies. Much of France was occupied by the Nazis at the time, and Mandelbrot later recalled this period. In 1944, Mandelbrot returned to Paris, studied at the Lycée du Parc in Lyon, and from 1945 to 1947 attended the École Polytechnique, where he studied under Gaston Julia and Paul Lévy. From 1947 to 1949 he studied at the California Institute of Technology, where he earned a master's degree in aeronautics. Returning to France, he obtained his PhD degree in Mathematical Sciences at the University of Paris in 1952. Research career From 1949 to 1958, Mandelbrot was a staff member at the Centre National de la Recherche Scientifique. During this time he spent a year at the Institute for Advanced Study in Princeton, New Jersey, where he was sponsored by John von Neumann. In 1955 he married Aliette Kagan and moved to Geneva, Switzerland (to collaborate with Jean Piaget at the International Centre for Genetic Epistemology) and later to the Université Lille Nord de France. In 1958 the couple moved to the United States, where Mandelbrot joined the research staff at the IBM Thomas J. Watson Research Center in Yorktown Heights, New York. He remained at IBM for 35 years, becoming an IBM Fellow and later Fellow Emeritus. From 1951 onward, Mandelbrot worked on problems and published papers not only in mathematics but in applied fields such as information theory, economics, and fluid dynamics. Randomness and fractals in financial markets Mandelbrot saw financial markets as an example of "wild randomness", characterized by concentration and long-range dependence. He developed several original approaches for modelling financial fluctuations. In his early work, he found that the price changes in financial markets did not follow a Gaussian distribution, but rather Lévy stable distributions having infinite variance. He found, for example, that cotton prices followed a Lévy stable distribution with parameter α equal to 1.7 rather than 2, as in a Gaussian distribution. "Stable" distributions have the property that the sum of many instances of a random variable follows the same distribution but with a larger scale parameter (New Scientist, 19 April 1997). This work from the early 1960s was done with daily data of cotton prices from 1900, long before he introduced the word "fractal".
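The stability property just described can be checked numerically. Below is a minimal sketch (assuming NumPy and SciPy are available; the sample size, the value α = 1.7, and the group size k are illustrative choices, not figures from Mandelbrot's cotton study): for a symmetric α-stable law, the sum of k independent draws has the same distribution rescaled by k^(1/α), so a robust spread measure such as the interquartile range should grow by roughly that factor.

```python
# Minimal numerical check of the alpha-stable scaling property.
# Assumes NumPy and SciPy; alpha, k and the sample size are arbitrary choices.
import numpy as np
from scipy.stats import levy_stable

alpha = 1.7                       # tail index; 2 would recover the Gaussian
samples = levy_stable.rvs(alpha, beta=0.0, size=200_000, random_state=0)

k = 50                            # sum k independent draws at a time
sums = samples.reshape(-1, k).sum(axis=1)

def iqr(x):
    """Interquartile range; used instead of variance, which is infinite here."""
    q75, q25 = np.percentile(x, [75, 25])
    return q75 - q25

print(iqr(sums) / iqr(samples))   # empirical scale ratio of sums to singles
print(k ** (1 / alpha))           # theoretical ratio k**(1/alpha), about 10
```

Setting alpha = 2 in the same experiment would reproduce the familiar Gaussian square-root-of-k scaling.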
In later years, after the concept of fractals had matured, the study of financial markets in the context of fractals became possible only once high-frequency financial data became available. In the late 1980s, Mandelbrot used intra-daily tick data supplied by Olsen & Associates in Zurich to apply fractal theory to market microstructure. This cooperation led to the publication of the first comprehensive papers on scaling laws in finance, which show similar properties at different time scales, confirming Mandelbrot's insight into the fractal nature of market microstructure. Mandelbrot's own research in this area is presented in his books Fractals and Scaling in Finance and The (Mis)behavior of Markets. Developing "fractal geometry" and the Mandelbrot set As a visiting professor at Harvard University, Mandelbrot began to study mathematical objects called Julia sets that were invariant under certain transformations of the complex plane. Building on previous work by Gaston Julia and Pierre Fatou, Mandelbrot used a computer to plot images of the Julia sets. While investigating the topology of these Julia sets, he studied the Mandelbrot set, which he introduced in 1979. In 1975, Mandelbrot coined the term fractal to describe these structures and first published his ideas in the French book Les Objets Fractals: Forme, Hasard et Dimension, later translated in 1977 as Fractals: Form, Chance and Dimension. According to computer scientist and physicist Stephen Wolfram, the book was a "breakthrough" for Mandelbrot, who until then would typically "apply fairly straightforward mathematics ... to areas that had barely seen the light of serious mathematics before". Wolfram adds that as a result of this new research, Mandelbrot was no longer a "wandering scientist", and later calls him "the father of fractals". Wolfram briefly describes fractals as a form of geometric repetition, "in which smaller and smaller copies of a pattern are successively nested inside each other, so that the same intricate shapes appear no matter how much you zoom in to the whole. Fern leaves and Romanesco broccoli are two examples from nature." He points out an unexpected conclusion: Mandelbrot used the term "fractal" as it derived from the Latin word "fractus", meaning "broken" or "shattered". Using the newly developed IBM computers at his disposal, Mandelbrot was able to create fractal images using computer graphics code, images that an interviewer described as looking like "the delirious exuberance of the 1960s psychedelic art with forms hauntingly reminiscent of nature and the human body". He also saw himself as a "would-be Kepler", after the 17th-century scientist Johannes Kepler, who calculated and described the orbits of the planets. Mandelbrot, however, never felt he was inventing a new idea; he described his feelings in a documentary with science writer Arthur C. Clarke. According to Clarke, "the Mandelbrot set is indeed one of the most astonishing discoveries in the entire history of mathematics. Who could have dreamed that such an incredibly simple equation could have generated images of literally infinite complexity?" Clarke also notes an "odd coincidence": "the name Mandelbrot, and the word "mandala"—for a religious symbol—which I'm sure is a pure coincidence, but indeed the Mandelbrot set does seem to contain an enormous number of mandalas." In 1982, Mandelbrot expanded and updated his ideas in The Fractal Geometry of Nature.
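The "incredibly simple equation" Clarke marvels at is the iteration z → z² + c over the complex numbers: a point c belongs to the Mandelbrot set exactly when the orbit started at z = 0 stays bounded. A minimal sketch follows (assuming NumPy; the viewing window, grid size, and iteration cap are arbitrary illustrative choices):

```python
# Minimal sketch of the Mandelbrot set via the iteration z -> z**2 + c.
# Window, resolution and iteration cap are arbitrary illustrative choices.
import numpy as np

def mandelbrot(width=80, height=40, max_iter=50):
    xs = np.linspace(-2.0, 0.6, width)
    ys = np.linspace(-1.2, 1.2, height)
    c = xs[np.newaxis, :] + 1j * ys[:, np.newaxis]   # grid of candidate points
    z = np.zeros_like(c)
    inside = np.ones(c.shape, dtype=bool)            # points not yet escaped
    for _ in range(max_iter):
        z[inside] = z[inside] ** 2 + c[inside]       # iterate only live points
        inside &= np.abs(z) <= 2.0                   # |z| > 2 guarantees escape
    return inside

for row in mandelbrot():                             # crude ASCII rendering
    print("".join("#" if p else " " for p in row))
```

Shrinking the viewing window around the set's boundary and re-running reveals ever finer intricate structure, the kind of self-similarity Mandelbrot described.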
The Fractal Geometry of Nature brought fractals into the mainstream of professional and popular mathematics, and silenced critics who had dismissed fractals as "program artifacts". Mandelbrot left IBM in 1987, after 35 years and 12 days, when IBM decided to end pure research in his division. He joined the Department of Mathematics at Yale, and obtained his first tenured post in 1999, at the age of 75. At the time of his retirement in 2005, he was Sterling Professor of Mathematical Sciences. Fractals and the "theory of roughness" Mandelbrot created the first-ever "theory of roughness", and he saw "roughness" in the shapes of mountains, coastlines and river basins; the structures of plants, blood vessels and lungs; and the clustering of galaxies. His personal quest was to create some mathematical formula to measure the overall "roughness" of such objects in nature, and he began by asking himself various kinds of questions related to nature. In his paper "How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension", published in Science in 1967, Mandelbrot discusses self-similar curves that have Hausdorff dimension between 1 and 2 and that are examples of fractals, although Mandelbrot does not use this term in the paper, as he did not coin it until 1975. The paper is one of Mandelbrot's first publications on the topic of fractals. Mandelbrot emphasized the use of fractals as realistic and useful models for describing many "rough" phenomena in the real world, and concluded that "real roughness is often fractal and can be measured." Although Mandelbrot coined the term "fractal", some of the mathematical objects he presented in The Fractal Geometry of Nature had been previously described by other mathematicians. Before Mandelbrot, however, they were regarded as isolated curiosities with unnatural and non-intuitive properties. Mandelbrot brought these objects together for the first time and turned them into essential tools for the long-stalled effort to extend the scope of science to explaining non-smooth, "rough" objects in the real world; his methods of research were both old and new. Fractals are also found in human pursuits, such as music, painting, architecture, and stock market prices. Mandelbrot believed that fractals, far from being unnatural, were in many ways more intuitive and natural than the artificially smooth objects of traditional Euclidean geometry: Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line. —Mandelbrot, in his introduction to The Fractal Geometry of Nature Mandelbrot has been called an artist, a visionary, and a maverick. His informal and passionate style of writing and his emphasis on visual and geometric intuition (supported by the inclusion of numerous illustrations) made The Fractal Geometry of Nature accessible to non-specialists.
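Mandelbrot's conclusion above, that "real roughness is often fractal and can be measured", has a concrete algorithmic reading: box counting, in which one covers a shape with boxes of side s and fits the slope of log N(s) against log(1/s). The sketch below is an illustrative estimate for the Koch curve (assuming NumPy; the recursion depth and box scales are arbitrary choices), whose true dimension is log 4 / log 3 ≈ 1.26:

```python
# Box-counting estimate of fractal dimension, illustrated on the Koch curve.
# Assumes NumPy; depth and scales are arbitrary illustrative choices.
import numpy as np

def koch_points(depth=6):
    """Points of the Koch curve built by repeatedly replacing each segment."""
    pts = np.array([[0.0, 0.0], [1.0, 0.0]])
    rot = np.array([[0.5, -np.sqrt(3) / 2],
                    [np.sqrt(3) / 2, 0.5]])          # rotation by 60 degrees
    for _ in range(depth):
        new = []
        for a, b in zip(pts[:-1], pts[1:]):
            d = (b - a) / 3.0
            new.extend([a, a + d, a + d + rot @ d, a + 2 * d])
        new.append(pts[-1])
        pts = np.array(new)
    return pts

def box_count_dimension(pts, scales=(1/8, 1/16, 1/32, 1/64)):
    counts = []
    for s in scales:
        boxes = {tuple(np.floor(p / s).astype(int)) for p in pts}
        counts.append(len(boxes))
    # Slope of log N(s) versus log(1/s) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1 / np.asarray(scales)), np.log(counts), 1)
    return slope

print(box_count_dimension(koch_points()))   # roughly 1.26 = log 4 / log 3
```

Applied to a digitized coastline, the same idea yields fractional dimension estimates in the spirit of the 1967 paper, which used Richardson's divider method rather than boxes.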
The book sparked widespread popular interest in fractals and contributed to chaos theory and other fields of science and mathematics. Mandelbrot also put his ideas to work in cosmology. In 1974 he offered a new explanation of Olbers' paradox (the "dark night sky" riddle), demonstrating the consequences of fractal theory as a sufficient, but not necessary, resolution of the paradox. He postulated that if the stars in the universe were fractally distributed (for example, like Cantor dust), it would not be necessary to rely on the Big Bang theory to explain the paradox. His model would not rule out a Big Bang, but would allow for a dark sky even if the Big Bang had not occurred. Awards and honors Mandelbrot's awards include the Wolf Prize for Physics in 1993, the Lewis Fry Richardson Prize of the European Geophysical Society in 2000, the Japan Prize in 2003, and the Einstein Lectureship of the American Mathematical Society in 2006. The small asteroid 27500 Mandelbrot was named in his honor. In November 1990, he was made a Chevalier in France's Legion of Honour. In December 2005, Mandelbrot was appointed to the position of Battelle Fellow at the Pacific Northwest National Laboratory. Mandelbrot was promoted to Officer of the Legion of Honour in January 2006. An honorary degree from Johns Hopkins University was bestowed on Mandelbrot in the May 2010 commencement exercises. A partial list of awards received by Mandelbrot: 2004 Best Business Book of the Year Award; AMS Einstein Lectureship; Barnard Medal; Caltech Service; Casimir Funk Natural Sciences Award; Charles Proteus Steinmetz Medal; Fellow, American Geophysical Union; Fellow of the American Statistical Association; Fellow of the American Physical Society (1987); Franklin Medal; Harvey Prize (1989); Honda Prize; Humboldt Preis; IBM Fellowship; Japan Prize (2003); John Scott Award; Légion d'honneur (Legion of Honour); Lewis Fry Richardson Medal; Medaglia della Presidenza della Repubblica Italiana; Médaille de Vermeil de la Ville de Paris; Nevada Prize; Member of the Norwegian Academy of Science and Letters;
Member of the American Philosophical Society (2004); Science for Art; Sven Berggren-Priset; Władysław Orlicz Prize; Wolf Foundation Prize for Physics (1993). Death and legacy Mandelbrot died from pancreatic cancer at the age of 85 in a hospice in Cambridge, Massachusetts, on 14 October 2010. Reacting to news of his death, mathematician Heinz-Otto Peitgen said: "[I]f we talk about impact inside mathematics, and applications in the sciences, he is one of the most important figures of the last fifty years." Chris Anderson, TED conference curator, described Mandelbrot as "an icon who changed how we see the world". Nicolas Sarkozy, President of France at the time of Mandelbrot's death, said Mandelbrot had "a powerful, original mind that never shied away from innovating and shattering preconceived notions [... h]is work, developed entirely outside mainstream research, led to modern information theory." Mandelbrot's obituary in The Economist points out his fame as "celebrity beyond the academy" and lauds him as the "father of fractal geometry". Best-selling essayist-author Nassim Nicholas Taleb has remarked that Mandelbrot's book The (Mis)Behavior of Markets is in his opinion "The deepest and most realistic finance book ever published". Bibliography In English Fractals: Form, Chance and Dimension, 1977, 2020 The Fractal Geometry of Nature, 1982 Mandelbrot, B. (1959) Variables et processus stochastiques de Pareto-Levy, et la repartition des revenus. Comptes rendus de l'Académie des Sciences de Paris, 249, 613–615. Mandelbrot, B. (1960) The Pareto-Levy law and the distribution of income. International Economic Review, 1, 79–106. Mandelbrot, B. (1961) Stable Paretian random functions and the multiplicative variation of income. Econometrica, 29, 517–543. Mandelbrot, B. (1964) Random walks, fire damage amount and other Paretian risk phenomena. Operations Research, 12, 582–585. Fractals and Scaling in Finance: Discontinuity, Concentration, Risk. Selecta Volume E, 1997, by Benoit B. Mandelbrot and R.E. Gomory Mandelbrot, Benoit B. (1997) Fractals and Scaling in Finance: Discontinuity, Concentration, Risk, Springer. Fractales, hasard et finance, 1959–1997, 1 November 1998 Multifractals and 1/ƒ Noise: Wild Self-Affinity in Physics (1963–1976) (Selecta; V.N), 18 January 1999, by J.M. Berger and Benoit B. Mandelbrot Gaussian Self-Affinity and Fractals: Globality, The Earth, 1/f Noise, and R/S (Selected Works of Benoit B. Mandelbrot), 14 December 2001, by Benoit Mandelbrot and F.J. Damerau Mandelbrot, Benoit B., Gaussian Self-Affinity and Fractals, Springer: 2002. Fractals and Chaos: The Mandelbrot Set and Beyond, 9 January 2004 The Misbehavior of Markets: A Fractal View of Financial Turbulence, 2006, by Benoit Mandelbrot and Richard L. Hudson Mandelbrot, Benoit B. (2012). The Fractalist: Memoir of a Scientific Maverick. New York: Vintage Books, Division of Random House. The Fractalist: Memoir of a Scientific Maverick, 2014 Heinz-Otto Peitgen, Hartmut Jürgens, Dietmar Saupe and Cornelia Zahlten: Fractals: An Animated Discussion (63 min video film, interviews with Benoît Mandelbrot and Edward Lorenz, computer animations), W.H. Freeman and Company, 1990 (re-published by Films for the Humanities & Sciences). "Hunting the Hidden Dimension: mysteriously beautiful fractals are shaking up the world of mathematics and deepening our understanding of nature", NOVA, WGBH Educational Foundation, Boston for PBS, first aired 28 October 2008. References in popular culture In the 1990 book The
disciples who lived with him and witnessed his various miracles. These followers, he says, are Constantinus, who succeeded Benedict as Abbot of Monte Cassino; Honoratus, who was abbot of Subiaco when St Gregory wrote his Dialogues; Valentinianus; and Simplicius. In Gregory's day, history was not recognised as an independent field of study; it was a branch of grammar or rhetoric, and historia was an account that summed up the findings of the learned when they wrote what was, at that time, considered history. Gregory's Dialogues, Book Two, then, an authentic medieval hagiography cast as a conversation between the Pope and his deacon Peter, is designed to teach spiritual lessons. Early life He was the son of a Roman noble of Nursia, the modern Norcia, in Umbria. A tradition which Bede accepts makes him a twin with his sister Scholastica. If 480 is accepted as the year of his birth, he would have abandoned his studies and left home around 500; Gregory's narrative makes it impossible to suppose him younger than 20 at the time. Benedict was sent to Rome to study, but was disappointed by the life he found there. He does not seem to have left Rome for the purpose of becoming a hermit, but only to find some place away from the life of the great city. He took his old nurse with him as a servant, and they settled down to live in Enfide. Enfide, which the tradition of Subiaco identifies with the modern Affile, is in the Simbruini mountains, about forty miles from Rome and two miles from Subiaco. A short distance from Enfide is the entrance to a narrow, gloomy valley that penetrates the mountains and leads directly to Subiaco. The path continues to ascend, and the side of the ravine on which it runs becomes steeper, until a cave is reached above which the mountain rises almost perpendicularly, while on the right the ground drops steeply to where, in Benedict's day, lay the blue waters of a lake. The cave has a large triangular-shaped opening and is about ten feet deep. On his way from Enfide, Benedict met a monk, Romanus of Subiaco, whose monastery was on the mountain above the cliff overhanging the cave. Romanus discussed with Benedict the purpose which had brought him to Subiaco, and gave him the monk's habit. By his advice Benedict became a hermit and for three years, unknown to men, lived in this cave above the lake. Later life Gregory tells us little of these years. He now speaks of Benedict no longer as a youth (puer), but as a man (vir) of God. Romanus, Gregory tells us, served Benedict in every way he could. The monk apparently visited him frequently, and on fixed days brought him food. During these three years of solitude, broken only by occasional communications with the outer world and by the visits of Romanus, Benedict matured both in mind and character, in knowledge of himself and of his fellow man, and at the same time he became not merely known to, but respected by, those about him; so much so that on the death of the abbot of a monastery in the neighbourhood (identified by some with Vicovaro), the community came to him and begged him to become its abbot. Benedict was acquainted with the life and discipline of the monastery, and knew that "their manners were diverse from his and therefore that they would never agree together: yet, at length, overcome with their entreaty, he gave his consent" (ibid., 3). The experiment failed; the monks tried to poison him. The legend goes that they first tried to poison his drink.
He prayed a blessing over the cup, and it shattered. He therefore left the group and went back to his cave at Subiaco. In the neighborhood there lived a priest called Florentius who, moved by envy, tried to ruin him: he sent Benedict a poisoned loaf of bread. When he
prayed a blessing over the bread, a raven swept in and took the loaf away. From this time his miracles seem to have become frequent, and many people, attracted by his sanctity and character, came to Subiaco to be under his guidance. Having failed to kill him with poisoned bread, Florentius tried to seduce his monks with prostitutes. To avoid further temptations, in about 530 Benedict left Subiaco; he had founded 12 monasteries in its vicinity, and, eventually, in 530 he founded the great Benedictine monastery of Monte Cassino, which lies on a hilltop between Rome and Naples. During the invasion of Italy, Totila, King of the Goths, ordered a general to wear his kingly robes in order to see whether Benedict would discover the truth. Immediately Benedict detected the impersonation, and Totila came to pay him due respect. Veneration He is believed to have died of a fever at Monte Cassino not long after his twin sister, Scholastica, and was buried in the same place as his sister. According to tradition, this occurred on 21 March 547. He was named patron protector of Europe by Pope Paul VI in 1964. In 1980, Pope John Paul II declared him co-patron of Europe, together with Cyril and Methodius. He is also the patron saint of speleologists.
In the pre-1970 General Roman Calendar, his feast is kept on 21 March, the day of his death according to some manuscripts of the Martyrologium Hieronymianum and that of Bede. Because on that date his liturgical memorial would always be impeded by the observance of Lent, the 1969 revision of the General Roman Calendar moved his memorial to 11 July, the date that appears in some Gallic liturgical books of the end of the 8th century as the feast commemorating his birth (Natalis S. Benedicti). There is some uncertainty about the origin of this feast. Accordingly, on 21 March the Roman Martyrology mentions in a line and a half that it is Benedict's day of death and that his memorial is celebrated on 11 July, while on 11 July it devotes seven lines to speaking of him, and mentions the tradition that he died on 21 March. The Eastern Orthodox Church commemorates Saint Benedict on 14 March. The Anglican Communion has no single universal calendar; instead, a provincial calendar of saints is published in each province, and in almost all of these Saint Benedict is commemorated on 11 July. Benedict is remembered in the Church of England with a Lesser Festival on 11 July. Rule of Saint Benedict Benedict wrote the Rule in 516 for monks living communally under the authority of an abbot. The Rule comprises seventy-three short chapters. Its wisdom is twofold: spiritual (how to live a Christocentric life on earth) and administrative (how to run a monastery efficiently). More than half of the chapters describe how to be obedient and humble, and what to do when a member of the community is not. About one-fourth regulate the work of God (the "opus Dei"). One-tenth outline how, and by whom, the monastery should be managed. Following the golden rule of ora et labora ("pray and work"), the monks devoted eight hours of each day to prayer, eight hours to sleep, and eight hours to manual work, sacred reading, or works of charity. Saint Benedict Medal This devotional medal originally came from a cross in honor of Saint Benedict. On one side, the medal has an image of Saint Benedict, holding the Holy Rule in his left hand and a cross in his right, with a raven on one side of him and a cup on the other. Around the medal's outer margin are the words "Eius in obitu nostro praesentia muniamur" ("May we be strengthened by his presence in the hour of our death").
It was from his cavalry, the numbers of which vastly outnumbered Caesar's own, that Pompey derived his greatest advantage. He seems to have had at his disposal anywhere between 5,000 and 7,000 cavalry, and thousands of archers, slingers and light infantrymen in general. These all formed a remarkably diverse group, including Gallic and Germanic horsemen alongside polyglot peoples of the east – Greeks, Thracians and Anatolians from the Balkans and Anatolia, and Syrians, Phoenicians and Jews from the Levant. To this heterogeneous force Pompey added horsemen conscripted from his own slaves. Many of the foreigners were serving under their own rulers, for more than a dozen despots and petty kings under Roman influence in the east were Pompey's personal clients, and some elected to attend in person or to send proxies. Caesarian legions Caesar had the following legions with him: the VI legion (later called Ferrata), the VII legion (later called Claudia Pia Fidelis), the VIII legion (later called Augusta), the IX legion (later called Hispana), the X legion (Equestris, later called Gemina), the XI legion (later called Paterna and Claudia Pia Fidelis, the same title as the seventh), the XII legion (later called Fulminata) and the XIII legion (later also called Gemina, the 'twin' to the tenth) – all veterans of his Gallic Wars – together with the XXVII legion, constituted in the summer of 49 BC. The bulk of Caesar's army at Pharsalus was thus made up of his veterans from the Gallic Wars: very experienced, battle-hardened troops who were absolutely devoted to their commander. Deployment The two generals deployed their legions in the traditional three lines (triplex acies), with Pompey's right and Caesar's left flanks resting on the river Enipeus. As the stream provided enough protection to that side, Pompey moved almost all of his cavalry, archers, and slingers to the left, to make the most of their numerical strength; only a small force of 500–600 Pontic cavalry and some Cappadocian light infantry was placed on his right flank. Pompey stationed his strongest legions in the center and wings of his infantry line, and dispersed some 2,000 re-enlisted veterans throughout the entire line in order to inspire the less experienced. The Pompeian cohorts were arrayed in an unusually thick formation, 10 men deep: their task was just to tie down the enemy foot while Pompey's cavalry, his key to victory, swept through Caesar's flank and rear. The column of legions was divided under the command of three subordinates, with Lentulus in charge of the left, Scipio the center and Ahenobarbus the right. Labienus was entrusted with command of the cavalry charge, while Pompey himself took up a position behind the left wing in order to oversee the course of the battle. Caesar also deployed his men in three lines but, being outnumbered, had to thin his ranks to a depth of only six men in order to match the frontage presented by Pompey. His left flank, resting on the Enipeus, consisted of his battle-worn IXth legion supplemented by the VIIIth legion; these were commanded by Mark Antony. The VI, XII, XI and XIII formed the centre, commanded by Domitius; then came the VII, and on his right Caesar placed his favored Xth legion, giving Sulla command of this flank. Caesar himself took his stand on the right, across from Pompey. Upon seeing the disposition of Pompey's army, Caesar grew discomforted and further thinned his third line in order to form a fourth line on his right; this was to counter the onslaught of the enemy cavalry, which he knew his numerically inferior cavalry could not withstand. He gave this new line detailed instructions for the role it would play, hinting that upon it would rest the fortunes of the day, and gave strict orders to his third line not to charge until specifically ordered. Battle There was significant distance between the two armies, according to Caesar. Pompey ordered his men not to charge, but to wait until Caesar's legions came into close quarters; Pompey's adviser Gaius Triarius believed that Caesar's infantry would be fatigued and fall into disorder if forced to cover twice the expected distance of a battle march, and stationary troops were expected to be able to defend better against pila throws. Seeing that Pompey's army was not advancing, Caesar's infantry under Mark Antony and Gnaeus Domitius Calvinus started the advance. As Caesar's men neared throwing distance, they stopped without orders to rest and regroup before continuing the charge; Pompey's right and centre line held as the two armies collided. As Pompey's infantry fought, Labienus ordered the Pompeian cavalry on his left flank to attack Caesar's cavalry; as expected, they successfully pushed back Caesar's cavalry. Caesar then revealed his hidden fourth line of infantry and surprised Pompey's cavalry charge: Caesar's men had been ordered to leap up and use their pila to thrust at Pompey's cavalry instead of throwing them. Pompey's cavalry panicked and suffered hundreds of casualties as Caesar's cavalry came about and charged after them. After failing to re-form, the rest of Pompey's cavalry retreated to the hills, leaving the left wing of his legions exposed to the hidden troops as Caesar's cavalry wheeled around their flank. Caesar then ordered his third line, containing his most battle-hardened veterans, to attack. This broke Pompey's left wing troops, who fled the battlefield. After routing Pompey's cavalry, Caesar had thrown in his last line of reserves, a move which at this point meant that the battle was more or less decided.
Pompey lost the will to fight as he watched both cavalry and legions under his command break formation and flee, and he retreated to his camp, leaving the rest of his troops at the centre and right flank to their own devices. He ordered the garrisoned auxiliaries to defend the camp as he gathered his family, loaded up gold, and threw off his general's cloak to make a quick escape. As the rest of Pompey's army was left in confusion, Caesar urged his men to end the day by routing the remainder of Pompey's troops and capturing the Pompeian camp. They complied with his wishes; after finishing off the remains of Pompey's men, they furiously attacked the camp walls. The Thracians and the other auxiliaries who had been left in the Pompeian camp, seven cohorts in total, defended it bravely but were not able to fend off the assault. Caesar had won his greatest victory, claiming to have lost only about 200 soldiers and 30 centurions. In his history of the war, Caesar praised his own men's discipline and experience, remembered each of his centurions by name, and questioned Pompey's decision not to charge. Aftermath Pompey, despairing of the defeat, fled with his advisors overseas to Mytilene and thence to Cilicia, where he held a council of war. At the same time, Cato and supporters at Dyrrachium attempted first to hand over command to Marcus Tullius Cicero, who refused, deciding instead to return to Italy; they then regrouped at Corcyra and went thence to Libya. Others, including Marcus Junius Brutus, sought Caesar's pardon; Brutus travelled over marshlands to Larissa, where Caesar welcomed him graciously into his camp. Pompey's council of war decided to flee to Egypt, which had in the previous year supplied him with military aid. In the aftermath of the battle, Caesar captured Pompey's camp and burned Pompey's correspondence. He then announced that he would forgive all who asked for mercy. Pompeian naval forces in the Adriatic and Italy mostly withdrew or surrendered. Hearing of Pompey's flight to Egypt, Caesar remained in hot pursuit, first landing in Asia and reaching Alexandria on 2 October 48 BC, where he discovered Pompey's murder and then became embroiled in a dynastic dispute between Ptolemy XIII and Cleopatra. Importance Paul K. Davis wrote that "Caesar's victory took him to the pinnacle of power, effectively ending the Republic." The battle itself did not end the civil war, but it was decisive and gave Caesar a much-needed boost in legitimacy. Until then much of the Roman world outside Italy supported Pompey and his allies, owing to the extensive list of clients he held in all corners of the Republic. After Pompey's defeat, former allies began to align themselves with Caesar, as some came to believe the gods favored him, while for others it was simple self-preservation. The ancients placed great stock in success as a sign of favor by the gods, especially success in the face of almost certain defeat, as Caesar had experienced at Pharsalus. This allowed Caesar to parlay this single victory into a huge network of willing clients, better securing his hold on power and forcing the Optimates into near exile in search of allies to continue the fight against Caesar. In popular culture The battle gives its name to the following artistic, geographical, and business concerns: Pharsalia, a poem by Lucan; Pharsalia, New York, U.S.; and Pharsalia Technologies, Inc.
In Alexandre Dumas's The Three Musketeers, the author makes reference to Caesar's purported order that his men try to cut the faces of their opponents, their vanity supposedly being of more value to them than their lives. In Mankiewicz's 1963 film Cleopatra, the immediate aftermath of Pharsalus is used as an opening scene to set the action in motion. Notes Citations References Further reading Bruère, Richard Treat (1951). "Palaepharsalus, Pharsalus, Pharsalia". Classical Philology, Vol. 46, No. 2 (Apr. 1951), pp. 111–115. Gwatkin, William E. (1956). "Some Reflections on the Battle of Pharsalus". Transactions and Proceedings of the American Philological Association, Vol. 87. James, Steven (2016). "48 BC: The Battle of Pharsalus". Non-peer-reviewed publication. Holmes, T. Rice (1908). "The Battle-Field of Old Pharsalvs" (The Battle-Field of Old Pharsalus). The Classical Quarterly, Vol. 2, No. 4 (Oct. 1908), pp. 271–292. Lucas, Frank Laurence (1921). "The Battlefield of Pharsalos". Annual of the British School at Athens, No. XXIV, 1919–21. Nordling, John G. (2006). "Caesar's Pre-Battle Speech at Pharsalus (B.C. 3.85.4): Ridiculum Acri Fortius ... Secat Res". The Classical Journal, Vol. 101, No. 2 (Dec.–Jan. 2005/2006), pp. 183–189. Pelling, C. B. R. (1973). "Pharsalus". Historia: Zeitschrift für Alte Geschichte, Bd. 22, H. 2 (2nd Qtr. 1973), pp. 249–259. Perrin, B. (1885). "Pharsalia, Pharsalus, Palaepharsalus". The American Journal of Philology, Vol. 6,
possession of intentional "gifts" left by humans such as food and jewelry, and leaving items in their place such as rocks and twigs. Skeptics argue that many of these alleged human interactions are easily hoaxed, the result of misidentification, or outright fabrications. Proposed explanations Various explanations have been suggested for sightings, including conjecture about which known animals may have been misidentified as Bigfoot. Scientists typically attribute sightings either to hoaxes or to misidentification of known animals and their tracks, particularly black bears. Misidentification Bears American black bears, the animal most often mistakenly identified as Bigfoot, have been observed and recorded walking upright, often as the result of an injury. In 2007, the Bigfoot Field Researchers Organization put forward photos which they stated showed a juvenile Bigfoot. The Pennsylvania Game Commission, however, stated that the photos were of a bear with mange. Standing upright, adult black bears and grizzly bears reach heights within the range of anecdotal Bigfoot reports. Escaped apes Some have proposed that sightings of Bigfoot may simply be people observing and misidentifying known great apes such as chimpanzees, gorillas, and orangutans that have escaped from captivity, such as from zoos, circuses, and private owners. This explanation is often proposed in relation to the Bigfoot-like Skunk ape, as some argue that the humid subtropical climate of the southeastern United States could potentially support a population of escaped apes. Humans Humans have been mistaken for Bigfoot, with some incidents leading to injuries. In 2013, a 21-year-old man in Oklahoma was arrested after he told law enforcement he had accidentally shot his friend in the back while their group was allegedly hunting for Bigfoot. In 2017, a shamanist wearing clothing made of animal furs was vacationing in a North Carolina forest when local reports of alleged Bigfoot sightings flooded in. The Greenville Police Department issued a public notice not to shoot Bigfoot, for fear that someone in a fur suit might be mistakenly injured or killed. In 2018, a person near Helena, Montana, was shot at multiple times by a hunter who claimed he had mistaken him for a Bigfoot. Additionally, some have suggested feral humans or hermits living in the wilderness as another explanation for alleged Bigfoot sightings. One famous story, the Wild Man of the Navidad, tells of a wild ape-man who roamed the wilderness of eastern Texas in the mid-19th century, stealing food and goods from local residents. A search party allegedly captured an escaped African slave, to whom the story was later attributed. During the 1980s, according to Randy Fisher, the state of Washington's veterans' affairs director, a number of psychologically damaged American Vietnam veterans were living in remote wooded areas of the state. Pareidolia Some have proposed that pareidolia may explain Bigfoot sightings, specifically the tendency to observe human-like faces and figures within the natural environment. Photos and videos of poor quality alleged to depict Bigfoots are often attributed to this phenomenon and are commonly referred to as "Blobsquatch". Hoaxes Both Bigfoot believers and non-believers agree that many of the reported sightings are hoaxes or misidentified animals. Author Jerome Clark argues that the Jacko Affair, involving an 1884 newspaper report of an ape-like creature captured in British Columbia, was a hoax.
He cites research by John Green, who found that several contemporaneous British Columbia newspapers regarded the alleged capture as highly dubious, and notes that the Mainland Guardian of New Westminster, British Columbia, wrote, "Absurdity is written on the face of it." In 1968, the frozen corpse of a supposed hair-covered hominid was paraded around the United States as part of a travelling exhibition. Many stories surfaced as to its origin, such as that it had been killed by hunters in Minnesota or by American soldiers near Da Nang during the Vietnam War, and some took it to be proof of Bigfoot-like creatures. Primatologist John R. Napier studied the subject and concluded it was a hoax made of latex. Others disputed this, claiming that Napier had not studied the original subject. As of 2013, the subject, dubbed the Minnesota Iceman, is on display at the "Museum of the Weird" in Austin, Texas. Tom Biscardi, long-time Bigfoot enthusiast and CEO of "Searching for Bigfoot, Inc.", appeared on the Coast to Coast AM paranormal radio show on July 14, 2005, and said that he was "98% sure that his group will be able to capture a Bigfoot which they had been tracking in the Happy Camp, California area." A month later, he announced on the same radio show that he had access to a captured Bigfoot and was arranging a pay-per-view event for people to see it. He appeared on Coast to Coast AM again a few days later to announce that there was no captive Bigfoot; he blamed an unnamed woman for misleading him and said that the show's audience was gullible. On July 9, 2008, Rick Dyer and Matthew Whitton posted a video to YouTube claiming that they had discovered the body of a dead Bigfoot in a forest in northern Georgia. Tom Biscardi was contacted to investigate, and Dyer and Whitton received $50,000 from "Searching for Bigfoot, Inc." The story was covered by many major news networks, including BBC, CNN, ABC News, and Fox News. Soon after a press conference, the alleged Bigfoot body was delivered in a block of ice in a freezer to the Searching for Bigfoot team. When the contents were thawed, observers found that the hair was not real, the head was hollow, and the feet were rubber. Dyer and Whitton admitted it was a hoax after being confronted by Steve Kulls, executive director of SquatchDetective.com. In August 2012, a man in Montana was killed by a car while perpetrating a Bigfoot hoax using a ghillie suit. In January 2014, Rick Dyer, perpetrator of the previous Bigfoot hoax, said that he had killed a Bigfoot in September 2012 outside San Antonio. He claimed to have had scientific tests conducted on the body, "from DNA tests to 3D optical scans to body scans. It is the real deal. It's Bigfoot, and Bigfoot's here, and I shot it, and now I'm proving it to the world." He said that he had kept the body in a hidden location, and he intended to take it on tour across North America in 2014. He released photos of the body and a video showing a few individuals' reactions to seeing it, but never released any of the tests or scans. He refused to disclose the test results or to provide biological samples, saying only that the DNA results were done by an undisclosed lab and could not be matched to any known animal. Dyer said that he would reveal the body and tests on February 9, 2014, at a news conference at Washington University, but he never made the test results available. After the Phoenix tour, the Bigfoot body was taken to Houston.
On March 28, 2014, Dyer admitted on his Facebook page that his "Bigfoot corpse" was another hoax. He had paid Chris Russel of "Twisted Toybox" to manufacture the prop, which he nicknamed "Hank", from latex, foam, and camel hair. Dyer earned approximately US$60,000 from the tour of this second fake Bigfoot corpse. He maintained that he had killed a Bigfoot, but had not taken the real body on tour for fear that it would be stolen. Gigantopithecus Bigfoot proponents Grover Krantz and Geoffrey H. Bourne both believed that Bigfoot could be a relict population of the extinct southeast Asian ape species Gigantopithecus blacki. According to Bourne, G. blacki may have followed the many other species of animals that migrated across the Bering land bridge to the Americas. To date, however, no Gigantopithecus fossils have been found in the Americas, and in Asia the only recovered fossils have been mandibles and teeth, leaving uncertainty about G. blacki's locomotion. Krantz argued that G. blacki could have been bipedal, based on his extrapolation from the shape of its mandible, but the relevant part of the mandible is not present in any fossils. The more widely held view is that G. blacki was quadrupedal, as its enormous mass would have made it difficult for it to adopt a bipedal gait. Matt Cartmill criticizes the G. blacki hypothesis: The trouble with this account is that Gigantopithecus was not a hominin and maybe not even a crown group hominoid; yet the physical evidence implies that Bigfoot is an upright biped with buttocks and a long, stout, permanently adducted hallux. These are hominin autapomorphies, not found in other mammals or other bipeds. It seems unlikely that Gigantopithecus would have evolved these uniquely hominin traits in parallel. Bernard G. Campbell writes: "That Gigantopithecus is in fact extinct has been questioned by those who believe it survives as the Yeti of the Himalayas and the Sasquatch of the north-west American coast. But the evidence for these creatures is not convincing." Extinct hominidae Primatologist John R. Napier and anthropologist Gordon Strasenburg have suggested a species of Paranthropus, such as Paranthropus robustus with its gorilla-like crested skull and bipedal gait, as a possible candidate for Bigfoot's identity, despite the fact that fossils of Paranthropus are found only in Africa. Michael Rugg of the Bigfoot Discovery Museum presented a comparison between human, Gigantopithecus, and Meganthropus skulls (reconstructions made by Grover Krantz) in episodes 131 and 132 of the Bigfoot Discovery Museum Show. He favorably compares a modern tooth suspected of coming from a Bigfoot to the Meganthropus fossil teeth, noting the worn enamel on the occlusal surface. The Meganthropus fossils originated in Asia, while the tooth was found near Santa Cruz, California. Others have suggested Neanderthal, Homo erectus, or Homo heidelbergensis as the creature's identity, but no remains of any of those species have been found in the Americas. Scientific view Expert consensus is that allegations of the existence of Bigfoot are not credible science. Some Bigfoot enthusiasts, particularly those who believe Bigfoot to be a descendant of Gigantopithecus blacki, theorize that it may be the "missing link" between apes and humans; however, the G. blacki lineage diverged from that of orangutans around 12 million years ago and is not closely related to humans.
Experts have more often attributed belief in the existence of such a large, ape-like creature to hoaxes or delusion rather than to sightings of a genuine creature. In a 1996 USA Today article, Washington State zoologist John Crane said, "There is no such thing as Bigfoot. No data other than material that's clearly been fabricated has ever been presented." In addition, scientists cite the fact that Bigfoot is alleged to live in regions unusual for a large, nonhuman primate, i.e., temperate latitudes in the northern hemisphere; all recognized nonhuman apes are found in the tropics of Africa and Asia. As with other similar beings, climate and food supply issues would make such a creature's survival in reported habitats unlikely. Great apes have not been found in the fossil record in the Americas, and no Bigfoot remains are known to have been found. Phillips Stevens, a cultural anthropologist at the University at Buffalo, has likewise summarized the scientific consensus. McLeod writes that in the 1970s, when Bigfoot "experts" were frequently given high-profile media coverage, the scientific community generally avoided lending credence to such fringe theories by debating them. Primatologist Jane Goodall was asked for her personal opinion of Bigfoot in a 2002 interview on National Public Radio's "Science Friday". She joked, "Well, now you will be amazed when I tell you that I'm sure that they exist." She later added, chuckling, "Well, I'm a romantic, so I always wanted them to exist", and finally, "You know, why isn't there a body? I can't answer that, and maybe they don't exist, but I want them to." In 2012, when asked again by the Huffington Post, Goodall said "I'm fascinated and would actually love them to exist," adding, "Of course, it's strange that there has never been a single authentic hide or hair of the Bigfoot, but I've read all the accounts." Paleontologist and author Darren Naish states in a 2016 article for Scientific American that if "Bigfoot" existed, an abundance of evidence would also exist that cannot be found anywhere today, making the existence of such a creature exceedingly unlikely. Naish summarizes the evidence for "Bigfoot" that would exist if the creature itself existed: if "Bigfoot" existed, so would consistent reports of uniform vocalizations throughout North America, as can be identified for any existing large animal in the region, rather than the scattered and widely varied "Bigfoot" sounds haphazardly reported; if "Bigfoot" existed, so would many tracks that would be easy for experts to find, just as they easily find tracks for other rare megafauna in North America, rather than a complete lack of such tracks alongside "tracks" that experts agree are fraudulent; finally, if "Bigfoot" existed, an abundance of "Bigfoot" DNA would already have been found, again as it has been found for similar animals, instead of the current state of affairs, where there is no confirmed DNA for such a creature whatsoever. "DeNovo: Journal of Science" article A request to register the species name Homo sapiens cognatus was made by veterinarian Melba S. Ketchum, lead of The Sasquatch Genome Project, following publication of "Novel North American Hominins, Next Generation Sequencing of Three Whole Genomes and Associated Studies", Ketchum, M. S., et al., in the DeNovo: Journal of Science on February 13, 2013.
The article examined 111 samples of blood, tissue, hair, and other specimens "characterized and hypothesized" to have been "obtained from elusive hominins in North America commonly referred to as Sasquatch." The "DeNovo: Journal of
Science", in which the paper was published, was later found to be a website, registered only nine days before the paper was announced, whose first and only "journal" issue contained nothing but the "Sasquatch" article described above. In 2013, ZooBank, the non-governmental organization that is generally accepted by zoologists to assign species names, approved the registration request for the subspecies name Homo sapiens cognatus to be used for the reputed hominid more familiarly known as Bigfoot or Sasquatch. "Cognatus" is a Latin term meaning "related by blood." According to a statement by an ICZN associate scientist, "ZooBank and the ICZN do not review evidence for the legitimacy of organisms to which names are applied – that is outside our mandate, and is really the job of the relevant taxonomic/biological community (in this case, primatologists) to do that. When H. s. cognatus was first registered, needless to say we received a lot of inquiry about it. We scrutinized the original description and registration of this name as best as we could, and as far as we can determine, all the requirements were fulfilled for establishing the new name. Thus, at the moment, we have no grounds to reject the scientific name. This says nothing about the legitimacy of the taxon concept – it's just about whether the name was established according to the rules." Opinions of primatologists are generally against the existence of the purported species, as described above. Researchers Ivan T. Sanderson and Bernard Heuvelmans, founders of the subculture and pseudoscience of cryptozoology, spent parts of their careers searching for Bigfoot. Later scientists who researched the topic included Jason Jarvis, Carleton S. Coon, George Allen Agogino and William Charles Osman Hill, though they later stopped their research due to lack of evidence for the alleged creature. John Napier asserts that the scientific community's attitude towards Bigfoot stems primarily from insufficient evidence. Other scientists who have shown varying degrees of interest in the creature are Grover Krantz, Jeffrey Meldrum, John Bindernagel, David J. Daegling, George Schaller, Russell Mittermeier, Daris Swindler, Esteban Sarmiento, and Mireya Mayor. Formal studies One study was conducted by John Napier and published in his book Bigfoot: The Yeti and Sasquatch in Myth and Reality in 1973. Napier wrote that if a conclusion is to be reached based on scant extant "'hard' evidence," science must declare "Bigfoot does not exist." However, he found it difficult to entirely reject thousands of alleged tracks, "scattered over 125,000 square miles" (325,000 km2), or to dismiss all "the many hundreds" of eyewitness accounts. Napier concluded, "I am convinced that Sasquatch exists, but whether it is all it is cracked up to be is another matter altogether. There must be something in north-west America that needs explaining, and that something leaves man-like footprints." In 1974, the National Wildlife Federation funded a field study seeking Bigfoot evidence. No formal federation members were involved and the study made no notable discoveries. Beginning in the late 1970s, physical anthropologist Grover Krantz published several articles and four book-length treatments of Sasquatch.
However, his work was found to contain multiple scientific failings, including his being taken in by hoaxes. A study published in the Journal of Biogeography in 2009 by J.D. Lozier et al. used ecological niche modeling on reported sightings of Bigfoot, using their locations to infer preferred ecological parameters. They found a very close match with the ecological parameters of the American black bear, Ursus americanus. They also note that an upright bear looks much like Bigfoot's purported appearance and consider it highly improbable that two species should have very similar ecological preferences, concluding that Bigfoot sightings are likely misidentified sightings of black bears. In the first systematic genetic analysis of 30 hair samples that were suspected to be from Bigfoot-like creatures, only one was found to be primate in origin, and that was identified as human. In the study, a joint effort by the University of Oxford and Lausanne's Cantonal Museum of Zoology published in the Proceedings of the Royal Society B in 2014, the team used a previously published cleaning method to remove all surface contamination, then sequenced the ribosomal mitochondrial DNA 12S fragment of each sample and compared it to GenBank to identify the species of origin. The samples submitted were from different parts of the world, including the United States, Russia, the Himalayas, and Sumatra. Other than the one sample of human origin, all but two were from common animals. Black and brown bears accounted for most of the samples; other animals included cow, horse, dog/wolf/coyote, sheep, goat, deer, raccoon, porcupine, and tapir. The last two samples were thought to match a fossilized genetic sample of a 40,000-year-old polar bear of the Pleistocene epoch; a second test identified the hairs as being from a rare type of brown bear. In 2019, the FBI declassified an analysis it conducted on alleged Bigfoot hairs in 1976. Amateur Bigfoot researcher Peter Byrne sent the FBI 15 hairs attached to a small skin fragment and asked if the bureau could assist him in identifying it. Jay Cochran, Jr., assistant director of the FBI's Scientific and Technical Services division, responded in 1977 that the hairs were of deer family origin. Claims After what The Huffington Post described as "a five-year study of purported Bigfoot (also known as Sasquatch) DNA samples", but prior to peer review of the work, DNA Diagnostics, a veterinary laboratory headed by veterinarian Melba Ketchum, issued a press release on November 24, 2012, claiming that they had found proof that the Sasquatch "is a human relative that arose approximately 15,000 years ago as a hybrid cross of modern Homo sapiens with an unknown primate species." Ketchum called for this to be recognized officially, saying that "Government at all levels must recognize them as an indigenous people and immediately protect their human and Constitutional rights against those who would see in their physical and cultural differences a 'license' to hunt, trap, or kill them." Failing to find a scientific journal that would publish their results, Ketchum announced on February 13, 2013, that their research had been published in the DeNovo Journal of Science. The Huffington Post discovered that the journal's domain had been registered anonymously only nine days before the announcement. This was the only edition of DeNovo and was listed as Volume 1, Issue 1, with its only content being the Ketchum paper.
Shortly after publication, the paper was analyzed and outlined by Sharon Hill of Doubtful News for the Committee for Skeptical Inquiry. Hill reported on the questionable journal, the mismanaged DNA testing, and the poor quality of the paper, stating that "The few experienced geneticists who viewed the paper reported a dismal opinion of it noting it made little sense." The Scientist magazine also analyzed the paper. A body print taken in 2000 in the Gifford Pinchot National Forest in Washington state, dubbed the Skookum cast, is also believed by some to have been made by a Bigfoot that sat down in the mud to eat fruit left out by researchers during the filming of an episode of the Animal X television show. Skeptics believe the cast to have been made by a known animal such as an elk. Anthropologist Jeffrey Meldrum, who specializes in the study of primate bipedalism, possesses over 300 footprint casts that he maintains could not be made by wood carvings or human feet based on their anatomy, but instead are evidence of a large, non-human primate present today in America. In 2005, Matt Crowley obtained a copy of an alleged Bigfoot footprint cast, called the "Onion Mountain Cast", and was able to painstakingly recreate the dermal ridges. Michael Dennett of the Skeptical Inquirer spoke to police investigator and primate fingerprint expert Jimmy Chilcutt in 2006 for comment on the replica, and Chilcutt stated, "Matt has shown artifacts can be created, at least under laboratory conditions, and field researchers need to take precautions". Chilcutt had previously stated that some of the alleged Bigfoot footprint plaster casts he examined were genuine due to the presence of "unique dermal ridges". Dennett states that Chilcutt had published nothing on the "unique dermal ridges" that he claims prove authenticity, nor had anyone else published anything on that topic; Chilcutt made his statements solely through a posting on the Internet. Dennett further states that no reviews of Chilcutt's statements had been performed beyond those by what Dennett describes as "other Bigfoot enthusiasts". In 2015, Centralia College professor Michael Townsend claimed to have discovered prey bones with "human-like" bite impressions on the south side of Mount St. Helens. Townsend claimed the bites were over two times wider than a human bite, and that he and two of his students also found 16-inch footprints in the area. Jeremiah Byron, host of the Bigfoot Society Podcast, believes Bigfoot are omnivores, stating, "They eat both plants and meat. I've seen accounts that they eat everything from berries, leaves, nuts, and fruit to salmon, rabbit, elk, and bear." Ronny Le Blanc, host of Expedition Bigfoot on the Travel Channel, indicated he has heard anecdotal reports of Bigfoot allegedly hunting and consuming deer. Claims about the origins and characteristics of Bigfoot have also crossed over with other paranormal claims, including that Bigfoot, extraterrestrials, and UFOs are related or that Bigfoot creatures are psychic, can cross into different dimensions, or are completely supernatural in origin. Additionally, claims regarding Bigfoot have been associated with conspiracy theories including a government cover-up. Patterson-Gimlin film The most well-known video of an alleged Bigfoot, the Patterson-Gimlin film, was recorded on October 20, 1967, by Roger Patterson and Robert "Bob" Gimlin as they explored an area called Bluff Creek in Northern California.
The 59.5-second-long video has become an iconic piece of Bigfoot lore, and continues to be a highly scrutinized, analyzed, and debated subject. Some argue that the film has provided "no supportive data of any scientific value", while others believe that the subject in the film is proof of an unrecognized hominid living in North America. Organizations and events There are several organizations dedicated to the research and investigation of Bigfoot sightings in the United States. The oldest and largest is the Bigfoot Field Researchers Organization (BFRO). The BFRO also provides a free database to individuals and other organizations. Its website includes reports from across North America that have been investigated by researchers to determine credibility. Another is the North American Wood Ape Conservancy (NAWAC), a nonprofit organization. Other similar organizations exist throughout many U.S. states, and their members come from a variety of backgrounds. Some organizations, as well as private researchers and enthusiasts, own and operate Bigfoot museums. Additionally, Bigfoot conferences and festivals are attended by thousands of people. These events commonly include guest speakers, research and lore presentations, and sometimes live music, vendors, food trucks, and other activities such as costume contests and "Bigfoot howl" competitions. The Chamber of Commerce in Willow Creek, California, has hosted the "Bigfoot Daze" festival annually since the 1960s, drawing on the popularity of the local lore. In February 2016, the University of New Mexico at Gallup held a two-day Bigfoot conference at a cost of $7,000 in university funds. In popular culture Bigfoot has a demonstrable impact on popular culture, and has been compared to Michael Jordan as a cultural icon. In 2018, Smithsonian magazine declared "Interest in the existence of the creature is at an all-time high". According to a poll taken in May 2020, about 1 in 10 American adults believe that Bigfoot is a real animal.
His popularity in India led many Hindu singers to imitate and emulate him, notably Kishore Kumar, considered the "Bing Crosby of India". Entrepreneurship According to Shoshana Klebanoff, Crosby became one of the richest men in the history of show business. He had investments in real estate, mines, oil wells, cattle ranches, racehorses, music publishing, baseball teams, and television. He made a fortune from the Minute Maid Orange Juice Corporation, in which he was a principal stockholder. Role in early tape recording During the Golden Age of Radio, performers had to create their shows live, sometimes even redoing the program a second time for the West Coast time zone. Crosby had to do two live radio shows on the same day, three hours apart, for the East and West Coasts. Crosby's radio career took a significant turn in 1945, when he clashed with NBC over his insistence that he be allowed to pre-record his radio shows. (The live production of radio shows was also reinforced by the musicians' union and ASCAP, which wanted to ensure continued work for their members.) In On the Air: The Encyclopedia of Old-Time Radio, John Dunning wrote about German engineers having developed a tape recorder with a near-professional broadcast quality standard. Crosby's insistence eventually factored into the further development of magnetic tape sound recording and the radio industry's widespread adoption of it. He used his clout, both professionally and financially, for innovations in audio. But NBC and CBS refused to broadcast prerecorded radio programs. Crosby left the network and remained off the air for seven months, creating a legal battle with his sponsor Kraft that was settled out of court. He returned to broadcasting for the last 13 weeks of the 1945–1946 season. The Mutual Network, on the other hand, had pre-recorded some of its programs as early as 1938 for The Shadow with Orson Welles. ABC, formed from the sale of the NBC Blue Network in 1943 after a federal antitrust suit, was willing to join Mutual in breaking the tradition. ABC offered Crosby $30,000 per week to produce a recorded show every Wednesday that would be sponsored by Philco. He would get an additional $40,000 from 400 independent stations for the rights to broadcast the 30-minute show, which was sent to them every Monday on three 16-inch (40-cm) lacquer discs that played ten minutes per side at 33 rpm. Murdo MacKenzie of Bing Crosby Enterprises had seen a demonstration of the German Magnetophon in June 1947, the same device that Jack Mullin had brought back from Radio Frankfurt, with 50 reels of tape, at the end of the war. It was one of the magnetic tape recorders that BASF and AEG had built in Germany starting in 1935. The 6.5 mm ferric-oxide-coated tape could record 20 minutes per reel of high-quality sound. Alexander M. Poniatoff ordered Ampex, which he founded in 1944, to manufacture an improved version of the Magnetophon. Crosby hired Mullin to start recording his Philco Radio Time show on his German-made machine in August 1947, using the same 50 reels of I.G. Farben magnetic tape that Mullin had found at a radio station at Bad Nauheim near Frankfurt while working for the U.S. Army Signal Corps. The advantage, as Crosby wrote in his autobiography, was editing; Mullin's 1976 memoir of these early days of experimental recording agrees with Crosby's account. Crosby invested US$50,000 in Ampex with the intent to produce more machines. In 1948, the second season of Philco shows was recorded with the Ampex Model 200A and Scotch 111 tape from 3M.
Mullin explained how one new broadcasting technique was invented on the Crosby show with these machines. Crosby started the tape recorder revolution in America. In his 1950 film Mr. Music, he is seen singing into an Ampex tape recorder that reproduced his voice better than anything else. Also quick to adopt tape recording was his friend Bob Hope. Crosby gave one of the first Ampex Model 300 recorders to his friend, guitarist Les Paul, which led to Paul's invention of multitrack recording. His organization, the Crosby Research Foundation, held tape recording patents and developed equipment and recording techniques such as the laugh track that are still in use today. With Frank Sinatra, Crosby was one of the principal backers for the United Western Recorders studio complex in Los Angeles. Videotape development Mullin continued to work for Crosby to develop a videotape recorder (VTR). Television production was mostly live television in its early years, but Crosby wanted the same ability to record that he had achieved in radio. The Fireside Theater (1950), sponsored by Procter & Gamble, was his first television production. Mullin had not yet succeeded with videotape, so Crosby filmed the series of 26-minute shows at the Hal Roach Studios, and the "telefilms" were syndicated to individual television stations. Crosby continued to finance the development of videotape. Bing Crosby Enterprises gave the world's first demonstration of videotape recording in Los Angeles on November 11, 1951. Developed by John T. Mullin and Wayne R. Johnson since 1950, the device aired what were described as "blurred and indistinct" images, using a modified Ampex 200 tape recorder and standard quarter-inch (6.3 mm) audio tape moving at 360 inches (9.1 m) per second. Television station ownership A Crosby-led group purchased station KCOP-TV in Los Angeles, California, in 1954. NAFI Corporation and Crosby purchased television station KPTV in Portland, Oregon, for $4 million on September 1, 1959. In 1960, NAFI purchased KCOP from Crosby's group. In the early 1950s, Crosby helped establish the CBS television affiliate in his hometown of Spokane, Washington. He partnered with Ed Craney, who owned the CBS radio affiliate KXLY (AM) and built a television studio west of Crosby's alma mater, Gonzaga University. After it began broadcasting, the station was sold within a year to Northern Pacific Radio and Television Corporation. Thoroughbred horse racing Crosby was a fan of thoroughbred horse racing and bought his first racehorse in 1935. In 1937, he became a founding partner of the Del Mar Thoroughbred Club and a member of its board of directors. Operating from the Del Mar Racetrack at Del Mar, California, the group included millionaire businessman Charles S. Howard, who owned a successful racing stable that included Seabiscuit. Charles' son, Lindsay C. Howard, became one of Crosby's closest friends; Crosby named his son Lindsay after him, and would purchase his 40-room Hillsborough, California, estate from Lindsay in 1965. Crosby and Lindsay Howard formed Binglin Stable to race and breed thoroughbred horses at a ranch in Moorpark in Ventura County, California. They also established the Binglin Stock Farm in Argentina, where they raced horses at Hipódromo de Palermo in Palermo, Buenos Aires. A number of Argentine-bred horses were purchased and shipped to race in the United States. On August 12, 1938, the Del Mar Thoroughbred Club hosted a $25,000 winner-take-all match race won by Charles S.
Howard's Seabiscuit over Binglin's horse Ligaroti. In 1943, Binglin's horse Don Bingo won the Suburban Handicap at Belmont Park in Elmont, New York. The Binglin Stable partnership came to an end in 1953 as a result of a liquidation of assets by Crosby, who needed to raise enough funds to pay the hefty federal and state inheritance taxes on his deceased wife's estate. The Bing Crosby Breeders' Cup Handicap at Del Mar Racetrack is named in his honor. Sports Crosby had a keen interest in sports. In the 1930s, his friend and former college classmate, Gonzaga head coach Mike Pecarovich, appointed Crosby as an assistant football coach. From 1946 until his death, he owned a 25% share of the Pittsburgh Pirates. Although he was passionate about the team, he was too nervous to watch the deciding Game 7 of the 1960 World Series, choosing to go to Paris with Kathryn and listen to its radio broadcast. Crosby had arranged for Ampex, another of his financial investments, to record the NBC telecast on kinescope. The game was one of the most famous in baseball history, capped off by Bill Mazeroski's walk-off home run that won the game for Pittsburgh. He apparently viewed the complete film just once, and then stored it in his wine cellar, where it remained undisturbed until it was discovered in December 2009. The restored broadcast was shown on MLB Network in December 2010. Crosby was also an avid golfer, and in 1978, he and Bob Hope were voted the Bob Jones Award, the highest honor given by the United States Golf Association in recognition of distinguished sportsmanship. He is a member of the World Golf Hall of Fame, having been inducted in 1978. In 1937, Crosby hosted the first 'Crosby Clambake', as it was popularly known, at Rancho Santa Fe Golf Club in Rancho Santa Fe, California, the event's location prior to World War II. Sam Snead won the first tournament, in which the first-place check was for $500. After the war, the event resumed play in 1947 on golf courses in Pebble Beach, where it has been played ever since. Now the AT&T Pebble Beach Pro-Am, it has been a leading event in the world of professional golf. In 1950, he became the third person to win the William D. Richardson award, which is given to a non-professional golfer "who has consistently made an outstanding contribution to golf". Crosby first took up golf at age 12 as a caddy, dropped it, then started again in 1930 with some fellow cast members in Hollywood during the filming of The King of Jazz. Crosby was accomplished at the sport, with a two handicap. He competed in both the British and U.S. Amateur championships, was a five-time club champion at Lakeside Golf Club in Hollywood, and once made a hole-in-one on the 16th hole at Cypress Point. Crosby was also a keen fisherman, especially in his younger days, and it was a pastime that he enjoyed throughout his life. In the summer of 1966, he spent a week as the guest of Lord Egremont, staying in Cockermouth and fishing on the River Derwent. His trip was filmed for The American Sportsman on ABC, although all did not go well at first, as the salmon were not running. He made up for it at the end of the week by catching a number of sea trout. Personal life Crosby was married twice. His first wife was actress and nightclub singer Dixie Lee, to whom he was married from 1930 until her death from ovarian cancer in 1952. They had four sons: Gary, twins Dennis and Phillip, and Lindsay. Smash-Up: The Story of a Woman (1947) is based on Lee's life.
The Crosby family lived at 10500 Camarillo Street in North Hollywood for more than five years. After his wife died, Crosby had relationships with model Pat Sheehan (who married his son Dennis in 1958) and actresses Inger Stevens and Grace Kelly before marrying actress Kathryn Grant, who converted to Catholicism, in 1957. They had three children: Harry Lillis III (who played Bill in Friday the 13th), Mary (best known for portraying Kristin Shepard on TV's Dallas), and Nathaniel (the 1981 U.S. Amateur champion in golf). Particularly during the late 1930s and through the 1940s, Crosby's domestic life was tragically dominated by his wife's excessive drinking. His efforts to cure her with the help of specialists failed. Tired of Dixie's drinking, he even asked her for a divorce in January 1941. During the 1940s, Crosby was torn between staying away from home and trying to be there as much as possible for his children. Crosby had one confirmed extramarital affair between 1945 and the late 1940s, while married to his first wife, Dixie. Actress Patricia Neal (who herself at the time was having an affair with the married Gary Cooper) wrote in her 1988 autobiography As I Am about a trip on a cruise ship to England with actress Joan Caulfield in 1948. In the most recent Crosby biography, Bing Crosby: Swinging on a Star, The War Years, 1940–1946, Gary Giddins published excerpts from an original diary of two sisters, Violet and Mary Barsa, who, as young women, used to stalk Crosby in New York City during December 1945 and January 1946 and who detailed their observations in the diary. The diary reveals that during that time Crosby was indeed taking Joan Caulfield out to dinner and visiting theaters and opera houses with her, and that Caulfield and a person in her company entered the Waldorf Hotel, where Crosby was staying. However, the diary also clearly indicates that a third person, in most instances Caulfield's mother, was present at their meetings. In 1954, Joan Caulfield admitted to a relationship with a "top film star" who was a married man with children and who in the end chose his wife and children over her. Joan's sister Betty Caulfield confirmed the romantic relationship between Joan and Bing Crosby. Despite being a Catholic, Crosby seriously considered divorce in order to marry Caulfield. In December 1945 or January 1946, Crosby approached Cardinal Francis Spellman about his difficulties dealing with his wife's alcoholism, his love for Caulfield, and his plan to file for divorce. According to Betty Caulfield, Spellman told Crosby: "Bing, you are Father O'Malley and under no circumstances can Father O'Malley get a divorce." Around the same time, Crosby talked to his mother about his intentions, and she protested. Ultimately, Crosby chose to end the relationship and stay with his wife. Bing and Dixie reconciled, and he continued trying to help her overcome her alcohol issues. Crosby reportedly had an alcohol problem in the late 1920s and early 1930s, but he got a handle on his drinking in 1931. According to biographer Giddins, during an argument about Gary Crosby's drinking, Crosby told his son in anger that smoking marijuana would be better than drinking so much alcohol, adding "It killed your mother". Crosby told Barbara Walters in a 1977 televised interview that he thought marijuana should be legalized.
In December 1999, the New York Post published an article by Bill Hoffmann and Murray Weiss called "Bing Crosby's Single Life", which claimed that "recently published" FBI files on Crosby revealed that he had had ties to figures in the Mafia since his youth. This claim has been repeated in news articles ever since. However, Crosby's FBI files had in fact already been published in 1992, and they contain no indication that Bing Crosby had ties to the Mafia, apart from one major but accidental encounter in Chicago in 1929, which is not mentioned in the files but is recounted by Crosby himself in his as-told-to autobiography Call Me Lucky. In the more than 280 pages of Crosby's FBI files, all but one of the references to organized crime or gambling dens appear in a few of the many threats that Bing Crosby received throughout his life, and the comments made by FBI investigators in the memos discredited the claims made in those letters. In all the files there is only a single reference to a person associated with the Mafia. A memorandum dated January 16, 1959, states: "The Salt Lake City Office has developed information indicating that Moe Dalitz received an invitation to join a deer hunting party at Bing Crosby's Elko, Nevada, ranch, together with the crooner, ..."
After a string of hit musical comedy films in the 1930s, Crosby starred with Bob Hope and Dorothy Lamour in six of the seven Road to musical comedies between 1940 and 1962 (Lamour was replaced with Joan Collins in The Road to Hong Kong and limited to a lengthy cameo), cementing Crosby and Hope as an on-and-off duo, despite never declaring themselves a "team" in the sense that Laurel and Hardy or Martin and Lewis (Dean Martin and Jerry Lewis) were teams. The series consists of Road to Singapore (1940), Road to Zanzibar (1941), Road to Morocco (1942), Road to Utopia (1946), Road to Rio (1947), Road to Bali (1952), and The Road to Hong Kong (1962). When they appeared solo, Crosby and Hope frequently made note of each other in a comically insulting fashion. They performed together countless times on stage, radio, film, and television, and made numerous brief and not-so-brief appearances together in movies aside from the "Road" pictures; Variety Girl (1947) is an example, with lengthy scenes and songs together along with billing. In the 1949 Disney animated film The Adventures of Ichabod and Mr. Toad, Crosby provided the narration and song vocals for The Legend of Sleepy Hollow segment. In 1960, he starred in High Time, a collegiate comedy with Fabian Forte and Tuesday Weld that predicted the emerging gap between him and the new younger generation of musicians and actors who had begun their careers after World War II. In 1962, Crosby and Hope reunited for one more Road movie, The Road to Hong Kong, which teamed them up with the much younger Joan Collins and Peter Sellers. Collins was used in place of their longtime partner Dorothy Lamour, whom Crosby felt was getting too old for the role; Hope refused to do the film without her, so Lamour instead made a lengthy and elaborate cameo appearance. Shortly before his death in 1977, he had planned another Road film in which he, Hope, and Lamour search for the Fountain of Youth. He won an Academy Award for Best Actor for Going My Way in 1944 and was nominated for the 1945 sequel, The Bells of St. Mary's. He received critical acclaim and his third Academy Award nomination for his performance as an alcoholic entertainer in The Country Girl. Television The Fireside Theater (1950) was his first television production. The series of 26-minute shows was filmed at Hal Roach Studios rather than performed live on the air. The "telefilms" were syndicated to individual television stations. He was a frequent guest on the musical variety shows of the 1950s and 1960s, as well as on numerous late-night talk shows, and hosted his own highly rated specials. Bob Hope memorably devoted one of his monthly NBC specials, "On the Road With Bing", to his long, intermittent partnership with Crosby. Crosby was associated with ABC's The Hollywood Palace as the show's first and most frequent guest host, appeared annually on its Christmas edition with his wife Kathryn and his younger children, and continued the annual Christmas shows after The Hollywood Palace was eventually canceled. In the early 1970s, he made two late appearances on the Flip Wilson Show, singing duets with the comedian. His last TV appearance was a Christmas special, Merrie Olde Christmas, taped in London in September 1977 and aired weeks after his death. It was on this special that he recorded a duet of "The Little Drummer Boy" and "Peace on Earth" with rock musician David Bowie. Their duet was released in 1982 as a single 45-rpm record and reached No. 3 in the UK singles charts.
It has since become a staple of holiday radio and the final popular hit of Crosby's career. At the end of the 20th century, TV Guide listed the Crosby-Bowie duet as one of the 25 most memorable musical moments of 20th-century television. Bing Crosby Productions, affiliated with Desilu Studios and later CBS Television Studios, produced a number of television series, including Crosby's own unsuccessful ABC sitcom The Bing Crosby Show in the 1964–1965 season (with co-stars Beverly Garland and Frank McHugh). The company produced two ABC medical dramas, Ben Casey (1961–1966) and Breaking Point (1963–1964), the popular Hogan's Heroes (1965–1971) military comedy on CBS, as well as the lesser-known show Slattery's People (1964–1965). Singing style and vocal characteristics Crosby was one of the first singers to exploit the intimacy of the microphone rather than use the deep, loud vaudeville style associated with Al Jolson. He was, by his own definition, a "phraser", a singer who placed equal emphasis on both the lyrics and the music. His phrasing echoed jazz, particularly the trumpet playing of his bandmate Bix Beiderbecke, and Paul Whiteman's hiring of Crosby helped bring the genre to a wider audience. In the framework of the novelty-singing style of the Rhythm Boys, he bent notes and added off-tune phrasing, an approach that was rooted in jazz. He had already been introduced to Louis Armstrong and Bessie Smith before his first appearance on record. Crosby and Armstrong remained warm acquaintances for decades, occasionally singing together in later years, e.g., "Now You Has Jazz" in the film High Society (1956). During the early portion of his solo career (about 1931–1934), Crosby's emotional, often pleading style of crooning was popular. But Jack Kapp, manager of Brunswick and later Decca, talked him into dropping many of his jazzier mannerisms in favor of a clear vocal style. Crosby credited Kapp for choosing hit songs, working with many other musicians, and, most important, diversifying his repertoire into several styles and genres. Kapp helped Crosby have number one hits in Christmas music, Hawaiian music, and country music, and top-thirty hits in Irish music, French music, rhythm and blues, and ballads. Crosby elaborated on an idea of Al Jolson's: phrasing, or the art of making a song's lyric ring true. "I used to tell Sinatra over and over," said Tommy Dorsey, "there's only one singer you ought to listen to and his name is Crosby. All that matters to him is the words, and that's the only thing that ought to for you, too." Career achievements Crosby was among the most popular and successful musical acts of the 20th century. Billboard magazine used different methodologies during his career, but his chart success remains impressive: 396 chart singles, including roughly 41 number-one hits. Crosby had separate charting singles every year between 1931 and 1954; the annual re-release of "White Christmas" extended that streak to 1957. He had 24 separate popular singles in 1939 alone. Statistician Joel Whitburn at Billboard determined that Crosby was America's most successful recording act of the 1930s and again in the 1940s. In 1960, Crosby was honored as "First Citizen of the Record Industry" based on having sold 200 million discs. According to various sources, he sold 300 million or even 500 million records worldwide. The single "White Christmas" sold over 50 million copies, according to Guinness World Records.
For fifteen years (1934, 1937, 1940, 1943–1954), Crosby was among the top ten acts in box-office sales, and for five of those years (1944–1948) he topped the list. He sang four Academy Award-winning songs – "Sweet Leilani" (1937), "White Christmas" (1942), "Swinging on a Star" (1944), and "In the Cool, Cool, Cool of the Evening" (1951) – and won the Academy Award for Best Actor for his role in Going My Way (1944). A survey in 2000 found that with 1,077,900,000 movie tickets sold, Crosby was the third most popular actor of all time, behind Clark Gable (1,168,300,000) and John Wayne (1,114,000,000). The International Motion Picture Almanac lists him in a tie for second-most years at number one on the All Time Number One Stars List with Clint Eastwood, Tom Hanks, and Burt Reynolds. His most popular film, White Christmas, grossed $30 million in 1954. He received 23 gold and platinum records, according to the book Million Selling Records. The Recording Industry Association of America did not institute its gold record certification program until 1958, when Crosby's record sales were low. Before 1958, gold records were awarded by record companies. Crosby charted 23 Billboard hits from 47 recorded songs with the Andrews Sisters, whose Decca record sales were second only to Crosby's throughout the 1940s. They were his most frequent collaborators on disc from 1939 to 1952, a partnership that produced four million-selling singles: "Pistol Packin' Mama", "Jingle Bells", "Don't Fence Me In", and "South America, Take it Away". They made one film appearance together, in Road to Rio, singing "You Don't Have to Know the Language", and sang together on the radio throughout the 1940s and 1950s. They appeared as guests on each other's shows and on Armed Forces Radio Service during and after World War II. The quartet's Top-10 Billboard hits from 1943 to 1945 include "The Vict'ry Polka", "There'll Be a Hot Time in the Town of Berlin (When the Yanks Go Marching In)", and "Is You Is or Is You Ain't (Ma' Baby?)", songs that helped the morale of the American public. In 1962, Crosby was given the Grammy Lifetime Achievement Award. He has been inducted into the halls of fame for both radio and popular music. In 2007, he was inducted into the Hit Parade Hall of Fame, and in 2008 the Western Music Hall of Fame. Popularity and influence Crosby's popularity around the world was such that in an interview with Dorothy Masuka, the best-selling recording artist in Africa, she stated, "Only Bing Crosby the famous American crooner sold more records than me in Africa." His great popularity throughout Africa led other African singers to emulate him, including Dolly Rathebe, Masuka, and Míriam Makeba, who, despite being female, was known locally as "The Bing Crosby of Africa". Presenter Mike Douglas commented in a 1975 interview, "During my days in the Navy in World War II, I remember walking the streets of Calcutta, India, on the coast; it was a lonely night, so far from my home and from my new wife, Gen. I needed something to lift my spirits. As I passed a Hindu sitting on the corner of a street, I heard something surprisingly familiar. I came back to see the man playing one of those old Vitrolas, like those of RCA with the horn speaker. The man was listening to Bing Crosby sing, "Ac-Cent-Tchu-Ate The Positive". I stopped and smiled in grateful acknowledgment. The Hindu nodded and smiled back. The whole world knew and loved Bing Crosby."
His popularity in India led many Hindu singers to imitate and emulate him, notably Kishore Kumar, considered the "Bing Crosby of India". Entrepreneurship According to Shoshana Klebanoff, Crosby became one of the richest men in the history of show business. He had investments in real estate, mines, oil wells, cattle ranches, race horses, music publishing, baseball teams, and television. He made a fortune from the Minute Maid Orange Juice Corporation, in which he was a principal stockholder. Role in early tape recording During the Golden Age of Radio, performers had to create their shows live, sometimes even redoing the program a second time for the West Coast time zone. Crosby had to do two live radio shows on the same day, three hours apart, for the East and West Coasts. Crosby's radio career took a significant turn in 1945, when he clashed with NBC over his insistence that he be allowed to pre-record his radio shows. (The live production of radio shows was also reinforced by the musicians' union and ASCAP, which wanted to ensure continued work for their members.) In On the Air: The Encyclopedia of Old-Time Radio, John Dunning wrote about German engineers having developed a tape recorder with a near-professional broadcast quality standard: Crosby's insistence eventually factored into the further development of magnetic tape sound recording and the radio industry's widespread adoption of it. He used his clout, both professionally and financially , for innovations in audio. But NBC and CBS refused to broadcast prerecorded radio programs. Crosby left the network and remained off the air for seven months, creating a legal battle with his sponsor Kraft that was settled out of court. He returned to broadcasting for the last 13 weeks of the 1945–1946 season. The Mutual Network, on the other hand, pre-recorded some of its programs as early as 1938 for The Shadow with Orson Welles. ABC was formed from the sale of the NBC Blue Network in 1943 after a federal antitrust suit and was willing to join Mutual in breaking the tradition. ABC offered Crosby $30,000 per week to produce a recorded show every Wednesday that would be sponsored by Philco. He would get an additional $40,000 from 400 independent stations for the rights to broadcast the 30-minute show, which was sent to them every Monday on three 16-inch (40-cm) lacquer discs that played ten minutes per side at 33 rpm. Murdo MacKenzie of Bing Crosby Enterprises had seen a demonstration of the German Magnetophon in June 1947—the same device that Jack Mullin had brought back from Radio Frankfurt with 50 reels of tape, at the end of the war. It was one of the magnetic tape recorders that BASF and AEG had built in Germany starting in 1935. The 6.5mm ferric-oxide-coated tape could record 20 minutes per reel of high-quality sound. Alexander M. Poniatoff ordered Ampex, which he founded in 1944, to manufacture an improved version of the Magnetophone. Crosby hired Mullin to start recording his Philco Radio Time show on his German-made machine in August 1947 using the same 50 reels of I.G. Farben magnetic tape that Mullin had found at a radio station at Bad Nauheim near Frankfurt while working for the U.S. Army Signal Corps. The advantage was editing. As Crosby wrote in his autobiography: Mullin's 1976 memoir of these early days of experimental recording agrees with Crosby's account: Crosby invested US $50,000 in Ampex with the intent to produce more machines. 
In 1948, the second season of Philco shows was recorded with the Ampex Model 200A and Scotch 111 tape from 3M. Mullin explained how one new broadcasting technique was invented on the Crosby show with these machines: Crosby started the tape recorder revolution in America. In his 1950 film Mr. Music, he is seen singing into an Ampex tape recorder that reproduced his voice better than anything else. Also quick to adopt tape recording was his friend Bob Hope. He gave one of the first Ampex Model 300 recorders to his friend, guitarist Les Paul, which led to Paul's invention of multitrack recording. His organization, the Crosby Research Foundation, held tape recording patents and developed equipment and recording techniques such as the laugh track that are still in use today. With Frank Sinatra, Crosby was one of the principal backers for the United Western Recorders studio complex in Los Angeles. Videotape development Mullin continued to work for Crosby to develop a videotape recorder (VTR). Television production was mostly live television in its early years, but Crosby wanted the same ability to record that he had achieved in radio. The Fireside Theater (1950) sponsored by Procter & Gamble, was his first television production. Mullin had not yet succeeded with videotape, so Crosby filmed the series of 26-minute shows at the Hal Roach Studios, and the "telefilms" were syndicated to individual television stations. Crosby continued to finance the development of videotape. Bing Crosby Enterprises gave the world's first demonstration of videotape recording in Los Angeles on November 11, 1951. Developed by John T. Mullin and Wayne R. Johnson since 1950, the device aired what were described as "blurred and indistinct" images, using a modified Ampex 200 tape recorder and standard quarter-inch (6.3 mm) audio tape moving at 360 inches (9.1m) per second. Television station ownership A Crosby-led group purchased station KCOP-TV, in Los Angeles, California, in 1954. NAFI Corporation and Crosby purchased television station KPTV in Portland, Oregon, for $4 million on September 1, 1959. In 1960, NAFI purchased KCOP from Crosby's group. In the early 1950s, Crosby helped establish the CBS television affiliate in his hometown of Spokane, Washington. He partnered with Ed Craney, who owned the CBS radio affiliate KXLY (AM) and built a television studio west of Crosby's alma mater, Gonzaga University. After it began broadcasting, the station was sold within a year to Northern Pacific Radio and Television Corporation. Thoroughbred horse racing Crosby was a fan of thoroughbred horse racing and bought his first racehorse in 1935. In 1937, he became a founding partner of the Del Mar Thoroughbred Club and a member of its board of directors. Operating from the Del Mar Racetrack at Del Mar, California, the group included millionaire businessman Charles S. Howard, who owned a successful racing stable that included Seabiscuit. Charles' son, Lindsay C. Howard, became one of Crosby's closest friends; Crosby named his son Lindsay after him, and would purchase his 40-room Hillsborough, California estate from Lindsay in 1965. Crosby and Lindsay Howard formed Binglin Stable to race and breed thoroughbred horses at a ranch in Moorpark in Ventura County, California. They also established the Binglin Stock Farm in Argentina, where they raced horses at Hipódromo de Palermo in Palermo, Buenos Aires. A number of Argentine-bred horses were purchased and shipped to race in the United States. 
On August 12, 1938, the Del Mar Thoroughbred Club hosted a $25,000 winner-take-all match race, won by Charles S. Howard's Seabiscuit over Binglin's horse Ligaroti. In 1943, Binglin's horse Don Bingo won the Suburban Handicap at Belmont Park in Elmont, New York. The Binglin Stable partnership came to an end in 1953 as a result of a liquidation of assets by Crosby, who needed to raise enough funds to pay the hefty federal and state inheritance taxes on his deceased wife's estate. The Bing Crosby Breeders' Cup Handicap at Del Mar Racetrack is named in his honor. Sports Crosby had a keen interest in sports. In the 1930s, his friend and former college classmate Mike Pecarovich, then Gonzaga's head football coach, appointed Crosby an assistant football coach. From 1946 until his death, he owned a 25% share of the Pittsburgh Pirates. Although he was passionate about the team, he was too nervous to watch the deciding Game 7 of the 1960 World Series, choosing to go to Paris with Kathryn and listen to its radio broadcast. Crosby had arranged for Ampex, another of his financial investments, to record the NBC telecast on kinescope. The game was one of the most famous in baseball history, capped off by Bill Mazeroski's walk-off home run that won the game for Pittsburgh. He apparently viewed the complete film just once, and then stored it in his wine cellar, where it remained undisturbed until it was discovered in December 2009. The restored broadcast was shown on MLB Network in December 2010. Crosby was also an avid golfer; in 1978, he and Bob Hope received the Bob Jones Award, the highest honor given by the United States Golf Association in recognition of distinguished sportsmanship. He was inducted into the World Golf Hall of Fame in 1978. In 1937, Crosby hosted the first "Crosby Clambake", as it was popularly known, at Rancho Santa Fe Golf Club in Rancho Santa Fe, California, the event's location prior to World War II. Sam Snead won the first tournament, in which the first-place check was for $500. After the war, the event resumed play in 1947 on golf courses in Pebble Beach, where it has been played ever since. Now the AT&T Pebble Beach Pro-Am, it has been a leading event in the world of professional golf. In 1950, he became the third person to win the William D. Richardson award, which is given to a non-professional golfer "who has consistently made an outstanding contribution to golf". Crosby first took up golf at age 12 as a caddy, dropped it, then started again in 1930 with some fellow cast members in Hollywood during the filming of The King of Jazz. Crosby was accomplished at the sport, with a two handicap. He competed in both the British and U.S. Amateur championships, was a five-time club champion at Lakeside Golf Club in Hollywood, and once made a hole-in-one on the 16th hole at Cypress Point. Crosby was also a keen fisherman; though most active in his younger days, he enjoyed the pastime throughout his life. In the summer of 1966, he spent a week as the guest of Lord Egremont, staying in Cockermouth and fishing on the River Derwent. His trip was filmed for The American Sportsman on ABC, although things did not go well at first, as the salmon were not running. He made up for it at the end of the week by catching a number of sea trout. Personal life Crosby was married twice. His first wife was actress and nightclub singer Dixie Lee, to whom he was married from 1930 until her death from ovarian cancer in 1952. They had four sons: Gary, twins Dennis and Phillip, and Lindsay.
Smash-Up: The Story of a Woman (1947) is based on Lee's life. The Crosby family lived at 10500 Camarillo Street in North Hollywood for more than five years. After his wife died, Crosby had relationships with model Pat Sheehan (who married his son Dennis in 1958) and actresses Inger Stevens and Grace Kelly before marrying actress Kathryn Grant, who converted to Catholicism, in 1957. They had three children: Harry Lillis III (who played Bill in Friday the 13th), Mary (best known for portraying Kristin Shepard on TV's Dallas), and Nathaniel (the 1981 U.S. Amateur champion in golf). Particularly from the late 1930s through the 1940s, Crosby's domestic life was dominated by his wife's excessive drinking. His efforts to cure her with the help of specialists failed, and, tired of Dixie's drinking, he even asked her for a divorce in January 1941. Throughout the 1940s, Crosby struggled to balance the work that kept him away from home with trying to be there as much as possible for his children. Crosby had one confirmed extramarital affair, between 1945 and the late 1940s, while married to his first wife Dixie. Actress Patricia Neal (who herself at the time was having an affair with the married Gary Cooper) wrote about it in her 1988 autobiography As I Am, recalling a 1948 trip on a cruise ship to England with actress Joan Caulfield. In the most recent Crosby biography, Bing Crosby: Swinging on a Star, The War Years, 1940-1946, Gary Giddins published excerpts from an original diary of two sisters, Violet and Mary Barsa, who, as young women, used to stalk Crosby in New York City during December 1945 and January 1946 and detailed their observations in the diary. The diary records that during that time Crosby was indeed taking Joan Caulfield out to dinner and visiting theaters and opera houses with her, and that Caulfield, with a companion, entered the Waldorf Hotel where Crosby was staying. However, it also clearly indicates that a third person, in most instances Caulfield's mother, was present at their meetings. In 1954, Joan Caulfield admitted to a relationship with a "top film star", a married man with children who ultimately chose his wife and children over her. Joan's sister Betty Caulfield confirmed the romantic relationship between Joan and Bing Crosby. Despite being a Catholic, Crosby seriously considered divorce in order to marry Caulfield. In December 1945 or January 1946, Crosby approached Cardinal Francis Spellman about his wife's alcoholism, his love for Caulfield, and his plan to file for divorce. According to Betty Caulfield, Spellman told Crosby: "Bing, you are Father O'Malley and under no circumstances can Father O'Malley get a divorce." Around the same time, Crosby talked to his mother about his intentions, and she protested. Ultimately, Crosby chose to end the relationship and stay with his wife. Bing and Dixie reconciled, and he continued trying to help her overcome her alcohol issues. Crosby himself reportedly had an alcohol problem in the late 1920s and early 1930s, but he got his drinking under control in 1931. According to biographer Giddins, during an argument about Gary Crosby's drinking, Crosby told his son in anger that smoking marijuana would be better than drinking so much alcohol, adding "It killed your mother". Crosby told Barbara Walters in a 1977 televised interview that he thought marijuana should be legalized.
In December 1999, the New York Post published an article by Bill Hoffmann and Murray Weiss called Bing Crosby's Single Life, which claimed that "recently published" FBI files on Crosby revealed that he had had ties with figures in the Mafia since his youth. This claim has been repeated in news articles ever since. However, Crosby's FBI files were actually published back in 1992, and they contain no indication that he had ties to the Mafia, apart from one major but accidental encounter in Chicago in 1929, which is not mentioned in the files but which Crosby himself recounted in his as-told-to autobiography Call Me Lucky. Across the more than 280 pages of Crosby's FBI files, all but one of the references to organized crime or gambling dens occur in a few of the many threatening letters that Crosby received throughout his life, and the comments made by FBI investigators in the memos discredited the claims made in those letters. In all the files there is only a single reference to a person associated with the Mafia, in a memorandum dated January 16, 1959: "The Salt Lake City Office has developed information indicating that Moe Dalitz received an invitation to join a deer hunting party at Bing Crosby's Elko, Nevada, ranch, together with the crooner, his Las Vegas dentist and several business associates." However, Crosby had already sold his Elko ranch a year earlier, in 1958, and it is doubtful how much he was really involved in that gathering. Crosby and his family lived in the San Francisco area for many years. In 1963, he and his wife Kathryn moved with their three young children from Los Angeles to a $175,000 10-bedroom Tudor estate in Hillsborough because, according to son Nathaniel, they did not want to raise their children in Hollywood. Its current owners put the house up for sale in 2021 for $13.75 million. In 1965, the Crosbys moved to a larger, 40-room French chateau-style house on nearby Jackling Drive, where Kathryn Crosby continued to reside after Bing's death. This house served as a setting for some of the family's Minute Maid orange juice television commercials. After Crosby's death, his eldest son, Gary, wrote a highly critical memoir, Going My Own Way (1983), depicting his father as cruel, cold, remote, and physically and psychologically abusive. However, Bing Crosby's daughter Mary Crosby said in an interview that Gary had told her the publishers encouraged him to exaggerate his claims and that he had written the book just for the money. Crosby's younger son Phillip vociferously disputed his brother Gary's claims about their father. Around the time Gary published his claims, Phillip stated to the press that "Gary is a whining, bitching crybaby, walking around with a two-by-four on his shoulder and just daring people to nudge it off." Nevertheless, Phillip did not deny that Crosby believed in corporal punishment; in an interview with People magazine, he stated that "we never got an extra whack or a cuff we didn't deserve", and he defended his father again in a 1999 interview with the Globe. Dennis and Lindsay Crosby, however, confirmed that Bing sometimes subjected his sons to harsh physical discipline and verbal put-downs. Regarding the writing of Gary's memoir, Lindsay said, "I'm glad [Gary] did it. I hope it clears up a lot of the old lies and rumors." Unlike Gary, though, Lindsay stated that he preferred to remember "all the good things I did with my dad and forget the times that were rough".
When the book was published, Dennis distanced himself by calling it "Gary's business" but did not publicly deny its claims. Bing's younger brother, singer and jazz bandleader Bob Crosby, recalled at the time of Gary's
LibreOffice Base, LibreOffice's database module
OpenOffice.org Base, OpenOffice.org's database module, also known as ooBase

Mathematics
Base of computation, commonly called radix, the number of distinct digits in a positional numeral system
Base of a logarithm, the number b in the expression log_b(x)
Base (exponentiation), the number b in an expression of the form b^n
Base (geometry), a side of a plane figure (for example a triangle) or face of a solid
Base (group theory), a sequence of elements not jointly stabilized by any nontrivial group element
Base (topology), a type of generating set for a topological space

Organizations
Backward Society Education, a Nepali non-governmental organization
BASE (social centre), a self-managed social centre in Bristol, UK
Beaverton Academy of Science and Engineering, part of Beaverton School District in Hillsboro, Oregon, US
Bible Archaeology Search and Exploration Institute
British Association for Screen Entertainment
Brooklyn Academy of Science and the Environment, a high school in New York, US

Science and technology
Base (chemistry), a substance that can accept hydrogen ions (protons)
Base, an attribute of a medication in uncombined form, for example erythromycin base
Base, one of the three terminals of a bipolar junction transistor
BASE experiment, an antiproton experiment at CERN
Base pair, a pair of connected nucleotides on complementary DNA and RNA strands
Beta-alumina solid electrolyte, a fast ion conductor material
Nucleobase, in genetics, the parts of DNA and RNA involved in forming base pairs

Social science
Base (politics), a group of voters who almost always support a single party's candidates
Base (social class), a lower social class
Base and superstructure (Marxism)

Sports
Base (baseball), the first, second, and third base on a baseball diamond
Base, a position in some cheerleading stunts
BASE jumping, parachuting or wingsuit flying from a fixed structure or cliff
Base, a variant name for the children's game darebase

Other uses
Base (EP), an album by South Korean singer Kim Jonghyun
Base, Maharashtra, a village in India
Rob Base, American rapper
Base, or binder
The Basel Convention is designed to reduce the movements of hazardous waste between nations, and specifically to prevent the transfer of hazardous waste from developed to less developed countries (LDCs). It does not, however, address the movement of radioactive waste. The convention is also intended to minimize the amount and toxicity of the wastes generated, to ensure their environmentally sound management as close as possible to the source of generation, and to assist LDCs in the environmentally sound management of the hazardous and other wastes they generate. The convention was opened for signature on 21 March 1989, and entered into force on 5 May 1992. As of October 2018, 187 parties had ratified the convention; Haiti and the United States have signed the convention but not ratified it. Following a petition urging action on the issue signed by more than a million people around the world, most of the world's countries, but not the United States, agreed in May 2019 to an amendment of the Basel Convention to include plastic waste as regulated material. Although the United States is not a party to the treaty, export shipments of plastic waste from the United States are now "criminal traffic as soon as the ships get on the high seas," according to the Basel Action Network (BAN), and carriers of such shipments may face liability, because the transportation of plastic waste is prohibited in just about every other country. History With the tightening of environmental laws (for example, RCRA) in developed nations in the 1970s, disposal costs for hazardous waste rose dramatically. At the same time, the globalization of shipping made transboundary movement of waste more accessible, and many LDCs were desperate for foreign currency. Consequently, the trade in hazardous waste, particularly to LDCs, grew rapidly. One of the incidents which led to the creation of the Basel Convention was the Khian Sea waste disposal incident, in which a ship carrying incinerator ash from the city of Philadelphia in the United States dumped half of its load on a beach in Haiti before being forced away. It sailed for many months, changing its name several times; unable to unload the cargo in any port, the crew was believed to have dumped much of it at sea. Another was the 1988 Koko case, in which five ships transported 8,000 barrels of hazardous waste from Italy to the small town of Koko in Nigeria in exchange for $100 in monthly rent paid to a Nigerian man for the use of his farmland. At its meeting held from 27 November to 1 December 2006, the conference of the parties to the Basel Convention focused on issues of electronic waste and the dismantling of ships. According to Maureen Walsh, only around 4% of hazardous wastes that come from OECD countries are actually shipped across international borders. These wastes include, among others, chemical waste, radioactive waste, municipal solid waste, asbestos, incinerator ash, and old tires. Of internationally shipped waste that comes from developed countries, more than half is shipped for recovery and the remainder for final disposal. Increased trade in recyclable materials has led to an increase in the market for used products such as computers, a market valued in the billions of dollars. At issue is the point at which used computers stop being a "commodity" and become "waste". As of October 2018, there are 187 parties to the treaty: 184 UN member states, the Cook Islands, the European Union, and the State of Palestine.
The nine UN member states that are not party to the treaty are East Timor, Fiji, Grenada, Haiti, San Marino, Solomon Islands, South Sudan, Tuvalu, and the United States. Definition of hazardous waste Waste falls under the scope of the convention if it is within the category of wastes listed in Annex I of the convention and it exhibits one of the hazardous characteristics contained in Annex III. In other words, it must both be listed and possess a characteristic such as being explosive, flammable, toxic, or corrosive. A waste may also fall under the scope of the convention if it is defined as or considered to be a hazardous waste under the laws of the exporting country, the importing country, or any of the countries of transit. The term disposal is defined in Article 2, paragraph 4, simply by reference to Annex IV, which lists the operations understood as disposal or recovery; the listed operations are broad, and include recovery and recycling. Alternatively, to fall under the scope of the convention, it is sufficient for a waste to be included in Annex II, which lists other wastes, such as household wastes and residue from the incineration of household waste. Radioactive waste that is covered under other international control systems, and wastes from the normal operation of ships, are not covered. Annex IX attempts to define wastes which are not considered hazardous and which are excluded from the scope of the Basel Convention; if these wastes are contaminated with hazardous materials to an extent that causes them to exhibit an Annex III characteristic, however, they are not excluded.
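Stripped of its legal detail, the scope test above is a small piece of boolean logic. The sketch below restates it with invented yes/no flags, in a generic line-numbered BASIC to match the code examples elsewhere in this document; it is an informal illustration of the rule's structure, not any official encoding of the convention:

10 REM Illustrative restatement of the Basel scope test (hypothetical flags)
20 REM A1 = listed in Annex I      A3 = exhibits an Annex III characteristic
30 REM A2 = listed in Annex II     NL = deemed hazardous by a national law
40 REM RX = radioactive waste under another control system
50 REM SH = waste from the normal operation of a ship
60 INPUT "A1,A3,A2,NL,RX,SH (1=yes,0=no)"; A1, A3, A2, NL, RX, SH
70 IF RX = 1 OR SH = 1 THEN PRINT "NOT COVERED": END
80 IF (A1 = 1 AND A3 = 1) OR A2 = 1 OR NL = 1 THEN PRINT "COVERED" ELSE PRINT "NOT COVERED"
90 END

The structure mirrors the text: the two blanket exclusions are checked first, then coverage requires either the Annex I listing together with an Annex III characteristic, an Annex II listing, or a national-law designation.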
Obligations In addition to conditions on the import and export of the above wastes, there are stringent requirements for notice, consent and tracking for movement of wastes across national boundaries. Notably, the convention places a general prohibition on the exportation or importation of wastes between parties and non-parties. The exception to this rule is where the waste is subject to another treaty that does not detract from the Basel Convention. The United States is a notable non-party to the convention and has a number of such agreements allowing the shipping of hazardous wastes to Basel party countries. The OECD Council also has its own control system that governs the transboundary movement of hazardous materials between OECD member countries. This allows, among other things, OECD countries to continue trading in wastes with countries like the United States that have not ratified the Basel Convention. Parties to the convention must honor import bans of other parties. Article 4 of the Basel Convention calls for an overall reduction of waste generation. By encouraging countries to keep wastes within their boundaries and as close as possible to their source of generation, the internal pressures should provide incentives for waste reduction and pollution prevention. Parties are generally prohibited from exporting covered wastes to, or importing covered wastes from, non-parties to the convention. The convention states that illegal hazardous waste traffic is criminal but contains no enforcement provisions. According to Article 12, parties are directed to adopt a protocol that establishes liability rules and procedures appropriate for damage arising from the movement of hazardous waste across borders. The current consensus is that, as space is not classed as a "country" under the convention's definitions, export of e-waste to non-terrestrial locations would not be covered. Basel Ban Amendment After the initial adoption of the convention, some least developed countries and environmental organizations argued that it did not go far enough. Many nations and NGOs argued for a total ban on shipment of all hazardous waste to LDCs.
In particular, the original convention did not prohibit waste exports to any location except Antarctica; it merely required a notification and consent system known as "prior informed consent" or PIC. Further, many waste traders sought to exploit the good name of recycling and began to justify all exports as moving to recycling destinations, so many believed a full ban was needed, including one on exports for recycling. These concerns led to several regional waste trade bans, including the Bamako Convention. Lobbying at the 1995 Basel conference by LDCs, Greenpeace and several European countries such as Denmark led to the adoption of an amendment to the convention in 1995, termed the Basel Ban Amendment. The amendment was accepted by 86 countries and the European Union but for many years did not enter into force, as that requires ratification by three-fourths of the member states to the convention. On 6 September 2019, Croatia became the 97th country to ratify the amendment, triggering its entry into force 90 days later, on 5 December 2019. The amendment prohibits the export of hazardous waste from a list of developed (mostly OECD) countries to developing countries, and applies to export for any reason, including recycling. An area of special concern for advocates of the amendment was the sale of ships for salvage, shipbreaking. The Ban Amendment was strenuously opposed by a number of industry groups as well as nations including Australia and Canada. The number of ratifications required for entry into force of the Ban Amendment was long under debate: amendments to the convention enter into force after ratification by "three-fourths of the Parties who accepted them" [Art. 17.5], and the parties of the Basel Convention could not agree whether this meant three-fourths of the parties that were party to the Basel Convention when the ban was adopted, or three-fourths of the current parties of the convention [see Report of COP 9 of the Basel Convention]. The status of the amendment ratifications can be found on the Basel Secretariat's web page. The European Union fully implemented the Basel Ban in its Waste Shipment Regulation (EWSR), making it legally binding in all EU member states. Norway and Switzerland have similarly fully implemented the Basel Ban in their legislation. In light of the long blockage of the Ban Amendment's entry into force, Switzerland and Indonesia launched a "Country-led Initiative" (CLI) to discuss in an informal manner a way forward to ensure that transboundary movements of hazardous wastes, especially to developing countries and countries with economies in transition, do not lead to unsound management of hazardous wastes. The discussion aims at identifying and finding solutions to the reasons why hazardous wastes are still brought to countries that are not able to treat them in a safe manner. It is hoped that the CLI will contribute to the realization of the objectives of the Ban Amendment. The Basel Convention's website reports on the progress of this initiative. Regulation of plastic waste In the wake of popular outcry, in May 2019 most of the world's countries, but not the United States, agreed to amend the Basel Convention to include plastic waste as a regulated material. The world's oceans are estimated to contain 100 million metric tons of plastic, with up to 90% of this quantity originating in land-based sources.
The United States, which produces an annual 42 million metric tons of plastic waste, more than any other country in the world, opposed the amendment; but since it is not a party to the treaty, it had no vote with which to try to block it. Information about, and visual images of, wildlife, such as seabirds, ingesting plastic, and
The album, by John Zorn, was recorded between 1994 and 1996. It features music from Zorn's Masada project, rearranged for small ensembles, as well as the original soundtrack from The Art of Remembrance – Simon Wiesenthal, a film by Hannah Heer and Werner Schmiedel (1994–95). Reception The AllMusic review by Marc Gilman noted: "While some compositions retain their original structure and sound, some are expanded and probed by Zorn's arrangements, and resemble avant-garde classical music more than jazz. But this is the beauty of the album; the ensembles provide a forum for Zorn to expand his compositions. The album consistently impresses." Track listing All compositions by John Zorn.

Disc One
"Gevurah" – 6:55
"Nezikin" – 1:51
"Mahshav" – 4:33
"Rokhev" – 3:10
"Abidan" – 5:19
"Sheloshim" – 5:03
"Hath-Arob" – 2:25
"Paran" – 4:48
"Mahlah" – 7:48
"Socoh" – 4:07
"Yechida" – 8:24
"Bikkurim" – 3:25
"Idalah-Abal" – 5:04

Disc Two
"Tannaim" – 4:38
"Nefesh" – 3:33
"Abidan" – 3:13
"Mo'ed" – 4:59
"Maskil" – 4:41
"Mishpatim" – 6:46
"Sansanah" – 6:56
"Shear-Jashub" – 2:06
"Mahshav" – 4:50
"Sheloshim" – 6:45
"Mochin" – 13:11
"Karaim" – 3:39

Recorded at Baby Monster Studios, New York City, in August 1994, December 1995 and March 1996. Personnel John
C and later C++ became the languages of choice for professional "shrink wrap" application development. Visual Basic In 1991, Microsoft introduced Visual Basic, an evolutionary development of QuickBASIC. It included constructs from that language such as block-structured control statements, parameterized subroutines and optional static typing, as well as object-oriented constructs from other languages such as "With" and "For Each". The language retained some compatibility with its predecessors, such as the Dim keyword for declarations, "Gosub"/Return statements, and optional line numbers which could be used to locate errors. An important driver for the development of Visual Basic was its role as the new macro language for Microsoft Excel, a spreadsheet program. To the surprise of many at Microsoft, who had initially marketed it as a language for hobbyists, the language came into widespread use for small custom business applications shortly after the release of VB version 3.0, which is widely considered the first relatively stable version. While many advanced programmers still scoffed at its use, VB met the needs of small businesses efficiently, as by that time computers running Windows 3.1 had become fast enough that many business-related processes could be completed "in the blink of an eye" even using a "slow" language, as long as large amounts of data were not involved. Many small business owners found they could create their own small, yet useful applications in a few evenings to meet their own specialized needs. Eventually, during the lengthy lifetime of VB3, knowledge of Visual Basic had become a marketable job skill. Microsoft also produced VBScript in 1996 and Visual Basic .NET in 2001. The latter has essentially the same power as C# and Java but with syntax that reflects the original Basic language. The IDE, with its event-driven GUI builder, was also influential on other tools, most notably Borland Software's Delphi for Object Pascal and its own descendants such as Lazarus. Mainstream support for the final version 6.0 of the original Visual Basic ended on March 31, 2005, and extended support ended in March 2008. On March 11, 2020, Microsoft announced that evolution of the VB.NET language had also concluded, although it was still supported. Meanwhile, competitors such as Xojo and Gambas continue to be developed. Post-1990 versions and dialects Many other BASIC dialects have also sprung up since 1990, including the open source QB64 and FreeBASIC, inspired by QBasic, and the Visual Basic-styled RapidQ, Basic For Qt and Gambas. Modern commercial incarnations include PureBasic, PowerBASIC, Xojo, Monkey X and True BASIC (the direct successor to Dartmouth BASIC from a company controlled by Kurtz). Several web-based simple BASIC interpreters also now exist, including Microsoft's Small Basic. Many versions of BASIC are also now available for smartphones and tablets via the Apple App Store or the Google Play store for Android.
On game consoles, an application for the Nintendo 3DS and Nintendo DSi called Petit Computer allows for programming in a slightly modified version of BASIC with DS button support; a version has also been released for the Nintendo Switch. Calculators Variants of BASIC are available on graphing and otherwise programmable calculators made by Texas Instruments, HP, Casio, and others. Windows command-line QBasic, a version of Microsoft QuickBASIC without the linker to make EXE files, is present in the Windows NT and DOS-Windows 95 streams of operating systems and can be obtained for more recent releases such as Windows 7, which do not include it. Prior to DOS 5, the Basic interpreter was GW-Basic. QuickBasic is part of a series of three languages issued by Microsoft for the home and office power user and small-scale professional development; QuickC and QuickPascal are the other two. On Windows 95 and 98, which do not have QBasic installed by default, it can be copied from the installation disc, which has a set of directories for old and optional software; other missing commands, such as Exe2Bin, are in these same directories. Other The various Microsoft, Lotus, and Corel office suites and related products are programmable with Visual Basic in one form or another, including LotusScript, which is very similar to VBA 6. The Host Explorer terminal emulator uses WWB as a macro language; more recently, the program and the suite in which it is contained have been programmable in an in-house Basic variant known as Hummingbird Basic. The VBScript variant is used for programming web content, Outlook 97, Internet Explorer, and the Windows Script Host. WSH also has a Visual Basic for Applications (VBA) engine installed as the third of the default engines, along with VBScript, JScript, and the numerous proprietary or open-source engines which can be installed, such as PerlScript, a couple of Rexx-based engines, Python, Ruby, Tcl, Delphi, XLNT, PHP, and others; this means that the two versions of Basic can be used along with the other mentioned languages, as well as LotusScript, in a WSF file, through the component object model, and other WSH and VBA constructions. VBScript is one of the languages that can be accessed by the 4DOS, 4NT, and Take Command enhanced shells. SaxBasic and WWB are also very similar to the Visual Basic line of Basic implementations. The pre-Office 97 macro language for Microsoft Word is known as WordBASIC. Excel 4 and 5 use Visual Basic itself as a macro language. Chipmunk Basic, an old-school interpreter similar to BASICs of the 1970s, is available for Linux, Microsoft Windows and macOS. Legacy The ubiquity of BASIC interpreters on personal computers was such that textbooks once included simple "Try It In BASIC" exercises that encouraged students to experiment with mathematical and computational concepts on classroom or home computers. Popular computer magazines of the day typically included type-in programs. Futurist and sci-fi writer David Brin mourned the loss of ubiquitous BASIC in a 2006 Salon article, as have others who first used computers during this era. In turn, the article prompted Microsoft to develop and release Small Basic; it also inspired similar projects like Basic-256. Dartmouth College celebrated the 50th anniversary of the BASIC language with a day of events on April 30, 2014, as did other organisations; at least one organisation of VBA programmers had organised a 35th anniversary observance in 1999.
A short documentary film was produced for the event. Syntax Typical BASIC keywords:

Data manipulation
LET: assigns a value (which may be the result of an expression) to a variable. In most dialects of BASIC, LET is optional, and a line with no other identifiable keyword is assumed to begin with LET.
DATA: holds a list of values which are assigned sequentially using the READ command.
READ: reads a value from a DATA statement and assigns it to a variable. An internal pointer keeps track of the last DATA element that was read and moves one position forward with each READ.
RESTORE: resets the internal pointer to the first DATA statement, allowing the program to begin READing from the first value. Many dialects allow an optional line number or ordinal value so the pointer can be reset to a selected location.
DIM: sets up an array.

Program flow control
IF ... THEN ... {ELSE}: used to perform comparisons or make decisions. Early dialects only allowed a line number after the THEN, but later versions allowed any valid statement to follow. ELSE was not widely supported, especially in earlier versions.
FOR ... TO ... {STEP} ... NEXT: repeats a section of code a given number of times. A variable that acts as a counter, the "index", is available within the loop.
WHILE ... WEND and REPEAT ... UNTIL: repeat a section of code while the specified condition is true. The condition may be evaluated before each iteration of the loop, or after. Both of these constructs are found mostly in later dialects.
DO ... LOOP {WHILE} or {UNTIL}: repeats a section of code indefinitely or while/until the specified condition is true. The condition may be evaluated before each iteration of the loop, or after. Like WHILE, these keywords are mostly found in later dialects.
GOTO: jumps to a numbered or labelled line in the program.
GOSUB: jumps to a numbered or labelled line and executes the code it finds there until it reaches a RETURN command, at which point execution jumps back to the statement following the GOSUB, either after a colon or on the next line. This is used to implement subroutines.
ON ... GOTO/GOSUB: chooses where to jump based on the specified conditions. See Switch statement for other forms.
DEF FN: a pair of keywords introduced in the early 1960s to define functions. The original BASIC functions were modelled on FORTRAN single-line functions: one expression with variable arguments, rather than subroutines, with a syntax on the model of DEF FND(x) = x*x at the beginning of a program. Function names were originally restricted to FN plus one letter, i.e., FNA, FNB, ...

Input and output
LIST: displays the full source code of the current program.
PRINT: displays a message on the screen or other output device.
INPUT: asks the user to enter the value of a variable. The statement may include a prompt message.
TAB: used with PRINT to set the position where the next character will be shown on the screen or printed on paper. AT is an alternative form.
SPC: prints out a number of space characters. Similar in concept to TAB, but moves by a number of additional spaces from the current column rather than moving to a specified column.
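A minimal sketch in a generic line-numbered dialect ties several of these keywords together; the program and its data values are invented for illustration. It READs numbers from a DATA list until it hits a negative sentinel, then calls a GOSUB subroutine to print the total:

10 REM Sum the values in the DATA list; -1 marks the end
20 LET T = 0
30 READ N
40 IF N < 0 THEN GOTO 70
50 LET T = T + N
60 GOTO 30
70 GOSUB 100
80 END
100 PRINT "TOTAL:"; T
110 RETURN
120 DATA 4, 8, 15, 16, 23, 42, -1

In dialects that support it, a RESTORE before a second read loop would rewind the internal pointer so the same DATA list could be processed again.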
Mathematical functions
ABS: absolute value
ATN: arctangent (result in radians)
COS: cosine (argument in radians)
EXP: exponential function
INT: integer part (typically the floor function)
LOG: natural logarithm
RND: random number generation
SIN: sine (argument in radians)
SQR: square root
TAN: tangent (argument in radians)

Miscellaneous
REM: holds a programmer's comment or REMark; often used to give a title to the program and to help identify the purpose of a given section of code.
USR: transfers program control to a machine language subroutine, usually entered as an alphanumeric string or in a list of DATA statements.
CALL: alternative form of USR found in some dialects. It does not require an artificial parameter to complete the function-like syntax of USR, and has a clearly defined method of calling different routines in memory.
TRON: turns on display of each line number as it is run ("TRace ON"). This was useful for debugging or correcting problems in a program.
TROFF: turns off the display of line numbers.
ASM: some compilers, such as FreeBASIC, PureBasic, and PowerBASIC, also support inline assembly language, allowing the programmer to intermix high-level and low-level code, typically prefixed with "ASM" or "!" statements.

Data types and variables Minimal versions of BASIC had only integer variables and one- or two-letter variable names, which minimized demands on limited and expensive memory (RAM). More powerful versions had floating-point arithmetic, and variables could be labelled with names six or more characters long. There were some problems and restrictions in early implementations; for example, Applesoft BASIC allowed variable names to be several characters long, but only the first two were significant, so it was possible to inadvertently write a program with variables "LOSS" and "LOAN", which would be treated as the same variable; assigning a value to "LOAN" would silently overwrite the value intended as "LOSS". Keywords could not appear within variable names in many early BASICs; "SCORE" would be interpreted as "SC" OR "E", where OR was a keyword. String variables are usually distinguished in many microcomputer dialects by a $ suffixed to their name as a sigil, and their values are often delimited by "double quotation marks". Arrays in BASIC could contain integers, floating point or string variables. Some dialects of BASIC supported matrices and matrix operations, useful for the solution of sets of simultaneous linear algebraic equations. These dialects would directly support matrix operations such as assignment, addition, multiplication (of compatible matrix types), and evaluation of a determinant. Many microcomputer BASICs did not support this data type; matrix operations were still possible, but had to be programmed explicitly on array elements. Examples Unstructured BASIC New BASIC programmers on a home computer might start with a simple program, perhaps using the language's PRINT statement to display a message on the screen; a well-known and often-replicated example is Kernighan and Ritchie's "Hello, World!" program:

10 PRINT "Hello, World!"
20 END

An infinite loop could be used to fill the display with the message:

10 PRINT "Hello, World!"
20 GOTO 10

Note that the END statement is optional and has no action in most dialects of BASIC; it was not always included, as in this example. This same program can be modified to print a fixed number of messages using the common FOR...NEXT statement:

10 LET N=10
20 FOR I=1 TO N
30 PRINT "Hello, World!"
40 NEXT I

Most first-generation BASIC versions, such as MSX BASIC and GW-BASIC, supported simple data types, loop cycles, and arrays. The following example is written for GW-BASIC, but will work in most versions of BASIC with minimal changes:

10 INPUT "What is your name: "; U$
20 PRINT "Hello "; U$
30 INPUT "How many stars do you want: "; N
40 S$ = ""
50 FOR I = 1 TO N
60 S$ = S$ + "*"
70 NEXT I
80 PRINT S$
90 INPUT "Do you want more stars? "; A$
100 IF LEN(A$) = 0 THEN GOTO 90
110 A$ = LEFT$(A$, 1)
120 IF A$ = "Y" OR A$ = "y" THEN GOTO 30
130 PRINT "Goodbye "; U$
140 END

The resulting dialog might resemble:

What is your name: Mike
Hello Mike
How many stars do you want: 7
*******
Do you want more stars? yes
How many stars do you want: 3
***
Do you want more stars? no
Goodbye Mike

The original Dartmouth Basic was unusual in having a matrix keyword, MAT. Although not implemented by most later microprocessor derivatives, it is used in this example from the 1968 manual, which averages the numbers that are input (the closing END line, the target of line 30, is restored here):

5 LET S = 0
10 MAT INPUT V
20 LET N = NUM
30 IF N = 0 THEN 99
40 FOR I = 1 TO N
45 LET S = S + V(I)
50 NEXT I
60 PRINT S/N
70 GO TO 5
99 END
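The DEF FN facility described under the keywords above combines naturally with the built-in mathematical functions. This short sketch, invented here for illustration in a GW-BASIC-style dialect, defines a one-line function that maps RND's 0-to-1 output onto a die roll:

10 REM FNR(N) returns a pseudo-random integer from 1 to N
20 DEF FNR(N) = INT(RND(1) * N) + 1
30 RANDOMIZE 42
40 FOR I = 1 TO 5
50 PRINT "ROLL"; I; ":"; FNR(6)
60 NEXT I
70 END

Seeding with RANDOMIZE makes the run reproducible; omitting it leaves the dialect's default pseudo-random sequence.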
By the early 1960s, its proponents were speaking of a future in which users would "buy time on the computer much the same way that the average household buys power and water from utility companies". General Electric, having worked on the Dartmouth project, wrote their own underlying operating system and launched an online time-sharing system known as Mark I, featuring BASIC as one of its primary selling points. Other companies in the emerging field quickly followed suit; Tymshare introduced SUPER BASIC in 1968, CompuServe had a version on the DEC-10 at their launch in 1969, and by the early 1970s BASIC was largely universal on general-purpose mainframe computers. Even IBM eventually joined the club with the introduction of VS-BASIC in 1973. Although time-sharing services with BASIC were successful for a time, the widespread success predicted earlier was not to be. The emergence of minicomputers during the same period, and especially of low-cost microcomputers in the mid-1970s, allowed anyone to purchase and run their own systems rather than buy online time, which was typically billed at dollars per minute. Spread on minicomputers BASIC, by its very nature of being small, was naturally suited to porting to the minicomputer market, which was emerging at the same time as the time-sharing services. These machines had very small main memory, perhaps as little as 4 KB in modern terminology, and lacked high-performance storage like hard drives that make compilers practical. On these systems, BASIC was normally implemented as an interpreter rather than a compiler due to the reduced need for working memory. A particularly important example was HP Time-Shared BASIC, which, like the original Dartmouth system, used two computers working together to implement a time-sharing system. The first, a low-end machine in the HP 2100 series, was used to control user input and save and load their programs to tape or disk. The other, a high-end version of the same underlying machine, ran the programs and generated output. For a cost of about $100,000, one could own a machine capable of running between 16 and 32 users at the same time. The system, bundled as the HP 2000, was the first mini platform to offer time-sharing and was an immediate runaway success, catapulting HP to become the third-largest vendor in the minicomputer space, behind DEC and Data General (DG). DEC, the leader in the minicomputer space since the mid-1960s, had initially ignored BASIC. This was due to their work with RAND Corporation, who had purchased a PDP-6 to run their JOSS language, which was conceptually very similar to BASIC. This led DEC to introduce a smaller, cleaned-up version of JOSS known as FOCAL, which they heavily promoted in the late 1960s. However, with timesharing systems widely offering BASIC, and all of their competition in the minicomputer space doing the same, DEC's customers were clamoring for BASIC. After management repeatedly ignored their pleas, David H. Ahl took it upon himself to buy a BASIC for the PDP-8, which was a major success in the education market. By the early 1970s, FOCAL and JOSS had been forgotten and BASIC had become almost universal in the minicomputer market. DEC would go on to introduce their updated version, BASIC-PLUS, for use on the RSTS/E time-sharing operating system. During this period a number of simple text-based games were written in BASIC, most notably Mike Mayfield's Star Trek.
David Ahl collected these, some ported from FOCAL, and published them in an educational newsletter he compiled. He later collected a number of these into book form as 101 BASIC Computer Games, published in 1973. During the same period, Ahl was involved in the creation of a small computer for educational use, an early personal computer. When management refused to support the concept, Ahl left DEC in 1974 to found the seminal computer magazine Creative Computing. The book remained popular, and was re-published on several occasions. Explosive growth: the home computer era The introduction of the first microcomputers in the mid-1970s was the start of explosive growth for BASIC. It had the advantage that it was fairly well known to the young designers and computer hobbyists who took an interest in microcomputers, many of whom had seen BASIC on minis or mainframes. Despite Dijkstra's famous judgement in 1975, "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration", BASIC was one of the few languages that was both high-level enough to be usable by those without training and small enough to fit into the microcomputers of the day, making it the de facto standard programming language on early microcomputers. The first microcomputer version of BASIC was co-written by Bill Gates, Paul Allen and Monte Davidoff for their newly formed company, Micro-Soft. It was released by MITS in punch tape format for the Altair 8800 shortly after the machine itself, immediately cementing BASIC as the primary language of early microcomputers. Members of the Homebrew Computer Club began circulating copies of the program, prompting Gates to write his Open Letter to Hobbyists, complaining about this early example of software piracy. Partially in response to Gates's letter, and partially to make an even smaller BASIC that would run usefully on 4 KB machines, Bob Albrecht urged Dennis Allison to write their own variation of the language. How to design and implement a stripped-down version of an interpreter for the BASIC language was covered in articles by Allison in the first three quarterly issues of the People's Computer Company newsletter published in 1975, with implementations and source code published in Dr. Dobb's Journal of Tiny BASIC Calisthenics & Orthodontia: Running Light Without Overbyte. This led to a wide variety of Tiny BASICs with added features or other improvements, with versions from Tom Pittman and Li-Chen Wang becoming particularly well known. Micro-Soft, by this time Microsoft, ported their interpreter to the MOS 6502, which quickly became one of the most popular microprocessors of the 8-bit era. When new microcomputers began to appear, notably the "1977 trinity" of the TRS-80, Commodore PET and Apple II, they either included a version of the MS code or quickly introduced new models with it. By 1978, MS BASIC was a de facto standard, and practically every home computer of the 1980s included it in ROM; upon boot, a BASIC interpreter in direct mode was presented. Commodore Business Machines included Commodore BASIC, based on Microsoft BASIC. The Apple II and TRS-80 each had two versions of BASIC, a smaller introductory version introduced with the initial releases of the machines and an MS-based version introduced as interest in the platforms increased. As new companies entered the field, additional versions were added that subtly changed the BASIC family.
The Atari 8-bit family had its own Atari BASIC, modified in order to fit on an 8 KB ROM cartridge. Sinclair BASIC was introduced in 1980 with the Sinclair ZX80, and was later extended for the Sinclair ZX81 and the Sinclair ZX Spectrum. The BBC published BBC BASIC, developed by Acorn Computers Ltd, incorporating many extra structured programming keywords and advanced floating-point operation features. As the popularity of BASIC grew in this period, computer magazines published complete source code in BASIC for video games, utilities, and other programs. Given BASIC's straightforward nature, it was a simple matter to type in the code from the magazine and execute the program. Different magazines were published featuring programs for specific computers, though some BASIC programs were considered universal and could be used in machines running any variant of BASIC (sometimes with minor adaptations). Many books of type-in programs were also available; in particular, Ahl converted the original 101 BASIC games into the Microsoft dialect and published them from Creative Computing as BASIC Computer Games. This book, and its sequels, provided hundreds of ready-to-go programs that could be easily converted to practically any BASIC-running platform. The book reached the stores in 1978, just as the home computer market was starting off, and it became the first million-selling computer book. Later packages, such as Learn to Program BASIC, would also have gaming as an introductory focus. On the business-focused CP/M computers, which soon became widespread in small business environments, Microsoft BASIC (MBASIC) was one of the leading applications. In 1978, David Lien published the first edition of The BASIC Handbook: An Encyclopedia of the BASIC Computer Language, documenting keywords across 78 different computers. By 1981, the second edition documented keywords from over 250 different computers, showcasing the explosive growth of the microcomputer era. IBM PC and compatibles When IBM was designing the IBM PC, they followed the paradigm of existing home computers in wanting to have a built-in BASIC. They sourced this from Microsoft (IBM Cassette BASIC), but Microsoft also produced several other versions of BASIC for MS-DOS/PC DOS, including IBM Disk BASIC (BASIC D), IBM BASICA (BASIC A), GW-BASIC (a BASICA-compatible version that did not need IBM's ROM) and QBasic, all typically bundled with the machine. In addition, they produced the Microsoft BASIC Compiler, aimed at professional programmers. Turbo Pascal publisher Borland published Turbo Basic 1.0 in 1985 (successor versions are still being marketed by the original author under the name PowerBASIC). Microsoft wrote the windowed AmigaBASIC that was supplied with version 1.1 of the pre-emptively multitasking, GUI-based Amiga computers (late 1985 / early 1986), although unusually the product did not bear any Microsoft marks. These later variations introduced many extensions, such as improved string manipulation and graphics support, access to the file system and additional data types. More important were the facilities for structured programming, including additional control structures and proper subroutines supporting local variables. However, by the latter half of the 1980s, users were increasingly using pre-made applications written by others rather than learning programming themselves, while professional programmers now had a wide range of more advanced languages available on small computers.
In practice, according to the Hellenistic political traditions, the Byzantine emperor had been given total power through God to shape the state and its subjects; he was the last authority and legislator of the empire, and all his work was in imitation of the sacred kingdom of God. Also, in accordance with Christian principles, he was the ultimate benefactor and protector of his people. The title of all Emperors preceding Heraclius was officially "Augustus", although other titles such as Dominus were also used. Their names were preceded by Imperator Caesar and followed by Augustus. Following Heraclius, the title commonly became the Greek Basileus (Gr. Βασιλεύς), which had formerly meant sovereign, though Augustus continued to be used in a reduced capacity. Following the establishment of the rival Holy Roman Empire in Western Europe, the title "Autokrator" (Gr. Αὐτοκράτωρ) was increasingly used. In later centuries, the Emperor could be referred to by Western Christians as the "Emperor of the Greeks". Towards the end of the Empire, the standard imperial formula of the Byzantine ruler was "[Emperor's name] in Christ, Emperor and Autocrat of the Romans" (cf. Ῥωμαῖοι and Rûm). In the medieval period, dynasties were common, but the principle of hereditary succession was
Byzantine emperors considered themselves to be rightful Roman emperors in direct succession from Augustus; the term "Byzantine" was coined by Western historiography only in the 16th century. The use of the title "Roman Emperor" by those ruling from Constantinople was not contested until after the Papal coronation of the Frankish Charlemagne as Holy Roman Emperor (25 December 800), done partly in response to the Byzantine coronation of Empress Irene, whose claim, as a woman, was not recognized by Pope Leo III.
parts of the immeasurable whole". Chaos theory and the sensitive dependence on initial conditions were described in numerous forms of literature. This is evidenced by the case of the three-body problem by Poincaré in 1890. He later proposed that such phenomena could be common, for example, in meteorology. In 1898, Jacques Hadamard noted general divergence of trajectories in spaces of negative curvature. Pierre Duhem discussed the possible general significance of this in 1908. The idea that the death of one butterfly could eventually have a far-reaching ripple effect on subsequent historical events made its earliest known appearance in "A Sound of Thunder", a 1952 short story by Ray Bradbury. "A Sound of Thunder" discussed the probability of time travel. In 1961, Lorenz was running a numerical computer model to redo a weather prediction from the middle of the previous run as a shortcut. He entered the initial condition 0.506 from the printout instead of entering the full precision 0.506127 value. The result was a completely different weather scenario. Lorenz wrote: In 1963, Lorenz published a theoretical study of this effect in a highly cited, seminal paper called Deterministic Nonperiodic Flow (the calculations were performed on a Royal McBee LGP-30 computer). Elsewhere he stated: Following suggestions from colleagues, in later speeches and papers, Lorenz used the more poetic butterfly. According to Lorenz, when he failed to provide a title for a talk he was to present at the 139th meeting of the American Association for the Advancement of Science in 1972, Philip Merilees concocted Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas? as a title. Although a butterfly flapping its wings has remained constant in the expression of this concept, the location of the butterfly, the consequences, and the location of the consequences have varied widely. The phrase refers to the idea that a butterfly's wings might create tiny changes in the atmosphere that may ultimately alter the path of a tornado or delay, accelerate or even prevent the occurrence of a tornado in another location. The butterfly does not power or directly create the tornado, but the term is intended to imply that the flap of the butterfly's wings can cause the tornado: in the sense that the flap of the wings is a part of the initial conditions of an interconnected complex web; one set of conditions leads to a tornado, while the other set of conditions doesn't. The flapping wing represents a small change in the initial condition of the system, which cascades to large-scale alterations of events (compare: domino effect). Had the butterfly not flapped its wings, the trajectory of the system might have been vastly different—but it's also equally possible that the set of conditions without the butterfly flapping its wings is the set that leads to a tornado. The butterfly effect presents an obvious challenge to prediction, since initial conditions for a system such as the weather can never be known to complete accuracy. This problem motivated the development of ensemble forecasting, in which a number of forecasts are made from perturbed initial conditions. Some scientists have since argued that the weather system is not as sensitive to initial conditions as previously believed. David Orrell argues that the major contributor to weather forecast error is model error, with sensitivity to initial conditions playing a relatively small role. 
Stephen Wolfram also notes that the Lorenz equations are highly simplified and do not contain terms that represent viscous effects; he believes that these terms would tend to damp out small perturbations. While the "butterfly effect" is often
may have large effects in weather was earlier recognized by French mathematician and engineer Henri Poincaré. American mathematician and philosopher Norbert Wiener also contributed to this theory. Lorenz's work placed the concept of instability of the Earth's atmosphere onto a quantitative base and linked the concept of instability to the properties of large classes of dynamic systems which are undergoing nonlinear dynamics and deterministic chaos. The butterfly effect concept has since been used outside the context of weather science as a broad term for any situation where a small change is supposed to be the cause of larger consequences.
the Linux platform for the first time. Kylix was launched in 2001. Plans to spin off the InterBase division as a separate company were abandoned after Borland and the people who were to run the new company could not agree on terms for the separation. Borland stopped open-source releases of InterBase and developed and sold new versions at a fast pace. In 2001 Delphi 6 became the first integrated development environment to support web services. All of the company's development platforms now support web services. C#Builder was released in 2003 as a native C# development tool, competing with Visual Studio .NET. By the 2005 release, C#Builder, Delphi for Win32, and Delphi for .NET were combined into a single IDE called "Borland Developer Studio" (though the combined IDE is still popularly known as "Delphi"). In late 2002 Borland purchased design tool vendor TogetherSoft and tool publisher Starbase, makers of the StarTeam configuration management tool and the CaliberRM requirements management tool (CaliberRM was eventually renamed "Caliber"). The latest releases of JBuilder and Delphi integrate these tools to give developers a broader set of tools for development. Former CEO Dale Fuller quit in July 2005 but remained on the board of directors. Former COO Scott Arnold took the title of interim president and chief executive officer until November 8, 2005, when it was announced that Tod Nielsen would take over as CEO effective November 9, 2005. Nielsen remained with the company until January 2009, when he accepted the position of chief operating officer at VMware; CFO Erik Prusch then took over as acting president and CEO. In early 2007 Borland announced new branding focused on open application life-cycle management. In April 2007 Borland announced that it would relocate its headquarters and development facilities to Austin, Texas. It also had development centers in Singapore; Santa Ana, California; and Linz, Austria. On May 6, 2009, the company announced it was to be acquired by Micro Focus for $75 million. The transaction was approved by Borland shareholders on July 22, 2009, with Micro Focus acquiring the company for $1.50 per share. Following Micro Focus shareholder approval and the required corporate filings, the transaction was completed in late July 2009. Borland was estimated to have 750 employees at the time. On April 5, 2015, Micro Focus announced the completion of the integration of the Attachmate Group of companies, which had been merged into Micro Focus on November 20, 2014. During the integration period, the affected companies were merged into a single organization. In the announced reorganization, Borland products became part of the Micro Focus portfolio.

Subsidiaries
Legadero: In October 2005, Borland acquired Legadero in order to add its IT management and governance suite, called Tempo, to the Borland product line.
CodeGear: On February 8, 2006, Borland announced the divestiture of its IDE division, including Delphi, JBuilder, and InterBase. At the same time it announced the planned acquisition of Segue Software, a maker of software test and quality tools, in order to concentrate on application life-cycle management (ALM). On March 20, 2006, Borland announced its acquisition of Gauntlet Systems, a provider of technology that screens software under development for quality and security. On November 14, 2006, Borland announced its decision to separate the developer tools group into a wholly owned subsidiary. The newly formed operation, CodeGear, was responsible for four IDE product lines.
On May 7, 2008, Borland announced the sale of the CodeGear division to Embarcadero Technologies for an expected price of $23 million, with $7 million in CodeGear accounts receivable retained by Borland.

Products
Recent
The products acquired from Segue Software include Silk Central, Silk Performer, and Silk Test. The Silk line was first announced in 1997. Other programs are:

Historical products

Unreleased software
Turbo Modula-2: Later sold by TopSpeed as TopSpeed Modula-2.

Marketing
CB Magazine: An official magazine by Borland Japan. The magazine was republished on April 3, 1997.

Renaming to Inprise Corporation
Along with the renaming from Borland International, Inc. to Inprise Corporation, the company refocused its efforts on targeting enterprise applications development. Borland hired marketing firm Lexicon Branding to come up with a new name for the company. Yocam explained that the new name, Inprise, was meant to evoke "integrating the enterprise". The idea was to integrate Borland's tools, Delphi, C++Builder, and JBuilder, with enterprise environment software, including Visigenic's implementations of CORBA, VisiBroker for C++ and Java, and the new product, Application Server.

Frank Borland
Frank Borland is a mascot character for Borland products. According to Philippe Kahn, the mascot first appeared in advertisements and on the cover of the Borland Sidekick 1.0 manual in 1984, during the Borland International, Inc. era. Frank Borland also appeared in Turbo Tutor - A Turbo Pascal Tutorial and Borland JBuilder 2. A live action version of Frank Borland was made after
first headquartered in Scotts Valley, California, then in Cupertino, California, and then in Austin, Texas. In 2009 the company became a full subsidiary of the British firm Micro Focus International plc.

History
The 1980s: Foundations
Borland Ltd. was founded in August 1981 by three Danish citizens, Niels Jensen, Ole Henriksen, and Mogens Glad, to develop products like Word Index for the CP/M operating system, using an off-the-shelf company. However, response to the company's products at the CP/M-82 show in San Francisco showed that a U.S. company would be needed to reach the American market. They met Philippe Kahn, who had just moved to Silicon Valley and who had been a key developer of the Micral. The three Danes had embarked, at first successfully, on marketing software first from Denmark, and later from Ireland, before running into some challenges around the time they met Kahn. Kahn was chairman, president, and CEO of Borland Inc. from its inception in 1983 until 1995. Main shareholders at the incorporation of Borland were Niels Jensen (250,000 shares), Ole Henriksen (160,000), Mogens Glad (100,000), and Kahn (80,000).

Borland International, Inc. era
Borland developed a series of software development tools. Its first product was Turbo Pascal in 1983, developed by Anders Hejlsberg (who later developed .NET and C# for Microsoft); before Borland acquired it, the product was sold in Scandinavia under the name Compas Pascal. 1984 saw the launch of Borland Sidekick, a time organization, notebook, and calculator utility that was an early terminate-and-stay-resident program (TSR) for DOS operating systems. By the mid-1980s the company was a major presence: it had the largest exhibit at the 1985 West Coast Computer Faire other than IBM or AT&T, and Bruce Webster reported that "the legend of Turbo Pascal has by now reached mythic proportions, as evidenced by the number of firms that, in marketing meetings, make plans to become 'the next Borland'". After Turbo Pascal and Sidekick the company launched other applications such as SuperKey and Lightning, all developed in Denmark. While the Danes remained majority shareholders, board members included Kahn, Tim Berry, John Nash, and David Heller. With the assistance of John Nash and David Heller, both British members of the Borland board, the company was taken public on London's Unlisted Securities Market (USM) in 1986. Schroders was the lead investment banker. According to the London IPO filings, the management team was Philippe Kahn as president, Spencer Ozawa as VP of Operations, Marie Bourget as CFO, and Spencer Leyton as VP of sales and business development, while all software development continued to take place in Denmark and later London as the Danish co-founders moved there. A first US IPO followed in 1989, after Ben Rosen joined the Borland board, with Goldman Sachs as the lead banker, and a second offering followed in 1991 with Lazard as the lead banker. In 1985 Borland acquired Analytica and its Reflex database product. The engineering team of Analytica, managed by Brad Silverberg and including Reflex co-founder Adam Bosworth, became the core of Borland's engineering team in the USA. Brad Silverberg was VP of engineering until he left in early 1990 to head up the Personal Systems division at Microsoft. Adam Bosworth initiated and headed up the Quattro project until moving to Microsoft later in 1990 to take over the project which eventually became Access. In 1987 Borland purchased Wizard Systems and incorporated portions of the Wizard C technology into Turbo C.
Bob Jervis, the author of Wizard C, became a Borland employee. Turbo C was released on May 18, 1987. This apparently drove a wedge between Borland and Niels Jensen and the other members of his team, who had been working on a brand new series of compilers at their London development centre. An agreement was reached and they spun off a company called Jensen & Partners International (JPI), later TopSpeed. JPI first launched an MS-DOS compiler named JPI Modula-2, which later became TopSpeed Modula-2, and followed up with TopSpeed C, TopSpeed C++, and TopSpeed Pascal compilers for both the MS-DOS and OS/2 operating systems. The TopSpeed compiler technology exists today as the underlying technology of the Clarion 4GL programming language, a Windows development tool. In September 1987 Borland purchased Ansa-Software, including their Paradox (version 2.0) database management tool. Richard Schwartz, a cofounder of Ansa, became Borland's CTO, and Ben Rosen joined the Borland board. The Quattro Pro spreadsheet was launched in 1989 with improved charting capabilities for its time. Lotus Development, under the leadership of Jim Manzi, sued Borland for copyright infringement (see Look and feel). The litigation, Lotus Dev. Corp. v. Borland Int'l, Inc., brought forward Borland's open standards position, as opposed to Lotus' closed approach. Borland, under Kahn's leadership, took a position of principle and announced that it would defend against Lotus' legal position and "fight for programmer's rights". After a decision in favor of Borland by the First Circuit Court of Appeals, the case went to the United States Supreme Court. Because Justice John Paul Stevens had recused himself, only eight Justices heard the case, and it ended in a 4–4 tie. As a result, the First Circuit decision remained standing, but the Supreme Court result, being a tie, did not bind any other court and set no national precedent. Additionally, Borland's approach towards software piracy and intellectual property (IP) included its "Borland no-nonsense license agreement". This allowed the developer/user to utilize its products "just like a book"; he or she was allowed to make multiple copies of a program, as long as only one copy was in use at any point in time.

The 1990s: Rise and change
In September 1991 Borland purchased Ashton-Tate in an all-stock transaction, bringing the dBASE and InterBase databases into the house. Competition with Microsoft was fierce. Microsoft launched the competing database Microsoft Access and bought the dBASE clone FoxPro in 1992, undercutting Borland's prices. During the early 1990s Borland's implementation of C and C++ outsold Microsoft's. Borland survived as a company, but no longer had the dominance in software tools that it once had. It went through a radical transition in products, financing, and staff, and became a very different company from the one which challenged Microsoft and Lotus in the early 1990s. The internal problems that arose with the Ashton-Tate merger were a large part of the downfall. Ashton-Tate's product portfolio proved to be weak, with no provision for evolution into the GUI environment of Windows. Almost all product lines were discontinued. The consolidation of duplicate support and development offices was costly and disruptive. Worst of all, the highest revenue earner of the combined company was dBASE, which had no Windows version ready.
Borland had an internal project to clone dBASE which was intended to run on Windows and was part of the strategy of the acquisition, but by late 1992 this was abandoned due to technical flaws and the company
evolutionary process—an integral function of the universe." Fuller wrote that the natural analytic geometry of the universe was based on arrays of tetrahedra. He developed this in several ways, from the close-packing of spheres and the number of compressive or tensile members required to stabilize an object in space. One confirming result was that the strongest possible homogeneous truss is cyclically tetrahedral. He had become a guru of the design, architecture, and 'alternative' communities, such as Drop City, the community of experimental artists to whom he awarded the 1966 "Dymaxion Award" for "poetically economic" domed living structures.

Major design projects
The geodesic dome
Fuller was most famous for his lattice shell structures – geodesic domes, which have been used as parts of military radar stations, civic buildings, environmental protest camps, and exhibition attractions. An examination of the geodesic design by Walther Bauersfeld for the Zeiss-Planetarium, built some 28 years prior to Fuller's work, reveals that Fuller's Geodesic Dome patent (U.S. 2,682,235; awarded in 1954) is the same design as Bauersfeld's. Their construction is based on extending some basic principles to build simple "tensegrity" structures (tetrahedron, octahedron, and the closest packing of spheres), making them lightweight and stable. The geodesic dome was a result of Fuller's exploration of nature's constructing principles to find design solutions. The Fuller Dome is referenced in the Hugo Award-winning novel Stand on Zanzibar by John Brunner, in which a geodesic dome is said to cover the entire island of Manhattan, floating on air due to the hot-air balloon effect of the large air mass under the dome (and perhaps its construction of lightweight materials).

Transportation
The Dymaxion car was a vehicle designed by Fuller, featured prominently at Chicago's 1933–1934 Century of Progress World's Fair. During the Great Depression, Fuller formed the Dymaxion Corporation and built three prototypes with noted naval architect Starling Burgess and a team of 27 workmen, using donated money as well as a family inheritance. Fuller associated the word Dymaxion, a blend of the words dynamic, maximum, and tension, with the goal of his study: "maximum gain of advantage from minimal energy input". The Dymaxion was not an automobile as such but rather the 'ground-taxying mode' of a vehicle that might one day be designed to fly, land, and drive — an "Omni-Medium Transport" for air, land, and water. Fuller focused on the landing and taxiing qualities, and noted severe limitations in its handling. The team made improvements and refinements to the platform, and Fuller noted the Dymaxion "was an invention that could not be made available to the general public without considerable improvements". The bodywork was aerodynamically designed for increased fuel efficiency, and its platform featured a lightweight cromoly-steel hinged chassis, a rear-mounted V8 engine, front-wheel drive, and three wheels. The vehicle was steered via the third wheel at the rear, capable of a 90° steering lock. Able to steer in a tight circle, the Dymaxion often caused a sensation, bringing nearby traffic to a halt. Shortly after launch, a prototype crashed after being hit by another car, killing the Dymaxion's driver. The other car was driven by a local politician and was removed from the accident scene, leaving reporters who arrived subsequently to blame the Dymaxion's unconventional design — though investigations exonerated the prototype.
Fuller would himself later crash another prototype with his young daughter aboard. Despite courting the interest of important figures from the auto industry, Fuller used his family inheritance to finish the second and third prototypes — eventually selling all three, dissolving Dymaxion Corporation, and maintaining that the Dymaxion was never intended as a commercial venture. One of the three original prototypes survives.

Housing
Fuller's energy-efficient and inexpensive Dymaxion house garnered much interest, but only two prototypes were ever produced. Here the term "Dymaxion" is used in effect to signify a "radically strong and light tensegrity structure". One of Fuller's Dymaxion Houses is on display as a permanent exhibit at the Henry Ford Museum in Dearborn, Michigan. Designed and developed during the mid-1940s, this prototype is a round structure (not a dome), shaped something like the flattened "bell" of certain jellyfish. It has several innovative features, including revolving dresser drawers and a fine-mist shower that reduces water consumption. According to Fuller biographer Steve Crooks, the house was designed to be delivered in two cylindrical packages, with interior color panels available at local dealers. A circular structure at the top of the house was designed to rotate around a central mast to use natural winds for cooling and air circulation. Conceived nearly two decades earlier, and developed in Wichita, Kansas, the house was designed to be lightweight, adapted to windy climates, cheap to produce, and easy to assemble. Because of its light weight and portability, the Dymaxion House was intended to be the ideal housing for individuals and families who wanted the option of easy mobility. The design included a "Go-Ahead-With-Life Room" stocked with maps, charts, and helpful tools for travel "through time and space". It was to be produced using factories, workers, and technologies that had produced World War II aircraft. It looked ultramodern at the time, built of metal and sheathed in polished aluminum. The basic model enclosed of floor area. Due to publicity, there were many orders during the early post-war years, but the company that Fuller and others had formed to produce the houses failed due to management problems. In 1967, Fuller developed a concept for an offshore floating city named Triton City and published a report on the design the following year. Models of the city aroused the interest of President Lyndon B. Johnson who, after leaving office, had them placed in the Lyndon Baines Johnson Library and Museum. In 1969, Fuller began the Otisco Project, named after its location in Otisco, New York. The project developed and demonstrated concrete spray with mesh-covered wireforms for producing large-scale, load-bearing spanning structures built on-site, without the use of pouring molds, other adjacent surfaces, or hoisting. The initial method used a circular concrete footing in which anchor posts were set. Tubes cut to length and with ends flattened were then bolted together to form a duodeca-rhombicahedron (22-sided hemisphere) geodesic structure with spans ranging to . The form was then draped with layers of ¼-inch wire mesh attached by twist ties. Concrete was sprayed onto the structure, building up a solid layer which, when cured, would support additional concrete to be added by a variety of traditional means. Fuller referred to these buildings as monolithic ferroconcrete geodesic domes. However, the tubular frame form proved problematic for setting windows and doors.
It was replaced by iron rebar set vertically in the concrete footing and then bent inward and welded in place to create the dome's wireform structure, and this performed satisfactorily. Domes up to three stories tall built with this method proved to be remarkably strong. Other shapes such as cones, pyramids, and arches proved equally adaptable. The project was enabled by a grant underwritten by Syracuse University and sponsored by U.S. Steel (rebar), the Johnson Wire Corp. (mesh), and Portland Cement Company (concrete). The ability to build large, complex, load-bearing concrete spanning structures in free space would open many possibilities in architecture, and is considered one of Fuller's greatest contributions.

Dymaxion map and World Game
Fuller, along with co-cartographer Shoji Sadao, also designed an alternative projection map, called the Dymaxion map. This was designed to show Earth's continents with minimum distortion when projected or printed on a flat surface. In the 1960s, Fuller developed the World Game, a collaborative simulation game played on a 70-by-35-foot Dymaxion map, in which players attempt to solve world problems. The object of the simulation game is, in Fuller's words, to "make the world work, for 100% of humanity, in the shortest possible time, through spontaneous cooperation, without ecological offense or the disadvantage of anyone".

Appearance and style
Buckminster Fuller wore thick-lensed spectacles to correct his extreme hyperopia, a condition that went undiagnosed for the first five years of his life. Fuller's hearing was damaged during his Naval service in World War I and deteriorated during the 1960s. After experimenting with bullhorns as hearing aids during the mid-1960s, Fuller adopted electronic hearing aids from the 1970s onward. In public appearances, Fuller always wore dark-colored suits, appearing like "an alert little clergyman". Previously, he had experimented with unconventional clothing immediately after his 1927 epiphany, but found that breaking social fashion customs made others devalue or dismiss his ideas. Fuller learned the importance of physical appearance as part of one's credibility, and decided to become "the invisible man" by dressing in clothes that would not draw attention to himself. With self-deprecating humor, Fuller described this black-suited appearance as resembling a "second-rate bank clerk". Writer Guy Davenport met him in 1965 and described him thus: He's a dwarf, with a worker's hands, all callouses and squared fingers. He carries an ear trumpet, of green plastic, with WORLD SERIES 1965 printed on it. His smile is golden and frequent; the man's temperament is angelic, and his energy is just a touch more than that of [Robert] Gallway (champeen runner, footballeur, and swimmer). One leg is shorter than the other, and the prescription shoe worn to correct the imbalance comes from a country doctor deep in the wilderness of Maine. Blue blazer, Khrushchev trousers, and a briefcase full of Japanese-made wonderments;

Lifestyle
Following his global prominence from the 1960s onward, Fuller became a frequent flier, often crossing time zones to lecture. In the 1960s and 1970s, he wore three watches simultaneously: one for the time zone of his office at Southern Illinois University, one for the time zone of the location he would next visit, and one for the time zone he was currently in.
In the 1970s, Fuller was only in 'homely' locations (his personal home in Carbondale, Illinois; his holiday retreat on Bear Island, Maine; and his daughter's home in Pacific Palisades, California) roughly 65 nights per year—the other 300 nights were spent in hotel beds in the locations he visited on his lecturing and consulting circuits. In the 1920s, Fuller experimented with polyphasic sleep, which he called Dymaxion sleep. Inspired by the sleep habits of animals such as dogs and cats, Fuller worked until he was tired, and then slept short naps. This generally resulted in Fuller sleeping 30-minute naps every 6 hours (four half-hour naps, or two hours of sleep, per day). This allowed him "twenty-two thinking hours a day", which aided his work productivity. Fuller reportedly kept this Dymaxion sleep habit for two years, before quitting the routine because it conflicted with his business associates' sleep habits. Despite no longer personally partaking in the habit, in 1943 Fuller suggested Dymaxion sleep as a strategy that the United States could adopt to win World War II. Despite only practicing true polyphasic sleep for a period during the 1920s, Fuller was known for his stamina throughout his life. He was described as "tireless" by Barry Farrell in Life magazine, who noted that Fuller stayed up all night replying to mail during Farrell's 1970 trip to Bear Island. In his seventies, Fuller generally slept for 5–8 hours per night. Fuller documented his life copiously from 1915 to 1983, approximately of papers in a collection called the Dymaxion Chronofile. He also kept copies of all incoming and outgoing correspondence. The enormous R. Buckminster Fuller Collection is currently housed at Stanford University.

Language and neologisms
Buckminster Fuller spoke and wrote in a unique style and said it was important to describe the world as accurately as possible. Fuller often created long run-on sentences and used unusual compound words (omniwell-informed, intertransformative, omni-interaccommodative, omniself-regenerative) as well as terms he himself invented. His style of speech was characterized by progressively rapid and breathless delivery and rambling digressions of thought, which Fuller described as "thinking out loud". The effect, combined with Fuller's dry voice and New England accent, was varyingly considered "hypnotic" or "overwhelming". Fuller used the word Universe without the definite or indefinite articles (the or a) and always capitalized the word. Fuller wrote that "by Universe I mean: the aggregate of all humanity's consciously apprehended and communicated (to self or others) Experiences". The words "down" and "up", according to Fuller, are awkward in that they refer to a planar concept of direction inconsistent with human experience. The words "in" and "out" should be used instead, he argued, because they better describe an object's relation to a gravitational center, the Earth. "I suggest to audiences that they say, 'I'm going "outstairs" and "instairs."' At first that sounds strange to them; they all laugh about it. But if they try saying in and out for a few days in fun, they find themselves beginning to realize that they are indeed going inward and outward in respect to the center of Earth, which is our Spaceship Earth. And for the first time they begin to feel real 'reality.'" "World-around" is a term coined by Fuller to replace "worldwide".
The general belief in a flat Earth died out in classical antiquity, so using "wide" is an anachronism when referring to the surface of the Earth—a spheroidal surface has area and encloses a volume but has no width. Fuller held that unthinking use of obsolete scientific ideas detracts from and misleads intuition. Other neologisms collectively invented by the Fuller family, according to Allegra Fuller Snyder, are the terms "sunsight" and "sunclipse", replacing "sunrise" and "sunset" to overturn the geocentric bias of most pre-Copernican celestial mechanics. Fuller also invented the word "livingry", as opposed to weaponry (or "killingry"), to mean that which is in support of all human, plant, and Earth life. "The architectural profession—civil, naval, aeronautical, and astronautical—has always been the place where the most competent thinking is conducted regarding livingry, as opposed to weaponry." As well as contributing significantly to the development of tensegrity technology, Fuller invented the term "tensegrity", a portmanteau of "tensional integrity". "Tensegrity describes a structural-relationship principle in which structural shape is guaranteed by the finitely closed, comprehensively continuous, tensional behaviors of the system and not by the discontinuous and exclusively local compressional member behaviors. Tensegrity provides the ability to yield increasingly without ultimately breaking or coming asunder." "Dymaxion" is a portmanteau of "dynamic maximum tension". It was invented around 1929 by two admen at Marshall Field's department store in Chicago to describe Fuller's concept house, which was shown as part of a house-of-the-future store display. They created the term using three words that Fuller used repeatedly to describe his design – dynamic, maximum, and tension. Fuller also helped to popularize the concept of Spaceship Earth: "The most important fact about Spaceship Earth: an instruction manual didn't come with it." In the preface to his "cosmic fairy tale" Tetrascroll: Goldilocks and the Three Bears, Fuller stated that his distinctive speaking style grew out of years of embellishing the classic tale for the benefit of his daughter, allowing him to explore both his new theories and how to present them. The Tetrascroll narrative was eventually transcribed onto a set of tetrahedral lithographs (hence the name), as well as being published as a traditional book.

Concepts and buildings
His concepts and buildings include:

Influence and legacy
Among the many people who were influenced by Buckminster Fuller are: Constance Abernathy, Ruth Asawa, J. Baldwin, Michael Ben-Eli, Pierre Cabrol, John Cage, Joseph Clinton, Peter Floyd, Norman Foster, Medard Gabel, Michael Hays, Ted Nelson, David Johnston, Peter Jon Pearce, Shoji Sadao, Edwin Schlossberg, Kenneth Snelson, Robert Anton Wilson, Stewart Brand, and Jason McLennan. An allotrope of carbon, fullerene, and a particular molecule of that allotrope, C60 (buckminsterfullerene or "buckyball"), have been named after him. The buckminsterfullerene molecule, which consists of 60 carbon atoms, very closely resembles a spherical version of Fuller's geodesic dome. The 1996 Nobel Prize in Chemistry was given to Kroto, Curl, and Smalley for their discovery of the fullerenes. He is quoted in the lyrics of "The Tower of Babble" in the musical Godspell: "Man is a complex of patterns and processes." The indie band Driftless Pony Club named their 2011 album, Buckminster, after him. All the songs within the album are based upon his life and works.
On July 12, 2004, the United States Post Office released a new commemorative stamp honoring R. Buckminster Fuller on the 50th anniversary of his patent for the geodesic dome and on the occasion of his 109th birthday. The stamp's design replicated the January 10, 1964, cover of Time magazine. Fuller was the subject of two documentary films: The World of Buckminster Fuller (1971) and Buckminster Fuller: Thinking Out Loud (1996). Additionally, filmmaker Sam Green and the band Yo La Tengo collaborated on a 2012 "live documentary" about Fuller, The Love Song of R. Buckminster Fuller. In June 2008, the Whitney Museum of American Art presented "Buckminster Fuller: Starting with the Universe", the most comprehensive retrospective to date of his work and ideas. The exhibition traveled to the Museum of Contemporary Art, Chicago in 2009. It presented a combination of models, sketches, and other artifacts, representing six decades of the artist's integrated approach to housing, transportation, communication, and cartography. It also featured his extensive connections with Chicago from his years spent living, teaching, and working in the city. In 2009, a number of US
structural and mathematical resemblance to geodesic spheres. He also served as the second World President of Mensa International from 1974 to 1983.

Life and work
Fuller was born on July 12, 1895, in Milton, Massachusetts, the son of Richard Buckminster Fuller and Caroline Wolcott Andrews, and grand-nephew of Margaret Fuller, an American journalist, critic, and women's rights advocate associated with the American transcendentalism movement. The unusual middle name, Buckminster, was an ancestral family name. As a child, Richard Buckminster Fuller tried numerous variations of his name. He used to sign his name differently each year in the guest register of his family's summer vacation home on Bear Island, Maine. He finally settled on R. Buckminster Fuller. Fuller spent much of his youth on Bear Island, in Penobscot Bay off the coast of Maine. He attended Froebelian kindergarten. He was dissatisfied with the way geometry was taught in school, disagreeing with the notions that a chalk dot on the blackboard represented an "empty" mathematical point, or that a line could stretch off to infinity. To him these notions were illogical, and they led to his work on synergetics. He often made items from materials he found in the woods, and sometimes made his own tools. He experimented with designing a new apparatus for human propulsion of small boats: by age 12, he had invented a 'push pull' system for propelling a rowboat by use of an inverted umbrella connected to the transom with a simple oar lock, which allowed the user to face forward and point the boat toward its destination. Later in life, Fuller took exception to the term "invention". Years later, he decided that this sort of experience had provided him with not only an interest in design, but also a habit of being familiar with and knowledgeable about the materials that his later projects would require. Fuller earned a machinist's certification, and knew how to use the press brake, stretch press, and other tools and equipment used in the sheet metal trade.

Education
Fuller attended Milton Academy in Massachusetts, and after that began studying at Harvard College, where he was affiliated with Adams House. He was expelled from Harvard twice: first for spending all his money partying with a vaudeville troupe, and then, after having been readmitted, for his "irresponsibility and lack of interest". By his own appraisal, he was a non-conforming misfit in the fraternity environment.

Wartime experience
Between his sessions at Harvard, Fuller worked in Canada as a mechanic in a textile mill, and later as a laborer in the meat-packing industry. He also served in the U.S. Navy in World War I, as a shipboard radio operator, as an editor of a publication, and as commander of the crash rescue boat USS Inca. After discharge, he worked again in the meat-packing industry, acquiring management experience. In 1917, he married Anne Hewlett. During the early 1920s, he and his father-in-law developed the Stockade Building System for producing lightweight, weatherproof, and fireproof housing—although the company would ultimately fail in 1927.

Depression and epiphany
Buckminster Fuller recalled 1927 as a pivotal year of his life. His daughter Alexandra had died in 1922 of complications from polio and spinal meningitis just before her fourth birthday. Barry Katz, a Stanford University scholar who wrote about Fuller, found signs that around this time in his life Fuller was suffering from depression and anxiety.
Fuller dwelled on his daughter's death, suspecting that it was connected with the Fullers' damp and drafty living conditions. This provided motivation for Fuller's involvement in Stockade Building Systems, a business which aimed to provide affordable, efficient housing. In 1927, at age 32, Fuller lost his job as president of Stockade. The Fuller family had no savings, and the birth of their daughter Allegra in 1927 added to the financial challenges. Fuller drank heavily and reflected upon the solution to his family's struggles on long walks around Chicago. During the autumn of 1927, Fuller contemplated suicide by drowning in Lake Michigan, so that his family could benefit from a life insurance payment. Fuller said that he had experienced a profound incident which would provide direction and purpose for his life. He felt as though he was suspended several feet above the ground, enclosed in a white sphere of light. A voice spoke directly to Fuller, and declared: Fuller stated that this experience led to a profound re-examination of his life. He ultimately chose to embark on "an experiment, to find what a single individual could contribute to changing the world and benefiting all humanity". Speaking to audiences later in life, Fuller would regularly recount the story of his Lake Michigan experience, and its transformative impact on his life.

Recovery
In 1927 Fuller resolved to think independently, which included a commitment to "the search for the principles governing the universe and help advance the evolution of humanity in accordance with them ... finding ways of doing more with less to the end that all people everywhere can have more and more". By 1928, Fuller was living in Greenwich Village and spending much of his time at the popular café Romany Marie's, where he had spent an evening in conversation with Marie and Eugene O'Neill several years earlier. Fuller accepted a job decorating the interior of the café in exchange for meals, giving informal lectures several times a week, and models of the Dymaxion house were exhibited at the café. Isamu Noguchi arrived during 1929—Constantin Brâncuși, an old friend of Marie's, had directed him there—and Noguchi and Fuller were soon collaborating on several projects, including the modeling of the Dymaxion car based on recent work by Aurel Persu. It was the beginning of their lifelong friendship.

Geodesic domes
Fuller taught at Black Mountain College in North Carolina during the summers of 1948 and 1949, serving as its Summer Institute director in 1949. Fuller had been shy and withdrawn, but he was persuaded to participate in a theatrical performance of Erik Satie's Le piège de Méduse produced by John Cage, who was also teaching at Black Mountain. During rehearsals, under the tutelage of Arthur Penn, then a student at Black Mountain, Fuller broke through his inhibitions to become confident as a performer and speaker. At Black Mountain, with the support of a group of professors and students, he began reinventing a project that would make him famous: the geodesic dome. Although the geodesic dome had been created, built, and awarded a German patent on June 19, 1925, by Dr. Walther Bauersfeld, Fuller was awarded United States patents. Fuller's patent application made no mention of Bauersfeld's self-supporting dome built some 26 years prior. Although Fuller undoubtedly popularized this type of structure, he is mistakenly given credit for its design. (A short sketch of the subdivision geometry behind such domes follows below.)
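To make the dome construction concrete, here is a minimal sketch of the textbook geodesic subdivision, not Fuller's or Bauersfeld's actual procedure; the coordinates, the function names, and the choice of frequencies are assumptions of this illustration. Each triangular face of an icosahedron is divided into a frequency-nu triangular grid, every grid point is pushed out to the circumscribed sphere, and the script counts how many distinct strut lengths the subdivided face requires:

```python
from math import sqrt

PHI = (1 + sqrt(5)) / 2
# Three mutually adjacent vertices of an icosahedron (one face).
FACE = [(0.0, 1.0, PHI), (0.0, -1.0, PHI), (PHI, 0.0, 1.0)]

def normalize(p):
    """Project a point onto the unit sphere."""
    n = sqrt(sum(c * c for c in p))
    return tuple(c / n for c in p)

def strut_lengths(nu):
    """Distinct strut lengths of one face subdivided at frequency nu."""
    A, B, C = (normalize(v) for v in FACE)

    def grid_point(i, j):
        # Barycentric grid point (i*A + j*B + k*C)/nu pushed to the sphere.
        k = nu - i - j
        return normalize(tuple((i * a + j * b + k * c) / nu
                               for a, b, c in zip(A, B, C)))

    lengths = set()
    for i in range(nu + 1):
        for j in range(nu + 1 - i):
            p = grid_point(i, j)
            # Each grid edge is counted once by walking only in the
            # three "forward" directions of the triangular lattice.
            for di, dj in ((1, 0), (0, 1), (1, -1)):
                ni, nj = i + di, j + dj
                if ni >= 0 and nj >= 0 and ni + nj <= nu:
                    q = grid_point(ni, nj)
                    lengths.add(round(sqrt(sum((u - v) ** 2
                                               for u, v in zip(p, q))), 6))
    return sorted(lengths)

for nu in (1, 2, 3):
    lens = strut_lengths(nu)
    print(f"frequency {nu}: {len(lens)} distinct strut length(s): {lens}")
```

At frequency 1 the script reports a single strut length (an icosahedron edge projected onto the unit sphere); at frequency 2 it reports two, matching the standard result that a two-frequency icosahedral dome needs only two member lengths. This small inventory of strut types is part of what makes geodesic domes lightweight, stable, and economical to fabricate.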
One of his early models was first constructed in 1945 at Bennington College in Vermont, where he lectured often. Although Bauersfeld's dome could support a full skin of concrete, it was not until 1949 that Fuller erected a geodesic dome building that could sustain its own weight with no practical limits. It was in diameter and constructed of aluminium aircraft tubing and a vinyl-plastic skin, in the form of an icosahedron. To prove his design, Fuller suspended from the structure's framework several students who had helped him build it. The U.S. government recognized the importance of this work, and employed his firm Geodesics, Inc. in Raleigh, North Carolina to make small domes for the Marines. Within a few years, there were thousands of such domes around the world. Fuller's first "continuous tension – discontinuous compression" geodesic dome (a full sphere in this case) was constructed at the University of Oregon Architecture School in 1959 with the help of students. These continuous tension – discontinuous compression structures featured single-force compression members (no flexure or bending moments) that did not touch each other and were 'suspended' by the tensional members.

Dymaxion Chronofile
For half of a century, Fuller developed many ideas, designs, and inventions, particularly regarding practical, inexpensive shelter and transportation. He documented his life, philosophy, and ideas scrupulously in a daily diary (later called the Dymaxion Chronofile) and in twenty-eight publications. Fuller financed some of his experiments with inherited funds, sometimes augmented by funds invested by his collaborators, one example being the Dymaxion car project.

World stage
International recognition began with the success of huge geodesic domes during the 1950s. Fuller lectured at North Carolina State University in Raleigh in 1949, where he met James Fitzgibbon, who would become a close friend and colleague. Fitzgibbon was director of Geodesics, Inc. and Synergetics, Inc., the first licensees to design geodesic domes. Thomas C. Howard was lead designer, architect, and engineer for both companies. Richard Lewontin, a new faculty member in population genetics at North Carolina State University, provided Fuller with computer calculations for the lengths of the domes' edges. Fuller began working with architect Shoji Sadao in 1954; together they designed a hypothetical Dome over Manhattan in 1960, and in 1964 they co-founded the architectural firm Fuller & Sadao Inc., whose first project was to design the large geodesic dome for the U.S. Pavilion at Expo 67 in Montreal. This building is now the "Montreal Biosphère". In 1962, the artist and searcher John McHale wrote the first monograph on Fuller, published by George Braziller in New York. After employing several Southern Illinois University Carbondale graduate students to rebuild his models following an apartment fire in the summer of 1959, Fuller was recruited by longtime friend Harold Cohen to serve as a research professor of "design science exploration" at the institution's School of Art and Design. According to SIU architecture professor Jon Davey, the position was "unlike most faculty appointments ... more a celebrity role than a teaching job" in which Fuller offered few courses and was only stipulated to spend two months per year on campus. Nevertheless, his time in Carbondale was "extremely productive", and Fuller was promoted to university professor in 1968 and distinguished university professor in 1972.
Working as a designer, scientist, developer, and writer, he continued to lecture for many years around the world. He collaborated at SIU with John McHale. In 1965, they inaugurated the World Design Science Decade (1965 to 1975) at the meeting of the International Union of Architects in Paris, which was, in Fuller's own words, devoted to "applying the principles of science to solving the problems of humanity." From 1972 until retiring as university professor emeritus in 1975, Fuller held a joint appointment at Southern Illinois University Edwardsville, where he had designed the dome for the campus Religious Center in 1971. During this period, he also held a joint fellowship at a consortium of Philadelphia-area institutions, including the University of Pennsylvania, Bryn Mawr College, Haverford College, Swarthmore College, and the University City Science Center; as a result of this affiliation, the University of Pennsylvania appointed him university professor emeritus in 1975. Fuller believed human societies would soon rely mainly on renewable sources of energy, such as solar- and wind-derived electricity. He hoped for an age of "omni-successful education and sustenance of all humanity". Fuller referred to himself as "the property of universe" and, during one radio interview he gave later in life, declared himself and his work "the property of all humanity". For his lifetime of work, the American Humanist Association named him the 1969 Humanist of the Year. In 1976, Fuller was a key participant at UN Habitat I, the first UN forum on human settlements.

Honors
Fuller was awarded 28 United States patents and many honorary doctorates. In 1960, he was awarded the Frank P. Brown Medal from The Franklin Institute. Fuller was elected an honorary member of Phi Beta Kappa in 1967, on the occasion of the 50-year reunion of his Harvard class of 1917 (from which he had been expelled in his first year). He was elected a Fellow of the American Academy of Arts and Sciences in 1968. In 1968, he was also elected into the National Academy of Design as an Associate member, becoming a full Academician in 1970. In 1970, he received the Gold Medal award from the American Institute of Architects. In 1976, he received the St. Louis Literary Award from the Saint Louis University Library Associates. In 1977, Fuller received the Golden Plate Award of the American Academy of Achievement. He also received numerous other awards, including the Presidential Medal of Freedom, presented to him on February 23, 1983, by President Ronald Reagan.

Last filmed appearance
Fuller's last filmed interview took place on June 21, 1983, in which he spoke at Norman Foster's Royal Gold Medal for architecture ceremony. His speech can be watched in the archives of the AA School of Architecture, in which he spoke after Sir Robert Sainsbury's introductory speech and Foster's keynote address.

Death
In the year of his death, Fuller described himself as follows: Guinea Pig B: I AM NOW CLOSE TO 88 and I am confident that the only thing important about me is that I am an average healthy human. I am also a living case history of a thoroughly documented, half-century, search-and-research project designed to discover what, if anything, an unknown, moneyless individual, with a dependent wife and newborn child, might be able to do effectively on behalf of all humanity that could not be accomplished by great nations, great religions or private enterprise, no matter how rich or powerfully armed. Fuller died on July 1, 1983, 11 days before his 88th birthday.
During the period leading up to his death, his wife had been lying comatose in a Los Angeles hospital, dying of cancer. It was while visiting her there that he exclaimed, at a certain point: "She is squeezing my hand!" He then stood up, suffered a heart attack, and died an hour later, at age 87. His wife of 66 years died 36 hours later. They are buried in Mount Auburn Cemetery in Cambridge, Massachusetts.

Philosophy and worldview
Buckminster Fuller was a Unitarian, like his grandfather Arthur Buckminster Fuller, a Unitarian minister. Fuller was also an early environmental activist, aware of the Earth's finite resources, and promoted a principle he termed "ephemeralization", which, according to futurist and Fuller disciple Stewart Brand, was defined as "doing more with less". Resources and waste from crude, inefficient products could be recycled into making more valuable products, thus increasing the efficiency of the entire process. Fuller also coined the word synergetics, a catch-all term used broadly for communicating experiences using geometric concepts, and more specifically, the empirical study of systems in transformation; his focus was on total system behavior unpredicted by the behavior of any isolated components. Fuller was a pioneer in thinking globally, and explored energy and material efficiency in the fields of architecture, engineering, and design. In his book Critical Path (1981) he cited the opinion of François de Chadenèdes (1920–1999) that petroleum, from the standpoint of its replacement cost in our current energy "budget" (essentially, the net incoming solar flux), had cost nature "over a million dollars" per U.S. gallon (US$300,000 per litre) to produce. From this point of view, its use as a transportation fuel by people commuting to work represents a huge net loss compared to their actual earnings. An encapsulation of his views might best be summed up as: "There is no energy crisis, only a crisis of ignorance." Though Fuller was concerned about sustainability and human survival under the existing socio-economic system, he remained optimistic about humanity's future. Defining wealth in terms of knowledge, as the "technological ability to protect, nurture, support, and accommodate all growth needs of life", his analysis of the condition of "Spaceship Earth" caused him to conclude that at a certain time during the 1970s, humanity had attained an unprecedented state. He was convinced that the accumulation of relevant knowledge, combined with the quantities of major recyclable resources that had already been extracted from the earth, had attained a critical level, such that competition for necessities had become unnecessary. Cooperation had become the optimum survival strategy. He declared: "selfishness is unnecessary and hence-forth unrationalizable ... War is obsolete." He criticized previous utopian schemes as too exclusive, and thought this exclusivity was a major source of their failure. To work, he thought that a utopia needed to include everyone. Fuller was influenced by Alfred Korzybski's idea of general semantics. In the 1950s, Fuller attended seminars and workshops organized by the
which time he discovered comic strips such as Pogo, Krazy Kat, and Charles Schulz' Peanuts which subsequently inspired and influenced his desire to become a professional cartoonist. On one occasion when he was in fourth grade, he wrote a letter to Charles Schulz, who responded, much to Watterson's surprise. This made a big impression on him at the time. His parents encouraged him in his artistic pursuits. Later, they recalled him as a "conservative child" — imaginative, but "not in a fantasy way", and certainly nothing like the character of Calvin that he later created. Watterson found avenues for his cartooning talents throughout primary and secondary school, creating high school-themed super hero comics with his friends and contributing cartoons and art to the school newspaper and yearbook. From 1976 to 1980, Watterson attended Kenyon College and graduated with a Bachelor of Arts degree in political science. He had already decided on a career in cartooning, but he felt his studies would help him move into editorial cartooning. At college, he continued to develop his art skills; during his sophomore year, he painted Michelangelo's Creation of Adam on the ceiling of his dorm room. He also contributed cartoons to the college newspaper, some of which included the original "Spaceman Spiff" cartoons. Later, when Watterson was creating names for the characters in his comic strip, he decided on Calvin (after the Protestant reformer John Calvin) and Hobbes (after the social philosopher Thomas Hobbes), allegedly as a "tip of the hat" to Kenyon's political science department. In The Complete Calvin and Hobbes, Watterson stated that Calvin was named for "a 16th-century theologian who believed in predestination," and Hobbes for "a 17th-century philosopher with a dim view of human nature." Watterson wrote a brief, tongue-in-cheek autobiography in the late 1980s. Career Early work Watterson was inspired by the work of The Cincinnati Enquirer political cartoonist Jim Borgman, a 1976 graduate of Kenyon College, and decided to try to follow the same career path as Borgman, who in turn offered support and encouragement to the aspiring artist. Watterson graduated in 1980 and was hired on a trial basis at the Cincinnati Post, a competing paper of the Enquirer. Watterson quickly discovered that the job was full of unexpected challenges which prevented him from performing his duties to the standards set for him. Not the least of these challenges was his unfamiliarity with the Cincinnati political scene, as he had never resided in or near the city, having grown up in the Cleveland area and attending college in central Ohio. The Post abruptly fired Watterson before his contract was up. He then joined a small advertising agency and worked there for four years as a designer, creating grocery advertisements while also working on his own projects, including development of his own cartoon strip and contributions to Target: The Political Cartoon Quarterly. As a freelance artist, Watterson has drawn other works for various merchandise, including album art for his brother's band, calendars, clothing graphics, educational books, magazine covers, posters, and post cards. Calvin and Hobbes and rise to success Watterson has said that he works for personal fulfilment. As he told the graduating class of 1990 at Kenyon College, "It's surprising how hard we'll work when the work is done just for ourselves." Calvin and Hobbes was first published on November 18, 1985. 
In Calvin and Hobbes Tenth Anniversary Book, he wrote that his influences included Charles Schulz's Peanuts, Walt Kelly's Pogo, and George Herriman's Krazy Kat. Watterson wrote the introduction to the first volume of The Komplete Kolor Krazy Kat. Watterson's style also reflects the influence of Winsor McCay's Little Nemo in Slumberland. Like many artists, Watterson incorporated elements of his life, interests, beliefs, and values into his work—for example, his hobby as a cyclist, memories of his own father's speeches about "building character", and his views on merchandising and corporations. Watterson's cat Sprite very much inspired the personality and physical features of Hobbes. Watterson spent much of his career trying to change the climate of newspaper comics. He believed that the artistic value of comics was being undermined, and that the space which they occupied in newspapers continually decreased, subject to the arbitrary whims of shortsighted publishers. Furthermore, he opined that art should not be judged by the medium for which it is created (i.e., there is no "high" art or "low" art—just art). Fight against merchandising his characters For years, Watterson battled against pressure from publishers to merchandise his work, something that he felt would cheapen his comic. He refused to merchandise his creations on the grounds that displaying Calvin and Hobbes images on commercially sold mugs, stickers, and T-shirts would devalue the characters and their personalities. Watterson said that Universal kept putting pressure on him and that he had signed his contract without fully perusing it because, as a new artist, he was happy to find a syndicate willing to give him a chance (two other syndicates had previously turned him down). He added that the contract was so one-sided that, if Universal really wanted to, they could license his characters against his will, and could even fire him and continue Calvin and Hobbes with a new artist. Watterson's position eventually won out and he was able to renegotiate his contract so that he would receive all rights to his work, but he later added that the licensing fight exhausted him and contributed to the need for a nine-month sabbatical in 1991. Despite Watterson's efforts, many unofficial knockoffs have been found, including items that depict Calvin and Hobbes consuming alcohol or Calvin urinating on a logo. Watterson has said, "Only thieves and vandals have made money on Calvin and Hobbes merchandise." Changing the format of the Sunday strip Watterson was critical of the prevailing format for the Sunday comic strip that was in place when he began drawing (and remained so, to varying degrees). The typical layout consists of three rows with eight total squares, which take up half a page if published at normal size. (In this context, half-page is an absolute size, approximately half a nominal page size, and not related to the actual page size on which a cartoon might eventually be printed for distribution.) Some newspapers have limited space for their Sunday features and reduce the size of the strip. One of the more common ways is to cut out the top two panels, which Watterson believed forced him to waste the space on throwaway jokes that did not always fit the strip. While he was set to return from his first sabbatical (a second took place during 1994), Watterson discussed with his syndicate a new format for Calvin and Hobbes that would enable him to use his space more efficiently and would almost require the papers to publish it as a half-page.
Universal agreed that they would sell the strip as the half-page and nothing else, which garnered anger from papers and criticism for Watterson from both editors and some of his fellow cartoonists (whom he described as "unnecessarily hot-tempered"). Eventually, Universal compromised and agreed to offer papers a choice between the full half-page and a reduced-size version to alleviate concerns about the size issue. Watterson conceded that this caused him to lose space in many papers, but he said that, in the end, it was a benefit because he felt that he was giving the papers' readers a better strip for their money and editors were free not to run Calvin and Hobbes at their own risk. He added that he was not going to apologize for drawing a popular feature. End of Calvin and Hobbes Watterson announced the end of Calvin and Hobbes on November 9, 1995, in a letter to newspaper editors. The last strip of Calvin and Hobbes was published on December 31, 1995. After Calvin and Hobbes In the years since Calvin and Hobbes ended, many attempts have been made to contact Watterson. Both The Plain Dealer and the Cleveland Scene sent reporters, in 1998 and 2003 respectively, but neither was able to make contact with the media-shy Watterson. Since 1995, Watterson has taken up painting, at one point drawing landscapes of the woods with his father. He has kept away from the public eye and shown no interest in resuming the strip, creating new works based on the strip's characters, or embarking on new commercial projects, though he has published several Calvin and Hobbes "treasury collection" anthologies. He does not sign autographs or license his characters, staying true to his stated principles. In previous years, Watterson was known to sneak autographed copies of his books onto the shelves of the Fireside Bookshop, a family-owned bookstore in his hometown of Chagrin Falls, Ohio. He ended this practice after discovering that some of the autographed books were being sold online for high prices. Watterson rarely gives interviews or makes public appearances. His lengthiest interviews include the cover story in The Comics Journal No. 127 in February 1989, an interview that appeared in a 1987 issue of Honk Magazine, and one in a 2015 Watterson exhibition catalogue. On December 21, 1999, a short piece was published in the Los Angeles Times, written by Watterson to mark the forthcoming retirement of iconic Peanuts creator Charles Schulz. Circa 2003, Gene Weingarten of The Washington Post sent Watterson the first edition of the Barnaby book as an incentive, hoping to land an interview. Weingarten passed the
was called masi. In India, the black color of the ink came from bone char, tar, pitch and other substances. The ancient Romans had a black writing ink they called atramentum librarium. Its name came from the Latin word atrare, which meant to make something black. (This was the same root as the English word atrocious.) It was usually made, like India ink, from soot, although one variety, called atramentum elephantinum, was made by burning the ivory of elephants. Gall-nuts were also used for making fine black writing ink. Iron gall ink (also known as iron gall nut ink or oak gall ink) was a purple-black or brown-black ink made from iron salts and tannic acids from gall nut. It was the standard writing and drawing ink in Europe, from about the 12th century to the 19th century, and remained in use well into the 20th century. Astronomy A black hole is a region of spacetime where gravity prevents anything, including light, from escaping. The theory of general relativity predicts that a sufficiently compact mass will deform spacetime to form a black hole. Around a black hole there is a mathematically defined surface called an event horizon that marks the point of no return. It is called "black" because it absorbs all the light that hits the horizon, reflecting nothing, just like a perfect black body in thermodynamics. Black holes of stellar mass are expected to form when very massive stars collapse at the end of their life cycle. After a black hole has formed it can continue to grow by absorbing mass from its surroundings. By absorbing other stars and merging with other black holes, supermassive black holes of millions of solar masses may form. There is general consensus that supermassive black holes exist in the centers of most galaxies. Although a black hole itself is black, infalling material forms an accretion disk, one of the brightest types of object in the universe. Black-body radiation refers to the radiation coming from a body at a given temperature where all incoming energy (light) is converted to heat. Black sky refers to the appearance of space as one emerges from Earth's atmosphere. Why the night sky and space are black – Olbers' paradox The fact that outer space is black is sometimes called Olbers' paradox. In theory, because the universe is full of stars, and is believed to be infinitely large, it would be expected that the light of an infinite number of stars would be enough to brilliantly light the whole universe all the time. However, the background color of outer space is black. This contradiction was first noted in 1823 by German astronomer Heinrich Wilhelm Matthias Olbers, who posed the question of why the night sky was black. The current accepted answer is that, although the universe may be infinitely large, it is not infinitely old. It is thought to be about 13.8 billion years old, so we can only see objects as far away as the distance light can travel in 13.8 billion years. Light from stars farther away has not reached Earth, and cannot contribute to making the sky bright. Furthermore, as the universe is expanding, many stars are moving away from Earth. As they move, the wavelength of their light becomes longer, through the Doppler effect, and shifts toward red, or even becomes invisible. As a result of these two phenomena, there is not enough starlight to make space anything but black. The daytime sky on Earth is blue because light from the Sun strikes molecules in Earth's atmosphere scattering light in all directions. 
Blue light is scattered more than other colors, and reaches the eye in greater quantities, making the daytime sky appear blue. This is known as Rayleigh scattering. The nighttime sky on Earth is black because the part of Earth experiencing night is facing away from the Sun, the light of the Sun is blocked by Earth itself, and there is no other bright nighttime source of light in the vicinity. Thus, there is not enough light to undergo Rayleigh scattering and make the sky blue. On the Moon, on the other hand, because there is virtually no atmosphere to scatter the light, the sky is black both day and night. This also holds true for other locations without an atmosphere, such as Mercury. Culture In China, the color black is associated with water, one of the five fundamental elements believed to compose all things; and with winter, cold, and the direction north, usually symbolized by a black tortoise. It is also associated with disorder, including the positive disorder which leads to change and new life. When the first Emperor of China Qin Shi Huang seized power from the Zhou Dynasty, he changed the Imperial color from red to black, saying that black extinguished red. Only when the Han Dynasty appeared in 206 BC was red restored as the imperial color. In Japan, black is associated with mystery, the night, the unknown, the supernatural, the invisible and death. Combined with white, it can symbolize intuition. In 10th and 11th century Japan, it was believed that wearing black could bring misfortune. It was worn at court by those who wanted to set themselves apart from the established powers or who had renounced material possessions. In Japan black can also symbolize experience, as opposed to white, which symbolizes naiveté. The black belt in martial arts symbolizes experience, while a white belt is worn by novices. Japanese men traditionally wear a black kimono with some white decoration on their wedding day. In Indonesia black is associated with depth, the subterranean world, demons, disaster, and the left hand. When black is combined with white, however, it symbolizes harmony and equilibrium. Political movements Anarchism is a political philosophy, most popular in the late 19th and early 20th centuries, which holds that governments and capitalism are harmful and undesirable. The symbol of anarchism was usually either a black flag or a black letter A. More recently it is usually represented with a bisected red and black flag, to emphasise the movement's socialist roots in the First International. Anarchism was most popular in Spain, France, Italy, Ukraine and Argentina. There were also small but influential movements in the United States and Russia. In the latter, the movement initially allied itself with the Bolsheviks. The Black Army was a collection of anarchist military units which fought in the Russian Civil War, sometimes on the side of the Bolshevik Red Army, and sometimes for the opposing White Army. It was officially known as the Revolutionary Insurrectionary Army of Ukraine, and it was under the command of the anarchist Nestor Makhno. Fascism The Blackshirts were Fascist paramilitary groups in Italy during the period immediately following World War I and until the end of World War II. The Blackshirts were officially known as the Voluntary Militia for National Security (Milizia Volontaria per la Sicurezza Nazionale, or MVSN).
Inspired by the black uniforms of the Arditi, Italy's elite storm troops of World War I, the Fascist Blackshirts were organized by Benito Mussolini as the military tool of his political movement. They used violence and intimidation against Mussolini's opponents. The emblem of the Italian fascists was a black flag with fasces, an axe in a bundle of sticks, an ancient Roman symbol of authority. Mussolini came to power in 1922 through his March on Rome with the blackshirts. Black was also adopted by Adolf Hitler and the Nazis in Germany. Red, white and black were the colors of the flag of the German Empire from 1870 to 1918. In Mein Kampf, Hitler explained that they were "revered colors expressive of our homage to the glorious past." Hitler also wrote that "the new flag ... should prove effective as a large poster" because "in hundreds of thousands of cases a really striking emblem may be the first cause of awakening interest in a movement." The black swastika was meant to symbolize the Aryan race, which, according to the Nazis, "was always anti-Semitic and will always be anti-Semitic." Several designs by a number of different authors were considered, but the one adopted in the end was Hitler's personal design. Black became the color of the uniform of the SS, the Schutzstaffel or "defense corps", the paramilitary wing of the Nazi Party, and was worn by SS officers from 1932 until the end of World War II. The Nazis used a black triangle to symbolize anti-social elements. The symbol originates from Nazi concentration camps, where every prisoner had to wear one of the Nazi concentration camp badges on their jacket, the color of which categorized them according to "their kind." Many Black Triangle prisoners were either mentally disabled or mentally ill. The homeless were also included, as were alcoholics, the Romani people, the habitually "work-shy," prostitutes, draft dodgers and pacifists. More recently the black triangle has been adopted as a symbol in lesbian culture and by disabled activists. Black shirts were also worn by the British Union of Fascists before World War II, and by members of fascist movements in the Netherlands. Patriotic resistance The Lützow Free Corps, composed of volunteer German students and academics fighting against Napoleon in 1813, could not afford to make special uniforms and therefore adopted black, as the only color that could be used to dye their civilian clothing without the original color showing. In 1815 the students began to carry a red, black and gold flag, which they believed (incorrectly) had been the colors of the Holy Roman Empire (the imperial flag had actually been gold and black). In 1848, this banner became the flag of the German confederation. In 1866, Prussia unified Germany under its rule, and imposed the red, white and black of its own flag, which remained the colors of the German flag until the end of the Second World War. In 1949 the Federal Republic of Germany returned to the original flag and colors of the students and professors of 1815, which is the flag of Germany today. Military Black has been a traditional color of cavalry and armoured or mechanized troops. German armoured troops (Panzerwaffe) traditionally wore black uniforms, and even in other armies, a black beret is common for armoured troops. In Finland, black is the symbolic color for both armoured troops and combat engineers, and military units of these specialities have black flags and unit insignia. The black beret and the color black are also a symbol of special forces in many countries.
Soviet and Russian OMON special police and Russian naval infantry wear a black beret. A black beret is also worn by military police in the Canadian, Czech, Croatian, Portuguese, Spanish and Serbian armies. The silver-on-black skull and crossbones symbol or Totenkopf and a black uniform were used by Hussars and Black Brunswickers, the German Panzerwaffe and the Nazi Schutzstaffel, and U.S. 400th Missile Squadron (crossed missiles), and continue in use with the Estonian Kuperjanov Battalion. Religion In Christian theology, black was the color of the universe before God created light. In many religious cultures, from Mesoamerica to Oceania to India and Japan, the world was created out of a primordial darkness. In the Bible the light of faith and Christianity is often contrasted with the darkness of ignorance and paganism. In Christianity, the devil is often called the "prince of darkness." The term was used in John Milton's poem Paradise Lost, published in 1667, referring to Satan, who is viewed as the embodiment of evil. It is an English translation of the Latin phrase princeps tenebrarum, which occurs in the Acts of Pilate, written in the fourth century, in the 11th-century hymn Rhythmus de die mortis by Pietro Damiani, and in a sermon by Bernard of Clairvaux from the 12th century. The phrase also occurs in King Lear by William Shakespeare (c. 1606), Act III, Scene IV, l. 14: "The prince of darkness is a gentleman." Priests and pastors of the Roman Catholic, Eastern Orthodox and Protestant churches commonly wear black, as do monks of the Benedictine Order, who consider it the color of humility and penitence. In Islam, black, along with green, plays an important symbolic role. It is the color of the Black Standard, the banner that is said to have been carried by the soldiers of Muhammad. It is also used as a symbol in Shi'a Islam (heralding the advent of the Mahdi), and appears on the flags of followers of Islamism and Jihadism. In Hinduism, the goddess Kali, goddess of time and change, is portrayed with black or dark blue skin, wearing a necklace adorned with severed heads and hands. Her name means "The black one". According to Hindu mythology she destroys anger and passion, and her devotees are supposed to abstain from meat and intoxicants. Kali herself does not eat meat, but according to śāstric injunction those who are unable to give up meat-eating may sacrifice a goat (a small animal, never a cow) before the goddess Kali on the night of amāvāsya (the new moon) and then eat it. In Paganism, black represents dignity, force, stability, and protection. The color is often used in rituals to banish or release negative energies, or for binding. An athame is a ceremonial blade often having a black handle, which is used in some forms of witchcraft. Sports The national rugby union team of New Zealand is called the All Blacks, in reference to their black outfits, and the color is also shared by other New Zealand national teams such as the Black Caps (cricket) and the Kiwis (rugby league). Association football (soccer) referees traditionally wear all-black uniforms, although nowadays other uniform colors may also be worn. In auto racing, a black flag signals a driver to go into the pits. In baseball, "the black" refers to the batter's eye, a blacked out area around the center-field bleachers, painted black to give hitters a decent background for pitched balls. A large number of teams have uniforms designed with black colors, with many feeling that the color sometimes imparts a psychological advantage to its wearers.
Black is used by numerous professional and collegiate sports teams. Idioms and expressions In general, people of African origin are called "Black", while people of European origin are called "White". In the United States, "Black Friday" (the day after Thanksgiving Day, the fourth Thursday in November) is traditionally the busiest shopping day of the year. Many Americans are on holiday because of Thanksgiving, and many retailers open earlier and close later than normal, and offer special prices. The day's name originated in Philadelphia sometime before 1961, and originally was used to describe the heavy and disruptive downtown pedestrian and vehicle traffic which would occur on that day (Martin L. Apfelbaum, "Philadelphia's 'Black Friday'," American Philatelist, vol. 69, no. 4, p. 239, January 1966). Later an alternative explanation began to be offered: that "Black Friday" indicates the point in the year that retailers begin to turn a profit, or are "in the black", because of the large volume of sales on that day. "In the black" means profitable. Accountants originally used black ink in ledgers to indicate profit, and red ink to indicate a loss. Black Friday also refers to any particularly disastrous day on financial markets. The first Black Friday, September 24, 1869, was caused by the efforts of two speculators, Jay Gould and James Fisk, to corner the gold market on the New York Gold Exchange. A blacklist is a list of undesirable persons or entities (to be placed on the list is to be "blacklisted"). Black comedy is a form of comedy dealing with morbid and serious topics; the expression is similar to black humor or black humour. A black mark against a person relates to something bad they have done. A black mood is a bad one (cf. Winston Churchill's clinical depression, which he called "my black dog"). Black market is used to denote the trade of illegal goods, or alternatively the illegal trade of otherwise legal items at considerably higher prices, e.g. to evade rationing. Black propaganda is the use of known falsehoods, partial truths, or masquerades in propaganda to confuse an opponent. Blackmail is the act of threatening to do something that would hurt someone, such as revealing sensitive information about them, in order to force the threatened party to fulfill certain demands. Ordinarily, such a threat is illegal. If the black eight-ball, in billiards, is sunk before all others are out of play, the player loses. The black sheep of the family is the ne'er-do-well. To blackball someone is to block their entry into a club or some such institution. In the traditional English gentlemen's club, members vote on the admission of a candidate by secretly placing a white or black ball in a hat. If upon the completion of voting, there was even one black ball amongst the white, the candidate would be denied membership, and he would never know who had "blackballed" him. Black tea in Western culture is known as "red tea" (紅茶, Mandarin Chinese hóngchá; Japanese kōcha; Korean hongcha) in Chinese and culturally influenced languages. "The black" is a wildfire suppression term referring to a burned area on a wildfire capable of acting as a safety zone. Black coffee refers to coffee without sugar or cream. Associations and symbolism Mourning In Europe and America, black is commonly associated with mourning and bereavement, and usually worn at funerals and memorial services.
In some traditional societies, for example in Greece and Italy, some widows wear black for the rest of their lives. In contrast, across much of Africa and parts of Asia like Vietnam, white is a color of mourning. In Victorian England, the colors and fabrics of mourning were specified in an unofficial dress code: "non-reflective black paramatta and crape for the first year of deepest mourning, followed by nine months of dullish black silk, heavily trimmed with crape, and then three months when crape was discarded. Paramatta was a fabric of combined silk and wool or cotton; crape was a harsh black silk fabric with a crimped appearance produced by heat. Widows were allowed to change into the colors of half-mourning, such as gray and lavender, black and white, for the final six months." A "black day" (or week or month) usually refers to a tragic date. The Romans marked fasti days with white stones and nefasti days with black. The term is often used to remember massacres. Black months include Black September in Jordan, when large numbers of Palestinians were killed, and Black July in Sri Lanka, the killing of members of the Tamil population by the Sinhalese government. In the financial world, the term often refers to a dramatic drop in the stock market. For example, the Wall Street Crash of 1929, the stock market crash on October 29, 1929, which marked the start of the Great Depression, is nicknamed Black Tuesday, and was preceded by Black Thursday, a downturn on October 24 the previous week. Darkness and evil In western popular culture, black has long been associated with evil and darkness. It is the traditional color of witchcraft and black magic. In the Book of Revelation, the last book in the New Testament of the Bible, the Four Horsemen of the Apocalypse are supposed to announce the Apocalypse before the Last Judgment. The horseman representing famine rides a black horse. The vampire of literature and films, such as Count Dracula of the Bram Stoker novel, dressed in black, and could only move at night. The Wicked Witch of the West in the 1939 film The Wizard of Oz became the archetype of witches for generations of children. Whereas witches and sorcerers inspired real fear in the 17th century, in the 21st century children and adults dress as witches for Halloween parties and parades. Power, authority and solemnity Black is frequently used as a color of power, law and authority. In many countries judges and magistrates wear black robes. That custom began in Europe in the 13th and 14th centuries. Jurists, magistrates and certain other court officials in France began to wear long black robes during the reign of Philip IV of France (1285–1314), and in England from the time of Edward I (1271–1307). The custom spread to the cities of Italy at about the same time, between 1300 and 1320. The robes of judges resembled those worn by the clergy, and represented the law and authority of the King, while those of the clergy represented the law of God and authority of the church. Until the 20th century most police uniforms were black; they were then largely replaced by a less menacing blue in France, the U.S. and other countries. In the United States, police cars are frequently black and white. The riot control units of the Basque Autonomous Police in Spain are known as beltzak ("blacks") after their uniform. Black today is the most common color for limousines and the official cars of government officials.
Black formal attire is still worn at many solemn occasions or ceremonies, from graduations to formal balls. Graduation gowns are copied from the gowns worn by university professors in the Middle Ages, which in turn were copied from the robes worn by judges and priests, who often taught at the early universities. The mortarboard hat worn by graduates is adapted from a square cap called a biretta worn by Medieval professors and clerics. Functionality In the 19th and 20th centuries, many machines and devices, large and small, were painted black, to stress their functionality. These included telephones, sewing machines, steamships, railroad locomotives, and automobiles. The Ford Model T, the first mass-produced car, was available only in black from 1914 to 1926. Among means of transportation, only airplanes were rarely painted black. Black house paint is becoming more popular, with Sherwin-Williams reporting that its color Tricorn Black was the 6th most popular exterior house paint color in Canada and the 12th most popular in the United States in 2018. Ethnography The term "black" is often used in the West to describe people whose skin is darker. In the United States, it is particularly used to describe African Americans. The terms for African Americans have changed over the years, as shown by the categories in the United States Census, taken every ten years. In the first U.S. Census, taken in 1790, just four categories were used: Free White males, Free White females, other free persons, and slaves. In the 1820 census the new category "colored" was added. In the 1850 census, slaves were listed by owner, and a B indicated black, while an M indicated "mulatto." In the 1890 census, the categories for race were white, black, mulatto, quadroon (a person one-quarter black), octoroon (a person one-eighth black), Chinese, Japanese, or American Indian. In the 1930 census, anyone with any black blood was supposed to be listed as "Negro." In the 1970 census, the category "Negro or black" was used for the first time. In the 2000 and 2010 censuses, the category "Black or African-American" was used, defined as "a person having their origin in any of the racial groups in Africa." In 2012, 12.1 percent of Americans identified themselves as Black or African-American. Black is also commonly used as a racial description in the United Kingdom, since ethnicity was first measured in the 1991 census. The 2011 British census asked residents to describe themselves, and categories offered included Black, African, Caribbean, or Black British. Other possible categories were African British, African Scottish, Caribbean British and Caribbean Scottish. Of the total UK population in 2001, 1.0 percent identified themselves as Black Caribbean, 0.8 percent as Black African, and 0.2 percent as Black (others). In Canada, census respondents can identify themselves as Black. In the 2006 census, 2.5 percent of the population identified themselves as black. In Australia, the term black is not used in the census. In the 2006 census, 2.3 percent of Australians identified themselves as Aboriginal and/or Torres Strait Islanders. In Brazil, the Brazilian Institute of Geography and Statistics (IBGE) asks people to identify themselves as branco (white), pardo (brown), preto (black), or amarelo (yellow). In 2008, 6.8 percent of the population identified themselves as "preto". Opposite of white Black and white have often been used to describe opposites, particularly light and darkness and good and evil.
In Medieval literature, the white knight usually represented virtue, the black knight something mysterious and sinister. In American westerns, the hero often wore a white hat, the villain a black hat. In the original game of chess invented in Persia or India, the colors of the two sides were varied; a 12th-century Iranian chess set in the New York Metropolitan Museum of Art has red and green pieces. But when the game was imported into Europe, the colors, corresponding to European culture, usually became black and white. Studies have shown that something printed in black letters on white has more authority with readers than any other color of printing. In philosophy and arguments, the issue is often described as black-and-white, meaning that the issue at hand is dichotomized (having two clear, opposing sides with no middle ground). Conspiracy Black is commonly associated with secrecy. The Black Chamber was a term given to an office which secretly opened and read diplomatic mail and broke codes. Queen Elizabeth I had such an office, headed by her Secretary, Sir Francis Walsingham, which successfully broke the Spanish codes and broke up several plots against the Queen. In France a cabinet noir was established inside the French post office by Louis XIII to open diplomatic mail. It was closed during the French Revolution but re-opened under Napoleon I. The Habsburg Empire and Dutch Republic had similar black chambers. The United States created a secret peacetime Black Chamber, called the Cipher Bureau, in 1919. It was funded by the State Department and Army and disguised as a commercial company in New York. It successfully broke a number of diplomatic codes, including the code of the Japanese government. It was closed down in 1929 after the State Department withdrew funding, when the new Secretary of State, Henry Stimson, stated that "Gentlemen do not read each other's mail." The Cipher Bureau was the ancestor of the U.S. National Security Agency. A black project is a secret military project, such as Enigma decryption during World War II, or a secret counter-narcotics or police sting operation. Black ops are covert operations carried out by a government, government agency or military. A black budget is a government budget that is allocated for classified or other secret operations of a nation; it is an account of expenses and spending related to military research and covert operations, and is mostly classified for security reasons. Elegant fashion Black is the color most commonly associated with elegance in Europe and the United States, followed by silver, gold, and white. Black first became a fashionable color for men in Europe in the 17th century, in the courts of Italy and Spain. (See history above.) In the 19th century, it was the fashion for men both in business and for evening wear, in the form of a black coat whose tails came down to the knees. In the evening it was the custom of the men to leave the women after dinner to go to a special smoking room to enjoy cigars or cigarettes. This meant that their tailcoats eventually smelled of tobacco. According to legend, in 1865 Edward VII, then the Prince of Wales, had his tailor make a special short smoking jacket. The smoking jacket then evolved into the dinner jacket. Again according to legend, the first Americans to wear the jacket were members of the Tuxedo Club in New York State. Thereafter the jacket became known as a tuxedo in the U.S. The term "smoking" is still used today in Russia and other countries.
The tuxedo was always black until the 1930s, when the Duke of Windsor began to wear a tuxedo that was a very dark midnight blue. He did so because a black tuxedo looked greenish in artificial light, while a dark blue tuxedo looked blacker than black itself. For women's fashion, the defining moment was the invention of the simple black dress by Coco Chanel in 1926. (See history.) Thereafter, a long black gown was used for formal occasions, while the simple black dress could be used for everything else. The designer Karl Lagerfeld, explaining why black was so popular, said: "Black is the color that goes with everything. If you're wearing black, you're on sure ground." Skirts have gone up and down and fashions have changed, but the black dress has not lost its position as the essential element of a woman's wardrobe. The fashion designer Christian Dior said, "elegance is a combination of distinction, naturalness, care and simplicity," and black exemplified elegance. The expression "X is the new black" is a reference to the latest trend or fad that is considered a wardrobe basic for the duration of the trend, on the basis that black is always fashionable. The
Spain (1527–1598). European rulers saw it as the color of power, dignity, humility and temperance. By the end of the 16th century, it was the color worn by almost all the monarchs of Europe and their courts. Modern 16th and 17th centuries While black was the color worn by the Catholic rulers of Europe, it was also the emblematic color of the Protestant Reformation in Europe and the Puritans in England and America. John Calvin, Philip Melanchthon and other Protestant theologians denounced the richly colored and decorated interiors of Roman Catholic churches. They saw the color red, worn by the Pope and his Cardinals, as the color of luxury, sin, and human folly. In some northern European cities, mobs attacked churches and cathedrals, smashed the stained glass windows and defaced the statues and decoration. In Protestant doctrine, clothing was required to be sober, simple and discreet. Bright colors were banished and replaced by blacks, browns and grays; women and children were recommended to wear white. In the Protestant Netherlands, Rembrandt used this sober new palette of blacks and browns to create portraits whose faces emerged from the shadows expressing the deepest human emotions. The Catholic painters of the Counter-Reformation, like Rubens, went in the opposite direction; they filled their paintings with bright and rich colors. The new Baroque churches of the Counter-Reformation were usually shining white inside and filled with statues, frescoes, marble, gold and colorful paintings, to appeal to the public. But European Catholics of all classes, like Protestants, eventually adopted a sober wardrobe that was mostly black, brown and gray. In the second part of the 17th century, Europe and America experienced an epidemic of fear of witchcraft. People widely believed that the devil appeared at midnight in a ceremony called a Black Mass or black sabbath, usually in the form of a black animal, often a goat, a dog, a wolf, a bear, a deer or a rooster, accompanied by their familiar spirits, black cats, serpents and other black creatures. This was the origin of the widespread superstition about black cats and other black animals. In medieval Flanders, in a ceremony called Kattenstoet, black cats were thrown from the belfry of the Cloth Hall of Ypres to ward off witchcraft. Witch trials were common in both Europe and America during this period. During the notorious Salem witch trials in New England in 1692–93, one of those on trial was accused of being able to turn into a "black thing with a blue cap," and others of having familiars in the form of a black dog, a black cat and a black bird. Nineteen women and men were hanged as witches. 18th and 19th centuries In the 18th century, during the European Age of Enlightenment, black receded as a fashion color. Paris became the fashion capital, and pastels, blues, greens, yellow and white became the colors of the nobility and upper classes. But after the French Revolution, black again became the dominant color. Black was the color of the industrial revolution, largely fueled by coal, and later by oil. Thanks to coal smoke, the buildings of the large cities of Europe and America gradually turned black. By 1846 the industrial area of the West Midlands of England was "commonly called 'the Black Country'". Charles Dickens and other writers described the dark streets and smoky skies of London, and they were vividly illustrated in the engravings of French artist Gustave Doré.
A different kind of black was an important part of the romantic movement in literature. Black was the color of melancholy, the dominant theme of romanticism. The novels of the period were filled with castles, ruins, dungeons, storms, and meetings at midnight. The leading poets of the movement were usually portrayed dressed in black, usually with a white shirt and open collar, and a scarf carelessly over their shoulder; Percy Bysshe Shelley and Lord Byron helped create the enduring stereotype of the romantic poet. The invention of inexpensive synthetic black dyes and the industrialization of the textile industry meant that high-quality black clothes were available for the first time to the general population. In the 19th century black gradually became the most popular color of business dress of the upper and middle classes in England, the Continent, and America. Black dominated literature and fashion in the 19th century, and played a large role in painting. James McNeill Whistler made the color the subject of his most famous painting, Arrangement in Grey and Black No. 1 (1871), better known as Whistler's Mother. Some 19th-century French painters had a low opinion of black: "Reject black," Paul Gauguin said, "and that mix of black and white they call gray. Nothing is black, nothing is gray." But Édouard Manet used blacks for their strength and dramatic effect. Manet's portrait of painter Berthe Morisot was a study in black which perfectly captured her spirit of independence. The black gave the painting power and immediacy; he even changed her eyes, which were green, to black to strengthen the effect. Henri Matisse quoted the French impressionist Camille Pissarro as telling him, "Manet is stronger than us all – he made light with black." Pierre-Auguste Renoir used luminous blacks, especially in his portraits. When someone told him that black was not a color, Renoir replied: "What makes you think that? Black is the queen of colors. I always detested Prussian blue. I tried to replace black with a mixture of red and blue, I tried using cobalt blue or ultramarine, but I always came back to ivory black." Vincent van Gogh used black lines to outline many of the objects in his paintings, such as the bed in the famous painting of his bedroom, making them stand apart. His painting of black crows over a cornfield, painted shortly before he died, was particularly agitated and haunting. In the late 19th century, black also became the color of anarchism. (See the section political movements.)
In the 1950s, black came to be a symbol of individuality and intellectual and social rebellion, the color of those who didn't accept established norms and values. In Paris, it was worn by Left-Bank intellectuals and performers such as Juliette Gréco, and by some members of the Beat Movement in New York and San Francisco. Black leather jackets were worn by motorcycle gangs such as the Hells Angels and street gangs on the fringes of society in the United States. Black as a color of rebellion was celebrated in such films as The Wild One, with Marlon Brando. By the end of the 20th century, black was the emblematic color of the punk subculture and punk fashion, and of the goth subculture. Goth fashion, which emerged in England in the 1980s, was inspired by Victorian era mourning dress. In men's fashion, black gradually ceded its dominance to navy blue, particularly in business suits. Black evening dress and formal dress in general were worn less and less. In 1960, John F. Kennedy was the last American President to be inaugurated wearing formal dress; President Lyndon Johnson and all his successors were inaugurated wearing business suits. Women's fashion was revolutionized and simplified in 1926 by the French designer Coco Chanel, who published a drawing of a simple black dress in Vogue magazine. She famously said, "A woman needs just three things: a black dress, a black sweater, and, on her arm, a man she loves." French designer Jean Patou also followed suit by creating a black collection in 1929. Other designers contributed to the trend of the little black dress. The Italian designer Gianni Versace said, "Black is the quintessence of simplicity and elegance," and French designer Yves Saint Laurent said, "black is the liaison which connects art and fashion." One of the most famous black dresses of the century was designed by Hubert de Givenchy and was worn by Audrey Hepburn in the 1961 film Breakfast at Tiffany's. The American civil rights movement in the 1950s was a struggle for the political equality of African Americans. It developed into the Black Power movement in the late 1960s and 1970s, and popularized the slogan "Black is Beautiful". Science Physics In the visible spectrum, black is the absorption of all colors. Black can be defined as the visual impression experienced when no visible light reaches the eye. Pigments or dyes that absorb light rather than reflect it back to the eye "look black". A black pigment can, however, result from a combination of several pigments that collectively absorb all colors. If appropriate proportions of three primary pigments are mixed, the result reflects so little light as to be called "black". This provides two superficially opposite but actually complementary descriptions of black. Black is the absorption of all colors of light, or an exhaustive combination of multiple colors of pigment (see the sketch below). In physics, a black body is a perfect absorber of light, but, by a thermodynamic rule, it is also the best emitter. Thus, the best radiative cooling, out of sunlight, is by using black paint, though it is important that it be black (a nearly perfect absorber) in the infrared as well. In elementary science, ultraviolet light is called "black light" because, while itself unseen, it causes many minerals and other substances to fluoresce. Absorption of light is contrasted by transmission, reflection and diffusion, where the light is only redirected, causing objects to appear transparent, reflective or white respectively.
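To make the subtractive description above concrete, here is a minimal sketch in Python. The per-channel reflectance values for the three primary pigments are invented for illustration, not measured data: the point is only that mixing pigments multiplies their reflectances channel by channel, so an idealized cyan, magenta, and yellow combination leaves too little reflected light to read as anything but black.

```python
# Minimal sketch of subtractive pigment mixing (illustrative numbers, not
# real colorimetry). Each pigment is described by the fraction of red,
# green, and blue light it reflects; a mixture reflects roughly the product
# of the individual reflectances in each channel.

def mix(*pigments):
    """Combine pigments subtractively: per-channel reflectances multiply."""
    result = [1.0, 1.0, 1.0]  # a white, fully reflective base
    for p in pigments:
        result = [r * c for r, c in zip(result, p)]
    return result

# Hypothetical reflectances (R, G, B) for idealized primary pigments:
cyan    = (0.1, 0.9, 0.9)   # absorbs most red light
magenta = (0.9, 0.1, 0.9)   # absorbs most green light
yellow  = (0.9, 0.9, 0.1)   # absorbs most blue light

print(mix(cyan, magenta, yellow))
# [0.081, 0.081, 0.081] -- under a tenth of the light survives in every
# channel, so the mixture reflects so little that it reads as black
```

Real paints behave less neatly (scattering, layering, and impure pigments intervene), which is why actual pigment mixtures tend toward dark browns or grays rather than a perfect black.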
A material is said to be black if most incoming light is absorbed equally in the material. Light (electromagnetic radiation in the visible spectrum) interacts with the atoms and molecules, which causes the energy of the light to be converted into other forms of energy, usually heat. This means that black surfaces can act as thermal collectors, absorbing light and generating heat (see Solar thermal collector). As of September 2019, the darkest known material is made from vertically aligned carbon nanotubes. The material was grown by MIT engineers and was reported to absorb 99.995% of incoming light. This surpasses all previous darkest materials, including Vantablack, which has a peak absorption rate of 99.965% in the visible spectrum; the newer material thus reflects only about a seventh as much light (0.005% versus 0.035%). Chemistry Pigments The earliest pigments used by Neolithic man were charcoal, red ocher and yellow ocher. The black lines of cave art were drawn with the tips of burnt torches made of resinous wood. Different charcoal pigments were made by burning different woods and animal products, each of which produced a different tone. The charcoal would be ground and then mixed with animal fat to make the pigment. Vine black was produced in Roman times by burning the cut branches of grapevines. It could also be produced by burning the remains of the crushed grapes, which were collected and dried in an oven. According to the Roman writer Vitruvius, the deepness and richness of the black produced corresponded to the quality of the wine. The finest wines produced a black with a bluish tinge, the color of indigo. The 15th-century painter Cennino Cennini described how this pigment was made during the Renaissance in his famous handbook for artists: "...there is a black which is made from the tendrils of vines. And these tendrils need to be burned. And when they have been burned, throw some water onto them and put them out and then mull them in the same way as the other black. And this is a lean and black pigment and is one of the perfect pigments that we use." Cennini also noted that "There is another black which is made from burnt almond shells or peaches and this is a perfect, fine black." Similar fine blacks were made by burning the pits of the peach, cherry or apricot. The powdered charcoal was then mixed with gum arabic or the yellow of an egg to make a paint. Different civilizations burned different plants to produce their charcoal pigments. The Inuit of Alaska used wood charcoal mixed with the blood of seals to paint masks and wooden objects. The Polynesians burned coconuts to produce their pigment. Lamp black was used as a pigment for painting and frescoes, as a dye for fabrics, and in some societies for making tattoos. The 15th-century Florentine painter Cennino Cennini described how it was made during the Renaissance: "... take a lamp full of linseed oil and fill the lamp with the oil and light the lamp. Then place it, lit, under a thoroughly clean pan and make sure that the flame from the lamp is two or three fingers from the bottom of the pan. The smoke that comes off the flame will hit the bottom of the pan and gather, becoming thick. Wait a bit. Take the pan and brush this pigment (that is, this smoke) onto paper or into a pot with something. And it is not necessary to mull or grind it because it is a very fine pigment. Re-fill the lamp with the oil and put it under the pan like this several times and, in this way, make as much of it as is necessary." This same pigment was used by Indian artists to paint the Ajanta Caves, and as dye in ancient Japan.
Ivory black, also known as bone char, was originally produced by burning ivory and mixing the resulting charcoal powder with oil. The color is still made today, but ordinary animal bones are substituted for ivory. Mars black is a black pigment made of synthetic iron oxides. It is commonly used in water-colors and oil painting. It takes its name from Mars, the god of war and patron of iron. Dyes Good-quality black dyes were not known until the middle of the 14th century. The most common early dyes were made from bark, roots or fruits of different trees; usually walnuts, chestnuts, or certain oak trees. The blacks produced were often more gray, brown or bluish. The cloth had to be dyed several times to darken the color. One solution used by dyers was to add to the dye some iron filings, rich in iron oxide, which gave a deeper black. Another was to first dye the fabric dark blue, and then to dye it black. A much richer and deeper black dye was eventually found, made from the oak apple or "gall-nut". The gall-nut is a small round tumor which grows on oak and other varieties of trees. They range in size from 2 to 5 cm, and are caused by chemicals injected by the larva of certain kinds of gall wasp in the family Cynipidae. The dye was very expensive; a great quantity of gall-nuts were needed for a very small amount of dye. The gall-nuts which made the best dye came from Poland, eastern Europe, the Near East and North Africa. Beginning in about the 14th century, dye from gall-nuts was used for clothes of the kings and princes of Europe. Another important source of natural black dyes from the 17th century onwards was the logwood tree, or Haematoxylum campechianum, which also produced reddish and bluish dyes. It is a species of flowering tree in the legume family, Fabaceae, that is native to southern Mexico and northern Central America. The modern nation of Belize grew from 17th century English logwood logging camps. Since the mid-19th century, synthetic black dyes have largely replaced natural dyes. One of the important synthetic blacks is Nigrosin, a mixture of synthetic black dyes (CI 50415, Solvent black 5) made by heating a mixture of nitrobenzene, aniline and aniline hydrochloride in the presence of a copper or iron catalyst. Its main industrial uses are as a colorant for lacquers and varnishes and in marker-pen inks. Inks The first known inks were made by the Chinese, and date back to the 23rd century B.C. They used natural plant dyes and minerals such as graphite ground with water and applied with an ink brush. Early Chinese inks similar to the modern inkstick have been found dating to about 256 BC at the end of the Warring States period. They were produced from soot, usually produced by burning pine wood, mixed with animal glue. To make ink from an inkstick, the stick is continuously ground against an inkstone with a small quantity of water to produce a dark liquid which is then applied with an ink brush. Artists and calligraphists could vary the thickness of the resulting ink by reducing or increasing the intensity and time of ink grinding. These inks produced the delicate shading and subtle or dramatic effects of Chinese brush painting. India ink (or "Indian ink" in British English) is a black ink once widely used for writing and printing and now more commonly used for drawing, especially when inking comic books and comic strips. The technique of making it probably came from China. India ink has been in use in India since at least the 4th century BC, where it was called masi.
In India, the black color of the ink came from bone char, tar, pitch and other substances. The ancient Romans had a black writing ink they called atramentum librarium. Its name came from the Latin word atrare, which meant to make something black. (This was the same root as the English word atrocious.) It was usually made, like India ink, from soot, although one variety, called atramentum elephantinum, was made by burning the ivory of elephants. Gall-nuts were also used for making fine black writing ink. Iron gall ink (also known as iron gall nut ink or oak gall ink) was a purple-black or brown-black ink made from iron salts and tannic acids from gall nuts. It was the standard writing and drawing ink in Europe from about the 12th century to the 19th century, and remained in use well into the 20th century. Astronomy A black hole is a region of spacetime where gravity prevents anything, including light, from escaping. The theory of general relativity predicts that a sufficiently compact mass will deform spacetime to form a black hole. Around a black hole there is a mathematically defined surface called an event horizon that marks the point of no return. It is called "black" because it absorbs all the light that hits the horizon, reflecting nothing, just like a perfect black body in thermodynamics. Black holes of stellar mass are expected to form when very massive stars collapse at the end of their life cycle.
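The passage above does not say what "sufficiently compact" means quantitatively. As a rough benchmark (an addition here, not part of the source text), the standard general-relativity result is the Schwarzschild radius: a non-rotating mass $M$ forms a black hole if it is compressed within

$$r_s = \frac{2GM}{c^2},$$

where $G$ is the gravitational constant and $c$ is the speed of light; for a mass equal to the Sun's, $r_s$ is roughly 3 km.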
Black Flag may refer to:

Places
Black Flag, Western Australia, an abandoned town named after the Black Flag gold mine and farm
Black Flag to Ora Banda Road, the road in Western Australia on which the abovementioned town is located

People
The Black Flag, a nom de guerre of terror suspect Ali Charaf Damache

Flags
Black flag:
The anarchist black flag
A type of racing flag
One of various flags that are primarily black: list of black flags
Black Standard, legendary flag in Islamic tradition
Jolly Roger, flag associated with piracy
Pan-African flag, a trans-national unity symbol

Arts, entertainment, and media
Black Flag (band), an American hardcore punk band
Black Flag (Ektomorf album), a 2012 album by Ektomorf
Black Flag (Machine Gun Kelly mixtape), 2013
"Black Flag" (song), a 1992 song by King's X
Black Flag (newspaper), a publication in Britain
Assassin's Creed IV: Black Flag, 2013 videogame by Ubisoft
Black Flags:
Keen of the British Tabulating Machine Company. Each machine weighed about a ton. At its peak, GC&CS was reading approximately 4,000 messages per day. As a hedge against enemy attack, most bombes were dispersed to installations at Adstock and Wavendon (both later supplanted by installations at Stanmore and Eastcote), and Gayhurst. Luftwaffe messages were the first to be read in quantity. The German navy had much tighter procedures, and the capture of code books was needed before they could be broken. When, in February 1942, the German navy introduced the four-rotor Enigma for communications with its Atlantic U-boats, this traffic became unreadable for a period of ten months. Britain produced modified bombes, but it was the success of the US Navy Bombe that was the main source of reading messages from this version of Enigma for the rest of the war. Messages were sent to and fro across the Atlantic by enciphered teleprinter links. The Lorenz messages were codenamed Tunny at Bletchley Park. They were only sent in quantity from mid-1942. The Tunny networks were used for high-level messages between German High Command and field commanders. With the help of German operator errors, the cryptanalysts in the Testery (named after Ralph Tester, its head) worked out the logical structure of the machine despite not knowing its physical form. They devised automatic machinery to help with decryption, which culminated in Colossus, the world's first programmable digital electronic computer. This was designed and built by Tommy Flowers and his team at the Post Office Research Station at Dollis Hill. The prototype first worked in December 1943, was delivered to Bletchley Park in January and first worked operationally on 5 February 1944. Enhancements were developed for the Mark 2 Colossus, the first of which was working at Bletchley Park on the morning of 1 June in time for D-Day. Flowers then produced one Colossus a month for the rest of the war, making a total of ten with an eleventh part-built. The machines were operated mainly by Wrens in a section named the Newmanry after its head Max Newman. Bletchley's work was essential to defeating the U-boats in the Battle of the Atlantic, and to the British naval victories in the Battle of Cape Matapan and the Battle of North Cape. In 1941, Ultra exerted a powerful effect on the North African desert campaign against German forces under General Erwin Rommel. General Sir Claude Auchinleck wrote that were it not for Ultra, "Rommel would have certainly got through to Cairo". While they did not change the course of events, "Ultra" decrypts featured prominently in the story of Operation SALAM, László Almásy's mission across the desert behind Allied lines in 1942. Prior to the Normandy landings on D-Day in June 1944, the Allies knew the locations of all but two of Germany's fifty-eight Western-front divisions. Italian signals Italian signals had been of interest since Italy's attack on Abyssinia in 1935. During the Spanish Civil War the Italian Navy used the K model of the commercial Enigma without a plugboard; this was solved by Knox in 1937. When Italy entered the war in 1940, an improved version of the machine was used, though little traffic was sent by it and there were "wholesale changes" in Italian codes and cyphers. Knox was given a new section for work on Enigma variations, which he staffed with women ("Dilly's girls"), who included Margaret Rock, Jean Perrin, Clare Harding, Rachel Ronald, Elisabeth Granger, and Mavis Lever.
Mavis Lever solved the signals revealing the Italian Navy's operational plans before the Battle of Cape Matapan in 1941, leading to a British victory. Although most Bletchley staff did not know the results of their work, Admiral Cunningham visited Bletchley in person a few weeks later to congratulate them. When Italy entered World War II in June 1940, the Italians were using book codes for most of their military messages. The exception was the Italian Navy, which after the Battle of Cape Matapan started using the C-38 version of the Boris Hagelin rotor-based cipher machine, particularly to route their navy and merchant marine convoys to the conflict in North Africa. As a consequence, J. R. M. Butler recruited his former student Bernard Willson to join a team with two others in Hut 4. In June 1941, Willson became the first of the team to decode the Hagelin system, thus enabling military commanders to direct the Royal Navy and Royal Air Force to sink enemy ships carrying supplies from Europe to Rommel's Afrika Korps. This led to increased shipping losses and, from reading the intercepted traffic, the team learnt that between May and September 1941 the stock of fuel for the Luftwaffe in North Africa was reduced by 90 percent. After an intensive language course, in March 1944 Willson switched to Japanese language-based codes. A Middle East Intelligence Centre (MEIC) was set up in Cairo in 1939. When Italy entered the war in June 1940, delays in forwarding intercepts to Bletchley via congested radio links resulted in cryptanalysts being sent to Cairo. A Combined Bureau Middle East (CBME) was set up in November, though the Middle East authorities made "increasingly bitter complaints" that GC&CS was giving too little priority to work on Italian cyphers. However, the principle of concentrating high-grade cryptanalysis at Bletchley was maintained. John Chadwick started cryptanalysis work in 1942 on Italian signals at the naval base 'HMS Nile' in Alexandria. Later, he was with GC&CS in the Heliopolis Museum, Cairo, and then in the Villa Laurens, Alexandria. Soviet signals Soviet signals had been studied since the 1920s. In 1939–40, John Tiltman (who had worked on Russian Army traffic from 1930) set up two Russian sections at Wavendon (a country house near Bletchley) and at Sarafand in Palestine. Two Russian high-grade army and navy systems were broken early in 1940. Tiltman spent two weeks in Finland, where he obtained Russian traffic from Finland and Estonia in exchange for radio equipment. In June 1941, when the Soviet Union became an ally, Churchill ordered a halt to intelligence operations against it. In December 1941, the Russian section was closed down, but in late summer 1943 or late 1944, a small GC&CS Russian cypher section was set up in London overlooking Park Lane, then in Sloane Square. Japanese signals An outpost of the Government Code and Cypher School had been set up in Hong Kong in 1935, the Far East Combined Bureau (FECB). The FECB naval staff moved in 1940 to Singapore, then Colombo, Ceylon, then Kilindini, Mombasa, Kenya. They succeeded in deciphering Japanese codes with a mixture of skill and good fortune. The Army and Air Force staff went from Singapore to the Wireless Experimental Centre at Delhi, India. In early 1942, a six-month crash course in Japanese, for 20 undergraduates from Oxford and Cambridge, was started by the Inter-Services Special Intelligence School in Bedford, in a building across from the main Post Office. This course was repeated every six months until war's end.
Most of those completing these courses worked on decoding Japanese naval messages in Hut 7, under John Tiltman. By mid-1945, well over 100 personnel were involved with this operation, which co-operated closely with the FECB and the US Signal Intelligence Service at Arlington Hall, Virginia. In 1999, Michael Smith wrote: "Only now are the British codebreakers (like John Tiltman, Hugh Foss, and Eric Nave) beginning to receive the recognition they deserve for breaking Japanese codes and cyphers". Postwar Continued secrecy After the war, the secrecy imposed on Bletchley staff remained in force, so that most relatives never knew more than that a child, spouse, or parent had done some kind of secret war work. Churchill referred to the Bletchley staff as "the geese that laid the golden eggs and never cackled". That said, occasional mentions of the work performed at Bletchley Park slipped the censor's net and appeared in print. With the publication of F. W. Winterbotham's The Ultra Secret (1974), public discussion of Bletchley's work finally became possible (though even today some former staff still consider themselves bound to silence), and in July 2009 the British government announced that Bletchley personnel would be recognised with a commemorative badge. Site After the war, the site passed through a succession of hands and saw a number of uses, including as a teacher-training college and local GPO headquarters. By 1991, the site was nearly empty and the buildings were at risk of demolition for redevelopment. In February 1992, the Milton Keynes Borough Council declared most of the Park a conservation area, and the Bletchley Park Trust was formed to maintain the site as a museum. The site opened to visitors in 1993, and was formally inaugurated by the Duke of Kent as Chief Patron in July 1994. In 1999 the land owners, the Property Advisors to the Civil Estate and BT, granted a lease to the Trust giving it control over most of the site. Heritage attraction June 2014 saw the completion of an £8 million restoration project by museum design specialist Event Communications, which was marked by a visit from Catherine, Duchess of Cambridge. The Duchess's paternal grandmother, Valerie, and Valerie's twin sister, Mary (née Glassborow), both worked at Bletchley Park during the war. The twin sisters worked as Foreign Office Civilians in Hut 6, where they managed the interception of enemy and neutral diplomatic signals for decryption. Valerie married Catherine's grandfather, Captain Peter Middleton. A memorial at Bletchley Park commemorates Mary and Valerie Middleton's work as code-breakers. Exhibitions

Block C Visitor Centre
Secrets Revealed introduction
The Road to Bletchley Park. Codebreaking in World War One.
Intel Security Cybersecurity exhibition. Online security and privacy in the 21st Century.

Block B
Lorenz Cipher
Alan Turing
Enigma machines
Japanese codes
Home Front exhibition. How people lived in WW2.

The Mansion
Office of Alastair Denniston
Library. Dressed as a World War II naval intelligence office.
The Imitation Game exhibition
Gordon Welchman: Architect of Ultra Intelligence exhibition

Huts 3 and 6. Codebreaking offices as they would have looked during World War II.
Hut 8. Interactive exhibitions explaining codebreaking. Alan Turing's office. Pigeon exhibition. The use of pigeons in World War II.
Hut 11. Life as a WRNS Bombe operator.
Hut 12. Bletchley Park: Rescued and Restored. Items found during the restoration work.
Wartime garages
Hut 19.
2366 Bletchley Park Air Training Corps Squadron Learning Department The Bletchley Park Learning Department offers educational group visits with active learning activities for schools and universities. Visits can be booked in advance during term time; students can engage with the history of Bletchley Park and understand its wider relevance for computer history and national security. Their workshops cover introductions to codebreaking, cyber security and the story of Enigma and Lorenz. Funding In October 2005, American billionaire Sidney Frank donated £500,000 to Bletchley Park Trust to fund a new Science Centre dedicated to Alan Turing. Simon Greenish joined as Director in 2006 to lead the fund-raising effort, a post he held until 2012, when Iain Standen took over the leadership role. In July 2008, a letter to The Times from more than a hundred academics condemned the neglect of the site. In September 2008, PGP, IBM, and other technology firms announced a fund-raising campaign to repair the facility. On 6 November 2008 it was announced that English Heritage would donate £300,000 to help maintain the buildings at Bletchley Park, and that they were in discussions regarding the donation of a further £600,000. In October 2011, the Bletchley Park Trust received a £4.6m Heritage Lottery Fund grant to be used "to complete the restoration of the site, and to tell its story to the highest modern standards" on the condition that £1.7m of 'match funding' was raised by the Bletchley Park Trust. Just weeks later, Google contributed £550,000, and by June 2012 the trust had successfully raised £2.4m to unlock the grants to restore Huts 3 and 6, as well as develop its exhibition centre in Block C. Additional income is raised by renting Block H to the National Museum of Computing, and some office space in various parts of the park to private firms. Due to the COVID-19 pandemic, the Trust expected to lose more than £2m in 2020 and be required to cut a third of its workforce. Former MP John Leech asked tech giants Amazon, Apple, Google, Facebook and Microsoft to donate £400,000 each to secure the future of the Trust. Leech had led the successful campaign to pardon Alan Turing and implement Turing's Law. Other organisations sharing the campus The National Museum of Computing The National Museum of Computing is housed in Block H, which is rented from the Bletchley Park Trust. Its Colossus and Tunny galleries tell an important part of the Allied breaking of German codes during World War II. There is a working reconstruction of a Bombe and a rebuilt Colossus computer which was used on the high-level Lorenz cipher, codenamed Tunny by the British. The museum, which opened in 2007, is an independent voluntary organisation that is governed by its own board of trustees. Its aim is "To collect
The team at Bletchley Park devised automatic machinery to help with decryption, culminating in the development of Colossus, the world's first programmable digital electronic computer. Codebreaking operations at Bletchley Park came to an end in 1946 and all information about the wartime operations was classified until the mid-1970s. After the war it had various uses, including as a teacher-training college and local GPO headquarters. By 1990 the huts in which the codebreakers worked were being considered for demolition and redevelopment. The Bletchley Park Trust was formed in February 1992 to save large portions of the site from development. More recently, Bletchley Park has been open to the public, featuring interpretive exhibits and huts that have been rebuilt to appear as they did during their wartime operations. It receives hundreds of thousands of visitors annually. The separate National Museum of Computing, which includes a working replica Bombe machine and a rebuilt Colossus computer, is housed in Block H on the site. History The site appears in the Domesday Book of 1086 as part of the Manor of Eaton. Browne Willis built a mansion there in 1711, but after Thomas Harrison purchased the property in 1793 this was pulled down. It was first known as Bletchley Park after its purchase in 1877 by the architect Samuel Lipscomb Seckham, who built a house there. The estate was bought in 1883 by Sir Herbert Samuel Leon, who expanded the then-existing house into what architect Landis Gores called a "maudlin and monstrous pile" combining Victorian Gothic, Tudor, and Dutch Baroque styles. At his Christmas family gatherings there was a fox hunting meet on Boxing Day with glasses of sloe gin from the butler, and the house was always "humming with servants". With 40 gardeners, a flower bed of yellow daffodils could become a sea of red tulips overnight. After the death of Herbert Leon in 1926, the estate continued to be occupied by his widow Fanny Leon (née Higham) until her death in 1937. In 1938, the mansion and much of the site were bought by a builder for a housing estate, but in May 1938 Admiral Sir Hugh Sinclair, head of the Secret Intelligence Service (SIS or MI6), bought the mansion and its land for £6,000 for use by GC&CS and SIS in the event of war. He used his own money, as the Government said they did not have the budget to do so. A key advantage seen by Sinclair and his colleagues (inspecting the site under the cover of "Captain Ridley's shooting party") was Bletchley's geographical centrality. It was almost immediately adjacent to Bletchley railway station, where the "Varsity Line" between Oxford and Cambridge (whose universities were expected to supply many of the code-breakers) met the main West Coast railway line connecting London, Birmingham, Manchester, Liverpool, Glasgow and Edinburgh. Watling Street, the main road linking London to the north-west (subsequently the A5), was close by, and high-volume communication links were available at the telegraph and telephone repeater station in nearby Fenny Stratford. Bletchley Park was known as "B.P." to those who worked there. "Station X" (X = Roman numeral ten), "London Signals Intelligence Centre", and "Government Communications Headquarters" were all cover names used during the war. The formal posting of the many "Wrens" (members of the Women's Royal Naval Service) working there was to HMS Pembroke V. Royal Air Force names of Bletchley Park and its outstations included RAF Eastcote, RAF Lime Grove and RAF Church Green.
The postal address that staff had to use was "Room 47, Foreign Office". After the war, the Government Code & Cypher School became the Government Communications Headquarters (GCHQ), moving to Eastcote in 1946 and to Cheltenham in the 1950s. The site was used by various government agencies, including the GPO and the Civil Aviation Authority. One large building, Block F, was demolished in 1987, by which time the site was being run down, with tenants leaving. In 1990 the site was at risk of being sold for housing development. However, Milton Keynes Council made it into a conservation area. Bletchley Park Trust was set up in 1991 by a group of people who recognised the site's importance. The initial trustees included Roger Bristow, Ted Enever, Peter Wescombe, Dr Peter Jarvis of the Bletchley Archaeological & Historical Society, and Tony Sale, who in 1994 became the first director of the Bletchley Park Museums. Personnel Admiral Hugh Sinclair was the founder and head of GC&CS between 1919 and 1938, with Commander Alastair Denniston being operational head of the organization from 1919 to 1942, beginning with its formation from the Admiralty's Room 40 (NID25) and the War Office's MI1b. Key GC&CS cryptanalysts who moved from London to Bletchley Park included John Tiltman, Dillwyn "Dilly" Knox, Josh Cooper, Oliver Strachey and Nigel de Grey. These people had a variety of backgrounds: linguists and chess champions were common, and Knox's field was papyrology. The British War Office recruited top solvers of cryptic crossword puzzles, as these individuals had strong lateral thinking skills. On the day Britain declared war on Germany, Denniston wrote to the Foreign Office about recruiting "men of the professor type". Personal networking drove early recruitments, particularly of men from the universities of Cambridge and Oxford. Trustworthy women were similarly recruited for administrative and clerical jobs. In one 1941 recruiting stratagem, The Daily Telegraph was asked to organise a crossword competition, after which promising contestants were discreetly approached about "a particular type of work as a contribution to the war effort". Denniston recognised, however, that the enemy's use of electromechanical cipher machines meant that formally trained mathematicians would also be needed; Oxford's Peter Twinn joined GC&CS in February 1939; Cambridge's Alan Turing and Gordon Welchman began training in 1938 and reported to Bletchley the day after war was declared, along with John Jeffreys. Later-recruited cryptanalysts included the mathematicians Derek Taunt, Jack Good, Bill Tutte, and Max Newman; historian Harry Hinsley, and chess champions Hugh Alexander and Stuart Milner-Barry. Joan Clarke was one of the few women employed at Bletchley as a full-fledged cryptanalyst. This eclectic staff of "Boffins and Debs" (scientists and debutantes, young women of high society) caused GC&CS to be whimsically dubbed the "Golf, Cheese and Chess Society". During a September 1941 morale-boosting visit, Winston Churchill reportedly remarked to Denniston: "I told you to leave no stone unturned to get staff, but I had no idea you had taken me so literally." Six weeks later, having failed to get sufficient typing and unskilled staff to achieve the productivity that was possible, Turing, Welchman, Alexander and Milner-Barry wrote directly to Churchill. His response was "Action this day: make sure they have all they want on extreme priority and report to me that this has been done."
Alan Brooke, the Chief of the Imperial General Staff, wrote on 16 April 1942: "Took lunch in car and went to see the organization for breaking down ciphers – a wonderful set of professors and genii! I marvel at the work they succeed in doing." After initial training at the Inter-Service Special Intelligence School set up by John Tiltman (initially at an RAF depot in Buckingham and later in Bedford, where it was known locally as "the Spy School"), staff worked a six-day week, rotating through three shifts: 4 p.m. to midnight, midnight to 8 a.m. (the most disliked shift), and 8 a.m. to 4 p.m., each with a half-hour meal break. At the end of the third week, a worker went off at 8 a.m. and came back at 4 p.m., thus putting in 16 hours on that last day. The irregular hours affected workers' health and social life, as well as the routines of the nearby homes at which most staff lodged. The work was tedious and demanded intense concentration; staff got one week's leave four times a year, but some "girls" collapsed and required extended rest. Recruitment took place to combat a shortage of experts in Morse code and German. In January 1945, at the peak of codebreaking efforts, nearly 10,000 personnel were working at Bletchley and its outstations. About three-quarters of these were women. Many of the women came from middle-class backgrounds and held degrees in the areas of mathematics, physics and engineering; they were given the chance because of the lack of men, who had been sent to war. They performed calculations and coding and hence were integral to the computing processes. Among them were Eleanor Ireland, who worked on the Colossus computers, and Ruth Briggs, a German scholar, who worked within the Naval Section. The female staff in Dillwyn Knox's section were sometimes termed "Dilly's Fillies". Knox's methods enabled Mavis Lever (who married mathematician and fellow code-breaker Keith Batey) and Margaret Rock to solve a German code, the Abwehr cipher. Many of the women had backgrounds in languages, particularly French, German and Italian. Among them were Rozanne Colchester, a translator who worked mainly for the Italian air forces section, and Cicely Mayhew, recruited straight from university, who worked in Hut 8, translating decoded German Navy signals. For a long time, the British Government failed to acknowledge the contributions the personnel at Bletchley Park had made. Their work achieved official recognition only in 2009. Secrecy Properly used, the German Enigma and Lorenz ciphers should have been virtually unbreakable, but flaws in German cryptographic procedures, and poor discipline among the personnel carrying them out, created vulnerabilities that made Bletchley's attacks just barely feasible. These vulnerabilities, however, could have been remedied by relatively simple improvements in enemy procedures, and such changes would certainly have been implemented had Germany had any hint of Bletchley's success. Thus the intelligence Bletchley produced was considered wartime Britain's "Ultra secret", higher even than the normally highest classification, and security was paramount. All staff signed the Official Secrets Act (1939) and a 1942 security warning emphasised the importance of discretion even within Bletchley itself: "Do not talk at meals. Do not talk in the transport. Do not talk travelling. Do not talk in the billet. Do not talk by your own fireside. Be careful even in your Hut..." Nevertheless, there were security leaks.
Jock Colville, the Assistant Private Secretary to Winston Churchill, recorded in his diary on 31 July 1941 that the newspaper proprietor Lord Camrose had discovered Ultra and that security leaks "increase in number and seriousness". Without doubt, the most serious of these was that Bletchley Park had been infiltrated by John Cairncross, the notorious Soviet mole and member of the Cambridge Spy Ring, who leaked Ultra material to Moscow. Despite the high degree of secrecy surrounding Bletchley Park during the Second World War, unique and hitherto unknown amateur film footage of the outstation at nearby Whaddon Hall came to light in 2020, after being anonymously donated to the Bletchley Park Trust. A spokesman for the Trust noted that the film's existence was all the more incredible because it was "very, very rare even to have still photographs" of the park and its associated sites. Early work The first personnel of the Government Code and Cypher School (GC&CS) moved to Bletchley Park on 15 August 1939. The Naval, Military, and Air Sections were on the ground floor of the mansion, together with a telephone exchange, teleprinter room, kitchen, and dining room; the top floor was allocated to MI6. Construction of the wooden huts began in late 1939, and Elmers School, a neighbouring boys' boarding school in a Victorian Gothic redbrick building by a church, was acquired for the Commercial and Diplomatic Sections. After the United States joined World War II, a number of American cryptographers were posted to Hut 3, and from May 1943 onwards there was close co-operation between British and American intelligence. (See 1943 BRUSA Agreement.) In contrast, the Soviet Union was never officially told of Bletchley Park and its activities, a reflection of Churchill's distrust of the Soviets even during the US-UK-USSR alliance imposed by the Nazi threat. The only direct enemy damage to the site was done on 20–21 November 1940 by three bombs probably intended for Bletchley railway station; Hut 4, shifted two feet off its foundation, was winched back into place as work inside continued. Intelligence reporting Initially, when only a very limited amount of Enigma traffic was being read, deciphered non-Naval Enigma messages were sent from Hut 6 to Hut 3, which handled their translation and onward transmission. Subsequently, under Group Captain Eric Jones, Hut 3 expanded to become the heart of Bletchley Park's intelligence effort, with input from decrypts of "Tunny" (Lorenz SZ42) traffic and many other sources. Early in 1942 it moved into Block D, but its functions were still referred to as Hut 3. Hut 3 contained a number of sections: Air Section "3A", Military Section "3M", a small Naval Section "3N", a multi-service Research Section "3G" and a large liaison section "3L". It also housed the Traffic Analysis Section, SIXTA. An important function that allowed the synthesis of raw messages into valuable military intelligence was the indexing and cross-referencing of information in a number of different filing systems. Intelligence reports were sent out to the Secret Intelligence Service, the intelligence chiefs in the relevant ministries, and later on to high-level commanders in the field. Naval Enigma deciphering was in Hut 8, with translation in Hut 4.
Verbatim translations were sent to the Naval Intelligence Division (NID) of the Admiralty's Operational Intelligence Centre (OIC), supplemented by information from indexes as to the meaning of technical terms and cross-references from a knowledge store of German naval technology. Where relevant to non-naval matters, they would also be passed to Hut 3. Hut 4 also decoded a manual system known as the dockyard cipher, which sometimes carried messages that were also sent on an Enigma network. Feeding these back to Hut 8 provided excellent "cribs" for known-plaintext attacks on the daily naval Enigma key. Listening stations Initially, a wireless room was established at Bletchley Park. It was set up in the mansion's water tower under the code name "Station X",
It is possible that this priest is the other name listed in the Liber Vitae. At the age of seven, Bede was sent as a puer oblatus to the monastery of Monkwearmouth by his family to be educated by Benedict Biscop and later by Ceolfrith. Bede does not say whether it was already intended at that point that he would be a monk. It was fairly common in Ireland at this time for young boys, particularly those of noble birth, to be fostered out as an oblate; the practice was also likely to have been common among the Germanic peoples in England. Monkwearmouth's sister monastery at Jarrow was founded by Ceolfrith in 682, and Bede probably transferred to Jarrow with Ceolfrith that year. The dedication stone for the church has survived; it is dated 23 April 685, and as Bede would have been required to assist with menial tasks in his day-to-day life, it is possible that he helped in building the original church. In 686, plague broke out at Jarrow. The Life of Ceolfrith, written in about 710, records that only two surviving monks were capable of singing the full offices; one was Ceolfrith and the other a young boy, who according to the anonymous writer had been taught by Ceolfrith. The two managed to do the entire service of the liturgy until others could be trained. The young boy was almost certainly Bede, who would have been about 14. When Bede was about 17 years old, Adomnán, the abbot of Iona Abbey, visited Monkwearmouth and Jarrow. Bede would probably have met the abbot during this visit, and it may be that Adomnán sparked Bede's interest in the Easter dating controversy. In about 692, in Bede's nineteenth year, Bede was ordained a deacon by his diocesan bishop, John, who was bishop of Hexham. The canonical age for the ordination of a deacon was 25; Bede's early ordination may mean that his abilities were considered exceptional, but it is also possible that the minimum age requirement was often disregarded. There might have been minor orders ranking below a deacon, but there is no record of whether Bede held any of these offices. In Bede's thirtieth year (about 702), he became a priest, with the ordination again performed by Bishop John. In about 701 Bede wrote his first works, the De Arte Metrica and De Schematibus et Tropis; both were intended for use in the classroom. He continued to write for the rest of his life, eventually completing over 60 books, most of which have survived. Not all his output can be easily dated, and Bede may have worked on some texts over a period of many years. His last surviving work is a letter to Ecgbert of York, a former student, written in 734. A 6th-century Greek and Latin manuscript of the Acts of the Apostles that is believed to have been used by Bede survives and is now in the Bodleian Library at the University of Oxford; it is known as the Codex Laudianus. Bede may also have worked on some of the Latin Bibles that were copied at Jarrow, one of which, the Codex Amiatinus, is now held by the Laurentian Library in Florence. Bede was a teacher as well as a writer; he enjoyed music and was said to be accomplished as a singer and as a reciter of poetry in the vernacular. It is possible that he suffered a speech impediment, but this depends on a phrase in the introduction to his verse life of Saint Cuthbert. Translations of this phrase differ, and it is uncertain whether Bede intended to say that he was cured of a speech problem, or merely that he was inspired by the saint's works. In 708, some monks at Hexham accused Bede of having committed heresy in his work De Temporibus.
The standard theological view of world history at the time was known as the Six Ages of the World; in his book, Bede calculated the age of the world for himself, rather than accepting the authority of Isidore of Seville, and came to the conclusion that Christ had been born 3,952 years after the creation of the world, rather than the figure of over 5,000 years that was commonly accepted by theologians. The accusation occurred in front of the bishop of Hexham, Wilfrid, who was present at a feast when some drunken monks made the accusation. Wilfrid did not respond to the accusation, but a monk present relayed the episode to Bede, who replied within a few days to the monk, writing a letter setting forth his defence and asking that the letter also be read to Wilfrid. Bede had another brush with Wilfrid, for the historian says that he met Wilfrid sometime between 706 and 709 and discussed Æthelthryth, the abbess of Ely. Wilfrid had been present at the exhumation of her body in 695, and Bede questioned the bishop about the exact circumstances of the body and asked for more details of her life, as Wilfrid had been her advisor. In 733, Bede travelled to York to visit Ecgbert, who was then bishop of York. The See of York was elevated to an archbishopric in 735, and it is likely that Bede and Ecgbert discussed the proposal for the elevation during his visit. Bede hoped to visit Ecgbert again in 734 but was too ill to make the journey. Bede also travelled to the monastery of Lindisfarne and at some point visited the otherwise unknown monastery of a monk named , a visit that is mentioned in a letter to that monk. Because of his widespread correspondence with others throughout the British Isles, and because many of the letters imply that Bede had met his correspondents, it is likely that Bede travelled to some other places, although nothing further about timing or locations can be guessed. It seems certain that he did not visit Rome, however, as he did not mention it in the autobiographical chapter of his Historia Ecclesiastica. Nothhelm, a correspondent of Bede's who assisted him by finding documents for him in Rome, is known to have visited Bede, though the date cannot be determined beyond the fact that it was after Nothhelm's visit to Rome. Except for a few visits to other monasteries, his life was spent in a round of prayer, observance of the monastic discipline and study of the Sacred Scriptures. He was considered the most learned man of his time and wrote excellent biblical and historical books. Bede died on the Feast of the Ascension, Thursday, 26 May 735, on the floor of his cell, singing "Glory be to the Father and to the Son and to the Holy Spirit", and was buried at Jarrow. Cuthbert, a disciple of Bede's, wrote a letter to a Cuthwin (of whom nothing else is known), describing Bede's last days and his death. According to Cuthbert, Bede fell ill, "with frequent attacks of breathlessness but almost without pain", before Easter. On the Tuesday, two days before Bede died, his breathing became worse and his feet swelled. He continued to dictate to a scribe, however, and despite spending the night awake in prayer he dictated again the following day. At three o'clock, according to Cuthbert, he asked for a box of his to be brought, and distributed among the priests of the monastery "a few treasures" of his: "some pepper, and napkins, and some incense". That night he dictated a final sentence to the scribe, a boy named Wilberht, and died soon afterwards.
The account of Cuthbert does not make entirely clear whether Bede died before midnight or after. However, by the reckoning of Bede's time, passage from the old day to the new occurred at sunset, not midnight, and Cuthbert is clear that he died after sunset. Thus, while his box was brought at three o'clock Wednesday afternoon of 25 May, by the time of the final dictation it might be considered already 26 May in that ecclesiastical sense, although 25 May in the ordinary sense. Cuthbert's letter also relates a five-line poem in the vernacular that Bede composed on his deathbed, known as "Bede's Death Song". It is the most widely copied Old English poem and appears in 45 manuscripts, but its attribution to Bede is not certain: not all manuscripts name Bede as the author, and the ones that do are of later origin than those that do not. Bede's remains may have been transferred to Durham Cathedral in the 11th century; his tomb there was looted in 1541, but the contents were probably re-interred in the Galilee chapel at the cathedral. One further oddity in his writings is that in one of his works, the Commentary on the Seven Catholic Epistles, he writes in a manner that gives the impression he was married. The section in question is the only one in that work that is written in first-person view. Bede says: "Prayers are hindered by the conjugal duty because as often as I perform what is due to my wife I am not able to pray." Another passage, in the Commentary on Luke, also mentions a wife in the first person: "Formerly I possessed a wife in the lustful passion of desire and now I possess her in honourable sanctification and true love of Christ." The historian Benedicta Ward argues that these passages are Bede employing a rhetorical device. Works Bede wrote scientific, historical and theological works, reflecting the range of his writings from music and metrics to exegetical Scripture commentaries. He knew patristic literature, as well as Pliny the Elder, Virgil, Lucretius, Ovid, Horace and other classical writers. He knew some Greek. Bede's scriptural commentaries employed the allegorical method of interpretation, and his history includes accounts of miracles, which to modern historians have seemed at odds with his critical approach to the materials in his history. Modern studies have shown the important role such concepts played in the world-view of Early Medieval scholars. Although Bede is mainly studied as an historian now, in his time his works on grammar, chronology, and biblical studies were as important as his historical and hagiographical works. The non-historical works contributed greatly to the Carolingian renaissance. He has been credited with writing a penitential, though his authorship of this work is disputed. Ecclesiastical History of the English People Bede's best-known work is the Historia ecclesiastica gentis Anglorum, or An Ecclesiastical History of the English People, completed in about 731. Bede was aided in writing this book by Albinus, abbot of St Augustine's Abbey, Canterbury. The first of the five books begins with some geographical background and then sketches the history of England, beginning with Caesar's invasion in 55 BC. A brief account of Christianity in Roman Britain, including the martyrdom of St Alban, is followed by the story of Augustine's mission to England in 597, which brought Christianity to the Anglo-Saxons.
The second book begins with the death of Gregory the Great in 604 and follows the further progress of Christianity in Kent and the first attempts to evangelise Northumbria. These ended in disaster when Penda, the pagan king of Mercia, killed the newly Christian Edwin of Northumbria at the Battle of Hatfield Chase in about 632. The setback was temporary, and the third book recounts the growth of Christianity in Northumbria under kings Oswald of Northumbria and Oswy. The climax of the third book is the account of the Council of Whitby, traditionally seen as a major turning point in English history. The fourth book begins with the consecration of Theodore as Archbishop of Canterbury and recounts Wilfrid's efforts to bring Christianity to the Kingdom of Sussex. The fifth book brings the story up to Bede's day and includes an account of missionary work in Frisia and of the conflict with the British church over the correct dating of Easter. Bede wrote a preface for the work, in which he dedicates it to Ceolwulf, king of Northumbria. The preface mentions that Ceolwulf received an earlier draft of the book; presumably Ceolwulf knew enough Latin to understand it, and he may even have been able to read it. The preface makes it clear that Ceolwulf had requested the earlier copy, and Bede had asked for Ceolwulf's approval; this correspondence with the king indicates that Bede's monastery had connections among the Northumbrian nobility. Sources The monastery at Wearmouth-Jarrow had an excellent library. Both Benedict Biscop and Ceolfrith had acquired books from the Continent, and in Bede's day the monastery was a renowned centre of learning. It has been estimated that there were about 200 books in the monastic library. For the period prior to Augustine's arrival in 597, Bede drew on earlier writers, including Solinus. He had access to two works of Eusebius: the Historia Ecclesiastica, and also the Chronicon, though he had neither in the original Greek; instead he had a Latin translation of the Historia, by Rufinus, and Saint Jerome's translation of the Chronicon. He also knew Orosius's Adversus Paganos, and Gregory of Tours' Historia Francorum, both Christian histories, as well as the work of Eutropius, a pagan historian. He used Constantius's Life of Germanus as a source for Germanus's visits to Britain. Bede's account of the invasion of the Anglo-Saxons is drawn largely from Gildas's De Excidio et Conquestu Britanniae. Bede would also have been familiar with more recent accounts such as Stephen of Ripon's Life of Wilfrid, and the anonymous Life of Gregory the Great and Life of Cuthbert. He also drew on Josephus's Antiquities, and
the works of Cassiodorus, and there was a copy of the Liber Pontificalis in Bede's monastery. Bede quotes from several classical authors, including Cicero, Plautus, and Terence, but he may have had access to their work via a Latin grammar rather than directly. However, it is clear he was familiar with the works of Virgil and with Pliny the Elder's Natural History, and his monastery also owned copies of the works of Dionysius Exiguus. He probably drew his account of St. Alban from a life of that saint which has not survived. He acknowledges two other lives of saints directly; one is a life of Fursa, and the other of St. Æthelburh; the latter no longer survives. He also had access to a life of Ceolfrith. Some of Bede's material came from oral traditions, including a description of the physical appearance of Paulinus of York, who had died nearly 90 years before Bede's Historia Ecclesiastica was written. Bede also had correspondents who supplied him with material. Albinus, the abbot of the monastery in Canterbury, provided much information about the church in Kent, and with the assistance of Nothhelm, at that time a priest in London, obtained copies of Gregory the Great's correspondence from Rome relating to Augustine's mission. Almost all of Bede's information regarding Augustine is taken from these letters. Bede acknowledged his correspondents in the preface to the Historia Ecclesiastica; he was in contact with Bishop Daniel of Winchester for information about the history of the church in Wessex, and also wrote to the monastery at Lastingham for information about Cedd and Chad. Bede also mentions an Abbot Esi as a source for the affairs of the East Anglian church, and Bishop Cynibert for information about Lindsey. The historian Walter Goffart argues that Bede based the structure of the Historia on three works, using them as the framework around which the three main sections of the work were structured. For the early part of the work, up until the Gregorian mission, Goffart feels that Bede used De excidio. The second section, detailing the Gregorian mission of Augustine of Canterbury, was framed on the Life of Gregory the Great written at Whitby.
The last section, detailing events after the Gregorian mission, Goffart feels was modelled on the Life of Wilfrid. Most of Bede's informants for the period after Augustine's mission came from the eastern part of Britain, leaving significant gaps in the knowledge of the western areas, which were those areas likely to have a native Briton presence. Models and style Bede's stylistic models included some of the same authors from whom he drew the material for the earlier parts of his history. His introduction imitates the work of Orosius, and his title is an echo of Eusebius's Historia Ecclesiastica. Bede also followed Eusebius in taking the Acts of the Apostles as the model for the overall work: where Eusebius used the Acts as the theme for his description of the development of the church, Bede made it the model for his history of the Anglo-Saxon church. Bede quoted his sources at length in his narrative, as Eusebius had done. Bede also appears to have taken quotes directly from his correspondents at times. For example, he almost always uses the terms "Australes" and "Occidentales" for the South and West Saxons respectively, but in a passage in the first book he uses "Meridiani" and "Occidui" instead, as perhaps his informant had done. At the end of the work, Bede adds a brief autobiographical note; this was an idea taken from Gregory of Tours' earlier History of the Franks. Bede's work as a hagiographer and his detailed attention to dating were both useful preparations for the task of writing the Historia Ecclesiastica. His interest in computus, the science of calculating the date of Easter, was also useful in the account he gives of the controversy between the British and Anglo-Saxon church over the correct method of obtaining the Easter date. Bede is described by Michael Lapidge as "without question the most accomplished Latinist produced in these islands in the Anglo-Saxon period". His Latin has been praised for its clarity, but his style in the Historia Ecclesiastica is not simple. He knew rhetoric and often used figures of speech and rhetorical forms which cannot easily be reproduced in translation, depending as they often do on the connotations of the Latin words. However, unlike contemporaries such as Aldhelm, whose Latin is full of difficulties, Bede's own text is easy to read. In the words of Charles Plummer, one of the best-known editors of the Historia Ecclesiastica, Bede's Latin is "clear and limpid ... it is very seldom that we have to pause to think of the meaning of a sentence ... Alcuin rightly praises Bede for his unpretending style." Intent Bede's primary intention in writing the Historia Ecclesiastica was to show the growth of the united church throughout England. The native Britons, whose Christian church survived the departure of the Romans, earn Bede's ire for refusing to help convert the Saxons; by the end of the Historia the English, and their church, are dominant over the Britons. This goal, of showing the movement towards unity, explains Bede's animosity towards the British method of calculating Easter: much of the Historia is devoted to a history of the dispute, including the final resolution at the Synod of Whitby in 664. Bede is also concerned to show the unity of the English, despite the disparate kingdoms that still existed when he was writing. He also wants to instruct the reader by spiritual example and to entertain, and to the latter end he adds stories about many of the places and people about which he wrote. N.J.
Higham argues that Bede designed his work to promote his reform agenda to Ceolwulf, the Northumbrian king. Bede painted a highly optimistic picture of the current situation in the Church, as opposed to the more pessimistic picture found in his private letters. Bede's extensive use of miracles can prove difficult for readers who consider him a more or less reliable historian but do not accept the possibility of miracles. Yet both reflect an inseparable integrity and regard for accuracy and truth, expressed in terms both of historical events and of a tradition of Christian faith that continues to the present day. Bede, like Gregory the Great whom Bede quotes on the subject in the Historia, felt that faith brought about by miracles was a stepping stone to a higher, truer faith, and that as a result miracles had their place in a work designed to instruct. Omissions and biases Bede is somewhat reticent about the career of Wilfrid, a contemporary and one of the most prominent clerics of his day. This may be because Wilfrid's opulent lifestyle was uncongenial to Bede's monastic mind; it may also be that the events of Wilfrid's life, divisive and controversial as they were, simply did not fit with Bede's theme of the progression to a unified and harmonious church. Bede's account of the early migrations of the Angles and Saxons to England omits any mention of a movement of those peoples across the English Channel from Britain to Brittany described by Procopius, who was writing in the sixth century. Frank Stenton describes this omission as "a scholar's dislike of the indefinite"; traditional material that could not be dated or used for Bede's didactic purposes had no interest for him. Bede was a Northumbrian, and this tinged his work with a local bias. The sources to which he had access gave him less information about the west of England than for other areas. He says relatively little about the achievements of Mercia and Wessex, omitting, for example, any mention of Boniface, a West Saxon missionary to the continent of some renown and of whom Bede had almost certainly heard, though Bede does discuss Northumbrian missionaries to the continent. He is also parsimonious in his praise for Aldhelm, a West Saxon who had done much to convert the native Britons to the Roman form of Christianity. He lists seven kings of the Anglo-Saxons whom he regards as having held imperium, or overlordship; only one king of Wessex, Ceawlin, is listed, and none from Mercia, though elsewhere he acknowledges the secular power several of the Mercians held. Historian Robin Fleming states that he was so hostile to Mercia because Northumbria had been diminished by Mercian power that he consulted no Mercian informants and included no stories about its saints. Bede relates the story of Augustine's mission from Rome, and tells how the British clergy refused to assist Augustine in the conversion of the Anglo-Saxons. This, combined with Gildas's negative assessment of the British church at the time of the Anglo-Saxon invasions, led Bede to a very critical view of the native church. However, Bede ignores the fact that at the time of Augustine's mission, the history between the two was one of warfare and conquest, which, in the words of Barbara Yorke, would have naturally "curbed any missionary impulses towards the Anglo-Saxons from the British clergy." Use of Anno Domini At the time Bede wrote the Historia Ecclesiastica, there were two common ways of referring to dates. 
One was to use indictions, which were 15-year cycles, counting from 312 AD. There were three different varieties of indiction, each starting on a different day of the year. The other approach was to use regnal years—the reigning Roman emperor, for example, or the ruler of whichever kingdom was under discussion. This meant that in discussing conflicts between kingdoms, the date would have to be given in the regnal years of all the kings involved. Bede used both these approaches on occasion but adopted a third method as his main approach to dating: the Anno Domini method invented by Dionysius Exiguus. Although Bede did not invent this method, his adoption of it and his promulgation of it in De Temporum Ratione, his work on chronology, is the main reason it is now so widely used. Beda Venerabilis' Easter table, contained in De Temporum Ratione, was developed from Dionysius Exiguus' famous Paschal table. Assessment The Historia Ecclesiastica was copied often in the Middle Ages, and about 160 manuscripts containing it survive. About half of those are located on the European continent, rather than in the British Isles. Most of the 8th- and 9th-century texts of Bede's Historia come from the northern parts of the Carolingian Empire. This total does not include manuscripts with only a part of the work, of which another 100 or so survive. It was printed for the first time between 1474 and 1482, probably at Strasbourg, France. Modern historians have studied the Historia extensively, and several editions have been produced. For many years, early Anglo-Saxon history was essentially a retelling of the Historia, but recent scholarship has focused as much on what Bede did not write as what he did. The belief that the Historia was the culmination of Bede's works, the aim of all his scholarship, was a belief common among historians in the past but is no longer accepted by most scholars. Modern historians and editors of Bede have been lavish in their praise of his achievement in the Historia Ecclesiastica. Stenton regards it as one of the "small class of books which transcend all but the most fundamental conditions of time and place", and regards its quality as dependent on Bede's "astonishing power of co-ordinating the fragments of information which came to him through tradition, the relation of friends, or documentary evidence ... In an age where little was attempted beyond the registration of fact, he had reached the conception of history." Patrick Wormald describes him as "the first and greatest of England's historians". The Historia Ecclesiastica has given Bede a high reputation, but his concerns were different from those of a modern writer of history. His focus on the history of the organisation of the English church, and on heresies and the efforts made to root them out, led him to exclude the secular history of kings and kingdoms except where a moral lesson could be drawn or where they illuminated events in the church. Besides the Anglo-Saxon Chronicle, the medieval writers William of Malmesbury, Henry of Huntingdon, and Geoffrey of Monmouth used his works as sources and inspirations. Early modern writers, such as Polydore Vergil and Matthew Parker, the Elizabethan Archbishop of Canterbury, also utilised the Historia, and his works were used by both Protestant and Catholic sides in the wars of religion. Some historians have questioned the reliability of some of Bede's accounts. 
Some historians have questioned the reliability of some of Bede's accounts. One historian, Charlotte Behr, argues that the Historia's account of the arrival of the Germanic invaders in Kent should not be read as a record of what actually happened, but rather as a record of myths that were current in Kent during Bede's time. It is likely that Bede's work, because it was so widely copied, discouraged others from writing histories and may even have led to the disappearance of manuscripts containing older historical works. Other historical works Chronicles In 725 Bede wrote the Greater Chronicle (chronica maiora), as Chapter 66 of his On the Reckoning of Time; it sometimes circulated as a separate work. For recent events the Chronicle, like his Ecclesiastical History, relied upon Gildas, upon a version of the Liber Pontificalis current at least to the papacy of Pope Sergius I (687–701), and upon other sources. For earlier events he drew on Eusebius's Chronikoi Kanones. The dating of events in the Chronicle is inconsistent with his other works: it uses the era of creation, the Anno Mundi. Hagiography His other historical works included lives of the abbots of Wearmouth and Jarrow, as well as verse and prose lives of Saint Cuthbert of
from South America during Japanese colonial rule. Larger pearls (Chinese: 波霸/黑珍珠; pinyin: bō bà/hēi zhēn zhū) quickly replaced these. Today, some cafés specialize in bubble tea production. Some cafés use plastic lids, but more authentic bubble tea shops serve drinks using a machine to seal the top of the cup with heated plastic cellophane. The latter method allows the tea to be shaken in the serving cup and makes it spill-free until a person is ready to drink it. The cellophane is then pierced with an oversize straw, now referred to as a boba straw, which is larger than a typical drinking straw to allow the toppings to pass through. Due to its popularity, bubble tea has inspired a variety of bubble-tea-flavored snacks, such as bubble tea ice cream and bubble tea candy. The rapid growth in demand for bubble tea and its related industry has created opportunities for market expansion. The market size of bubble tea was valued at $2.4 billion in 2019 and is projected to reach $4.3 billion by the end of 2027. Some of the largest global bubble tea chains include Chatime, CoCo Fresh Tea & Juice and Gong Cha. Variants Drink Bubble tea comes in many variations, usually based on black tea, green tea, oolong tea, and sometimes white tea. Another variation, yuenyeung (Chinese: 鴛鴦; named after the Mandarin duck), originated in Hong Kong and consists of black tea, coffee, and milk. Other varieties include blended tea drinks, which are often either blended with ice cream or made as smoothies that contain both tea and fruit. Toppings Tapioca pearls (boba) are the most common ingredient, although there are other ways to make the chewy spheres found in bubble tea. The pearls vary in color according to the ingredients mixed in with the tapioca; most are black, colored by brown sugar. Jelly comes in different shapes – small cubes, stars, or rectangular strips – and flavors such as coconut jelly, konjac, lychee, grass jelly, mango, coffee and green tea. Azuki bean or mung bean paste, typical toppings for Taiwanese shaved ice desserts, give bubble tea an added subtle flavor as well as texture. Aloe, egg pudding (custard), grass jelly, and sago can also be found in many bubble tea shops. Popping boba, spheres that have fruit juices or syrups inside them, are another popular topping; flavors include mango, strawberry, coconut, kiwi and honey melon. Some shops offer milk or cheese foam on top of the drink, giving it a consistency similar to that of whipped cream and a saltier flavor profile. Ice and sugar level Bubble tea shops often give customers the option of choosing the amount of ice or sugar in their drink. Sugar level is usually specified in percentages (e.g. 25%, 50%, 75%, 100%), and ice level is usually specified ordinally (e.g. no ice, less ice, normal ice), though some shops specify both ordinally. Packaging In Southeast Asia, bubble tea is traditionally packaged in a plastic takeaway cup, sealed with plastic or a rounded cap. New entrants into the market have attempted to distinguish their products by packaging them in bottles and other interesting shapes; some have even done away with the bottle and used sealed plastic bags. Nevertheless, the traditional plastic takeaway cup with a sealed cap is still the most common packaging method. Preparation method The traditional way of preparing bubble tea is to mix the ingredients (sugar, powders and other flavorants) together by hand, using a bubble tea shaker cup.
Many present-day bubble tea shops use a bubble tea shaker machine, which eliminates the need to shake the tea by hand and reduces staffing needs, as multiple cups may be prepared by a single barista. One bubble tea shop in Taiwan, named Jhu Dong Auto Tea, makes bubble tea entirely without manual work: all stages of the sales process, from ordering to making to collection, are fully automated. History Milk and sugar have been added to tea in Taiwan since the Dutch colonization of Taiwan in 1624–1662. There are two competing stories for the discovery of bubble tea. The Hanlin Tea Room of Tainan claims that bubble tea was invented in 1986 when teahouse owner Tu Tsong-he was inspired by white tapioca balls he saw in the local market of Ah-bó-liâu (鴨母寮, or Yamuliao in Mandarin). He later made tea using these traditional Taiwanese snacks, resulting in what became known as "pearl tea". Another claim for the invention of bubble tea comes from the Chun Shui Tang tea room in Taichung. Its founder, Liu Han-Chieh, began serving Chinese tea cold after he observed coffee being served cold in Japan while on a visit in the 1980s. The new style of serving tea propelled his business, and multiple chains serving this tea were established. The company's product development manager, Lin Hsiu Hui, said she created the first bubble tea in 1988 when she poured tapioca balls into her tea during a staff meeting and encouraged others to drink it. The beverage was well received at the meeting, leading to its inclusion on the menu; it ultimately became the franchise's top-selling product. Popularity In the 1990s, bubble tea spread across East and Southeast Asia with ever-growing popularity. In regions like Hong Kong, Mainland China, Japan, Vietnam, and Singapore, the bubble tea trend expanded rapidly among young people; in some popular shops, people would line up for more than thirty minutes to get a cup of the drink. In recent years the mania for bubble tea has gone beyond the beverage itself, with boba lovers inventing various bubble tea foods such as bubble tea ice cream, bubble tea pizza, bubble tea toast, bubble tea sushi, and bubble tea ramen. Taiwan In Taiwan, bubble tea has become more than a beverage: it is an enduring icon of the nation's culture and food history. In 2020, April 30 was officially declared National Bubble Tea Day in Taiwan. That same year, the image of bubble tea was proposed as an alternative cover design
as "pearl tea". Another claim for the invention of bubble tea comes from the Chun Shui Tang tea room in Taichung. Its founder, Liu Han-Chieh, began serving Chinese tea cold after she observed coffee was served cold in Japan while on a visit in the 1980s. The new style of serving tea propelled his business, and multiple chains serving this tea were established. The company's product development manager, Lin Hsiu Hui, said she created the first bubble tea in 1988 when she poured tapioca balls into her tea during a staff meeting and encouraged others to drink it. The beverage was well received by everyone at the meeting, leading to its inclusion on the menu. It ultimately became the franchise's top-selling product. Popularity In the 1990s, bubble tea spread all over East and Southeast Asia with its ever-growing popularity. In regions like Hong Kong, Mainland China, Japan, Vietnam, and Singapore, the bubble tea trend expanded rapidly among young people. In some popular shops, people would line up for more than thirty minutes to get a cup of the drink. In recent years, the mania for bubble tea has gone beyond the beverage itself, with boba lovers inventing various bubble tea food such as bubble tea ice cream, bubble tea pizza, bubble tea toast, bubble tea sushi, bubble tea ramen, etc. Taiwan In Taiwan, bubble tea has become more than a beverage, but an enduring icon of the culture and food history for the nation. In 2020, the date April 30 was officially declared as National Bubble Tea Day in Taiwan. That same year, the image of bubble tea was proposed as an alternative cover design for Taiwan's passport. According to Al Jazeera, bubble tea has become synonymous with Taiwan and is an important symbol of Taiwanese identity both domestically and internationally. Bubble tea is used to represent Taiwan in the context of the Milk Tea Alliance. In other countries and territories Hong Kong Hong Kong is famous for its traditional Hong Kong-style milk tea, which is made with brewed black tea and evaporated milk. While milk tea has long become integrated into people's daily life, the expansion of Taiwanese bubble tea chains, including Tiger Sugar, Youiccha, and Xing Fu Tang, into Hong Kong created a new wave for “boba tea”. China Since the idea of adding tapioca pearls into milk tea was introduced into China in the 1990s, bubble tea has increased its popularity. It is estimated that the consumption of bubble tea is 5 times that of coffee in the recent years. According to data from QianZhen Industry Research Institute, the value of the tea-related beverage market in China has reached 53.7 billion yuan (about $7.63 billion) in 2018. While bubble tea chains from Taiwan (e.g., Gong Cha and Coco) are still popular, more local brands, like Yi Dian Dian, Nayuki, Hey Tea, etc., are now dominating the market. In China, young people's growing obsession with bubble tea shaped their way of social interaction. Buying someone a cup of bubble tea has become a new way of thanking someone informally. It is also a favored topic among friends and on social media. Japan Bubble tea first entered Japan by the late 1990s, but it failed to leave a lasting impression on the public markets. It was not until the 2010s when the bubble tea trend finally swept Japan. Shops from Taiwan, Korea, China as well as local brands began to pop up in cities, and bubble tea has remained one of the hottest social trends since then. 
Especially among teenagers, bubble tea has become so commonplace that teenage girls in Japan invented a slang term for it: tapiru (タピる). The word is short for drinking tapioca tea in Japanese, and it won first place in a survey of "Japanese slang for middle school girls" in 2018. People were so obsessed with tapioca tea that a tapioca theme park was even built in Harajuku, Tokyo, in 2019. Singapore Known locally in Chinese as 泡泡茶 (Pinyin: pào pào chá), bubble tea is loved by many in Singapore. The drink was sold in Singapore as early as 1992 and became phenomenally popular among young people in 2001. The popularity did not last, however, owing to intense competition and a price war among shops; as a result, most bubble tea shops closed, and bubble tea had lost its popularity by 2003. When Taiwanese chains like Koi and Gong Cha came to Singapore in 2007 and 2009, the beverage experienced only short resurgences in popularity. In 2018, interest in bubble tea rose again at unprecedented speed in Singapore as new brands like The Alley and Tiger Sugar entered the market; social media also played an important role in driving this renaissance. United States In the 1990s, Taiwanese immigrants opened the first bubble tea shop in the country, Fantasia Coffee & Tea, in Cupertino, California. Chains like Tapioca Express, Quickly, Lollicup and Q-Cup emerged in the late 1990s and early 2000s, bringing the Taiwanese bubble tea trend to the US. Within the Asian American community, bubble tea is commonly known by its colloquial term "boba". As the beverage gained popularity in the US, it gradually became more than a drink: a marker of cultural identity for Asian Americans. This phenomenon was referred to as “boba life” by Chinese-American brothers Andrew and David Fung in their music video “Bobalife”, released in 2013. Boba came to symbolize a subculture through which Asian Americans, as a social minority, could define themselves, and “boba life” is a reflection of their desire for both cultural and political recognition. Other regions with large concentrations of bubble tea restaurants in the United States are the Northeast and Southwest. This is reflected in the coffeehouse-style teahouse chains that originate from these regions, such as Boba Tea Company from Albuquerque, New Mexico, No. 1 Boba Tea in Las Vegas, Nevada, and Kung Fu Tea from New York City. Albuquerque and Las Vegas have large concentrations of boba tea restaurants, as the drink is especially popular among the Hispano, Navajo, Pueblo, and other Native American, Hispanic and Latino American communities in the Southwest. A massive shipping and supply chain crisis on the U.S. West Coast, coupled with the obstruction of the Suez Canal in March 2021, caused a shortage of tapioca pearls for bubble tea shops in the U.S. and Canada; most of the tapioca consumed in the U.S. is imported from Asia, where the critical ingredient, tapioca starch, is mostly produced. Australia Individual bubble tea shops began to appear in Australia in the 1990s, along with other regional drinks like Eis Cendol. Chains of stores were established as early as 2002, when the Bubble Cup franchise opened its first store in Melbourne. Although originally associated with the rapid growth of immigration from Asia and the vast tertiary student cohort from Asia, in Melbourne and Sydney bubble tea has become
The Battle of Blenheim, fought on 13 August 1704, was a major battle of the War of the Spanish Succession. The overwhelming Allied victory ensured the safety of Vienna from the Franco-Bavarian army, thus preventing the collapse of the reconstituted Grand Alliance. Louis XIV of France sought to knock the Holy Roman Emperor, Leopold, out of the war by seizing Vienna, the Habsburg capital, and thereby gain a favourable peace settlement. The dangers to Vienna were considerable: Maximilian II Emanuel, Elector of Bavaria, and Marshal Ferdinand de Marsin's forces in Bavaria threatened from the west, and Marshal Louis Joseph de Bourbon, duc de Vendôme's large army in northern Italy posed a serious danger with a potential offensive through the Brenner Pass. Vienna was also under pressure from Rákóczi's Hungarian revolt from its eastern approaches. Realising the danger, the Duke of Marlborough resolved to relieve the threat to Vienna by marching his forces south from Bedburg to help maintain Emperor Leopold within the Grand Alliance. A combination of deception and skilled administration – designed to conceal his true destination from friend and foe alike – enabled Marlborough to march unhindered from the Low Countries to the River Danube in five weeks. After securing Donauwörth on the Danube, Marlborough sought to engage Maximilian's and Marsin's army before Marshal Camille d'Hostun, duc de Tallard, could bring reinforcements through the Black Forest. The Franco-Bavarian commanders proved reluctant to fight until their numbers were deemed sufficient, and Marlborough failed in his attempts to force an engagement. When Tallard arrived to bolster Maximilian's army, and Prince Eugene of Savoy arrived with reinforcements for the Allies, the two armies finally met on the banks of the Danube in and around the small village of Blindheim, from which the English "Blenheim" is derived. Blenheim was one of the battles that altered the course of the war, which until then had favoured the French and Spanish Bourbons. Although the battle did not win the war, it prevented a potentially devastating loss for the Grand Alliance and shifted the war's momentum, ending French plans of knocking Emperor Leopold out of the war. The French suffered catastrophic casualties in the battle, and their commander-in-chief, Tallard, was taken captive to England. Before the 1704 campaign ended, the Allies had taken Landau, and the towns of Trier and Trarbach on the Moselle, in preparation for the following year's campaign into France itself. This offensive never materialised, as the Grand Alliance's army had to depart the Moselle to defend Liège from a French counter-offensive. The war raged on for another decade before ending in 1714. Background By 1704, the War of the Spanish Succession was in its fourth year. The previous year had been one of successes for France and her allies, most particularly on the Danube, where Marshal Claude-Louis-Hector de Villars and Maximilian II Emanuel, Elector of Bavaria, had created a direct threat to Vienna, the Habsburg capital. Vienna had been saved by dissension between the two commanders, leading to Villars being replaced by the less dynamic Marshal Ferdinand de Marsin. Nevertheless, the threat was still real: Rákóczi's Hungarian revolt was threatening the Empire's eastern approaches, and Marshal Louis Joseph, Duke of Vendôme's forces threatened an invasion from northern Italy.
In the courts of Versailles and Madrid, Vienna's fall was confidently anticipated, an event which would almost certainly have led to the collapse of the reconstituted Grand Alliance. To isolate the Danube from any Allied intervention, Marshal François de Neufville, duc de Villeroi's 46,000 troops were expected to pin the 70,000 Dutch and British troops around Maastricht in the Low Countries, while General Robert Jean Antoine de Franquetot de Coigny protected Alsace against surprise with a further corps. The only forces immediately available for Vienna's defence were Prince Louis of Baden's 36,000 men stationed in the Lines of Stollhofen to watch Marshal Camille d'Hostun, duc de Tallard, at Strasbourg; and 10,000 men under Prince Eugene of Savoy south of Ulm. Both the Imperial Austrian Ambassador in London, Count Wratislaw, and the Duke of Marlborough realised the implications of the situation on the Danube. The Dutch were against any adventurous military operation as far south as the Danube and would not permit any major weakening of the forces in the Spanish Netherlands. Marlborough, realising the only way to reinforce the Austrians was by the use of secrecy and guile, set out to deceive his Dutch allies by pretending to move his troops to the Moselle – a plan approved by The Hague – but once there, he would slip the Dutch leash and link up with Austrian forces in southern Germany. Prelude Protagonists march to the Danube Marlborough's march started on 19 May from Bedburg, northwest of Cologne. The army assembled by Marlborough's brother, General Charles Churchill, consisted of 66 squadrons of cavalry, 31 battalions of infantry and 38 guns and mortars, totalling 21,000 men, 16,000 of whom were British. This force was augmented en route, and by the time it reached the Danube it numbered 40,000 men: 47 battalions and 88 squadrons. While Marlborough led this army south, the Dutch general, Henry Overkirk, Count of Nassau, maintained a defensive position in the Dutch Republic against the possibility of Villeroi mounting an attack. Marlborough had assured the Dutch that if the French were to launch an offensive he would return in good time, but he calculated that as he marched south, the French army would be drawn after him. In this assumption Marlborough proved correct: Villeroi shadowed Marlborough with 30,000 men in 60 squadrons and 42 battalions. Marlborough wrote to Godolphin: "I am very sensible that I take a great deal upon me, but should I act otherwise, the Empire would be undone ..." While the Allies were making their preparations, the French were striving to maintain and re-supply Marsin. He had been operating with Maximilian II against Prince Louis, and was somewhat isolated from France: his only lines of communication lay through the rocky passes of the Black Forest. On 14 May, Tallard brought 8,000 reinforcements and vast supplies and munitions through the difficult terrain, whilst outmanoeuvring Thüngen, the Imperial general who sought to block his path. Tallard then returned with his own force to the Rhine, once again side-stepping Thüngen's efforts to intercept him. On 26 May, Marlborough reached Coblenz, where the Moselle meets the Rhine. If he intended an attack along the Moselle his army would now have to turn west; instead it crossed to the right bank of the Rhine, and was reinforced by 5,000 waiting Hanoverians and Prussians. The French realised that there would be no campaign on the Moselle.
A second possible objective now occurred to them: an Allied incursion into Alsace and an attack on Strasbourg. Marlborough furthered this apprehension by constructing bridges across the Rhine at Philippsburg, a ruse that not only encouraged Villeroi to come to Tallard's aid in the defence of Alsace, but one that ensured the French plan to march on Vienna was delayed while they waited to see what Marlborough's army would do. Encouraged by Marlborough's promise that, if a French attack developed in the Netherlands, he would return by transferring his troops up the Rhine on barges, the Dutch States General agreed to release the Danish contingent of seven battalions and 22 squadrons as reinforcements. Marlborough reached Ladenburg, in the plain of the Neckar and the Rhine, and there halted for three days to rest his cavalry and allow the guns and infantry to close up. On 6 June he arrived at Wiesloch, south of Heidelberg. The following day, the Allied army swung away from the Rhine towards the hills of the Swabian Jura and the Danube beyond. At last Marlborough's destination was established without doubt. Strategy On 10 June, Marlborough met for the first time the President of the Imperial War Council, Prince Eugene – accompanied by Count Wratislaw – at the village of Mundelsheim, halfway between the Danube and the Rhine. By 13 June, the Imperial Field Commander, Prince Louis, had joined them in Großheppach. The three generals commanded a force of nearly 110,000 men. At this conference, it was decided that Prince Eugene would return with 28,000 men to the Lines of Stollhofen on the Rhine to watch Villeroi and Tallard and prevent them going to the aid of the Franco-Bavarian army on the Danube. Meanwhile, Marlborough's and Prince Louis's forces would combine, totalling 80,000 men, and march on the Danube to seek out Maximilian II and Marsin before they could be reinforced. Knowing Marlborough's destination, Tallard and Villeroi met at Landau in the Palatinate on 13 June to construct a plan to save Bavaria. The rigidity of the French command system was such that any variations from the original plan had to be sanctioned by Versailles. The Count of Mérode-Westerloo, commander of the Flemish troops in Tallard's army, wrote: "One thing is certain: we delayed our march from Alsace for far too long and quite inexplicably." Approval from King Louis arrived on 27 June: Tallard was to reinforce Marsin and Maximilian II on the Danube via the Black Forest, with 40 battalions and 50 squadrons; Villeroi was to pin down the Allies defending the Lines of Stollhofen, or, if the Allies should move all their forces to the Danube, he was to join with Tallard; Coigny with 8,000 men would protect Alsace. On 1 July Tallard's army of 35,000 re-crossed the Rhine at Kehl and began its march. On 22 June, Marlborough's forces linked up with Prince Louis' Imperial forces at Launsheim, having covered the distance from the Low Countries in five weeks. Thanks to a carefully planned timetable, the effects of wear and tear had been kept to a minimum. Captain Parker described the march discipline: "As we marched through the country of our Allies, commissars were appointed to furnish us with all manner of necessaries for man and horse ... the soldiers had nothing to do but pitch their tents, boil kettles and lie down to rest." In response to Marlborough's manoeuvres, Maximilian and Marsin, conscious of their numerical disadvantage with only 40,000 men, moved their forces to the entrenched camp at Dillingen on the north bank of the Danube.
Marlborough could not attack Dillingen because of a lack of siege guns – he had been unable to bring any from the Low Countries, and Prince Louis had failed to supply any, despite prior assurances that he would. The Allies needed a base for provisions and a good river crossing. Consequently, on 2 July Marlborough stormed the fortress of Schellenberg on the heights above the town of Donauwörth. Count Jean d'Arco had been sent with 12,000 men from the Franco-Bavarian camp to hold the town and grassy hill, but after a fierce battle, with heavy casualties on both sides, Schellenberg fell. This forced Donauwörth to surrender shortly afterwards. Maximilian, knowing his position at Dillingen was now untenable, took up a position behind the strong fortifications of Augsburg. Tallard's march presented a dilemma for Prince Eugene. If the Allies were not to be outnumbered on the Danube, he realised that he had either to try to cut Tallard off before he could get there, or to reinforce Marlborough. If he withdrew from the Rhine to the Danube, Villeroi might also make a move south to link up with Maximilian and Marsin. Prince Eugene compromised: leaving 12,000 troops behind to guard the Lines of Stollhofen, he marched off with the rest of his army to forestall Tallard. Lacking in numbers, Prince Eugene could not seriously disrupt Tallard's march, but the French marshal's progress was proving slow. Tallard's force had suffered considerably more than Marlborough's troops on their march – many of his cavalry horses were suffering from glanders, and the mountain passes were proving tough for the 2,000 wagonloads of provisions. Local German peasants, angry at French plundering, compounded Tallard's problems, leading Mérode-Westerloo to bemoan: "the enraged peasantry killed several thousand of our men before the army was clear of the Black Forest." At Augsburg, Maximilian was informed on 14 July that Tallard was on his way through the Black Forest. This good news bolstered his policy of inaction, further encouraging him to wait for the reinforcements. This reluctance to fight induced Marlborough to undertake a controversial policy of spoliation in Bavaria, burning buildings and crops throughout the rich lands south of the Danube. This had two aims: firstly, to put pressure on Maximilian to fight or come to terms before Tallard arrived with reinforcements; and secondly, to ruin Bavaria as a base from which the French and Bavarian armies could attack Vienna, or pursue Marlborough into Franconia if, at some stage, he had to withdraw northwards. But this destruction, coupled with a protracted siege of Rain from 9 to 16 July, caused Prince Eugene to lament "... since the Donauwörth action I cannot admire their performances", and later to conclude "If he has to go home without having achieved his objective, he will certainly be ruined." Final positioning Tallard, with 34,000 men, reached Ulm, joining with Maximilian and Marsin at Augsburg on 5 August, although Maximilian had dispersed his army in response to Marlborough's campaign of ravaging the region. Also on 5 August, Prince Eugene reached Höchstädt, riding that same night to meet with Marlborough at Schrobenhausen. Marlborough knew that another crossing point over the Danube was required in case Donauwörth fell to the enemy. So on 7 August, the first of Prince Louis' 15,000 Imperial troops left Marlborough's main force to besiege the heavily defended city of Ingolstadt, farther down the Danube, with the remainder following two days later.
With Prince Eugene's forces at Höchstädt on the north bank of the Danube, and Marlborough's at Rain on the south bank, Tallard and Maximilian debated their next move. Tallard preferred to bide his time, replenish supplies and allow Marlborough's Danube campaign to flounder in the colder autumn weather; Maximilian and Marsin, newly reinforced, were keen to push ahead. The French and Bavarian commanders eventually agreed to attack Prince Eugene's smaller force. On 9 August, the Franco-Bavarian forces began to cross to the north bank of the Danube. On 10 August, Prince Eugene sent an urgent dispatch reporting that he was falling back to Donauwörth. By a series of swift marches Marlborough concentrated his forces on Donauwörth and, by noon on 11 August, the link-up was complete. During 11 August, Tallard pushed forward from the river crossings at Dillingen. By 12 August, the Franco-Bavarian forces were encamped behind the small River Nebel near the village of Blenheim on the plain of Höchstädt. The same day, Marlborough and Prince Eugene carried out a reconnaissance of the French position from the church spire at Tapfheim, and moved their combined forces to Münster, near the French camp. A French reconnaissance under Jacques Joseph Vipart, Marquis de Silly, went forward to probe the enemy, but was driven off by Allied troops who had deployed to cover the pioneers of the advancing army, labouring to bridge the numerous streams in the area and improve the passage leading westwards to Höchstädt. Marlborough quickly moved forward two brigades under the command of Lieutenant General John Wilkes and Brigadier Archibald Rowe to secure the narrow strip of land between the Danube and the wooded Fuchsberg hill, at the Schwenningen defile. Tallard's army numbered 56,000 men and 90 guns; the army of the Grand Alliance, 52,000 men and 66 guns. Some Allied officers, acquainted with the superior numbers of the enemy and aware of their strong defensive position, remonstrated with Marlborough about the hazards of attacking; but he was resolute. Battle The battlefield The battlefield stretched for nearly four miles. The extreme right flank of the Franco-Bavarian army rested on the Danube, the undulating pine-covered hills of the Swabian Jura lay to their left. A small stream, the Nebel, fronted the French line; the ground either side of this was marshy and only fordable intermittently. The French right rested on the village of Blenheim near where the Nebel flows into the Danube; the village itself was surrounded by hedges, fences, enclosed gardens, and meadows. Between Blenheim and the village of Oberglauheim to the north-west the fields of wheat had been cut to stubble and were now ideal for the deployment of troops. From Oberglauheim to the next hamlet of Lutzingen the terrain of ditches, thickets and brambles was potentially difficult ground for the attackers. Initial manoeuvres At 02:00 on 13 August, 40 Allied cavalry squadrons were sent forward, followed at 03:00, in eight columns, by the main Allied force pushing over the River Kessel. At about 06:00 they reached Schwenningen, a short distance from Blenheim. The British and German troops who had held Schwenningen through the night joined the march, making a ninth column on the left of the army. Marlborough and Prince Eugene made their final plans.
The Allied commanders agreed that Marlborough would command 36,000 troops and attack Tallard's force of 33,000 on the left, including capturing the village of Blenheim, while Prince Eugene's 16,000 men would attack Maximilian and Marsin's combined forces of 23,000 troops on the right. If this attack was pressed hard, it was anticipated that Maximilian and Marsin would feel unable to send troops to aid Tallard on their right. Lieutenant-General John Cutts would attack Blenheim in concert with Prince Eugene's attack. With the French flanks busy, Marlborough could cross the Nebel and deliver the fatal blow to the French at their centre. The Allies would have to wait until Prince Eugene was in position before the general engagement could begin. Tallard was not anticipating an Allied attack; he had been lulled by intelligence gathered from prisoners taken by de Silly the previous day, and by his army's strong position. Tallard and his colleagues believed that Marlborough and Prince Eugene were about to retreat north-westwards towards Nördlingen; Tallard wrote a report to this effect to King Louis that morning. Signal guns were fired to bring in the foraging parties and pickets as the French and Bavarian troops drew into battle-order to face the unexpected threat. About 08:00 the French artillery on their right wing opened fire, answered by Colonel Holcroft Blood's batteries. The guns were heard by Prince Louis in his camp before Ingolstadt. An hour later Tallard, Maximilian, and Marsin climbed Blenheim's church tower to finalise their plans. It was settled that Maximilian and Marsin would hold the front from the hills to Oberglauheim, whilst Tallard would defend the ground between Oberglauheim and the Danube. The French commanders were divided as to how to utilise the Nebel. Tallard's preferred tactic was to lure the Allies across before unleashing his cavalry upon them. This was opposed by Marsin and Maximilian, who felt it better to close their infantry right up to the stream itself, so that while the enemy was struggling in the marshes, they would be caught in crossfire from Blenheim and Oberglauheim. Tallard's approach was sound if all its parts were implemented, but in the event it allowed Marlborough to cross the Nebel without serious interference and fight the battle he had planned. Deployment The Franco-Bavarian commanders deployed their forces. In the village of Lutzingen, Count Alessandro de Maffei positioned five Bavarian battalions with a great battery of 16 guns at the village's edge. In the woods to the left of Lutzingen, seven French battalions under César Armand, Marquis de Rozel, moved into place. Between Lutzingen and Oberglauheim Maximilian placed 27 squadrons of cavalry: 14 Bavarian squadrons commanded by d'Arco, with 13 more in support nearby under Baron Veit Heinrich Moritz Freiherr von Wolframsdorf. To their right stood Marsin's 40 French squadrons and 12 battalions.
The village of Oberglauheim was packed with 14 battalions, including the effective Irish Brigade known as the "Wild Geese". Six batteries of guns were ranged alongside the village. On the right of these French and Bavarian positions, between Oberglauheim and Blenheim, Tallard deployed 64 French and Walloon squadrons, 16 of which were from Marsin, supported by nine French battalions standing near the Höchstädt road. In the cornfield next to Blenheim stood three battalions from the Regiment de Roi. Nine battalions occupied the village itself, commanded by Philippe, Marquis de Clérambault. Four battalions stood to the rear and a further eleven were in reserve. These battalions were supported by Count Gabriel d'Hautefeuille's twelve squadrons of dismounted dragoons. By 11:00 Tallard, Maximilian, and Marsin were in place. Many of the Allied generals were hesitant to attack such a strong position. The Earl of Orkney later said that, "had I been asked to give my opinion, I had been against it." Prince Eugene was expected to be in position by 11:00, but due to the difficult terrain and enemy fire, progress was slow. Cutts' column – which by 10:00 had expelled the enemy from two water mills on the Nebel – had already deployed by the river against Blenheim, enduring severe fire over the next three hours from a six-gun heavy battery posted near the village. The rest of Marlborough's army, waiting in their ranks on the forward slope, were also forced to bear the cannonade from the French artillery, suffering 2,000 casualties before the attack could even start. Meanwhile, engineers repaired a stone bridge across the Nebel, and constructed five additional bridges or causeways across the marsh between Blenheim and Oberglauheim. Marlborough's anxiety was finally allayed when, just past noon, Colonel William Cadogan reported that Prince Eugene's Prussian and Danish infantry were in place – the order for the general advance was given. At 13:00, Cutts was ordered to attack the village of Blenheim whilst Prince Eugene was requested to assault Lutzingen on the Allied right flank. Blenheim Cutts ordered Rowe's brigade to attack. The English infantry rose from the edge of the Nebel and silently marched towards Blenheim. James Ferguson's Scottish brigade supported Rowe's left, and moved towards the barricades between the village and the river, defended by Hautefeuille's dragoons. As the range closed, the French fired a deadly volley. Rowe had ordered that there should be no firing from his men until he struck his sword upon the palisades, but as he stepped forward to give the signal, he fell mortally wounded. The survivors of the leading companies closed up the gaps in their ranks and rushed forward. Small parties penetrated the defences, but repeated French volleys forced the English back and inflicted heavy casualties. As the attack faltered, eight squadrons of elite Gens d'Armes, commanded by the veteran Swiss officer Zurlauben, fell on the English troops, cutting at the exposed flank of Rowe's own regiment. Wilkes' Hessian brigade, nearby in the marshy grass at the water's edge, stood firm and repulsed the Gens d'Armes with steady fire, enabling the English and Hessians to re-order and launch another attack. Although the Allies were again repulsed, these persistent attacks on Blenheim eventually bore fruit, panicking Clérambault into making the worst French error of the day.
Without consulting Tallard, Clérambault ordered his reserve battalions into the village, upsetting the balance of the French position and nullifying the French numerical superiority. "The men were so crowded in upon one another", wrote Mérode-Westerloo, "that they couldn't even fire – let alone receive or carry out any orders". Marlborough, spotting this error, now countermanded Cutts' intention to launch a third attack, and ordered him simply to contain the enemy within Blenheim; no more than 5,000 Allied soldiers were able to pen in twice the number of French infantry and dragoons. Lutzingen On the Allied right, Prince Eugene's Prussian and Danish forces were desperately fighting the numerically superior forces of Maximilian and Marsin. Leopold I, Prince of Anhalt-Dessau, led forward four brigades across the Nebel to assault the well-fortified position of Lutzingen. Here the Nebel was less of an obstacle, but the great battery positioned on the edge of the village enjoyed a good field of fire across the open ground stretching to the hamlet of Schwennenbach. As soon as the infantry crossed the stream, they were struck by Maffei's infantry and by salvoes from the Bavarian guns positioned both in front of the village and in enfilade on the wood-line to the right. Despite heavy casualties the Prussians attempted to storm the great battery, whilst the Danes attempted to drive the French infantry out of the copses beyond the village. With the infantry heavily engaged, Prince Eugene's cavalry picked its way across the Nebel. After an initial success, his first line of cavalry, under the Imperial General of Horse, Prince Maximilian of Hanover, was pressed by the second line of Marsin's cavalry and forced back across the Nebel in confusion. The exhausted French were unable to follow up their advantage, and both cavalry forces tried to regroup and reorder their ranks. Without cavalry support, and threatened with envelopment, the Prussian and Danish infantry were in turn forced to pull back across the Nebel. Panic gripped some of Prince Eugene's troops as they crossed the stream. Ten infantry colours were lost to the Bavarians, and hundreds of prisoners taken; it was only through the leadership of Prince Eugene and Prince Maximilian of Hanover that the Imperial infantry was prevented from abandoning the field. After rallying his troops near Schwennenbach – well beyond their starting point – Prince Eugene prepared to launch a second attack, led by the second-line squadrons under the Duke of Württemberg-Teck. Yet again they were caught in the murderous crossfire from the artillery in Lutzingen and Oberglauheim, and were once again thrown back in disarray. The French and Bavarians were almost as disordered as their opponents, and they too were in need of inspiration from their commander, Maximilian, who was seen " ... riding up and down, and inspiring his men with fresh courage." Anhalt-Dessau's Danish and Prussian infantry attacked a second time but could not sustain the advance without proper support. Once again they fell back across the stream. Centre and Oberglauheim Whilst these events around Blenheim and Lutzingen were taking place, Marlborough was preparing to cross the Nebel. Hulsen's brigade of Hessians and Hanoverians and the Earl of Orkney's British brigade advanced across the stream, supported by dismounted British dragoons and ten British cavalry squadrons.
This covering force allowed Charles Churchill's Dutch, British and German infantry and further cavalry units to advance and form up on the plain beyond. Marlborough arranged his infantry battalions in a novel manner with gaps sufficient to allow the cavalry to move freely between them. Marlborough ordered the formation forward. Once again Zurlauben's Gens d'Armes charged, looking to rout Henry Lumley's English cavalry who linked Cutts' column facing Blenheim with Churchill's infantry. As the elite French cavalry attacked, they were faced by five English squadrons under Colonel Francis Palmes. To the consternation of the French, the Gens d'Armes were pushed back in confusion and pursued well beyond the Maulweyer stream that flows through Blenheim. "What? Is it possible?" exclaimed Maximilian, "the gentlemen of France fleeing?" Palmes attempted to follow up his success but was repulsed by other French cavalry and musket fire from the edge of Blenheim. Nevertheless, Tallard was alarmed by the repulse of the Gens d'Armes and urgently rode across the field to
more foot from the already weakened right to replace them. As the English battalions descended the gentle slope of the Petite Gheete valley, struggling through the boggy stream, they were met by Major General de la Guiche's disciplined Walloon infantry, sent forward from around Offus. After concentrated volleys that exacted heavy casualties on the redcoats, the Walloons fell back to the ridgeline in good order. The English took some time to reform their ranks on the dry ground beyond the stream and press on up the slope towards the cottages and barricades on the ridge. The vigour of the English assault, however, was such that they threatened to break through the line of the villages and out onto the open plateau of Mont St André beyond. This was potentially dangerous for the Allied infantry, who would then be at the mercy of the Elector's Bavarian and Walloon squadrons patiently waiting on the plateau for the order to move. Although Henry Lumley's English cavalry had managed to cross the marshy ground around the Petite Gheete, it was soon evident to Marlborough that sufficient cavalry support would not be practicable and that the battle could not be won on the Allied right. The Duke therefore called off the attack against Offus and Autre-Eglise. To make sure that Orkney obeyed his order to withdraw, Marlborough sent his Quartermaster-General, Cadogan, in person with the command. Despite Orkney's protestations, Cadogan insisted on compliance and, reluctantly, Orkney gave the word for his troops to fall back to their original positions on the edge of the plateau of Jandrenouille. It is still not clear how far Orkney's advance was planned only as a feint; according to historian David Chandler, it is probably more accurate to surmise that Marlborough launched Orkney in a serious probe with a view to sounding out the possibilities of the sector. Nevertheless, the attack had served its purpose: Villeroi had given his personal attention to that wing and strengthened it with large bodies of horse and foot that ought to have been taking part in the decisive struggle south of Ramillies. Ramillies Meanwhile, the Dutch assault on Ramillies was gaining pace. Marlborough's younger brother, General of Infantry Charles Churchill, ordered four brigades of foot to attack the village. The assault consisted of 12 battalions of Dutch infantry commanded by Major Generals Schultz and Spaar; two brigades of Saxons under Count Schulenburg; a Scottish brigade in Dutch service led by the 2nd Duke of Argyle; and a small brigade of Protestant Swiss. The 20 French and Bavarian battalions in Ramillies – supported by Clare's Dragoons, Irish exiles of the Flight of the Wild Geese who fought as infantry and captured a colour from the British 3rd Regiment of Foot, and by a small brigade of Cologne and Bavarian Guards under the Marquis de Maffei – put up a determined defence, initially driving back the attackers with severe losses, a fight commemorated in the song Clare's Dragoons. Seeing that Schultz and Spaar were faltering, Marlborough now ordered Orkney's second-line British and Danish battalions (who had not been used in the assault on Offus and Autre-Eglise) to move south towards Ramillies. Shielded as they were from observation by a slight fold in the land, their commander, Brigadier-General Van Pallandt, ordered the regimental colours to be left in place on the edge of the plateau to convince their opponents they were still in their initial position.
The French thus remained oblivious to the Allies' real strength and intentions on the opposite side of the Petite Gheete as Marlborough threw his full weight against Ramillies and the open plain to the south. Villeroi, meanwhile, was still moving more reserves of infantry in the opposite direction, towards his left flank; crucially, it would be some time before the French commander noticed the subtle change in emphasis of the Allied dispositions. At around 15:30, Overkirk advanced his massed squadrons on the open plain in support of the infantry attack on Ramillies. Overkirk's squadrons – 48 Dutch, supported on their left by 21 Danish – steadily advanced towards the enemy (taking care not to prematurely tire the horses), before breaking into a trot to gain the impetus for their charge. The Marquis de Feuquières, writing after the battle, described the scene: "They advanced in four lines … As they approached they advanced their second and fourth lines into the intervals of their first and third lines; so that when they made their advance upon us, they formed only one front, without any intermediate spaces." The initial clash favoured the Dutch and Danish squadrons. The disparity of numbers – exacerbated by Villeroi's stripping of supporting infantry from the plain to reinforce his left flank – enabled Overkirk's cavalry to throw the first line of French horse back in some disorder towards their second-line squadrons. This line also came under severe pressure and, in turn, was forced back to the third line of cavalry and the few battalions still remaining on the plain. But these French horsemen were amongst the best in Louis XIV's army – the Maison du Roi, supported by four elite squadrons of Bavarian Cuirassiers. Ably led by de Guiscard, the French cavalry rallied, thrusting back the Allied squadrons in successful local counterattacks. On Overkirk's right flank, close to Ramillies, ten of his squadrons suddenly broke ranks and were scattered, riding headlong to the rear to recover their order, leaving the left flank of the Allied assault on Ramillies dangerously exposed. Notwithstanding the lack of infantry support, de Guiscard threw his cavalry forward in an attempt to split the Allied army in two. A crisis threatened the centre, but from his vantage point Marlborough was at once aware of the situation. The Allied commander now summoned the cavalry on the right wing to reinforce his centre, leaving only the English squadrons in support of Orkney. Thanks to a combination of battle-smoke and favourable terrain, his redeployment went unnoticed by Villeroi, who made no attempt to transfer any of his own 50 unused squadrons. While he waited for the fresh reinforcements to arrive, Marlborough flung himself into the mêlée, rallying some of the Dutch cavalry who were in confusion. But his personal involvement nearly led to his undoing. A number of French horsemen, recognising the Duke, came surging towards his party. Marlborough's horse tumbled and the Duke was thrown – "Milord Marlborough was rid over," wrote Orkney some time later. It was a critical moment of the battle. "Major-General Murray," recalled one eyewitness, " … seeing him fall, marched up in all haste with two Swiss battalions to save him and stop the enemy who were hewing all down in their way."
Fortunately, Marlborough's newly appointed aide-de-camp, Richard Molesworth, galloped to the rescue, mounted the Duke on his horse and made good their escape, before Murray's disciplined ranks threw back the pursuing French troopers. After a brief pause, Marlborough's equerry, Colonel Bringfield (or Bingfield), led up another of the Duke's spare horses; but while assisting him onto his mount, the unfortunate Bringfield was hit by an errant cannonball that sheared off his head. One account has it that the cannonball flew between the Captain-General's legs before hitting the colonel, whose torso fell at Marlborough's feet – a moment subsequently depicted in a lurid set of contemporary playing cards. Nevertheless, the danger passed, enabling the Duke to attend to the positioning of the cavalry reinforcements feeding down from his right flank – a change of which Villeroi remained blissfully unaware. Breakthrough The time was about 16:30, and the two armies were in close contact across the whole front: from the skirmishing in the marshes in the south, through the vast cavalry battle on the open plain and the fierce struggle for Ramillies at the centre, to the north, where, around the cottages of Offus and Autre-Eglise, Orkney and de la Guiche faced each other across the Petite Gheete, ready to renew hostilities. The arrival of the transferring squadrons now began to tip the balance in favour of the Allies. Tired and suffering a growing list of casualties, Guiscard's squadrons battling on the plain at last began to feel the effects of their numerical inferiority. After the earlier failure to hold or retake Franquenée and Taviers, Guiscard's right flank had become dangerously exposed, and a fatal gap had opened on the right of the French line. Taking advantage of this breach, Württemberg's Danish cavalry now swept forward, wheeling to penetrate the flank of the Maison du Roi, whose attention was almost entirely fixed on holding back the Dutch. Sweeping forwards, virtually without resistance, the 21 Danish squadrons reformed behind the French around the area of the Tomb of Ottomond, facing north across the plateau of Mont St André towards the exposed flank of Villeroi's army. The final Allied reinforcements for the cavalry contest to the south were at last in position; Marlborough's superiority on the left could no longer be denied, and his fast-moving plan took hold of the battlefield. Now, far too late, Villeroi tried to redeploy his 50 unused squadrons, but a desperate attempt to form line facing south, stretching from Offus to Mont St André, foundered amongst the baggage and tents of the French camp carelessly left there after the initial deployment. The Allied commander ordered his cavalry forward against the now heavily outnumbered French and Bavarian horsemen. De Guiscard's right flank, without proper infantry support, could no longer resist the onslaught; turning their horses northwards, the French broke and fled in complete disorder. Even the squadrons hastily being scrambled together by Villeroi behind Ramillies could not withstand the onslaught. "We had not got forty yards on our retreat," remembered Captain Peter Drake, an Irishman serving with the French, "when the words sauve qui peut went through the great part, if not the whole army, and put all to confusion." In Ramillies the Allied infantry, now reinforced by the English troops brought down from the north, at last broke through.
The Régiment de Picardie stood their ground but were caught between Colonel Borthwick's Scots-Dutch regiment and the English reinforcements. Borthwick was killed, as was Charles O'Brien, the Irish Viscount Clare in French service, fighting at the head of his regiment. The Marquis de Maffei attempted one last stand with his Bavarian and Cologne Guards, but in vain. Noticing a rush of horsemen fast approaching from the south, he later recalled: " … I went towards the nearest of these squadrons to instruct their officer, but instead of being listened to [I] was immediately surrounded and called upon to ask for quarter." Pursuit The roads leading north and west were choked with fugitives. Orkney now sent his English troops back across the Petite Gheete stream to once again storm Offus, where de la Guiche's infantry had begun to drift away in the confusion. To the right of the infantry, Lord John Hay's 'Scots Greys' also picked their way across the stream and charged the Régiment du Roi within Autre-Eglise. "Our dragoons," wrote John Deane, "pushing into the village … made terrible slaughter of the enemy." The Bavarian Horse Grenadiers and the Electoral Guards withdrew and formed a shield about Villeroi and the Elector, but were scattered by Lumley's cavalry. Stuck in the mass of fugitives fleeing the battlefield, the French and Bavarian commanders narrowly escaped capture by General Cornelius Wood, who, unaware of their identity, had to content himself with the seizure of two Bavarian Lieutenant-Generals. Far to the south, the remnants of de la Colonie's brigade headed in the opposite direction, towards the French-held fortress of Namur. The retreat became a rout. Individual Allied commanders drove their troops forward in pursuit, allowing their beaten enemy no chance to recover. Soon the Allied infantry could no longer keep up, but their cavalry were off the leash, heading through the gathering night for the crossings on the river Dyle. At last, however, Marlborough called a halt to the pursuit shortly after midnight near Meldert, well beyond the battlefield. "It was indeed a truly shocking sight to see the miserable remains of this mighty army," wrote Captain Drake, "… reduced to a handful." Aftermath What was left of Villeroi's army was now broken in spirit; the imbalance of the casualty figures amply demonstrates the extent of the disaster for Louis XIV's army. In addition, hundreds of French soldiers were fugitives, many of whom would never remuster to the colours. Villeroi also lost 52 artillery pieces and his entire engineer pontoon train. In the words of Marshal Villars, the French defeat at Ramillies was "the most shameful, humiliating and disastrous of routs." Town after town now succumbed to the Allies. Leuven fell on 25 May 1706; three days later, the Allies entered Brussels, the capital of the Spanish Netherlands. Marlborough realised the great opportunity created by the early victory of Ramillies: "We now have the whole summer before us," wrote the Duke from Brussels to Robert Harley, "and with the blessing of God I shall make the best use of it." Malines, Lierre, Ghent, Alost, Damme, Oudenaarde, Bruges, and on 6 June Antwerp, all subsequently fell to Marlborough's victorious army and, like Brussels, proclaimed the Austrian candidate for the Spanish throne, the Archduke Charles, as their sovereign. Villeroi was helpless to arrest the process of collapse.
When Louis XIV learnt of the disaster, he recalled Marshal Vendôme from northern Italy to take command in Flanders; but it would be weeks before the command changed hands. As news spread of the Allies' triumph, the Prussian, Hessian and Hanoverian contingents, long delayed by their respective rulers, eagerly joined the pursuit of the broken French and Bavarian forces. "This," wrote Marlborough wearily, "I take to be owing to our late success." Meanwhile, Overkirk took the port of Ostend on 4 July, thus opening a direct route to the English Channel for communication and supply, but the Allies were making scant progress against Dendermonde.
The centre was formed by the mass of Dutch, German, Protestant Swiss and Scottish infantry – perhaps 30,000 men – facing Offus and Ramillies. Also facing Ramillies, Marlborough placed a powerful battery of thirty 24-pounders, dragged into position by a team of oxen; further batteries were positioned overlooking the Petite Gheete. On their left, on the broad plain between Taviers and Ramillies – and where Marlborough thought the decisive encounter must take place – Overkirk drew up the 69 squadrons of the Dutch and Danish horse, supported by 19 battalions of Dutch infantry and two artillery pieces.

Meanwhile, Villeroi deployed his forces. In Taviers on his right, he placed two battalions of the Greder Suisse Régiment, with a smaller force forward in Franquenée; the whole position was protected by the boggy ground of the river Mehaigne, thus preventing an Allied flanking movement. In the open country between Taviers and Ramillies, he placed 82 squadrons under General de Guiscard, supported by several interleaved brigades of French, Swiss and Bavarian infantry. Along the Ramillies–Offus–Autre-Eglise ridge-line, Villeroi positioned Walloon and Bavarian infantry, supported by the Elector of Bavaria's 50 squadrons of Bavarian and Walloon cavalry placed behind on the plateau of Mont St André. Ramillies, Offus and Autre-Eglise were all packed with troops and put in a state of defence, with alleys barricaded and walls loop-holed for muskets. Villeroi also positioned powerful batteries near Ramillies. These guns (some of which were of the three-barrelled kind first seen at Elixheim the previous year) enjoyed good arcs of fire, able to fully cover the approaches of the plateau of Jandrenouille over which the Allied infantry would have to pass.

Marlborough, however, noticed several important weaknesses in the French dispositions. Tactically, it was imperative for Villeroi to occupy Taviers on his right and Autre-Eglise on his left, but by adopting this posture he had been forced to over-extend his forces. Moreover, this disposition – concave in relation to the Allied army – gave Marlborough the opportunity to form a more compact line, drawn up on a shorter front between the 'horns' of the French crescent; when the Allied blow came it would be more concentrated and carry more weight. Additionally, the Duke's disposition allowed him to transfer troops across his front far more easily than his foe could, a tactical advantage that would grow in importance as the events of the afternoon unfolded. Although Villeroi had the option of enveloping the flanks of the Allied army as they deployed on the plateau of Jandrenouille – threatening to encircle their army – the Duke correctly gauged that the characteristically cautious French commander was intent on a defensive battle along the ridge-line.

Taviers

At 13:00 the batteries went into action; a little later two Allied columns set out from the extremities of their line and attacked the flanks of the Franco-Bavarian army. To the south the Dutch Guards, under the command of Colonel Wertmüller, came forward with their two field guns to seize the hamlet of Franquenée. The small Swiss garrison in the village, shaken by the sudden onslaught and unsupported by the battalions to their rear, were soon compelled back towards the village of Taviers.
Taviers was of particular importance to the Franco-Bavarian position: it protected the otherwise unsupported flank of General de Guiscard's cavalry on the open plain, while at the same time it allowed the French infantry to pose a threat to the flanks of the Dutch and Danish squadrons as they came forward into position. But hardly had the retreating Swiss rejoined their comrades in that village when the Dutch Guards renewed their attack. The fighting amongst the alleys and cottages soon deteriorated into a fierce bayonet and clubbing mêlée, but the superior Dutch firepower soon told. The accomplished French officer, Colonel de la Colonie, standing on the plain nearby, remembered – "this village was the opening of the engagement, and the fighting there was almost as murderous as the rest of the battle put together." By about 15:00 the Swiss had been pushed out of the village into the marshes beyond. Villeroi's right flank fell into chaos and was now open and vulnerable.

Alerted to the situation, de Guiscard ordered an immediate attack with 14 squadrons of French dragoons then stationed in the rear. Two other battalions of the Greder Suisse Régiment were also sent, but the attack was poorly co-ordinated and consequently went in piecemeal. The Anglo-Dutch commanders now sent dismounted Dutch dragoons into Taviers, which, together with the Guards and their field guns, poured concentrated musketry and canister fire into the advancing French troops. Colonel d'Aubigni, leading his regiment, fell mortally wounded. As the French ranks wavered, the leading squadrons of Württemberg's Danish horse – now unhampered by enemy fire from either village – were also sent into the attack and fell upon the exposed flank of the Franco-Swiss infantry and dragoons. De la Colonie, with his Grenadiers Rouge regiment, together with the Cologne Guards who were brigaded with them, was now ordered forward from his post south of Ramillies to support the faltering counter-attack on the village. But on his arrival, all was chaos – "Scarcely had my troops got over when the dragoons and Swiss who had preceded us, came tumbling down upon my battalions in full flight … My own fellows turned about and fled along with them." De la Colonie managed to rally some of his grenadiers, together with the remnants of the French dragoons and Greder Suisse battalions, but it was an entirely peripheral operation, offering only fragile support for Villeroi's right flank.

Offus and Autre-Eglise

While the attack on Taviers went on, the Earl of Orkney launched his first line of English across the Petite Gheete in a determined attack against the barricaded villages of Offus and Autre-Eglise on the Allied right. Villeroi, posting himself near Offus, watched anxiously the redcoats' advance, mindful of the counsel he had received on 6 May from Louis XIV – "Have particular care to that part of the line which will endure the first shock of the English troops." Heeding this advice, the French commander began to transfer battalions from his centre to reinforce the left, drawing more foot from the already weakened right to replace them. As the English battalions descended the gentle slope of the Petite Gheete valley, struggling through the boggy stream, they were met by Major General de la Guiche's disciplined Walloon infantry, sent forward from around Offus. After concentrated volleys that exacted heavy casualties on the redcoats, the Walloons re-formed back on the ridgeline in good order.
The English took some time to re-form their ranks on the dry ground beyond the stream and press on up the slope towards the cottages and barricades on the ridge. The vigour of the English assault, however, was such that they threatened to break through the line of the villages and out onto the open plateau of Mont St André beyond. This was potentially dangerous for the Allied infantry, who would then be at the mercy of the Elector's Bavarian and Walloon squadrons patiently waiting on the plateau for the order to move. Although Henry Lumley's English cavalry had managed to cross the marshy ground around the Petite Gheete, it was soon evident to Marlborough that sufficient cavalry support would not be practicable and that the battle could not be won on the Allied right. The Duke, therefore, called off the attack against Offus and Autre-Eglise. To make sure that Orkney obeyed his order to withdraw, Marlborough sent his Quartermaster-General, William Cadogan, in person with the command. Despite Orkney's protestations, Cadogan insisted on compliance and, reluctantly, Orkney gave the word for his troops to fall back to their original positions on the edge of the plateau of Jandrenouille. It is still not clear how far Orkney's advance was planned only as a feint; according to the historian David Chandler, it is probably more accurate to surmise that Marlborough launched Orkney in a serious probe with a view to sounding out the possibilities of the sector. Nevertheless, the attack had served its purpose. Villeroi had given his personal attention to that wing and strengthened it with large bodies of horse and foot that ought to have been taking part in the decisive struggle south of Ramillies.

Ramillies

Meanwhile, the Dutch assault on Ramillies was gaining pace. Marlborough's younger brother, General of Infantry Charles Churchill, ordered four brigades of foot to attack the village. The assault consisted of 12 battalions of Dutch infantry commanded by Major Generals Schultz and Spaar; two brigades of Saxons under Count Schulenburg; a Scottish brigade in Dutch service led by the 2nd Duke of Argyle; and a small brigade of Protestant Swiss. The 20 French and Bavarian battalions in Ramillies, supported by Irish exiles who had left Ireland in the Flight of the Wild Geese (the men of Clare's Dragoons, who fought as infantry and captured a colour from the British 3rd Regiment of Foot, a feat commemorated in the song 'Clare's Dragoons') and by a small brigade of Cologne and Bavarian Guards under the Marquis de Maffei, put up a determined defence, initially driving back the attackers with severe losses. Seeing that Schultz and Spaar were faltering, Marlborough now ordered Orkney's second-line British and Danish battalions (who had not been used in the assault on Offus and Autre-Eglise) to move south towards Ramillies. Shielded as they were from observation by a slight fold in the land, their commander, Brigadier-General Van Pallandt, ordered the regimental colours to be left in place on the edge of the plateau to convince their opponents that they were still in their initial position. The French therefore remained oblivious to the Allies' real strength in that sector.
With Shen Lin, Kernighan devised well-known heuristics for two NP-complete optimization problems: graph partitioning and the travelling salesman problem. In a display of authorial equity, the former is usually called the Kernighan–Lin algorithm, while the latter is known as the Lin–Kernighan heuristic. Kernighan has been a Professor of Computer Science at Princeton University since 2000 and is the Director of Undergraduate Studies in the Department of Computer Science. In 2015, he co-authored the book The Go Programming Language.

Early life and education

Kernighan was born in Toronto. He attended the University of Toronto between 1960 and 1964, earning his bachelor's degree in engineering physics. He received his Ph.D. in electrical engineering from Princeton University in 1969, completing a doctoral dissertation titled "Some graph partitioning problems related to program segmentation" under the supervision of Peter G. Weiner.

Career and research

Kernighan has held a professorship in the Department of Computer Science at Princeton since 2000. Each fall he teaches a course called "Computers in Our World", which introduces the fundamentals of computing to non-majors. Kernighan was the software editor for Prentice Hall International. His "Software Tools" series spread the essence of "C/Unix thinking" with makeovers for BASIC, FORTRAN, and Pascal, and most notably his "Ratfor" (rational FORTRAN) was put in the public domain. He has said that if stranded on an island with only one programming language, it would have to be C. Kernighan coined the term "Unix" and helped popularize Thompson's Unix philosophy. Kernighan is also known as a coiner of the expression "What You See Is All You Get" (WYSIAYG), a sarcastic variant of the original "What You See Is What You Get" (WYSIWYG); Kernighan's term indicates that WYSIWYG systems might throw away information in a document that could be useful in other contexts. In 1972, Kernighan used the strings "hello" and "world" to illustrate memory management in the programming language B, and the pairing became the iconic example program we know today. Kernighan's original 1978 implementation of Hello, World! was sold at The Algorithm Auction, the world's first auction of computer algorithms. In 1996, Kernighan taught CS50, the Harvard University introductory course in computer science. Kernighan was an influence on David J. Malan, who subsequently taught the course and scaled it up to run at multiple universities and in multiple digital formats.

Kernighan was elected a member of the National Academy of Engineering in 2002 for contributions to software and to programming languages. He was also elected a member of the American Academy of Arts and Sciences in 2019. Other achievements during his career include:

The AMPL programming language
The AWK programming language, with Alfred Aho and Peter J. Weinberger, and its book The AWK Programming Language
ditroff, or "device independent troff", which allowed troff to be used with any device
The Elements of Programming Style, with P. J. Plauger
The first documented "Hello, world!" program, in Kernighan's "A Tutorial Introduction to the Language B" (1972)
Ratfor
Software
BCPL was the first brace programming language, and the braces survived the syntactical changes and have become a common means of denoting program source code statements. In practice, on the limited keyboards of the day, source programs often used the sequences $( and $) in place of the symbols { and }. The single-line // comments of BCPL, which were not adopted by C, reappeared in C++ and later in C99.

The book BCPL: The language and its compiler describes the philosophy of BCPL as follows:

History

BCPL was first implemented by Martin Richards of the University of Cambridge in 1967. BCPL was a response to difficulties with its predecessor, Cambridge Programming Language, later renamed Combined Programming Language (CPL), which was designed during the early 1960s. Richards created BCPL by "removing those features of the full language which make compilation difficult". The first compiler implementation, for the IBM 7094 under Compatible Time-Sharing System (CTSS), was written while Richards was visiting Project MAC at the Massachusetts Institute of Technology (MIT) in the spring of 1967. The language was first described in a paper presented to the 1969 Spring Joint Computer Conference.

BCPL has been rumored to have originally stood for "Bootstrap Cambridge Programming Language", but CPL was never created since development stopped at BCPL, and the acronym was later reinterpreted for the BCPL book. BCPL is the language in which the original "hello world" program was written. The first MUD was also written in BCPL (MUD1). Several operating systems were written partially or wholly in BCPL (for example, TRIPOS and the earliest versions of AmigaDOS). BCPL was also the initial language used in the seminal Xerox PARC Alto project, the first modern personal computer; among other projects, the Bravo document preparation system was written in BCPL.

An early compiler, bootstrapped in 1969 by starting with a paper tape of the O-code of Martin Richards's Atlas 2 compiler, targeted the ICT 1900 series. The two machines had different word lengths (48 vs 24 bits), different character encodings, and different packed string representations, and the successful bootstrapping increased confidence in the practicality of the method. By late 1970, implementations existed for the Honeywell 635 and Honeywell 645, the IBM 360, the PDP-10, the TX-2, the CDC 6400, the UNIVAC 1108, the PDP-9, the KDF 9 and the Atlas 2. In 1974 a dialect of BCPL was implemented at BBN without using the intermediate O-code. The initial implementation was a cross-compiler hosted on BBN's TENEX PDP-10s, and directly targeted the PDP-11s used in BBN's implementation of the second-generation IMPs used in the ARPANET. There was also a version produced for the BBC Micro in the mid-1980s by Richards Computer Products, a company started by John Richards, the brother of Dr. Martin Richards. The BBC Domesday Project made use of the language.
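The program at the root of that tradition is tiny. A minimal sketch in modern standard C (the direct descendant of BCPL and B) rather than the original notation; Kernighan's 1972 B version built the string from the word-packed constants 'hell', 'o, w' and 'orld':

    #include <stdio.h>

    /* The canonical greeting, in the form later popularized by
       The C Programming Language. */
    int main(void)
    {
        printf("hello, world\n");
        return 0;
    }

Compiled with any standard C compiler (for example, cc hello.c and then ./a.out), it prints the greeting and exits.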
Design

BCPL was designed so that small and simple compilers could be written for it; reputedly some compilers could be run in 16 kilobytes. Further, the original compiler, itself written in BCPL, was easily portable. BCPL was thus a popular choice for bootstrapping a system. A major reason for the compiler's portability lay in its structure. It was split into two parts: the front end parsed the source and generated O-code, an intermediate language. The back end took the O-code and translated it into the machine code for the target machine. Only the machine-specific back end of the compiler needed to be rewritten to support a new machine, a task that usually took between 2 and 5 man-months. This approach became common practice later (e.g. Pascal, Java).

The language is unusual in having only one data type: a word, a fixed number of bits, usually chosen to align with the architecture's machine word and of adequate capacity to represent any valid storage address. For many machines of the time, this data type was a 16-bit word. This choice later proved to be a significant problem when BCPL was used on machines in which the smallest addressable item was not a word but a byte, or on machines with larger word sizes such as 32-bit or 64-bit. The interpretation of any value was determined by the operators used to process the values. (For example, + added two values together, treating them as integers; ! indirected through a value, effectively treating it as a pointer.) In order for this to work, the implementation provided no type checking. Hungarian notation was developed to help programmers avoid inadvertent type errors.

The mismatch between BCPL's word orientation and byte-oriented hardware was addressed in several ways. One was by providing standard library routines for packing and unpacking words into byte strings. Later, two language features were added: the bit-field selection operator and the infix byte indirection operator (denoted by %).

BCPL handles bindings spanning separate compilation units in a unique way. There are no user-declarable global variables; instead there is a global vector, similar to "blank common" in Fortran. All data shared between different compilation units comprises scalars and pointers to vectors stored in a pre-arranged place in the global vector. Thus the header files (files included during compilation using the "GET" directive) become the primary means of synchronizing global data between compilation units, containing "GLOBAL" directives that present lists of symbolic names, each paired with a number that associates the name with the corresponding numerically addressed word in the global vector. As well as variables, the global vector contains bindings for external procedures. This makes dynamic loading of compilation units very simple to achieve. Instead of relying on the link loader of the underlying implementation, BCPL effectively gives the programmer control of the linking process.

The global vector also made it very simple to replace or augment standard library routines. A program could save the pointer from the global vector to the original routine and replace it with a pointer to an alternative version. The alternative might call the original as part of its processing. This could be used as a quick ad hoc debugging aid.
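The same linkage discipline can be imitated in C to make the mechanism concrete. The following is a minimal sketch, not BCPL itself: the slot number GV_WRITES and all function names are invented for illustration, standing in for the numbers a real BCPL program would agree on in its GET header.

    #include <stdio.h>

    #define GV_SIZE   200
    #define GV_WRITES 60              /* invented slot number for a "writes" routine */

    typedef void (*gv_fn)(void);      /* generic function-pointer slot type */
    typedef void (*writes_fn)(const char *);

    static gv_fn gv[GV_SIZE];         /* the shared "global vector" */

    static void writes_impl(const char *s) { fputs(s, stdout); }

    static writes_fn orig_writes;     /* saved original, used by the wrapper */

    static void traced_writes(const char *s)
    {
        fputs("[trace] ", stdout);    /* augment the routine ... */
        orig_writes(s);               /* ... then call the original */
    }

    int main(void)
    {
        /* "Linking": store the routine at its agreed numeric slot. */
        gv[GV_WRITES] = (gv_fn)writes_impl;
        ((writes_fn)gv[GV_WRITES])("hello via the global vector\n");

        /* Ad hoc debugging: swap in a wrapper that calls the original. */
        orig_writes = (writes_fn)gv[GV_WRITES];
        gv[GV_WRITES] = (gv_fn)traced_writes;
        ((writes_fn)gv[GV_WRITES])("hello again\n");
        return 0;
    }

Every compilation unit that agreed on slot 60 would see the replacement immediately, which is what made the BCPL technique so convenient for interposing on library routines.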
Rearmament

The Royal Navy, United States Navy, and Imperial Japanese Navy extensively upgraded and modernized their World War I–era battleships during the 1930s. Among the new features were an increased tower height and stability for the optical rangefinder equipment (for gunnery control), more armor (especially around turrets) to protect against plunging fire and aerial bombing, and additional anti-aircraft weapons. Some British ships received a large block superstructure nicknamed the "Queen Anne's castle", such as in and , which would be used in the new conning towers of the fast battleships. External bulges were added to improve both buoyancy to counteract weight increase and provide underwater protection against mines and torpedoes. The Japanese rebuilt all of their battleships, plus their battlecruisers, with distinctive "pagoda" structures, though the received a more modern bridge tower that would influence the new . Bulges were fitted, including steel tube arrays to improve both underwater and vertical protection along the waterline. The U.S. experimented with cage masts and later tripod masts, though after the Japanese attack on Pearl Harbor some of the most severely damaged ships (such as and ) were rebuilt with tower masts, for an appearance similar to their contemporaries. Radar, which was effective beyond visual range and effective in complete darkness or adverse weather, was introduced to supplement optical fire control.

Even when war threatened again in the late 1930s, battleship construction did not regain the level of importance it had held in the years before World War I. The "building holiday" imposed by the naval treaties meant the capacity of dockyards worldwide had shrunk, and the strategic position had changed. In Germany, the ambitious Plan Z for naval rearmament was abandoned in favor of a strategy of submarine warfare supplemented by the use of battlecruisers and commerce raiding (in particular by s). In Britain, the most pressing need was for air defenses and convoy escorts to safeguard the civilian population from bombing or starvation, and re-armament construction plans consisted of five ships of the King George V class. It was in the Mediterranean that navies remained most committed to battleship warfare. France intended to build six battleships of the Dunkerque and Richelieu classes, and the Italians four ships. Neither navy built significant aircraft carriers. The U.S. preferred to spend limited funds on aircraft carriers until the . Japan, also prioritising aircraft carriers, nevertheless began work on three mammoth Yamatos (although the third, Shinano, was later completed as a carrier) and a planned fourth was cancelled.

At the outbreak of the Spanish Civil War, the Spanish navy included only two small dreadnought battleships, España and Jaime I. España (originally named Alfonso XIII), by then in reserve at the northwestern naval base of El Ferrol, fell into Nationalist hands in July 1936. The crew aboard Jaime I remained loyal to the Republic, killed their officers, who apparently supported Franco's attempted coup, and joined the Republican Navy. Thus each side had one battleship; however, the Republican Navy generally lacked experienced officers. The Spanish battleships mainly restricted themselves to mutual blockades, convoy escort duties, and shore bombardment, rarely engaging other surface units directly. In April 1937, España ran into a mine laid by friendly forces and sank with little loss of life. In May 1937, Jaime I was damaged by Nationalist air attacks and a grounding incident. The ship was forced to go back to port to be repaired, and there she was again hit by several aerial bombs.
It was then decided to tow the battleship to a more secure port, but during the transport she suffered an internal explosion that caused 300 deaths and her total loss. Several Italian and German capital ships participated in the non-intervention blockade. On May 29, 1937, two Republican aircraft managed to bomb the German pocket battleship Deutschland outside Ibiza, causing severe damage and loss of life. The Admiral Scheer retaliated two days later by bombarding Almería, causing much destruction, and the resulting Deutschland incident meant the end of German and Italian participation in non-intervention.

World War II

The German Schleswig-Holstein, an obsolete pre-dreadnought, fired the first shots of World War II with the bombardment of the Polish garrison at Westerplatte; and the final surrender of the Japanese Empire took place aboard a United States Navy battleship, USS Missouri. Between those two events, it had become clear that aircraft carriers were the new principal ships of the fleet and that battleships now performed a secondary role. Battleships played a part in major engagements in the Atlantic, Pacific and Mediterranean theaters; in the Atlantic, the Germans used their battleships as independent commerce raiders. However, clashes between battleships were of little strategic importance. The Battle of the Atlantic was fought between destroyers and submarines, and most of the decisive fleet clashes of the Pacific war were determined by aircraft carriers.

In the first year of the war, armored warships defied predictions that aircraft would dominate naval warfare. Scharnhorst and Gneisenau surprised and sank the aircraft carrier HMS Glorious off western Norway in June 1940. This engagement marked the only time a fleet carrier was sunk by surface gunnery. In the attack on Mers-el-Kébir, British battleships opened fire on the French battleships in the harbor near Oran in Algeria with their heavy guns. The fleeing French ships were then pursued by planes from aircraft carriers.

The subsequent years of the war saw many demonstrations of the maturity of the aircraft carrier as a strategic naval weapon and its potential against battleships. The British air attack on the Italian naval base at Taranto sank one Italian battleship and damaged two more. The same Swordfish torpedo bombers played a crucial role in sinking the German battleship Bismarck. On December 7, 1941, the Japanese launched a surprise attack on Pearl Harbor. Within a short time, five of eight U.S. battleships were sunk or sinking, with the rest damaged. All three American aircraft carriers were out to sea, however, and evaded destruction. The sinking of the British battleship HMS Prince of Wales and the battlecruiser HMS Repulse demonstrated the vulnerability of a battleship to air attack while at sea without sufficient air cover, settling the argument begun by Mitchell in 1921. Both warships were under way and en route to attack the Japanese amphibious force that had invaded Malaya when they were caught by Japanese land-based bombers and torpedo bombers on December 10, 1941. At many of the early crucial battles of the Pacific, for instance Coral Sea and Midway, battleships were either absent or overshadowed as carriers launched wave after wave of planes into the attack at a range of hundreds of miles. In later battles in the Pacific, battleships primarily performed shore bombardment in support of amphibious landings and provided anti-aircraft defense as escort for the carriers.
Even the largest battleships ever constructed, Japan's Yamato class, which carried a main battery of nine 18-inch (46 cm) guns and were designed as a principal strategic weapon, were never given a chance to show their potential in the decisive battleship action that figured in Japanese pre-war planning. The last battleship confrontation in history was the Battle of Surigao Strait, on October 25, 1944, in which a numerically and technically superior American battleship group destroyed a lesser Japanese battleship group by gunfire after it had already been devastated by destroyer torpedo attacks. All but one of the American battleships in this confrontation had previously been sunk during the attack on Pearl Harbor and subsequently raised and repaired. USS Mississippi fired the last major-caliber salvo of this battle, the last salvo fired by a battleship against another heavy ship. In April 1945, during the battle for Okinawa, the world's most powerful battleship, the Yamato, was sent out on a suicide mission against a massive U.S. force and was overwhelmed and sunk by carrier aircraft, with nearly all hands lost.

Cold War

After World War II, several navies retained their existing battleships, but they were no longer strategically dominant military assets. It soon became apparent that they were no longer worth the considerable cost of construction and maintenance, and only one new battleship was commissioned after the war, HMS Vanguard. During the war it had been demonstrated that battleship-on-battleship engagements like Leyte Gulf or the sinking of were the exception and not the rule, and with the growing role of aircraft, engagement ranges were becoming longer and longer, making heavy gun armament irrelevant. The armor of a battleship was equally irrelevant in the face of a nuclear attack, as tactical missiles with a range of or more could be mounted on the Soviet and s. By the end of the 1950s, smaller vessel classes such as destroyers, which formerly offered no noteworthy opposition to battleships, were now capable of eliminating battleships from outside the range of the ship's heavy guns.

The remaining battleships met a variety of ends. and were sunk during the testing of nuclear weapons in Operation Crossroads in 1946. Both battleships proved resistant to nuclear air burst but vulnerable to underwater nuclear explosions. The Italian Giulio Cesare was taken by the Soviets as reparations and renamed Novorossiysk; she was sunk by a leftover German mine in the Black Sea on October 29, 1955. The two ships were scrapped in 1956. The French was scrapped in 1954, in 1968, and in 1970. The United Kingdom's four surviving ships were scrapped in 1957, and HMS Vanguard followed in 1960. All other surviving British battleships had been sold or broken up by 1949. The Soviet Union's was scrapped in 1953, in 1957 and (back under her original name, , since 1942) in 1956–57. Brazil's Minas Geraes was scrapped in Genoa in 1953, and her sister ship São Paulo sank during a storm in the Atlantic en route to the breakers in Italy in 1951. Argentina kept its two ships until 1956 and Chile kept Almirante Latorre (formerly HMS Canada) until 1959. The Turkish battlecruiser Yavuz (formerly SMS Goeben, launched in 1911) was scrapped in 1976 after an offer to sell her back to Germany was refused. Sweden had several small coastal-defense battleships, one of which, , survived until 1970. The Soviets scrapped four large incomplete cruisers in the late 1950s, whilst plans to build a number of new s were abandoned following the death of Joseph Stalin in 1953. The three old German battleships Hessen, Schleswig-Holstein, and Schlesien all met similar ends.
Hessen was taken over by the Soviet Union and renamed Tsel. She was scrapped in 1960. Schleswig-Holstein was renamed Borodino, and was used as a target ship until 1960. Schlesien, too, was used as a target ship. She was broken up between 1952 and 1957.

The Iowa-class battleships gained a new lease of life in the U.S. Navy as fire support ships. Radar-directed, computer-controlled gunfire could be aimed at targets with pinpoint accuracy. The U.S. recommissioned all four Iowa-class battleships for the Korean War and New Jersey for the Vietnam War. These were primarily used for shore bombardment, New Jersey firing nearly 6,000 rounds of 16-inch shells and over 14,000 rounds of 5-inch projectiles during her tour on the gunline, seven times more rounds against shore targets in Vietnam than she had fired in the Second World War.

As part of Navy Secretary John F. Lehman's effort to build a 600-ship Navy in the 1980s, and in response to the commissioning of Kirov by the Soviet Union, the United States recommissioned all four Iowa-class battleships. On several occasions, battleships were support ships in carrier battle groups, or led their own battleship battle group. These were modernized to carry Tomahawk (TLAM) missiles, with New Jersey seeing action bombarding Lebanon in 1983 and 1984, while Missouri and Wisconsin fired their 16-inch (406 mm) guns at land targets and launched missiles during Operation Desert Storm in 1991. Wisconsin served as the TLAM strike commander for the Persian Gulf, directing the sequence of launches that marked the opening of Desert Storm, firing a total of 24 TLAMs during the first two days of the campaign. The primary threat to the battleships was Iraqi shore-based surface-to-surface missiles; Missouri was targeted by two Iraqi Silkworm missiles, one of which missed while the other was intercepted by the British destroyer HMS Gloucester.

End of the battleship era

After Indiana was stricken in 1962, the four Iowa-class ships were the only battleships in commission or reserve anywhere in the world. There was an extended debate when the four Iowa ships were finally decommissioned in the early 1990s. Iowa and Wisconsin were maintained to a standard whereby they could be rapidly returned to service as fire support vessels, pending the development of a superior fire support vessel. These last two battleships were finally stricken from the U.S. Naval Vessel Register in 2006. The Military Balance and Russian Foreign Military Review state that the U.S. Navy listed one battleship in the reserve (Naval Inactive Fleet/Reserve 2nd Turn) in 2010. The Military Balance states the U.S. Navy listed no battleships in the reserve in 2014.

When the last Iowa-class ship was finally stricken from the Naval Vessel Registry, no battleships remained in service or in reserve with any navy worldwide. A number are preserved as museum ships, either afloat or in drydock. The U.S. has eight battleships on display: Massachusetts, North Carolina, Alabama, Iowa, New Jersey, Missouri, Wisconsin, and Texas. Missouri and New Jersey are museums at Pearl Harbor and Camden, New Jersey, respectively. Iowa is on display as an educational attraction at the Los Angeles Waterfront in San Pedro, California. Wisconsin now serves as a museum ship in Norfolk, Virginia. Massachusetts, which has the distinction of never having lost a man during service, is on display at the Battleship Cove naval museum in Fall River, Massachusetts. Texas, the first battleship turned into a museum, is normally on display at the San Jacinto Battleground State Historic Site, near Houston, but is currently closed for repairs. North Carolina is on display in Wilmington, North Carolina.
Alabama is on display in Mobile, Alabama. The wreck of Arizona, sunk during the Pearl Harbor attack in 1941, is designated a historical landmark and national gravesite. The wreck of Utah, also sunk during the attack, is a historic landmark. The only other 20th-century battleship on display is the Japanese pre-dreadnought Mikasa. A replica of the ironclad battleship Dingyuan was built by the Weihai Port Bureau in 2003 and is on display in Weihai, China. Former battleships that were previously used as museum ships included , SMS Tegetthoff, and SMS Erzherzog Franz Ferdinand.

Strategy and doctrine

Doctrine

Battleships were the embodiment of sea power. For Alfred Thayer Mahan and his followers, a strong navy was vital to the success of a nation, and control of the seas was vital for the projection of force on land and overseas. Mahan's theory, proposed in The Influence of Sea Power Upon History, 1660–1783, published in 1890, dictated that the role of the battleship was to sweep the enemy from the seas. While the work of escorting, blockading, and raiding might be done by cruisers or smaller vessels, the presence of the battleship was a potential threat to any convoy escorted by any vessels other than capital ships. This concept of "potential threat" can be further generalized to the mere existence (as opposed to presence) of a powerful fleet tying the opposing fleet down. This concept came to be known as a "fleet in being": an idle yet mighty fleet forcing others to spend time, resources and effort to actively guard against it.

Mahan went on to say victory could only be achieved by engagements between battleships, which came to be known as the decisive battle doctrine in some navies, while targeting merchant ships (commerce raiding or guerre de course, as posited by the Jeune École) could never succeed. Mahan was highly influential in naval and political circles throughout the age of the battleship, calling for a large fleet of the most powerful battleships possible. Mahan's work developed in the late 1880s, and by the end of the 1890s it had acquired much international influence on naval strategy; in the end, it was adopted by many major navies (notably the British, American, German, and Japanese). The strength of Mahanian opinion was important in the development of the battleship arms races, and equally important in the agreement of the Powers to limit battleship numbers in the interwar era.

The "fleet in being" suggested battleships could simply by their existence tie down superior enemy resources. This in turn was believed to be able to tip the balance of a conflict even without a battle. This suggested that even for inferior naval powers a battleship fleet could have important strategic effect.

Tactics

While the role of battleships in both World Wars reflected Mahanian doctrine, the details of battleship deployment were more complex. Unlike ships of the line, the battleships of the late 19th and early 20th centuries had significant vulnerability to torpedoes and mines, weapons that had not existed in efficient form before then and that could be used by relatively small and inexpensive craft. The Jeune École doctrine of the 1870s and 1880s recommended placing torpedo boats alongside battleships; these would hide behind the larger ships until gun-smoke obscured visibility enough for them to dart out and fire their torpedoes. While this tactic was vitiated by the development of smokeless propellant, the threat from more capable torpedo craft (later including submarines) remained.
By the 1890s, the Royal Navy had developed the first destroyers, which were initially designed to intercept and drive off any attacking torpedo boats. During the First World War and subsequently, battleships were rarely deployed without a protective screen of destroyers.

Battleship doctrine emphasized the concentration of the battlegroup. In order for this concentrated force to be able to bring its power to bear on a reluctant opponent (or to avoid an encounter with a stronger enemy fleet), battlefleets needed some means of locating enemy ships beyond horizon range. This was provided by scouting forces; at various stages battlecruisers, cruisers, destroyers, airships, submarines and aircraft were all used. (With the development of radio, direction finding and traffic analysis would come into play as well, so even shore stations, broadly speaking, joined the battlegroup.) So for most of their history, battleships operated surrounded by squadrons of destroyers and cruisers. The North Sea campaign of the First World War illustrates how, despite this support, the threat of mine and torpedo attack, and the failure to integrate or appreciate the capabilities of new techniques, seriously inhibited the operations of the Royal Navy Grand Fleet, the greatest battleship fleet of its time.

Strategic and diplomatic impact

The presence of battleships had a great psychological and diplomatic impact. Similar to possessing nuclear weapons today, the ownership of battleships served to enhance a nation's force projection. Even during the Cold War, the psychological impact of a battleship was significant. In 1946, USS Missouri was dispatched to carry home the remains of the Turkish ambassador, and her presence in Turkish and Greek waters staved off a possible Soviet thrust into the Balkan region. In September 1983, when Druze militia in Lebanon's Shouf Mountains fired upon U.S. Marine peacekeepers, the arrival of USS New Jersey stopped the firing. Gunfire from New Jersey later killed militia leaders.

Value for money

Battleships were the largest and most complex, and hence the most expensive, warships of their time; as a result, the value of investment in battleships has always been contested. As the French politician Etienne Lamy wrote in 1879, "The construction of battleships is so costly, their effectiveness so uncertain and of such short duration, that the enterprise of creating an armored fleet seems to leave fruitless the perseverance of a people". The Jeune École school of thought of the 1870s and 1880s sought alternatives to the crippling expense and debatable utility of a conventional battlefleet. It proposed what would nowadays be termed a sea denial strategy, based on fast, long-ranged cruisers for commerce raiding and torpedo boat flotillas to attack enemy ships attempting to blockade French ports. The ideas of the Jeune École were ahead of their time; it was not until the 20th century that efficient mines, torpedoes, submarines, and aircraft became available to allow similar ideas to be effectively implemented. The determination of powers such as Germany to build battlefleets with which to confront much stronger rivals has been criticized by historians, who emphasise the futility of investment in a battlefleet that has no chance of matching its opponent in an actual battle.

Former operators

China: lost its two Dingyuan-class battleships Dingyuan and Zhenyuan during the Battle of Weihaiwei in 1895.
Austria-Hungary: lost its entire navy following the collapse of the Empire at the end of World War I.
Yugoslavia: its only battleship, KB Jugoslavija, was sunk by Italian frogmen during the Raid on Pula.
: lost its entire navy upon its reintegration into the Soviet Union in 1921.
Turkey: its sole surviving battleship TCG Turgut Reis was decommissioned in 1933.
Spain: lost its two surviving España-class battleships during the Spanish Civil War, both in 1937.
Greece: lost its two Kilkis-class battleships during the German bombing of Salamis in 1941.
Germany: scuttled its two surviving pre-dreadnought battleships in 1945, during the closing months of World War II.
Japan: surrendered its sole surviving battleship, Nagato, to the United States following World War II.
Brazil: decommissioned its last battleship Minas Geraes in 1952.
: decommissioned its two battleships in 1953.
Italy: decommissioned its last two battleships in 1956.
Argentina: decommissioned its last battleship ARA Rivadavia in 1957.
Chile: decommissioned its last battleship, Almirante Latorre, in 1958.
United Kingdom: decommissioned its last battleship, HMS Vanguard, in 1960.
France: decommissioned its last battleship, Jean Bart, in 1970.
United States: decommissioned its last battleship USS Missouri in 1992. She was the last active battleship of any navy.

See also

Arsenal ship
List of battleships
List of sunken battleships
List of ships of the Second World War
List of battleships of the Second World War
of the first dreadnoughts, but she and her sister, , were not launched until 1908. Both used triple-expansion engines and had a superior layout of the main battery, dispensing with Dreadnought's wing turrets. They thus retained the same broadside, despite having two fewer guns.

Arms race

In 1897, before the revolution in design brought about by Dreadnought, the Royal Navy had 62 battleships in commission or building, a lead of 26 over France and 50 over Germany. The 1906 launching of Dreadnought prompted an arms race with major strategic consequences. Major naval powers raced to build their own dreadnoughts. Possession of modern battleships was not only seen as vital to naval power, but also, as with nuclear weapons after World War II, represented a nation's standing in the world. Germany, France, Japan, Italy, Austria, and the United States all began dreadnought programmes, while the Ottoman Empire, Argentina, Russia, Brazil, and Chile commissioned dreadnoughts to be built in British and American yards.

World War I

By virtue of geography, the Royal Navy was able to use her imposing battleship and battlecruiser fleet to impose a strict and successful naval blockade of Germany, and kept Germany's smaller battleship fleet bottled up in the North Sea: only narrow channels led to the Atlantic Ocean and these were guarded by British forces. Both sides were aware that, because of the greater number of British dreadnoughts, a full fleet engagement would be likely to result in a British victory. The German strategy was therefore to try to provoke an engagement on their terms: either to induce a part of the Grand Fleet to enter battle alone, or to fight a pitched battle near the German coastline, where friendly minefields, torpedo-boats and submarines could be used to even the odds. This did not happen, however, due in large part to the necessity of keeping submarines for the Atlantic campaign.

Submarines were the only vessels in the Imperial German Navy able to break out and raid British commerce in force, but even though they sank many merchant ships, they could not successfully counter-blockade the United Kingdom; the Royal Navy successfully adopted convoy tactics to combat Germany's submarine counter-blockade and eventually defeated it. This was in stark contrast to Britain's successful blockade of Germany. The first two years of war saw the Royal Navy's battleships and battlecruisers regularly "sweep" the North Sea, making sure that no German ships could get in or out. Only a few German surface ships that were already at sea, such as the famous light cruiser Emden, were able to raid commerce. Even some of those that did manage to get out were hunted down by battlecruisers, as in the Battle of the Falklands, December 7, 1914. The results of sweeping actions in the North Sea were battles including the Heligoland Bight and Dogger Bank and German raids on the English coast, all of which were attempts by the Germans to lure out portions of the Grand Fleet in an attempt to defeat the Royal Navy in detail. On May 31, 1916, a further attempt to draw British ships into battle on German terms resulted in a clash of the battlefleets in the Battle of Jutland. The German fleet withdrew to port after two short encounters with the British fleet. Less than two months later, the Germans once again attempted to draw portions of the Grand Fleet into battle. The resulting Action of 19 August 1916 proved inconclusive. This reinforced German determination not to engage in a fleet-to-fleet battle.
In the other naval theatres there were no decisive pitched battles. In the Black Sea, engagement between Russian and Ottoman battleships was restricted to skirmishes. In the Baltic Sea, action was largely limited to the raiding of convoys and the laying of defensive minefields; the only significant clash of battleship squadrons there was the Battle of Moon Sound, at which one Russian pre-dreadnought was lost. The Adriatic was in a sense the mirror of the North Sea: the Austro-Hungarian dreadnought fleet remained bottled up by the British and French blockade. And in the Mediterranean, the most important use of battleships was in support of the amphibious assault on Gallipoli.

In September 1914, the threat posed to surface ships by German U-boats was confirmed by successful attacks on British cruisers, including the sinking of three British armored cruisers by the German submarine U-9 in less than an hour. The British super-dreadnought HMS Audacious soon followed suit, striking a mine laid by a German U-boat in October 1914 and sinking. The threat that German U-boats posed to British dreadnoughts was enough to cause the Royal Navy to change their strategy and tactics in the North Sea to reduce the risk of U-boat attack. Further near-misses from submarine attacks on battleships and casualties amongst cruisers led to growing concern in the Royal Navy about the vulnerability of battleships. As the war wore on, however, it turned out that whilst submarines did prove to be a very dangerous threat to older pre-dreadnought battleships, as shown by the sinking of the Ottoman Mesûdiye, caught in the Dardanelles by a British submarine, and of Triumph and Majestic, torpedoed by U-21, among others, the threat posed to dreadnought battleships proved to have been largely a false alarm. HMS Audacious turned out to be the only dreadnought sunk by a submarine in World War I. While battleships were never intended for anti-submarine warfare, there was one instance of a submarine being sunk by a dreadnought battleship. HMS Dreadnought rammed and sank the German submarine U-29 on March 18, 1915, off the Moray Firth.

Whilst the escape of the German fleet from the superior British firepower at Jutland was effected by the German cruisers and destroyers successfully turning away the British battleships, the German attempt to rely on U-boat attacks on the British fleet failed. Torpedo boats did have some successes against battleships in World War I, as demonstrated by the sinking of the British pre-dreadnought Goliath by the Turkish destroyer Muâvenet-i Millîye during the Dardanelles Campaign and the destruction of the Austro-Hungarian dreadnought Szent István by Italian motor torpedo boats in June 1918. In large fleet actions, however, destroyers and torpedo boats were usually unable to get close enough to the battleships to damage them. The only battleship sunk in a fleet action by either torpedo boats or destroyers was the obsolescent German pre-dreadnought Pommern. She was sunk by destroyers during the night phase of the Battle of Jutland. The German High Seas Fleet, for their part, were determined not to engage the British without the assistance of submarines; and since the submarines were needed more for raiding commercial traffic, the fleet stayed in port for much of the war.

Inter-war period

For many years, Germany simply had no battleships. The Armistice with Germany required that most of the High Seas Fleet be disarmed and interned in a neutral port; largely because no neutral port could be found, the ships remained in British custody in Scapa Flow, Scotland.
The Treaty of Versailles specified that the ships should be handed over to the British. Instead, most of them were scuttled by their German crews on June 21, 1919, just before the signature of the peace treaty. The treaty also limited the German Navy, and prevented Germany from building or possessing any capital ships.

The inter-war period saw the battleship subjected to strict international limitations to prevent a costly arms race breaking out. While the victors were not limited by the Treaty of Versailles, many of the major naval powers were crippled after the war. Faced with the prospect of a naval arms race against the United Kingdom and Japan, which would in turn have led to a possible Pacific war, the United States was keen to conclude the Washington Naval Treaty of 1922. This treaty limited the number and size of battleships that each major nation could possess, and required Britain to accept parity with the U.S. and to abandon the British alliance with Japan. The Washington treaty was followed by a series of other naval treaties, including the First Geneva Naval Conference (1927), the First London Naval Treaty (1930), the Second Geneva Naval Conference (1932), and finally the Second London Naval Treaty (1936), which all set limits on major warships. These treaties became effectively obsolete on September 1, 1939, at the beginning of World War II, but the ship classifications that had been agreed upon still apply. The treaty limitations meant that fewer new battleships were launched in 1919–1939 than in 1905–1914. The treaties also inhibited development by imposing upper limits on the weights of ships. Designs like the projected British , the first American , and the Japanese , all of which continued the trend to larger ships with bigger guns and thicker armor, never got off the drawing board. Those designs which were commissioned during this period were referred to as treaty battleships.

Rise of air power

As early as 1914, the British Admiral Percy Scott predicted that battleships would soon be made irrelevant by aircraft. By the end of World War I, aircraft had successfully adopted the torpedo as a weapon. In 1921 the Italian general and air theorist Giulio Douhet completed a hugely influential treatise on strategic bombing titled The Command of the Air, which foresaw the dominance of air power over naval units.

In the 1920s, General Billy Mitchell of the United States Army Air Corps, believing that air forces had rendered navies around the world obsolete, testified in front of Congress that "1,000 bombardment airplanes can be built and operated for about the price of one battleship" and that a squadron of these bombers could sink a battleship, making for more efficient use of government funds. This infuriated the U.S. Navy, but Mitchell was nevertheless allowed to conduct a careful series of bombing tests alongside Navy and Marine bombers. In 1921, he bombed and sank numerous ships, including the "unsinkable" German World War I battleship Ostfriesland and the American pre-dreadnought . Although Mitchell had required "war-time conditions", the ships sunk were obsolete, stationary, defenseless and had no damage control. The sinking of Ostfriesland was accomplished by violating an agreement that would have allowed Navy engineers to examine the effects of various munitions: Mitchell's airmen disregarded the rules, and sank the ship within minutes in a coordinated attack.
The stunt made headlines, and Mitchell declared, "No surface vessels can exist wherever air forces acting from land bases are able to attack them." While far from conclusive, Mitchell's test was significant because it put the proponents of the battleship on the back foot in their argument against naval aviation. Rear Admiral William A. Moffett used public relations against Mitchell to make headway toward expansion of the U.S. Navy's nascent aircraft carrier program.
Thus each side had one battleship; however, the Republican Navy generally lacked experienced officers. The Spanish battleships mainly restricted themselves to mutual blockades, convoy escort duties, and shore bombardment, and rarely engaged other surface units directly. In April 1937, España ran into a mine laid by friendly forces, and sank with little loss of life. In May 1937, Jaime I was damaged by Nationalist air attacks and a grounding incident. The ship was forced to go back to port to be repaired, where she was again hit by several aerial bombs. It was then decided to tow the battleship to a more secure port, but during the transport she suffered an internal explosion that caused 300 deaths and her total loss. Several Italian and German capital ships participated in the non-intervention blockade. On May 29, 1937, two Republican aircraft managed to bomb the German pocket battleship Deutschland outside Ibiza, causing severe damage and loss of life. Admiral Scheer retaliated two days later by bombarding Almería, causing much destruction, and the resulting Deutschland incident meant the end of German and Italian participation in non-intervention.

World War II

The German Schleswig-Holstein—an obsolete pre-dreadnought—fired the first shots of World War II with the bombardment of the Polish garrison at Westerplatte; the final surrender of the Japanese Empire took place aboard a United States Navy battleship, Missouri. Between those two events, it had become clear that aircraft carriers were the new principal ships of the fleet and that battleships now performed a secondary role.

Battleships played a part in major engagements in the Atlantic, Pacific, and Mediterranean theaters; in the Atlantic, the Germans used their battleships as independent commerce raiders. However, clashes between battleships were of little strategic importance. The Battle of the Atlantic was fought between destroyers and submarines, and most of the decisive fleet clashes of the Pacific war were determined by aircraft carriers.

In the first year of the war, armored warships defied predictions that aircraft would dominate naval warfare. Scharnhorst and Gneisenau surprised and sank the aircraft carrier Glorious off western Norway in June 1940, the only time a fleet carrier was sunk by surface gunnery. In the attack on Mers-el-Kébir, British battleships opened fire with their heavy guns on the French battleships in the harbor near Oran in Algeria; the fleeing French ships were then pursued by planes from aircraft carriers.

The subsequent years of the war saw many demonstrations of the maturity of the aircraft carrier as a strategic naval weapon and its potential against battleships. The British air attack on the Italian naval base at Taranto sank one Italian battleship and damaged two more. The same Swordfish torpedo bombers played a crucial role in sinking the German battleship Bismarck.

On December 7, 1941, the Japanese launched a surprise attack on Pearl Harbor. Within a short time, five of eight U.S. battleships were sunk or sinking, with the rest damaged. All three American aircraft carriers were out to sea, however, and evaded destruction. The sinking of the British battleship Prince of Wales and the battlecruiser Repulse demonstrated the vulnerability of battleships to air attack while at sea without sufficient air cover, settling the argument begun by Mitchell in 1921. Both warships were under way and en route to attack the Japanese amphibious force that had invaded Malaya when they were caught by Japanese land-based bombers and torpedo bombers on December 10, 1941.
In many of the early crucial battles of the Pacific, for instance Coral Sea and Midway, battleships were either absent or overshadowed as carriers launched wave after wave of planes into the attack at a range of hundreds of miles. In later battles in the Pacific, battleships primarily performed shore bombardment in support of amphibious landings and provided anti-aircraft defense as escorts for the carriers. Even the largest battleships ever constructed, Japan's Yamato class, which carried a main battery of nine 18-inch (46 cm) guns and were designed as a principal strategic weapon, were never given a chance to show their potential in the decisive battleship action that figured in Japanese pre-war planning.

The last battleship confrontation in history was the Battle of Surigao Strait, on October 25, 1944, in which a numerically and technically superior American battleship group destroyed a lesser Japanese battleship group by gunfire after it had already been devastated by destroyer torpedo attacks. All but one of the American battleships in this confrontation had previously been sunk or damaged during the attack on Pearl Harbor and subsequently repaired and returned to service. Mississippi fired the last major-caliber salvo of this battle, the last salvo fired by a battleship against another heavy ship. In April 1945, during the battle for Okinawa, the world's most powerful battleship, the Yamato, was sent out on a suicide mission against a massive U.S. force and sunk by overwhelming carrier air attack, with nearly all hands lost.

Cold War

After World War II, several navies retained their existing battleships, but they were no longer strategically dominant military assets. It soon became apparent that they were no longer worth the considerable cost of construction and maintenance, and only one new battleship was commissioned after the war, HMS Vanguard. During the war it had been demonstrated that battleship-on-battleship engagements like Leyte Gulf or the sinking of Hood were the exception and not the rule, and with the growing role of aircraft, engagement ranges were becoming longer and longer, making heavy gun armament irrelevant. The armor of a battleship was equally irrelevant in the face of a nuclear attack, as long-range tactical missiles could be mounted on far smaller Soviet vessels. By the end of the 1950s, smaller vessel classes such as destroyers, which formerly offered no noteworthy opposition to battleships, were capable of destroying battleships from beyond the range of the battleship's heavy guns.

The remaining battleships met a variety of ends. Arkansas and Nagato were sunk during the testing of nuclear weapons in Operation Crossroads in 1946. Both battleships proved resistant to nuclear air burst but vulnerable to underwater nuclear explosions. The Italian Giulio Cesare was taken by the Soviets as reparations and renamed Novorossiysk; she was sunk by a leftover German mine in the Black Sea on October 29, 1955. Italy's two remaining battleships were scrapped in 1956. The French Lorraine was scrapped in 1954, Richelieu in 1968, and Jean Bart in 1970. The United Kingdom's four surviving King George V-class ships were scrapped in 1957, and Vanguard followed in 1960. All other surviving British battleships had been sold or broken up by 1949. The Soviet Union's three remaining elderly battleships were scrapped between 1953 and 1957. Brazil's Minas Geraes was scrapped in Genoa in 1953, and her sister ship São Paulo sank during a storm in the Atlantic en route to the breakers in Italy in 1951. Argentina kept its two Rivadavia-class ships until 1956, and Chile kept Almirante Latorre (formerly HMS Canada) until 1959.
The Turkish battlecruiser Yavuz (formerly the German Goeben, launched in 1911) was scrapped in 1976 after an offer to sell her back to Germany was refused. Sweden had several small coastal-defense battleships, one of which survived until 1970. The Soviets scrapped four large incomplete cruisers in the late 1950s, whilst plans to build a number of new Stalingrad-class battlecruisers were abandoned following the death of Joseph Stalin in 1953. The three old German battleships Hessen, Schleswig-Holstein, and Schlesien all met similar ends. Hessen was taken over by the Soviet Union and renamed Tsel; she was scrapped in 1960. Schleswig-Holstein was renamed Borodino and used as a target ship until 1960. Schlesien, too, was used as a target ship; she was broken up between 1952 and 1957.

The Iowa-class battleships gained a new lease of life in the U.S. Navy as fire support ships, their radar- and computer-controlled gunfire able to be aimed at targets with pinpoint accuracy. The U.S. recommissioned all four Iowa-class battleships for the Korean War and New Jersey for the Vietnam War. These were primarily used for shore bombardment, New Jersey firing nearly 6,000 rounds of 16-inch shell and over 14,000 rounds of 5-inch projectiles during her tour on the gunline—seven times more rounds against shore targets in Vietnam than she had fired in the Second World War.

As part of Navy Secretary John F. Lehman's effort to build a 600-ship Navy in the 1980s, and in response to the commissioning of Kirov by the Soviet Union, the United States recommissioned all four Iowa-class battleships. On several occasions, battleships were support ships in carrier battle groups, or led their own battleship battle group. These were modernized to carry Tomahawk land-attack missiles (TLAMs), with New Jersey seeing action bombarding Lebanon in 1983 and 1984, while Missouri and Wisconsin fired their 16-inch (406 mm) guns at land targets and launched missiles during Operation Desert Storm in 1991. Wisconsin served as the TLAM strike commander for the Persian Gulf, directing the sequence of launches that marked the opening of Desert Storm and firing a total of 24 TLAMs during the first two days of the campaign. The primary threat to the battleships was Iraqi shore-based surface-to-surface missiles; Missouri was targeted by two Iraqi Silkworm missiles, one missing and the other intercepted by the British destroyer Gloucester.

End of the battleship era

After Indiana was stricken in 1962, the four Iowa-class ships were the only battleships in commission or reserve anywhere in the world. There was extended debate when the four Iowa ships were finally decommissioned in the early 1990s. Iowa and Wisconsin were maintained to a standard whereby they could be rapidly returned to service as fire support vessels, pending the development of a superior fire support vessel. These last two battleships were finally stricken from the U.S. Naval Vessel Register in 2006. The Military Balance and Russian Foreign Military Review state that the U.S. Navy listed one battleship in the reserve (Naval Inactive Fleet/Reserve 2nd Turn) in 2010; The Military Balance states the U.S. Navy listed no battleships in the reserve in 2014. When the last Iowa-class ship was finally stricken from the Naval Vessel Register, no battleships remained in service or in reserve with any navy worldwide. A number are preserved as museum ships, either afloat or in drydock. The U.S. has eight battleships on display: Massachusetts, North Carolina, Alabama, Iowa, New Jersey, Missouri, Wisconsin, and Texas. Missouri and New Jersey are museums at Pearl Harbor and Camden, New Jersey, respectively. Iowa is on display as an educational attraction at the Port of Los Angeles in San Pedro, California.
In Grímnismál, Grímnir (Odin in disguise) states that Bilröst is the best of bridges. Later in Grímnismál, Grímnir notes that Asbrú "burns all with flames" and that, every day, the god Thor wades through the waters of Körmt and Örmt and the two Kerlaugar. In Fáfnismál, the dying wyrm Fafnir tells the hero Sigurd that, during the events of Ragnarök, bearing spears, gods will meet at Óskópnir. From there, the gods will cross Bilröst, which will break apart as they cross over it, causing their horses to dredge through an immense river.

Prose Edda

The bridge is mentioned in the Prose Edda books Gylfaginning and Skáldskaparmál, where it is referred to as Bifröst. In chapter 13 of Gylfaginning, Gangleri (King Gylfi in disguise) asks the enthroned figure of High what way exists between heaven and earth. Laughing, High replies that the question isn't an intelligent one, and goes on to explain that the gods built a bridge from heaven to earth. He incredulously asks Gangleri if he has not heard the story before. High says that Gangleri must have seen it, and notes that Gangleri may call it a rainbow. High says that the bridge consists of three colors, has great strength, "and is built with art and skill to a greater extent than other constructions." High notes that, although the bridge is strong, it will break when "Muspell's lads" attempt to cross it, and their horses will have to make do with swimming over "great rivers." Gangleri says that it doesn't seem that the gods "built the bridge in good faith if it is liable to break, considering that they can do as they please." High responds that the gods do not deserve blame for the breaking of the bridge, for "there is nothing in this world that will be secure when Muspell's sons attack."

In chapter 15 of Gylfaginning, Just-As-High says that Bifröst is also called Asbrú, and that every day the gods ride their horses across it (with the exception of Thor, who instead wades through the boiling waters of the rivers Körmt and Örmt) to reach Urðarbrunnr, a holy well where the gods have their court. As a reference, Just-As-High quotes the second of the two stanzas in Grímnismál that mention the bridge (see above). Gangleri asks if fire burns over Bifröst. High says that the red in the bridge is burning fire, and that, without it, the frost jotnar and mountain jotnar would "go up into heaven" if anyone who wanted could cross Bifröst. High adds that, in heaven, "there are many beautiful places" and that "everywhere there has divine protection around it."

In chapter 17, High tells Gangleri that the location of Himinbjörg "stands at the edge of heaven where Bifrost reaches heaven." While describing the god Heimdallr in chapter 27, High says that Heimdallr lives in Himinbjörg by Bifröst, and guards the bridge from mountain jotnar while sitting at the edge of heaven. In chapter 34, High quotes the first of the two Grímnismál stanzas that mention the bridge. In chapter 51, High foretells the events of Ragnarök. High says that, during Ragnarök, the sky will split open, and from the split will ride forth the "sons of Muspell". When the "sons of Muspell" ride over Bifröst it will break, "as was said above."

In the Prose Edda book Skáldskaparmál, the bridge receives a single mention. In chapter 16, a work by the 10th century skald Úlfr Uggason is quoted, in which Bifröst is referred to as "the powers' way."

Theories

In his translation of the Poetic Edda, Henry Adams Bellows comments that the Grímnismál stanza mentioning Thor and the bridge may mean that "Thor has to go on foot in the last days of the destruction, when the bridge is burning. Another interpretation, however, is that when Thor leaves the heavens
The focus on the Baltic was probably unimportant at the time the ships were designed, but was inflated later, after the disastrous Dardanelles Campaign. The final British battlecruiser design of the war was the Admiral class, which was born from a requirement for an improved version of the Queen Elizabeth battleship. The project began at the end of 1915, after Fisher's final departure from the Admiralty. While the ship was initially envisaged as a battleship, senior sea officers felt that Britain had enough battleships, but that new battlecruisers might be required to combat German ships being built (the British overestimated German progress on the Mackensen class as well as their likely capabilities). A battlecruiser design with eight 15-inch guns, 8 inches of armour and capable of 32 knots was decided on. The experience of battlecruisers at the Battle of Jutland meant that the design was radically revised and transformed again into a fast battleship with armour up to 12 inches thick, but one still capable of high speed. The first ship in the class, Hood, was built according to this design to counter the possible completion of any of the Mackensen-class ships. The plans for her three sisters, on which little work had been done, were revised once more later in 1916 and in 1917 to improve protection. The Admiral class would have been the only British ships capable of taking on the German Mackensen class; nevertheless, German shipbuilding was drastically slowed by the war, and while two Mackensens were launched, none were ever completed. The Germans also worked briefly on a further three ships, the Ersatz Yorck class, which were modified versions of the Mackensens with 15-inch guns. Work on the three additional Admirals was suspended in March 1917 to enable more escorts and merchant ships to be built to deal with the new threat from U-boats to trade. They were finally cancelled in February 1919.

Battlecruisers in action

The first combat involving battlecruisers during World War I was the Battle of Heligoland Bight in August 1914. A force of British light cruisers and destroyers entered the Heligoland Bight (the part of the North Sea closest to Hamburg) to attack German destroyer patrols. When they met opposition from light cruisers, Vice Admiral David Beatty took his squadron of five battlecruisers into the Bight and turned the tide of the battle, ultimately sinking three German light cruisers and killing their commander, Rear Admiral Leberecht Maass.

The German battlecruiser Goeben perhaps made the most impact early in the war. Stationed in the Mediterranean, she and the escorting light cruiser Breslau evaded British and French ships on the outbreak of war, and steamed to Constantinople (Istanbul) with two British battlecruisers in hot pursuit. The two German ships were handed over to the Ottoman Navy, and this was instrumental in bringing the Ottoman Empire into the war as one of the Central Powers. Goeben herself, renamed Yavuz Sultan Selim, fought engagements against the Imperial Russian Navy in the Black Sea before being knocked out of the action for the remainder of the war after the Battle of Imbros against British forces in the Aegean Sea in January 1918.

The original battlecruiser concept proved successful in December 1914 at the Battle of the Falkland Islands. The British battlecruisers Invincible and Inflexible did precisely the job for which they were intended when they chased down and annihilated the German East Asia Squadron—centered on the armoured cruisers Scharnhorst and Gneisenau, along with three light cruisers, and commanded by Admiral Maximilian Graf von Spee—in the South Atlantic Ocean.
Prior to the battle, the Australian battlecruiser Australia had unsuccessfully searched for the German ships in the Pacific.

During the Battle of Dogger Bank in 1915, the aftermost barbette of the German flagship Seydlitz was struck by a British 13.5-inch shell from HMS Lion. The shell did not penetrate the barbette, but it dislodged a piece of the barbette armour that allowed the flame from the shell's detonation to enter the barbette. The propellant charges being hoisted upwards were ignited, and the fireball flashed up into the turret and down into the magazine, setting fire to charges removed from their brass cartridge cases. The gun crew tried to escape into the next turret, which allowed the flash to spread into that turret as well, killing the crews of both turrets. Seydlitz was saved from near-certain destruction only by the emergency flooding of her after magazines, which had been effected by Wilhelm Heidkamp. This near-disaster was due to the way ammunition handling was arranged, a practice common to both German and British battleships and battlecruisers, but the lighter protection of the latter made them more vulnerable to the turret or barbette being penetrated. The Germans learned from investigating the damaged Seydlitz and instituted measures to ensure that ammunition handling minimised any possible exposure to flash.

Apart from the cordite handling, the battle was mostly inconclusive, though both the British flagship Lion and Seydlitz were severely damaged. Lion lost speed, causing her to fall behind the rest of the battleline, and Beatty was unable to effectively command his ships for the remainder of the engagement. A British signalling error allowed the German battlecruisers to withdraw, as most of Beatty's squadron mistakenly concentrated on the crippled armoured cruiser Blücher, sinking her with great loss of life. The British blamed their failure to win a decisive victory on their poor gunnery and attempted to increase their rate of fire by stockpiling unprotected cordite charges in their ammunition hoists and barbettes.

At the Battle of Jutland on 31 May 1916, both British and German battlecruisers were employed as fleet units. The British battlecruisers became engaged with both their German counterparts, the battlecruisers, and then German battleships before the arrival of the battleships of the British Grand Fleet. The result was a disaster for the Royal Navy's battlecruiser squadrons: Invincible, Queen Mary, and Indefatigable exploded with the loss of all but a handful of their crews. The exact reason why the ships' magazines detonated is not known, but the plethora of exposed cordite charges stored in their turrets, ammunition hoists and working chambers in the quest to increase their rate of fire undoubtedly contributed to their loss. Beatty's flagship Lion was almost lost in a similar manner, save for the heroic actions of Major Francis Harvey.

The better-armoured German battlecruisers fared better, in part due to the poor performance of British fuzes (the British shells tended to explode or break up on impact with the German armour). Lützow—the only German battlecruiser lost at Jutland—had only 128 killed, for instance, despite receiving more than thirty hits. The other German battlecruisers, Moltke, Von der Tann, Seydlitz, and Derfflinger, were all heavily damaged and required extensive repairs after the battle, Seydlitz barely making it home, for they had been the focus of British fire for much of the battle.
Interwar period

In the years immediately after World War I, Britain, Japan and the US all began design work on a new generation of ever more powerful battleships and battlecruisers. The new burst of shipbuilding that each nation's navy desired was politically controversial and potentially economically crippling. This nascent arms race was prevented by the Washington Naval Treaty of 1922, where the major naval powers agreed to limits on capital ship numbers. The German navy was not represented at the talks; under the terms of the Treaty of Versailles, Germany was not allowed any modern capital ships at all. Through the 1920s and 1930s only Britain and Japan retained battlecruisers, often modified and rebuilt from their original designs. The line between the battlecruiser and the modern fast battleship became blurred; indeed, the Japanese Kongōs were formally redesignated as battleships after their very comprehensive reconstruction in the 1930s.

Plans in the aftermath of World War I

Hood, launched in 1918, was the last World War I battlecruiser to be completed. Owing to lessons from Jutland, the ship was modified during construction; the thickness of her belt armour was increased by an average of 50 percent and extended substantially, she was given heavier deck armour, and the protection of her magazines was improved to guard against the ignition of ammunition. Her armour was intended to be capable of resisting her own weapons—the classic measure of a "balanced" battleship. Hood was the largest ship in the Royal Navy when completed; thanks to her great displacement, in theory she combined the firepower and armour of a battleship with the speed of a battlecruiser, causing some to refer to her as a fast battleship. However, her protection was markedly less than that of the British battleships built immediately after World War I, the Nelson class.

The navies of Japan and the United States, not being affected immediately by the war, had time to develop new heavy guns for their latest designs and to refine their battlecruiser designs in light of combat experience in Europe. The Imperial Japanese Navy began four Amagi-class battlecruisers. These vessels would have been of unprecedented size and power, as fast and well armoured as Hood whilst carrying a main battery of ten 16-inch guns, the most powerful armament ever proposed for a battlecruiser. They were, for all intents and purposes, fast battleships—the only differences between them and the Tosa-class battleships which were to precede them were less side armour and an increase in speed. The United States Navy, which had worked on its battlecruiser designs since 1913 and watched the latest developments in this class with great care, responded with the Lexington class. If completed as planned, they would have been exceptionally fast and well armed with eight 16-inch guns, but carried armour little better than the Invincibles—this after an increase in protection following Jutland. The final stage in the post-war battlecruiser race came with the British response to the Amagi and Lexington types: four G3 battlecruisers. Royal Navy documents of the period often described any particularly fast battleship as a battlecruiser, regardless of the amount of protective armour, although the G3 was considered by most to be a well-balanced fast battleship.

The Washington Naval Treaty meant that none of these designs came to fruition. Ships that had been started were either broken up on the slipway or converted to aircraft carriers. In Japan, Amagi and Akagi were selected for conversion.
Amagi was damaged beyond repair by the 1923 Great Kantō earthquake and was broken up for scrap; the hull of one of the proposed Tosa-class battleships, Kaga, was converted in her stead. The United States Navy also converted two battlecruiser hulls into aircraft carriers in the wake of the Washington Treaty, Lexington and Saratoga, although this was only considered marginally preferable to scrapping the hulls outright (the remaining four—Constellation, Ranger, Constitution and United States—were scrapped). In Britain, Fisher's "large light cruisers" were converted to carriers. Furious had already been partially converted during the war, and Glorious and Courageous were similarly converted.

Rebuilding programmes

In total, nine battlecruisers survived the Washington Naval Treaty, although HMS Tiger later became a victim of the London Naval Treaty of 1930 and was scrapped. Because their high speed made them valuable surface units in spite of their weaknesses, most of these ships were significantly updated before World War II. Renown and Repulse were modernized significantly in the 1920s and 1930s. Between 1934 and 1936, Repulse was partially modernized: her bridge was modified, an aircraft hangar, catapult and new gunnery equipment were added, and her anti-aircraft armament was increased. Renown underwent a more thorough reconstruction between 1937 and 1939. Her deck armour was increased, new turbines and boilers were fitted, an aircraft hangar and catapult were added, and she was completely rearmed aside from the main guns, which had their elevation increased to +30 degrees. The bridge structure was also removed and a large bridge similar to that used in the King George V-class battleships installed in its place. While conversions of this kind generally added weight to the vessel, Renown's tonnage actually decreased due to a substantially lighter power plant. Similar thorough rebuildings planned for Repulse and Hood were cancelled due to the advent of World War II.

Unable to build new ships, the Imperial Japanese Navy also chose to improve its existing battlecruisers of the Kongō class (initially Kongō, Haruna, and Kirishima; Hiei followed only later, as she had been disarmed under the terms of the Washington treaty) in two substantial reconstructions (one for Hiei). During the first of these, the elevation of their main guns was increased to +40 degrees, anti-torpedo bulges and additional horizontal armour were added, and a "pagoda" mast with additional command positions was built up. This added weight reduced the ships' speed. The second reconstruction focused on speed, as they had been selected as fast escorts for aircraft carrier task forces. Completely new main engines, a reduced number of boilers, and a lengthened hull allowed them to reach up to 30 knots once again. They were reclassified as "fast battleships," although their armour and guns still fell short of those of surviving World War I–era battleships in the American and British navies, with dire consequences during the Pacific War, when Hiei and Kirishima were easily crippled by US gunfire during actions off Guadalcanal, forcing their scuttling shortly afterwards. Perhaps most tellingly, Hiei was crippled by medium-caliber gunfire from heavy and light cruisers in a close-range night engagement.

There were two exceptions: Turkey's Yavuz Sultan Selim and the Royal Navy's Hood. The Turkish Navy made only minor improvements to the ship in the interwar period, primarily focused on repairing wartime damage and installing new fire control systems and anti-aircraft batteries.
Hood was in constant service with the fleet and could not be withdrawn for an extended reconstruction. She received minor improvements over the course of the 1930s, including modern fire control systems, increased numbers of anti-aircraft guns, and, in March 1941, radar.

Naval rearmament

In the late 1930s navies began to build capital ships again, and during this period a number of large commerce raiders and small, fast battleships were built that are sometimes referred to as battlecruisers. Germany and Russia designed new battlecruisers during this period, though only the latter laid down two of the 35,000-ton Kronshtadt class. They were still on the slipways when the Germans invaded in 1941 and construction was suspended. Both ships were scrapped after the war. The Germans planned three battlecruisers of the O class as part of the expansion of the Kriegsmarine (Plan Z). With six 15-inch guns, high speed, excellent range, but very thin armour, they were intended as commerce raiders. Only one was ordered, shortly before World War II; no work was ever done on it. No names were assigned, and they were known by their contract names: 'O', 'P', and 'Q'. The new class was not universally welcomed in the Kriegsmarine; their abnormally light protection gained them the derogatory nickname Ohne Panzer Quatsch ("without armour nonsense") within certain circles of the Navy.

World War II

The Royal Navy deployed some of its battlecruisers during the Norwegian Campaign in April 1940. Scharnhorst and Gneisenau were engaged during the action off Lofoten by Renown in very bad weather and disengaged after Gneisenau was damaged. One of Renown's 15-inch shells passed through Gneisenau's director-control tower without exploding, severing electrical and communication cables as it went and destroying the rangefinders for the forward 150 mm (5.9 in) turrets. Main-battery fire control had to be shifted aft due to the loss of electrical power. Another shell from Renown knocked out Gneisenau's aft turret. The British ship was struck twice by German shells that failed to inflict any significant damage. She was the only pre-war battlecruiser to survive the war.

In the early years of the war various German ships had a measure of success hunting merchant ships in the Atlantic. Allied battlecruisers such as Renown and Repulse, and the fast battleships Dunkerque and Strasbourg, were employed on operations to hunt down the commerce-raiding German ships. The one stand-up fight occurred when the battleship Bismarck and the heavy cruiser Prinz Eugen sortied into the North Atlantic to attack British shipping and were intercepted by Hood and the battleship Prince of Wales in May 1941 in the Battle of the Denmark Strait. The elderly British battlecruiser was no match for the modern German battleship: within minutes, Bismarck's 15-inch shells caused a magazine explosion in Hood reminiscent of the Battle of Jutland. Only three men survived.

The first battlecruiser to see action in the Pacific War was Repulse, sunk by Japanese torpedo bombers north of Singapore on 10 December 1941 whilst in company with Prince of Wales. She was lightly damaged by a single bomb and near-missed by two others in the first Japanese attack. Her speed and agility enabled her to avoid the other attacks by level bombers and dodge 33 torpedoes. The last group of torpedo bombers attacked from multiple directions and Repulse was struck by five torpedoes. She quickly capsized with the loss of 27 officers and 486 crewmen; 42 officers and 754 enlisted men were rescued by the escorting destroyers.
The loss of Repulse and Prince of Wales conclusively proved the vulnerability of capital ships to aircraft when operating without air cover of their own.

The Japanese Kongō-class battlecruisers were extensively used as carrier escorts for most of their wartime career due to their high speed. Their World War I–era armament was weaker, and their upgraded armour was still thin, compared to contemporary battleships. On 13 November 1942, during the First Naval Battle of Guadalcanal, Hiei stumbled across American cruisers and destroyers at point-blank range. The ship was badly damaged in the encounter and had to be towed by her sister ship Kirishima. Both were spotted by American aircraft the following morning, and Kirishima was forced to cast off her tow because of repeated aerial attacks. Hiei's captain ordered her crew to abandon ship after further damage, and Hiei was scuttled in the early evening of 14 November. On the night of 14/15 November, during the Second Naval Battle of Guadalcanal, Kirishima returned to Ironbottom Sound, but encountered the American battleships Washington and South Dakota. While failing to detect Washington, Kirishima engaged South Dakota with some effect. Washington opened fire a few minutes later at short range and badly damaged Kirishima, knocking out her aft turrets, jamming her rudder, and hitting the ship below the waterline. The flooding proved to be uncontrollable and Kirishima capsized three and a half hours later.

Returning to Japan after the Battle of Leyte Gulf, Kongō was torpedoed and sunk by the American submarine Sealion on 21 November 1944. Haruna was moored at Kure, Japan, when the naval base was attacked by American carrier aircraft on 24 and 28 July 1945. The ship was only lightly damaged by a single bomb hit on 24 July, but was hit a dozen more times on 28 July and sank at her pier. She was refloated after the war and scrapped in early 1946.

Large cruisers or "cruiser killers"

A late renaissance of ships intermediate in size between battleships and cruisers occurred on the eve of World War II. Described by some as battlecruisers, but never classified as capital ships, they were variously described as "super cruisers", "large cruisers" or even "unrestricted cruisers". The Dutch, American, and Japanese navies all planned these new classes specifically to counter the heavy cruisers, or their counterparts, being built by their naval rivals.

The first such battlecruisers were the Dutch Design 1047, intended to protect the Dutch colonies in the East Indies in the face of Japanese aggression. Never officially assigned names, these ships were designed with German and Italian assistance. While they broadly resembled the German Scharnhorst class and had the same main battery, they would have been more lightly armoured and only protected against eight-inch gunfire. Although the design was mostly completed, work on the vessels never commenced, as the Germans overran the Netherlands in May 1940. The first ship would have been laid down in June of that year.

The only class of these late battlecruisers actually built were the United States Navy's Alaska-class "large cruisers". Two of them were completed, Alaska and Guam; a third, Hawaii, was cancelled while under construction, and three others, to be named Philippines, Puerto Rico and Samoa, were cancelled before they were laid down. They were classified as "large cruisers" instead of battlecruisers, their status as non-capital ships evidenced by their being named for territories or protectorates. (Battleships, in contrast, were named after states and cruisers after cities.)
With a main armament of nine 12-inch guns in three triple turrets, the Alaskas were twice the size of Baltimore-class heavy cruisers and had guns some 50% larger in diameter. They lacked the thick armoured belt and intricate torpedo defence system of true capital ships. However, unlike most battlecruisers, they were considered a balanced design by cruiser standards, as their protection could withstand fire from their own caliber of gun, albeit only in a very narrow range band. They were designed to hunt down Japanese heavy cruisers, though by the time they entered service most Japanese cruisers had been sunk by American aircraft or submarines. Like the contemporary fast battleships, their speed ultimately made them more useful as carrier escorts and bombardment ships than as the surface combatants they were developed to be. The Japanese started designing the B64 class, which was similar to the Alaska but with 310 mm guns. News of the Alaskas led them to upgrade the design, creating Design B-65. Armed with 356 mm guns, the B65s would have been the best armed of the new breed of battlecruisers, but they still would have had only sufficient protection
dreadnought succeeded the pre-dreadnought battleship. The goal of the design was to outrun any ship with similar armament, and chase down any ship with lesser armament; battlecruisers were intended to hunt down slower, older armoured cruisers and destroy them with heavy gunfire while avoiding combat with the more powerful but slower battleships. However, as more and more battlecruisers were built, they were increasingly used alongside the better-protected battleships.

Battlecruisers served in the navies of the United Kingdom, Germany, the Ottoman Empire, Australia and Japan during World War I, most notably at the Battle of the Falkland Islands and in the several raids and skirmishes in the North Sea which culminated in a pitched fleet battle, the Battle of Jutland. British battlecruisers in particular suffered heavy losses at Jutland, where poor fire safety and ammunition handling practices left them vulnerable to catastrophic magazine explosions following hits to their main turrets from large-calibre shells. This dismal showing led to a persistent general belief that battlecruisers were too thinly armoured to function successfully. By the end of the war, capital ship design had developed, with battleships becoming faster and battlecruisers becoming more heavily armoured, blurring the distinction between a battlecruiser and a fast battleship. The Washington Naval Treaty, which limited capital ship construction from 1922 onwards, treated battleships and battlecruisers identically, and the new generation of battlecruisers planned was scrapped under the terms of the treaty.

Improvements in armour design and propulsion created the 1930s "fast battleship", with the speed of a battlecruiser and the armour of a battleship, making the battlecruiser in the traditional sense effectively an obsolete concept. Thus from the 1930s on, only the Royal Navy continued to use "battlecruiser" as a classification for the World War I–era capital ships that remained in the fleet; while Japan's battlecruisers remained in service, they had been significantly reconstructed and were re-rated as full-fledged fast battleships. Battlecruisers were put into action again during World War II, and only one survived to the end. There was also renewed interest in large "cruiser-killer" type warships, but few were ever begun, as construction of battleships and battlecruisers was curtailed in favour of more-needed convoy escorts, aircraft carriers, and cargo ships. In the post–Cold War era, the Soviet Kirov class of large guided missile cruisers has also been termed "battlecruisers".

Background

The battlecruiser was developed by the Royal Navy in the first years of the 20th century as an evolution of the armoured cruiser. The first armoured cruisers had been built in the 1870s, as an attempt to give armour protection to ships fulfilling the typical cruiser roles of patrol, trade protection and power projection. However, the results were rarely satisfactory, as the weight of armour required for any meaningful protection usually meant that the ship became almost as slow as a battleship. As a result, navies preferred to build protected cruisers with an armoured deck protecting their engines, or simply no armour at all. In the 1890s, technology began to change this balance. New Krupp steel armour meant that it was now possible to give a cruiser side armour which would protect it against the quick-firing guns of enemy battleships and cruisers alike.
In 1896–97 France and Russia, who were regarded as likely allies in the event of war, started to build large, fast armoured cruisers taking advantage of this. In the event of a war between Britain and France or Russia, or both, these cruisers threatened to cause serious difficulties for the British Empire's worldwide trade. Britain, which had concluded in 1892 that it needed twice as many cruisers as any potential enemy to adequately protect its empire's sea lanes, responded to the perceived threat by laying down its own large armoured cruisers. Between 1899 and 1905, it completed or laid down seven classes of this type, a total of 35 ships. This building program, in turn, prompted the French and Russians to increase their own construction. The Imperial German Navy began to build large armoured cruisers for use on their overseas stations, laying down eight between 1897 and 1906.

The cost of this cruiser arms race was significant. In the period 1889–1896, the Royal Navy spent £7.3 million on new large cruisers. From 1897 to 1904, it spent £26.9 million. Many armoured cruisers of the new kind were just as large and expensive as the equivalent battleship.

The increasing size and power of the armoured cruiser led to suggestions in British naval circles that cruisers should displace battleships entirely. The battleship's main advantage was its 12-inch heavy guns, and the heavier armour designed to protect it from shells of similar size. However, for a few years after 1900 it seemed that those advantages were of little practical value. The torpedo now had a range of 2,000 yards, and it seemed unlikely that a battleship would engage within torpedo range. However, at ranges of more than 2,000 yards it became increasingly unlikely that the heavy guns of a battleship would score any hits, as the heavy guns relied on primitive aiming techniques. The secondary batteries of 6-inch quick-firing guns, firing more plentiful shells, were more likely to hit the enemy. As naval expert Fred T. Jane wrote in June 1902, "Is there anything outside of 2,000 yards that the big gun in its hundreds of tons of medieval castle can affect, that its weight in 6-inch guns without the castle could not affect equally well? And inside 2,000, what, in these days of gyros, is there that the torpedo cannot effect with far more certainty?"

In 1904, Admiral John "Jacky" Fisher became First Sea Lord, the senior officer of the Royal Navy. He had for some time thought about the development of a new fast armoured ship, and was very fond of the "second-class battleship" Renown, a faster, more lightly armoured battleship. As early as 1901, there was confusion in Fisher's writing about whether he saw the battleship or the cruiser as the model for future developments. This did not stop him from commissioning designs from naval architect W. H. Gard for an armoured cruiser with the heaviest possible armament for use with the fleet. The design Gard submitted was for a large, fast ship armed with four 9.2-inch and twelve 7.5-inch guns in twin gun turrets, protected with six inches of armour along her belt and on her 9.2-inch turrets, thinner armour on her 7.5-inch turrets and decks, and 10 inches on her conning tower. However, mainstream British naval thinking between 1902 and 1904 was clearly in favour of heavily armoured battleships rather than the fast ships that Fisher favoured. The Battle of Tsushima proved conclusively the effectiveness of heavy guns over intermediate ones and the need for a uniform main calibre on a ship for fire control.
Even before this, the Royal Navy had begun to consider a shift away from the mixed-calibre armament of the 1890s pre-dreadnought to an "all-big-gun" design, and preliminary designs circulated for battleships with all 12-inch or all 10-inch guns and armoured cruisers with all 9.2-inch guns. In late 1904, not long after the Royal Navy had decided to use 12-inch guns for its next generation of battleships because of their superior performance at long range, Fisher began to argue that big-gun cruisers could replace battleships altogether. The continuing improvement of the torpedo meant that submarines and destroyers would be able to destroy battleships; this in Fisher's view heralded the end of the battleship, or at least compromised the validity of heavy armour protection. Nevertheless, armoured cruisers would remain vital for commerce protection. Fisher's views were very controversial within the Royal Navy, and even given his position as First Sea Lord, he was not in a position to insist on his own approach. Thus he assembled a "Committee on Designs", consisting of a mixture of civilian and naval experts, to determine the approach to both battleship and armoured cruiser construction in the future. While the stated purpose of the committee was to investigate and report on future requirements of ships, Fisher and his associates had already made key decisions. The terms of reference for the committee were for a battleship capable of 21 knots with 12-inch guns and no intermediate calibres, capable of docking in existing drydocks; and a cruiser capable of 25.5 knots, also with 12-inch guns and no intermediate armament, armoured like Minotaur, the most recent armoured cruiser, and also capable of using existing docks.

First battlecruisers

Under the Selborne plan of 1902, the Royal Navy intended to start three new battleships and four armoured cruisers each year. However, in late 1904 it became clear that the 1905–1906 programme would have to be considerably smaller, because of lower than expected tax revenue and the need to buy out two Chilean battleships under construction in British yards, lest they be purchased by the Russians for use against the Japanese, Britain's ally. These economies meant that the 1905–1906 programme consisted of only one battleship, but three armoured cruisers. The battleship became the revolutionary Dreadnought, and the cruisers became the three ships of the Invincible class. Fisher later claimed, however, that he had argued during the committee for the cancellation of the remaining battleship.

The construction of the new class was begun in 1906 and completed in 1908, delayed perhaps to allow their designers to learn from any problems with Dreadnought. The ships fulfilled the design requirement quite closely. On a displacement similar to Dreadnought's, the Invincibles were longer to accommodate additional boilers and more powerful turbines to propel them at 25 knots. Moreover, the new ships could maintain this speed for days, whereas pre-dreadnought battleships could not generally do so for more than an hour. Armed with eight 12-inch Mk X guns, compared to ten on Dreadnought, they had six to seven inches of armour protecting the hull and the gun turrets. (Dreadnought's armour, by comparison, was eleven inches at its thickest.) The class had a very marked increase in speed, displacement and firepower compared to the most recent armoured cruisers but no more armour. While the Invincibles were to fill the same role as the armoured cruisers they succeeded, they were expected to do so more effectively. Specifically their roles were:
Heavy reconnaissance: Because of their power, the Invincibles could sweep away the screen of enemy cruisers to close with and observe an enemy battlefleet before using their superior speed to retire.
Close support for the battle fleet: They could be stationed at the ends of the battle line to stop enemy cruisers harassing the battleships, and to harass the enemy's battleships if they were busy fighting battleships. Also, the Invincibles could operate as the fast wing of the battlefleet and try to outmanoeuvre the enemy.
Pursuit: If an enemy fleet ran, the Invincibles would use their speed to pursue, and their guns to damage or slow enemy ships.
Commerce protection: The new ships would hunt down enemy cruisers and commerce raiders.

Confusion about how to refer to these new battleship-size armoured cruisers set in almost immediately. Even in late 1905, before work was begun on the Invincibles, a Royal Navy memorandum referred to "large armoured ships" meaning both battleships and large cruisers. In October 1906, the Admiralty began to classify all post-Dreadnought battleships and armoured cruisers as "capital ships", while Fisher used the term "dreadnought" to refer either to his new battleships or to the battleships and armoured cruisers together. At the same time, the Invincible class themselves were referred to as "cruiser-battleships" or "dreadnought cruisers"; the term "battlecruiser" was first used by Fisher in 1908. Finally, on 24 November 1911, Admiralty Weekly Order No. 351 laid down that "All cruisers of the 'Invincible' and later types are for the future to be described and classified as 'battle cruisers' to distinguish them from the armoured cruisers of earlier date."

Along with questions over the new ships' nomenclature came uncertainty about their actual role, owing to their lack of protection. If they were primarily to act as scouts for the battle fleet and hunter-killers of enemy cruisers and commerce raiders, then the seven inches of belt armour with which they had been equipped would be adequate. If, on the other hand, they were expected to reinforce a battle line of dreadnoughts with their own heavy guns, they were too thin-skinned to be safe from an enemy's heavy guns. The Invincibles were essentially extremely large, heavily armed, fast armoured cruisers. However, the viability of the armoured cruiser was already in doubt; a cruiser that could work with the fleet might have been a more viable option for taking over that role. Because of the Invincibles' size and armament, naval authorities considered them capital ships almost from their inception—an assumption that might have been inevitable. Complicating matters further, many naval authorities, including Lord Fisher, had made overoptimistic assessments from the Battle of Tsushima in 1905 about the armoured cruiser's ability to survive in a battle line against enemy capital ships thanks to its superior speed. These assumptions had been made without taking into account the Russian Baltic Fleet's inefficiency and tactical ineptitude. By the time the term "battlecruiser" had been given to the Invincibles, the idea of their parity with battleships had been fixed in many people's minds.

Not everyone was so convinced. Brassey's Naval Annual, for instance, stated that with vessels as large and expensive as the Invincibles, an admiral "will be certain to put them in the line of battle where their comparatively light protection will be a disadvantage and their high speed of no value."
Those in favour of the battlecruiser countered with two points: first, since all capital ships were vulnerable to new weapons such as the torpedo, armour had lost some of its validity; and second, because of its greater speed, the battlecruiser could control the range at which it engaged an enemy.

Battlecruisers in the dreadnought arms race

From the launching of the Invincibles until just after the outbreak of the First World War, the battlecruiser played a junior role in the developing dreadnought arms race, as it was never wholeheartedly adopted as the key weapon in British imperial defence, as Fisher had presumably desired. The biggest factor in this lack of acceptance was the marked change in Britain's strategic circumstances between the ships' conception and the commissioning of the first of them. The prospective enemy for Britain had shifted from a Franco-Russian alliance with many armoured cruisers to a resurgent and increasingly belligerent Germany. Diplomatically, Britain had entered the Entente cordiale in 1904 and the Anglo-Russian Entente in 1907. Neither France nor Russia posed a particular naval threat; the Russian navy had largely been sunk or captured in the Russo-Japanese War of 1904–1905, while the French were in no hurry to adopt the new dreadnought-type design. Britain also boasted very cordial relations with two of the significant new naval powers: Japan (bolstered by the Anglo-Japanese Alliance, signed in 1902 and renewed in 1905), and the US. These changed strategic circumstances, and the great success of the Dreadnought, ensured that she rather than the Invincible became the new model capital ship. Nevertheless, battlecruiser construction played a part in the renewed naval arms race sparked by the Dreadnought.

For their first few years of service, the Invincibles entirely fulfilled Fisher's vision of being able to sink any ship fast enough to catch them, and run from any ship capable of sinking them. An Invincible would also, in many circumstances, be able to take on an enemy pre-dreadnought battleship. Naval circles concurred that the armoured cruiser in its current form had come to the logical end of its development, and the Invincibles were so far ahead of any enemy armoured cruiser in firepower and speed that it proved difficult to justify building more or bigger cruisers. This lead was extended by the surprise both Dreadnought and Invincible produced by having been built in secret, which prompted most other navies to delay their building programmes and radically revise their designs. This was particularly true for cruisers, because the details of the Invincible class were kept secret for longer; it meant that the last German armoured cruiser, Blücher, was armed with only 21 cm (8.3 in) guns, and was no match for the new battlecruisers.

The Royal Navy's early superiority in capital ships led to the rejection of a 1905–1906 design that would, essentially, have fused the battlecruiser and battleship concepts into what would eventually become the fast battleship. The 'X4' design combined the full armour and armament of Dreadnought with the 25-knot speed of Invincible. The additional cost could not be justified given the existing British lead and the new Liberal government's need for economy; the slower and cheaper Bellerophon class, a relatively close copy of Dreadnought, was adopted instead. The X4 concept would eventually be fulfilled in the Queen Elizabeth class and later by other navies.
The next British battlecruisers were the three Indefatigables, slightly improved Invincibles built to fundamentally the same specification, partly due to political pressure to limit costs and partly due to the secrecy surrounding German battlecruiser construction, particularly about the heavy armour of Von der Tann. This class came to be widely seen as a mistake, and the next generation of British battlecruisers was markedly more powerful. By 1909–1910 a sense of national crisis about rivalry with Germany outweighed cost-cutting, and a naval panic resulted in the approval of a total of eight capital ships in 1909–1910. Fisher pressed for all eight to be battlecruisers, but was unable to have his way; he had to settle for six battleships and two battlecruisers of the Lion class. The Lions carried eight 13.5-inch guns, the now-standard calibre of the British "super-dreadnought" battleships. Speed increased to 27 knots, and armour protection, while not as good as in German designs, was better than in previous British battlecruisers, with a nine-inch armour belt and barbettes. The two Lions were followed by the very similar Queen Mary.

By 1911 Germany had built battlecruisers of her own, and the superiority of the British ships could no longer be assured. Moreover, the German Navy did not share Fisher's view of the battlecruiser. In contrast to the British focus on increasing speed and firepower, Germany progressively improved the armour and staying power of its ships to better the British battlecruisers. Von der Tann, begun in 1908 and completed in 1910, carried eight 11.1-inch (283 mm) guns, but with armour up to 250 mm (9.8 in) thick she was far better protected than the Invincibles. The two Moltkes were quite similar but carried ten 11.1-inch guns of an improved design. Seydlitz, designed in 1909 and finished in 1913, was a modified Moltke; speed increased by one knot, while her armour had a maximum thickness of 12 inches, equivalent to that of German battleships of a few years earlier. Seydlitz was Germany's last battlecruiser completed before World War I.

The next step in battlecruiser design came from Japan. The Imperial Japanese Navy had been planning the Kongō-class ships from 1909, and was determined that, since the Japanese economy could support relatively few ships, each would be more powerful than its likely competitors. Initially the class was planned with the Invincibles as the benchmark. On learning of the British plans for Lion, and the likelihood that new U.S. Navy battleships would be armed with 14-inch guns, the Japanese decided to radically revise their plans and go one better. A new plan was drawn up, carrying eight 14-inch guns and capable of 27.5 knots, thus marginally having the edge over the Lions in speed and firepower. The heavy guns were also better positioned, being superfiring both fore and aft with no turret amidships. The armour scheme was also marginally improved over the Lions, with nine inches of armour on the turrets and barbettes. The first ship in the class was built in Britain, and a further three constructed in Japan. The Japanese also re-classified their powerful armoured cruisers of the Tsukuba and Ibuki classes, carrying four 12-inch guns, as battlecruisers; nonetheless, their armament was weaker and they were slower than any battlecruiser.

The next British battlecruiser, Tiger, was intended initially as the fourth ship in the Lion class, but was substantially redesigned. She retained the eight 13.5-inch guns of her predecessors, but they were positioned like those of Kongō for better fields of fire. She was faster (making 29 knots on sea trials) and carried a heavier secondary armament.
Tiger was also more heavily armoured on the whole; while the maximum thickness of armour was the same at nine inches, the height of the main armour belt was increased. Not all the desired improvements for this ship were approved, however. Her designer, Sir Eustace Tennyson d'Eyncourt, had wanted small-tube water-tube boilers and geared turbines to give her a speed of 32 knots, but he received no support from the authorities and the engine makers refused his request. 1912 saw work begin on three more German battlecruisers of the Derfflinger class, the first German battlecruisers to mount 12-inch guns. These ships, like Tiger and the Kongōs, had their guns arranged in superfiring turrets for greater efficiency. Their armour and speed were similar to those of the previous Seydlitz. In 1913, the Russian Empire also began the construction of the four-ship Borodino class, which was designed for service in the Baltic Sea. These ships were designed to carry twelve 14-inch guns, with armour up to 12 inches thick, and a speed of 26.5 knots. The heavy armour and relatively slow speed of these ships made them more similar to German designs than to British ships; construction of the Borodinos was halted by the First World War and all were scrapped after the end of the Russian Civil War. World War I Construction For most of the combatants, capital ship construction was very limited during the war. Germany finished the Derfflinger class and began work on the Mackensen class. The Mackensens were a development of the Derfflinger class, with 13.8-inch guns and a broadly similar armour scheme, designed for 28 knots. In Britain, Jackie Fisher returned to the office of First Sea Lord in October 1914. His enthusiasm for big, fast ships was unabated, and he set designers to producing a design for a battlecruiser with 15-inch guns. Because Fisher expected the next German battlecruiser to steam at 28 knots, he required the new British design to be capable of 32 knots. He planned to reorder two Revenge-class battleships, which had been approved but not yet laid down, to a new design. Fisher finally received approval for this project on 28 December 1914, and they became the Renown class. With six 15-inch guns but only 6-inch armour they were a further step forward from Tiger in firepower and speed, but returned to the
morning about the possible leadership change, on the same day that Hawke assumed the leadership of the Labor Party, Malcolm Fraser called a snap election for 5 March 1983, attempting to prevent Labor from making the leadership change; however, he was unable to have the Governor-General confirm the election before Labor announced the change. At the 1983 election, Hawke led Labor to a landslide victory, achieving a 24-seat swing and ending seven years of Liberal Party rule. Because the election was called at the same time that Hawke became Labor leader, he never sat in Parliament as Leader of the Opposition, having spent the entirety of his short Opposition leadership in the election campaign, which he won. Prime Minister of Australia Leadership style After Labor's landslide victory, Hawke was sworn in as Prime Minister by the Governor-General, Ninian Stephen, on 11 March 1983. The style of the Hawke Government was deliberately distinct from that of the Whitlam Government, the most recent preceding Labor Government. Rather than immediately initiating multiple extensive reform programs as Whitlam had, Hawke announced that Malcolm Fraser's pre-election concealment of the budget deficit meant that many of Labor's election commitments would have to be deferred. As part of his internal reforms package, Hawke divided the government into two tiers, with only the most senior ministers sitting in the Cabinet. The Labor caucus was still given the authority to determine who would make up the Ministry, but this move gave Hawke unprecedented power to empower individual ministers. In particular, the political partnership that developed between Hawke and his Treasurer, Paul Keating, proved to be essential to Labor's success in government, with multiple Labor figures in the years since citing the partnership as the party's greatest ever. The two men proved a study in contrasts: Hawke was a Rhodes Scholar; Keating left high school early. Hawke's enthusiasms were cigars, betting and most forms of sport; Keating preferred classical architecture, Mahler symphonies and collecting British Regency and French Empire antiques. Despite not knowing one another before Hawke assumed the leadership in 1983, the two formed a personal as well as political relationship which enabled the Government to pursue a significant number of reforms, although there were occasional points of tension between the two. The Labor Caucus under Hawke also developed a more formalised system of parliamentary factions, which significantly altered the dynamics of caucus operations. Unlike that of many of his predecessors, Hawke's authority within the Labor Party was absolute. This enabled him to persuade MPs to support a substantial set of policy changes which had not been considered achievable by Labor Governments in the past. Individual accounts from ministers indicate that while Hawke was not often the driving force behind individual reforms, outside of broader economic changes, he took on the role of providing political guidance on what was electorally feasible and how best to sell it to the public, tasks at which he proved highly successful. Hawke took on a very public role as Prime Minister, campaigning frequently even outside of election periods, and for much of his time in office proved to be remarkably popular with the Australian electorate; to this day he still holds the highest ever AC Nielsen approval rating, of 75%. Economic policy The Hawke Government oversaw significant economic reforms, and is often cited by economic historians as a "turning point" from a protectionist, agricultural model to a more globalised and services-oriented economy. According to the journalist Paul Kelly, "the most influential economic decisions of the 1980s were the floating of the Australian dollar and the deregulation of the financial system". Although the Fraser Government had played a part in the process of financial deregulation by commissioning the 1981 Campbell Report, opposition from Fraser himself had stalled this process. Shortly after its election in 1983, the Hawke Government took the opportunity to implement a comprehensive program of economic reform, in the process "transform(ing) economics and politics in Australia". Hawke and Keating jointly led the process of economic change, launching a "National Economic Summit" one month after their election in 1983, which brought business and industrial leaders together with politicians and trade union leaders; the three-day summit led to the unanimous adoption of a national economic strategy, generating sufficient political capital for widespread reform to follow. Among other reforms, the Hawke Government floated the Australian dollar, repealed rules that prohibited foreign-owned banks from operating in Australia, dismantled the protectionist tariff system, privatised several state sector industries, ended the subsidisation of loss-making industries, and sold off part of the state-owned Commonwealth Bank. The taxation system was also significantly reformed, with income tax rates reduced and a fringe benefits tax and a capital gains tax introduced; the latter two reforms were strongly opposed by the Liberal Party at the time, but were never reversed when the Liberals eventually returned to office in 1996. Partially offsetting these imposts upon the business community—the "main loser" from the 1985 Tax Summit according to Paul Kelly—was the introduction of full dividend imputation, a reform insisted upon by Keating. Funding for schools was also considerably increased as part of this package, while financial assistance was provided for students to enable them to stay at school longer; the number of Australian children completing school rose from 3 in 10 at the beginning of the Hawke Government to 7 in 10 by its conclusion in 1991.
Considerable progress was also made in directing assistance "to the most disadvantaged recipients over the whole range of welfare benefits." Social and environmental policy Although criticisms were leveled against the Hawke Government that it did not achieve all it said it would do on social policy, it nevertheless enacted a series of reforms which remain in place to the present day. From 1983 to 1989, the Government oversaw the permanent establishment of universal health care in Australia with the creation of Medicare, doubled the number of subsidised childcare places, began the introduction of occupational superannuation, oversaw a significant increase in school retention rates, created subsidised homecare services, oversaw the elimination of poverty traps in the welfare system, increased the real value of the old-age pension, reintroduced the six-monthly indexation of single-person unemployment benefits, and established a wide-ranging programme for paid family support, known as the Family Income Supplement. During the 1980s, the proportion of total government outlays allocated to families, the sick, single parents, widows, the handicapped, and veterans was significantly higher than under the previous Fraser and Whitlam Governments. In 1984, the Hawke Government enacted the landmark Sex Discrimination Act, which eliminated discrimination on the grounds of sex within the workplace. In 1989, Hawke oversaw the gradual re-introduction of some tuition fees for university study, setting up the Higher Education Contributions Scheme (HECS). Under the original HECS, a $1,800 fee was charged to all university students, and the Commonwealth paid the balance. A student could defer payment of this HECS amount and repay the debt through the tax system once the student's income exceeded a threshold level. As part of the reforms, Colleges of Advanced Education entered the university sector by various means, and by doing so university places were able to be expanded. Further notable policy decisions taken during the Government's time in office included the public health campaign regarding HIV/AIDS and Indigenous land rights reform, with an investigation launched into the idea of a treaty between Aborigines and the Government, although the latter would be overtaken by events, notably the Mabo court decision. The Hawke Government also drew attention for a series of notable environmental decisions, particularly in its second and third terms. In 1983, Hawke personally vetoed the construction of the Franklin Dam in Tasmania, responding to a groundswell of protest around the issue. Hawke also secured the nomination of the Wet Tropics of Queensland as a UNESCO World Heritage Site in 1987, preventing the forests there from being logged. Hawke would later appoint Graham Richardson as Environment Minister, tasking him with winning second-preference support from environmental parties, something which Richardson later claimed was the major factor in the government's narrow re-election at the 1990 election. In the Government's fourth term, Hawke personally led the Australian delegation to secure changes to the Protocol on Environmental Protection to the Antarctic Treaty, ultimately winning a guarantee that drilling for minerals within Antarctica would be totally prohibited until 2048 at the earliest. Hawke later claimed that the Antarctic drilling ban was his "proudest achievement".
Industrial relations policy As a former ACTU President, Hawke was well placed to engage in reform of the industrial relations system in Australia, taking a lead on this policy area as in few others. Working closely with ministerial colleagues and the ACTU Secretary, Bill Kelty, Hawke negotiated with trade unions to establish the Prices and Incomes Accord in 1983, an agreement whereby unions agreed to restrict their demands for wage increases, and in turn the Government guaranteed to both minimise inflation and promote an increased social wage, including by establishing new social programmes such as Medicare. Inflation had been a significant issue for the decade prior to the election of the Hawke Government, regularly running into double digits. The process of the Accord, by which the Government and trade unions would arbitrate and agree upon wage increases in many sectors, led to a decrease in both inflation and unemployment through to 1990. Criticisms of the Accord would come from both the right and the left of politics. Left-wing critics claimed that it kept real wages stagnant, and that the Accord was a policy of class collaboration and corporatism. By contrast, right-wing critics claimed that the Accord reduced the flexibility of the wages system. Supporters of the Accord, however, pointed to the improvements in the social security system that occurred, including the introduction of rental assistance for social security recipients, the creation of labour market schemes such as NewStart, and the introduction of the Family Income Supplement. In 1986, the Hawke government passed a bill to de-register the Builders Labourers Federation federally, because the union had not followed the Accord agreements. Despite a fall in real money wages from 1983 to 1991, the Government argued that the social wage of Australian workers had improved drastically as a result of these reforms and the ensuing decline in inflation. The Accord was revisited six further times during the Hawke Government, each time in response to new economic developments. The seventh and final revisiting would ultimately lead to the establishment of the enterprise bargaining system, although this would be finalised shortly after Hawke left office in 1991. Foreign policy Arguably the most significant foreign policy achievement of the Government came in 1989, when Hawke proposed an Asia-Pacific region-wide forum for leaders and economic ministers to discuss issues of common concern. After winning the support of key countries in the region, this led to the creation of the Asia-Pacific Economic Cooperation (APEC). The first APEC meeting duly took place in Canberra in November 1989; the economic ministers of Australia, Brunei, Canada, Indonesia, Japan, South Korea, Malaysia, New Zealand, the Philippines, Singapore, Thailand and the United States all attended. APEC would subsequently grow to become one of the pre-eminent high-level international forums in the world, particularly after the later inclusions of China and Russia, and the Keating Government's later establishment of the APEC Leaders' Forum. Elsewhere in Asia, the Hawke Government played a significant role in the build-up to the United Nations peace process for Cambodia, culminating in the UN Transitional Authority in Cambodia; Hawke's Foreign Minister, Gareth Evans, was nominated for the Nobel Peace Prize for his role in the negotiations.
Hawke also took a major public stand in the aftermath of the Tiananmen Square massacre in 1989; despite having spent years trying to build closer relations with China, Hawke gave a tearful address on national television describing the massacre in graphic detail, and unilaterally offered asylum to over 42,000 Chinese students who were living in Australia at the time, many of whom had publicly supported the Tiananmen protesters. Hawke did so without even consulting his Cabinet, stating later that he felt he simply had to act. The Hawke Government pursued a close relationship with the United States, assisted by Hawke's close friendship with US Secretary of State George Shultz; this led to a degree of controversy when the Government supported the US's plans to test ballistic missiles off the coast of Tasmania in 1985, as well as seeking to overturn Australia's long-standing ban on uranium exports. Although the US ultimately withdrew the plans to test the missiles, the furore led to a fall in Hawke's approval ratings. Shortly after the 1990 election, Hawke would lead Australia into its first overseas military campaign since the Vietnam War, forming a close alliance with US President George H. W. Bush to join the coalition in the Gulf War. The Australian Navy contributed several destroyers and frigates to the war effort, which concluded successfully in February 1991 with the expulsion of Iraqi forces from Kuwait. The success of the campaign, and the lack of any Australian casualties, led to a brief increase in the popularity of the Government. Through his role in the Commonwealth Heads of Government Meetings, Hawke played a leading role in ensuring that the Commonwealth initiated an international boycott of foreign investment in South Africa, building on work undertaken by his predecessor Malcolm Fraser, and in the process clashing publicly with British Prime Minister Margaret Thatcher, who initially favoured a more cautious approach. The resulting boycott, led by the Commonwealth, was widely credited with helping bring about the collapse of apartheid, and resulted in a high-profile visit by Nelson Mandela in October 1990, months after the latter's release from 27 years in prison. During the visit, Mandela publicly thanked the Hawke Government for the role it had played in the boycott. Election wins and leadership challenges Hawke benefited greatly from the disarray into which the Liberal Party fell after the resignation of Fraser following the 1983 election. The Liberals were torn between supporters of the more conservative John Howard and the more liberal Andrew Peacock, with the pair frequently contesting the leadership. Hawke and Keating were also able to use Fraser's concealment of the size of the budget deficit prior to the 1983 election to great effect, damaging the Liberal Party's economic credibility as a result. However, Hawke's time as Prime Minister also saw friction develop between himself and the grassroots of the Labor Party, many of whom were unhappy at what they viewed as Hawke's iconoclasm and willingness to cooperate with business interests. Hawke regularly and publicly expressed his willingness to cull Labor's "sacred cows". The Labor Left faction, as well as prominent Labor backbencher Barry Jones, offered repeated criticisms of a number of government decisions. Hawke was also subject to challenges from some former colleagues in the trade union movement over his "confrontationalist style" in siding with the airline companies in the 1989 Australian pilots' strike.
Nevertheless, Hawke was able to comfortably maintain a lead as preferred prime minister in the vast majority of opinion polls carried out throughout his time in office. He recorded the highest popularity rating ever measured by an Australian opinion poll, reaching 75% approval in 1984. After leading Labor to a comfortable victory in the snap 1984 election, called to bring the mandate of the House of Representatives back in line with the Senate, Hawke was able to secure an unprecedented third consecutive term for Labor with a landslide victory in the double dissolution election of 1987. Hawke was subsequently able to lead the nation in the bicentennial celebrations of 1988, culminating with him welcoming Queen Elizabeth II to open the newly constructed Parliament House. The economic downturn of the late 1980s, and accompanying high interest rates, saw the Government fall in the opinion polls, with many doubting that Hawke could win a fourth election. Keating, who had long understood that he would eventually succeed Hawke as prime minister, began to plan a leadership change; at the end of 1988, Keating put pressure on Hawke to retire in the new year. Hawke rejected this suggestion but reached a secret agreement with Keating, the so-called "Kirribilli Agreement", stating that he would step down in Keating's favour at some point after the 1990 election. Hawke subsequently won that election, in the process leading Labor to a record fourth consecutive electoral victory, albeit by a slim margin. Hawke appointed Keating as deputy prime minister to replace the retiring Lionel Bowen. By the end of 1990, frustrated by the lack of any indication from Hawke as to when he might retire, Keating made a provocative speech to the Federal Parliamentary Press Gallery. Hawke considered the speech disloyal, and told Keating he would renege on the Kirribilli Agreement as a result. After attempting to force a resolution privately, Keating finally resigned from the Government in June 1991 to challenge Hawke for the leadership. Hawke won the leadership spill, and in a press conference after the result, Keating declared that he had fired his "one shot" on the leadership. Hawke appointed John Kerin to replace Keating as Treasurer. Despite his victory in the June spill, Hawke quickly began to be regarded by many of his colleagues as a "wounded" leader; he had now lost his long-term political partner, his ratings in the opinion polls were beginning to fall significantly, and after nearly nine years as Prime Minister, there was speculation that it would soon be time for a new leader. Hawke's leadership was irrevocably damaged at the end of 1991; after Liberal Leader John Hewson released 'Fightback!', a detailed proposal for sweeping economic change, including the introduction of a goods and services tax, Hawke was forced to sack Kerin as Treasurer after the latter made a public gaffe attempting to attack the policy. Keating duly challenged for the leadership a second time on 19 December, arguing that he would be better placed to defeat Hewson; this time, Keating succeeded, narrowly defeating Hawke by 56 votes to 51. In a speech to the House of Representatives following the vote, Hawke declared that his nine years as prime minister had left Australia a better and wealthier country, and he was given a standing ovation by those present. He subsequently tendered his resignation to the Governor-General and pledged support to his successor.
Hawke briefly returned to the backbench, before resigning from Parliament on 20 February 1992, sparking a by-election which was won by the independent candidate Phil Cleary from among a record field of 22 candidates. Keating would go on to lead Labor to a fifth victory at the 1993 election, although he was defeated by the Liberal Party at the 1996 election. Hawke wrote that he had very few regrets over his time in office, although he stated that he wished he had been able to advance the cause of Indigenous land rights further. His bitterness towards Keating over the leadership challenges surfaced in his earlier memoirs, although by the 2000s Hawke stated that he and Keating had buried their differences, and that they regularly dined together and considered each other friends. The publication of the book Hawke: The Prime Minister, by Hawke's second wife, Blanche d'Alpuget, in 2010, reignited conflict between the two, with Keating accusing Hawke and d'Alpuget of spreading falsehoods about his role in the Hawke Government. Despite this, the two campaigned together for Labor several times, including at the 2019 election, where they released their first joint article for nearly three decades; Craig Emerson, who worked for both men, said they had reconciled in later years after Hawke grew ill. Retirement and later life After leaving Parliament, Hawke entered the business world, taking on a number of directorships and consultancy positions which enabled him to achieve considerable financial success. He avoided public involvement with the Labor Party during Keating's tenure as Prime Minister, not wanting to be seen as attempting to overshadow his successor. After Keating's defeat and the election of the Howard Government at the 1996 election, he returned to public campaigning with Labor, regularly appearing at election launches. Despite his personal affection for Queen Elizabeth II, boasting that he had been her "favourite Prime Minister", Hawke was an enthusiastic republican and joined the campaign for a Yes vote in the 1999 republic referendum. In 2002, Hawke was named to South Australia's Economic Development Board during the Rann Government. In the lead-up to the 2007 election, Hawke made a considerable personal effort to support Kevin Rudd, making speeches at a large number of campaign office openings across Australia and appearing in multiple campaign advertisements. As well as campaigning against WorkChoices, Hawke also attacked John Howard's record as Treasurer, stating "it was the judgement of every economist and international financial institution that it was the restructuring reforms undertaken by my government, with the full cooperation of the trade union movement, which created the strength of the Australian economy today". In February 2008, after Rudd's victory, Hawke joined former Prime Ministers Gough Whitlam, Malcolm Fraser and Paul Keating in Parliament House to witness the long-anticipated apology to the Stolen Generations. In 2009, Hawke helped establish the Centre for Muslim and Non-Muslim Understanding at the University of South Australia. Interfaith dialogue was an important issue for Hawke, who told the Adelaide Review that he was "convinced that one of
where the gods were indulging in their new pastime of hurling objects at Baldr, which would bounce off without harming him. Loki gave the spear to Baldr's brother, the blind god Höðr, who then inadvertently killed his brother with it (other versions suggest that Loki guided the arrow himself). For this act, Odin and the asynja Rindr gave birth to Váli, who grew to adulthood within a day and slew Höðr. Baldr was ceremonially burnt upon his ship, Hringhorni, the largest of all ships. As he was carried to the ship, Odin whispered in his ear. This was to be a key riddle asked by Odin (in disguise) of the giant Vafthrudnir (and which was unanswerable) in the poem Vafthrudnismal. The riddle also appears in the riddles of Gestumblindi in Hervarar saga. The dwarf Litr was kicked by Thor into the funeral fire and burnt alive. Nanna, Baldr's wife, also threw herself on the funeral fire to await Ragnarök, when she would be reunited with her husband (alternatively, she died of grief). Baldr's horse with all its trappings was also burned on the pyre. The ship was set to sea by Hyrrokin, a giantess, who came riding on a wolf and gave the ship such a push that fire flashed from the rollers and all the earth shook. Upon Frigg's entreaties, delivered through the messenger Hermod, Hel promised to release Baldr from the underworld if all objects alive and dead would weep for him. All did, except a giantess, Þökk (often presumed to be the god Loki in disguise), who refused to mourn the slain god. Thus Baldr had to remain in the underworld, not to emerge until after Ragnarök, when he and his brother Höðr would be reconciled and rule the new earth together with Thor's sons. Besides descriptions of Baldr, the Prose Edda also explicitly links Baldr to the Anglo-Saxon Beldeg in its prologue. Gesta Danorum Writing at the end of the 12th century, the Danish historian Saxo Grammaticus tells the story of Baldr (recorded as Balderus) in a form that professes to be historical. According to him, Balderus and Høtherus were rival suitors for the hand of Nanna, daughter of Gewar, King of Norway. Balderus was a demigod and common steel could not wound his sacred body. The two rivals encountered each other in a terrific battle. Though Odin and Thor and the other gods fought for Balderus, he was defeated and fled away, and Høtherus married the princess. Nevertheless, Balderus took heart of grace and again met Høtherus in a stricken field. But he fared even worse than before. Høtherus dealt him a deadly wound with a magic sword, named Mistletoe, which he had received from Mimir, the satyr of the woods; after lingering three days in pain Balderus died of his injury and was buried with royal honours in a barrow. Chronicon Lethrense and Annales Lundenses There are also two lesser-known Danish Latin chronicles, the Chronicon Lethrense and the Annales Lundenses, of which the latter is included in the former. These two sources provide a second euhemerized account of Höðr's slaying of Baldr. It relates that Hother was the king of the Saxons and son of Hothbrodd and Hadding. Hother first slew Othen's (i.e. Odin's) son Balder in battle and then chased Othen and Thor. Finally, Othen's son Both killed Hother. Hother, Balder, Othen and Thor were incorrectly considered to be gods. Utrecht Inscription A Latin votive inscription from Utrecht, from the 3rd or 4th century C.E., has been theorized as containing the dative form Baldruo, pointing to a Latin nominative singular *Baldruus, which some have identified with the Norse/Germanic god, although both the reading and this interpretation have been questioned. Anglo-Saxon Chronicles In the Anglo-Saxon Chronicles Baldr is named as the ancestor of the monarchy of Kent, Bernicia, Deira, and Wessex through his supposed son Brond. Eponyms Plants As referenced in Gylfaginning, in Sweden and Norway the scentless mayweed (Matricaria perforata) and the similar sea mayweed (Matricaria maritima) are both called baldursbrá "Balder's brow", regionally also baldeyebrow in northern England. In Iceland only the former is found. In Germany valerian is known as Baldrian; variations using or influenced by reflexes of Phol include Faltrian (upper Austria), Villumfallum (Salzburg), and Fildron or Faldron (Tyrol). Toponyms There are a few old place names in Scandinavia that contain the name Baldr. The most certain and notable one is the (former) parish name Balleshol in Hedmark county, Norway: "a Balldrshole" 1356 (where the last element is hóll m "mound; small hill").
Others may be (in Norse forms) Baldrsberg in Vestfold county, Baldrsheimr in Hordaland county, Baldrsnes in Sør-Trøndelag county—and (very uncertain) the Balsfjorden fjord and Balsfjord municipality in Troms county. In Copenhagen, there is also a Baldersgade, or "Balder's Street". A street in downtown Reykjavík is called Baldursgata (Baldur's Street). In Sweden there is a Baldersgatan (Balder's Street) in Stockholm. There is also Baldersnäs (Balder's isthmus), Baldersvik (Balder's bay), Balders udde (Balder's headland) and Baldersberg (Balder's mountain) at various places. In popular culture Balder the Brave is a fictional character based on Baldr. He appears in comic books published by Marvel Comics as the half-brother of Thor, and son of Odin, ruler of
place the abode called Breidablik, and there is not in heaven a fairer dwelling." Later in the work, when Snorri describes Baldr, he gives a longer description, citing Grímnismál, though he does not name the poem: "He dwells in the place called Breidablik, which is in heaven; in that place may nothing unclean be, even as is said here: Breidablik 't is called, | where Baldr has A hall made for himself: In that land | where I know lie Fewest baneful runes." Breiðablik is not otherwise mentioned in the Eddic sources. In popular culture Breidablik
is a sacred weapon in Fire Emblem Heroes that the Summoner uses to summon Heroes coming from different Fire Emblem games. In the PlayStation game Xenogears, Bledavik is the name of the capital city of the desert kingdom of Aveh on the Ignas continent. Breiðablik Tonhall, an undisclosed place in jail where the then convicted Norwegian-born French musician and writer Louis Cachet (a.k.a. Varg Vikernes) recorded his first
as are all the dwellings of the gods, in the kingdom of Þrúðheimr (or Þrúðvangar according to Gylfaginning and Ynglinga saga). Modern influence The hall inspired the name of the Beliskner, an Asgard starship commanded by Supreme Commander Thor in the television series Stargate SG-1. There is a NS / pagan black metal band from
called "Seeker of Freyja's Necklace" (Skáldskaparmál, section 8) and Loki is called "Thief of Brísingamen" (Skáldskaparmál, section 16). A similar story appears in the later Sörla þáttr, where Heimdallr does not appear. Sörla þáttr Sörla þáttr is a short story in the later and extended version of the Saga of Olaf Tryggvason in the manuscript of the Flateyjarbók, which was written and compiled by two Christian priests, Jon Thordson and Magnus Thorhalson, in the late 14th century. In the end of the story, the arrival of Christianity dissolves the old curse that traditionally was to endure until Ragnarök. The battle of Högni and Heðinn is recorded in several medieval sources, including the skaldic poem Ragnarsdrápa, Skáldskaparmál (section 49), and Gesta Danorum: king Högni's daughter, Hildr, is kidnapped by king Heðinn. When Högni comes to fight Heðinn on an island, Hildr comes to offer her father a necklace on behalf of Heðinn for peace; but the two kings still battle, and Hildr resurrects the fallen to make them fight until Ragnarök. None of these earlier sources mentions Freyja or king Olaf Tryggvason, the historical figure who Christianized Norway and Iceland in the 10th century. Archaeological record A völva was buried with considerable splendour in Hagebyhöga in Östergötland, Sweden. In addition to being buried with her wand, she had received great riches, which included horses, a wagon and an Arabian bronze pitcher. There was also a silver pendant, which represents a woman with a broad necklace around her neck. This kind of necklace was only worn by the most prominent women during the Iron Age, and some have interpreted it as Freyja's necklace Brísingamen. The pendant may represent Freyja herself. Modern influence Alan Garner wrote a children's fantasy novel called The Weirdstone of Brisingamen, published in 1960, about an enchanted teardrop bracelet. Diana Paxson's novel Brisingamen features Freyja and her bracelet. Black Phoenix Alchemy Lab has a perfumed oil scent named Brisingamen. Freyja's necklace Brisingamen features prominently in Betsy Tobin's novel Iceland, where the necklace is seen to have significant protective powers. Brisingamen features as a major item in Joel Rosenberg's Keepers of the Hidden Ways series of books. In it, seven jewels were created for the necklace by the Dwarfs and given to the Norse goddess; she in turn eventually split the necklace into its seven separate jewels and hid them throughout the realm, since together they give their holder the power to shape the universe. The books' plot concerns discovering one of the jewels and deciding what to do with its power while avoiding Loki and other Norse characters. In Christopher Paolini's Inheritance Cycle, the word "brisingr" means fire. This is probably a distillation of the word brisinga. Ursula Le Guin's short story Semley's Necklace, the first part of her novel Rocannon's World, is a retelling of the Brisingamen story on an alien planet. Brisingamen is represented as a card in the Yu-Gi-Oh! Trading Card Game, "Nordic Relic
called "Seeker of Freyja's Necklace" (Skáldskaparmál, section 8) and Loki is called "Thief of Brísingamen" (Skáldskaparmál, section 16). A similar story appears in the later Sörla þáttr, where Heimdallr does not appear. Sörla þáttr Sörla þáttr is a short story in the later and extended version of the Saga of Olaf Tryggvason in the manuscript of the Flateyjarbók, which was written and compiled by two Christian priests, Jon Thordson and Magnus Thorhalson, in the late 14th century. In the end of the story, the arrival of Christianity dissolves the old curse that traditionally was to endure until Ragnarök. The battle of Högni and Heðinn is recorded in several medieval sources, including the skaldic poem Ragnarsdrápa, Skáldskaparmál (section 49), and Gesta Danorum: king Högni's daughter, Hildr, is kidnapped by king Heðinn. When Högni comes to fight Heðinn on an island, Hildr comes to offer her father a necklace on behalf of Heðinn for peace; but the two kings still battle, and Hildr resurrects the fallen to make them fight until Ragnarök. None of these earlier sources mentions Freyja or king Olaf Tryggvason, the historical figure who Christianized Norway and Iceland in the 10th Century. Archaeological record A Völva was buried with considerable splendour in Hagebyhöga in Östergötland, Sweden. In addition to being buried with her wand, she had received great riches which included horses, a wagon and an Arabian bronze pitcher. There was also a silver pendant, which represents a woman with a broad necklace around her neck. This kind of necklace was only worn by the most prominent women during the Iron Age and some have interpreted it as Freyja's necklace Brísingamen. The pendant may represent Freyja herself. Modern influence Alan Garner wrote a children's fantasy novel called The Weirdstone of Brisingamen, published in 1960, about an enchanted teardrop bracelet. Diana Paxson's novel Brisingamen features Freyja and her bracelet. Black Phoenix Alchemy Lab has a perfumed oil scent named Brisingamen. Freyja's necklace Brisingamen features prominently in Betsy Tobin's novel Iceland, where the necklace is seen to have significant protective powers. The Brisingamen feature as a major item in Joel Rosenberg's Keepers of the Hidden Ways series of books. In it, there are seven jewels that were created for the necklace by the Dwarfs and given to the Norse goddess. She in turn eventually split them up into the seven separate jewels and hid them throughout the realm, as together they hold the power to shape the universe by its holder. The book's plot is about discovering one of them and deciding what to do with the power they allow while avoiding Loki and other Norse characters. In Christopher Paolini's Inheritance Cycle, the word "brisingr" means fire. This is probably a distillation of the word brisinga. Ursula Le Guin's short story Semley's Necklace, the first part of her novel Rocannon's World, is a retelling of the Brisingamen story on an alien planet. Brisingamen is represented as a card in the Yu-Gi-Oh! Trading Card Game, "Nordic Relic
Proof: If the theorem is correct, then every continuous odd function from $S^n$ to $\mathbb{R}^n$ must include 0 in its range. However, $0 \notin S^{n-1}$, so there cannot be a continuous odd function whose range is $S^{n-1}$. Conversely, if it is incorrect, then there is a continuous odd function $g : S^n \to \mathbb{R}^n$ with no zeroes. Then we can construct another odd function $h : S^n \to S^{n-1}$ by $h(x) = g(x)/\|g(x)\|$: since $g$ has no zeroes, $h$ is well-defined and continuous. Thus we have a continuous odd retraction. Proofs 1-dimensional case The 1-dimensional case can easily be proved using the intermediate value theorem (IVT). Let $g$ be an odd real-valued continuous function on a circle. Pick an arbitrary $x$. If $g(x) = 0$ then we are done. Otherwise, without loss of generality, $g(x) > 0$. But $g(-x) < 0$. Hence, by the IVT, there is a point $y$ between $x$ and $-x$ at which $g(y) = 0$. General case Algebraic topological proof Assume that $h : S^n \to S^{n-1}$ is an odd continuous function with $n > 2$ (the case $n = 1$ is treated above, the case $n = 2$ can be handled using basic covering theory). By passing to orbits under the antipodal action, we then get an induced continuous function $h' : \mathbb{RP}^n \to \mathbb{RP}^{n-1}$ between real projective spaces, which induces an isomorphism on fundamental groups. By the Hurewicz theorem, the induced ring homomorphism on cohomology with $\mathbb{F}_2$ coefficients [where $\mathbb{F}_2$ denotes the field with two elements], $\mathbb{F}_2[a]/a^{n+1} = H^*(\mathbb{RP}^n; \mathbb{F}_2) \leftarrow H^*(\mathbb{RP}^{n-1}; \mathbb{F}_2) = \mathbb{F}_2[b]/b^n$, sends $b$ to $a$. But then we get that $b^n = 0$ is sent to $a^n \neq 0$, a contradiction. One can also show the stronger statement that any odd map $S^{n-1} \to S^{n-1}$ has odd degree and then deduce the theorem from this result. Combinatorial proof The Borsuk–Ulam theorem can be proved from Tucker's lemma. Let $g : B^n \to \mathbb{R}^n$ be a continuous odd function. Because $g$ is continuous on a compact domain, it is uniformly continuous. Therefore, for every $\epsilon > 0$, there is a $\delta > 0$ such that, for every two points of $B^n$ which are within $\delta$ of each other, their images under $g$ are within $\epsilon$ of each other. Define a triangulation of $B^n$ with edges of length at most $\delta$. Label each vertex $v$ of the triangulation with a label $l(v) \in \{\pm 1, \ldots, \pm n\}$ in the following way: The absolute value of the label is the index of the coordinate with the highest absolute value of $g$: $|l(v)| = \arg\max_k |g(v)_k|$. The sign of the label is the sign of that coordinate of $g$, so that $l(v) = \operatorname{sgn}(g(v)_{|l(v)|}) \cdot |l(v)|$. Because $g$ is odd, the labeling is also odd: $l(-v) = -l(v)$. Hence, by Tucker's lemma, there are two adjacent vertices $u, v$ with opposite labels. Assume w.l.o.g. that the labels are $l(u) = 1$, $l(v) = -1$. By the definition of $l$, this means that in both $g(u)$ and $g(v)$, coordinate #1 is the largest coordinate: in $g(u)$ this coordinate is positive while in $g(v)$ it is negative. By the construction of the triangulation, the distance between $g(u)$ and $g(v)$ is at most $\epsilon$, so in particular $|g(u)_1| + |g(v)_1| = |g(u)_1 - g(v)_1| \le \epsilon$ (since $g(u)_1$ and $g(v)_1$ have opposite signs) and so $|g(u)_1| \le \epsilon$. But since the largest coordinate of $g(u)$ is coordinate #1, this means that $|g(u)_k| \le \epsilon$ for each $1 \le k \le n$. So $\|g(u)\| \le c\epsilon$, where $c$ is some constant depending on $n$ and the norm which you have chosen. The above is true for every $\epsilon > 0$; since $B^n$ is compact there must hence be a point $u$ at which $g(u) = 0$. Corollaries No subset of $\mathbb{R}^n$ is homeomorphic to $S^n$. The ham sandwich theorem: For any compact sets $A_1, \ldots, A_n$ in $\mathbb{R}^n$ we can always find a hyperplane dividing each of them into two subsets of equal measure. Equivalent results Above we showed how to prove the Borsuk–Ulam theorem from Tucker's lemma. The converse is also true: it is possible to prove Tucker's lemma from the Borsuk–Ulam theorem. Therefore, these two theorems are equivalent. Generalizations In the original theorem, the domain of the function $f$ is the unit $n$-sphere (the boundary of the unit $n$-ball). In general, it is true also when the domain of $f$ is the boundary of any open bounded
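The 1-dimensional IVT argument above translates directly into a computation. The following is a minimal numerical sketch, not part of the source text: for a continuous $2\pi$-periodic function $f$ on the circle, the odd part $g(t) = f(t) - f(t + \pi)$ satisfies $g(t + \pi) = -g(t)$, so $g$ changes sign on $[0, \pi]$, and bisection locates a pair of antipodal points where $f$ takes equal values. The function names and the sample $f$ are illustrative assumptions, not anything fixed by the theorem.

```python
import math

def find_antipodal_pair(f, tol=1e-10):
    """Bisection sketch of the 1-D Borsuk-Ulam theorem.

    For a continuous 2*pi-periodic f, the odd part g(t) = f(t) - f(t + pi)
    satisfies g(t + pi) = -g(t), so g changes sign on [0, pi]; the
    intermediate value theorem then guarantees a zero, i.e. a t with
    f(t) = f(t + pi).
    """
    g = lambda t: f(t) - f(t + math.pi)
    lo, hi = 0.0, math.pi
    if g(lo) == 0.0:                  # antipodal pair found immediately
        return lo
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:     # keep the half-interval on which g changes sign
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical example: a smooth "temperature" distribution on a circle.
f = lambda t: math.sin(t) + 0.5 * math.cos(2.0 * t) + 0.3 * math.sin(3.0 * t + 1.0)
t = find_antipodal_pair(f)
print(t, f(t), f(t + math.pi))        # the last two values agree up to the tolerance
```

Bisection is used here because the sign-change certificate $g(0) = -g(\pi)$ is exactly what the IVT step of the proof provides; any root-finder would do, but this one mirrors the argument step for step.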
second. A connection has also been suggested with the Old Norse bragarfull, the cup drunk on solemn occasions with the taking of vows. The word is usually taken to semantically derive from the second meaning of bragr ('first one, noblest'). A relation with the Old English term brego ('lord, prince') remains uncertain. Bragi regularly appears as a personal name in Old Norse and Old Swedish sources, which according to the linguist Jan de Vries might indicate the secondary character of the god's name.

Attestations

Snorri Sturluson writes in the Gylfaginning after describing Odin, Thor, and Baldr: In Skáldskaparmál Snorri writes: That Bragi is Odin's son is clearly mentioned only here and in some versions of a list of the sons of Odin (see Sons of Odin). But "wish-son" in stanza 16 of the Lokasenna could mean "Odin's son" and is translated by Hollander as Odin's kin. Bragi's mother is possibly the giantess Gunnlod. If Bragi's mother is Frigg, then Frigg is somewhat dismissive of Bragi in the Lokasenna in stanza 27, when Frigg complains that if she had a son in Ægir's hall as brave as Baldr then Loki would have to fight for his life.

In that poem Bragi at first forbids Loki to enter the hall but is overruled by Odin. Loki then gives a greeting to all gods and goddesses who are in the hall save to Bragi. Bragi generously offers his sword, horse, and an arm ring as a peace gift, but Loki only responds by accusing Bragi of cowardice, of being the most afraid to fight of any of the Æsir and Elves within the hall. Bragi responds that if they were outside the hall, he would have Loki's head, but Loki only repeats the accusation. When Bragi's wife Iðunn attempts to calm Bragi, Loki accuses her of embracing her brother's slayer, a reference to matters that have not survived. It may be that Bragi had slain Iðunn's brother.

A passage in the Poetic Edda poem Sigrdrífumál describes runes being graven on the sun, on the ear of one of the sun-horses and on the hoofs of the other, on Sleipnir's teeth, on bear's paw, on eagle's beak, on wolf's claw, and on several other things including on Bragi's tongue. Then the runes are shaved off and the shavings are mixed with mead and sent abroad so that Æsir have some, Elves have some, Vanir have some, and Men have some, these being speech runes and birth runes, ale runes, and magic runes. The meaning of this is obscure.

The first part of Snorri Sturluson's Skáldskaparmál is a dialogue between Ægir and Bragi about
the nature of poetry, particularly skaldic poetry. Bragi tells the origin of the mead of poetry from the blood of Kvasir and how Odin obtained this mead. He then goes on to discuss various poetic metaphors known as kennings. Snorri Sturluson clearly distinguishes the god Bragi from the mortal skald Bragi Boddason, whom he often mentions separately. The appearance of Bragi in the Lokasenna indicates that if these two Bragis were originally the same, they have become separated for that author also, or that the chronology has become very muddled and Bragi Boddason has been relocated to mythological time. Compare the appearance of the Welsh Taliesin in the second branch of the Mabinogi. Legendary chronology sometimes does become muddled. Whether Bragi the god originally arose as a deified version of Bragi Boddason was much debated in the 19th century, especially by the scholars Eugen Mogk and Sophus Bugge. The debate remains undecided. In the poem Eiríksmál Odin, in Valhalla, hears the coming of the dead Norwegian king Eric Bloodaxe and his host, and bids the heroes Sigmund and Sinfjötli rise to greet him. Bragi is then mentioned, questioning how Odin knows that it is Eric and why Odin has let such a king die. In the poem Hákonarmál, Hákon the Good is taken to
came with his De l'Esprit géométrique ("Of the Geometrical Spirit"), originally written as a preface to a geometry textbook for one of the famous Petites écoles de Port-Royal ("Little Schools of Port-Royal"). The work was unpublished until over a century after his death. Here, Pascal looked into the issue of discovering truths, arguing that the ideal of such a method would be to found all propositions on already established truths. At the same time, however, he claimed this was impossible because such established truths would require other truths to back them up; first principles, therefore, cannot be reached. Based on this, Pascal argued that the procedure used in geometry was as perfect as possible, with certain principles assumed and other propositions developed from them. Nevertheless, there was no way to know the assumed principles to be true.

Pascal also used De l'Esprit géométrique to develop a theory of definition. He distinguished between definitions which are conventional labels defined by the writer and definitions which are within the language and understood by everyone because they naturally designate their referent. The second type would be characteristic of the philosophy of essentialism. Pascal claimed that only definitions of the first type were important to science and mathematics, arguing that those fields should adopt the philosophy of formalism as formulated by Descartes.

In De l'Art de persuader ("On the Art of Persuasion"), Pascal looked deeper into geometry's axiomatic method, specifically the question of how people come to be convinced of the axioms upon which later conclusions are based. Pascal agreed with Montaigne that achieving certainty in these axioms and conclusions through human methods is impossible. He asserted that these principles can be grasped only through intuition, and that this fact underscored the necessity for submission to God in searching out truths.

The Pensées

Pascal's most influential theological work, referred to posthumously as the Pensées ("Thoughts"), is widely considered to be a masterpiece and a landmark in French prose. When commenting on one particular section (Thought #72), Sainte-Beuve praised it as the finest pages in the French language. Will Durant hailed the Pensées as "the most eloquent book in French prose". The Pensées was not completed before his death. It was to have been a sustained and coherent examination and defense of the Christian faith, with the original title Apologie de la religion Chrétienne ("Defense of the Christian Religion").

The first version of the numerous scraps of paper found after his death appeared in print as a book in 1669, titled Pensées de M. Pascal sur la religion, et sur quelques autres sujets ("Thoughts of M. Pascal on religion, and on some other subjects"), and soon thereafter became a classic. One of the Apologie's main strategies was to use the contradictory philosophies of Pyrrhonism and Stoicism, personalized by Montaigne on one hand and Epictetus on the other, in order to bring the unbeliever to such despair and confusion that he would embrace God.

Last works and death

T. S. Eliot described him during this phase of his life as "a man of the world among ascetics, and an ascetic among men of the world." Pascal's ascetic lifestyle derived from a belief that it was natural and necessary for a person to suffer. In 1659, Pascal fell seriously ill.
During his last years, he frequently tried to reject the ministrations of his doctors, saying, "Sickness is the natural state of Christians."

Louis XIV suppressed the Jansenist movement at Port-Royal in 1661. In response, Pascal wrote one of his final works, Écrit sur la signature du formulaire ("Writ on the Signing of the Form"), exhorting the Jansenists not to give in. Later that year, his sister Jacqueline died, which convinced Pascal to cease his polemics on Jansenism.

Pascal's last major achievement, returning to his mechanical genius, was inaugurating perhaps the first bus line, the carrosses à cinq sols, moving passengers within Paris in a carriage with many seats. Pascal also set out the operating principles that were later used to plan public transportation: the carriages had a fixed route and a fixed price, and they left even if there were no passengers. The idea of public transportation is widely considered to have been well ahead of its time. The lines were not commercially successful, and the last one closed by 1675.

In 1662, Pascal's illness became more violent, and his emotional condition had severely worsened since his sister's death. Aware that his health was fading quickly, he sought to be moved to the hospital for incurable diseases, but his doctors declared that he was too unstable to be carried. In Paris on 18 August 1662, Pascal went into convulsions and received extreme unction. He died the next morning, his last words being "May God never abandon me," and was buried in the cemetery of Saint-Étienne-du-Mont.

An autopsy performed after his death revealed grave problems with his stomach and other organs of his abdomen, along with damage to his brain. Despite the autopsy, the cause of his poor health was never precisely determined, though speculation focuses on tuberculosis, stomach cancer, or a combination of the two. The headaches which afflicted Pascal are generally attributed to his brain lesion.

Legacy

Cultural references

One of the Universities of Clermont-Ferrand, France – Université Blaise Pascal – is named after him. Établissement scolaire français Blaise-Pascal in Lubumbashi, Democratic Republic of the Congo is named after Pascal. The 1969 Eric Rohmer film My Night at Maud's is based on the work of Pascal. Roberto Rossellini directed a filmed biopic, Blaise Pascal, which originally aired on Italian television in 1971. Pascal was a subject of the first edition of the 1984 BBC Two documentary Sea of Faith, presented by Don Cupitt. The chameleon in the film Tangled is named for Pascal. A programming language is named for Pascal. In 2014, Nvidia announced its new Pascal microarchitecture, which is named for Pascal; the first graphics cards featuring Pascal were released in 2016. The 2017 game Nier: Automata has multiple characters named after famous philosophers; one of these is a sentient pacifistic machine named Pascal, who serves as a major supporting character. Pascal creates a village for machines to live peacefully with the androids they're at war with and acts as a parental figure for other machines trying to adapt to their newly-found individuality. The otter in the Animal Crossing series is named for Pascal. Minor planet 4500 Pascal is named in his honor.
Works "Essai pour les coniques" [Essay on conics] (1639) Experiences nouvelles touchant le vide [New experiments with the vacuum] (1647) Récit de la grande expérience de l'équilibre des liqueurs [Account of the great experiment on equilibrium in liquids] (1648) Traité du triangle arithmétique [Treatise on the arithmetical triangle] (written c. 1654; publ. 1665) Lettres provinciales [The provincial letters] (1656–57) De l'Esprit géométrique [On the geometrical spirit] (1657 or 1658) Écrit sur la signature du formulaire (1661) Pensées [Thoughts] (incomplete at death; publ. 1670) See also Expected value Gambler's ruin Pascal's barrel Pascal distribution Pascal's mugging Pascal's pyramid Pascal's simplex Problem of points Scientific revolution List of pioneers in computer science List of works by Eugène Guillaume References Further reading Adamson, Donald. Blaise Pascal: Mathematician, Physicist, and Thinker about God (1995) Adamson, Donald. "Pascal's Views on Mathematics and the Divine," Mathematics and the Divine: A Historical Study (eds. T. Koetsier and L. Bergmans. Amsterdam: Elsevier 2005), pp. 407–21. Broome, J.H. Pascal. (London: E. Arnold, 1965). Davidson, Hugh M. Blaise Pascal. (Boston: Twayne Publishers), 1983. Farrell, John. "Pascal and Power". Chapter seven of Paranoia and Modernity: Cervantes to Rousseau (Cornell UP, 2006). Goldmann, Lucien, The hidden God; a study of tragic vision in the Pensees of Pascal and the tragedies of Racine (original ed. 1955, Trans. Philip Thody. London: Routledge, 1964). Groothuis, Douglas. On Pascal. (Belmont: Wadsworth, 2002). Jordan, Jeff. Pascal's Wager: Pragmatic Arguments and Belief in God. (Oxford: Clarendon Press, 2006). Landkildehus, Søren. "Kierkegaard and Pascal as kindred spirits in the Fight against Christendom" in Kierkegaard and the Renaissance and Modern Traditions (ed. Jon Stewart. Farnham: Ashgate Publishing, 2009). Mackie, John Leslie. The Miracle of Theism: Arguments for and against the Existence of God. (Oxford: Oxford University Press, 1982). Stafford Harry Northcote, Viscount Saint Cyres, Pascal (London: Smith, Elder & Company, 1909; New York: E. P. Dutton) Pugh, Anthony R. The Composition of Pascal's Apologia, (University of Toronto Press, 1984). Tobin, Paul. "The Rejection of Pascal's Wager: A Skeptic's Guide to the Bible and the Historical Jesus". authorsonline.co.uk, 2009. Yves Morvan, Pascal à Mirefleurs ? Les dessins de la maison de Domat, Impr. Blandin, 1985. (FRBNF40378895)
age of 18; he died just two months after his 39th birthday.

Life

Early life and education

Pascal was born in Clermont-Ferrand, which is in France's Auvergne region, by the Massif Central. He lost his mother, Antoinette Begon, at the age of three. His father, Étienne Pascal (1588–1651), who also had an interest in science and mathematics, was a local judge and member of the "Noblesse de Robe". Pascal had two sisters, the younger Jacqueline and the elder Gilberte.

In 1631, five years after the death of his wife, Étienne Pascal moved with his children to Paris. The newly arrived family soon hired Louise Delfault, a maid who eventually became an instrumental member of the family. Étienne, who never remarried, decided that he alone would educate his children, for they all showed extraordinary intellectual ability, particularly his son Blaise. The young Pascal showed an amazing aptitude for mathematics and science.

"Essay on Conics"

Particularly of interest to Pascal was a work of Desargues on conic sections. Following Desargues' thinking, the 16-year-old Pascal produced, as a means of proof, a short treatise on what was called the "Mystic Hexagram", "Essai pour les coniques" ("Essay on Conics"), and sent it—his first serious work of mathematics—to Père Mersenne in Paris; it is still known today as Pascal's theorem. It states that if a hexagon is inscribed in a circle (or conic) then the three intersection points of opposite sides lie on a line (called the Pascal line). Pascal's work was so precocious that René Descartes was convinced that Pascal's father had written it. When assured by Mersenne that it was, indeed, the product of the son and not the father, Descartes dismissed it with a sniff: "I do not find it strange that he has offered demonstrations about conics more appropriate than those of the ancients," adding, "but other matters related to this subject can be proposed that would scarcely occur to a 16-year-old child."

Leaving Paris

In France at that time offices and positions could be—and were—bought and sold. In 1631, Étienne sold his position as second president of the Cour des Aides for 65,665 livres. The money was invested in a government bond which provided, if not a lavish, then certainly a comfortable income which allowed the Pascal family to move to, and enjoy, Paris. But in 1638 Richelieu, desperate for money to carry on the Thirty Years' War, defaulted on the government's bonds. Suddenly Étienne Pascal's worth had dropped from nearly 66,000 livres to less than 7,300.

Like so many others, Étienne was eventually forced to flee Paris because of his opposition to the fiscal policies of Cardinal Richelieu, leaving his three children in the care of his neighbour Madame Sainctot, a great beauty with an infamous past who kept one of the most glittering and intellectual salons in all France. It was only when Jacqueline performed well in a children's play with Richelieu in attendance that Étienne was pardoned. In time, Étienne was back in good graces with the cardinal, and in 1639 he was appointed the king's commissioner of taxes in the city of Rouen—a city whose tax records, thanks to uprisings, were in utter chaos.

Pascaline

In 1642, in an effort to ease his father's endless, exhausting calculations and recalculations of taxes owed and paid (into which work the young Pascal had been recruited), Pascal, not yet 19, constructed a mechanical calculator capable of addition and subtraction, called Pascal's calculator or the Pascaline.
Of the eight Pascalines known to have survived, four are held by the Musée des Arts et Métiers in Paris and one more by the Zwinger museum in Dresden, Germany. Although these machines are pioneering forerunners to a further 400 years of development of mechanical methods of calculation, and in a sense to the later field of computer engineering, the calculator failed to be a great commercial success. Partly because it was still quite cumbersome to use in practice, but probably primarily because it was extraordinarily expensive, the Pascaline became little more than a toy, and a status symbol, for the very rich both in France and elsewhere in Europe. Pascal continued to make improvements to his design through the next decade, building 20 finished machines over the following 10 years; he himself refers to some 50 machines that were built to his design.

Mathematics

Probability

Pascal's development of probability theory was his most influential contribution to mathematics. Originally applied to gambling, today it is extremely important in economics, especially in actuarial science. John Ross writes, "Probability theory and the discoveries following it changed the way we regard uncertainty, risk, decision-making, and an individual's and society's ability to influence the course of future events." However, Pascal and Fermat, though doing important early work in probability theory, did not develop the field very far. Christiaan Huygens, learning of the subject from the correspondence of Pascal and Fermat, wrote the first book on the subject. Later figures who continued the development of the theory include Abraham de Moivre and Pierre-Simon Laplace.

In 1654, prompted by his friend the Chevalier de Méré, he corresponded with Pierre de Fermat on the subject of gambling problems, and from that collaboration was born the mathematical theory of probabilities. The specific problem was that of two players who want to finish a game early and, given the current circumstances of the game, want to divide the stakes fairly, based on the chance each has of winning the game from that point. From this discussion, the notion of expected value was introduced. Pascal later (in the Pensées) used a probabilistic argument, Pascal's wager, to justify belief in God and a virtuous life. The work done by Fermat and Pascal into the calculus of probabilities laid important groundwork for Leibniz' formulation of the calculus.

Treatise on the Arithmetical Triangle

Pascal's Traité du triangle arithmétique, written in 1654 but published posthumously in 1665, described a convenient tabular presentation for binomial coefficients, which he called the arithmetical triangle but which is now called Pascal's triangle. He defined the numbers in the triangle by recursion: call the number in the (m + 1)th row and (n + 1)th column t_{m,n}. Then t_{m,n} = t_{m−1,n} + t_{m,n−1}, for m = 0, 1, 2, ... and n = 0, 1, 2, ... The boundary conditions are t_{m,−1} = 0 and t_{−1,n} = 0 for m = 1, 2, 3, ... and n = 1, 2, 3, ..., and the generator is t_{0,0} = 1. Pascal concluded with the proof that these entries are the binomial coefficients: t_{m,n} = (m + n)! / (m! n!). In the same treatise, Pascal gave an explicit statement of the principle of mathematical induction. In 1654, he proved Pascal's identity relating the sums of the p-th powers of the first n positive integers for p = 0, 1, 2, ..., k. That same year, Pascal had a religious experience, and mostly gave up work in mathematics.
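The recursion and boundary conditions above translate directly into code. A minimal Python sketch of my own (the function name t is arbitrary), cross-checked against the closed form t_{m,n} = C(m+n, n):

from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def t(m, n):
    """Entry in the (m+1)th row and (n+1)th column of the arithmetical triangle."""
    if m < 0 or n < 0:       # boundary conditions: t(m,-1) = t(-1,n) = 0
        return 0
    if m == 0 and n == 0:    # the generator t(0,0) = 1
        return 1
    return t(m - 1, n) + t(m, n - 1)

# The recursion reproduces the binomial coefficients: t(m,n) = C(m+n, n).
assert all(t(m, n) == comb(m + n, n) for m in range(8) for n in range(8))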
Cycloid

In 1658, Pascal, while suffering from a toothache, began considering several problems concerning the cycloid. His toothache disappeared, and he took this as a heavenly sign to proceed with his research. Eight days later he had completed his essay and, to publicize the results, proposed a contest. Pascal proposed three questions relating to the center of gravity, area and volume of the cycloid, with the winner or winners to receive prizes of 20 and 40 Spanish doubloons. Pascal, Gilles de Roberval and Pierre de Carcavi were the judges, and neither of the two submissions (by John Wallis and Antoine de Lalouvère) was judged to be adequate. While the contest was ongoing, Christopher Wren sent Pascal a proposal for a proof of the rectification of the cycloid; Roberval promptly claimed that he had known of the proof for years. Wallis published Wren's proof (crediting Wren) in his Tractatus Duo, giving Wren priority for the first published proof.

Physics

Pascal contributed to several fields in physics, most notably the fields of fluid mechanics and pressure. In honour of his scientific contributions, his name has been given to the SI unit of pressure and to Pascal's law (an important principle of hydrostatics). He introduced a primitive form of roulette and the roulette wheel in his search for a perpetual motion machine.

Fluid dynamics

His work in the fields of hydrodynamics and hydrostatics centered on the principles of hydraulic fluids. His inventions include the hydraulic press (using hydraulic pressure to multiply force) and the syringe. He proved that hydrostatic pressure depends not on the weight of the fluid but on the elevation difference. He demonstrated this principle by attaching a thin tube to a barrel full of water and filling the tube with water up to the level of the third floor of a building. This caused the barrel to leak, in what became known as Pascal's barrel experiment.

Vacuum

By 1647, Pascal had learned of Evangelista Torricelli's experimentation with barometers. Having replicated an experiment that involved placing a tube filled with mercury upside down in a bowl of mercury, Pascal questioned what force kept some mercury in the tube and what filled the space above the mercury in the tube. At the time, most scientists, including Descartes, believed in a plenum, i.e., that some invisible matter filled all of space rather than a vacuum: "Nature abhors a vacuum." This was based on the Aristotelian notion that everything in motion was a substance, moved by another substance. Furthermore, light passed through the glass tube, suggesting a substance such as aether rather than vacuum filled the space.

Following more experimentation in this vein, in 1647 Pascal produced Experiences nouvelles touchant le vide ("New experiments with the vacuum"), which detailed basic rules describing to what degree various liquids could be supported by air pressure. It also provided reasons why it was indeed a vacuum above the column of liquid in a barometer tube. This work was followed by Récit de la grande expérience de l'équilibre des liqueurs ("Account of the great experiment on equilibrium in liquids"), published in 1648.

Torricelli's experiment showed that air pressure is equal to the weight of a column of mercury about 30 inches high. If air has a finite weight, Earth's atmosphere must have a maximum height. Pascal reasoned that if true, air pressure on a high mountain must be less than at a lower altitude. He lived near the Puy de Dôme mountain, but his health was poor, so he could not climb it.
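These hydrostatic and barometric claims can be checked with rough arithmetic. The sketch below is my own illustration, not from the source; the densities, the 8,400 m pressure scale height, the 10 m "third-floor" column, and the 2.26 mm value of the Paris line (ligne) are assumed round figures. The last computation anticipates the roughly 50 m bell-tower replication described in the next paragraph:

import math

g = 9.81               # m/s^2
rho_water = 1000.0     # kg/m^3
rho_mercury = 13600.0  # kg/m^3, approximate

# Pascal's barrel: pressure depends only on column height, P = rho * g * h.
# A thin 10 m water column (roughly third-floor height) adds about 98 kPa.
print(rho_water * g * 10.0)        # ~98,100 Pa, about one extra atmosphere

# Torricelli: ~30 inches (0.762 m) of mercury balances the atmosphere.
print(rho_mercury * g * 0.762)     # ~101,700 Pa, about 1 atm

# Barometric estimate of the drop for a ~50 m tower climb.
H = 8400.0                         # m, assumed pressure scale height
drop_mm_hg = 760.0 * (1.0 - math.exp(-50.0 / H))
print(drop_mm_hg)                  # ~4.5 mm of mercury
print(drop_mm_hg / 2.26)           # ~2 Paris lines (1 ligne ~ 2.26 mm)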
On 19 September 1648, after many months of Pascal's friendly but insistent prodding, Florin Périer, husband of Pascal's elder sister Gilberte, was finally able to carry out the fact-finding mission vital to Pascal's theory; Périer himself wrote an account of the experiment. Pascal replicated the experiment in Paris by carrying a barometer up to the top of the bell tower at the church of Saint-Jacques-de-la-Boucherie, a height of about 50 metres. The mercury dropped two lines. In a reply to the plenist Estienne Noel, Pascal wrote, echoing contemporary notions of science and falsifiability: "In order to show that a hypothesis is evident, it does not suffice that all the phenomena follow from it; instead, if it leads to something contrary to a single one of the phenomena, that suffices to establish its falsity." Blaise Pascal Chairs are given to outstanding international scientists to conduct their research in the Ile de France region.

Adult life: religion, literature, and philosophy

Religious conversion

In the winter of 1646, Pascal's 58-year-old father broke his hip when he slipped and fell on an icy street of Rouen; given the man's age and the state of medicine in the 17th century, a broken hip could be a very serious condition, perhaps even fatal. Rouen was home to two of the finest doctors in France, Deslandes and de la Bouteillerie. The elder Pascal "would not let anyone other than these men attend him...It was a good choice, for the old man survived and was able to walk again..." But treatment and rehabilitation took three months, during which time La Bouteillerie and Deslandes had become regular visitors.

Both men were followers of Jean Guillebert, proponent of a splinter group from Catholic teaching known as Jansenism. This still fairly small sect was making surprising inroads into the French Catholic community at that time. It espoused rigorous Augustinism. Blaise spoke with the doctors frequently, and after their successful treatment of his father, borrowed from them works by Jansenist authors. In this period, Pascal experienced a sort of "first conversion" and began to write on theological subjects in the course of the following year.

Pascal fell away from this initial religious engagement and experienced a few years of what some biographers have called his "worldly period" (1648–54). His father died in 1651 and left his inheritance to Pascal and his sister Jacqueline, for whom Pascal acted as conservator. Jacqueline announced that she would soon become a postulant in the Jansenist convent of Port-Royal. Pascal was deeply affected and very sad, not because of her choice, but because of his chronic poor health; he needed her just as she had needed him. By the end of October 1651, a truce had been reached between brother and sister. In return for a healthy annual stipend, Jacqueline signed over her part of the inheritance to her brother. Gilberte had already been given her inheritance in the form of a dowry. In early January, Jacqueline left for Port-Royal. On that day, according to Gilberte concerning her brother, "He retired very sadly to his rooms without seeing Jacqueline, who was waiting in the little parlor..." In early June 1653, after what must have seemed like endless badgering from Jacqueline, Pascal formally signed over the whole of his sister's inheritance to Port-Royal, which, to him, "had begun to smell like a cult." With two-thirds of his father's estate now gone, the 29-year-old Pascal was now consigned to genteel poverty. For a
generally considered to all derive from a common ancestral language termed Brittonic, British, Common Brittonic, Old Brittonic or Proto-Brittonic, which is thought to have developed from Proto-Celtic or early Insular Celtic by the 6th century BC. A major archaeogenetics study uncovered a migration into southern Britain in the Bronze Age, during the 500-year period 1,300–800 BC. The newcomers were genetically most similar to ancient individuals from Gaul. During 1,000–875 BC, their genetic marker swiftly spread through southern Britain, but not northern Britain. The authors describe this as a "plausible vector for the spread of early Celtic languages into Britain". There was much less inward migration during the Iron Age, so it is likely that Celtic reached Britain before then. Barry Cunliffe suggests that a branch of Celtic was already being spoken in Britain, and that the Bronze Age migration introduced the Brittonic branch.

Brittonic languages were probably spoken before the Roman invasion throughout most of Great Britain, though the Isle of Man later had a Goidelic language, Manx. During the period of the Roman occupation of what is now England and Wales (AD 43 to c. 410), Common Brittonic borrowed a large stock of Latin words, both for concepts unfamiliar in the pre-urban society of Celtic Britain, such as urbanization and new tactics of warfare, as well as for rather more mundane words which displaced native terms (most notably, the word for "fish" in all the Brittonic languages derives from the Latin piscis rather than the native *ēskos, which may survive, however, in the Welsh name of the River Usk). Approximately 800 of these Latin loan-words have survived in the three modern Brittonic languages. Pictish may have resisted Latin influence to a greater extent than the other Brittonic languages.

It is probable that at the start of the Post-Roman period Common Brittonic was differentiated into at least two major dialect groups – Southwestern and Western (we may also posit additional dialects, such as Eastern Brittonic, spoken in what is now the East of England, which have left little or no evidence). Between the end of the Roman occupation and the mid 6th century the two dialects began to diverge into recognizably separate varieties, the Western into Cumbric and Welsh and the Southwestern into Cornish and its closely related sister language Breton, which was carried to continental Armorica. Jackson showed that a few of the dialect distinctions between West and Southwest Brittonic go back a long way. New divergences began around AD 500, but other changes that were shared occurred in the 6th century. Other common changes occurred in the 7th century onward and are possibly due to inherent tendencies. Thus the concept of a Common Brittonic language ends by AD 600. Substantial numbers of Britons certainly remained in the expanding area controlled by Anglo-Saxons, but over the fifth and sixth centuries they mostly adopted the English language.

Decline

The Brittonic languages spoken in what is now Scotland, the Isle of Man and what is now England began to be displaced in the 5th century through the settlement of Irish-speaking Gaels and Germanic peoples. Henry of Huntingdon wrote that Pictish was "no longer spoken" in c. 1129. The displacement of the languages of Brittonic descent was probably complete in all of Britain except Cornwall and Wales and the English counties bordering these areas, such as Devon, by the 11th century. Western Herefordshire continued to speak Welsh until the late nineteenth century, and isolated pockets of Shropshire speak Welsh today.

Sound changes

The regular consonantal sound changes from Proto-Celtic to Welsh, Cornish, and Breton are summarised in the following table. Where the graphemes have a different value from the corresponding IPA symbols, the IPA equivalent is indicated between slashes. V represents a vowel; C represents a consonant.

Remnants in England, Scotland and Ireland

Place names and river names

The principal legacy left behind in those territories from which the Brittonic languages were displaced is that of toponyms (place names) and hydronyms (river names). There are many Brittonic place names in lowland Scotland and in the parts of England where it is agreed that substantial Brittonic speakers remained (Brittonic names, apart from those of the former Romano-British towns, are scarce over most of England). Names derived (sometimes indirectly) from Brittonic include London, Penicuik, Perth, Aberdeen, York, Dorchester, Dover and Colchester. Brittonic elements found in England include bre- and bal- for hills, while some such as combe or coomb(e) for a small deep valley and tor for a hill are examples of Brittonic words that were borrowed into English.
Others reflect the presence of Britons, such as Dumbarton – from the Scottish Gaelic Dùn Breatainn meaning "Fort of the Britons" – or Walton, meaning a tun or settlement where the Wealh "Britons" still lived. The number of Celtic river names in England generally increases from east to west; a map showing these is given by Jackson. These names include ones such as Avon, Chew, Frome, Axe, Brue and Exe, but also river names containing the elements "der-/dar-/dur-" and "-went", e.g. "Derwent, Darwen, Deer, Adur, Dour, Darent, Went". These names exhibit multiple different Celtic roots. One is *dubri- "water" [Bret. "dour", C. "dowr", W. "dŵr"], also found in the place-name "Dover" (attested in the Roman period as "Dubrīs"); this is the source of rivers named "Dour". Another is *deru̯o- "oak" or "true" [Bret. "derv", C. "derow", W. "derw"], coupled with two agent suffixes, *-ent- and *-iū; this is the origin of "Derwent", "Darent" and "Darwen" (attested in the Roman period as "Deru̯entiō"). The final root to be examined is "went". In Roman Britain, there were three tribal capitals named "U̯entā" (modern Winchester, Caerwent and Caistor St Edmunds), whose meaning was 'place, town'.

Brittonicisms in English

Some, including J. R. R. Tolkien, have argued that Celtic has acted as a substrate to English for both the lexicon and syntax. It is generally accepted that Brittonic effects on English are lexically few aside from toponyms, consisting of a small number of domestic and geographical words, which may include bin, brock, carr, comb, crag and tor. Another legacy may be the sheep-counting system Yan Tan Tethera in the north, in the traditionally Celtic areas of England such as Cumbria. Several Cornish mining words are still in use in English-language mining terminology, such as costean, gunnies, and vug. Those who argue against the theory of a more significant Brittonic influence than is widely accepted point out that many toponyms have no semantic continuation from the Brittonic language. A notable example is Avon, which comes from the Celtic term for river, abona, or the Welsh term for river, afon, but was used by the English as a personal name. Likewise the River Ouse, Yorkshire contains the word usa which merely means
band used a series of vocalists before dissolving in 1995. Steve Bronski revived the band in 2016, recording new material with 1990s member Ian Donaldson. Steinbachek died later that year; Bronski died in 2021.

History

1983–1985: Early years and The Age of Consent

Bronski Beat formed in 1983 when Jimmy Somerville, Steve Bronski (both from Glasgow) and Larry Steinbachek (from Southend, Essex) shared a three-bedroom flat at Lancaster House in Brixton, London. Steinbachek had heard Somerville singing during the making of Framed Youth: The Revenge of the Teenage Perverts and suggested they make some music. They first performed publicly at an arts festival, September in the Pink. The trio were unhappy with the inoffensive nature of contemporary gay performers and sought to be more outspoken and political.

Bronski Beat signed a recording contract with London Records in 1984 after doing only nine live gigs. The band's debut single, "Smalltown Boy", about a gay teenager leaving his family and fleeing his home town, was a hit, peaking at No. 3 in the UK Singles Chart and topping charts in Belgium and the Netherlands. The single was accompanied by a promotional video directed by Bernard Rose, showing Somerville trying to befriend an attractive diver at a swimming pool, then being attacked by the diver's homophobic associates, being returned to his family by the police and having to leave home. (The police officer was played by Colin Bell, then the marketing manager of London Records.) "Smalltown Boy" reached No. 48 in the U.S. chart and peaked at No. 8 in Australia.

The follow-up single, "Why?", adopted a Hi-NRG sound and was more lyrically focused on anti-gay prejudice. It also achieved Top 10 status in the UK, reaching No. 6, and was another Top 10 hit for the band in Australia, Switzerland, Germany, France and the Netherlands. At the end of 1984, the trio released an album titled The Age of Consent. The inner sleeve listed the varying ages of consent for consensual gay sex in different nations around the world. At the time, the age of consent for sexual acts between men in the UK was 21, compared with 16 for heterosexual acts, with several other countries having more liberal laws on gay sex. The album peaked at No. 4 in the UK Albums Chart, No. 36 in the U.S., and No. 12 in Australia.

Around the same time, the band headlined "Pits and Perverts", a concert at the Electric Ballroom in London to raise funds for the Lesbians and Gays Support the Miners campaign. This event is featured in the film Pride. The third single, released before Christmas 1984, was a revival of "It Ain't Necessarily So", the George and Ira Gershwin classic (from Porgy and Bess). The song questions the accuracy of biblical tales. It also reached the UK Top 20.

In 1985, the trio joined up with Marc Almond to record a version of Donna Summer's "I Feel Love". The full version was actually a medley that also incorporated snippets of Summer's "Love to Love You Baby" and John Leyton's "Johnny Remember Me". It was a big success, reaching No. 3 in the UK and equalling the chart achievement of "Smalltown Boy". Although the original had been one of Marc Almond's all-time favourite songs, he had never read the lyrics and thus incorrectly sang "What'll it be, what'll it be, you and me" instead of "Falling free, falling free, falling free" on the finished record.
The band and their producer Mike Thorne had gone back into the studio in early 1985 to record a new single, "Run From Love", and PolyGram (London Records' parent company at that time) had pressed a number of promo singles and 12" versions of the song and sent them to radio and record stores in the UK. However, the single was shelved as tensions in the band, both personal and political, resulted in Somerville leaving Bronski Beat in the summer of that year. "Run From Love" was subsequently released in remixed form on the Bronski Beat album Hundreds & Thousands, a collection of mostly remixes (LP) and B-sides (as bonus tracks on the CD version), as well as the hit "I Feel Love". Somerville went on to form The Communards with Richard Coles while the remaining members of Bronski Beat searched for a new vocalist.

1985–1995: Post-Jimmy Somerville phase

Bronski Beat recruited John Foster as Somerville's replacement (Foster is credited as "Jon Jon"). A single, "Hit That Perfect Beat", was released in November 1985, reaching No. 3 in the UK. It repeated this success on the Australian chart and was also featured in the film Letter to Brezhnev. A second single, "C'mon C'mon", also charted in the UK Top 20, and an album, Truthdare Doubledare, released in May 1986, peaked at No. 18. The film Parting Glances (1986) included the Bronski Beat songs "Love and Money", "Smalltown Boy" and "Why?".

During this period, the band teamed up with producer Mark Cunningham on the first-ever BBC Children in Need single, a cover of David Bowie's "Heroes", released in 1986 under the name of The County Line. Foster left the band in 1987.

Following Foster's departure, Bronski Beat began work on their next album, Out and About. The tracks were recorded at Berry Street studios in London with engineer Brian Pugsley. Some of the song titles were "The Final Spin" and "Peace and Love". The latter track featured Strawberry Switchblade vocalist Rose McDowall and appeared on several internet sites in 2006. One of the other songs from the project, called "European Boy", was recorded in 1987 by the disco group Splash. The lead singer of Splash was former Tight Fit singer Steve Grant. Steinbachek and Bronski toured extensively with the new material to positive
reviews; however, the project was abandoned as the group was dropped by London Records. Also in 1987, Bronski Beat and Somerville performed at a reunion concert for "International AIDS Day", supported by New Order, at the Brixton Academy, London. In 1989, Jonathan Hellyer became lead singer, and the band extensively toured the U.S. and Europe with back-up vocalist Annie Conway.
They achieved one minor hit with the song "Cha Cha Heels", a one-off collaboration sung by American actress and singer Eartha Kitt, which peaked at No. 32 in the UK. The song was originally written for movie and recording star Divine, who was unable to record the song before his death in 1988. 1990–91 saw Bronski Beat release three further singles on the Zomba record label: "I'm Gonna Run Away", "One More Chance" and "What More Can I Say". The singles were produced by Mike Thorne.

Foster and Bronski Beat teamed up again in 1994 and released a techno version, "Tell Me Why '94", and an acoustic version, "Smalltown Boy '94", on the German record label ZYX Music. The album Rainbow Nation was released the following year with Hellyer returning as lead vocalist, as Foster had dropped out of the project; Ian Donaldson was brought on board to do keyboards and programming. After a few years of touring, Bronski Beat then dissolved, with Steve Bronski going on to become a producer for other artists and Ian Donaldson becoming a successful DJ (Sordid Soundz). Larry Steinbachek became the musical director for Michael Laub's theatre company, Remote Control Productions.

2007–2016: Steve Bronski solo activities, new version of Bronski Beat

In 2007, Steve Bronski remixed the song "Stranger to None" by the UK alternative rock band All Living Fear. Four different mixes were made, with one appearing on their retrospective album, Fifteen
Barrel (wine), for fermenting or ageing wine
Barrel (fastener), a simple hinge consisting of a barrel and a pivot
Gun barrel
the venturi of a carburetor
a component of a clarinet
a component of a snorkel
a
Harry Turtledove's books; see Victoria: An Empire Under the Sun
the outside of a low voltage DC connector
"The Barrel", a song by Aldous Harding from her 2019 album Designer
to truncate the last three digits and append K, essentially using K as a decimal prefix similar to SI, but always truncating to the next lower whole number instead of rounding to the nearest. The exact values 32,768 words, 65,536 words and 131,072 words would then be described as "32K", "65K" and "131K". (If these values had been rounded to nearest they would have become 33K, 66K, and 131K, respectively.) This style was used from about 1965 to 1975.

These two styles (K = 1024 and truncation) were used loosely around the same time, sometimes by the same company. In discussions of binary-addressed memories, the exact size was evident from context. (For memory sizes of "41K" and below, there is no difference between the two styles.) The HP 21MX real-time computer (1974) denoted 196,608 (which is 192×1024) as "196K" and 1,048,576 as "1M", while the HP 3000 business computer (1973) could have "64K", "96K", or "128K" bytes of memory.

The "truncation" method gradually waned. Capitalization of the letter K became the de facto standard for binary notation, although this could not be extended to higher powers, and use of the lowercase k did persist. Nevertheless, the practice of using the SI-inspired "kilo" to indicate 1024 was later extended to "megabyte" meaning 1024² (1,048,576) bytes, and later "gigabyte" for 1024³ (1,073,741,824) bytes. For example, a "512 megabyte" RAM module is 512×1024² bytes (512 × 1,048,576, or 536,870,912, bytes), rather than 512,000,000 bytes.

The symbols Kbit, Kbyte, Mbit and Mbyte started to be used as "binary units"—"bit" or "byte" with a multiplier that is a power of 1024—in the early 1970s. For a time, memory capacities were often expressed in K, even when M could have been used: the IBM System/370 Model 158 brochure (1972) had the following: "Real storage capacity is available in 512K increments ranging from 512K to 2,048K bytes." Megabyte was used to describe the 22-bit addressing of the DEC PDP-11/70 (1975) and gigabyte the 30-bit addressing of the DEC VAX-11/780 (1977).

In 1998, the International Electrotechnical Commission (IEC) introduced the binary prefixes kibi, mebi, gibi ... to mean 1024, 1024², 1024³ etc., so that 1,048,576 bytes could be referred to unambiguously as 1 mebibyte. The IEC prefixes were defined for use alongside the International System of Quantities (ISQ) in 2009.

Disk drives

The disk drive industry has followed a different pattern. Disk drive capacity is generally specified with unit prefixes with decimal meaning, in accordance with SI practices. Unlike computer main memory, disk architecture or construction does not mandate or make it convenient to use binary multiples. Drives can have any practical number of platters or surfaces, and the count of tracks, as well as the count of sectors per track, may vary greatly between designs.

The first commercially sold disk drive, the IBM 350, had fifty physical disk platters containing a total of 50,000 sectors of 100 characters each, for a total quoted capacity of 5 million characters. It was introduced in September 1956.

In the 1960s most disk drives used IBM's variable block length format, called Count Key Data (CKD). Any block size could be specified up to the maximum track length. Since the block headers occupied space, the usable capacity of the drive was dependent on the block size. Blocks ("records" in IBM's terminology) of 88, 96, 880 and 960 were often used because they related to the fixed block size of 80- and 96-character punch cards. The drive capacity was usually stated under conditions of full track record blocking.
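The contrast between the binary-K and truncation styles comes down to a choice of divisor. A small Python sketch of my own (the function names are arbitrary):

def k_binary(n):
    """K = 1024 style: 65536 -> '64K'."""
    return f"{n // 1024}K"

def k_truncated(n):
    """Truncation style (drop the last three digits): 65536 -> '65K'."""
    return f"{n // 1000}K"

for n in (32768, 65536, 131072, 196608):
    print(n, k_binary(n), k_truncated(n))
# 32768  -> 32K  32K   (the two styles agree for "41K" and below)
# 65536  -> 64K  65K
# 131072 -> 128K 131K
# 196608 -> 192K 196K  (the HP 21MX's "196K" is the truncated form)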
For example, the 100-megabyte 3336 disk pack only achieved that capacity with a full track block size of 13,030 bytes.

Floppy disks for the IBM PC and compatibles quickly standardized on 512-byte sectors, so two sectors were easily referred to as "1K". The 3.5-inch "360 KB" and "720 KB" formats had 720 (single-sided) and 1440 sectors (double-sided) respectively. When the High Density "1.44 MB" floppies came along, with 2880 of these 512-byte sectors, that terminology represented a hybrid binary-decimal definition of "1 MB" = 1024 × 1000 = 1,024,000 bytes.

In contrast, hard disk drive manufacturers used megabytes or MB, meaning 1,000,000 bytes, to characterize their products as early as 1974. By 1977, in its first edition, Disk/Trend, a leading hard disk drive industry marketing consultancy, segmented the industry according to MBs (decimal sense) of capacity. One of the earliest hard disk drives in personal computing history, the Seagate ST-412, was specified as "Formatted: 10.0 Megabytes". The drive has four heads, and hence four active surfaces (tracks per cylinder), and 306 cylinders. When formatted with a sector size of 256 bytes and 32 sectors per track, it has a capacity of 10,027,008 bytes. This drive was one of several types installed in the IBM PC/XT and extensively advertised and reported as a "10 MB" (formatted) hard disk drive. The cylinder count of 306 is not close to any power of two, so the capacity is not a round binary number either; operating systems and programs using the customary binary prefixes show this as 9.5625 MB. Many later drives in the personal computer market used 17 sectors per track; still later, zone bit recording was introduced, causing the number of sectors per track to vary from the outer track to the inner.

The hard drive industry continues to use decimal prefixes for drive capacity, as well as for transfer rate. For example, a "300 GB" hard drive offers slightly more than 300 × 10⁹, or 300,000,000,000, bytes, not 300 × 2³⁰ (which would be about 322,122,547,200 bytes). Operating systems such as Microsoft Windows that display hard drive sizes using the customary binary prefix "GB" (as it is used for RAM) would display this as "279.4 GB" (meaning 279.4 × 2³⁰ bytes, or about 300,000,000,000 bytes). On the other hand, macOS has since version 10.6 shown hard drive size using decimal prefixes (thus matching the drive makers' packaging). (Previous versions of Mac OS X used binary prefixes.)

Disk drive manufacturers sometimes use both IEC and SI prefixes with their standardized meanings. Seagate has specified data transfer rates in select manuals of some hard drives with both units, with the conversion between units clearly shown and the numeric values adjusted accordingly. "Advanced Format" drives use the term "4K sectors", defined as having a size of 4096 ("4K") bytes.

Information transfer and clock rates

Computer clock frequencies are always quoted using SI prefixes in their decimal sense. For example, the internal clock frequency of the original IBM PC was 4.77 MHz, that is, about 4,770,000 cycles per second. Similarly, digital information transfer rates are quoted using decimal prefixes:

The ATA-100 disk interface refers to 100,000,000 bytes per second
A "56K" modem refers to 56,000 bits per second
SATA-2 has a raw bit rate of 3 Gbit/s = 3,000,000,000 bits per second
PC2-6400 RAM transfers 6,400,000,000 bytes per second
Firewire 800 has a raw rate of 800,000,000 bits per second

In 2011, Seagate specified the sustained transfer rate of some hard disk drive models with both decimal and IEC binary prefixes.
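The ST-412 arithmetic quoted above is easy to reproduce; a quick sketch of mine using the cited geometry:

# Seagate ST-412 geometry as cited: 4 heads, 306 cylinders,
# 32 sectors per track, 256 bytes per sector.
heads, cylinders, sectors_per_track, bytes_per_sector = 4, 306, 32, 256
capacity = heads * cylinders * sectors_per_track * bytes_per_sector
print(capacity)            # 10027008 bytes, marketed as "10 MB"
print(capacity / 10**6)    # ~10.03 megabytes in the decimal sense
print(capacity / 2**20)    # 9.5625 "MB" in the customary binary sense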
Standardization of dual definitions
By the mid-1970s it was common to see K meaning 1024 and the occasional M meaning 1,048,576 for words or bytes of main memory (RAM), while K and M were commonly used with their decimal meaning for disk storage. In the 1980s, as capacities of both types of devices increased, the SI prefix G, with SI meaning, was commonly applied to disk storage, while M, in its binary meaning, became common for computer memory. In the 1990s, the prefix G, in its binary meaning, became commonly used for computer memory capacity. The first terabyte (SI prefix, 1,000,000,000,000 bytes) hard disk drive was introduced in 2007.

The dual usage of the kilo (K), mega (M), and giga (G) prefixes as both powers of 1000 and powers of 1024 has been recorded in standards and dictionaries. For example, ANSI/IEEE Std 1084-1986 defined dual uses for kilo and mega. The binary units Kbyte and Mbyte were formally defined in ANSI/IEEE Std 1212-1991.

Many dictionaries have noted the practice of using customary prefixes to indicate binary multiples. The Oxford online dictionary defines, for example, megabyte as: "Computing: a unit of information equal to one million or (strictly) 1,048,576 bytes." The units Kbyte, Mbyte, and Gbyte are found in the trade press and in IEEE journals. Gigabyte was formally defined in IEEE Std 610.10-1994 as either 1,000,000,000 or 2^30 bytes. Kilobyte, Kbyte, and KB are equivalent units and all are defined in the obsolete standard, IEEE 100–2000.

The hardware industry measures system memory (RAM) using the binary meaning while magnetic disk storage uses the SI definition. However, many exceptions exist. Labeling of one type of diskette uses the megabyte to denote 1024×1000 bytes. In the optical disks market, compact discs use MB to mean 1024^2 bytes while DVDs use GB to mean 1000^3 bytes.

Inconsistent use of units

Deviation between powers of 1024 and powers of 1000
Computer storage has become cheaper per unit, and thereby larger, by many orders of magnitude since "K" was first used to mean 1024. Because both the SI and "binary" meanings of kilo, mega, etc., are based on powers of 1000 or 1024 rather than simple multiples, the difference between 1M "binary" and 1M "decimal" is proportionally larger than that between 1K "binary" and 1k "decimal," and so on up the scale. The relative difference between the values in the binary and decimal interpretations increases, when using the SI prefixes as the base, from 2.4% for kilo to nearly 21% for the yotta prefix.

Consumer confusion
In the early days of computers (roughly, prior to the advent of personal computers) there was little or no consumer confusion because of the technical sophistication of the buyers and their familiarity with the products. In addition, it was common for computer manufacturers to specify their products with capacities in full precision.

In the personal computing era, one source of consumer confusion is the difference in the way many operating systems display hard drive sizes, compared to the way hard drive manufacturers describe them. Hard drives are specified and sold using "GB" and "TB" in their decimal meaning: one billion and one trillion bytes. Many operating systems and other software, however, display hard drive and file sizes using "MB", "GB" or other SI-looking prefixes in their binary sense, just as they do for displays of RAM capacity. For example, many such systems display a hard drive marketed as "1 TB" as "931 GB".
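The growing deviation described above can be checked in a few lines of Python; the percentages follow directly from the ratio 1024^n/1000^n and are a sketch for illustration, not part of any cited standard:

prefixes = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]
for n, name in enumerate(prefixes, start=1):
    # Relative deviation of the binary value (1024^n) above the decimal value (1000^n)
    deviation = (1024**n / 1000**n - 1) * 100
    print(f"{name}: {deviation:.1f}%")   # 2.4% for kilo ... 20.9% for yotta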
The earliest known presentation of hard disk drive capacity by an operating system using "KB" or "MB" in a binary sense is from 1984; earlier operating systems generally presented the hard disk drive capacity as an exact number of bytes, with no prefix of any sort, for example, in the output of the MS-DOS or PC DOS CHKDSK command.

Legal disputes
The different interpretations of disk size prefixes have led to class action lawsuits against digital storage manufacturers. These

"International vocabulary of metrology – Basic and general concepts and associated terms (VIM), 3rd edition" lists the IEC binary prefixes and states "SI prefixes refer strictly to powers of 10, and should not be used for powers of 2. For example, 1 kilobit should not be used to represent 1024 bits (2^10 bits), which is 1 kibibit."

Other standards bodies and organizations
The IEC standard binary prefixes are now supported by other standardization bodies and technical organizations. The United States National Institute of Standards and Technology (NIST) supports the ISO/IEC standards for "Prefixes for binary multiples" and has a web site documenting them, describing and justifying their use. NIST suggests that in English, the first syllable of the name of the binary-multiple prefix should be pronounced in the same way as the first syllable of the name of the corresponding SI prefix, and that the second syllable should be pronounced as bee. NIST has stated the SI prefixes "refer strictly to powers of 10" and that the binary definitions "should not be used" for them.

The microelectronics industry standards body JEDEC describes the IEC prefixes in its online dictionary with a note: "The definitions of kilo, giga, and mega based on powers of two are included only to reflect common usage." The JEDEC standards for semiconductor memory use the customary prefix symbols K, M and G in the binary sense.

On 19 March 2005, the IEEE standard IEEE 1541-2002 ("Prefixes for Binary Multiples") was elevated to a full-use standard by the IEEE Standards Association after a two-year trial period. However, the IEEE Publications division does not require the use of IEC prefixes in its major magazines such as Spectrum or Computer.

The International Bureau of Weights and Measures (BIPM), which maintains the International System of Units (SI), expressly prohibits the use of SI prefixes to denote binary multiples, and recommends the use of the IEC prefixes as an alternative since units of information are not included in SI. The Society of Automotive Engineers (SAE) prohibits the use of SI prefixes with anything but a power-of-1000 meaning, but does not recommend or otherwise cite the IEC binary prefixes.

The European Committee for Electrotechnical Standardization (CENELEC) adopted the IEC-recommended binary prefixes via the harmonization document HD 60027-2:2003-03. The European Union (EU) has required the use of the IEC binary prefixes since 2007.

Current practice
Most computer hardware uses SI prefixes to state capacity and define other performance parameters such as data rate. Main and cache memories are notable exceptions. Capacities of main memory and cache memory are usually expressed with customary binary prefixes. On the other hand, flash memory, like that found in solid state drives, mostly uses SI prefixes to state capacity. Some operating systems and other software continue to use the customary binary prefixes in displays of memory, disk storage capacity, and file size, but SI prefixes in other areas such as network communication speeds and processor speeds. In the following subsections, unless otherwise noted, examples are first given using the common prefixes used in each case, and then followed by interpretation using other notation where appropriate.

Operating systems
Prior to the release of Macintosh System Software (1984), file sizes were typically reported by the operating system without any prefixes. Today, most operating systems report file sizes with prefixes.
The Linux kernel uses standards-compliant decimal and binary prefixes when booting up. However, many Unix-like system utilities, such as the ls command, use powers of 1024 indicated as K/M (customary binary prefixes) if called with the "-h" option. They give the exact value in bytes otherwise. The GNU versions will also use powers of 10 indicated with k/M if called with the "--si" option. The Ubuntu Linux distribution uses the IEC prefixes for base-2 numbers as of the 10.10 release.

Microsoft Windows reports file sizes and disk device capacities using the customary binary prefixes or, in a "Properties" dialog, using the exact value in bytes.

iOS 10 and earlier, Mac OS X Leopard and earlier, and watchOS use the binary system (1 GB = 1,073,741,824 bytes). Apple product specifications, iOS and macOS (since Mac OS X Snow Leopard, version 10.6) now report sizes using SI decimal prefixes (1 GB = 1,000,000,000 bytes).

Software
Most software does not distinguish symbols for binary and decimal prefixes. The IEC binary naming convention has been adopted by a few, but this is not used universally. One of the stated goals of the introduction of the IEC prefixes was "to preserve the SI prefixes as unambiguous decimal multipliers." Programs such as fdisk/cfdisk, parted, and apt-get use SI prefixes with their decimal meaning.

Example of the use of IEC binary prefixes in the Linux operating system displaying traffic volume on a network interface in kibibytes (KiB) and mebibytes (MiB), as obtained with the ifconfig utility:

eth0  Link encap:Ethernet  [...]
      RX packets:254804 errors:0 dropped:0 overruns:0 frame:0
      TX packets:756 errors:0 dropped:0 overruns:0 carrier:0
      [...]
      RX bytes:18613795 (17.7 MiB)  TX bytes:45708 (44.6 KiB)

Software that uses IEC binary prefixes for powers of 1024 and standard SI prefixes for powers of 1000 includes:
GNU Core Utilities
GParted
FreeDOS-32
ifconfig
GNOME Network
SLIB
Cygwin/X
HTTrack
Pidgin (IM client)
Deluge
yafc
tnftp
WinSCP
MediaInfo

Software that uses standard SI prefixes for powers of 1000, but not IEC binary prefixes for powers of 1024, includes:
Mac OS X v10.6 and later for hard drive and file sizes

Software that supports decimal prefixes for powers of 1000 and binary prefixes for powers of 1024 (but does not follow SI or IEC nomenclature for this) includes:
4DOS (uses lowercase letters as decimal and uppercase letters as binary prefixes)

Computer hardware
Hardware types that use powers-of-1024 multipliers, such as memory, continue to be marketed with customary binary prefixes.

Computer memory
Measurements of most types of electronic memory such as RAM and ROM are given using customary binary prefixes (kilo, mega, and giga). This includes some flash memory, like EEPROMs. For example, a "512-megabyte" memory module is 536,870,912 bytes (512 × 1,048,576, or 2^29), not 512,000,000 bytes. JEDEC Solid State Technology Association, the semiconductor engineering standardization body of the Electronic Industries Alliance (EIA), continues to include the customary binary definitions of kilo, mega and giga in their Terms, Definitions, and Letter Symbols document, and uses those definitions in later memory standards (see also JEDEC memory standards).

Many computer programming tasks reference memory in terms of powers of two because of the inherent binary design of current hardware addressing systems. For example, a 16-bit processor register can reference at most 65,536 items (bytes, words, or other objects); this is conveniently expressed as "64K" items.
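The arithmetic behind that convention is a one-liner; the following Python sketch is purely illustrative:

ADDRESS_BITS = 16
items = 2**ADDRESS_BITS      # a 16-bit register addresses 65,536 items
print(items)                 # 65536
print(items // 1024, "K")    # 64 K, using K in the customary binary sense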
An operating system might map memory as 4096-byte pages, in which case exactly 8192 pages could be allocated within 33,554,432 bytes of memory: "8K" (8192) pages of "4 kilobytes" (4096 bytes) each within "32 megabytes" (32 MiB) of memory.

Hard disk drives
All hard disk drive manufacturers state capacity using SI prefixes.

Flash drives
USB flash drives, flash-based memory cards like CompactFlash or Secure Digital, and flash-based solid-state drives (SSDs) use SI prefixes; for example, a "256 MB" flash card provides at least 256 million bytes (256,000,000), not 256×1024×1024 (268,435,456). The flash memory chips inside these devices contain considerably more than the quoted capacities, but much like a traditional hard drive, some space is reserved for internal functions of the flash drive. These include wear leveling, error correction, sparing, and metadata needed by the device's internal firmware.

Floppy drives
Floppy disks have existed in numerous physical and logical formats, and have been sized inconsistently. In part, this is because the end user capacity of a particular disk is a function of the controller hardware, so that the same disk could be formatted to a variety of capacities. In many cases, the media are marketed without any indication of the end user capacity, as for example, DSDD, meaning double-sided double-density.

The last widely adopted diskette was the 3½-inch high density. This has a formatted capacity of 1,474,560 bytes or 1440 KB (1440 × 1024, using "KB" in the customary binary sense). These are marketed as "HD", or "1.44 MB", or both. This usage creates a third definition of "megabyte" as 1000×1024 bytes. Most operating systems display the capacity using "MB" in the customary binary sense, resulting in a display of "1.4 MB" (1.40625 MiB). Some users have noticed the missing 0.04 MB and both Apple and Microsoft have support bulletins referring to them as 1.4 MB.

The earlier "1200 KB" (1200×1024 bytes) 5¼-inch diskette sold with the IBM PC AT was marketed as "1.2 MB" (1,228,800 bytes). The largest 8-inch diskette formats could contain more than a megabyte, and the capacities of those devices were often irregularly specified in megabytes, also without controversy. Older and smaller diskette formats were usually identified as an accurate number of (binary) KB, for example the Apple Disk II described as "140KB" had a 140×1024-byte capacity, and the original "360KB" double-sided, double-density disk drive used on the IBM PC had a 360×1024-byte capacity.

In many cases diskette hardware was marketed based on unformatted capacity, and the overhead required to format sectors on the media would reduce the nominal capacity as well (and this overhead typically varied based on the size of the formatted sectors), leading to more irregularities.

Optical discs
The capacities of most optical disc storage media like DVD, Blu-ray Disc, HD DVD and magneto-optical (MO) are given using SI decimal prefixes. A "4.7 GB" DVD has a nominal capacity of about 4.38 GiB. However, CD capacities are always given using customary binary prefixes. Thus a "700-MB" (or "80-minute") CD has a nominal capacity of about 700 MiB (approximately 730 MB).

Tape drives and media
Tape drive and media manufacturers use SI decimal prefixes to identify capacity.

Data transmission and clock rates
Certain units are always used with SI decimal prefixes even in computing contexts. Two examples are hertz (Hz), which is used to measure the clock rates of electronic components, and bit/s and B/s, which are used to measure data transmission speed.
A 1 GHz processor receives 1,000,000,000 clock ticks per second
A sound file sampled at 44.1 kHz has 44,100 samples per second
A 128 kbit/s MP3 stream consumes 128,000 bits (16 kilobytes, 16,000 bytes) per second
An Internet connection can transfer bits per second ( bytes per second ≈ , assuming an 8-bit byte and no overhead)
An Ethernet connection can transfer at nominal speed of
118 members of the Hall of Fame have been inducted posthumously, including four who died after their selection was announced. Of the 39 Negro league members, 31 were inducted posthumously, including all 26 selected since the 1990s. The Hall of Fame includes one female member, Effa Manley. The newest members, inducted on July 24, 2022, are players David Ortiz, Tony Oliva, Jim Kaat, Gil Hodges and Minnie Minoso, as well as executives Buck O'Neil and Bud Fowler. Former Yankees closer Mariano Rivera was the first player ever to be elected unanimously. Derek Jeter, Marvin Miller, Ted Simmons, and Larry Walker were all originally scheduled to be inducted in 2020, but their induction ceremony was re-scheduled and subsequently held on July 24, 2021, due to the COVID-19 pandemic. On June 21, 2021, it was announced that the induction ceremony for the 2020 class would be open to the public as previous years had been, due to the lifting of various COVID-19 restrictions.

Selection process
Players are currently inducted into the Hall of Fame through election by either the Baseball Writers' Association of America (or BBWAA), or the Veterans Committee, which now consists of four subcommittees, each of which considers and votes for candidates from a separate era of baseball. Five years after retirement, any player with 10 years of major league experience who passes a screening committee (which removes from consideration players of clearly lesser qualification) is eligible to be elected by BBWAA members with 10 years' membership or more who also have been actively covering MLB at any time in the 10 years preceding the election (the latter requirement was added for the 2016 election). From a final ballot typically including 25–40 candidates, each writer may vote for up to 10 players; until the late 1950s, voters were advised to cast votes for the maximum 10 candidates. Any player named on 75% or more of all ballots cast is elected. A player who is named on fewer than 5% of ballots is dropped from future elections. In some instances, the screening committee had restored such players' names to later ballots, but in the mid-1990s, dropped players were made permanently ineligible for Hall of Fame consideration, even by the Veterans Committee. A 2001 change in the election procedures restored the eligibility of these dropped players; while their names will not appear on future BBWAA ballots, they may be considered by the Veterans Committee. Players receiving 5% or more of the votes but fewer than 75% are reconsidered annually until a maximum of ten years of eligibility (lowered from fifteen years for the 2015 election).

Under special circumstances, certain players may be deemed eligible for induction even though they have not met all requirements. Addie Joss was elected in 1978, despite only playing nine seasons before he died of meningitis. Additionally, if an otherwise eligible player dies before his fifth year of retirement, then that player may be placed on the ballot at the first election at least six months after his death. Roberto Clemente's induction in 1973 set the precedent when the writers chose to put him up for consideration after his death on New Year's Eve, 1972.

The five-year waiting period was established in 1954 after an evolutionary process. In 1936 all players were eligible, including active ones. From the 1937 election until the 1945 election, there was no waiting period, so any retired player was eligible, but writers were discouraged from voting for current major leaguers.
Since there was no formal rule preventing a writer from casting a ballot for an active player, the scribes did not always comply with the informal guideline; Joe DiMaggio received a vote in 1945, for example. From the 1946 election until the 1954 election, an official one-year waiting period was in effect. (DiMaggio, for example, retired after the 1951 season and was first eligible in the 1953 election.) The modern rule establishing a wait of five years was passed in 1954, although those who had already been eligible under the old rule were grandfathered into the ballot, thus permitting Joe DiMaggio to be elected within four years of his retirement. Contrary to popular belief, no formal exception was made for Lou Gehrig (other than to hold a special one-man election for him): there was no waiting period at that time, and Gehrig met all other qualifications, so he would have been eligible for the next regular election after he retired during the 1939 season. However, the BBWAA decided to hold a special election at the 1939 Winter Meetings in Cincinnati, specifically to elect Gehrig (most likely because it was known that he was terminally ill, making it uncertain that he would live long enough to see another election). Nobody else was on that ballot, and the numerical results have never been made public. Since no elections were held in 1940 or 1941, the special election permitted Gehrig to enter the Hall while still alive. If a player fails to be elected by the BBWAA within 10 years of his retirement from active play, he may be selected by the Veterans Committee. Following changes to the election process for that body made in 2010 and 2016, it is now responsible for electing all otherwise eligible candidates who are not eligible for the BBWAA ballot — both long-retired players and non-playing personnel (managers, umpires, and executives). From 2011 to 2016, each candidate could be considered once every three years; now, the frequency depends on the era in which an individual made his greatest contributions. A more complete discussion of the new process is available below. From 2008 to 2010, following changes made by the Hall in July 2007, the main Veterans Committee, then made up of living Hall of Famers, voted only on players whose careers began in 1943 or later. These changes also established three separate committees to select other figures: One committee voted on managers and umpires for induction in every even-numbered year. This committee voted only twice—in 2007 for induction in 2008 and in 2009 for induction in 2010. One committee voted on executives and builders for induction in every even-numbered year. This committee conducted its only two votes in the same years as the managers/umpires committee. The pre–World War II players committee was intended to vote every five years on players whose careers began in 1942 or earlier. It conducted its only vote as part of the election process for induction in 2009. Players of the Negro leagues have also been considered at various times, beginning in 1971. In 2005, the Hall completed a study on African American players between the late 19th century and the integration of the major leagues in 1947, and conducted a special election for such players in February 2006; seventeen figures from the Negro leagues were chosen in that election, in addition to the eighteen previously selected. 
Following the 2010 changes, Negro leagues figures were primarily considered for induction alongside other figures from the 1871–1946 era, called the "Pre-Integration Era" by the Hall; since 2016, Negro leagues figures are primarily considered alongside other figures from what the Hall calls the "Early Baseball" era (1871–1949).

Predictably, the selection process catalyzes endless debate among baseball fans over the merits of various candidates. Even players elected years ago remain the subjects of discussions as to whether they deserved election. For example, Bill James' 1994 book Whatever Happened to the Hall of Fame? goes into detail about who he believes does and does not belong in the Hall of Fame.

Non-induction of banned players
The selection rules for the Baseball Hall of Fame were modified to prevent the induction of anyone on Baseball's "permanently ineligible" list, such as Pete Rose or "Shoeless Joe" Jackson. Many others have been barred from participation in MLB, but none have Hall of Fame qualifications on the level of Jackson or Rose. Jackson and Rose were both banned from MLB for life for actions related to gambling on their own teams: Jackson was determined to have cooperated with those who conspired to intentionally lose the 1919 World Series and to have accepted payment for losing, while Rose voluntarily accepted a permanent spot on the ineligible list in return for MLB's promise to make no official finding in relation to alleged betting on the Cincinnati Reds when he was their manager in the 1980s. (Baseball's Rule 21, prominently posted in every clubhouse locker room, mandates permanent banishment from MLB for having a gambling interest of any sort on a game in which a player or manager is directly involved.) Rose later admitted that he bet on the Reds in his 2004 autobiography. Baseball fans are deeply split on the issue of whether these two should remain banned or have their punishment revoked. Writer Bill James, though he advocates Rose eventually making it into the Hall of Fame, compared the people who want to put Jackson in the Hall of Fame to "those women who show up at murder trials wanting to marry the cute murderer".

Changes to Veterans Committee process
The actions and composition of the Veterans Committee have been at times controversial, with occasional selections of contemporaries and teammates of the committee members over seemingly more worthy candidates. In 2001, the Veterans Committee was reformed to comprise the living Hall of Fame members and other honorees. The revamped Committee held three elections, in 2003 and 2007 for both players and non-players, and in 2005 for players only. No individual was elected in that time, sparking criticism among some observers who expressed doubt whether the new Veterans Committee would ever elect a player. The Committee members, most of whom were Hall members, were accused of being reluctant to elect new candidates in the hope of heightening the value of their own selection. After no one was selected for the third consecutive election in 2007, Hall of Famer Mike Schmidt noted, "The same thing happens every year. The current members want to preserve the prestige as much as possible, and are unwilling to open the doors." In 2007, the committee and its selection processes were again reorganized; the main committee then included all living members of the Hall, and voted on a reduced number of candidates from among players whose careers began in 1943 or later.
Separate committees, including sportswriters and broadcasters, would select umpires, managers and executives, as well as players from earlier eras. In the first election to be held under the 2007 revisions, two managers and three executives were elected in December 2007 as part of the 2008 election process. The next Veterans Committee elections for players were held in December 2008 as part of the 2009 election process; the main committee did not select a player, while the panel for pre–World War II players elected Joe Gordon in its first and ultimately only vote. The main committee voted as part of the election process for inductions in odd-numbered years, while the pre–World War II panel would vote every five years, and the panel for umpires, managers, and executives voted as part of the election process for inductions in even-numbered years.

Further changes to the Veterans Committee process were announced by the Hall on July 26, 2010, effective with the 2011 election. All individuals eligible for induction but not eligible for BBWAA consideration were considered on a single ballot, grouped by the following eras in which they made their greatest contributions:
Pre-Integration Era (1871–1946)
Golden Era (1947–1972)
Expansion Era (1973 and later)

The Hall used the BBWAA's Historical Overview Committee to formulate the ballots for each era, consisting of 12 candidates for the Expansion Era and 10 for the other eras. The Hall's board of directors selected a committee of 16 voters for each era, made up of Hall of Famers, executives, baseball historians, and media members. Each committee met and voted at the Baseball Winter Meetings once every three years. The Expansion Era committee held its first vote in 2010 for 2011 induction, with longtime general manager Pat Gillick becoming the first individual elected under the new procedure. The Golden Era committee voted in 2011 for the induction class of 2012, with Ron Santo becoming the first player elected under the new procedure. The Pre-Integration Era committee voted in 2012 for the induction class of 2013, electing three figures. Subsequent elections rotated among the three committees in that order through the 2016 election.

In July 2016, the Hall of Fame announced a restructuring of the timeframes to be considered, with a much greater emphasis on modern eras. Four new committees were established:
Today's Game (1988–present)
Modern Baseball (1970–1987)
Golden Days (1950–1969)
Early Baseball (1871–1949)

All committees' ballots now include 10 candidates. At least one committee is scheduled to convene each December as part of the election process for the following calendar year's induction ceremony. Due to the COVID-19 pandemic, the December 2020 meetings of the Early Baseball and Golden Days committees were postponed, apparently pushing back the rotation of all committee meetings by a year.

The eligibility criteria for Era Committee consideration differ between players, managers, and executives.
Players: When a player is no longer eligible on the BBWAA ballot (either 15 years after retirement, that is, the five-year waiting period plus the 10 years of eligibility on the BBWAA ballot, or earlier if he has fallen off the ballot by earning less than five percent of the vote in a given year), he will be considered by the respective committee. The Hall has not yet established a policy on when players who die while active or during the standard 5-year waiting period for BBWAA eligibility will be eligible for committee consideration.
As noted earlier, such players become eligible for the BBWAA ballot 6 months after their deaths.
Managers and umpires: Those who have served at least 10 seasons in that role are eligible 5 years after retirement, unless they are 65 or older, in which case the waiting period is 6 months.
Executives: Executives are eligible 5 years after retirement, or upon reaching age 70. Those who meet the age cutoff are explicitly eligible for consideration regardless of their current position in an organization or their status as active or retired. Before the 2016 changes to the committee system, active executives 65 years or older were eligible for consideration.

Players and managers with multiple teams
While the text on a player's or manager's plaque lists all teams for which the inductee was a member in that specific role, inductees are usually depicted wearing the cap of a specific team, though in a few cases, like umpires, they wear caps without logos. (Executives are not depicted wearing caps.) Additionally, as of 2015, inductee biographies on the Hall's website for all players and managers, and for executives who were associated with specific teams, list a "primary team", which does not necessarily match the cap logo. The Hall selects the logo "based on where that player makes his
most indelible mark." Although the Hall always made the final decision on which logo was shown, until 2001 the Hall deferred to the wishes of players or managers whose careers were linked with multiple teams. Some examples of inductees associated with multiple teams are the following:
Frank Robinson: Robinson chose to have the Baltimore Orioles cap displayed on his plaque, although he had played ten seasons with the Cincinnati Reds and six seasons with Baltimore. Robinson won four pennants and two World Series with the Orioles and one pennant with Cincinnati. His second World Series ring came in the 1970 World Series against the Reds. Robinson also won an MVP award while playing for each team.
Catfish Hunter: Hunter chose not to have any logo on his cap when elected to the Hall of Fame in 1987.
Hunter had success for both teams for which he played – the Kansas City/Oakland Athletics (his first ten seasons) and the New York Yankees (his final five seasons). Furthermore, both during and after his career he maintained good relations with both teams and their respective owners (Charles Finley and George Steinbrenner), and did not wish to slight either team by selecting the other.
Nolan Ryan: Born and raised in Texas, in the Houston area, Ryan entered the Hall in 1999 wearing a Texas Rangers cap on his plaque, although he spent only five seasons with the Rangers and had longer and more successful tenures with the Houston Astros (nine seasons, 1980–88, including his record-setting fifth career no-hitter) and the California Angels (eight seasons, 1972–79, including the first four of his seven career no-hitters). Ryan's only championship was as a member of the New York Mets in 1969. Ryan finished his career with the Rangers, reaching his 5,000th strikeout and 300th win, and throwing the last two of his no-hitters. Ryan later took ownership of the Rangers when they were sold to his Rangers Baseball Express group in 2010. He sold his Rangers interest in 2013 and is now in the Astros' front office. In 2020 Ryan discontinued his executive role with the Astros. The minor-league team in which he has an ownership interest, the Round Rock Express of Round Rock, Texas, outside of Austin, will be the AAA franchise of the Texas Rangers.
Reggie Jackson: Jackson chose to be depicted with a Yankees cap over an Athletics cap. As a member of the Kansas City/Oakland A's, Jackson played ten seasons (1967–75, '87), winning three World Series (1972, 1973, 1974) and the 1973 AL MVP Award. During his five years in New York (1977–81), Jackson won two World Series (1977–78), with his crowning achievement occurring during Game Six of the 1977 World Series, when he hit three home runs on consecutive pitches and earned his nickname "Mr. October".
Carlton Fisk: Fisk went into the Hall with a Boston Red Sox cap on his plaque in 2000 despite having played with the Chicago White Sox longer and posting more significant numbers with the White Sox. Fisk's choice of the Red Sox was likely due to his being a New England native, as well as his famous "Stay fair!" walk-off home run in Game Six of the 1975 World Series, the moment with which he is most associated.
Sparky Anderson: Also in 2000, Anderson entered the Hall with a Cincinnati Reds cap on his plaque despite managing almost twice as many seasons with the Detroit Tigers (17 in Detroit; nine in Cincinnati). He chose the Reds to honor that team's former general manager Bob Howsam, who gave him his first major-league managing job. Anderson won two World Series with the Reds and one with the Tigers.
Dave Winfield: Winfield had spent the most years of his career with the Yankees and had great success there, though he chose to go into the Hall as a member of the San Diego Padres due to his feud with Yankees owner George Steinbrenner.

In all of the above cases, the "primary team" is the team for which the inductee spent the largest portion of his career, except for Ryan, whose primary team is listed as the Angels despite playing one fewer season for that team than for the Astros. In 2001, the Hall of Fame decided to change the policy on cap logo selection, as a result of rumors that some teams were offering compensation, such
even know if P is a strict subset of PSPACE. BPP is contained in the second level of the polynomial hierarchy and therefore it is contained in PH. More precisely, the Sipser–Lautemann theorem states that BPP ⊆ Σ₂ ∩ Π₂. As a result, P = NP leads to P = BPP since PH collapses to P in this case. Thus either P = BPP or P ≠ NP or both.

Adleman's theorem states that membership in any language in BPP can be determined by a family of polynomial-size Boolean circuits, which means BPP is contained in P/poly. Indeed, as a consequence of the proof of this fact, every BPP algorithm operating on inputs of bounded length can be derandomized into a deterministic algorithm using a fixed string of random bits. Finding this string may be expensive, however. Some weak separation results for Monte Carlo time classes have also been proven.

Closure properties
The class BPP is closed under complementation, union and intersection.

Relativization
Relative to oracles, we know that there exist oracles A and B such that P^A = BPP^A and P^B ≠ BPP^B. Moreover, relative to a random oracle with probability 1, P = BPP and BPP is strictly contained in NP and co-NP.

There is even an oracle in which BPP = EXP^NP (and hence P < NP < BPP = EXP = NEXP), which can be iteratively constructed as follows. For a fixed E^NP (relativized) complete problem, the oracle will give correct answers with high probability if queried with the problem instance followed by a random string of length kn (n is instance length; k is an appropriate small constant). Start with n = 1. For every instance of the problem of length n, fix oracle answers (see lemma below) to fix the instance output. Next, provide the instance outputs for queries consisting of the instance followed by a kn-length string, and then treat output for queries of length ≤ (k+1)n as fixed, and proceed with instances of length n+1.

Lemma: Given a problem (specifically, an oracle machine code and time constraint) in relativized E^NP, for every partially constructed oracle and input of length n, the output can be fixed by specifying 2^O(n) oracle answers.
Proof: The machine is simulated, and the oracle answers (that are not already fixed) are fixed step-by-step. There is at most one oracle query per deterministic computation step. For the relativized NP oracle, if possible fix the output to be yes by choosing a computation path and fixing the answers of the base oracle; otherwise no fixing is necessary, and either way there is at most 1 answer of the base oracle per step. Since there are 2^O(n) steps, the lemma follows.

The lemma ensures that (for a large enough k), it is possible to do the construction while leaving enough strings for the relativized E^NP answers. Also, we can ensure that for the relativized E^NP, linear time suffices, even for function problems (if given a function oracle and linear output size) and with exponentially small (with linear exponent) error probability. Also, this construction is effective in that given an arbitrary oracle A we can arrange the oracle B to have P^A ⊆ P^B and EXP^(NP^A) = EXP^(NP^B) = BPP^B. Also, for a ZPP = EXP oracle (and hence ZPP = BPP = EXP < NEXP), one would fix the answers in the relativized E computation to a special nonanswer, thus ensuring that no fake answers are given.

Derandomization
The existence of certain strong pseudorandom number generators is conjectured by most experts of the field. This conjecture implies that randomness does not give additional computational power to polynomial time computation, that is, P = RP = BPP.
Note that ordinary random number generators are not sufficient to show this result; any probabilistic algorithm implemented using a typical random number generator will always produce incorrect results on certain inputs irrespective of the seed (though these inputs might be rare).

László Babai, Lance Fortnow, Noam Nisan, and Avi Wigderson showed that unless EXPTIME collapses to MA, BPP is contained in i.o.-SUBEXP. The class i.o.-SUBEXP, which stands for infinitely often SUBEXP, contains problems which have sub-exponential time
corresponds to the output of the random coin flips that the probabilistic Turing machine would have made. For some applications this definition is preferable since it does not mention probabilistic Turing machines.

In practice, an error probability of 1/3 might not be acceptable; however, the choice of 1/3 in the definition is arbitrary. Modifying the definition to use any constant between 0 and 1/2 (exclusive) in place of 1/3 would not change the resulting set BPP. For example, if one defined the class with the restriction that the algorithm can be wrong with probability at most 1/2^100, this would result in the same class of problems. The error probability does not even have to be constant: the same class of problems is defined by allowing error as high as 1/2 − n^(−c) on the one hand, or requiring error as small as 2^(−n^c) on the other hand, where c is any positive constant, and n is the length of input. This flexibility in the choice of error probability is based on the idea of running an error-prone algorithm many times, and using the majority result of the runs to obtain a more accurate algorithm. The chance that the majority of the runs are wrong drops off exponentially as a consequence of the Chernoff bound.

Problems
All problems in P are obviously also in BPP. However, many problems have been known to be in BPP but not known to be in P. The number of such problems is decreasing, and it is conjectured that P = BPP.

For a long time, one of the most famous problems known to be in BPP but not known to be in P was the problem of determining whether a given number is prime. However, in the 2002 paper PRIMES is in P, Manindra Agrawal and his students Neeraj Kayal and Nitin Saxena found a deterministic polynomial-time algorithm for this problem, thus showing that it is in P.

An important example of a problem in BPP (in fact in co-RP) still not known to be in P is polynomial identity testing, the problem of determining whether a polynomial is identically equal to the zero polynomial, when you have access to the value of the polynomial for any given input, but not to the coefficients. In other words, is there an assignment of values to the variables such that when a nonzero polynomial is evaluated on these values, the result is nonzero? It suffices to choose each variable's value uniformly at random from a finite subset of at least d values to achieve bounded error probability, where d is the total degree of the polynomial.

Related classes
If the access to randomness is removed from the definition of BPP, we get the complexity class P. In the definition of the class, if we replace the ordinary Turing machine with a
A BQP-complete problem

Similar to the notion of NP-completeness and other complete problems, we can define a BQP-complete problem as a problem that is in BQP and to which every problem in BQP reduces in polynomial time. Here is an intuitive BQP-complete problem, which stems directly from the definition of BQP.

APPROX-QCIRCUIT-PROB problem

Given a description of a quantum circuit C acting on n qubits with m gates, where m is a polynomial in n and each gate acts on one or two qubits, and two numbers α, β ∈ [0, 1] with α > β, distinguish between the following two cases:

measuring the first qubit of the state C|0⟩^⊗n yields |1⟩ with probability at least α;
measuring the first qubit of the state C|0⟩^⊗n yields |1⟩ with probability at most β.

Note that the problem does not specify the behavior if an instance is not covered by these two cases.

Claim. Any BQP problem reduces to APPROX-QCIRCUIT-PROB.

Proof. Suppose we have an algorithm A that solves APPROX-QCIRCUIT-PROB, i.e., given a quantum circuit C acting on n qubits and two numbers α > β, it distinguishes between the above two cases. We can solve any problem in BQP with this oracle by setting α = 2/3 and β = 1/3.

For any language L in BQP, there exists a family of quantum circuits {Q_n} such that for all n and every n-qubit input state |x⟩, measuring the first qubit of Q_n|x⟩ yields |1⟩ with probability at least 2/3 if x is in L, and at most 1/3 otherwise. Fix an input x of n qubits and the corresponding quantum circuit Q_n. We can first construct a circuit C_x such that C_x|0⟩^⊗n = |x⟩. This can be done easily by hardwiring x and applying a sequence of CNOT gates to flip the appropriate qubits. Then we can combine the two circuits to get C' = Q_n C_x, and now measuring the first qubit of C'|0⟩^⊗n yields |1⟩ with probability at least 2/3 if x is in L, and at most 1/3 otherwise. Finally, the result of a BQP computation is in general obtained by measuring several qubits and applying some (classical) logic gates to them; we can always defer the measurement and reroute the circuit so that by measuring the first qubit of C', we get the output. This will be our circuit, and we decide the membership of x in L by running A on (C', α = 2/3, β = 1/3). By the definition of BQP, we will either fall into the first case (acceptance) or the second case (rejection), so L reduces to APPROX-QCIRCUIT-PROB.

APPROX-QCIRCUIT-PROB comes in handy when we try to prove relationships between some well-known complexity classes and BQP.

Relationship to other complexity classes

BQP is defined for quantum computers; the corresponding complexity class for classical computers (or more formally for probabilistic Turing machines) is BPP. Just like P and BPP, BQP is low for itself, which means BQP^BQP = BQP. Informally, this is true because polynomial-time algorithms are closed under composition: if a polynomial-time algorithm calls polynomial-time algorithms as subroutines, the resulting algorithm is still polynomial-time. BQP contains P and BPP and is contained in AWPP, PP and PSPACE. In fact, BQP is low for PP, meaning that a PP machine achieves no benefit from being able to solve BQP problems instantly, an indication of the possible difference in power between these similar classes. The known relationships with classic complexity classes are: P ⊆ BPP ⊆ BQP ⊆ AWPP ⊆ PP ⊆ PSPACE. As the problem of P ≟ PSPACE has not yet been solved, proving BQP strictly different from any class in this chain is out of reach: any strict inequality along the chain would in particular imply P ≠ PSPACE.
adaptation

The plot element of a replicant giving birth served as the basis for the 2017 film Blade Runner 2049.

See also

Blade Runner: Do Androids Dream of Electric Sheep? - the original story by Philip K. Dick
Blade Runner 1: A Story of the Future - film novelization by Les Martin
Blade Runner 2: The Edge of Human - K. W. Jeter
Blade Runner 4: Eye and Talon - K. W. Jeter
of his days as a blade runner. He finds himself drawn into a mission on behalf of the replicants he was once assigned to kill. Meanwhile, the mystery surrounding the beginnings of the Tyrell Corporation is being exposed.

Characters

Rick Deckard, a former bounty hunter, now working as a film consultant
Sarah Tyrell, the niece of Eldon Tyrell; she has been living on Mars since the events of Blade Runner 2
Anson Tyrell, Sarah's father
Ruth Tyrell, Sarah's mother
Rachael, a ten-year-old girl
Roy Batty, the human template for the replicant Deckard fought in the previous novel; that replicant's personality now resides inside Deckard's briefcase
Sebastian, a dehydrated deity
Urbenton, director of the movie Blade Runner on which Rick Deckard is a consultant
Dave Holden, Deckard's former partner
Relationship to other works

The book's plot draws from other material related to Blade Runner in a number of ways:

Deckard, Pris, Sebastian, Leon, Batty, and Holden all appeared in Blade Runner.
Many of the parts of the "conspiracy" are based on errors or plot holes identified by fans of the original movie, such as Leon's ability to bring a gun into the Tyrell building, or the reference to the sixth replicant.
The character of John Isidore, and his "pet hospital", is taken from Dick's original novel Do Androids Dream of Electric Sheep?, although that book contained no suggestion that the shop ran a sideline in modifying replicants.
Blade Runner's Sebastian was based on Electric Sheep's Isidore, though Jeter features them as separate characters in The Edge of Human.
The idea of replicant models being mass-produced, and in particular a woman identical to Rachael existing, is also from Do Androids Dream of Electric Sheep?; although in that book, Pris was the replicant double of Rachael, and there was no suggestion that replicants were constructed based on human templates.
The etymology of the term "blade runner" is revealed to come from the German phrase bleib ruhig, meaning "remain calm"; it was supposedly developed by the Tyrell Corporation to prevent the spread of news about replicants malfunctioning.

However, the book also contradicts the source material in some ways:

Sebastian was stated as being dead in the movie, yet he is alive in The Edge of Human.
Pris was clearly stated as being a replicant in both the movie and the original novel, yet The Edge of Human claims she was human.
Pris was clearly destroyed by Deckard in both the movie and the original novel. Sebastian's ability to bring Pris back to life as a replicant introduces numerous problems: the book implies that Sebastian was able to do this without realising that her original body was human. It is likewise unclear why Deckard would have left her, or any suspected replicant he retired, in a state from which they could be repaired.
"The Final Cut" of Blade Runner removed the reference to a surviving sixth replicant, as it was generally considered a leftover from an early script.

Reception

Michael Giltz of Entertainment Weekly gave the book a "C-", feeling that "only hardcore fans will be satisfied by this tale" and saying Jeter's "habit of echoing dialogue and scenes from the film is annoying and begs comparisons he would do well to avoid." Tal Cohen of Tal Cohen's Bookshelf called The Edge of Human "a good book", praising Jeter's "further, and deeper, investigation of the questions Philip K. Dick originally asked", but criticized the book for its "needless grandioseness" and for "rel[ying] on Blade Runner too heavily, [as] the number of new characters introduced is extremely small..." Ian Kaplan of BearCave.com gave the book three stars out of five, saying that while he was "not entirely satisfied" and felt that the "story tends to be shallow", "Jeter does deal with the moral dilemma of the Blade Runners who hunt down beings that are virtually human in every way." J. Patton of The Bent Cover praised Jeter for "[not] try[ing] to emulate Philip K. Dick", adding, "This book also has all the grittiness and dark edges that the movie showed
replicant. He shoots him. Deckard returns to Sarah with his suspicion: there is no sixth replicant. Sarah, speaking via a remote camera, confesses that she invented and maintained the rumor herself in order to deliberately discredit and eventually destroy the Tyrell Corporation, because her uncle Eldon had based Rachael on her and then abandoned the real Sarah. Sarah brings Rachael back to the Corporation to meet with Deckard, and they escape. However, Holden, recovering from injuries sustained during the fight, later uncovers the truth: Rachael has been killed by Tyrell agents, and the "Rachael" who escaped with Deckard was actually Sarah. She has completed her revenge by both destroying Tyrell and taking back Rachael's place.

Characters

Rick Deckard: The Tyrell Corporation finally locates him, residing at a cabin in the woods with the frozen Rachael. In exchange for getting Rachael back, Deckard agrees to hunt the missing sixth replicant.
Roy Batty: The man whom Tyrell used as the template for his combat replicants is in fact a man of considerable instability, suffering from a brain disorder that prevents him from experiencing fear.
Sarah Tyrell: The niece of Eldon Tyrell, Sarah locates and hires Deckard to eliminate the final replicant in order to retain her corporation's hold over the market.
Dave Holden: Starting off bed-ridden after his attack by the replicant Leon, Holden is rescued by Roy, who in turn leads him to some startling revelations.
J.R. Isidore: A lowly employee of a vet's office, Isidore also works as an underground replicant sympathizer, having made modifications to replicants in order to help them escape detection.
previous loop
<-      Decrement the loop Counter in Cell #0
]       Loop until Cell #0 is zero; number of iterations is 8

The result of this is:

Cell no :   0   1   2    3   4   5   6
Contents:   0   0   72  104  88  32   8
Pointer :   ^

>>.                     Cell #2 has value 72 which is 'H'
>---.                   Subtract 3 from Cell #3 to get 101 which is 'e'
+++++++..+++.           Likewise for 'llo' from Cell #3
>>.                     Cell #5 is 32 for the space
<-.                     Subtract 1 from Cell #4 for 87 to give a 'W'
<.                      Cell #3 was set to 'o' from the end of 'Hello'
+++.------.--------.    Cell #3 for 'rl' and 'd'
>>+.                    Add 1 to Cell #5 gives us an exclamation point
>++.                    And finally a newline from Cell #6

For "readability", this code has been spread across many lines, and blanks and comments have been added. Brainfuck ignores all characters except the eight commands +-<>[],. so no special syntax for comments is needed (as long as the comments do not contain the command characters). The code could just as well have been written as:

++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.

ROT13

This program enciphers its input with the ROT13 cipher. To do this, it must map characters A-M (ASCII 65-77) to N-Z (78-90), and vice versa. Also it must map a-m (97-109) to n-z (110-122) and vice versa. It must map all other characters to themselves; it reads characters one at a time and outputs their enciphered equivalents until it reads an EOF (here assumed to be represented as either -1 or "no change"), at which point the program terminates.

The basic approach used is as follows. Calling the input character x, divide x-1 by 32, keeping quotient and remainder. Unless the quotient is 2 or 3, just output x, having kept a copy of it during the division. If the quotient is 2 or 3, divide the remainder ((x-1) modulo 32) by 13; if the quotient here is 0, output x+13; if 1, output x-13; if 2, output x.

Regarding the division algorithm, when dividing y by z to get a quotient q and remainder r, there is an outer loop which sets q and r first to the quotient and remainder of 1/z, then to those of 2/z, and so on; after it has executed y times, this outer loop terminates, leaving q and r set to the quotient and remainder of y/z. (The dividend y is used as a diminishing counter that controls how many times this loop is executed.) Within the loop, there is code to increment r and decrement y, which is usually sufficient; however, every zth time through the outer loop, it is necessary to zero r and increment q. This is done with a diminishing counter set to the divisor z; each time through the outer loop, this counter is decremented, and when it reaches zero, it is refilled by moving the value from r back into it.
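The quotient-and-remainder scheme just described can be expressed directly in ordinary code. The following Python sketch (the function name is ours, not part of the brainfuck program) mirrors the per-character logic that the brainfuck implementation below carries out one increment at a time.

def rot13_char(x):
    # ROT13 one character code x, using the division scheme described
    # above: divide x-1 by 32; only quotients 2 (uppercase) and 3
    # (lowercase) are letters, and the remainder divided by 13 tells us
    # whether to add 13, subtract 13, or leave the character alone.
    q, r = divmod(x - 1, 32)
    if q not in (2, 3):
        return x            # not a letter: output unchanged
    q2 = r // 13
    if q2 == 0:
        return x + 13       # A-M or a-m
    if q2 == 1:
        return x - 13       # N-Z or n-z
    return x                # remainder 26..31: punctuation, unchanged

# 'A' is 65: (65-1)//32 == 2 and (64%32)//13 == 0, so output 65+13 = 'N'.
assert chr(rot13_char(ord('A'))) == 'N'
assert chr(rot13_char(ord('n'))) == 'a'
assert rot13_char(ord(' ')) == ord(' ')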
-,+[                         Read first character and start outer character reading loop
    -[                       Skip forward if character is 0
        >>++++[>++++++++<-]  Set up divisor (32) for division loop
                             (MEMORY LAYOUT: dividend copy remainder divisor quotient zero zero)
        <+<-[                Set up dividend (x minus 1) and enter division loop
            >+>+>-[>>>]      Increase copy and remainder / reduce divisor / Normal case: skip forward
            <[[>+<-]>>+>]    Special case: move remainder back to divisor and increase quotient
            <<<<<-           Decrement dividend
        ]                    End division loop
    ]>>>[-]+                 End skip loop; zero former divisor and reuse space for a flag
    >--[-[<->+++[-]]]<[      Zero that flag unless quotient was 2 or 3; zero quotient; check flag
        ++++++++++++<[       If flag then set up divisor (13) for second division loop
                             (MEMORY LAYOUT: zero copy dividend divisor remainder quotient zero zero)
            >-[>+>>]         Reduce divisor; Normal case: increase remainder
            >[+[<+>-]>+>>]   Special case: increase remainder / move it back to divisor / increase quotient
            <<<<<-           Decrease dividend
        ]                    End division loop
        >>[<+>-]             Add remainder back to divisor to get a useful 13
        >[                   Skip forward if quotient was 0
            -[               Decrement quotient and skip forward if quotient was 1
                -<<[-]>>     Zero quotient and divisor if quotient was 2
            ]<<[<<->>-]>>    Zero divisor and subtract 13 from copy if quotient was 1
        ]<<[<<+>>-]          Zero divisor and add 13 to copy if quotient was 0
    ]                        End outer skip loop (jump to here if ((character minus 1)/32) was not 2 or 3)
    <[-]                     Clear remainder from first division if second division was skipped
    <.[-]                    Output ROT13ed character from copy and clear it
    <-,+                     Read next character
]                            End character reading loop

Portability issues

Partly because Urban Müller did not write a thorough language specification, the many subsequent brainfuck interpreters and compilers have implemented slightly different dialects of brainfuck.

Cell size

In the classic distribution, the cells are of 8-bit size (cells are bytes), and this is still the most common size. However, to read non-textual data, a brainfuck program may need to distinguish an end-of-file condition from any possible byte value; thus 16-bit cells have also been used. Some implementations have used 32-bit cells, 64-bit cells, or bignum cells with theoretically unlimited range, but programs that use this extra range are likely to be slow, since storing the value n into a cell requires time proportional to n, as a cell's value may only be changed by incrementing and decrementing.

In all these variants, the , and . commands still read and write data in bytes. In most of them, the cells wrap around, i.e. incrementing a cell which holds its maximal value (with the + command) will bring it to its minimal value and vice versa. The exceptions are implementations which are distant from the underlying hardware, implementations that use bignums, and implementations that try to enforce portability.

It is usually easy to write brainfuck programs that do not ever cause integer wraparound or overflow, and therefore don't depend on cell size. Generally this means avoiding incrementing a cell past +255 (unsigned 8-bit wraparound), or avoiding overstepping the boundaries of [-128, +127] (signed 8-bit wraparound); since there are no comparison operators, a program cannot distinguish between a signed and unsigned two's complement fixed-bit-size cell, and negativeness of numbers is a matter of interpretation. For more details on integer wraparound, see the Integer overflow article.

Array size

In the classic distribution, the array has 30,000 cells, and the pointer begins at the leftmost cell.
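Since dialect details matter, it may help to see how small a complete interpreter can be. Here is a minimal Python sketch committing to one set of the choices discussed in this section: 30,000 wrapping 8-bit cells, the pointer starting at the leftmost cell, and the , command leaving the cell unchanged on end of input. It is one possible dialect, not a normative specification; a more careful implementation would also check pointer bounds.

def brainfuck(code, data=b""):
    # Pre-match brackets so loops can jump in O(1).
    jump, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jump[i], jump[j] = j, i
    cells, ptr, pc, inp, out = [0] * 30000, 0, 0, iter(data), []
    while pc < len(code):
        c = code[pc]
        if c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': cells[ptr] = (cells[ptr] + 1) % 256   # 8-bit wraparound
        elif c == '-': cells[ptr] = (cells[ptr] - 1) % 256
        elif c == '.': out.append(chr(cells[ptr]))
        elif c == ',': cells[ptr] = next(inp, cells[ptr])    # EOF: "no change"
        elif c == '[' and cells[ptr] == 0: pc = jump[pc]
        elif c == ']' and cells[ptr] != 0: pc = jump[pc]
        pc += 1
    return ''.join(out)

# Running the condensed program from above prints "Hello World!" and a newline.
print(brainfuck("++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]"
                ">>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++."))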
Even more cells are needed to store things like the millionth Fibonacci number, and the easiest way to make the language Turing complete is to make the array unlimited on the right.

A few implementations extend the array to the left as well; this is an uncommon feature, and therefore portable brainfuck programs do not depend on it.

When the pointer moves outside the bounds of the array, some implementations will give an error message, some will try to extend the array dynamically, some will not notice and will produce undefined behavior, and a few will move the pointer to the opposite end of the array. Some tradeoffs are involved: expanding the array dynamically to the right is the most user-friendly approach and is good for memory-hungry programs, but it carries a speed penalty. If a fixed-size array is used, it is helpful to make it very large, or better yet let the user set the size. Giving an error message for bounds violations is very useful for debugging, but even that carries a speed penalty unless it can be handled by the operating system's memory protections.

End-of-line code

Different operating systems (and sometimes different programming environments) use subtly different versions of ASCII. The most important difference is in the code used for the end of a line of text. MS-DOS and Microsoft Windows use CRLF, i.e. a 13 followed by a 10, in most contexts. UNIX and its descendants (including Linux and macOS) and Amigas use just 10, and older Macs use just 13. It would be difficult if brainfuck programs had to be rewritten for different operating systems. However, a unified standard was easy to create: Urban Müller's compiler and his example programs use 10, on both input and output; so do a large majority of existing brainfuck programs; and 10 is also more convenient to use than CRLF. Thus, brainfuck implementations should make sure that brainfuck programs that assume newline = 10 will run properly; many do so, but some do not. This assumption is also consistent with most of the world's sample code for C and other languages, in that they use "\n", or 10, for their newlines. On systems that use CRLF line endings, the C standard library transparently remaps "\n" to "\r\n" on output and "\r\n" to "\n" on input for streams not opened in binary mode.

End-of-file behavior

The behavior of the , command when an end-of-file condition has been encountered varies. Some implementations set the cell at the pointer to 0, some set it to the C constant EOF (in practice this is usually -1), and some leave the cell's value unchanged. There is no real consensus; arguments for the three behaviors are as follows.

Setting the cell to 0 avoids the use of negative numbers, and makes it marginally more concise to write a loop that reads characters until EOF occurs. This is a language extension devised by Panu Kalliokoski.

Setting the cell to -1 allows EOF to be distinguished from any byte value (if the cells are larger than bytes), which is necessary for reading non-textual data; also, it is the behavior of the C translation of , given in Müller's readme file. However, it is not obvious that those C
was also named Consul of the Accademia delle Arti del Disegno of Florence, which had been founded by the Duke Cosimo I in 1563. In 1569, Ammannati was commissioned to build the Ponte Santa Trinita, a bridge over the Arno River. Its three arches are elliptical, and though very light and elegant, the bridge survived the floods that damaged other Arno bridges at various times. Santa Trinita was destroyed in 1944, during World War II, and rebuilt in 1957.

Ammannati designed what is considered a prototypic mannerist sculptural ensemble in the Fountain of Neptune (Fontana del Nettuno), prominently located in the Piazza della Signoria in the center of Florence. The assignment was originally given to the aged Bartolommeo Bandinelli; however, when Bandinelli died, Ammannati's design bested the submissions of Benvenuto Cellini and Vincenzo Danti to gain the commission. Between 1563 and 1565, Ammannati and his assistants, among them Giambologna, sculpted the block of marble that had been chosen by Bandinelli.
He took Grand Duke Cosimo I as the model for Neptune's face. The statue was meant to highlight Cosimo's goal of establishing a Florentine naval force. The ungainly sea god was placed at the corner of the Palazzo Vecchio within sight of Michelangelo's David statue, and the then 87-year-old sculptor is said to have scoffed that Ammannati had ruined a beautiful piece of marble, with the ditty: "Ammannati, Ammanato, che bel marmo hai rovinato!" Ammannati continued work on this fountain for a decade, adding around the perimeter a cornucopia of demigod figures: bronze
of a titular see, which is usually an ancient city that used to have a bishop, but, for some reason or other, does not have one now. Titular bishops often serve as auxiliary bishops. In the Ecumenical Patriarchate, bishops of modern dioceses are often given a titular see alongside their modern one (for example, the archbishop of Thyateira and Great Britain).

Auxiliary bishop

An auxiliary bishop is a full-time assistant to a diocesan bishop (the Catholic and Eastern Orthodox equivalent of an Anglican suffragan bishop). An auxiliary bishop is a titular bishop, and he is to be appointed as a vicar general or at least as an episcopal vicar of the diocese in which he serves.

Coadjutor bishop

A coadjutor bishop is an auxiliary bishop who is given almost equal authority in a diocese with the diocesan bishop, and the automatic right to succeed the incumbent diocesan bishop. The appointment of coadjutors is often seen as a means of providing for continuity of church leadership.

Assistant bishop

Honorary assistant bishop, assisting bishop, or bishop emeritus: these titles are usually applied to retired bishops who are given a general licence to minister as episcopal pastors under a diocesan's oversight. The titles, in this meaning, are not used by the Catholic Church.

General bishop

A title and role in some churches, not associated with a diocese. In the Coptic Orthodox Church the episcopal ranks from highest to lowest are metropolitan archbishops, metropolitan bishops, diocesan bishops, bishops exarchs of the throne, suffragan bishops, auxiliary bishops, general bishops, and finally chorbishops. Bishops of the same category rank according to date of consecration.

Chorbishop

A chorbishop is an official of a diocese in some Eastern Christian churches. Chorbishops are not generally ordained bishops – they are not given the sacrament of Holy Orders in that degree – but function as assistants to the diocesan bishop with certain honorary privileges.

Supreme bishop

The obispo maximo, or supreme bishop, of the Iglesia Filipina Independiente is elected by the General Assembly of the Church. He is the chief executive officer of the Church. He also holds an important pastoral role, being the spiritual head and chief pastor of the Church. He has precedence of honor and prominence of position among, and is recognized to have primacy over, other bishops.

Te Pīhopa

The Anglican Church in Aotearoa, New Zealand and Polynesia uses, even in English language usage, this Māori language term for its tikanga Māori bishops.

Duties

In Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, High Church Lutheranism, and Anglicanism, only a bishop can ordain other bishops, priests, and deacons. In the Eastern liturgical tradition, a priest can celebrate the Divine Liturgy only with the blessing of a bishop. In Byzantine usage, an antimension signed by the bishop is kept on the altar partly as a reminder of whose altar it is and under whose omophorion the priest at a local parish is serving. In Syriac Church usage, a consecrated wooden block called a thabilitho is kept for the same reasons.

The pope, in addition to being the Bishop of Rome and spiritual head of the Catholic Church, is also the Patriarch of the Latin Rite. Each bishop within the Latin Rite is answerable directly to the pope and not to any other bishop, except to metropolitans in certain oversight instances.
The pope previously used the title Patriarch of the West, but this title was dropped from use in 2006, a move which caused some concern within the Eastern Orthodox Communion as, to them, it implied wider papal jurisdiction.

In Catholic, Eastern Orthodox, Oriental Orthodox, Lutheran and Anglican cathedrals there is a special chair set aside for the exclusive use of the bishop. This is the bishop's cathedra and is often called the throne. In some Christian denominations, for example, the Anglican Communion, parish churches may maintain a chair for the use of the bishop when he visits; this is to signify the parish's union with the bishop.

The bishop is the ordinary minister of the sacrament of confirmation in the Latin Rite Catholic Church, and in the Old Catholic communion only a bishop may administer this sacrament. In the Lutheran and Anglican churches, the bishop normatively administers the rite of confirmation, although in those denominations that do not have an episcopal polity, confirmation is administered by the priest. However, in the Byzantine and other Eastern rites, whether Eastern or Oriental Orthodox or Eastern Catholic, chrismation is done immediately after baptism, and thus the priest is the one who confirms, using chrism blessed by a bishop.

Ordination of Catholic, Eastern Orthodox, Oriental Orthodox, Lutheran and Anglican bishops

Bishops in all of these communions are ordained by other bishops through the laying on of hands. While traditional teaching maintains that any bishop with apostolic succession can validly perform the ordination of another bishop, some churches require that two or three bishops participate, either to ensure sacramental validity or to conform with church law. Catholic doctrine holds that one bishop can validly ordain another priest as a bishop. Though a minimum of three bishops participating is desirable (there are usually several more) in order to demonstrate collegiality, canonically only one bishop is necessary. The practice of only one bishop ordaining was normal in countries where the Church was persecuted under Communist rule.

The title of archbishop or metropolitan may be granted to a senior bishop, usually one who is in charge of a large ecclesiastical jurisdiction. He may, or may not, have provincial oversight of suffragan bishops and may possibly have auxiliary bishops assisting him.

Ordination of a bishop, and thus continuation of apostolic succession, takes place through a ritual centred on the imposition of hands and prayer. Apart from the ordination, which is always done by other bishops, there are different methods as to the actual selection of a candidate for ordination as bishop. In the Catholic Church the Congregation for Bishops generally oversees the selection of new bishops with the approval of the pope. The papal nuncio usually solicits names from the bishops of a country, consults with priests and leading members of the laity, and then selects three to be forwarded to the Holy See. In Europe, some cathedral chapters have duties to elect bishops. The Eastern Catholic churches generally elect their own bishops. Most Eastern Orthodox churches allow varying amounts of formalised laity or lower clergy influence on the choice of bishops. This also applies in those Eastern churches which are in union with the pope, though it is required that he give assent.
Catholic, Eastern Orthodox, Oriental Orthodox, Anglican, Old Catholic and some Lutheran bishops claim to be part of the continuous sequence of ordained bishops since the days of the apostles, referred to as apostolic succession. In Scandinavia and the Baltic region, Lutheran churches participating in the Porvoo Communion (those of Iceland, Norway, Sweden, Finland, Estonia, and Lithuania), many Lutheran churches outside the Porvoo Communion (including those of Kenya, Latvia, and Russia), and the confessional Communion of Nordic Lutheran Dioceses believe that they ordain their bishops in the apostolic succession in lines stemming from the original apostles. The New Westminster Dictionary of Church History states that "In Sweden the apostolic succession was preserved because the Catholic bishops were allowed to stay in office, but they had to approve changes in the ceremonies."

The Catholic Church does recognise as valid (though illicit) ordinations done by breakaway Catholic, Old Catholic or Oriental bishops, and groups descended from them; it also regards as both valid and licit those ordinations done by bishops of the Eastern churches, so long as those receiving the ordination conform to other canonical requirements (for example, that the ordinand is an adult male) and an Eastern Orthodox rite of episcopal ordination, expressing the proper functions and sacramental status of a bishop, is used; this has given rise to the phenomenon of episcopi vagantes (for example, clergy of the Independent Catholic groups which claim apostolic succession, though this claim is rejected by both Catholicism and Eastern Orthodoxy).

With respect to Lutheranism, "the Catholic Church has never officially expressed its judgement on the validity of orders as they have been handed down by episcopal succession in these two national Lutheran churches" (the Evangelical Lutheran Church of Sweden and the Evangelical Lutheran Church of Finland), though it does "question how the ecclesiastical break in the 16th century has affected the apostolicity of the churches of the Reformation and thus the apostolicity of their ministry".

Since Pope Leo XIII issued the bull Apostolicae curae in 1896, the Catholic Church has insisted that Anglican orders are invalid because of the Reformed changes in the Anglican ordination rites of the 16th century and divergence in understanding of the theology of priesthood, episcopacy and Eucharist. However, since the 1930s, Utrecht Old Catholic bishops (recognised by the Holy See as validly ordained) have sometimes taken part in the ordination of Anglican bishops. According to the writer Timothy Dufort, by 1969, all Church of England bishops had acquired Old Catholic lines of apostolic succession recognised by the Holy See. This development has muddied the waters somewhat, as it could be argued that the strain of apostolic succession has been re-introduced into Anglicanism, at least within the Church of England.

The Eastern Orthodox Churches would not accept the validity of any ordinations performed by the Independent Catholic groups, as Eastern Orthodoxy considers to be spurious any consecration outside the Church as a whole.
Eastern Orthodoxy considers apostolic succession to exist only within the Universal Church, and not through any authority held by individual bishops; thus, if a bishop ordains someone to serve outside the (Eastern Orthodox) Church, the ceremony is ineffectual, and no ordination has taken place, regardless of the ritual used or the ordaining prelate's position within the Eastern Orthodox Churches.

The position of the Catholic Church is slightly different. It does recognise the validity of the orders of certain groups which separated from communion with the Holy See: it accepts as valid the ordinations of the Old Catholics in communion with Utrecht, as well as the Polish National Catholic Church (which received its orders directly from Utrecht, and was, until recently, part of that communion). But Catholicism does not recognise the orders of any group whose teaching is at variance with what it considers the core tenets of Christianity; this is the case even though the clergy of the Independent Catholic groups may use the proper ordination ritual. There are also other reasons why the Holy See does not recognise the validity of the orders of the Independent clergy:

They hold that the continuing practice among many Independent clergy of one person receiving multiple ordinations in order to secure apostolic succession betrays an incorrect and mechanistic theology of ordination.
They hold that the practice within Independent groups of ordaining women demonstrates an understanding of priesthood that they maintain is totally unacceptable to the Catholic and Eastern Orthodox churches, as they believe that the Universal Church does not possess such authority; thus, they uphold that any ceremonies performed by these women should be considered sacramentally invalid. The theology of male clergy within the Independent movement is also suspect according to the Catholics, as they presumably approve of the ordination of women, and may have even undergone an (invalid) ordination ceremony conducted by a woman.

Whilst members of the Independent Catholic movement take seriously the issue of valid orders, it is highly significant that the relevant Vatican Congregations tend not to respond to petitions from Independent Catholic bishops and clergy who seek to be received into communion with the Holy See, hoping to continue in some sacramental role. In those instances where the pope does grant reconciliation, those deemed to be clerics within the Independent Old Catholic movement are invariably admitted as laity and not priests or bishops.

There is a mutual recognition of the validity of orders amongst the Catholic, Eastern Orthodox, Old Catholic, Oriental Orthodox and Assyrian Church of the East churches.

Some provinces of the Anglican Communion have begun ordaining women as bishops in recent decades, for example, England, Ireland, Scotland, Wales, the United States, Australia, New Zealand, Canada and Cuba. The first woman to be consecrated a bishop within Anglicanism was Barbara Harris, who was ordained in the United States in 1989. In 2006, Katharine Jefferts Schori, the Episcopal Bishop of Nevada, became the first woman to become the presiding bishop of the Episcopal Church.
In the Evangelical Lutheran Church in America (ELCA) and the Evangelical Lutheran Church in Canada (ELCIC), the largest Lutheran Church bodies in the United States and Canada, respectively, and roughly based on the Nordic Lutheran national churches (similar to that of the Church of England), bishops are elected by Synod Assemblies, consisting of both lay members and clergy, for a term of six years, which can be renewed, depending upon the local synod's "constitution" (which is modeled on either the ELCA or ELCIC's national constitution). Since the implementation of concordats between the ELCA and the Episcopal Church of the United States and between the ELCIC and the Anglican Church of Canada, all bishops, including the presiding bishop (ELCA) or the national bishop (ELCIC), have been consecrated using the historic succession, in line with bishops from the Evangelical Lutheran Church of Sweden, with at least one Anglican bishop serving as co-consecrator.

Since entering into ecumenical communion with their respective Anglican bodies, bishops in the ELCA or the ELCIC not only approve the "rostering" of all ordained pastors, diaconal ministers, and associates in ministry, but also serve as the principal celebrant of all pastoral ordination and installation ceremonies and diaconal consecration ceremonies, as well as serving as the "chief pastor" of the local synod, upholding the teachings of Martin Luther as well as the documents of the Ninety-Five Theses and the Augsburg Confession. Unlike their counterparts in the United Methodist Church, ELCA and ELCIC synod bishops do not appoint pastors to local congregations (pastors, like their counterparts in the Episcopal Church, are called by local congregations). The presiding bishop of the ELCA and the national bishop of the ELCIC, the national bishops of their respective bodies, are elected for a single six-year term and may be elected to an additional term.

Although the ELCA agreed with the Episcopal Church to limit ordination to the bishop "ordinarily", ELCA pastor-ordinators are given permission to perform the rites in "extraordinary" circumstances. In practice, "extraordinary" circumstances have included disagreeing with Episcopalian views of the episcopate, and as a result, ELCA pastors ordained by other pastors are not permitted to be deployed to Episcopal Churches (they can, however, serve in Presbyterian Church USA, United Methodist Church, Reformed Church in America, and Moravian Church congregations, as the ELCA is in full communion with these denominations).

The Lutheran Church–Missouri Synod (LCMS) and the Wisconsin Evangelical Lutheran Synod (WELS), the second and third largest Lutheran bodies in the United States and the two largest Confessional Lutheran bodies in North America, do not follow an episcopal form of governance, settling instead on a form of quasi-congregationalism patterned on what they believe to be the practice of the early church. The second largest of the three predecessor bodies of the ELCA, the American Lutheran Church, was a congregationalist body, with national and synod presidents before they were re-titled as bishops (borrowing from the Lutheran churches in Germany) in the 1980s. With regard to ecclesial discipline and oversight, national and synod presidents typically function similarly to bishops in episcopal bodies.

Methodism

African Methodist Episcopal Church

In the African Methodist Episcopal Church, "Bishops are the Chief Officers of the Connectional Organization.
They are elected for life by a majority vote of the General Conference which meets every four years."

Christian Methodist Episcopal Church

In the Christian Methodist Episcopal Church in the United States, bishops are administrative superintendents of the church; they are elected by delegate vote and serve until the mandatory retirement age of 74. Among their duties are responsibility for appointing clergy to serve local churches as pastor, for performing ordinations, and for safeguarding the doctrine and discipline of the Church. The General Conference, a meeting every four years, has an equal number of clergy and lay delegates. In each Annual Conference, CME bishops serve for four-year terms. CME Church bishops may be male or female.

United Methodist Church

In the United Methodist Church (the largest branch of Methodism in the world) bishops serve as administrative and pastoral superintendents of the church. They are elected for life from among the ordained elders (presbyters) by vote of the delegates in regional (called jurisdictional) conferences, and are consecrated by the other bishops present at the conference through the laying on of hands. In the United Methodist Church bishops remain members of the "Order of Elders" while being consecrated to the "Office of the Episcopacy". Within the United Methodist Church only bishops are empowered to consecrate bishops and ordain clergy. Among their most critical duties are the ordination and appointment of clergy to serve local churches as pastor, presiding at sessions of the Annual, Jurisdictional, and General Conferences, providing pastoral ministry for the clergy under their charge, and safeguarding the doctrine and discipline of the Church. Furthermore, individual bishops, or the Council of Bishops as a whole, often serve a prophetic role, making statements on important social issues and setting forth a vision for the denomination, though they have no legislative authority of their own. In all of these areas, bishops of the United Methodist Church function very much in the historic meaning of the term. According to the Book of Discipline of the United Methodist Church, a bishop's responsibilities are

In each Annual Conference,
the full priesthood given by Jesus Christ, and therefore may ordain other clergy, including other bishops. A person ordained as a deacon, priest, and then bishop is understood to hold the fullness of the (ministerial) priesthood, given responsibility by Christ to govern, teach, and sanctify the Body of Christ. Priests, deacons and lay ministers co-operate and assist their bishops in pastoral ministry. Some Protestant churches, including the Lutheran, Anglican, Methodist and some Pentecostal churches, have "bishops" with similar functions, though not within direct apostolic succession.

Term

The English term bishop derives from the Greek word epískopos, meaning "overseer", Greek being the early language of the Christian Church. However, the term epískopos did not originate in Christianity: in Greek literature it had been used for several centuries before the advent of Christianity. It later transformed into the Latin episcopus, Old English biscop, Middle English bisshop and lastly bishop. In the early Christian era the term was not always clearly distinguished from presbýteros (literally "elder" or "senior", origin of the modern English word "priest"), but it is used in the sense of the order or office of bishop, distinct from that of presbyter, in the writings attributed to Ignatius of Antioch (died c. 110).

History in Christianity

The earliest organization of the Church in Jerusalem was, according to most scholars, similar to that of Jewish synagogues, but it had a council or college of ordained presbyters (elders). In Acts 11:30 and Acts 15:22, we see a collegiate system of government in Jerusalem chaired by James the Just, according to tradition the first bishop of the city. In Acts 14:23, the Apostle Paul ordains presbyters in churches in Anatolia. The word presbyter was not yet distinguished from overseer (episkopos, later used exclusively to mean bishop), as in Acts 20:17, Titus 1:5–7 and 1 Peter 5:1. The earliest writings of the Apostolic Fathers, the Didache and the First Epistle of Clement, for example, show the church used two terms for local church offices: presbyters (seen by many as an interchangeable term with episkopos or overseer) and deacon.

In Timothy and Titus in the New Testament a more clearly defined episcopate can be seen. We are told that Paul had left Timothy in Ephesus and Titus in Crete to oversee the local church. Paul commands Titus to ordain presbyters/bishops and to exercise general oversight.

Early sources are unclear, but various groups of Christian communities may have had the bishop surrounded by a group or college functioning as leaders of the local churches. Eventually the head or "monarchic" bishop came to rule more clearly, and all local churches would eventually follow this model, structuring themselves with the one bishop in clearer charge, though the role of the body of presbyters remained important. Eventually, as Christendom grew, bishops no longer directly served individual congregations. Instead, the metropolitan bishop (the bishop in a large city) appointed priests to minister to each congregation, acting as the bishop's delegate.

Apostolic Fathers

Around the end of the 1st century, the church's organization became clearer in historical documents. In the works of the Apostolic Fathers, and Ignatius of Antioch in particular, the role of the episkopos, or bishop, became more important or, rather, already was very important and was being clearly defined.
While Ignatius of Antioch offers the earliest clear description of monarchial bishops (a single bishop over all house churches in a city), he is an advocate of monepiscopal structure rather than describing an accepted reality. To the bishops and house churches to which he writes, he offers strategies on how to pressure house churches who don't recognize the bishop into compliance. Other contemporary Christian writers do not describe monarchial bishops, either continuing to equate them with the presbyters or speaking of episkopoi (bishops, plural) in a city.

"Blessed be God, who has granted unto you, who are yourselves so excellent, to obtain such an excellent bishop." — Epistle of Ignatius to the Ephesians 1:1
"and that, being subject to the bishop and the presbytery, ye may in all respects be sanctified." — Epistle of Ignatius to the Ephesians 2:1
"For your justly renowned presbytery, worthy of God, is fitted as exactly to the bishop as the strings are to the harp." — Epistle of Ignatius to the Ephesians 4:1
"Do ye, beloved, be careful to be subject to the bishop, and the presbyters and the deacons." — Epistle of Ignatius to the Ephesians 5:1
"Plainly therefore we ought to regard the bishop as the Lord Himself" — Epistle of Ignatius to the Ephesians 6:1
"your godly bishop" — Epistle of Ignatius to the Magnesians 2:1
"the bishop presiding after the likeness of God and the presbyters after the likeness of the council of the Apostles, with the deacons also who are most dear to me, having been entrusted with the diaconate of Jesus Christ" — Epistle of Ignatius to the Magnesians 6:1
"Therefore as the Lord did nothing without the Father, [being united with Him], either by Himself or by the Apostles, so neither do ye anything without the bishop and the presbyters." — Epistle of Ignatius to the Magnesians 7:1
"Be obedient to the bishop and to one another, as Jesus Christ was to the Father [according to the flesh], and as the Apostles were to Christ and to the Father, that there may be union both of flesh and of spirit." — Epistle of Ignatius to the Magnesians 13:2
"In like manner let all men respect the deacons as Jesus Christ, even as they should respect the bishop as being a type of the Father and the presbyters as the council of God and as the college of Apostles. Apart from these there is not even the name of a church." — Epistle of Ignatius to the Trallesians 3:1
"follow your bishop, as Jesus Christ followed the Father, and the presbytery as the Apostles; and to the deacons pay respect, as to God's commandment" — Epistle of Ignatius to the Smyrnans 8:1
"He that honoureth the bishop is honoured of God; he that doeth aught without the knowledge of the bishop rendereth service to the devil" — Epistle of Ignatius to the Smyrnans 9:1
— Lightfoot translation.

As the Church continued to expand, new churches in important cities gained their own bishop. Churches in the regions outside an important city were served by chorbishops, an official rank of bishops. However, soon presbyters and deacons were sent from the bishop of a city church. Gradually priests replaced the chorbishops. Thus, in time, the bishop changed from being the leader of a single church confined to an urban area to being the leader of the churches of a given geographical area.

Clement of Alexandria (end of the 2nd century) writes about the ordination of a certain Zachæus as bishop by the imposition of Simon Peter Bar-Jonah's hands.
The words bishop and ordination are used in their technical meaning by the same Clement of Alexandria. The bishops in the 2nd century are defined also as the only clergy to whom the ordination to priesthood (presbyterate) and diaconate is entrusted: "a priest (presbyter) lays on hands, but does not ordain" (cheirothetei ou cheirotonei). At the beginning of the 3rd century, Hippolytus of Rome describes another feature of the ministry of a bishop, the "Spiritum primatus sacerdotii habere potestatem dimittere peccata": the primacy of sacrificial priesthood and the power to forgive sins.

Christian bishops and civil government

The efficient organization of the Roman Empire became the template for the organisation of the church in the 4th century, particularly after Constantine's Edict of Milan. As the church moved from the shadows of privacy into the public forum it acquired land for churches, burials and clergy. In 391, Theodosius I decreed that any land that had been confiscated from the church by Roman authorities be returned.

The most usual term for the geographic area of a bishop's authority and ministry, the diocese, began as part of the structure of the Roman Empire under Diocletian. As Roman authority began to fail in the western portion of the empire, the church took over much of the civil administration. This can be clearly seen in the ministry of two popes: Pope Leo I in the 5th century, and Pope Gregory I in the 6th century. Both of these men were statesmen and public administrators in addition to their role as Christian pastors, teachers and leaders. In the Eastern churches, latifundia entailed to a bishop's see were much less common, the state power did not collapse the way it did in the West, and thus the tendency of bishops acquiring civil power was much weaker than in the West. However, the role of Western bishops as civil authorities, often called prince bishops, continued throughout much of the Middle Ages.

Bishops holding political office

As well as being Archchancellors of the Holy Roman Empire after the 9th century, bishops generally served as chancellors to medieval monarchs, acting as head of the justiciary and chief chaplain. The Lord Chancellor of England was almost always a bishop up until the dismissal of Cardinal Thomas Wolsey by Henry VIII. Similarly, the position of Kanclerz in the Polish kingdom was always held by a bishop until the 16th century.

In modern times, the principality of Andorra is headed by the Co-Princes of Andorra, one of whom is the Bishop of Urgell and the other the sitting President of France, an arrangement that began with the Paréage of Andorra (1278) and was ratified in the 1993 constitution of Andorra.

The office of the Papacy is inherently held by the sitting Roman Catholic Bishop of Rome. Though not originally intended to hold temporal authority, since the Middle Ages the power of the Papacy gradually expanded deep into the secular realm, and for centuries the sitting Bishop of Rome held the most powerful governmental office in Central Italy. In modern times, the pope is also the sovereign Prince of Vatican City, an internationally recognized micro-state located entirely within the city of Rome.

In France, prior to the Revolution, representatives of the clergy (in practice, bishops and abbots of the largest monasteries) comprised the First Estate of the Estates-General. This role was abolished after separation of Church and State was implemented during the French Revolution.
In the 21st century, the more senior bishops of the Church of England continue to sit in the House of Lords of the Parliament of the United Kingdom, as representatives of the established church, and are known as Lords Spiritual. The Bishop of Sodor and Man, whose diocese lies outside the United Kingdom, is an ex officio member of the Legislative Council of the Isle of Man. In the past, the Bishop of Durham had extensive vice-regal powers within his northern diocese, which was a county palatine, the County Palatine of Durham (previously, the Liberty of Durham), of which he was ex officio the earl. In the nineteenth century, a gradual process of reform was enacted, with the majority of the bishop's historic powers vested in the Crown by 1858. Eastern Orthodox bishops, along with all other members of the clergy, are canonically forbidden to hold political office. Occasional exceptions to this rule are tolerated when the alternative is political chaos. In the Ottoman Empire, the Patriarch of Constantinople, for example, had de facto administrative, cultural and legal jurisdiction, as well as spiritual authority, over all Eastern Orthodox Christians of the empire, as part of the Ottoman millet system. An Orthodox bishop headed the Prince-Bishopric of Montenegro from 1516 to 1852, assisted by a secular guvernadur. More recently, Archbishop Makarios III of Cyprus served as President of Cyprus from 1960 to 1977, an extremely turbulent period on the island. In 2001, Peter Hollingworth, AC, OBE – then the Anglican Archbishop of Brisbane – was controversially appointed Governor-General of Australia. Although Hollingworth gave up his episcopal position to accept the appointment, it still attracted considerable opposition in a country which maintains a formal separation between Church and State. Episcopacy during the English Civil War During the period of the English Civil War, the role of bishops as wielders of political power and as upholders of the established church became a matter of heated political controversy. Indeed, Presbyterianism was the polity of most Reformed Churches in Europe, and had been favored by many in England since the English Reformation. Since in the primitive church the offices of presbyter and episkopos were not clearly distinguished, many Puritans held that this was the only form of government the church should have. The Anglican divine Richard Hooker objected to this claim in his famous work Of the Laws of Ecclesiastical Polity while, at the same time, defending Presbyterian ordination as valid (in particular Calvin's ordination of Beza). This was the official stance of the English Church until the Commonwealth, during which time the views of Presbyterians and Independents (Congregationalists) were more freely expressed and practiced. Christian churches Catholic, Eastern Orthodox, Oriental Orthodox, Lutheran and Anglican churches Bishops form the leadership in the Catholic Church, the Eastern Orthodox Church, the Oriental Orthodox Churches, certain Lutheran Churches, the Anglican Communion, the Independent Catholic Churches, the Independent Anglican Churches, and certain other, smaller, denominations. The traditional role of a bishop is as pastor of a diocese (also called a bishopric, synod, eparchy or see), and so to serve as a "diocesan bishop", or "eparch" as it is called in many Eastern Christian churches. Dioceses vary considerably in size, geographically and population-wise.
Some dioceses around the Mediterranean Sea which were Christianised early are rather compact, whereas dioceses in areas of rapid modern growth in Christian commitment—as in some parts of Sub-Saharan Africa, South America and the Far East—are much larger and more populous.
Bertrand Andrieu (24 November 1761 – 6 December 1822) was a French engraver of medals. He was born in Bordeaux. In France, he was considered the restorer of the art, which had declined after the time of Louis XIV.
During the Bourbon Restoration, the economy of Bordeaux was rebuilt by traders and shipowners. They undertook the construction of the first bridge of Bordeaux and of customs warehouses, and shipping traffic grew through trade with the new African colonies. Georges-Eugène Haussmann, a longtime prefect of Bordeaux, used Bordeaux's 18th-century large-scale rebuilding as a model when he was asked by Emperor Napoleon III to transform the quasi-medieval Paris into a "modern" capital that would make France proud. Victor Hugo found the town so beautiful he said: "Take Versailles, add Antwerp, and you have Bordeaux". In 1870, at the beginning of the Franco-Prussian War, the French government temporarily relocated to Bordeaux from Paris. That recurred during World War I and again very briefly during World War II, when it became clear that Paris would fall into German hands. 20th century During World War II, Bordeaux fell under German occupation. In May and June 1940, Bordeaux was the site of the life-saving actions of the Portuguese consul-general, Aristides de Sousa Mendes, who illegally granted thousands of Portuguese visas, which were needed to pass the Spanish border, to refugees fleeing the German occupation. From 1941 to 1943, the Italian Royal Navy established BETASOM, a submarine base at Bordeaux. Italian submarines participated in the Battle of the Atlantic from that base, which was also a major base for German U-boats as headquarters of the 12th U-boat Flotilla. The massive, reinforced concrete U-boat pens have proved impractical to demolish and are now partly used as a cultural center for exhibitions. 21st century, listed as World Heritage In 2007, 40% of the city's surface area, located around the Port of the Moon, was listed as a World Heritage Site. UNESCO inscribed Bordeaux as "an inhabited historic city, an outstanding urban and architectural ensemble, created in the age of the Enlightenment, whose values continued up to the first half of the 20th century, with more protected buildings than any other French city except Paris". Geography Bordeaux is located close to the European Atlantic coast, in the southwest of France and in the north of the Aquitaine region. It is around southwest of Paris. The city is built on a bend of the river Garonne, and is divided into two parts: the right bank to the east and the left bank to the west. Historically the left bank is more developed because, flowing on the outside of the bend, the water carves a channel of the depth required to allow the passing of merchant ships, which used to offload on this side of the river. Today, however, the right bank is developing, including new urban projects. In Bordeaux, the Garonne River is accessible to ocean liners through the Gironde estuary. The right bank of the Garonne is a low-lying, often marshy plain. Climate Bordeaux's climate was last officially classified as a temperate oceanic climate (Köppen climate classification Cfb), although in more recent temperature records, from 1991 to 2020, it has warmed to become a humid subtropical climate (Köppen climate classification Cfa). In the Trewartha climate classification system it was classified as temperate oceanic, or Do, but more recent temperature figures show eight months with means above 10 °C, classifying it as subtropical (Cf). Winters are cool because of the prevalence of westerly winds from the Atlantic. Summers are warm and long due to the influence of the Bay of Biscay (surface temperature reaches ).
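To make the classification rule concrete, the following is a minimal sketch of the month-counting check described above; the 10 °C threshold and the eight-month cutoff come from the Trewartha scheme as cited, while the monthly values in the example are illustrative placeholders rather than official Bordeaux climate normals:

    # Count months whose mean temperature exceeds 10 °C; under the
    # Trewartha convention described above, eight or more such months
    # indicate a subtropical (Cf) rather than temperate oceanic (Do) climate.
    # These monthly means are hypothetical placeholders, not measured data.
    monthly_means_c = [7.1, 7.9, 10.8, 13.0, 16.6, 20.1,
                      22.2, 22.3, 19.3, 15.4, 10.5, 7.6]
    warm_months = sum(1 for t in monthly_means_c if t > 10.0)
    label = "subtropical (Cf)" if warm_months >= 8 else "temperate oceanic (Do)"
    print(warm_months, label)  # with these placeholders: 9 subtropical (Cf)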
The average seasonal winter temperature is , but recent winters have been warmer than this. Frosts occur several times during a winter, but snowfall is very rare, occurring only once every three years. The average summer seasonal temperature is . The summer of 2003 set a record with an average temperature of . February 1956 was the coldest month on record, with an average temperature of −2.00 °C at Bordeaux–Mérignac Airport. Economy Bordeaux is a major centre for business in France, as it has the sixth-largest metropolitan population in France. It serves as a major regional centre for trade, administration, services and industry. The GDP of Bordeaux is €32.7 billion. Wine The vine was introduced to the Bordeaux region by the Romans, probably in the mid-first century, to provide wine for local consumption, and wine production has been continuous in the region since. The Bordeaux wine-growing area has about of vineyards, 57 appellations, 10,000 wine-producing estates (châteaux) and 13,000 grape growers. With an annual production of approximately 960 million bottles, the Bordeaux area produces large quantities of everyday wine as well as some of the most expensive wines in the world. Included among the latter are the area's five premier cru (First Growth) red wines (four from Médoc and one, Château Haut-Brion, from Graves), established by the Bordeaux Wine Official Classification of 1855. Both red and white wines are made in the Bordeaux region. Red Bordeaux wine is called claret in the United Kingdom. Red wines are generally made from a blend of grapes, and may be made from Cabernet Sauvignon, Merlot, Cabernet Franc, Petit verdot, Malbec, and, less commonly in recent years, Carménère. White Bordeaux is made from Sauvignon blanc, Sémillon, and Muscadelle. Sauternes is a sub-region of Graves known for its intensely sweet, white dessert wines, such as Château d'Yquem. Because of a wine glut (wine lake) in generic production, the price squeeze induced by increasingly strong international competition, and vine-pull schemes, the number of growers has recently dropped from 14,000 and the area under vine has also decreased significantly. In the meantime, the global demand for first growths and the most famous labels has markedly increased and their prices have skyrocketed. The Cité du Vin, a museum as well as a place of exhibitions, shows, movie projections and academic seminars on the theme of wine, opened its doors in June 2016. Others The Laser Mégajoule will be one of the most powerful lasers in the world, allowing fundamental research and the development of laser and plasma technologies. This project, carried out by the French Ministry of Defence, involves an investment of 2 billion euros. The "Road of the lasers", a major project of regional planning, promotes regional investment in optical and laser-related industries, leading to the Bordeaux area having the most important concentration of optical and laser expertise in Europe. Some 20,000 people work for the aeronautic industry in Bordeaux. The city is home to some of the industry's biggest companies, including Dassault, EADS Sogerma, Snecma, Thales, SNPE, and others. The Dassault Falcon private jets are built there, as well as the military aircraft Rafale and Mirage 2000, the Airbus A380 cockpit, the boosters of Ariane 5, and the M51 SLBM missile. Tourism, especially wine tourism, is a major industry. Globelink.co.uk named Bordeaux the best tourist destination in Europe in 2015. Access to the port from the Atlantic is via the Gironde estuary.
Almost nine million tonnes of goods arrive and leave each year. Major companies This list includes indigenous Bordeaux-based companies and companies that have a major presence in Bordeaux, but are not necessarily headquartered there. Arena Groupe Bernard Groupe Castel Cdiscount Dassault Jock Marie Brizard McKesson Corporation Oxbow Ricard Sanofi Aventis Smurfit Kappa Snecma Solectron Thales Group Population In January 2017, there were 254,436 inhabitants in the city proper (commune) of Bordeaux. Bordeaux had its largest population of 267,409 in 1921. The majority of the population is French, but there are sizable groups of Italians, Spaniards (up to 20% of the Bordeaux population claim some degree of Spanish heritage), Portuguese, Turks and Germans. The built-up area has grown for more than a century beyond the municipal borders of Bordeaux due to urban sprawl, so that by January 2017 there were 1,247,977 people living in the overall metropolitan area (aire urbaine) of Bordeaux, only a fifth of whom lived in the city proper. Largest communities of foreigners: Politics Municipal administration The Mayor of the city is the environmentalist Pierre Hurmic. Bordeaux is the capital of five cantons and the prefecture of the Gironde and of Aquitaine. The town is divided into three districts, the first three of the Gironde. The headquarters of the Urban Community of Bordeaux is located in the Mériadeck neighbourhood, and the city is at the head of the Chamber of Commerce and Industry that bears its name. The number of inhabitants of Bordeaux is greater than 250,000 and less than 299,999, so the number of municipal councillors is 65. They are divided according to the following composition: Mayors of Bordeaux Since the Liberation (1944), there have been six mayors of Bordeaux. The RPR was renamed the UMP in 2002, which was in turn renamed LR in 2015. Elections Presidential elections of 2007 At the 2007 presidential election, the Bordelais gave 31.37% of their votes to Ségolène Royal of the Socialist Party against 30.84% to Nicolas Sarkozy, president of the UMP. Then came François Bayrou with 22.01%, followed by Jean-Marie Le Pen, who recorded 5.42%. None of the other candidates exceeded the 5% mark. Nationally, Nicolas Sarkozy led with 31.18%, then Ségolène Royal with 25.87%, followed by François Bayrou with 18.57%. After these came Jean-Marie Le Pen with 10.44%; none of the other candidates exceeded the 5% mark. In the second round, the city of Bordeaux gave Ségolène Royal 52.44% against 47.56% for Nicolas Sarkozy, the latter being elected President of the Republic with 53.06% against 46.94% for Ségolène Royal. The abstention rates for Bordeaux were 14.52% in the first round and 15.90% in the second round. Parliamentary elections of 2007 In the parliamentary elections of 2007, the left won eight constituencies against only three for the right. It should be added that after the 2008 by-elections, the eighth constituency of the Gironde switched to the left, bringing the count to nine. In Bordeaux, the left was for the first time in its history in the majority, as it held two of three constituencies following the elections. In the first constituency of the Gironde, the outgoing UMP MP Chantal Bourragué was well ahead with 44.81% against 25.39% for the Socialist candidate Beatrice Desaigues. In the second round, it was Chantal Bourragué who was re-elected, with 54.45% against 45.55% for her Socialist opponent.
In the second constituency of the Gironde, the UMP mayor and newly appointed Minister of Ecology, Energy, Sustainable Development and the Sea, Alain Juppé, confronted the PS general councillor Michèle Delaunay. In the first round, Alain Juppé was well ahead with 43.73% against 31.36% for Michèle Delaunay. In the second round, it was finally Michèle Delaunay who won the election, with 50.93% of the votes against 49.07% for Alain Juppé, the margin being only 670 votes. The defeat in the so-called "mayor's constituency" showed that Bordeaux was leaning increasingly to the left. Finally, in the third constituency of the Gironde, Noël Mamère was well ahead with 39.82% against 28.42% for the UMP candidate Elizabeth Vine. In the second round, Noël Mamère was re-elected with 62.82% against 37.18% for his right-wing rival. Municipal elections of 2008 The 2008 municipal elections saw a clash between the mayor of Bordeaux, Alain Juppé, and the Socialist president of the Regional Council of Aquitaine, Alain Rousset. The PS had put up a Socialist heavyweight in the Gironde and had great hopes for this election after the victories of Ségolène Royal and Michèle Delaunay in 2007. However, after a rather exciting campaign, it was Alain Juppé who was comfortably elected in the first round with 56.62%, far ahead of Alain Rousset, who managed to get 34.14%. At present, of Bordeaux's eight cantons, five are held by the PS and three by the UMP, the left eating a little further into the right's numbers each time. European elections of 2009 In the European elections of 2009, Bordeaux voters largely voted for the UMP candidate Dominique Baudis, who won 31.54% against 15.00% for the PS candidate Kader Arif. The Europe Ecology candidate José Bové came second with 22.34%. None of the other candidates reached the 10% mark. As in previous years, the 2009 European elections were held in eight large constituencies. Bordeaux is located in the "South-West" constituency; the results there were as follows: UMP candidate Dominique Baudis: 26.89%; his party gained four seats. PS candidate Kader Arif: 17.79%, gaining two seats in the European Parliament. Europe Ecology candidate José Bové: 15.83%, obtaining two seats. MoDem candidate Robert Rochefort: 8.61%, winning a seat. Left Front candidate Jean-Luc Mélenchon: 8.16%, gaining the last seat. At the regional elections of 2010, the Socialist incumbent president Alain Rousset won the first round with 35.19% in Bordeaux, although this score was lower than his result across the Gironde and Aquitaine as a whole. Xavier Darcos, Minister of Labour, followed with 28.40% of the votes, scoring above the regional and departmental average. Then came Monique De Marco, the Green candidate, with 13.40%, followed by the MoDem candidate and member for the Pyrénées-Atlantiques, Jean Lassalle, who registered a low 6.78% while still qualifying for the second round across Aquitaine as a whole, closely followed by Jacques Colombier, candidate of the National Front, who gained 6.48%. Finally came the Left Front candidate Gérard Boulanger with 5.64%; no other candidate passed the 5% mark. In the second round, Alain Rousset won in a landslide, his total rising to 55.83%. Although Xavier Darcos lost the election by a wide margin, he nevertheless achieved a score above the regional and departmental average, obtaining 33.40%. Jean Lassalle, who had qualified for the second round, passed the 10% mark by totalling 10.77%. The ballot was marked by abstention amounting to 55.51% in the first round and 53.59% in the second round.
Only candidates obtaining more than 5% are listed. 2017 elections Bordeaux voted for Emmanuel Macron in the presidential election. In the 2017 parliamentary election, La République En Marche! won most of the constituencies in Bordeaux. 2019 European elections Bordeaux voted in the 2019 European Parliament election in France. Municipal elections of 2020 After 73 years of right-of-centre rule, the ecologist Pierre Hurmic (EELV) came in ahead of Nicolas Florian (LR/LaREM). Parliamentary representation The city area is represented by the following constituencies: Gironde's 1st, Gironde's 2nd, Gironde's 3rd, Gironde's 4th, Gironde's 5th, Gironde's 6th, Gironde's 7th. Education University During antiquity, a first university was created by the Romans in 286. The city was an important administrative centre and the new university had to train administrators. Only rhetoric and grammar were taught. Ausonius and Sulpicius Severus were two of the teachers. In 1441, when Bordeaux was an English town, Pope Eugene IV created a university at the request of the archbishop, Pey Berland. In 1793, during the French Revolution, the National Convention abolished the university and replaced it in 1796 with the École centrale, which in Bordeaux was located in the former buildings of the Collège de Guyenne. In 1808, the university was re-established under Napoleon. Bordeaux accommodates approximately 70,000 students on one of the largest campuses in Europe (235 ha). The University of Bordeaux is divided into four: The University Bordeaux 1, (Maths, Physical sciences and Technologies), 10,693 students in 2002 The University Bordeaux 2, Bordeaux Segalen (Medicine and Life sciences), 15,038 students in 2002 The University Bordeaux 3, Michel de Montaigne (Liberal arts, Humanities, Languages, History), 14,785 students in 2002 The University Bordeaux 4, Montesquieu (Law, Economy and Management), 12,556 students in 2002 Institute of Political Sciences of Bordeaux. Although technically a part of the fourth university, it largely functions autonomously. Schools Bordeaux has numerous public and private schools offering undergraduate and postgraduate programs.
Engineering schools: Arts et Métiers ParisTech, graduate school of industrial and mechanical engineering ESME-Sudria, graduate school of engineering École d'ingénieurs en modélisation mathématique et mécanique École nationale supérieure d'électronique, informatique, télécommunications, mathématique et mécanique de Bordeaux (ENSEIRB-MATMECA) École supérieure de technologie des biomolécules de Bordeaux École nationale d'ingénieurs des travaux agricoles de Bordeaux École nationale supérieure de chimie et physique de Bordeaux École pour l'informatique et les nouvelles technologies Institut des sciences et techniques des aliments de Bordeaux Institut de cognitique École supérieure d'informatique École privée des sciences informatiques Business and management schools: The Bordeaux MBA (International College of Bordeaux) IUT Techniques de Commercialisation of Bordeaux (business school) INSEEC Business School (Institut des hautes études économiques et commerciales) KEDGE Business School (former BEM – Bordeaux Management School) Vatel Bordeaux International Business School E-Artsup Institut supérieur européen de gestion group Institut supérieur européen de formation par l'action Other: École nationale de la magistrature (National school for the judiciary) (EFAP) (CNAM) (law school) Weekend education The École Complémentaire Japonaise de Bordeaux (ボルドー日本語補習授業校 Borudō Nihongo Hoshū Jugyō Kō), a part-time Japanese supplementary school, is held in the Salle de l'Athénée Municipal in Bordeaux. Main sights Heritage and architecture Bordeaux is classified as a "City of Art and History". The city is home to 362 monuments historiques (only Paris has more in France), with some buildings dating back to Roman times. Bordeaux, Port of the Moon, has been inscribed on the UNESCO World Heritage List as "an outstanding urban and architectural ensemble". Bordeaux is home to one of Europe's biggest 18th-century architectural urban areas, making it a sought-after destination for tourists and cinema production crews. It stands out as one of the first French cities, after Nancy, to have entered an era of urbanism and large-scale metropolitan projects, with the father-and-son team of Gabriel, architects to King Louis XV, under the supervision of two intendants (governors), first Nicolas-François Dupré de Saint-Maur, then the Marquis de Tourny. Saint-André Cathedral, Saint-Michel Basilica and Saint-Seurin Basilica are part of the World Heritage Sites of the Routes of Santiago de Compostela in France. The organ in Saint-Louis-des-Chartrons is registered among the French monuments historiques. Buildings Main sights include: Place de la Bourse (1735–1755), designed by the Royal architect Jacques Gabriel as a setting for an equestrian statue of Louis XV, now replaced by the Fountain of the Three Graces. Grand Théâtre (1780), a large neoclassical theater built in the 18th century. Allées de Tourny Cours de l'Intendance Place du Chapelet Place du Parlement Place des Quinconces, the largest square in France. Monument aux Girondins Place Saint-Pierre Pont de pierre (1822) Saint Andrew's Cathedral, consecrated by Pope Urban II in 1096. Of the original Romanesque edifice only a wall in the nave remains. The Royal Gate is from the early 13th century, while the rest of the construction is mostly from the 14th and 15th centuries. Tour Pey-Berland (1440–1450), a massive, quadrangular Gothic tower annexed to the cathedral. Église Sainte-Croix (Church of the Holy Cross). It lies on the site of a 7th-century abbey destroyed by the Saracens.
Rebuilt under the Carolingians, it was again destroyed by the Normans in 845 and 864. It is annexed to a Benedictine abbey founded in the 7th century, and was built in the late 11th and early 12th centuries. The façade is in Romanesque style. The Gothic Basilica of Saint Michael, constructed between the end of the 14th century and the 16th century. Basilica of Saint Severinus, the most ancient church in Bordeaux. It was built in the early 6th century on the site of a palaeochristian necropolis. It has an 11th-century portico, while the apse and transept are from the following century. The 13th-century nave has chapels from the 11th and the 14th centuries. The ancient crypt houses sepulchres of the Merovingian family. Église Saint-Pierre, Gothic church Église Saint-Éloi, Gothic church Église Saint-Bruno, Baroque church decorated with frescoes Église Notre-Dame, Baroque church Église Saint-Paul-Saint-François-Xavier, Baroque church Palais Rohan, former mansion of the archbishop, now the city hall Palais Gallien, the remains of a late 2nd-century Roman amphitheatre Porte Cailhau, a medieval gatehouse of the old city walls. La Grosse Cloche (15th century), the second remaining gate of the medieval walls. It was the belfry of the old Town Hall. It consists of two circular towers and a central bell tower housing a bell weighing . The clock dates from 1759. La Grande Synagogue, built in 1878 Rue Sainte-Catherine, the longest pedestrian street in France Darwin ecosystem, an alternative venue in former military barracks The BETASOM submarine base Contemporary architecture Cité Frugès, district of Pessac, built by Le Corbusier, 1924–1926, listed as UNESCO heritage Fire Station, la Benauge, Claude Ferret/Adrien Courtois/Yves Salier, 1951–1954 Mériadeck district, 1960s–70s Court of first instance, Richard Rogers, 1998 CTBA, wood and furniture research center, A. Loisier, 1998 Hangar 14 on the Quai des Chartrons, 1999 The Management Science faculty on the Bastide, Anne Lacaton/Jean-Philippe Vassal, 2006 The Jardin botanique de la Bastide, Catherine Mosbach/Françoise Hélène Jourda/Pascal Convert, 2007 The Nuyens School complex on the Bastide, Yves Ballot/Nathalie Franck, 2007 Seeko'o Hotel on the Quai des Chartrons, King Kong architects, 2007 Matmut Atlantique stadium, Herzog & de Meuron, 2015 Cité du Vin, XTU architects, Anouk Legendre & Nicolas Desmazières, 2016 MECA, Maison de l'Economie Créative et de la culture de la Région Nouvelle-Aquitaine, Bjarke Ingels, 2019 Museums Musée des Beaux-Arts (fine arts museum), one of the finest painting galleries in France, with paintings by painters such as Titian, Veronese, Rubens, Van Dyck, Frans Hals, Claude, Chardin, Delacroix, Renoir, Seurat, Redon, Matisse and Picasso. Musée d'Aquitaine (archeological and history museum) Musée du Vin et du Négoce (museum of the wine trade) Musée des Arts Décoratifs et du Design (museum of decorative arts and design) Musée d'Histoire Naturelle (natural history museum) Musée Mer Marine (sea and navy museum) Cité du Vin CAPC musée d'art contemporain de Bordeaux (modern art museum) Musée national des douanes (history of French customs) Bordeaux Patrimoine Mondial (architectural and heritage interpretation centre) Musée d'ethnologie (ethnology museum) Institut culturel Bernard Magrez, a modern and street-art museum in an 18th-century mansion Cervantes Institute (in the house of Goya) Cap Sciences Centre Jean Moulin Memory of slavery Slavery contributed to the growth of the city.
Firstly, during the 18th and 19th centuries, Bordeaux was an important slave port, which saw some 500 slave expeditions that caused the deportation of 150,000 Africans by Bordeaux shipowners. Secondly, even though the "Triangular trade" represented only 5% of Bordeaux's wealth, the city's direct trade with the Caribbean, which accounted for the other 95%, concerned the colonial goods produced by slaves (sugar, coffee, cocoa). And thirdly, in that same period, a major migratory movement by Aquitanians took place to the Caribbean colonies, with Saint-Domingue (now Haiti) being the most popular destination. 40% of the white population of the island came from Aquitaine. They prospered on plantation incomes until the first slave revolts, which culminated in 1848 in the final abolition of slavery in France. A statue of Modeste Testas, an Ethiopian woman who was enslaved by the Bordeaux-based Testas brothers, was unveiled in 2019. She was trafficked by them from West Africa to Philadelphia (where one of the brothers coerced her into having two children by him) and was ultimately freed and lived in Haiti. The bronze sculpture was created by the Haitian artist Woodly Caymitte. A number of traces and memorial sites are visible in the city. Moreover, in May 2009, the Museum of Aquitaine opened spaces dedicated to "Bordeaux in the 18th century, trans-Atlantic trading and slavery". This work, richly illustrated with original documents, contributes to disseminating the state of knowledge on this question, presenting above all the facts and their chronology. The region of Bordeaux was also the land of several prominent abolitionists, such as Montesquieu, Laffon de Ladébat and Élisée Reclus. Others were members of the Society of the Friends of the Blacks, such as the revolutionaries Boyer-Fonfrède, Gensonné, Guadet and Ducos. Parks and gardens Jardin public de Bordeaux, which contains the Jardin botanique de Bordeaux Jardin botanique de la Bastide Parc bordelais Parc aux Angéliques Jardin des Lumières Parc Rivière Parc Floral Pont Jacques Chaban-Delmas Europe's longest-span vertical-lift bridge, the Pont Jacques Chaban-Delmas, was opened in 2013 in Bordeaux, spanning the River Garonne. The central lift span is and can be lifted vertically up to to let tall ships pass underneath. The €160 million bridge was inaugurated by President François Hollande and Mayor Alain Juppé on 16 March 2013. The bridge was named after the late Jacques Chaban-Delmas, a former Prime Minister and Mayor of Bordeaux. Shopping Bordeaux has many shopping options. In the heart of Bordeaux is Rue Sainte-Catherine. This pedestrian-only shopping street has of shops, restaurants and cafés; it is also one of the longest shopping streets in Europe. Rue Sainte-Catherine starts at Place de la Victoire and ends at Place de la Comédie by the Grand Théâtre. The shops become progressively more upmarket as one moves towards Place de la Comédie, and the nearby Cours de l'Intendance is where one finds the more exclusive shops and boutiques. Culture Bordeaux was also the first city in France to have created, in the 1980s, an architecture exhibition and research centre, Arc en rêve. Bordeaux offers a large number of cinemas and theatres, and is home to the Opéra national de Bordeaux. There are many music venues of varying capacity. The city also offers several festivals throughout the year.
In October 2021, Bordeaux was shortlisted for the European Commission's 2022 European Capital of Smart Tourism award, along with Copenhagen, Dublin, Florence, Ljubljana, Palma de Mallorca and Valencia. Transport Road Bordeaux is an important road and motorway junction. The city is connected to Paris by the A10 motorway, to Lyon by the A89, to Toulouse by the A62, and to Spain by the A63. There is a ring road called the "Rocade", which is often very busy. Another ring road is under consideration. Bordeaux has five road bridges that cross the Garonne: the Pont de pierre, built in the 1820s, and three modern bridges built after 1960: the Pont Saint Jean, just south of the Pont de pierre (both located downtown), the Pont d'Aquitaine, a suspension bridge downstream from downtown, and the Pont François Mitterrand, located upstream of downtown. These last two bridges are part of the ring road around Bordeaux. A fifth bridge, the Pont Jacques-Chaban-Delmas, was constructed in 2009–2012 and opened to traffic in March 2013. Located halfway between the Pont de pierre and the Pont d'Aquitaine and serving downtown rather than highway traffic, it is a vertical-lift bridge with a height comparable to the Pont de pierre in the closed position, and to the Pont d'Aquitaine in the open position. All five road bridges, including the two highway bridges, are open to cyclists and pedestrians as well. Another bridge, the Pont Jean-Jacques Bosc, is to be built in 2018. Lacking any steep hills, Bordeaux is relatively friendly to cyclists. Cycle paths (separate from the roadways) exist on the highway bridges, along the riverfront, on the university campuses, and incidentally elsewhere in the city. Cycle lanes and bus lanes that explicitly allow cyclists exist on many of the city's boulevards. A paid bicycle-sharing system with automated stations was established in 2010. Rail The main railway station, Gare de Bordeaux Saint-Jean, near the centre of the city, handles 12 million passengers a year. It is served by the French national railway's (SNCF) high-speed train, the TGV, which reaches Paris in two hours, with connections to major European centres such as Lille, Brussels, Amsterdam, Cologne, Geneva and London. The TGV also serves Toulouse and Irun (Spain) from Bordeaux. A regular train service is provided to Nantes, Nice, Marseille and Lyon. The Gare Saint-Jean is the major hub for regional trains (TER) operated by the SNCF to Arcachon, Limoges, Agen, Périgueux, Langon, Pau, Le Médoc, Angoulême and Bayonne. Historically the train line terminated at a station on the right bank of the river Garonne near the Pont de pierre, and passengers crossed the bridge to get into the city. Subsequently, a double-track steel railway bridge was constructed in the 1850s, by Gustave Eiffel, to bring trains across the river directly into the Gare de Bordeaux Saint-Jean. The old station was later converted and in 2010 comprised a cinema and restaurants. The two-track Eiffel bridge, with a speed limit of , became a bottleneck, and a new bridge was built, opening in 2009. The new bridge has four tracks and allows trains to pass at . During the planning there was much lobbying by the Eiffel family and other supporters to preserve the old bridge as a footbridge across the Garonne, with possibly a museum to document the history of the bridge and Gustave Eiffel's contribution. The decision was taken to save the bridge, but by early 2010 no plans had been announced as to its future use.
with all the bubbles stuck to it. The number of shots between each drop of the ceiling is influenced by the number of bubble colors remaining. The closer the bubbles get to the bottom of the screen, the faster the music plays, and if they cross the line at the bottom then the game is over. Release Two different versions of the original game were released. Puzzle Bobble was originally released in Japan only, in June 1994 by Taito, running on Taito's B System hardware (with the preliminary title "Bubble Buster"). Then, six months later in December, the international Neo Geo version of Puzzle Bobble was released. It was almost identical, aside from being in stereo and having some different sound effects and translated text. Reception In Japan, Game Machine listed the Neo Geo version of Puzzle Bobble in their February 15, 1995 issue as being the second most-popular arcade game at the time. It went on to become Japan's second highest-grossing arcade printed circuit board (PCB) software of 1995, below Virtua Fighter 2. In North America, RePlay reported the Neo Geo version of Puzzle Bobble to be the fourth most-popular arcade game in February 1995. Reviewing the Super NES version, Mike Weigand of Electronic Gaming Monthly called it "a thoroughly enjoyable and incredibly addicting puzzle game". He considered the two-player mode the highlight, but also said that the one-player mode provides a solid challenge. GamePro gave it a generally negative review, saying it "starts out fun but ultimately lacks intricacy and longevity." They elaborated that in one-player mode all the levels feel the same, and that two-player matches are over too quickly to build up any excitement. They also criticized the lack of any 3D effects in the graphics. Next Generation reviewed the SNES version of the game, and stated that "It's very simple, using only the control pad and one button to fire, and it's addictive as hell." A reviewer for Next Generation, while questioning the continued viability of the action puzzle genre, admitted that the game is "very simple and very addictive". He remarked that though the 3DO version makes no significant additions, none are called for by a game with such simple enjoyment. GamePro's brief review of the 3DO version commented, "The move-and-shoot controls are very responsive and the simple visuals and music are well done. This is one puzzler that isn't a
osteoblasts become the lining cells that form a protective layer on the bone surface. The mineralized matrix of bone tissue has an organic component of mainly collagen called ossein and an inorganic component of bone mineral made up of various salts. Bone tissue is mineralized tissue of two types, cortical bone and cancellous bone. Other types of tissue found in bones include bone marrow, endosteum, periosteum, nerves, blood vessels and cartilage. In the human body at birth, there are approximately 300 bones present; many of these fuse together during development, leaving a total of 206 separate bones in the adult, not counting numerous small sesamoid bones. The largest bone in the body is the femur or thigh-bone, and the smallest is the stapes in the middle ear. The Greek word for bone is ὀστέον ("osteon"), hence the many terms that use it as a prefix—such as osteopathy. Structure Bone is not uniformly solid, but consists of a flexible matrix (about 30%) and bound minerals (about 70%), which are intricately woven and endlessly remodeled by a group of specialized bone cells. Their unique composition and design allow bones to be relatively hard and strong, while remaining lightweight. Bone matrix is 90 to 95% composed of elastic collagen fibers, also known as ossein, and the remainder is ground substance. The elasticity of collagen improves fracture resistance. The matrix is hardened by the binding of an inorganic mineral salt, calcium phosphate, in a chemical arrangement known as calcium hydroxylapatite. It is the bone mineralization that gives bones rigidity. Bone is actively constructed and remodeled throughout life by special bone cells known as osteoblasts and osteoclasts. Within any single bone, the tissue is woven into two main patterns, known as cortical and cancellous bone, each with a different appearance and characteristics. Cortex The hard outer layer of bones is composed of cortical bone, which is also called compact bone as it is much denser than cancellous bone. It forms the hard exterior (cortex) of bones. The cortical bone gives bone its smooth, white, and solid appearance, and accounts for 80% of the total bone mass of an adult human skeleton. It facilitates bone's main functions—to support the whole body, to protect organs, to provide levers for movement, and to store and release chemical elements, mainly calcium. It consists of multiple microscopic columns, each called an osteon or Haversian system. Each column consists of multiple layers of osteoblasts and osteocytes around a central canal called the Haversian canal. Volkmann's canals at right angles connect the osteons together. The columns are metabolically active, and as bone is reabsorbed and created, the nature and location of the cells within the osteon will change. Cortical bone is covered by a periosteum on its outer surface, and an endosteum on its inner surface. The endosteum is the boundary between the cortical bone and the cancellous bone. The primary anatomical and functional unit of cortical bone is the osteon. Trabeculae Cancellous bone, also called trabecular or spongy bone, is the internal tissue of the skeletal bone and is an open-cell porous network. Cancellous bone has a higher surface-area-to-volume ratio than cortical bone and is less dense. This makes it weaker and more flexible. The greater surface area also makes it suitable for metabolic activities such as the exchange of calcium ions. Cancellous bone is typically found at the ends of long bones, near joints and in the interior of vertebrae.
Cancellous bone is highly vascular and often contains red bone marrow, where hematopoiesis, the production of blood cells, occurs. The primary anatomical and functional unit of cancellous bone is the trabecula. The trabeculae are aligned along the mechanical load distribution that a bone experiences, as within long bones such as the femur. As far as short bones are concerned, trabecular alignment has been studied in the vertebral pedicle. Thin formations of osteoblasts covered in endosteum create an irregular network of spaces, known as trabeculae. Within these spaces are bone marrow and hematopoietic stem cells that give rise to platelets, red blood cells and white blood cells. Trabecular marrow is composed of a network of rod- and plate-like elements that make the overall organ lighter and allow room for blood vessels and marrow. Trabecular bone accounts for the remaining 20% of total bone mass but has nearly ten times the surface area of compact bone. The words cancellous and trabecular refer to the tiny lattice-shaped units (trabeculae) that form the tissue. It was first illustrated accurately in the engravings of Crisóstomo Martinez. Marrow Bone marrow, also known as myeloid tissue in red bone marrow, can be found in almost any bone that holds cancellous tissue. In newborns, all such bones are filled exclusively with red marrow or hematopoietic marrow, but as the child ages the hematopoietic fraction decreases in quantity and the fatty/yellow fraction called marrow adipose tissue (MAT) increases in quantity. In adults, red marrow is mostly found in the bone marrow of the femur, the ribs, the vertebrae and pelvic bones. Cells Bone is a metabolically active tissue composed of several types of cells. These cells include osteoblasts, which are involved in the creation and mineralization of bone tissue, osteocytes, and osteoclasts, which are involved in the reabsorption of bone tissue. Osteoblasts and osteocytes are derived from osteoprogenitor cells, but osteoclasts are derived from the same cells that differentiate to form macrophages and monocytes. Within the marrow of the bone there are also hematopoietic stem cells. These cells give rise to other cells, including white blood cells, red blood cells, and platelets. Osteoblast Osteoblasts are mononucleate bone-forming cells. They are located on the surface of osteon seams and make a protein mixture known as osteoid, which mineralizes to become bone. The osteoid seam is a narrow region of newly formed organic matrix, not yet mineralized, located on the surface of a bone. Osteoid is primarily composed of Type I collagen. Osteoblasts also manufacture hormones, such as prostaglandins, to act on the bone itself. The osteoblast creates and repairs new bone by actually building around itself. First, the osteoblast puts up collagen fibers. These collagen fibers are used as a framework for the osteoblasts' work. The osteoblast then deposits calcium phosphate, which is hardened by hydroxide and bicarbonate ions. The brand-new bone created by the osteoblast is called osteoid. Once the osteoblast finishes working, it becomes trapped inside the bone as it hardens. When the osteoblast becomes trapped, it becomes known as an osteocyte. Other osteoblasts remain on the top of the new bone and are used to protect the underlying bone; these become known as lining cells.
Osteocyte Osteocytes are cells of mesenchymal origin and originate from osteoblasts that have migrated into, and become trapped and surrounded by, bone matrix that they themselves produced. The spaces that the cell bodies of osteocytes occupy within the mineralized collagen type I matrix are known as lacunae, while the osteocyte cell processes occupy channels called canaliculi. The many processes of osteocytes reach out to meet osteoblasts, osteoclasts, bone lining cells, and other osteocytes, probably for the purposes of communication. Osteocytes remain in contact with other osteocytes in the bone through gap junctions—coupled cell processes which pass through the canalicular channels. Osteoclast Osteoclasts are very large multinucleate cells that are responsible for the breakdown of bones by the process of bone resorption. New bone is then formed by the osteoblasts. Bone is constantly remodeled, resorbed by osteoclasts and created by osteoblasts. Osteoclasts are large cells with multiple nuclei located on bone surfaces in what are called Howship's lacunae (or resorption pits). These lacunae are the result of surrounding bone tissue that has been reabsorbed. Because the osteoclasts are derived from a monocyte stem-cell lineage, they are equipped with phagocytic-like mechanisms similar to circulating macrophages. Osteoclasts mature and/or migrate to discrete bone surfaces. Upon arrival, active enzymes, such as tartrate-resistant acid phosphatase, are secreted against the mineral substrate. The reabsorption of bone by osteoclasts also plays a role in calcium homeostasis. Composition Bones consist of living cells (osteoblasts and osteocytes) embedded in a mineralized organic matrix. The primary inorganic component of human bone is hydroxyapatite, the dominant bone mineral, having the nominal composition Ca₁₀(PO₄)₆(OH)₂. The organic component of this matrix consists mainly of type I collagen ("organic" referring to materials produced by the human body), while the inorganic component, alongside the dominant hydroxyapatite phase, includes other compounds of calcium and phosphate, including salts. Approximately 30% of the acellular component of bone consists of organic matter, while roughly 70% by mass is attributed to the inorganic phase. The collagen fibers give bone its tensile strength, and the interspersed crystals of hydroxyapatite give bone its compressive strength. These effects are synergistic. The exact composition of the matrix may change over time due to nutrition and biomineralization, with the ratio of calcium to phosphate varying between 1.3 and 2.0 (per weight), and trace minerals such as magnesium, sodium, potassium and carbonate also being found. Type I collagen composes 90–95% of the organic matrix, with the remainder of the matrix being a homogeneous liquid called ground substance, consisting of proteoglycans such as hyaluronic acid and chondroitin sulfate, as well as non-collagenous proteins such as osteocalcin, osteopontin or bone sialoprotein. Collagen consists of strands of repeating units, which give bone tensile strength, and are arranged in an overlapping fashion that prevents shear stress. The function of ground substance is not fully known. Two types of bone can be identified microscopically according to the arrangement of collagen: woven and lamellar. Woven bone (also known as fibrous bone) is characterized by a haphazard organization of collagen fibers and is mechanically weak.
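As a rough check on the nominal stoichiometry quoted in the composition paragraph above (a back-of-envelope calculation using standard atomic masses, Ca ≈ 40.08 and P ≈ 30.97, rather than a measured value), the ideal hydroxyapatite formula gives

\[
\left(\frac{\mathrm{Ca}}{\mathrm{P}}\right)_{\text{molar}} = \frac{10}{6} \approx 1.67,
\qquad
\left(\frac{\mathrm{Ca}}{\mathrm{P}}\right)_{\text{weight}} \approx \frac{10 \times 40.08}{6 \times 30.97} \approx 2.16.
\]

Biological apatite is typically calcium-deficient, which is consistent with the measured weight ratios of 1.3 to 2.0 cited above lying below this ideal value.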
Lamellar bone has a regular parallel alignment of collagen into sheets ("lamellae") and is mechanically strong. Woven bone is produced when osteoblasts produce osteoid rapidly, which occurs initially in all fetal bones, but is later replaced by more resilient lamellar bone. In adults, woven bone is created after fractures or in Paget's disease. Woven bone is weaker, with a smaller number of randomly oriented collagen fibers, but forms quickly; it is for this appearance of the fibrous matrix that the bone is termed woven. It is soon replaced by lamellar bone, which is highly organized in concentric sheets with a much lower proportion of osteocytes to surrounding tissue. Lamellar bone, which makes its first appearance in humans in the fetus during the third trimester, is stronger and filled with many collagen fibers parallel to other fibers in the same layer (these parallel columns are called osteons). In cross-section, the fibers run in opposite directions in alternating layers, much like in plywood, assisting in the bone's ability to resist torsion forces. After a fracture, woven bone forms initially and is gradually replaced by lamellar bone during a process known as "bony substitution." Compared to woven bone, lamellar bone formation takes place more slowly. The orderly deposition of collagen fibers restricts the formation of osteoid to about 1 to 2 µm per day. Lamellar bone also requires a relatively flat surface to lay the collagen fibers in parallel or concentric layers. Deposition The extracellular matrix of bone is laid down by osteoblasts, which secrete both collagen and ground substance. These synthesise collagen within the cell and then secrete collagen fibrils. The collagen fibers rapidly polymerise to form collagen strands. At this stage, they are not yet mineralised, and are called "osteoid". Calcium and phosphate then precipitate on the surface of these strands, becoming crystals of hydroxyapatite within days to weeks. In order to mineralise the bone, the osteoblasts secrete vesicles containing alkaline phosphatase. This cleaves phosphate groups and provides foci for calcium and phosphate deposition. The vesicles then rupture and act as a centre for crystals to grow on. More particularly, bone mineral is formed from globular and plate structures. Types There are five types of bones in the human body: long, short, flat, irregular, and sesamoid. Long bones are characterized by a shaft, the diaphysis, that is much longer than its width, and by an epiphysis, a rounded head at each end of the shaft. They are made up mostly of compact bone, with lesser amounts of marrow, located within the medullary cavity, and areas of spongy, cancellous bone at the ends of the bones. Most bones of the limbs, including those of the fingers and toes, are long bones. The exceptions are the eight carpal bones of the wrist, the seven articulating tarsal bones of the ankle and the sesamoid bone of the kneecap. Long bones such as the clavicle that have a differently shaped shaft or ends are also called modified long bones. Short bones are roughly cube-shaped, and have only a thin layer of compact bone surrounding a spongy interior. The bones of the wrist and ankle are short bones. Flat bones are thin and generally curved, with two parallel layers of compact bone sandwiching a layer of spongy bone. Most of the bones of the skull are flat bones, as is the sternum. Sesamoid bones are bones embedded in tendons.
Because they hold the tendon further away from the joint, they increase the angle of the tendon and thus the leverage of the muscle. Examples of sesamoid bones are the patella and the pisiform. Irregular bones do not fit into the above categories. They consist of thin layers of compact bone surrounding a spongy interior. As implied by the name, their shapes are irregular and complicated. Often this irregular shape is due to their many centers of ossification or because they contain bony sinuses. The bones of the spine, pelvis, and some bones of the skull are irregular bones. Examples include the ethmoid and sphenoid bones. Terminology Anatomists use a number of anatomical terms to describe the appearance, shape and function of bones. Other anatomical terms are also used to describe the location of bones. Like other anatomical terms, many of these derive from Latin and Greek. Some anatomists still use Latin to refer to bones. The term "osseous", and the prefix "osteo-", referring to things related to bone, are still used commonly today. Some examples of terms used to describe bones include the term "foramen" to describe a hole through which something passes, and a "canal" or "meatus" to describe a tunnel-like structure. A protrusion from a bone can be described by a number of terms, including a "condyle", "crest", "spine", "eminence", "tubercle" or "tuberosity", depending on the protrusion's shape and location. In general, long bones are said to have a "head", "neck", and "body". When two bones join together, they are said to "articulate". If the two bones have a fibrous connection and are relatively immobile, then the joint
is called a "suture". Development The formation of bone is called ossification. During the fetal stage of development this occurs by two processes: intramembranous ossification and endochondral ossification. Intramembranous ossification involves the formation of bone from connective tissue, whereas endochondral ossification involves the formation of bone from cartilage. Intramembranous ossification mainly occurs during formation of the flat bones of the skull, as well as the mandible, maxilla, and clavicles; the bone is formed from connective tissue such as mesenchyme tissue rather than from cartilage. The process includes the development of the ossification center, calcification, trabeculae formation, and the development of the periosteum. Endochondral ossification occurs in long bones and most other bones in the body; it involves the development of bone from cartilage. This process includes the development of a cartilage model, its growth and development, development of the primary and secondary ossification centers, and the formation of articular cartilage and the epiphyseal plates. Endochondral ossification begins with points in the cartilage called "primary ossification centers." They mostly appear during fetal development, though a few short bones begin their primary ossification after birth. They are responsible for the formation of the diaphyses of long bones, short bones and certain parts of irregular bones. Secondary ossification occurs after birth and forms the epiphyses of long bones and the extremities of irregular and flat bones. The diaphysis and both epiphyses of a long bone are separated by a growing zone of cartilage (the epiphyseal plate). At skeletal maturity (18 to 25 years of age), all of the cartilage is replaced by bone, fusing the diaphysis and both epiphyses together (epiphyseal closure). At birth, in the upper limbs, only the diaphyses of the long bones and scapula are ossified. The epiphyses, carpal bones, coracoid process, medial border of the scapula, and acromion are still cartilaginous. The following steps are followed in the conversion of cartilage to bone: Zone of reserve cartilage.
This region, farthest from the marrow cavity, consists of typical hyaline cartilage that as yet shows no sign of transforming into bone. Zone of cell proliferation. A little closer to the marrow cavity, chondrocytes multiply and arrange themselves into longitudinal columns of flattened lacunae. Zone of cell hypertrophy. Next, the chondrocytes cease to divide and begin to hypertrophy (enlarge), much like they do in the primary ossification center of the fetus. The walls of the matrix between lacunae become very thin. Zone of calcification. Minerals are deposited in the matrix between the columns of lacunae and calcify the cartilage. These are not the permanent mineral deposits of bone, but only a temporary support for the cartilage that would otherwise soon be weakened by the breakdown of the enlarged lacunae. Zone of bone deposition. Within each column, the walls between the lacunae break down and the chondrocytes die. This converts each column into a longitudinal channel, which is immediately invaded by blood vessels and marrow from the marrow cavity. Osteoblasts line up along the walls of these channels and begin depositing concentric lamellae of matrix, while osteoclasts dissolve the temporarily calcified cartilage. Functions Bones have a variety of functions: Mechanical Bones serve a variety of mechanical functions. Together the bones in the body form the skeleton. They provide a frame to keep the body supported, and an attachment point for skeletal muscles, tendons, ligaments and joints, which function together to generate and transfer forces so that individual body parts or the whole body can be manipulated in three-dimensional space (the interaction between bone and muscle is studied in biomechanics). Bones protect internal organs, such as the skull protecting the brain or the ribs protecting the heart and lungs. Because of the way that bone is formed, bone has a high compressive strength, a poorer tensile strength of 104–121 MPa, and a very low shear stress strength (51.6 MPa); a rough conversion of these figures into static loads is sketched below. This means that bone resists pushing (compressional) stress well, resists pulling (tensional) stress less well, and resists shear stress (such as that due to torsional loads) only poorly. While bone is essentially brittle, bone does have a significant degree of elasticity, contributed chiefly by collagen. Mechanically, bones also have a special role in hearing. The ossicles are three small bones in the middle ear which are involved in sound transduction. Synthetic The cancellous part of bones contains bone marrow. Bone marrow produces blood cells in a process called hematopoiesis. Blood cells that are created in bone marrow include red blood cells, platelets and white blood cells. Progenitor cells such as the hematopoietic stem cell divide in a process called mitosis to produce precursor cells. These include precursors which eventually give rise to white blood cells, and erythroblasts which give rise to red blood cells. Unlike red and white blood cells, which are created by mitosis, platelets are shed from very large cells called megakaryocytes. This process of progressive differentiation occurs within the bone marrow. After the cells have matured, they enter the circulation. Every day, over 2.5 billion red blood cells and platelets, and 50–100 billion granulocytes are produced in this way. As well as creating cells, bone marrow is also one of the major sites where defective or aged red blood cells are destroyed.
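For a sense of scale, a strength quoted in MPa can be read as newtons per square millimetre of cross-section. The short sketch below converts the tensile and shear figures quoted above into rough static failure loads; the 500 mm2 cross-sectional area is a hypothetical round number used only for illustration, not an anatomical measurement.

    AREA_MM2 = 500.0  # hypothetical cross-sectional area, for illustration only

    strengths_mpa = {
        "tensile (low end)": 104.0,   # MPa, from the text
        "tensile (high end)": 121.0,  # MPa, from the text
        "shear": 51.6,                # MPa, from the text
    }

    for mode, mpa in strengths_mpa.items():
        load_n = mpa * AREA_MM2       # 1 MPa = 1 N/mm^2, so N = MPa * mm^2
        print(f"{mode}: about {load_n / 1000:.0f} kN before failure")

On these (assumed) numbers, even the weakest mode sustains tens of kilonewtons over such a cross-section, which is why shear and torsion, not simple loading, dominate many fracture mechanisms.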
Metabolic Mineral storage – bones act as reserves of minerals important for the body, most notably calcium and phosphorus. Depending on the species, age, and type of bone, bone cells make up to 15 percent of the bone. Growth factor storage – mineralized bone matrix stores important growth factors such as insulin-like growth factors, transforming growth factor, bone morphogenetic proteins and others. Fat storage – marrow adipose tissue (MAT) acts as a storage reserve of fatty acids. Acid-base balance – bone buffers the blood against excessive pH changes by absorbing or releasing alkaline salts. Detoxification – bone tissues can also store heavy metals and other foreign elements, removing them from the blood and reducing their effects on other tissues. These can later be gradually released for excretion. Endocrine organ – bone controls phosphate metabolism by releasing fibroblast growth factor 23 (FGF-23), which acts on the kidneys to reduce phosphate reabsorption. Bone cells also release a hormone called osteocalcin, which contributes to the regulation of blood sugar (glucose) and fat deposition. Osteocalcin increases both insulin secretion and insulin sensitivity.
Bretwalda is an Old English term applied to some of the rulers of the Anglo-Saxon kingdoms from the 5th century onwards who had achieved overlordship of some or all of the other Anglo-Saxon kingdoms. It is unclear whether the word dates back to the 5th century and was used by the kings themselves or whether it is a later, 9th-century invention. The term bretwalda also appears in a 10th-century charter of Æthelstan. The literal meaning of the word is disputed and may translate to either 'wide-ruler' or 'Britain-ruler'. The rulers of Mercia were generally the most powerful of the Anglo-Saxon kings from the mid 7th century to the early 9th century but are not accorded the title of bretwalda by the Chronicle, which had an anti-Mercian bias. The Annals of Wales continued to recognise the kings of Northumbria as "Kings of the Saxons" until the death of Osred I of Northumbria in 716. Bretwaldas Listed by Bede and the Anglo-Saxon Chronicle: Ælle of Sussex (488–514), Ceawlin of Wessex (560–592, died 593), Æthelberht of Kent (590–616), Rædwald of East Anglia (c. 600 – c. 624), Edwin of Deira (616–633), Oswald of Northumbria (633–642), Oswiu of Northumbria (642–670). Mercian rulers with similar or greater authority: Penda of Mercia (628/633–655), Wulfhere of Mercia (658–675), Æthelred of Mercia (675–704, died 716), Æthelbald of Mercia (716–757), Offa of Mercia (757–796), Cœnwulf of Mercia (796–821). Listed only by the Anglo-Saxon Chronicle: Egbert of Wessex (829–839), Alfred of Wessex (871–899). Other claimants: Æthelstan of Wessex (927–939). Etymology The first syllable of the term bretwalda may be related to Briton or Britain. The second element is taken to mean 'ruler' or 'sovereign', though it is more literally 'wielder'. Thus, this interpretation would mean 'sovereign of Britain' or 'wielder of Britain'. The word may be a compound containing the Old English adjective brytten (from the verb breotan, meaning 'to break' or 'to disperse'), an element also found in the terms bryten rice ('kingdom'), bryten-grund ('the wide expanse of the earth') and bryten cyning ('king whose authority was widely extended'). Though the origin is ambiguous, the draughtsman of the charter issued by Æthelstan used the term in a way that can only mean 'wide-ruler'. The latter etymology was first suggested by John Mitchell Kemble, who pointed out that "of six manuscripts in which this passage occurs, one only reads Bretwalda: of the remaining five, four have Bryten-walda or -wealda, and one Breten-anweald, which is precisely synonymous with Brytenwealda"; that Æthelstan was called brytenwealda ealles ðyses ealondes, which Kemble translates as 'ruler of all these islands'; and that bryten- is a common prefix to words meaning 'wide or general dispersion', and that the similarity to the word bretwealh ('Briton') is "merely accidental". Contemporary use The first recorded use of the term Bretwalda comes from a West Saxon chronicle of the late 9th century that applied the term to Ecgberht, who ruled Wessex from 802 to 839. The chronicler also wrote down the names of seven kings that Bede listed in his Historia ecclesiastica gentis Anglorum in 731. All subsequent manuscripts of the Chronicle use the term Brytenwalda, which may have represented the original term or derived from a common error. There is no evidence that the term was a title that had any practical use, with implications of formal rights, powers and office, or even that it had any existence before the 9th century. Bede wrote in Latin and never used the term, and his list of kings holding imperium should be treated with caution, not least in that he overlooks kings such as Penda of Mercia, who clearly held some kind of dominance during his reign. Similarly, in his list of bretwaldas, the West Saxon chronicler ignored such Mercian kings as Offa. The use of the term Bretwalda was the attempt by a West Saxon chronicler to make some claim of West Saxon kings to the whole of Great Britain. The concept of the overlordship of the whole of Britain was at least recognised in the period, whatever was meant by the term. Quite possibly it was a survival of a Roman concept of "Britain": it is significant that, while the hyperbolic inscriptions on coins and titles in charters often included the title rex Britanniae, when England was unified the title used was rex Angulsaxonum ('king of the Anglo-Saxons'). Modern interpretation by historians For some time, the existence of the word bretwalda in the Anglo-Saxon Chronicle, which was based in part on the list given by Bede in his Historia Ecclesiastica, led historians to think that there was perhaps a "title" held by Anglo-Saxon overlords. This was particularly attractive as it would lay the foundations for the establishment of an English monarchy. The 20th-century historian Frank Stenton said of the Anglo-Saxon chronicler that "his inaccuracy is more than compensated by his preservation of the English title applied to these outstanding kings". He argued that the term bretwalda "falls into line with the other evidence which points to the Germanic origin of the earliest English institutions". Over the later 20th century, this assumption was increasingly challenged. Patrick Wormald interpreted it as "less an objectively realized office than a subjectively perceived status" and emphasised the partiality of its usage in favour of Southumbrian rulers. In 1991, Steven Fanning argued that "it is unlikely that the term ever existed as a title or was in common usage in Anglo-Saxon England". The fact that Bede never mentioned a special title for the kings in his list implies that he was unaware of one. In 1995, Simon Keynes observed that "if Bede's concept of the Southumbrian overlord, and the chronicler's concept of the 'Bretwalda', are to be regarded as artificial constructs, which have no validity outside the context of the literary works in which they appear, we are released from the assumptions about political development which they seem to involve... we might ask whether kings in the eighth and ninth centuries were quite so obsessed with the establishment of a pan-Southumbrian state". Modern interpretations view the concept of bretwalda overlordship as complex and an important indicator of how a 9th-century chronicler interpreted history and attempted to insert the increasingly powerful Saxon kings into that history. Overlordship A complex array of dominance and subservience existed during the Anglo-Saxon period. A king who used charters to grant land in another kingdom indicated such a relationship. If a king held sway over a large kingdom, such as when the Mercians dominated the East Anglians, the relationship would have been more equal than in the case of the Mercian dominance of the Hwicce, which was a comparatively small kingdom. Mercia was arguably the most powerful Anglo-Saxon kingdom for much of the late 7th through 8th centuries, though Mercian kings are missing from the two main "lists". For Bede, Mercia was a traditional enemy of his native Northumbria, and he regarded powerful kings such as the pagan Penda as standing in the way of the Christian conversion of the Anglo-Saxons. Bede omits them from his list, even though it is evident that Penda held a considerable degree of power. Similarly, powerful Mercian kings such as Offa are missed out of the West Saxon Anglo-Saxon Chronicle.
Take two identical sheets of paper, lay one flat on a table, and crumple the other and place it on top of the first so that the crumpled sheet does not reach outside the flat one. There will then be at least one point of the crumpled sheet that lies directly above its corresponding point (i.e. the point with the same coordinates) of the flat sheet. This is a consequence of the n = 2 case of Brouwer's theorem applied to the continuous map that assigns to the coordinates of every point of the crumpled sheet the coordinates of the point of the flat sheet immediately beneath it. Take an ordinary map of a country, and suppose that that map is laid out on a table inside that country. There will always be a "You are Here" point on the map which represents that same point in the country. In three dimensions a consequence of the Brouwer fixed-point theorem is that, no matter how much you stir a cocktail in a glass (or think of a milkshake), when the liquid has come to rest, some point in the liquid will end up in exactly the same place in the glass as before you took any action, assuming that the final position of each point is a continuous function of its original position, that the liquid after stirring is contained within the space originally taken up by it, and that the glass (and stirred surface shape) maintain a convex volume. Ordering a cocktail shaken, not stirred defeats the convexity condition ("shaking" being defined as a dynamic series of non-convex inertial containment states in the vacant headspace under a lid). In that case, the theorem would not apply, and thus all points of the liquid disposition are potentially displaced from the original state. Intuitive approach Explanations attributed to Brouwer The theorem is supposed to have originated from Brouwer's observation of a cup of coffee. If one stirs to dissolve a lump of sugar, it appears there is always a point without motion. He drew the conclusion that at any moment, there is a point on the surface that is not moving. The fixed point is not necessarily the point that seems to be motionless, since the centre of the turbulence moves a little bit. The result is not intuitive, since the original fixed point may become mobile when another fixed point appears. Brouwer is said to have added: "I can formulate this splendid result different, I take a horizontal sheet, and another identical one which I crumple, flatten and place on the other. Then a point of the crumpled sheet is in the same place as on the other sheet." Brouwer "flattens" his sheet as with a flat iron, without removing the folds and wrinkles. Unlike the coffee cup example, the crumpled paper example also demonstrates that more than one fixed point may exist. This distinguishes Brouwer's result from other fixed-point theorems, such as Stefan Banach's, that guarantee uniqueness. One-dimensional case In one dimension, the result is intuitive and easy to prove. The continuous function f is defined on a closed interval [a, b] and takes values in the same interval. Saying that this function has a fixed point amounts to saying that its graph intersects that of the function defined on the same interval [a, b] which maps x to x. Intuitively, any continuous line from the left edge of the square to the right edge must necessarily intersect the diagonal. To prove this, consider the function g which maps x to f(x) − x. Then g(a) = f(a) − a ≥ 0 and g(b) = f(b) − b ≤ 0. By the intermediate value theorem, g has a zero in [a, b]; this zero is a fixed point.
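The intermediate value argument above is effectively an algorithm: bisecting on the sign of g brackets a fixed point to any desired accuracy. The following minimal sketch illustrates this; the particular map f(x) = cos x on [0, 1] is an arbitrary example chosen for illustration, not one from the text.

    import math

    def fixed_point(f, a, b, tol=1e-12):
        # Bisection on g(x) = f(x) - x. Since f maps [a, b] into itself,
        # g(a) >= 0 and g(b) <= 0, so a sign change stays bracketed.
        lo, hi = a, b
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(mid) - mid >= 0:
                lo = mid   # g(mid) >= 0: keep the sign change in [mid, hi]
            else:
                hi = mid   # g(mid) < 0: keep the sign change in [lo, mid]
        return (lo + hi) / 2

    x = fixed_point(math.cos, 0.0, 1.0)   # cos maps [0, 1] into [0.54, 1]
    print(x, math.cos(x))                 # both approx. 0.7390851332151607

Note that bisection works only in one dimension; as discussed later in the article, approximating fixed points in higher dimensions requires genuinely different methods.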
Brouwer is said to have expressed this as follows: "Instead of examining a surface, we will prove the theorem about a piece of string. Let us begin with the string in an unfolded state, then refold it. Let us flatten the refolded string. Again a point of the string has not changed its position with respect to its original position on the unfolded string." History The Brouwer fixed point theorem was one of the early achievements of algebraic topology, and is the basis of more general fixed point theorems which are important in functional analysis. The case n = 3 was first proved by Piers Bohl in 1904 (published in Journal für die reine und angewandte Mathematik). It was later proved by L. E. J. Brouwer in 1909. Jacques Hadamard proved the general case in 1910, and Brouwer found a different proof in the same year. Since these early proofs were all non-constructive indirect proofs, they ran contrary to Brouwer's intuitionist ideals. Although the existence of a fixed point is not constructive in the sense of constructivism in mathematics, methods to approximate fixed points guaranteed by Brouwer's theorem are now known. Prehistory To understand the prehistory of Brouwer's fixed point theorem one needs to pass through differential equations. At the end of the 19th century, the old problem of the stability of the solar system returned to the focus of the mathematical community. Its solution required new methods. As noted by Henri Poincaré, who worked on the three-body problem, there is no hope of finding an exact solution: "Nothing is more proper to give us an idea of the hardness of the three-body problem, and generally of all problems of Dynamics where there is no uniform integral and the Bohlin series diverge." He also noted that the search for an approximate solution is no more efficient: "the more we seek to obtain precise approximations, the more the result will diverge towards an increasing imprecision". He studied a question analogous to that of the surface movement in a cup of coffee. What can we say, in general, about the trajectories on a surface animated by a constant flow? Poincaré discovered that the answer can be found in what we now call the topological properties of the area containing the trajectory. If this area is compact, i.e. both closed and bounded, then the trajectory either becomes stationary, or it approaches a limit cycle. Poincaré went further; if the area is of the same kind as a disk, as is the case for the cup of coffee, there must necessarily be a fixed point. This fixed point is invariant under all functions which associate to each point of the original surface its position after a short time interval t. If the area is a circular band, or if it is not closed, then this is not necessarily the case. To understand differential equations better, a new branch of mathematics was born. Poincaré called it analysis situs. The French Encyclopædia Universalis defines it as the branch which "treats the properties of an object that are invariant if it is deformed in any continuous way, without tearing". In 1886, Poincaré proved a result that is equivalent to Brouwer's fixed-point theorem, although the connection with the subject of this article was not yet apparent. A little later, he developed one of the fundamental tools for better understanding analysis situs, now known as the fundamental group or sometimes the Poincaré group. This method can be used for a very compact proof of the theorem under discussion.
Poincaré's method was analogous to that of Émile Picard, a contemporary mathematician who generalized the Cauchy–Lipschitz theorem. Picard's approach is based on a result that would later be formalised by another fixed-point theorem, named after Banach. Instead of the topological properties of the domain, this theorem uses the fact that the function in question is a contraction. First proofs At the dawn of the 20th century, the interest in analysis situs did not go unnoticed. However, the necessity of a theorem equivalent to the one discussed in this article was not yet evident. Piers Bohl, a Latvian mathematician, applied topological methods to the study of differential equations. In 1904 he proved the three-dimensional case of our theorem, but his publication was not noticed. It was Brouwer, finally, who gave the theorem its first patent of nobility. His goals were different from those of Poincaré. Brouwer was inspired by the foundations of mathematics, especially mathematical logic and topology. His initial interest lay in an attempt to solve Hilbert's fifth problem. In 1909, during a voyage to Paris, he met Henri Poincaré, Jacques Hadamard, and Émile Borel. The ensuing discussions convinced Brouwer of the importance of a better understanding of Euclidean spaces, and were the origin of a fruitful exchange of letters with Hadamard. For the next four years, he concentrated on the proof of certain great theorems on this question. In 1912 he proved the hairy ball theorem for the two-dimensional sphere, as well as the fact that every continuous map from the two-dimensional ball to itself has a fixed point. These two results in themselves were not really new. As Hadamard observed, Poincaré had shown a theorem equivalent to the hairy ball theorem. The revolutionary aspect of Brouwer's approach was his systematic use of recently developed tools such as homotopy, the underlying concept of the Poincaré group. In the following year, Hadamard generalised the theorem under discussion to an arbitrary finite dimension, but he employed different methods. Hans Freudenthal comments on the respective roles as follows: "Compared to Brouwer's revolutionary methods, those of Hadamard were very traditional, but Hadamard's participation in the birth of Brouwer's ideas resembles that of a midwife more than that of a mere spectator." Brouwer's approach yielded its fruits, and in 1910 he also found a proof that was valid for any finite dimension, as well as other key theorems such as the invariance of dimension. In the context of this work, Brouwer also generalized the Jordan curve theorem to arbitrary dimension and established the properties connected with the degree of a continuous mapping. This branch of mathematics, originally envisioned by Poincaré and developed by Brouwer, changed its name. In the 1930s, analysis situs became algebraic topology. Reception The theorem proved its worth in more than one way. During the 20th century numerous fixed-point theorems were developed, eventually giving rise to a branch of mathematics called fixed-point theory. Brouwer's theorem is probably the most important. It is also among the foundational theorems on the topology of topological manifolds and is often used to prove other important results such as the Jordan curve theorem. Besides the fixed-point theorems for more or less contracting functions, there are many that have emerged directly or indirectly from the result under discussion.
A continuous map from a closed ball of Euclidean space to its boundary cannot be the identity on the boundary. Similarly, the Borsuk–Ulam theorem says that a continuous map from the n-dimensional sphere to Rn has a pair of antipodal points that are mapped to the same point. In the finite-dimensional case, the Lefschetz fixed-point theorem provided, from 1926 onward, a method for counting fixed points. In 1930, Brouwer's fixed-point theorem was generalized to Banach spaces. This generalization is known as Schauder's fixed-point theorem, a result generalized further by S. Kakutani to multivalued functions. One also meets the theorem and its variants outside topology. It can be used to prove the Hartman-Grobman theorem, which describes the qualitative behaviour of certain differential equations near certain equilibria. Similarly, Brouwer's theorem is used for the proof of the central limit theorem. The theorem can also be found in existence proofs for the solutions of certain partial differential equations. Other areas are also touched. In game theory, John Nash used the theorem to prove that in the game of Hex there is a winning strategy for the first player. In economics, P. Bich explains that certain generalizations of the theorem show that its use is helpful for certain classical problems in game theory and generally for equilibria (Hotelling's law), financial equilibria and incomplete markets. Brouwer's celebrity is not exclusively due to his topological work. The proofs of his great topological theorems are not constructive, and Brouwer's dissatisfaction with this is partly what led him to articulate the idea of constructivity. He became the originator and zealous defender of a way of formalising mathematics that is known as intuitionism, which at the time made a stand against set theory. Brouwer disavowed his original proof of the fixed-point theorem. The first algorithm to approximate a fixed point was proposed by Herbert Scarf. A subtle aspect of Scarf's algorithm is that it finds a point that is approximately fixed by a function f, but in general cannot find a point that is close to an actual fixed point. In mathematical language, if ε is chosen to be very small, Scarf's algorithm can be used to find a point x such that f(x) is very close to x, i.e., d(f(x), x) ≤ ε. But Scarf's algorithm cannot be used to find a point x such that x is very close to a fixed point: we cannot guarantee d(x, y) ≤ ε, where y is a point with f(y) = y. Often this latter condition is what is meant by the informal phrase "approximating a fixed point" (a small numerical illustration of this distinction is given below). Proof outlines A proof using degree Brouwer's original 1911 proof relied on the notion of the degree of a continuous mapping, stemming from ideas in differential topology. Several modern accounts of the proof can be found in the literature. Let B denote the closed unit ball in Rn centered at the origin, and let f : B → B be the map in question. Suppose for simplicity that f is continuously differentiable. A regular value of f is a point p such that the Jacobian of f is non-singular at every point of the preimage of p. In particular, by the inverse function theorem, every point of the preimage of p lies in the interior of B. The degree of f at a regular value p is defined as the sum of the signs of the Jacobian determinant of f over the preimages of p under f: deg(f, p) = Σ sign det Df(x), the sum running over the points x in the preimage of p. The degree is, roughly speaking, the number of "sheets" of the preimage lying over a small open set around p, with sheets counted oppositely if they are oppositely oriented. This is thus a generalization of the winding number to higher dimensions.
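To illustrate the distinction drawn above for Scarf's algorithm, between a point that is approximately fixed and a point that is near a true fixed point, consider the following toy example. This is only an illustration of the two inequalities, not Scarf's algorithm itself, and the map is an ad-hoc choice, not one from the text.

    delta = 1e-3

    def f(x):
        # Continuous self-map of [0, 1] whose ONLY true fixed point is x = 1,
        # yet |f(x) - x| <= delta holds for every x in [0, 1].
        return min(1.0, x + delta)

    x = 0.2                          # far from the fixed point at 1.0
    print(abs(f(x) - x) <= delta)    # True: x is an approximate fixed point
    print(abs(x - 1.0))              # 0.8: x is nowhere near the fixed point

Every point of [0, 1] passes the residual test |f(x) − x| ≤ δ, yet almost all of them are far from the unique fixed point, which is exactly why the residual condition is the weaker of the two.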
A proof using the hairy ball theorem The hairy ball theorem states that on the unit sphere S in an odd-dimensional Euclidean space, there is no nowhere-vanishing continuous tangent vector field w on S. Suppose first that w is continuously differentiable; by scaling it may be assumed to be a unit tangent vector field, and it can be extended radially to a small spherical shell A around S. For t sufficiently small, a routine computation shows that the mapping ft(x) = x + t w(x) is a contraction mapping on A and that the volume of its image is a polynomial in t. On the other hand, as a contraction mapping, ft must restrict to a homeomorphism of S onto (1 + t²)½ S and of A onto (1 + t²)½ A. This gives a contradiction, because, if the dimension n of the Euclidean space is odd, (1 + t²)n/2 is not a polynomial. If w is only a continuous unit tangent vector field on S, then by the Weierstrass approximation theorem it can be uniformly approximated by a polynomial map u of A into Euclidean space. The orthogonal projection onto the tangent space is given by v(x) = u(x) − (u(x) ⋅ x) x. Thus v is polynomial and nowhere vanishing on A; by construction v/‖v‖ is a smooth unit tangent vector field on S, a contradiction. The continuous version of the hairy ball theorem can now be used to prove the Brouwer fixed point theorem. First suppose that n is even. If there were a fixed-point-free continuous self-mapping f of the closed unit ball B of the n-dimensional Euclidean space V, set w(x) = (1 − x ⋅ f(x)) x − (1 − x ⋅ x) f(x). Since f has no fixed points, it follows that, for x in the interior of B, the vector w(x) is non-zero; and for x in S, the scalar product x ⋅ w(x) = 1 − x ⋅ f(x) is strictly positive. From the original n-dimensional Euclidean space V, construct a new auxiliary (n + 1)-dimensional space W = V × R, with coordinates y = (x, t). Set X(y) = (t w(x), −x ⋅ w(x)). By construction X is a continuous vector field on the unit sphere of W, satisfying the tangency condition y ⋅ X(y) = 0. Moreover, X(y) is nowhere vanishing (because, if x has norm 1, then t = 0 and x ⋅ w(x) is non-zero; while if x has norm strictly less than 1, then t and w(x) are both non-zero). Since n + 1 is odd, this contradicts the hairy ball theorem and proves the fixed point theorem when n is even. For n odd, one can apply the fixed point theorem to the closed unit ball of the (n + 1)-dimensional Euclidean space and the mapping F(x, y) = (f(x), 0); a fixed point of F yields a fixed point of f. The advantage of this proof is that it uses only elementary techniques; more general results like the Borsuk-Ulam theorem require tools from algebraic topology. A proof using homology or cohomology The proof uses the observation that the boundary of the n-disk Dn is Sn−1, the (n − 1)-sphere. Suppose, for contradiction, that a continuous function f : Dn → Dn has no fixed point. This means that, for every point x in Dn, the points x and f(x) are distinct. Because they are distinct, for every point x in Dn, we can construct a unique ray from f(x) to x and follow the ray until it intersects the boundary Sn−1. By calling this intersection point F(x), we define a function F : Dn → Sn−1 sending each point in the disk to its corresponding intersection point on the boundary. As a special case, whenever x itself is on the boundary, then the intersection point F(x) must be x. Consequently, F is a special type of continuous function known as a retraction: every point of the codomain (in this case Sn−1) is a fixed point of F. Intuitively it seems unlikely that there could be a retraction of Dn onto Sn−1, and in the case n = 1, the impossibility is more basic, because S0 (i.e., the endpoints of the closed interval D1) is not even connected. The case n = 2 is less obvious, but can be proven by using basic arguments involving the fundamental groups of the respective spaces: the retraction would induce a surjective group homomorphism from the fundamental group of D2 to that of S1, but the latter group is isomorphic to Z while the first group is trivial, so this is impossible. The case n = 2 can also be proven by contradiction based on a theorem about non-vanishing vector fields. For n > 2, however, proving the impossibility of the retraction is more difficult.
One way is to make use of homology groups: the homology Hn−1(Dn) is trivial, while Hn−1(Sn−1) is infinite cyclic. This shows that the retraction is impossible, because again the retraction would induce an injective group homomorphism from the latter to the former group. The impossibility of a retraction can also be shown using the de Rham cohomology of open subsets of Euclidean space En. For n ≥ 2, the de Rham cohomology of U = En − {0} is one-dimensional in degrees 0 and n − 1, and vanishes otherwise. If a retraction existed, then U would have to be contractible and its de Rham cohomology in degree n − 1 would have to vanish, a contradiction. A proof using Stokes' theorem As in the proof of Brouwer's fixed-point theorem for continuous maps using homology, it is reduced to proving that there is no continuous retraction r from the ball B onto its boundary ∂B. In that case it can be assumed that r is smooth, since it can be approximated using the Weierstrass approximation theorem or by convolving with non-negative smooth bump functions of sufficiently small support and integral one (i.e. mollifying). If ω is a volume form on the boundary, then by Stokes' theorem 0 < ∫∂B ω = ∫∂B r*(ω) = ∫B d(r*(ω)) = ∫B r*(dω) = 0 (using that r is the identity on ∂B and that dω = 0, ω being a form of top degree on ∂B), giving a contradiction. More generally, this shows that there is no smooth retraction from any non-empty smooth oriented compact manifold onto its boundary. The proof using Stokes' theorem is closely related to the proof using homology, because the form ω generates the de Rham cohomology group Hn−1(∂B), which is isomorphic to the homology group Hn−1(∂B) by de Rham's theorem. A combinatorial proof The BFPT can be proved using Sperner's lemma. We now give an outline of the proof for the special case in which f is a function from the standard n-simplex Δn to itself, where Δn = {P in Rn+1 : P1 + … + Pn+1 = 1 and Pi ≥ 0 for all i}. For every point P in Δn, its image f(P) also lies in Δn. Hence the sums of their coordinates are equal: P1 + … + Pn+1 = 1 = f(P)1 + … + f(P)n+1. Hence, by the pigeonhole principle, for every P there must be an index i such that the i-th coordinate of P is greater than or equal to the i-th coordinate of its image under f: Pi ≥ f(P)i. Moreover, if P lies on a k-dimensional sub-face of Δn, then by the same argument the index i can be selected from among the coordinates which are not zero on this sub-face. We now use this fact to construct a Sperner coloring: for every triangulation of Δn, the color of every vertex P is an index i such that Pi ≥ f(P)i, selected as above. By construction, this is a Sperner coloring. Hence, by Sperner's lemma, there is an n-dimensional simplex whose vertices are colored with the entire set of available colors. Because f is continuous, this simplex can be made arbitrarily small by choosing an arbitrarily fine triangulation. Hence, there must be a point P which satisfies the labeling condition in all coordinates: Pi ≥ f(P)i for all i. Because the sums of the coordinates of P and f(P) must be equal, all these inequalities must actually be equalities. But this means that f(P) = P. That is, P is a fixed point of f. A proof by Hirsch There is also a quick proof, by Morris Hirsch, based on the impossibility of a differentiable retraction. The indirect proof starts by noting that the map f can be approximated by a smooth map retaining the property of not fixing a point; this can be done by using the Weierstrass approximation theorem or by convolving with smooth bump functions. One then defines a retraction as above, which must now be differentiable. Such a retraction must have a non-singular value, by Sard's theorem, which is also non-singular for the restriction to the boundary (which is just the identity). Thus the inverse image would be a 1-manifold with boundary.
The boundary would have to contain at least two end points, both of which would have to lie on the boundary of the original ball—which is impossible in a retraction. R. Bruce Kellogg, Tien-Yien Li, and James A. Yorke turned Hirsch's proof into a computable proof by observing that the retract is in fact defined everywhere except at the fixed points. For almost any point q on the boundary (assuming it is not a fixed point), the one manifold with boundary mentioned above does exist, and the only possibility is that it leads from q to a fixed point. It is an easy numerical task to follow such a path from q to the fixed point, so the method is essentially computable. Later authors gave a conceptually similar path-following version of the homotopy proof, which extends to a wide variety of related problems. A proof using oriented area A variation of the preceding proof does not employ Sard's theorem, and goes as follows. If r is a smooth retraction, one considers the smooth deformation gt(x) = (1 − t) x + t r(x) and the smooth function φ(t) = ∫B det Dgt(x) dx. Differentiating under the sign of the integral, it is not difficult to check that φ′(t) = 0 for all t, so φ is a constant function, which is a contradiction because φ(0) is the n-dimensional volume of the ball, while φ(1) is zero. The geometric idea is that φ(t) is the oriented area of gt(B) (that is, the Lebesgue measure of the image of the ball via gt, taking into account multiplicity and orientation), and should remain constant (as it is very clear in the one-dimensional case). On the other hand, as the parameter t passes from 0 to 1 the map gt transforms continuously from the identity map of the ball to the retraction r, which is a contradiction since the oriented area of the identity coincides with the volume of the ball, while the oriented area of r is necessarily 0, as its image is the boundary of the ball, a set of null measure. A proof using the game Hex A quite different proof given by David Gale is based on the game of Hex. The basic theorem about Hex is that no game can end in a draw. This is equivalent to the Brouwer fixed-point theorem for dimension 2. By considering n-dimensional versions of Hex, one can prove in general that Brouwer's theorem is equivalent to the determinacy theorem for Hex. A proof using the Lefschetz fixed-point theorem The Lefschetz fixed-point theorem says that if a continuous map f from a finite simplicial complex B to itself has only isolated fixed points, then the number of fixed points counted with multiplicities (which may be negative) is equal to the Lefschetz number, and in particular, if the Lefschetz number is nonzero, then f must have at least one fixed point.
Foods in which benzoic acid may be used and maximum levels for its application are controlled by local food laws. Concern has been expressed that benzoic acid and its salts may react with ascorbic acid (vitamin C) in some soft drinks, forming small quantities of carcinogenic benzene. Medicinal Benzoic acid is a constituent of Whitfield's ointment, which is used for the treatment of fungal skin diseases such as tinea, ringworm, and athlete's foot. As the principal component of gum benzoin, benzoic acid is also a major ingredient in both tincture of benzoin and Friar's balsam. Such products have a long history of use as topical antiseptics and inhalant decongestants. Benzoic acid was used as an expectorant, analgesic, and antiseptic in the early 20th century. Niche and laboratory uses In teaching laboratories, benzoic acid is a common standard for calibrating a bomb calorimeter. Biology and health effects Benzoic acid, as well as its esters, occurs naturally in many plant and animal species. Appreciable amounts are found in most berries (around 0.05%). Ripe fruits of several Vaccinium species (e.g., cranberry, V. macrocarpon; bilberry, V. myrtillus) contain as much as 0.03–0.13% free benzoic acid. Benzoic acid is also formed in apples after infection with the fungus Nectria galligena. Among animals, benzoic acid has been identified primarily in omnivorous or phytophagous species, e.g., in viscera and muscles of the rock ptarmigan (Lagopus muta) as well as in gland secretions of male muskoxen (Ovibos moschatus) or Asian bull elephants (Elephas maximus). Gum benzoin contains up to 20% of benzoic acid and 40% benzoic acid esters. In terms of its biosynthesis, benzoate is produced in plants from cinnamic acid. A pathway has been identified from phenol via 4-hydroxybenzoate. Reactions Reactions of benzoic acid can occur at either the aromatic ring or at the carboxyl group. Aromatic ring Electrophilic aromatic substitution takes place mainly in the 3-position due to the electron-withdrawing carboxylic group; i.e. benzoic acid is meta directing. Carboxyl group Reactions typical for carboxylic acids also apply to benzoic acid. Benzoate esters are the product of the acid-catalysed reaction with alcohols. Benzoic acid amides are usually prepared from benzoyl chloride. Dehydration to benzoic anhydride is induced with acetic anhydride or phosphorus pentoxide. Highly reactive acid derivatives such as acid halides are easily obtained by mixing with halogenation agents like phosphorus chlorides or thionyl chloride. Orthoesters can be obtained by the reaction of alcohols under acidic, water-free conditions with benzonitrile. Reduction to benzaldehyde and benzyl alcohol is possible using DIBAL-H, LiAlH4 or sodium borohydride. Decarboxylation to benzene may be effected by heating in quinoline in the presence of copper salts. Hunsdiecker decarboxylation can be achieved by heating the silver salt. Safety and mammalian metabolism Benzoic acid is excreted as hippuric acid. It is metabolized by butyrate-CoA ligase into an intermediate product, benzoyl-CoA, which is then metabolized by glycine N-acyltransferase into hippuric acid. Humans also metabolize toluene to benzoic acid, which is likewise excreted as hippuric acid. For humans, the World Health Organization's International Programme on Chemical Safety (IPCS) suggests a provisional tolerable intake of 5 mg/kg body weight per day. Cats have a significantly lower tolerance of benzoic acid and its salts than rats and mice. The lethal dose for cats can be as low as 300 mg/kg body weight.
The oral LD50 for rats is 3040 mg/kg; for mice it is
dry distillation of gum benzoin. Food-grade benzoic acid is now produced synthetically. Laboratory synthesis Benzoic acid is cheap and readily available, so the laboratory synthesis of benzoic acid is mainly practiced for its pedagogical value. It is a common undergraduate preparation. Benzoic acid can be purified by recrystallization from water because of its high solubility in hot water and poor solubility in cold water. The avoidance of organic solvents for the recrystallization makes this experiment particularly safe. This process usually gives a yield of around 65%. By hydrolysis Like other nitriles and amides, benzonitrile and benzamide can be hydrolyzed to benzoic acid or its conjugate base under acidic or basic conditions. From Grignard reagent Bromobenzene can be converted to benzoic acid by "carboxylation" of the intermediate phenylmagnesium bromide. This synthesis offers a convenient exercise for students to carry out a Grignard reaction, an important class of carbon–carbon bond-forming reactions in organic chemistry. Oxidation of benzyl compounds Benzyl alcohol, benzyl chloride, and virtually all benzyl derivatives are readily oxidized to benzoic acid. Uses Benzoic acid is mainly consumed in the production of phenol by oxidative decarboxylation at 300–400 °C: C6H5CO2H + ½ O2 → C6H5OH + CO2 The temperature required can be lowered to 200 °C by the addition of catalytic amounts of copper(II) salts. The phenol can be converted to cyclohexanol, which is a starting material for nylon synthesis. Precursor to plasticizers Benzoate plasticizers, such as the glycol-, diethyleneglycol-, and triethyleneglycol esters, are obtained by transesterification of methyl benzoate with the corresponding diol. These plasticizers, which are used similarly to those derived from terephthalic acid esters, represent alternatives to phthalates. Precursor to sodium benzoate and related preservatives Benzoic acid and its salts are used as food preservatives, represented by the E numbers E210, E211, E212, and E213. Benzoic acid inhibits the growth of mold, yeast, and some bacteria. It is either added directly or created from reactions with its sodium, potassium, or calcium salt. The mechanism starts with the absorption of benzoic acid into the cell. If the intracellular pH falls to 5 or lower, the anaerobic fermentation of glucose through phosphofructokinase is decreased by 95%. The efficacy of benzoic acid and benzoate is thus dependent on the pH of the food. Acidic food and beverages such as fruit juice (citric acid), sparkling drinks (carbon dioxide), soft drinks (phosphoric acid), pickles (vinegar), and other acidified foods are preserved with benzoic acid and benzoates. Typical concentrations of benzoic acid as a preservative in food are between 0.05 and 0.1%. Foods in which benzoic acid may be used and maximum levels for its application are controlled by local food laws. 
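Because only the undissociated acid is antimicrobial, the pH dependence described above can be made concrete with the Henderson–Hasselbalch relation. This is an illustrative sketch: the pKa of about 4.2 for benzoic acid is a standard value, and the sample pH values are arbitrary:

```python
# Fraction of benzoic acid present in the undissociated (antimicrobial) form,
# from the Henderson-Hasselbalch equation. pKa of benzoic acid is about 4.2.
PKA = 4.2

def undissociated_fraction(ph: float) -> float:
    """Fraction of total benzoate present as neutral benzoic acid at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (ph - PKA))

for ph in (3.0, 4.2, 5.0, 6.0, 7.0):
    print(f"pH {ph}: {undissociated_fraction(ph):.1%} undissociated")
# At pH 3 roughly 94% is in the active form; at pH 7 only about 0.2% remains,
# which is consistent with benzoate preservation working best in acidic foods.
```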
in state i is practically the probability that, if we pick a random particle from that system and check what state it is in, we will find it is in state i. This probability is equal to the number of particles in state i divided by the total number of particles in the system, that is, the fraction of particles that occupy state i: pi = Ni/N, where Ni is the number of particles in state i and N is the total number of particles in the system. We may use the Boltzmann distribution to find this probability, which is, as we have seen, equal to the fraction of particles that are in state i. So the equation that gives the fraction of particles in state i as a function of the energy of that state is Ni/N = exp(−εi/(kT)) / Σj exp(−εj/(kT)). This equation is of great importance to spectroscopy. In spectroscopy we observe a spectral line of atoms or molecules undergoing transitions from one state to another. In order for this to be possible, there must be some particles in the first state to undergo the transition. We can check whether this condition is fulfilled by finding the fraction of particles in the first state. If it is negligible, the transition is very likely not observed at the temperature for which the calculation was done. In general, a larger fraction of molecules in the first state means a higher number of transitions to the second state. This gives a stronger spectral line. However, there are other factors that influence the intensity of a spectral line, such as whether it is caused by an allowed or a forbidden transition. The softmax function commonly used in machine learning is related to the Boltzmann distribution. Generalized Boltzmann distribution A distribution of a more general exponential form is called the generalized Boltzmann distribution by some authors. The Boltzmann distribution is a special case of the generalized Boltzmann distribution. The generalized Boltzmann distribution is used in statistical mechanics to describe the canonical ensemble, the grand canonical ensemble, and the isothermal–isobaric ensemble. The generalized Boltzmann distribution is usually derived from the principle of maximum entropy, but there are other derivations. The generalized Boltzmann distribution has the following properties: It is the only distribution for which the entropy as defined by the Gibbs entropy formula matches the entropy as defined in classical thermodynamics. It is the only distribution that is mathematically consistent with the fundamental thermodynamic relation where state functions are described by ensemble averages. In statistical mechanics The Boltzmann distribution appears in statistical mechanics when considering closed systems of fixed composition that are in thermal equilibrium (equilibrium with respect to energy exchange). The most general case is the probability distribution for the canonical ensemble. Some special cases (derivable from the canonical ensemble) show the Boltzmann distribution in different aspects: Canonical ensemble (general case) The canonical ensemble gives the probabilities of the various possible states of a closed system of fixed volume, in thermal equilibrium with a heat bath. The canonical ensemble has a state probability distribution with the Boltzmann form. Statistical frequencies of subsystems' states (in a non-interacting collection) When the system of interest is a collection of many non-interacting copies of a smaller subsystem, it is sometimes useful to find the statistical frequency of a given subsystem state, among the collection. 
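As a numerical illustration of the fraction-of-particles formula above, and of its connection to the softmax function, here is a small sketch; the two energy levels and the temperature are made-up example values:

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_fractions(energies_ev, temperature_k):
    """Fraction of particles in each state, N_i/N = exp(-E_i/kT) / sum_j exp(-E_j/kT).

    This is exactly a softmax applied to the values -E_i/(kT).
    """
    factors = [math.exp(-e / (K_B * temperature_k)) for e in energies_ev]
    z = sum(factors)  # partition function
    return [f / z for f in factors]

# Hypothetical two-level system: a ground state and a state 0.05 eV higher.
for frac, label in zip(boltzmann_fractions([0.0, 0.05], 300.0), ["ground", "excited"]):
    print(f"{label}: {frac:.3f}")
# At 300 K the excited state holds about 13% of the particles; a spectral line
# originating from it would be correspondingly weaker.
```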
The canonical ensemble has the property of separability when applied to such a collection: as long as the non-interacting subsystems have fixed composition, then each subsystem's state is independent of the others and is also characterized by a canonical ensemble. As a result, the expected statistical frequency distribution of subsystem states has the Boltzmann form. Maxwell–Boltzmann statistics of classical gases (systems of non-interacting particles) In particle systems, many particles share the same space and regularly change places with each other; the single-particle state space they occupy is a shared space. Maxwell–Boltzmann statistics give the expected number of particles found in a given single-particle state, in a classical gas of non-interacting particles at equilibrium. This expected number distribution has the Boltzmann form. Although these cases have strong similarities, it is helpful to distinguish
them as they generalize in different ways when the crucial assumptions are changed: When a system is in thermodynamic equilibrium with respect to both energy exchange and particle exchange, the requirement of fixed composition is relaxed and a grand canonical ensemble is obtained rather than a canonical ensemble. On the other hand, if both composition and energy are fixed, then a microcanonical ensemble applies instead. 
If the subsystems within a collection do interact with each other, then the expected frequencies of subsystem states no longer follow a Boltzmann distribution, and may not even have an analytical solution. The canonical ensemble can however still be applied to the collective states of the entire system considered as a whole, provided the entire system is in thermal equilibrium. With quantum gases of non-interacting particles in equilibrium, the number of particles found in a given single-particle state does not follow Maxwell–Boltzmann statistics, and there is no simple closed-form expression for quantum gases in the canonical ensemble. In the grand canonical ensemble the state-filling statistics of quantum gases are described by Fermi–Dirac statistics or Bose–Einstein statistics, depending on whether the particles are fermions or bosons, respectively. In mathematics In more general mathematical settings, the Boltzmann distribution is also known as the Gibbs measure. In statistics and machine learning, it is called a log-linear model. In deep learning, the Boltzmann distribution is used in the sampling distribution of stochastic neural networks such as the Boltzmann machine, the restricted Boltzmann machine, energy-based models, and the deep Boltzmann machine. The Boltzmann machine is considered an unsupervised learning model. As the number of nodes in a Boltzmann machine increases, implementation in real-time applications becomes impractical, so a different type of architecture, the restricted Boltzmann machine, was introduced. In economics The Boltzmann distribution can be introduced to allocate permits in emissions trading. The new allocation method using the Boltzmann distribution can describe the most probable, natural, and unbiased distribution of emissions permits among multiple countries. The Boltzmann distribution has the same form as the multinomial logit model. As a discrete choice model, this is very well known in economics since Daniel McFadden made the connection to random utility maximization. See also: Bose–Einstein statistics, Fermi–Dirac statistics, Negative temperature, Softmax function
attack on the leg stump is considered by many cricket fans and commentators to lead to boring play, as it stifles run scoring and encourages batsmen to play conservatively. Fast leg theory In 1930, England captain Douglas Jardine, together with Nottinghamshire's captain Arthur Carr and his bowlers Harold Larwood and Bill Voce, developed a variant of leg theory in which the bowlers bowled fast, short-pitched balls that would rise into the batsman's body, together with a heavily stacked ring of close fielders on the leg side. The idea was that when the batsman defended against the ball, he would be likely to deflect the ball into the air for a catch. Jardine called this modified form of the tactic fast leg theory. On the 1932–33 English tour of Australia, Larwood and Voce bowled fast leg theory at the Australian batsmen. It turned out to be extremely dangerous, and most Australian players sustained injuries from being hit by the ball. Wicket-keeper Bert Oldfield's skull was fractured by a ball hitting his head (although the ball had first glanced off the bat and Larwood had an orthodox field), almost precipitating a riot by the Australian crowd. The Australian press dubbed the tactic Bodyline, and claimed it was a deliberate attempt by the English team to intimidate and injure the Australian players. Reports of the controversy reaching England at the time described the bowling as fast leg theory, which sounded to many people to be a harmless and well-established tactic. This led to a serious misunderstanding amongst the English public and the Marylebone Cricket Club – the administrators of English cricket – of the dangers posed by Bodyline. The English press and cricket authorities declared the Australian protests to be a case of sore losing and "squealing". It was only with the return of the English team
who committed murder, opposite Peter Falk and John Cassavetes, in the Columbo episode "Etude in Black". Her earliest starring film role was opposite Alan Alda in To Kill a Clown (1972). Danner appeared in the episode of M*A*S*H entitled "The More I See You", playing the love interest of Alda's character Hawkeye Pierce. She played lawyer Amanda Bonner in television's Adam's Rib, opposite Ken Howard as Adam Bonner. She played Zelda Fitzgerald in F. Scott Fitzgerald and 'The Last of the Belles' (1974). She was the eponymous heroine in the film Lovin' Molly (1974) (directed by Sidney Lumet). She appeared in Futureworld, playing Tracy Ballard with co-star Peter Fonda (1976). In the 1982 TV movie Inside the Third Reich, she played the wife of Albert Speer. In the film version of Neil Simon's semi-autobiographical play Brighton Beach Memoirs (1986), she portrayed a middle-aged Jewish mother. She has appeared in two films based on the novels of Pat Conroy, The Great Santini (1979) and The Prince of Tides (1991), as well as two television movies adapted from books by Anne Tyler, Saint Maybe and Back When We Were Grownups, both for the Hallmark Hall of Fame. Danner appeared opposite Robert De Niro in the 2000 comedy hit Meet the Parents, and its sequels, Meet the Fockers (2004) and Little Fockers (2010). From 2001 to 2006, she regularly appeared on NBC's sitcom Will & Grace as Will Truman's mother Marilyn. From 2004 to 2006, she starred in the main cast of the comedy-drama series Huff. In 2005, she was nominated for three Primetime Emmy Awards for her work on Will & Grace, Huff, and the television film Back When We Were Grownups, winning for her role in Huff. The following year, she won a second consecutive Emmy Award for Huff. For 25 years, she has been a regular performer at the Williamstown Summer Theater Festival, where she also serves on the board of directors. In 2006, Danner was awarded an inaugural Katharine Hepburn Medal by Bryn Mawr College's Katharine Houghton Hepburn Center. In 2015, Danner was inducted into the American Theater Hall of Fame. Environmental activism Danner has been involved in environmental issues such as recycling and conservation for over 30 years. She has been active with INFORM, Inc., is on the Board of Environmental Advocates of New York and the Board of Directors of the Environmental Media Association, and won the 2002 EMA Board of Directors Ongoing Commitment Award. In 2011, Danner joined Moms Clean Air Force, to help call on parents to join in the fight against toxic air pollution. Health care activism After the death of her husband Bruce Paltrow from oral cancer, she became involved with the nonprofit Oral Cancer Foundation. In 2005, she filmed a
They Had (2018). Danner is the sister of Harry Danner and the widow of Bruce Paltrow. She is the mother of actress Gwyneth Paltrow and director Jake Paltrow. Early life Danner was born in Philadelphia, Pennsylvania, the daughter of Katharine (née Kile; 1909–2006) and Harry Earl Danner, a bank executive. She has a brother, opera singer and actor Harry Danner; a sister, performer-turned-director Dorothy "Dottie" Danner; and a maternal half-brother, violin maker William Moennig. Danner has Pennsylvania Dutch (German) and some English and Irish ancestry; her maternal grandmother was a German immigrant, and one of her paternal great-grandmothers was born in Barbados (to a family of European descent). Danner graduated from George School, a Quaker high school located near Newtown, Bucks County, Pennsylvania, in 1960. Career Danner, a graduate of Bard College, had her first roles in the 1967 musical Mata Hari and the 1968 Off-Broadway production of Summertree. Her early Broadway appearances included Cyrano de Bergerac (1968) and her Theatre World Award-winning performance in The Miser (1969). She won the Tony Award for Best Featured Actress in a Play for portraying a free-spirited divorcée in Butterflies Are Free (1970). In 1972, Danner portrayed Martha Jefferson in the film version of 1776. 
this case ferrous iron, (Fe2+)) using oxygen. This yields soluble products that can be further purified and refined to yield the desired metal. Pyrite leaching (FeS2): In the first step, disulfide is spontaneously oxidized to thiosulfate by ferric ion (Fe3+), which in turn is reduced to give ferrous ion (Fe2+): (1) FeS2 + 6 Fe3+ + 3 H2O → 7 Fe2+ + S2O32− + 6 H+ (spontaneous) The ferrous ion is then oxidized by bacteria using oxygen: (2) 4 Fe2+ + O2 + 4 H+ → 4 Fe3+ + 2 H2O (iron oxidizers) Thiosulfate is also oxidized by bacteria to give sulfate: (3) S2O32− + 2 O2 + H2O → 2 SO42− + 2 H+ (sulfur oxidizers) The ferric ion produced in reaction (2) oxidizes more sulfide as in reaction (1), closing the cycle and giving the net reaction: (4) 2 FeS2 + 7 O2 + 2 H2O → 2 Fe2+ + 4 SO42− + 4 H+ The net products of the reaction are soluble ferrous sulfate and sulfuric acid. The microbial oxidation process occurs at the cell membrane of the bacteria. The electrons pass into the cells and are used in biochemical processes to produce energy for the bacteria while reducing oxygen to water. The critical reaction is the oxidation of sulfide by ferric iron. The main role of the bacterial step is the regeneration of this reactant. The process for copper is very similar, but the efficiency and kinetics depend on the copper mineralogy. The most efficient minerals are supergene minerals such as chalcocite, Cu2S, and covellite, CuS. The main copper mineral chalcopyrite (CuFeS2) is not leached very efficiently, which is why the dominant copper-producing technology remains flotation, followed by smelting and refining. The leaching of CuFeS2 follows the two stages of being dissolved and then further oxidised, with Cu2+ ions being left in solution. Chalcopyrite leaching: (1) CuFeS2 + 4 Fe3+ → Cu2+ + 5 Fe2+ + 2 S0 (spontaneous) (2) 4 Fe2+ + O2 + 4 H+ → 4 Fe3+ + 2 H2O (iron oxidizers) (3) 2 S0 + 3 O2 + 2 H2O → 2 SO42− + 4 H+ (sulfur oxidizers) net reaction: (4) CuFeS2 + 4 O2 → Cu2+ + Fe2+ + 2 SO42− In general, sulfides are first oxidized to elemental sulfur, whereas disulfides are oxidized to give thiosulfate, and the processes above can be applied to other sulfidic ores. Bioleaching of non-sulfidic ores such as pitchblende also uses ferric iron as an oxidant (e.g., UO2 + 2 Fe3+ → UO22+ + 2 Fe2+). In this case, the sole purpose of the bacterial step is the regeneration of Fe3+. Sulfidic iron ores can be added to speed up the process and provide a source of iron. Bioleaching of non-sulfidic ores by layering of waste sulfides and elemental sulfur, colonized by Acidithiobacillus spp., has been accomplished, which provides a strategy for accelerated leaching of materials that do not contain sulfide minerals. Further processing The dissolved copper (Cu2+) ions are removed from the solution by ligand exchange solvent extraction, which leaves other ions in the solution. The copper is removed by bonding to a ligand, which is a large molecule consisting of a number of smaller groups, each possessing a lone electron pair. The ligand-copper complex is extracted from the solution using an organic solvent such as kerosene: Cu2+(aq) + 2LH(organic) → CuL2(organic) + 2H+(aq) The ligand donates electrons to the copper, producing a complex: a central metal atom (copper) bonded to the ligand. Because this complex has no charge, it is no longer attracted to polar water molecules and dissolves in the kerosene, which is then easily separated from the solution. Because the initial reaction is reversible, its position is determined by pH. Adding concentrated acid reverses the equation, and the copper ions go back into an aqueous solution. Then the copper is passed through an electrowinning process to increase its purity: an electric current is passed through the resulting solution of copper ions. 
Because copper ions have a 2+ charge, they are attracted to the negative cathodes and collect there. The copper can also be concentrated and separated by displacing the copper with
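The pH dependence of the reversible extraction step described above can be illustrated numerically. This is only a sketch: the equilibrium constant and ligand concentration below are invented placeholder values, not data for any real extractant:

```python
# Sketch of the pH-driven copper solvent-extraction equilibrium
#   Cu2+(aq) + 2 LH(org) <=> CuL2(org) + 2 H+(aq)
# The distribution ratio D = [CuL2](org) / [Cu2+](aq) = K * [LH]^2 / [H+]^2,
# so extraction is favoured at higher pH and reversed by adding acid.
K_EX = 0.5      # hypothetical equilibrium constant
LH_ORG = 0.1    # hypothetical free-ligand concentration in kerosene, mol/L

def fraction_extracted(ph: float) -> float:
    h = 10.0 ** (-ph)
    d = K_EX * LH_ORG**2 / h**2  # distribution ratio (equal phase volumes)
    return d / (1.0 + d)

for ph in (0.5, 1.0, 2.0, 3.0):
    print(f"pH {ph}: {fraction_extracted(ph):.1%} of copper in the organic phase")
# Low pH (concentrated acid) strips copper back into the aqueous phase,
# as described above for recovering it before electrowinning.
```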
the International Olympic Committee (IOC) to include lead climbing in the 2020 Summer Olympics. The proposal was later revised to an "overall" competition, which would feature bouldering, lead climbing, and speed climbing. In May 2013, the IOC announced that climbing would not be added to the 2020 Olympic program. In 2016, the IOC officially approved climbing as an Olympic sport "in order to appeal to younger audiences." The Olympics will feature the earlier proposed overall competition. Medalists will compete in all three categories for a best overall score, calculated by multiplying the positions that the climbers attain in each discipline of climbing. History Rock climbing first appeared as a sport in the late 1800s. Early records describe climbers engaging in what is now referred to as bouldering, not as a separate discipline, but as a playful form of training for larger ascents. It was during this time that the words "bouldering" and "problem" first appeared in British climbing literature. Oscar Eckenstein was an early proponent of the activity in the British Isles. In the early 20th century, the Fontainebleau area of France established itself as a prominent climbing area, where some of the first dedicated bleausards (or "boulderers") emerged. One of those athletes, Pierre Allain, invented the specialized shoe used for rock climbing. Modern bouldering In the late 1950s through the 1960s, American mathematician John Gill pushed the sport further and contributed several important innovations, distinguishing bouldering as a separate discipline in the process. Gill previously pursued gymnastics, a sport which had an established scale of difficulty for movements and body positions, and shifted the focus of bouldering from reaching the summit to navigating a set of holds. Gill developed a rating system that was closed-ended: B1 problems were as difficult as the most challenging roped routes of the time, B2 problems were more difficult, and B3 problems had been completed only once. Gill introduced chalk as a method of keeping the climber's hands dry, promoted a dynamic climbing style, and emphasized the importance of strength training to complement skill. As Gill improved in ability and influence, his ideas became the norm. In the 1980s, two important training tools emerged. The first was bouldering mats, also referred to as "crash pads", which protected against injuries from falling and enabled boulderers to climb in areas that would have been too dangerous otherwise. The second was indoor climbing walls, which helped spread the sport to areas without outdoor climbing and allowed serious climbers to train year-round. As the sport grew in popularity, new bouldering areas were developed throughout Europe and the United States, and more athletes began participating in bouldering competitions. The visibility of the sport greatly increased in the early 2000s, as YouTube videos and climbing blogs helped boulderers around the world to quickly learn techniques, find hard problems, and announce newly completed projects. 
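The multiplicative overall score described at the start of this passage (each climber's placings across the three disciplines are multiplied, and the lowest product wins) can be stated in a few lines of code; the competitor names and placings here are invented for illustration:

```python
# Combined Olympic-format score: multiply each climber's placing in speed,
# bouldering, and lead; the lowest product gives the best overall rank.
results = {
    "climber A": (1, 5, 4),   # hypothetical placings (speed, boulder, lead)
    "climber B": (3, 2, 2),
    "climber C": (2, 4, 3),
}

def overall_score(placings):
    product = 1
    for p in placings:
        product *= p
    return product

for name, placings in sorted(results.items(), key=lambda kv: overall_score(kv[1])):
    print(name, overall_score(placings))
# climber B wins with 3*2*2 = 12, ahead of climber A (20) and climber C (24).
```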
Notable ascents Notable boulder climbs are chronicled by the climbing media to track progress in boulder climbing standards and levels of technical difficulty; in contrast, the hardest traditional climbing routes tend to be of lower technical difficulty due to the additional burden of having to place protection during the course of the climb, and due to the lack of any possibility of using natural protection on the most extreme climbs. As of December 2021, the world's hardest bouldering routes are Burden of Dreams by Nalle Hukkataival, and Return of the Sleepwalker by Daniel Woods, both at proposed grades of 9A (V17). There are a number of routes with a confirmed climbing grade of 8C+ (V16), the first of which was Gioia by Christian Core in 2008 (and confirmed by Adam Ondra in 2011). As of December 2021, female climbers Josune Bereziartu, Ashima Shiraishi, and Kaddi Lehmann have repeated boulder problems at the 8C (V15) grade. Equipment Unlike other climbing sports, bouldering can be performed safely and effectively with very little equipment, an aspect which makes the discipline highly appealing, though opinions differ on what the minimum is. While bouldering pioneer John Sherman asserted that "The only gear really needed to go bouldering is boulders," others suggest the use of climbing shoes and a chalkbag – a small pouch where ground-up chalk is kept – as the bare minimum, and more experienced boulderers typically bring multiple pairs of climbing shoes, chalk, brushes, crash pads, and a skincare kit. Climbing shoes have the most direct impact on performance. Besides protecting the climber's feet from rough surfaces, climbing shoes are designed to help the climber secure footholds. Climbing shoes typically fit much tighter than other athletic footwear and often curl the toes downwards to enable precise footwork. They are manufactured in a variety of different styles to perform in different situations. For example, high-top shoes provide better protection for the ankle, while low-top shoes provide greater flexibility and freedom of movement. Stiffer shoes excel at securing small edges, whereas softer shoes provide greater sensitivity. The front of the shoe, called the "toe box", can be asymmetric, which performs well on overhanging rocks, or symmetric, which is better suited for vertical problems and slabs. To absorb sweat, most boulderers use gymnastics chalk on their hands, stored in a chalkbag, which can be tied around the waist (also called sport climbing chalkbags), allowing the climber to reapply chalk during the climb. There are also floor chalkbags (also called bouldering chalkbags), which are usually bigger than sport climbing chalkbags and are meant to be kept on the floor while climbing; this is because boulders do not usually have so many movements as to require chalking up more than once. Different sizes of brushes are used to remove excess chalk and debris from boulders in between climbs; they are often attached to the end of a long straight object in order to reach higher holds. Crash pads, also referred to as bouldering mats, are foam cushions placed on the ground to protect climbers from falls. Safety Boulder problems are generally shorter than 6 meters (20 ft) from ground to top. This makes the sport significantly safer than free solo climbing, which is also performed without ropes, but with no upper limit on the height of the climb. However, minor injuries are common in bouldering, particularly sprained ankles and wrists. 
Two factors contribute to the frequency of injuries in bouldering: first, boulder problems typically feature more difficult moves than other climbing disciplines, making falls more common. Second, without ropes to arrest the climber's descent, every fall causes the climber to hit the ground. To prevent injuries, boulderers position crash pads near the boulder to provide a softer landing, as well as one or more spotters, people who watch the climber and help redirect a fall onto the pads. Upon landing, boulderers employ falling techniques similar to those used in gymnastics: spreading the impact across the entire body to avoid bone fractures, and positioning limbs to allow joints to move freely throughout the impact. Technique Although every type of rock climbing requires a high level of strength and technique, bouldering is the most dynamic form of the sport, requiring the highest level of power and placing considerable strain on the body. Training routines that strengthen fingers and forearms are useful in preventing injuries such as tendonitis and ruptured ligaments. However, as with other forms of climbing, bouldering technique begins with proper footwork. Leg muscles are significantly stronger than arm muscles; thus, proficient boulderers use their arms to maintain balance and body positioning as much as possible, relying on their legs to push them up the rock. Boulderers also keep their arms straight with their shoulders engaged whenever feasible, allowing their bones to support their body weight rather than their muscles. Bouldering movements are described as either "static" or "dynamic". Static movements are those that are performed slowly, with the climber's position controlled by maintaining contact with the boulder using the other three limbs. Dynamic movements use the climber's momentum to reach holds that would be difficult or impossible to secure statically, with an increased risk of falling if the movement is not performed accurately. Environmental impact Bouldering can damage vegetation that grows on rocks, such as moss and lichens. This can occur as a result of the climber intentionally cleaning the boulder, or unintentionally from repeated use of handholds and footholds. Vegetation on the ground surrounding the boulder can also
Squamish, British Columbia is one of the most popular bouldering areas in Canada. Europe is also home to a number of bouldering sites, such as Fontainebleau in France, Albarracín in Spain, and various mountains throughout Switzerland. Africa's most prominent bouldering areas include the more established Rocklands, South Africa, the newer Oukaïmeden in Morocco, and more recently opened areas like Chimanimani in Zimbabwe. Indoor bouldering Artificial climbing walls are used to simulate boulder problems in an indoor environment, usually at climbing gyms. These walls are constructed with wooden panels, polymer cement panels, concrete shells, or precast molds of actual rock walls. Holds, usually made of plastic, are then bolted onto the wall to create problems. Some problems use steep overhanging surfaces which force the climber to support much of their weight using their upper body strength. Other problems are set on flat walls; instead of requiring upper body strength, these problems create difficulty by requiring the climber to execute a series of predetermined movements to complete the route. The IFSC Climbing World Championships have noticeably included more such problems in recent competitions. Climbing gyms often feature multiple problems within the same section of wall. In the US, the most common method route-setters use to designate the intended problem is by placing colored tape next to each hold. For example, red tape would indicate one bouldering problem while green tape would be used to set a different problem in the same area. Across much of the rest of the world, problems and grades are usually designated using a set color of plastic hold to indicate problems and their difficulty levels. Using colored holds to set has certain advantages, the most notable of which are that it makes it more obvious where the holds for a problem are, and that there is no chance of tape being accidentally kicked off footholds. Smaller, resource-poor climbing gyms may prefer taped problems because large, expensive holds can be used in multiple routes by marking them with more than one color of tape; the tape indicates the hold(s) that the athlete should grab first. Grading Bouldering problems are assigned numerical difficulty ratings by route-setters and climbers. The two most widely used rating systems are the V-scale and the Fontainebleau system. The V-scale, which originated in the United States, is an open-ended rating system with higher numbers indicating a higher degree of difficulty. The V1 rating indicates that a problem can be completed by a novice climber in good physical condition after several attempts. The scale begins at V0, and as of 2021, the highest V rating that has been assigned to a bouldering problem is V17. Some climbing gyms also use a VB grade to indicate beginner problems. The Fontainebleau scale follows a similar system, with each numerical grade divided into three ratings with the letters a, b, and c. For example, Fontainebleau 7A roughly corresponds with V6, while Fontainebleau 7C+ is equivalent to V10. In both systems, grades are further differentiated by appending "+" to indicate a small increase in difficulty. Despite this level of specificity, ratings of individual problems are often controversial, as ability level is not the only factor that affects how difficult a problem may be for a particular climber. Height, arm length, flexibility, and other body characteristics can also be relevant to perceived difficulty. 
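A tiny sketch of such a grade-conversion table follows; only the two correspondences stated above are taken from the text, and the remaining entry is an approximate, illustrative addition (published conversion charts differ slightly):

```python
# Approximate V-scale to Fontainebleau correspondences (partial, illustrative).
V_TO_FONT = {
    "V6": "7A",    # stated above
    "V10": "7C+",  # stated above
    "V13": "8B",   # approximate, added for illustration
}

def to_font(v_grade: str) -> str:
    """Look up the rough Fontainebleau equivalent of a V grade."""
    return V_TO_FONT.get(v_grade, "no listed equivalent")

print(to_font("V6"))   # -> 7A
print(to_font("V10"))  # -> 7C+
print(to_font("V2"))   # -> no listed equivalent
```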
Highball bouldering Highball bouldering is the climbing of high, long, and difficult boulders. Using the same protection as standard bouldering, climbers venture up house-sized rocks that test not only their physical skill and strength but also their mental focus. Highballing, like most of climbing, is open to interpretation. Most climbers say anything above 15 feet is a highball; highballs can range in height up to 35–40 feet, beyond which highball bouldering turns into free soloing. Highball bouldering may have begun in 1961 when John Gill, without top-rope rehearsal, bouldered a steep face on a 37-foot (11 meter) granite spire called "The Thimble". The difficulty level of this ascent (V4/5 or 5.12a) was extraordinary for that time. Gill's achievement initiated a wave of climbers making ascents of large boulders. Later, with the introduction and evolution of crash pads, climbers were able to push the limits of highball bouldering ever higher. In 2002 Jason Kehl completed the first highball at double-digit V-difficulty, called Evilution, a 55-foot (16.8 meter) boulder in the Buttermilks of California, earning the grade of V12. This climb marked the beginning of a new generation of highball climbing that pushed not only height, but great difficulty. It is not unusual for climbers to rehearse such risky problems on top-rope, although this practice remains a matter of debate. Groundbreaking ascents in this style include: Ambrosia, a 55-foot (16.8 meter) boulder in Bishop, California, climbed by Kevin Jorgeson in 2015, sporting the grade of V11. Too Big to Flail, V10, another 55-foot (16.8 meter) line in Bishop, California, climbed by Alex Honnold in 2016. Livin' Large, a 35-foot V15 in Rocklands, South Africa, found and established by Nalle Hukkataival in 2009, which has been repeated by only one person, Jimmy Webb. The Process, a 55-foot V16 in Bishop, California, first climbed by Daniel Woods in 2015. The line was worked with another climber, Dan Beal, but a hold broke after Woods's ascent, and the climb had yet to see a second ascent as of September 2017. Competitions Traditionally, competition in bouldering was informal, with climbers working out problems near the limits of their abilities, then challenging their peers to repeat these accomplishments. However, modern climbing gyms allow for a more formal competitive structure. The International Federation of Sport Climbing (IFSC) employs an indoor format (although competitions can also take place in an outdoor setting) that breaks the competition into three rounds: qualifications, semi-finals, and finals. The rounds feature different sets of four to six boulder problems, and each competitor has a fixed amount of time to attempt each problem. At the end of each round, competitors are ranked by the number of completed problems, with ties settled by the total number of attempts taken to solve the problems. Some competitions only permit climbers a fixed number of attempts at each problem with a timed rest period in between. In an open-format competition, all climbers compete simultaneously, and are given a fixed amount of time to complete as many problems as possible. More points are awarded for more difficult problems, while points are deducted for multiple attempts on the same problem. In 2012, the IFSC submitted a proposal to the International Olympic Committee (IOC) to include lead climbing in the 2020 Summer Olympics. The proposal was later revised to an "overall" competition, which would feature bouldering, lead climbing, and speed climbing. 
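The round-ranking rule just described (most completed problems first, ties broken by fewest total attempts) can be sketched in a few lines; the competitors and scores are invented:

```python
# IFSC-style round ranking: sort by problems topped (descending),
# breaking ties by total attempts (ascending).
round_results = [
    ("competitor 1", {"tops": 3, "attempts": 7}),
    ("competitor 2", {"tops": 4, "attempts": 9}),
    ("competitor 3", {"tops": 3, "attempts": 5}),
]

ranked = sorted(round_results, key=lambda r: (-r[1]["tops"], r[1]["attempts"]))
for place, (name, score) in enumerate(ranked, start=1):
    print(place, name, score["tops"], "tops in", score["attempts"], "attempts")
# competitor 2 places first with 4 tops; competitor 3 beats competitor 1
# on the attempt tiebreak (5 vs 7).
```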
pressure of a liquid equals the pressure surrounding the liquid and the liquid changes into a vapor. The boiling point of a liquid varies depending upon the surrounding environmental pressure. A liquid in a partial vacuum has a lower boiling point than when that liquid is at atmospheric pressure. A liquid at high pressure has a higher boiling point than when that liquid is at atmospheric pressure. For example, water boils at 100 °C at sea level, but at 93.4 °C at 1,905 metres (6,250 ft) of altitude. For a given pressure, different liquids will boil at different temperatures. The normal boiling point (also called the atmospheric boiling point or the atmospheric pressure boiling point) of a liquid is the special case in which the vapor pressure of the liquid equals the defined atmospheric pressure at sea level, one atmosphere. At that temperature, the vapor pressure of the liquid becomes sufficient to overcome atmospheric pressure and allow bubbles of vapor to form inside the bulk of the liquid. The standard boiling point has been defined by IUPAC since 1982 as the temperature at which boiling occurs under a pressure of one bar. The heat of vaporization is the energy required to transform a given quantity (a mol, kg, pound, etc.) of a substance from a liquid into a gas at a given pressure (often atmospheric pressure). Liquids may change to a vapor at temperatures below their boiling points through the process of evaporation. Evaporation is a surface phenomenon in which molecules located near the liquid's surface, not held in by enough liquid pressure on that side, escape into the surroundings as vapor. On the other hand, boiling is a process in which molecules anywhere in the liquid escape, resulting in the formation of vapor bubbles within the liquid. Saturation temperature and pressure A saturated liquid contains as much thermal energy as it can without boiling (or conversely a saturated vapor contains as little thermal energy as it can without condensing). Saturation temperature means boiling point. The saturation temperature is the temperature for a corresponding saturation pressure at which a liquid boils into its vapor phase. The liquid can be said to be saturated with thermal energy. Any addition of thermal energy results in a phase transition. If the pressure in a system remains constant (isobaric), a vapor at saturation temperature will begin to condense into its liquid phase as thermal energy (heat) is removed. Similarly, a liquid at saturation temperature and pressure will boil into its vapor phase as additional thermal energy is applied. The boiling point corresponds to the temperature at which the vapor pressure of the liquid equals the surrounding environmental pressure. Thus, the boiling point is dependent on the pressure. Boiling points may be published with respect to the NIST (USA) standard pressure of 101.325 kPa (1 atm) or the IUPAC standard pressure of 100.000 kPa. At higher elevations, where the atmospheric pressure is much lower, the boiling point is also lower. The boiling point increases with increased pressure up to the critical point, where the gas and liquid properties become identical. The boiling point cannot be increased beyond the critical point. Likewise, the boiling point decreases with decreasing pressure until the triple point is reached. The boiling point cannot be reduced below the triple point. 
If the heat of vaporization and the vapor pressure of a liquid at a certain temperature are known, the boiling point can be calculated by using the Clausius–Clapeyron equation, thus: T_B = (1/T_0 − R ln(P/P_0) / ΔH_vap)^(−1) where: T_B is the boiling point at the pressure of interest, R is the ideal gas constant, P is the vapor pressure of the liquid (the pressure of interest), P_0 is some pressure where the corresponding T_0 is known (usually data available at 1 atm or 100 kPa), ΔH_vap is the heat of vaporization of the liquid, T_0 is the boiling temperature, and ln is the natural logarithm. Saturation pressure is the pressure for a corresponding saturation temperature at which a liquid boils into its vapor phase. Saturation pressure and saturation temperature have a direct relationship: as saturation pressure is increased, so is saturation temperature. If the temperature in a system remains constant (an isothermal system), vapor at saturation pressure and temperature will begin to condense into its liquid phase as the
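As a worked example of the rearranged Clausius–Clapeyron relation above, one can estimate the boiling point of water at reduced pressure; the water constants are standard textbook values, and the chosen pressure is arbitrary:

```python
import math

R = 8.314          # J/(mol*K), ideal gas constant
DH_VAP = 40660.0   # J/mol, heat of vaporization of water (approximate)
T0 = 373.15        # K, boiling point at the reference pressure
P0 = 101.325       # kPa, reference (standard atmospheric) pressure

def boiling_point(p_kpa: float) -> float:
    """Boiling temperature at pressure p, from T_B = (1/T0 - R*ln(P/P0)/dHvap)^-1."""
    return 1.0 / (1.0 / T0 - R * math.log(p_kpa / P0) / DH_VAP)

# At roughly 70 kPa (around 3,000 m altitude) water boils near 90 C:
print(f"{boiling_point(70.0) - 273.15:.1f} C")
```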
for a variety of liquids. As can be seen in the chart, the liquids with the highest vapor pressures have the lowest normal boiling points. For example, at any given temperature, methyl chloride has the highest vapor pressure of any of the liquids in the chart. It also has the lowest normal boiling point (−24.2 °C), which is where the vapor pressure curve of methyl chloride (the blue line) intersects the horizontal pressure line of one atmosphere (atm) of absolute vapor pressure. The critical point of a liquid is the highest temperature (and pressure) at which it will actually boil. See also: Vapour pressure of water. Properties of the elements The element with the lowest boiling point is helium. Both the boiling points of rhenium and tungsten exceed 5000 K at standard pressure; because it is difficult to measure extreme temperatures precisely without bias, both have been cited in the literature as having the higher boiling point. Boiling point as a reference property of a pure compound As can be seen from the above plot of the logarithm of the vapor pressure vs. the temperature for any given pure chemical compound, its normal boiling point can serve as an indication of that compound's overall volatility. A given pure compound has only one normal boiling point, if any, and a compound's normal boiling point and melting point can serve as characteristic physical properties for that compound, listed in reference books. The higher a compound's normal boiling point, the less volatile that compound is overall, and conversely, the lower a compound's normal boiling point, the more volatile that compound is overall. Some compounds decompose at higher temperatures before reaching their normal boiling point, or sometimes even their melting point. For a stable compound, the boiling point ranges from its triple point to its critical point, depending on the external pressure. Above its triple point, a compound's normal boiling point, if any, is higher than its melting point. Beyond the critical point, a compound's liquid and vapor phases merge into one phase, which may be called a supercritical fluid. At any given temperature, if a compound's normal boiling point is lower, then that compound will generally exist as a gas at atmospheric external pressure. If the compound's normal boiling point is higher, then that compound can exist as a liquid or solid at that given temperature at atmospheric external pressure, and will so exist in equilibrium with its vapor (if volatile) if its vapors are contained. If a compound's vapors are not contained, then some volatile compounds can eventually evaporate away in spite of their higher boiling points. In general, compounds with ionic bonds have high normal boiling points, if they do not decompose before reaching such high temperatures. Many metals have high boiling points, but not all. Very generally, with other factors being equal, in compounds with covalently bonded molecules, as the size of the molecule (or molecular mass) increases, the normal boiling point increases. When the molecular size becomes that of a macromolecule, polymer, or otherwise very large, the compound often decomposes at high temperature before the boiling point is reached. Another factor that affects the normal boiling point of a compound is the polarity of its molecules. As
principle states that on large scales the universe is homogeneous and isotropic—appearing the same in all directions regardless of location. These ideas were initially taken as postulates, but later efforts were made to test each of them. For example, the first assumption has been tested by observations showing that the largest possible deviation of the fine-structure constant over much of the age of the universe is of order 10⁻⁵. Also, general relativity has passed stringent tests on the scale of the Solar System and binary stars. The large-scale universe appears isotropic as viewed from Earth. If it is indeed isotropic, the cosmological principle can be derived from the simpler Copernican principle, which states that there is no preferred (or special) observer or vantage point. To this end, the cosmological principle has been confirmed to a level of 10⁻⁵ via observations of the temperature of the CMB. At the scale of the CMB horizon, the universe has been measured to be homogeneous with an upper bound on the order of 10% inhomogeneity, as of 1995. Expansion of space The expansion of the Universe was inferred from early twentieth century astronomical observations and is an essential ingredient of the Big Bang theory. Mathematically, general relativity describes spacetime by a metric, which determines the distances that separate nearby points. The points, which can be galaxies, stars, or other objects, are specified using a coordinate chart or "grid" that is laid down over all spacetime. The cosmological principle implies that the metric should be homogeneous and isotropic on large scales, which uniquely singles out the Friedmann–Lemaître–Robertson–Walker (FLRW) metric. This metric contains a scale factor, which describes how the size of the universe changes with time. This enables a convenient choice of a coordinate system to be made, called comoving coordinates. In this coordinate system, the grid expands along with the universe, and objects that are moving only because of the expansion of the universe remain at fixed points on the grid. While their coordinate distance (comoving distance) remains constant, the physical distance between two such co-moving points expands proportionally with the scale factor of the universe. The Big Bang is not an explosion of matter moving outward to fill an empty universe. Instead, space itself expands with time everywhere and increases the physical distances between comoving points. In other words, the Big Bang is not an explosion in space, but rather an expansion of space. Because the FLRW metric assumes a uniform distribution of mass and energy, it applies to our universe only on large scales—local concentrations of matter such as our galaxy do not necessarily expand with the same speed as the whole Universe. Horizons An important feature of the Big Bang spacetime is the presence of particle horizons. Since the universe has a finite age, and light travels at a finite speed, there may be events in the past whose light has not yet had time to reach us. This places a limit or a past horizon on the most distant objects that can be observed. Conversely, because space is expanding, and more distant objects are receding ever more quickly, light emitted by us today may never "catch up" to very distant objects. This defines a future horizon, which limits the events in the future that we will be able to influence. The presence of either type of horizon depends on the details of the FLRW model that describes our universe. 
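For reference, a standard way to write the FLRW metric with its scale factor a(t), in the usual conventions (this form is supplied here for clarity, not quoted from the text above):

```latex
ds^2 = -c^2\,dt^2 + a(t)^2 \left[ \frac{dr^2}{1 - k r^2}
       + r^2 \left( d\theta^2 + \sin^2\theta \, d\varphi^2 \right) \right],
```

where k ∈ {−1, 0, +1} sets the spatial curvature; comoving points keep fixed coordinates (r, θ, φ) while the physical distance between them scales with a(t).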
Our understanding of the universe back to very early times suggests that there is a past horizon, though in practice our view is also limited by the opacity of the universe at early times. So our view cannot extend further backward in time, though the horizon recedes in space. If the expansion of the universe continues to accelerate, there is a future horizon as well. Thermalization Some processes in the early universe occurred too slowly, compared to the expansion rate of the universe, to reach approximate thermodynamic equilibrium. Others were fast enough to reach thermalization. The parameter usually used to determine whether a process in the very early universe had reached thermal equilibrium is the ratio between the rate of the process (usually the rate of collisions between particles) and the Hubble parameter. The larger the ratio, the more time particles had to thermalize before they were too far away from each other. Timeline According to the Big Bang theory, the universe at the beginning was very hot and very compact, and since then it has been expanding and cooling down. Singularity Extrapolation of the expansion of the universe backwards in time using general relativity yields an infinite density and temperature at a finite time in the past. This irregular behavior, known as the gravitational singularity, indicates that general relativity is not an adequate description of the laws of physics in this regime. Models based on general relativity alone cannot extrapolate toward the singularity, before the end of the so-called Planck epoch. This primordial singularity is itself sometimes called "the Big Bang", but the term can also refer to a more generic early hot, dense phase of the universe. In either case, "the Big Bang" as an event is also colloquially referred to as the "birth" of our universe, since it represents the point in history where the universe can be verified to have entered into a regime where the laws of physics as we understand them (specifically general relativity and the Standard Model of particle physics) work. Based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background, the time that has passed since that event, known as the "age of the universe", is 13.8 billion years. Despite being extremely dense at this time (far denser than is usually required to form a black hole), the universe did not re-collapse into a singularity. Commonly used calculations and limits for explaining gravitational collapse are usually based upon objects of relatively constant size, such as stars, and do not apply to rapidly expanding space such as the Big Bang. Since the early universe did not immediately collapse into a multitude of black holes, matter at that time must have been very evenly distributed with a negligible density gradient. Inflation and baryogenesis The earliest phases of the Big Bang are subject to much speculation, since astronomical data about them are not available. In the most common models the universe was filled homogeneously and isotropically with a very high energy density and huge temperatures and pressures, and was very rapidly expanding and cooling. The period from 0 to 10⁻⁴³ seconds into the expansion, the Planck epoch, was a phase in which the four fundamental forces (the electromagnetic force, the strong nuclear force, the weak nuclear force, and the gravitational force) were unified as one. 
In this stage, the characteristic scale length of the universe was the Planck length, about 1.6×10^−35 m, and consequently the universe had a temperature of approximately 10^32 degrees Celsius. Even the very concept of a particle breaks down in these conditions. A proper understanding of this period awaits the development of a theory of quantum gravity. The Planck epoch was succeeded by the grand unification epoch beginning at 10^−43 seconds, where gravitation separated from the other forces as the universe's temperature fell. At approximately 10^−37 seconds into the expansion, a phase transition caused a period of cosmic inflation, during which the universe grew exponentially, unconstrained by the light speed invariance, and temperatures dropped by a factor of 100,000. Microscopic quantum fluctuations that occurred because of Heisenberg's uncertainty principle were amplified into the seeds that would later form the large-scale structure of the universe. At a time around 10^−36 seconds, the electroweak epoch began when the strong nuclear force separated from the other forces, with only the electromagnetic force and weak nuclear force remaining unified. Inflation stopped at around the 10^−33 to 10^−32 seconds mark, with the universe's volume having increased by a factor of at least 10^78. Reheating occurred until the universe reached the temperatures required for the production of a quark–gluon plasma as well as all other elementary particles. Temperatures were so high that the random motions of particles were at relativistic speeds, and particle–antiparticle pairs of all kinds were being continuously created and destroyed in collisions. At some point, an unknown reaction called baryogenesis violated the conservation of baryon number, leading to a very small excess of quarks and leptons over antiquarks and antileptons—of the order of one part in 30 million. This resulted in the predominance of matter over antimatter in the present universe. Cooling The universe continued to decrease in density and fall in temperature, hence the typical energy of each particle was decreasing. Symmetry-breaking phase transitions put the fundamental forces of physics and the parameters of elementary particles into their present form, with the electromagnetic force and weak nuclear force separating at about 10^−12 seconds. After about 10^−11 seconds, the picture becomes less speculative, since particle energies drop to values that can be attained in particle accelerators. At about 10^−6 seconds, quarks and gluons combined to form baryons such as protons and neutrons. The small excess of quarks over antiquarks led to a small excess of baryons over antibaryons. The temperature was no longer high enough to create either new proton–antiproton or neutron–antineutron pairs. A mass annihilation immediately followed, leaving just one in 10^8 of the original matter particles and none of their antiparticles. A similar process happened at about 1 second for electrons and positrons. After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy density of the universe was dominated by photons (with a minor contribution from neutrinos). A few minutes into the expansion, when the temperature was about a billion kelvin and the density of matter in the universe was comparable to the current density of Earth's atmosphere, neutrons combined with protons to form the universe's deuterium and helium nuclei in a process called Big Bang nucleosynthesis (BBN). Most protons remained uncombined as hydrogen nuclei.
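Where the roughly 25% helium figure predicted by BBN comes from can be seen with a back-of-the-envelope estimate (a standard argument, assuming a frozen-out neutron-to-proton ratio of about 1/7 and that essentially all free neutrons end up bound in helium-4):

```python
# Back-of-the-envelope BBN helium mass fraction. Assumes the
# neutron-to-proton ratio has frozen out at ~1/7 by the time of
# nucleosynthesis and that nearly all neutrons are locked into
# helium-4 (two protons plus two neutrons per nucleus).
n_over_p = 1 / 7

# Each He-4 nucleus pairs every two neutrons with two protons, so by
# mass the helium fraction is twice the neutron share of all nucleons.
Y_p = 2 * n_over_p / (1 + n_over_p)

print(f"predicted helium mass fraction Y_p ~ {Y_p:.2f}")  # ~0.25
```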
As the universe cooled, the rest energy density of matter came to gravitationally dominate that of the photon radiation. After about 379,000 years, the electrons and nuclei combined into atoms (mostly hydrogen), which were able to emit radiation. This relic radiation, which travelled through space largely unimpeded, is known as the cosmic microwave background. Structure formation Over a long period of time, the slightly denser regions of the uniformly distributed matter gravitationally attracted nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures observable today. The details of this process depend on the amount and type of matter in the universe. The four possible types of matter are known as cold dark matter, warm dark matter, hot dark matter, and baryonic matter. The best measurements available, from the Wilkinson Microwave Anisotropy Probe (WMAP), show that the data is well-fit by a Lambda-CDM model in which dark matter is assumed to be cold (warm dark matter is ruled out by early reionization), and is estimated to make up about 23% of the matter/energy of the universe, while baryonic matter makes up about 4.6%. In an "extended model" which includes hot dark matter in the form of neutrinos, if the "physical baryon density" is estimated at about 0.023 (this is different from the 'baryon density' expressed as a fraction of the total matter/energy density, which is about 0.046) and the corresponding cold dark matter density is about 0.11, then the corresponding neutrino density is estimated to be less than 0.0062. Cosmic acceleration Independent lines of evidence from Type Ia supernovae and the CMB imply that the universe today is dominated by a mysterious form of energy known as dark energy, which apparently permeates all of space. The observations suggest that 73% of the total energy density of today's universe is in this form. When the universe was very young, it was likely infused with dark energy, but with less space and everything closer together, gravity predominated, and it was slowly braking the expansion. But eventually, after many billions of years of expansion, the declining density of matter relative to the density of dark energy caused the expansion of the universe to slowly begin to accelerate. Dark energy in its simplest formulation takes the form of the cosmological constant term in the Einstein field equations of general relativity, but its composition and mechanism are unknown and, more generally, the details of its equation of state and relationship with the Standard Model of particle physics continue to be investigated both through observation and theoretically. All of this cosmic evolution after the inflationary epoch can be rigorously described and modeled by the ΛCDM model of cosmology, which uses the independent frameworks of quantum mechanics and general relativity. There are no easily testable models that would describe the situation prior to approximately 10^−15 seconds. Understanding this earliest of eras in the history of the universe is currently one of the greatest unsolved problems in physics. History Etymology English astronomer Fred Hoyle is credited with coining the term "Big Bang" during a talk for a March 1949 BBC Radio broadcast, saying: "These theories were based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past."
In early 2003, the first results of the Wilkinson Microwave Anisotropy Probe were released, yielding what were at the time the most accurate values for some of the cosmological parameters. The results disproved several specific cosmic inflation models, but are consistent with the inflation theory in general. The Planck space probe was launched in May 2009. Other ground and balloon-based cosmic microwave background experiments are ongoing. Abundance of primordial elements Using the Big Bang model, it is possible to calculate the concentration of helium-4, helium-3, deuterium, and lithium-7 in the universe as ratios to the amount of ordinary hydrogen. The relative abundances depend on a single parameter, the ratio of photons to baryons. This value can be calculated independently from the detailed structure of CMB fluctuations. The ratios predicted (by mass, not by number) are about 0.25 for ^4He/H, about 10^−3 for ^2H/H, about 10^−4 for ^3He/H and about 10^−9 for ^7Li/H. The measured abundances all agree at least roughly with those predicted from a single value of the baryon-to-photon ratio. The agreement is excellent for deuterium, close but formally discrepant for ^4He, and off by a factor of two for ^7Li (this anomaly is known as the cosmological lithium problem); in the latter two cases, there are substantial systematic uncertainties. Nonetheless, the general consistency with abundances predicted by BBN is strong evidence for the Big Bang, as the theory is the only known explanation for the relative abundances of light elements, and it is virtually impossible to "tune" the Big Bang to produce much more or less than 20–30% helium. Indeed, there is no obvious reason outside of the Big Bang that, for example, the young universe (i.e., before star formation, as determined by studying matter supposedly free of stellar nucleosynthesis products) should have more helium than deuterium or more deuterium than ^3He, and in constant ratios, too. Galactic evolution and distribution Detailed observations of the morphology and distribution of galaxies and quasars are in agreement with the current state of the Big Bang theory. A combination of observations and theory suggests that the first quasars and galaxies formed about a billion years after the Big Bang, and since then, larger structures have been forming, such as galaxy clusters and superclusters. Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the early universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that formed relatively recently appear markedly different from galaxies formed at similar distances but shortly after the Big Bang. These observations are strong arguments against the steady-state model. Observations of star formation, galaxy and quasar distributions and larger structures, agree well with Big Bang simulations of the formation of structure in the universe, and are helping to complete details of the theory. Primordial gas clouds In 2011, astronomers found what they believe to be pristine clouds of primordial gas by analyzing absorption lines in the spectra of distant quasars. Before this discovery, all other astronomical objects had been observed to contain heavy elements that are formed in stars. Although the observations were sensitive to carbon, oxygen, and silicon, these three elements were not detected in these two clouds.
Since the clouds of gas have no detectable levels of heavy elements, they likely formed in the first few minutes after the Big Bang, during BBN. Other lines of evidence The age of the universe as estimated from the Hubble expansion and the CMB is now in good agreement with other estimates using the ages of the oldest stars, both as measured by applying the theory of stellar evolution to globular clusters and through radiometric dating of individual Population II stars. It is also in good agreement with age estimates based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background. The agreement of independent measurements of this age supports the Lambda-CDM (ΛCDM) model, since the model is used to relate some of the measurements to an age estimate, and all estimates turn out to agree. Still, some observations of objects from the relatively early universe (in particular quasar APM 08279+5255) raise concern as to whether these objects had enough time to form so early in the ΛCDM model. The prediction that the CMB temperature was higher in the past has been experimentally supported by observations of very low temperature absorption lines in gas clouds at high redshift. This prediction also implies that the amplitude of the Sunyaev–Zel'dovich effect in clusters of galaxies does not depend directly on redshift. Observations have found this to be roughly true, but this effect depends on cluster properties that do change with cosmic time, making precise measurements difficult. Future observations Future gravitational-wave observatories might be able to detect primordial gravitational waves, relics of the early universe, from less than a second after the Big Bang. Problems and related issues in physics As with any theory, a number of mysteries and problems have arisen as a result of the development of the Big Bang theory. Some of these mysteries and problems have been resolved while others are still outstanding. Proposed solutions to some of the problems in the Big Bang model have revealed new mysteries of their own. For example, the horizon problem, the magnetic monopole problem, and the flatness problem are most commonly resolved with inflationary theory, but the details of the inflationary universe are still left unresolved and many, including some founders of the theory, say it has been disproven. What follows is a list of the mysterious aspects of the Big Bang theory still under intense investigation by cosmologists and astrophysicists. Baryon asymmetry It is not yet understood why the universe has more matter than antimatter. It is generally assumed that when the universe was young and very hot it was in statistical equilibrium and contained equal numbers of baryons and antibaryons. However, observations suggest that the universe, including its most distant parts, is made almost entirely of matter. A process called baryogenesis was hypothesized to account for the asymmetry. For baryogenesis to occur, the Sakharov conditions must be satisfied. These require that baryon number not be conserved, that C-symmetry and CP-symmetry be violated, and that the universe depart from thermodynamic equilibrium. All these conditions occur in the Standard Model, but the effects are not strong enough to explain the present baryon asymmetry.
Dark energy Measurements of the redshift–magnitude relation for type Ia supernovae indicate that the expansion of the universe has been accelerating since the universe was about half its present age. To explain this acceleration, general relativity requires that much of the energy in the universe consists of a component with large negative pressure, dubbed "dark energy". Dark energy, though speculative, solves numerous problems. Measurements of the cosmic microwave background indicate that the universe is very nearly spatially flat, and therefore according to general relativity the universe must have almost exactly the critical density of mass/energy. But the mass density of the universe can be measured from its gravitational clustering, and is found to have only about 30% of the critical density. Since theory suggests that dark energy does not cluster in the usual way, it is the best explanation for the "missing" energy density. Dark energy also helps to explain two geometrical measures of the overall curvature of the universe, one using the frequency of gravitational lenses, and the other using the characteristic pattern of the large-scale structure as a cosmic ruler. Negative pressure is believed to be a property of vacuum energy, but the exact nature and existence of dark energy remains one of the great mysteries of the Big Bang. Results from the WMAP team in 2008 are in accordance with a universe that consists of 73% dark energy, 23% dark matter, 4.6% regular matter and less than 1% neutrinos. According to theory, the energy density in matter decreases with the expansion of the universe, but the dark energy density remains constant (or nearly so) as the universe expands. Therefore, matter made up a larger fraction of the total energy of the universe in the past than it does today, but its fractional contribution will fall in the far future as dark energy becomes even more dominant. The dark energy component of the universe has been explained by theorists using a variety of competing theories including Einstein's cosmological constant but also extending to more exotic forms of quintessence or other modified gravity schemes. The cosmological constant problem, sometimes called the "most embarrassing problem in physics", results from the apparent discrepancy between the measured energy density of dark energy and the one naively predicted from Planck units. Dark matter During the 1970s and the 1980s, various observations showed that there is not sufficient visible matter in the universe to account for the apparent strength of gravitational forces within and between galaxies. This led to the idea that up to 90% of the matter in the universe is dark matter that does not emit light or interact with normal baryonic matter. In addition, the assumption that the universe is mostly normal matter led to predictions that were strongly inconsistent with observations. In particular, the universe today is far more lumpy and contains far less deuterium than can be accounted for without dark matter. While dark matter has always been controversial, it is inferred from various observations: the anisotropies in the CMB, galaxy cluster velocity dispersions, large-scale structure distributions, gravitational lensing studies, and X-ray measurements of galaxy clusters. Indirect evidence for dark matter comes from its gravitational influence on other matter, as no dark matter particles have been observed in laboratories.
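As a quick consistency check on the budget just quoted (simple arithmetic on the stated WMAP-era figures, not an independent measurement), the 23% dark matter and 4.6% baryonic matter shares imply that the large majority of all matter is dark:

```python
# Share of all matter that is dark, from the WMAP-era budget quoted
# above: 23% dark matter and 4.6% baryonic matter of the total
# energy density.
omega_dm = 0.23
omega_baryon = 0.046

dark_share = omega_dm / (omega_dm + omega_baryon)
print(f"dark matter share of all matter: {dark_share:.0%}")  # ~83%
```

This is broadly consistent with the earlier statement that up to 90% of the matter in the universe may be dark.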
Many particle physics candidates for dark matter have been proposed, and several projects to detect them directly are underway. Additionally, there are outstanding problems associated with the currently favored cold dark matter model which include the dwarf galaxy problem and the cuspy halo problem. Alternative theories have been proposed that do not require a large amount of undetected matter, but instead modify the laws of gravity established by Newton and Einstein; yet no alternative theory has been as successful as the cold dark matter proposal in explaining all extant observations. Horizon problem The horizon problem results from the premise that information cannot travel faster than light. In a universe of finite age this sets a limit—the particle horizon—on the separation of any two regions of space that are in causal contact. The observed isotropy of the CMB is problematic in this regard: if the universe had been dominated by radiation or matter at all times up to the epoch of last scattering, the particle horizon at that time would correspond to about 2 degrees on the sky. There would then be no mechanism to cause wider regions to have the same temperature. A resolution to this apparent inconsistency is offered by inflationary theory in which a homogeneous and isotropic scalar energy field dominates the universe at some very early period (before baryogenesis). During inflation, the universe undergoes exponential expansion, and the particle horizon expands much more rapidly than previously assumed, so that regions presently on opposite sides of the observable universe are well inside each other's particle horizon. The observed isotropy of the CMB then follows from the fact that this larger region was in causal contact before the beginning of inflation. Heisenberg's uncertainty principle predicts that during the inflationary phase there would be quantum thermal fluctuations, which would be magnified to a cosmic scale. These fluctuations served as the seeds for all the current structures in the universe. Inflation predicts that the primordial fluctuations are nearly scale invariant and Gaussian, which has been accurately confirmed by measurements of the CMB. If inflation occurred, exponential expansion would push large regions of space well beyond our observable horizon. An issue related to the classic horizon problem arises because in most standard cosmological inflation models, inflation ceases well before electroweak symmetry breaking occurs, so inflation should not be able to prevent large-scale discontinuities in the electroweak vacuum since distant parts of the observable universe were causally separate when the electroweak epoch ended. Magnetic monopoles The magnetic monopole objection was raised in the late 1970s. Grand Unified Theories (GUTs) predicted topological defects in space that would manifest as magnetic monopoles. These objects would be produced efficiently in the hot early universe, resulting in a density much higher than is consistent with observations, given that no monopoles have been found. This problem is resolved by cosmic inflation, which removes all point defects from the observable universe, in the same way that it drives the geometry to flatness. Flatness problem The flatness problem (also known as the oldness problem) is an observational problem associated with a FLRW metric. The universe may have positive, negative, or zero spatial curvature depending on its total energy density.
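The connection between density and curvature invoked here can be written down explicitly. For orientation, the standard Friedmann equation (quoted as background rather than derived in the text) reads

H^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2}, \qquad \rho_c \equiv \frac{3H^2}{8\pi G},

so that kc^2/a^2 = (8\pi G/3)(\rho - \rho_c): the sign of the spatial curvature k follows the sign of \rho - \rho_c, and space is flat exactly when the density equals the critical density.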
Curvature is negative if its density is less than the critical density; positive if greater; and zero at the critical density, in which case space is said to be flat. Observations indicate the universe is consistent with being flat. The problem is that any small departure from the critical density grows with time, and yet the universe today remains very close to flat. Given that a natural timescale for departure from flatness might be the Planck time, 10^−43 seconds, the fact that the universe has reached neither a heat death nor a Big Crunch after billions of years requires an explanation. For instance, even at the relatively late age of a few minutes (the time of nucleosynthesis), the density of the universe must have been within one part in 10^14 of its critical value, or it would not exist as it does today. Ultimate fate of the universe Before observations of dark energy, cosmologists considered two scenarios for the future of the universe. If the mass density of the universe were greater than the critical density, then the universe would reach a maximum size and then begin to collapse. It would become denser and hotter again, ending with a state similar to that in which it started—a Big Crunch. Alternatively, if the density in the universe were equal to or below the critical density, the expansion would slow down but never stop. Star formation would cease with the consumption of interstellar gas in each galaxy; stars would burn out, leaving white dwarfs, neutron stars, and black holes. Collisions between these would result in mass accumulating into larger and larger black holes. The average temperature of the universe would very gradually asymptotically approach absolute zero—a Big Freeze. Moreover, if protons are unstable, then baryonic matter would disappear, leaving only radiation and black holes. Eventually, black holes would evaporate by emitting Hawking radiation. The entropy of the universe would increase to the point where no organized form of energy could be extracted from it, a scenario known as heat death. Modern observations of accelerating expansion imply that more and more of the currently visible universe will pass beyond our event horizon and out of contact with us. The eventual result is not known. The ΛCDM model of the universe contains dark energy in the form of a cosmological constant. This theory suggests that only gravitationally bound systems, such as galaxies, will remain together, and they too will be subject to heat death as the universe expands and cools. Other explanations of dark energy, called phantom energy theories, suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion in a so-called Big Rip. Misconceptions One of the common misconceptions about the Big Bang model is that it fully explains the origin of the universe. However, the Big Bang model does not describe how energy, time, and space were caused, but rather it describes the emergence of the present universe from an ultra-dense and high-temperature initial state. It is misleading to visualize the Big Bang by comparing its size to everyday objects. When the size of the universe at the Big Bang is described, it refers to the size of the observable universe, and not the entire universe. Hubble's law predicts that galaxies that are beyond the Hubble distance recede faster than the speed of light. However, special relativity applies only to motion through space, not to the expansion of space itself.
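The Hubble distance mentioned here is easy to estimate. The sketch below uses an assumed round value of H0 = 70 km/s/Mpc purely for illustration (the text does not quote a specific Hubble constant):

```python
# Rough estimate of the Hubble distance c/H0, beyond which Hubble's
# law gives recession speeds exceeding c. H0 = 70 km/s/Mpc is an
# assumed round value, used only for illustration.
c_km_s = 299792.458      # speed of light, km/s
H0 = 70.0                # Hubble constant, km/s per Mpc (assumed)
LY_PER_MPC = 3.2616e6    # light-years in one megaparsec

d_hubble_mpc = c_km_s / H0                       # ~4300 Mpc
d_hubble_gly = d_hubble_mpc * LY_PER_MPC / 1e9   # ~14 billion ly

print(f"Hubble distance: {d_hubble_mpc:.0f} Mpc "
      f"(~{d_hubble_gly:.1f} billion light-years)")
```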
Hubble's law describes velocity that results from expansion of space, rather than through space. Astronomers often refer to the cosmological redshift as a Doppler shift, which can lead to a misconception. Although similar, the cosmological redshift is not identical to the classically derived Doppler redshift because most elementary derivations of the Doppler redshift do not accommodate the expansion of space. Accurate derivation of the cosmological redshift requires the use of general relativity, and while a treatment using simpler Doppler effect arguments gives nearly identical results for nearby galaxies, interpreting the redshift of more distant galaxies as due to the simplest Doppler redshift treatments can cause confusion. Pre–Big Bang cosmology The Big Bang explains the evolution of the universe from a starting density and temperature that is well beyond humanity's capability to replicate, so extrapolations to the most extreme conditions and earliest times are necessarily more speculative. Lemaître called this initial state the "primeval atom" while Gamow called the material "ylem". How the initial state of the universe originated is still an open question, but the Big Bang model does constrain some of its characteristics. For example, specific laws of nature most likely came into existence in a random way, but as inflation models show, some combinations of these are far more probable. A spatially flat universe implies a balance between gravitational potential energy and other energy forms, requiring no additional energy to be created. The Big Bang theory, built upon the equations of classical general relativity, indicates a singularity at the origin of cosmic time, and such an infinite energy density may be a physical impossibility. However, the physical theories of general relativity and quantum mechanics as currently realized are not applicable before the Planck epoch, and correcting this will require the development of a correct treatment of quantum gravity. Certain quantum gravity treatments, such as the Wheeler–DeWitt equation, imply that time itself could be an emergent property. As such, physics may conclude that time did not exist before the Big Bang. While it is not known what could have preceded the hot dense state of the early universe or how and why it originated, or even whether such questions are sensible, speculation abounds on the subject of "cosmogony". Some speculative proposals in this regard, each of which entails untested hypotheses, are: The simplest models, in which the Big Bang was caused by quantum fluctuations. That scenario had very little chance of happening, but, according to the totalitarian principle, even the most improbable event will eventually happen. It took place instantly, from our perspective, due to the absence of perceived time before the Big Bang. Models in which the whole of spacetime is finite, including the Hartle–Hawking no-boundary condition. For these cases, the Big Bang does represent the limit of time but without a singularity. In such a case, the universe is self-sufficient. Brane cosmology models, in which inflation is due to the movement of branes in string theory; the pre-Big Bang
adapted to the new lager style of brewing. Due to their Bavarian accent, citizens of Munich pronounced "Einbeck" as "ein Bock" ("a billy goat"), and thus the beer became known as "bock". As a visual pun, a goat often appears on bock labels. Bock is historically associated with special occasions, often religious festivals such as Christmas, Easter or Lent (the latter as Fastenbock). Bocks have a long history of being brewed and consumed by Bavarian monks as a source of nutrition during times of fasting. Styles of bock Traditional bock Traditional bock is a sweet, relatively strong (6.3–7.2% by volume), lightly hopped (20–27 IBUs) lager. The beer should be clear, and color can range from light copper to brown, with a bountiful and persistent off-white head. The aroma should be malty and toasty, possibly with hints of alcohol, but no detectable hops or fruitiness. The mouthfeel is smooth, with low to moderate carbonation and no astringency. The taste is rich and toasty, sometimes with a bit of caramel. Again, hop presence is low to undetectable, providing just enough bitterness so that the sweetness is not cloying and the aftertaste is muted. The following commercial products are indicative of the style: Christmas Bock (Gunpowder Falls Brewing Company), Point Bock (Stevens Point Brewery), Einbecker Ur-Bock Dunkel, Pennsylvania Brewing St. Nick Bock, Aass Bock, Great Lakes Rockefeller Bock, Stegmaier Brewhouse Bock, and Nashville Brewing Company's Nashville Bock. Maibock The maibock style – also known as helles bock or heller bock or even lente bock in the Netherlands – is a helles lager brewed to bock strength; therefore, still as strong as traditional bock, but lighter in colour and with more hop presence. It is a fairly recent development compared to other styles of bock beers, frequently associated with springtime and the month of May. Colour can range from deep gold to light amber with a large, creamy, persistent white head, and moderate to moderately high carbonation, while alcohol content ranges from 6.3% to 7.4% by volume. The flavour is typically less malty than a traditional bock, and may be drier, hoppier, and more bitter, but still with a relatively low hop flavour, with a mild spicy or peppery quality from the hops, increased carbonation and alcohol content. The following commercial products are indicative of the style: Gunpowder Falls Brewing Company Maibock, Ayinger Maibock, Mahr's Bock, Hacker-Pschorr Hubertus Bock, Capital Maibock, Einbecker Mai-Urbock, Hofbräu Maibock, Victory St. Boisterous, Gordon Biersch Blonde Bock, Smuttynose Maibock, Old Dominion Brewing Company Big Thaw Bock, Brewery 85's Quittin' Time, Rogue Dead Guy Ale, Franconia Brewing Company Maibock Ale, Church Street maibock, and Tröegs Cultivator. Doppelbock Doppelbock or double bock is a stronger version of traditional bock that was first brewed in Munich by the Paulaner Friars, a Franciscan order founded by St. Francis of Paola. Historically, doppelbock was high in alcohol and sweet. The story is told that it served as "liquid bread" for the Friars during times of fasting, when solid
food was not permitted. However, historian Mark Dredge, in his book A Brief History of Lager, says that this story is myth, and that the monks produced doppelbock to supplement their order's vegetarian diet all year. Today, doppelbock is still strong — ranging from 7% to 12% or more by volume. It is clear, with colour ranging from dark gold, for the paler version, to dark brown with ruby highlights for the darker versions. It has a large, creamy, persistent head (although head retention may be impaired by alcohol in the stronger versions). The aroma is intensely malty, with some toasty notes, and possibly some alcohol presence as well; darker versions may have a chocolate-like or fruity aroma. The flavour is very rich and malty, with noticeable alcoholic strength, and little or no detectable hops (16–26 IBUs). Paler versions may have a drier finish. The monks who originally brewed doppelbock named their beer "Salvator" (literally "Savior", but actually a malapropism for "Sankt Vater", "St. Father", originally brewed for the feast of St. Francis of Paola on 2 April, which often falls during Lent), which today is trademarked by Paulaner. Brewers of modern doppelbocks often add "-ator" to their beer's name as a signpost of the style; there are 200 "-ator" doppelbock names registered with the German patent office.
The following are representative examples of the style: Paulaner Salvator, Ayinger Celebrator, Weihenstephaner Korbinian, Andechser Doppelbock Dunkel, Spaten Optimator, Augustiner Bräu Maximator, Tucher Bajuvator, Weltenburger Kloster Asam-Bock, Capital Autumnal Fire, EKU 28, Eggenberg Urbock 23°, Bell's Consecrator, Moretti La Rossa, Samuel Adams Double Bock, Tröegs Tröegenator Double Bock, Wasatch Brewery Devastator, Great Lakes Doppelrock, Abita Andygator, Wolverine State Brewing Company Predator, Burly Brewing's Burlynator, Monteith's Doppel Bock, and Christian Moerlein Emancipator Doppelbock. Eisbock Eisbock is a traditional specialty beer of the Kulmbach district of Bavaria, Germany,
South African linguists. But in contemporary decolonial South African linguistics, the term Ntu languages is used. Origin The Bantu languages descend from a common Proto-Bantu language, which is believed to have been spoken in what is now Cameroon in Central Africa. An estimated 2,500–3,000 years ago (1000 BC to 500 BC), speakers of the Proto-Bantu language began a series of migrations eastward and southward, carrying agriculture with them. This Bantu expansion came to dominate Sub-Saharan Africa east of Cameroon, an area where Bantu peoples now constitute nearly the entire population. Some other sources estimate the Bantu Expansion started closer to 3000 BC. The technical term Bantu, meaning "human beings" or simply "people", was first used by Wilhelm Bleek (1827–1875), as the concept is reflected in many of the languages of this group. A common characteristic of Bantu languages is that they use words such as muntu or mutu for "human being" or in simplistic terms "person", and the plural prefix for human nouns starting with mu- (class 1) in most languages is ba- (class 2), thus giving bantu for "people". Bleek, and later Carl Meinhof, pursued extensive studies comparing the grammatical structures of Bantu languages. Classification The most widely used classification is an alphanumeric coding system developed by Malcolm Guthrie in his 1948 classification of the Bantu languages. It is mainly geographic. The term 'narrow Bantu' was coined by the Benue–Congo Working Group to distinguish Bantu as recognized by Guthrie, from the Bantoid languages not recognized as Bantu by Guthrie. In recent times, the distinctiveness of Narrow Bantu as opposed to the other Southern Bantoid languages has been called into doubt (cf. Piron 1995, Williamson & Blench 2000, Blench 2011), but the term is still widely used. There is no true genealogical classification of the (Narrow) Bantu languages. Until recently most attempted classifications only considered languages that happen to fall within traditional Narrow Bantu, but there seems to be a continuum with the related languages of South Bantoid. At a broader level, the family is commonly split in two depending on the reflexes of proto-Bantu tone patterns: Many Bantuists group together parts of zones A through D (the extent depending on the author) as Northwest Bantu or Forest Bantu, and the remainder as Central Bantu or Savanna Bantu. The two groups have been described as having mirror-image tone systems: where Northwest Bantu has a high tone in a cognate, Central Bantu languages generally have a low tone, and vice versa. Northwest Bantu is more divergent internally than Central Bantu, and perhaps less conservative due to contact with non-Bantu Niger–Congo languages; Central Bantu is likely the innovative line cladistically. Northwest Bantu is clearly not a coherent family, but even for Central Bantu the evidence is lexical, with little evidence that it is a historically valid group. Another attempt at a detailed genetic classification to replace the Guthrie system is the 1999 "Tervuren" proposal of Bastin, Coupez, and Mann. However, it relies on lexicostatistics, which, because of its reliance on overall similarity rather than shared innovations, may predict spurious groups of conservative languages that are not closely related. 
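To see concretely why a similarity-based method can lump conservative languages together, consider a deliberately toy sketch; the "languages" and word lists below are invented for illustration only and are not data from any real Bantu language:

```python
# Toy illustration of lexicostatistic similarity: the score is simply
# the proportion of meaning slots where two word lists match. All word
# lists here are invented for illustration; they are not real data.
def lexicostatistic_similarity(list_a, list_b):
    """Fraction of meaning slots whose forms coincide in both lists."""
    matches = sum(1 for a, b in zip(list_a, list_b) if a == b)
    return matches / len(list_a)

proto        = ["ntu", "tano", "jua", "maji", "moto"]
conservative = ["ntu", "tano", "jua", "maji", "roto"]   # changed little
innovative   = ["ntu", "sanu", "zuba", "menzi", "moto"] # innovated a lot

# A conservative language scores as "close" to the proto-list merely
# because it retained old forms; this is shared retention, not the
# shared innovation that a genealogical subgroup requires.
print(lexicostatistic_similarity(proto, conservative))  # 0.8
print(lexicostatistic_similarity(proto, innovative))    # 0.4
```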
Meanwhile, Ethnologue has added languages to the Guthrie classification which Guthrie overlooked, while removing the Mbam languages (much of zone A), and shifting some languages between groups (much of zones D and E to a new zone J, for example, and part of zone L to K, and part of M to F) in an apparent effort at a semi-genetic, or at least semi-areal, classification. This has been criticized for sowing confusion in one of the few unambiguous ways to distinguish Bantu languages. Nurse & Philippson (2006) evaluate many proposals for low-level groups of Bantu languages, but the result is not a complete portrayal of the family. Glottolog has incorporated many of these into their classification. The languages that share Dahl's law may also form a valid group, Northeast Bantu. Various low-level groups are fairly uncontroversial, though they continue to be revised. The development of a rigorous genealogical classification of many branches of Niger–Congo, not just Bantu, is hampered by insufficient data. Computational phylogenetic classifications A simplified phylogeny of the northwestern branches of Bantu was produced by Grollemund (2012). Other computational phylogenetic analyses of Bantu include Currie et al. (2013), Grollemund et al. (2015), Rexova et al. (2006), Holden et al. (2016), and Whiteley et al. (2018). Glottolog classification Glottolog (2021) does not consider the older geographic classification by Guthrie relevant for its ongoing classification based on more recent linguistic studies, and divides Bantu into four main branches (Bantu A-B10-B20-B30, Central-Western Bantu, East Bantu and Mbam-Bube-Jarawan). Language structure Guthrie reconstructed both the phonemic inventory and the vocabulary of Proto-Bantu. The most prominent grammatical characteristic of Bantu languages is the extensive use of affixes (see Sotho grammar and Ganda noun classes for detailed discussions of these affixes). Each noun belongs to a class, and each language may have several numbered classes, somewhat like grammatical gender in European languages. The class is indicated by a prefix that is part of the noun, as well as agreement markers on verb and qualificative roots connected with the noun. Plural is indicated by a change of class, with a resulting change of prefix. All Bantu languages are agglutinative. The verb has a number of prefixes, though in the western languages these are often treated as independent words. In Swahili, for example, Kitoto kidogo kimekisoma (for comparison, Kamwana kadoko kariverenga in Shona language) means 'The small child has read it [a book]'. Kitoto 'child'
governs the adjective prefix ki- (representing the diminutive form of the word) and the verb subject prefix ki-. Then comes the perfect tense marker -me- and an object marker -ki- agreeing with implicit kitabu 'book' (from Arabic kitab). Pluralizing to 'children' gives Vitoto vidogo vimekisoma (Vana vadoko variverenga in Shona), and pluralizing to 'books' (vitabu) gives Vitoto vidogo vimevisoma. Bantu words are typically made up of open syllables of the type CV (consonant-vowel) with most languages having syllables exclusively of this type. The Bushong language recorded by Vansina, however, has final consonants, while slurring of the final syllable (though written) is reported as common among the Tonga of Malawi. The morphological shape of Bantu words is typically CV, VCV, CVCV, VCVCV, etc.; that is, any combination of CV (with possibly a V- syllable at the start). In other words, a strong claim for this language family is that almost all words end in a vowel, precisely because closed syllables (CVC) are not permissible in most of the documented languages, as far as is understood. This tendency to avoid consonant clusters in some positions is important when words are imported from English or other non-Bantu languages. An example from Chewa: the word "school", borrowed from English, and then transformed to fit the sound patterns of this language, is sukulu. That is, sk- has been broken up by inserting an epenthetic -u-; -u has also been added at the end of the word. Another example is buledi for "bread". Similar effects are seen in loanwords for other non-African CV languages like Japanese. However, a clustering of sounds at the beginning of a syllable can be readily observed in such languages as Shona, and the Makua languages. With few exceptions, such as Kiswahili and Rutooro, Bantu languages are tonal and have two to four register tones. Reduplication Reduplication is a common morphological phenomenon in Bantu languages and is usually used to indicate frequency or intensity of the action signalled by the (unreduplicated) verb stem. Example: in Swahili piga means "strike", pigapiga means "strike repeatedly". Well-known words and names that have reduplication include: Bafana Bafana, a football team Chipolopolo, a football team Eric Djemba-Djemba, a footballer Lomana LuaLua, a footballer Ngorongoro, a conservation area Repetition emphasizes the repeated word in the context that it is used.
For instance, "Mwenda pole hajikwai" uses pole once, while "Pole pole ndio mwendo" uses it twice to emphasize the consistency of the slow pace. The meaning of the former in translation is, "He who goes slowly doesn't trip," and that of the latter is, "A slow but steady pace wins the race." Haraka haraka would mean hurrying just for the sake of hurrying, reckless hurry, as in "Njoo! Haraka haraka" [come here! Hurry, hurry]. In contrast, there are some words in some of the languages in which reduplication has the opposite meaning. It usually denotes short durations and/or lower intensity of the action, and also means a few repetitions or a little bit more. Example 1: In Xitsonga and (Chi)Shona, famba means "walk" while famba-famba means "walk around". Example 2: in isiZulu and SiSwati hamba means "go", hambahamba means "go a little bit, but not much". Example 3: in both of the above languages shaya means "strike", shayashaya means "strike a few more times lightly, but not heavy strikes and not too many times". Example 4: In Shona, kwenya means "scratch", kwenyakwenya means "scratch excessively or a lot". Noun class Bantu nouns fall into a set of numbered nominal classes. Syntax Virtually all Bantu languages have a subject–verb–object word order, with some exceptions, such as the Nen language, which has subject–object–verb word order. By country Following is an incomplete list of the principal Bantu languages of each country. Included are those languages that constitute at least 1% of the population and have at least 10% of the number of speakers of the largest Bantu language in the country. Most languages are referred to in English without the class prefix (Swahili, Tswana, Ndebele), but are sometimes seen with the (language-specific) prefix (Kiswahili, Setswana, Sindebele). In a few cases prefixes are used to distinguish languages with the same root in their name, such as Tshiluba and Kiluba (both Luba), Umbundu and Kimbundu (both Mbundu). The prefixless form typically does not occur in the language itself, but is the basis for other words based on the ethnicity. So, in the country of Botswana the people are the Batswana, one person is a Motswana, and the language is Setswana; and in Uganda, centred on the kingdom of Buganda, the dominant ethnicity are the Baganda (singular
component that separates moving parts and takes a load Bridge bearing, a component separating a bridge pier and deck Bearing BTS Station in Bangkok See also Bearings (album), by Ronnie Montrose
1969; the others remained on alert through 1972. In April 1972, the last Bomarc B in U.S. Air Force service was retired at McGuire and the 46th ADMS inactivated and the base was deactivated. In the era of intercontinental ballistic missiles, the Bomarc, designed to intercept relatively slow manned bombers, had become a useless asset. The remaining Bomarc missiles were used by all armed services as high-speed target drones for tests of other air-defense missiles. The Bomarc A and Bomarc B targets were designated as CQM-10A and CQM-10B, respectively. Following the accident, the McGuire complex has never been sold or converted to other uses and remains in Air Force ownership, making it the most intact site of the eight in the US. It has been nominated to the National Register of Historic Places. Although a number of IM-99/CIM-10 Bomarcs have been placed on public display, because of concerns about the possible environmental hazards of the thoriated magnesium structure of the airframe several have been removed from public view. Russ Sneddon, director of the Air Force Armament Museum, Eglin Air Force Base, Florida, provided information about missing CIM-10 exhibit airframe serial 59-2016, one of the museum's original artifacts from its founding in 1975 and donated by the 4751st Air Defense Squadron at Hurlburt Field, Eglin Auxiliary Field 9, Eglin AFB. As of December 2006, the suspect missile was stored in a secure compound behind the Armaments Museum. In December 2010, the airframe was still on premises, but partly dismantled. Canada The Bomarc Missile Program was highly controversial in Canada. The Progressive Conservative government of Prime Minister John Diefenbaker initially agreed to deploy the missiles, and shortly thereafter controversially scrapped the Avro Arrow, a supersonic manned interceptor aircraft, arguing that the missile program made the Arrow unnecessary. Initially, it was unclear whether the missiles would be equipped with nuclear warheads. By 1960 it became known that the missiles were to have a nuclear payload, and a debate ensued about whether Canada should accept nuclear weapons. Ultimately, the Diefenbaker government decided that the Bomarcs should not be equipped with nuclear warheads. The dispute split the Diefenbaker Cabinet, and led to the collapse of the government in 1963. The Official Opposition and Liberal Party leader Lester B. Pearson originally was against nuclear missiles, but reversed his personal position and argued in favor of accepting nuclear warheads. He won the 1963 election, largely on the basis of this issue, and his new Liberal government proceeded to accept nuclear-armed Bomarcs, with the first being deployed on 31 December 1963. When the nuclear warheads were deployed, Pearson's wife, Maryon, resigned her honorary membership in the anti-nuclear weapons group, Voice of Women. Canadian operational deployment of the Bomarc involved the formation of two specialized Surface/Air Missile squadrons. The first to begin operations was No. 446 SAM Squadron at RCAF Station North Bay, which was the command and control center for both squadrons. With construction of the compound and related facilities completed in 1961, the squadron received its Bomarcs in 1961, without nuclear warheads. The squadron became fully operational from 31 December 1963, when the nuclear warheads arrived, until disbanding on 31 March 1972.
All the warheads were stored separately and under control of Detachment 1 of the USAF 425th Munitions Maintenance Squadron at Stewart Air Force Base. During operational service, the Bomarcs were maintained on stand-by, on a 24-hour basis, but were never fired, although the squadron test-fired the missiles at Eglin AFB, Florida, on annual winter retreats. No. 447 SAM Squadron, operating out of RCAF Station La Macaza, Quebec, was activated on 15 September 1962, although warheads were not delivered until late 1963. The squadron followed the same operational procedures as No. 446, its sister squadron. With the passage of time the operational capability of the 1950s-era Bomarc system no longer met modern requirements; the Department of National Defence deemed that the Bomarc missile defense was no longer a viable system, and ordered both squadrons to be stood down in 1972. The bunkers and ancillary facilities remain at both former sites. Variants XF-99 (experimental for booster research) XF-99A/XIM-99A (experimental for ramjet research) YF-99A/YIM-99A (service-test) IM-99A/CIM-10A (initial production) IM-99B/CIM-10B ("advanced") CQM-10A (target drone developed from CIM-10A) CQM-10B (target drone developed from CIM-10B) Operators Canada: Royal Canadian Air Force from 1955 to 1968; Canadian Forces from 1968 to 1972 446 SAM Squadron: 28 IM-99B, CFB North Bay, Ontario, 1962–1972 447 SAM Squadron: 28 IM-99B, La Macaza, Quebec (La Macaza – Mont Tremblant International Airport), 1962–1972 United States Air Force Air (later Aerospace) Defense Command 6th Air Defense Missile Squadron, 56 IM-99A Activated on 1 February 1959 Assigned to: New York Air Defense Sector Inactivated 15 December 1964 Stationed at: Suffolk County Air Force Base Missile Annex, New York Bomarc site located 3 miles SW 22d Air Defense Missile Squadron: 28 IM-99A/28 IM-99B Activated on 15 September 1959 Assigned to: Washington Air Defense Sector Reassigned to: 33d Air Division, 1 April 1966 Reassigned to: 20th Air Division, 19 November 1969 Inactivated: 31 October 1972 Stationed at: Langley AFB, Virginia Bomarc site located 3 miles WNW 26th Air Defense Missile Squadron: 28 IM-99A/28 IM-99B Activated 1 March 1959 Assigned to: Boston Air Defense Sector Reassigned to: 35th Air Division, 1 April 1966 Reassigned to: 21st Air Division, 19 November 1969 Inactivated: 30 April 1972 Stationed at: Otis Air Force Base BOMARC site, Massachusetts Bomarc site located 1 mile NNW 30th Air Defense Missile Squadron: 28 IM-99A Activated on 1 June 1959 Assigned to Bangor Air Defense Sector Inactivated: 15 December 1964 Stationed at Dow AFB, Maine Bomarc site located 4 miles NNE 35th Air Defense Missile Squadron: 56 IM-99B Activated 1 June 1960 Assigned to Syracuse Air Defense Sector Reassigned to: Detroit Air Defense Sector, 4 September 1963 Reassigned to: 34th Air Division, 1 April 1966 Reassigned to: 35th Air Division, 15 September 1969 Inactivated: 31 December 1969 Stationed at: Niagara Falls Air Force Missile Site, New York 37th Air Defense Missile Squadron: 28 IM-99B Activated 1 March 1960 Assigned to 30th Air Division Reassigned to: Sault Sainte Marie Air Defense Sector, 1 April 1960 Reassigned to: Duluth Air Defense Sector, 1 October 1963 Reassigned to: 29th Air Division, 1 April 1966 Reassigned to: 23d Air Division, 19 November 1969 Inactivated 31 July 1972 Stationed at: Kincheloe AFB, Michigan Bomarc site located 19 miles NW at Raco 46th Air Defense Missile
Squadron: 28 IM-99A/56 IM-99B Activated 1 January 1959 Assigned to New York Air Defense Sector Reassigned to: 21st Air Division, 1 April 1966 Reassigned to: 35th Air Division, 1 December 1967 Reassigned to: 21st Air Division, 19 November 1969 Inactivated 31 October 1972 Stationed at: McGuire AFB, New Jersey Bomarc site located 4 miles ESE 74th Air Defense Missile Squadron: 28 IM-99B Activated 1 April 1960 Assigned to Duluth Air Defense Sector Reassigned to: 29th Air Division, 1 April 1966 Reassigned to: 23d Air Division, 19 November 1969 Inactivated 30 April 1972 Stationed at: Duluth International Airport, Minnesota Bomarc site located 10 miles NE 4751st Air Defense Missile Squadron Activated 15 January 1959 Assigned to 73d Air Division (Weapons) Reassigned to: 32d Air Division, 1 October 1959 Reassigned to: Montgomery Air Defense Sector, 1 July 1962 Reassigned to: Air Defense, Tactical Air Command, 1 September 1979 Inactivated 30 September 1979 Stationed at: Eglin Auxiliary Field #9 (Hurlburt Field), Florida Bomarc site located on Santa Rosa Island; Bomarc site located at Eglin Auxiliary Field #5 (Piccolo Field) Air Force Systems Command Cape Canaveral Air Force Station, Florida Launch Complex 4 (LC-4) was used for Bomarc testing and development launches 2 February 1956 – 15 April 1960 (17 launches). Vandenberg Air Force Base, California Two launch sites, BOM-1 and BOM-2, were used by the United States Navy for Bomarc launches against aerial targets. The first launch took place on 25 August 1966. The last two launches occurred on 14 July 1982. BOM-1: 49 launches; BOM-2: 38 launches. Locations under construction but not activated. Each site was programmed for 28 IM-99B missiles: Camp Adair, Oregon Charleston AFB, South Carolina Ethan Allen AFB, Vermont Paine Field, Washington Travis AFB, California Truax Field, Wisconsin Vandenberg AFB, California Surviving missiles Below is a list of museums or sites which have a Bomarc missile on display: Air Force Armament Museum, Eglin Air Force Base, Florida Air Force Space & Missile Museum, Cape Canaveral Air Force Station, Florida. It is on display in Hangar C. Alberta Aviation Museum, Edmonton, Alberta, Canada Canada Aviation and Space Museum, Ottawa, Ontario, Canada Hill Aerospace Museum, Hill Air Force Base, Utah Historical Electronics Museum, Linthicum, Maryland (display of AN/DPN-53, the first airborne pulse-doppler radar, used in the Bomarc) Illinois Soldiers & Sailors Home, Quincy, Illinois Keesler Air Force Base, Biloxi, Mississippi Museum of Aviation, Robins Air Force Base, Warner Robins, Georgia National Museum of Nuclear Science & History, Kirtland Air Force Base, Albuquerque, New Mexico National Museum of the United States Air Force, Wright-Patterson Air Force Base, Ohio Octave Chanute Aerospace Museum (former Chanute Air Force Base), Rantoul, Illinois; the museum closed on December 30, 2015 Peterson Air and Space Museum, Peterson Air Force Base, Colorado Strategic Air and Space Museum, Ashland, Nebraska U.S. Air Force History and Traditions Museum, Lackland Air Force Base, San Antonio, Texas Vandenberg Air Force Base (Space and Missile Heritage Center), California; the Bomarc there is not accessible to the public.
Impact on popular music The Bomarc missile captured the imagination of the American and Canadian popular music industry, giving rise to a pop music group, the Bomarcs (composed mainly of servicemen stationed on a Florida radar site that tracked Bomarcs), a record label, Bomarc Records, and a moderately successful Canadian pop group, The Beau Marks. See also References Bibliography Clearwater, John. Canadian Nuclear Weapons: The Untold Story of Canada's Cold War Arsenal. Toronto, Ontario, Canada: Dundurn Press, 1999. Clearwater, John. U.S. Nuclear Weapons in Canada. Toronto, Ontario, Canada: Dundurn Press, 1999. Cornett, Lloyd H. Jr. and Mildred W. Johnson. A Handbook of Aerospace Defense Organization 1946–1980. Peterson Air Force Base, Colorado: Office of History, Aerospace Defense Center, 1980. No ISBN. Gibson, James N. Nuclear Weapons of the United States:
located in the US and two in Canada.

Bomarc B
The liquid-fuel booster of the Bomarc A had several drawbacks. It took two minutes to fuel before launch, which could be a long time in high-speed intercepts, and its hypergolic propellants (hydrazine and nitric acid) were very dangerous to handle, leading to several serious accidents. As soon as high-thrust solid-fuel rockets became a reality in the mid-1950s, the USAF began to develop a new solid-fueled Bomarc variant, the IM-99B Bomarc B. It used a Thiokol XM51 booster and improved Marquardt RJ43-MA-7 (and finally RJ43-MA-11) ramjets. The first IM-99B was launched in May 1959, but problems with the new propulsion system delayed the first fully successful flight until July 1960, when a supersonic MQM-15A Regulus II drone was intercepted. Because the new booster took up less space in the missile, more ramjet fuel could be carried, increasing the missile's range. The terminal homing system was also improved, using the world's first pulse-Doppler search radar, the Westinghouse AN/DPN-53. All Bomarc Bs were equipped with the W-40 nuclear warhead. In June 1961, the first IM-99B squadron became operational, and the Bomarc B quickly replaced most Bomarc A missiles. On 23 March 1961, a Bomarc B successfully intercepted a Regulus II cruise missile, achieving the highest interception in the world up to that date. Boeing built 570 Bomarc missiles between 1957 and 1964: 269 CIM-10As and 301 CIM-10Bs.

In September 1958, Air Research & Development Command decided to transfer the Bomarc program from its testing site at Cape Canaveral Air Force Station to a new facility on Santa Rosa Island, immediately south of Eglin AFB's Hurlburt Field on the Gulf of Mexico. To operate the facility and to provide training and operational evaluation in the missile program, Air Defense Command established the 4751st Air Defense Wing (Missile) (4751st ADW) on 15 January 1958. The first launch from Santa Rosa took place on 15 January 1959.

Operational history
In 1955, to support a program which called for 40 squadrons of BOMARC (120 missiles to a squadron, for a total of 4,800 missiles), ADC reached a decision on the location of these 40 squadrons and suggested operational dates for each. The sequence was as follows: ... 1. McGuire 1/60 2. Suffolk 2/60 3. Otis 3/60 4. Dow 4/60 5. Niagara Falls 1/61 6. Plattsburgh 1/61 7. Kinross 2/61 8. K.I. Sawyer 2/61 9. Langley 2/61 10. Truax 3/61 11. Paine 3/61 12. Portland 3/61 ... At the end of 1958, ADC plans called for construction of the following BOMARC bases in the following order: 1. McGuire 2. Suffolk 3. Otis 4. Dow 5. Langley 6. Truax 7. Kinross 8. Duluth 9. Ethan Allen 10. Niagara Falls 11. Paine 12. Adair 13. Travis 14. Vandenberg 15. San Diego 16. Malmstrom 17. Grand Forks 18. Minot 19. Youngstown 20. Seymour-Johnson 21. Bunker Hill 22. Sioux Falls 23. Charleston 24. McConnell 25. Holloman 26. McCoy 27. Amarillo 28. Barksdale 29. Williams.

United States
The first USAF operational Bomarc squadron was the 46th Air Defense Missile Squadron (ADMS), organized on 1 January 1959 and activated on 25 March. The 46th ADMS was assigned to the New York Air Defense Sector at McGuire Air Force Base, New Jersey. The training program, under the 4751st Air Defense Wing, used technicians acting as instructors and was established for a four-month duration. Training included missile maintenance, SAGE operations, and launch procedures, including the launch of an unarmed missile at Eglin.
In September 1959 the squadron assembled at its permanent station, the Bomarc site near McGuire AFB, and trained for operational readiness. The first Bomarc-A missiles were fielded at McGuire on 19 September 1959, with Kincheloe AFB getting the first operational IM-99Bs. While several of the squadrons replicated earlier fighter-interceptor unit numbers, they were all new organizations with no previous historical counterpart. ADC's initial plans called for some 52 Bomarc sites around the United States with 120 missiles each, but as defense budgets decreased during the 1950s the number of sites dropped substantially. Ongoing development and reliability problems did not help, nor did Congressional debate over the missile's usefulness and necessity. In June 1959, the Air Force authorized 16 Bomarc sites with 56 missiles each; the initial five would get the IM-99A, with the remainder getting the IM-99B. However, in March 1960, HQ USAF cut deployment to eight sites in the United States and two in Canada.

Bomarc incident
Within a year of operations, a Bomarc A with a nuclear warhead caught fire at McGuire AFB on 7 June 1960 after its on-board helium tank exploded. While the missile's explosives did not detonate, the heat melted the warhead and released plutonium, which the fire crews then spread. The Air Force and the Atomic Energy Commission cleaned up the site and covered it with concrete. This was the only major incident involving the weapon system. The site remained in operation for several years following the fire. Since its closure in 1972, the area has remained off limits, primarily due to low levels of plutonium contamination. Between 2002 and 2004, 21,998 cubic yards of contaminated debris and soil were shipped to what was then known as Envirocare, in Utah.

Modification and deactivation
In 1962, the US Air Force started using modified A-models as drones; following the October 1962 tri-service redesignation of aircraft and weapons systems, they became CQM-10As. Otherwise, the air defense missile squadrons maintained alert while making regular trips to Santa Rosa Island for training and firing practice. After the inactivation of the 4751st ADW(M) on 1 July 1962 and the transfer of Hurlburt to Tactical Air Command for air commando operations, the 4751st Air Defense Squadron (Missile) remained at Hurlburt and Santa Rosa Island for training purposes. In 1964, the liquid-fueled Bomarc-A sites and squadrons began to be deactivated. The sites at Dow and Suffolk County closed first. The remainder continued to be operational for several more years while the government started dismantling the air defense missile network. Niagara Falls was the first BOMARC B installation to close, in December 1969; the others remained on alert through 1972. In April 1972, the last Bomarc B in U.S. Air Force service was retired at McGuire, the 46th ADMS was inactivated, and the missile site was deactivated. In the era of the
are clear and flow through rocky country, leading to the suggestion that sediments mainly originate from the lower parts. Furthermore, its chemistry and color may contradict each other compared to the traditional Amazonian river classifications. The Branco River has pH 6–7 and low levels of dissolved organic carbon. Alfred Russel Wallace mentioned the coloration in "On the Rio Negro", a paper read at the 13 June 1853 meeting of the Royal Geographical Society, in which he said: "[The Rio Branco] is white to a remarkable degree, its waters being actually milky in appearance". Alexander von Humboldt attributed the color to the presence of silicates in the water, principally mica and talc. There is a visible contrast with the waters of the Rio Negro at the confluence of the two rivers. The Rio Negro is a blackwater river with dark tea-colored acidic water (pH 3.5–4.5) that contains high levels of dissolved organic carbon.

River capture
Until approximately 20,000 years ago the headwaters of the Branco River flowed not into the Amazon but, via the Takutu Graben in the Rupununi area of Guyana, towards the Caribbean. Currently, in the rainy season, much of the Rupununi area floods, with water draining both to the Amazon (via the Branco River) and to the Essequibo River.
the country.

Trolleybuses
In parallel to the development of the bus was the invention of the electric trolleybus, typically fed through trolley poles by overhead wires. The Siemens brothers, William in England and Ernst Werner in Germany, collaborated on the development of the trolleybus concept. Sir William first proposed the idea in an article in the Journal of the Society of Arts in 1881 as an "...arrangement by which an ordinary omnibus...would have a suspender thrown at intervals from one side of the street to the other, and two wires hanging from these suspenders; allowing contact rollers to run on these two wires, the current could be conveyed to the tram-car, and back again to the dynamo machine at the station, without the necessity of running upon rails at all." The first such vehicle, the Electromote, was made by his brother Dr. Ernst Werner von Siemens and presented to the public in 1882 in Halensee, Germany. Although this experimental vehicle fulfilled all the technical criteria of a typical trolleybus, it was dismantled in the same year after the demonstration. Max Schiemann opened a passenger-carrying trolleybus line in 1901 near Dresden, Germany. Although this system operated only until 1904, Schiemann had developed what is now the standard trolleybus current-collection system. In the early days, a few other methods of current collection were used. Leeds and Bradford became the first cities to put trolleybuses into service in Great Britain, on 20 June 1911.

Motor buses
In Siegerland, Germany, two passenger bus lines ran briefly, but unprofitably, in 1895 using a six-passenger motor carriage developed from the 1893 Benz Viktoria. Another commercial bus line using the same model of Benz omnibus ran for a short time in 1898 in the rural area around Llandudno, Wales. Germany's Daimler Motors Corporation also produced one of the earliest motor-bus models in 1898, selling a double-decker bus to the Motor Traction Company, which was first used on the streets of London on 23 April 1898. The vehicle accommodated up to 20 passengers, in an enclosed area below and on an open-air platform above. With the success and popularity of this bus, DMG expanded production, selling more buses to companies in London and, in 1899, to Stockholm and Speyer. Daimler also entered into a partnership with the British company Milnes and developed a new double-decker in 1902 that became the market standard. The first mass-produced bus model was the B-type double-decker bus, designed by Frank Searle and operated by the London General Omnibus Company; it entered service in 1910, and almost 3,000 had been built by the end of the decade. Hundreds saw military service on the Western Front during the First World War. The Yellow Coach Manufacturing Company, which rapidly became a major manufacturer of buses in the US, was founded in Chicago in 1923 by John D. Hertz. General Motors purchased a majority stake in 1925 and changed its name to the Yellow Truck and Coach Manufacturing Company. It then purchased the balance of the shares in 1943 to form the GM Truck and Coach Division. Models expanded in the 20th century, leading to the widespread introduction of the contemporary recognizable form of full-sized buses from the 1950s. The AEC Routemaster, developed in the 1950s, was a pioneering design and remains an icon of London to this day. The innovative design used lightweight aluminium and techniques developed in aircraft production during World War II.
As well as a novel weight-saving integral design, it also introduced for the first time on a bus independent front suspension, power steering, a fully automatic gearbox, and power-hydraulic braking.

Types
Formats include the single-decker bus, double-decker bus (both usually with a rigid chassis) and articulated bus (or 'bendy-bus'), the prevalence of which varies from country to country. High-capacity bi-articulated buses are also manufactured, as are passenger-carrying trailers, either towed behind a rigid bus (a bus trailer) or hauled by a truck (a trailer bus). Smaller midibuses have a lower capacity, and open-top buses are typically used for leisure purposes. In many new fleets, particularly in local transit systems, a shift to low-floor buses is occurring, primarily for easier accessibility. Coaches are designed for longer-distance travel and are typically fitted with individual high-backed reclining seats, seat belts, toilets, and audio-visual entertainment systems, and can operate at higher speeds with more capacity for luggage. Coaches may be single- or double-deckers, articulated, and often include a separate luggage compartment under the passenger floor. Guided buses are fitted with technology to allow them to run in designated guideways, allowing controlled alignment at bus stops and less space taken up by guided lanes than conventional roads or bus lanes. Bus manufacturing may be by a single company (an integral manufacturer), or by one manufacturer building a bus body over a chassis produced by another manufacturer.

Design

Accessibility
Transit buses used to be mainly high-floor vehicles. However, they are now increasingly of low-floor design, optionally with 'kneeling' air suspension, and have ramps to provide access for wheelchair users and people with baby carriages, sometimes as electrically or hydraulically extended under-floor constructs for level access. Prior to more general use of such technology, wheelchair users could only use specialist para-transit mobility buses. Accessible vehicles also have wider entrances and interior gangways and space for wheelchairs. Interior fittings and destination displays may also be designed to be usable by the visually impaired. Coaches generally use wheelchair lifts instead of low-floor designs. In some countries, vehicles are required to have these features by disability discrimination laws.

Configuration
Buses were initially configured with an engine in the front and an entrance at the rear. With the transition to one-man operation, many manufacturers moved to mid- or rear-engined designs, with a single door at the front or multiple doors. The move to low-floor designs has all but eliminated the mid-engined design, although some coaches still have mid-mounted engines. Front-engined buses still persist for niche markets such as American school buses, some minibuses, and buses in less developed countries, which may be derived from truck chassis rather than purpose-built bus designs. Most buses have two axles; articulated buses have three.

Guidance
Guided buses are fitted with technology to allow them to run in designated guideways, allowing controlled alignment at bus stops and less space taken up by guided lanes than conventional roads or bus lanes. Guidance can be mechanical, optical, or electromagnetic.
Extensions of the guided technology include the Guided Light Transit and Translohr systems, although these are more often termed 'rubber-tyred trams' as they have limited or no mobility away from their guideways.

Liveries
Transit buses are normally painted to identify the operator or a route, or a function, or to demarcate low-cost or premium service buses. Liveries may be painted onto the vehicle, applied using adhesive vinyl technologies, or applied as decals. Vehicles often also carry bus advertising on part or all of their visible surfaces, acting as mobile billboards. Campaign buses may be decorated with key campaign messages, for instance to promote an event or initiative.

Propulsion
The most common power source since the 1920s has been the diesel engine. Early buses, known as trolleybuses, were powered by electricity supplied from overhead lines. Nowadays, electric buses often carry their own battery, which is sometimes recharged at stops or stations to keep the battery small and light. Currently, interest exists in hybrid electric buses, fuel cell buses, electric buses, and buses powered by compressed natural gas or biodiesel. Gyrobuses, which are powered by energy stored in a flywheel, were tried in the 1940s.

Dimensions
United Kingdom and European Union: Maximum Length: Single rear axle . Twin rear axle . Maximum Width:
United States, Canada and Mexico: Maximum Length: None. Maximum Width:

Manufacture
Early bus manufacturing grew out of carriage coach building, and later out of automobile or truck manufacturing. Early buses were merely a bus body fitted to a truck chassis. This body-plus-chassis approach has continued with modern specialist manufacturers, although there also exist integral designs, such as the Leyland National, where the two are practically inseparable. Specialist builders also exist and concentrate on building buses for special uses or on modifying standard buses into specialised products. Integral designs have the advantages that they have been well tested for strength and stability and are available off the shelf. However, two incentives encourage use of the chassis-plus-body model. First, it allows the buyer and manufacturer both to shop for the best deal for their needs, rather than having to settle on one fixed design: the buyer can choose the body and the chassis separately. Second, over the lifetime of a vehicle (in constant service and heavy traffic), it will likely get minor damage now and again, and being able to easily replace a body panel or window can vastly increase its service life and save the cost and inconvenience of removing it from service. As with the rest of the automotive industry, into the 20th century, bus manufacturing increasingly became globalized, with manufacturers producing buses far from their intended market to exploit labour and material cost advantages. As with cars, new models are often exhibited by manufacturers at prestigious industry shows to gain new orders. A typical city bus costs almost US$450,000.

Uses

Public transport
Transit buses, used on public transport bus services, have utilitarian fittings designed for the efficient movement of large numbers of people, and often have multiple doors. Coaches are used for longer-distance routes. High-capacity bus rapid transit services may use bi-articulated buses or tram-style buses such as the Wright StreetCar and the Irisbus Civis.
Buses and coach services often operate to a predetermined published public transport timetable defining the route and the timing, but smaller vehicles may be used on more flexible demand-responsive transport services.

Tourism
Buses play a major part in the tourism industry. Tour buses around the world allow tourists to view local attractions or scenery. These are often open-top buses, but can also be regular buses or coaches. In local sightseeing, City Sightseeing is the largest operator of local tour buses, operating on a franchised basis all over the world. Specialist tour buses are also often owned and operated by safari parks and other theme parks or resorts. Longer-distance tours are also carried out by bus, either on a turn-up-and-go basis or through a tour operator, and usually allow disembarkation from the bus to allow touring of sites of interest on foot. These may be day trips or longer excursions incorporating hotel stays. Tour buses often carry a tour guide, although the driver or a recorded audio commentary may also perform this function. The tour operator may be a subsidiary of a company that operates buses and coaches for other uses, or an independent company that charters buses or coaches. Commuter transport operators may also use their coaches to conduct tours within the target city between the morning and evening commuter journeys. Buses and coaches are also a common component of the wider package holiday industry, providing private airport transfers (in addition to general airport buses) and organised tours and day trips for holidaymakers on the package. Tour buses can also be hired as chartered buses by groups for sightseeing at popular holiday destinations. These private tour buses may offer specific stops, such as all the historical sights, or allow the customers to choose their own itineraries. Tour buses come with professional and informed staff and insurance, and maintain state-governed safety standards. Some provide other facilities like entertainment units, luxurious reclining seats, large scenic windows, and even lavatories. Public long-distance coach networks are also often used as a low-cost method of travel by students or young people travelling the world.
encouraged the tourism sector as one of the mainstays for economic progress and social welfare. The tourism industry is primarily focused in the south, while also significant in other parts of the island. The main tourist locations are the town of Kuta (with its beach) and its outer suburbs of Legian and Seminyak (which were once independent townships); the east-coast town of Sanur (once the only tourist hub); Ubud, towards the centre of the island; Jimbaran, to the south of Ngurah Rai International Airport; and the newer developments of Nusa Dua and Pecatu. The United States government lifted its travel warnings in 2008. The Australian government issued an advisory on Friday, 4 May 2012, with the overall level lowered to 'Exercise a high degree of caution'. The Swedish government issued a new warning on Sunday, 10 June 2012 after a tourist died from methanol poisoning. Australia last issued an advisory on Monday, 5 January 2015 due to new terrorist threats.

An offshoot of tourism is the growing real estate industry. Bali's real estate has been rapidly developing in the main tourist areas of Kuta, Legian, Seminyak and Oberoi. Most recently, high-end 5-star projects are under development on the Bukit Peninsula, on the south side of the island. Expensive villas are being developed along the cliff sides of south Bali, with commanding panoramic ocean views. Many individuals and companies, foreign and domestic and many from Jakarta, are fairly active, and investment in other areas of the island also continues to grow. Land prices, despite the worldwide economic crisis, have remained stable. In the last half of 2008, Indonesia's currency dropped approximately 30% against the US dollar, providing many overseas visitors improved value for their currencies.

Bali's tourism economy survived the Islamist terrorist bombings of 2002 and 2005, and the tourism industry has slowly recovered and surpassed its pre-bombing levels; the long-term trend has been a steady increase in visitor arrivals. In 2010, Bali received 2.57 million foreign tourists, which surpassed the target of 2.0–2.3 million. The average occupancy of starred hotels reached 65%, so the island should still be able to accommodate tourists for some years without the addition of new rooms or hotels, although at peak season some are fully booked. Bali received the Best Island award from Travel and Leisure in 2010, winning because of its attractive surroundings (both mountain and coastal areas), diverse tourist attractions, excellent international and local restaurants, and the friendliness of the local people. The Balinese culture and its religion are also considered major factors in the award. One of the most prestigious events symbolising the strong relationship between a god and its followers is the Kecak dance. According to a BBC Travel ranking released in 2011, Bali is one of the world's best islands, ranking second after Santorini, Greece.

In 2006, Elizabeth Gilbert's memoir Eat, Pray, Love was published, and in August 2010 it was adapted into the film Eat Pray Love, set at Ubud and Padang-Padang Beach in Bali. Both the book and the film fuelled a boom in tourism in Ubud, the hill town and cultural and tourist centre that was the focus of Gilbert's quest for balance and love through traditional spirituality and healing. In January 2016, after musician David Bowie died, it was revealed that in his will Bowie had asked for his ashes to be scattered in Bali, conforming to Buddhist rituals.
He had visited and performed in several Southeast Asian cities early in his career, including Bangkok and Singapore.

Since 2011, China has displaced Japan as the second-largest supplier of tourists to Bali, while Australia still tops the list; India has also emerged as a growing source of tourists. Chinese tourist numbers increased by 17% from the previous year due to the impact of ACFTA and new direct flights to Bali. In January 2012, Chinese tourist numbers increased by 222.18% compared to January 2011, while Japanese tourist numbers declined by 23.54% year on year. Bali authorities reported the island had 2.88 million foreign tourists and 5 million domestic tourists in 2012, marginally surpassing the expectation of 2.8 million foreign tourists. Based on a Bank Indonesia survey in May 2013, 34.39 per cent of tourists were upper-middle class, spending between $1,286 and $5,592, and were dominated by visitors from Australia, India, France, China, Germany and the UK; some Chinese tourists have increased their levels of spending from previous years. A further 30.26 per cent of tourists were middle class, spending between $662 and $1,285. In 2017 it was expected that Chinese tourists would outnumber Australian tourists. In January 2020, 10,000 Chinese tourists cancelled trips to Bali due to the COVID-19 pandemic. Because of the pandemic travel restrictions, Bali welcomed only 1.07 million international travellers in 2020, most of them between January and March, a decline of 87% compared to 2019. In the first half of 2021, the island welcomed just 43 international travellers.

Transportation
The Ngurah Rai International Airport is located near Jimbaran, on the isthmus at the southernmost part of the island. Lt. Col. Wisnu Airfield is in northwest Bali. A coastal road circles the island, and three major two-lane arteries cross the central mountains at passes reaching 1,750 m in height (at Penelokan). The Ngurah Rai Bypass is a four-lane expressway that partly encircles Denpasar. Bali has no railway lines. There is a car ferry between Gilimanuk on the west coast of Bali and Ketapang on Java.

In December 2010 the Government of Indonesia invited investors to build a new Tanah Ampo Cruise Terminal at Karangasem, Bali, with a projected worth of $30 million. On 17 July 2011 the first cruise ship (Sun Princess) anchored off the wharf of Tanah Ampo harbour. The current pier will eventually be extended to accommodate international cruise ships. The harbour is safer than the existing facility at Benoa and has a scenic backdrop of the east Bali mountains and green rice fields. The tender for improvement was subject to delays, and as of July 2013 the situation was unclear, with cruise line operators complaining and even refusing to use the existing facility at Tanah Ampo.

A Memorandum of Understanding has been signed by two ministers, Bali's Governor and the Indonesian Train Company to build a railway along the coast around the island. As of July 2015, no details of these proposed railways had been released. In 2019 it was reported in Gapura Bali that Wayan Koster, governor of Bali, "is keen to improve Bali's transportation infrastructure and is considering plans to build an electric rail network across the island".

On 16 March 2011 (Tanjung) Benoa port received the "Best Port Welcome 2010" award from London's "Dream World Cruise Destination" magazine. The government plans to expand the role of Benoa port as an export-import port to boost Bali's trade and industry sector.
In 2013, the Tourism and Creative Economy Ministry advised that 306 cruise liners were scheduled to visit Indonesia, an increase of 43 per cent compared to the previous year.

In May 2011, an integrated Area Traffic Control System (ATCS) was implemented to reduce traffic jams at four crossing points: the Ngurah Rai statue, the Dewa Ruci Kuta crossing, the Jimbaran crossing and the Sanur crossing. ATCS is an integrated system connecting all traffic lights, CCTVs and other traffic signals with a monitoring office at the police headquarters. It has successfully been implemented in other ASEAN countries and will be implemented at other crossings in Bali.

On 21 December 2011 construction started on the Nusa Dua–Benoa–Ngurah Rai International Airport toll road, which also provides a special lane for motorcycles. This has been done by seven state-owned enterprises led by PT Jasa Marga, which holds 60% of the shares. PT Jasa Marga Bali Tol will construct the toll road and its access road. The construction is estimated to cost Rp 2.49 trillion ($273.9 million). The project passes through mangrove forest and beach areas. The elevated toll road is built over the mangrove forest on 18,000 concrete pillars that occupy two hectares of mangrove forest; this was compensated for by the planting of 300,000 mangrove trees along the road. On 21 December 2011 work also started on the Dewa Ruci underpass at the busy Dewa Ruci junction near Bali Kuta Galeria, with an estimated cost of Rp 136 billion ($14.9 million) from the state budget. On 23 September 2013, the Bali Mandara Toll Road was opened, the Dewa Ruci Junction (Simpang Siur) underpass having been opened previously. To solve chronic traffic problems, the province will also build a toll road connecting Serangan with Tohpati, a toll road connecting Kuta, Denpasar and Tohpati, and a flyover connecting Kuta and Ngurah Rai Airport.

Demographics
The population of Bali was 3,890,757 as of the 2010 census, and 4,317,404 at the 2020 census. There are an estimated 30,000 expatriates living in Bali.

Ethnic origins
A DNA study in 2005 by Karafet et al. found that 12% of Balinese Y-chromosomes are of likely Indian origin, 84% of likely Austronesian origin, and 2% of likely Melanesian origin.

Caste system
Pre-modern Bali had four castes, as Jeff Lewis and Belinda Lewis state, but with a "very strong tradition of communal decision-making and interdependence". The four castes have been classified as Sudra (Shudra), Wesia (Vaishyas), Satria (Kshatriyas) and Brahmana (Brahmin). Nineteenth-century scholars such as Crawfurd and Friederich suggested that the Balinese caste system had Indian origins, but Helen Creese states that scholars such as Brumund, who had visited and stayed on the island of Bali, found their field observations to conflict with the "received understandings concerning its Indian origins". In Bali, the Shudra (locally spelt Soedra) have typically been the temple priests, though depending on the demographics, a temple priest may also be from the other three castes. In most regions, it has been the Shudra who typically make offerings to the gods on behalf of the Hindu devotees, chant prayers, recite meweda (Vedas), and set the course of Balinese temple festivals.

Religion
About 86.91% of Bali's population adheres to Balinese Hinduism, formed as a combination of existing local beliefs and Hindu influences from mainland Southeast Asia and South Asia. Minority religions include Islam (10.05%), Christianity (2.35%), and Buddhism (0.68%), as of 2018.
The general beliefs and practices of Agama Hindu Dharma mix ancient traditions with contemporary pressures created by Indonesian laws that permit only monotheist belief under the national ideology of Pancasila. Traditionally, Hinduism in Indonesia had a pantheon of deities, and that tradition of belief continues in practice; further, Hinduism in Indonesia granted freedom and flexibility to Hindus as to when, how and where to pray. However, officially, the Indonesian government considers and advertises Indonesian Hinduism as a monotheistic religion with certain officially recognised beliefs that comply with its national ideology. Indonesian school textbooks describe Hinduism as having one supreme being, with Hindus offering three daily mandatory prayers, and as having certain common beliefs that in part parallel those of Islam. Scholars contest whether these government-recognised and government-assigned beliefs reflect the traditional beliefs and practices of Hindus in Indonesia before Indonesia gained independence from Dutch colonial rule.

Balinese Hinduism has roots in Indian Hinduism and Buddhism, which arrived through Java. Hindu influences reached the Indonesian archipelago as early as the first century. Historical evidence is unclear about the diffusion process of cultural and spiritual ideas from India. Javanese legends refer to the Saka era, traced to 78 AD. Stories from the Mahabharata epic have been traced in Indonesian islands to the 1st century; however, the versions mirror those found in the southeast Indian peninsular region (now Tamil Nadu, southern Karnataka and Andhra Pradesh). The Balinese tradition adopted the pre-existing animistic traditions of the indigenous people. This influence strengthened the belief that the gods and goddesses are present in all things. Every element of nature, therefore, possesses its own power, which reflects the power of the gods. A rock, tree, dagger, or woven cloth is a potential home for spirits whose energy can be directed for good or evil. Balinese Hinduism is deeply interwoven with art and ritual. Ritualising states of self-control are a notable feature of religious expression among the people, who for this reason have become famous for their graceful and decorous behaviour.

Apart from the majority of Balinese Hindus, there also exist Chinese immigrants whose traditions have melded with those of the locals. As a result, these Sino-Balinese embrace their original religion, which is a mixture of Buddhism, Christianity, Taoism and Confucianism, and find a way to harmonise it with the local traditions. Hence, it is not uncommon to find local Sino-Balinese at the local temple's odalan. Moreover, Balinese Hindu priests are invited to perform rites alongside a Chinese priest in the event of the death of a Sino-Balinese. Nevertheless, the Sino-Balinese claim to embrace Buddhism for administrative purposes, such as on their identity cards. The Roman Catholic community has a diocese, the Diocese of Denpasar, which encompasses the province of Bali and West Nusa Tenggara and has its cathedral in Denpasar.

Language
Balinese and Indonesian are the most widely spoken languages in Bali, and the vast majority of Balinese people are bilingual or trilingual. The most commonly spoken language around the tourist areas is Indonesian, as many people in the tourist sector are not solely Balinese, but migrants from Java, Lombok, Sumatra, and other parts of Indonesia.
There are several indigenous Balinese languages, but most Balinese can also use the most widely spoken option: modern common Balinese. The usage of different Balinese languages was traditionally determined by the Balinese caste system and by clan membership, but this tradition is diminishing. Kawi and Sanskrit are also commonly used by some Hindu priests in Bali, as Hindu literature was mostly written in Sanskrit. English and Chinese are the next most common languages (and the primary foreign languages) of many Balinese, owing to the requirements of the tourism industry, as well as the English-speaking community and the large Chinese-Indonesian population. Other foreign languages, such as Japanese, Korean, French, Russian or German, are often used in multilingual signs for foreign tourists.

Culture
Bali is renowned for its diverse and sophisticated art forms, such as painting, sculpture, woodcarving, handcrafts, and performing arts. Balinese cuisine is also distinctive. Balinese percussion orchestra music, known as gamelan, is highly developed and varied. Balinese performing arts often portray stories from Hindu epics such as the Ramayana, but with heavy Balinese influence. Famous Balinese dances include pendet, legong, baris, topeng, barong, gong kebyar, and kecak (the monkey dance). Bali
1178 and 1181, while Adikuntiketana and his son Paramesvara in 1204. Balinese culture was strongly influenced by Indian, Chinese, and particularly Hindu culture, beginning around the 1st century AD. The name Bali dwipa ("Bali island") has been discovered in various inscriptions, including the Blanjong pillar inscription written by Sri Kesari Warmadewa in 914 AD and mentioning Walidwipa. It was during this time that the people developed their complex irrigation system, subak, to grow rice in wet-field cultivation. Some religious and cultural traditions still practised today can be traced to this period. The Hindu Majapahit Empire (1293–1520 AD) on eastern Java founded a Balinese colony in 1343. The uncle of Hayam Wuruk is mentioned in the charters of 1384–86. Mass Javanese immigration to Bali occurred in the next century, when the Majapahit Empire fell in 1520. Bali then became an independent collection of Hindu kingdoms, which led to a Balinese national identity and major enhancements in culture, arts, and economy. The kingdoms remained independent for some 386 years, until 1906, when the Dutch subjugated the natives for economic control and took the island over.

Portuguese contacts
The first known European contact with Bali is thought to have been made in 1512, when a Portuguese expedition led by Antonio Abreu and Francisco Serrão sighted its northern shores. It was the first of a series of biannual fleets to the Moluccas that, throughout the 16th century, usually travelled along the coasts of the Sunda Islands. Bali was also mapped in 1512, in the chart of Francisco Rodrigues, aboard the expedition. In 1585, a ship foundered off the Bukit Peninsula and left a few Portuguese in the service of Dewa Agung.

Dutch East Indies
In 1597, the Dutch explorer Cornelis de Houtman arrived at Bali, and the Dutch East India Company was established in 1602. The Dutch government expanded its control across the Indonesian archipelago during the second half of the 19th century. Dutch political and economic control over Bali began in the 1840s on the island's north coast, when the Dutch pitted various competing Balinese realms against each other. In the late 1890s, struggles between Balinese kingdoms on the island's south were exploited by the Dutch to increase their control.

In June 1856, the famous Welsh naturalist Alfred Russel Wallace travelled to Bali from Singapore, landing at Buleleng on the north coast of the island. Wallace's trip to Bali was instrumental in helping him devise his Wallace Line theory. The Wallace Line is a faunal boundary that runs through the strait between Bali and Lombok, a boundary between distinct Asian and Australasian species. In his travel memoir The Malay Archipelago, Wallace wrote of his experience in Bali, with a strong mention of the unique Balinese irrigation methods: I was both astonished and delighted; for as my visit to Java was some years later, I had never beheld so beautiful and well-cultivated a district out of Europe. A slightly undulating plain extends from the seacoast about inland, where it is bounded by a fine range of wooded and cultivated hills. Houses and villages, marked out by dense clumps of coconut palms, tamarind and other fruit trees, are dotted about in every direction; while between them extend luxurious rice-grounds, watered by an elaborate system of irrigation that would be the pride of the best cultivated parts of Europe.
The Dutch mounted large naval and ground assaults at the Sanur region in 1906 and were met by thousands of members of the royal family and their followers who, rather than yield to the superior Dutch force, committed ritual suicide (puputan) to avoid the humiliation of surrender. Despite Dutch demands for surrender, an estimated 200 Balinese killed themselves rather than give in. In the Dutch intervention in Bali in 1908, a similar mass suicide occurred in the face of a Dutch assault in Klungkung. Afterwards, the Dutch governors exercised administrative control over the island, but local control over religion and culture generally remained intact. Dutch rule over Bali came later and was never as well established as in other parts of Indonesia such as Java and Maluku.

In the 1930s, anthropologists Margaret Mead and Gregory Bateson, artists Miguel Covarrubias and Walter Spies, and musicologist Colin McPhee all spent time on the island. Their accounts of the island and its peoples created a western image of Bali as "an enchanted land of aesthetes at peace with themselves and nature", and western tourists began to visit the island. The sensuous image of Bali was enhanced in the West by a quasi-pornographic 1932 documentary, Virgins of Bali, about a day in the lives of two teenage Balinese girls whom the film's narrator, Deane Dickason, notes in the first scene "bathe their shamelessly nude bronze bodies". Under the looser version of the Hays Code that existed up to 1934, nudity involving "civilised" (i.e. white) women was banned but permitted with "uncivilised" (i.e. non-white) women, a loophole that was exploited by the producers of Virgins of Bali. The film, which mostly consisted of scenes of topless Balinese women, was a great success in 1932 and almost single-handedly made Bali into a popular spot for tourists.

Imperial Japan occupied Bali during World War II. It was not originally a target in the Netherlands East Indies Campaign, but as the airfields on Borneo were inoperative due to heavy rains, the Imperial Japanese Army decided to occupy Bali, which did not suffer from comparable weather. The island had no regular Royal Netherlands East Indies Army (KNIL) troops; there was only a Native Auxiliary Corps Prajoda (Korps Prajoda), consisting of about 600 native soldiers and several Dutch KNIL officers under the command of KNIL Lieutenant Colonel W.P. Roodenburg. On 19 February 1942 the Japanese forces landed near the town of Sanoer (Sanur), and the island was quickly captured. During the Japanese occupation, a Balinese military officer, I Gusti Ngurah Rai, formed a Balinese 'freedom army'. The harshness of the Japanese occupation forces made them more resented than the Dutch colonial rulers.

Independence from the Dutch
In 1945, Bali was liberated by the British 5th Infantry Division under the command of Major-General Robert Mansergh, who took the Japanese surrender. Once the Japanese forces had been repatriated, the island was handed over to the Dutch the following year. In 1946, the Dutch constituted Bali as one of the 13 administrative districts of the newly proclaimed State of East Indonesia, a rival state to the Republic of Indonesia, which was proclaimed and headed by Sukarno and Hatta. Bali was included in the "Republic of the United States of Indonesia" when the Netherlands recognised Indonesian independence on 29 December 1949. The first governor of Bali, Anak Agung Bagus Suteja, was appointed by President Sukarno in 1958, when Bali became a province.
Contemporary
The 1963 eruption of Mount Agung killed thousands, created economic havoc, and forced many displaced Balinese to be transmigrated to other parts of Indonesia. Mirroring the widening of social divisions across Indonesia in the 1950s and early 1960s, Bali saw conflict between supporters of the traditional caste system and those rejecting this system. Politically, the opposition was represented by supporters of the Indonesian Communist Party (PKI) and the Indonesian Nationalist Party (PNI), with tensions and ill-feeling further increased by the PKI's land reform programmes. An attempted coup in Jakarta was put down by forces led by General Suharto. The army became the dominant power as it instigated a violent anti-communist purge, in which it blamed the PKI for the coup. Most estimates suggest that at least 500,000 people were killed across Indonesia, with an estimated 80,000 killed in Bali, equivalent to 5% of the island's population. With no Islamic forces involved as in Java and Sumatra, upper-caste PNI landlords led the extermination of PKI members.

As a result of the 1965–66 upheavals, Suharto was able to manoeuvre Sukarno out of the presidency. His "New Order" government re-established relations with Western countries. The pre-war image of Bali as "paradise" was revived in a modern form. The resulting large growth in tourism has led to a dramatic increase in Balinese standards of living and significant foreign exchange earned for the country. A bombing in 2002 by militant Islamists in the tourist area of Kuta killed 202 people, mostly foreigners. This attack, and another in 2005, severely reduced tourism, producing much economic hardship on the island. On 27 November 2017, Mount Agung erupted five times, causing the evacuation of thousands, disrupting air travel, and causing environmental damage. Further eruptions occurred between 2018 and 2019.

Geography
The island of Bali lies east of Java, approximately 8 degrees south of the equator. Bali and Java are separated by the Bali Strait. East to west, the island is approximately wide and spans approximately north to south; administratively it covers , or without Nusa Penida District, which comprises three small islands off the southeast coast of Bali. Its population density was roughly in 2020.

Bali's central mountains include several peaks over in elevation and active volcanoes such as Mount Batur. The highest is Mount Agung, known as the "mother mountain", an active volcano rated as one of the world's most likely sites for a massive eruption within the next 100 years. In late 2017 Mount Agung started erupting, and large numbers of people were evacuated, temporarily closing the island's airport. Mountains range from the centre to the eastern side, with Mount Agung the easternmost peak. Bali's volcanic nature has contributed to its exceptional fertility, and its tall mountain ranges provide the high rainfall that supports the highly productive agriculture sector. South of the mountains is a broad, steadily descending area where most of Bali's large rice crop is grown. The northern side of the mountains slopes more steeply to the sea and is the main coffee-producing area of the island, along with rice, vegetables and cattle. The longest river, the Ayung River, flows approximately (see List of rivers of Bali). The island is surrounded by coral reefs. Beaches in the south tend to have white sand while those in the north and west have black sand.
Bali has no major waterways, although the Ho River is navigable by small sampan boats. Black-sand beaches between Pasut and Klatingdukuh are being developed for tourism, but apart from the seaside temple of Tanah Lot, they are not yet used for significant tourism.

The largest city is the provincial capital, Denpasar, near the southern coast. Its population is around 725,000 (2020). Bali's second-largest city is the old colonial capital, Singaraja, which is located on the north coast and is home to around 150,000 people (2020). Other important towns include the beach resort Kuta, which is practically part of Denpasar's urban area, and Ubud, to the north of Denpasar, which is the island's cultural centre.

Three small islands lie to the immediate south-east, and all are administratively part of the Klungkung regency of Bali: Nusa Penida, Nusa Lembongan and Nusa Ceningan. These islands are separated from Bali by the Badung Strait. To the east, the Lombok Strait separates Bali from Lombok and marks the biogeographical division between the fauna of the Indomalayan realm and the distinctly different fauna of Australasia. The transition is known as the Wallace Line, named after Alfred Russel Wallace, who first proposed a transition zone between these two major biomes. When sea levels dropped during the Pleistocene ice age, Bali was connected to Java and Sumatra and to the mainland of Asia, and shared the Asian fauna, but the deep water of the Lombok Strait continued to keep Lombok Island and the Lesser Sunda archipelago isolated.

Climate
Being just 8 degrees south of the equator, Bali has a fairly even climate all year round. The average year-round temperature stands at around with a humidity level of about 85%. Daytime temperatures at low elevations vary between , but temperatures decrease significantly with increasing elevation. The west monsoon is in place from approximately October to April, and this can bring significant rain, particularly from December to March. During the rainy season, comparatively few tourists visit Bali. During the Easter and Christmas holidays, the weather is very unpredictable. Outside of the monsoon period, humidity is relatively low and any rain is unlikely in lowland areas.

Ecology
Bali lies just to the west of the Wallace Line, and thus has a fauna that is Asian in character, with very little Australasian influence, and has more in common with Java than with Lombok. An exception is the yellow-crested cockatoo, a member of a primarily Australasian family. There are around 280 species of birds, including the critically endangered Bali myna, which is endemic. Others include the barn swallow, black-naped oriole, black racket-tailed treepie, crested serpent-eagle, crested treeswift, dollarbird, Java sparrow, lesser adjutant, long-tailed shrike, milky stork, Pacific swallow, red-rumped swallow, sacred kingfisher, sea eagle, woodswallow, savanna nightjar, stork-billed kingfisher, yellow-vented bulbul and great egret.

Until the early 20th century, Bali was possibly home to several large mammals, including the leopard and the endemic Bali tiger. The banteng still occurs in its domestic form, whereas leopards are found only in neighbouring Java, and the Bali tiger is extinct. The last definite record of a tiger on Bali dates from 1937, when one was shot, though the subspecies may have survived until the 1940s or 1950s.
Pleistocene and Holocene megafauna included the banteng, the giant tapir (based on speculation that it might have ranged as far as the Wallace Line), elephants, and rhinoceroses. Squirrels are quite commonly encountered; the Asian palm civet, which is also kept on coffee farms to produce kopi luwak, is seen less often. Bats are well represented; perhaps the most famous place to encounter them is the Goa Lawah (Temple of the Bats), where they are worshipped by the locals and also constitute a tourist attraction. They also occur in other cave temples, for instance at Gangga Beach. Two species of monkey occur. The crab-eating macaque, known locally as "kera", is quite common around human settlements and temples, where it becomes accustomed to being fed by humans, particularly in any of the three "monkey forest" temples, such as the popular one in the Ubud area. They are also quite often kept as pets by locals. The second species, the Javan langur, locally known as "lutung", is endemic to Java and some surrounding islands such as Bali, and is far rarer and more elusive. They occur in few places apart from the West Bali National Park. They are born an orange colour; by their first year they have changed to a more blackish colouration. In Java, however, there is more of a tendency for this species to retain its juvenile orange colour into adulthood, and a mixture of black and orange monkeys can be seen together as a family. Other rarer mammals include the leopard cat, Sunda pangolin and black giant squirrel. Snakes include the king cobra and reticulated python. The water monitor can grow to at least in length and can move quickly. The rich coral reefs around the coast, particularly around popular diving spots such as Tulamben, Amed, Menjangan or neighbouring Nusa Penida, host a wide range of marine life, for instance hawksbill turtle, giant sunfish, giant manta ray, giant moray eel, bumphead parrotfish, hammerhead shark, reef shark, barracuda, and sea snakes. Dolphins are commonly encountered on the north coast near Singaraja and Lovina. A team of scientists conducted a survey from 29 April 2011 to 11 May 2011 at 33 sea sites around Bali. They discovered 952 species of reef fish, of which 8 were new discoveries at Pemuteran, Gilimanuk, Nusa Dua, Tulamben and Candidasa, and 393 coral species, including two new ones at Padangbai and between Padangbai and Amed. The average coverage of healthy coral was 36%, higher than the levels recorded in Raja Ampat and Halmahera (29%) or in Fakfak and Kaimana (25%), with the highest coverage found in Gili Selang and Gili Mimpang in Candidasa, Karangasem regency. Among the larger trees the most common are banyan, jackfruit, bamboo species and acacia, along with endless rows of coconut and banana palms. Numerous flowers can be seen: hibiscus, frangipani, bougainvillea, poinsettia, oleander, jasmine, water lily, lotus, roses, begonias, orchids and hydrangeas. On higher grounds that receive more moisture, for instance around Kintamani, certain species of tree fern, mushrooms and even pine trees thrive. Rice comes in many varieties. Other plants with agricultural value include: salak, mangosteen, corn, kintamani orange, coffee and water spinach. Environment Over-exploitation by the tourist industry has led to 200 out of 400 rivers on the island drying up. Research suggests that the southern part of Bali faces a looming water shortage.
To ease the shortage, the central government plans to build a water catchment and processing facility at Petanu River in Gianyar. The facility is intended to channel 300 litres of water per second to Denpasar, Badung and Gianyar from 2013. A 2010 Environment Ministry report on its environmental quality index gave Bali a score of 99.65, the highest of Indonesia's 33 provinces. The score considers the level of total suspended solids, dissolved oxygen and chemical oxygen demand in water. Erosion at Lebih Beach has seen of land lost every year. Decades ago, this beach was used for holy pilgrimages with more than 10,000 people, but they have now moved to Masceti Beach. In 2017, a year when Bali received nearly 5.7 million tourists, government officials declared a "garbage emergency" in response to the covering of a 3.6-mile stretch of coastline in plastic waste brought in by the tide.
to get ready smeya – to dare, smeya se – to laugh Indirect actions When the action is performed on an indirect object, the particles change to si and its derivatives – kazvam si – I say to myself, kazvash si – you say to yourself, kazvam ti – I say to you; peya si – I am singing to myself, pee si – she is singing to herself, pee mu – she is singing to him; gotvya si – I cook for myself, gotvyat si – they cook for themselves, gotvya im – I cook for them In some cases, the particle si is ambiguous between the indirect object and the possessive meaning – miya si ratsete – I wash my hands, miya ti ratsete – I wash your hands; pitam si priyatelite – I ask my friends, pitam ti priyatelite – I ask your friends; iskam si topkata – I want my ball (back) The difference between transitive and intransitive verbs can lead to significant differences in meaning with minimal change, e.g. – haresvash me – you like me, haresvash mi – I like you (lit. you are pleasing to me); otivam – I am going, otivam si – I am going home The particle si is often used to indicate a more personal relationship to the action, e.g. – haresvam go – I like him, haresvam si go – no precise translation, roughly translates as "he's really close to my heart"; stanahme priyateli – we became friends, stanahme si priyateli – same meaning, but sounds friendlier; mislya – I am thinking (usually about something serious), mislya si – same meaning, but usually about something personal and/or trivial Adverbs The most productive way to form adverbs is to derive them from the neuter singular form of the corresponding adjective—e.g. (fast), (hard), (strange)—but adjectives ending in use the masculine singular form (i.e. ending in ), instead—e.g. (heroically), (bravely, like a man), (skillfully). The same pattern is used to form adverbs from the (adjective-like) ordinal numerals, e.g. (firstly), (secondly), (thirdly), and in some cases from (adjective-like) cardinal numerals, e.g. (twice as/double), (three times as), (five times as). The remaining adverbs are formed in ways that are no longer productive in the language. A small number are original (not derived from other words), for example: (here), (there), (inside), (outside), (very/much) etc. The rest are mostly fossilized case forms, such as: Archaic locative forms of some adjectives, e.g. (well), (badly), (too, rather), and nouns (up), (tomorrow), (in the summer) Archaic instrumental forms of some adjectives, e.g. (quietly), (furtively), (blindly), and nouns, e.g. (during the day), (during the night), (one next to the other), (spiritually), (in figures), (with words); or verbs: (while running), (while lying), (while standing) Archaic accusative forms of some nouns: (today), (tonight), (in the morning), (in winter) Archaic genitive forms of some nouns: (tonight), (last night), (yesterday) Homonymous and etymologically identical to the feminine singular form of the corresponding adjective used with the definite article: (hard), (gropingly); the same pattern has been applied to some verbs, e.g. (while running), (while lying), (while standing) Derived from cardinal numerals by means of a non-productive suffix: (once), (twice), (thrice) Adverbs can sometimes be reduplicated to emphasize the qualitative or quantitative properties of actions, moods or relations as performed by the subject of the sentence: "" ("rather slowly"), "" ("with great difficulty"), "" ("quite", "thoroughly"). Syntax Bulgarian employs clitic doubling, mostly for emphatic purposes.
For example, the following constructions are common in colloquial Bulgarian: (lit. "I gave it the present to Maria.") (lit. "I gave her it the present to Maria.") The phenomenon is practically obligatory in the spoken language in the case of inversion signalling information structure (in writing, clitic doubling may be skipped in such instances, with a somewhat bookish effect): (lit. "The present [to her] it I-gave to Maria.") (lit. "To Maria to her [it] I-gave the present.") Sometimes, the doubling signals syntactic relations, thus: (lit. "Petar and Ivan them ate the wolves.") Transl.: "Petar and Ivan were eaten by the wolves". This is contrasted with: (lit. "Petar and Ivan ate the wolves") Transl.: "Petar and Ivan ate the wolves". In this case, clitic doubling can be a colloquial alternative to the more formal or bookish passive voice, which would be constructed as follows: (lit. "Petar and Ivan were eaten by the wolves.") Clitic doubling is also fully obligatory, both in the spoken and in the written norm, in clauses including several special expressions that use the short accusative and dative pronouns such as "" (I feel like playing), студено ми е (I am cold), and боли ме ръката (my arm hurts): (lit. "To me to me it-feels-like-sleeping, and to Ivan to him it-feels-like-playing") Transl.: "I feel like sleeping, and Ivan feels like playing." (lit. "To us to us it-is cold, and to you-plur. to you-plur. it-is warm") Transl.: "We are cold, and you are warm." (lit. Ivan him aches the throat, and me me aches the head) Transl.: Ivan has a sore throat, and I have a headache. Apart from the above examples, clitic doubling is considered inappropriate in a formal context. Other features Questions Questions in Bulgarian which do not use a question word (such as who? what? etc.) are formed with the particle ли after the verb; a subject is not necessary, as the verbal conjugation suggests who is performing the action: – 'you are coming'; – 'are you coming?' While the particle generally goes after the verb, it can go after a noun or adjective if a contrast is needed: – 'are you coming with us?'; – 'are you coming with us?' A verb is not always necessary, e.g. when presenting a choice: – 'him?'; – 'the yellow one?' Rhetorical questions can be formed by adding to a question word, thus forming a "double interrogative" – – 'Who?'; – 'I wonder who(?)' The same construction +не ('no') is an emphasized positive – – 'Who was there?' – – 'Nearly everyone!' (lit. 'I wonder who wasn't there') Significant verbs Съм The verb – 'to be' is also used as an auxiliary for forming the perfect, the passive and the conditional: past tense – – 'I have hit' passive – – 'I am hit' past passive – – 'I was hit' conditional – – 'I would hit' Two alternate forms of exist: – interchangeable with съм in most tenses and moods, but never in the present indicative – e.g. ('I want to be'), ('I will be here'); in the imperative, only бъда is used – ('be here'); – slightly archaic, imperfective form of бъда – e.g. ('he used to get threats'); in contemporary usage, it is mostly used in the negative to mean "ought not", e.g. ('you shouldn't smoke'). Ще The impersonal verb (lit.
'it wants') is used for forming the (positive) future tense: – 'I am going' – 'I will be going' The negative future is formed with the invariable construction (see below): – 'I will not be going' The past tense of this verb – щях – is conjugated to form the past conditional ('would have' – again, with да, since it is irrealis): – 'I would have gone;' 'you would have gone' Имам and нямам The verbs ('to have') and ('to not have'): the third person singular of these two can be used impersonally to mean 'there is/there are' or 'there isn't/aren't any,' e.g. ('there is still time' – compare Spanish hay); ('there is no one there'). The impersonal form няма is used in the negative future – (see ще above). used on its own can mean simply 'I won't' – a simple refusal to a suggestion or instruction. Conjunctions and particles But In Bulgarian, there are several conjunctions all translating into English as "but", which are all used in distinct situations. They are (), (), (), (), and () (and () – "however", identical in use to ). While there is some overlapping between their uses, in many cases they are specific. For example, is used for a choice – – "not this one, but that one" (compare Spanish ), while is often used to provide extra information or an opinion – – "I said it, but I was wrong". Meanwhile, provides contrast between two situations, and in some sentences can even be translated as "although", "while" or even "and" – – "I'm working, and he's daydreaming". Very often, different words can be used to alter the emphasis of a sentence – e.g. while and both mean "I smoke, but I shouldn't", the first sounds more like a statement of fact ("...but I mustn't"), while the second feels more like a judgement ("...but I oughtn't"). Similarly, and both mean "I don't want to, but he does", however the first emphasizes the fact that he wants to, while the second emphasizes the wanting rather than the person. is interesting in that, while it feels archaic, it is often used in poetry and frequently in children's stories, since it has quite a moral/ominous feel to it. Some common expressions use these words, and some can be used alone as interjections: (lit. "yes, but no") – means "you're wrong to think so". can be tagged onto a sentence to express surprise: – "he's sleeping!" – "you don't say!", "really!" Vocative particles Bulgarian has several abstract particles which are used to strengthen a statement. These have no precise translation in English. The particles are strictly informal and can even be considered rude by some people and in some situations. They are mostly used at the end of questions or instructions. () – the most common particle. It can be used to strengthen a statement or, sometimes, to indicate derision of an opinion, aided by the tone of voice. (Originally purely masculine, it can now be used towards both men and women.) – tell me (insistence); – is that so? (derisive); – you don't say! () – expresses urgency, sometimes pleading. – come on, get up! () (feminine only) – originally simply the feminine counterpart of , but today perceived as rude and derisive (compare the similar evolution of the vocative forms of feminine names). (, masculine), (, feminine) – similar to and , but archaic. Although informal, can sometimes be heard being used by older people. Modal particles These are "tagged" on to the beginning or end of a sentence to express the mood of the speaker in relation to the situation. They are mostly interrogative or slightly imperative in nature.
There is no change in the grammatical mood when these are used (although they may be expressed through different grammatical moods in other languages). () – is a universal affirmative tag, like "isn't it"/"won't you", etc. (it is invariable, like the French ). It can be placed almost anywhere in the sentence, and does not always require a verb: – you are coming, aren't you?; – didn't they want to?; – that one, right?; it can express quite complex thoughts through simple constructions – – "I thought you weren't going to!" or "I thought there weren't any!" (depending on context – the verb presents general negation/lacking, see "nyama", above). () – expresses uncertainty (if in the middle of a clause, can be translated as "whether") – e.g. – "do you think he will come?" () – presents disbelief ~"don't tell me that..." – e.g. – "don't tell me you want to!". It is slightly archaic, but still in use. Can be used on its own as an interjection – () – expresses hope – – "he will come"; – "I hope he comes" (compare Spanish ). Grammatically, is entirely separate from the verb – "to hope". () – means "let('s)" – e.g. – "let him come"; when used in the first person, it expresses extreme politeness: – "let us go" (in colloquial situations, , below, is used instead). , as an interjection, can also be used to express judgement or even schadenfreude – – "he deserves it!". Intentional particles These express intent or desire, perhaps even pleading. They can be seen as a sort of cohortative side to the language. (Since they can be used by themselves, they could even be considered as verbs in their own right.) They are also highly informal. () – "come on", "let's" e.g. – "faster!" () – "let me" – exclusively when asking someone else for something. It can even be used on its own as a request or instruction (depending on the tone used), indicating that the speaker wants to partake in or try whatever the listener is doing. – let me see; or – "let me.../give me..." () (plural ) – can be used to issue a negative instruction – e.g. – "don't come" ( + subjunctive). In some dialects, the construction ( + preterite) is used instead. As an interjection – – "don't!" (See section on imperative mood). These particles can be combined with the vocative particles for greater effect, e.g. (let me see), or even exclusively in combinations with them, with no other elements, e.g. (come on!); (I told you not to!). Pronouns of quality Bulgarian has several pronouns of quality which have no direct parallels in English – kakav (what sort of); takuv (this sort of); onakuv (that sort of – colloq.); nyakakav (some sort of); nikakav (no sort of); vsyakakav (every sort of); and the relative pronoun kakavto (the sort of ... that ... ). The adjective ednakuv ("the same") derives from the same radical. Example phrases include: kakav chovek?! – "what person?!"; kakav chovek e toy? – what sort of person is he?; ne poznavam takuv – "I don't know any (people like that)" (lit. "I don't know this sort of (person)"); nyakakvi hora – lit. "some type of people", but the understood meaning is "a bunch of people I don't know"; vsyakakvi hora – "all sorts of people"; kakav iskash? – "which type do you want?"; nikakav! – "I don't want any!"/"none!" An interesting phenomenon is that these can be strung together one after another in quite long constructions, e.g. An extreme (colloquial) sentence, with almost no physical meaning in it whatsoever – yet which does have perfect meaning to the Bulgarian ear – would be: "kakva e taya takava edna nyakakva nikakva?!"
inferred translation – "what kind of no-good person is she?" literal translation: "what kind of – is – this one here (she) – this sort of – one – some sort of – no sort of" —Note: the subject of the sentence is simply the pronoun "taya" (lit. "this one here"; colloq. "she"). Another interesting phenomenon that is observed in colloquial speech is the use of takova (neuter of takyv) not only as a substitute for an adjective, but also as a substitute for a verb. In that case the base form takova is used as the third person singular in the present indicative and all other forms are formed by analogy to other verbs in the language. Sometimes the "verb" may even acquire a derivational prefix that changes its meaning.
Examples: takovah ti shapkata – I did something to your hat (perhaps: I took your hat); takovah si ochilata – I did something to my glasses (perhaps: I lost my glasses); takovah se – I did something to myself (perhaps: I hurt myself) Another use of takova in colloquial speech is the word takovata, which can be used as a substitution for a noun, but also, if the speaker doesn't remember or is not sure how to say something, they might say takovata and then pause to think about it: i posle toy takovata... – and then he [no translation] ...; izyadoh ti takovata – I ate something of yours (perhaps: I ate your dessert). Here the word takovata is used as a substitution for a noun. As a result of this versatility, the word takova can be used as a euphemism for literally anything. It is commonly used to substitute words relating to reproductive organs or sexual acts, for example: toy si takova takovata v takovata i – he [verb] his [noun] in her [noun] Similar "meaningless" expressions are extremely common in spoken Bulgarian, especially when the speaker is finding it difficult to describe something. Miscellaneous The commonly cited phenomenon of Bulgarian people shaking their head for "yes" and nodding for "no" is true but, with the influence of Western culture, is becoming ever rarer, and is almost non-existent among the younger generation. (The shaking and nodding are not identical to the Western gestures. The "nod" for no is actually an upward movement of the head rather than a downward one, while the shaking of the head for yes is not completely horizontal, but also has a slight "wavy" aspect to it.) A dental click (similar to the English "tsk") also means "no" (informal), as does ъ-ъ (the only occurrence in Bulgarian of the glottal stop). The two are often said with the upward 'nod'. Bulgarian has an extensive vocabulary covering family relationships. The biggest range of words is for uncles and aunts, e.g. chicho (your father's brother), vuicho (your mother's brother), svako (your aunt's husband); an even larger number of synonyms for these three exists in the various dialects of Bulgarian, including kaleko, lelincho, tetin, etc. The words refer not only to the closest members of the family (such as brat – brother, but batko/bate – older brother, sestra – sister, but kaka – older sister), but extend to its furthest reaches,
alternate in two radii. An "isotoxal" right (symmetric) di-n-gonal bipyramid has n two-fold rotation axes through vertices around sides, n reflection planes through vertices and apices, an n-fold rotation axis through apices, a reflection plane through base, and an n-fold rotation-reflection axis through apices, representing symmetry group Dnh, [n,2], (*22n), of order 4n. (The reflection in the base plane corresponds to the 0° rotation-reflection. If n is even, there is a symmetry about the center, corresponding to the 180° rotation-reflection.) All its faces are congruent scalene triangles, and it is isohedral. It can be seen as another type of right "symmetric" di-n-gonal scalenohedron. Note: For at most two particular apex heights, triangle faces may be isosceles. Example: The "isotoxal" right (symmetric) "didigonal" (*) bipyramid with base vertices: U = (1,0,0), U′ = (−1,0,0), V = (0,2,0), V′ = (0,−2,0), and with apices: A = (0,0,1), A′ = (0,0,−1), has two different edge lengths: UV = UV′ = U′V = U′V′ = , AU = AU′ = A′U = A′U′ = , AV = AV′ = A′V = A′V′ = ; thus all its triangle faces are isosceles. The "isotoxal" right (symmetric) "didigonal" (*) bipyramid with the same base vertices, but with apex height: 2, also has two different edge lengths: and 2. In crystallography, "isotoxal" right (symmetric) "didigonal" (*) (8-faced), ditrigonal (12-faced), ditetragonal (16-faced), and dihexagonal (24-faced) bipyramids exist. (*) The smallest geometric di-n-gonal bipyramids have eight faces, and are topologically identical to the regular octahedron. In this case (2n = 2×2): an "isotoxal" right (symmetric) "didigonal" bipyramid is called a rhombic bipyramid, although all its faces are scalene triangles, because its flat polygon base is a rhombus. Scalenohedra A "regular" right "symmetric" di-n-gonal scalenohedron can be made with a regular zigzag skew 2n-gon base, two symmetric apices right above and right below the base center, and triangle faces connecting each base edge to each apex. It has two apices and 2n vertices around sides, 4n faces, and 6n edges; it is topologically identical to a 2n-gonal bipyramid, but its 2n vertices around sides alternate in two rings above and below the center. A "regular" right "symmetric" di-n-gonal scalenohedron has n two-fold rotation axes through mid-edges around sides, n reflection planes through vertices and apices, an n-fold rotation axis through apices, and an n-fold rotation-reflection axis through apices, representing symmetry group Dnv = Dnd, [2+,2n], (2*n), of order 4n. (If n is odd, there is a symmetry about the center, corresponding to the 180° rotation-reflection.) All its faces are congruent scalene triangles, and it is isohedral. It can be seen as another type of right "symmetric" 2n-gonal bipyramid, with a regular zigzag skew polygon base. Note: For at most two particular apex heights, triangle faces may be isosceles. In crystallography, "regular" right "symmetric" "didigonal" (8-faced) and ditrigonal (12-faced) scalenohedra exist. The smallest geometric scalenohedra have eight faces, and are topologically identical to the regular octahedron. In this case (2n = 2×2): a "regular" right "symmetric" "didigonal" scalenohedron is called a tetragonal scalenohedron; its six vertices can be represented as (0,0,±1), (±1,0,z), (0,±1,−z), where z is a parameter between 0 and 1; at z = 0, it is a regular octahedron; at z = 1, it is a disphenoid with all merged coplanar faces (four congruent isosceles triangles); for z > 1, it becomes concave.
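The two degenerate cases of the tetragonal scalenohedron can be checked directly from the vertex coordinates given above. The following is a minimal sketch in plain Python (the function and variable names are illustrative, not from any source), confirming that z = 0 yields a regular octahedron with all twelve edges of length √2, and that at z = 1 the upper apex becomes the midpoint of two base vertices, which is why adjacent faces merge into the coplanar faces of a disphenoid:

    from itertools import combinations
    from math import dist, isclose

    def tetragonal_scalenohedron(z):
        # Vertices (0,0,±1), (±1,0,z), (0,±1,−z), as stated in the text.
        return {
            "A": (0, 0, 1), "A'": (0, 0, -1),    # apices
            "U": (1, 0, z), "U'": (-1, 0, z),    # one ring of the zigzag base
            "V": (0, 1, -z), "V'": (0, -1, -z),  # the other ring
        }

    # z = 0: every non-antipodal vertex pair is an edge of length sqrt(2),
    # i.e. the solid degenerates to a regular octahedron.
    verts = tetragonal_scalenohedron(0.0)
    edges = [dist(p, q) for (_, p), (_, q) in combinations(verts.items(), 2)
             if dist(p, q) < 2]  # the three antipodal diagonals have length 2
    assert all(isclose(e, 2 ** 0.5) for e in edges)

    # z = 1: apex A = (0,0,1) is the midpoint of U = (1,0,1) and U' = (-1,0,1),
    # so the faces meeting along AU and AU' become coplanar (disphenoid case).
    v = tetragonal_scalenohedron(1.0)
    assert tuple((a + b) / 2 for a, b in zip(v["U"], v["U'"])) == v["A"]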
Note: If the 2n-gon base is both isotoxal in-out and zigzag skew, then not all triangle faces of the "isotoxal" right "symmetric" solid are congruent. Example: The solid with isotoxal in-out zigzag skew 2×2-gon base vertices: U = (1,0,1), U′ = (−1,0,1), V = (0,2,−1), V′ = (0,−2,−1), and with "right" symmetric apices: A = (0,0,3), A′ = (0,0,−3), has five different edge lengths: UV = UV′ = U′V = U′V′ = 3, AU = AU′ = , AV = AV′ = 2, A′U = A′U′ = , A′V = A′V′ = 2; thus not all its triangle faces are congruent. "Regular" star bipyramids A self-intersecting or star bipyramid has a star polygon base. A "regular" right symmetric star bipyramid can be made with a regular star polygon base, two symmetric apices right above and right below the base center, and thus one-to-one symmetric triangle faces connecting each base edge to each apex. A "regular" right symmetric star bipyramid has congruent isosceles triangle faces, and is isohedral. Note: For at most one particular apex height, triangle faces may be equilateral. A {p/q}-bipyramid has Coxeter diagram . Scalene triangle star bipyramids An "isotoxal" right symmetric 2p/q-gonal star bipyramid can be made with an isotoxal in-out star 2p/q-gon base, two symmetric apices right above and right below the base center, and thus one-to-one symmetric triangle faces connecting each base edge to each apex. An "isotoxal" right symmetric 2p/q-gonal star bipyramid has congruent scalene triangle faces, and is isohedral. It can be seen as another type of 2p/q-gonal right "symmetric" star scalenohedron. Note: For at most two particular apex heights, triangle faces may be isosceles. Star scalenohedra A "regular" right "symmetric" 2p/q-gonal star scalenohedron can be made with a regular zigzag skew star 2p/q-gon base, two symmetric apices right above and right below the base center, and triangle faces connecting each base edge to each apex. A "regular" right "symmetric" 2p/q-gonal star scalenohedron has
congruent scalene triangle faces, and is isohedral.
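The five edge lengths stated in the isotoxal in-out zigzag skew example above are easy to recompute from the given coordinates, using the corrected lower apex A′ = (0,0,−3). A quick numerical check in plain Python (illustrative, not from the source):

    from math import dist

    U, U_, V, V_ = (1, 0, 1), (-1, 0, 1), (0, 2, -1), (0, -2, -1)
    A, A_ = (0, 0, 3), (0, 0, -3)

    lengths = {
        "UV (base edge)": dist(U, V),  # 3
        "AU": dist(A, U),              # sqrt(5)   ~ 2.236
        "AV": dist(A, V),              # 2*sqrt(5) ~ 4.472
        "A'U": dist(A_, U),            # sqrt(17)  ~ 4.123
        "A'V": dist(A_, V),            # 2*sqrt(2) ~ 2.828
    }
    # Five pairwise-distinct edge lengths: the triangle faces cannot all be congruent.
    assert len(set(round(x, 9) for x in lengths.values())) == 5

Since the five values are pairwise distinct, no choice of face pairing can make all the triangles congruent, which is the point of the note above.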
Such a star scalenohedron can be seen as another type of right "symmetric" 2p/q-gonal star bipyramid, with a regular zigzag skew star polygon base. Note: For at most two particular apex heights, triangle faces may be isosceles. Note: If the star 2p/q-gon base is both isotoxal in-out and zigzag skew, then not all triangle faces of the "isotoxal" right "symmetric" star polyhedron are congruent. With base vertices: U0 = (1,0,1), U1 = (0,1,1), U2 = (−1,0,1), U3 = (0,−1,1), V0 = (2,2,−1), V1 = (−2,2,−1), V2 = (−2,−2,−1), V3 = (2,−2,−1), and with apices: A = (0,0,3), A′ = (0,0,−3), it has four different edge lengths: U0V1 = V1U3 = U3V0 = V0U2 = U2V3 = V3U1 = U1V2 = V2U0 = , AU0 = AU1 = AU2 = AU3 = , AV0 = AV1 = AV2 = AV3 = 2, A′U0 = A′U1 = A′U2 = A′U3 = , A′V0 = A′V1 = A′V2 = A′V3 = 2; thus not all its triangle faces are congruent. 4-polytopes with bipyramidal cells The dual of the rectification of each convex regular 4-polytope is a cell-transitive 4-polytope with bipyramidal cells. In the following, the apex vertex of the bipyramid is A and an equator vertex is E. The distance
Britain. Bodmin Moor became a centre of purported sightings after 1978, with occasional reports of mutilated slain livestock; the alleged panther- or leopard-like black cats of the same region came to be popularly known as the Beast of Bodmin Moor. In general, scientists reject such claims because of the improbably large numbers necessary to maintain a breeding population and because climate and food supply issues would make such purported creatures' survival in reported habitats unlikely. Investigation A long-held hypothesis suggests the possibility that alien big cats at large in the United Kingdom could have been imported as part of private collections or zoos, then later escaped or set free. An escaped big cat would not be reported to the authorities due to the illegality of owning and importing the animals. It has been claimed that animal trainer Mary Chipperfield released three pumas into the wild following the closure of her Plymouth zoo in 1978 and that subsequent sightings of the animals gave rise to rumours of the Beast. The Ministry of Agriculture, Fisheries and Food conducted an official investigation in 1995 led by investigators Simon Baker and Charles Wilson. On 19 July 1995 the study found that there was "no verifiable evidence" of exotic felines loose in Britain and that the mauled farm animals could have been attacked by common indigenous species. The report stated that "no verifiable evidence for the presence of a 'big cat' was found ... There is no significant threat to livestock from a 'big cat' in Bodmin Moor".
by Meeting, Brown, Bowen, and Thayer Streets and sits three blocks north of Brown's central campus. The campus is dominated by brick architecture, largely of the Georgian and Victorian styles. The west side of the quadrangle comprises Pembroke Hall (1897), Smith-Buonanno Hall (1907), and Metcalf Hall (1919), while the east side comprises Alumnae Hall (1927) and Miller Hall (1910). The quadrangle culminates on the north with Andrews Hall (1947). East Campus, centered on Hope and Charlesfield streets, originally served as the campus of Bryant University. In 1969, as Bryant was preparing to relocate to Smithfield, Rhode Island, Brown purchased Bryant's Providence campus for $5 million. The transaction expanded the Brown campus by and 26 buildings. In 1971, Brown renamed the area East Campus. Today, the area is largely used for dormitories. Thayer Street runs through Brown's main campus. As a commercial corridor frequented by students, Thayer is comparable to Harvard Square or Berkeley's Telegraph Avenue. Wickenden Street, in the adjacent Fox Point neighborhood, is another commercial street similarly popular among students. Built in 1925, Brown Stadium—the home of the school's football team—is located approximately a mile and a half northeast of the university's central campus. Marston Boathouse, the home of Brown's crew teams, lies on the Seekonk River, to the southeast of campus. Brown's sailing teams are based out of the Ted Turner Sailing Pavilion at the Edgewood Yacht Club in adjacent Cranston. Since 2011, Brown's Warren Alpert Medical School has been located in Providence's historic Jewelry District, near the medical campus of Brown's teaching hospitals, Rhode Island Hospital and the Women and Infants Hospital of Rhode Island. Other university facilities, including molecular medicine labs and administrative offices, are likewise located in the area. Brown's School of Public Health occupies a landmark modernist building along the Providence River. Other Brown properties include the Mount Hope Grant in Bristol, Rhode Island, an important Native American site noted as a location of King Philip's War. Brown's Haffenreffer Museum of Anthropology Collection Research Center, particularly strong in Native American items, is located in the Mount Hope Grant. Sustainability Brown has committed to "minimize its energy use, reduce negative environmental impacts and promote environmental stewardship." Since 2010, the university has required that all new buildings meet LEED silver standards. Between 2007 and 2018, Brown reduced its greenhouse gas emissions by 27 percent; the majority of this reduction is attributable to the university's Thermal Efficiency Project, which converted its central heating plant from a steam-based system to a hot-water-based system. In 2020, Brown announced it had sold 90 percent of its fossil fuel investments as part of a broader divestment from direct investments and managed funds that focus on fossil fuels. In 2021, the university adopted the goal of reducing quantifiable campus emissions by 75 percent by 2025 and achieving carbon neutrality by 2040. According to the A. W. Kuchler U.S. potential natural vegetation types, Brown would have a dominant vegetation type of Appalachian Oak (104) with a dominant vegetation form of Eastern Hardwood Forest (25). Academics The College Founded in 1764, the college is Brown's oldest school. About 7,200 undergraduate students are enrolled in the college, and 81 concentrations are offered.
For the graduating class of 2020, the most popular concentrations were Computer Science, Economics, Biology, History, Applied Mathematics, International Relations, and Political Science. A quarter of Brown undergraduates complete more than one concentration before graduating. If the existing programs do not align with their intended curricular interests, undergraduates may design and pursue independent concentrations. Thirty-five percent of undergraduates pursue graduate or professional study immediately, 60 percent within 5 years, and 80 percent within 10 years. For the Class of 2009, 56 percent of all undergraduate alumni have since earned graduate degrees. Among undergraduate alumni who go on to receive graduate degrees, the most common degrees earned are J.D. (16%), M.D. (14%), M.A. (14%), M.Sc. (14%), and Ph.D. (11%). The most common institutions from which undergraduate alumni earn graduate degrees are Brown University, Columbia University, and Harvard University. The highest fields of employment for undergraduate alumni ten years after graduation are education and higher education (15%), medicine (9%), business and finance (9%), law (8%), and computing and technology (7%). Brown and RISD Since its 1893 relocation to College Hill, Rhode Island School of Design (RISD) has bordered Brown to its west. Since 1900, Brown and RISD students have been able to cross-register at the two institutions, with Brown students permitted to take as many as four courses at RISD to count towards their Brown degree. The two institutions partner to provide various student-life services, and the two student bodies form a combined presence in the College Hill cultural scene. Brown|RISD Dual Degree Program After several years of discussion between the two institutions, and with some students already pursuing dual degrees unofficially, Brown and RISD formally established a five-year dual degree program in 2007, with the first class matriculating in the fall of 2008. The Brown|RISD Dual Degree Program, among the most selective in the country, offered admission to 20 of the 725 applicants for the class entering in autumn 2020, for an acceptance rate of 2.7%. The program combines the complementary strengths of the two institutions, integrating studio art and design at RISD with Brown's academic offerings. Students are admitted to the Dual Degree Program for a course lasting five years and culminating in both a Bachelor of Arts (A.B.) or Bachelor of Science (Sc.B.) degree from Brown and a Bachelor of Fine Arts (B.F.A.) degree from RISD. Prospective students must apply to the two schools separately and be accepted by separate admissions committees. Their application must then be approved by a third Brown|RISD joint committee. Admitted students spend the first year in residence at RISD completing its first-year Experimental and Foundation Studies curriculum while taking up to three Brown classes. Students spend their second year in residence at Brown, during which they take mainly Brown courses while starting on their RISD major requirements. In the third, fourth, and fifth years, students can elect to live at either school or off-campus, and course distribution is determined by the requirements of each student's unique combination of Brown concentration and RISD major.
Program participants are noted for their creative and original approach to cross-disciplinary opportunities, combining, for example, industrial design with engineering, or anatomical illustration with human biology, or philosophy with sculpture, or architecture with urban studies. An annual "BRDD Exhibition" is a well-publicized and heavily attended event, drawing interest and attendees from the broader world of industry, design, the media, and the fine arts. MADE Program In 2020, the two schools announced the establishment of a new joint Master of Arts in design engineering program. Abbreviated as MADE, the program intends to combine RISD's programs in industrial design with Brown's programs in engineering. The program is administered through Brown's School of Engineering and RISD's Architecture and Design Division. Theatre and playwriting Brown's theatre and playwriting programs are among the best-regarded in the country. Six Brown graduates have received the Pulitzer Prize for Drama: Alfred Uhry '58, Lynn Nottage '86, Ayad Akhtar '93, Nilo Cruz '94, Quiara Alegría Hudes '04, and Jackie Sibblies Drury MFA '04. In American Theater magazine's 2009 ranking of the most-produced American plays, Brown graduates occupied four of the top five places—Peter Nachtrieb '97, Rachel Sheinkin '89, Sarah Ruhl '97, and Stephen Karam '02. The undergraduate concentration encompasses programs in theatre history, performance theory, playwriting, dramaturgy, acting, directing, dance, speech, and technical production. Applications for doctoral and master's degree programs are made through the University Graduate School. Master's degrees in acting and directing are pursued in conjunction with the Brown/Trinity Rep MFA program, which partners with the Trinity Repertory Company, a local regional theatre. Writing programs Writing at Brown—fiction, non-fiction, poetry, playwriting, screenwriting, electronic writing, mixed media, and the undergraduate writing proficiency requirement—is catered for by various centers and degree programs, and a faculty that has long included nationally and internationally known authors. The undergraduate concentration in literary arts offers courses in fiction, poetry, screenwriting, literary hypermedia, and translation. Graduate programs include the fiction and poetry MFA writing programs in the literary arts department, and the MFA playwriting program in the theatre arts and performance studies department. The non-fiction writing program is offered in the English department. Screenwriting and cinema narrativity courses are offered in the departments of literary arts and modern culture and media. The undergraduate writing proficiency requirement is supported by the Writing Center. Author prizewinners Alumni authors take their degrees across the spectrum of degree concentrations, but a gauge of the strength of writing at Brown is the number of major national writing prizes won.
To note only winners since the year 2000: Pulitzer Prize for Fiction-winners Jeffrey Eugenides '82 (2003), Marilynne Robinson '66 (2005), and Andrew Sean Greer '92 (2018); British Orange Prize-winners Marilynne Robinson '66 (2009) and Madeline Miller '00 (2012); Pulitzer Prize for Drama-winners Nilo Cruz '94 (2003), Lynn Nottage '86 (twice, 2009, 2017), Quiara Alegría Hudes '04 (2012), Ayad Akhtar '93 (2013), and Jackie Sibblies Drury MFA '04 (2019); Pulitzer Prize for Biography-winners David Kertzer '69 (2015) and Benjamin Moser '98 (2020); Pulitzer Prize for Journalism-winners James Risen '77 (twice, 2002, 2006), Mark Maremont '80 (twice, 2003, 2007), Gareth Cook '91 (2005), Tony Horwitz '80 (2005), Peter Kovacs '77 (twice, 2006, 2019), Stephanie Grace '86 (2006), Mary Swerczek '98 (2006), Jane B. Spencer '99 (2006), Usha Lee McFarling '89 (2007), James Bandler '89 (2007), Amy Goldstein '75 (2009), David Rohde '90 (twice, 1996, 2009), Kathryn Schulz '96 (2016), Alissa J. Rubin '80 (2016), Rebecca Ballhaus '13 (2019); Pulitzer Prize for General Nonfiction-winner James Forman Jr. '88 (2018); Pulitzer Prize for History-winner Marcia Chatelain PhD '08 (2021); as well as Pulitzer Prize for Poetry-winner Peter Balakian PhD '80 (2016). Computer science Brown began offering computer science courses through the departments of Economics and Applied Mathematics in 1956 when it acquired an IBM machine. Brown added an IBM 650 in January 1958, the only one of its type between Hartford and Boston. In 1960, Brown opened its first dedicated computer building. The facility, designed by Philip Johnson, received an IBM 7070 computer the following year. Brown granted computer science full departmental status in 1979. In 2009, IBM and Brown announced the installation of a supercomputer (by teraflops standards), the most powerful in the southeastern New England region. In the 1960s, Andries van Dam, along with Ted Nelson and Bob Wallace, invented the hypertext editing systems HES and FRESS while at Brown. Nelson coined the word hypertext, while Van Dam's students helped originate XML, XSLT, and related Web standards. Among the school's computer science alumni are principal architect of the Classic Mac OS, Andy Hertzfeld; principal architect of the Intel 80386 and Intel 80486 microprocessors, John Crawford; former CEO of Apple, John Sculley; and digital effects programmer Masi Oka. Other alumni include former CS department head at MIT John Guttag; Workday founder Aneel Bhusri; MongoDB founder Eliot Horowitz; Figma founders Dylan Field and Evan Wallace; and OpenSea founder Devin Finzer. The character "Andy" in the animated film Toy Story is purportedly an homage to Professor Van Dam from his students employed at Pixar. Between 2012 and 2018, the number of concentrators in CS tripled. In 2017, computer science overtook economics as the school's most popular undergraduate concentration. Applied mathematics Brown's program in applied mathematics was established in 1941, making it the oldest such program in the United States. The division is highly ranked and well regarded nationally and internationally. Among the 67 recipients of the Timoshenko Medal, 22 have been affiliated with Brown's applied mathematics division as faculty, researchers, or students. The Joukowsky Institute for Archaeology and the Ancient World Established in 2004, the Joukowsky Institute for Archaeology and the Ancient World is Brown's interdisciplinary research center for archaeology and ancient studies.
The institute pursues fieldwork, excavations, regional surveys, and academic study of the archaeology and art of the ancient Mediterranean, Egypt, and Western Asia from the Levant to the Caucasus. The institute has a very active fieldwork profile, with faculty-led excavations and regional surveys presently in Petra (Jordan), Abydos (Egypt), Turkey, Sudan, Italy, Mexico, Guatemala, Montserrat, and Providence. The Joukowsky Institute's faculty includes cross-appointments from the departments of Egyptology, Assyriology, Classics, Anthropology, and History of Art and Architecture. Faculty research and publication areas include Greek and Roman art and architecture, landscape archaeology, urban and religious architecture of the Levant, Roman provincial studies, the Aegean Bronze Age, and the archaeology of the Caucasus. The institute offers visiting teaching appointments and postdoctoral fellowships which have, in recent years, included Near Eastern Archaeology and Art, Classical Archaeology and Art, Islamic Archaeology and Art, and Archaeology and Media Studies. Egyptology and Assyriology Facing the Joukowsky Institute, across the Front Green, is the Department of Egyptology and Assyriology, formed in 2006 by the merger of Brown's departments of Egyptology and History of Mathematics. It is one of only a handful of such departments in the United States. The curricular focus is on three principal areas: Egyptology, Assyriology, and the history of the ancient exact sciences (astronomy, astrology, and mathematics). Many courses in the department are open to all Brown undergraduates without prerequisite, and include archaeology, languages, history, and Egyptian and Mesopotamian religions, literature, and science. Students concentrating in the department choose a track of either Egyptology or Assyriology. Graduate-level study comprises three tracks to the doctoral degree: Egyptology, Assyriology, or the History of the Exact Sciences in Antiquity. The Watson Institute for International and Public Affairs The Watson Institute for International and Public Affairs, Brown's center for the study of global issues and public affairs, is one of the leading institutes of its type in the country. The institute occupies facilities designed by Uruguayan architect Rafael Viñoly and Japanese architect Toshiko Mori. The institute was initially endowed by Thomas Watson, Jr. (Class of 1937), former Ambassador to the Soviet Union and longtime president of IBM. Institute faculty and emeriti include Italian prime minister and European Commission president Romano Prodi, Brazilian president Fernando Henrique Cardoso, Chilean president Ricardo Lagos Escobar, Mexican novelist and statesman Carlos Fuentes, Brazilian statesman and United Nations commission head Paulo Sérgio Pinheiro, Indian foreign minister and ambassador to the United States Nirupama Rao, American diplomat and Dayton Peace Accords author Richard Holbrooke (Class of 1962), and Sergei Khrushchev, editor of the papers of his father Nikita Khrushchev, leader of the Soviet Union. The institute's curricular interest is organized into the principal themes of development, security, and governance—with further focuses on globalization, economic uncertainty, security threats, environmental degradation, and poverty.
Seven Brown undergraduate concentrations are hosted by the Watson Institute: Development Studies, International and Public Affairs, International Relations, Latin American and Caribbean Studies, Middle East Studies, Public Policy, and South Asian Studies. Graduate programs offered at the Watson Institute include the Graduate Program in Development (Ph.D.) and the Master of Public Affairs (M.P.A.) program. The institute also offers postdoctoral, professional development, and global outreach programming. In support of these programs, the institute houses various centers, including the Brazil Initiative, the Brown-India Initiative, the China Initiative, the Middle East Studies center, the Center for Latin American and Caribbean Studies (CLACS), and the Taubman Center for Public Policy. In recent years, the most internationally cited product of the Watson Institute has been its Costs of War Project, first released in 2011 and continuously updated since. The project comprises a team of economists, anthropologists, political scientists, legal experts, and physicians, and seeks to calculate the economic costs, human casualties, and impact on civil liberties of the wars in Iraq, Afghanistan, and Pakistan since 2001. The School of Engineering Established in 1847, Brown's engineering program is the oldest in the Ivy League and the third oldest civilian engineering program in the country. In 1916, Brown's departments of electrical, mechanical, and civil engineering were merged into a single Division of Engineering. In 2010, the division was elevated to a School of Engineering. Engineering at Brown is especially interdisciplinary. The school is organized without the traditional departments or boundaries found at most schools, and follows a model of connectivity between disciplines—including biology, medicine, physics, chemistry, computer science, the humanities, and the social sciences. The school practices an innovative clustering of faculties in which engineers team with non-engineers to bring about a convergence of ideas. IE Brown Executive MBA Dual Degree Program Since 2009, Brown has developed an Executive MBA program in conjunction with one of the
leading business schools in Europe: IE Business School in Madrid. This relationship has since strengthened, resulting in both institutions offering a dual degree program. In this partnership, Brown provides its traditional coursework while IE provides most of the business-related subjects, making for a differentiated alternative to other Ivy League EMBAs. The cohort typically consists of 25–30 EMBA candidates from some 20 countries. Classes are held in Providence, Madrid, Cape Town, and online. The Pembroke Center The Pembroke Center for Teaching and Research on Women was established at Brown in 1981 by Joan Wallach Scott as an interdisciplinary research center on gender. The center is named for Pembroke College, Brown's former women's college, and is affiliated with Brown's Sarah Doyle Women's Center.
The Pembroke Center supports Brown's undergraduate concentration in Gender and Sexuality Studies, post-doctoral research fellowships, the annual Pembroke Seminar, and other academic programs. It also manages various collections, archives, and resources, including the Elizabeth Weed Feminist Theory Papers and the Christine Dunlap Farnham Archive. The Graduate School Brown introduced graduate courses in the 1870s and granted its first advanced degrees in 1888. The university established a Graduate Department in 1903 and a full Graduate School in 1927. With an enrollment of approximately 2,600 students, the school currently offers 33 master's programs and 51 doctoral programs. The school additionally offers a number of fifth-year master's programs. Overall, admission to the Graduate School is highly competitive, with an acceptance rate averaging approximately 9 percent in recent years. Carney Institute for Brain Science The Robert J. & Nancy D. Carney Institute for Brain Science is Brown's cross-departmental neuroscience research institute. The institute's core focus areas include brain-computer interfaces and computational neuroscience; additional areas of focus include research into mechanisms of cell death, with the aim of developing therapies for neurodegenerative diseases. The Carney Institute was founded by John Donoghue in 2009 as the Brown Institute for Brain Science and renamed in 2018 in recognition of a $100 million gift. The donation, one of the largest in the university's history, established the institute as one of the best-endowed university neuroscience programs in the country. Alpert Medical School Established in 1811, Brown's Alpert Medical School is the fourth oldest medical school in the Ivy League. In 1827, medical instruction was suspended by President Francis Wayland after the program's faculty declined to follow a new policy requiring professors to live on campus. The program was reorganized in 1972; the first M.D. degrees from the new Program in Medicine were awarded to a graduating class of 58 students in 1975. In 1991, the school was officially renamed the Brown University School of Medicine, then renamed once more to Brown Medical School in October 2000. In January 2007, entrepreneur and philanthropist Warren Alpert donated $100 million to the school. In recognition of the gift, the school's name was changed to the Warren Alpert Medical School of Brown University. In 2020, U.S. News & World Report ranked Brown's medical school the 9th most selective in the country, with an acceptance rate of 2.8%. U.S. News ranks the school 38th for research and 35th for primary care. Brown's medical school is known especially for its Program in Liberal Medical Education (PLME), an eight-year combined baccalaureate-M.D. program. Inaugurated in 1984, the program is one of the most selective and renowned programs of its type in the country, offering admission to only 2% of applicants in 2021. Since 1976, the Early Identification Program (EIP) has encouraged Rhode Island residents to pursue careers in medicine by recruiting sophomores from Providence College, Rhode Island College, the University of Rhode Island, and Tougaloo College. In 2004, the school once again began to accept applications from premedical students at other colleges and universities via AMCAS, like most other medical schools. The medical school also offers M.D./Ph.D., M.D./M.P.H., and M.D./M.P.P. dual-degree programs.
School of Public Health Brown's School of Public Health grew out of the Alpert Medical School's Department of Community Health and was officially founded in 2013 as an independent school. The school grants undergraduate (A.B., Sc.B.), graduate (M.P.H., Sc.M., A.M.), doctoral (Ph.D.), and dual degrees (M.P.H./M.P.A., M.D./M.P.H.). Online programs The Brown University School of Professional Studies currently offers blended-learning executive master's degrees in Healthcare Leadership, Cyber Security, and Science and Technology Leadership. The master's degrees are designed to help students who have jobs and lives outside of academia to progress in their respective fields. The students meet in Providence every 6–7 weeks for a week-long seminar each trimester. The university has also invested in MOOC development, starting in 2013 with two courses, Archeology's Dirty Little Secrets and The Fiction of Relationship, both of which enrolled thousands of students. However, after a year of courses, the university broke its contract with Coursera and revamped its online persona and MOOC development department. By 2017, the university had released new courses on edX, two of which were The Ethics of Memory and Artful Medicine: Art's Power to Enrich Patient Care. In January 2018, Brown published its first "gamified" course, Fantastic Places, Unhuman Humans: Exploring Humanity Through Literature, which featured out-of-platform games to help learners understand the material, as well as a storyline that immerses users in a fictional world to help characters along their journey. Admissions and financial aid Undergraduate Undergraduate admission to Brown University is considered "most selective" by U.S. News & World Report. For the undergraduate class of 2025, Brown received 46,568 applications—the largest applicant pool in the university's history. Of these applicants, 2,566 were admitted, for an acceptance rate of 5.4%. The university's yield rate for the class was 69%. For the academic year 2019–20, the university received 2,030 transfer applications, of which 5.8% were accepted. Brown's admissions policy is need-blind for all domestic first-year applicants. In 2017, Brown announced that loans would be eliminated from all undergraduate financial aid awards starting in 2018–2019, as part of a new $30 million campaign called the Brown Promise. In 2016–17, the university awarded need-based scholarships worth $120.5 million. The average need-based award for the class of 2020 was $47,940. Graduate In 2017, the Graduate School accepted 11% of 9,215 applicants. In 2021, Brown received a record 948 applications for roughly 90 spots in its Master of Public Health program. In 2014, U.S. News ranked Brown's Warren Alpert Medical School the 5th most selective in the country, with an acceptance rate of 2.9%. Rankings Brown University is accredited by the New England Commission of Higher Education. The Wall Street Journal/Times Higher Education ranked Brown 5th in its "Best Colleges 2021" edition. The Forbes magazine annual ranking of "America's Top Colleges 2021"—which ranked 600 research universities, liberal arts colleges, and service academies—placed Brown 26th overall and 23rd among universities. U.S. News & World Report ranked Brown 14th among national universities in its 2021 edition. The 2021 edition also ranked Brown 1st for undergraduate teaching, 20th in Most Innovative Schools, and 18th in Best Value Schools.
Washington Monthly ranked Brown 37th in 2020 among 389 national universities in the U.S. based on its contribution to the public good, as measured by social mobility, research, and promoting public service. For 2020, U.S. News & World Report ranked Brown 102nd globally. In 2014, Forbes magazine ranked Brown 7th on its list of "America's Most Entrepreneurial Universities." The Forbes analysis looked at the ratio of "alumni and students who have identified themselves as founders and business owners on LinkedIn" to the total number of alumni and students. LinkedIn particularized the Forbes rankings, placing Brown third (between MIT and Princeton) among "Best Undergraduate Universities for Software Developers at Startups." LinkedIn's methodology involved a career-path examination of "millions of alumni profiles" in its membership database. In 2020, U.S. News ranked Brown's Warren Alpert Medical School the 9th most selective in the country, with an acceptance rate of 2.8 percent. According to 2020 data from the U.S. Department of Education, the median starting salary of Brown computer science graduates was the highest in the United States. In 2020, Brown produced the second-highest number of Fulbright winners. For the three years prior, the university produced the most Fulbright winners of any university in the nation. Research Brown has been a member of the Association of American Universities since 1933 and is classified among "R1: Doctoral Universities – Very High Research Activity". In FY 2017, Brown spent $212.3 million on research and was ranked 103rd in the United States by total R&D expenditure by the National Science Foundation. Student life Campus safety In 2014, Brown tied with the University of Connecticut for the highest number of reported rapes in the nation, with its "total of reports of rape" on its main campus standing at 43. Spring weekend Established in 1950, Spring Weekend is an annual spring music festival for students. Historical performers at the festival have included Ella Fitzgerald, Dizzy Gillespie, Ray Charles, Bob Dylan, Janis Joplin, and Bruce Springsteen. More recent headliners include Kendrick Lamar, Young Thug, Daniel Caesar, Anderson .Paak, Mitski, and Mac DeMarco. Since 1960, Spring Weekend has been organized by the student-run Brown Concert Agency. Residential and Greek societies Approximately 12 percent of Brown students participate in Greek life. The university recognizes fourteen Greek organizations: six fraternities (Alpha Phi Alpha, Beta Omega Chi, Delta Tau, Delta Phi, Kappa Alpha Psi, and Theta Alpha), six sororities (Alpha Chi Omega, Alpha Kappa Alpha, Delta Sigma Theta, Delta Gamma, Kappa Delta, and Kappa Alpha Theta), one co-ed house (Zeta Delta Xi), and one co-ed literary society (Alpha Delta Phi). Since the early 1950s, all Greek organizations on campus have been located in Wriston Quadrangle. The organizations are overseen by the Greek Council. An alternative to Greek-letter organizations is Brown's program houses, which are organized by theme. As with Greek houses, the residents of program houses select their new members, usually at the start of the spring semester. Examples of program houses are St. Anthony Hall (located in King House), Buxton International House, the Machado French/Hispanic/Latinx House, Technology House, Harambee (African culture) House, Social Action House, and Interfaith House. All students not in program housing enter a lottery for general housing.
Students form groups and are assigned time slots during which they can pick among the remaining housing options. Societies and clubs The earliest societies at Brown were devoted to oration and debate. The Pronouncing Society is mentioned in the diary of Solomon Drowne, class of 1773, who was voted its president in 1771. The organization seems to have disappeared during the American Revolutionary War. Subsequent societies include the Misokosmian Society (est. 1798 and renamed the Philermenian Society), the Philandrian Society (est. 1799), the United Brothers (1806), the Philophysian Society (1818), and the Franklin Society (1824). Societies served social as well as academic purposes, with many supporting literary debate and amassing large libraries. Older societies generally aligned with the Federalists, while younger societies generally leaned Republican. Societies remained popular into the 1860s, after which they were largely replaced by fraternities. The Cammarian Club was at first a semi-secret society which "tapped" 15 seniors each year. In 1915, self-perpetuating membership gave way to popular election by the student body, and thenceforward the club served as the de facto undergraduate student government. The organization was dissolved in 1971 and ultimately succeeded by a formal student government. Societas Domi Pacificae, known colloquially as "Pacifica House," is a present-day, self-described secret society. It claims a continuous line of descent from the Franklin Society of 1824, citing a supposed intermediary "Franklin Society" traceable in the nineteenth century. Student organizations There are over 300 registered student organizations on campus with diverse interests. The Student Activities Fair, held during the orientation program, provides first-year students the opportunity to become acquainted with the wide range of organizations. A sample of organizations includes: Brown University Undergraduate Council of Students, The College Hill Independent, The Brown Daily Herald, Brown Debating Union, The Brown Derbies, Brown International Organization, Brown Journal of World Affairs, The Brown Jug, The Brown Noser, Brown Opera Productions, Brown Political Review, The Brown Spectator, BSR, Brown University Band, Brown University Orchestra, Chinese Students and Scholars Association, Critical Review, Ivy Film Festival, Jabberwocks, Production Workshop, Starla and Sons, Students for Sensible Drug Policy, and WBRU. Resource centers Brown has several resource centers on campus. The centers often act as sources of support as well as safe spaces for students to explore certain aspects of their identity. Additionally, the centers often provide physical spaces for students to study and have meetings. Although most centers are identity-focused, some provide academic support as well. The Brown Center for Students of Color (BCSC) is a space that provides support for students of color. Established in 1972 in response to student protests, the BCSC encourages students to engage in critical dialogue, develop leadership skills, and promote social justice. The center houses various programs for students to share their knowledge and engage in discussion. Programs include the Third World Transition Program, the Minority Peer Counselor Program, the Heritage Series, and other student-led initiatives. Additionally, the BCSC hopes to foster community among the students it serves by providing spaces for students to meet and study.
The Sarah Doyle Women's Center aims to provide a space for members of the Brown community to examine and explore issues surrounding gender. The center was named after one of the first women to attend Brown, Sarah Doyle. The center emphasizes intersectionality in its conversations on gender, encouraging people to see gender as present and relevant in various aspects of life. The center hosts programs and workshops in order to facilitate dialogue and provide resources for students, faculty, and staff. Other centers include the LGBTQ+ Center, the Undocumented, First-Generation College and Low-Income Student (U-FLi) Center, and the Curricular Resource Center. Activism 1968 Black Student Walkout On December 5, 1968, several Black women from Pembroke College initiated a walkout in protest of an atmosphere at the colleges that Black students described as a "stifling, frustrating, [and] degrading place for Black students," after feeling the colleges were unresponsive to their concerns. In total, 65 Black students participated in the walkout. Their principal demand was to increase Black student enrollment to 11% of the student populace, to match the proportion of Black Americans in the US population. This ultimately resulted in a 300% increase in Black enrollment the following year, but some demands have yet to be met. Athletics Brown is a member of the Ivy League athletic conference, which is categorized as a Division I (top level) conference of the National Collegiate Athletic Association (NCAA). The Brown Bears have one of the largest university sports programs in the United States, sponsoring 32 varsity intercollegiate teams. Brown's athletic program is one of the U.S. News & World Report top 20—the "College Sports Honor Roll"—based on breadth of program and athletes' graduation rates. Brown's newest varsity team is women's rugby, promoted from club-sport status in 2014. Brown women's rowing has won 7 national titles between 1999 and 2011. Brown men's rowing perennially finishes in the top 5 in the nation, most recently winning silver, bronze, and silver in the national championship races of 2012, 2013, and 2014. The men's and women's crews have also won championship trophies at the Henley Royal Regatta and the Henley Women's Regatta. Brown's men's soccer is consistently ranked in the top 20 and has won 18 Ivy League titles overall; recent soccer graduates play professionally in Major League Soccer and overseas. Brown football, under its most successful coach historically, Phil Estes, won Ivy League championships in 1999, 2005, and 2008. High-profile alumni of the football program include former Houston Texans head coach Bill O'Brien, former Penn State football coach Joe Paterno, Heisman Trophy namesake John W. Heisman, and Pollard Award namesake Fritz Pollard. Brown women's gymnastics won the Ivy League tournament in 2013 and 2014. The Brown women's sailing team has won 5 national championships, most recently in 2019, while the coed sailing team won 2 national championships, in 1942 and 1948. Both teams are consistently ranked in the top 10 in the nation. The first intercollegiate ice hockey game in America was played between Brown and Harvard on January 19, 1898. The first university rowing regatta larger than a dual-meet was held between Brown, Harvard, and Yale at Lake Quinsigamond in Massachusetts on July 26, 1859. Brown also supports competitive intercollegiate club sports, including ultimate frisbee.
The men's ultimate team, Brownian Motion, has won three national championships, in 2000, 2005, and 2019. Notable people Alumni Alumni in politics include U.S. Secretary of State John Hay (1852), U.S. Secretary of State and U.S. Attorney General Richard Olney (1856), Chief Justice of the United States and U.S. Secretary of State Charles Evans Hughes (1881), Louisiana Governor Bobby Jindal '92, U.S. Senator Maggie Hassan '80 of New Hampshire, Delaware Governor Jack Markell '82, Rhode Island Representative David Cicilline '83, Minnesota Representative Dean Phillips '91, 2020 presidential candidate and entrepreneur Andrew Yang '96, and DNC Chair Tom Perez '83. Prominent alumni in business and finance include philanthropist John D. Rockefeller Jr. (1897), former Chair of the Federal Reserve and current U.S. Secretary of the Treasury Janet Yellen '67, World Bank President Jim Yong Kim '82, Bank of America CEO Brian Moynihan '81, CNN founder Ted Turner '60, IBM chairman and CEO Thomas Watson, Jr. '37, co-founder of Starwood Capital Group Barry Sternlicht '82, Apple Inc. CEO John Sculley '61, BlackBerry Ltd. CEO John S. Chen '78, Facebook CFO David Ebersman '91, and Uber CEO Dara Khosrowshahi '91. Companies founded by Brown alumni include CNN, The Wall Street Journal, Searchlight Pictures, Netgear, W Hotels, Workday, Warby Parker, Casper, Figma, ZipRecruiter, and Cards Against Humanity. Alumni in the arts and media include actors Emma Watson '14, Daveed Diggs '04, Julie Bowen '91, Tracee Ellis Ross '94, and Jessica Capshaw '98; NPR program host Ira Glass '82; singer-composer Mary Chapin Carpenter '81; humorist and Marx Brothers screenwriter S.J. Perelman '25; novelists Nathanael West '24, Jeffrey Eugenides '83, Edwidge Danticat (MFA '93), and Marilynne Robinson '66; composer and synthesizer pioneer Wendy Carlos '62; journalist James Risen '77; political pundit Mara Liasson; MSNBC host and The Nation editor-at-large Chris Hayes '01; New York Times publisher A. G. Sulzberger '03; and magazine editor John F. Kennedy, Jr. '83. Important figures in the history of education include the father of American public school education Horace Mann (1819), civil libertarian and Amherst College president Alexander Meiklejohn, first president of the University of South Carolina Jonathan Maxcy (1787), Bates College founder Oren B. Cheney (1836), University of Michigan president (1871–1909) James Burrill Angell (1849), University of California president (1899–1919) Benjamin Ide Wheeler (1875), and Morehouse College's first African-American president John Hope (1894). Alumni in the computer sciences and industry include architect of the Intel 386, 486, and Pentium microprocessors John H. Crawford '75, inventor of the first silicon transistor Gordon Kidd Teal '31, MongoDB founder Eliot Horowitz '03, Figma founder Dylan Field, and Macintosh developer Andy Hertzfeld '75. Other notable alumni include "Lafayette of the Greek Revolution" and its historian Samuel Gridley Howe (1821), Governor of Wyoming Territory and Nebraska Governor John Milton Thayer (1841), Rhode Island Governor Augustus Bourn (1855), NASA head during the first seven Apollo missions Thomas O. Paine '42, diplomat Richard Holbrooke '62, sportscaster Chris Berman '77, Houston Texans head coach Bill O'Brien '92, 2018 Miss America Cara Mund
also was one of the main designers of the Lisa and Macintosh user interfaces. Atkinson also conceived, designed, and implemented HyperCard, an early and influential hypermedia system. HyperCard put the power of computer programming and database design into the hands of nonprogrammers. In 1994, Atkinson received the EFF Pioneer Award for his contributions. Education He received his undergraduate degree from the University of California, San Diego, where Apple Macintosh developer Jef Raskin was one of his professors. Atkinson continued his studies as a graduate student in neurochemistry at the University of Washington. Raskin invited Atkinson to visit him at Apple Computer; Steve Jobs persuaded him to join the company immediately as employee No. 51, and Atkinson never finished his PhD. Career Around 1990, General Magic was founded, with Bill Atkinson as one of its three cofounders; the new company met the following press in Byte magazine: The obstacles to General Magic's success may appear daunting, but General Magic is not your typical start-up company. Its partners include some of the biggest players in the worlds of computing, communications, and consumer electronics, and it's loaded with top-notch engineers who have been given a clean slate to reinvent traditional approaches to ubiquitous worldwide communications. In 2007, Atkinson began working as an outside developer with Numenta, a startup working on computer intelligence. On his work there Atkinson said, "what Numenta is doing is more fundamentally important to society than the personal computer and the rise of the Internet." Currently, Atkinson has combined his passion for computer programming with his love of nature photography to create art images. He takes close-up photographs of
and defeated Sir William Waller at the Battle of Cropredy Bridge on 29 June. On 12 July, after a Royalist council of war recommended that Essex be dealt with before he could be reinforced, King Charles and his Oxford army departed Evesham. King Charles accepted the council's advice not solely because it was good strategy, but more so because his Queen was in Exeter, where she had recently given birth to the Princess Henrietta and had been denied safe conduct to Bath by Essex. Trapped in Cornwall On 26 July, King Charles arrived in Exeter and joined his Oxford army with the Royalist forces commanded by Prince Maurice. On that same day, Essex and his Parliamentary force entered Cornwall. One week later, as Essex bivouacked with his army at Bodmin, he learned that King Charles had defeated Waller, brought his Oxford army to the South-West, and joined forces with Prince Maurice. Essex had also seen that he was not getting the military support from the people of Cornwall that Lord Robartes had asserted he would. At that time, Essex understood that he and his army were trapped in Cornwall and that his only salvation would be reinforcements or an escape through the port of Fowey by means of the Parliamentarian fleet. Essex immediately marched his troops five miles south to the small town of Lostwithiel, arriving on 2 August. He immediately deployed his men in a defensive arc with detachments on the high ground to the north at Restormel Castle and the high ground to the east at Beacon Hill. Essex also sent a small contingent of foot south to secure the port of Fowey, aiming to eventually evacuate his infantry by sea. At Essex's disposal was a force of 6,500 foot and 3,000 horse. Aided by intelligence provided by the people of Cornwall, King Charles followed westward, slowly and deliberately cutting off the potential escape routes that Essex might attempt to utilize. On 6 August King Charles communicated with Essex, calling for him to surrender. Stalling for several days, Essex considered the offer but ultimately refused. On 11 August, Grenville and the Cornish Royalists entered Bodmin, forcing out Essex's rear-guard cavalry. Grenville then proceeded south across Respryn Bridge to meet and join forces with King Charles and Prince Maurice. It is estimated that the Royalist forces at that time were composed of 12,000 foot and 7,000 horse. Over the next two days the Royalists deployed detachments along the east side of the River Fowey to prevent a Parliamentarian escape across country. Finally the Royalists sent 200 foot with artillery south to garrison the fort at Polruan, effectively blocking the entrance to the harbour of Fowey. At about that time, Essex learned that reinforcements under the command of Sir John Middleton had been turned back by the Royalists at Bridgwater in Somerset. First battle - 21–30 August 1644 At 07:00 hours on 21 August, King Charles launched his first attack on Essex and the Parliamentarians at Lostwithiel. From the north, Grenville and the Cornish Royalists attacked Restormel Castle and easily dislodged the Parliamentarians, who fell back quickly. From the east, King Charles and the Oxford army captured Beacon Hill with little resistance from the Parliamentarians. Prince Maurice and his force occupied Druid Hill. Casualties were fairly low, and by nightfall the fighting ended with the Royalists holding the high ground on the north and east sides of Lostwithiel. For the next couple of days the two opposing forces exchanged fire only in a number of small skirmishes. On 24 August, King Charles further tightened the noose encircling the Parliamentarians when he sent Lord Goring and Sir Thomas Bassett to secure the town of St Blazey and the area to the southwest of Lostwithiel. This reduced the Parliamentarians' foraging area and their access to the coves and inlets in the vicinity of the port of Par. Essex and the Parliamentarians were now totally surrounded and boxed into a two-mile by five-mile area spanning from Lostwithiel in the north to the port of Fowey in the south. Knowing that he would not be able to fight his way out, Essex made his final plans for an escape. Since a sea evacuation of his cavalry would not be possible, Essex ordered his cavalry commander William Balfour to attempt a breakout to Plymouth. For the infantry, Essex planned to retreat south and meet Lord Warwick and the Parliamentarian fleet at Fowey. At 03:00 hours on 31 August, Balfour and 2,000 members of his cavalry executed the first step of Essex's plan when they successfully crossed the River Fowey and escaped intact without engaging the Royalist defenders. Second battle - 31 August - 2 September 1644 Early on the morning of 31 August, the Parliamentarians ransacked and looted Lostwithiel and began their withdrawal south. At 07:00 hours, the Royalists observed the actions of the Parliamentarians and immediately proceeded to attack. Grenville attacked from the north. King Charles and Prince Maurice crossed the River Fowey, joined up with Grenville, and entered Lostwithiel. Together the Royalists engaged the Parliamentarian rear-guards and quickly took possession of the town. The Royalists also sent detachments down along the east side of the River Fowey to protect against any further breakouts and to capture the town of Polruan. The Royalists then began to pursue Essex and the Parliamentarian infantry down the river valley. At the outset the Royalists pushed the Parliamentarians nearly three miles south through the hedged fields, hills and valleys. At the narrow pass near St. Veep, Philip Skippon, Essex's commander of the infantry, counter-attacked the Royalists and pushed them back several fields
or Auntie Beeb
BEEB, a BBC children's magazine published in 1985
BBC Micro, a home computer built for the BBC by Acorn Computers Ltd., nicknamed The Beeb
Beeb.com or BBC Online
Beeb Birtles (born 1948), Dutch-Australian musician
See also
Bebe (disambiguation)
Beebe
being with whom I should feel so much sympathy." Russell claimed that beginning at age 15, he spent considerable time thinking about the validity of Christian religious dogma, which he found unconvincing. At this age, he came to the conclusion that there is no free will and, two years later, that there is no life after death. Finally, at the age of 18, after reading Mill's Autobiography, he abandoned the "First Cause" argument and became an atheist. He travelled to the continent in 1890 with an American friend, Edward FitzGerald, and with FitzGerald's family he visited the Paris Exhibition of 1889 and climbed the Eiffel Tower soon after it was completed. University and first marriage Russell won a scholarship to read for the Mathematical Tripos at Trinity College, Cambridge, and began his studies there in 1890, taking as coach Robert Rumsey Webb. He became acquainted with the younger George Edward Moore and came under the influence of Alfred North Whitehead, who recommended him to the Cambridge Apostles. He quickly distinguished himself in mathematics and philosophy, graduating as seventh Wrangler in the former in 1893 and becoming a Fellow in the latter in 1895. Russell was 17 years old in the summer of 1889 when he met the family of Alys Pearsall Smith, an American Quaker five years older, who was a graduate of Bryn Mawr College near Philadelphia. He became a friend of the Pearsall Smith family – they knew him primarily as "Lord John's grandson" and enjoyed showing him off. He soon fell in love with the puritanical, high-minded Alys, and contrary to his grandmother's wishes, married her on 13 December 1894. Their marriage began to fall apart in 1901 when it occurred to Russell, while cycling, that he no longer loved her. She asked him if he loved her and he replied that he did not. Russell also disliked Alys's mother, finding her controlling and cruel. It was to be a hollow shell of a marriage. A lengthy period of separation began in 1911 with Russell's affair with Lady Ottoline Morrell, and he and Alys finally divorced in 1921 to enable Russell to remarry. During his years of separation from Alys, Russell had passionate (and often simultaneous) affairs with a number of women, including Morrell and the actress Lady Constance Malleson. Some have suggested that at this point he had an affair with Vivienne Haigh-Wood, the English governess and writer, and first wife of T. S. Eliot. Early career Russell began his published work in 1896 with German Social Democracy, a study in politics that was an early indication of a lifelong interest in political and social theory. In 1896 he taught German social democracy at the London School of Economics. He was a member of the Coefficients dining club of social reformers set up in 1902 by the Fabian campaigners Sidney and Beatrice Webb. He now started an intensive study of the foundations of mathematics at Trinity. In 1897, he wrote An Essay on the Foundations of Geometry (submitted at the Fellowship Examination of Trinity College) which discussed the Cayley–Klein metrics used for non-Euclidean geometry. He attended the First International Congress of Philosophy in Paris in 1900 where he met Giuseppe Peano and Alessandro Padoa. The Italians had responded to Georg Cantor, making a science of set theory; they gave Russell their literature including the Formulario mathematico. Russell was impressed by the precision of Peano's arguments at the Congress, read the literature upon returning to England, and came upon Russell's paradox. 
In 1903 he published The Principles of Mathematics, a work on the foundations of mathematics. It advanced a thesis of logicism, that mathematics and logic are one and the same. In February 1901, Russell underwent what he called a "sort of mystic illumination", after witnessing Whitehead's wife's acute suffering in an angina attack. "I found myself filled with semi-mystical feelings about beauty... and with a desire almost as profound as that of the Buddha to find some philosophy which should make human life endurable", Russell would later recall. "At the end of those five minutes, I had become a completely different person." In 1905, he wrote the essay "On Denoting", which was published in the philosophical journal Mind. Russell was elected a Fellow of the Royal Society (FRS) in 1908. The three-volume Principia Mathematica, written with Whitehead, was published between 1910 and 1913. This, along with the earlier The Principles of Mathematics, soon made Russell world-famous in his field. In 1910, he became a University of Cambridge lecturer at Trinity College, where he had studied. He was considered for a Fellowship, which would give him a vote in the college government and protect him from being fired for his opinions, but was passed over because he was "anti-clerical", essentially because he was agnostic. He was approached by the Austrian engineering student Ludwig Wittgenstein, who became his PhD student. Russell viewed Wittgenstein as a genius and a successor who would continue his work on logic. He spent hours dealing with Wittgenstein's various phobias and his frequent bouts of despair. This was often a drain on Russell's energy, but Russell continued to be fascinated by him and encouraged his academic development, including the publication of Wittgenstein's Tractatus Logico-Philosophicus in 1922. Russell delivered his lectures on logical atomism, his version of these ideas, in 1918, before the end of World War I. Wittgenstein was, at that time, serving in the Austrian Army and subsequently spent nine months in an Italian prisoner of war camp at the end of the conflict. First World War During World War I, Russell was one of the few people to engage in active pacifist activities. In 1916, because of his lack of a Fellowship, he was dismissed from Trinity College following his conviction under the Defence of the Realm Act 1914. He later described this, in Free Thought and Official Propaganda, as an illegitimate means the state used to violate freedom of expression. Russell championed the case of Eric Chappelow, a poet jailed and abused as a conscientious objector. Russell played a significant part in the Leeds Convention in June 1917, a historic event which saw well over a thousand "anti-war socialists" gather, many of them delegates from the Independent Labour Party and the Socialist Party, united in their pacifist beliefs and advocating a peace settlement. The international press reported that Russell appeared with a number of Labour Members of Parliament (MPs), including Ramsay MacDonald and Philip Snowden, as well as the former Liberal MP and anti-conscription campaigner Professor Arnold Lupton. After the event, Russell told Lady Ottoline Morrell that, "to my surprise, when I got up to speak, I was given the greatest ovation that was possible to give anybody". His conviction in 1916 resulted in Russell being fined £100, which he refused to pay in the hope that he would be sent to prison, but his books were sold at auction to raise the money.
The books were bought by friends; he later treasured his copy of the King James Bible that was stamped "Confiscated by Cambridge Police". A later conviction for publicly lecturing against inviting the United States to enter the war on the United Kingdom's side resulted in six months' imprisonment in Brixton Prison (see Bertrand Russell's political views) in 1918. Of his imprisonment, he later recalled that while reading the chapter on Gordon in Strachey's Eminent Victorians he laughed out loud in his cell, prompting the warden to intervene and remind him that "prison was a place of punishment". Russell was reinstated to Trinity in 1919, resigned in 1920, was Tarner Lecturer in 1926, and was a Fellow again from 1944 to 1949. In 1924, Russell again gained press attention when attending a "banquet" in the House of Commons with well-known campaigners, including Arnold Lupton, who had been an MP and had also endured imprisonment for "passive resistance to military or naval service". G. H. Hardy on the Trinity controversy In 1941, G. H. Hardy wrote a 61-page pamphlet titled Bertrand Russell and Trinity – published later as a book by Cambridge University Press with a foreword by C. D. Broad – in which he gave an authoritative account of Russell's 1916 dismissal from Trinity College, explaining that a reconciliation between the college and Russell had later taken place and giving details about Russell's personal life. Hardy writes that Russell's dismissal had created a scandal, since the vast majority of the Fellows of the College opposed the decision. The ensuing pressure from the Fellows induced the Council to reinstate Russell. In January 1920, it was announced that Russell had accepted the reinstatement offer from Trinity and would begin lecturing from October. In July 1920, Russell applied for a one-year leave of absence; this was approved. He spent the year giving lectures in China and Japan. In January 1921, it was announced by Trinity that Russell had resigned and his resignation had been accepted. This resignation, Hardy explains, was completely voluntary and was not the result of another altercation. The reason for the resignation, according to Hardy, was that Russell was going through a tumultuous time in his personal life with a divorce and subsequent remarriage. Russell contemplated asking Trinity for another one-year leave of absence but decided against it, since this would have been an "unusual application" and the situation had the potential to snowball into another controversy. Although Russell did the right thing, in Hardy's opinion, the reputation of the College suffered with Russell's resignation, since the 'world of learning' knew about Russell's altercation with Trinity but not that the rift had healed. In 1925, Russell was asked by the Council of Trinity College to give the Tarner Lectures on the Philosophy of the Sciences; these would later be the basis for one of Russell's best-received books, according to Hardy: The Analysis of Matter, published in 1927. Between the wars In August 1920, Russell travelled to Soviet Russia as part of an official delegation sent by the British government to investigate the effects of the Russian Revolution. He wrote a four-part series of articles, titled "Soviet Russia—1920", for the US magazine The Nation. He met Vladimir Lenin and had an hour-long conversation with him.
In his autobiography, he mentions that he found Lenin disappointing, sensing an "impish cruelty" in him and comparing him to "an opinionated professor". He cruised down the Volga on a steamship. His experiences destroyed his previous tentative support for the revolution. He subsequently wrote a book, The Practice and Theory of Bolshevism, about his experiences on this trip, taken with a group of 24 others from the UK, all of whom came home thinking well of the Soviet regime, despite Russell's attempts to change their minds. For example, he told them that he had heard shots fired in the middle of the night and was sure that these were clandestine executions, but the others maintained that it was only cars backfiring. Russell's lover Dora Black, a British author, feminist and socialist campaigner, visited Soviet Russia independently at the same time; in contrast to his reaction, she was enthusiastic about the Bolshevik revolution. The following autumn, Russell, accompanied by Dora, visited Peking (as it was then known in the West) to lecture on philosophy for a year. He went with optimism and hope, seeing China as then being on a new path. Other scholars present in China at the time included John Dewey and Rabindranath Tagore, the Indian Nobel-laureate poet. Before leaving China, Russell became gravely ill with pneumonia, and incorrect reports of his death were published in the Japanese press. When the couple visited Japan on their return journey, Dora took on the role of spurning the local press by handing out notices reading "Mr. Bertrand Russell, having died according to the Japanese press, is unable to give interviews to Japanese journalists". The Japanese press apparently found this harsh and reacted resentfully. Dora was six months pregnant when the couple returned to England on 26 August 1921. Russell arranged a hasty divorce from Alys, marrying Dora six days after the divorce was finalised, on 27 September 1921. Russell's children with Dora were John Conrad Russell, 4th Earl Russell, born on 16 November 1921, and Katharine Jane Russell (now Lady Katharine Tait), born on 29 December 1923. Russell supported his family during this time by writing popular books explaining matters of physics, ethics, and education to the layman. From 1922 to 1927 the Russells divided their time between London and Cornwall, spending summers in Porthcurno. In the 1922 and 1923 general elections Russell stood as a Labour Party candidate in the Chelsea constituency, but only on the basis that he knew he was extremely unlikely to be elected in such a safe Conservative seat, and he was unsuccessful on both occasions. Owing to the birth of his two children, he became interested in education, especially early childhood education. He was not satisfied with the old traditional education and thought that progressive education also had some flaws; as a result, together with Dora, Russell founded the experimental Beacon Hill School in 1927. The school was run from a succession of different locations, including its original premises at the Russells' residence, Telegraph House, near Harting, West Sussex. During this time, he published On Education, Especially in Early Childhood. On 8 July 1930 Dora gave birth to her third child, Harriet Ruth. After he left the school in 1932, Dora continued it until 1943. On a tour through the US in 1927, Russell met Barry Fox (later Barry Stevens), who became a well-known Gestalt therapist and writer in later years. Russell and Fox developed an intensive relationship.
In Fox's words: "...for three years we were very close." Fox sent her daughter Judith to Beacon Hill School for some time. From 1927 to 1932 Russell wrote 34 letters to Fox. Upon the death of his elder brother Frank, in 1931, Russell became the 3rd Earl Russell. Russell's marriage to Dora grew increasingly tenuous, and it reached a breaking point over her having two children with an American journalist, Griffin Barry. They separated in 1932 and finally divorced. On 18 January 1936, Russell married his third wife, an Oxford undergraduate named Patricia ("Peter") Spence, who had been his children's governess since 1930. Russell and Peter had one son, Conrad Sebastian Robert Russell, 5th Earl Russell, who became a prominent historian and one of the leading figures in the Liberal Democrat party. Russell returned to the London School of Economics to lecture on the science of power in 1937. During the 1930s, Russell became a close friend and collaborator of V. K. Krishna Menon, then President of the India League, the foremost lobby in the United Kingdom for Indian self-rule. Russell was chair of the India League from 1932 to 1939. Second World War Russell's political views changed over time, mostly regarding war. He opposed rearmament against Nazi Germany. In 1937, he wrote in a personal letter: "If the Germans succeed in sending an invading army to England we should do best to treat them as visitors, give them quarters and invite the commander-in-chief to dine with the prime minister." In 1940, he abandoned this appeasement view that avoiding a full-scale world war was more important than defeating Hitler. He concluded that Adolf Hitler taking over all of Europe would be a permanent threat to democracy. In 1943, he adopted a stance toward large-scale warfare called "relative political pacifism": "War was always a great evil, but in some particularly extreme circumstances, it may be the lesser of two evils." Before World War II, Russell taught at the University of Chicago, later moving on to Los Angeles to lecture at the UCLA Department of Philosophy. He was appointed professor at the City College of New York (CCNY) in 1940, but after a public outcry the appointment was annulled by a court judgment that pronounced him "morally unfit" to teach at the college because of his opinions, especially those relating to sexual morality, detailed in Marriage and Morals (1929). The matter was, however, taken to the New York Supreme Court by Jean Kay, who was afraid that her daughter would be harmed by the appointment, though her daughter was not a student at CCNY. Many intellectuals, led by John Dewey, protested at his treatment. Albert Einstein's oft-quoted aphorism that "great spirits have always encountered violent opposition from mediocre minds" originated in his open letter, dated 19 March 1940, to Morris Raphael Cohen, a professor emeritus at CCNY, supporting Russell's appointment. Dewey and Horace M. Kallen edited a collection of articles on the CCNY affair in The Bertrand Russell Case. Russell soon joined the Barnes Foundation, lecturing to a varied audience on the history of philosophy; these lectures formed the basis of A History of Western Philosophy. His relationship with the eccentric Albert C. Barnes soon soured, and he returned to the UK in 1944 to rejoin the faculty of Trinity College. Later life Russell participated in many broadcasts over the BBC, particularly The Brains Trust and the Third Programme, on various topical and philosophical subjects.
By this time Russell was world-famous outside academic circles, frequently the subject or author of magazine and newspaper articles, and was called upon to offer opinions on a wide variety of subjects, even mundane ones. En route to one of his lectures in Trondheim, Russell was one of 24 survivors (among a total of 43 passengers) of an aeroplane crash in Hommelvik in October 1948. He said he owed his life to smoking, since the people who drowned were in the non-smoking part of the plane. A History of Western Philosophy (1945) became a best-seller and provided Russell with a steady income for the remainder of his life. In 1942, Russell argued in favour of a moderate socialism, capable of overcoming its metaphysical principles, in an inquiry into dialectical materialism launched by the Austrian artist and philosopher Wolfgang Paalen in his journal DYN, saying "I think the metaphysics of both Hegel and Marx plain nonsense—Marx's claim to be 'science' is no more justified than Mary Baker Eddy's. This does not mean that I am opposed to socialism." In 1943, Russell expressed support for Zionism: "I have come gradually to see that, in a dangerous and largely hostile world, it is essential to Jews to have some country which is theirs, some region where they are not suspected aliens, some state which embodies what is distinctive in their culture". In a speech in 1948, Russell said that if the USSR's aggression continued, it would be morally worse to go to war after the USSR possessed an atomic bomb than before it possessed one, because if the USSR had no bomb the West's victory would come more swiftly and with fewer casualties than if there were atomic bombs on both sides. At that time, only the United States possessed an atomic bomb, and the USSR was pursuing an extremely aggressive policy towards the countries in Eastern Europe which were being absorbed into the Soviet Union's sphere of influence. Many understood Russell's comments to mean that he approved of a first strike in a war with the USSR, including Nigel Lawson, who was present when Russell spoke of such matters. Others, including Griffin, who obtained a transcript of the speech, have argued that he was merely explaining the usefulness of America's atomic arsenal in deterring the USSR from continuing its domination of Eastern Europe. However, just after the atomic bombs exploded over Hiroshima and Nagasaki, Russell wrote letters and published articles in newspapers from 1945 to 1948, stating clearly that it was morally justified and better to go to war against the USSR using atomic bombs while the United States possessed them and before the USSR did. In September 1949, one week after the USSR tested its first A-bomb, but before this became known, Russell wrote that the USSR would be unable to develop nuclear weapons because, following Stalin's purges, only science based on Marxist principles would be practised in the Soviet Union. After it became known that the USSR had carried out its nuclear bomb tests, Russell declared that he now advocated the total abolition of atomic weapons. In 1948, Russell was invited by the BBC to deliver the inaugural Reith Lectures—what was to become an annual series of lectures, still broadcast by the BBC. His series of six broadcasts, titled Authority and the Individual, explored themes such as the role of individual initiative in the development of a community and the role of state control in a progressive society. Russell continued to write about philosophy.
He wrote a foreword to Words and Things by Ernest Gellner, which was highly critical of the later thought of Ludwig Wittgenstein and of ordinary language philosophy. Gilbert Ryle refused to have the book reviewed in the philosophical journal Mind, which caused Russell to respond via The Times. The result was a month-long correspondence in The Times between the supporters and detractors of ordinary language philosophy, which was only ended when the paper published an editorial critical of both sides but agreeing with the opponents of ordinary language philosophy. In the King's Birthday Honours of 9 June 1949, Russell was awarded the Order of Merit, and the following year he was awarded the Nobel Prize in Literature. When he was given the Order of Merit, George VI was affable but slightly embarrassed at decorating a former jailbird, saying, "You have sometimes behaved in a manner that would not do if generally adopted". Russell merely smiled, but afterwards claimed that the reply "That's right, just like your brother" immediately came to mind. In 1950, Russell attended the inaugural conference for the Congress for Cultural Freedom, a CIA-funded anti-communist organisation committed to the deployment of culture as a weapon during the Cold War. Russell was one of the best-known patrons of the Congress, until he resigned in 1956. In 1952, Russell was divorced by Spence, with whom he had been very unhappy. Conrad, Russell's son by Spence, did not see his father between the time of the divorce and 1968 (at which time his decision to meet his father caused a permanent breach with his mother). Russell married his fourth wife, Edith Finch, soon after the divorce, on 15 December 1952. They had known each other since 1925, and Edith had taught English at Bryn Mawr College near Philadelphia, sharing a house for 20 years with Russell's old friend Lucy Donnelly. Edith remained with him until his death, and, by all accounts, their marriage was a happy, close, and loving one. Russell's eldest son John suffered from serious mental illness, which was the source of ongoing disputes between Russell and his former wife Dora. In September 1961, at the age of 89, Russell was jailed for seven days in Brixton Prison for "breach of peace" after taking part in an anti-nuclear demonstration in London. The magistrate offered to exempt him from jail if he pledged himself to "good behaviour", to which Russell replied: "No, I won't." In 1962 Russell played a public role in the Cuban Missile Crisis: in an exchange of telegrams, Soviet leader Nikita Khrushchev assured him that the Soviet government would not be reckless. Russell sent this telegram to President Kennedy: YOUR ACTION DESPERATE. THREAT TO HUMAN SURVIVAL. NO CONCEIVABLE JUSTIFICATION. CIVILIZED MAN CONDEMNS IT. WE WILL NOT HAVE MASS MURDER. ULTIMATUM MEANS WAR... END THIS MADNESS. According to historian Peter Knight, after JFK's assassination, Russell, "prompted by the emerging work of the lawyer Mark Lane in the US ... rallied support from other noteworthy and left-leaning compatriots to form a Who Killed Kennedy Committee in June 1964, members of which included Michael Foot MP, Caroline
He was also the recipient of the De Morgan Medal (1932), the Sylvester Medal (1934), the Kalinga Prize (1957), and the Jerusalem Prize (1963). Biography Early life and background Bertrand Arthur William Russell was born on 18 May 1872 at Ravenscroft, Trellech, Monmouthshire, United Kingdom, into an influential and liberal family of the British aristocracy. His parents, Viscount and Viscountess Amberley, were radical for their times. Lord Amberley consented to his wife's affair with their children's tutor, the biologist Douglas Spalding. Both were early advocates of birth control at a time when this was considered scandalous. Lord Amberley was an atheist, and his atheism was evident when he asked the philosopher John Stuart Mill to act as Russell's secular godfather. Mill died the year after Russell's birth, but his writings had a great effect on Russell's life. His paternal grandfather, Earl Russell, had twice been Prime Minister in the 1840s and 1860s. The Russells had been prominent in England for several centuries before this, coming to power and the peerage with the rise of the Tudor dynasty (see: Duke of Bedford). They established themselves as one of the leading British Whig families and participated in every great political event from the Dissolution of the Monasteries in 1536–1540 to the Glorious Revolution in 1688–1689 and the Great Reform Act in 1832. Lady Amberley was the daughter of Lord and Lady Stanley of Alderley. Russell often feared the ridicule of his maternal grandmother, one of the campaigners for the education of women. Childhood and adolescence Russell had two siblings: brother Frank (nearly seven years older than Bertrand), and sister Rachel (four years older). In June 1874, Russell's mother died of diphtheria, followed shortly by Rachel's death. In January 1876, his father died of bronchitis after a long period of depression. Frank and Bertrand were placed in the care of their staunchly Victorian paternal grandparents, who lived at Pembroke Lodge in Richmond Park. His grandfather, former Prime Minister Earl Russell, died in 1878, and was remembered by Russell as a kindly old man in a wheelchair. His grandmother, the Countess Russell (née Lady Frances Elliot), was the dominant family figure for the rest of Russell's childhood and youth. The Countess was from a Scottish Presbyterian family and successfully petitioned the Court of Chancery to set aside a provision in Amberley's will requiring the children to be raised as agnostics. Despite her religious conservatism, she held progressive views in other areas (accepting Darwinism and supporting Irish Home Rule), and her influence on Bertrand Russell's outlook on social justice and standing up for principle remained with him throughout his life. Her favourite Bible verse, "Thou shalt not follow a multitude to do evil", became his motto. The atmosphere at Pembroke Lodge was one of frequent prayer, emotional repression and formality; Frank reacted to this with open rebellion, but the young Bertrand learned to hide his feelings. Russell's adolescence was lonely and he often contemplated suicide. He remarked in his autobiography that "nature and books and (later) mathematics saved me from complete despondency"; only his wish to know more mathematics kept him from suicide. He was educated at home by a series of tutors. When Russell was eleven years old, his brother Frank introduced him to the work of Euclid, which he described in his autobiography as "one of the great events of my life, as dazzling as first love."
During these formative years he also discovered the works of Percy Bysshe Shelley. Russell wrote: "I spent all my spare time reading him, and learning him by heart, knowing no one to whom I could speak of what I thought or felt, I used to reflect how wonderful it would have been to know Shelley, and to wonder whether I should meet any live human being with whom I should feel so much sympathy." Russell claimed that beginning at age 15, he spent considerable time thinking about the validity of Christian religious dogma, which he found unconvincing. At this age, he came to the conclusion that there is no free will and, two years later, that there is no life after death. Finally, at the age of 18, after reading Mill's Autobiography, he abandoned the "First Cause" argument and became an atheist. He travelled to the continent in 1890 with an American friend, Edward FitzGerald, and with FitzGerald's family he visited the Paris Exhibition of 1889 and climbed the Eiffel Tower soon after it was completed. University and first marriage Russell won a scholarship to read for the Mathematical Tripos at Trinity College, Cambridge, and began his studies there in 1890, taking as coach Robert Rumsey Webb. He became acquainted with the younger George Edward Moore and came under the influence of Alfred North Whitehead, who recommended him to the Cambridge Apostles. He quickly distinguished himself in mathematics and philosophy, graduating as seventh Wrangler in the former in 1893 and becoming a Fellow in the latter in 1895. Russell was 17 years old in the summer of 1889 when he met the family of Alys Pearsall Smith, an American Quaker five years his senior and a graduate of Bryn Mawr College near Philadelphia. He became a friend of the Pearsall Smith family – they knew him primarily as "Lord John's grandson" and enjoyed showing him off. He soon fell in love with the puritanical, high-minded Alys, and contrary to his grandmother's wishes, married her on 13 December 1894. Their marriage began to fall apart in 1901 when it occurred to Russell, while cycling, that he no longer loved her. She asked him if he loved her and he replied that he did not. Russell also disliked Alys's mother, finding her controlling and cruel. It was to be a hollow shell of a marriage. A lengthy period of separation began in 1911 with Russell's affair with Lady Ottoline Morrell, and he and Alys finally divorced in 1921 to enable Russell to remarry. During his years of separation from Alys, Russell had passionate (and often simultaneous) affairs with a number of women, including Morrell and the actress Lady Constance Malleson. Some have suggested that at this point he had an affair with Vivienne Haigh-Wood, the English governess and writer, and first wife of T. S. Eliot. Early career Russell began his published work in 1896 with German Social Democracy, a study in politics that was an early indication of a lifelong interest in political and social theory. In 1896, he lectured on German social democracy at the London School of Economics. He was a member of the Coefficients dining club of social reformers set up in 1902 by the Fabian campaigners Sidney and Beatrice Webb. He now started an intensive study of the foundations of mathematics at Trinity. In 1897, he wrote An Essay on the Foundations of Geometry (submitted at the Fellowship Examination of Trinity College), which discussed the Cayley–Klein metrics used for non-Euclidean geometry.
He attended the First International Congress of Philosophy in Paris in 1900, where he met Giuseppe Peano and Alessandro Padoa. The Italians had responded to Georg Cantor, making a science of set theory; they gave Russell their literature, including the Formulario mathematico. Russell was impressed by the precision of Peano's arguments at the Congress, read the literature upon returning to England, and came upon what is now known as Russell's paradox. In 1903 he published The Principles of Mathematics, a work on foundations of mathematics. It advanced a thesis of logicism, that mathematics and logic are one and the same. At the age of 28, in February 1901, Russell underwent what he called a "sort of mystic illumination", after witnessing Whitehead's wife's acute suffering in an angina attack. "I found myself filled with semi-mystical feelings about beauty... and with a desire almost as profound as that of the Buddha to find some philosophy which should make human life endurable", Russell would later recall. "At the end of those five minutes, I had become a completely different person." In 1905, he wrote the essay "On Denoting", which was published in the philosophical journal Mind. Russell was elected a Fellow of the Royal Society (FRS) in 1908. The three-volume Principia Mathematica, written with Whitehead, was published between 1910 and 1913. This, along with the earlier The Principles of Mathematics, soon made Russell world-famous in his field. In 1910, he became a University of Cambridge lecturer at Trinity College, where he had studied. He was considered for a Fellowship, which would give him a vote in the college government and protect him from being fired for his opinions, but was passed over because he was "anti-clerical", essentially because he was agnostic. He was approached by the Austrian engineering student Ludwig Wittgenstein, who became his PhD student. Russell viewed Wittgenstein as a genius and a successor who would continue his work on logic. He spent hours dealing with Wittgenstein's various phobias and his frequent bouts of despair. This was often a drain on Russell's energy, but Russell continued to be fascinated by him and encouraged his academic development, including the publication of Wittgenstein's Tractatus Logico-Philosophicus in 1922. Russell delivered his lectures on logical atomism, his version of these ideas, in 1918, before the end of World War I. Wittgenstein was, at that time, serving in the Austrian Army and subsequently spent nine months in an Italian prisoner of war camp at the end of the conflict. First World War During World War I, Russell was one of the few people to engage in active pacifist activities. In 1916, because of his lack of a Fellowship, he was dismissed from Trinity College following his conviction under the Defence of the Realm Act 1914. He later described this as an illegitimate means the state used to violate freedom of expression, in Free Thought and Official Propaganda. Russell championed the case of Eric Chappelow, a poet jailed and abused as a conscientious objector. Russell played a significant part in the Leeds Convention in June 1917, a historic event which saw well over a thousand "anti-war socialists" gather, many of them delegates from the Independent Labour Party and the Socialist Party, united in their pacifist beliefs and advocating a peace settlement.
The international press reported that Russell appeared with a number of Labour Members of Parliament (MPs), including Ramsay MacDonald and Philip Snowden, as well as former Liberal MP and anti-conscription campaigner, Professor Arnold Lupton. After the event, Russell told Lady Ottoline Morrell that, "to my surprise, when I got up to speak, I was given the greatest ovation that was possible to give anybody". His conviction in 1916 resulted in Russell being fined £100, which he refused to pay in the hope that he would be sent to prison, but his books were sold at auction to raise the money. The books were bought by friends; he later treasured his copy of the King James Bible that was stamped "Confiscated by Cambridge Police". A later conviction for publicly lecturing against inviting the United States to enter the war on the United Kingdom's side resulted in six months' imprisonment in Brixton Prison (see Bertrand Russell's political views) in 1918. While reading Strachey's Eminent Victorians chapter about Gordon during his imprisonment, he laughed out loud in his cell, prompting the warden to intervene and remind him that "prison was a place of punishment". Russell was reinstated to Trinity in 1919, resigned in 1920, was Tarner Lecturer in 1926, and was a Fellow again from 1944 until 1949. In 1924, Russell again gained press attention when attending a "banquet" in the House of Commons with well-known campaigners, including Arnold Lupton, who had been an MP and had also endured imprisonment for "passive resistance to military or naval service". G. H. Hardy on the Trinity controversy In 1941, G. H. Hardy wrote a 61-page pamphlet titled Bertrand Russell and Trinity, published later as a book by Cambridge University Press with a foreword by C. D. Broad, in which he gave an authoritative account of Russell's 1916 dismissal from Trinity College, explained that a reconciliation between the college and Russell had later taken place, and gave details of Russell's personal life. Hardy writes that Russell's dismissal had created a scandal since the vast majority of the Fellows of the College opposed the decision. The ensuing pressure from the Fellows induced the Council to reinstate Russell. In January 1920, it was announced that Russell had accepted the reinstatement offer from Trinity and would begin lecturing from October. In July 1920, Russell applied for a one-year leave of absence; this was approved. He spent the year giving lectures in China and Japan. In January 1921, Trinity announced that Russell had resigned and that his resignation had been accepted. This resignation, Hardy explains, was completely voluntary and was not the result of another altercation. The reason for the resignation, according to Hardy, was that Russell was going through a tumultuous time in his personal life with a divorce and subsequent remarriage. Russell contemplated asking Trinity for another one-year leave of absence but decided against it, since this would have been an "unusual application" and the situation had the potential to snowball into another controversy. Although Russell did the right thing, in Hardy's opinion, the reputation of the College suffered with Russell's resignation, since the 'world of learning' knew about Russell's altercation with Trinity but not that the rift had healed.
In 1925, Russell was asked by the Council of Trinity College to give the Tarner Lectures on the Philosophy of the Sciences; these would later be the basis for one of Russell's best-received books according to Hardy: The Analysis of Matter, published in 1927. Between the wars In August 1920, Russell travelled to Soviet Russia as part of an official delegation sent by the British government to investigate the effects of the Russian Revolution. He wrote a four-part series of articles, titled "Soviet Russia—1920", for the US magazine The Nation. He met Vladimir Lenin and had an hour-long conversation with him.
in 2008 to just three in 2010. During the same period, operators upgraded aircraft already in service; in 2008, the first 767-300ER retrofitted with blended winglets from Aviation Partners Incorporated debuted with American Airlines. The manufacturer-sanctioned winglets improved fuel efficiency by an estimated 6.5 percent. Other carriers, including All Nippon Airways and Delta Air Lines, also ordered winglet kits. On February 2, 2011, the 1,000th 767 rolled out, destined for All Nippon Airways. The aircraft was the 91st 767-300ER ordered by the Japanese carrier, and with its completion the 767 became the second wide-body airliner to reach the thousand-unit milestone after the 747. The 1,000th aircraft also marked the last model produced on the original 767 assembly line. Beginning with the 1,001st aircraft, production moved to another area in the Everett factory, which occupied about half of the previous floor space. The new assembly line made room for 787 production and aimed to boost manufacturing efficiency by over twenty percent. At the inauguration of its new assembly line, the 767's order backlog numbered approximately 50, only enough for production to last until 2013. Despite the reduced backlog, Boeing officials expressed optimism that additional orders would be forthcoming. On February 24, 2011, the USAF announced its selection of the KC-767 Advanced Tanker, an upgraded variant of the KC-767, for its KC-X fleet renewal program. The selection followed two rounds of tanker competition between Boeing and Airbus parent EADS, and came eight years after the USAF's original 2003 announcement of its plan to lease KC-767s. The tanker order encompassed 179 aircraft and was expected to sustain 767 production past 2013. In December 2011, FedEx Express announced a 767-300F order for 27 aircraft to replace its DC-10 freighters, citing the USAF tanker order and Boeing's decision to continue production as contributing factors. FedEx Express agreed to buy 19 more of the -300F variant in June 2012. In June 2015, FedEx said it was accelerating retirements of planes both to reflect demand and to modernize its fleet, recording charges of $276 million. On July 21, 2015, FedEx announced an order for 50 767-300Fs with options on another 50, the largest order for the type. With the announcement, FedEx confirmed that it had firm orders for 106 of the freighters for delivery between 2018 and 2023. In February 2018, UPS announced an order for four more 767-300Fs, increasing the total on order to 63. With its successor, the Boeing New Midsize Airplane, not planned for introduction until 2025 or later, and with the 787 being much larger, Boeing could restart passenger 767-300ER production to bridge the gap; a demand for 50 to 60 aircraft would have to be satisfied. Having to replace its 40 767s, United Airlines requested a price quote for other widebodies. In November 2017, Boeing CEO Dennis Muilenburg cited interest beyond military and freighter uses. However, in early 2018, Boeing Commercial Airplanes VP of marketing Randy Tinseth stated that the company did not intend to resume production of the passenger variant. In its first-quarter 2018 earnings report, Boeing announced plans to increase production from 2.5 to 3 aircraft per month beginning in January 2020, owing to increased demand in the cargo market: FedEx had 56 on order, UPS had four, and an unidentified customer had three.
This rate could rise to 3.5 per month in July 2020 and 4 per month in January 2021, before decreasing to 3 per month in January 2025 and then 2 per month in July 2025. In 2019, the unit cost was US$217.9 million for a -300ER and US$220.3 million for a -300F. Re-engined 767-XF In October 2019, Boeing was reportedly studying a re-engined 767-XF for entry into service around 2025, based on the 767-400ER with an extended landing gear to accommodate larger General Electric GEnx turbofan engines. The cargo market is the main target, but a passenger version could be a cheaper alternative to the proposed New Midsize Airplane. Design Overview The 767 is a low-wing cantilever monoplane with a conventional tail unit featuring a single fin and rudder. The wings are swept at 31.5 degrees and optimized for a cruising speed of Mach 0.8. Each wing features a supercritical airfoil cross-section and is equipped with six-panel leading edge slats, single- and double-slotted flaps, inboard and outboard ailerons, and six spoilers. The airframe further incorporates carbon-fiber-reinforced polymer (CFRP) composite wing surfaces, Kevlar fairings and access panels, plus improved aluminum alloys, which together reduce overall weight versus preceding aircraft. To distribute the aircraft's weight on the ground, the 767 has a retractable tricycle landing gear with four wheels on each main gear and two for the nose gear. The original wing and gear design accommodated the stretched 767-300 without major changes. The 767-400ER features a larger, more widely spaced main gear with 777 wheels, tires, and brakes. To prevent damage if the tail section contacts the runway surface during takeoff, 767-300 and 767-400ER models are fitted with a retractable tailskid. The 767 has left-side exit doors near the front and rear of the aircraft. In addition to shared avionics and computer technology, the 767 uses the same auxiliary power unit, electric power systems, and hydraulic parts as the 757. A raised cockpit floor and the same forward cockpit windows result in similar pilot viewing angles. Related design and functionality allows 767 pilots to obtain a common type rating to operate the 757 and share the same seniority roster with pilots of either aircraft. Flight systems The original 767 flight deck uses six Rockwell Collins CRT screens to display electronic flight instrument system (EFIS) and engine indication and crew alerting system (EICAS) information, allowing pilots to handle monitoring tasks previously performed by the flight engineer. The CRTs replace conventional electromechanical instruments found on earlier aircraft. An enhanced flight management system, improved over versions used on early 747s, automates navigation and other functions, while an automatic landing system facilitates CAT IIIb instrument landings in low visibility situations. In 1984, the 767 became the first aircraft to receive CAT IIIb certification from the FAA for landings in minimum visibility. On the 767-400ER, the cockpit layout is simplified further with six Rockwell Collins liquid crystal display (LCD) screens, and adapted for similarities with the 777 and the Next Generation 737. To retain operational commonality, the LCD screens can be programmed to display information in the same manner as earlier 767s. In 2012, Boeing and Rockwell Collins launched a further 787-based cockpit upgrade for the 767, featuring three landscape-format LCD screens that can display two windows each.
The 767 is equipped with three redundant hydraulic systems for operation of control surfaces, landing gear, and utility actuation systems. Each engine powers a separate hydraulic system, and the third system uses electric pumps. A ram air turbine provides power for basic controls in the event of an emergency. An early form of fly-by-wire is employed for spoiler operation, utilizing electric signaling instead of traditional control cables. The fly-by-wire system reduces weight and allows independent operation of individual spoilers. Interior The 767 features a twin-aisle cabin with a typical configuration of six abreast in business class and seven across in economy. The standard seven abreast, 2–3–2 economy class layout places approximately 87 percent of all seats at a window or aisle. As a result, the aircraft can be largely occupied before center seats need to be filled, and each passenger is no more than one seat from the aisle. It is possible to configure the aircraft with extra seats for up to an eight abreast configuration, but this is less common. The 767 interior introduced larger overhead bins and more lavatories per passenger than previous aircraft. The bins are wider to accommodate garment bags without folding, and strengthened for heavier carry-on items. A single, large galley is installed near the aft doors, allowing for more efficient meal service and simpler ground resupply. Passenger and service doors are of an overhead plug type that retracts upwards, and commonly used doors can be equipped with an electric-assist system. In 2000, a 777-style interior, known as the Boeing Signature Interior, debuted on the 767-400ER. Subsequently adopted for all new-build 767s, the Signature Interior features even larger overhead bins, indirect lighting, and sculpted, curved panels. The 767-400ER also received larger windows derived from the 777. Older 767s can be retrofitted with the Signature Interior. Some operators have adopted a simpler modification known as the Enhanced Interior, featuring curved ceiling panels and indirect lighting with minimal modification of cabin architecture, as well as aftermarket modifications such as the NuLook 767 package by Heath Tecna. Operational history In its first year, the 767 logged a 96.1 percent dispatch rate, which exceeded the industry average for all-new aircraft. Operators reported generally favorable ratings for the twinjet's sound levels, interior comfort, and economic performance. Resolved issues were minor and included the recalibration of a leading edge sensor to prevent false readings, the replacement of an evacuation slide latch, and the repair of a tailplane pivot to match production specifications. Seeking to capitalize on its new wide-body's potential for growth, Boeing offered an extended-range model, the 767-200ER, in its first year of service. Ethiopian Airlines placed the first order for the type in December 1982. Featuring increased gross weight and greater fuel capacity, the extended-range model could carry heavier payloads over greater distances, and was targeted at overseas customers. The 767-200ER entered service with El Al on March 27, 1984. The type was mainly ordered by international airlines operating medium-traffic, long-distance flights. In May 1984, an Ethiopian Airlines 767-200ER set a non-stop distance record for a commercial twinjet, flying from Washington, DC, to Addis Ababa.
In the mid-1980s, the 767 spearheaded the growth of twinjet flights across the northern Atlantic under extended-range twin-engine operational performance standards (ETOPS) regulations, the FAA's safety rules governing transoceanic flights by aircraft with two engines. Before the 767, overwater flight paths of twinjets could be no more than 90 minutes away from diversion airports. In May 1985, the FAA granted its first approval for 120-minute ETOPS flights to 767 operators, on an individual airline basis starting with TWA, provided that the operator met flight safety criteria. This allowed the aircraft to fly overseas routes at up to two hours' distance from land. The larger safety margins were permitted because of the improved reliability demonstrated by the twinjet and its turbofan engines. The FAA lengthened the ETOPS time to 180 minutes for CF6-powered 767s in 1989, making the type the first to be certified under the longer duration, and all available engines received approval by 1993. Regulatory approval spurred the expansion of transoceanic 767 flights and boosted the aircraft's sales. Variants The 767 has been produced in three fuselage lengths. These debuted in progressively larger form as the 767-200, 767-300, and 767-400ER. Longer-range variants include the 767-200ER and 767-300ER, while cargo models include the 767-300F, a production freighter, and conversions of passenger 767-200 and 767-300 models. When referring to different variants, Boeing and airlines often collapse the model number (767) and the variant designator, e.g. –200 or –300, into a truncated form, e.g. "762" or "763". Subsequent to the capacity number, designations may append the range identifier, though -200ER and -300ER are company marketing designations and not certificated as such. The International Civil Aviation Organization (ICAO) aircraft type designator system uses a similar numbering scheme, but adds a preceding manufacturer letter; all variants based on the 767-200 and 767-300 are classified under the codes "B762" and "B763"; the 767-400ER receives the designation of "B764". 767-200 The 767-200 was the original model and entered service with United Airlines in 1982. The type has been used primarily by mainline U.S. carriers for domestic routes between major hub centers such as Los Angeles to Washington. The 767-200 was the first aircraft to be used on transatlantic ETOPS flights, beginning with TWA on February 1, 1985, under 90-minute diversion rules. Deliveries for the variant totaled 128 aircraft. There were 52 examples of the model in commercial service, almost entirely as freighter conversions. The type's competitors included the Airbus A300 and A310. The 767-200 was produced until 1987 when production switched to the extended-range 767-200ER. Some early 767-200s were subsequently upgraded to extended-range specification. In 1998, Boeing began offering 767-200 conversions to 767-200SF (Special Freighter) specification for cargo use, and Israel Aerospace Industries has been licensed to perform cargo conversions since 2005. The conversion process entails the installation of a side cargo door, strengthened main deck floor, and added freight monitoring and safety equipment. The 767-200SF was positioned as a replacement for Douglas DC-8 freighters. 767-2C A commercial freighter version of the Boeing 767 with wings from the -300 series and an updated flight deck was first flown on 29 December 2014. A military tanker variant of the Boeing 767-2C is being developed for the USAF as the KC-46.
Boeing is building two aircraft as commercial freighters, which will be used to obtain Federal Aviation Administration certification; a further two Boeing 767-2Cs will be modified as military tankers. Boeing does not have customers for the freighter. 767-200ER The 767-200ER was the first extended-range model and entered service with El Al in 1984. The type's increased range is due to extra fuel capacity and a higher maximum takeoff weight (MTOW). The additional fuel capacity is accomplished by using the center tank's dry bay to carry fuel. The non-ER variant's center tank consists of what are called cheek tanks: two interconnected halves in each wing root with a dry bay in between. The center tank is also used on the -300ER and -400ER variants. This version was originally offered with the same engines as the 767-200, while more powerful Pratt & Whitney PW4000 and General Electric CF6 engines later became available. The 767-200ER was the first 767 to complete a non-stop transatlantic journey, and broke the flying distance record for a twinjet airliner on April 17, 1988, with an Air Mauritius flight from Halifax, Nova Scotia, to Port Louis, Mauritius. The 767-200ER has been acquired by international operators seeking smaller wide-body aircraft for long-haul routes such as New York to Beijing. Deliveries of the type totaled 121 with no unfilled orders. As of July 2018, 21 examples of passenger and freighter conversion versions were in airline service. The type's main competitors of the time included the Airbus A300-600R and the A310-300. 767-300 The 767-300, the first stretched version of the aircraft, entered service with Japan Airlines in 1986. The type features a fuselage extension over the 767-200, achieved by additional sections inserted before and after the wings. Reflecting the growth potential built into the original 767 design, the wings, engines, and most systems were largely unchanged on the 767-300. An optional mid-cabin exit door is positioned ahead of the wings on the left, while more powerful Pratt & Whitney PW4000 and Rolls-Royce RB211 engines later became available. The 767-300's increased capacity has been used on high-density routes within Asia and Europe. The 767-300 was produced from 1986 until 2000. Deliveries for the type totaled 104 aircraft with no unfilled orders remaining.
As of July 2018, 34 of the variant were in airline service. The type's main competitor was the Airbus A300. 767-300ER The 767-300ER, the extended-range version of the 767-300, entered service with American Airlines in 1988. The type's increased range was made possible by greater fuel tankage and a higher MTOW. Design improvements allowed the available MTOW to increase further by 1993. Power is provided by Pratt & Whitney PW4000, General Electric CF6, or Rolls-Royce RB211 engines. The 767-300ER comes in three exit configurations: the baseline configuration has four main cabin doors and four over-wing window exits; the second configuration has six main cabin doors and two over-wing window exits; and the third configuration has six main cabin doors, as well as two smaller doors located behind the wings. Typical routes for the type include Los Angeles to Frankfurt. The combination of increased capacity and range offered by the 767-300ER has been particularly attractive to both new and existing 767 operators. It is the most successful version of the aircraft, with more orders placed than all other variants combined. 767-300ER deliveries stand at 583 with no unfilled orders, and there were 376 examples in service. The type's main competitor is the Airbus A330-200. At its 1990s peak, a new 767-300ER was valued at $85 million, dipping to around $12 million in 2018 for a 1996 build. 767-300F The 767-300F, the production freighter version of the 767-300ER, entered service with UPS Airlines in 1995.
The 767-300F can hold up to 24 standard pallets on its main deck and up to 30 LD2 unit load devices on the lower deck. The freighter has a main deck cargo door and crew exit, while the lower deck features two starboard-side cargo doors and one port-side cargo door. A general market version with onboard freight-handling systems, refrigeration capability, and crew facilities was delivered to Asiana Airlines on August 23, 1996. 767-300F deliveries stand at 161 with 61 unfilled orders. Airlines operated 222 examples of the freighter variant and freighter conversions in July 2018. In June 2008, All Nippon Airways took delivery of the first 767-300BCF (Boeing Converted Freighter), a modified passenger-to-freighter model. The conversion work was performed in Singapore by ST Aerospace Services, the first supplier to offer a 767-300BCF program, and involved the addition of a main deck cargo door, strengthened main deck floor, and additional freight monitoring and safety equipment. Since then, Boeing, Israel Aerospace Industries, and Wagner Aeronautical have also offered passenger-to-freighter conversion programs for 767-300 series aircraft. 767-400ER The 767-400ER, the first Boeing wide-body jet resulting from two fuselage stretches, entered service with Continental Airlines in 2000. The type features a stretch over the 767-300. The wingspan is also increased through the addition of raked wingtips. The exit configuration uses six main cabin doors and two smaller exit doors behind the wings, similar to certain 767-300ERs. Other differences include an updated cockpit, redesigned landing gear, and 777-style Signature Interior. Power is provided by uprated General Electric CF6 engines. The FAA granted approval for the 767-400ER to operate 180-minute ETOPS flights before it entered service. Because its fuel capacity was not increased over preceding models, the 767-400ER has a shorter range than previous extended-range 767s. No 767-400 version was developed. The longer-range 767-400ERX was offered in July 2000 before being cancelled a year later, leaving the 767-400ER as the sole version of the largest 767. Boeing dropped the 767-400ER and the -200ER from its pricing list in 2014. A total of 37 767-400ERs were delivered to the variant's two airline customers, Continental Airlines (now merged with United Airlines) and Delta Air Lines, with no unfilled orders. All 37 examples of the -400ER were in service in July 2018. One additional example was produced as a military testbed, and later sold as a VIP transport. The type's closest competitor is the Airbus A330-200. Military and government Versions of the 767 serve in a number of military and government applications, with responsibilities ranging from airborne surveillance and refueling to cargo and VIP transport. Several military 767s have been derived from the 767-200ER, the longest-range version of the aircraft. Airborne Surveillance Testbed – the Airborne Optical Adjunct (AOA) was modified from the prototype 767-200 for a United States Army program, under a contract signed with the Strategic Air Command in July 1984. Intended to evaluate the feasibility of using airborne optical sensors to detect and track hostile intercontinental ballistic missiles, the modified aircraft first flew on August 21, 1987. Alterations included a large "cupola" or hump on the top of the aircraft from above the cockpit to just behind the trailing edge of the wings, and a pair of ventral fins below the rear fuselage.
Inside the cupola was a suite of infrared seekers used for tracking theater ballistic missile launches. The aircraft was later renamed the Airborne Surveillance Testbed (AST). Following the end of the AST program in 2002, the aircraft was retired for scrapping. E-767 – the Airborne Early Warning and Control (AWACS) platform for the Japan Self-Defense Forces; it is essentially the Boeing E-3 Sentry mission package on a 767-200ER platform. E-767 modifications, completed on 767-200ERs flown from the Everett factory to Boeing Integrated Defense Systems in Wichita, Kansas, include strengthening to accommodate a dorsal surveillance radar system, engine nacelle alterations, as well as electrical and interior changes. Japan operates four E-767s. The first E-767s were delivered in March 1998. KC-767 Tanker Transport – the 767-200ER-based aerial refueling platform operated by the Italian Air Force (Aeronautica Militare) and the Japan Self-Defense Forces. Modifications conducted by Boeing Integrated Defense Systems include the addition of a fly-by-wire refueling boom, strengthened flaps, and optional auxiliary fuel tanks, as well as structural reinforcement and modified avionics. The four KC-767Js ordered by Japan have been delivered. The Aeronautica Militare received the first of its four KC-767As in January 2011. KC-767 Advanced Tanker – the 767-200ER-based aerial tanker developed for the USAF KC-X tanker competition. It is an updated version of the KC-767, originally selected as the USAF's new tanker aircraft in 2003, designated KC-767A, and then dropped amid conflict of interest allegations. The KC-767 Advanced Tanker is derived from studies for a longer-range cargo version of the 767-200ER, and features a fly-by-wire refueling boom, a remote vision refueling system, and a 767-400ER-based flight deck with LCD screens and head-up displays. KC-46 – a 767-based tanker, not derived from the KC-767, awarded as part of the KC-X contract for the USAF. Tanker conversions – the 767 MMTT or Multi-Mission Tanker Transport is a 767-200ER-based aircraft operated by the Colombian Air Force (Fuerza Aérea Colombiana) and modified by Israel Aerospace Industries. In 2013, the Brazilian Air Force ordered two 767-300ER tanker conversions from IAI for its KC-X2 program. E-10 MC2A – the Northrop Grumman E-10 was to be a 767-400ER-based replacement for the USAF's 707-based E-3 Sentry AWACS, Northrop Grumman E-8 Joint STARS, and RC-135 SIGINT aircraft. The E-10 would have included an all-new AWACS system, with a powerful active electronically scanned array
system that relied on quick, short throws, often spreading the ball across the entire width of the field. The new offense was much better suited to Carter's physical abilities; he led the league in pass completion percentage in 1971. Walsh spent eight seasons as an assistant with the Bengals. Ken Anderson eventually replaced Carter as starting quarterback and, together with star wide receiver Isaac Curtis, produced a consistent, effective offensive attack. Walsh started as the wide receivers coach from 1968 to 1970 before also coaching the quarterbacks from 1971 to 1975. When Brown retired as head coach following the 1975 season and appointed Bill "Tiger" Johnson as his successor, Walsh resigned and served as an assistant coach for Tommy Prothro with the San Diego Chargers in 1976. In a 2006 interview, Walsh claimed that during his tenure with the Bengals, Brown "worked against my candidacy" to be a head coach anywhere in the league. "All the way through I had opportunities, and I never knew about them", Walsh said. "And then when I left him, he called whoever he thought was necessary to keep me out of the NFL." In 1977, Walsh was hired as the head coach at Stanford, where he stayed for two seasons. His two Stanford teams were successful, posting a 9–3 record in 1977 with a win in the Sun Bowl, and 8–4 in 1978 with a win in the Bluebonnet Bowl. His notable players at Stanford included quarterbacks Guy Benjamin and Steve Dils, wide receivers James Lofton and Ken Margerum, linebacker Gordy Ceresino, and running back Darrin Nelson. Walsh was the Pac-8 Conference Coach of the Year in 1977. 49ers head coach He was appointed head coach of the San Francisco 49ers on January 9, 1979, one day after both his resignation from Stanford and team owner Edward J. DeBartolo, Jr.'s dismissal of Walsh's predecessor Fred O'Connor and general manager Joe Thomas. The long-suffering 49ers had gone 2–14 in 1978, the season before Walsh's arrival, and repeated the same dismal record in his first season as head coach. However, Walsh enacted organizational changes that improved the 49ers' record in his second season on their way to winning their first Super Bowl. He also drafted quarterback Joe Montana from Notre Dame in the third round. Despite their second consecutive 2–14 record, the 49ers were playing more competitive football. In 1980, Steve DeBerg was the starting quarterback who got San Francisco off to a 3–0 start, but after a 59–14 blowout loss to Dallas in week 6, Walsh promoted Montana to starting quarterback. On Sunday, December 7, against the New Orleans Saints, Montana brought the 49ers back from a 35–7 halftime deficit to win 38–35 in overtime. The 49ers improved to 6–10 and, more importantly, were making visible strides under Walsh, getting better every week. 1981 championship In 1981, key victories were two wins each over the Los Angeles Rams and the Dallas Cowboys. The Rams were only two seasons removed from a Super Bowl appearance, and had dominated the series with the 49ers since 1967, winning 23, losing 3 and tying 1. San Francisco's two wins over the Rams in 1981 marked a shift of dominance in favor of the 49ers that lasted until 1998, with 30 wins (including 17 consecutive) against only 6 defeats. The 49ers blew out the Cowboys in week 6 of the regular season. On Monday Night Football that week, the win was not included in the halftime highlights.
Walsh felt that this was because the Cowboys were scheduled to play the Rams the next week in a Sunday night game and that showing the highlights of the 49ers' win would potentially hurt the game's ratings. However, Walsh used this as a motivating factor for his team, who felt they were disrespected. The 49ers finished the regular season with a 13–3 record. The 49ers faced the Cowboys again that same season in the NFC title game. The game was very close, and in the fourth quarter Walsh called a series of running plays as the 49ers marched down the field against the Cowboys' prevent defense, which had been expecting the 49ers to mainly pass. The 49ers came from behind to win the game on Dwight Clark's touchdown reception, known as The Catch, propelling Walsh to his first Super Bowl. Walsh would later write that the 49ers' two wins over the Rams showed a shift of power in their division, while the wins over the Cowboys showed a shift of power in the conference. Two weeks later, on January 24, 1982, San Francisco faced the Cincinnati Bengals in Super Bowl XVI, winning 26–21 for the team's first NFL championship. Just two seasons removed from back-to-back two-win campaigns, the 49ers had risen from the cellar to the top of the NFL. In all, Walsh served as 49ers head coach for 10 years, winning three championships, in the 1981, 1984, and 1988 seasons. During his tenure, Walsh and his coaching staff perfected the style of play known popularly as the West Coast offense. With a disciplined approach to game-planning, Walsh famously scripted the first 10–15 offensive plays before the start of each game and was nicknamed "The Genius" for both his innovative play calling and design. In the ten years under Walsh, San Francisco scored 3,714 points (24.4 per game), the most of any team in the league during that span. In addition to Joe Montana, Walsh drafted Ronnie Lott, Charles Haley, and Jerry Rice. He also traded second- and fourth-round picks in the 1987 draft for Steve Young. His success with the 49ers was rewarded with his election to the Pro Football Hall of Fame in 1993. Montana, Lott, Haley, Rice and Young were also elected to the Hall of Fame. Coaching tree Prominent assistant coaches Many of Bill Walsh's assistant coaches went on to be head coaches themselves, including George Seifert, Mike Holmgren, Ray Rhodes, and Dennis Green. After Walsh's retirement from the 49ers, Seifert succeeded him as 49ers head coach, and guided San Francisco to victories in Super Bowl XXIV and Super Bowl XXIX. Holmgren won a Super Bowl with the Green Bay Packers, and made three Super Bowl appearances as a head coach: two with the Packers and one with the Seattle Seahawks. These coaches in turn have their own disciples who have used Walsh's West Coast system, such as former Denver Broncos head coach Mike Shanahan and former Houston Texans head coach Gary Kubiak. Mike Shanahan was an offensive coordinator under George Seifert and went on to win Super Bowl XXXII and Super Bowl XXXIII during his time as head coach of the Denver Broncos. Kubiak was first a quarterback coach with the 49ers, and then offensive coordinator for Shanahan with the Broncos. In 2015, he became the Broncos' head coach and led Denver to victory in Super Bowl 50. Dennis Green trained Tony Dungy, who won a Super Bowl with the Indianapolis Colts, and Brian Billick, along with Billick's brother-in-law and linebackers coach Mike Smith. Billick won a Super Bowl as head coach of the Baltimore Ravens.
Mike Holmgren trained many of his assistants to become head coaches, including Jon Gruden and Andy Reid. Gruden won a Super Bowl with the Tampa Bay Buccaneers. Reid served as head coach of the Philadelphia Eagles from 1999 to 2012, guiding the Eagles to multiple winning seasons and numerous playoff appearances. Since 2013, Reid has served as head coach of the Kansas City Chiefs, and he finally won a Super Bowl when his Chiefs defeated the San Francisco 49ers in Super Bowl LIV. In addition, Marc Trestman, former head coach of the Chicago Bears, served as offensive coordinator under Seifert in the 1990s. Gruden himself trained Mike Tomlin, who led the Pittsburgh Steelers to their sixth Super Bowl championship, and Jim Harbaugh, whose 49ers faced the Baltimore Ravens, coached by his brother John Harbaugh (himself trained by Reid), in Super Bowl XLVII, which the Ravens won for their second championship. Bill Walsh was viewed as a strong advocate for African-American head coaches in the NFL and NCAA, and his influence helped open head coaching opportunities for African-American coaches. Along with Ray Rhodes and Dennis Green, Tyrone Willingham became the head coach at Stanford, then later at Notre Dame and Washington. One of Mike Shanahan's assistants, Karl Dorrell, went on to be the head coach at UCLA. Walsh directly helped propel Dennis Green into the NFL head coaching ranks by offering to take over the head coaching job at Stanford himself. Bill Walsh
coaching tree Many former and current NFL head coaches trace their lineage back to Bill Walsh on his coaching tree, shown below. Walsh, in turn, belonged to the coaching tree of American Football League great and Hall of Fame coach Sid Gillman of the AFL's Los Angeles/San Diego Chargers and Hall of Fame coach Paul Brown. Tree updated through December 9, 2015. Later career After leaving the coaching ranks immediately following his team's victory in Super Bowl XXIII, Walsh went to work
knives were general-purpose tools, designed for cutting and shaping wooden implements, scraping hides, preparing food, and for other utilitarian purposes. By the 19th century the fixed-blade utility knife had evolved into a steel-bladed outdoors field knife capable of butchering game, cutting wood, and preparing campfires and meals. With the invention of the backspring, pocket-size utility knives were introduced with folding blades and other folding tools designed to increase the utility of the overall design. The folding pocketknife and utility tool is typified by the Camper or Boy Scout pocketknife, the U.S. folding utility knife, the Swiss Army Knife, and by multi-tools fitted with knife blades. The development of stronger locking blade mechanisms for folding knives – as with the Spanish navaja, the Opinel, and the Buck 110 Folding Hunter – significantly increased the utility of such knives when employed for heavy-duty tasks such as preparing game or cutting through dense or tough materials. Contemporary utility knives The fixed or folding blade utility knife is popular for both indoor and outdoor use. One of the most popular types of workplace utility knife is the retractable or folding utility knife (also known as a Stanley knife, box cutter, or by various other names). These types of utility knives are designed as multi-purpose cutting tools for use in a variety of trades and crafts. Designed to be lightweight and easy to carry and use, utility knives are commonly used in factories, warehouses, construction projects, and other situations where a tool is routinely needed to mark cut lines, trim plastic or wood materials, or to cut tape, cord, strapping, cardboard, or other packaging material. Names In British, Australian and New Zealand English, along with Dutch, Danish and Austrian German, a utility knife frequently used in the construction industry is known as a Stanley knife. This name is a genericised trademark, after Stanley Works, a manufacturer of such knives. In Israel and Switzerland, these knives are known as Japanese knives. In Brazil they are known as estiletes or cortadores Olfa (the latter being another genericised trademark). In Portugal, Panama and Canada they are also known as X-Acto (yet another genericised trademark). In India, Russia, the Philippines, France, Iraq, Italy, Egypt, and Germany, they are simply called cutter. In the Flemish region of Belgium it is called cuttermes(je) (cutter knife). In general Spanish, they are known as cortaplumas (penknife, when it comes to folding blades); in Spain, Mexico, and Costa Rica, they are colloquially known as cutters; in Argentina and Uruguay the segmented fixed-blade knives are known as trinchetas. In Turkey, they are known as maket bıçağı (which literally translates as model knife). Other names for the tool are box cutter or boxcutter, razor blade knife, razor knife, carpet knife, pen knife, stationery knife, sheetrock knife, or drywall knife. Design Utility knives may use fixed, folding, or retractable or replaceable blades, and come in a wide variety of lengths and styles suited to the particular set of tasks they are designed to perform. Thus, an outdoors utility knife suited for camping or hunting might use a broad fixed blade, while a utility knife designed for the construction industry might feature a replaceable utility or razor blade for cutting packaging, cutting shingles, marking cut lines, or scraping paint.
Fixed blade utility knife Large fixed-blade utility knives are most often employed in an outdoors context, such as fishing, camping, or hunting. Outdoor utility knives typically feature sturdy blades from in length, with edge geometry designed to resist chipping and breakage. The term "utility knife" may also refer to small fixed-blade knives used for crafts, model-making and other artisanal projects. These small knives feature light-duty blades best suited for cutting thin, lightweight materials. The small, thin blade and specialized handle permit cuts requiring a high degree of precision and control. Workplace utility knives The largest construction or workplace utility knives typically feature retractable and replaceable blades, made of either die-cast metal or molded plastic. Some use standard razor blades, others specialized
double-ended utility blades. The user can adjust how far the blade extends from the handle, so that, for example, the knife can be used to cut the tape sealing a package without damaging the contents of the package. When the blade becomes dull, it can be quickly reversed or switched for a new one. Spare or used blades are stored in the hollow handle of some models, and can be accessed by removing a screw and opening the handle. Other models feature a quick-change mechanism that allows replacing the blade without tools, as well as a flip-out blade storage tray. The blades for this type of utility knife come in both double- and single-ended versions, and are interchangeable with many, but not all, of the later copies. Specialized blades also exist for cutting string, linoleum, and other materials. Another style is a snap-off utility knife that contains a long, segmented blade that slides out from it. As the endmost edge becomes dull, it can be broken off the remaining blade, exposing the next section, which is sharp and ready for use. The snapping is best accomplished with a blade snapper that is often built-in, or a pair of pliers, and the break occurs at the score lines, where the metal is thinnest. When all of the individual segments are used, the knife may be thrown away, or, more often, refilled with a replacement blade. This design was introduced by Japanese manufacturer Olfa Corporation in 1956 as the world's first snap-off blade; it was inspired by analyzing the sharp cutting edge produced when glass is broken and the way pieces of a chocolate bar break into segments. The sharp cutting edge on these knives is not on the edge where the blade is snapped off; rather, one long edge of the whole blade is sharpened, and there are scored diagonal breakoff lines at intervals down the blade. Thus each snapped-off piece is roughly a parallelogram, with each long edge being a breaking edge, and one or both of the short ends being a sharpened edge. Another utility knife often used for cutting open boxes consists of a simple sleeve around a rectangular handle into which single-edge utility blades can be inserted. The sleeve slides up and down on the handle, holding the blade in place during use and covering the blade when not in use. The blade holder may either retract or fold into the handle, much like a folding-blade pocketknife. The blade holder is designed to expose just enough edge to cut through one layer of corrugated fibreboard, to minimize chances of damaging contents of cardboard boxes. Use as weapon Most utility knives are not well suited to use as offensive weapons, with the exception of some outdoor-type utility knives employing longer blades. However, even small razor-blade type utility knives may sometimes find use as slashing weapons. The 9/11 Commission Report stated that passengers, in cell phone calls, reported that knives or "box-cutters" were
the Iron Age after a serious disruption of the tin trade: the population migrations of around 1200–1100 BCE reduced the shipping of tin around the Mediterranean and from Britain, limiting supplies and raising prices. As the art of working in iron improved, iron became cheaper and improved in quality. As cultures advanced from hand-wrought iron to machine-forged iron (typically made with trip hammers powered by water), blacksmiths learned how to make steel. Steel is stronger than bronze and holds a sharper edge longer. Bronze was still used during the Iron Age, and has continued in use for many purposes to the modern day. Composition There are many different bronze alloys, but typically modern bronze is 88% copper and 12% tin. Alpha bronze consists of the alpha solid solution of tin in copper. Alpha bronze alloys of 4–5% tin are used to make coins, springs, turbines and blades. Historical "bronzes" are highly variable in composition, as most metalworkers probably used whatever scrap was on hand; the metal of the 12th-century English Gloucester Candlestick is bronze containing a mixture of copper, zinc, tin, lead, nickel, iron, antimony, and arsenic, with an unusually large amount of silver – from 22.5% in the base to 5.76% in the pan below the candle. The proportions of this mixture suggest that the candlestick was made from a hoard of old coins. The 13th-century Benin Bronzes are in fact brass, and the 12th-century Romanesque Baptismal font at St Bartholomew's Church, Liège is described as both bronze and brass. In the Bronze Age, two forms of bronze were commonly used: "classic bronze", about 10% tin, was used in casting; and "mild bronze", about 6% tin, was hammered from ingots to make sheets. Bladed weapons were mostly cast from classic bronze, while helmets and armor were hammered from mild bronze. Commercial bronze (90% copper and 10% zinc) and architectural bronze (57% copper, 3% lead, 40% zinc) are more properly regarded as brass alloys because they contain zinc as the main alloying ingredient. They are commonly used in architectural applications. Plastic bronze contains a significant quantity of lead, which improves plasticity; it may have been used by the ancient Greeks in their ship construction. Silicon bronze has a composition of Si: 2.80–3.80%, Mn: 0.50–1.30%, Fe: 0.80% max., Zn: 1.50% max., Pb: 0.05% max., Cu: balance. Other bronze alloys include aluminum bronze, phosphor bronze, manganese bronze, bell metal, arsenical bronze, speculum metal and cymbal alloys. Properties Bronzes are typically ductile alloys, considerably less brittle than cast iron. Typically bronze oxidizes only superficially; once a copper oxide (eventually becoming copper carbonate) layer is formed, the underlying metal is protected from further corrosion. This can be seen on statues from the Hellenistic period. However, if copper chlorides are formed, a corrosion mode called "bronze disease" will eventually destroy the metal completely. Copper-based alloys have lower melting points than steel or iron and are more readily produced from their constituent metals. They are generally about 10 percent denser than steel, although alloys using aluminum or silicon may be slightly less dense. Bronze is a better conductor of heat and electricity than most steels. The cost of copper-base alloys is generally higher than that of steels but lower than that of nickel-base alloys. Copper and its alloys have a huge variety of uses that reflect their versatile physical, mechanical, and chemical properties.
Some common examples are the high electrical conductivity of pure copper, the low-friction properties of bearing bronze (bronze with a high lead content of 6–8%), the resonant qualities of bell bronze (20% tin, 80% copper), and the resistance to corrosion by seawater of several bronze alloys. The melting point of bronze varies depending on the ratio of the alloy components and is about . Bronze is usually nonmagnetic, but certain alloys containing iron or nickel may have magnetic properties. Uses Bronze, or bronze-like alloys and mixtures, were used for coins over a long period. Bronze was especially suitable for use in boat and ship fittings prior to the wide employment of stainless steel owing to its combination of toughness and resistance to salt water corrosion. Bronze is still commonly used in ship propellers and submerged bearings. In the 20th century, silicon was introduced as the primary alloying element, creating an alloy with wide application in industry and the major form used in contemporary statuary. Sculptors may prefer silicon bronze because of the ready availability of silicon bronze brazing rod, which allows color-matched repair of defects in castings. Aluminum is also used for the structural metal aluminum bronze. Bronze parts are tough and typically used for bearings, clips, electrical connectors and springs. Bronze also has low friction against dissimilar metals, making it important for cannons prior to modern tolerancing, where iron cannonballs would otherwise stick in the barrel. It is still widely used today for springs, bearings, bushings,
automobile transmission pilot bearings, and similar fittings, and is particularly common in the bearings of small electric motors. Phosphor bronze is particularly suited to precision-grade bearings and springs.
It is also used in guitar and piano strings. Unlike steel, bronze struck against a hard surface will not generate sparks, so it (along with beryllium copper) is used to make hammers, mallets, wrenches and other durable tools to be used in explosive atmospheres or in the presence of flammable vapors. Bronze is used to make bronze wool for woodworking applications where steel wool would discolor oak. Phosphor bronze is used for ships' propellers, musical instruments, and electrical contacts. Bearings are often made of bronze for its friction properties. It can be impregnated with oil to make the proprietary Oilite and similar material for bearings. Aluminum bronze is hard and wear-resistant, and is used for bearings and machine tool ways. Sculptures Bronze is widely used for casting bronze sculptures. Common bronze alloys have the unusual and desirable property of expanding slightly just before they set, thus filling the finest details of a mould. Then, as the bronze cools, it shrinks a little, making it easier to separate from the mould. The Assyrian king Sennacherib (704–681 BCE) claimed to have been the first to cast monumental bronze statues (of up to 30 tonnes) using two-part moulds instead of the lost-wax method. Bronze statues were regarded as the highest form of sculpture in Ancient Greek art, though survivals are few, as bronze was a valuable material in short supply in the Late Antique and medieval periods. Many of the most famous Greek bronze sculptures are known through Roman copies in marble, which were more likely to survive. In India, bronze sculptures from the Kushana (Chausa hoard) and Gupta periods (Brahma from Mirpur-Khas, Akota Hoard, Sultanganj Buddha) and later periods (Hansi Hoard) have been found. Indian Hindu artisans from the period of the Chola empire in Tamil Nadu used bronze to create intricate statues via the lost-wax casting method with ornate detailing depicting the deities of Hinduism. The art form survives to this day, with many silpis (craftsmen) working in the areas of Swamimalai and Chennai. In antiquity other cultures also produced works of high art using bronze. For example: in Africa, the bronze heads of the Kingdom of Benin; in Europe, Grecian bronzes, typically of figures from Greek mythology; in East Asia, Chinese ritual bronzes of the Shang and Zhou dynasties, more often ceremonial vessels but including some figurine examples. Bronze sculptures, although known for their longevity, still undergo microbial degradation, such as from certain species of yeasts. Bronze continues into modern times as one of the materials of choice for monumental statuary. Mirrors Before it became possible to produce glass with acceptably flat surfaces, bronze was a standard material for mirrors. The reflecting surface was typically made slightly convex so that the whole face could be seen in a small mirror. Bronze was used for this purpose in many parts of the world, probably based on independent discoveries. Bronze mirrors survive from the Egyptian Middle Kingdom (2040–1750 BCE). In Europe, the Etruscans were making bronze mirrors in the sixth century BCE, and Greek and Roman mirrors followed the same pattern. Although other materials such as speculum metal had come into use, bronze mirrors were still being made in Japan in the eighteenth century CE. Musical instruments Bronze is the preferred metal for bells in the form of a high-tin bronze alloy known colloquially as bell metal, which is about 23% tin.
Nearly all professional cymbals are made from bronze, which gives a desirable balance of durability and timbre. Several types of bronze are used, commonly B20 bronze, which is roughly 20% tin and 80% copper with traces of silver, or the tougher B8 bronze made from 8% tin and 92% copper. As the tin content in a bell or cymbal rises, the timbre drops. Bronze is also used for the windings of steel and nylon strings of various stringed instruments such as the double bass, piano, harpsichord, and guitar. Bronze strings are commonly reserved for the lower pitches on the pianoforte, as they possess superior sustain compared with high-tensile steel. Bronzes of various metallurgical properties are widely used in struck idiophones around the world, notably bells, singing bowls,
works together with the German Land (state) of North Rhine-Westphalia. In 2018, the Benelux Union signed a declaration with France to strengthen cross-border cooperation. Benelux legal instruments The Benelux Union involves intergovernmental cooperation. The Treaty establishing the Benelux Union explicitly provides that the Benelux Committee of Ministers can resort to four legal instruments (art. 6, paragraph 2, under a), f), g) and h)): 1. Decisions Decisions are legally binding regulations for implementing the Treaty establishing the Benelux Union or other Benelux treaties. Their legally binding force concerns the Benelux states (and their sub-state entities), which have to implement them. However, they have no direct effect on individual citizens or companies (notwithstanding any indirect protection of their rights based on such decisions as a source of international law). Only national provisions implementing a decision can directly create rights and obligations for citizens or companies. 2. Agreements The Committee of Ministers can draw up agreements, which are then submitted to the Benelux states (and/or their sub-state entities) for signature and subsequent parliamentary ratification. These agreements can deal with any subject matter, including policy areas not yet covered by cooperation in the framework of the Benelux Union. These are in fact traditional treaties, with the same direct legally binding force on both authorities and citizens or companies. The negotiations do, however, take place in the established context of the Benelux working groups and institutions, rather than on an ad hoc basis. 3. Recommendations Recommendations are non-binding orientations, adopted at ministerial level, which underpin the functioning of the Benelux Union. These (policy) orientations may not be legally binding, but given their adoption at the highest political level and their legal basis vested directly in the Treaty, they do entail a strong moral obligation for any authority concerned in the Benelux countries. 4. Directives Directives of the Committee of Ministers are mere inter-institutional instructions to the Benelux Council and/or the Secretariat-General, on which they are binding. This instrument has so far been used only occasionally, mainly to organise certain activities within a Benelux working group or to give them impetus. All four instruments require the unanimous approval of the members of the Committee of Ministers (and, in the case of agreements, subsequent signature and ratification at national level). In 1965, the treaty establishing a Benelux Court of Justice was signed. It entered into force in 1974. The Court, composed of judges from the highest courts of the three states, has to guarantee the uniform interpretation of common legal rules. This international judicial institution is located in Luxembourg. The Benelux is particularly active in the field of intellectual property. The three countries established a Benelux Trademarks Office and a Benelux Designs Office, both situated in The Hague. In 2005, they concluded a treaty establishing a Benelux Organisation for Intellectual Property, which replaced both offices upon its entry into force on 1 September 2006. This Organisation is the official body for the registration of trademarks and designs in the Benelux. In addition, it offers the possibility of formally recording the existence of ideas, concepts, designs, prototypes and the like.
All higher education degrees recognised throughout Benelux In 2018, the education ministers from Belgium's three communities, as well as those from the Netherlands and Luxembourg, signed an agreement to recognise the level of all higher education diplomas among the three countries, a unique development in the EU. To continue studies or get a job in another country, applicants must have their locally earned degree recognised by the other country, which entails a lot of paperwork, fees and sometimes a months-long wait. In 2015, the Benelux countries agreed to recognise each other's bachelor's and master's diplomas without such hindrances. Now, recognition is extended to PhDs and to so-called graduate degrees, which are earned from adult educational institutions. This means that a graduate of any of the three countries can continue their education or seek a job in the other countries without having to have their degree officially recognised. New Benelux Treaty on police cooperation The Belgian Minister of Security and Home Affairs, Jan Jambon, the Belgian Minister of Justice, Koen Geens, the Dutch Minister of Justice and Security, Ferdinand Grapperhaus, the Luxembourg Minister of Homeland Security, Etienne Schneider, and the Luxembourg Minister of Justice, Félix Braz, signed a new Benelux police treaty in 2018, which will improve the exchange of information, create more opportunities for cross-border action and facilitate police investigations in the neighbouring country. In 2004, a treaty on cross-border cooperation between the Benelux police forces was concluded. This has been completely revised and expanded; the Benelux countries are at the forefront of the European Union in this respect. The new Treaty will allow direct access to each other's police databases on the basis of hit/no hit. In addition, direct consultation of police databases will be possible during joint operations and in common police stations. It will also be possible to consult population registers within the limits of national legislation. In the future, ANPR (Automatic Number Plate Recognition) camera data, which play an increasingly important role in the fight against crime, can be exchanged between the Benelux countries in accordance with their own applicable law. Police and judicial authorities will also work more closely with local authorities to exchange information on organised crime in a more targeted way (administrative approach) in accordance with national law. The Treaty makes cross-border pursuit much easier and broadens the investigative powers of Benelux police officers. For example, it will be possible to continue a hot pursuit that is lawful in one's own country across the border, without the thresholds for criminal offences that characterise the current regulation. Another new feature of the Treaty is that a police officer can, under certain conditions, carry out cross-border investigations. The existing intensive cooperation in the field of police liaison officers, joint patrols and checks, as well as the provision of assistance at major events, will be maintained. In addition, the possibilities for cross-border escort and surveillance missions and for operating on international trains will be considerably extended. In the event of a crisis situation, special police units will now be able to act across borders; this can also be used to support important events with a high security risk, such as a NATO Summit.
After approval by the parliaments and the elaboration of implementation agreements, the new Benelux Police Treaty will enter into force. Benelux Treaty of Liège: joint Benelux road transport inspections The Treaty of Liège entered into force in 2017. As a result, Dutch, Belgian and Luxembourg inspectors may carry out joint inspections of trucks and buses in the three countries. This treaty was signed in 2014 in Liège (Belgium) by the three countries. In the meantime, on the basis of a transitional regime and pending the entry into force of the Treaty, several major Benelux road transport inspections have taken place. Under this transition
regime, inspectors from neighbouring countries could only act as observers. Now they can exercise their full powers. Cooperation on the basis of this Benelux treaty leads to more uniform control of road transport, cost reductions, fairer competition between transport companies and better working conditions for drivers. In addition, this cooperation strengthens general road safety in the three countries. The treaty seeks to intensify cooperation through intensive harmonisation of controls, exchange of equipment and training of personnel in order to reduce costs, and by allowing inspectors of one country to participate in inspections in another Benelux country while exercising all their powers, which in particular makes the expertise of each country's specialists available. In so doing, the countries are fully committed to road safety for citizens and create a level playing field, so that transport operators inside and outside the Benelux must comply with the same rules of control. The application of the Treaty of Liège allows the three Benelux countries to play the role of forerunners in Europe. In addition, the treaty expressly provides for the possibility of accession by other countries.
By June 2019, a total of 922 vehicles had been subject to common Benelux inspections. Benelux pilot project with digital consignment notes A Benelux-wide pilot project was launched in 2017 to enable the use of digital consignment notes (e-CMR) for national and intra-Benelux transport. The switch to e-CMR in the Benelux offers possible savings of €4.50 per consignment. With an annual figure of around 65 million consignment notes used, this represents overall savings of close to €300 million per year (65 million × €4.50 ≈ €292.5 million). With this operation, the Benelux countries are testing the operation of the digital consignment note (from a control perspective). They will share their findings with the European Union. Benelux effect on cross-border mobility Currently, 37% of the total number of EU frontier workers work in the Benelux and surrounding areas. 35,000 Belgian citizens work in Luxembourg, while 37,000 Belgian citizens cross the border to work in the Netherlands each day. In addition, 12,000 Dutch and close to a thousand Luxembourg residents work in Belgium. Benelux countries take the lead in stimulating European cycling policy In a joint political declaration (July 2020), the mobility ministers of the Benelux countries called on the European Commission to prioritise cycling in European climate policy and sustainable transport strategies. They call on the Commission to co-finance the construction of cycling infrastructure and to provide funds to stimulate cycling policy as part of the European Green Deal. The COVID-19 crisis has had a massive impact on the state of mobility in Europe. During the lockdown period, cycle use increased in almost every European country. The (increased) use of this sustainable form of transport is not just essential if the EU is to achieve its climate objectives by 2050, but also has a positive impact on public health and the economy in the EU. Cycling in Europe brings €150 billion in benefits, of which €90 billion are linked to the environment, health and the mobility system. The cycle industry already provides hundreds of thousands of jobs, and annual revenue from cycle tourism in the EU is estimated at €44 billion. In their statement, the ministers stress that the provision of safe, high-quality cycling infrastructure and secure cycle parking is essential to further stimulate cycle use. Further European research is also needed to map out the potential for cycling post COVID-19. With this declaration, the mobility ministers of the Benelux are also calling on other EU Member States to provide the European Commission with up-to-date data on active mobility, which is not currently collected at EU level. They also call on them to make adequate funding available for cycling projects in their COVID-19 recovery plans and to take cycling into account in tourism and road policy. They ask regional and local authorities to expand networks of cycle paths, to promote cycling campaigns and to arrange cycle sharing schemes during the summer months. Renewal of the agreement The Treaty between the Benelux countries establishing the Benelux Economic Union was limited to a period of 50 years. During the following years, and even more so after the creation of the European Union, the Benelux cooperation focused on developing other fields of activity within a constantly changing international context.
At the end of the 50 years, the governments of the three Benelux countries decided to renew the agreement, taking into account the new aspects of Benelux cooperation – such as security – and the new federal government structure of Belgium. The original establishing treaty, set to expire in 2010, was replaced by a new legal framework (called the Treaty revising the Treaty establishing the Benelux Economic Union), which was signed on 17 June
also lent suspicions) The FCC ordered comparative hearings, and in 1969 a competing applicant, Boston Broadcasters, Inc., was granted a construction permit to replace WHDH-TV on channel 5. Herald-Traveler Corp. fought the decision in court – by this time, revenues from channel 5 were virtually all that kept the newspaper afloat – but its final appeal ran out in 1972, and on March 19 WHDH-TV was forced to surrender channel 5 to the new WCVB-TV. The Boston Herald Traveler and Record American Without a television station to subsidize the newspaper, the Herald Traveler was no longer able to remain in business, and the newspaper was sold to Hearst Corporation, which published the rival all-day newspaper, the Record American. The two papers were merged to become an all-day paper called the Boston Herald Traveler and Record American in the morning and the Record American and Boston Herald Traveler in the afternoon. The first editions published under the new combined name were those of June 19, 1972. The afternoon edition was soon dropped and the unwieldy name shortened to Boston Herald American, with the Sunday edition called the Sunday Herald Advertiser. The Herald American was printed in broadsheet format, and failed to target a particular readership; where the Record American had been a typical city tabloid, the Herald Traveler was a Republican paper. Murdoch purchases The Herald American The Herald American converted to tabloid format in September 1981, but Hearst faced steep declines in circulation and advertising. The company announced it would close the Herald American – making Boston a one-newspaper town – on December 3, 1982. When the deadline came, Australian media baron Rupert Murdoch was negotiating to buy the paper and save it. He closed on the deal after 30 hours of talks with Hearst and newspaper unions – and five hours after Hearst had sent out notices to newsroom employees telling them they were terminated. The newspaper announced its own survival the next day with a full-page headline: "You Bet We're Alive!" The Boston Herald once again Murdoch changed the paper's name back to the Boston Herald. The Herald continued to grow, expanding its coverage and increasing its circulation until 2001, when nearly all newspapers fell victim to declining circulations and revenue. Independent ownership In February 1994, Murdoch's News Corporation was forced to sell the paper so that its subsidiary Fox Television Stations could legally consummate its purchase of Fox affiliate WFXT (Channel 25), because Massachusetts Senator Ted Kennedy had included language in an appropriations bill barring one company from owning a newspaper and television station in the same market. Patrick J. Purcell, who was the publisher of the Boston Herald and a former News Corporation executive, purchased the Herald and established it as an independent newspaper. Several years later, Purcell gave the Herald a suburban presence it had never had by purchasing the money-losing Community Newspaper Company from Fidelity Investments. Although the companies merged under the banner of Herald Media, Inc., the suburban papers maintained their distinct editorial and marketing identity. After years of operating profits at Community Newspaper and losses at the Herald, Purcell in 2006 sold the suburban chain to newspaper conglomerate Liberty Group Publishing of Illinois, which soon after changed its name to GateHouse Media.
The deal, which also saw GateHouse acquiring The Patriot Ledger and The Enterprise, in south suburban Quincy and Brockton respectively, netted $225 million for Purcell, who vowed to use the funds to clear the Herald's debt and reinvest in the paper. Boston Herald Radio On August 5, 2013, the Herald launched an internet radio station named Boston Herald Radio, which includes radio shows hosted by much of the Herald staff. The station's morning lineup is simulcast on 830 AM WCRN from 10 a.m. to noon Eastern time. Bankruptcy In December 2017, the Herald announced plans to sell itself to GateHouse Media after filing for Chapter 11 bankruptcy protection. The deal was scheduled to be completed by February 2018, with the new owner expected to streamline operations and lay off staff in the coming months. However, in early January 2018, another potential buyer, Revolution Capital Group of Los Angeles, filed a bid with the federal bankruptcy court; the Herald reported in a press release that "the court requires BHI [Boston Herald, Inc.] to hold an auction to allow all potential buyers an opportunity to submit competing offers." Digital First Media acquisition In February 2018, the acquisition of the Herald by Digital First Media for almost $12 million was approved by the bankruptcy court judge in Delaware. The new owner, DFM, said it would keep 175 of the approximately 240 employees the Herald had when it sought bankruptcy protection in December 2017. The acquisition was completed on March 19, 2018. The Herald and parent DFM were criticized for ending the ten-year printing contract with competitor The Boston Globe, for moving printing from Taunton, Massachusetts, to Rhode Island, and for "dehumanizing cost-cutting efforts" in personnel. In June, some design and advertising layoffs were expected, with work moving to a sister paper, The Denver Post. The "consolidation" took effect in August, with nine jobs eliminated. In late August 2018, it was announced that the Herald would move its offices from Boston's Seaport District to Braintree, Massachusetts, in late November or early December. On October 27, 2020, the Herald endorsed Donald Trump in the 2020 U.S. presidential election. Awards
1924: Pulitzer Prize for Editorial Writing, "Who Made Coolidge?"
1927: Pulitzer Prize for Editorial Writing, F. Lauriston Bullard, "We Submit"
1948: Pulitzer Prize for Photography, Frank Cushing, "Boy Gunman and Hostage"
1949: Pulitzer Prize for Editorial Writing, John H. Crider
1954: Pulitzer Prize for Editorial Writing, Don Murray, for a series of editorials on the "New Look" in national defense
1957: Pulitzer Prize for Photography, Harry A. Trask, for photographs of the sinking of the liner Andrea Doria in July 1956 (taken from an airplane flying at a height of 75 feet, nine minutes before the ship plunged to the bottom; the second picture in the sequence is cited as the key photograph)
1976: Pulitzer Prize for Spot News Photography, Stanley Forman, for Fire Escape Collapse, a dramatic shot of a young woman and child falling as the fire escape to which they had fled during an apartment house fire collapsed on July 22, 1975
1977: Pulitzer Prize for Spot News Photography, Stanley Forman, for The Soiling of Old Glory, showing Ted Landsmark, an African American civil rights lawyer, being attacked with an American flag by a protester during the Boston busing crisis
The paper's history traces back through two lineages, the Daily Advertiser and the old Boston Herald, and two media moguls, William Randolph Hearst and Rupert Murdoch. The original Boston Herald The original Boston Herald was founded in 1846 by a group of Boston printers jointly under the name of John A. French & Company. The paper was published as a single two-sided sheet, selling for one cent. Its first editor, William O. Eaton, just 22 years old, said "The Herald will be independent in politics and religion; liberal, industrious, enterprising, critically concerned with literary and dramatic matters, and diligent in its mission to report and analyze the news, local and global." In 1847, the Boston Herald absorbed the Boston American Eagle and the Boston Daily Times. The Boston Herald and Boston Journal In October 1917, John H. Higgins, the publisher and treasurer of the Boston Herald, bought out its next-door neighbor The Boston Journal and created The Boston Herald and Boston Journal. The American Traveler Even earlier than the Herald, the weekly American Traveler was founded in 1825 as a bulletin for stagecoach listings. The Boston Evening Traveller The Boston Evening Traveler, founded in 1845, was the successor to the weekly American Traveler and the semi-weekly Boston Traveler. In 1912, the Herald acquired the Traveler, continuing to publish both under their own names. For many years, the newspaper was controlled by many of the investors in United Shoe Machinery Co. After a newspaper strike in 1967, Herald-Traveler Corp. suspended the afternoon Traveler and absorbed the evening edition into the Herald to create the Boston Herald Traveler. The Boston Daily Advertiser The Boston Daily Advertiser was established in 1813 in Boston by Nathan Hale. The paper grew to prominence throughout the 19th century, taking over other Boston-area papers. In 1832 The Advertiser took over control of The Boston Patriot, and then in 1840 it took over and absorbed The Boston Gazette. The paper was purchased by William Randolph Hearst in 1917. In 1920 the Advertiser was merged with The Boston Record; initially the combined newspaper was called the Boston Advertiser, but when it became an illustrated tabloid in 1921 it was renamed The Boston American. Hearst Corp. continued using the name Advertiser for its Sunday paper until the early 1970s. The Boston Record On September 3, 1884, The Boston Evening Record was started by the Boston Advertiser as a campaign newspaper. The Record was so popular that it was made a permanent publication. The Boston American In 1904, William Randolph Hearst began publishing his own newspaper in Boston called The American. Hearst ultimately ended up purchasing the Daily Advertiser in 1917. By 1938, the Daily Advertiser had changed to the Daily Record, and The American had become the Sunday Advertiser. A third paper owned by Hearst, called the Afternoon Record, which had been renamed the Evening American, merged in 1961 with the Daily Record to form the Record American. The Sunday Advertiser and Record American would ultimately be merged in 1972 with the Boston Herald Traveler, a line of newspapers that stretched back to the old Boston Herald. The Boston Herald Traveler In 1946, Herald-Traveler Corporation acquired Boston radio station WHDH. Two years later, WHDH-FM was licensed, and on November 26, 1957, WHDH-TV made its début as an ABC affiliate on channel 5. In 1961, WHDH-TV's affiliation switched to CBS. Herald-Traveler Corp.
operated for years afterward under temporary authority from the Federal Communications Commission, stemming from controversy over luncheon meetings the newspaper's chief executive purportedly had with John C. Doerfer, chairman of the FCC between 1957 and 1960, who had served as a commissioner during the original licensing process. (Some Boston broadcast historians accuse The Boston Globe of being covertly behind the proceeding as a sort of vendetta for not getting a license—the Herald Traveler was Republican in sympathies, and the Globe then had a firm policy of not endorsing political candidates—although Doerfer's history at the FCC also lent suspicions.)
by Jim Thorpe in Fayetteville. Ruth made his first appearance against a team in organized baseball in an exhibition game versus the major-league Philadelphia Phillies. Ruth pitched the middle three innings and gave up two runs in the fourth, but then settled down and pitched a scoreless fifth and sixth innings. In a game against the Phillies the following afternoon, Ruth entered during the sixth inning and did not allow a run the rest of the way. The Orioles scored seven runs in the bottom of the eighth inning to overcome a 6–0 deficit, and Ruth was the winning pitcher. Once the regular season began, Ruth was a star pitcher who was also dangerous at the plate. The team performed well, yet received almost no attention from the Baltimore press. A third major league, the Federal League, had begun play, and the local franchise, the Baltimore Terrapins, restored that city to the major leagues for the first time since 1902. Few fans visited Oriole Park, where Ruth and his teammates labored in relative obscurity. Ruth may have been offered a bonus and a larger salary to jump to the Terrapins; when rumors to that effect swept Baltimore, giving Ruth the most publicity he had experienced to date, a Terrapins official denied it, stating it was their policy not to sign players under contract to Dunn. The competition from the Terrapins caused Dunn to sustain large losses. Although by late June the Orioles were in first place, having won over two-thirds of their games, the paid attendance dropped as low as 150. Dunn explored a possible move by the Orioles to Richmond, Virginia, as well as the sale of a minority interest in the club. These possibilities fell through, leaving Dunn with little choice other than to sell his best players to major league teams to raise money. He offered Ruth to the reigning World Series champions, Connie Mack's Philadelphia Athletics, but Mack had his own financial problems. The Cincinnati Reds and New York Giants expressed interest in Ruth, but Dunn sold his contract, along with those of pitchers Ernie Shore and Ben Egan, to the Boston Red Sox of the American League (AL) on July 4. The sale price was announced as $25,000 but other reports lower the amount to half that, or possibly $8,500 plus the cancellation of a $3,000 loan. Ruth remained with the Orioles for several days while the Red Sox completed a road trip, and reported to the team in Boston on July 11. Boston Red Sox (1914–1919) Developing star On July 11, 1914, Ruth arrived in Boston with Egan and Shore. Ruth later told the story of how that morning he had met Helen Woodford, who would become his first wife. She was a 16-year-old waitress at Landers Coffee Shop, and Ruth related that she served him when he had breakfast there. Other stories, though, suggested that the meeting occurred on another day, and perhaps under other circumstances. Regardless of when he began to woo his first wife, he won his first game as a pitcher for the Red Sox that afternoon, 4–3, over the Cleveland Naps. His catcher was Bill Carrigan, who was also the Red Sox manager. Shore was given a start by Carrigan the next day; he won that and his second start and thereafter was pitched regularly. Ruth lost his second start, and was thereafter little used. In his major league debut as a batter, Ruth went 0-for-2 against left-hander Willie Mitchell, striking out in his first at bat before being removed for a pinch hitter in the seventh inning. 
Ruth was not much noticed by the fans, as Bostonians watched the Red Sox's crosstown rivals, the Braves, begin a legendary comeback that would take them from last place on the Fourth of July to the 1914 World Series championship. Egan was traded to Cleveland after two weeks on the Boston roster. During his time with the Red Sox, he kept an eye on the inexperienced Ruth, much as Dunn had in Baltimore. When he was traded, no one took his place as supervisor. Ruth's new teammates considered him brash, and would have preferred him, as a rookie, to remain quiet and inconspicuous. When Ruth insisted on taking batting practice despite being both a rookie who did not play regularly, and a pitcher, he arrived to find his bats sawed in half. His teammates nicknamed him "the Big Baboon", a name the swarthy Ruth, who had disliked the nickname "Niggerlips" at St. Mary's, detested. Ruth had received a raise on promotion to the major leagues, and quickly acquired tastes for fine food, liquor, and women, among other temptations. Manager Carrigan allowed Ruth to pitch two exhibition games in mid-August. Although Ruth won both against minor-league competition, he was not restored to the pitching rotation. It is uncertain why Carrigan did not give Ruth additional opportunities to pitch. There are legends—filmed for the screen in The Babe Ruth Story (1948)—that the young pitcher had a habit of signaling his intent to throw a curveball by sticking out his tongue slightly, and that he was easy to hit until this changed. Creamer pointed out that it is common for inexperienced pitchers to display such habits, and the need to break Ruth of his would not constitute a reason to not use him at all. The biographer suggested that Carrigan was unwilling to use Ruth due to poor behavior by the rookie. On July 30, 1914, Boston owner Joseph Lannin had purchased the minor-league Providence Grays, members of the International League. The Providence team had been owned by several people associated with the Detroit Tigers, including star hitter Ty Cobb, and as part of the transaction, a Providence pitcher was sent to the Tigers. To soothe Providence fans upset at losing a star, Lannin announced that the Red Sox would soon send a replacement to the Grays. This was intended to be Ruth, but his departure for Providence was delayed when Cincinnati Reds owner Garry Herrmann claimed him off of waivers. After Lannin wrote to Herrmann explaining that the Red Sox wanted Ruth in Providence so he could develop as a player, and would not release him to a major league club, Herrmann allowed Ruth to be sent to the minors. Carrigan later stated that Ruth was not sent down to Providence to make him a better player, but to help the Grays win the International League pennant (league championship). Ruth joined the Grays on August 18, 1914. After Dunn's deals, the Baltimore Orioles managed to hold on to first place until August 15, after which they continued to fade, leaving the pennant race between Providence and Rochester. Ruth was deeply impressed by Providence manager "Wild Bill" Donovan, previously a star pitcher with a 25–4 win–loss record for Detroit in 1907; in later years, he credited Donovan with teaching him much about pitching. Ruth was often called upon to pitch, in one stretch starting (and winning) four games in eight days. On September 5 at Maple Leaf Park in Toronto, Ruth pitched a one-hit 9–0 victory, and hit his first professional home run, his only one as a minor leaguer, off Ellis Johnson. 
Recalled to Boston after Providence finished the season in first place, he pitched and won a game for the Red Sox against the New York Yankees on October 2, getting his first major league hit, a double. Ruth finished the season with a record of 2–1 as a major leaguer and 23–8 in the International League (for Baltimore and Providence). Once the season concluded, Ruth married Helen in Ellicott City, Maryland. Creamer speculated that they did not marry in Baltimore, where the newlyweds boarded with George Ruth Sr., to avoid possible interference from those at St. Mary's—neither bride nor groom was yet of age, and Ruth remained on parole from that institution until his 21st birthday. In March 1915, Ruth reported to Hot Springs, Arkansas, for his first major league spring training. Despite a relatively successful first season, he was not slated to start regularly for the Red Sox, who already had two "superb" left-handed pitchers, according to Creamer: the established stars Dutch Leonard, who had broken the record for the lowest earned run average (ERA) in a single season; and Ray Collins, a 20-game winner in both 1913 and 1914. Ruth was ineffective in his first start, taking the loss in the third game of the season. Injuries and ineffective pitching by other Boston pitchers gave Ruth another chance, and after some good relief appearances, Carrigan allowed Ruth another start, and he won a rain-shortened seven-inning game. Ten days later, the manager had him start against the New York Yankees at the Polo Grounds. Ruth took a 3–2 lead into the ninth, but lost the game 4–3 in 13 innings. In that game, Ruth, hitting ninth as was customary for pitchers, hit a massive home run into the upper deck in right field off of Jack Warhop. At the time, home runs were rare in baseball, and Ruth's majestic shot awed the crowd. The winning pitcher, Warhop, would in August 1915 conclude a major league career of eight seasons, undistinguished but for being the first major league pitcher to give up a home run to Babe Ruth. Carrigan was sufficiently impressed by Ruth's pitching to give him a spot in the starting rotation. Ruth finished the 1915 season 18–8 as a pitcher; as a hitter, he batted .315 and had four home runs. The Red Sox won the AL pennant, but with the pitching staff healthy, Ruth was not called upon to pitch in the 1915 World Series against the Philadelphia Phillies. Boston won in five games; Ruth was used as a pinch hitter in Game One, but grounded out against Phillies ace Grover Cleveland Alexander. Despite his success as a pitcher, Ruth was acquiring a reputation for long home runs; at Sportsman's Park against the St. Louis Browns, a Ruth hit soared over Grand Avenue, breaking the window of a Chevrolet dealership. In 1916, there was attention focused on Ruth for his pitching, as he engaged in repeated pitching duels with the ace of the Washington Senators, Walter Johnson. The two met five times during the season, with Ruth winning four and Johnson one (Ruth had a no decision in Johnson's victory). Two of Ruth's victories were by the score of 1–0, one in a 13-inning game. Of the 1–0 shutout decided without extra innings, AL President Ban Johnson stated, "That was one of the best ball games I have ever seen." For the season, Ruth went 23–12, with a 1.75 ERA and nine shutouts, both of which led the league. Ruth's nine shutouts in 1916 set a league record for left-handers that would remain unmatched until Ron Guidry tied it in 1978.
The Red Sox won the pennant and World Series again, this time defeating the Brooklyn Robins (as the Dodgers were then known) in five games. Ruth started and won Game 2, 2–1, in 14 innings. Until another game of that length was played in 2005, this was the longest World Series game, and Ruth's pitching performance is still the longest postseason complete game victory. Carrigan retired as player and manager after 1916, returning to his native Maine to be a businessman. Ruth, who played under four managers who are in the National Baseball Hall of Fame, always maintained that Carrigan, who is not enshrined there, was the best skipper he ever played for. There were other changes in the Red Sox organization that offseason, as Lannin sold the team to a three-man group headed by New York theatrical promoter Harry Frazee. Jack Barry was hired by Frazee as manager. Emergence as a hitter Ruth went 24–13 with a 2.01 ERA and six shutouts in 1917, but the Sox finished in second place in the league, nine games behind the Chicago White Sox in the standings. On June 23 at Washington, when home plate umpire 'Brick' Owens called the first four pitches as balls, Ruth threw a punch at him, and was ejected from the game and later suspended for ten days and fined $100. Ernie Shore was called in to relieve Ruth, and was allowed eight warm-up pitches. The runner who had reached base on the walk was caught stealing, and Shore retired all 26 batters he faced to win the game. Shore's feat was listed as a perfect game for many years. In 1991, Major League Baseball's (MLB) Committee on Statistical Accuracy amended it to be listed as a combined no-hitter. In 1917, Ruth was used little as a batter, other than for his plate appearances while pitching, and hit .325 with two home runs. The United States' entry into World War I occurred at the start of the season and overshadowed baseball. Conscription was introduced in September 1917, and most baseball players in the big leagues were of draft age. This included Barry, who was a player-manager, and who joined the Naval Reserve in an attempt to avoid the draft, only to be called up after the 1917 season. Frazee hired International League President Ed Barrow as Red Sox manager. Barrow had spent the previous 30 years in a variety of baseball jobs, though he never played the game professionally. With the major leagues shorthanded due to the war, Barrow had many holes in the Red Sox lineup to fill. Ruth also noticed these vacancies in the lineup. He was dissatisfied in the role of a pitcher who appeared every four or five days and wanted to play every day at another position. Barrow used Ruth at first base and in the outfield during the exhibition season, but he restricted him to pitching as the team moved toward Boston and the season opener. At the time, Ruth was possibly the best left-handed pitcher in baseball, and allowing him to play another position was an experiment that could have backfired. Inexperienced as a manager, Barrow had player Harry Hooper advise him on baseball game strategy. Hooper urged his manager to allow Ruth to play another position when he was not pitching, arguing to Barrow, who had invested in the club, that the crowds were larger on days when Ruth played, as they were attracted by his hitting. In early May, Barrow gave in; Ruth promptly hit home runs in four consecutive games (one an exhibition), the last off of Walter Johnson. 
For the first time in his career (disregarding pinch-hitting appearances), Ruth was assigned a place in the batting order higher than ninth. Although Barrow predicted that Ruth would beg to return to pitching the first time he experienced a batting slump, that did not occur. Barrow used Ruth primarily as an outfielder in the war-shortened 1918 season. Ruth hit .300, with 11 home runs, enough to secure him a share of the major league home run title with Tilly Walker of the Philadelphia Athletics. He was still occasionally used as a pitcher, and had a 13–7 record with a 2.22 ERA. In 1918, the Red Sox won their third pennant in four years and faced the Chicago Cubs in the World Series, which began on September 5, the earliest start date in World Series history. The season had been shortened because the government had ruled that baseball players who were eligible for the military would have to be inducted or work in critical war industries, such as armaments plants. Ruth pitched and won Game One for the Red Sox, a 1–0 shutout. Before Game Four, Ruth injured his left hand in a fight but pitched anyway. He gave up seven hits and six walks, but was helped by outstanding fielding behind him and by his own batting efforts, as a fourth-inning triple by Ruth gave his team a 2–0 lead. The Cubs tied the game in the eighth inning, but the Red Sox scored to take a 3–2 lead again in the bottom of that inning. After Ruth gave up a hit and a walk to start the ninth inning, he was relieved on the mound by Joe Bush. To keep Ruth and his bat in the game, he was sent to play left field. Bush retired the side to give Ruth his second win of the Series, and the third and last World Series pitching victory of his career, against no defeats, in three pitching appearances. Ruth's effort gave his team a three-games-to-one lead, and two days later the Red Sox won their third Series in four years, four games to two. Before allowing the Cubs to score in Game Four, Ruth had pitched 29⅔ consecutive scoreless innings, a record for the World Series that stood for more than 40 years until 1961, broken by Whitey Ford after Ruth's death. Ruth was prouder of that record than he was of any of his batting feats. With the World Series over, Ruth gained exemption from the war draft by accepting a nominal position with a Pennsylvania steel mill. Many industrial establishments took pride in their baseball teams and sought to hire major leaguers. The end of the war in November set Ruth free to play baseball without such contrivances. During the 1919 season, Ruth was used as a pitcher in only 17 of his 130 games and compiled a 9–5 record.
The first record to fall was the AL single-season mark of 16, set by Ralph "Socks" Seybold in 1902. Ruth matched that on July 29, then pulled ahead toward the major league record of 25, set by Buck Freeman in 1899. By the time Ruth reached this in early September, writers had discovered that Ned Williamson of the 1884 Chicago White Stockings had hit 27—though in a ballpark where the distance to right field was unusually short. On September 20, "Babe Ruth Day" at Fenway Park, Ruth won the game with a home run in the bottom of the ninth inning, tying Williamson. He broke the record four days later against the Yankees at the Polo Grounds, and hit one more against the Senators to finish with 29. The home run at Washington made Ruth the first major league player to hit a home run at all eight ballparks in his league. In spite of Ruth's hitting heroics, the Red Sox finished sixth, well behind the league champion White Sox. In his six seasons with Boston, he won 89 games and recorded a 2.19 ERA. He had a four-year stretch where he was second in the AL in wins and ERA behind Walter Johnson, and Ruth had a winning record against Johnson in head-to-head matchups. Sale to New York As an out-of-towner from New York City, Frazee had been regarded with suspicion by Boston's sportswriters and baseball fans when he bought the team. He won them over with success on the field and a willingness to build the Red Sox by purchasing or trading for players. He offered the Senators $60,000 for Walter Johnson, but Washington owner Clark Griffith was unwilling. Even so, Frazee was successful in bringing other players to Boston, especially as replacements for players in the military. This willingness to spend for players helped the Red Sox secure the 1918 title. The 1919 season saw record-breaking attendance, and Ruth's home runs for Boston made him a national sensation. In March 1919 Ruth was reported as having accepted a three-year contract for a total of $27,000, after protracted negotiations. Nevertheless, on December 26, 1919, Frazee sold Ruth's contract to the New York Yankees. Not all the circumstances concerning the sale are known, but brewer and former congressman Jacob Ruppert, the New York team's principal owner, reportedly asked Yankee manager Miller Huggins what the team needed to be successful. "Get Ruth from Boston", Huggins supposedly replied, noting that Frazee was perennially in need of money to finance his theatrical productions. In any event, there was precedent for the Ruth transaction: when Boston pitcher Carl Mays left the Red Sox in a 1919 dispute, Frazee had settled the matter by selling Mays to the Yankees, though over the opposition of AL President Johnson. According to one of Ruth's biographers, Jim Reisler, "why Frazee needed cash in 1919—and large infusions of it quickly—is still, more than 80 years later, a bit of a mystery". The often-told story is that Frazee needed money to finance the musical No, No, Nanette, which was a Broadway hit and brought Frazee financial security. That play did not open until 1925, however, by which time Frazee had sold the Red Sox. Still, the story may be true in essence: No, No, Nanette was based on a Frazee-produced play, My Lady Friends, which opened in 1919. There were other financial pressures on Frazee, despite his team's success. Ruth, fully aware of baseball's popularity and his role in it, wanted to renegotiate his contract, signed before the 1919 season for $10,000 per year through 1921.
He demanded that his salary be doubled, or he would sit out the season and cash in on his popularity through other ventures. Ruth's salary demands were causing other players to ask for more money. Additionally, Frazee still owed Lannin as much as $125,000 from the purchase of the club. Although Ruppert and his co-owner, Colonel Tillinghast Huston, were both wealthy, and had aggressively purchased and traded for players in 1918 and 1919 to build a winning team, Ruppert faced losses in his brewing interests as Prohibition was implemented, and if their team left the Polo Grounds, where the Yankees were the tenants of the New York Giants, building a stadium in New York would be expensive. Nevertheless, when Frazee, who moved in the same social circles as Huston, hinted to the colonel that Ruth was available for the right price, the Yankees owners quickly pursued the purchase. Frazee sold the rights to Babe Ruth for $100,000, the largest sum ever paid for a baseball player. The deal also involved a $350,000 loan from Ruppert to Frazee, secured by a mortgage on Fenway Park. Once it was agreed, Frazee informed Barrow, who, stunned, told the owner that he was getting the worse end of the bargain. Cynics have suggested that Barrow may have played a larger role in the Ruth sale, as less than a year later he became the Yankee general manager, and in the following years made a number of purchases of Red Sox players from Frazee. The $100,000 price included $25,000 in cash, and notes for the same amount due November 1 in 1920, 1921, and 1922; Ruppert and Huston assisted Frazee in selling the notes to banks for immediate cash. The transaction was contingent on Ruth signing a new contract, which was quickly accomplished—Ruth agreed to fulfill the remaining two years on his contract, but was given a $20,000 bonus, payable over two seasons. The deal was announced on January 6, 1920. Reaction in Boston was mixed: some fans were embittered at the loss of Ruth; others conceded that Ruth had become difficult to deal with. The New York Times suggested that "The short right field wall at the Polo Grounds should prove an easy target for Ruth next season and, playing seventy-seven games at home, it would not be surprising if Ruth surpassed his home run record of twenty-nine circuit clouts next Summer." According to Reisler, "The Yankees had pulled off the sports steal of the century." According to Marty Appel in his history of the Yankees, the transaction "changed the fortunes of two high-profile franchises for decades". The Red Sox, winners of five of the first 16 World Series, those played between 1903 and 1919, would not win another pennant until 1946, or another World Series until 2004, a drought attributed in baseball superstition to Frazee's sale of Ruth and sometimes dubbed the "Curse of the Bambino". Conversely, the Yankees had not won the AL championship prior to their acquisition of Ruth. They won seven AL pennants and four World Series with him, and they now lead baseball with 40 pennants and 27 World Series titles. New York Yankees (1920–1934) Initial success (1920–1923) When Ruth signed with the Yankees, he completed his transition from a pitcher to a power-hitting outfielder. His fifteen-season Yankee career consisted of over 2,000 games, and Ruth broke many batting records while making only five widely scattered appearances on the mound, winning all of them. At the end of April 1920, the Yankees were 4–7, with the Red Sox leading the league with a 10–2 mark.
Ruth had done little, having injured himself swinging the bat. Both situations began to change on May 1, when Ruth hit a tape-measure home run that sent the ball completely out of the Polo Grounds, a feat believed to have been previously accomplished only by Shoeless Joe Jackson. The Yankees won, 6–0, taking three out of four from the Red Sox. Ruth hit his second home run on May 2, and by the end of the month had set a major league record for home runs in a month with 11, and promptly broke it with 13 in June. Fans responded with record attendance figures. On May 16, Ruth and the Yankees drew 38,600 to the Polo Grounds, a record for the ballpark, and 15,000 fans were turned away. Large crowds jammed stadiums to see Ruth play when the Yankees were on the road. The home runs kept on coming. Ruth tied his own record of 29 on July 15 and broke it with home runs in both games of a doubleheader four days later. By the end of July, he had 37, but his pace slackened somewhat after that. Nevertheless, on September 4, he both tied and broke the organized baseball record for home runs in a season, snapping Perry Werden's 1895 mark of 44 in the minor Western League. The Yankees played well as a team, battling for the league lead early in the summer, but slumped in August in the AL pennant battle with Chicago and Cleveland. The pennant and the World Series were won by Cleveland, who surged ahead after the Black Sox Scandal broke on September 28 and led to the suspension of many of Chicago's top players, including Shoeless Joe Jackson. The Yankees finished third, but drew 1.2 million fans to the Polo Grounds, the first time a team had drawn a seven-figure attendance. The rest of the league sold 600,000 more tickets, many fans there to see Ruth, who led the league with 54 home runs, 158 runs, and 137 runs batted in (RBIs). In 1920 and afterwards, Ruth was aided in his power hitting by the fact that A.J. Reach Company—the maker of baseballs used in the major leagues—was using a more efficient machine to wind the yarn found within the baseball. The new baseballs went into play in 1920 and ushered in the live-ball era; the number of home runs across the major leagues increased by 184 over the previous year. Baseball statistician Bill James pointed out that while Ruth was likely aided by the change in the baseball, there were other factors at work, including the gradual abolition of the spitball (accelerated after the death of Ray Chapman, struck by a pitched ball thrown by Mays in August 1920) and the more frequent use of new baseballs (also a response to Chapman's death). Nevertheless, James theorized that Ruth's 1920 explosion might have happened in 1919, had a full season of 154 games been played rather than 140, had Ruth refrained from pitching 133 innings that season, and if he were playing at any other home field but Fenway Park, where he hit only 9 of 29 home runs. Yankees business manager Harry Sparrow had died early in the 1920 season. Ruppert and Huston hired Barrow to replace him. The two men quickly made a deal with Frazee for New York to acquire some of the players who would be mainstays of the early Yankee pennant-winning teams, including catcher Wally Schang and pitcher Waite Hoyt. The 21-year-old Hoyt became close to Ruth. In the offseason, Ruth spent some time in Havana, Cuba, where he was said to have lost $35,000 betting on horse races. Ruth hit home runs early and often in the 1921 season, during which he broke Roger Connor's mark for home runs in a career, 138.
Each of the almost 600 home runs Ruth hit in his career after that extended his own record. After a slow start, the Yankees were soon locked in a tight pennant race with Cleveland, winners of the 1920 World Series. On September 15, Ruth hit his 55th home run, shattering his year-old single-season record. In late September, the Yankees visited Cleveland and won three out of four games, giving them the upper hand in the race, and clinched their first pennant a few days later. Ruth finished the regular season with 59 home runs, batting .378 and with a slugging percentage of .846. Ruth's 177 runs scored, 119 extra-base hits, and 457 total bases set modern-era records that still stand. The Yankees had high expectations when they met the New York Giants in the 1921 World Series, every game of which was played at the Polo Grounds. The Yankees won the first two games with Ruth in the lineup. However, Ruth badly scraped his elbow during Game 2 when he slid into third base (he had walked and stolen both second and third bases). After the game, he was told by the team physician not to play the rest of the series. Despite this advice, he did play in the next three games, and pinch-hit in Game Eight of the best-of-nine series, but the Yankees lost, five games to three. Ruth hit .316, drove in five runs and hit his first World Series home run. After the Series, Ruth and teammates Bob Meusel and Bill Piercy participated in a barnstorming tour in the Northeast. A rule then in force prohibited World Series participants from playing in exhibition games during the offseason, the purpose being to prevent Series participants from replicating the Series and undermining its value. Baseball Commissioner Kenesaw Mountain Landis suspended the trio until May 20, 1922, and fined them their 1921 World Series checks. In August 1922, the rule was changed to allow limited barnstorming for World Series participants, with Landis's permission required. On March 4, 1922, Ruth signed a new contract for three years at $52,000 a year. This was more than two times the largest sum ever paid to a ballplayer up to that point and it represented 40% of the team's player payroll. Despite his suspension, Ruth was named the Yankees' new on-field captain prior to the 1922 season. During the suspension, he worked out with the team in the morning and played exhibition games with the Yankees on their off days. He and Meusel returned on May 20 to a sellout crowd at the Polo Grounds, but Ruth went 0-for-4 and was booed. On May 25, he was thrown out of the game for throwing dust in umpire George Hildebrand's face, then climbed into the stands to confront a heckler. Ban Johnson ordered him fined, suspended, and stripped of his position as team captain. In his shortened season, Ruth appeared in 110 games, batted .315, with 35 home runs, and drove in 99 runs, but the 1922 season was a disappointment in comparison to his two previous dominating years. Despite Ruth's off-year, the Yankees managed to win the pennant and faced the New York Giants in the World Series for the second consecutive year. In the Series, Giants manager John McGraw instructed his pitchers to throw Ruth nothing but curveballs, and Ruth never adjusted. Ruth had just two hits in 17 at bats, and the Yankees lost to the Giants for the second straight year, by 4–0 (with one tie game). Sportswriter Joe Vila called him "an exploded phenomenon". After the season, Ruth was a guest at an Elks Club banquet, set up by Ruth's agent with Yankee team support.
There, each speaker, concluding with future New York mayor Jimmy Walker, censured him for his poor behavior. An emotional Ruth promised reform, and, to the surprise of many, followed through. When he reported to spring training, he was in his best shape as a Yankee. The Yankees' status as tenants of the Giants at the Polo Grounds had become increasingly uneasy, and in 1922, Giants owner Charles Stoneham said the Yankees' lease, expiring after that season, would not be renewed. Ruppert and Huston had long contemplated a new stadium, and had taken an option on property at 161st Street and River Avenue in the Bronx. Yankee Stadium was completed in time for the home opener on April 18, 1923, at which Ruth hit the first home run in what was quickly dubbed "the House that Ruth Built". The ballpark was designed with Ruth in mind: although the venue's left-field fence was farther from home plate than at the Polo Grounds, Yankee Stadium's right-field fence was closer, making home runs easier to hit for left-handed batters. To spare Ruth's eyes, right field—his defensive position—was not pointed into the afternoon sun, as was traditional; left fielder Meusel was soon suffering headaches from squinting toward home plate. During the 1923 season, the Yankees were never seriously challenged and won the AL pennant by 17 games. Ruth finished the season with a career-high .393 batting average and 41 home runs, which tied Cy Williams for the most in the major leagues that year. Ruth hit a career-high 45 doubles in 1923, and he reached base 379 times, then a major league record. For the third straight year, the Yankees faced the Giants in the World Series, which Ruth dominated. He batted .368, walked eight times, scored eight runs, hit three home runs and slugged 1.000 during the series, as the Yankees christened their new stadium with their first World Series championship, four games to two. Batting title and "bellyache" (1924–1925) In 1924, the Yankees were favored to become the first team to win four consecutive pennants. Plagued by injuries, they found themselves in a battle with the Senators. Although the Yankees won 18 of 22 at one point in September, the Senators beat out the Yankees by two games. Ruth hit .378, winning his only AL batting title, with a league-leading 46 home runs. Ruth did not look like an athlete; he was described as "toothpicks attached to a piano", with a big upper body but thin wrists and legs. Ruth had kept up his efforts to stay in shape in 1923 and 1924, but by early 1925 he had put on a great deal of weight. His annual visit to Hot Springs, Arkansas, where he exercised and took saunas early in the year, did him no good as he spent much of the time carousing in the resort town. He became ill while there, and suffered relapses during spring training. Ruth collapsed in Asheville, North Carolina, as the team journeyed north. He was put on a train for New York, where he was briefly hospitalized. A rumor circulated that he had died, prompting British newspapers to print a premature obituary. In New York, Ruth collapsed again and was found unconscious in his hotel bathroom. He was taken to a hospital where he suffered multiple convulsions. After sportswriter W. O. McGeehan wrote that Ruth's illness was due to binging on hot dogs and soda pop before a game, it became known as "the bellyache heard 'round the world". However, the exact cause of his ailment has never been confirmed and remains a mystery.
Glenn Stout, in his history of the Yankees, writes that the Ruth legend is "still one of the most sheltered in sports"; he suggests that alcohol was at the root of Ruth's illness, pointing to the fact that Ruth remained six weeks at St. Vincent's Hospital but was allowed to leave, under supervision, for workouts with the team for part of that time. He concludes that the hospitalization was behavior-related. Playing just 98 games, Ruth had his worst season as a Yankee; he finished with a .290 average and 25 home runs. The Yankees finished next to last in the AL with a 69–85 record, their last season with a losing record until 1965. Murderers' Row (1926–1928) Ruth spent part of the offseason of 1925–26 working out at Artie McGovern's gym, where he got back into shape. Barrow and Huggins had rebuilt the team and surrounded the veteran core with good young players like Tony Lazzeri and Lou Gehrig, but the Yankees were not expected to win the pennant. Ruth returned to his normal production during 1926, when he batted .372 with 47 home runs and 146 RBIs. The Yankees built a 10-game lead by mid-June and coasted to win the pennant by three games. The St. Louis Cardinals had won the National League with the lowest winning percentage for a pennant winner to that point (.578) and the Yankees were expected to win the World Series easily. Although the Yankees won the opener in New York, St. Louis took Games Two and Three. In Game Four, Ruth hit three home runs—the first time this had been done in a World Series game—to lead the Yankees to victory. In the fifth game, Ruth caught a ball as he crashed into the fence. The play was described by baseball writers as a defensive gem. New York took that game, but Grover Cleveland Alexander won Game Six for St. Louis to tie the Series at three games each, then got very drunk. He was nevertheless inserted into Game Seven in the seventh inning and shut down the Yankees to win the game, 3–2, and win the Series. Ruth had hit his fourth home run of the Series earlier in the game and was the only Yankee to reach base off Alexander; he walked in the ninth inning before being thrown out to end the game when he attempted to steal second base. Although Ruth's attempt to steal second is often deemed a baserunning blunder, Creamer pointed out that the Yankees' chances of tying the game would have been
Despite his success as a pitcher, Ruth was acquiring a reputation for long home runs; at Sportsman's Park against the St. Louis Browns, a Ruth hit soared over Grand Avenue, breaking the window of a Chevrolet dealership. In 1916, there was attention focused on Ruth for his pitching, as he engaged in repeated pitching duels with the ace of the Washington Senators, Walter Johnson. The two met five times during the season, with Ruth winning four and Johnson one (Ruth had a no decision in Johnson's victory). Two of Ruth's victories were by the score of 1–0, one in a 13-inning game. Of the 1–0 shutout decided without extra innings, AL President Ban Johnson stated, "That was one of the best ball games I have ever seen." For the season, Ruth went 23–12, with a 1.75 ERA and nine shutouts, both of which led the league. Ruth's nine shutouts in 1916 set a league record for left-handers that would remain unmatched until Ron Guidry tied it in 1978. The Red Sox won the pennant and World Series again, this time defeating the Brooklyn Robins (as the Dodgers were then known) in five games. Ruth started and won Game 2, 2–1, in 14 innings. Until another game of that length was played in 2005, this was the longest World Series game, and Ruth's pitching performance is still the longest postseason complete game victory. Carrigan retired as player and manager after 1916, returning to his native Maine to be a businessman. Ruth, who played under four managers who are in the National Baseball Hall of Fame, always maintained that Carrigan, who is not enshrined there, was the best skipper he ever played for. There were other changes in the Red Sox organization that offseason, as Lannin sold the team to a three-man group headed by New York theatrical promoter Harry Frazee. Jack Barry was hired by Frazee as manager. Emergence as a hitter Ruth went 24–13 with a 2.01 ERA and six shutouts in 1917, but the Sox finished in second place in the league, nine games behind the Chicago White Sox in the standings. On June 23 at Washington, when home plate umpire 'Brick' Owens called the first four pitches as balls, Ruth threw a punch at him, and was ejected from the game and later suspended for ten days and fined $100. Ernie Shore was called in to relieve Ruth, and was allowed eight warm-up pitches. The runner who had reached base on the walk was caught stealing, and Shore retired all 26 batters he faced to win the game. Shore's feat was listed as a perfect game for many years. In 1991, Major League Baseball's (MLB) Committee on Statistical Accuracy amended it to be listed as a combined no-hitter. In 1917, Ruth was used little as a batter, other than for his plate appearances while pitching, and hit .325 with two home runs. The United States' entry into World War I occurred at the start of the season and overshadowed baseball. Conscription was introduced in September 1917, and most baseball players in the big leagues were of draft age. This included Barry, who was a player-manager, and who joined the Naval Reserve in an attempt to avoid the draft, only to be called up after the 1917 season. Frazee hired International League President Ed Barrow as Red Sox manager. Barrow had spent the previous 30 years in a variety of baseball jobs, though he never played the game professionally. With the major leagues shorthanded due to the war, Barrow had many holes in the Red Sox lineup to fill. Ruth also noticed these vacancies in the lineup. 
He was dissatisfied in the role of a pitcher who appeared every four or five days and wanted to play every day at another position. Barrow used Ruth at first base and in the outfield during the exhibition season, but he restricted him to pitching as the team moved toward Boston and the season opener. At the time, Ruth was possibly the best left-handed pitcher in baseball, and allowing him to play another position was an experiment that could have backfired. Inexperienced as a manager, Barrow had player Harry Hooper advise him on baseball game strategy. Hooper urged his manager to allow Ruth to play another position when he was not pitching, arguing to Barrow, who had invested in the club, that the crowds were larger on days when Ruth played, as they were attracted by his hitting. In early May, Barrow gave in; Ruth promptly hit home runs in four consecutive games (one an exhibition), the last off of Walter Johnson. For the first time in his career (disregarding pinch-hitting appearances), Ruth was assigned a place in the batting order higher than ninth. Although Barrow predicted that Ruth would beg to return to pitching the first time he experienced a batting slump, that did not occur. Barrow used Ruth primarily as an outfielder in the war-shortened 1918 season. Ruth hit .300, with 11 home runs, enough to secure him a share of the major league home run title with Tilly Walker of the Philadelphia Athletics. He was still occasionally used as a pitcher, and had a 13–7 record with a 2.22 ERA. In 1918, the Red Sox won their third pennant in four years and faced the Chicago Cubs in the World Series, which began on September 5, the earliest date in history. The season had been shortened because the government had ruled that baseball players who were eligible for the military would have to be inducted or work in critical war industries, such as armaments plants. Ruth pitched and won Game One for the Red Sox, a 1–0 shutout. Before Game Four, Ruth injured his left hand in a fight but pitched anyway. He gave up seven hits and six walks, but was helped by outstanding fielding behind him and by his own batting efforts, as a fourth-inning triple by Ruth gave his team a 2–0 lead. The Cubs tied the game in the eighth inning, but the Red Sox scored to take a 3–2 lead again in the bottom of that inning. After Ruth gave up a hit and a walk to start the ninth inning, he was relieved on the mound by Joe Bush. To keep Ruth and his bat in the game, he was sent to play left field. Bush retired the side to give Ruth his second win of the Series, and the third and last World Series pitching victory of his career, against no defeats, in three pitching appearances. Ruth's effort gave his team a three-games-to-one lead, and two days later the Red Sox won their third Series in four years, four-games-to-two. Before allowing the Cubs to score in Game Four, Ruth pitched consecutive scoreless innings, a record for the World Series that stood for more than 40 years until 1961, broken by Whitey Ford after Ruth's death. Ruth was prouder of that record than he was of any of his batting feats. With the World Series over, Ruth gained exemption from the war draft by accepting a nominal position with a Pennsylvania steel mill. Many industrial establishments took pride in their baseball teams and sought to hire major leaguers. The end of the war in November set Ruth free to play baseball without such contrivances. During the 1919 season, Ruth was used as a pitcher in only 17 of his 130 games and compiled a 9–5 record. 
Barrow used him as a pitcher mostly in the early part of the season, when the Red Sox manager still had hopes of a second consecutive pennant. By late June, the Red Sox were clearly out of the race, and Barrow had no objection to Ruth concentrating on his hitting, if only because it drew people to the ballpark. Ruth had hit a home run against the Yankees on Opening Day, and another during a month-long batting slump that soon followed. Relieved of his pitching duties, Ruth began an unprecedented spell of slugging home runs, which gave him widespread public and press attention. Even his failures were seen as majestic—one sportswriter said, "When Ruth misses a swipe at the ball, the stands quiver." Two home runs by Ruth on July 5, and one in each of two consecutive games a week later, raised his season total to 11, tying his career best from 1918. The first record to fall was the AL single-season mark of 16, set by Ralph "Socks" Seybold in 1902. Ruth matched that on July 29, then pulled ahead toward the major league record of 25, set by Buck Freeman in 1899. By the time Ruth reached this in early September, writers had discovered that Ned Williamson of the 1884 Chicago White Stockings had hit 27—though in a ballpark where the distance to right field was only . On September 20, "Babe Ruth Day" at Fenway Park, Ruth won the game with a home run in the bottom of the ninth inning, tying Williamson. He broke the record four days later against the Yankees at the Polo Grounds, and hit one more against the Senators to finish with 29. The home run at Washington made Ruth the first major league player to hit a home run at all eight ballparks in his league. In spite of Ruth's hitting heroics, the Red Sox finished sixth, games behind the league champion White Sox. In his six seasons with Boston, he won 89 games and recorded a 2.19 ERA. He had a four-year stretch where he was second in the AL in wins and ERA behind Walter Johnson, and Ruth had a winning record against Johnson in head-to-head matchups. Sale to New York As an out-of-towner from New York City, Frazee had been regarded with suspicion by Boston's sportswriters and baseball fans when he bought the team. He won them over with success on the field and a willingness to build the Red Sox by purchasing or trading for players. He offered the Senators $60,000 for Walter Johnson, but Washington owner Clark Griffith was unwilling. Even so, Frazee was successful in bringing other players to Boston, especially as replacements for players in the military. This willingness to spend for players helped the Red Sox secure the 1918 title. The 1919 season saw record-breaking attendance, and Ruth's home runs for Boston made him a national sensation. In March 1919 Ruth was reported as having accepted a three-year contract for a total of $27,000, after protracted negotiations. Nevertheless, on December 26, 1919, Frazee sold Ruth's contract to the New York Yankees. Not all the circumstances concerning the sale are known, but brewer and former congressman Jacob Ruppert, the New York team's principal owner, reportedly asked Yankee manager Miller Huggins what the team needed to be successful. "Get Ruth from Boston", Huggins supposedly replied, noting that Frazee was perennially in need of money to finance his theatrical productions. In any event, there was precedent for the Ruth transaction: when Boston pitcher Carl Mays left the Red Sox in a 1919 dispute, Frazee had settled the matter by selling Mays to the Yankees, though over the opposition of AL President Johnson. 
According to one of Ruth's biographers, Jim Reisler, "why Frazee needed cash in 1919—and large infusions of it quickly—is still, more than 80 years later, a bit of a mystery". The often-told story is that Frazee needed money to finance the musical No, No, Nanette, which was a Broadway hit and brought Frazee financial security. That play did not open until 1925, however, by which time Frazee had sold the Red Sox. Still, the story may be true in essence: No, No, Nanette was based on a Frazee-produced play, My Lady Friends, which opened in 1919. There were other financial pressures on Frazee, despite his team's success. Ruth, fully aware of baseball's popularity and his role in it, wanted to renegotiate his contract, signed before the 1919 season for $10,000 per year through 1921. He demanded that his salary be doubled, or he would sit out the season and cash in on his popularity through other ventures. Ruth's salary demands were causing other players to ask for more money. Additionally, Frazee still owed Lannin as much as $125,000 from the purchase of the club. Although Ruppert and his co-owner, Colonel Tillinghast Huston, were both wealthy, and had aggressively purchased and traded for players in 1918 and 1919 to build a winning team, Ruppert faced losses in his brewing interests as Prohibition was implemented, and if their team left the Polo Grounds, where the Yankees were the tenants of the New York Giants, building a stadium in New York would be expensive. Nevertheless, when Frazee, who moved in the same social circles as Huston, hinted to the colonel that Ruth was available for the right price, the Yankees owners quickly pursued the purchase. Frazee sold the rights to Babe Ruth for $100,000, the largest sum ever paid for a baseball player. The deal also involved a $350,000 loan from Ruppert to Frazee, secured by a mortgage on Fenway Park. Once it was agreed, Frazee informed Barrow, who, stunned, told the owner that he was getting the worse end of the bargain. Cynics have suggested that Barrow may have played a larger role in the Ruth sale, as less than a year after, he became the Yankee general manager, and in the following years made a number of purchases of Red Sox players from Frazee. The $100,000 price included $25,000 in cash, and notes for the same amount due November 1 in 1920, 1921, and 1922; Ruppert and Huston assisted Frazee in selling the notes to banks for immediate cash. The transaction was contingent on Ruth signing a new contract, which was quickly accomplished—Ruth agreed to fulfill the remaining two years on his contract, but was given a $20,000 bonus, payable over two seasons. The deal was announced on January 6, 1920. Reaction in Boston was mixed: some fans were embittered at the loss of Ruth; others conceded that Ruth had become difficult to deal with. The New York Times suggested that "The short right field wall at the Polo Grounds should prove an easy target for Ruth next season and, playing seventy-seven games at home, it would not be surprising if Ruth surpassed his home run record of twenty-nine circuit clouts next Summer." According to Reisler, "The Yankees had pulled off the sports steal of the century." According to Marty Appel in his history of the Yankees, the transaction, "changed the fortunes of two high-profile franchises for decades". 
The Red Sox, winners of five of the first 16 World Series, those played between 1903 and 1919, would not win another pennant until 1946, or another World Series until 2004, a drought attributed in baseball superstition to Frazee's sale of Ruth and sometimes dubbed the "Curse of the Bambino". Conversely, the Yankees had not won the AL championship prior to their acquisition of Ruth. They won seven AL pennants and four World Series with him, and have led baseball with 40 pennants and 27 World Series titles in their history. New York Yankees (1920–1934) Initial success (1920–1923) When Ruth signed with the Yankees, he completed his transition from a pitcher to a power-hitting outfielder. His fifteen-season Yankee career consisted of over 2,000 games, and Ruth broke many batting records while making only five widely scattered appearances on the mound, winning all of them. At the end of April 1920, the Yankees were 4–7, with the Red Sox leading the league with a 10–2 mark. Ruth had done little, having injured himself swinging the bat. Both situations began to change on May 1, when Ruth hit a tape measure home run that sent the ball completely out of the Polo Grounds, a feat believed to have been previously accomplished only by Shoeless Joe Jackson. The Yankees won, 6–0, taking three out of four from the Red Sox. Ruth hit his second home run on May 2, and by the end of the month had set a major league record for home runs in a month with 11, and promptly broke it with 13 in June. Fans responded with record attendance figures. On May 16, Ruth and the Yankees drew 38,600 to the Polo Grounds, a record for the ballpark, and 15,000 fans were turned away. Large crowds jammed stadiums to see Ruth play when the Yankees were on the road. The home runs kept on coming. Ruth tied his own record of 29 on July 15 and broke it with home runs in both games of a doubleheader four days later. By the end of July, he had 37, but his pace slackened somewhat after that. Nevertheless, on September 4, he both tied and broke the organized baseball record for home runs in a season, surpassing Perry Werden's 1895 mark of 44 in the minor Western League. The Yankees played well as a team, battling for the league lead early in the summer, but slumped in August in the AL pennant battle with Chicago and Cleveland. The pennant and the World Series were won by Cleveland, who surged ahead after the Black Sox Scandal broke on September 28 and led to the suspension of many of Chicago's top players, including Shoeless Joe Jackson. The Yankees finished third, but drew 1.2 million fans to the Polo Grounds, the first time a team had drawn a seven-figure attendance. The rest of the league sold 600,000 more tickets, many fans there to see Ruth, who led the league with 54 home runs, 158 runs, and 137 runs batted in (RBIs). In 1920 and afterwards, Ruth was aided in his power hitting by the fact that A.J. Reach Company—the maker of baseballs used in the major leagues—was using a more efficient machine to wind the yarn found within the baseball. The new baseballs went into play in 1920 and ushered in the live-ball era; the number of home runs across the major leagues increased by 184 over the previous year. 
Baseball statistician Bill James pointed out that while Ruth was likely aided by the change in the baseball, there were other factors at work, including the gradual abolition of the spitball (accelerated after the death of Ray Chapman, struck by a pitched ball thrown by Mays in August 1920) and the more frequent use of new baseballs (also a response to Chapman's death). Nevertheless, James theorized that Ruth's 1920 explosion might have happened in 1919, had a full season of 154 games been played rather than 140, had Ruth refrained from pitching 133 innings that season, and if he were playing at any other home field but Fenway Park, where he hit only 9 of 29 home runs. Yankees business manager Harry Sparrow had died early in the 1920 season. Ruppert and Huston hired Barrow to replace him. The two men quickly made a deal with Frazee for New York to acquire some of the players who would be mainstays of the early Yankee pennant-winning teams, including catcher Wally Schang and pitcher Waite Hoyt. The 21-year-old Hoyt became close to Ruth. In the offseason, Ruth spent some time in Havana, Cuba, where he was said to have lost $35,000 betting on horse races. Ruth hit home runs early and often in the 1921 season, during which he broke Roger Connor's mark for home runs in a career, 138. Each of the almost 600 home runs Ruth hit in his career after that extended his own record. After a slow start, the Yankees were soon locked in a tight pennant race with Cleveland, winners of the 1920 World Series. On September 15, Ruth hit his 55th home run, shattering his year-old single-season record. In late September, the Yankees visited Cleveland and won three out of four games, giving them the upper hand in the race, and clinched their first pennant a few days later. Ruth finished the regular season with 59 home runs, batting .378 and with a slugging percentage of .846. Ruth's 177 runs scored, 119 extra-base hits, and 457 total bases set modern-era records that still stand. The Yankees had high expectations when they met the New York Giants in the 1921 World Series, every game of which was played in the Polo Grounds. The Yankees won the first two games with Ruth in the lineup. However, Ruth badly scraped his elbow during Game 2 when he slid into third base (he had walked and stolen both second and third bases). After the game, he was told by the team physician not to play the rest of the series. Despite this advice, he did play in the next three games, and pinch-hit in Game Eight of the best-of-nine series, but the Yankees lost, five games to three. Ruth hit .316, drove in five runs and hit his first World Series home run. After the Series, Ruth and teammates Bob Meusel and Bill Piercy participated in a barnstorming tour in the Northeast. A rule then in force prohibited World Series participants from playing in exhibition games during the offseason, the purpose being to prevent Series participants from replicating the Series and undermining its value. Baseball Commissioner Kenesaw Mountain Landis suspended the trio until May 20, 1922, and fined them their 1921 World Series checks. In August 1922, the rule was changed to allow limited barnstorming for World Series participants, with Landis's permission required. On March 4, 1922, Ruth signed a new contract for three years at $52,000 a year. This was more than two times the largest sum ever paid to a ballplayer up to that point and it represented 40% of the team's player payroll. 
Despite his suspension, Ruth was named the Yankees' new on-field captain prior to the 1922 season. During the suspension, he worked out with the team in the morning and played exhibition games with the Yankees on their off days. He and Meusel returned on May 20 to a sellout crowd at the Polo Grounds, but Ruth went 0-for-4 and was booed. On May 25, he was thrown out of the game for throwing dust in umpire George Hildebrand's face, then climbed into the stands to confront a heckler. Ban Johnson ordered him fined, suspended, and stripped of his position as team captain. In his shortened season, Ruth appeared in 110 games, batted .315, with 35 home runs, and drove in 99 runs, but the 1922 season was a disappointment in comparison to his two previous dominating years. Despite Ruth's off-year, the Yankees managed to win the pennant and faced the New York Giants in the World Series for the second consecutive year. In the Series, Giants manager John McGraw instructed his pitchers to throw him nothing but curveballs, and Ruth never adjusted. Ruth had just two hits in 17 at bats, and the Yankees lost to the Giants for the second straight year, by 4–0 (with one tie game). Sportswriter Joe Vila called him "an exploded phenomenon". After the season, Ruth was a guest at an Elks Club banquet, set up by Ruth's agent with Yankee team support. There, each speaker, concluding with future New York mayor Jimmy Walker, censured him for his poor behavior. An emotional Ruth promised reform, and, to the surprise of many, followed through. When he reported to spring training, he was in his best shape as a Yankee, weighing only . The Yankees' status as tenants of the Giants at the Polo Grounds had become increasingly uneasy, and in 1922, Giants owner Charles Stoneham said the Yankees' lease, expiring after that season, would not be renewed. Ruppert and Huston had long contemplated a new stadium, and had taken an option on property at 161st Street and River Avenue in the Bronx. Yankee Stadium was completed in time for the home opener on April 18, 1923, at which Ruth hit the first home run in what was quickly dubbed "the House that Ruth Built". The ballpark was designed with Ruth in mind: although the venue's left-field fence was further from home plate than at the Polo Grounds, Yankee Stadium's right-field fence was closer, making home runs easier to hit for left-handed batters. To spare Ruth's eyes, right field—his defensive position—was not pointed into the afternoon sun, as was traditional; left fielder Meusel was soon suffering headaches from squinting toward home plate. During the 1923 season, the Yankees were never seriously challenged and won the AL pennant by 17 games. Ruth finished the season with a career-high .393 batting average and 41 home runs, which tied Cy Williams for the most in the major leagues that year. Ruth hit a career-high 45 doubles in 1923, and he reached base 379 times, then a major league record. For the third straight year, the Yankees faced the Giants in the World Series, which Ruth dominated. He batted .368, walked eight times, scored eight runs, hit three home runs and slugged 1.000 during the series, as the Yankees christened their new stadium with their first World Series championship, four games to two. Batting title and "bellyache" (1924–1925) In 1924, the Yankees were favored to become the first team to win four consecutive pennants. Plagued by injuries, they found themselves in a battle with the Senators. 
Although the Yankees won 18 of 22 at one point in September, the Senators beat out the Yankees by two games. Ruth hit .378, winning his only AL batting title, with a league-leading 46 home runs. Ruth did not look like an athlete; he was described as "toothpicks attached to a piano", with a big upper body but thin wrists and legs. Ruth had kept up his efforts to stay in shape in 1923 and 1924, but by early 1925 weighed nearly . His annual visit to Hot Springs, Arkansas, where he exercised and took saunas early in the year, did him no good as he spent much of the time carousing in the resort town. He became ill while there, and suffered relapses during spring training. Ruth collapsed in Asheville, North Carolina, as the team journeyed north. He was put on a train for New York, where he was briefly hospitalized. A rumor circulated that he had died, prompting British newspapers to print a premature obituary. In New York, Ruth collapsed again and was found unconscious in his hotel bathroom. He was taken to a hospital where he suffered multiple convulsions. After sportswriter W. O. McGeehan wrote that Ruth's illness was due to binging on hot dogs and soda pop before a game, it became known as "the bellyache heard 'round the world". However, the exact cause of his ailment has never been confirmed and remains a mystery. Glenn Stout, in his history of the Yankees, writes that the Ruth legend is "still one of the most sheltered in sports"; he suggests that alcohol was at the root of Ruth's illness, pointing to the fact that Ruth remained six weeks at St. Vincent's Hospital but was allowed to leave, under supervision, for workouts with the team for part of that time. He concludes that the hospitalization was behavior-related. Playing just 98 games, Ruth had his worst season as a Yankee; he finished with a .290 average and 25 home runs. The Yankees finished next to last in the AL with a 69–85 record, their last season with a losing record until 1965. Murderers' Row (1926–1928) Ruth spent part of the offseason of 1925–26 working out at Artie McGovern's gym, where he got back into shape. Barrow and Huggins had rebuilt the team and surrounded the veteran core with good young players like Tony Lazzeri and Lou Gehrig, but the Yankees were not expected to win the pennant. Ruth returned to his normal production during 1926, when he batted .372 with 47 home runs and 146 RBIs. The Yankees built a 10-game lead by mid-June and coasted to win the pennant by three games. The St. Louis Cardinals had won the National League with the lowest winning percentage for a pennant winner to that point (.578) and the Yankees were expected to win the World Series easily. Although the Yankees won the opener in New York, St. Louis took Games Two and Three. In Game Four, Ruth hit three home runs—the first time this had been done in a World Series game—to lead the Yankees to victory. In the fifth game, Ruth caught a ball as he crashed into the fence. The play was described by baseball writers as a defensive gem. New York took that game, but Grover Cleveland Alexander won Game Six for St. Louis to tie the Series at three games each, then got very drunk. He was nevertheless inserted into Game Seven in the seventh inning and shut down the Yankees to win the game, 3–2, and win the Series. Ruth had hit his fourth home run of the Series earlier in the game and was the only Yankee to reach base off Alexander; he walked in the ninth inning before being thrown out to end the game when he attempted to steal second base. 
Although Ruth's attempt to steal second is often deemed a baserunning blunder, Creamer pointed out that the Yankees' chances of tying the game would have been greatly improved with a runner in scoring position. The 1926 World Series was also known for Ruth's promise to Johnny Sylvester, a hospitalized 11-year-old boy. Sylvester had been injured in a fall from a horse, and a friend of Sylvester's father gave the boy two autographed baseballs signed by Yankees and Cardinals. The friend relayed a promise from Ruth (who did not know the boy) that he would hit a home run for him. After the Series, Ruth visited the boy in the hospital. When the matter became public, the press greatly inflated it, and by some accounts, Ruth allegedly saved the boy's life by visiting him, emotionally promising to hit a home run, and doing so. Ruth's 1926 salary of $52,000 was far more than that of any other baseball player, but he made at least twice as much in other income, including $100,000 from 12 weeks of vaudeville. The 1927 New York Yankees team is considered one of the greatest squads to ever take the field. Known as Murderers' Row because of the power of its lineup, the team clinched first place on Labor Day, won a then-AL-record 110 games and took the AL pennant by 19 games. There was no suspense in the pennant race, and the nation turned its attention to Ruth's pursuit of his own single-season home run record of 59 round trippers. Ruth was not alone in this chase. Teammate Lou Gehrig proved to be a slugger who was capable of challenging Ruth for his home run crown; he tied Ruth with 24 home runs late in June. Through July and August, the dynamic duo was never separated by more than two home runs. Gehrig took the lead, 45–44, in the first game of a doubleheader at Fenway Park early in September; Ruth responded with two blasts of his own to take the lead, as it proved, permanently—Gehrig finished with 47. Even so, as of September 6, Ruth was still several games off his 1921 pace, and going into the final series against the Senators, had only 57. He hit two in the first game of the series, including one off Paul Hopkins, facing his first major league batter, to tie the record. The following day, September 30, he broke it with his 60th homer, hit in the eighth inning off Tom Zachary to snap a 2–2 tie. "Sixty! Let's see some son of a bitch try to top that one", Ruth exulted after the game. In addition to his career-high 60 home runs, Ruth batted .356, drove in 164 runs and slugged .772. In the 1927 World Series, the Yankees swept the Pittsburgh Pirates in four games; the National Leaguers were disheartened after watching the Yankees take batting practice before Game One, with ball after ball leaving Forbes Field. According to Appel, "The 1927 New York Yankees. Even today, the words inspire awe... all baseball success is measured against the '27 team." The following season started off well for the Yankees, who led the league in the early going. But the Yankees were plagued by injuries, erratic pitching and inconsistent play. The Philadelphia Athletics, rebuilding after some lean years, erased the Yankees' big lead and even took over first place briefly in early September. The Yankees, however, regained first place when they beat the Athletics three out of four games in a pivotal series at Yankee Stadium later that month, and clinched the pennant in the final weekend of the season. Ruth's play in 1928 mirrored his team's performance. 
He got off to a hot start and on August 1, he had 42 home runs. This put him ahead of his 60 home run pace from the previous season. He then slumped for the latter part of the season, hitting just 12 home runs in the last two months. Ruth's batting average also fell to .323, well below his career average. Nevertheless, he ended the season with 54 home runs. The Yankees swept the favored Cardinals in four games in the World Series, with Ruth batting .625 and hitting three home runs in Game Four, including one off Alexander. "Called shot" and final Yankee years (1929–1934) Before the 1929 season, Ruppert (who had bought out Huston in 1923) announced that the Yankees would wear uniform numbers to allow fans at cavernous Yankee Stadium to easily identify the players. The Cardinals and Indians had each experimented with uniform numbers; the Yankees were the first to use them on both home and away uniforms. Ruth batted third and was given number 3. According to a long-standing baseball legend, the Yankees adopted their now-iconic pinstriped uniforms in hopes of making Ruth look slimmer. In truth, though, they had been wearing pinstripes since 1915. Although the Yankees started well, the Athletics soon proved they were the better team in 1929, splitting two series with the Yankees in the first month of the season, then taking advantage of a Yankee losing streak in mid-May to gain first place. Although Ruth performed well, the Yankees were not able to catch the Athletics—Connie Mack had built another great team. Tragedy struck the Yankees late in the year as manager Huggins died at 51 of erysipelas, a bacterial skin infection, on September 25, only ten days after he had last directed the team. Despite their past differences, Ruth praised Huggins and described him as a "great guy". The Yankees finished second, 18 games behind the Athletics. Ruth hit .345 during the season, with 46 home runs and 154 RBIs. On October 17, the Yankees hired Bob Shawkey as manager; he was their fourth choice. Ruth had politicked for the job of player-manager, but Ruppert and Barrow never seriously considered him for the position. Stout deemed this the first hint Ruth would have no future with the Yankees once he retired as a player. Shawkey, a former Yankees player and teammate of Ruth, would prove unable to command Ruth's respect. On January 7, 1930, salary negotiations between the Yankees and Ruth quickly broke down. Having just concluded a three-year contract at an annual salary of $70,000, Ruth promptly rejected both the Yankees' initial proposal of $70,000 for one year and their 'final' offer of two years at $75,000—the latter figure equalling the annual salary of then US President Herbert Hoover; instead, Ruth demanded at least $85,000 and three years. When asked why he thought he was "worth more than the President of the United States," Ruth responded: "Say, if I hadn't been sick last summer, I'd have broken hell out of that home run record! Besides, the President gets a four-year contract. I'm only asking for three." Exactly two months later, a compromise was reached, with Ruth settling for two years at an unprecedented $80,000 per year. Ruth's salary was more than 2.4 times the next-highest salary that season, a record margin. In 1930, Ruth hit .359 with 49 home runs (his best in his years after 1928) and 153 RBIs, and pitched his first game in nine years, a complete game victory. 
Nevertheless, the Athletics won their second consecutive pennant and World Series, as the Yankees finished in third place, sixteen games back. At the end of the season, Shawkey was fired and replaced with Cubs manager Joe McCarthy, though Ruth again unsuccessfully sought the job. McCarthy was a disciplinarian, but chose not to interfere with Ruth, who did not seek conflict with the manager. The team improved in 1931, but was no match for the Athletics, who won 107 games, games in front of the Yankees. Ruth, for his part, hit .373, with 46 home runs and 163 RBIs. He had 31 doubles, his most since 1924. In the 1932 season, the Yankees went 107–47 and won the pennant. Ruth's effectiveness had decreased somewhat, but he still hit .341 with 41 home runs and 137 RBIs. Nevertheless, he was sidelined twice due to injuries during the season. The Yankees faced the Cubs, McCarthy's former team, in the 1932 World Series. There was bad blood between the two teams as the Yankees resented the Cubs only awarding half a World Series share to Mark Koenig, a former Yankee. The games at Yankee Stadium had not been sellouts; both were won by the home team, with Ruth collecting two singles, but scoring four runs as he was walked four times by the Cubs pitchers. In Chicago, Ruth was resentful at the hostile crowds that met the Yankees' train and jeered them at the hotel. The crowd for Game Three included New York Governor Franklin D. Roosevelt, the Democratic candidate for president, who sat with Chicago Mayor Anton Cermak. Many in the crowd threw lemons at Ruth, a sign of derision, and others (as well as the Cubs themselves) shouted abuse at Ruth and other Yankees. They were briefly silenced when Ruth hit a three-run home run off Charlie Root in the first inning, but soon revived, and the Cubs tied the score at 4–4 in the fourth inning, partly due to Ruth's fielding error in the outfield. When Ruth came to the plate in the top of the fifth, the Chicago crowd and players, led by pitcher Guy Bush, were screaming insults at Ruth. With the count at two balls and one strike, Ruth gestured, possibly in the direction of center field, and after the next pitch (a strike), may have pointed there with one hand. Ruth hit the fifth pitch over the center field fence; estimates were that it traveled nearly . Whether or not Ruth intended to indicate where he planned to (and did) hit the ball (Charlie Devens, who, in 1999, was interviewed as Ruth's surviving teammate in that game, did not think so), the incident has gone down in legend as Babe Ruth's called shot. The Yankees won Game Three, and the following day clinched the Series with another victory. During that game, Bush hit Ruth on the arm with a pitch, causing words to be exchanged and provoking a game-winning Yankee rally. Ruth remained productive in 1933. He batted .301, with 34 home runs, 103 RBIs, and a league-leading 114 walks, as the Yankees finished in second place, seven games behind the Senators. Athletics manager Connie Mack selected him to play right field in the first Major League Baseball All-Star Game, held on July 6, 1933, at Comiskey Park in Chicago. He hit the first home run in the All-Star Game's history, a two-run blast against Bill Hallahan during the third inning, which helped the AL win the game 4–2. During the final game of the 1933 season, as a publicity stunt organized by his team, Ruth was called upon and pitched a complete game victory against the Red Sox, his final appearance as a pitcher. 
Despite unremarkable pitching numbers, Ruth had a 5–0 record in five games for the Yankees, raising his career totals to 94–46. In 1934, Ruth played in his last full season with the Yankees. By this time, years of high living were starting to catch up with him. His conditioning had deteriorated to the point that he could no longer field or run. He accepted a pay cut to $35,000 from Ruppert, but he was still the highest-paid player in the major leagues. He could still handle a bat, recording a .288 batting average with 22 home runs. However, Reisler described these statistics as "merely mortal" by Ruth's previous standards. Ruth was selected to the AL All-Star team for the second consecutive year, even though he was in the twilight of his career. During the game, New York Giants pitcher Carl Hubbell struck out Ruth and four other future Hall-of-Famers consecutively. The Yankees finished second again, seven games behind the Tigers. Boston Braves (1935) By this time, Ruth knew he was nearly finished as a player. He desired to remain in baseball as a manager. He was often spoken of as a possible candidate as managerial jobs opened up, but in 1932, when he was mentioned as a contender for the Red Sox position, Ruth stated that he was not yet ready to leave the field. There were rumors that Ruth was a likely candidate each time the Cleveland Indians, Cincinnati Reds, and Detroit Tigers were looking for a manager, but nothing came of them. Just before the 1934 season, Ruppert offered to make Ruth the manager of the Yankees' top minor-league team, the Newark Bears, but he was talked out of it by his wife, Claire, and his business manager, Christy Walsh. Tigers owner Frank Navin seriously considered acquiring Ruth and making him player-manager. However, Ruth insisted on delaying the meeting until he came back from a trip to Hawaii. Navin was unwilling to wait. Ruth opted to go on his trip, despite Barrow advising him that he was making a mistake; in any event, Ruth's asking price was too high for the notoriously tight-fisted Navin. The Tigers' job ultimately went to Mickey Cochrane. Early in the 1934 season, Ruth openly campaigned to become the Yankees manager. However, the Yankee job was never a serious possibility. Ruppert always supported McCarthy, who would remain in his position for another 12 seasons. The relationship between Ruth and McCarthy had been lukewarm at best, and Ruth's managerial ambitions further chilled their interpersonal relations. By the end of the season, Ruth hinted that he would retire unless Ruppert named him manager of the Yankees. When the time came, Ruppert wanted Ruth to leave the team without drama or hard feelings. During the 1934–35 offseason, Ruth circled the world with his wife; the trip included a barnstorming tour of the Far East. At his final stop in the United Kingdom before returning home, Ruth was introduced to cricket by Australian player Alan Fairfax, and after having little luck in a cricketer's stance, he stood as a baseball batter and launched some massive shots around the field, destroying the bat in the process. Although Fairfax regretted that he could not have the time to make Ruth a cricket player, Ruth had lost any interest in such a career upon learning that the best batsmen made only about $40 per week. Also during the offseason, Ruppert had been sounding out the other clubs in hopes of finding one that would be willing to take Ruth as a manager and/or a player. 
The only serious offer came from Athletics owner-manager Connie Mack, who gave some thought to stepping down as manager in favor of Ruth. However, Mack later dropped the idea, saying that Ruth's wife would be running the team in a month if Ruth ever took over. While
a pole against the streambed, canal or lake bottom to move the vessel where desired. For many decades after the American Revolution, as the American west was settled, it was generally faster to navigate downriver from Brownsville, Pennsylvania, to the Ohio River's confluence with the Mississippi and then pole upriver against the current to St. Louis than to travel overland on the rare primitive dirt roads. Once the New York Central and Pennsylvania Railroads reached Chicago, that time dynamic changed, and American poleboats became less common, relegated to smaller rivers and more remote streams. On the Mississippi river system and other sheltered waterways today, industrial barge traffic in bulk raw materials such as coal, coke, timber, iron ore and other minerals is extremely common; in the developed world, huge cargo barges are connected into groups and trains-of-barges, allowing cargo volumes and weights considerably greater than those moved by the pioneers of modern barge systems and methods in the Victorian era. Such barges need to be towed by tugboats or pushed by towboats. Canal barges, towed by draft animals on an adjacent towpath, were of fundamental importance in the early Industrial Revolution, whose major early engineering projects were efforts to build viaducts, aqueducts and especially canals to fuel and feed raw materials to nascent factories in the early industrial takeoff (18th century) and take their goods to ports and cities for distribution. The barge and canal system contended favourably with the railways in the early Industrial Revolution before around the 1850s–1860s; for example, the Erie Canal in New York state is credited by economic historians with giving the growth boost needed for New York City to eclipse Philadelphia as America's largest port and city – but such canal systems with their locks, need for maintenance and dredging, pumps and sanitary issues were eventually outcompeted in the carriage of high-value items by the railways due to the higher speed, falling costs and route flexibility of rail transport. Barge and canal systems were nonetheless of great, perhaps even primary, economic importance until after the First World War in Europe, particularly in the more developed nations of the Low Countries, France, Germany and especially Great Britain which more or less made the system characteristically its own.
were to be able to navigate the system. It was soon realised that narrow locks were too limiting, and later locks were doubled in width to . Accordingly, on the British canal system the term 'barge' is used to describe a "Thames [sailing barge], Dutch [barge], or other styles of barge" (the people who move barges are often known as lightermen), and does not include Narrowboats and Widebeams (see also canal craft). In the United States, deckhands perform the labor and are supervised by a leadman or the mate. The captain and pilot steer the towboat, which pushes one or more barges held together with rigging, collectively called 'the tow'. The crew live aboard the towboat as it travels along the inland river system or the intracoastal waterways. These towboats travel between ports and are also called line-haul boats. Poles are used on barges to fend off other nearby vessels or a wharf. These are often called 'pike poles'. Etymology "Barge" is attested from 1300, from Old French barge, from Vulgar Latin barga. The word originally could refer to any small boat; the modern meaning arose around 1480. Bark "small ship" is attested from 1420, from Old French barque, from Vulgar Latin barca (400 AD). The more precise meaning "three-masted ship" arose in the 17th century, and often takes the French spelling for disambiguation. Both are probably derived from the Latin barica, from Greek baris "Egyptian boat", from Coptic bari "small boat", hieroglyphic Egyptian D58-G29-M17-M17-D21-P1 and similar ba-y-r for "basket-shaped boat". By extension, the term "embark" literally means to board the kind of boat called a "barque". The long pole used to maneuver or propel a barge has given rise to the saying "I wouldn't touch that [subject/thing] with a barge pole." Types
Admiral's barge
Articulated tug and barge
Barracks barge ("accommodation barge")
Bin barge
Canal motorship
Car float
Ferrocement or "Concrete" Barge
Crane barge
Dredges
Deck barge
Dutch barge
Dry bulk cargo barge
Gundalow
Hopper barge
Hotel barge
Horse-drawn boat
Jackup barge
Landing craft
Lighter
Liquid cargo barge
Log barge
Notch barge
Narrowboat
Norfolk wherry
Rocket landing barge
Oil barge
Paddle barge
Péniche or Spitz barge
Pleasure barge
Power barge
Row barge
Royal barge
Sand barge
Severn trow
Tank barge
Thames sailing barge
Tub boat
Vehicular barge
Whaleback barge
Widebeam
Modern use Barges are used today for low-value bulk items, as the cost of hauling goods by barge is very low. Barges are also
of the GNU Common Lisp (GCL) implementation of Common Lisp and the GPL'd version of the computer algebra system Macsyma called Maxima. Schelter authored Austin Kyoto Common Lisp (AKCL) under contract with IBM. AKCL formed the foundation for Axiom, another computer algebra system. AKCL eventually became GNU Common Lisp. He is also credited with the first port of the GNU C compiler to the Intel 386 architecture, used in the original implementation of the
Linux kernel. Schelter obtained his Ph.D. at McGill University in 1972. His mathematical specialties were noncommutative ring theory and computational algebra and its applications, including automated theorem proving in geometry. In the summer of 2001, aged 54, he died suddenly of a heart attack while traveling in Russia.
the more it is from Anglo-Saxon origins. The more intellectual and abstract English is, the more it contains Latin and French influences e.g. swine (like the Germanic schwein) is the animal in the field bred by the occupied Anglo-Saxons and pork (like the French porc) is the animal at the table eaten by the occupying Normans. Another example is the Anglo-Saxon 'cu' meaning cow, and the French 'bœuf' meaning beef. Cohabitation with the Scandinavians resulted in a significant grammatical simplification and lexical enrichment of the Anglo-Frisian core of English; the later Norman occupation led to the grafting onto that Germanic core of a more elaborate layer of words from the Romance branch of the European languages. This Norman influence entered English largely through the courts and government. Thus, English developed into a "borrowing" language of great flexibility and with a huge vocabulary. Dialects Dialects and accents vary amongst the four countries of the United Kingdom, as well as within the countries themselves. The major divisions are normally classified as English English (or English as spoken in England, which encompasses Southern English dialects, West Country dialects, East and West Midlands English dialects and Northern English dialects), Ulster English (in Northern Ireland), Welsh English (not to be confused with the Welsh language), and Scottish English (not to be confused with the Scots language or Scottish Gaelic language). The various British dialects also differ in the words that they have borrowed from other languages. Around the middle of the 15th century, the five major dialects between them yielded almost 500 ways to spell the word though. Following its last major survey of English Dialects (1949–1950), the University of Leeds has started work on a new project. In May 2007 the Arts and Humanities Research Council awarded a grant to Leeds to study British regional dialects. The team are sifting through a large collection of examples of regional slang words and phrases turned up by the "Voices project" run by the BBC, in which they invited the public to send in examples of English still spoken throughout the country. The BBC Voices project also collected hundreds of news articles about how the British speak English from swearing through to items on language schools. This information will also be collated and analysed by Johnson's team both for content and for where it was reported. "Perhaps the most remarkable finding in the Voices study is that the English language is as diverse as ever, despite our increased mobility and constant exposure to other accents and dialects through TV and radio". Leeds University commented on the award of the grant in 2007. Regional Most people in Britain speak with a regional accent or dialect. However, about 2% of Britons speak with an accent called Received Pronunciation (also called "the Queen's English", "Oxford English" and "BBC English"), that is essentially region-less. It derives from a mixture of the Midlands and Southern dialects spoken in London in the early modern period. It is frequently used as a model for teaching English to foreign learners. In the South East there are significantly different accents; the Cockney accent spoken by some East Londoners is strikingly different from Received Pronunciation (RP). The Cockney rhyming slang can be (and was initially intended to be) difficult for outsiders to understand, although the extent of its use is often somewhat exaggerated. 
Estuary English has been gaining prominence in recent decades: it has some features of RP and some of Cockney. In London itself, the broad local accent is still changing, partly influenced by Caribbean speech. Immigrants to the UK in recent decades have brought many more languages to the country. Surveys started in 1979 by the Inner London Education Authority discovered over 100 languages being spoken domestically by the families of the inner city's schoolchildren. As a result, Londoners speak with a mixture of accents, depending on ethnicity, neighbourhood, class, age, upbringing, and sundry other factors. Since the mass internal migration to Northamptonshire in the 1940s and given its position between several major accent regions, it has become a
a vowel. This is called the intrusive R. It could be understood as a merger, in that words that once ended in an R and words that did not are no longer treated differently. This is also due to London-centric influences. Examples of R-dropping are car and sugar, where the R is not pronounced. Diphthongisation British dialects differ on the extent of diphthongisation of long vowels, with southern varieties extensively turning them into diphthongs, and with northern dialects normally preserving many of them. As a comparison, North American varieties could be said to be in-between. North Long vowels /iː/ and /uː/ are usually preserved, and in several areas also /oː/ and /eː/, as in go and say (unlike other varieties of English, that change them to [oʊ] and [eɪ] respectively). Some areas go as far as not diphthongising medieval /iː/ and /uː/, that give rise to modern /aɪ/ and /aʊ/; that is, for example, in the traditional accent of Newcastle upon Tyne, 'out' will sound as 'oot', and in parts of Scotland and North-West England, 'my' will be pronounced as 'me'. South Long vowels /iː/ and /uː/ are diphthongised to [ɪi] and [ʊu] respectively (or, more technically, [ʏʉ], with a raised tongue), so that ee and oo in feed and food are pronounced with a movement. The diphthong [oʊ] is also pronounced with a greater movement, normally [əʊ], [əʉ] or [əɨ]. People in groups The tendency to drop morphological grammatical number with collective nouns is stronger in British English than in North American English: nouns that are grammatically singular are treated as plural when a perceived natural number prevails, especially with institutional nouns and groups of people. The noun 'police', for example, undergoes this treatment, and a football team can be treated likewise. This tendency can be observed in texts produced already in the 19th century. For example, Jane Austen, a British author, writes in Chapter 4 of Pride and Prejudice, published in 1813: "All the world are good and agreeable in your eyes." However, in Chapter 16, the grammatical singular is used: "The world is blinded by his fortune and consequence." Negatives Some dialects of British English use negative concords, also known as double negatives. Rather than changing a word or using a positive, words like nobody, not, nothing, and never would be used in the same sentence. While this does not occur in Standard English, it does occur in non-standard dialects. The double negation follows the idea of two different morphemes, one that causes the double negation and one that is used for the point or the verb. Standardisation As with English around the world, the English language as used in the United Kingdom is governed by convention rather than formal code: there is no body equivalent to the Académie française or the Real Academia Española. Dictionaries (for example, the Oxford English Dictionary, the Longman Dictionary of Contemporary English, the Chambers Dictionary, and the Collins Dictionary) record usage rather than attempting to prescribe it. In addition, vocabulary and usage change with time: words are freely borrowed from other languages and other strains of English, and neologisms are frequent. For historical reasons dating back to the rise of London in the 9th century, the form of language spoken in London and the East Midlands became standard English within the Court, and ultimately became the basis for generally accepted use in the law, government, literature and education in Britain. 
The standardisation of British English is thought to stem from both dialect levelling and notions of social superiority. Speaking in the Standard dialect created class distinctions; those who did not speak standard English would be considered of a lesser class or social status and were often discounted or considered to be of low intelligence. Another contribution to the standardisation of British English was the introduction of the printing press to England in the mid-15th century. In doing so, William Caxton enabled a common language and spelling to be dispersed among the entirety of England at a much faster rate. Samuel Johnson's A Dictionary of the English Language (1755) was a large step in English-language spelling reform, where the purification of language focused on standardising both speech and spelling. By the early 20th century, British authors had produced numerous books intended as guides to English grammar and usage, a few of which achieved sufficient acclaim to have remained in print for long periods and to have been reissued in new editions after some decades. These include, most notably, Fowler's Modern English Usage and The Complete Plain Words by Sir Ernest Gowers. Detailed guidance on many aspects of writing British English for publication is included in style guides issued by various publishers including The Times newspaper, the Oxford University Press and the Cambridge University Press. The Oxford University Press guidelines were originally drafted as a single broadsheet page by Horace Henry Hart, and were at the time (1893) the first guide of their type in English; they were gradually expanded and eventually published, first as Hart's Rules, and in 2002 as part of The Oxford Manual of Style. Comparable in authority and stature to The Chicago Manual of Style for published American English, the Oxford Manual is a fairly exhaustive standard for published British English that writers can turn to in the absence of specific guidance from their publishing house. See also
American English
Australian English
British Sign Language
Canadian English
Commonwealth English
Hiberno-English
Newfoundland English
New
campaign, used to achieve military objectives. Where the duration of the battle is longer than a week, it is often for reasons of planning called an operation. Battles can be planned, encountered or forced by one side when the other is unable to withdraw from combat. A battle always has as its purpose the reaching of a mission goal by use of military force. A victory in the battle is achieved when one of the opposing sides forces the other to abandon its mission and surrender its forces, routs the other (i.e., forces it to retreat or renders it militarily ineffective for further combat operations) or annihilates the latter, resulting in their deaths or capture. A battle may end in a Pyrrhic victory, which ultimately favors the defeated party. If no resolution is reached in a battle, it can result in a stalemate. A conflict in which one side is unwilling to reach a decision by a direct battle using conventional warfare often becomes an insurgency. Until the 19th century the majority of battles were of short duration, many lasting a part of a day. (The Battle of Preston (1648), the Battle of Nations (1813) and the Battle of Gettysburg (1863) were exceptional in lasting three days.) This was mainly due to the difficulty of supplying armies in the field or conducting night operations. The typical means of prolonging a battle was siege warfare. Improvements in transport and the sudden evolution of trench warfare, with its siege-like nature, lengthened the duration of battles to days and weeks during the First World War. This created the requirement for unit rotation to prevent combat fatigue, with troops preferably not remaining in a combat area of operations for more than a month. The use of the term "battle" in military history has led to its misuse when referring to almost any scale of combat, notably by strategic forces involving hundreds of thousands of troops that may be engaged in either one battle at a time (Battle of Leipzig) or operations (Battle of Kursk). The space a battle occupies depends on the range of the weapons of the combatants. A "battle" in this broader sense may be of long duration and take place over a large area, as in the case of the Battle of Britain or the Battle of the Atlantic. Until the advent of artillery and aircraft, battles were fought with the two sides within sight, if not reach, of each other. The depth of the battlefield has also increased in modern warfare with inclusion of the supporting units in the rear areas; supply, artillery, medical personnel etc. often outnumber the front-line combat troops. Battles are made up of a multitude of individual combats, skirmishes and small engagements and the combatants will usually only experience a small part of the battle. To the infantryman, there may be little to distinguish between combat as part of a minor raid or a big offensive, nor is it likely that he anticipates the future course of the battle; few of the British infantry who went over the top on the first day on the Somme, 1 July 1916, would have anticipated that the battle would last five months. Some of the Allied infantry who had just dealt a crushing defeat to the French at the Battle of Waterloo fully expected to have to fight again the next day (at the Battle of Wavre). Battlespace Battlespace is a unified strategic concept to integrate and combine armed forces for the military theatre of operations, including air, information, land, sea and space. 
It includes the environment, factors and conditions that must be understood to apply combat power, protect the force or complete the mission, comprising enemy and friendly armed forces; facilities; weather; terrain; and the electromagnetic spectrum. Factors Battles are decided by various factors; the number and quality of combatants and equipment, the skill of commanders, and the terrain are among the most prominent. Weapons and armour can be decisive; on many occasions armies have achieved victory through more advanced weapons than those of their opponents. An extreme example was in the Battle of Omdurman, in which a large army of Sudanese Mahdists armed in a traditional manner were destroyed by an Anglo-Egyptian force equipped with Maxim machine guns and artillery. On some occasions, simple weapons employed in an unorthodox fashion have proven advantageous; Swiss pikemen gained many victories through their ability to transform a traditionally defensive weapon into an offensive one. Zulus in the early 19th century were victorious in battles against their rivals in part because they adopted a new kind of spear, the iklwa. Forces with inferior weapons have still emerged victorious at times, for example in the Wars of Scottish Independence. Disciplined troops are often of greater importance; at the Battle of Alesia, the Romans were greatly outnumbered but won because of superior training. Battles can also be determined by terrain. Capturing high ground has been the main tactic in innumerable battles. An army that holds the high ground forces the enemy to climb and thus wear themselves down. Areas of jungle and forest, with their dense vegetation, act as force-multipliers, benefiting inferior armies. Terrain may have lost importance in modern warfare, due to the advent of aircraft, though the terrain is still vital for camouflage, especially for guerrilla warfare. Generals and commanders also play an important role: Hannibal, Julius Caesar, Khalid ibn Walid, Subutai and Napoleon Bonaparte were all skilled generals, and their armies were extremely successful at times. An army that can trust the commands of their leaders with conviction in its success invariably has a higher morale than an army that doubts its every move. The British in the naval Battle of Trafalgar owed their success to the reputation of Admiral Lord Nelson. Types Battles can be fought on land, at sea, and in the air. Naval battles have occurred since before the 5th century BC. Air battles have been far less common, due to their late conception, the most prominent being the Battle of Britain in 1940. Since the Second World War, land or sea battles have come to rely on air support. During the Battle of Midway, five aircraft carriers were sunk without either fleet coming into direct contact. A pitched battle is an encounter where opposing sides agree on the time and place of combat. A battle of encounter (or encounter battle) is a meeting engagement where the opposing sides collide in the field without either having prepared their attack or defence. A battle of attrition aims to inflict losses on an enemy that are less sustainable than one's own. These need not be greater numerical losses – if one side is much more numerous than the other then pursuing a strategy based on attrition can work even if casualties on both sides are about equal. Many battles of the Western Front in the First World War were intentionally (Verdun) or unintentionally (Somme) attrition battles. 
A battle of breakthrough aims to pierce the enemy's defences, thereby exposing the vulnerable flanks which can be turned. A battle of encirclement—the Kesselschlacht of the German battle of manoeuvre (Bewegungskrieg)—surrounds the enemy in a pocket. A battle of envelopment involves an attack on one or both flanks; the classic example being the double envelopment of the Battle of Cannae. A battle of annihilation is one in which the defeated party is destroyed in the field, such as the French fleet at the Battle of the Nile. Battles are usually hybrids of different types listed above. A decisive battle is one with political effects, determining the course of the war such as the Battle of Smolensk or bringing hostilities to an end, such as the Battle of Hastings or the Battle of Hattin. A decisive battle can change the balance of power or boundaries between countries. The concept of the decisive battle became popular with the publication in 1851 of Edward Creasy's The Fifteen Decisive Battles of the World. British military historians J.F.C. Fuller (The Decisive Battles of the Western World) and B.H. Liddell Hart (Decisive Wars of History), among many others, have written books in the style of Creasy's work. Land There is an obvious difference in the way battles have been fought. Early battles were probably fought between rival hunting bands as unorganized crowds. During the Battle of Megiddo, the first reliably documented battle, in the fifteenth century BC, both armies were organised and disciplined; during the many wars of the Roman Empire, barbarians continued to use mob tactics. As the Age of Enlightenment dawned, armies began to fight in highly disciplined lines. Each would follow the orders from their officers and fight as a unit instead of individuals. Armies were divided into regiments, battalions, companies and platoons. These armies would march, line up and fire in divisions. Native Americans, on the other hand, did not fight in lines, using guerrilla tactics. American colonists and European forces continued using disciplined lines into the American Civil War. A new style arose from the 1850s to the First World War, known as trench warfare, which also led to tactical radio. Chemical warfare also began in 1915. By the Second World War, the use of the smaller divisions, platoons and companies became much more important as precise operations became vital. Instead of the trench stalemate of 1915–1917, in the Second World War, battles
developed where small groups encountered other platoons. As a result, elite squads became much more recognized and distinguishable. Maneuver warfare also returned at an astonishing pace with the advent of the tank, replacing the cannon of the Enlightenment Age. Artillery has since gradually replaced the use of frontal troops. Modern battles resemble those of the Second World War, although indirect combat through the use of aircraft and missiles has come to constitute a large portion of wars in place of battles, which are now mostly reserved for capturing cities. Naval One significant difference of modern naval battles, as opposed to earlier forms of combat, is the use of marines, which introduced amphibious warfare. Today, a marine is actually an infantry regiment that sometimes fights solely on land and is no longer tied to the navy. A good example of an old naval battle is the Battle of Salamis. Most ancient naval battles were fought by fast ships using the battering ram to sink opposing fleets or steer close enough for boarding in hand-to-hand combat. Troops were often used to storm enemy ships, a tactic used by Romans and pirates. This tactic was usually used by civilizations that could not beat the enemy with ranged weaponry. Another invention, by the Byzantines, was the use of Greek fire, which was used to set enemy fleets on fire. Empty demolition ships utilized the tactic to crash into opposing ships and set them afire with an explosion. After the invention of cannons, naval warfare became useful as support for land warfare. During the 19th century, the development of mines led to a new type of naval warfare. The ironclad, first used in the American Civil War and resistant to cannons, soon made the wooden ship obsolete. The invention of military submarines, during World War I, brought naval warfare to both above and below the surface. With the development of military aircraft during World War II, battles were fought in the sky as well as below the ocean. Aircraft carriers have since become the central unit in naval warfare, acting as a mobile base for lethal aircraft. Aerial Although aircraft have for the most part been used as a supplement to land or naval engagements, since their first major military use in World War I aircraft have increasingly taken on larger roles in warfare. During World War I, the primary use was for reconnaissance, and small-scale bombardment. 
Aircraft became much more prominent in the Spanish Civil War and especially World War II. Aircraft design began specializing, primarily into two types: bombers, which carried explosive payloads to bomb land targets or ships; and fighter-interceptors, which were used to either intercept incoming aircraft or to escort and protect bombers (engagements between fighter aircraft were known as dogfights). Some of the more notable aerial battles in this period include the Battle of Britain and the Battle of Midway. Another important use of aircraft came with the development of the helicopter, which first became heavily used during the Vietnam War, and still continues to be widely used today to transport and augment ground forces. Today, direct engagements between aircraft are rare – the most modern fighter-interceptors carry much more extensive bombing payloads, and are used to bomb precision land targets, rather than to fight other aircraft. Anti-aircraft batteries are used much more extensively to defend against incoming aircraft than interceptors. Despite this, aircraft today are much more extensively used as the primary tools for both army and navy, as evidenced by the prominent use of helicopters to transport and support troops, the use of aerial bombardment as the "first strike" in many engagements, and the replacement of the battleship with the aircraft carrier as the center of most modern navies. Naming Battles are usually named after some feature of the battlefield geography, such as a town, forest or river, commonly prefixed "Battle of...". Occasionally battles are named after the date on which they took place, such as The Glorious First of June. In the Middle Ages it was considered important to settle on a suitable name for a battle which could be used by the chroniclers. After Henry V of England defeated a French army on October 25, 1415, he met with the senior French herald and they agreed to name the battle after the nearby castle and so it was called the Battle of Agincourt. In other cases, the sides adopted different names for the same battle, such as the Battle of Gallipoli which is known in Turkey as the Battle of Çanakkale. During the American Civil War, the Union tended to name the battles after the nearest watercourse, such as the Battle of Wilson's Creek and the Battle of Stones River, whereas the Confederates favoured the nearby towns, as in the Battles of Chancellorsville and Murfreesboro. Occasionally both names for the same battle entered the popular culture, such as the First Battle of Bull Run and the Second Battle of Bull Run, which are also referred to as the First and Second Battles of Manassas. Sometimes in desert warfare, there is no nearby town name to use; map coordinates gave the name to the Battle of 73 Easting in the First Gulf War. Some place names have become synonymous with battles, such as the Passchendaele, Pearl Harbor, the Alamo, Thermopylae and Waterloo. Military operations, many of which result in battle, are given codenames, which are not necessarily meaningful or indicative of the type or the location of the battle. Operation Market Garden and Operation Rolling Thunder are examples of battles known by their military codenames. When a battleground is the site of more than one battle in the same conflict, the instances are distinguished by ordinal number, such as the First and Second Battles of Bull Run. An extreme case is the twelve Battles of the Isonzo—First to Twelfth—between Italy and Austria-Hungary during the First World War. 
Some battles are named for the convenience of military historians so that periods of combat can be neatly distinguished from one another. Following the First World War, the British Battles Nomenclature Committee was formed to decide on standard names for all battles and subsidiary actions. To the soldiers who did the fighting, the distinction was usually academic; a soldier fighting at Beaumont Hamel on November 13, 1916, was probably unaware he was taking part in what the committee named the Battle of the Ancre. Many combats are too small to be battles; terms such as "action", "affair", "skirmish", "firefight", "raid" or "offensive patrol" are used to describe small military encounters. These combats often take place within the time and space of a battle and, while they may have an objective, they are not necessarily "decisive". Sometimes the soldiers are unable to immediately gauge the significance of the combat; in the aftermath of the Battle of Waterloo, some British officers were in doubt as to whether the day's events merited the title of "battle" or would be called an "action". Effects Battles affect the individuals who take part, as well as the political actors. Personal effects of battle range from mild psychological issues to permanent and crippling injuries. Some battle survivors have nightmares about the conditions they encountered, or abnormal reactions to certain sights or sounds, and some suffer flashbacks. Physical effects of battle can include scars, amputations, lesions, loss of bodily functions, blindness, paralysis and death. Battles affect politics; a decisive battle can cause the losing side to surrender, while a Pyrrhic victory such as the Battle of Asculum can cause the winning side to reconsider its goals. Battles in civil wars have often decided the fate of monarchs or political factions. Famous examples include the Wars of the Roses, as well as the Jacobite risings. Battles affect the commitment of one side or the other to the continuance of a war, for example the Battle of Inchon and the Battle of Huế during the Tet Offensive.
psychic medium. Her elder sister, Marisa Berenson, became a well-known model and actress. She was also a great-grandniece of Giovanni Schiaparelli, an Italian astronomer who believed he had discovered the supposed canals of Mars, and a second cousin, once removed, of art expert Bernard Berenson (1865–1959) and his sister Senda Berenson (1868–1954), an athlete and educator who was one of the first two women elected to the Basketball Hall of Fame. Career Following a brief modeling career in the late 1960s, Berenson became a freelance photographer. By 1973, her photographs had been published in Life, Glamour, Vogue and Newsweek. Berenson studied acting at New York's The American Place Theatre with Wynn Handman, along with Richard Gere, Philip Anglim, Penelope Milford, Robert Ozn, Ingrid Boulting and her sister Marisa. Berenson also appeared in several motion pictures. She starred opposite Anthony Perkins in the 1978 Alan Rudolph film Remember My Name, and appeared with Jeff Bridges in the 1979 film Winter Kills and with Malcolm McDowell in Cat People (1982). Personal life and death On August 9, 1973, on Cape Cod, Massachusetts, Berenson, three months pregnant, married her future Remember My Name co-star Anthony Perkins. The couple raised two sons: actor-director Oz Perkins (born 1974) and folk/rock singer-songwriter Elvis Perkins (born 1976). They remained married until Perkins's death from AIDS-related complications on September 12, 1992. Berenson died at age 53 in 2001 in the September 11 attacks aboard American Airlines Flight 11. She was returning to her Los Angeles
home, following a holiday on Cape Cod. At the National September 11 Memorial & Museum, Berenson is memorialized at the North Pool, on Panel N-76.
animals, and cross-species hybrids are often possible. A familiar example is peppermint, Mentha × piperita, a sterile hybrid between Mentha aquatica and spearmint, Mentha spicata. The many cultivated varieties of wheat are the result of multiple inter- and intra-specific crosses between wild species and their hybrids. Angiosperms with monoecious flowers often have self-incompatibility mechanisms that operate between the pollen and stigma so that the pollen either fails to reach the stigma or fails to germinate and produce male gametes. This is one of several methods used by plants to promote outcrossing. In many land plants the male and female gametes are produced by separate individuals. These species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes. Unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. The formation of stem tubers in potato is one example. Particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. This is one of several types of apomixis that occur in plants. Apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. Most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. This can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid (endopolyploidy), or during gamete formation. An allopolyploid plant may result from a hybridisation event between two different species. Both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross-breed successfully with the parent population because there is a mismatch in chromosome numbers. Such plants, reproductively isolated from the parent species but living within the same geographical area, may be sufficiently successful to form a new species. Some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. Durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. The commercial banana is an example of a sterile, seedless triploid hybrid. Common dandelion is a triploid that produces viable seeds by apomixis. As in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non-Mendelian. Chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants. Molecular genetics A considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the thale cress, Arabidopsis thaliana, a weedy species in the mustard family (Brassicaceae). The genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of DNA, forming one of the smallest genomes among flowering plants. Arabidopsis was the first plant to have its genome sequenced, in 2000. 
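The chromosome arithmetic behind the polyploid examples above can be made explicit. As a back-of-envelope sketch (the chromosome counts quoted are the commonly cited ones, given here for illustration only): a tetraploid (4x) produces 2x gametes and a diploid (2x) produces x gametes, so a cross between them yields

\[
2x + x = 3x,
\]

a triploid whose three chromosome sets cannot pair evenly at meiosis, which is why triploids such as the commercial banana (commonly cited as 2n = 3x = 33) are sterile. On the same conventions, durum wheat is 2n = 4x = 28 and bread wheat is 2n = 6x = 42.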
The sequencing of some other relatively small genomes, of rice (Oryza sativa) and Brachypodium distachyon, has made them important model species for understanding the genetics, cellular and molecular biology of cereals, grasses and monocots generally. Model plants such as Arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. Ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. Corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in plants. The single-celled green alga Chlamydomonas reinhardtii, while not an embryophyte itself, contains a green-pigmented chloroplast related to that of land plants, making it useful for study. The red alga Cyanidioschyzon merolae has also been used to study some basic chloroplast functions. Spinach, peas, soybeans and the moss Physcomitrella patens are commonly used to study plant cell biology. Agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus-inducing Ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. Schell and Van Montagu (1977) hypothesised that the Ti plasmid could be a natural vector for introducing the Nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. Today, genetic modification of the Ti plasmid is one of the main techniques for the introduction of transgenes to plants and the creation of genetically modified crops. Epigenetics Epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying DNA sequence but cause the organism's genes to behave (or "express themselves") differently. One example of epigenetic change is the marking of the genes by DNA methylation, which determines whether they will be expressed or not. Gene expression can also be controlled by repressor proteins that attach to silencer regions of the DNA and prevent that region of the DNA code from being expressed. Epigenetic marks may be added to or removed from the DNA during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. Epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell's life. Some epigenetic changes have been shown to be heritable, while others are reset in the germ cells. Epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. During morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. A single fertilised egg cell, the zygote, gives rise to the many different plant cell types including parenchyma, xylem vessel elements, phloem sieve tubes, guard cells of the epidermis, etc. as it continues to divide. The process results from the epigenetic activation of some genes and inhibition of others. Unlike animals, many plant cells, particularly those of the parenchyma, do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. Exceptions include highly lignified cells, the sclerenchyma and xylem, which are dead at maturity, and the phloem sieve tubes which lack nuclei. 
While plants use many of the same epigenetic mechanisms as animals, such as chromatin remodelling, an alternative hypothesis is that plants set their gene expression patterns using positional information from the environment and surrounding cells to determine their developmental fate. Epigenetic changes can lead to paramutations, which do not follow the Mendelian rules of inheritance. These epigenetic marks are carried from one generation to the next, with one allele inducing a change on the other. Plant evolution The chloroplasts of plants have a number of biochemical, structural and genetic similarities to cyanobacteria (commonly but incorrectly known as "blue-green algae") and are thought to be derived from an ancient endosymbiotic relationship between an ancestral eukaryotic cell and a cyanobacterial resident. The algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others. There are many differences between them in features such as cell wall composition, biochemistry, pigmentation, chloroplast structure and nutrient reserves. The algal division Charophyta, sister to the green algal division Chlorophyta, is considered to contain the ancestor of true plants. The charophyte class Charophyceae and the land plant sub-kingdom Embryophyta together form the monophyletic group or clade Streptophytina. Nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. They include mosses, liverworts and hornworts. Pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free-living gametophytes evolved during the Silurian period and diversified into several lineages during the late Silurian and early Devonian. Representatives of the lycopods have survived to the present day. By the end of the Devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved "megaspory" – their spores were of two distinct sizes, larger megaspores and smaller microspores. Their reduced gametophytes developed from megaspores retained within the spore-producing organs (megasporangia) of the sporophyte, a condition known as endospory. Seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers (integuments). The young sporophyte develops within the seed, which on germination splits to release it. The earliest known seed plants date from the latest Devonian Famennian stage. Following the evolution of the seed habit, seed plants diversified, giving rise to a number of now-extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. Gymnosperms produce "naked seeds" not fully enclosed in an ovary; modern representatives include conifers, cycads, Ginkgo, and Gnetales. Angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. Ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms. Plant physiology Plant physiology encompasses all the internal chemical and physical activities of plants associated with life. Chemicals obtained from the air, soil and water form the basis of all plant metabolism. The energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. Photoautotrophs, including all green plants, algae and cyanobacteria, gather energy directly from sunlight by photosynthesis. 
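The overall balance of oxygenic photosynthesis can be summarised by the familiar net equation (a textbook simplification that hides the light-dependent and light-independent stages discussed later):

\[
6\,\mathrm{CO_2} \;+\; 6\,\mathrm{H_2O} \;\xrightarrow{\ \text{light}\ }\; \mathrm{C_6H_{12}O_6} \;+\; 6\,\mathrm{O_2}
\]

Cellular respiration, discussed next, runs this chemistry in the opposite direction, releasing the stored energy.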
Heterotrophs, including all animals, all fungi, all completely parasitic plants, and non-photosynthetic bacteria, take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. Respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis. Molecules are moved within plants by transport processes that operate at a variety of spatial scales. Subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. Minerals and water are transported from roots to other parts of the plant in the transpiration stream. Diffusion, osmosis, active transport and mass flow are all different ways transport can occur. Examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. In vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. Most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. Sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem and plant hormones are transported by a variety of processes. Plant hormones Plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. Tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of Mimosa pudica, the insect traps of Venus flytrap and bladderworts, and the pollinia of orchids. The hypothesis that plant growth and development is coordinated by plant hormones or plant growth regulators first emerged in the late 19th century. Darwin experimented on the movements of plant shoots and roots towards light and gravity, and concluded "It is hardly an exaggeration to say that the tip of the radicle ... acts like the brain of one of the lower animals ... directing the several movements". About the same time, the role of auxins (from the Greek auxein, to grow) in control of plant growth was first outlined by the Dutch scientist Frits Went. The first known auxin, indole-3-acetic acid (IAA), which promotes cell growth, was only isolated from plants about 50 years later. This compound mediates the tropic responses of shoots and roots towards light and gravity. The finding in 1939 that plant callus could be maintained in culture containing IAA, followed by the observation in 1947 that it could be induced to form roots and shoots by controlling the concentration of growth hormones, were key steps in the development of plant biotechnology and genetic modification. Cytokinins are a class of plant hormones named for their control of cell division (especially cytokinesis). The natural cytokinin zeatin was discovered in corn, Zea mays, and is a derivative of the purine adenine. Zeatin is produced in roots and transported to shoots in the xylem, where it promotes cell division, bud development, and the greening of chloroplasts. The gibberellins, such as gibberellic acid, are diterpenes synthesised from acetyl-CoA via the mevalonate pathway. They are involved in the promotion of germination and dormancy-breaking in seeds, in the regulation of plant height by controlling stem elongation, and in the control of flowering. 
Abscisic acid (ABA) occurs in all land plants except liverworts, and is synthesised from carotenoids in the chloroplasts and other plastids. It inhibits cell division, promotes seed maturation and dormancy, and promotes stomatal closure. It was so named because it was originally thought to control abscission. Ethylene is a gaseous hormone that is produced in all higher plant tissues from methionine. It is now known to be the hormone that stimulates or regulates fruit ripening and abscission, and it, or the synthetic growth regulator ethephon, which is rapidly metabolised to produce ethylene, is used on an industrial scale to promote ripening of cotton, pineapples and other climacteric crops. Another class of phytohormones is the jasmonates, first isolated from the oil of Jasminum grandiflorum, which regulate wound responses in plants by unblocking the expression of genes required in the systemic acquired resistance response to pathogen attack. In addition to being the primary energy source for plants, light functions as a signalling device, providing information to the plant, such as how much sunlight the plant receives each day. This can result in adaptive changes in a process known as photomorphogenesis. Phytochromes are the photoreceptors in a plant that are sensitive to light. Plant anatomy and morphology Plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form. All plants are multicellular eukaryotes, with their DNA stored in nuclei. The characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. Other plastids contain storage products such as starch (amyloplasts) or lipids (elaioplasts). Uniquely, streptophyte cells and those of the green algal order Trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. The bodies of vascular plants including clubmosses, ferns and seed plants (gymnosperms and angiosperms) generally have aerial and subterranean subsystems. The shoots consist of stems bearing green photosynthesising leaves and reproductive structures. The underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. Non-vascular plants, the liverworts, hornworts and mosses, do not produce ground-penetrating vascular roots and most of the plant participates in photosynthesis. The sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. The root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. Cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. Stolons and tubers are examples of shoots that can grow roots. Roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. In the event that one of the systems is lost, the other can often regrow it. In fact it is possible to grow an entire plant from a single leaf, as is the case with plants in Streptocarpus sect. 
Saintpaulia, or even a single cell – which can dedifferentiate into a callus (a mass of unspecialised cells) that can grow into a new plant. In vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. Roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. Stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. Leaves gather sunlight and carry out photosynthesis. Large, flat, flexible, green leaves are called foliage leaves. Gymnosperms, such as conifers, cycads, Ginkgo, and gnetophytes, are seed-producing plants with open seeds. Angiosperms are seed-producing plants that produce flowers and have enclosed seeds. Woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues: wood (secondary xylem) and bark (secondary phloem and cork). All gymnosperms and many angiosperms are woody plants. Some plants reproduce sexually, some asexually, and some via both means. Although reference to major morphological categories such as root, stem, leaf, and trichome is useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. Furthermore, structures can be seen as processes, that is, as combinations of processes. Systematic botany Systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. It involves, or is related to, biological classification, scientific taxonomy and phylogenetics. Biological classification is the method by which botanists group organisms into categories such as genera or species. Biological classification is a form of scientific taxonomy. Modern taxonomy is rooted in the work of Carl Linnaeus, who grouped species according to shared physical characteristics. These groupings have since been revised to align better with the Darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. While scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses DNA sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. The dominant classification system is called Linnaean taxonomy. It includes ranks and binomial nomenclature. The nomenclature of botanical organisms is codified in the International Code of Nomenclature for algae, fungi, and plants (ICN) and administered by the International Botanical Congress. Kingdom Plantae belongs to Domain Eukarya and is broken down recursively until each species is separately classified. The order is: Kingdom; Phylum (or Division); Class;
used by animals. This is what ecologists call the first trophic level. The modern forms of the major staple foods, such as hemp, teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics. Botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity's ability to feed the world and provide food security for future generations. Botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. Ethnobotany is the study of the relationships between plants and people. When applied to the investigation of historical plant–people relationships ethnobotany may be referred to as archaeobotany or palaeoethnobotany. Some of the earliest plant–people relationships arose among the indigenous peoples of Canada, who learned to distinguish edible plants from inedible ones; these relationships were later recorded by ethnobotanists. Plant biochemistry Plant biochemistry is the study of the chemical processes used by plants. Some of these processes are used in their primary metabolism like the photosynthetic Calvin cycle and crassulacean acid metabolism. Others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds. Plants make various photosynthetic pigments, such as xanthophylls, chlorophyll a and chlorophyll b, which can be separated by paper chromatography. Plants and various other groups of photosynthetic eukaryotes collectively known as "algae" have unique organelles known as chloroplasts. Chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. Chloroplasts and cyanobacteria contain the blue-green pigment chlorophyll a. Chlorophyll a (as well as its plant and green algal-specific cousin chlorophyll b) absorbs light in the blue-violet and orange/red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. The energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy-rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen (O2) as a by-product. The light energy captured by chlorophyll a is initially in the form of electrons (and later a proton gradient) that are used to make molecules of ATP and NADPH, which temporarily store and transport energy. Their energy is used in the light-independent reactions of the Calvin cycle by the enzyme rubisco to produce molecules of the 3-carbon sugar glyceraldehyde 3-phosphate (G3P). Glyceraldehyde 3-phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. Some of the glucose is converted to starch which is stored in the chloroplast. Starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose, is used for the same purpose in the sunflower family Asteraceae. Some of the glucose is converted to sucrose (common table sugar) for export to the rest of the plant. 
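The division of labour between the light reactions and the Calvin cycle described above can be captured in a standard stoichiometric summary (a textbook simplification: water and protons are omitted, and the counts are the usual idealised values rather than measurements for any particular species):

\[
3\,\mathrm{CO_2} \;+\; 9\,\mathrm{ATP} \;+\; 6\,\mathrm{NADPH} \;\longrightarrow\; \text{G3P} \;+\; 9\,\mathrm{ADP} \;+\; 8\,\mathrm{P_i} \;+\; 6\,\mathrm{NADP^+}
\]

Only eight of the nine phosphates are released as inorganic phosphate because the ninth leaves the cycle attached to G3P itself.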
Unlike in animals (which lack chloroplasts), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. The fatty acids that chloroplasts make are used for many things, such as providing material from which to build cell membranes and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. Plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. Vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. Lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. Sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores and the pollen of seed plants in the fossil record. It is widely regarded as a marker for the start of land plant evolution during the Ordovician period. The concentration of carbon dioxide in the atmosphere today is much lower than it was when plants emerged onto land during the Ordovician and Silurian periods. Many monocots like maize and the pineapple and some dicots like the Asteraceae have since independently evolved pathways like crassulacean acid metabolism and the C4 carbon fixation pathway for photosynthesis which avoid the losses resulting from photorespiration in the more common C3 carbon fixation pathway. These biochemical strategies are unique to land plants. Medicine and materials Phytochemistry is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary metabolism. Some of these compounds are toxins such as the alkaloid coniine from hemlock. Others, such as the essential oils peppermint oil and lemon oil, are useful for their aroma, as flavourings and spices (e.g., capsaicin), and in medicine as pharmaceuticals as in opium from opium poppies. Many medicinal and recreational drugs, such as tetrahydrocannabinol (the active ingredient in cannabis), caffeine, morphine and nicotine come directly from plants. Others are simple derivatives of botanical natural products. For example, the painkiller aspirin is the acetyl ester of salicylic acid, originally isolated from the bark of willow trees, and a wide range of opiate painkillers like heroin are obtained by chemical modification of morphine obtained from the opium poppy. Popular stimulants come from plants, such as caffeine from coffee, tea and chocolate, and nicotine from tobacco. Most alcoholic beverages come from fermentation of carbohydrate-rich plant products such as barley (beer), rice (sake) and grapes (wine). Native Americans have used various plants as ways of treating illness or disease for thousands of years. This knowledge of plants has been recorded by ethnobotanists and has in turn been used by pharmaceutical companies as an avenue for drug discovery. 
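The aspirin example above is simple enough to write out. Acetylation of the phenolic hydroxyl of salicylic acid with acetic anhydride gives acetylsalicylic acid (the standard laboratory route, shown here only to illustrate what "simple derivative" means):

\[
\underbrace{\mathrm{C_7H_6O_3}}_{\text{salicylic acid}} \;+\; \underbrace{\mathrm{(CH_3CO)_2O}}_{\text{acetic anhydride}} \;\longrightarrow\; \underbrace{\mathrm{C_9H_8O_4}}_{\text{acetylsalicylic acid (aspirin)}} \;+\; \underbrace{\mathrm{CH_3COOH}}_{\text{acetic acid}}
\]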
Plants can synthesise useful coloured dyes and pigments such as the anthocyanins responsible for the red colour of red wine; yellow weld and blue woad, used together to produce Lincoln green; indoxyl, source of the blue dye indigo traditionally used to dye denim; and the artist's pigments gamboge and rose madder. Sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. Charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal-smelting fuel, as a filter material and adsorbent, and as an artist's material, and is one of the three ingredients of gunpowder. Cellulose, the world's most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. Products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. Sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. Sweetgrass was used by Native Americans to ward off insects like mosquitoes; these repellent properties were later traced, in research reported by the American Chemical Society, to the molecules phytol and coumarin. Plant ecology Plant ecology is the science of the functional relationships between plants and their habitats – the environments where they complete their life cycles. Plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. Some ecologists even rely on empirical data from indigenous peoples, gathered by ethnobotanists; such information can reveal a great deal about how the land once was thousands of years ago and how it has changed over that time. The goals of plant ecology are to understand the causes of plants' distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. Plants depend on certain edaphic (soil) and climatic factors in their environment but can modify these factors too. For example, they can change their environment's albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. Plants compete with other organisms in their ecosystem for resources. They interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. Regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest. Herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. Other organisms form mutually beneficial relationships with plants. For example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food, ants are recruited by ant plants to provide protection, honey bees, bats and other animals pollinate flowers, and humans and other animals act as dispersal vectors to spread spores and seeds. 
Plants, climate and environmental change Plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. For example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. Palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago, allows the reconstruction of past climates. Estimates of atmospheric CO2 concentrations since the Palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. Ozone depletion can expose plants to higher levels of ultraviolet-B radiation (UV-B), resulting in lower growth rates. Moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction. Genetics Inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. Gregor Mendel discovered the genetic laws of inheritance by studying inherited traits such as shape in Pisum sativum (peas). What Mendel learned from studying plants has had far-reaching benefits outside of botany. Similarly, "jumping genes" were discovered by Barbara McClintock while she was studying maize. Nevertheless, there are some distinctive genetic differences between plants and other organisms. Species boundaries in plants may be weaker than in animals, and cross-species hybrids are often possible. 
Many crystal-producing Bt strains, though, do not have insecticidal properties. The subspecies israelensis is commonly used for control of mosquitoes and of fungus gnats. As a toxic mechanism, Cry proteins bind to specific receptors on the membranes of mid-gut (epithelial) cells of the targeted pests, resulting in their rupture. Other organisms (including humans, other animals and non-targeted insects) that lack the appropriate receptors in their gut cannot be affected by the Cry protein, and therefore are not affected by Bt. Taxonomy and discovery In 1902, B. thuringiensis was first discovered in silkworms by the Japanese sericultural engineer Ishiwata Shigetane. He named it B. sotto, using the Japanese word sottō, here referring to bacillary paralysis. In 1911, German microbiologist Ernst Berliner rediscovered it when he isolated it as the cause of a disease called Schlaffsucht in flour moth caterpillars in Thuringia (hence the specific name thuringiensis, "Thuringian"). B. sotto would later be reassigned as B. thuringiensis var. sotto. In 1976, Robert A. Zakharyan reported the presence of a plasmid in a strain of B. thuringiensis and suggested the plasmid's involvement in endospore and crystal formation. B. thuringiensis is closely related to B. cereus, a soil bacterium, and B. anthracis, the cause of anthrax; the three organisms differ mainly in their plasmids. Like other members of the genus, all three are aerobes capable of producing endospores. Species group placement B. thuringiensis is placed in the Bacillus cereus group, which is variously defined as: seven closely related species: B. cereus sensu stricto (B. cereus), B. anthracis, B. thuringiensis, B. mycoides, B. pseudomycoides, B. weihenstephanensis and B. cytotoxicus; or as six species in a Bacillus cereus sensu lato: B. weihenstephanensis, B. mycoides, B. pseudomycoides, B. cereus, B. thuringiensis, and B. anthracis. Within this grouping, B. thuringiensis is more closely related to B. cereus, and more distantly related to B. weihenstephanensis, B. mycoides, B. pseudomycoides, and B. cytotoxicus. Subspecies There are several dozen recognized subspecies of B. thuringiensis. Subspecies commonly used as insecticides include B. thuringiensis subspecies kurstaki (Btk), subspecies israelensis (Bti) and subspecies aizawai. Some Bti lineages are clonal. Genetics Some strains are known to carry the same genes that produce enterotoxins in B. cereus, and so it is possible that the entire B. cereus sensu lato group may have the potential to be enteropathogens. The proteins that B. thuringiensis is most known for are encoded by cry genes. In most strains of B. thuringiensis, these genes are located on a plasmid (in other words, cry is not a chromosomal gene in most strains). If these plasmids are lost, the bacterium becomes indistinguishable from B. cereus, as B. thuringiensis has no other species characteristics. Plasmid exchange has been observed, both naturally and experimentally, within B. thuringiensis and between B. thuringiensis and two congeners, B. cereus and B. mycoides. plcR is an indispensable transcription regulator of most virulence factors; its absence greatly reduces virulence and toxicity. Some strains do naturally complete their life cycle with an inactivated plcR. It is half of a two-gene operon along with the heptapeptide papR, which is part of quorum sensing in B. thuringiensis. Various strains, including Btk ATCC 33679, carry plasmids belonging to the wider pXO1-like family. (The pXO1-like family is common in the B. cereus group, with members of ~330 kb length; they differ from pXO1 by replacement of the pXO1 pathogenicity island.) 
The insect parasite Btk HD73 carries a pXO2-like plasmid – pBT9727 – lacking the 35 kb pathogenicity island of pXO2 itself, and in fact having no identifiable virulence factors. (The pXO2 family does not have replacement of the pathogenicity island, instead simply lacking that part of pXO2.) The genomes of the B. cereus group may contain two types of introns, dubbed group I and group II. B. thuringiensis strains variously carry 0–5 group I introns and 0–13 group II introns. There is still insufficient information to determine whether chromosome–plasmid coevolution to enable adaptation to particular environmental niches has occurred or is even possible. Common with B. cereus but so far not found elsewhere – including in other members of the species group – are the efflux pump BC3663, the N-acyl-L-amino-acid amidohydrolase BC3664, and the methyl-accepting chemotaxis protein BC5034. Proteome B. thuringiensis has a proteome diversity similar to that of its close relative B. cereus. Mechanism of insecticidal action Upon sporulation, B. thuringiensis forms crystals of two types of proteinaceous insecticidal delta endotoxins (δ-endotoxins) called crystal proteins or Cry proteins, which are encoded by cry genes, and Cyt proteins. Cry toxins have specific activities against insect species of the orders Lepidoptera (moths and butterflies), Diptera (flies and mosquitoes), Coleoptera (beetles) and Hymenoptera (wasps, bees, ants and sawflies), as well as against nematodes. Thus, B. thuringiensis serves as an important reservoir of Cry toxins for production of biological insecticides and insect-resistant genetically modified crops. When insects ingest toxin crystals, their alkaline digestive tracts denature the insoluble crystals, making them soluble and thus amenable to being cut by proteases found in the insect gut, which liberate the toxin from the crystal. The Cry toxin is then inserted into the insect gut cell membrane, paralyzing the digestive tract and forming a pore. The insect stops eating and starves to death; live Bt bacteria may also colonize the insect, which can contribute to death. Death occurs within a few hours or weeks. The midgut bacteria of susceptible larvae may be required for B. thuringiensis insecticidal activity. A B. thuringiensis small RNA called BtsR1 can silence Cry5Ba toxin expression outside the host by binding to the RBS site of the Cry5Ba toxin transcript, avoiding the nematode's behavioral defenses. The silencing results in increased ingestion of the bacteria by C. elegans. Expression of BtsR1 is then reduced after ingestion, resulting in Cry5Ba toxin production and host death. In 1996, another class of insecticidal proteins in Bt was discovered: the vegetative insecticidal proteins (Vip). Vip proteins do not share sequence homology with Cry proteins, in general do not compete for the same receptors, and some kill different insects than do Cry proteins. In 2000, a novel subgroup of Cry proteins, designated parasporins, was discovered from non-insecticidal B. thuringiensis isolates. The proteins of the parasporin group are defined as B. thuringiensis and related bacterial parasporal proteins that are not hemolytic but are capable of preferentially killing cancer cells. As of January 2013, parasporins comprise six subfamilies: PS1 to PS6. Use of spores and proteins in pest control Spores and crystalline insecticidal proteins produced by B. thuringiensis have been used to control insect pests since the 1920s and are often applied as liquid sprays. 
They are now used as specific insecticides under trade names such as DiPel and Thuricide. Because of their specificity, these pesticides are regarded as environmentally friendly, with little or no effect on humans, wildlife, pollinators, and most other beneficial insects, and are used in organic farming; however, the manuals for these products do contain many environmental and human health warnings, and a 2012 European regulatory peer review of five approved strains found that, while data exist to support some claims of low toxicity to humans and the environment, the data are insufficient to justify many of these claims. New strains of Bt are developed and introduced over time as insects develop resistance to Bt, or as the desire arises to force mutations to modify organism characteristics, or to use homologous recombinant genetic engineering to improve crystal size and increase pesticidal activity, or to broaden the host range of Bt and obtain more effective formulations. Each new strain is given a unique number and registered with the U.S. EPA, and allowances may be given for genetic modification depending on "its parental strains, the proposed pesticide use pattern, and the manner and extent to which the organism has been genetically modified". Formulations of Bt that are approved for organic farming in the US are listed at the website of the Organic Materials Review Institute (OMRI), and several university extension websites offer advice on how to use Bt spore or protein preparations in organic farming. Use of Bt genes in genetic engineering of plants for pest control The Belgian company Plant Genetic Systems (now part of Bayer CropScience) was the first company (in 1985) to develop genetically modified crops (tobacco) with insect tolerance by expressing cry genes from B. thuringiensis; the resulting crops contain delta endotoxin. The Bt tobacco was never commercialized; tobacco plants are used to test genetic modifications since they are easy to manipulate genetically and are not part of the food supply. Usage In 1995, potato plants producing Cry3A Bt toxin were approved safe by the Environmental Protection Agency, making it the first human-modified pesticide-producing crop to be approved in the US, though many plants produce pesticides
delta endotoxins, that have insecticidal action. This has led to their use as insecticides, and more recently to genetically modified crops using Bt genes, such as Bt corn. Many crystal-producing Bt strains, though, do not have insecticidal properties. The subspecies israelensis is commonly used for control of mosquitoes and of fungus gnats. As a toxic mechanism, cry proteins bind to specific receptors on the membranes of mid-gut (epithelial) cells of the targeted pests, resulting in their rupture. Other organisms (including humans, other animals and non-targeted insects) that lack the appropriate receptors in their gut cannot be affected by the cry protein, and therefore are not affected by Bt. Taxonomy and discovery In 1902, B. thuringiensis was first discovered in silkworms by Japanese sericultural engineer . He named it B. sotto, using the Japanese word , here referring to bacillary paralysis. In 1911, German microbiologist Ernst Berliner rediscovered it when he isolated it as the cause of a disease called in flour moth caterpillars in Thuringia (hence the specific name thuringiensis, "Thuringian"). B. sotto would later be reassigned as B. thuringiensis var. sotto. In 1976, Robert A. Zakharyan reported the presence of a plasmid in a strain of B. thuringiensis and suggested the plasmid's involvement in endospore and crystal formation. B. thuringiensis is closely related to B. cereus, a soil bacterium, and B. anthracis, the cause of anthrax; the three organisms differ mainly in their plasmids. Like other members of the genus, all three are anaerobes capable of producing endospores. Species group placement B. thuringiensis is placed in the Bacillus cereus group which is variously defined as: seven closely related species: B. cereus sensu stricto (B. cereus), B. anthracis, B. thuringiensis, B. mycoides, B. pseudomycoides, and B. cytotoxicus; or as six species in a Bacillus cereus sensu lato: B. weihenstephanensis, B. mycoides, B. pseudomycoides, B. cereus, B. thuringiensis, and B. anthracis. Within this grouping B.t. is more closely related to B.ce. It is more distantly related to B.w., B.m., B.p., and B.cy. Subspecies There are several dozen recognized subspecies of B. thuringiensis. Subspecies commonly used as insecticides include B. thuringiensis subspecies kurstaki (Btk), subspecies israelensis (Bti) and subspecies aizawa. Some Bti lineages are clonal. Genetics Some strains are known to carry the same genes that produce enterotoxins in B. cereus, and so it is possible that the entire B. cereus sensu lato group may have the potential to be enteropathogens. The proteins that B. thuringiensis is most known for are encoded by cry genes. In most strains of B. thuringiensis, these genes are located on a plasmid (in other words cry is not a chromosomal gene in most strains). If these plasmids are lost it becomes indistinguishable from B. cereus as B. thuringiensis has no other species characteristics. Plasmid exchange has been observed both naturally and experimentally both within B.t. and between B.t. and two congeners, B. cereus and B. mycoides. plcR is an indispensable transcription regulator of most virulence factors, its absence greatly reducing virulence and toxicity. Some strains do naturally complete their life cycle with an inactivated plcR. It is half of a two-gene operon along with the heptapeptide papR. papR is part of quorum sensing in B. thuringiensis. Various strains including Btk ATCC 33679 carry plasmids belonging to the wider pXO1-like family. (The pXO1 family being a B. 
cereus-common family with members of ~330kb length. They differ from pXO1 by replacement of the pXO1 pathogenicity island.) The insect parasite Btk HD73 carries a pXO2-like plasmid - pBT9727 - lacking the 35kb pathogenicity island of pXO2 itself, and in fact having no identifiable virulence factors. (The pXO2 family does not have replacement of the pathogenicity island, instead simply lacking that part of pXO2.) The genomes of the B. cereus group may contain two types of introns, dubbed group I and group II. B.t strains have variously 0-5 group Is and 0-13 group IIs. There is still insufficient information to determine whether chromosome-plasmid coevolution to enable adaptation to particular environmental niches has occurred or is even possible. Common with B. cereus but so far not found elsewhere - including in other members of the species group - are the efflux pump BC3663, the N-acyl--amino-acid amidohydrolase BC3664, and the methyl-accepting chemotaxis protein BC5034. Proteome Has similar proteome diversity to close relative B. cereus. Mechanism of insecticidal action Upon sporulation, B. thuringiensis forms crystals of two types of proteinaceous insecticidal delta endotoxins (δ-endotoxins) called crystal proteins or Cry proteins, which are encoded by cry genes, and Cyt proteins. Cry toxins have specific activities against insect species of the orders Lepidoptera (moths and butterflies), Diptera (flies and mosquitoes), Coleoptera (beetles) and Hymenoptera (wasps, bees, ants and sawflies), as well as against nematodes. Thus, B. thuringiensis serves as an important reservoir of Cry toxins for production of biological insecticides and insect-resistant genetically modified crops. When insects ingest toxin crystals, their alkaline digestive tracts denature the insoluble crystals, making them soluble and thus amenable to being cut with proteases found in the insect gut, which liberate the toxin from the crystal. The Cry toxin is then inserted into the insect gut cell membrane, paralyzing the digestive tract and forming a pore. The insect stops eating and starves to death; live Bt bacteria may also colonize the insect, which can contribute to death. Death occurs within a few hours or weeks. The midgut bacteria of susceptible larvae may be required for B. thuringiensis insecticidal activity. A B. thuringiensis small RNA called BtsR1 can silence the Cry5Ba toxin expression when outside the host by binding to the RBS site of the Cry5Ba toxin transcript to avoid nematode behavioral defenses. The silencing results in an increased of the bacteria ingestion by C. elegans.The expression of BtsR1 is then reduced after ingestion, resulting in Cry5Ba toxin production and host death. In 1996 another class of insecticidal proteins in Bt was discovered: the vegetative insecticidal proteins (Vip; ). Vip proteins do not share sequence homology with Cry proteins, in general do not compete for the same receptors, and some kill different insects than do Cry proteins. In 2000, a novel subgroup of Cry protein, designated parasporin, was discovered from non-insecticidal B. thuringiensis isolates. The proteins of parasporin group are defined as B. thuringiensis and related bacterial parasporal proteins that are not hemolytic, but capable of preferentially killing cancer cells. As of January 2013, parasporins comprise six subfamilies: PS1 to PS6. Use of spores and proteins in pest control Spores and crystalline insecticidal proteins produced by B. 
have been used to control insect pests since the 1920s and are often applied as liquid sprays. They are now used as specific insecticides under trade names such as DiPel and Thuricide. Because of their specificity, these pesticides are regarded as environmentally friendly, with little or no effect on humans, wildlife, pollinators, and most other beneficial insects, and are used in organic farming; however, the manuals for these products do contain many environmental and human health warnings, and a 2012 European regulatory peer review of five approved strains found that, while data exist to support some claims of low toxicity to humans and the environment, the data are insufficient to justify many of these claims. New strains of Bt are developed and introduced over time as insects develop resistance to Bt, or out of the desire to force mutations that modify organism characteristics, to use homologous recombinant genetic engineering to improve crystal size and increase pesticidal activity, or to broaden the host range of Bt and obtain more effective formulations. Each new strain is given a unique number and registered with the U.S. EPA, and allowances may be given for genetic modification depending on "its parental strains, the proposed pesticide use
bacteria may be infected by phages. Phages have been used since the late 20th century as an alternative to antibiotics in the former Soviet Union and Central Europe, as well as in France. They are seen as a possible therapy against multi-drug-resistant strains of many bacteria (see phage therapy). Phages are known to interact with the immune system both indirectly, via bacterial expression of phage-encoded proteins, and directly, by influencing innate immunity and bacterial clearance.

Classification Bacteriophages occur abundantly in the biosphere, with different genomes and lifestyles. Phages are classified by the International Committee on Taxonomy of Viruses (ICTV) according to morphology and nucleic acid.

{| class="wikitable sortable"
|+ ICTV classification of prokaryotic (bacterial and archaeal) viruses
|- style="background:gray;"
!Order !! Family !! Morphology !! Nucleic acid !! Examples
|-
| Belfryvirales
| Turriviridae || Enveloped, isometric || Linear dsDNA ||
|-
| rowspan="14" | Caudovirales
| Ackermannviridae || Nonenveloped, contractile tail || Linear dsDNA ||
|-
| Autographiviridae || Nonenveloped, noncontractile tail (short) || Linear dsDNA ||
|-
| Chaseviridae || || Linear dsDNA ||
|-
| Demerecviridae || || Linear dsDNA ||
|-
| Drexlerviridae || || Linear dsDNA ||
|-
| Guenliviridae || || Linear dsDNA ||
|-
| Herelleviridae || Nonenveloped, contractile tail || Linear dsDNA ||
|-
| Myoviridae || Nonenveloped, contractile tail || Linear dsDNA || T4, Mu, P1, P2
|-
| Siphoviridae || Nonenveloped, noncontractile tail (long) || Linear dsDNA || λ, T5, HK97, N15
|-
| Podoviridae || Nonenveloped, noncontractile tail (short) || Linear dsDNA || T7, T3, Φ29, P22
|-
| Rountreeviridae || || Linear dsDNA ||
|-
| Salasmaviridae || || Linear dsDNA ||
|-
| Schitoviridae || || Linear dsDNA ||
|-
| Zobellviridae || || Linear dsDNA ||
|-
| rowspan="3" | Halopanivirales
| Sphaerolipoviridae || Enveloped, isometric || Linear dsDNA ||
|-
| Simuloviridae || Enveloped, isometric || Linear dsDNA ||
|-
| Matshushitaviridae || Enveloped, isometric || Linear dsDNA ||
|-
| Haloruvirales
| Pleolipoviridae || Enveloped, pleomorphic || Circular ssDNA, circular dsDNA, or linear dsDNA ||
|-
| Kalamavirales
| Tectiviridae || Nonenveloped, isometric || Linear dsDNA ||
|-
| rowspan="2" | Ligamenvirales
| Lipothrixviridae || Enveloped, rod-shaped || Linear dsDNA || Acidianus filamentous virus 1
|-
| Rudiviridae || Nonenveloped, rod-shaped || Linear dsDNA || Sulfolobus islandicus rod-shaped virus 1
|-
| Mindivirales
| Cystoviridae || Enveloped, spherical || Linear dsRNA || Φ6
|-
| rowspan="4" | Norzivirales
| Atkinsviridae || Nonenveloped, isometric || Linear ssRNA ||
|-
| Duinviridae || Nonenveloped, isometric || Linear ssRNA ||
|-
| Fiersviridae || Nonenveloped, isometric || Linear ssRNA || MS2, Qβ
|-
| Solspiviridae || Nonenveloped, isometric || Linear ssRNA ||
|-
| Petitvirales
| Microviridae || Nonenveloped, isometric || Circular ssDNA || ΦX174
|-
| Primavirales
| Tristromaviridae || Enveloped, rod-shaped || Linear dsDNA ||
|-
| rowspan="2" | Timlovirales
| Blumeviridae || Nonenveloped, isometric || Linear ssRNA ||
|-
| Steitzviridae || Nonenveloped, isometric || Linear ssRNA ||
|-
| rowspan="3" | Tubulavirales
| Inoviridae || Nonenveloped, filamentous || Circular ssDNA || M13
|-
| Paulinoviridae || Nonenveloped, filamentous || Circular ssDNA ||
|-
| Plectroviridae || Nonenveloped, filamentous || Circular ssDNA ||
|-
| Vinavirales
| Corticoviridae || Nonenveloped, isometric || Circular dsDNA || PM2
|-
| Durnavirales
| Picobirnaviridae (proposal) || Nonenveloped, isometric || Linear dsRNA ||
|-
| rowspan="13" | Unassigned
| Ampullaviridae || Enveloped, bottle-shaped || Linear dsDNA ||
|-
| Autolykiviridae || Nonenveloped, isometric || Linear dsDNA ||
|-
| Bicaudaviridae || Nonenveloped, lemon-shaped || Circular dsDNA ||
|-
| Clavaviridae || Nonenveloped, rod-shaped || Circular dsDNA ||
|-
| Finnlakeviridae || Nonenveloped, isometric || Circular ssDNA || FLiP
|-
| Fuselloviridae || Nonenveloped, lemon-shaped || Circular dsDNA ||
|-
| Globuloviridae || Enveloped, isometric || Linear dsDNA ||
|-
| Guttaviridae || Nonenveloped, ovoid || Circular dsDNA ||
|-
| Halspiviridae || Nonenveloped, lemon-shaped || Linear dsDNA ||
|-
| Plasmaviridae || Enveloped, pleomorphic || Circular dsDNA ||
|-
| Portogloboviridae || Enveloped, isometric || Circular dsDNA ||
|-
| Thaspiviridae || Nonenveloped, lemon-shaped || Linear dsDNA ||
|-
| Spiraviridae || Nonenveloped, rod-shaped || Circular ssDNA ||
|}

It has been suggested that members of Picobirnaviridae infect bacteria, but not mammals. There are also many unassigned genera of the class Leviviricetes: Chimpavirus, Hohglivirus, Mahrahvirus, Meihzavirus, Nicedsevirus, Sculuvirus, Skrubnovirus, Tetipavirus and Winunavirus, containing linear ssRNA genomes, and the unassigned genus Lilyvirus of the order Caudovirales, containing a linear dsDNA genome.

History In 1896, Ernest Hanbury Hankin reported that something in the waters of the Ganges and Yamuna rivers in India had a marked antibacterial action against cholera and that it could pass through a very fine porcelain filter. In 1915, British bacteriologist Frederick Twort, superintendent of the Brown Institution of London, discovered a small agent that infected and killed bacteria. He believed the agent must be one of the following: a stage in the life cycle of the bacteria; an enzyme produced by the bacteria themselves; or a virus that grew on and destroyed the bacteria. Twort's research was interrupted by the onset of World War I, as well as by a shortage of funding and the discovery of antibiotics. Independently, French-Canadian microbiologist Félix d'Hérelle, working at the Pasteur Institute in Paris, announced on 3 September 1917 that he had discovered "an invisible, antagonistic microbe of the dysentery bacillus". For d'Hérelle, there was no question as to the nature of his discovery: "In a flash I had understood: what caused my clear spots was in fact an invisible microbe… a virus parasitic on bacteria." D'Hérelle called the virus a bacteriophage, a bacteria-eater (from the Greek phagein, meaning "to devour"). He also recorded a dramatic account of a man suffering from dysentery who was restored to good health by the bacteriophages. It was d'Hérelle who conducted much research into bacteriophages and introduced the concept of phage therapy.

Nobel prizes awarded for phage research In 1969, Max Delbrück, Alfred Hershey, and Salvador Luria were awarded the Nobel Prize in Physiology or Medicine for their discoveries of the replication of viruses and their genetic structure. Specifically, the work of Hershey, as a contributor to the Hershey–Chase experiment in 1952, provided convincing evidence that DNA, not protein, was the genetic material of life. Delbrück and Luria carried out the Luria–Delbrück experiment, which demonstrated statistically that mutations in bacteria occur randomly and thus follow Darwinian rather than Lamarckian principles.
Uses Phage therapy Phages were discovered to be antibacterial agents and were used in the former Soviet Republic of Georgia (pioneered there by Giorgi Eliava with help from the co-discoverer of bacteriophages, Félix d'Hérelle) during the 1920s and 1930s for treating bacterial infections. They had widespread use, including treatment of soldiers in the Red Army. However, they were abandoned for general use in the West for several reasons: antibiotics were discovered and marketed widely, and they were easier to make, store, and prescribe; medical trials of phages were carried out, but a basic lack of understanding raised questions about the validity of these trials; and publication of research in the Soviet Union was mainly in the Russian or Georgian languages and, for many years, was not followed internationally. The use of phages has continued since the end of the Cold War in Russia, Georgia and elsewhere in Central and Eastern Europe. The first regulated, randomized, double-blind clinical trial was reported in the Journal of Wound Care in June 2009; it evaluated the safety and efficacy of a bacteriophage cocktail to treat infected venous ulcers of the leg in human patients. The FDA approved the study as a Phase I clinical trial. The study's results demonstrated the safety of therapeutic application of bacteriophages, but did not show efficacy. The authors explained that the use of certain chemicals that are part of standard wound care (e.g. lactoferrin or silver) may have interfered with bacteriophage viability. Shortly after that, another controlled clinical trial in Western Europe (treatment of ear infections caused by Pseudomonas aeruginosa) was reported in the journal Clinical Otolaryngology in August 2009. The study concluded that bacteriophage preparations were safe and effective for treatment of chronic ear infections in humans. Additionally, there have been numerous animal and other experimental clinical trials evaluating the efficacy of bacteriophages for various diseases, such as infected burns and wounds, and cystic fibrosis-associated lung infections, among others. On the other hand, phages of Inoviridae have been shown to complicate biofilms involved in pneumonia and cystic fibrosis and to shelter the bacteria from drugs meant to eradicate disease, thus promoting persistent infection. Meanwhile, bacteriophage researchers have been developing engineered viruses to overcome antibiotic resistance, and engineering the phage genes responsible for coding enzymes that degrade the biofilm matrix, phage structural proteins, and the enzymes responsible for lysis of the bacterial cell wall. There have been results showing that T4 phages, which are small in size and short-tailed, can be helpful in detecting E. coli in the human body.

Therapeutic efficacy of a phage cocktail was evaluated in a mouse model with nasal infection of multidrug-resistant (MDR) A. baumannii. Mice treated with the phage cocktail showed a 2.3-fold higher survival rate than untreated mice at seven days post-infection. In 2017, a patient with a pancreas compromised by MDR A. baumannii was put on several antibiotics; despite this, the patient's health continued to deteriorate over a four-month period. Without effective antibiotics, the patient was subjected to phage therapy using a phage cocktail containing nine different phages that had been demonstrated to be effective against MDR A. baumannii. Once on this therapy, the patient's downward clinical trajectory reversed, and the patient returned to health.
D'Herelle "quickly learned that bacteriophages are found wherever bacteria thrive: in sewers, in rivers that catch waste runoff from pipes, and in the stools of convalescent patients." This includes rivers traditionally thought to have healing powers, including India's Ganges River. Other Food industry – Phages have increasingly been used to safen food products, and to forestall spoilage bacteria. Since 2006, the United States Food and Drug Administration (FDA) and United States Department of Agriculture (USDA) have approved several bacteriophage products. LMP-102 (Intralytix) was approved for treating ready-to-eat (RTE) poultry and meat products. In that same year, the FDA approved LISTEX (developed and produced by Micreos) using bacteriophages on cheese to kill Listeria monocytogenes bacteria, in order to give them generally recognized as safe (GRAS) status. In July 2007, the same bacteriophage were approved for use on all food products. In 2011 USDA confirmed that LISTEX is a clean label processing aid and is included in USDA. Research in the field of food safety is continuing to see if lytic phages are a viable option to control other food-borne pathogens in various food products. Diagnostics – In 2011, the FDA cleared the first bacteriophage-based product for in vitro diagnostic use. The KeyPath MRSA/MSSA Blood Culture Test uses a cocktail of bacteriophage to detect Staphylococcus aureus in positive blood cultures and determine methicillin resistance or susceptibility. The test returns results in about five hours, compared to two to three days for standard microbial identification and susceptibility test methods. It was the first accelerated antibiotic-susceptibility test approved by the FDA. Counteracting bioweapons and toxins – Government agencies in the West have for several years been looking to Georgia and the former Soviet Union for help with exploiting phages for counteracting bioweapons and toxins, such as anthrax and botulism. Developments are continuing among research groups in the U.S. Other uses include spray application in horticulture for protecting plants and vegetable produce from decay and the spread of bacterial disease. Other applications for bacteriophages are as biocides for environmental
activity and may serve as leads for peptidomimetics, i.e. drugs that mimic peptides. Phage-ligand technology makes use of phage proteins for various applications, such as binding of bacteria and bacterial components (e.g. endotoxin) and lysis of bacteria. Basic research – Bacteriophages are important model organisms for studying principles of evolution and ecology.

Detriments Dairy industry Bacteriophages present in the environment can cause cheese to not ferment. To avoid this, mixed-strain starter cultures and culture rotation regimes can be used. Genetic engineering of culture microbes - especially Lactococcus lactis and Streptococcus thermophilus - has been studied for genetic analysis and modification to improve phage resistance. This has especially focused on plasmid and recombinant chromosomal modifications.

Replication The life cycle of bacteriophages tends to be either a lytic cycle or a lysogenic cycle. In addition, some phages display pseudolysogenic behaviors. With lytic phages such as the T4 phage, bacterial cells are broken open (lysed) and destroyed after immediate replication of the virion. As soon as the cell is destroyed, the phage progeny can find new hosts to infect. Lytic phages are more suitable for phage therapy. Some lytic phages undergo a phenomenon known as lysis inhibition, where completed phage progeny will not immediately lyse out of the cell if extracellular phage concentrations are high. This mechanism is not identical to that of the temperate phage going dormant, and it is usually temporary. In contrast, the lysogenic cycle does not result in immediate lysing of the host cell. Those phages able to undergo lysogeny are known as temperate phages. Their viral genome will integrate with host DNA and replicate along with it, relatively harmlessly, or may even become established as a plasmid. The virus remains dormant until host conditions deteriorate, perhaps due to depletion of nutrients; then, the endogenous phages (known as prophages) become active. At this point they initiate the reproductive cycle, resulting in lysis of the host cell. As the lysogenic cycle allows the host cell to continue to survive and reproduce, the virus is replicated in all offspring of the cell. An example of a bacteriophage known to follow both the lysogenic cycle and the lytic cycle is the phage lambda of E. coli. Sometimes prophages may provide benefits to the host bacterium while they are dormant by adding new functions to the bacterial genome, in a phenomenon called lysogenic conversion. Examples are the conversion of harmless strains of Corynebacterium diphtheriae or Vibrio cholerae by bacteriophages to highly virulent ones that cause diphtheria or cholera, respectively. Strategies to combat certain bacterial infections by targeting these toxin-encoding prophages have been proposed.

Attachment and penetration Bacterial cells are protected by a cell wall of polysaccharides, which are important virulence factors protecting bacterial cells against both immune host defenses and antibiotics. Host growth conditions also influence the ability of the phage to attach to and invade bacteria. As phage virions do not move independently, they must rely on random encounters with the correct receptors when in solution, such as blood, lymphatic circulation, irrigation, soil water, etc. Myovirus bacteriophages use a hypodermic syringe-like motion to inject their genetic material into the cell.
After contacting the appropriate receptor, the tail fibers flex to bring the base plate closer to the surface of the cell. This is known as reversible binding. Once attached completely, irreversible binding is initiated and the tail contracts, possibly with the help of ATP present in the tail, injecting genetic material through the bacterial membrane. The injection is accomplished through a sort of bending motion in the shaft: going to the side, contracting closer to the cell and pushing back up. Podoviruses lack an elongated tail sheath like that of a myovirus, so instead they use their small, tooth-like tail fibers enzymatically to degrade a portion of the cell membrane before inserting their genetic material.

Synthesis of proteins and nucleic acid Within minutes, bacterial ribosomes start translating viral mRNA into protein. For RNA-based phages, RNA replicase is synthesized early in the process. Proteins modify the bacterial RNA polymerase so it preferentially transcribes viral mRNA. The host's normal synthesis of proteins and nucleic acids is disrupted, and it is forced to manufacture viral products instead. These products go on to become part of new virions within the cell, helper proteins that contribute to the assemblage of new virions, or proteins involved in cell lysis. In 1972, Walter Fiers (University of Ghent, Belgium) was the first to establish the complete nucleotide sequence of a gene and, in 1976, of the viral genome of bacteriophage MS2. Some dsDNA bacteriophages encode ribosomal proteins, which are thought to modulate protein translation during phage infection.

Virion assembly In the case of the T4 phage, the construction of new virus particles involves the assistance of helper proteins that act catalytically during phage morphogenesis. The base plates are assembled first, with the tails being built upon them afterward. The head capsids, constructed separately, will spontaneously assemble with the tails. During assembly of the phage T4 virion, the morphogenetic proteins encoded by the phage genes interact with each other in a characteristic sequence. Maintaining an appropriate balance in the amounts of each of these proteins produced during viral infection appears to be critical for normal phage T4 morphogenesis. The DNA is packed efficiently within the heads. The whole process takes about 15 minutes.

Release of virions Phages may be released via cell lysis, by extrusion, or, in a few cases, by budding. Lysis, by tailed phages, is achieved by an enzyme called endolysin, which attacks and breaks down the cell wall peptidoglycan. An altogether different phage type, the filamentous phage, makes the host cell continually secrete new virus particles. Released virions are described as free and, unless defective, are capable of infecting a new bacterium. Budding is associated with certain Mycoplasma phages. In contrast to virion release, phages displaying a lysogenic cycle do not kill the host but, rather, become long-term residents as prophages.

Communication Research in 2017 revealed that the bacteriophage Φ3T makes a short viral protein that signals other bacteriophages to lie dormant instead of killing the host bacterium. Arbitrium is the name given to this protein by the researchers who discovered it.

Genome structure Given the millions of different phages in the environment, phage genomes come in a variety of forms and sizes. RNA phages such as MS2 have the smallest genomes, of only a few kilobases.
However, some DNA phages such as T4 may have large genomes with hundreds of genes; the size and shape of the capsid varies along with the size of the genome. The largest bacteriophage genomes reach a size of 735 kb. Bacteriophage genomes can be highly mosaic, i.e. the genomes of many phage species appear to be composed of numerous individual modules. These modules may be found in other phage species in different arrangements. Mycobacteriophages, bacteriophages with mycobacterial hosts, have provided excellent examples of this mosaicism. In these mycobacteriophages, genetic assortment may be the result of repeated instances of site-specific recombination and illegitimate recombination (the result of phage genome acquisition of bacterial host genetic sequences). Evolutionary mechanisms shaping the genomes of bacterial viruses vary between different families and depend upon the type of the nucleic acid, characteristics of the virion structure, as well as the mode of the viral life cycle. Some marine roseobacter phages contain deoxyuridine (dU) instead of deoxythymidine (dT) in their genomic DNA. There is some evidence that this unusual component is a mechanism to evade bacterial defense mechanisms such as restriction endonucleases and CRISPR/Cas systems, which evolved to recognize and cleave sequences within invading phages, thereby inactivating them. Other phages have long been known to use unusual nucleotides. In 1963, Takahashi and Marmur identified a Bacillus phage that has dU substituting for dT in its genome, and in 1977, Kirnos et al. identified a cyanophage containing 2-aminoadenine (Z) instead of adenine (A).

Systems biology The field of systems biology investigates the complex networks of interactions within an organism, usually using computational tools and modeling. For example, a phage genome that enters a bacterial host cell may express hundreds of phage proteins, which will affect the expression of numerous host genes or the host's metabolism. All of these complex interactions can be described and simulated in computer models. For instance, infection of Pseudomonas aeruginosa by the temperate phage PaP3 changed the expression of 38% (2160/5633) of its host's genes. Many of these effects are probably indirect; hence the challenge becomes to identify the direct interactions among bacteria and phage. Several attempts have been made to map protein–protein interactions among phage and their host. For instance, bacteriophage lambda was found to interact with its host, E. coli, through dozens of interactions. Again, the significance of many of these interactions remains unclear, but these studies suggest that there most likely are several key interactions and many indirect interactions whose role remains uncharacterized.

In the environment Metagenomics has allowed the detection of bacteriophages in water that was not possible previously. Also, bacteriophages have been used in hydrological tracing and modelling in river systems, especially where surface water and groundwater interactions occur. The use of phages is preferred to the more conventional dye marker because they are significantly less absorbed when passing through ground waters and they are readily detected at very low concentrations. Non-polluted water may contain approximately 2×10⁸ bacteriophages per ml. Bacteriophages are thought to contribute extensively to horizontal gene transfer in natural environments, principally via transduction, but also via transformation.
Metagenomics-based studies have also revealed that viromes from a variety of environments harbor antibiotic-resistance genes, including those that could confer multidrug resistance.

In humans Although phages do not infect humans, there are countless phage particles in the human body, given the extensive human microbiome. This phage population has been called the human phageome, including the "healthy gut phageome" (HGP) and the "diseased human phageome" (DHP). The active phageome of a healthy human (i.e., actively replicating as opposed to nonreplicating, integrated prophages) has been estimated to comprise dozens to thousands of different viruses. There is evidence that bacteriophages and bacteria interact in the human gut microbiome both antagonistically and beneficially. Preliminary studies have indicated that common bacteriophages are found on average in 62% of healthy individuals, while their prevalence was reduced by 42% and 54% on average in patients with ulcerative colitis (UC) and Crohn's disease (CD), respectively. Abundance of phages may also decline in the elderly. The most common phages in the human intestine, found worldwide, are crAssphages. CrAssphages are transmitted from mother to child soon after birth, and there is some evidence suggesting that they may also be transmitted locally. Each person develops
and therefore their use is strongly discouraged or prohibited: strong acids (phosphoric, nitric, sulfuric, amidosulfuric, toluenesulfonic acids), of pH < 1, and alkalis (sodium, potassium, calcium hydroxides), of pH > 13, particularly under elevated temperature (above 60 °C), kill bacteria.

Antiseptics As antiseptics (i.e., germicide agents that can be used on the human or animal body, skin, mucoses, wounds and the like), few of the above-mentioned disinfectants can be used, under proper conditions (mainly concentration, pH, temperature and toxicity toward humans and animals). Among them, the important ones are: properly diluted chlorine preparations (e.g. Dakin's solution, 0.5% sodium or potassium hypochlorite solution, pH-adjusted to pH 7–8, or 0.5–1% solution of sodium benzenesulfochloramide (chloramine B)); some iodine preparations, such as iodopovidone in various galenics (ointment, solutions, wound plasters), and, in the past, Lugol's solution; peroxides such as urea perhydrate solutions and pH-buffered 0.1–0.25% peracetic acid solutions; alcohols, with or without antiseptic additives, used mainly for skin antisepsis; weak organic acids such as sorbic acid, benzoic acid, lactic acid and salicylic acid; some phenolic compounds, such as hexachlorophene, triclosan and Dibromol; and cationic surfactants, such as 0.05–0.5% benzalkonium, 0.5–4% chlorhexidine, and 0.1–2% octenidine solutions. Others are generally not applicable as safe antiseptics, either because of their corrosive or toxic nature.

Antibiotics Bactericidal antibiotics kill bacteria; bacteriostatic antibiotics slow their growth or reproduction. Bactericidal antibiotics that inhibit cell wall synthesis include the beta-lactam antibiotics (penicillin derivatives (penams), cephalosporins (cephems), monobactams, and carbapenems) and vancomycin. Also bactericidal are daptomycin, fluoroquinolones, metronidazole, nitrofurantoin, co-trimoxazole, and telithromycin. Aminoglycosidic antibiotics are usually considered bactericidal, although they may be bacteriostatic with some organisms. As of 2004, the distinction between bactericidal and bacteriostatic agents appeared to be clear according to the basic/clinical definition, but this only applies under strict laboratory conditions, and it is important to distinguish microbiological and clinical definitions. The distinction is more arbitrary when agents are categorized in clinical situations. The supposed superiority of bactericidal agents over bacteriostatic agents is of little relevance when treating the vast majority of infections with gram-positive bacteria, particularly in patients with uncomplicated infections and noncompromised immune systems. Bacteriostatic agents have been used effectively for treatments that are considered to require bactericidal activity. Furthermore, some broad classes of antibacterial agents considered bacteriostatic can exhibit bactericidal activity against some bacteria on the basis of in vitro determination of MBC/MIC values. At high concentrations, bacteriostatic agents are often bactericidal against some susceptible organisms. The ultimate guide to treatment of any infection
viewed with the eyes closed. It was to painting and drawing, however, that Gysin devoted his greatest efforts, creating calligraphic works inspired by cursive Japanese "grass" script and Arabic script. Burroughs later stated that "Brion Gysin was the only man I ever respected."

Biography Early years John Clifford Brian Gysin was born at the Canadian military hospital in the grounds of Cliveden, Taplow, England. His mother, Stella Margaret Martin, was a Canadian from Deseronto, Ontario. His father, Leonard Gysin, a captain with the Canadian Expeditionary Force, was killed in action eight months after his son's birth. Stella returned to Canada and settled in Edmonton, Alberta, where her son became "the only Catholic day-boy at an Anglican boarding school". Graduating at fifteen, Gysin was sent to Downside School in Stratton-on-the-Fosse, near Bath, Somerset, in England, a prestigious college run by the Benedictines and known as "the Eton of Catholic public schools". Despite, or because of, attending a Catholic school, Gysin became an atheist.

Surrealism In 1934, he moved to Paris to study La Civilisation Française, an open course given at the Sorbonne, where he made literary and artistic contacts through Marie Berthe Aurenche, Max Ernst's second wife. He joined the Surrealist Group and began frequenting Valentine Hugo, Leonor Fini, Salvador Dalí, Picasso and Dora Maar. A year later, he had his first exhibition at the Galerie Quatre Chemins in Paris with Ernst, Picasso, Hans Arp, Hans Bellmer, Victor Brauner, Giorgio de Chirico, Dalí, Marcel Duchamp, René Magritte, Man Ray and Yves Tanguy. On the day of the preview, however, he was expelled from the Surrealist Group by André Breton, who ordered the poet Paul Éluard to take down his pictures. Gysin was 19 years old. His biographer, John Geiger, suggests the arbitrary expulsion "had the effect of a curse. Years later, he blamed other failures on the Breton incident. It gave rise to conspiracy theories about the powerful interests who seek control of the art world. He gave various explanations for the expulsion, the more elaborate involving 'insubordination' or lèse majesté towards Breton".

After World War II After serving in the U.S. army during World War II, Gysin published a biography of Josiah "Uncle Tom" Henson titled To Master, a Long Goodnight: The History of Slavery in Canada (1946). A gifted draughtsman, he took an 18-month course in the Japanese language (including calligraphy) that would greatly influence his artwork. In 1949, he was among the first Fulbright Fellows. His goal was to research, at the University of Bordeaux and in the Archivo de Indias in Seville, Spain, the history of slavery, a project that he later abandoned. He moved to Tangier, Morocco, after visiting the city with novelist and composer Paul Bowles in 1950. In 1952/3 he met the travel writer and sexual adventurer Anne Cumming, and they remained friends until his death.

Morocco and the Beat Hotel In 1954, in Tangier, Gysin opened a restaurant called The 1001 Nights with his friend Mohamed Hamri, who was the cook. Gysin hired the Master Musicians of Jajouka from the village of Jajouka to perform alongside entertainment that included acrobats, a dancing boy and fire eaters. The musicians performed there for an international clientele that included William S. Burroughs. Gysin lost the business in 1958, and the restaurant closed permanently.
That same year, Gysin returned to Paris, taking lodgings in a flophouse located at 9 rue Gît-le-Cœur that would become famous as the Beat Hotel. Working on a drawing, he discovered a Dada technique by accident: William Burroughs and I first went into techniques of writing, together, back in room No. 15 of the Beat Hotel during the cold Paris spring of 1958... Burroughs was more intent on Scotch-taping his photos together into one great continuum on the wall, where scenes faded and slipped into one another, than occupied with editing the monster manuscript... Naked Lunch appeared and Burroughs disappeared. He kicked his habit with Apomorphine and flew off to London to see Dr Dent, who had first turned him on to the cure. While cutting a mount for a drawing in room No. 15, I sliced through a pile of newspapers with my Stanley blade and thought of what I had said to Burroughs some six months earlier about the necessity for turning painters' techniques directly into writing. I picked up the raw words and began to piece together texts that later appeared as "First Cut-Ups" in Minutes to Go (Two Cities, Paris 1960). When Burroughs returned from London in September 1959, Gysin shared with his friend not only his discovery but also the new techniques he had developed for it. Burroughs then put the techniques to use while completing Naked Lunch, and the experiment dramatically changed the landscape of American literature. Gysin helped Burroughs with the editing of several of his novels, including Interzone, and wrote a script for a film version of Naked Lunch, which was never produced. The pair collaborated on a large manuscript for Grove Press titled The Third Mind but it was determined that it would be
would parlay into a lifetime career, great clumps of ideas, as casually as a locomotive throws off sparks". Later that year, a heavily edited version of his novel, The Last Museum, was published posthumously by Faber & Faber (London) and by Grove Press (New York). As a joke, Gysin had contributed a recipe for marijuana fudge to a cookbook by Alice B. Toklas; it was included for publication, becoming famous under the name Alice B. Toklas brownies.

Burroughs on the Gysin cut-up In a 1966 interview by Conrad Knickerbocker for The Paris Review, William S. Burroughs explained that Brion Gysin was, to his knowledge, "the first to create cut-ups": A friend, Brion Gysin, an American poet and painter, who has lived in Europe for thirty years, was, as far as I know, the first to create cut-ups. His cut-up poem, Minutes to Go, was broadcast by the BBC and later published in a pamphlet. I was in Paris in the summer of 1960; this was after the publication there of Naked Lunch. I became interested in the possibilities of this technique, and I began experimenting myself. Of course, when you think of it, The Waste Land was the first great cut-up collage, and Tristan Tzara had done a bit along the same lines. Dos Passos used the same idea in 'The Camera Eye' sequences in USA. I felt I had been working toward the same goal; thus it was a major revelation to me when I actually saw it being done.

Influence According to José Férez Kuri, author of Brion Gysin: Tuning in to the Multimedia Age (2003) and co-curator of a major retrospective of the artist's work at The Edmonton Art Gallery in 1998, Gysin's wide range of "radical ideas would become a source of inspiration for artists of the Beat Generation, as well as for their successors (among them David Bowie, Mick Jagger, Keith Haring, and Laurie Anderson)". Other artists influenced by Gysin include Genesis P-Orridge, John Zorn (as displayed on the 2013 album Dreamachines) and Brian Jones.

Selected bibliography Gysin is the subject of John Geiger's biography, Nothing Is True Everything Is Permitted: The Life of Brion Gysin, and features in Chapel of Extreme Experience: A Short History of Stroboscopic Light and the Dream Machine, also by Geiger. Man From Nowhere: Storming the Citadels of Enlightenment with William Burroughs and Brion Gysin, a biographical study of Burroughs and Gysin with a collection of homages to Gysin, was authored by Joe Ambrose, Frank Rynne, and Terry Wilson, with contributions by Marianne Faithfull, John Cale, William S. Burroughs, John Giorno, Stanley Booth, Bill Laswell, Mohamed Hamri, Keith Haring and Paul Bowles. A monograph on Gysin was published in 2003 by Thames and Hudson.

Works Prose To Master, A Long Goodnight: The History of Slavery in Canada (1946) Minutes to Go (1960) The Exterminator (1960) The Process (1969) Brion Gysin Let The Mice In (1973) The Third Mind (1978), with William S.
Burroughs Here To Go: Planet R-101 (first published 1982) Stories (1984) The Last Museum (1985) Radio Pistol Poem (1960) Permutations (1960) I Am (1960) No Poets (1962) Junk is No Good Baby (1962) Cinema Scenario to Naked Lunch (1973) Music Songs (hat ART, 181) with Steve Lacy Junk (1985) Self-Portrait Jumping (with Ramuntcho Matta, Don Cherry, Steve Lacy) (1993) Painting Les deux faux interlocuteurs, Gradiva Rediviva Zoe Bertgang, and Signe dans le paysage (Surrealist ink drawings, 1935) Sahara Sand (1958) The Songs of Marrakech (1959) Unit II pink, Unit III yellow, Unit IV orange, Unit V blue (1961) Francis in the Beat Hotel (1962) For a Stained-Glass Window in Rheims (1963) Roller Poem (1971) Calligraffiti of Fire (1986) Sources Print Primary sources Secondary sources Morgan, Ted. Literary Outlaw: The Life and Times of William S. Burroughs. New York and London: W.W. Norton & Company, 1988, 2012. Kuri, José Férez, ed. Brion Gysin: Tuning in to the Multimedia Age. London: Thames & Hudson, 2003. Geiger, John. Nothing Is True Everything Is Permitted: The Life of Brion Gysin. Disinformation Company, 2005. Geiger, John. Chapel of Extreme Experience: A Short History of Stroboscopic Light and the Dream Machine. Soft Skull Press, 2003. Ambrose, Joe, Frank Rynne, and Terry Wilson. Man From Nowhere: Storming the Citadels of Enlightenment with William Burroughs and Brion Gysin. Williamsburg: Autonomedia, 1992 Vale, V. William Burroughs, Brion Gysin, Throbbing Gristle. San Francisco: V/Search, 1982. See also Asemic writing Brian Jones Presents The Pipes Of Pan at Jajouka References External links UBU Sound Article on Brion Gysin briongysin.com article What Does Brion Gysin's Art Mean? Cutup The Burroughs & Gysin Non-Linear Adding Machine The Master Musicians of Jajouka led by Bachir Attar Official website Master Musicians of Joujouka Official website Village Voice Review of Back in No Time: A Brion Gysin Reader (2001) Perilous Passages Terry Wilson's account of his "lifetime apprenticeship" with Brion Gysin William Burroughs's letter on Gysin and Jajouka Interzone documentation on the Dream Machine and free Dream Machine plans Official website of FLicKeR a film on Brion Gysin and the Dream Machine based on Geiger's book
of Southeastern Europe See also: List of Bulgarians; Bulgarian name, names of Bulgarians; Bulgarian umbrella, an umbrella with a hidden pneumatic mechanism; Bulgar (disambiguation); Bulgarian-Serbian
refer to: something of, from, or related to the country of Bulgaria; Bulgarians, a South Slavic ethnic group; or Bulgarian language, a Slavic language
demonstrated that the BCG vaccine reduced infections by 19–27% and reduced progression to active tuberculosis by 71%. The studies included in this review were limited to those that used interferon gamma release assay. The duration of protection of BCG is not clearly known. In those studies showing a protective effect, the data are inconsistent. The MRC study showed protection waned to 59% after 15 years and to zero after 20 years; however, a study looking at Native Americans immunized in the 1930s found evidence of protection even 60 years after immunization, with only a slight waning in efficacy. BCG seems to have its greatest effect in preventing miliary tuberculosis or tuberculosis meningitis, so it is still extensively used even in countries where efficacy against pulmonary tuberculosis is negligible. The 100th anniversary of BCG was in 2021. It remains the only vaccine licensed against tuberculosis, which is an ongoing pandemic. Tuberculosis elimination is a goal of the World Health Organization (WHO), although the development of new vaccines with greater efficacy against adult pulmonary tuberculosis may be needed to make substantial progress. Efficacy A number of possible reasons for the variable efficacy of BCG in different countries have been proposed. None has been proven, some have been disproved, and none can explain the lack of efficacy in both low tuberculosis-burden countries (US) and high tuberculosis-burden countries (India). The reasons for variable efficacy have been discussed at length in a WHO document on BCG. Genetic variation in BCG strains: Genetic variation in the BCG strains used may explain the variable efficacy reported in different trials. Genetic variation in populations: Differences in genetic make-up of different populations may explain the difference in efficacy. The Birmingham BCG trial was published in 1988. The trial, based in Birmingham, United Kingdom, examined children born to families who originated from the Indian subcontinent (where vaccine efficacy had previously been shown to be zero). The trial showed a 64% protective effect, which is very similar to the figure derived from other UK trials, thus arguing against the genetic variation hypothesis. Interference by nontuberculous mycobacteria: Exposure to environmental mycobacteria (especially Mycobacterium avium, Mycobacterium marinum and Mycobacterium intracellulare) results in a nonspecific immune response against mycobacteria. Administering BCG to someone who already has a nonspecific immune response against mycobacteria does not augment the response already there. BCG will, therefore, appear not to be efficacious because that person already has a level of immunity and BCG is not adding to that immunity. This effect is called masking because the effect of BCG is masked by environmental mycobacteria. Clinical evidence for this effect was found in a series of studies performed in parallel in adolescent school children in the UK and Malawi. In this study, the UK school children had a low baseline cellular immunity to mycobacteria which was increased by BCG; in contrast, the Malawi school children had a high baseline cellular immunity to mycobacteria and this was not significantly increased by BCG. Whether this natural immune response is protective is not known. An alternative explanation is suggested by mouse studies; immunity against mycobacteria stops BCG from replicating and so stops it from producing an immune response. This is called the block hypothesis. 
Interference by concurrent parasitic infection: In another hypothesis, simultaneous infection with parasites changes the immune response to BCG, making it less effective. As a Th1 response is required for an effective immune response to tuberculous infection, concurrent infection with various parasites produces a simultaneous Th2 response, which blunts the effect of BCG.

Mycobacteria BCG has protective effects against some nontuberculous mycobacteria. Leprosy: BCG has a protective effect against leprosy in the range of 20 to 80%. Buruli ulcer: BCG may protect against or delay the onset of Buruli ulcer.

Cancer BCG has been one of the most successful immunotherapies. BCG vaccine has been the "standard of care for patients with bladder cancer (NMIBC)" since 1977. By 2014, more than eight different biosimilar agents or strains were considered for the treatment of nonmuscle-invasive bladder cancer. A number of cancer vaccines use BCG as an additive to provide an initial stimulation of the person's immune system. BCG is used in the treatment of superficial forms of bladder cancer. Since the late 1970s, evidence has become available that instillation of BCG into the bladder is an effective form of immunotherapy in this disease. While the mechanism is unclear, it appears a local immune reaction is mounted against the tumor. Immunotherapy with BCG prevents recurrence in up to 67% of cases of superficial bladder cancer. BCG has been evaluated in a number of studies as a therapy for colorectal cancer. The US biotech company Vaccinogen is evaluating BCG as an adjuvant to autologous tumour cells used as a cancer vaccine in stage II colon cancer.

Method of administration A tuberculin skin test is usually carried out before administering BCG. A reactive tuberculin skin test is a contraindication to BCG due to the risk of severe local inflammation and scarring; it does not indicate any immunity. BCG is also contraindicated in certain people who have IL-12 receptor pathway defects. BCG is given as a single intradermal injection at the insertion of the deltoid. If BCG is accidentally given subcutaneously, then a local abscess may form (a "BCG-oma") that can sometimes ulcerate and may require immediate treatment with antibiotics; without treatment, the infection could spread, causing severe damage to vital organs. An abscess is not always associated with incorrect administration, and it is one of the more common complications that can occur with the vaccination. Numerous medical studies on treatment of these abscesses with antibiotics have been done with varying results, but the consensus is that, once pus is aspirated and analysed, provided no unusual bacilli are present, the abscess will generally heal on its own in a matter of weeks. The characteristic raised scar that BCG immunization leaves is often used as proof of prior immunization. This scar must be distinguished from that of smallpox vaccination, which it may resemble. When given for bladder cancer, the vaccine is not injected through the skin, but is instilled into the bladder through the urethra using a soft catheter.

Adverse effects BCG immunization generally causes some pain and scarring at the site of injection. The main adverse effects are keloids—large, raised scars. The insertion region of the deltoid muscle is most frequently used because the local complication rate is smallest when that site is used. Nonetheless, the buttock is an alternative site of administration because it provides better cosmetic outcomes.
BCG vaccine should be given intradermally. If given subcutaneously, it may induce local infection and spread to the regional lymph nodes, causing either suppurative (pus-producing) or nonsuppurative lymphadenitis. Conservative management is usually adequate for nonsuppurative lymphadenitis. If suppuration occurs, it may need needle aspiration. For nonresolving suppuration, surgical excision may be required. Evidence for the treatment of these complications is scarce. Uncommonly, breast and gluteal abscesses can occur due to haematogenous (carried by the blood) and lymphangiomatous spread. Regional bone infection (BCG osteomyelitis or osteitis) and disseminated BCG infection are rare complications of BCG vaccination, but potentially life-threatening. Systemic antituberculous therapy may be helpful in severe complications. When BCG is used for bladder cancer, around 2.9% of treated patients discontinue immunotherapy due to a genitourinary or systemic BCG-related infection; however, while symptomatic bladder BCG infection is frequent, involvement of other organs is very uncommon. When systemic involvement occurs, the liver and lungs are the first organs to be affected (a median of one week after the last BCG instillation). If BCG is accidentally given to an immunocompromised patient (e.g., an infant with severe combined immune deficiency), it can cause disseminated or life-threatening infection. The documented incidence of this happening is less than one per million immunizations given. In 2007, the WHO stopped recommending BCG for infants with HIV, even if the risk of exposure to tuberculosis is high, because of the risk of disseminated BCG infection (which is roughly 400 per 100,000 in that higher-risk context).

Usage The age of the person and the frequency with which BCG is given have always varied from country to country. The WHO currently recommends childhood BCG for all countries with a high incidence of tuberculosis and/or a high leprosy burden. This is a partial list of historic and current BCG practice around the globe. A complete atlas of past and present practice has been generated.

Americas Brazil introduced universal BCG immunization in 1967–1968, and the practice continues to this day. According to Brazilian law, BCG is given again to professionals of the health sector and to people close to patients with tuberculosis or leprosy. Canadian Indigenous communities currently receive the BCG vaccine, and in the province of Quebec the vaccine was offered to children until the mid-1970s. Most countries in Central and South America have universal BCG immunizations. The United States has never used mass immunization of BCG due to the rarity of tuberculosis in the US, relying instead on the detection and treatment of latent tuberculosis.

Europe Asia China: Introduced in the 1930s. Increasingly widespread after 1949. Majority inoculated by 1979. South Korea, Singapore, Taiwan and Malaysia: In these countries, BCG was given at birth and again at age 12. In Malaysia and Singapore, this policy was changed in 2001 to once only, at birth. South Korea stopped re-vaccination in 2008. Hong Kong: BCG is given to all newborns. Japan: In Japan, BCG was introduced in 1951, given typically at age 6. From 2005, it is administered between five and eight months after birth, and no later than a child's first birthday.
BCG was administered no later than the fourth birthday until 2005, and no later than six months from birth from 2005 to 2012; the schedule was changed in 2012 due to reports of osteitis side effects from vaccinations at 3–4 months. Some municipalities recommend an earlier immunization schedule. Thailand: In Thailand, the BCG vaccine is given routinely at birth. India and Pakistan: India and Pakistan introduced BCG mass immunization in 1948, the first countries outside Europe to do so. In 2015, millions of infants in Pakistan were denied the BCG vaccine for the first time, due to a global shortage. Mongolia: All newborns are vaccinated with BCG. Previously, the vaccine was also given at ages 8 and 15, although this is no longer common practice. Philippines: BCG vaccination started in the Philippines in 1979 with the Expanded Program on Immunization. Sri Lanka: The national policy of Sri Lanka is to give BCG vaccination to all newborn