Datasets: de-francophones committed
Commit 0b828a4 • Parent(s): 457df53
94603eeeccb39c6fb2d027e790330c00c3c2ee0783ce43923ecec551a2bc2584
Browse files
- en/4194.html.txt +135 -0
- en/4195.html.txt +163 -0
- en/4196.html.txt +183 -0
- en/4197.html.txt +183 -0
- en/4198.html.txt +183 -0
- en/4199.html.txt +31 -0
- en/42.html.txt +108 -0
- en/420.html.txt +191 -0
- en/4200.html.txt +31 -0
- en/4201.html.txt +150 -0
- en/4202.html.txt +150 -0
- en/4203.html.txt +35 -0
- en/4204.html.txt +35 -0
- en/4205.html.txt +130 -0
- en/4206.html.txt +17 -0
- en/4207.html.txt +31 -0
- en/4208.html.txt +181 -0
- en/4209.html.txt +181 -0
- en/421.html.txt +191 -0
- en/4210.html.txt +1 -0
- en/4211.html.txt +231 -0
- en/4212.html.txt +117 -0
- en/4213.html.txt +117 -0
- en/4214.html.txt +190 -0
- en/4215.html.txt +190 -0
- en/4216.html.txt +231 -0
- en/4217.html.txt +231 -0
- en/4218.html.txt +117 -0
- en/4219.html.txt +117 -0
- en/422.html.txt +191 -0
- en/4220.html.txt +117 -0
- en/4221.html.txt +117 -0
- en/4222.html.txt +108 -0
- en/4223.html.txt +255 -0
- en/4224.html.txt +199 -0
- en/4225.html.txt +199 -0
- en/4226.html.txt +111 -0
- en/4227.html.txt +111 -0
- en/4228.html.txt +108 -0
- en/4229.html.txt +252 -0
- en/423.html.txt +91 -0
- en/4230.html.txt +252 -0
- en/4231.html.txt +252 -0
- en/4232.html.txt +9 -0
- en/4233.html.txt +252 -0
- en/4234.html.txt +252 -0
- en/4235.html.txt +242 -0
- en/4236.html.txt +78 -0
- en/4237.html.txt +141 -0
- en/4238.html.txt +880 -0
en/4194.html.txt
ADDED
@@ -0,0 +1,135 @@
In cell biology, the nucleus (pl. nuclei; from Latin nucleus or nuculeus, meaning kernel or seed) is a membrane-bound organelle found in eukaryotic cells. Eukaryotes usually have a single nucleus, but a few cell types, such as mammalian red blood cells, have no nuclei, and a few others including osteoclasts have many.
The cell nucleus contains all of the cell's genome, except for a small fraction of mitochondrial DNA, organized as multiple long linear DNA molecules in a complex with a large variety of proteins, such as histones, to form chromosomes. The genes within these chromosomes are structured in such a way as to promote cell function. The nucleus maintains the integrity of genes and controls the activities of the cell by regulating gene expression; the nucleus is, therefore, the control center of the cell. The main structures making up the nucleus are the nuclear envelope, a double membrane that encloses the entire organelle and isolates its contents from the cellular cytoplasm, and the nuclear matrix (which includes the nuclear lamina), a network within the nucleus that adds mechanical support, much like the cytoskeleton, which supports the cell as a whole.
Because the nuclear envelope is impermeable to large molecules, nuclear pores are required to regulate nuclear transport of molecules across the envelope. The pores cross both nuclear membranes, providing a channel through which larger molecules must be actively transported by carrier proteins while allowing free movement of small molecules and ions. Movement of large molecules such as proteins and RNA through the pores is required for both gene expression and the maintenance of chromosomes. Although the interior of the nucleus does not contain any membrane-bound subcompartments, its contents are not uniform, and a number of nuclear bodies exist, made up of unique proteins, RNA molecules, and particular parts of the chromosomes. The best-known of these is the nucleolus, which is mainly involved in the assembly of ribosomes. After being produced in the nucleolus, ribosomes are exported to the cytoplasm where they translate mRNA.
The nucleus was the first organelle to be discovered. What is most likely the oldest preserved drawing of it dates back to the early microscopist Antonie van Leeuwenhoek (1632–1723). He observed a "lumen", the nucleus, in the red blood cells of salmon.[1] Unlike mammalian red blood cells, those of other vertebrates still contain nuclei.
The nucleus was also described by Franz Bauer in 1804[2] and in more detail in 1831 by Scottish botanist Robert Brown in a talk at the Linnean Society of London. Brown was studying orchids under the microscope when he observed an opaque area, which he called the "areola" or "nucleus", in the cells of the flower's outer layer.[3]
He did not suggest a potential function. In 1838, Matthias Schleiden proposed that the nucleus plays a role in generating cells, thus he introduced the name "cytoblast" ("cell builder"). He believed that he had observed new cells assembling around "cytoblasts". Franz Meyen was a strong opponent of this view, having already described cells multiplying by division and believing that many cells would have no nuclei. The idea that cells can be generated de novo, by the "cytoblast" or otherwise, contradicted work by Robert Remak (1852) and Rudolf Virchow (1855) who decisively propagated the new paradigm that cells are generated solely by cells ("Omnis cellula e cellula"). The function of the nucleus remained unclear.[4]
Between 1877 and 1878, Oscar Hertwig published several studies on the fertilization of sea urchin eggs, showing that the nucleus of the sperm enters the oocyte and fuses with its nucleus. This was the first time it was suggested that an individual develops from a (single) nucleated cell. This was in contradiction to Ernst Haeckel's theory that the complete phylogeny of a species would be repeated during embryonic development, including generation of the first nucleated cell from a "monerula", a structureless mass of primordial protoplasm ("Urschleim"). Therefore, the necessity of the sperm nucleus for fertilization was discussed for quite some time. However, Hertwig confirmed his observation in other animal groups, including amphibians and molluscs. Eduard Strasburger produced the same results for plants in 1884. This paved the way to assign the nucleus an important role in heredity. In 1873, August Weismann postulated the equivalence of the maternal and paternal germ cells for heredity. The function of the nucleus as carrier of genetic information became clear only later, after mitosis was discovered and the Mendelian rules were rediscovered at the beginning of the 20th century; the chromosome theory of heredity was therefore developed.[4]
The nucleus is the largest organelle in animal cells.[5]
In mammalian cells, the average diameter of the nucleus is approximately 6 micrometres (µm), which occupies about 10% of the total cell volume.[6] The contents of the nucleus are held in the nucleoplasm, similar to the cytoplasm in the rest of the cell. The fluid component of this is termed the nucleosol, similar to the cytosol in the cytoplasm.[7]
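The figures above can be sanity-checked with simple sphere geometry. This is a rough back-of-the-envelope sketch that idealizes the nucleus as a sphere, which real nuclei only approximate:

```python
import math

def sphere_volume(diameter_um: float) -> float:
    """Volume (in µm^3) of a sphere with the given diameter (in µm)."""
    r = diameter_um / 2
    return 4 / 3 * math.pi * r ** 3

# Figures from the text: ~6 µm nuclear diameter, ~10% of total cell volume.
nucleus_vol = sphere_volume(6.0)        # ≈ 113 µm^3
implied_cell_vol = nucleus_vol / 0.10   # ≈ 1131 µm^3

print(f"nuclear volume:      {nucleus_vol:.0f} µm^3")
print(f"implied cell volume: {implied_cell_vol:.0f} µm^3")
```

The implied cell volume of roughly 1000 µm³ is consistent in order of magnitude with typical mammalian cells, which is all this idealized estimate can show.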
In most types of granulocyte, a white blood cell, the nucleus is lobated and can be bi-lobed, tri-lobed or multi-lobed.
The dynamic behaviour of structures in the nucleus, such as the nuclear rotation that occurs prior to mitosis, can be visualized using label-free live cell imaging.[8]
The nuclear envelope, otherwise known as nuclear membrane, consists of two cellular membranes, an inner and an outer membrane, arranged parallel to one another and separated by 10 to 50 nanometres (nm). The nuclear envelope completely encloses the nucleus and separates the cell's genetic material from the surrounding cytoplasm, serving as a barrier to prevent macromolecules from diffusing freely between the nucleoplasm and the cytoplasm.[9] The outer nuclear membrane is continuous with the membrane of the rough endoplasmic reticulum (RER), and is similarly studded with ribosomes.[9] The space between the membranes is called the perinuclear space and is continuous with the RER lumen.
Nuclear pores, which provide aqueous channels through the envelope, are composed of multiple proteins, collectively referred to as nucleoporins. The pores are about 125 million daltons in molecular weight and consist of around 50 (in yeast) to several hundred proteins (in vertebrates).[5] The pores are 100 nm in total diameter; however, the gap through which molecules freely diffuse is only about 9 nm wide, due to the presence of regulatory systems within the center of the pore. This size selectively allows the passage of small water-soluble molecules while preventing larger molecules, such as nucleic acids and larger proteins, from inappropriately entering or exiting the nucleus. These large molecules must be actively transported into the nucleus instead. The nucleus of a typical mammalian cell will have about 3000 to 4000 pores throughout its envelope,[10] each of which contains an eightfold-symmetric ring-shaped structure at a position where the inner and outer membranes fuse.[11] Attached to the ring is a structure called the nuclear basket that extends into the nucleoplasm, and a series of filamentous extensions that reach into the cytoplasm. Both structures serve to mediate binding to nuclear transport proteins.[5]
Most proteins, ribosomal subunits, and some RNAs are transported through the pore complexes in a process mediated by a family of transport factors known as karyopherins. Those karyopherins that mediate movement into the nucleus are also called importins, whereas those that mediate movement out of the nucleus are called exportins. Most karyopherins interact directly with their cargo, although some use adaptor proteins.[12] Steroid hormones such as cortisol and aldosterone, as well as other small lipid-soluble molecules involved in intercellular signaling, can diffuse through the cell membrane and into the cytoplasm, where they bind nuclear receptor proteins that are trafficked into the nucleus. There they serve as transcription factors when bound to their ligand; in the absence of a ligand, many such receptors function as histone deacetylases that repress gene expression.[5]
In animal cells, two networks of intermediate filaments provide the nucleus with mechanical support: The nuclear lamina forms an organized meshwork on the internal face of the envelope, while less organized support is provided on the cytosolic face of the envelope. Both systems provide structural support for the nuclear envelope and anchoring sites for chromosomes and nuclear pores.[13]
The nuclear lamina is composed mostly of lamin proteins. Like all proteins, lamins are synthesized in the cytoplasm and later transported to the nucleus interior, where they are assembled before being incorporated into the existing network of nuclear lamina.[14][15] Lamin-associated proteins found on the cytosolic face of the membrane, such as emerin and nesprin, bind to the cytoskeleton to provide structural support. Lamins are also found inside the nucleoplasm, where they form another regular structure, known as the nucleoplasmic veil,[16] that is visible using fluorescence microscopy. The actual function of the veil is not clear, although it is excluded from the nucleolus and is present during interphase.[17] Lamin structures that make up the veil, such as LEM3, bind chromatin, and disrupting their structure inhibits transcription of protein-coding genes.[18]
Like the components of other intermediate filaments, the lamin monomer contains an alpha-helical domain used by two monomers to coil around each other, forming a dimer structure called a coiled coil. Two of these dimer structures then join side by side, in an antiparallel arrangement, to form a tetramer called a protofilament. Eight of these protofilaments form a lateral arrangement that is twisted to form a ropelike filament. These filaments can be assembled or disassembled in a dynamic manner, meaning that changes in the length of the filament depend on the competing rates of filament addition and removal.[13]
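The assembly hierarchy just described (two monomers per dimer, two dimers per tetrameric protofilament, eight protofilaments per filament) implies a fixed monomer count per filament cross-section. A trivial sketch of that count:

```python
# Multiplicative steps of the lamin assembly hierarchy described above.
MONOMERS_PER_DIMER = 2         # two monomers coil into a coiled-coil dimer
DIMERS_PER_PROTOFILAMENT = 2   # two dimers join antiparallel into a tetramer
PROTOFILAMENTS_PER_FILAMENT = 8

monomers_per_cross_section = (
    MONOMERS_PER_DIMER * DIMERS_PER_PROTOFILAMENT * PROTOFILAMENTS_PER_FILAMENT
)
print(monomers_per_cross_section)  # 32
```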
Mutations in lamin genes leading to defects in filament assembly cause a group of rare genetic disorders known as laminopathies. The most notable laminopathy is the family of diseases known as progeria, which causes the appearance of premature aging in its sufferers. The exact mechanism by which the associated biochemical changes give rise to the aged phenotype is not well understood.[19]
The cell nucleus contains the majority of the cell's genetic material in the form of multiple linear DNA molecules organized into structures called chromosomes. Each human cell contains roughly two meters of DNA. During most of the cell cycle these are organized in a DNA-protein complex known as chromatin, and during cell division the chromatin can be seen to form the well-defined chromosomes familiar from a karyotype. A small fraction of the cell's genes are located instead in the mitochondria.
There are two types of chromatin. Euchromatin is the less compact DNA form, and contains genes that are frequently expressed by the cell.[20] The other type, heterochromatin, is the more compact form, and contains DNA that is infrequently transcribed. This structure is further categorized into facultative heterochromatin, consisting of genes that are organized as heterochromatin only in certain cell types or at certain stages of development, and constitutive heterochromatin that consists of chromosome structural components such as telomeres and centromeres.[21] During interphase the chromatin organizes itself into discrete individual patches,[22] called chromosome territories.[23] Active genes, which are generally found in the euchromatic region of the chromosome, tend to be located towards the chromosome's territory boundary.[24]
Antibodies to certain types of chromatin organization, in particular, nucleosomes, have been associated with a number of autoimmune diseases, such as systemic lupus erythematosus.[25] These are known as anti-nuclear antibodies (ANA) and have also been observed in concert with multiple sclerosis as part of general immune system dysfunction.[26] As in the case of progeria, the role played by the antibodies in inducing the symptoms of autoimmune diseases is not obvious.
The nucleolus is the largest of the discrete densely stained, membraneless structures known as nuclear bodies found in the nucleus. It forms around tandem repeats of rDNA, DNA coding for ribosomal RNA (rRNA). These regions are called nucleolar organizer regions (NOR). The main roles of the nucleolus are to synthesize rRNA and assemble ribosomes. The structural cohesion of the nucleolus depends on its activity, as ribosomal assembly in the nucleolus results in the transient association of nucleolar components, facilitating further ribosomal assembly, and hence further association. This model is supported by observations that inactivation of rDNA results in intermingling of nucleolar structures.[27]
In the first step of ribosome assembly, a protein called RNA polymerase I transcribes rDNA, which forms a large pre-rRNA precursor. This is cleaved into the subunits 5.8S, 18S, and 28S rRNA.[28] The transcription, post-transcriptional processing, and assembly of rRNA occurs in the nucleolus, aided by small nucleolar RNA (snoRNA) molecules, some of which are derived from spliced introns of messenger RNAs encoding proteins related to ribosomal function. The assembled ribosomal subunits are the largest structures passed through the nuclear pores.[5]
When observed under the electron microscope, the nucleolus can be seen to consist of three distinguishable regions: the innermost fibrillar centers (FCs), surrounded by the dense fibrillar component (DFC) (that contains fibrillarin and nucleolin), which in turn is bordered by the granular component (GC) (that contains the protein nucleophosmin). Transcription of the rDNA occurs either in the FC or at the FC-DFC boundary, and, therefore, when rDNA transcription in the cell is increased, more FCs are detected. Most of the cleavage and modification of rRNAs occurs in the DFC, while the latter steps involving protein assembly onto the ribosomal subunits occur in the GC.[28]
Besides the nucleolus, the nucleus contains a number of other nuclear bodies. These include Cajal bodies, gemini of Cajal bodies, polymorphic interphase karyosomal association (PIKA), promyelocytic leukaemia (PML) bodies, paraspeckles, and splicing speckles. Although little is known about a number of these domains, they are significant in that they show that the nucleoplasm is not a uniform mixture, but rather contains organized functional subdomains.[32]
Other subnuclear structures appear as part of abnormal disease processes. For example, the presence of small intranuclear rods has been reported in some cases of nemaline myopathy. This condition typically results from mutations in actin, and the rods themselves consist of mutant actin as well as other cytoskeletal proteins.[34]
A nucleus typically contains between 1 and 10 compact structures called Cajal bodies or coiled bodies (CB), whose diameter measures between 0.2 µm and 2.0 µm depending on the cell type and species.[29] When seen under an electron microscope, they resemble balls of tangled thread[31] and are dense foci of distribution for the protein coilin.[35] CBs are involved in a number of different roles relating to RNA processing, specifically small nucleolar RNA (snoRNA) and small nuclear RNA (snRNA) maturation, and histone mRNA modification.[29]
Similar to Cajal bodies are Gemini of Cajal bodies, or gems, whose name is derived from the Gemini constellation in reference to their close "twin" relationship with CBs. Gems are similar in size and shape to CBs, and in fact are virtually indistinguishable under the microscope.[35] Unlike CBs, gems do not contain small nuclear ribonucleoproteins (snRNPs), but do contain a protein called survival of motor neuron (SMN) whose function relates to snRNP biogenesis. Gems are believed to assist CBs in snRNP biogenesis,[36] though it has also been suggested from microscopy evidence that CBs and gems are different manifestations of the same structure.[35] Later ultrastructural studies have shown gems to be twins of Cajal bodies with the difference being in the coilin component; Cajal bodies are SMN positive and coilin positive, and gems are SMN positive and coilin negative.[37]
PIKA domains, or polymorphic interphase karyosomal associations, were first described in microscopy studies in 1991. Their function remains unclear, though they were not thought to be associated with active DNA replication, transcription, or RNA processing.[38] They have been found to often associate with discrete domains defined by dense localization of the transcription factor PTF, which promotes transcription of small nuclear RNA (snRNA).[39]
Promyelocytic leukemia bodies (PML bodies) are spherical bodies found scattered throughout the nucleoplasm, measuring around 0.1–1.0 µm. They are known by a number of other names, including nuclear domain 10 (ND10), Kremer bodies, and PML oncogenic domains.[40] PML bodies are named after one of their major components, the promyelocytic leukemia protein (PML). They are often seen in the nucleus in association with Cajal bodies and cleavage bodies.[32] Pml-/- mice, which are unable to create PML bodies, develop normally without obvious ill effects, showing that PML bodies are not required for most essential biological processes.[41]
Speckles are subnuclear structures that are enriched in pre-messenger RNA splicing factors and are located in the interchromatin regions of the nucleoplasm of mammalian cells. At the fluorescence-microscope level they appear as irregular, punctate structures, which vary in size and shape, and when examined by electron microscopy they are seen as clusters of interchromatin granules. Speckles are dynamic structures, and both their protein and RNA-protein components can cycle continuously between speckles and other nuclear locations, including active transcription sites. Studies on the composition, structure and behaviour of speckles have provided a model for understanding the functional compartmentalization of the nucleus and the organization of the gene-expression machinery.[42] Speckles store splicing snRNPs[43][44] and other splicing proteins necessary for pre-mRNA processing.[42] Because of a cell's changing requirements, the composition and location of these bodies changes according to mRNA transcription and regulation via phosphorylation of specific proteins.[45]
The splicing speckles are also known as nuclear speckles (nuclear specks), splicing factor compartments (SF compartments), interchromatin granule clusters (IGCs), and B snurposomes.[46]
B snurposomes are found in the amphibian oocyte nuclei and in Drosophila melanogaster embryos. B snurposomes appear alone or attached to the Cajal bodies in the electron micrographs of the amphibian nuclei.[47]
IGCs function as storage sites for the splicing factors.[48]
Discovered by Fox et al. in 2002, paraspeckles are irregularly shaped compartments in the interchromatin space of the nucleus.[49] First documented in HeLa cells, where there are generally 10–30 per nucleus,[50] paraspeckles are now known to also exist in all human primary cells, transformed cell lines, and tissue sections.[51] Their name is derived from their distribution in the nucleus; the "para" is short for parallel and the "speckles" refers to the splicing speckles to which they are always in close proximity.[50]
Paraspeckles are dynamic structures that are altered in response to changes in cellular metabolic activity. They are transcription dependent[49] and in the absence of RNA Pol II transcription, the paraspeckle disappears and all of its associated protein components (PSP1, p54nrb, PSP2, CFI(m)68, and PSF) form a crescent shaped perinucleolar cap in the nucleolus. This phenomenon is demonstrated during the cell cycle. In the cell cycle, paraspeckles are present during interphase and during all of mitosis except for telophase. During telophase, when the two daughter nuclei are formed, there is no RNA Pol II transcription so the protein components instead form a perinucleolar cap.[51]
Perichromatin fibrils are visible only under the electron microscope. They are located next to the transcriptionally active chromatin and are hypothesized to be the sites of active pre-mRNA processing.[48]
Clastosomes are small nuclear bodies (0.2–0.5 µm) described as having a thick ring-shape due to the peripheral capsule around these bodies.[30] This name is derived from the Greek klastos, broken and soma, body.[30] Clastosomes are not typically present in normal cells, making them hard to detect. They form under high proteolytic conditions within the nucleus and degrade once there is a decrease in activity or if cells are treated with proteasome inhibitors.[30][52] The scarcity of clastosomes in cells indicates that they are not required for proteasome function.[53] Osmotic stress has also been shown to cause the formation of clastosomes.[54] These nuclear bodies contain catalytic and regulatory sub-units of the proteasome and its substrates, indicating that clastosomes are sites for degrading proteins.[53]
The fougaro system (from the Greek fougaro, "chimney") is a sub-organelle system in the nucleus that may be a mechanism to recycle or remove molecules from the cell to the external medium. The molecules or peptides are ubiquitinated before being released from the nucleus of the cells. The ubiquitinated molecules are released independently or associated with endosomal proteins such as Beclin.[55][56]
The nucleus provides a site for genetic transcription that is segregated from the location of translation in the cytoplasm, allowing levels of gene regulation that are not available to prokaryotes. The main function of the cell nucleus is to control gene expression and mediate the replication of DNA during the cell cycle.
The nucleus is an organelle found in eukaryotic cells. Inside its fully enclosed nuclear membrane, it contains the majority of the cell's genetic material. This material is organized as DNA molecules, along with a variety of proteins, to form chromosomes.
The nuclear envelope allows the nucleus to control its contents, and separate them from the rest of the cytoplasm where necessary. This is important for controlling processes on either side of the nuclear membrane. In most cases where a cytoplasmic process needs to be restricted, a key participant is removed to the nucleus, where it interacts with transcription factors to downregulate the production of certain enzymes in the pathway. This regulatory mechanism occurs in the case of glycolysis, a cellular pathway for breaking down glucose to produce energy. Hexokinase is an enzyme responsible for the first step of glycolysis, forming glucose-6-phosphate from glucose. At high concentrations of fructose-6-phosphate, a molecule made later from glucose-6-phosphate, a regulator protein relocates hexokinase to the nucleus,[57] where it forms a transcriptional repressor complex with nuclear proteins to reduce the expression of genes involved in glycolysis.[58]
In order to control which genes are being transcribed, the cell separates some transcription factor proteins responsible for regulating gene expression from physical access to the DNA until they are activated by other signaling pathways. This prevents even low levels of inappropriate gene expression. For example, in the case of NF-κB-controlled genes, which are involved in most inflammatory responses, transcription is induced in response to a signal pathway such as that initiated by the signaling molecule TNF-α, which binds to a cell membrane receptor, resulting in the recruitment of signalling proteins and, eventually, activation of the transcription factor NF-κB. A nuclear localisation signal on the NF-κB protein allows it to be transported through the nuclear pore and into the nucleus, where it stimulates the transcription of the target genes.[13]
The compartmentalization allows the cell to prevent translation of unspliced mRNA.[59] Eukaryotic mRNA contains introns that must be removed before being translated to produce functional proteins. The splicing is done inside the nucleus before the mRNA can be accessed by ribosomes for translation. Without the nucleus, ribosomes would translate newly transcribed (unprocessed) mRNA, resulting in malformed and nonfunctional proteins.
Gene expression first involves transcription, in which DNA is used as a template to produce RNA. In the case of genes encoding proteins, the RNA produced from this process is messenger RNA (mRNA), which then needs to be translated by ribosomes to form a protein. As ribosomes are located outside the nucleus, the mRNA produced needs to be exported.[60]
Since the nucleus is the site of transcription, it also contains a variety of proteins that either directly mediate transcription or are involved in regulating the process. These proteins include helicases, which unwind the double-stranded DNA molecule to facilitate access to it, RNA polymerases, which bind to the DNA promoter to synthesize the growing RNA molecule, topoisomerases, which change the amount of supercoiling in DNA, helping it wind and unwind, as well as a large variety of transcription factors that regulate expression.[61]
Newly synthesized mRNA molecules are known as primary transcripts or pre-mRNA. They must undergo post-transcriptional modification in the nucleus before being exported to the cytoplasm; mRNA that appears in the cytoplasm without these modifications is degraded rather than used for protein translation. The three main modifications are 5' capping, 3' polyadenylation, and RNA splicing. While in the nucleus, pre-mRNA is associated with a variety of proteins in complexes known as heterogeneous ribonucleoprotein particles (hnRNPs). Addition of the 5' cap occurs co-transcriptionally and is the first step in post-transcriptional modification. The 3' poly-adenine tail is only added after transcription is complete.
RNA splicing, carried out by a complex called the spliceosome, is the process by which introns, or regions of DNA that do not code for protein, are removed from the pre-mRNA and the remaining exons connected to re-form a single continuous molecule. This process normally occurs after 5' capping and 3' polyadenylation but can begin before synthesis is complete in transcripts with many exons.[5] Many pre-mRNAs, including those encoding antibodies, can be spliced in multiple ways to produce different mature mRNAs that encode different protein sequences. This process is known as alternative splicing, and allows production of a large variety of proteins from a limited amount of DNA.
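The combinatorial effect of alternative splicing can be illustrated with a toy enumeration. The exon names here are hypothetical, and the model (any subset of internal exons may be skipped) is a deliberate simplification of real splice-site choice:

```python
from itertools import combinations

# Hypothetical four-exon transcript; assume the first and last exons are
# always retained and any internal exon may be kept or skipped.
exons = ["E1", "E2", "E3", "E4"]
internal = exons[1:-1]

isoforms = []
for k in range(len(internal) + 1):
    for kept in combinations(internal, k):
        isoforms.append([exons[0], *kept, exons[-1]])

for iso in isoforms:
    print("-".join(iso))
# Four isoforms: E1-E4, E1-E2-E4, E1-E3-E4, E1-E2-E3-E4
```

Even this tiny example yields 2^2 = 4 distinct mature mRNAs from one gene, which is the sense in which alternative splicing multiplies protein diversity from a limited amount of DNA.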
The entry and exit of large molecules from the nucleus is tightly controlled by the nuclear pore complexes. Although small molecules can enter the nucleus without regulation,[62] macromolecules such as RNA and proteins require association with karyopherins called importins to enter the nucleus and exportins to exit. "Cargo" proteins that must be translocated from the cytoplasm to the nucleus contain short amino acid sequences known as nuclear localization signals, which are bound by importins, while those transported from the nucleus to the cytoplasm carry nuclear export signals bound by exportins. The ability of importins and exportins to transport their cargo is regulated by GTPases, enzymes that hydrolyze the molecule guanosine triphosphate (GTP) to release energy. The key GTPase in nuclear transport is Ran, which can bind either GTP or GDP (guanosine diphosphate), depending on whether it is located in the nucleus or the cytoplasm. Whereas importins depend on RanGTP to dissociate from their cargo, exportins require RanGTP in order to bind to their cargo.[12]
Nuclear import depends on the importin binding its cargo in the cytoplasm and carrying it through the nuclear pore into the nucleus. Inside the nucleus, RanGTP acts to separate the cargo from the importin, allowing the importin to exit the nucleus and be reused. Nuclear export is similar, as the exportin binds the cargo inside the nucleus in a process facilitated by RanGTP, exits through the nuclear pore, and separates from its cargo in the cytoplasm.
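The directionality of the Ran cycle described above can be sketched as a toy model. The function names are illustrative, not a real library API, and the two-compartment picture ignores the continuous nucleotide-exchange and hydrolysis machinery that maintains the gradient:

```python
# Toy model of Ran-driven transport directionality. RanGTP dominates in the
# nucleus, RanGDP in the cytoplasm; carriers read that gradient.
def ran_state(compartment: str) -> str:
    return "RanGTP" if compartment == "nucleus" else "RanGDP"

def importin_carries(cargo_location: str) -> bool:
    # Importins bind cargo where RanGTP is absent (cytoplasm) and release
    # it where RanGTP binds them (nucleus).
    return ran_state(cargo_location) != "RanGTP"

def exportin_carries(cargo_location: str) -> bool:
    # Exportins bind cargo only together with RanGTP (nucleus) and release
    # it when RanGTP is hydrolyzed (cytoplasm).
    return ran_state(cargo_location) == "RanGTP"

assert importin_carries("cytoplasm") and not importin_carries("nucleus")
assert exportin_carries("nucleus") and not exportin_carries("cytoplasm")
```

The opposite RanGTP dependence of the two carrier families is what makes each leg of the cycle one-way.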
Specialized export proteins exist for translocation of mature mRNA and tRNA to the cytoplasm after post-transcriptional modification is complete. This quality-control mechanism is important due to these molecules' central role in protein translation. Mis-expression of a protein due to incomplete excision of introns or mis-incorporation of amino acids could have negative consequences for the cell; thus, incompletely modified RNA that reaches the cytoplasm is degraded rather than used in translation.[5]
During its lifetime, a nucleus may be broken down or destroyed, either in the process of cell division or as a consequence of apoptosis (the process of programmed cell death). During these events, the structural components of the nucleus (the envelope and lamina) can be systematically degraded.
In most cells, the disassembly of the nuclear envelope marks the end of the prophase of mitosis. However, this disassembly of the nucleus is not a universal feature of mitosis and does not occur in all cells. Some unicellular eukaryotes (e.g., yeasts) undergo so-called closed mitosis, in which the nuclear envelope remains intact. In closed mitosis, the daughter chromosomes migrate to opposite poles of the nucleus, which then divides in two. The cells of higher eukaryotes, however, usually undergo open mitosis, which is characterized by breakdown of the nuclear envelope. The daughter chromosomes then migrate to opposite poles of the mitotic spindle, and new nuclei reassemble around them.
At a certain point during the cell cycle in open mitosis, the cell divides to form two cells. In order for this process to be possible, each of the new daughter cells must have a full set of genes, a process requiring replication of the chromosomes as well as segregation of the separate sets. This occurs when the replicated chromosomes, the sister chromatids, attach to microtubules, which in turn are attached to different centrosomes. The sister chromatids can then be pulled to separate locations in the cell. In many cells, the centrosome is located in the cytoplasm, outside the nucleus; the microtubules would be unable to attach to the chromatids in the presence of the nuclear envelope.[63] Therefore, during the early stages of the cell cycle, beginning in prophase and lasting until around prometaphase, the nuclear membrane is dismantled.[16] Likewise, during the same period, the nuclear lamina is also disassembled, a process regulated by phosphorylation of the lamins by protein kinases such as the CDC2 protein kinase.[64] Towards the end of the cell cycle, the nuclear membrane is reformed, and around the same time, the nuclear lamina is reassembled by dephosphorylation of the lamins.[64]
However, in dinoflagellates, the nuclear envelope remains intact, the centrosomes are located in the cytoplasm, and the microtubules come in contact with chromosomes, whose centromeric regions are incorporated into the nuclear envelope (the so-called closed mitosis with extranuclear spindle). In many other protists (e.g., ciliates, sporozoans) and fungi, the centrosomes are intranuclear, and their nuclear envelope also does not disassemble during cell division.
Apoptosis is a controlled process in which the cell's structural components are destroyed, resulting in death of the cell. Changes associated with apoptosis directly affect the nucleus and its contents, for example, in the condensation of chromatin and the disintegration of the nuclear envelope and lamina. The destruction of the lamin networks is controlled by specialized apoptotic proteases called caspases, which cleave the lamin proteins and, thus, degrade the nucleus' structural integrity. Lamin cleavage is sometimes used as a laboratory indicator of caspase activity in assays for early apoptotic activity.[16] Cells that express mutant caspase-resistant lamins are deficient in nuclear changes related to apoptosis, suggesting that lamins play a role in initiating the events that lead to apoptotic degradation of the nucleus.[16] Inhibition of lamin assembly itself is an inducer of apoptosis.[65]
The nuclear envelope acts as a barrier that prevents both DNA and RNA viruses from entering the nucleus. Some viruses require access to proteins inside the nucleus in order to replicate and/or assemble. DNA viruses, such as herpesviruses, replicate and assemble in the cell nucleus, and exit by budding through the inner nuclear membrane. This process is accompanied by disassembly of the lamina on the nuclear face of the inner membrane.[16]
It was initially suspected that immunoglobulins in general, and autoantibodies in particular, do not enter the nucleus. There is now a body of evidence that under pathological conditions (e.g. lupus erythematosus) IgG can enter the nucleus.[66]
Most eukaryotic cell types usually have a single nucleus, but some have no nuclei, while others have several. This can result from normal development, as in the maturation of mammalian red blood cells, or from faulty cell division.
An anucleated cell contains no nucleus and is, therefore, incapable of dividing to produce daughter cells. The best-known anucleated cell is the mammalian red blood cell, or erythrocyte, which also lacks other organelles such as mitochondria, and serves primarily as a transport vessel to ferry oxygen from the lungs to the body's tissues. Erythrocytes mature through erythropoiesis in the bone marrow, where they lose their nuclei, organelles, and ribosomes. The nucleus is expelled during the process of differentiation from an erythroblast to a reticulocyte, which is the immediate precursor of the mature erythrocyte.[67] The presence of mutagens may induce the release of some immature "micronucleated" erythrocytes into the bloodstream.[68][69] Anucleated cells can also arise from flawed cell division in which one daughter lacks a nucleus and the other has two nuclei.
In flowering plants, this condition occurs in sieve tube elements.
Multinucleated cells contain multiple nuclei. Most acantharean species of protozoa[70] and some fungi in mycorrhizae[71] have naturally multinucleated cells. Other examples include the intestinal parasites in the genus Giardia, which have two nuclei per cell.[72] In humans, skeletal muscle cells, called myocytes, become multinucleated during development as they fuse into a syncytium; the resulting arrangement of nuclei near the periphery of the cells allows maximal intracellular space for myofibrils.[5] Other multinucleate cells in humans are osteoclasts, a type of bone cell. Multinucleated and binucleated cells can also be abnormal in humans; for example, cells arising from the fusion of monocytes and macrophages, known as giant multinucleated cells, sometimes accompany inflammation[73] and are also implicated in tumor formation.[74]
A number of dinoflagellates are known to have two nuclei.[75] Unlike other multinucleated cells these nuclei contain two distinct lineages of DNA: one from the dinoflagellate and the other from a symbiotic diatom. The mitochondria and the plastids of the diatom somehow remain functional.
As the major defining characteristic of the eukaryotic cell, the nucleus' evolutionary origin has been the subject of much speculation. Four major hypotheses have been proposed to explain the existence of the nucleus, although none have yet earned widespread support.[76]
The first model, known as the "syntrophic model", proposes that a symbiotic relationship between archaea and bacteria created the nucleus-containing eukaryotic cell. (Organisms of the Archaea and Bacteria domain have no cell nucleus.[77]) It is hypothesized that the symbiosis originated when ancient archaea, similar to modern methanogenic archaea, invaded and lived within bacteria similar to modern myxobacteria, eventually forming the early nucleus. This theory is analogous to the accepted theory for the origin of eukaryotic mitochondria and chloroplasts, which are thought to have developed from a similar endosymbiotic relationship between proto-eukaryotes and aerobic bacteria.[78] The archaeal origin of the nucleus is supported by observations that archaea and eukarya have similar genes for certain proteins, including histones. Observations that myxobacteria are motile, can form multicellular complexes, and possess kinases and G proteins similar to eukarya, support a bacterial origin for the eukaryotic cell.[79]
A second model proposes that proto-eukaryotic cells evolved from bacteria without an endosymbiotic stage. This model is based on the existence of modern planctomycetes bacteria that possess a nuclear structure with primitive pores and other compartmentalized membrane structures.[80] A similar proposal states that a eukaryote-like cell, the chronocyte, evolved first and phagocytosed archaea and bacteria to generate the nucleus and the eukaryotic cell.[81]
The most controversial model, known as viral eukaryogenesis, posits that the membrane-bound nucleus, along with other eukaryotic features, originated from the infection of a prokaryote by a virus. The suggestion is based on similarities between eukaryotes and viruses such as linear DNA strands, mRNA capping, and tight binding to proteins (analogizing histones to viral envelopes). One version of the proposal suggests that the nucleus evolved in concert with phagocytosis to form an early cellular "predator".[82] Another variant proposes that eukaryotes originated from early archaea infected by poxviruses, on the basis of observed similarity between the DNA polymerases in modern poxviruses and eukaryotes.[83][84] It has been suggested that the unresolved question of the evolution of sex could be related to the viral eukaryogenesis hypothesis.[85]
A more recent proposal, the exomembrane hypothesis, suggests that the nucleus instead originated from a single ancestral cell that evolved a second exterior cell membrane; the interior membrane enclosing the original cell then became the nuclear membrane and evolved increasingly elaborate pore structures for passage of internally synthesized cellular components such as ribosomal subunits.[86]
en/4195.html.txt
ADDED
@@ -0,0 +1,163 @@
The National Socialist German Workers' Party[a] (abbreviated in German as NSDAP), commonly referred to in English as the Nazi Party,[b] was a far-right political party in Germany that was active between 1920 and 1945 and that created and supported the ideology of National Socialism. Its precursor, the German Workers' Party (Deutsche Arbeiterpartei; DAP), existed from 1919 to 1920.
The Nazi Party emerged from the German nationalist, racist and populist Freikorps paramilitary culture, which fought against the communist uprisings in post-World War I Germany.[7] The party was created to draw workers away from communism and into völkisch nationalism.[8] Initially, Nazi political strategy focused on anti-big business, anti-bourgeois, and anti-capitalist rhetoric, although this was later downplayed to gain the support of business leaders, and in the 1930s the party's main focus shifted to antisemitic and anti-Marxist themes.[9]
Pseudoscientific racist theories were central to Nazism, expressed in the idea of a "people's community" (Volksgemeinschaft).[10] The party aimed to unite "racially desirable" Germans as national comrades, while excluding those deemed either to be political dissidents, physically or intellectually inferior, or of a foreign race (Fremdvölkische).[11] The Nazis sought to strengthen the Germanic people, the "Aryan master race", through racial purity and eugenics, broad social welfare programs, and a collective subordination of individual rights, which could be sacrificed for the good of the state on behalf of the people.
To protect the supposed purity and strength of the Aryan race, the Nazis sought to exterminate Jews, Romani, Poles and most other Slavs, along with the physically and mentally handicapped. They disenfranchised and segregated homosexuals, Africans, Jehovah's Witnesses and political opponents.[12] The persecution reached its climax when the party-controlled German state set in motion the Final Solution—an industrial system of genocide which achieved the murder of around 6 million Jews and millions of other targeted victims, in what has become known as the Holocaust.[13]
Adolf Hitler, the party's leader since 1921, was appointed Chancellor of Germany by President Paul von Hindenburg on 30 January 1933. Hitler rapidly established a totalitarian regime[14][15][16][17] known as the Third Reich. Following the defeat of the Third Reich at the conclusion of World War II in Europe, the party was "declared to be illegal" by the Allied powers,[18] who carried out denazification in the years after the war both in Germany and in territories occupied by Nazi forces. The use of any symbols associated with the party is now outlawed in many European countries, including Germany and Austria.
Nazi, the informal and originally derogatory term for a party member, abbreviates the party's name (Nationalsozialist German pronunciation: [natsi̯oˈnaːlzotsi̯aˌlɪst]), and was coined in analogy with Sozi (pronounced [ˈzoːtsiː]), an abbreviation of Sozialdemokrat (member of the rival Social Democratic Party of Germany).[c][19] Members of the party referred to themselves as Nationalsozialisten (National Socialists), rarely as Nazis. The term Parteigenosse (party member) was commonly used among Nazis, with its corresponding feminine form Parteigenossin.[20]
The term was in use before the rise of the party as a colloquial and derogatory word for a backward peasant, an awkward and clumsy person. It derived from Ignaz, a shortened version of Ignatius,[21][22] which was a common name in the Nazis' home region of Bavaria. Opponents seized on this, and the long-existing Sozi, to attach a dismissive nickname to the National Socialists.[22][23]
In 1933, when Adolf Hitler assumed power in the German government, the usage of "Nazi" diminished in Germany, although Austrian anti-Nazis continued to use the term,[19] and the use of "Nazi Germany" and "Nazi regime" was popularised by anti-Nazis and German exiles abroad. Thereafter, the term spread into other languages and eventually was brought back to Germany after World War II.[23] In English, the term is not considered slang, and has such derivatives as Nazism and denazification.
The party grew out of smaller political groups with a nationalist orientation that formed in the last years of World War I. In 1918, a league called the Freier Arbeiterausschuss für einen guten Frieden (Free Workers' Committee for a good Peace)[24] was created in Bremen, Germany. On 7 March 1918, Anton Drexler, an avid German nationalist, formed a branch of this league in Munich.[24] Drexler was a local locksmith who had been a member of the militarist Fatherland Party[25] during World War I and was bitterly opposed to the armistice of November 1918 and the revolutionary upheavals that followed. Drexler followed the views of militant nationalists of the day, such as opposing the Treaty of Versailles, having antisemitic, anti-monarchist and anti-Marxist views, as well as believing in the superiority of Germans whom they claimed to be part of the Aryan "master race" (Herrenvolk). However, he also accused international capitalism of being a Jewish-dominated movement and denounced capitalists for war profiteering in World War I.[26] Drexler saw the political violence and instability in Germany as the result of the Weimar Republic being out-of-touch with the masses, especially the lower classes.[26] Drexler emphasised the need for a synthesis of völkisch nationalism with a form of economic socialism, in order to create a popular nationalist-oriented workers' movement that could challenge the rise of Communism and internationalist politics.[27] These were all well-known themes popular with various Weimar paramilitary groups such as the Freikorps.
Drexler's movement received attention and support from some influential figures. Supporter Dietrich Eckart, a well-to-do journalist, brought military figure Felix Graf von Bothmer, a prominent supporter of the concept of "national socialism", to address the movement.[28] Later in 1918, Karl Harrer (a journalist and member of the Thule Society) convinced Drexler and several others to form the Politischer Arbeiterzirkel (Political Workers' Circle).[24] The members met periodically for discussions with themes of nationalism and racism directed against the Jews.[24] In December 1918, Drexler decided that a new political party should be formed, based on the political principles that he endorsed, by combining his branch of the Workers' Committee for a good Peace with the Political Workers' Circle.[24][29]
On 5 January 1919, Drexler created a new political party and proposed it should be named the "German Socialist Workers' Party", but Harrer objected to the term "socialist"; so the term was removed and the party was named the German Workers' Party (Deutsche Arbeiterpartei, DAP).[29] To ease concerns among potential middle-class supporters, Drexler made clear that unlike Marxists the party supported the middle-class and that its socialist policy was meant to give social welfare to German citizens deemed part of the Aryan race.[26] They became one of many völkisch movements that existed in Germany. Like other völkisch groups, the DAP advocated the belief that through profit-sharing instead of socialisation Germany should become a unified "people's community" (Volksgemeinschaft) rather than a society divided along class and party lines.[30] This ideology was explicitly antisemitic. As early as 1920, the party was raising money by selling a tobacco called Anti-Semit.[31]
From the outset, the DAP was opposed to non-nationalist political movements, especially on the left, including the Social Democratic Party of Germany (SPD) and the Communist Party of Germany (KPD). Members of the DAP saw themselves as fighting against "Bolshevism" and anyone considered a part of or aiding so-called "international Jewry". The DAP was also deeply opposed to the Versailles Treaty.[32] The DAP did not attempt to make itself public and meetings were kept in relative secrecy, with public speakers discussing what they thought of Germany's present state of affairs, or writing to like-minded societies in Northern Germany.[30]
The DAP was a comparatively small group with fewer than 60 members.[30] Nevertheless, it attracted the attention of the German authorities, who were suspicious of any organisation that appeared to have subversive tendencies. In July 1919, while stationed in Munich, army Gefreiter Adolf Hitler was appointed a Verbindungsmann (intelligence agent) of an Aufklärungskommando (reconnaissance unit) of the Reichswehr (army) by Captain Mayr, the head of the Education and Propaganda Department (Dept Ib/P) in Bavaria. Hitler was assigned to influence other soldiers and to infiltrate the DAP.[33] While attending a party meeting on 12 September 1919 at Munich's Sterneckerbräu, Hitler became involved in a heated argument with a visitor, Professor Baumann, who questioned the soundness of Gottfried Feder's arguments against capitalism; Baumann proposed that Bavaria should break away from Prussia and found a new South German nation with Austria. In vehemently attacking the man's arguments, Hitler made an impression on the other party members with his oratorical skills; according to Hitler, the "professor" left the hall acknowledging unequivocal defeat.[34] Drexler encouraged him to join the DAP.[34] On the orders of his army superiors, Hitler applied to join the party[35] and within a week was accepted as party member 555 (the party began counting membership at 500 to give the impression they were a much larger party).[36][37] Among the party's earlier members were Ernst Röhm of the Army's District Command VII; Dietrich Eckart, who has been called the spiritual father of National Socialism;[38] then-University of Munich student Rudolf Hess;[39] Freikorps soldier Hans Frank; and Alfred Rosenberg, often credited as the philosopher of the movement. All were later prominent in the Nazi regime.[40]
Hitler later claimed to be the seventh party member (he was in fact the seventh executive member of the party's central committee[41] and he would later wear the Golden Party Badge number one). Anton Drexler drafted a letter to Hitler in 1940—which was never sent—that contradicts Hitler's later claim:
No one knows better than you yourself, my Führer, that you were never the seventh member of the party, but at best the seventh member of the committee... And a few years ago I had to complain to a party office that your first proper membership card of the DAP, bearing the signatures of Schüssler and myself, was falsified, with the number 555 being erased and number 7 entered.[42]
Hitler's first DAP speech was held in the Hofbräukeller on 16 October 1919. He was the second speaker of the evening, and spoke to 111 people.[43] Hitler later declared that this was when he realised he could really "make a good speech".[30] At first, Hitler spoke only to relatively small groups, but his considerable oratory and propaganda skills were appreciated by the party leadership. With the support of Anton Drexler, Hitler became chief of propaganda for the party in early 1920.[44] Hitler began to make the party more public, and organised its biggest meeting yet of 2,000 people on 24 February 1920 in the Staatliches Hofbräuhaus in München. Such was the significance of this particular move in publicity that Karl Harrer resigned from the party in disagreement.[45] It was in this speech that Hitler enunciated the twenty-five points of the German Workers' Party manifesto that had been drawn up by Drexler, Feder and himself.[46] Through these points he gave the organisation a much bolder stratagem[44] with a clear foreign policy (abrogation of the Treaty of Versailles, a Greater Germany, Eastern expansion and exclusion of Jews from citizenship) and among his specific points were: confiscation of war profits, abolition of unearned incomes, the State to share profits of land and land for national needs to be taken away without compensation.[47] In general, the manifesto was antisemitic, anti-capitalist, anti-democratic, anti-Marxist and anti-liberal.[48] To increase its appeal to larger segments of the population, on the same day as Hitler's Hofbräuhaus speech on 24 February 1920, the DAP changed its name to the Nationalsozialistische Deutsche Arbeiterpartei ("National Socialist German Workers' Party", or Nazi Party).[49][50][d] The word "Socialist" was added by the party's executive committee, over Hitler's objections, in order to help appeal to left-wing workers.[53]
In 1920, the Nazi Party officially announced that only persons of "pure Aryan descent [rein arischer Abkunft]" could become party members and if the person had a spouse, the spouse also had to be a "racially pure" Aryan. Party members could not be related either directly or indirectly to a so-called "non-Aryan".[54] Even before it had become legally forbidden by the Nuremberg Laws in 1935, the Nazis banned sexual relations and marriages between party members and Jews.[55] Party members found guilty of Rassenschande ("racial defilement") were persecuted heavily. Some members were even sentenced to death.[56]
Hitler quickly became the party's most active orator, appearing in public as a speaker 31 times within the first year after his self-discovery.[57] Crowds began to flock to hear his speeches.[58] Hitler always spoke about the same subjects: the Treaty of Versailles and the Jewish question.[48] This deliberate technique and effective publicising of the party contributed significantly to his early success,[48] about which a contemporary poster wrote: "Since Herr Hitler is a brilliant speaker, we can hold out the prospect of an extremely exciting evening".[59][page needed] Over the following months, the party continued to attract new members,[41] while remaining too small to have any real significance in German politics.[60] By the end of the year, party membership was recorded at 2,000,[58] many of whom Hitler and Röhm had brought into the party personally, or for whom Hitler's oratory had been their reason for joining.[61]
Hitler's talent as an orator and his ability to draw new members, combined with his characteristic ruthlessness, soon made him the dominant figure. However, while Hitler and Eckart were on a fundraising trip to Berlin in June 1921, a mutiny broke out within the party in Munich. Members of its executive committee wanted to merge with the rival German Socialist Party (DSP).[62] Upon returning to Munich on 11 July, Hitler angrily tendered his resignation. The committee members realised that his resignation would mean the end of the party.[63] Hitler announced he would rejoin on condition that he would replace Drexler as party chairman, and that the party headquarters would remain in Munich.[64] The committee agreed, and he rejoined the party on 26 July as member 3,680. Hitler continued to face some opposition within the NSDAP, as his opponents had Hermann Esser expelled from the party and they printed 3,000 copies of a pamphlet attacking Hitler as a traitor to the party.[64] In the following days, Hitler spoke to several packed houses and defended himself and Esser to thunderous applause.[65]
His strategy proved successful; at a special party congress on 29 July 1921, he replaced Drexler as party chairman by a vote of 533 to 1.[65] The committee was dissolved, and Hitler was granted nearly absolute powers as the party's sole leader.[65] He would hold the post for the remainder of his life. Hitler soon acquired the title Führer ("leader") and after a series of sharp internal conflicts it was accepted that the party would be governed by the Führerprinzip ("leader principle"). Under this principle, the party was a highly centralised entity that functioned strictly from the top down, with Hitler at the apex as the party's absolute leader. Hitler saw the party as a revolutionary organisation, whose aim was the overthrow of the Weimar Republic, which he saw as controlled by the socialists, Jews and the "November criminals" who had betrayed the German soldiers in 1918. The SA ("storm troopers", also known as "Brownshirts") were founded as a party militia in 1921 and began violent attacks on other parties.
For Hitler, the twin goals of the party were always German nationalist expansionism and antisemitism. These two goals were fused in his mind by his belief that Germany's external enemies – Britain, France and the Soviet Union – were controlled by the Jews and that Germany's future wars of national expansion would necessarily entail a war against the Jews.[66][page needed] For Hitler and his principal lieutenants, national and racial issues were always dominant. This was symbolised by the adoption as the party emblem of the swastika or Hakenkreuz. In German nationalist circles, the swastika was considered a symbol of an "Aryan race" and it symbolised the replacement of the Christian Cross with allegiance to a National Socialist State.
The Nazi Party grew significantly during 1921 and 1922, partly through Hitler's oratorical skills, partly through the SA's appeal to unemployed young men, and partly because there was a backlash against socialist and liberal politics in Bavaria as Germany's economic problems deepened and the weakness of the Weimar regime became apparent. The party recruited former World War I soldiers, to whom Hitler as a decorated frontline veteran could particularly appeal, as well as small businessmen and disaffected former members of rival parties. Nazi rallies were often held in beer halls, where downtrodden men could get free beer. The Hitler Youth was formed for the children of party members. The party also formed groups in other parts of Germany. Julius Streicher in Nuremberg was an early recruit and became editor of the racist magazine Der Stürmer. In December 1920, the Nazi Party had acquired a newspaper, the Völkischer Beobachter, of which its leading ideologist Alfred Rosenberg became editor. Others to join the party around this time were Heinrich Himmler and World War I flying ace Hermann Göring.
On 31 October 1922, a party with similar policies and objectives came into power in Italy, the National Fascist Party, under the leadership of the charismatic Benito Mussolini. The Fascists, like the Nazis, promoted a national rebirth of their country, as they opposed communism and liberalism; appealed to the working-class; opposed the Treaty of Versailles; and advocated the territorial expansion of their country. The Italian Fascists used a straight-armed Roman salute and wore black-shirted uniforms. Hitler was inspired by Mussolini and the Fascists, borrowing their use of the straight-armed salute as a Nazi salute. When the Fascists came to power in 1922 in Italy through their coup attempt called the "March on Rome", Hitler began planning his own coup.
In January 1923, France occupied the Ruhr industrial region as a result of Germany's failure to meet its reparations payments. This led to economic chaos, the resignation of Wilhelm Cuno's government and an attempt by the German Communist Party (KPD) to stage a revolution. The reaction to these events was an upsurge of nationalist sentiment. Nazi Party membership grew sharply to about 20,000.[67] By November, Hitler had decided that the time was right for an attempt to seize power in Munich, in the hope that the Reichswehr (the post-war German military) would mutiny against the Berlin government and join his revolt. In this, he was influenced by former General Erich Ludendorff, who had become a supporter—though not a member—of the Nazis.
On the night of 8 November, the Nazis used a patriotic rally in a Munich beer hall to launch an attempted putsch ("coup d'état"). This so-called Beer Hall Putsch attempt failed almost at once when the local Reichswehr commanders refused to support it. On the morning of 9 November, the Nazis staged a march of about 2,000 supporters through Munich in an attempt to rally support. Troops opened fire and 16 Nazis were killed. Hitler, Ludendorff and a number of others were arrested and were tried for treason in March 1924. Hitler and his associates were given very lenient prison sentences. While Hitler was in prison, he wrote his semi-autobiographical political manifesto Mein Kampf ("My Struggle").
The Nazi Party was banned on 9 November 1923; however, with the support of the nationalist Völkisch-Social Bloc (Völkisch-Sozialer Block), it continued to operate under the name "German Party" (Deutsche Partei or DP) from 1924 to 1925.[68] The Nazis failed to remain unified in the DP, as in the north, the right-wing Volkish nationalist supporters of the Nazis moved to the new German Völkisch Freedom Party, leaving the north's left-wing Nazi members, such as Joseph Goebbels retaining support for the party.[68]
Adolf Hitler was released from prison on 20 December 1924. On 16 February 1925, Hitler convinced the Bavarian authorities to lift the ban on the NSDAP and the party was formally refounded on 26 February 1925, with Hitler as its undisputed leader. The new Nazi Party was no longer a paramilitary organisation and disavowed any intention of taking power by force. In any case, the economic and political situation had stabilised and the extremist upsurge of 1923 had faded, so there was no prospect of further revolutionary adventures. The Nazi Party of 1925 was divided into the "Leadership Corps" (Korps der politischen Leiter) appointed by Hitler and the general membership (Parteimitglieder). The party and the SA were kept separate and the legal aspect of the party's work was emphasised. In a sign of this, the party began to admit women. The SA and the SS members (the latter founded in 1925 as Hitler's bodyguard, and known originally as the Schutzkommando) had to all be regular party members.[69][70]
In the 1920s, the Nazi Party expanded beyond its Bavarian base. Catholic Bavaria maintained its right-wing nostalgia for a Catholic monarch;[citation needed] and Westphalia, along with working-class "Red Berlin", were always the Nazis' weakest areas electorally, even during the Third Reich itself. The areas of strongest Nazi support were in rural Protestant areas such as Schleswig-Holstein, Mecklenburg, Pomerania and East Prussia. Depressed working-class areas such as Thuringia also produced a strong Nazi vote, while the workers of the Ruhr and Hamburg largely remained loyal to the Social Democrats, the Communist Party of Germany or the Catholic Centre Party. Nuremberg remained a Nazi Party stronghold, and the first Nuremberg Rally was held there in 1927. These rallies soon became massive displays of Nazi paramilitary power and attracted many recruits. The Nazis' strongest appeal was to the lower middle classes – farmers, public servants, teachers and small businessmen – who had suffered most from the inflation of the 1920s and who feared Bolshevism more than anything else. The small business class was receptive to Hitler's antisemitism, since it blamed Jewish big business for its economic problems. University students, disappointed at being too young to have served in the War of 1914–1918 and attracted by the Nazis' radical rhetoric, also became a strong Nazi constituency. By 1929, the party had 130,000 members.[71]
The party's nominal Deputy Leader was Rudolf Hess, but he had no real power in the party. By the early 1930s, the senior leaders of the party after Hitler were Heinrich Himmler, Joseph Goebbels and Hermann Göring. Beneath the Leadership Corps were the party's regional leaders, the Gauleiters, each of whom commanded the party in his Gau ("region"). Goebbels began his ascent through the party hierarchy as Gauleiter of Berlin-Brandenburg in 1926. Streicher was Gauleiter of Franconia, where he published his antisemitic newspaper Der Stürmer. Beneath the Gauleiter were lower-level officials, the Kreisleiter ("county leaders"), Zellenleiter ("cell leaders") and Blockleiter ("block leaders"). This was a strictly hierarchical structure in which orders flowed from the top and unquestioning loyalty was given to superiors. Only the SA retained some autonomy. Being composed largely of unemployed workers, many SA men took the Nazis' socialist rhetoric seriously. At this time, the Hitler salute (borrowed from the Italian fascists) and the greeting "Heil Hitler!" were adopted throughout the party.
The Nazis contested elections to the national parliament (the Reichstag) and to the state legislatures (the Landtage) from 1924, although at first with little success. The "National Socialist Freedom Movement" polled 3% of the vote in the December 1924 Reichstag elections and this fell to 2.6% in 1928. State elections produced similar results. Despite these poor results and despite Germany's relative political stability and prosperity during the later 1920s, the Nazi Party continued to grow. This was partly because Hitler, who had no administrative ability, left the party organisation to the head of the secretariat, Philipp Bouhler, the party treasurer Franz Xaver Schwarz and business manager Max Amann. The party had a capable propaganda head in Gregor Strasser, who was promoted to national organizational leader in January 1928. These men gave the party efficient recruitment and organizational structures. The party also owed its growth to the gradual fading away of competitor nationalist groups, such as the German National People's Party (DNVP). As Hitler became the recognised head of the German nationalists, other groups declined or were absorbed.
Despite these strengths, the Nazi Party might never have come to power had it not been for the Great Depression and its effects on Germany. By 1930, the German economy was beset with mass unemployment and widespread business failures. The Social Democrats and Communists were bitterly divided and unable to formulate an effective solution: this gave the Nazis their opportunity and Hitler's message, blaming the crisis on the Jewish financiers and the Bolsheviks, resonated with wide sections of the electorate. At the September 1930 Reichstag elections, the Nazis won 18.3% of the votes and became the second-largest party in the Reichstag after the Social Democrats. Hitler proved to be a highly effective campaigner, pioneering the use of radio and aircraft for this purpose. His dismissal of Strasser and his appointment of Goebbels as the party's propaganda chief were major factors. While Strasser had used his position to promote his own leftish version of national socialism, Goebbels was totally loyal to Hitler and worked only to improve Hitler's image.
The 1930 elections changed the German political landscape by weakening the traditional nationalist parties, the DNVP and the DVP, leaving the Nazis as the chief alternative to the discredited Social Democrats and the Zentrum, whose leader, Heinrich Brüning, headed a weak minority government. The inability of the democratic parties to form a united front, the self-imposed isolation of the Communists and the continued decline of the economy, all played into Hitler's hands. He now came to be seen as de facto leader of the opposition and donations poured into the Nazi Party's coffers. Some major business figures, such as Fritz Thyssen, were Nazi supporters and gave generously[72] and some Wall Street figures were allegedly involved,[73][page needed] but many other businessmen were suspicious of the extreme nationalist tendencies of the Nazis and preferred to support the traditional conservative parties instead.[74]
During 1931 and into 1932, Germany's political crisis deepened. Hitler ran for President against the incumbent Paul von Hindenburg in March 1932, polling 30.1% in the first round and 36.8% in the second against Hindenburg's 49% and 53%. By now the SA had 400,000 members and its running street battles with the SPD and Communist paramilitaries (who also fought each other) reduced some German cities to combat zones. Paradoxically, although the Nazis were among the main instigators of this disorder, part of Hitler's appeal to a frightened and demoralised middle class was his promise to restore law and order. Overt antisemitism was played down in official Nazi rhetoric, but was never far from the surface. Germans voted for Hitler primarily because of his promises to revive the economy (by unspecified means), to restore German greatness and overturn the Treaty of Versailles and to save Germany from communism. On 24 April 1932, the Free State of Prussia elections to the Landtag resulted in 36.3% of the votes and 162 seats for the NSDAP.
On 20 July 1932, the Prussian government was ousted by a coup, the Preussenschlag; a few days later at the July 1932 Reichstag election the Nazis made another leap forward, polling 37.4% and becoming the largest party in parliament by a wide margin. Furthermore, the Nazis and the Communists between them won 52% of the vote and a majority of seats. Since both parties opposed the established political system and neither would join or support any ministry, this made the formation of a majority government impossible. The result was weak ministries governing by decree. Under Comintern directives, the Communists maintained their policy of treating the Social Democrats as the main enemy, calling them "social fascists", thereby splintering opposition to the Nazis.[e] Later, both the Social Democrats and the Communists accused each other of having facilitated Hitler's rise to power by their unwillingness to compromise.
Chancellor Franz von Papen called another Reichstag election in November, hoping to find a way out of this impasse. The electoral result was the same, with the Nazis and the Communists winning 50% of the vote between them and more than half the seats, rendering this Reichstag no more workable than its predecessor. However, support for the Nazis had fallen to 33.1%, suggesting that the Nazi surge had passed its peak—possibly because the worst of the Depression had passed, possibly because some middle-class voters had supported Hitler in July as a protest, but had now drawn back from the prospect of actually putting him into power. The Nazis interpreted the result as a warning that they must seize power before their moment passed. Had the other parties united, this could have been prevented, but their shortsightedness made a united front impossible. Papen, his successor Kurt von Schleicher and the nationalist press magnate Alfred Hugenberg spent December and January in political intrigues that eventually persuaded President Hindenburg that it was safe to appoint Hitler as Reich Chancellor, at the head of a cabinet including only a minority of Nazi ministers—which he did on 30 January 1933.
In Mein Kampf, Hitler directly attacked both left-wing and right-wing politics in Germany.[f] However, a majority of scholars identify Nazism in practice as being a far-right form of politics.[76][page needed] When asked in an interview in 1934 whether the Nazis were "bourgeois right-wing" as alleged by their opponents, Hitler responded that Nazism was not exclusively for any class and indicated that it favoured neither the left nor the right, but preserved "pure" elements from both "camps" by stating: "From the camp of bourgeois tradition, it takes national resolve, and from the materialism of the Marxist dogma, living, creative Socialism".[77]
The votes that the Nazis received in the 1932 elections established the Nazi Party as the largest parliamentary faction of the Weimar Republic government. Hitler was appointed as Chancellor of Germany on 30 January 1933.
The Reichstag fire on 27 February 1933 gave Hitler a pretext for suppressing his political opponents. The following day he persuaded the Reich's President Paul von Hindenburg to issue the Reichstag Fire Decree, which suspended most civil liberties. The NSDAP won the parliamentary election on 5 March 1933 with 43.9 percent of votes, but failed to win an absolute majority. After the election, hundreds of thousands of new members joined the party for opportunistic reasons, most of them civil servants and white-collar workers. They were nicknamed the "casualties of March" (German: Märzgefallenen) or "March violets" (German: Märzveilchen).[78] To protect the party from too many non-ideological turncoats who were viewed by the so-called "old fighters" (alte Kämpfer) with some mistrust,[78] the party issued a freeze on admissions that remained in force from May 1933 to 1937.[79][page needed]
On 23 March, the parliament passed the Enabling Act of 1933, which gave the cabinet the right to enact laws without the consent of parliament. In effect, this gave Hitler dictatorial powers. Now possessing virtually absolute power, the Nazis established totalitarian control as they abolished labour unions and other political parties and imprisoned their political opponents, first at wilde Lager, improvised camps, and then in concentration camps. Nazi Germany had been established, yet the Reichswehr remained impartial and Nazi power over Germany was not yet absolute.
During June and July 1933, all competing parties were either outlawed or dissolved themselves and subsequently the Law against the founding of new parties of 14 July 1933 legally established the Nazi Party's monopoly. On 1 December 1933, the Law to secure the unity of party and state entered into force, which was the base for a progressive intertwining of party structures and state apparatus.[81] By this law, the SA—actually a party division—was given quasi-governmental authority and their leader was co-opted as an ex officio cabinet member. By virtue of a 30 January 1934 Law concerning the reorganisation of the Reich, the Länder (states) lost their statehood and were demoted to administrative divisions of the Reich's government (Gleichschaltung). Effectively, they lost most of their power to the Gaue that were originally just regional divisions of the party, but took over most competencies of the state administration in their respective sectors.[82]
During the Röhm Purge of 30 June to 2 July 1934 (also known as the "Night of the Long Knives"), Hitler disempowered the SA's leadership—most of whom belonged to the Strasserist (national revolutionary) faction within the NSDAP—and ordered them killed. He accused them of having conspired to stage a coup d'état, but it is believed that this was only a pretence to justify the suppression of any intraparty opposition. The purge was executed by the SS, assisted by the Gestapo and Reichswehr units. Aside from Strasserist Nazis, they also murdered anti-Nazi conservative figures like former chancellor Kurt von Schleicher.[83] After this, the SA continued to exist but lost much of its importance, while the role of the SS grew significantly. Formerly only a sub-organisation of the SA, it was made into a separate organisation of the NSDAP in July 1934.[84]
After the death of President Hindenburg on 2 August 1934, Hitler merged the offices of party leader, head of state and chief of government in one, taking the title of Führer und Reichskanzler. The Chancellery of the Führer, officially an organisation of the Nazi Party, took over the functions of the Office of the President (a government agency), blurring the distinction between structures of party and state even further. The SS increasingly exerted police functions, a development which was formally documented by the merger of the offices of Reichsführer-SS and Chief of the German Police on 17 June 1936, as the position was held by Heinrich Himmler who derived his authority directly from Hitler.[85] The Sicherheitsdienst (SD, formally the "Security Service of the Reichsführer-SS") that had been created in 1931 as an intraparty intelligence service became the de facto intelligence agency of Nazi Germany. It was put under the Reich Main Security Office (RSHA) in 1939, which then coordinated the SD, Gestapo and criminal police, therefore functioning as a hybrid organisation of state and party structures.[86]
Officially, the Third Reich lasted only 12 years. The first Instrument of Surrender was signed by representatives of Nazi Germany at Reims, France on 7 May 1945, bringing the war in Europe to an end. The defeat of Germany in World War II marked the end of the Nazi Germany era.[87] The party was formally abolished on 10 October 1945 by the Allied Control Council and denazification began, along with trials of major war criminals before the International Military Tribunal (IMT) in Nuremberg.[88] Part of the Potsdam Agreement called for the destruction of the National Socialist Party alongside the reconstruction of German political life.[89] In addition, the Control Council Law no. 2 Providing for the Termination and Liquidation of the Nazi Organization specified the abolition of 52 other Nazi-affiliated and -supervised organisations and prohibited their activities.[90] Denazification was carried out in Germany and continued until the onset of the Cold War.[91][page needed][92]
Between 1939 and 1945, the Nazi Party-led regime, assisted by collaborationist governments and recruits from occupied countries, was responsible for the deaths of at least eleven million people,[93][94] including 5.5 to 6 million Jews (representing two-thirds of the Jewish population of Europe)[13][95][96] and between 200,000 and 1,500,000 Romani people.[97][98] The estimated total includes the killing of nearly two million non-Jewish Poles,[98] over three million Soviet prisoners of war,[99] communists and other political opponents, homosexuals, and the physically and mentally disabled.[100][101]
The National Socialist Programme was a formulation of the policies of the party. It contained 25 points and is therefore also known as the "25-point plan" or "25-point programme". It was the official party programme, with minor changes, from its proclamation as such by Hitler in 1920, when the party was still the German Workers' Party, until its dissolution.
At the top of the Nazi Party was the party chairman ("Der Führer"), who held absolute power and full command over the party. All other party offices were subordinate to his position and had to depend on his instructions. In 1934, Hitler founded a separate body for the chairman, Chancellery of the Führer, with its own sub-units.
Below the Führer's chancellery was first the "Staff of the Deputy Führer", headed by Rudolf Hess from 21 April 1933 to 10 May 1941; and then the "Party Chancellery" (Parteikanzlei), headed by Martin Bormann.
Directly subjected to the Führer were the Reichsleiter ("Reich Leader(s)"—the singular and plural forms are identical in German), whose number was gradually increased to eighteen. They held power and influence comparable to the Reich Ministers' in Hitler's Cabinet. The eighteen Reichsleiter formed the "Reich Leadership of the Nazi Party" (Reichsleitung der NSDAP), which was established at the so-called Brown House in Munich. Unlike a Gauleiter, a Reichsleiter did not have individual geographic areas under their command, but were responsible for specific spheres of interest.
The Nazi Party had a number of party offices dealing with various political and other matters. These included:
In addition to the Nazi Party proper, several paramilitary groups existed which "supported" Nazi aims. All members of these paramilitary organisations were required to become regular Nazi Party members first and could then enlist in the group of their choice. An exception was the Waffen-SS, considered the military arm of the SS and Nazi Party, which during the Second World War allowed members to enlist without joining the Nazi Party. Foreign volunteers of the Waffen-SS were also not required to be members of the Nazi Party, although many joined local nationalist groups from their own countries with the same aims. Police officers, including members of the Gestapo, frequently held SS rank for administrative reasons (known as "rank parity") and were likewise not required to be members of the Nazi Party.
A vast system of Nazi Party paramilitary ranks developed for each of the various paramilitary groups. This was part of the process of Gleichschaltung with the paramilitary and auxiliary groups swallowing existing associations and federations after the Party was flooded by millions of membership applications.[102]
The major Nazi Party paramilitary groups were as follows:
The Hitler Youth was a paramilitary group divided into an adult leadership corps and a general membership open to boys aged fourteen to eighteen. The League of German Girls was the equivalent group for girls.
Certain nominally independent organisations had their own legal representation and own property, but were supported by the Nazi Party. Many of these associated organisations were labour unions of various professions. Some were older organisations that were nazified according to the Gleichschaltung policy after the 1933 takeover.
The employees of large businesses with international operations such as Deutsche Bank, Dresdner Bank, and Commerzbank were mostly party members.[103] All German businesses abroad were also required to have their own Nazi Party Ausland-Organization liaison men, which enabled the party leadership to obtain updated and excellent intelligence on the actions of the global corporate elites.[104][page needed]
For the purpose of centralisation in the Gleichschaltung process, a rigidly hierarchical structure was established in the Nazi Party, which it later carried through in the whole of Germany in order to consolidate total power under the person of Hitler (Führerstaat). The party was regionally sub-divided into a number of Gaue (singular: Gau), each headed by a Gauleiter, who received their orders directly from Hitler. The name (originally a term for sub-regions of the Holy Roman Empire headed by a Gaugraf) for these new provincial structures was deliberately chosen because of its mediaeval connotations. The term is approximately equivalent to the English shire.
While the Nazis maintained the nominal existence of state and regional governments in Germany itself, this policy was not extended to territories acquired after 1937. Even in German-speaking areas such as Austria, state and regional governments were formally disbanded as opposed to just being dis-empowered.
After the Anschluss, a new type of administrative unit was introduced, called a Reichsgau. In these territories the Gauleiters also held the position of Reichsstatthalter, thereby formally combining the spheres of both party and state offices. The establishment of this type of district was subsequently carried out for any further territorial annexations of Germany both before and during World War II. Even the former Prussian territories re-taken in the 1939 Polish campaign were never formally re-integrated into Prussia, which was then Germany's largest state.
The Gaue and Reichsgaue (state or province) were further sub-divided into Kreise (counties) headed by a Kreisleiter, which were in turn sub-divided into Zellen (cells) and Blocken (blocks), headed by a Zellenleiter and Blockleiter respectively.
A reorganisation of the Gaue was enacted on 1 October 1928. The given numbers were the official ordering numbers. The statistics are from 1941, for which the Gau organisation of that moment in time forms the basis. Their size and populations are not exact; for instance, according to the official party statistics the Gau Kurmark/Mark Brandenburg was the largest in the German Reich.[105][page needed] By 1941, there were 42 territorial Gaue for Germany,[g] 7 of them for Austria, the Sudetenland (in Czechoslovakia), Danzig and the Territory of the Saar Basin, along with the unincorporated regions under German control known as the Protectorate of Bohemia-Moravia and the General Government, established after the joint invasion of Poland by Nazi Germany and the Soviet Union in 1939 at the onset of World War II.[106] Getting the leadership of the individual Gaue to co-operate with one another proved difficult at times since there was constant administrative and financial jockeying for control going on between them.[107]
The table below uses the organizational structure that existed before its dissolution in 1945. More information on the older Gaue is in the second table.
Later Gaue:
Simple re-namings of existing Gaue without territorial changes are marked with the initials RN in the column "later became". The numbering is not based on any official former ranking; the entries are merely listed alphabetically.
The irregular Swiss branch of the Nazi Party also established a number of Party Gaue in that country, most of them named after their regional capitals. These included Gau Basel-Solothurn, Gau Schaffhausen, Gau Luzern, Gau Bern and Gau Zürich.[108][109][110] The Gau Ostschweiz (East Switzerland) combined the territories of three cantons: St. Gallen, Thurgau and Appenzell.[111]
The general membership of the Nazi Party mainly consisted of the urban and rural lower middle classes. 7% belonged to the upper class, another 7% were peasants, 35% were industrial workers and 51% were what can be described as middle class. In early 1933, just before Hitler's appointment to the chancellorship, the party showed an under-representation of "workers", who made up 29.7% of the membership but 46.3% of German society. Conversely, white-collar employees (18.6% of members and 12% of Germans), the self-employed (19.8% of members and 9.6% of Germans) and civil servants (15.2% of members and 4.8% of the German population) had joined in proportions greater than their share of the general population.[112] These members were affiliated with local branches of the party, of which there were 1,378 throughout the country in 1928. In 1932, the number had risen to 11,845, reflecting the party's growth in this period.[112]
When it came to power in 1933, the Nazi Party had over 2 million members. In 1939, the membership total rose to 5.3 million with 81% being male and 19% being female. It continued to attract many more and by 1945 the party reached its peak of 8 million with 63% being male and 37% being female (about 10% of the German population of 80 million).[3][113]
Nazi Party members with military ambitions were encouraged to join the Waffen-SS, but a great number enlisted in the Wehrmacht and even more were drafted for service after World War II began. Early regulations required that all Wehrmacht members be non-political, and any Nazi Party member who joined the Wehrmacht in the 1930s was required to resign from the party.
However, this regulation was soon waived and full Nazi Party members served in the Wehrmacht, in particular after the outbreak of World War II. The Wehrmacht Reserves also saw a high number of senior Nazis enlisting, with Reinhard Heydrich and Fritz Todt joining the Luftwaffe and Karl Hanke serving in the army.
The British historian Richard J. Evans wrote that junior officers in the army were inclined to be especially zealous National Socialists with a third of them having joined the Nazi Party by 1941. Reinforcing the work of the junior leaders were the National Socialist Leadership Guidance Officers, which were created with the purpose of indoctrinating the troops for the "war of extermination" against Soviet Russia.[114] Among higher-ranking officers, 29.2% were NSDAP members by 1941.[115]
In 1926, the party formed a special division to engage the student population, known as the National Socialist German Students' League (NSDStB). A group for university lecturers, the National Socialist German University Lecturers' League (NSDDB), also existed until July 1944.
The National Socialist Women's League was the women's organisation of the party and by 1938 it had approximately 2 million members.
Party members who lived outside Germany were pooled into the Auslands-Organisation (NSDAP/AO, "Foreign Organization"). Membership was limited to so-called "Imperial Germans" (citizens of the German Empire); "Ethnic Germans" (Volksdeutsche), who did not hold German citizenship, were not permitted to join.
Under Beneš decree No. 16/1945 Coll., membership of the Nazi Party was, for citizens of Czechoslovakia, punishable by between five and twenty years of imprisonment.
Deutsche Gemeinschaft was a branch of the Nazi Party founded in 1919, created for Germans with Volksdeutsche status.[116] It is not to be confused with the post-war right-wing Deutsche Gemeinschaft [de], which was founded in 1949.
Notable members included:[117][page needed]
Informational notes
Citations
Bibliography
en/4196.html.txt
In meteorology, a cloud is an aerosol consisting of a visible mass of minute liquid droplets, frozen crystals, or other particles suspended in the atmosphere of a planetary body or similar space.[1] Water or various other chemicals may compose the droplets and crystals. On Earth, clouds are formed as a result of saturation of the air when it is cooled to its dew point, or when it gains sufficient moisture (usually in the form of water vapor) from an adjacent source to raise the dew point to the ambient temperature.
They are seen in the Earth's homosphere, which includes the troposphere, stratosphere, and mesosphere. Nephology is the science of clouds, which is undertaken in the cloud physics branch of meteorology. There are two methods of naming clouds in their respective layers of the homosphere, Latin and common.
Genus types in the troposphere, the atmospheric layer closest to Earth's surface, have Latin names due to the universal adoption of Luke Howard's nomenclature, formally proposed in 1802. It became the basis of a modern international system that divides clouds into five physical forms which can be divided or classified further into altitude levels to derive the ten basic genera. The main representative cloud types for each of these forms are stratus, cirrus, stratocumulus, cumulus, and cumulonimbus. Low-level stratiform and stratocumuliform genera do not have any altitude-related prefixes. However, mid-level variants of the same physical forms are given the prefix alto- while high-level types carry the prefix cirro-. The other main forms never have prefixes indicating altitude level. Cirriform clouds are always high-level, while cumuliform and cumulonimbiform clouds are classified formally as low-level. The latter are also more informally characterized as multi-level or vertical, as indicated by the cumulo- prefix. Most of the ten genera derived by this method of classification can be subdivided into species and further subdivided into varieties. Very low stratiform clouds that extend down to the Earth's surface are given the common names fog and mist, but have no Latin names.
In the stratosphere and mesosphere, clouds have common names for their main types. They may have the appearance of stratiform veils or sheets, cirriform wisps, or stratocumuliform bands or ripples. They are seen infrequently, mostly in the polar regions of Earth. Clouds have been observed in the atmospheres of other planets and moons in the Solar System and beyond. However, due to their different temperature characteristics, they are often composed of other substances such as methane, ammonia, and sulfuric acid, as well as water.
Tropospheric clouds can have a direct effect on climate change on Earth. They may reflect incoming rays from the sun which can contribute to a cooling effect where and when these clouds occur, or trap longer wave radiation that reflects back up from the Earth's surface which can cause a warming effect. The altitude, form, and thickness of the clouds are the main factors that affect the local heating or cooling of Earth and the atmosphere. Clouds that form above the troposphere are too scarce and too thin to have any influence on climate change.
The tabular overview that follows is very broad in scope. It draws from several methods of cloud classification, both formal and informal, used in different levels of the Earth's homosphere by a number of cited authorities. Despite some differences in methodologies and terminologies, the classification schemes seen in this article can be harmonized by using an informal cross-classification of physical forms and altitude levels to derive the 10 tropospheric genera, the fog and mist that forms at surface level, and several additional major types above the troposphere. The cumulus genus includes four species that indicate vertical size and structure which can affect both forms and levels. This table should not be seen as a strict or singular classification, but as an illustration of how various major cloud types are related to each other and defined through a full range of altitude levels from Earth's surface to the "edge of space".
The origin of the term "cloud" can be found in the Old English words clud or clod, meaning a hill or a mass of rock. Around the beginning of the 13th century, the word came to be used as a metaphor for rain clouds, because of the similarity in appearance between a mass of rock and cumulus heap cloud. Over time, the metaphoric usage of the word supplanted the Old English weolcan, which had been the literal term for clouds in general.[2][3]
Ancient cloud studies were not made in isolation; clouds were observed in combination with other weather elements and even other natural sciences. Around 340 BC, the Greek philosopher Aristotle wrote Meteorologica, a work which represented the sum of knowledge of the time about natural science, including weather and climate. For the first time, precipitation and the clouds from which precipitation fell were called meteors, a term that originates from the Greek word meteoros, meaning 'high in the sky'. From that word came the modern term meteorology, the study of clouds and weather. Meteorologica was based on intuition and simple observation, but not on what is now considered the scientific method. Nevertheless, it was the first known work that attempted to treat a broad range of meteorological topics in a systematic way, especially the hydrological cycle.[4]
After centuries of speculative theories about the formation and behavior of clouds, the first truly scientific studies were undertaken by Luke Howard in England and Jean-Baptiste Lamarck in France. Howard was a methodical observer with a strong grounding in the Latin language, and used his background to classify the various tropospheric cloud types during 1802. He believed that the changing cloud forms in the sky could hold the key to weather forecasting. Lamarck had worked independently on cloud classification the same year and had come up with a different naming scheme that failed to make an impression even in his home country of France because it used unusual French names for cloud types. His system of nomenclature included 12 categories of clouds, with such names as (translated from French) hazy clouds, dappled clouds, and broom-like clouds. By contrast, Howard used universally accepted Latin, which caught on quickly after it was published in 1803.[5] As a sign of the popularity of the naming scheme, German dramatist and poet Johann Wolfgang von Goethe composed four poems about clouds, dedicating them to Howard. An elaboration of Howard's system was eventually formally adopted by the International Meteorological Conference in 1891.[5] This system covered only the tropospheric cloud types, but the discovery of clouds above the troposphere during the late 19th century eventually led to the creation of separate classification schemes using common names for these very high clouds, which were still broadly similar to some cloud forms identified in the troposphere.[6]
Terrestrial clouds can be found throughout most of the homosphere, which includes the troposphere, stratosphere, and mesosphere. Within these layers of the atmosphere, air can become saturated as a result of being cooled to its dew point or by having moisture added from an adjacent source.[7] In the latter case, saturation occurs when the dew point is raised to the ambient air temperature.
Adiabatic cooling occurs when one or more of three possible lifting agents – convective, cyclonic/frontal, or orographic – causes a parcel of air containing invisible water vapor to rise, expand, and cool to its dew point, the temperature at which the air becomes saturated.[8] As the air is cooled to its dew point and becomes saturated, water vapor normally condenses to form cloud drops. This condensation normally occurs on cloud condensation nuclei such as salt or dust particles that are small enough to be held aloft by normal circulation of the air.[9][10]
One agent is the convective upward motion of air caused by daytime solar heating at surface level.[9] Airmass instability allows for the formation of cumuliform clouds that can produce showers if the air is sufficiently moist.[11] On moderately rare occasions, convective lift can be powerful enough to penetrate the tropopause and push the cloud top into the stratosphere.[12]
Frontal and cyclonic lift occur when stable air is forced aloft at weather fronts and around centers of low pressure by a process called convergence.[13] Warm fronts associated with extratropical cyclones tend to generate mostly cirriform and stratiform clouds over a wide area unless the approaching warm airmass is unstable, in which case cumulus congestus or cumulonimbus clouds are usually embedded in the main precipitating cloud layer.[14] Cold fronts are usually faster moving and generate a narrower line of clouds, which are mostly stratocumuliform, cumuliform, or cumulonimbiform depending on the stability of the warm airmass just ahead of the front.[15]
A third source of lift is wind circulation forcing air over a physical barrier such as a mountain (orographic lift).[9] If the air is generally stable, nothing more than lenticular cap clouds form. However, if the air becomes sufficiently moist and unstable, orographic showers or thunderstorms may appear.[16]
Along with adiabatic cooling that requires a lifting agent, three major nonadiabatic mechanisms exist for lowering the temperature of the air to its dew point. Conductive, radiational, and evaporative cooling require no lifting mechanism and can cause condensation at surface level resulting in the formation of fog.[17][18][19]
Water vapor can be added to the air from several main sources as a way of achieving saturation without any cooling process: water or moist ground,[20][21][22] precipitation or virga,[23] and transpiration from plants.[24]
Tropospheric classification is based on a hierarchy of categories with physical forms and altitude levels at the top.[25][26] These are cross-classified into a total of ten genus types, most of which can be divided into species and further subdivided into varieties which are at the bottom of the hierarchy.[27]
Clouds in the troposphere assume five physical forms based on structure and process of formation. These forms are commonly used for the purpose of satellite analysis.[25] They are given below in approximate ascending order of instability or convective activity.[28]
Nonconvective stratiform clouds appear in stable airmass conditions and, in general, have flat, sheet-like structures that can form at any altitude in the troposphere.[29] The stratiform group is divided by altitude range into the genera cirrostratus (high-level), altostratus (mid-level), stratus (low-level), and nimbostratus (multi-level).[26] Fog is commonly considered a surface-based cloud layer.[16] The fog may form at surface level in clear air or it may be the result of a very low stratus cloud subsiding to ground or sea level. Conversely, low stratiform clouds result when advection fog is lifted above surface level during breezy conditions.
Cirriform clouds in the troposphere are of the genus cirrus and have the appearance of detached or semimerged filaments. They form at high tropospheric altitudes in air that is mostly stable with little or no convective activity, although denser patches may occasionally show buildups caused by limited high-level convection where the air is partly unstable.[30] Clouds resembling cirrus can be found above the troposphere but are classified separately using common names.
Clouds of this structure have both cumuliform and stratiform characteristics in the form of rolls, ripples, or elements.[31] They generally form as a result of limited convection in an otherwise mostly stable airmass topped by an inversion layer.[32] If the inversion layer is absent or higher in the troposphere, increased airmass instability may cause the cloud layers to develop tops in the form of turrets consisting of embedded cumuliform buildups.[33] The stratocumuliform group is divided into cirrocumulus (high-level), altocumulus (mid-level), and stratocumulus (low-level).[31]
Cumuliform clouds generally appear in isolated heaps or tufts.[34][35] They are the product of localized but generally free-convective lift where no inversion layers are in the troposphere to limit vertical growth. In general, small cumuliform clouds tend to indicate comparatively weak instability. Larger cumuliform types are a sign of greater atmospheric instability and convective activity.[36] Depending on their vertical size, clouds of the cumulus genus type may be low-level or multi-level with moderate to towering vertical extent.[26]
The largest free-convective clouds comprise the genus cumulonimbus, which have towering vertical extent. They occur in highly unstable air[9] and often have fuzzy outlines at the upper parts of the clouds that sometimes include anvil tops.[31] These clouds are the product of very strong convection that can penetrate the lower stratosphere.
Tropospheric clouds form in any of three levels (formerly called étages) based on altitude range above the Earth's surface. The grouping of clouds into levels is commonly done for the purposes of cloud atlases, surface weather observations,[26] and weather maps.[37] The base-height range for each level varies depending on the latitudinal geographical zone.[26] Each altitude level comprises two or three genus-types differentiated mainly by physical form.[38][31]
The standard levels and genus-types are summarised below in approximate descending order of the altitude at which each is normally based.[39] Multi-level clouds with significant vertical extent are separately listed and summarized in approximate ascending order of instability or convective activity.[28]
High clouds form at altitudes of 3,000 to 7,600 m (10,000 to 25,000 ft) in the polar regions, 5,000 to 12,200 m (16,500 to 40,000 ft) in the temperate regions, and 6,100 to 18,300 m (20,000 to 60,000 ft) in the tropics.[26] All cirriform clouds are classified as high, and thus constitute a single genus, cirrus (Ci). Stratocumuliform and stratiform clouds in the high altitude range carry the prefix cirro-, yielding the respective genus names cirrocumulus (Cc) and cirrostratus (Cs). When limited-resolution satellite images of high clouds are analysed without supporting data from direct human observations, distinguishing between individual forms or genus types becomes impossible, and they are then collectively identified as high-type (or informally as cirrus-type, though not all high clouds are of the cirrus form or genus).[40]
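The base-height ranges quoted above can be captured in a small lookup table. The following Python sketch is a hypothetical illustration only; the zone labels and function name are chosen here for convenience and are not part of any meteorological standard:

```python
# Base-height ranges for high clouds, in metres, as quoted in the text.
# The zone keys are informal labels chosen for this sketch.
HIGH_CLOUD_BASE_M = {
    "polar": (3_000, 7_600),
    "temperate": (5_000, 12_200),
    "tropical": (6_100, 18_300),
}

def high_base_range(zone: str) -> str:
    """Return the quoted base-height range for a latitudinal zone."""
    low, high = HIGH_CLOUD_BASE_M[zone]
    return f"{low}-{high} m"

print(high_base_range("temperate"))  # 5000-12200 m
```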
Nonvertical clouds in the middle level are prefixed by alto-, yielding the genus names altocumulus (Ac) for stratocumuliform types and altostratus (As) for stratiform types. These clouds can form as low as 2,000 m (6,500 ft) above surface at any latitude, but may be based as high as 4,000 m (13,000 ft) near the poles, 7,000 m (23,000 ft) at midlatitudes, and 7,600 m (25,000 ft) in the tropics.[26] As with high clouds, the main genus types are easily identified by the human eye, but distinguishing between them using satellite photography is not possible. Without the support of human observations, these clouds are usually collectively identified as middle-type on satellite images.[40]
Low clouds are found from near the surface up to 2,000 m (6,500 ft).[26] Genus types in this level either have no prefix or carry one that refers to a characteristic other than altitude. Clouds that form in the low level of the troposphere are generally of larger structure than those that form in the middle and high levels, so they can usually be identified by their forms and genus types using satellite photography alone.[40]
These clouds have low- to mid-level bases that form anywhere from near the surface to about 2,400 m (8,000 ft) and tops that can extend into the mid-altitude range and sometimes higher in the case of nimbostratus.
This is a diffuse, dark grey, multi-level stratiform layer with great horizontal extent and usually moderate to deep vertical development. It lacks towering structure and looks feebly illuminated from the inside.[58] Nimbostratus normally forms from mid-level altostratus, and develops at least moderate vertical extent[59][60] when the base subsides into the low level during precipitation that can reach moderate to heavy intensity. It achieves even greater vertical development when it simultaneously grows upward into the high level due to large-scale frontal or cyclonic lift.[61] The nimbo- prefix refers to its ability to produce continuous rain or snow over a wide area, especially ahead of a warm front.[62] This thick cloud layer may be accompanied by embedded towering cumuliform or cumulonimbiform types.[60][63] Meteorologists affiliated with the World Meteorological Organization (WMO) officially classify nimbostratus as mid-level for synoptic purposes while informally characterizing it as multi-level.[26] Independent meteorologists and educators appear split between those who largely follow the WMO model[59][60] and those who classify nimbostratus as low-level, despite its considerable vertical extent and its usual initial formation in the middle altitude range.[64][65]
These very large cumuliform and cumulonimbiform types have low- to mid-level cloud bases similar to those of the multi-level and moderate vertical types, and tops that nearly always extend into the high levels. They are required to be identified by their standard names or abbreviations in all aviation observations (METARs) and forecasts (TAFs) to warn pilots of possible severe weather and turbulence.[66]
Genus types are commonly divided into subtypes called species that indicate specific structural details which can vary according to the stability and windshear characteristics of the atmosphere at any given time and location. Despite this hierarchy, a particular species may be a subtype of more than one genus, especially if the genera are of the same physical form and are differentiated from each other mainly by altitude or level. There are a few species, each of which can be associated with genera of more than one physical form.[72] The species types are grouped below according to the physical forms and genera with which each is normally associated. The forms, genera, and species are listed in approximate ascending order of instability or convective activity.[28]
Of the stratiform group, high-level cirrostratus comprises two species. Cirrostratus nebulosus has a rather diffuse appearance lacking in structural detail.[73] Cirrostratus fibratus is a species made of semi-merged filaments that are transitional to or from cirrus.[74] Mid-level altostratus and multi-level nimbostratus always have a flat or diffuse appearance and are therefore not subdivided into species. Low stratus is of the species nebulosus[73] except when broken up into ragged sheets of stratus fractus (see below).[59][72][75]
Cirriform clouds have three non-convective species that can form in mostly stable airmass conditions. Cirrus fibratus comprise filaments that may be straight, wavy, or occasionally twisted by non-convective wind shear.[74] The species uncinus is similar but has upturned hooks at the ends. Cirrus spissatus appear as opaque patches that can show light grey shading.[72]
Stratocumuliform genus-types (cirrocumulus, altocumulus, and stratocumulus) that appear in mostly stable air have two species each. The stratiformis species normally occur in extensive sheets or in smaller patches where there is only minimal convective activity.[76] Clouds of the lenticularis species tend to have lens-like shapes tapered at the ends. They are most commonly seen as orographic mountain-wave clouds, but can occur anywhere in the troposphere where there is strong wind shear combined with sufficient airmass stability to maintain a generally flat cloud structure. These two species can be found in the high, middle, or low levels of the troposphere depending on the stratocumuliform genus or genera present at any given time.[59][72][75]
The species fractus shows variable instability because it can be a subdivision of genus-types of different physical forms that have different stability characteristics. This subtype can be in the form of ragged but mostly stable stratiform sheets (stratus fractus) or small ragged cumuliform heaps with somewhat greater instability (cumulus fractus).[72][75][77] When clouds of this species are associated with precipitating cloud systems of considerable vertical and sometimes horizontal extent, they are also classified as accessory clouds under the name pannus (see section on supplementary features).[78]
These species are subdivisions of genus types that can occur in partly unstable air. The species castellanus appears when a mostly stable stratocumuliform or cirriform layer becomes disturbed by localized areas of airmass instability, usually in the morning or afternoon. This results in the formation of cumuliform buildups of limited convection arising from a common stratiform base.[79] Castellanus resembles the turrets of a castle when viewed from the side, and can be found with stratocumuliform genera at any tropospheric altitude level and with limited-convective patches of high-level cirrus.[80] Tufted clouds of the more detached floccus species are subdivisions of genus-types which may be cirriform or stratocumuliform in overall structure. They are sometimes seen with cirrus, cirrocumulus, altocumulus, and stratocumulus.[81]
A newly recognized species of stratocumulus or altocumulus has been given the name volutus, a roll cloud that can occur ahead of a cumulonimbus formation.[82] There are some volutus clouds that form as a consequence of interactions with specific geographical features rather than with a parent cloud. Perhaps the strangest geographically specific cloud of this type is the Morning Glory, a rolling cylindrical cloud that appears unpredictably over the Gulf of Carpentaria in Northern Australia. Associated with a powerful "ripple" in the atmosphere, the cloud may be "surfed" in glider aircraft.[83]
More general airmass instability in the troposphere tends to produce clouds of the more freely convective cumulus genus type, whose species are mainly indicators of degrees of atmospheric instability and resultant vertical development of the clouds. A cumulus cloud initially forms in the low level of the troposphere as a cloudlet of the species humilis that shows only slight vertical development. If the air becomes more unstable, the cloud tends to grow vertically into the species mediocris, then congestus, the tallest cumulus species[72] which is the same type that the International Civil Aviation Organization refers to as 'towering cumulus'.[66]
With highly unstable atmospheric conditions, large cumulus may continue to grow into cumulonimbus calvus (essentially a very tall congestus cloud that produces thunder), then ultimately into the species capillatus when supercooled water droplets at the top of the cloud turn into ice crystals giving it a cirriform appearance.[72][75]
Genus and species types are further subdivided into varieties whose names can appear after the species name to provide a fuller description of a cloud. Some cloud varieties are not restricted to a specific altitude level or form, and can therefore be common to more than one genus or species.[84]
All cloud varieties fall into one of two main groups. One group identifies the opacities of particular low and mid-level cloud structures and comprises the varieties translucidus (thin translucent), perlucidus (thick opaque with translucent or very small clear breaks), and opacus (thick opaque). These varieties are always identifiable for cloud genera and species with variable opacity. All three are associated with the stratiformis species of altocumulus and stratocumulus. However, only two varieties are seen with altostratus and stratus nebulosus whose uniform structures prevent the formation of a perlucidus variety. Opacity-based varieties are not applied to high clouds because they are always translucent, or in the case of cirrus spissatus, always opaque.[84][85]
A second group describes the occasional arrangements of cloud structures into particular patterns that are discernible by a surface-based observer (cloud fields usually being visible only from a significant altitude above the formations). These varieties are not always present with the genera and species with which they are otherwise associated, but only appear when atmospheric conditions favor their formation. Intortus and vertebratus varieties occur on occasion with cirrus fibratus. They are respectively filaments twisted into irregular shapes, and those that are arranged in fishbone patterns, usually by uneven wind currents that favor the formation of these varieties. The variety radiatus is associated with cloud rows of a particular type that appear to converge at the horizon. It is sometimes seen with the fibratus and uncinus species of cirrus, the stratiformis species of altocumulus and stratocumulus, the mediocris and sometimes humilis species of cumulus,[87][88] and with the genus altostratus.[89]
Another variety, duplicatus (closely spaced layers of the same type, one above the other), is sometimes found with cirrus of both the fibratus and uncinus species, and with altocumulus and stratocumulus of the species stratiformis and lenticularis. The variety undulatus (having a wavy undulating base) can occur with any clouds of the species stratiformis or lenticularis, and with altostratus. It is only rarely observed with stratus nebulosus. The variety lacunosus is caused by localized downdrafts that create circular holes in the form of a honeycomb or net. It is occasionally seen with cirrocumulus and altocumulus of the species stratiformis, castellanus, and floccus, and with stratocumulus of the species stratiformis and castellanus.[84][85]
It is possible for some species to show combined varieties at one time, especially if one variety is opacity-based and the other is pattern-based. An example of this would be a layer of altocumulus stratiformis arranged in seemingly converging rows separated by small breaks. The full technical name of a cloud in this configuration would be altocumulus stratiformis radiatus perlucidus, which would identify respectively its genus, species, and two combined varieties.[75][84][85]
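The naming order used in this example (genus, then species, then any varieties) can be sketched as a trivial helper function. This is a hypothetical illustration for this article, not part of the WMO nomenclature itself:

```python
# Hypothetical sketch: assemble a full technical cloud name in the
# order used by the classification: genus, species, then varieties.
def cloud_name(genus, species=None, varieties=()):
    parts = [genus]
    if species:
        parts.append(species)
    parts.extend(varieties)
    return " ".join(parts)

# The example from the text: altocumulus stratiformis arranged in
# seemingly converging rows separated by small breaks.
print(cloud_name("altocumulus", "stratiformis", ["radiatus", "perlucidus"]))
# altocumulus stratiformis radiatus perlucidus
```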
Supplementary features and accessory clouds are not further subdivisions of cloud types below the species and variety level. Rather, they are either hydrometeors or special cloud types with their own Latin names that form in association with certain cloud genera, species, and varieties.[75][85] Supplementary features, whether in the form of clouds or precipitation, are directly attached to the main genus-cloud. Accessory clouds, by contrast, are generally detached from the main cloud.[90]
One group of supplementary features is not made up of actual cloud formations, but of precipitation that falls when water droplets or ice crystals that make up visible clouds have grown too heavy to remain aloft. Virga is a feature seen with clouds producing precipitation that evaporates before reaching the ground, these being of the genera cirrocumulus, altocumulus, altostratus, nimbostratus, stratocumulus, cumulus, and cumulonimbus.[90]
When the precipitation reaches the ground without completely evaporating, it is designated as the feature praecipitatio.[91] This normally occurs with altostratus opacus, which can produce widespread but usually light precipitation, and with thicker clouds that show significant vertical development. Of the latter, upward-growing cumulus mediocris produces only isolated light showers, while downward-growing nimbostratus is capable of heavier, more extensive precipitation. Towering vertical clouds have the greatest ability to produce intense precipitation events, but these tend to be localized unless organized along fast-moving cold fronts. Showers of moderate to heavy intensity can fall from cumulus congestus clouds. Cumulonimbus, the largest of all cloud genera, has the capacity to produce very heavy showers. Low stratus clouds usually produce only light precipitation, but this always occurs as the feature praecipitatio because this cloud genus lies too close to the ground to allow for the formation of virga.[75][85][90]
Incus is the most type-specific supplementary feature, seen only with cumulonimbus of the species capillatus. A cumulonimbus incus cloud top is one that has spread out into a clear anvil shape as a result of rising air currents hitting the stability layer at the tropopause where the air no longer continues to get colder with increasing altitude.[92]
The mamma feature forms on the bases of clouds as downward-facing bubble-like protuberances caused by localized downdrafts within the cloud. It is also sometimes called mammatus, an earlier version of the term used before a standardization of Latin nomenclature brought about by the World Meteorological Organization during the 20th century. The best-known is cumulonimbus with mammatus, but the mamma feature is also seen occasionally with cirrus, cirrocumulus, altocumulus, altostratus, and stratocumulus.[90]
A tuba feature is a cloud column that may hang from the bottom of a cumulus or cumulonimbus. A newly formed or poorly organized column might be comparatively benign, but can quickly intensify into a funnel cloud or tornado.[90][93][94]
An arcus feature is a roll cloud with ragged edges attached to the lower front part of cumulus congestus or cumulonimbus that forms along the leading edge of a squall line or thunderstorm outflow.[95] A large arcus formation can have the appearance of a dark menacing arch.[90]
Several new supplementary features have been formally recognized by the World Meteorological Organization (WMO). The feature fluctus can form under conditions of strong atmospheric wind shear when a stratocumulus, altocumulus, or cirrus cloud breaks into regularly spaced crests. This variant is sometimes known informally as a Kelvin–Helmholtz (wave) cloud. This phenomenon has also been observed in cloud formations over other planets and even in the sun's atmosphere.[96] Another highly disturbed but more chaotic wave-like cloud feature associated with stratocumulus or altocumulus cloud has been given the Latin name asperitas. The supplementary feature cavum is a circular fall-streak hole that occasionally forms in a thin layer of supercooled altocumulus or cirrocumulus. Fall streaks consisting of virga or wisps of cirrus are usually seen beneath the hole as ice crystals fall out to a lower altitude. This type of hole is usually larger than typical lacunosus holes. A murus feature is a cumulonimbus wall cloud with a lowering, rotating cloud base that can lead to the development of tornadoes. A cauda feature is a tail cloud that extends horizontally away from the murus cloud and is the result of air feeding into the storm.[82]
Supplementary cloud formations detached from the main cloud are known as accessory clouds.[75][85][90] The heavier precipitating clouds, nimbostratus, towering cumulus (cumulus congestus), and cumulonimbus, are typically accompanied during precipitation by the pannus feature, low ragged clouds of the species cumulus fractus or stratus fractus.[78]
A group of accessory clouds comprise formations that are associated mainly with upward-growing cumuliform and cumulonimbiform clouds of free convection. Pileus is a cap cloud that can form over a cumulonimbus or large cumulus cloud,[97] whereas a velum feature is a thin horizontal sheet that sometimes forms like an apron around the middle or in front of the parent cloud.[90] An accessory cloud recently officially recognized by the World Meteorological Organization is the flumen, also known more informally as the beaver's tail. It is formed by the warm, humid inflow of a supercell thunderstorm, and can be mistaken for a tornado. Although the flumen can indicate a tornado risk, it is similar in appearance to pannus or scud clouds and does not rotate.[82]
Clouds initially form in clear air or become clouds when fog rises above surface level. The genus of a newly formed cloud is determined mainly by air mass characteristics such as stability and moisture content. If these characteristics change over time, the genus tends to change accordingly. When this happens, the original genus is called a mother cloud. If the mother cloud retains much of its original form after the appearance of the new genus, it is termed a genitus cloud. One example of this is stratocumulus cumulogenitus, a stratocumulus cloud formed by the partial spreading of a cumulus type when there is a loss of convective lift. If the mother cloud undergoes a complete change in genus, it is considered to be a mutatus cloud.[98]
The genitus and mutatus categories have been expanded to include certain types that do not originate from pre-existing clouds. The term flammagenitus (Latin for 'fire-made') applies to cumulus congestus or cumulonimbus that are formed by large scale fires or volcanic eruptions. Smaller low-level "pyrocumulus" or "fumulus" clouds formed by contained industrial activity are now classified as cumulus homogenitus (Latin for 'man-made'). Contrails formed from the exhaust of aircraft flying in the upper level of the troposphere can persist and spread into formations resembling cirrus which are designated cirrus homogenitus. If a cirrus homogenitus cloud changes fully to any of the high-level genera, they are termed cirrus, cirrostratus, or cirrocumulus homomutatus. Stratus cataractagenitus (Latin for 'cataract-made') are generated by the spray from waterfalls. Silvagenitus (Latin for 'forest-made') is a stratus cloud that forms as water vapor is added to the air above a forest canopy.[98]
Stratocumulus clouds can be organized into "fields" that take on certain specially classified shapes and characteristics. In general, these fields are more discernible from high altitudes than from ground level. They can often be found in the following forms:
These patterns are formed from a phenomenon known as a Kármán vortex, which is named after the engineer and fluid dynamicist Theodore von Kármán.[101] Wind-driven clouds can form into parallel rows that follow the wind direction. When the wind and clouds encounter high-elevation land features such as vertically prominent islands, they can form eddies around the high land masses that give the clouds a twisted appearance.[102]
Although the local distribution of clouds can be significantly influenced by topography, the global prevalence of cloud cover in the troposphere tends to vary more by latitude. It is most prevalent in and along low pressure zones of surface tropospheric convergence which encircle the Earth close to the equator and near the 50th parallels of latitude in the northern and southern hemispheres.[105] The adiabatic cooling processes that lead to the creation of clouds by way of lifting agents are all associated with convergence; a process that involves the horizontal inflow and accumulation of air at a given location, as well as the rate at which this happens.[106] Near the equator, increased cloudiness is due to the presence of the low-pressure Intertropical Convergence Zone (ITCZ) where very warm and unstable air promotes mostly cumuliform and cumulonimbiform clouds.[107] Clouds of virtually any type can form along the mid-latitude convergence zones depending on the stability and moisture content of the air. These extratropical convergence zones are occupied by the polar fronts where air masses of polar origin meet and clash with those of tropical or subtropical origin.[108] This leads to the formation of weather-making extratropical cyclones composed of cloud systems that may be stable or unstable to varying degrees according to the stability characteristics of the various airmasses that are in conflict.[109]
Divergence is the opposite of convergence. In the Earth's troposphere, it involves the horizontal outflow of air from the upper part of a rising column of air, or from the lower part of a subsiding column often associated with an area or ridge of high pressure.[106] Cloudiness tends to be least prevalent near the poles and in the subtropics close to the 30th parallels, north and south. The latter are sometimes referred to as the horse latitudes. The presence of a large-scale high-pressure subtropical ridge on each side of the equator reduces cloudiness at these low latitudes.[110] Similar patterns also occur at higher latitudes in both hemispheres.[111]
The luminance or brightness of a cloud is determined by how light is reflected, scattered, and transmitted by the cloud's particles. Its brightness may also be affected by the presence of haze or photometeors such as halos and rainbows.[112] In the troposphere, dense, deep clouds exhibit a high reflectance (70% to 95%) throughout the visible spectrum. Tiny particles of water are densely packed and sunlight cannot penetrate far into the cloud before it is reflected out, giving a cloud its characteristic white color, especially when viewed from the top.[113] Cloud droplets tend to scatter light efficiently, so that the intensity of the solar radiation decreases with depth into the cloud. As a result, the cloud base can vary from very light to very dark grey depending on the cloud's thickness and how much light is being reflected or transmitted back to the observer. High, thin tropospheric clouds reflect less light because of the comparatively low concentration of constituent ice crystals or supercooled water droplets, which results in a slightly off-white appearance. However, a thick, dense ice-crystal cloud appears brilliant white with pronounced grey shading because of its greater reflectivity.[112]
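The depth-dependent dimming described above can be sketched with a simple Beer-Lambert-style exponential attenuation model. The extinction coefficients below are illustrative assumptions chosen for the example, not measured cloud properties:

```python
import math

def transmitted_fraction(depth_m, extinction_per_m):
    """Fraction of incident light remaining after passing depth_m metres
    through a uniform scattering/absorbing medium (Beer-Lambert law)."""
    return math.exp(-extinction_per_m * depth_m)

# Illustrative (assumed) extinction coefficients: a dense water-droplet
# cloud attenuates far more per metre than a thin ice-crystal cloud.
dense_cloud = 0.02   # per metre
thin_cirrus = 0.001  # per metre

for name, k in [("dense cloud", dense_cloud), ("thin cirrus", thin_cirrus)]:
    for depth in (100, 500, 1000):
        frac = transmitted_fraction(depth, k)
        print(f"{name}: {depth} m -> {frac:.3f} of light remains")
```

The rapid fall-off for the dense cloud is why a thick cloud viewed from below can look nearly black even while its top appears brilliant white.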
As a tropospheric cloud matures, the dense water droplets may combine to produce larger droplets. If the droplets become too large and heavy to be kept aloft by the air circulation, they will fall from the cloud as rain. By this process of accumulation, the space between droplets becomes increasingly larger, permitting light to penetrate farther into the cloud. If the cloud is sufficiently large and the droplets within are spaced far enough apart, a percentage of the light that enters the cloud is not reflected back out but is absorbed, giving the cloud a darker look. A simple example of this is that one can see farther in heavy rain than in heavy fog. This process of reflection/absorption is what causes the range of cloud color from white to black.[114]
Striking cloud colorations can be seen at any altitude, with the color of a cloud usually being the same as the incident light.[115] During daytime when the sun is relatively high in the sky, tropospheric clouds generally appear bright white on top with varying shades of grey underneath. Thin clouds may look white or appear to have acquired the color of their environment or background. Red, orange, and pink clouds occur almost entirely at sunrise/sunset and are the result of the scattering of sunlight by the atmosphere. When the sun is just below the horizon, low-level clouds are gray, middle clouds appear rose-colored, and high clouds are white or off-white. Clouds at night are black or dark grey in a moonless sky, or whitish when illuminated by the moon. They may also reflect the colors of large fires, city lights, or auroras that might be present.[115]
A cumulonimbus cloud that appears to have a greenish or bluish tint is a sign that it contains extremely high amounts of water; the hail or rain within scatters light in a way that gives the cloud a blue color. A green colorization occurs mostly late in the day when the sun is comparatively low in the sky and the incident sunlight has a reddish tinge that appears green when illuminating a very tall bluish cloud. Supercell-type storms are more likely to show this characteristic, but any storm can appear this way. Coloration such as this does not directly indicate a severe thunderstorm; it only confirms the potential. Because a green/blue tint signifies copious amounts of water, it implies a strong updraft to support that water, high winds from the storm raining out, and wet hail, all elements that improve the chance for the storm to become severe. In addition, the stronger the updraft is, the more likely the storm is to undergo tornadogenesis and to produce large hail and high winds.[116]
Yellowish clouds may be seen in the troposphere in the late spring through early fall months during forest fire season. The yellow color is due to the presence of pollutants in the smoke. Yellowish clouds are caused by the presence of nitrogen dioxide and are sometimes seen in urban areas with high air pollution levels.[117]
Stratocumulus stratiformis and small castellanus made orange by the sun rising
An occurrence of cloud iridescence with altocumulus volutus and cirrocumulus stratiformis
Sunset reflecting shades of pink onto grey stratocumulus stratiformis translucidus (becoming perlucidus in the background)
Stratocumulus stratiformis perlucidus before sunset. Bangalore, India.
Late-summer rainstorm in Denmark. The nearly black color of the base indicates that the main cloud in the foreground is probably cumulonimbus.
Particles in the atmosphere and the sun's angle enhance colors of stratocumulus cumulogenitus at evening twilight
Tropospheric clouds exert numerous influences on Earth's troposphere and climate. First and foremost, they are the source of precipitation, thereby greatly influencing the distribution and amount of precipitation. Because of their differential buoyancy relative to surrounding cloud-free air, clouds can be associated with vertical motions of the air that may be convective, frontal, or cyclonic. The motion is upward if the clouds are less dense because condensation of water vapor releases heat, warming the air and thereby decreasing its density. This can lead to downward motion because lifting of the air results in cooling that increases its density. All of these effects are subtly dependent on the vertical temperature and moisture structure of the atmosphere and result in major redistribution of heat that affect the Earth's climate.[118]
The complexity and diversity of clouds in the troposphere is a major reason for difficulty in quantifying the effects of clouds on climate and climate change. On the one hand, white cloud tops promote cooling of Earth's surface by reflecting shortwave radiation (visible and near infrared) from the sun, diminishing the amount of solar radiation that is absorbed at the surface, enhancing the Earth's albedo. Most of the sunlight that reaches the ground is absorbed, warming the surface, which emits radiation upward at longer, infrared, wavelengths. At these wavelengths, however, water in the clouds acts as an efficient absorber. The water reacts by radiating, also in the infrared, both upward and downward, and the downward longwave radiation results in increased warming at the surface. This is analogous to the greenhouse effect of greenhouse gases and water vapor.[118]
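The opposing shortwave (albedo) and longwave (greenhouse) effects described above can be sketched as a toy surface energy budget. All numbers below are assumed round values chosen for illustration, not climatological measurements:

```python
# Toy single-layer energy budget contrasting a cloud's shortwave (albedo)
# cooling with its longwave (greenhouse) warming at the surface.
# All coefficients are illustrative assumptions, not climatological values.

SOLAR_IN = 340.0  # W/m^2, approximate global-mean top-of-atmosphere insolation

def surface_forcing(cloud_albedo, longwave_back):
    """Net change in energy reaching the surface (W/m^2) relative to clear sky:
    negative = net cooling, positive = net warming."""
    shortwave_loss = -SOLAR_IN * cloud_albedo  # sunlight reflected back to space
    return shortwave_loss + longwave_back      # plus infrared re-radiated downward

# A bright low stratocumulus deck: high albedo, modest longwave return.
low_cloud = surface_forcing(cloud_albedo=0.4, longwave_back=60.0)
# A thin cirrus veil: little reflection, but it still traps outgoing infrared.
high_cirrus = surface_forcing(cloud_albedo=0.05, longwave_back=30.0)

print(f"low cloud net forcing:   {low_cloud:+.0f} W/m^2")   # negative: net cooling
print(f"high cirrus net forcing: {high_cirrus:+.0f} W/m^2") # positive: net warming
```

Even with made-up numbers, the sign pattern matches the text: reflective low clouds cool the surface on balance, while thin high clouds warm it.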
High-level genus-types particularly show this duality with both short-wave albedo cooling and long-wave greenhouse warming effects. On the whole, ice-crystal clouds in the upper troposphere (cirrus) tend to favor net warming.[119][120] However, the cooling effect is dominant with mid-level and low clouds, especially when they form in extensive sheets.[119] Measurements by NASA indicate that on the whole, the effects of low and mid-level clouds that tend to promote cooling outweigh the warming effects of high layers and the variable outcomes associated with vertically developed clouds.[119]
As difficult as it is to evaluate the influences of current clouds on current climate, it is even more problematic to predict changes in cloud patterns and properties in a future, warmer climate, and the resultant cloud influences on future climate. In a warmer climate more water would enter the atmosphere by evaporation at the surface; as clouds are formed from water vapor, cloudiness would be expected to increase. But in a warmer climate, higher temperatures would tend to evaporate clouds.[121] Both of these statements are considered accurate, and both phenomena, known as cloud feedbacks, are found in climate model calculations. Broadly speaking, if clouds, especially low clouds, increase in a warmer climate, the resultant cooling effect leads to a negative feedback in climate response to increased greenhouse gases. But if low clouds decrease, or if high clouds increase, the feedback is positive. Differing amounts of these feedbacks are the principal reason for differences in climate sensitivities of current global climate models. As a consequence, much research has focused on the response of low and vertical clouds to a changing climate. Leading global models produce quite different results, however, with some showing increasing low clouds and others showing decreases.[122][123] For these reasons the role of tropospheric clouds in regulating weather and climate remains a leading source of uncertainty in global warming projections.[124][125]
Polar stratospheric clouds (PSCs) form in the lowest part of the stratosphere during the winter, at the altitude and during the season that produce the coldest temperatures and therefore the best chances of triggering condensation caused by adiabatic cooling. Moisture is scarce in the stratosphere, so nacreous and non-nacreous clouds at this altitude range are restricted to polar regions in the winter where the air is coldest.[6]
PSCs show some variation in structure according to their chemical makeup and atmospheric conditions, but they are limited to a single very high altitude range of about 15,000–25,000 m (49,200–82,000 ft), so they are not classified into altitude levels, genus types, species, or varieties. There is no Latin nomenclature in the manner of tropospheric clouds; rather, they are given descriptive names using common English.[6]
Supercooled nitric acid and water PSCs, sometimes known as type 1, typically have a stratiform appearance resembling cirrostratus or haze, but because they are not frozen into crystals, they do not show the pastel colours of the nacreous types. This type of PSC has been identified as a cause of ozone depletion in the stratosphere.[126] The frozen nacreous types are typically very thin with mother-of-pearl colorations and an undulating cirriform or lenticular (stratocumuliform) appearance. These are sometimes known as type 2.[127][128]
Polar mesospheric clouds form at an extreme-level altitude range of about 80 to 85 km (50 to 53 mi). They are given the Latin name noctilucent because of their illumination well after sunset and before sunrise. They typically have a bluish or silvery white coloration that can resemble brightly illuminated cirrus. Noctilucent clouds may occasionally take on more of a red or orange hue.[6] They are not common or widespread enough to have a significant effect on climate.[129] However, an increasing frequency of occurrence of noctilucent clouds since the 19th century may be the result of climate change.[130]
Noctilucent clouds are the highest in the atmosphere and form near the top of the mesosphere at about ten times the altitude of tropospheric high clouds.[131] From ground level, they can occasionally be seen illuminated by the sun during deep twilight. Ongoing research indicates that convective lift in the mesosphere is strong enough during the polar summer to cause adiabatic cooling of a small amount of water vapour to the point of saturation. This tends to produce the coldest temperatures in the entire atmosphere just below the mesopause. These conditions result in the best environment for the formation of polar mesospheric clouds.[129] There is also evidence that smoke particles from burnt-up meteors provide much of the condensation nuclei required for the formation of noctilucent cloud.[132]
Noctilucent clouds have four major types based on physical structure and appearance. Type I veils are very tenuous and lack well-defined structure, somewhat like cirrostratus or poorly defined cirrus.[133] Type II bands are long streaks that often occur in groups arranged roughly parallel to each other. They are usually more widely spaced than the bands or elements seen with cirrocumulus clouds.[134] Type III billows are arrangements of closely spaced, roughly parallel short streaks that mostly resemble cirrus.[135] Type IV whirls are partial or, more rarely, complete rings of cloud with dark centres.[136]
Distribution in the mesosphere is similar to the stratosphere except at much higher altitudes. Because of the need for maximum cooling of the water vapor to produce noctilucent clouds, their distribution tends to be restricted to polar regions of Earth. A major seasonal difference is that convective lift from below the mesosphere pushes very scarce water vapor to higher colder altitudes required for cloud formation during the respective summer seasons in the northern and southern hemispheres. Sightings are rare more than 45 degrees south of the north pole or north of the south pole.[6]
Cloud cover has been seen on most other planets in the Solar System. Venus's thick clouds are composed of sulfur dioxide (due to volcanic activity) and appear to be almost entirely stratiform.[137] They are arranged in three main layers at altitudes of 45 to 65 km that obscure the planet's surface and can produce virga. No embedded cumuliform types have been identified, but broken stratocumuliform wave formations are sometimes seen in the top layer that reveal more continuous layer clouds underneath.[138] On Mars, noctilucent, cirrus, cirrocumulus and stratocumulus composed of water-ice have been detected mostly near the poles.[139][140] Water-ice fogs have also been detected on Mars.[141]
Both Jupiter and Saturn have an outer cirriform cloud deck composed of ammonia,[142][143] an intermediate stratiform haze-cloud layer made of ammonium hydrosulfide, and an inner deck of cumulus water clouds.[144][145] Embedded cumulonimbus are known to exist near the Great Red Spot on Jupiter.[146][147] The same category-types can be found covering Uranus and Neptune, but are all composed of methane.[148][149][150][151][152][153] Saturn's moon Titan has cirrus clouds believed to be composed largely of methane.[154][155] The Cassini–Huygens Saturn mission uncovered evidence of polar stratospheric clouds[156] and a methane cycle on Titan, including lakes near the poles and fluvial channels on the surface of the moon.[157]
Some planets outside the Solar System are known to have atmospheric clouds. In October 2013, the detection of high altitude optically thick clouds in the atmosphere of exoplanet Kepler-7b was announced,[158][159] and, in December 2013, in the atmospheres of GJ 436 b and GJ 1214 b.[160][161][162][163]
Clouds play an important role in various cultures and religious traditions. The ancient Akkadians believed that the clouds were the breasts of the sky goddess Antu[165] and that rain was milk from her breasts.[165] In Exodus 13:21–22, Yahweh is described as guiding the Israelites through the desert in the form of a "pillar of cloud" by day and a "pillar of fire" by night.[164]
In the ancient Greek comedy The Clouds, written by Aristophanes and first performed at the City Dionysia in 423 BC, the philosopher Socrates declares that the Clouds are the only true deities[166] and tells the main character Strepsiades not to worship any deities other than the Clouds, but to pay homage to them alone.[166] In the play, the Clouds change shape to reveal the true nature of whoever is looking at them,[167][166][168] turning into centaurs at the sight of a long-haired politician, wolves at the sight of the embezzler Simon, deer at the sight of the coward Cleonymus, and mortal women at the sight of the effeminate informer Cleisthenes.[167][168][166] They are hailed as the source of inspiration to comic poets and philosophers;[166] they are masters of rhetoric, regarding eloquence and sophistry alike as their "friends".[166]
In China, clouds are symbols of luck and happiness.[169] Overlapping clouds are thought to imply eternal happiness[169] and clouds of different colors are said to indicate "multiplied blessings".[169]
en/4197.html.txt
In meteorology, a cloud is an aerosol consisting of a visible mass of minute liquid droplets, frozen crystals, or other particles suspended in the atmosphere of a planetary body or similar space.[1] Water or various other chemicals may compose the droplets and crystals. On Earth, clouds are formed as a result of saturation of the air when it is cooled to its dew point, or when it gains sufficient moisture (usually in the form of water vapor) from an adjacent source to raise the dew point to the ambient temperature.
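The dew point mentioned above can be estimated from temperature and relative humidity. The sketch below uses the well-known Magnus approximation; the particular coefficients are one common fit, and the example is illustrative rather than a definitive formula:

```python
import math

def dew_point_c(temp_c, rel_humidity):
    """Dew point (deg C) from the Magnus approximation.
    rel_humidity is a fraction in (0, 1]."""
    a, b = 17.625, 243.04  # Magnus coefficients (one common fit)
    gamma = math.log(rel_humidity) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Surface air at 25 deg C and 60% relative humidity:
td = dew_point_c(25.0, 0.60)
print(f"dew point ~ {td:.1f} deg C")  # cooling the parcel to this temperature saturates it
```

At 100% relative humidity the dew point equals the air temperature, which is the saturation condition under which cloud (or fog) forms.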
They are seen in the Earth's homosphere, which includes the troposphere, stratosphere, and mesosphere. Nephology is the science of clouds, which is undertaken in the cloud physics branch of meteorology. There are two methods of naming clouds in their respective layers of the homosphere, Latin and common.
Genus types in the troposphere, the atmospheric layer closest to Earth's surface, have Latin names due to the universal adoption of Luke Howard's nomenclature that was formally proposed in 1802. It became the basis of a modern international system that divides clouds into five physical forms which can be divided or classified further into altitude levels to derive the ten basic genera. The main representative cloud types for each of these forms are stratus, cirrus, stratocumulus, cumulus, and cumulonimbus. Low-level stratiform and stratocumuliform genera do not have any altitude-related prefixes. However, mid-level variants of the same physical forms are given the prefix alto- while high-level types carry the prefix cirro-. The other main forms never have prefixes indicating altitude level. Cirriform clouds are always high-level while cumuliform and cumulonimbiform clouds are classified formally as low-level. The latter are also more informally characterized as multi-level or vertical as indicated by the cumulo- prefix. Most of the ten genera derived by this method of classification can be subdivided into species and further subdivided into varieties. Very low stratiform clouds that extend down to the Earth's surface are given the common names fog and mist, but have no Latin names.
In the stratosphere and mesosphere, clouds have common names for their main types. They may have the appearance of stratiform veils or sheets, cirriform wisps, or stratocumuliform bands or ripples. They are seen infrequently, mostly in the polar regions of Earth. Clouds have been observed in the atmospheres of other planets and moons in the Solar System and beyond. However, due to their different temperature characteristics, they are often composed of other substances such as methane, ammonia, and sulfuric acid, as well as water.
Tropospheric clouds can have a direct effect on climate change on Earth. They may reflect incoming rays from the sun which can contribute to a cooling effect where and when these clouds occur, or trap longer wave radiation that reflects back up from the Earth's surface which can cause a warming effect. The altitude, form, and thickness of the clouds are the main factors that affect the local heating or cooling of Earth and the atmosphere. Clouds that form above the troposphere are too scarce and too thin to have any influence on climate change.
The tabular overview that follows is very broad in scope. It draws from several methods of cloud classification, both formal and informal, used in different levels of the Earth's homosphere by a number of cited authorities. Despite some differences in methodologies and terminologies, the classification schemes seen in this article can be harmonized by using an informal cross-classification of physical forms and altitude levels to derive the 10 tropospheric genera, the fog and mist that forms at surface level, and several additional major types above the troposphere. The cumulus genus includes four species that indicate vertical size and structure which can affect both forms and levels. This table should not be seen as a strict or singular classification, but as an illustration of how various major cloud types are related to each other and defined through a full range of altitude levels from Earth's surface to the "edge of space".
The origin of the term "cloud" can be found in the Old English words clud or clod, meaning a hill or a mass of rock. Around the beginning of the 13th century, the word came to be used as a metaphor for rain clouds, because of the similarity in appearance between a mass of rock and cumulus heap cloud. Over time, the metaphoric usage of the word supplanted the Old English weolcan, which had been the literal term for clouds in general.[2][3]
Ancient cloud studies were not made in isolation, but were observed in combination with other weather elements and even other natural sciences. Around 340 BC, Greek philosopher Aristotle wrote Meteorologica, a work which represented the sum of knowledge of the time about natural science, including weather and climate. For the first time, precipitation and the clouds from which precipitation fell were called meteors, which originate from the Greek word meteoros, meaning 'high in the sky'. From that word came the modern term meteorology, the study of clouds and weather. Meteorologica was based on intuition and simple observation, but not on what is now considered the scientific method. Nevertheless, it was the first known work that attempted to treat a broad range of meteorological topics in a systematic way, especially the hydrological cycle.[4]
After centuries of speculative theories about the formation and behavior of clouds, the first truly scientific studies were undertaken by Luke Howard in England and Jean-Baptiste Lamarck in France. Howard was a methodical observer with a strong grounding in the Latin language, and used his background to classify the various tropospheric cloud types during 1802. He believed that the changing cloud forms in the sky could unlock the key to weather forecasting. Lamarck had worked independently on cloud classification the same year and had come up with a different naming scheme that failed to make an impression even in his home country of France because it used unusual French names for cloud types. His system of nomenclature included 12 categories of clouds, with such names as (translated from French) hazy clouds, dappled clouds, and broom-like clouds. By contrast, Howard used universally accepted Latin, which caught on quickly after it was published in 1803.[5] As a sign of the popularity of the naming scheme, German dramatist and poet Johann Wolfgang von Goethe composed four poems about clouds, dedicating them to Howard. An elaboration of Howard's system was eventually formally adopted by the International Meteorological Conference in 1891.[5] This system covered only the tropospheric cloud types, but the discovery of clouds above the troposphere during the late 19th century eventually led to the creation of separate classification schemes using common names for these very high clouds, which were still broadly similar to some cloud forms identified in the troposphere.[6]
Terrestrial clouds can be found throughout most of the homosphere, which includes the troposphere, stratosphere, and mesosphere. Within these layers of the atmosphere, air can become saturated as a result of being cooled to its dew point or by having moisture added from an adjacent source.[7] In the latter case, saturation occurs when the dew point is raised to the ambient air temperature.
Adiabatic cooling occurs when one or more of three possible lifting agents – convective, cyclonic/frontal, or orographic – causes a parcel of air containing invisible water vapor to rise and cool to its dew point, the temperature at which the air becomes saturated.[8] As the air is cooled to its dew point and becomes saturated, water vapor normally condenses to form cloud drops. This condensation normally occurs on cloud condensation nuclei such as salt or dust particles that are small enough to be held aloft by normal circulation of the air.[9][10]
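The height at which a lifted parcel cools to its dew point (the expected base of a convective cloud) is often estimated with Espy's rule of thumb of roughly 125 m of lift per degree Celsius of spread between temperature and dew point. A minimal sketch:

```python
def lifting_condensation_level_m(temp_c, dew_point_c):
    """Approximate height (m) at which a surface parcel, lifted and cooled
    dry-adiabatically, reaches its dew point and cloud begins to form
    (Espy's rule of thumb: ~125 m per deg C of temperature/dew-point spread)."""
    return 125.0 * (temp_c - dew_point_c)

# A parcel at 30 deg C with a dew point of 18 deg C:
base = lifting_condensation_level_m(30.0, 18.0)
print(f"expected cloud-base height ~ {base:.0f} m")
```

A parcel that is already saturated at the surface (zero spread) gives a cloud base of zero, i.e. fog.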
One agent is the convective upward motion of air caused by daytime solar heating at surface level.[9] Airmass instability allows for the formation of cumuliform clouds that can produce showers if the air is sufficiently moist.[11] On moderately rare occasions, convective lift can be powerful enough to penetrate the tropopause and push the cloud top into the stratosphere.[12]
Frontal and cyclonic lift occur when stable air is forced aloft at weather fronts and around centers of low pressure by a process called convergence.[13] Warm fronts associated with extratropical cyclones tend to generate mostly cirriform and stratiform clouds over a wide area unless the approaching warm airmass is unstable, in which case cumulus congestus or cumulonimbus clouds are usually embedded in the main precipitating cloud layer.[14] Cold fronts are usually faster moving and generate a narrower line of clouds, which are mostly stratocumuliform, cumuliform, or cumulonimbiform depending on the stability of the warm airmass just ahead of the front.[15]
A third source of lift is wind circulation forcing air over a physical barrier such as a mountain (orographic lift).[9] If the air is generally stable, nothing more than lenticular cap clouds form. However, if the air becomes sufficiently moist and unstable, orographic showers or thunderstorms may appear.[16]
Along with adiabatic cooling that requires a lifting agent, three major nonadiabatic mechanisms exist for lowering the temperature of the air to its dew point. Conductive, radiational, and evaporative cooling require no lifting mechanism and can cause condensation at surface level resulting in the formation of fog.[17][18][19]
Water vapor can be added to the air from several main sources as a way of achieving saturation without any cooling process: water or moist ground,[20][21][22] precipitation or virga,[23] and transpiration from plants.[24]
Tropospheric classification is based on a hierarchy of categories with physical forms and altitude levels at the top.[25][26] These are cross-classified into a total of ten genus types, most of which can be divided into species and further subdivided into varieties which are at the bottom of the hierarchy.[27]
Clouds in the troposphere assume five physical forms based on structure and process of formation. These forms are commonly used for the purpose of satellite analysis.[25] They are given below in approximate ascending order of instability or convective activity.[28]
Nonconvective stratiform clouds appear in stable airmass conditions and, in general, have flat, sheet-like structures that can form at any altitude in the troposphere.[29] The stratiform group is divided by altitude range into the genera cirrostratus (high-level), altostratus (mid-level), stratus (low-level), and nimbostratus (multi-level).[26] Fog is commonly considered a surface-based cloud layer.[16] The fog may form at surface level in clear air or it may be the result of a very low stratus cloud subsiding to ground or sea level. Conversely, low stratiform clouds result when advection fog is lifted above surface level during breezy conditions.
Cirriform clouds in the troposphere are of the genus cirrus and have the appearance of detached or semimerged filaments. They form at high tropospheric altitudes in air that is mostly stable with little or no convective activity, although denser patches may occasionally show buildups caused by limited high-level convection where the air is partly unstable.[30] Clouds resembling cirrus can be found above the troposphere but are classified separately using common names.
Clouds of this structure have both cumuliform and stratiform characteristics in the form of rolls, ripples, or elements.[31] They generally form as a result of limited convection in an otherwise mostly stable airmass topped by an inversion layer.[32] If the inversion layer is absent or higher in the troposphere, increased airmass instability may cause the cloud layers to develop tops in the form of turrets consisting of embedded cumuliform buildups.[33] The stratocumuliform group is divided into cirrocumulus (high-level), altocumulus (mid-level), and stratocumulus (low-level).[31]
Cumuliform clouds generally appear in isolated heaps or tufts.[34][35] They are the product of localized but generally free-convective lift where no inversion layers are in the troposphere to limit vertical growth. In general, small cumuliform clouds tend to indicate comparatively weak instability. Larger cumuliform types are a sign of greater atmospheric instability and convective activity.[36] Depending on their vertical size, clouds of the cumulus genus type may be low-level or multi-level with moderate to towering vertical extent.[26]
The largest free-convective clouds comprise the genus cumulonimbus, which have towering vertical extent. They occur in highly unstable air[9] and often have fuzzy outlines at the upper parts of the clouds that sometimes include anvil tops.[31] These clouds are the product of very strong convection that can penetrate the lower stratosphere.
Tropospheric clouds form in any of three levels (formerly called étages) based on altitude range above the Earth's surface. The grouping of clouds into levels is commonly done for the purposes of cloud atlases, surface weather observations,[26] and weather maps.[37] The base-height range for each level varies depending on the latitudinal geographical zone.[26] Each altitude level comprises two or three genus-types differentiated mainly by physical form.[38][31]
The standard levels and genus-types are summarised below in approximate descending order of the altitude at which each is normally based.[39] Multi-level clouds with significant vertical extent are separately listed and summarized in approximate ascending order of instability or convective activity.[28]
High clouds form at altitudes of 3,000 to 7,600 m (10,000 to 25,000 ft) in the polar regions, 5,000 to 12,200 m (16,500 to 40,000 ft) in the temperate regions, and 6,100 to 18,300 m (20,000 to 60,000 ft) in the tropics.[26] All cirriform clouds are classified as high and thus constitute a single genus, cirrus (Ci). Stratocumuliform and stratiform clouds in the high altitude range carry the prefix cirro-, yielding the respective genus names cirrocumulus (Cc) and cirrostratus (Cs). When limited-resolution satellite images of high clouds are analysed without supporting data from direct human observations, distinguishing between individual forms or genus types becomes impossible, and they are then collectively identified as high-type (or informally as cirrus-type, though not all high clouds are of the cirrus form or genus).[40]
Nonvertical clouds in the middle level are prefixed by alto-, yielding the genus names altocumulus (Ac) for stratocumuliform types and altostratus (As) for stratiform types. These clouds can form as low as 2,000 m (6,500 ft) above surface at any latitude, but may be based as high as 4,000 m (13,000 ft) near the poles, 7,000 m (23,000 ft) at midlatitudes, and 7,600 m (25,000 ft) in the tropics.[26] As with high clouds, the main genus types are easily identified by the human eye, but distinguishing between them using satellite photography is not possible. Without the support of human observations, these clouds are usually collectively identified as middle-type on satellite images.[40]
Low clouds are found from near the surface up to 2,000 m (6,500 ft).[26] Genus types in this level either have no prefix or carry one that refers to a characteristic other than altitude. Clouds that form in the low level of the troposphere are generally of larger structure than those that form in the middle and high levels, so they can usually be identified by their forms and genus types using satellite photography alone.[40]
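The base-height ranges given in the preceding paragraphs can be summarized as a small lookup table. The sketch below is purely illustrative (the dictionary and function names are this example's own, not any official WMO structure); it returns the highest level whose tabulated base range contains a given cloud-base altitude, since the middle and high ranges overlap within each latitudinal zone.

```python
# Approximate cloud-base altitude ranges (metres) by level and latitudinal
# zone, as stated in the text above. Illustrative only, not WMO data.
CLOUD_LEVELS_M = {
    "high":   {"polar": (3000, 7600), "temperate": (5000, 12200), "tropical": (6100, 18300)},
    "middle": {"polar": (2000, 4000), "temperate": (2000, 7000),  "tropical": (2000, 7600)},
    "low":    {"polar": (0, 2000),    "temperate": (0, 2000),     "tropical": (0, 2000)},
}

def level_for_base(base_m: float, zone: str) -> str:
    """Return the highest level whose base range contains base_m."""
    for level in ("high", "middle", "low"):
        lo, hi = CLOUD_LEVELS_M[level][zone]
        if lo <= base_m <= hi:
            return level
    raise ValueError(f"{base_m} m is outside the tabulated ranges for the {zone} zone")

print(level_for_base(9000, "temperate"))  # high
print(level_for_base(3000, "tropical"))   # middle
```

Because the level boundaries shift with latitude, the same cloud base of 3,000 m would be classified as high in the polar zone but only middle in the tropics.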
These clouds have low- to mid-level bases that form anywhere from near the surface to about 2,400 m (8,000 ft) and tops that can extend into the mid-altitude range and sometimes higher in the case of nimbostratus.
This is a diffuse, dark grey, multi-level stratiform layer with great horizontal extent and usually moderate to deep vertical development. It lacks towering structure and looks feebly illuminated from the inside.[58] Nimbostratus normally forms from mid-level altostratus, and develops at least moderate vertical extent[59][60] when the base subsides into the low level during precipitation that can reach moderate to heavy intensity. It achieves even greater vertical development when it simultaneously grows upward into the high level due to large-scale frontal or cyclonic lift.[61] The nimbo- prefix refers to its ability to produce continuous rain or snow over a wide area, especially ahead of a warm front.[62] This thick cloud layer may be accompanied by embedded towering cumuliform or cumulonimbiform types.[60][63] Meteorologists affiliated with the World Meteorological Organization (WMO) officially classify nimbostratus as mid-level for synoptic purposes while informally characterizing it as multi-level.[26] Independent meteorologists and educators appear split between those who largely follow the WMO model[59][60] and those who classify nimbostratus as low-level, despite its considerable vertical extent and its usual initial formation in the middle altitude range.[64][65]
These very large cumuliform and cumulonimbiform types have low- to mid-level cloud bases similar to those of the multi-level and moderate vertical types, and tops that nearly always extend into the high levels. They are required to be identified by their standard names or abbreviations in all aviation observations (METARs) and forecasts (TAFs) to warn pilots of possible severe weather and turbulence.[66]
Genus types are commonly divided into subtypes called species that indicate specific structural details which can vary according to the stability and windshear characteristics of the atmosphere at any given time and location. Despite this hierarchy, a particular species may be a subtype of more than one genus, especially if the genera are of the same physical form and are differentiated from each other mainly by altitude or level. There are a few species, each of which can be associated with genera of more than one physical form.[72] The species types are grouped below according to the physical forms and genera with which each is normally associated. The forms, genera, and species are listed in approximate ascending order of instability or convective activity.[28]
Of the stratiform group, high-level cirrostratus comprises two species. Cirrostratus nebulosus has a rather diffuse appearance lacking in structural detail.[73] Cirrostratus fibratus is a species made of semi-merged filaments that are transitional to or from cirrus.[74] Mid-level altostratus and multi-level nimbostratus always have a flat or diffuse appearance and are therefore not subdivided into species. Low stratus is of the species nebulosus[73] except when broken up into ragged sheets of stratus fractus (see below).[59][72][75]
Cirriform clouds have three non-convective species that can form in mostly stable airmass conditions. Cirrus fibratus comprises filaments that may be straight, wavy, or occasionally twisted by non-convective wind shear.[74] The species uncinus is similar but has upturned hooks at the ends. Cirrus spissatus appears as opaque patches that can show light grey shading.[72]
Stratocumuliform genus-types (cirrocumulus, altocumulus, and stratocumulus) that appear in mostly stable air have two species each. The stratiformis species normally occur in extensive sheets or in smaller patches where there is only minimal convective activity.[76] Clouds of the lenticularis species tend to have lens-like shapes tapered at the ends. They are most commonly seen as orographic mountain-wave clouds, but can occur anywhere in the troposphere where there is strong wind shear combined with sufficient airmass stability to maintain a generally flat cloud structure. These two species can be found in the high, middle, or low levels of the troposphere depending on the stratocumuliform genus or genera present at any given time.[59][72][75]
The species fractus shows variable instability because it can be a subdivision of genus-types of different physical forms that have different stability characteristics. This subtype can be in the form of ragged but mostly stable stratiform sheets (stratus fractus) or small ragged cumuliform heaps with somewhat greater instability (cumulus fractus).[72][75][77] When clouds of this species are associated with precipitating cloud systems of considerable vertical and sometimes horizontal extent, they are also classified as accessory clouds under the name pannus (see section on supplementary features).[78]
These species are subdivisions of genus types that can occur in partly unstable air. The species castellanus appears when a mostly stable stratocumuliform or cirriform layer becomes disturbed by localized areas of airmass instability, usually in the morning or afternoon. This results in the formation of cumuliform buildups of limited convection arising from a common stratiform base.[79] Castellanus resembles the turrets of a castle when viewed from the side, and can be found with stratocumuliform genera at any tropospheric altitude level and with limited-convective patches of high-level cirrus.[80] Tufted clouds of the more detached floccus species are subdivisions of genus-types which may be cirriform or stratocumuliform in overall structure. They are sometimes seen with cirrus, cirrocumulus, altocumulus, and stratocumulus.[81]
A newly recognized species of stratocumulus or altocumulus has been given the name volutus, a roll cloud that can occur ahead of a cumulonimbus formation.[82] There are some volutus clouds that form as a consequence of interactions with specific geographical features rather than with a parent cloud. Perhaps the strangest geographically specific cloud of this type is the Morning Glory, a rolling cylindrical cloud that appears unpredictably over the Gulf of Carpentaria in Northern Australia. Associated with a powerful "ripple" in the atmosphere, the cloud may be "surfed" in glider aircraft.[83]
More general airmass instability in the troposphere tends to produce clouds of the more freely convective cumulus genus type, whose species are mainly indicators of degrees of atmospheric instability and resultant vertical development of the clouds. A cumulus cloud initially forms in the low level of the troposphere as a cloudlet of the species humilis that shows only slight vertical development. If the air becomes more unstable, the cloud tends to grow vertically into the species mediocris, then congestus, the tallest cumulus species[72] which is the same type that the International Civil Aviation Organization refers to as 'towering cumulus'.[66]
With highly unstable atmospheric conditions, large cumulus may continue to grow into cumulonimbus calvus (essentially a very tall congestus cloud that produces thunder), then ultimately into the species capillatus when supercooled water droplets at the top of the cloud turn into ice crystals giving it a cirriform appearance.[72][75]
Genus and species types are further subdivided into varieties whose names can appear after the species name to provide a fuller description of a cloud. Some cloud varieties are not restricted to a specific altitude level or form, and can therefore be common to more than one genus or species.[84]
All cloud varieties fall into one of two main groups. One group identifies the opacities of particular low and mid-level cloud structures and comprises the varieties translucidus (thin translucent), perlucidus (thick opaque with translucent or very small clear breaks), and opacus (thick opaque). These varieties are always identifiable for cloud genera and species with variable opacity. All three are associated with the stratiformis species of altocumulus and stratocumulus. However, only two varieties are seen with altostratus and stratus nebulosus whose uniform structures prevent the formation of a perlucidus variety. Opacity-based varieties are not applied to high clouds because they are always translucent, or in the case of cirrus spissatus, always opaque.[84][85]
A second group describes the occasional arrangements of cloud structures into particular patterns that are discernible by a surface-based observer (cloud fields usually being visible only from a significant altitude above the formations). These varieties are not always present with the genera and species with which they are otherwise associated, but only appear when atmospheric conditions favor their formation. Intortus and vertebratus varieties occur on occasion with cirrus fibratus. They are respectively filaments twisted into irregular shapes, and those that are arranged in fishbone patterns, usually by uneven wind currents that favor the formation of these varieties. The variety radiatus is associated with cloud rows of a particular type that appear to converge at the horizon. It is sometimes seen with the fibratus and uncinus species of cirrus, the stratiformis species of altocumulus and stratocumulus, the mediocris and sometimes humilis species of cumulus,[87][88] and with the genus altostratus.[89]
Another variety, duplicatus (closely spaced layers of the same type, one above the other), is sometimes found with cirrus of both the fibratus and uncinus species, and with altocumulus and stratocumulus of the species stratiformis and lenticularis. The variety undulatus (having a wavy undulating base) can occur with any clouds of the species stratiformis or lenticularis, and with altostratus. It is only rarely observed with stratus nebulosus. The variety lacunosus is caused by localized downdrafts that create circular holes in the form of a honeycomb or net. It is occasionally seen with cirrocumulus and altocumulus of the species stratiformis, castellanus, and floccus, and with stratocumulus of the species stratiformis and castellanus.[84][85]
It is possible for some species to show combined varieties at one time, especially if one variety is opacity-based and the other is pattern-based. An example of this would be a layer of altocumulus stratiformis arranged in seemingly converging rows separated by small breaks. The full technical name of a cloud in this configuration would be altocumulus stratiformis radiatus perlucidus, which would identify respectively its genus, species, and two combined varieties.[75][84][85]
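The way a full technical name is assembled from the classification described above (genus, then species, then any combined varieties) can be sketched as a short helper function. This is an illustrative sketch only; the function name and signature are this example's invention, not part of any meteorological standard.

```python
# Assemble a full technical cloud name from its genus, optional species,
# and any varieties (an opacity-based and a pattern-based variety may be
# combined, as described in the text above). Illustrative sketch only.
def cloud_name(genus: str, species: str = "", varieties: tuple = ()) -> str:
    parts = [genus]
    if species:
        parts.append(species)
    parts.extend(varieties)
    return " ".join(parts)

# The example from the text: altocumulus stratiformis arranged in
# seemingly converging rows separated by small breaks.
print(cloud_name("altocumulus", "stratiformis", ("radiatus", "perlucidus")))
# altocumulus stratiformis radiatus perlucidus
```

A genus with no subdivisions, such as nimbostratus, would simply yield its genus name alone.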
Supplementary features and accessory clouds are not further subdivisions of cloud types below the species and variety level. Rather, they are either hydrometeors or special cloud types with their own Latin names that form in association with certain cloud genera, species, and varieties.[75][85] Supplementary features, whether in the form of clouds or precipitation, are directly attached to the main genus-cloud. Accessory clouds, by contrast, are generally detached from the main cloud.[90]
One group of supplementary features comprises not actual cloud formations but precipitation that falls when the water droplets or ice crystals making up visible clouds have grown too heavy to remain aloft. Virga is a feature seen with clouds producing precipitation that evaporates before reaching the ground, these being of the genera cirrocumulus, altocumulus, altostratus, nimbostratus, stratocumulus, cumulus, and cumulonimbus.[90]
When the precipitation reaches the ground without completely evaporating, it is designated as the feature praecipitatio.[91] This normally occurs with altostratus opacus, which can produce widespread but usually light precipitation, and with thicker clouds that show significant vertical development. Of the latter, upward-growing cumulus mediocris produces only isolated light showers, while downward-growing nimbostratus is capable of heavier, more extensive precipitation. Towering vertical clouds have the greatest ability to produce intense precipitation events, but these tend to be localized unless organized along fast-moving cold fronts. Showers of moderate to heavy intensity can fall from cumulus congestus clouds. Cumulonimbus, the largest of all cloud genera, has the capacity to produce very heavy showers. Low stratus clouds usually produce only light precipitation, but this always occurs as the feature praecipitatio because this cloud genus lies too close to the ground to allow for the formation of virga.[75][85][90]
Incus is the most type-specific supplementary feature, seen only with cumulonimbus of the species capillatus. A cumulonimbus incus cloud top is one that has spread out into a clear anvil shape as a result of rising air currents hitting the stability layer at the tropopause where the air no longer continues to get colder with increasing altitude.[92]
The mamma feature forms on the bases of clouds as downward-facing bubble-like protuberances caused by localized downdrafts within the cloud. It is also sometimes called mammatus, an earlier version of the term used before a standardization of Latin nomenclature brought about by the World Meteorological Organization during the 20th century. The best-known is cumulonimbus with mammatus, but the mamma feature is also seen occasionally with cirrus, cirrocumulus, altocumulus, altostratus, and stratocumulus.[90]
A tuba feature is a cloud column that may hang from the bottom of a cumulus or cumulonimbus. A newly formed or poorly organized column might be comparatively benign, but can quickly intensify into a funnel cloud or tornado.[90][93][94]
An arcus feature is a roll cloud with ragged edges attached to the lower front part of cumulus congestus or cumulonimbus that forms along the leading edge of a squall line or thunderstorm outflow.[95] A large arcus formation can have the appearance of a dark menacing arch.[90]
Several new supplementary features have been formally recognized by the World Meteorological Organization (WMO). The feature fluctus can form under conditions of strong atmospheric wind shear when a stratocumulus, altocumulus, or cirrus cloud breaks into regularly spaced crests. This variant is sometimes known informally as a Kelvin–Helmholtz (wave) cloud. This phenomenon has also been observed in cloud formations over other planets and even in the sun's atmosphere.[96] Another highly disturbed but more chaotic wave-like cloud feature associated with stratocumulus or altocumulus cloud has been given the Latin name asperitas. The supplementary feature cavum is a circular fall-streak hole that occasionally forms in a thin layer of supercooled altocumulus or cirrocumulus. Fall streaks consisting of virga or wisps of cirrus are usually seen beneath the hole as ice crystals fall out to a lower altitude. This type of hole is usually larger than typical lacunosus holes. A murus feature is a cumulonimbus wall cloud with a lowering, rotating cloud base that can lead to the development of tornadoes. A cauda feature is a tail cloud that extends horizontally away from the murus cloud and is the result of air feeding into the storm.[82]
Supplementary cloud formations detached from the main cloud are known as accessory clouds.[75][85][90] The heavier precipitating clouds, nimbostratus, towering cumulus (cumulus congestus), and cumulonimbus, typically form the pannus feature in precipitation: low ragged clouds of the species cumulus fractus or stratus fractus.[78]
A group of accessory clouds comprises formations that are associated mainly with upward-growing cumuliform and cumulonimbiform clouds of free convection. Pileus is a cap cloud that can form over a cumulonimbus or large cumulus cloud,[97] whereas a velum feature is a thin horizontal sheet that sometimes forms like an apron around the middle or in front of the parent cloud.[90] An accessory cloud recently officially recognized by the World Meteorological Organization is the flumen, also known more informally as the beaver's tail. It is formed by the warm, humid inflow of a supercell thunderstorm, and can be mistaken for a tornado. Although the flumen can indicate a tornado risk, it is similar in appearance to pannus or scud clouds and does not rotate.[82]
Clouds initially form in clear air or become clouds when fog rises above surface level. The genus of a newly formed cloud is determined mainly by air mass characteristics such as stability and moisture content. If these characteristics change over time, the genus tends to change accordingly. When this happens, the original genus is called a mother cloud. If the mother cloud retains much of its original form after the appearance of the new genus, it is termed a genitus cloud. One example of this is stratocumulus cumulogenitus, a stratocumulus cloud formed by the partial spreading of a cumulus type when there is a loss of convective lift. If the mother cloud undergoes a complete change in genus, it is considered to be a mutatus cloud.[98]
The genitus and mutatus categories have been expanded to include certain types that do not originate from pre-existing clouds. The term flammagenitus (Latin for 'fire-made') applies to cumulus congestus or cumulonimbus that are formed by large-scale fires or volcanic eruptions. Smaller low-level "pyrocumulus" or "fumulus" clouds formed by contained industrial activity are now classified as cumulus homogenitus (Latin for 'man-made'). Contrails formed from the exhaust of aircraft flying in the upper level of the troposphere can persist and spread into formations resembling cirrus, which are designated cirrus homogenitus. If a cirrus homogenitus cloud changes fully to any of the high-level genera, it is termed cirrus homomutatus, cirrostratus homomutatus, or cirrocumulus homomutatus. Stratus cataractagenitus (Latin for 'cataract-made') are generated by the spray from waterfalls. Silvagenitus (Latin for 'forest-made') is a stratus cloud that forms as water vapor is added to the air above a forest canopy.[98]
Stratocumulus clouds can be organized into "fields" that take on certain specially classified shapes and characteristics. In general, these fields are more discernible from high altitudes than from ground level. They can often be found in the following forms:
These patterns are formed from a phenomenon known as a Kármán vortex, named after the engineer and fluid dynamicist Theodore von Kármán.[101] Wind-driven clouds can form into parallel rows that follow the wind direction. When the wind and clouds encounter high-elevation land features such as vertically prominent islands, they can form eddies around the high land masses that give the clouds a twisted appearance.[102]
Although the local distribution of clouds can be significantly influenced by topography, the global prevalence of cloud cover in the troposphere tends to vary more by latitude. It is most prevalent in and along low pressure zones of surface tropospheric convergence which encircle the Earth close to the equator and near the 50th parallels of latitude in the northern and southern hemispheres.[105] The adiabatic cooling processes that lead to the creation of clouds by way of lifting agents are all associated with convergence; a process that involves the horizontal inflow and accumulation of air at a given location, as well as the rate at which this happens.[106] Near the equator, increased cloudiness is due to the presence of the low-pressure Intertropical Convergence Zone (ITCZ) where very warm and unstable air promotes mostly cumuliform and cumulonimbiform clouds.[107] Clouds of virtually any type can form along the mid-latitude convergence zones depending on the stability and moisture content of the air. These extratropical convergence zones are occupied by the polar fronts where air masses of polar origin meet and clash with those of tropical or subtropical origin.[108] This leads to the formation of weather-making extratropical cyclones composed of cloud systems that may be stable or unstable to varying degrees according to the stability characteristics of the various airmasses that are in conflict.[109]
Divergence is the opposite of convergence. In the Earth's troposphere, it involves the horizontal outflow of air from the upper part of a rising column of air, or from the lower part of a subsiding column often associated with an area or ridge of high pressure.[106] Cloudiness tends to be least prevalent near the poles and in the subtropics close to the 30th parallels, north and south. The latter are sometimes referred to as the horse latitudes. The presence of a large-scale high-pressure subtropical ridge on each side of the equator reduces cloudiness at these low latitudes.[110] Similar patterns also occur at higher latitudes in both hemispheres.[111]
The luminance or brightness of a cloud is determined by how light is reflected, scattered, and transmitted by the cloud's particles. Its brightness may also be affected by the presence of haze or photometeors such as halos and rainbows.[112] In the troposphere, dense, deep clouds exhibit a high reflectance (70% to 95%) throughout the visible spectrum. Tiny particles of water are densely packed and sunlight cannot penetrate far into the cloud before it is reflected out, giving a cloud its characteristic white color, especially when viewed from the top.[113] Cloud droplets tend to scatter light efficiently, so that the intensity of the solar radiation decreases with depth into the gases. As a result, the cloud base can vary from a very light to very-dark-grey depending on the cloud's thickness and how much light is being reflected or transmitted back to the observer. High thin tropospheric clouds reflect less light because of the comparatively low concentration of constituent ice crystals or supercooled water droplets which results in a slightly off-white appearance. However, a thick dense ice-crystal cloud appears brilliant white with pronounced grey shading because of its greater reflectivity.[112]
As a tropospheric cloud matures, the dense water droplets may combine to produce larger droplets. If the droplets become too large and heavy to be kept aloft by the air circulation, they will fall from the cloud as rain. By this process of accumulation, the space between droplets becomes increasingly larger, permitting light to penetrate farther into the cloud. If the cloud is sufficiently large and the droplets within are spaced far enough apart, a percentage of the light that enters the cloud is not reflected back out but is absorbed giving the cloud a darker look. A simple example of this is one's being able to see farther in heavy rain than in heavy fog. This process of reflection/absorption is what causes the range of cloud color from white to black.[114]
Striking cloud colorations can be seen at any altitude, with the color of a cloud usually being the same as the incident light.[115] During daytime when the sun is relatively high in the sky, tropospheric clouds generally appear bright white on top with varying shades of grey underneath. Thin clouds may look white or appear to have acquired the color of their environment or background. Red, orange, and pink clouds occur almost entirely at sunrise/sunset and are the result of the scattering of sunlight by the atmosphere. When the sun is just below the horizon, low-level clouds are gray, middle clouds appear rose-colored, and high clouds are white or off-white. Clouds at night are black or dark grey in a moonless sky, or whitish when illuminated by the moon. They may also reflect the colors of large fires, city lights, or auroras that might be present.[115]
A cumulonimbus cloud that appears to have a greenish or bluish tint is a sign that it contains extremely high amounts of water; the hail or rain within it scatters light in a way that gives the cloud a blue color. A green colorization occurs mostly late in the day, when the sun is comparatively low in the sky and its reddish incident light appears green when illuminating a very tall bluish cloud. Supercell storms are more likely than others to show this coloration, but any storm can appear this way. Coloration such as this does not directly indicate a severe thunderstorm; it only confirms the potential for one, since a green or blue tint signifies copious amounts of water, a strong updraft to support it, high winds from the storm raining out, and wet hail, all elements that improve the chance of the storm becoming severe. In addition, the stronger the updraft, the more likely the storm is to undergo tornadogenesis and to produce large hail and high winds.[116]
Yellowish clouds may be seen in the troposphere in the late spring through early fall months during forest fire season, the yellow color being due to the presence of pollutants in the smoke. Yellowish clouds caused by the presence of nitrogen dioxide are also sometimes seen in urban areas with high air pollution levels.[117]
Stratocumulus stratiformis and small castellanus made orange by the sun rising
An occurrence of cloud iridescence with altocumulus volutus and cirrocumulus stratiformis
Sunset reflecting shades of pink onto grey stratocumulus stratiformis translucidus (becoming perlucidus in the background)
Stratocumulus stratiformis perlucidus before sunset. Bangalore, India.
Late-summer rainstorm in Denmark. The nearly black base indicates that the main cloud in the foreground is probably cumulonimbus.
Particles in the atmosphere and the sun's angle enhance colors of stratocumulus cumulogenitus at evening twilight
Tropospheric clouds exert numerous influences on Earth's troposphere and climate. First and foremost, they are the source of precipitation, thereby greatly influencing the distribution and amount of precipitation. Because of their differential buoyancy relative to surrounding cloud-free air, clouds can be associated with vertical motions of the air that may be convective, frontal, or cyclonic. The motion is upward if the clouds are less dense because condensation of water vapor releases heat, warming the air and thereby decreasing its density. This can lead to downward motion because lifting of the air results in cooling that increases its density. All of these effects are subtly dependent on the vertical temperature and moisture structure of the atmosphere and result in major redistribution of heat that affect the Earth's climate.[118]
The complexity and diversity of clouds in the troposphere is a major reason for difficulty in quantifying the effects of clouds on climate and climate change. On the one hand, white cloud tops promote cooling of Earth's surface by reflecting shortwave radiation (visible and near infrared) from the sun, diminishing the amount of solar radiation that is absorbed at the surface, enhancing the Earth's albedo. Most of the sunlight that reaches the ground is absorbed, warming the surface, which emits radiation upward at longer, infrared, wavelengths. At these wavelengths, however, water in the clouds acts as an efficient absorber. The water reacts by radiating, also in the infrared, both upward and downward, and the downward longwave radiation results in increased warming at the surface. This is analogous to the greenhouse effect of greenhouse gases and water vapor.[118]
High-level genus-types particularly show this duality with both short-wave albedo cooling and long-wave greenhouse warming effects. On the whole, ice-crystal clouds in the upper troposphere (cirrus) tend to favor net warming.[119][120] However, the cooling effect is dominant with mid-level and low clouds, especially when they form in extensive sheets.[119] Measurements by NASA indicate that on the whole, the effects of low and mid-level clouds that tend to promote cooling outweigh the warming effects of high layers and the variable outcomes associated with vertically developed clouds.[119]
As difficult as it is to evaluate the influences of current clouds on current climate, it is even more problematic to predict changes in cloud patterns and properties in a future, warmer climate, and the resultant cloud influences on future climate. In a warmer climate more water would enter the atmosphere by evaporation at the surface; as clouds are formed from water vapor, cloudiness would be expected to increase. But in a warmer climate, higher temperatures would tend to evaporate clouds.[121] Both of these statements are considered accurate, and both phenomena, known as cloud feedbacks, are found in climate model calculations. Broadly speaking, if clouds, especially low clouds, increase in a warmer climate, the resultant cooling effect leads to a negative feedback in climate response to increased greenhouse gases. But if low clouds decrease, or if high clouds increase, the feedback is positive. Differing amounts of these feedbacks are the principal reason for differences in climate sensitivities of current global climate models. As a consequence, much research has focused on the response of low and vertical clouds to a changing climate. Leading global models produce quite different results, however, with some showing increasing low clouds and others showing decreases.[122][123] For these reasons the role of tropospheric clouds in regulating weather and climate remains a leading source of uncertainty in global warming projections.[124][125]
Polar stratospheric clouds (PSCs) form in the lowest part of the stratosphere during the winter, at the altitude and during the season that produce the coldest temperatures and therefore the best chances of triggering condensation caused by adiabatic cooling. Moisture is scarce in the stratosphere, so nacreous and non-nacreous clouds at this altitude range are restricted to polar regions in the winter, where the air is coldest.[6]
PSCs show some variation in structure according to their chemical makeup and atmospheric conditions, but are limited to a single very high altitude range of about 15,000–25,000 m (49,200–82,000 ft), so they are not classified into altitude levels, genus types, species, or varieties. There is no Latin nomenclature in the manner of tropospheric clouds; descriptive names in common English are used instead.[6]
Supercooled nitric acid and water PSCs, sometimes known as type 1, typically have a stratiform appearance resembling cirrostratus or haze, but because they are not frozen into crystals, do not show the pastel colours of the nacreous types. This type of PSC has been identified as a cause of ozone depletion in the stratosphere.[126] The frozen nacreous types are typically very thin with mother-of-pearl colorations and an undulating cirriform or lenticular (stratocumuliform) appearance. These are sometimes known as type 2.[127][128]
Polar mesospheric clouds form at an extreme-level altitude range of about 80 to 85 km (50 to 53 mi). They are given the Latin name noctilucent because of their illumination well after sunset and before sunrise. They typically have a bluish or silvery white coloration that can resemble brightly illuminated cirrus. Noctilucent clouds may occasionally take on more of a red or orange hue.[6] They are not common or widespread enough to have a significant effect on climate.[129] However, an increasing frequency of occurrence of noctilucent clouds since the 19th century may be the result of climate change.[130]
Noctilucent clouds are the highest in the atmosphere and form near the top of the mesosphere at about ten times the altitude of tropospheric high clouds.[131] From ground level, they can occasionally be seen illuminated by the sun during deep twilight. Ongoing research indicates that convective lift in the mesosphere is strong enough during the polar summer to cause adiabatic cooling of a small amount of water vapour to the point of saturation. This tends to produce the coldest temperatures in the entire atmosphere just below the mesopause. These conditions result in the best environment for the formation of polar mesospheric clouds.[129] There is also evidence that smoke particles from burnt-up meteors provide much of the condensation nuclei required for the formation of noctilucent cloud.[132]
Noctilucent clouds have four major types based on physical structure and appearance. Type I veils are very tenuous and lack well-defined structure, somewhat like cirrostratus or poorly defined cirrus.[133] Type II bands are long streaks that often occur in groups arranged roughly parallel to each other. They are usually more widely spaced than the bands or elements seen with cirrocumulus clouds.[134] Type III billows are arrangements of closely spaced, roughly parallel short streaks that mostly resemble cirrus.[135] Type IV whirls are partial or, more rarely, complete rings of cloud with dark centres.[136]
Distribution in the mesosphere is similar to the stratosphere except at much higher altitudes. Because of the need for maximum cooling of the water vapor to produce noctilucent clouds, their distribution tends to be restricted to polar regions of Earth. A major seasonal difference is that convective lift from below the mesosphere pushes the very scarce water vapor to the higher, colder altitudes required for cloud formation during the respective summer seasons in the northern and southern hemispheres. Sightings are rare more than 45 degrees south of the north pole or north of the south pole.[6]
Cloud cover has been seen on most other planets in the Solar System. Venus's thick clouds, products of volcanic activity, are composed of sulfur dioxide and droplets of sulfuric acid, and appear to be almost entirely stratiform.[137] They are arranged in three main layers at altitudes of 45 to 65 km that obscure the planet's surface and can produce virga. No embedded cumuliform types have been identified, but broken stratocumuliform wave formations are sometimes seen in the top layer that reveal more continuous layer clouds underneath.[138] On Mars, noctilucent, cirrus, cirrocumulus and stratocumulus composed of water-ice have been detected mostly near the poles.[139][140] Water-ice fogs have also been detected on Mars.[141]
Both Jupiter and Saturn have an outer cirriform cloud deck composed of ammonia,[142][143] an intermediate stratiform haze-cloud layer made of ammonium hydrosulfide, and an inner deck of cumulus water clouds.[144][145] Embedded cumulonimbus are known to exist near the Great Red Spot on Jupiter.[146][147] The same category-types can be found covering Uranus and Neptune, but are all composed of methane.[148][149][150][151][152][153] Saturn's moon Titan has cirrus clouds believed to be composed largely of methane.[154][155] The Cassini–Huygens Saturn mission uncovered evidence of polar stratospheric clouds[156] and a methane cycle on Titan, including lakes near the poles and fluvial channels on the surface of the moon.[157]
Some planets outside the Solar System are known to have atmospheric clouds. In October 2013, the detection of high altitude optically thick clouds in the atmosphere of exoplanet Kepler-7b was announced,[158][159] and, in December 2013, in the atmospheres of GJ 436 b and GJ 1214 b.[160][161][162][163]
Clouds play an important role in various cultures and religious traditions. The ancient Akkadians believed that the clouds were the breasts of the sky goddess Antu[165] and that rain was milk from her breasts.[165] In Exodus 13:21–22, Yahweh is described as guiding the Israelites through the desert in the form of a "pillar of cloud" by day and a "pillar of fire" by night.[164]
In the ancient Greek comedy The Clouds, written by Aristophanes and first performed at the City Dionysia in 423 BC, the philosopher Socrates declares that the Clouds are the only true deities[166] and tells the main character Strepsiades not to worship any deities other than the Clouds, but to pay homage to them alone.[166] In the play, the Clouds change shape to reveal the true nature of whoever is looking at them,[167][166][168] turning into centaurs at the sight of a long-haired politician, wolves at the sight of the embezzler Simon, deer at the sight of the coward Cleonymus, and mortal women at the sight of the effeminate informer Cleisthenes.[167][168][166] They are hailed as the source of inspiration to comic poets and philosophers;[166] they are masters of rhetoric, regarding eloquence and sophistry alike as their "friends".[166]
In China, clouds are symbols of luck and happiness.[169] Overlapping clouds are thought to imply eternal happiness[169] and clouds of different colors are said to indicate "multiplied blessings".[169]
en/4198.html.txt
In meteorology, a cloud is an aerosol consisting of a visible mass of minute liquid droplets, frozen crystals, or other particles suspended in the atmosphere of a planetary body or similar space.[1] Water or various other chemicals may compose the droplets and crystals. On Earth, clouds are formed as a result of saturation of the air when it is cooled to its dew point, or when it gains sufficient moisture (usually in the form of water vapor) from an adjacent source to raise the dew point to the ambient temperature.
They are seen in the Earth's homosphere, which includes the troposphere, stratosphere, and mesosphere. Nephology is the science of clouds, which is undertaken in the cloud physics branch of meteorology. There are two methods of naming clouds in their respective layers of the homosphere, Latin and common.
Genus types in the troposphere, the atmospheric layer closest to Earth's surface, have Latin names due to the universal adoption of Luke Howard's nomenclature that was formally proposed in 1802. It became the basis of a modern international system that divides clouds into five physical forms which can be divided or classified further into altitude levels to derive the ten basic genera. The main representative cloud types for each of these forms are stratus, cirrus, stratocumulus, cumulus, and cumulonimbus. Low-level stratiform and stratocumuliform genera do not have any altitude-related prefixes. However mid-level variants of the same physical forms are given the prefix alto- while high-level types carry the prefix cirro-. The other main forms never have prefixes indicating altitude level. Cirriform clouds are always high-level while cumuliform and cumulonimbiform clouds are classified formally as low-level. The latter are also more informally characterized as multi-level or vertical as indicated by the cumulo- prefix. Most of the ten genera derived by this method of classification can be subdivided into species and further subdivided into varieties. Very low stratiform clouds that extend down to the Earth's surface are given the common names fog and mist, but have no Latin names.
In the stratosphere and mesosphere, clouds have common names for their main types. They may have the appearance of stratiform veils or sheets, cirriform wisps, or stratocumuliform bands or ripples. They are seen infrequently, mostly in the polar regions of Earth. Clouds have been observed in the atmospheres of other planets and moons in the Solar System and beyond. However, due to their different temperature characteristics, they are often composed of other substances such as methane, ammonia, and sulfuric acid, as well as water.
Tropospheric clouds can have a direct effect on climate change on Earth. They may reflect incoming rays from the sun which can contribute to a cooling effect where and when these clouds occur, or trap longer wave radiation that reflects back up from the Earth's surface which can cause a warming effect. The altitude, form, and thickness of the clouds are the main factors that affect the local heating or cooling of Earth and the atmosphere. Clouds that form above the troposphere are too scarce and too thin to have any influence on climate change.
The tabular overview that follows is very broad in scope. It draws from several methods of cloud classification, both formal and informal, used in different levels of the Earth's homosphere by a number of cited authorities. Despite some differences in methodologies and terminologies, the classification schemes seen in this article can be harmonized by using an informal cross-classification of physical forms and altitude levels to derive the 10 tropospheric genera, the fog and mist that forms at surface level, and several additional major types above the troposphere. The cumulus genus includes four species that indicate vertical size and structure which can affect both forms and levels. This table should not be seen as a strict or singular classification, but as an illustration of how various major cloud types are related to each other and defined through a full range of altitude levels from Earth's surface to the "edge of space".
The origin of the term "cloud" can be found in the Old English words clud or clod, meaning a hill or a mass of rock. Around the beginning of the 13th century, the word came to be used as a metaphor for rain clouds, because of the similarity in appearance between a mass of rock and cumulus heap cloud. Over time, the metaphoric usage of the word supplanted the Old English weolcan, which had been the literal term for clouds in general.[2][3]
Ancient cloud studies were not made in isolation, but were observed in combination with other weather elements and even other natural sciences. Around 340 BC, Greek philosopher Aristotle wrote Meteorologica, a work which represented the sum of knowledge of the time about natural science, including weather and climate. For the first time, precipitation and the clouds from which precipitation fell were called meteors, which originate from the Greek word meteoros, meaning 'high in the sky'. From that word came the modern term meteorology, the study of clouds and weather. Meteorologica was based on intuition and simple observation, but not on what is now considered the scientific method. Nevertheless, it was the first known work that attempted to treat a broad range of meteorological topics in a systematic way, especially the hydrological cycle.[4]
After centuries of speculative theories about the formation and behavior of clouds, the first truly scientific studies were undertaken by Luke Howard in England and Jean-Baptiste Lamarck in France. Howard was a methodical observer with a strong grounding in the Latin language, and used his background to classify the various tropospheric cloud types during 1802. He believed that the changing cloud forms in the sky could unlock the key to weather forecasting. Lamarck had worked independently on cloud classification the same year and had come up with a different naming scheme that failed to make an impression even in his home country of France because it used unusual French names for cloud types. His system of nomenclature included 12 categories of clouds, with such names as (translated from French) hazy clouds, dappled clouds, and broom-like clouds. By contrast, Howard used universally accepted Latin, which caught on quickly after it was published in 1803.[5] As a sign of the popularity of the naming scheme, German dramatist and poet Johann Wolfgang von Goethe composed four poems about clouds, dedicating them to Howard. An elaboration of Howard's system was eventually formally adopted by the International Meteorological Conference in 1891.[5] This system covered only the tropospheric cloud types, but the discovery of clouds above the troposphere during the late 19th century eventually led to the creation of separate classification schemes using common names for these very high clouds, which were still broadly similar to some cloud forms identified in the troposphere.[6]
Terrestrial clouds can be found throughout most of the homosphere, which includes the troposphere, stratosphere, and mesosphere. Within these layers of the atmosphere, air can become saturated as a result of being cooled to its dew point or by having moisture added from an adjacent source.[7] In the latter case, saturation occurs when the dew point is raised to the ambient air temperature.
Adiabatic cooling occurs when one or more of three possible lifting agents – convective, cyclonic/frontal, or orographic – cause a parcel of air containing invisible water vapor to rise and cool to its dew point, the temperature at which the air becomes saturated; the parcel cools adiabatically as it expands with the decreasing pressure aloft.[8] As the air is cooled to its dew point and becomes saturated, water vapor normally condenses to form cloud drops. This condensation normally occurs on cloud condensation nuclei such as salt or dust particles that are small enough to be held aloft by normal circulation of the air.[9][10]
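The height at which a rising parcel cools to its dew point can be estimated with a widely used rule of thumb: roughly 125 m of lift per degree Celsius of dew-point spread at the surface. The sketch below is that simple linear approximation only, not a full thermodynamic calculation; the function name is a choice made here for illustration.

```python
def lifting_condensation_level_m(temp_c, dew_point_c):
    """Approximate convective cloud-base height in metres.

    Rule-of-thumb estimate: a rising parcel must climb about 125 m for
    each degree Celsius its temperature exceeds its dew point before
    adiabatic cooling brings it to saturation.
    """
    spread = temp_c - dew_point_c
    if spread < 0:
        raise ValueError("dew point cannot exceed air temperature")
    return 125.0 * spread

# Surface air at 30 degrees C with an 18 degree C dew point: cumulus
# bases would be expected near 1500 m above the surface.
print(lifting_condensation_level_m(30.0, 18.0))  # 1500.0
```

When the spread is zero the estimated cloud base is at the surface, which corresponds to the fog case discussed below.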
One agent is the convective upward motion of air caused by daytime solar heating at surface level.[9] Airmass instability allows for the formation of cumuliform clouds that can produce showers if the air is sufficiently moist.[11] On moderately rare occasions, convective lift can be powerful enough to penetrate the tropopause and push the cloud top into the stratosphere.[12]
Frontal and cyclonic lift occur when stable air is forced aloft at weather fronts and around centers of low pressure by a process called convergence.[13] Warm fronts associated with extratropical cyclones tend to generate mostly cirriform and stratiform clouds over a wide area unless the approaching warm airmass is unstable, in which case cumulus congestus or cumulonimbus clouds are usually embedded in the main precipitating cloud layer.[14] Cold fronts are usually faster moving and generate a narrower line of clouds, which are mostly stratocumuliform, cumuliform, or cumulonimbiform depending on the stability of the warm airmass just ahead of the front.[15]
A third source of lift is wind circulation forcing air over a physical barrier such as a mountain (orographic lift).[9] If the air is generally stable, nothing more than lenticular cap clouds form. However, if the air becomes sufficiently moist and unstable, orographic showers or thunderstorms may appear.[16]
Along with adiabatic cooling that requires a lifting agent, three major nonadiabatic mechanisms exist for lowering the temperature of the air to its dew point. Conductive, radiational, and evaporative cooling require no lifting mechanism and can cause condensation at surface level resulting in the formation of fog.[17][18][19]
Several main sources of water vapor can be added to the air as a way of achieving saturation without any cooling process: water or moist ground,[20][21][22] precipitation or virga,[23] and transpiration from plants.[24]
Tropospheric classification is based on a hierarchy of categories with physical forms and altitude levels at the top.[25][26] These are cross-classified into a total of ten genus types, most of which can be divided into species and further subdivided into varieties which are at the bottom of the hierarchy.[27]
Clouds in the troposphere assume five physical forms based on structure and process of formation. These forms are commonly used for the purpose of satellite analysis.[25] They are given below in approximate ascending order of instability or convective activity.[28]
Nonconvective stratiform clouds appear in stable airmass conditions and, in general, have flat, sheet-like structures that can form at any altitude in the troposphere.[29] The stratiform group is divided by altitude range into the genera cirrostratus (high-level), altostratus (mid-level), stratus (low-level), and nimbostratus (multi-level).[26] Fog is commonly considered a surface-based cloud layer.[16] The fog may form at surface level in clear air or it may be the result of a very low stratus cloud subsiding to ground or sea level. Conversely, low stratiform clouds result when advection fog is lifted above surface level during breezy conditions.
Cirriform clouds in the troposphere are of the genus cirrus and have the appearance of detached or semimerged filaments. They form at high tropospheric altitudes in air that is mostly stable with little or no convective activity, although denser patches may occasionally show buildups caused by limited high-level convection where the air is partly unstable.[30] Clouds resembling cirrus can be found above the troposphere but are classified separately using common names.
Clouds of this structure have both cumuliform and stratiform characteristics in the form of rolls, ripples, or elements.[31] They generally form as a result of limited convection in an otherwise mostly stable airmass topped by an inversion layer.[32] If the inversion layer is absent or higher in the troposphere, increased airmass instability may cause the cloud layers to develop tops in the form of turrets consisting of embedded cumuliform buildups.[33] The stratocumuliform group is divided into cirrocumulus (high-level), altocumulus (mid-level), and stratocumulus (low-level).[31]
Cumuliform clouds generally appear in isolated heaps or tufts.[34][35] They are the product of localized but generally free-convective lift where no inversion layers are in the troposphere to limit vertical growth. In general, small cumuliform clouds tend to indicate comparatively weak instability. Larger cumuliform types are a sign of greater atmospheric instability and convective activity.[36] Depending on their vertical size, clouds of the cumulus genus type may be low-level or multi-level with moderate to towering vertical extent.[26]
The largest free-convective clouds comprise the genus cumulonimbus, which have towering vertical extent. They occur in highly unstable air[9] and often have fuzzy outlines at the upper parts of the clouds that sometimes include anvil tops.[31] These clouds are the product of very strong convection that can penetrate the lower stratosphere.
Tropospheric clouds form in any of three levels (formerly called étages) based on altitude range above the Earth's surface. The grouping of clouds into levels is commonly done for the purposes of cloud atlases, surface weather observations,[26] and weather maps.[37] The base-height range for each level varies depending on the latitudinal geographical zone.[26] Each altitude level comprises two or three genus-types differentiated mainly by physical form.[38][31]
The standard levels and genus-types are summarised below in approximate descending order of the altitude at which each is normally based.[39] Multi-level clouds with significant vertical extent are separately listed and summarized in approximate ascending order of instability or convective activity.[28]
High clouds form at altitudes of 3,000 to 7,600 m (10,000 to 25,000 ft) in the polar regions, 5,000 to 12,200 m (16,500 to 40,000 ft) in the temperate regions, and 6,100 to 18,300 m (20,000 to 60,000 ft) in the tropics.[26] All cirriform clouds are classified as high, thus constitute a single genus cirrus (Ci). Stratocumuliform and stratiform clouds in the high altitude range carry the prefix cirro-, yielding the respective genus names cirrocumulus (Cc) and cirrostratus (Cs). When limited-resolution satellite images of high clouds are analysed without supporting data from direct human observations, distinguishing between individual forms or genus types becomes impossible, and they are then collectively identified as high-type (or informally as cirrus-type, though not all high clouds are of the cirrus form or genus).[40]
Nonvertical clouds in the middle level are prefixed by alto-, yielding the genus names altocumulus (Ac) for stratocumuliform types and altostratus (As) for stratiform types. These clouds can form as low as 2,000 m (6,500 ft) above surface at any latitude, but may be based as high as 4,000 m (13,000 ft) near the poles, 7,000 m (23,000 ft) at midlatitudes, and 7,600 m (25,000 ft) in the tropics.[26] As with high clouds, the main genus types are easily identified by the human eye, but distinguishing between them using satellite photography is not possible. Without the support of human observations, these clouds are usually collectively identified as middle-type on satellite images.[40]
Low clouds are found from near the surface up to 2,000 m (6,500 ft).[26] Genus types in this level either have no prefix or carry one that refers to a characteristic other than altitude. Clouds that form in the low level of the troposphere are generally of larger structure than those that form in the middle and high levels, so they can usually be identified by their forms and genus types using satellite photography alone.[40]
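The level boundaries quoted above can be sketched as a simple lookup keyed by latitudinal zone. This is a deliberate simplification: the published base-height ranges for adjacent levels overlap (high-cloud bases can start well below the mid-level ceiling), so real classification also considers the cloud's physical form. All names, values, and the function itself follow the figures and genus names quoted above but are arranged here purely for illustration.

```python
# Upper limits (metres) of the base-height range for each level, per zone.
#                low    middle  high
LEVEL_TOPS_M = {
    "polar":     (2000,  4000,  7600),
    "temperate": (2000,  7000, 12200),
    "tropical":  (2000,  7600, 18300),
}

def cloud_level(base_altitude_m, zone="temperate"):
    """Classify a nonvertical cloud's level from its base altitude (simplified)."""
    low_top, mid_top, high_top = LEVEL_TOPS_M[zone]
    if base_altitude_m <= low_top:
        return "low (no altitude prefix: stratus, stratocumulus)"
    if base_altitude_m <= mid_top:
        return "middle (alto- prefix: altostratus, altocumulus)"
    if base_altitude_m <= high_top:
        return "high (cirrus, or cirro- prefix: cirrostratus, cirrocumulus)"
    return "above usual tropospheric cloud bases for this zone"

print(cloud_level(1500))               # a low-level base anywhere
print(cloud_level(9000, "tropical"))   # high in the tropics
```

The per-zone differences reflect the height of the tropopause: the troposphere is deepest in the tropics, so every level's ceiling is higher there than near the poles.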
These clouds have low- to mid-level bases that form anywhere from near the surface to about 2,400 m (8,000 ft) and tops that can extend into the mid-altitude range and sometimes higher in the case of nimbostratus.
This is a diffuse, dark grey, multi-level stratiform layer with great horizontal extent and usually moderate to deep vertical development. It lacks towering structure and looks feebly illuminated from the inside.[58] Nimbostratus normally forms from mid-level altostratus, and develops at least moderate vertical extent[59][60] when the base subsides into the low level during precipitation that can reach moderate to heavy intensity. It achieves even greater vertical development when it simultaneously grows upward into the high level due to large-scale frontal or cyclonic lift.[61] The nimbo- prefix refers to its ability to produce continuous rain or snow over a wide area, especially ahead of a warm front.[62] This thick cloud layer may be accompanied by embedded towering cumuliform or cumulonimbiform types.[60][63] Meteorologists affiliated with the World Meteorological Organization (WMO) officially classify nimbostratus as mid-level for synoptic purposes while informally characterizing it as multi-level.[26] Independent meteorologists and educators appear split between those who largely follow the WMO model[59][60] and those who classify nimbostratus as low-level, despite its considerable vertical extent and its usual initial formation in the middle altitude range.[64][65]
These very large cumuliform and cumulonimbiform types have similar low- to mid-level cloud bases as the multi-level and moderate vertical types, and tops that nearly always extend into the high levels. They are required to be identified by their standard names or abbreviations in all aviation observations (METARs) and forecasts (TAFs) to warn pilots of possible severe weather and turbulence.[66]
Genus types are commonly divided into subtypes called species that indicate specific structural details which can vary according to the stability and windshear characteristics of the atmosphere at any given time and location. Despite this hierarchy, a particular species may be a subtype of more than one genus, especially if the genera are of the same physical form and are differentiated from each other mainly by altitude or level. There are a few species, each of which can be associated with genera of more than one physical form.[72] The species types are grouped below according to the physical forms and genera with which each is normally associated. The forms, genera, and species are listed in approximate ascending order of instability or convective activity.[28]
Of the stratiform group, high-level cirrostratus comprises two species. Cirrostratus nebulosus has a rather diffuse appearance lacking in structural detail.[73] Cirrostratus fibratus is a species made of semi-merged filaments that are transitional to or from cirrus.[74] Mid-level altostratus and multi-level nimbostratus always have a flat or diffuse appearance and are therefore not subdivided into species. Low stratus is of the species nebulosus[73] except when broken up into ragged sheets of stratus fractus (see below).[59][72][75]
Cirriform clouds have three non-convective species that can form in mostly stable airmass conditions. Cirrus fibratus comprise filaments that may be straight, wavy, or occasionally twisted by non-convective wind shear.[74] The species uncinus is similar but has upturned hooks at the ends. Cirrus spissatus appear as opaque patches that can show light grey shading.[72]
Stratocumuliform genus-types (cirrocumulus, altocumulus, and stratocumulus) that appear in mostly stable air have two species each. The stratiformis species normally occur in extensive sheets or in smaller patches where there is only minimal convective activity.[76] Clouds of the lenticularis species tend to have lens-like shapes tapered at the ends. They are most commonly seen as orographic mountain-wave clouds, but can occur anywhere in the troposphere where there is strong wind shear combined with sufficient airmass stability to maintain a generally flat cloud structure. These two species can be found in the high, middle, or low levels of the troposphere depending on the stratocumuliform genus or genera present at any given time.[59][72][75]
The species fractus shows variable instability because it can be a subdivision of genus-types of different physical forms that have different stability characteristics. This subtype can be in the form of ragged but mostly stable stratiform sheets (stratus fractus) or small ragged cumuliform heaps with somewhat greater instability (cumulus fractus).[72][75][77] When clouds of this species are associated with precipitating cloud systems of considerable vertical and sometimes horizontal extent, they are also classified as accessory clouds under the name pannus (see section on supplementary features).[78]
These species are subdivisions of genus types that can occur in partly unstable air. The species castellanus appears when a mostly stable stratocumuliform or cirriform layer becomes disturbed by localized areas of airmass instability, usually in the morning or afternoon. This results in the formation of cumuliform buildups of limited convection arising from a common stratiform base.[79] Castellanus resembles the turrets of a castle when viewed from the side, and can be found with stratocumuliform genera at any tropospheric altitude level and with limited-convective patches of high-level cirrus.[80] Tufted clouds of the more detached floccus species are subdivisions of genus-types which may be cirriform or stratocumuliform in overall structure. They are sometimes seen with cirrus, cirrocumulus, altocumulus, and stratocumulus.[81]
A newly recognized species of stratocumulus or altocumulus has been given the name volutus, a roll cloud that can occur ahead of a cumulonimbus formation.[82] There are some volutus clouds that form as a consequence of interactions with specific geographical features rather than with a parent cloud. Perhaps the strangest geographically specific cloud of this type is the Morning Glory, a rolling cylindrical cloud that appears unpredictably over the Gulf of Carpentaria in Northern Australia. Associated with a powerful "ripple" in the atmosphere, the cloud may be "surfed" in glider aircraft.[83]
More general airmass instability in the troposphere tends to produce clouds of the more freely convective cumulus genus type, whose species are mainly indicators of degrees of atmospheric instability and resultant vertical development of the clouds. A cumulus cloud initially forms in the low level of the troposphere as a cloudlet of the species humilis that shows only slight vertical development. If the air becomes more unstable, the cloud tends to grow vertically into the species mediocris, then congestus, the tallest cumulus species[72] which is the same type that the International Civil Aviation Organization refers to as 'towering cumulus'.[66]
With highly unstable atmospheric conditions, large cumulus may continue to grow into cumulonimbus calvus (essentially a very tall congestus cloud that produces thunder), then ultimately into the species capillatus when supercooled water droplets at the top of the cloud turn into ice crystals giving it a cirriform appearance.[72][75]
Genus and species types are further subdivided into varieties whose names can appear after the species name to provide a fuller description of a cloud. Some cloud varieties are not restricted to a specific altitude level or form, and can therefore be common to more than one genus or species.[84]
All cloud varieties fall into one of two main groups. One group identifies the opacities of particular low and mid-level cloud structures and comprises the varieties translucidus (thin translucent), perlucidus (thick opaque with translucent or very small clear breaks), and opacus (thick opaque). These varieties are always identifiable for cloud genera and species with variable opacity. All three are associated with the stratiformis species of altocumulus and stratocumulus. However, only two varieties are seen with altostratus and stratus nebulosus whose uniform structures prevent the formation of a perlucidus variety. Opacity-based varieties are not applied to high clouds because they are always translucent, or in the case of cirrus spissatus, always opaque.[84][85]
A second group describes the occasional arrangements of cloud structures into particular patterns that are discernible by a surface-based observer (cloud fields usually being visible only from a significant altitude above the formations). These varieties are not always present with the genera and species with which they are otherwise associated, but only appear when atmospheric conditions favor their formation. Intortus and vertebratus varieties occur on occasion with cirrus fibratus. They are respectively filaments twisted into irregular shapes, and those that are arranged in fishbone patterns, usually by uneven wind currents that favor the formation of these varieties. The variety radiatus is associated with cloud rows of a particular type that appear to converge at the horizon. It is sometimes seen with the fibratus and uncinus species of cirrus, the stratiformis species of altocumulus and stratocumulus, the mediocris and sometimes humilis species of cumulus,[87][88] and with the genus altostratus.[89]
Another variety, duplicatus (closely spaced layers of the same type, one above the other), is sometimes found with cirrus of both the fibratus and uncinus species, and with altocumulus and stratocumulus of the species stratiformis and lenticularis. The variety undulatus (having a wavy undulating base) can occur with any clouds of the species stratiformis or lenticularis, and with altostratus. It is only rarely observed with stratus nebulosus. The variety lacunosus is caused by localized downdrafts that create circular holes in the form of a honeycomb or net. It is occasionally seen with cirrocumulus and altocumulus of the species stratiformis, castellanus, and floccus, and with stratocumulus of the species stratiformis and castellanus.[84][85]
It is possible for some species to show combined varieties at one time, especially if one variety is opacity-based and the other is pattern-based. An example of this would be a layer of altocumulus stratiformis arranged in seemingly converging rows separated by small breaks. The full technical name of a cloud in this configuration would be altocumulus stratiformis radiatus perlucidus, which would identify respectively its genus, species, and two combined varieties.[75][84][85]
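The naming order just described (genus, then species, then any combined varieties) can be pictured with a tiny helper. This is purely illustrative and not part of any official WMO tooling; the function name and signature are mine.

```python
# Illustrative sketch of assembling a full Latin cloud name in the order
# described above: genus, then species, then any varieties.
def cloud_name(genus, species=None, varieties=()):
    parts = [genus]
    if species:
        parts.append(species)
    parts.extend(varieties)
    return " ".join(parts)

# The combined-variety example from the text:
print(cloud_name("altocumulus", "stratiformis", ["radiatus", "perlucidus"]))
# -> altocumulus stratiformis radiatus perlucidus
```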
Supplementary features and accessory clouds are not further subdivisions of cloud types below the species and variety level. Rather, they are either hydrometeors or special cloud types with their own Latin names that form in association with certain cloud genera, species, and varieties.[75][85] Supplementary features, whether in the form of clouds or precipitation, are directly attached to the main genus-cloud. Accessory clouds, by contrast, are generally detached from the main cloud.[90]
One group of supplementary features consists not of actual cloud formations, but of precipitation that falls when water droplets or ice crystals that make up visible clouds have grown too heavy to remain aloft. Virga is a feature seen with clouds producing precipitation that evaporates before reaching the ground, these being of the genera cirrocumulus, altocumulus, altostratus, nimbostratus, stratocumulus, cumulus, and cumulonimbus.[90]
When the precipitation reaches the ground without completely evaporating, it is designated as the feature praecipitatio.[91] This normally occurs with altostratus opacus, which can produce widespread but usually light precipitation, and with thicker clouds that show significant vertical development. Of the latter, upward-growing cumulus mediocris produces only isolated light showers, while downward-growing nimbostratus is capable of heavier, more extensive precipitation. Towering vertical clouds have the greatest ability to produce intense precipitation events, but these tend to be localized unless organized along fast-moving cold fronts. Showers of moderate to heavy intensity can fall from cumulus congestus clouds. Cumulonimbus, the largest of all cloud genera, has the capacity to produce very heavy showers. Low stratus clouds usually produce only light precipitation, but this always occurs as the feature praecipitatio because this cloud genus lies too close to the ground to allow for the formation of virga.[75][85][90]
Incus is the most type-specific supplementary feature, seen only with cumulonimbus of the species capillatus. A cumulonimbus incus cloud top is one that has spread out into a clear anvil shape as a result of rising air currents hitting the stability layer at the tropopause where the air no longer continues to get colder with increasing altitude.[92]
The mamma feature forms on the bases of clouds as downward-facing bubble-like protuberances caused by localized downdrafts within the cloud. It is also sometimes called mammatus, an earlier version of the term used before a standardization of Latin nomenclature brought about by the World Meteorological Organization during the 20th century. The best-known is cumulonimbus with mammatus, but the mamma feature is also seen occasionally with cirrus, cirrocumulus, altocumulus, altostratus, and stratocumulus.[90]
A tuba feature is a cloud column that may hang from the bottom of a cumulus or cumulonimbus. A newly formed or poorly organized column might be comparatively benign, but can quickly intensify into a funnel cloud or tornado.[90][93][94]
An arcus feature is a roll cloud with ragged edges attached to the lower front part of cumulus congestus or cumulonimbus that forms along the leading edge of a squall line or thunderstorm outflow.[95] A large arcus formation can have the appearance of a dark menacing arch.[90]
Several new supplementary features have been formally recognized by the World Meteorological Organization (WMO). The feature fluctus can form under conditions of strong atmospheric wind shear when a stratocumulus, altocumulus, or cirrus cloud breaks into regularly spaced crests. This variant is sometimes known informally as a Kelvin–Helmholtz (wave) cloud. This phenomenon has also been observed in cloud formations over other planets and even in the sun's atmosphere.[96] Another highly disturbed but more chaotic wave-like cloud feature associated with stratocumulus or altocumulus cloud has been given the Latin name asperitas. The supplementary feature cavum is a circular fall-streak hole that occasionally forms in a thin layer of supercooled altocumulus or cirrocumulus. Fall streaks consisting of virga or wisps of cirrus are usually seen beneath the hole as ice crystals fall out to a lower altitude. This type of hole is usually larger than typical lacunosus holes. A murus feature is a cumulonimbus wall cloud with a lowering, rotating cloud base that can lead to the development of tornadoes. A cauda feature is a tail cloud that extends horizontally away from the murus cloud and is the result of air feeding into the storm.[82]
Supplementary cloud formations detached from the main cloud are known as accessory clouds.[75][85][90] The heavier precipitating clouds, nimbostratus, towering cumulus (cumulus congestus), and cumulonimbus, are typically accompanied during precipitation by the pannus feature: low ragged clouds of the genera and species cumulus fractus or stratus fractus.[78]
A group of accessory clouds comprises formations that are associated mainly with upward-growing cumuliform and cumulonimbiform clouds of free convection. Pileus is a cap cloud that can form over a cumulonimbus or large cumulus cloud,[97] whereas a velum feature is a thin horizontal sheet that sometimes forms like an apron around the middle or in front of the parent cloud.[90] An accessory cloud recently officially recognized by the World Meteorological Organization is the flumen, also known more informally as the beaver's tail. It is formed by the warm, humid inflow of a supercell thunderstorm, and can be mistaken for a tornado. Although the flumen can indicate a tornado risk, it is similar in appearance to pannus or scud clouds and does not rotate.[82]
Clouds initially form in clear air or become clouds when fog rises above surface level. The genus of a newly formed cloud is determined mainly by air mass characteristics such as stability and moisture content. If these characteristics change over time, the genus tends to change accordingly. When this happens, the original genus is called a mother cloud. If the mother cloud retains much of its original form after the appearance of the new genus, it is termed a genitus cloud. One example of this is stratocumulus cumulogenitus, a stratocumulus cloud formed by the partial spreading of a cumulus type when there is a loss of convective lift. If the mother cloud undergoes a complete change in genus, it is considered to be a mutatus cloud.[98]
The genitus and mutatus categories have been expanded to include certain types that do not originate from pre-existing clouds. The term flammagenitus (Latin for 'fire-made') applies to cumulus congestus or cumulonimbus that are formed by large scale fires or volcanic eruptions. Smaller low-level "pyrocumulus" or "fumulus" clouds formed by contained industrial activity are now classified as cumulus homogenitus (Latin for 'man-made'). Contrails formed from the exhaust of aircraft flying in the upper level of the troposphere can persist and spread into formations resembling cirrus which are designated cirrus homogenitus. If a cirrus homogenitus cloud changes fully to any of the high-level genera, they are termed cirrus, cirrostratus, or cirrocumulus homomutatus. Stratus cataractagenitus (Latin for 'cataract-made') are generated by the spray from waterfalls. Silvagenitus (Latin for 'forest-made') is a stratus cloud that forms as water vapor is added to the air above a forest canopy.[98]
Stratocumulus clouds can be organized into "fields" that take on certain specially classified shapes and characteristics. In general, these fields are more discernible from high altitudes than from ground level. They can often be found in the following forms:
These patterns are formed from a phenomenon known as a Kármán vortex, which is named after the engineer and fluid dynamicist Theodore von Kármán.[101] Wind-driven clouds can form into parallel rows that follow the wind direction. When the wind and clouds encounter high-elevation land features such as vertically prominent islands, they can form eddies around the high land masses that give the clouds a twisted appearance.[102]
Although the local distribution of clouds can be significantly influenced by topography, the global prevalence of cloud cover in the troposphere tends to vary more by latitude. It is most prevalent in and along low pressure zones of surface tropospheric convergence which encircle the Earth close to the equator and near the 50th parallels of latitude in the northern and southern hemispheres.[105] The adiabatic cooling processes that lead to the creation of clouds by way of lifting agents are all associated with convergence; a process that involves the horizontal inflow and accumulation of air at a given location, as well as the rate at which this happens.[106] Near the equator, increased cloudiness is due to the presence of the low-pressure Intertropical Convergence Zone (ITCZ) where very warm and unstable air promotes mostly cumuliform and cumulonimbiform clouds.[107] Clouds of virtually any type can form along the mid-latitude convergence zones depending on the stability and moisture content of the air. These extratropical convergence zones are occupied by the polar fronts where air masses of polar origin meet and clash with those of tropical or subtropical origin.[108] This leads to the formation of weather-making extratropical cyclones composed of cloud systems that may be stable or unstable to varying degrees according to the stability characteristics of the various airmasses that are in conflict.[109]
Divergence is the opposite of convergence. In the Earth's troposphere, it involves the horizontal outflow of air from the upper part of a rising column of air, or from the lower part of a subsiding column often associated with an area or ridge of high pressure.[106] Cloudiness tends to be least prevalent near the poles and in the subtropics close to the 30th parallels, north and south. The latter are sometimes referred to as the horse latitudes. The presence of a large-scale high-pressure subtropical ridge on each side of the equator reduces cloudiness at these low latitudes.[110] Similar patterns also occur at higher latitudes in both hemispheres.[111]
The luminance or brightness of a cloud is determined by how light is reflected, scattered, and transmitted by the cloud's particles. Its brightness may also be affected by the presence of haze or photometeors such as halos and rainbows.[112] In the troposphere, dense, deep clouds exhibit a high reflectance (70% to 95%) throughout the visible spectrum. Tiny particles of water are densely packed and sunlight cannot penetrate far into the cloud before it is reflected out, giving a cloud its characteristic white color, especially when viewed from the top.[113] Cloud droplets tend to scatter light efficiently, so that the intensity of the solar radiation decreases with depth into the cloud. As a result, the cloud base can vary from very light to very dark grey depending on the cloud's thickness and how much light is being reflected or transmitted back to the observer. High thin tropospheric clouds reflect less light because of the comparatively low concentration of constituent ice crystals or supercooled water droplets, which results in a slightly off-white appearance. However, a thick dense ice-crystal cloud appears brilliant white with pronounced grey shading because of its greater reflectivity.[112]
As a tropospheric cloud matures, the dense water droplets may combine to produce larger droplets. If the droplets become too large and heavy to be kept aloft by the air circulation, they will fall from the cloud as rain. By this process of accumulation, the space between droplets becomes increasingly larger, permitting light to penetrate farther into the cloud. If the cloud is sufficiently large and the droplets within are spaced far enough apart, a percentage of the light that enters the cloud is not reflected back out but is absorbed, giving the cloud a darker look. A simple example of this is that one can see farther in heavy rain than in heavy fog. This process of reflection/absorption is what causes the range of cloud color from white to black.[114]
Striking cloud colorations can be seen at any altitude, with the color of a cloud usually being the same as the incident light.[115] During daytime when the sun is relatively high in the sky, tropospheric clouds generally appear bright white on top with varying shades of grey underneath. Thin clouds may look white or appear to have acquired the color of their environment or background. Red, orange, and pink clouds occur almost entirely at sunrise/sunset and are the result of the scattering of sunlight by the atmosphere. When the sun is just below the horizon, low-level clouds are gray, middle clouds appear rose-colored, and high clouds are white or off-white. Clouds at night are black or dark grey in a moonless sky, or whitish when illuminated by the moon. They may also reflect the colors of large fires, city lights, or auroras that might be present.[115]
A cumulonimbus cloud that appears to have a greenish or bluish tint is a sign that it contains extremely high amounts of water in the form of hail or rain, which scatter light in a way that gives the cloud a blue color. A green colorization occurs mostly late in the day when the sun is comparatively low in the sky and the incident sunlight has a reddish tinge that appears green when illuminating a very tall bluish cloud. Supercell-type storms are more likely to be characterized by this coloration, but any storm can appear this way. Such coloration does not directly indicate a severe thunderstorm; it only confirms the potential, since a green or blue tint signifies copious amounts of water, a strong updraft to support it, high winds from the storm raining out, and wet hail, all elements that improve the chance of the storm becoming severe. In addition, the stronger the updraft is, the more likely the storm is to undergo tornadogenesis and to produce large hail and high winds.[116]
Yellowish clouds may be seen in the troposphere from late spring through early fall during forest fire season; here the yellow color is due to the presence of pollutants in the smoke. Yellowish clouds can also be caused by the presence of nitrogen dioxide and are sometimes seen in urban areas with high air pollution levels.[117]
Stratocumulus stratiformis and small castellanus made orange by the sun rising
An occurrence of cloud iridescence with altocumulus volutus and cirrocumulus stratiformis
Sunset reflecting shades of pink onto grey stratocumulus stratiformis translucidus (becoming perlucidus in the background)
Stratocumulus stratiformis perlucidus before sunset. Bangalore, India.
Late-summer rainstorm in Denmark. Nearly black color of base indicates main cloud in foreground probably cumulonimbus.
Particles in the atmosphere and the sun's angle enhance colors of stratocumulus cumulogenitus at evening twilight
Tropospheric clouds exert numerous influences on Earth's troposphere and climate. First and foremost, they are the source of precipitation, thereby greatly influencing the distribution and amount of precipitation. Because of their differential buoyancy relative to surrounding cloud-free air, clouds can be associated with vertical motions of the air that may be convective, frontal, or cyclonic. The motion is upward if the clouds are less dense because condensation of water vapor releases heat, warming the air and thereby decreasing its density. This can lead to downward motion because lifting of the air results in cooling that increases its density. All of these effects are subtly dependent on the vertical temperature and moisture structure of the atmosphere and result in major redistribution of heat that affect the Earth's climate.[118]
The complexity and diversity of clouds in the troposphere is a major reason for difficulty in quantifying the effects of clouds on climate and climate change. On the one hand, white cloud tops promote cooling of Earth's surface by reflecting shortwave radiation (visible and near infrared) from the sun, diminishing the amount of solar radiation that is absorbed at the surface, enhancing the Earth's albedo. Most of the sunlight that reaches the ground is absorbed, warming the surface, which emits radiation upward at longer, infrared, wavelengths. At these wavelengths, however, water in the clouds acts as an efficient absorber. The water reacts by radiating, also in the infrared, both upward and downward, and the downward longwave radiation results in increased warming at the surface. This is analogous to the greenhouse effect of greenhouse gases and water vapor.[118]
High-level genus-types particularly show this duality with both short-wave albedo cooling and long-wave greenhouse warming effects. On the whole, ice-crystal clouds in the upper troposphere (cirrus) tend to favor net warming.[119][120] However, the cooling effect is dominant with mid-level and low clouds, especially when they form in extensive sheets.[119] Measurements by NASA indicate that on the whole, the effects of low and mid-level clouds that tend to promote cooling outweigh the warming effects of high layers and the variable outcomes associated with vertically developed clouds.[119]
As difficult as it is to evaluate the influences of current clouds on current climate, it is even more problematic to predict changes in cloud patterns and properties in a future, warmer climate, and the resultant cloud influences on future climate. In a warmer climate more water would enter the atmosphere by evaporation at the surface; as clouds are formed from water vapor, cloudiness would be expected to increase. But in a warmer climate, higher temperatures would tend to evaporate clouds.[121] Both of these statements are considered accurate, and both phenomena, known as cloud feedbacks, are found in climate model calculations. Broadly speaking, if clouds, especially low clouds, increase in a warmer climate, the resultant cooling effect leads to a negative feedback in climate response to increased greenhouse gases. But if low clouds decrease, or if high clouds increase, the feedback is positive. Differing amounts of these feedbacks are the principal reason for differences in climate sensitivities of current global climate models. As a consequence, much research has focused on the response of low and vertical clouds to a changing climate. Leading global models produce quite different results, however, with some showing increasing low clouds and others showing decreases.[122][123] For these reasons the role of tropospheric clouds in regulating weather and climate remains a leading source of uncertainty in global warming projections.[124][125]
Polar stratospheric clouds (PSCs) form in the lowest part of the stratosphere during the winter, at the altitude and during the season that produce the coldest temperatures and therefore the best chances of triggering condensation caused by adiabatic cooling. Moisture is scarce in the stratosphere, so nacreous and non-nacreous clouds at this altitude range are restricted to polar regions in the winter where the air is coldest.[6]
PSCs show some variation in structure according to their chemical makeup and atmospheric conditions, but are limited to a single very high range of altitude of about 15,000–25,000 m (49,200–82,000 ft), so they are not classified into altitude levels, genus types, species, or varieties. There is no Latin nomenclature in the manner of tropospheric clouds; rather, descriptive names in common English are used.[6]
Supercooled nitric acid and water PSCs, sometimes known as type 1, typically have a stratiform appearance resembling cirrostratus or haze, but because they are not frozen into crystals, do not show the pastel colors of the nacreous types. This type of PSC has been identified as a cause of ozone depletion in the stratosphere.[126] The frozen nacreous types are typically very thin with mother-of-pearl colorations and an undulating cirriform or lenticular (stratocumuliform) appearance. These are sometimes known as type 2.[127][128]
Polar mesospheric clouds form at an extreme-level altitude range of about 80 to 85 km (50 to 53 mi). They are given the Latin name noctilucent because of their illumination well after sunset and before sunrise. They typically have a bluish or silvery white coloration that can resemble brightly illuminated cirrus. Noctilucent clouds may occasionally take on more of a red or orange hue.[6] They are not common or widespread enough to have a significant effect on climate.[129] However, an increasing frequency of occurrence of noctilucent clouds since the 19th century may be the result of climate change.[130]
Noctilucent clouds are the highest in the atmosphere and form near the top of the mesosphere at about ten times the altitude of tropospheric high clouds.[131] From ground level, they can occasionally be seen illuminated by the sun during deep twilight. Ongoing research indicates that convective lift in the mesosphere is strong enough during the polar summer to cause adiabatic cooling of a small amount of water vapor to the point of saturation. This tends to produce the coldest temperatures in the entire atmosphere, just below the mesopause. These conditions result in the best environment for the formation of polar mesospheric clouds.[129] There is also evidence that smoke particles from burnt-up meteors provide much of the condensation nuclei required for the formation of noctilucent clouds.[132]
Noctilucent clouds have four major types based on physical structure and appearance. Type I veils are very tenuous and lack well-defined structure, somewhat like cirrostratus or poorly defined cirrus.[133] Type II bands are long streaks that often occur in groups arranged roughly parallel to each other. They are usually more widely spaced than the bands or elements seen with cirrocumulus clouds.[134] Type III billows are arrangements of closely spaced, roughly parallel short streaks that mostly resemble cirrus.[135] Type IV whirls are partial or, more rarely, complete rings of cloud with dark centres.[136]
Distribution in the mesosphere is similar to the stratosphere except at much higher altitudes. Because of the need for maximum cooling of the water vapor to produce noctilucent clouds, their distribution tends to be restricted to polar regions of Earth. A major seasonal difference is that convective lift from below the mesosphere pushes very scarce water vapor to higher colder altitudes required for cloud formation during the respective summer seasons in the northern and southern hemispheres. Sightings are rare more than 45 degrees south of the north pole or north of the south pole.[6]
Cloud cover has been seen on most other planets in the Solar System. Venus's thick clouds are composed mainly of sulfuric acid droplets (formed from volcanically emitted sulfur dioxide) and appear to be almost entirely stratiform.[137] They are arranged in three main layers at altitudes of 45 to 65 km that obscure the planet's surface and can produce virga. No embedded cumuliform types have been identified, but broken stratocumuliform wave formations are sometimes seen in the top layer that reveal more continuous layer clouds underneath.[138] On Mars, noctilucent, cirrus, cirrocumulus, and stratocumulus composed of water ice have been detected, mostly near the poles.[139][140] Water-ice fogs have also been detected on Mars.[141]
Both Jupiter and Saturn have an outer cirriform cloud deck composed of ammonia,[142][143] an intermediate stratiform haze-cloud layer made of ammonium hydrosulfide, and an inner deck of cumulus water clouds.[144][145] Embedded cumulonimbus are known to exist near the Great Red Spot on Jupiter.[146][147] The same category-types can be found covering Uranus and Neptune, but are all composed of methane.[148][149][150][151][152][153] Saturn's moon Titan has cirrus clouds believed to be composed largely of methane.[154][155] The Cassini–Huygens Saturn mission uncovered evidence of polar stratospheric clouds[156] and a methane cycle on Titan, including lakes near the poles and fluvial channels on the surface of the moon.[157]
Some planets outside the Solar System are known to have atmospheric clouds. In October 2013, the detection of high altitude optically thick clouds in the atmosphere of exoplanet Kepler-7b was announced,[158][159] and, in December 2013, in the atmospheres of GJ 436 b and GJ 1214 b.[160][161][162][163]
Clouds play an important role in various cultures and religious traditions. The ancient Akkadians believed that the clouds were the breasts of the sky goddess Antu[165] and that rain was milk from her breasts.[165] In Exodus 13:21–22, Yahweh is described as guiding the Israelites through the desert in the form of a "pillar of cloud" by day and a "pillar of fire" by night.[164]
In the ancient Greek comedy The Clouds, written by Aristophanes and first performed at the City Dionysia in 423 BC, the philosopher Socrates declares that the Clouds are the only true deities[166] and tells the main character Strepsiades not to worship any deities other than the Clouds, but to pay homage to them alone.[166] In the play, the Clouds change shape to reveal the true nature of whoever is looking at them,[167][166][168] turning into centaurs at the sight of a long-haired politician, wolves at the sight of the embezzler Simon, deer at the sight of the coward Cleonymus, and mortal women at the sight of the effeminate informer Cleisthenes.[167][168][166] They are hailed the source of inspiration to comic poets and philosophers;[166] they are masters of rhetoric, regarding eloquence and sophistry alike as their "friends".[166]
In China, clouds are symbols of luck and happiness.[169] Overlapping clouds are thought to imply eternal happiness[169] and clouds of different colors are said to indicate "multiplied blessings".[169]
en/4199.html.txt
Night or nighttime (sp. night-time or night time) is the period of ambient darkness from sunset to sunrise in each twenty-four hours,[1] when the Sun is below the horizon. The exact time when night begins and ends (as is equally true of evening) depends on the location and varies throughout the year.[2] When night is considered as a period that follows evening, it is usually considered to start around 8 pm and to last until about 4 am.[3] Night ends with the coming of morning at sunrise.[4]
The word can be used in a different sense as the time between bedtime and morning.[5] In common communication the word 'night' is used as a farewell ('good night') and sometimes shortened to 'night', mainly when someone is going to sleep or leaving.[6] For example: It was nice to see you. Good night![citation needed] Unlike 'good morning,' 'good afternoon,' and 'good evening,' 'good night' (or 'goodnight') is not used as a greeting.
Complete darkness or astronomical night is the period between astronomical dusk and astronomical dawn when the Sun is between 18 and 90 degrees below the horizon and does not illuminate the sky. As seen from latitudes between 48.5607189° and 65.7273855666...° north or south of the Equator, complete darkness does not occur around the summer solstice because although the Sun sets, it is never more than 18° below the horizon at lower culmination.
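The latitude bounds quoted above follow from simple geometry: at the June solstice the Sun's altitude at lower culmination is (latitude + obliquity − 90°), and sunset itself is shifted by the ~50' correction for refraction plus the solar semidiameter. A minimal sketch; the obliquity value 23.4392811° is an assumption implied by the figures in the text:

```python
# Latitude band with no astronomical night at the June solstice.
# Assumes obliquity 23.4392811 deg (implied by the figures in the text)
# and the ~50' sunset correction for refraction plus solar semidiameter.
OBLIQUITY = 23.4392811      # Sun's declination at the solstice, degrees
ASTRO_DARK = 18.0           # Sun must be at least 18 deg below the horizon
SUNSET_CORR = 50 / 60       # refraction + semidiameter, degrees

# At lower culmination the Sun's altitude is (latitude + OBLIQUITY - 90).
# Astronomical night disappears once that altitude stays above -18 deg:
lower_bound = 90 - ASTRO_DARK - OBLIQUITY       # ~48.5607 deg

# Above this latitude the Sun's upper limb never sets (midnight sun):
upper_bound = 90 - OBLIQUITY - SUNSET_CORR      # ~65.7274 deg

print(f"no astronomical night between {lower_bound:.7f} and {upper_bound:.7f} deg")
```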
The opposite of night is day (or "daytime", to distinguish it from "day" referring to a 24-hour period). The start and end points of time for a night vary, based on factors such as season and latitude. Twilight is the period of night after sunset or before sunrise when the Sun still illuminates the sky when it is below the horizon. At any given time, one side of Earth is bathed in sunlight (the daytime) while the other side is in the shadow caused by Earth blocking the sunlight. The central part of the shadow is called the umbra.
Natural illumination at night is still provided by a combination of moonlight, planetary light, starlight, zodiacal light, gegenschein, and airglow. In some circumstances, aurorae, lightning, and bioluminescence can provide some illumination. The glow provided by artificial lighting is sometimes referred to as light pollution because it can interfere with observational astronomy and ecosystems.
On Earth, an average night is shorter than daytime due to two factors. First, the Sun's apparent disk is not a point: it has an angular diameter of about 32 arcminutes (32'). Second, the atmosphere refracts sunlight so that some of it reaches the ground even when the Sun is below the horizon by about 34'. The combination of these two factors means that light reaches the ground when the center of the solar disk is as much as about 50' below the horizon. Without these effects, daytime and night would be the same length on both equinoxes, the moments when the Sun appears to contact the celestial equator. On the equinoxes, daytime actually lasts almost 14 minutes longer than night does at the Equator, and even longer towards the poles.
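The "almost 14 minutes" figure can be checked from the 50' effective depression described above. This is a sketch only, for the simplest case of an observer on the Equator at an equinox (solar declination zero):

```python
import math

def equinox_day_length_hours(depression_arcmin=50.0):
    """Daytime length for an observer on the Equator at an equinox,
    counting the Sun as 'up' until its center is depression_arcmin
    below the horizon (solar semidiameter plus refraction combined)."""
    depression = math.radians(depression_arcmin / 60.0)
    # With declination 0 and latitude 0: sin(altitude) = cos(hour angle),
    # so sunset occurs at the hour angle H where cos(H) = -sin(depression).
    H = math.acos(-math.sin(depression))
    return 2.0 * math.degrees(H) / 15.0  # 15 degrees of hour angle per hour

day = equinox_day_length_hours()
excess_min = (2.0 * day - 24.0) * 60.0  # how much longer daytime is than night
print(f"daytime ~ {day:.2f} h, longer than night by ~ {excess_min:.1f} min")
```

With a 50' depression this gives a daytime of about 12.11 hours, roughly 13.3 minutes longer than the night, consistent with the "almost 14 minutes" in the text.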
The summer and winter solstices mark the shortest and longest nights, respectively. The closer a location is to either the North Pole or the South Pole, the wider the range of variation in the night's duration. Although daytime and night nearly equalize in length on the equinoxes, the ratio of night to day changes more rapidly at high latitudes than at low latitudes before and after an equinox. In the Northern Hemisphere, Denmark experiences shorter nights in June than India. In the Southern Hemisphere, Antarctica sees longer nights in June than Chile. Both hemispheres experience the same patterns of night length at the same latitudes, but the cycles are 6 months apart so that one hemisphere experiences long nights (winter) while the other is experiencing short nights (summer).
In the region within either polar circle, the variation in daylight hours is so extreme that part of summer sees a period without night intervening between consecutive days, while part of winter sees a period without daytime intervening between consecutive nights.
The phenomenon of day and night is due to the rotation of a celestial body about its axis, creating the illusion of the sun rising and setting. Different bodies spin at very different rates, however. Some may spin much faster than Earth, while others spin extremely slowly, leading to very long days and nights. The planet Venus rotates once every 243 days – by far the slowest rotation period of any of the major planets. In contrast, the gas giant Jupiter's sidereal day is only 9 hours and 56 minutes.[7] However, it is not just the sidereal rotation period which determines the length of a planet's day-night cycle but the length of its orbital period as well – Venus has a sidereal rotation period of 243 days, but a day-night cycle just 116.75 days long due to its retrograde rotation and orbital motion around the Sun. Mercury has the longest day-night cycle as a result of the 3:2 resonance between its orbital period and rotation period – this resonance gives it a day-night cycle that is 176 days long. A planet may experience large temperature variations between day and night, such as Mercury, the planet closest to the sun. This is one consideration in terms of planetary habitability or the possibility of extraterrestrial life.
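The day-night cycle lengths quoted above follow from combining the rotation rate with the orbital rate. A minimal sketch (the period values used below are the commonly cited ones, not taken from this article):

```python
def solar_day(rotation_days, orbit_days, retrograde=False):
    """Length of one day-night (synodic) cycle, given a planet's sidereal
    rotation period and orbital period, both in Earth days."""
    if retrograde:
        # Retrograde spin: rotation and orbital motion add together.
        return 1.0 / (1.0 / rotation_days + 1.0 / orbit_days)
    # Prograde spin: orbital motion partly cancels the rotation.
    return 1.0 / (1.0 / rotation_days - 1.0 / orbit_days)

# Venus: ~243-day retrograde rotation, ~224.7-day orbit -> ~116.75-day cycle.
print(solar_day(243.0, 224.7, retrograde=True))
# Mercury: 3:2 resonance (58.646-day rotation, 87.969-day orbit) -> ~176 days.
print(solar_day(58.646, 87.969))
```

Mercury's 176-day result is exactly two of its orbital periods, which is what the 3:2 spin-orbit resonance implies.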
The disappearance of sunlight, the primary energy source for life on Earth, has dramatic effects on the morphology, physiology and behavior of almost every organism. Some animals sleep during the night, while other nocturnal animals including moths and crickets are active during this time. The effects of day and night are not seen in the animal kingdom alone; plants have also evolved adaptations to cope best with the lack of sunlight during this time. For example, crassulacean acid metabolism is a unique type of carbon fixation which allows photosynthetic plants to store carbon dioxide in their tissues as organic acids during the night, which can then be used during the day to synthesize carbohydrates. This allows them to keep their stomata closed during the daytime, preventing transpiration of precious water.
As artificial lighting has improved, especially after the Industrial Revolution, night time activity has increased and become a significant part of the economy in most places. Many establishments, such as nightclubs, bars, convenience stores, fast-food restaurants, gas stations, distribution facilities, and police stations now operate 24 hours a day or stay open as late as 1 or 2 a.m. Even without artificial light, moonlight sometimes makes it possible to travel or work outdoors at night.
Night is often associated with danger and evil, because of the psychological connection of night's all-encompassing darkness to the fear of the unknown and darkness's obstruction of a major sensory system (the sense of sight). Nighttime is naturally associated with vulnerability and danger for human physical survival. Criminals, animals, and other potential dangers can be concealed by darkness. Midnight has a particular importance in human imagination and culture.
The belief in magic often includes the idea that magic and magicians are more powerful at night. Séances of spiritualism are usually conducted closer to midnight. Similarly, mythical and folkloric creatures such as vampires and werewolves are described as being more active at night. Ghosts are believed to wander around almost exclusively during night-time. In almost all cultures, there exist stories and legends warning of the dangers of night-time. In fact, the Saxons called the darkness of night the 'death mist'.[citation needed]
In literature, night and the lack of light are often color-associated with blackness which is historically symbolic in many cultures for villainy, non-existence, or a lack of knowledge (with the knowledge usually symbolized by light or illumination).
The cultural significance of the night in Islam differs from that in Western culture. The Quran was revealed during the Night of Power, the most significant night according to Islam. Muhammad made his famous journey from Mecca to Jerusalem and then to heaven in the night. Another prophet, Abraham, came to a realization of the supreme being in charge of the universe at night.
en/42.html.txt
Adele Laurie Blue Adkins MBE (/əˈdɛl/; born 5 May 1988) is an English singer-songwriter. After graduating from the BRIT School in 2006, Adele signed a recording contract with XL Recordings. In 2007, she received the Brit Awards Critics' Choice Award and won the BBC Sound of 2008 poll. Her debut album, 19, was released in 2008. It is certified eight times platinum in the UK, and three times platinum in the US. The album contains her first song, "Hometown Glory", written when she was 16, which is based on her home suburb of West Norwood in London. An appearance she made on Saturday Night Live in late 2008 boosted her career in the US. At the 51st Grammy Awards in 2009, Adele won the awards for Best New Artist and Best Female Pop Vocal Performance.
Adele released her second studio album, 21, in 2011. The album was critically well-received and surpassed the success of her debut, earning numerous awards in 2012, among them a record-tying six Grammy Awards, including Album of the Year; two Brit Awards, including British Album of the Year; and three American Music Awards, including Favorite Pop/Rock Album. The album has been certified 17 times platinum in the UK, and is overall the fourth best-selling album in the nation. In the US, it has held the top position longer than any album since 1985, and is certified diamond. The best-selling album worldwide of 2011 and 2012, 21 has sold over 31 million copies. The success of 21 earned Adele numerous mentions in the Guinness Book of Records. She was the first woman in the history of the Billboard Hot 100 to have three simultaneous top 10 singles as a lead artist, with "Rolling in the Deep", "Someone Like You", and "Set Fire to the Rain", all of which also topped the chart.
In 2012, Adele released "Skyfall", which she co-wrote and recorded for the James Bond film of the same name. The song won an Academy Award, a Grammy Award, and a Golden Globe for Best Original Song, as well as the Brit Award for British Single of the Year. After taking a three-year break, Adele released her third studio album, 25, in 2015. It became the year's best-selling album and broke first-week sales records in the UK and US. 25 was her second album to be certified diamond in the US and earned her five Grammy Awards, including Album of the Year, and four Brit Awards, including British Album of the Year. The lead single, "Hello", became the first song in the US to sell over one million digital copies within a week of its release. Her third concert tour, Adele Live 2016, visited Europe, North America and Oceania, and concluded with finale concerts at Wembley Stadium in late June 2017.[7]
In 2011, 2012, and 2016, Adele was named Artist of the Year by Billboard. At the 2012 and 2016 Ivor Novello Awards, Adele was named Songwriter of the Year by the British Academy of Songwriters, Composers, and Authors. In 2012, she was listed at number five on VH1's 100 Greatest Women in Music. Time magazine named her one of the most influential people in the world in 2012 and 2016. Her 2016–2017 tour saw her break attendance records in a number of countries, including the UK, Australia, and the US. With sales of more than 120 million records, Adele is one of the world's best-selling music artists.[8]
Adele Laurie Blue Adkins was born on 5 May 1988 in Tottenham, London, to an English mother, Penny Adkins, and a Welsh father, Marc Evans.[9] Evans left when Adele was two, leaving her mother to raise her.[10][11] She began singing at the age of four and asserts that she became obsessed with voices.[12][13] In 1997, at the age of nine, Adele and her mother, who by then had found work as a furniture maker and an adult-learning activities organiser, relocated to Brighton on the south coast of England.[14]
In 1999, she and her mother moved back to London; first to Brixton, then to the neighbouring district of West Norwood in south London, which is the subject of her first song “Hometown Glory”.[15] She spent much of her youth in Brockwell Park where she would play the guitar and sing to friends, which she recalled in her 2015 song “Million Years Ago”. She stated, “It has quite monumental moments of my life that I’ve spent there, and I drove past it [in 2015] and I just literally burst into tears. I really missed it.”[16] Adele graduated from the BRIT School for Performing Arts & Technology in Croydon in May 2006,[17] where she was a classmate of Leona Lewis and Jessie J.[1][18] Adele credits the school with nurturing her talent[19] even though, at the time, she was more interested in going into A&R and hoped to launch other people's careers.[1]
Four months after graduation, she published two songs on the fourth issue of the online arts publication PlatformsMagazine.com.[20] She had recorded a three-song demo for a class project and given it to a friend.[1] The friend posted the demo on Myspace, where it became very successful and led to a phone call from Richard Russell, boss of the music label XL Recordings. She doubted if the offer was real because the only record company she knew was Virgin Records, and she took a friend with her to the meeting.[18][21]
Nick Huggett, at XL, recommended Adele to manager Jonathan Dickins at September Management, and in June 2006, Dickins became her official representative.[22] September was managing Jamie T at the time and this proved a major draw for Adele, a big fan of the British singer-songwriter. Huggett then signed Adele to XL in September 2006.[22] Adele provided vocals for Jack Peñate's song, "My Yvonne," for his debut album, and it was during this session she first met producer Jim Abbiss, who would go on to produce both the majority of her debut album, 19, and tracks on 21.[23] In June 2007, Adele made her television debut, performing "Daydreamer" on the BBC's Later... with Jools Holland.[24] Adele's breakthrough song, "Hometown Glory", written when she was 16, was released in October 2007.[22]
By 2008, Adele had become the headliner and performed an acoustic set, in which she was supported by Damien Rice.[25] She became the first recipient of the Brit Awards Critics' Choice and was named the number-one predicted breakthrough act of 2008 in an annual BBC poll of music critics, Sound of 2008.[26][27] The album 19, named for her age at the time she wrote and composed many of its songs, entered the British charts at number one. The Times Encyclopedia of Modern Music named 19 an "essential" blue-eyed soul recording.[28] She released her second single, "Chasing Pavements", on 14 January 2008, two weeks ahead of her debut album, 19. The song reached number two on the UK Chart, and stayed there for four weeks.[29] Adele was nominated for a 2008 Mercury Prize award for 19.[30] She also won an Urban Music Award for "Best Jazz Act."[31] She also received a Music of Black Origin (MOBO) nomination in the category of Best UK Female.[32] In March 2008, Adele signed a deal with Columbia Records and XL Recordings for her foray into the United States.[33] She embarked on a short North American tour in the same month,[33] and 19 was released in the US in June.[19] Billboard magazine stated of it: "Adele truly has potential to become among the most respected and inspiring international artists of her generation."[34] The An Evening with Adele world tour began in May 2008 and ended in June 2009.[35]
She later cancelled the 2008 US tour dates to be with a former boyfriend.[36] She said in Nylon magazine in June 2009, "I'm like, 'I can't believe I did that.' It seems so ungrateful.... I was drinking far too much and that was kind of the basis of my relationship with this boy. I couldn't bear to be without him, so I was like, 'Well, I'll just cancel my stuff then.'"[36] She referred to this period as her "early life crisis".[36] She is also known for her dislike of flying and bouts of homesickness when away from her native London.[37] By the middle of October 2008, Adele's attempt to break in America appeared to have failed.[38] But then she was booked as the musical guest on the 18 October 2008 episode of NBC's Saturday Night Live. The episode, which included an appearance by then-US vice-presidential candidate Sarah Palin, earned the program its best ratings in 14 years with 17 million viewers. Adele performed "Chasing Pavements" and "Cold Shoulder,"[39] and the following day, 19 topped the iTunes charts and ranked at number five at Amazon.com while "Chasing Pavements" rose into the top 25.[40] The album reached number 11 on the Billboard 200 as a result, a jump of 35 places over the previous week.[41]
In November 2008 Adele moved to Notting Hill, London after leaving her mother's house, a move that prompted her to give up drinking.[42] The album was certified gold in early 2009, by the RIAA.[43] By July 2009, the album had sold 2.2 million copies worldwide.[44]
At the 51st Annual Grammy Awards in February 2009, Adele won the award for Best New Artist, in addition to the award for Best Female Pop Vocal Performance for "Chasing Pavements", which was also nominated for Record of the Year and Song of the Year.[45] Adele performed "Chasing Pavements" at the ceremony in a duet with Jennifer Nettles. In 2010, Adele received a Grammy nomination for Best Female Pop Vocal Performance for "Hometown Glory."[46] In April her song "My Same" entered the German Singles Chart after it had been performed by Lena Meyer-Landrut in the talent show contest Unser Star für Oslo, or Our Star for Oslo, in which the German entry to the Eurovision Song Contest 2010 was determined.[47][48]
In late September, after being featured on The X Factor, Adele's version of Bob Dylan's "Make You Feel My Love" re-entered the UK singles chart at number 4.[49] During the 2010 CMT Artists of the Year special, Adele performed a widely publicised duet of Lady Antebellum's "Need You Now" with Darius Rucker.[50] This performance was later nominated for a CMT Music Award.[51]
Adele released her second studio album, 21, on 24 January 2011 in the UK and 22 February in the US.[52][53] She said that the album was inspired by the break-up with her former partner.[11] The album's sound is described as classic and contemporary country and roots music. The change in sound from her first album was the result of her bus driver playing contemporary music from Nashville when she was touring the American South, and the title reflected the growth she had experienced in the prior two years.[53] Adele told Spin Magazine: "It was really exciting for me because I never grew up around [that music]."[54] 21 hit number 1 in 30 countries, including the UK and the US.[55][56][57]
An emotional performance of "Someone Like You" at the 2011 Brit Awards on 15 February propelled the song to number one in the UK.[58] Her first album, 19, re-entered the UK album chart alongside 21, while first and second singles "Rolling in the Deep" and "Someone Like You" were in the top 5 of the UK singles chart, making Adele the first living artist to achieve the feat of two top-five hits in both the Official Singles Chart and the Official Albums Chart simultaneously since the Beatles in 1964.[59] Both songs topped the charts in multiple markets and broke numerous sales performance records. Following her performance of "Someone Like You" at the 2011 MTV Video Music Awards, it became Adele's second number-one single on the Billboard Hot 100.[60] By December 2011, 21 sold over 3.4 million copies in the UK, and became the biggest-selling album of the 21st century, overtaking Amy Winehouse's Back to Black,[61][62] with Adele becoming the first artist ever to sell three million albums in the UK in one calendar year.[63][64] "Set Fire to the Rain" became Adele's third number-one single on the Billboard Hot 100, as Adele became the first artist ever to have an album, 21, hold the number-one position on the Billboard 200 concurrently with three number-one singles.[65] Moreover, 21 had the most weeks on the Billboard 200 chart of any album by a woman.[66]
To promote the album, Adele embarked upon the "Adele Live" tour, which sold out its North American leg.[67] In October 2011, Adele was forced to cancel two tours because of a vocal-cord haemorrhage. She released a statement saying she needed an extended period of rest to avoid permanent damage to her voice.[68] In the first week of November 2011 Steven M. Zeitels, director of the Center for Laryngeal Surgery and Voice Rehabilitation at the Massachusetts General Hospital in Boston, performed laser microsurgery on Adele's vocal cords to remove a benign polyp.[69][70][71] A recording of her tour, Live at the Royal Albert Hall, was released in November 2011, debuting at number one in the US with 96,000 copies sold, the highest one-week tally for a music DVD in four years, becoming the best-selling music DVD of 2011.[72] Adele is the first artist in Nielsen SoundScan history to have the year's number-one album (21), number-one single ("Rolling in the Deep"), and number-one music video (Live at the Royal Albert Hall).[73] At the 2011 American Music Awards on 20 November, Adele won three awards; Favorite Pop/Rock Female Artist, Favorite Adult Contemporary Artist, and Favorite Pop/Rock Album for 21.[74] On 9 December, Billboard named Adele Artist of the Year, Billboard 200 Album of the Year (21), and the Billboard Hot 100 Song of the Year ("Rolling in the Deep"), becoming the first woman ever to top all three categories.[75][76]
Following the throat microsurgery, she made her live comeback at the 2012 Grammy Awards in February.[77] She won in all six categories for which she was nominated, including Album of the Year, Record of the Year, and Song of the Year, making her the second female artist in Grammy history, after Beyoncé, to win that many categories in a single night.[78] Following that success, 21 achieved the biggest weekly sales increase after a Grammy win since Nielsen SoundScan began tracking data in 1991.[79][80] Adele received the Brit Awards for British Female Solo Artist and British Album of the Year, the latter presented to her by George Michael.[81][82] Following the Brit Awards, 21 reached number one for the 21st non-consecutive week in the UK.[83] The album has sold over 4.5 million copies in the UK, where it is the fourth best-selling album.[84] Its UK sales passed 4.5 million in October 2012, and in November it surpassed 10 million sales in the US.[85][86][87] The best-selling album worldwide of 2011 and 2012, as of 2016 the album has sold over 31 million copies.[88][89][90] By the end of 2014, she had sold an estimated 40 million albums and 50 million singles worldwide.[91] Adele is the only artist or band in the last decade in the US to earn an RIAA diamond certification for a one-disc album in less than two years.[86]
In October 2012, Adele confirmed that she had been writing, composing and recording the theme song for Skyfall, the twenty-third James Bond film.[92][93] The song "Skyfall," written and composed in collaboration with producer Paul Epworth, was recorded at Abbey Road Studios, and features orchestrations by J. A. C. Redford.[94] Adele stated recording "Skyfall" was "one of the proudest moments of my life." On 14 October, "Skyfall" rose to number 2 on the UK Singles Chart with sales of 92,000 copies bringing its overall sales to 176,000, and "Skyfall" entered the Billboard Hot 100 at number 8, selling 261,000 copies in the US in its first three days.[95] This tied "Skyfall" with Duran Duran's "A View to a Kill" as the highest-charting James Bond theme song on the UK Singles Chart;[96] a record surpassed in 2015 by Sam Smith's "Writing's on the Wall".[97]
"Skyfall" has sold more than five million copies worldwide[98] and earned Adele the Golden Globe Award for Best Original Song[99] and the Academy Award for Best Original Song.[100] In December 2012, Adele was named Billboard Artist of the Year, and 21 was named Album of the Year, making her the first artist to receive both accolades two years in a row.[101][102] Adele was also named top female artist.[102] The Associated Press named Adele Entertainer of the Year for 2012.[103] The 2013 Grammy Awards saw Adele's live version of "Set Fire to the Rain" win the Grammy Award for Best Pop Solo Performance, bringing her total wins to nine.[104]
On 3 April 2012, Adele confirmed that her third album would likely be at least two years away, stating, "I have to take time and live a little bit. There were a good two years between my first and second albums, so it'll be the same this time." She stated that she would continue writing and composing her own material.[105]
At the 2013 Grammy Awards, she confirmed that she was in the very early stages of her third album.[106][107] She also stated that she would most likely work with Paul Epworth again.[106]
In September 2013, Wiz Khalifa confirmed that he and Adele had collaborated on a song for his fifth studio album, Blacc Hollywood, though the collaboration did not make the final track listing.[108] In January 2014, Adele received her tenth Grammy Award with "Skyfall" winning Best Song Written for Visual Media at the 56th Annual Grammy Awards.[109]
On the eve of her 26th birthday in May 2014, Adele posted a cryptic message via her Twitter account which prompted media discussion about her next album. The message, "Bye bye 25... See you again later in the year," was interpreted by some in the media, including Capital FM, as meaning that her next album would be titled 25 and released later in the year.[110] In 2014, Adele was nominated for nine World Music Awards. In early August, Paul Moss suggested that an album would be released in 2014 or 2015.[111] However, in the October 2014 accounts filed with Companies House by XL Recordings, they ruled out a 2014 release.[112]
On 27 August 2015, Billboard reported that Adele's label, XL Recordings, had intentions of releasing her third studio album sometime in November 2015.[113] Danger Mouse was revealed to have contributed a song, while Tobias Jesso Jr. had written a track, and Ryan Tedder was "back in the mix after producing and co-writing "Rumour Has It" on 21."[113] At the 72nd Venice International Film Festival in early September 2015, Sia announced that her new single "Alive" was co-written by Adele, and had originally been intended for Adele's third album.[114] On 18 October, a 30-second clip of new material from Adele was shown on UK television during a commercial break on The X Factor. The commercial teases a snippet from a new song from her third album, with viewers hearing a voice singing accompanied by lyrics on a black screen.[115]
In a statement released three days later she confirmed that the album is titled 25, with Adele stating, "My last record was a break-up record, and if I had to label this one, I would call it a make-up record. Making up for lost time. Making up for everything I ever did and never did. 25 is about getting to know who I've become without realising. And I'm sorry it took so long but, you know, life happened."[116] Adele also said that 25 would be her last album titled after her age, the end of a trilogy.[117] On 22 October, Adele confirmed that 25 would be released on 20 November, while the lead single from the album, "Hello", would be released on 23 October.[118] The song was first played on Nick Grimshaw's Radio 1 Breakfast Show on the BBC on the morning of 23 October with Adele interviewed live.[119] The video of "Hello", released on 22 October, was viewed over 27.7 million times on YouTube in its first 24 hours, breaking the Vevo record for the most views in a day, surpassing the 20.1 million views for "Bad Blood" by Taylor Swift.[120] On 28 October, BBC News reported that "Hello" was being viewed on YouTube an average of one million times an hour.[121] "Hello" went on to become the fastest video to hit one billion views on YouTube, which it achieved after 88 days.[122] The video for "Hello" captured iconic British elements such as a red telephone box and a cup of tea.[123] The song debuted at number one on the UK Singles Chart on 30 October, with first week sales of 330,000 copies, making it the biggest-selling number one single in three years.[124] "Hello" also debuted at number one in many countries around the world, including Australia, France, Canada, New Zealand, Ireland and Germany, and on 2 November, the song debuted at number one on the Billboard Hot 100, becoming the first song in the US to sell at least one million downloads in a week, setting the record at 1.11 million.[125] By the end of 2015, it had sold 12.3 million units globally and was the year's 7th best-selling single despite being released in late October.[126]
On 27 October, BBC One announced plans for Adele at the BBC, a one-hour special presented by Graham Norton, in which Adele talks about her new album and performs new songs.[128] This was her first television appearance since performing at the 2013 Academy Awards ceremony, and the show was recorded before a live audience on 2 November for broadcast on 20 November, coinciding with the release of 25.[129] On 27 October it was also announced that Adele would appear on the US entertainment series Saturday Night Live on 21 November.[128][130] On 30 October, Adele confirmed that she would be performing a one-night-only concert titled Adele Live in New York City at the Radio City Music Hall on 17 November. Subsequently, NBC aired the concert special on 14 December.[131][132]
On 27 November, 25 debuted at number one on the UK Albums Chart and became the fastest selling album in UK chart history with over 800,000 copies sold in its first week.[133] The album debuted at number one in the US where it sold a record-breaking 3.38 million copies in its first week, the largest single sales week for an album since Nielsen began monitoring sales in 1991.[134] 25 also broke first week sales records in Canada and New Zealand.[135][136] 25 became the best-selling album of 2015 in a number of countries, including Australia, the UK and the US, spending seven consecutive weeks at number one in each country, before being displaced by David Bowie's Blackstar.[137][138][139] It was the best-selling album worldwide of 2015 with 17.4 million copies sold.[126] 25 has since sold 20 million copies globally.[140] Adele's seven weeks at the top of the UK Albums Chart took her total to 31 weeks at number one in the UK with her three albums, surpassing Madonna's previous record of most weeks at number one for a female act.[141] As the best-selling artist worldwide for 2015 the IFPI named Adele the Global Recording Artist of the Year.[142]
In November 2015, Adele's 2016 tour was announced, her first tour since 2011.[143] Beginning in Europe, Adele Live 2016 included four dates at the Manchester Arena in March 2016, six dates at the O2 Arena, London, with further dates in Ireland, Spain, Germany, Italy and the Netherlands among others.[144] Her North American Tour began on 5 July in St. Paul, Minnesota.[145] The leg included six nights at Madison Square Garden in New York City, eight nights at Staples Center in Los Angeles, and four nights at Air Canada Centre in Toronto.[146] Adele broke Taylor Swift's five-show record for most consecutive sold-out shows at the Staples Center.[147]
At the 2016 Brit Awards in London on 24 February, Adele received the awards for British Female Solo Artist, British Album of the Year for 25, British Single of the Year for "Hello", and British Global Success, bringing her Brit Award wins to eight.[149] She closed the ceremony by performing "When We Were Young", the second single from 25.[149] Two more singles from 25 were released in 2016: "Send My Love (To Your New Lover)" and "Water Under the Bridge". While on stage at London's O2 Arena on 17 March, Adele announced that she would be headlining on the Pyramid Stage at the 2016 Glastonbury Festival, which was later confirmed by the festival's organisers.[150] She appeared for a 90-minute, fifteen-song set at the festival on 25 June in front of 150,000 people, and described the experience as "by far, the best moment of my life so far".[151][152] In an interview with Jo Whiley on BBC Radio 2 around 30 minutes before going on stage, Adele said she had been going to Glastonbury since she was a child and that the festival had meant a lot to her, before she broke down. Whiley recalls, "She was really scared, really, really scared. We were doing the interview and at one point she had to stop as she was in tears. It was amazing to see somebody like that, then to witness her walking out on stage and doing the most incredible set. To know that half an hour before she'd been in tears at the thought of walking out there."[148]
As part of her world tour, in February and March 2017, Adele performed in Australia for the first time, playing outdoor stadiums around the country.[153] Her first two shows in New Zealand sold out in a record-breaking 23 minutes, and a third show was announced, with all tickets sold in under 30 minutes.[154] Adele sold over 600,000 tickets for her record-breaking eight date Australian tour, setting stadium records throughout the country; her Sydney show at ANZ Stadium on 10 March was seen by 95,000 people, the biggest single concert in Australian history, a record she broke the following night with more than 100,000 fans.[155]
Adele completed her world tour with two concerts, dubbed "The Finale", at Wembley Stadium, London on 28 and 29 June.[7] She announced the shows at "the home of football" by singing the England football team's "Three Lions" anthem and also the theme song to the BBC's weekly Premier League football show Match of the Day.[7] Adele had added another two concerts at Wembley after the first two dates sold out;[156] however, she cancelled the last two dates of the tour after damaging her vocal cords.[157] As a show of support, fans instead gathered outside Wembley Stadium to perform renditions of her songs, in an event titled "Sing for Adele".[158]
At the end of 2016, Billboard named Adele Artist of the Year for the third time,[159] and she also received the Top Billboard 200 Album award.[160] 25 was the best-selling album for a second consecutive year in the US.[161] With 213 million views, Adele's Carpool Karaoke through the streets of London with James Corden, a sketch which featured on Corden's talk show The Late Late Show with James Corden in January 2016, was the biggest YouTube viral video of 2016.[162] At the 59th Annual Grammy Awards in February 2017, Adele won all five of her nominations, bringing her number of awards to fifteen. She won Album of the Year and Best Pop Vocal Album for 25, and Record of the Year, Song of the Year and Best Pop Solo Performance for "Hello".[163] She also performed a tribute to the late George Michael, singing a rendition of his song "Fastlove"; due to technical difficulties which occurred during the performance, Adele decided to stop and restart, explaining "I can't mess this up for him".[164]
Adele was reportedly working on her fourth studio album by 2018.[165] On 5 May 2019, the date of her 31st birthday, Adele posted several black-and-white pictures of herself on her Instagram account celebrating her birthday along with a message reflecting on the preceding year. The message ended with "30 will be a drum n bass record to spite you". Media outlets, including NME, took the post as an indication that a new album was on the way.[166][167] On 15 February 2020, Adele announced at a friend's wedding that her fourth studio album would be out in September 2020.[168] However, she later stated that the album's release had been delayed due to the COVID-19 pandemic.[169]
Adele has cited the Spice Girls as a major influence in regard to her love and passion for music, stating that "they made me what I am today".[170] Adele impersonated the Spice Girls at dinner parties as a young girl.[171] She stated she was left "heartbroken" when her favourite Spice Girl, Geri Halliwell aka "Ginger Spice", left the group.[172][173] Growing up she also listened to Sinéad O'Connor,[174] the Cranberries,[175] Bob Marley,[176] the Cure,[177] Dusty Springfield,[178] Celine Dion,[179] and Annie Lennox.[180]
One of Adele's earliest influences was Gabrielle, whom Adele has admired since the age of five. During Adele's school years, her mother made her a sequined eye patch, which she wore to perform as the Hackney-born star in a school talent contest.[181] After moving to south London, she became interested in R&B artists such as Aaliyah, Destiny's Child, and Mary J. Blige.[182] Adele says that one of the most defining moments in her life was when she watched Pink perform at Brixton Academy in London. She states: "It was the Missundaztood record, so I was about 13 or 14. I had never heard, being in the room, someone sing like that live [...] I remember sort of feeling like I was in a wind tunnel, her voice just hitting me. It was incredible."[183][184] Adele also cites Jeff Buckley's album Grace as an influence, saying "I remember falling out with my best friend when I was like seven and listening to Jeff Buckley, because my mum was a huge fan. Grace has always been around me".[185]
In 2002, 14-year-old Adele discovered Etta James and Ella Fitzgerald as she stumbled on the artists' CDs in the jazz section of her local music store. She was struck by their appearance on the album covers.[186] Adele states she then "started listening to Etta James every night for an hour," and in the process was getting "to know my own voice."[186] Adele credits Amy Winehouse and her 2003 album Frank for inspiring her to take up the guitar, stating, "If it wasn't for Amy and Frank, one hundred per cent I wouldn't have picked up a guitar, I wouldn't have written "Daydreamer" or "Hometown [Glory]" and I wrote "Someone Like You" on the guitar too."[187] She also states that her mother, who is very close to her, exposed her to the music of Aaliyah, Lauryn Hill, Mary J. Blige, and Alicia Keys, all of whom inspired her as well.[174] On the rock band Queen, she states, "I love them. I'm the biggest Queen fan ever. They're the kind of band that's just in your DNA, really. Everyone just knows who they are."[188] She is also a fan of Lana Del Rey, Grimes, Chvrches, FKA Twigs, Alabama Shakes, Kanye West, Rihanna, Britney Spears, Frank Ocean, and Stevie Nicks.[189][190][191][192] In 2017, she described Beyoncé as a particular inspiration, calling her "my artist of my life" and added "the other artists who mean that much to me are all dead."[193] Adele cited Madonna's 1998 album Ray of Light as a "chief inspiration" behind her album 25.[190] Adele mentioned that Taylor Swift is the inspiration behind her song "Send My Love (To Your New Lover)", with Adele stating "I was in New York, writing "Remedy" with Ryan Tedder. We were having lunch, and "I Knew You Were Trouble" came on the radio — Taylor's song that she did with Max Martin and Shellback. I was like, 'Who did this?' I knew it was Taylor, and I've always loved her, but this is a totally other side — like, 'I want to know who brought that out in her.' And he said Max Martin. I was unaware that I knew who Max Martin was. I Googled him, and I was like, 'He's literally written every massive soundtrack of my life.' So I got my management to reach out. They came to London, and I took my guitar along and was like, 'I've got this riff,' and then "Send My Love" happened really quickly. Max Martin, I just could hang out with him forever. He's so beautiful and lovely and funny and generous and warm and caring. He's a really amazing man."[194]
Adele's debut album, 19, is of the soul genre, with lyrics describing heartbreak and relationships.[19] Her success occurred simultaneously with that of several other British female soul singers, with the British press dubbing her a new Amy Winehouse.[1] This was described as a third British Musical Invasion of the US.[18] However, Adele called the comparisons between her and other female soul singers lazy, noting "we're a gender, not a genre".[19][195][196] AllMusic wrote that "Adele is simply too magical to compare her to anyone."[186]
Her second album, 21, shares the folk and soul influences of her debut album, but was further inspired by the American country and Southern blues music to which she had been exposed during her 2008–09 North American tour An Evening with Adele.[197][198] Composed in the aftermath of Adele's separation from her partner, the album typifies the near-dormant tradition of the confessional singer-songwriter in its exploration of heartbreak, self-examination, and forgiveness. Having referred to 21 as a "break-up record", Adele labelled her third studio album, 25, a "make-up record", adding it was about "Making up for lost time. Making up for everything I ever did and never did."[116] Her yearning for her old self, her nostalgia, and her melancholy about the passage of time are features of 25, with Adele stating, "I've had a lot of regrets since I turned 25. And sadness hits me in different ways than it used to. There's a lot of things I don't think I'll ever get 'round to doing."[199]
—Dave Simpson of The Guardian on her voice and down-to-earth manner.[200]
Adele is a mezzo-soprano with a range spanning from B2 to C6. However, Classic FM states that she is often mistaken for a contralto due to the application of a tense chest mix to achieve her lower notes, while also noting that her voice becomes its clearest as she ascends the register, particularly from C4 to C5.[201][202][203][204] Rolling Stone reported that following throat surgery her voice had become "palpably bigger and purer-toned", and that she had added a further four notes to the top of her range.[199] Initially, critics suggested that her vocals were more developed and intriguing than her songwriting, a sentiment with which Adele agreed.[205] She has stated: "I taught myself how to sing by listening to Ella Fitzgerald for acrobatics and scales, Etta James for passion and Roberta Flack for control."[206]
Adele's singing voice has been acclaimed by critics. In a review of 19, The Observer said, "The way she stretched the vowels, her wonderful soulful phrasing, the sheer unadulterated pleasure of her voice, stood out all the more; little doubt that she's a rare singer".[207] BBC Music wrote, "Her melodies exude warmth, her singing is occasionally stunning and, ...she has tracks that make Lily Allen and Kate Nash sound every bit as ordinary as they are."[208] Also in 2008, Sylvia Patterson of The Guardian wrote, "Of all the gobby new girls, only Adele's bewitching singing voice has the enigmatic quality which causes tears of involuntary emotion to splash down your face in the way Eva Cassidy’s did before her."[209] For their reviews of 21, The New York Times' chief music critic Jon Pareles commended Adele's emotive timbre, comparing her to Dusty Springfield, Petula Clark, and Annie Lennox: "[Adele] can seethe, sob, rasp, swoop, lilt and belt, in ways that draw more attention to the song than to the singer".[210] Ryan Reed of Paste magazine regarded her voice as "a raspy, aged-beyond-its-years thing of full-blooded beauty",[211] while MSN Music's Tom Townshend declared her "the finest singer of [our] generation".[212]
Adele began dating charity entrepreneur and Old Etonian Simon Konecki in the summer of 2011.[213] In June 2012, Adele announced that she and Konecki were expecting a child.[214][215] Their son Angelo James was born on 19 October 2012.[216] On the topic of becoming a parent, Adele observed that she "felt like I was truly living. I had a purpose, where before I didn't".[217] Adele and Konecki brought a privacy case against a UK-based photo agency that published intrusive paparazzi images of their son taken during family outings in 2013.[218] Lawyers working on their behalf accepted damages from the company in July 2014.[219] Adele has also stated that she has suffered from postnatal depression, generalized anxiety disorder, and panic attacks.[220][221]
In early 2017, tabloids started speculating that Adele and Konecki had secretly married when they were spotted wearing matching rings on their ring fingers.[222] During her acceptance speech at the 59th Annual Grammy Awards for Album of the Year, Adele confirmed these reports of their marriage by calling Konecki her husband when thanking him.[223] She repeated this in March 2017, telling the audience at a concert in Brisbane, Australia, "I'm married now".[224] Adele became a stay-at-home mother.[225] In April 2019, Adele's representatives announced to the Associated Press that she and Konecki had separated after more than seven years together, but that they would continue to raise their son together.[226][227] On 13 September 2019, it was reported that Adele had filed for divorce from Konecki in the US.[228]
Politically, she is a supporter of the Labour Party, stating in 2011 that she was a "Labour girl through and through", and in the same interview was critical of the Conservative Party.[229] Despite this declared political affiliation, Adele received backlash after comments about paying taxes during a 2011 interview with Q magazine. She stated, "I use the NHS, I can't use public transport any more, doing what I do, I went to state school, I'm mortified to have to pay 50 percent! Trains are always late, most state schools are shit and I've gotta give you like four million quid, are you having a laugh? When I got my tax bill in from 19 I was ready to go and buy a gun and randomly open fire."[230][231] In 2015, Adele stated "I'm a feminist, I believe that everyone should be treated the same, including race and sexuality".[190]
Born in the North London district of Tottenham, Adele supports local football club Tottenham Hotspur.[232] In 2017, Adele was ranked the richest musician under 30 years old in the UK and Ireland in the Sunday Times Rich List, which valued her wealth at £125 million. She was ranked the 19th richest musician overall.[233] On the 2019 list, she was valued at £150 million as the 22nd richest musician in the UK.[234]
Supportive of the LGBT community, on 12 June 2016, an emotional Adele dedicated her show in Antwerp, Belgium to the victims of the mass shooting at a gay nightclub in Orlando, Florida, United States earlier that day, adding "The LGBTQ community, they're like my soul mates since I was really young, so I'm very moved by it."[235][236] In April 2018, it was widely reported that Adele had become an ordained minister in order to officiate at close friend comedian Alan Carr's wedding to Paul Drayton, something which Adele herself subsequently confirmed. The wedding, held in January 2018, took place in the garden of her house in Los Angeles, California.[237]
Adele has performed in numerous charity concerts throughout her career. In 2007 and 2008 she performed at the Little Noise Sessions held at London's Union Chapel, with proceeds from the concerts donated to Mencap, which works with people with learning disabilities.[37] In July and November 2008, Adele performed at the Keep a Child Alive Black Ball in London and New York City respectively.[238][239][240] On 17 September 2009 she performed at the Brooklyn Academy of Music, for the VH1 Divas event, a concert to raise money for the Save The Music Foundation charity.[241][242] On 6 December, Adele opened with a 40-minute set at John Mayer's 2nd Annual Holiday Charity Revue held at the Nokia Theatre in Los Angeles.[243] In 2011, Adele gave a free concert for Pride London, a registered charity which arranges LGBT events in London.[244] The same year, Adele took part in the UK charity telethon Comic Relief for Red Nose Day 2011, performing "Someone like You".[245]
Adele has been a major contributor to MusiCares, a charity organisation founded by the National Academy of Recording Arts and Sciences for musicians in need. In February 2009, Adele performed at the 2009 MusiCares charity concert in Los Angeles. In 2011 and 2012, Adele donated autographed items for auctions to support MusiCares.[246][247][248] Adele required all backstage visitors to the North American leg of her Adele Live tour to donate a minimum charitable contribution of US$20 for the UK charity SANDS, an organisation dedicated to "supporting anyone affected by the death of a baby and promoting research to reduce the loss of babies' lives".[249]
On 15 June 2017, Adele attended a vigil in west London for the victims of the Grenfell Tower fire where, keeping a low profile, she was only spotted by a small handful of fans.[250] Four days later she appeared at Chelsea fire station and brought cakes for the firefighters.[251] Station manager Ben King stated "She came in, came up to the mess and had a cup of tea with the watch and then she joined us for the minute's silence."[251] Paying tribute to the victims at her first Wembley show on 28 June, Adele encouraged fans to donate money to help the victims of the blaze rather than waste the money on "overpriced wine".[252]
At the 51st Annual Grammy Awards in 2009, Adele won awards in the categories of Best New Artist and Best Female Pop Vocal Performance.[253] She was also nominated in the categories of Record of the Year and Song of the Year.[254] The success of her debut album 19 saw Adele nominated for three Brit Awards in the categories of British Female Solo Artist, British Single of the Year and British Breakthrough Act.[255] Then British Prime Minister Gordon Brown sent a thank-you letter to Adele that stated "with the troubles that the country's in financially, you're a light at the end of the tunnel".[256]
Adele's second album, 21, earned her a record-tying six Grammy Awards, including Album of the Year, and two Brit Awards, including British Album of the Year. Adele was the second artist, and the first female artist, after Christopher Cross to have won all four of the general field awards throughout her career.[257] The success of the album saw her receive numerous mentions in the Guinness Book of World Records.[258] With 21 non-consecutive weeks at number one in the US, Adele broke the record for the longest-running number-one album by a woman in Billboard history, beating the record formerly held by Whitney Houston's soundtrack The Bodyguard.[80] 21 spent its 23rd week at number one in March 2012, making it the longest-running album at number one since 1985,[259] and it became the fourth best-selling album of the past 10 years in the US.[260] The best-selling album in the UK of the 21st century, and the best-selling album by a female artist in UK chart history, 21 is also the fourth best-selling album in the UK of all time.[261][262] 21 was her first album certified diamond in the US.[263] On 6 March, 21 reached 30 non-consecutive weeks at number one on the Australian ARIA Chart, making it the longest-running number-one album in Australia in the 21st century, and the second longest-running number-one album ever.[264]
In February 2012, Adele was listed at number five on VH1's 100 Greatest Women in Music.[265] In April 2012, Time magazine named Adele one of the 100 most influential people in the world.[266][267] People named her one of 2012's Most Beautiful at Every Age.[268] On 30 April 2012, a tribute to Adele was held at New York City's (Le) Poisson Rouge called Broadway Sings Adele, starring various Broadway actors such as Matt Doyle.[269] In July 2012, Adele was listed at number six in Forbes' list of the world's highest-paid celebrities under the age of 30, having earned £23 million between May 2011 and May 2012.[270]
On the week ending 3 March 2012, Adele became the first solo female artist to have three singles in the top 10 of the Billboard Hot 100 at the same time with "Rolling in the Deep", "Someone Like You", and "Set Fire to the Rain", as well as the first female artist to have two albums in the top 5 of the Billboard 200 and two singles in the top 5 of the Billboard Hot 100 simultaneously.[271] Adele topped the 2012 Sunday Times Rich List of musicians in the UK under 30,[272] and made the Top 10 of Billboard magazine's "Top 40 Money Makers".[273] Billboard also announced the same day that Adele's "Rolling in the Deep" was the biggest crossover hit of the past 25 years, topping pop, adult pop and adult contemporary charts, and that Adele is one of four female artists to have an album chart at number one for more than 13 weeks (the other three artists being Judy Garland, Carole King, and Whitney Houston).[273]
At the 2012 Ivor Novello Awards in May, Adele was named Songwriter of the Year, and "Rolling in the Deep" won the award for Most Performed Work of 2011.[274] At the 2012 BMI Awards held in London in October, Adele won Song of the Year (for "Rolling in the Deep") in recognition of the song being the most played on US television and radio in 2011.[275] In 2013, Adele won the Academy Award for Best Original Song for the James Bond theme "Skyfall". This is the first James Bond song to win and the fifth to be nominated—after "For Your Eyes Only" (1981), "Nobody Does It Better" (1977), "Live and Let Die" (1973), and "The Look of Love" (1967).[276][277] "Skyfall" won the Brit Award for Best British Single at the 2013 Brit Awards.[278]
In June 2013, Adele was appointed an MBE in the Queen's Birthday Honours list for services to music, and she received the award from Prince Charles at Buckingham Palace on 19 December 2013.[279][280] In February 2013 she was named one of the 100 most powerful women in the UK by Woman's Hour on BBC Radio 4.[281] In April 2016, Adele appeared for the second time on the Time 100 list of most influential people.[282] In 2014, Adele was already being regarded as a British cultural icon, with young adults from abroad naming her among a group of people that they most associated with UK culture, which included William Shakespeare, Queen Elizabeth II, David Beckham, J. K. Rowling, The Beatles, Charlie Chaplin and Elton John.[283][284]
Released in 2015, Adele's third album, 25, became the year's best-selling album and broke first-week sales records in a number of markets, including the UK and the US.[285] 25 was her second album to be certified diamond in the US and earned her five Grammy Awards, including her second Grammy Award for Album of the Year, and four Brit Awards for British Female Solo Artist, British Album of the Year, British Single of the Year for "Hello", and British Global Success.[149] Adele became the only artist in history to win the three general-category Grammys in the same ceremony on two separate occasions.[286] With 15 awards from 18 nominations, Adele has won more Grammys than any other female artist born outside the US.[287] Adele's seven weeks at the top of the UK Albums Chart took her total to 31 weeks at number one in the UK with her three albums, surpassing Madonna's previous record of most weeks at number one for a female act in the UK.[141] The lead single, "Hello", became the first song in the US to sell over one million digital copies within a week of its release.[125] At the 2016 Ivor Novello Awards Adele was named Songwriter of the Year by the British Academy of Songwriters, Composers, and Authors.[288] As of 6 August 2019, despite releasing just two albums in the decade (21 and 25), her 36 weeks at number one on the UK Albums Chart were the second most of the 2010s, five weeks behind Ed Sheeran (who released four albums).[289][290] In December 2019, Israel's largest TV and radio stations named her singer of the 2010s.[291]
en/420.html.txt
Astronomy (from Greek: ἀστρονομία) is a natural science that studies celestial objects and phenomena. It uses mathematics, physics, and chemistry in order to explain their origin and evolution. Objects of interest include planets, moons, stars, nebulae, galaxies, and comets. Relevant phenomena include supernova explosions, gamma ray bursts, quasars, blazars, pulsars, and cosmic microwave background radiation. More generally, astronomy studies everything that originates outside Earth's atmosphere. Cosmology is a branch of astronomy. It studies the Universe as a whole.[1]
Astronomy is one of the oldest natural sciences. The early civilizations in recorded history made methodical observations of the night sky. These include the Babylonians, Greeks, Indians, Egyptians, Chinese, Maya, and many ancient indigenous peoples of the Americas. In the past, astronomy included disciplines as diverse as astrometry, celestial navigation, observational astronomy, and the making of calendars. Nowadays, professional astronomy is often said to be the same as astrophysics.[2]
Professional astronomy is split into observational and theoretical branches. Observational astronomy is focused on acquiring data from observations of astronomical objects. This data is then analyzed using basic principles of physics. Theoretical astronomy is oriented toward the development of computer or analytical models to describe astronomical objects and phenomena. These two fields complement each other. Theoretical astronomy seeks to explain observational results and observations are used to confirm theoretical results.
Astronomy is one of the few sciences in which amateurs play an active role. This is especially true for the discovery and observation of transient events. Amateur astronomers have helped with many important discoveries, such as finding new comets.
Astronomy (from the Greek ἀστρονομία from ἄστρον astron, "star" and -νομία -nomia from νόμος nomos, "law" or "culture") means "law of the stars" (or "culture of the stars" depending on the translation). Astronomy should not be confused with astrology, the belief system which claims that human affairs are correlated with the positions of celestial objects.[4] Although the two fields share a common origin, they are now entirely distinct.[5]
"Astronomy" and "astrophysics" are synonyms.[6][7][8] Based on strict dictionary definitions, "astronomy" refers to "the study of objects and matter outside the Earth's atmosphere and of their physical and chemical properties,"[9] while "astrophysics" refers to the branch of astronomy dealing with "the behavior, physical properties, and dynamic processes of celestial objects and phenomena".[10] In some cases, as in the introduction of the introductory textbook The Physical Universe by Frank Shu, "astronomy" may be used to describe the qualitative study of the subject, whereas "astrophysics" is used to describe the physics-oriented version of the subject.[11] However, since most modern astronomical research deals with subjects related to physics, modern astronomy could actually be called astrophysics.[6] Some fields, such as astrometry, are purely astronomy rather than also astrophysics. Various departments in which scientists carry out research on this subject may use "astronomy" and "astrophysics", partly depending on whether the department is historically affiliated with a physics department,[7] and many professional astronomers have physics rather than astronomy degrees.[8] Some titles of the leading scientific journals in this field include The Astronomical Journal, The Astrophysical Journal, and Astronomy & Astrophysics.
In early historic times, astronomy only consisted of the observation and predictions of the motions of objects visible to the naked eye. In some locations, early cultures assembled massive artifacts that possibly had some astronomical purpose. In addition to their ceremonial uses, these observatories could be employed to determine the seasons, an important factor in knowing when to plant crops and in understanding the length of the year.[12]
Before tools such as the telescope were invented, early study of the stars was conducted using the naked eye. As civilizations developed, most notably in Mesopotamia, Greece, Persia, India, China, Egypt, and Central America, astronomical observatories were assembled and ideas on the nature of the Universe began to develop. Most early astronomy consisted of mapping the positions of the stars and planets, a science now referred to as astrometry. From these observations, early ideas about the motions of the planets were formed, and the nature of the Sun, Moon and the Earth in the Universe were explored philosophically. The Earth was believed to be the center of the Universe with the Sun, the Moon and the stars rotating around it. This is known as the geocentric model of the Universe, or the Ptolemaic system, named after Ptolemy.[13]
A particularly important early development was the beginning of mathematical and scientific astronomy, which began among the Babylonians, who laid the foundations for the later astronomical traditions that developed in many other civilizations.[15] The Babylonians discovered that lunar eclipses recurred in a repeating cycle known as a saros.[16]
Following the Babylonians, significant advances in astronomy were made in ancient Greece and the Hellenistic world. Greek astronomy is characterized from the start by seeking a rational, physical explanation for celestial phenomena.[17] In the 3rd century BC, Aristarchus of Samos estimated the size and distance of the Moon and Sun, and he proposed a model of the Solar System where the Earth and planets rotated around the Sun, now called the heliocentric model.[18] In the 2nd century BC, Hipparchus discovered precession, calculated the size and distance of the Moon and invented the earliest known astronomical devices such as the astrolabe.[19] Hipparchus also created a comprehensive catalog of 1020 stars, and most of the constellations of the northern hemisphere derive from Greek astronomy.[20] The Antikythera mechanism (c. 150–80 BC) was an early analog computer designed to calculate the location of the Sun, Moon, and planets for a given date. Technological artifacts of similar complexity did not reappear until the 14th century, when mechanical astronomical clocks appeared in Europe.[21]
Medieval Europe housed a number of important astronomers. Richard of Wallingford (1292–1336) made major contributions to astronomy and horology, including the invention of the first astronomical clock, the Rectangulus (which allowed for the measurement of angles between planets and other astronomical bodies), and an equatorium called the Albion, which could be used for astronomical calculations such as lunar, solar and planetary longitudes and could predict eclipses. Nicole Oresme (1320–1382) and Jean Buridan (1300–1361) first discussed evidence for the rotation of the Earth; furthermore, Buridan also developed the theory of impetus (a predecessor of the modern scientific theory of inertia), which was able to show that planets were capable of motion without the intervention of angels.[22] Georg von Peuerbach (1423–1461) and Regiomontanus (1436–1476) helped make astronomical progress instrumental to Copernicus's development of the heliocentric model decades later.
Astronomy flourished in the Islamic world and other parts of the world. This led to the emergence of the first astronomical observatories in the Muslim world by the early 9th century.[23][24][25] In 964, the Andromeda Galaxy, the largest galaxy in the Local Group, was described by the Persian Muslim astronomer Abd al-Rahman al-Sufi in his Book of Fixed Stars.[26] The SN 1006 supernova, the brightest apparent magnitude stellar event in recorded history, was observed by the Egyptian Arabic astronomer Ali ibn Ridwan and Chinese astronomers in 1006. Some of the prominent Islamic (mostly Persian and Arab) astronomers who made significant contributions to the science include Al-Battani, Thebit, Abd al-Rahman al-Sufi, Biruni, Abū Ishāq Ibrāhīm al-Zarqālī, Al-Birjandi, and the astronomers of the Maragheh and Samarkand observatories. Astronomers during that time introduced many Arabic names now used for individual stars.[27][28] It is also believed that the ruins at Great Zimbabwe and Timbuktu[29] may have housed astronomical observatories.[30] Europeans had previously believed that there had been no astronomical observation in sub-Saharan Africa during the pre-colonial Middle Ages, but modern discoveries show otherwise.[31][32][33][34]
For over six centuries (from the recovery of ancient learning during the late Middle Ages into the Enlightenment), the Roman Catholic Church gave more financial and social support to the study of astronomy than probably all other institutions. Among the Church's motives was finding the date for Easter.[35]
During the Renaissance, Nicolaus Copernicus proposed a heliocentric model of the solar system. His work was defended by Galileo Galilei and expanded upon by Johannes Kepler. Kepler was the first to devise a system that correctly described the details of the motion of the planets around the Sun. However, Kepler did not succeed in formulating a theory behind the laws he wrote down.[36] It was Isaac Newton, with his invention of celestial dynamics and his law of gravitation, who finally explained the motions of the planets. Newton also developed the reflecting telescope.[37]
Improvements in the size and quality of the telescope led to further discoveries. The English astronomer John Flamsteed catalogued over 3000 stars.[38] More extensive star catalogues were produced by Nicolas Louis de Lacaille. The astronomer William Herschel made a detailed catalog of nebulae and clusters, and in 1781 discovered the planet Uranus, the first new planet found.[39]
During the 18–19th centuries, the study of the three-body problem by Leonhard Euler, Alexis Claude Clairaut, and Jean le Rond d'Alembert led to more accurate predictions about the motions of the Moon and planets. This work was further refined by Joseph-Louis Lagrange and Pierre Simon Laplace, allowing the masses of the planets and moons to be estimated from their perturbations.[40]
Significant advances in astronomy came about with the introduction of new technology, including the spectroscope and photography. Joseph von Fraunhofer discovered about 600 bands in the spectrum of the Sun in 1814–15, which, in 1859, Gustav Kirchhoff ascribed to the presence of different elements. Stars were proven to be similar to the Earth's own Sun, but with a wide range of temperatures, masses, and sizes.[27]
The existence of the Earth's galaxy, the Milky Way, as its own group of stars was only proved in the 20th century, along with the existence of "external" galaxies. The observed recession of those galaxies led to the discovery of the expansion of the Universe.[41] Theoretical astronomy led to speculations on the existence of objects such as black holes and neutron stars, which have been used to explain such observed phenomena as quasars, pulsars, blazars, and radio galaxies. Physical cosmology made huge advances during the 20th century. In the early 1900s the model of the Big Bang theory was formulated, heavily evidenced by cosmic microwave background radiation, Hubble's law, and the cosmological abundances of elements. Space telescopes have enabled measurements in parts of the electromagnetic spectrum normally blocked or blurred by the atmosphere.[citation needed] In February 2016, it was revealed that the LIGO project had detected evidence of gravitational waves in the previous September.[42][43]
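The expansion of the Universe shows up observationally as Hubble's law, v ≈ H₀d: recession velocity is proportional to distance. A minimal sketch, assuming a round illustrative value H₀ ≈ 70 km/s/Mpc:

```python
# Hubble's law: recession velocity v = H0 * d (illustrative sketch).
H0 = 70.0             # Hubble constant, km/s per megaparsec (assumed round value)
KM_PER_MPC = 3.086e19 # kilometres in one megaparsec
S_PER_GYR = 3.156e16  # seconds in a billion years

def recession_velocity_km_s(distance_mpc):
    """Recession velocity (km/s) of a galaxy at the given distance (Mpc)."""
    return H0 * distance_mpc

# A galaxy 100 Mpc away recedes at ~7000 km/s.
print(recession_velocity_km_s(100))

# The inverse of H0 gives the "Hubble time", a rough age scale for the Universe.
hubble_time_gyr = KM_PER_MPC / H0 / S_PER_GYR
print(round(hubble_time_gyr, 1))  # within ~1 Gyr of the measured 13.8 Gyr age
```

The near-agreement of the Hubble time with the measured age of the Universe is a consistency check, not a derivation; the true age depends on the expansion history.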
The main source of information about celestial bodies and other objects is visible light, or more generally electromagnetic radiation.[44] Observational astronomy may be categorized according to the corresponding region of the electromagnetic spectrum on which the observations are made. Some parts of the spectrum can be observed from the Earth's surface, while other parts are only observable from either high altitudes or outside the Earth's atmosphere. Specific information on these subfields is given below.
Radio astronomy uses radiation with wavelengths greater than approximately one millimeter, outside the visible range.[45] Radio astronomy is different from most other forms of observational astronomy in that the observed radio waves can be treated as waves rather than as discrete photons. Hence, it is relatively easier to measure both the amplitude and phase of radio waves, whereas this is not as easily done at shorter wavelengths.[45]
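The wavelength boundaries quoted throughout this section translate to frequencies via ν = c/λ. A small sketch (values are illustrative) showing where the ~1 mm radio cutoff and the 21 cm hydrogen line fall:

```python
# Convert an observing wavelength to frequency via nu = c / lambda.
C = 2.998e8  # speed of light, m/s

def wavelength_to_frequency_ghz(wavelength_m):
    """Return the frequency in GHz for a wavelength given in metres."""
    return C / wavelength_m / 1e9

# The ~1 mm lower bound of radio astronomy corresponds to roughly 300 GHz.
print(wavelength_to_frequency_ghz(1e-3))
# The 21 cm hydrogen line corresponds to roughly 1.4 GHz.
print(wavelength_to_frequency_ghz(0.21))
```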
Although some radio waves are emitted directly by astronomical objects, a product of thermal emission, most of the radio emission that is observed is the result of synchrotron radiation, which is produced when electrons orbit magnetic fields.[45] Additionally, a number of spectral lines produced by interstellar gas, notably the hydrogen spectral line at 21 cm, are observable at radio wavelengths.[11][45]
A wide variety of other objects are observable at radio wavelengths, including supernovae, interstellar gas, pulsars, and active galactic nuclei.[11][45]
Infrared astronomy is founded on the detection and analysis of infrared radiation, wavelengths longer than red light and outside the range of our vision. The infrared spectrum is useful for studying objects that are too cold to radiate visible light, such as planets, circumstellar disks or nebulae whose light is blocked by dust. The longer wavelengths of infrared can penetrate clouds of dust that block visible light, allowing the observation of young stars embedded in molecular clouds and the cores of galaxies. Observations from the Wide-field Infrared Survey Explorer (WISE) have been particularly effective at unveiling numerous Galactic protostars and their host star clusters.[47][48]
With the exception of infrared wavelengths close to visible light, such radiation is heavily absorbed by the atmosphere, or masked, as the atmosphere itself produces significant infrared emission. Consequently, infrared observatories have to be located in high, dry places on Earth or in space.[49] Some molecules radiate strongly in the infrared. This allows the study of the chemistry of space; more specifically it can detect water in comets.[50]
Historically, optical astronomy, also called visible-light astronomy, is the oldest form of astronomy.[51] Images of observations were originally drawn by hand. In the late 19th century and most of the 20th century, images were made using photographic equipment. Modern images are made using digital detectors, particularly charge-coupled devices (CCDs), and recorded on digital media. Although visible light itself extends from approximately 4000 Å to 7000 Å (400 nm to 700 nm),[51] the same equipment can be used to observe some near-ultraviolet and near-infrared radiation.
Ultraviolet astronomy employs ultraviolet wavelengths between approximately 100 and 3200 Å (10 to 320 nm).[45] Light at those wavelengths is absorbed by the Earth's atmosphere, requiring observations at these wavelengths to be performed from the upper atmosphere or from space. Ultraviolet astronomy is best suited to the study of thermal radiation and spectral emission lines from hot blue stars (OB stars) that are very bright in this wave band. This includes the blue stars in other galaxies, which have been the targets of several ultraviolet surveys. Other objects commonly observed in ultraviolet light include planetary nebulae, supernova remnants, and active galactic nuclei.[45] However, as ultraviolet light is easily absorbed by interstellar dust, an adjustment of ultraviolet measurements is necessary.[45]
X-ray astronomy uses X-ray wavelengths. Typically, X-ray radiation is produced by synchrotron emission (the result of electrons orbiting magnetic field lines), thermal emission from thin gases above 10⁷ (10 million) kelvins, and thermal emission from thick gases above 10⁷ kelvins.[45] Since X-rays are absorbed by the Earth's atmosphere, all X-ray observations must be performed from high-altitude balloons, rockets, or X-ray astronomy satellites. Notable X-ray sources include X-ray binaries, pulsars, supernova remnants, elliptical galaxies, clusters of galaxies, and active galactic nuclei.[45]
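Wien's displacement law, λ_max = b/T, gives a feel for why gas at ~10⁷ K is studied in X-rays: its thermal emission peaks at sub-nanometre wavelengths. A sketch with illustrative temperatures:

```python
# Wien's displacement law: peak blackbody emission wavelength vs. temperature.
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_nm(temperature_k):
    """Peak thermal-emission wavelength (nm) for a blackbody at T kelvins."""
    return WIEN_B / temperature_k * 1e9

# Gas at 10 million K peaks near 0.3 nm -- squarely in the X-ray band.
print(peak_wavelength_nm(1e7))
# For comparison, the ~5800 K solar photosphere peaks near 500 nm (visible light).
print(peak_wavelength_nm(5800))
```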
Gamma ray astronomy observes astronomical objects at the shortest wavelengths of the electromagnetic spectrum. Gamma rays may be observed directly by satellites such as the Compton Gamma Ray Observatory or by specialized telescopes called atmospheric Cherenkov telescopes.[45] The Cherenkov telescopes do not detect the gamma rays directly but instead detect the flashes of visible light produced when gamma rays are absorbed by the Earth's atmosphere.[52]
Most gamma-ray emitting sources are actually gamma-ray bursts, objects which only produce gamma radiation for a few milliseconds to thousands of seconds before fading away. Only 10% of gamma-ray sources are non-transient sources. These steady gamma-ray emitters include pulsars, neutron stars, and black hole candidates such as active galactic nuclei.[45]
In addition to electromagnetic radiation, a few other events originating from great distances may be observed from the Earth.
In neutrino astronomy, astronomers use heavily shielded underground facilities such as SAGE, GALLEX, and Kamioka II/III for the detection of neutrinos. The vast majority of the neutrinos streaming through the Earth originate from the Sun, but 24 neutrinos were also detected from supernova 1987A.[45] Cosmic rays, which consist of very high energy particles (atomic nuclei) that can decay or be absorbed when they enter the Earth's atmosphere, result in a cascade of secondary particles which can be detected by current observatories.[53] Some future neutrino detectors may also be sensitive to the particles produced when cosmic rays hit the Earth's atmosphere.[45]
Gravitational-wave astronomy is an emerging field of astronomy that employs gravitational-wave detectors to collect observational data about distant massive objects. A few observatories have been constructed, such as the Laser Interferometer Gravitational-Wave Observatory (LIGO). LIGO made its first detection on 14 September 2015, observing gravitational waves from a binary black hole.[54] A second gravitational wave was detected on 26 December 2015, and additional observations should continue, but gravitational waves require extremely sensitive instruments.[55][56]
The combination of observations made using electromagnetic radiation, neutrinos or gravitational waves and other complementary information, is known as multi-messenger astronomy.[57][58]
One of the oldest fields in astronomy, and in all of science, is the measurement of the positions of celestial objects. Historically, accurate knowledge of the positions of the Sun, Moon, planets and stars has been essential in celestial navigation (the use of celestial objects to guide navigation) and in the making of calendars.
Careful measurement of the positions of the planets has led to a solid understanding of gravitational perturbations, and an ability to determine past and future positions of the planets with great accuracy, a field known as celestial mechanics. More recently, the tracking of near-Earth objects allows for predictions of close encounters or potential collisions of the Earth with those objects.[59]
The measurement of stellar parallax of nearby stars provides a fundamental baseline in the cosmic distance ladder that is used to measure the scale of the Universe. Parallax measurements of nearby stars provide an absolute baseline for the properties of more distant stars, as their properties can be compared. Measurements of the radial velocity and proper motion of stars allow astronomers to plot the movement of these systems through the Milky Way galaxy. Astrometric results are the basis used to calculate the distribution of speculated dark matter in the galaxy.[60]
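The parallax-to-distance relation itself is simple: a star with an annual parallax of p arcseconds lies at 1/p parsecs. A minimal sketch, using Proxima Centauri's measured parallax of about 0.7685 arcseconds:

```python
# Distance from annual stellar parallax: d (parsecs) = 1 / p (arcseconds).
LY_PER_PC = 3.2616  # light-years per parsec

def distance_parsecs(parallax_arcsec):
    """Distance in parsecs for a measured parallax in arcseconds."""
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    return 1.0 / parallax_arcsec

# Proxima Centauri: parallax ~0.7685 arcseconds.
d = distance_parsecs(0.7685)
print(round(d, 2))              # ~1.3 pc
print(round(d * LY_PER_PC, 2))  # ~4.2 light-years
```

Because measurement error grows as parallax shrinks, this direct method works only for relatively nearby stars; farther rungs of the distance ladder are calibrated against it.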
During the 1990s, the measurement of the stellar wobble of nearby stars was used to detect large extrasolar planets orbiting those stars.[61]
Theoretical astronomers use several tools including analytical models and computational numerical simulations; each has its particular advantages. Analytical models of a process are better for giving broader insight into the heart of what is going on. Numerical models reveal the existence of phenomena and effects otherwise unobserved.[62][63]
Theorists in astronomy endeavor to create theoretical models and from the results predict observational consequences of those models. The observation of a phenomenon predicted by a model allows astronomers to select between several alternate or conflicting models as the one best able to describe the phenomena.
Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency between the data and model's results, the general tendency is to try to make minimal modifications to the model so that it produces results that fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model.
Phenomena modeled by theoretical astronomers include: stellar dynamics and evolution; galaxy formation; large-scale distribution of matter in the Universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics. Astrophysical relativity serves as a tool to gauge the properties of large scale structures for which gravitation plays a significant role in physical phenomena investigated and as the basis for black hole (astro)physics and the study of gravitational waves.
Some widely accepted and studied theories and models in astronomy, now included in the Lambda-CDM model, are the Big Bang, dark matter, and fundamental theories of physics.
Along with cosmic inflation, dark matter and dark energy are the current leading topics in astronomy,[64] as their discovery and controversy originated during the study of the galaxies.
Astrophysics is the branch of astronomy that employs the principles of physics and chemistry "to ascertain the nature of the astronomical objects, rather than their positions or motions in space".[65][66] Among the objects studied are the Sun, other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background.[67][68] Their emissions are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists typically apply many disciplines of physics, including mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.
In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to determine the properties of dark matter, dark energy, and black holes; whether or not time travel is possible, wormholes can form, or the multiverse exists; and the origin and ultimate fate of the universe.[67] Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics.
Astrochemistry is the study of the abundance and reactions of molecules in the Universe, and their interaction with radiation.[69] The discipline is an overlap of astronomy and chemistry. The word "astrochemistry" may be applied to both the Solar System and the interstellar medium. The study of the abundance of elements and isotope ratios in Solar System objects, such as meteorites, is also called cosmochemistry, while the study of interstellar atoms and molecules and their interaction with radiation is sometimes called molecular astrophysics. The formation, atomic and chemical composition, evolution and fate of molecular gas clouds is of special interest, because it is from these clouds that solar systems form.
Studies in this field contribute to the understanding of the formation of the Solar System, Earth's origin and geology, abiogenesis, and the origin of climate and oceans.
Astrobiology is an interdisciplinary scientific field concerned with the origins, early evolution, distribution, and future of life in the universe. Astrobiology considers the question of whether extraterrestrial life exists, and how humans can detect it if it does.[70] The term exobiology is similar.[71]
Astrobiology makes use of molecular biology, biophysics, biochemistry, chemistry, astronomy, physical cosmology, exoplanetology and geology to investigate the possibility of life on other worlds and help recognize biospheres that might be different from that on Earth.[72] The origin and early evolution of life is an inseparable part of the discipline of astrobiology.[73] Astrobiology concerns itself with interpretation of existing scientific data, and although speculation is entertained to give context, astrobiology concerns itself primarily with hypotheses that fit firmly into existing scientific theories.
This interdisciplinary field encompasses research on the origin of planetary systems, origins of organic compounds in space, rock-water-carbon interactions, abiogenesis on Earth, planetary habitability, research on biosignatures for life detection, and studies on the potential for life to adapt to challenges on Earth and in outer space.[74][75][76]
Cosmology (from the Greek κόσμος (kosmos) "world, universe" and λόγος (logos) "word, study" or literally "logic") could be considered the study of the Universe as a whole.
Observations of the large-scale structure of the Universe, a branch known as physical cosmology, have provided a deep understanding of the formation and evolution of the cosmos. Fundamental to modern cosmology is the well-accepted theory of the Big Bang, wherein our Universe began at a single point in time, and thereafter expanded over the course of 13.8 billion years[77] to its present condition.[78] The concept of the Big Bang can be traced back to the discovery of the microwave background radiation in 1965.[78]
In the course of this expansion, the Universe underwent several evolutionary stages. In the very early moments, it is theorized that the Universe experienced a very rapid cosmic inflation, which homogenized the starting conditions. Thereafter, nucleosynthesis produced the elemental abundance of the early Universe.[78] (See also nucleocosmochronology.)
When the first neutral atoms formed from a sea of primordial ions, space became transparent to radiation, releasing the energy viewed today as the microwave background radiation. The expanding Universe then underwent a Dark Age due to the lack of stellar energy sources.[79]
A hierarchical structure of matter began to form from minute variations in the mass density of space. Matter accumulated in the densest regions, forming clouds of gas and the earliest stars, the Population III stars. These massive stars triggered the reionization process and are believed to have created many of the heavy elements in the early Universe, which, through nuclear decay, create lighter elements, allowing the cycle of nucleosynthesis to continue longer.[80]
Gravitational aggregations clustered into filaments, leaving voids in the gaps. Gradually, organizations of gas and dust merged to form the first primitive galaxies. Over time, these pulled in more matter, and were often organized into groups and clusters of galaxies, then into larger-scale superclusters.[81]
Various fields of physics are crucial to studying the universe. Interdisciplinary studies involve the fields of quantum mechanics, particle physics, plasma physics, condensed matter physics, statistical mechanics, optics, and nuclear physics.
Fundamental to the structure of the Universe is the existence of dark matter and dark energy. These are now thought to be its dominant components, forming 96% of the mass of the Universe. For this reason, much effort is expended in trying to understand the physics of these components.[82]
The study of objects outside our galaxy is a branch of astronomy concerned with the formation and evolution of galaxies; their morphology (description) and classification; the observation of active galaxies; and, at a larger scale, the groups and clusters of galaxies. The latter is important for the understanding of the large-scale structure of the cosmos.
Most galaxies are organized into distinct shapes that allow for classification schemes. They are commonly divided into spiral, elliptical, and irregular galaxies.[83]
As the name suggests, an elliptical galaxy has the cross-sectional shape of an ellipse. The stars move along random orbits with no preferred direction. These galaxies contain little or no interstellar dust, few star-forming regions, and older stars. Elliptical galaxies are more commonly found at the core of galactic clusters, and may have been formed through mergers of large galaxies.
A spiral galaxy is organized into a flat, rotating disk, usually with a prominent bulge or bar at the center, and trailing bright arms that spiral outward. The arms are dusty regions of star formation within which massive young stars produce a blue tint. Spiral galaxies are typically surrounded by a halo of older stars. Both the Milky Way and one of our nearest galaxy neighbors, the Andromeda Galaxy, are spiral galaxies.
Irregular galaxies are chaotic in appearance, and are neither spiral nor elliptical. About a quarter of all galaxies are irregular, and the peculiar shapes of such galaxies may be the result of gravitational interaction.
An active galaxy is a formation that emits a significant amount of its energy from a source other than its stars, dust and gas. It is powered by a compact region at the core, thought to be a super-massive black hole that is emitting radiation from in-falling material.
A radio galaxy is an active galaxy that is very luminous in the radio portion of the spectrum, and is emitting immense plumes or lobes of gas. Active galaxies that emit shorter-wavelength, high-energy radiation include Seyfert galaxies, quasars, and blazars. Quasars are believed to be the most consistently luminous objects in the known universe.[84]
The large-scale structure of the cosmos is represented by groups and clusters of galaxies. This structure is organized into a hierarchy of groupings, with the largest being the superclusters. The collective matter is formed into filaments and walls, leaving large voids between.[85]
The Solar System orbits within the Milky Way, a barred spiral galaxy that is a prominent member of the Local Group of galaxies. It is a rotating mass of gas, dust, stars and other objects, held together by mutual gravitational attraction. As the Earth is located within the dusty outer arms, there are large portions of the Milky Way that are obscured from view.
In the center of the Milky Way is the core, a bar-shaped bulge with what is believed to be a supermassive black hole at its center. This is surrounded by four primary arms that spiral from the core. This is a region of active star formation that contains many younger, population I stars. The disk is surrounded by a spheroid halo of older, population II stars, as well as relatively dense concentrations of stars known as globular clusters.[86]
Between the stars lies the interstellar medium, a region of sparse matter. In the densest regions, molecular clouds of molecular hydrogen and other elements create star-forming regions. These begin as compact pre-stellar cores or dark nebulae, which concentrate and collapse (in volumes determined by the Jeans length) to form compact protostars.[87]
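The Jeans length mentioned above sets the minimum size of a cloud fragment that can collapse under its own gravity. One common form is λ_J = √(π c_s² / (G ρ)), with isothermal sound speed c_s. A sketch with illustrative cloud values (T = 10 K and n = 10⁴ cm⁻³ are assumptions, typical of a cold molecular cloud core):

```python
import math

# Jeans length: lambda_J = sqrt(pi * c_s^2 / (G * rho)), one common convention.
K_B = 1.381e-23  # Boltzmann constant, J/K
M_H = 1.674e-27  # hydrogen atom mass, kg
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
PC = 3.086e16    # metres per parsec

def jeans_length_pc(temperature_k, n_per_cm3, mu=2.33):
    """Jeans length (pc) for gas at temperature T and number density n (cm^-3).

    mu is the mean molecular weight (~2.33 for molecular gas with helium).
    """
    c_s2 = K_B * temperature_k / (mu * M_H)  # isothermal sound speed squared
    rho = mu * M_H * n_per_cm3 * 1e6         # mass density, kg/m^3
    return math.sqrt(math.pi * c_s2 / (G * rho)) / PC

# A cold (10 K), dense (1e4 cm^-3) core: Jeans length of a few tenths of a parsec.
print(round(jeans_length_pc(10.0, 1e4), 2))
```

Regions smaller than this are pressure-supported; larger, denser fragments can collapse, which is why star formation favours the coldest, densest parts of a cloud.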
As the more massive stars appear, they transform the cloud into an H II region (ionized atomic hydrogen) of glowing gas and plasma. The stellar wind and supernova explosions from these stars eventually cause the cloud to disperse, often leaving behind one or more young open clusters of stars. These clusters gradually disperse, and the stars join the population of the Milky Way.[88]
Kinematic studies of matter in the Milky Way and other galaxies have demonstrated that there is more mass than can be accounted for by visible matter. A dark matter halo appears to dominate the mass, although the nature of this dark matter remains undetermined.[89]
The study of stars and stellar evolution is fundamental to our understanding of the Universe. The astrophysics of stars has been determined through observation, theoretical understanding, and computer simulations of the stellar interior.[90] Star formation occurs in dense regions of dust and gas, known as giant molecular clouds. When destabilized, cloud fragments can collapse under the influence of gravity to form a protostar. A sufficiently dense and hot core region will trigger nuclear fusion, thus creating a main-sequence star.[87]
Almost all elements heavier than hydrogen and helium were created inside the cores of stars.[90]
The characteristics of the resulting star depend primarily upon its starting mass. The more massive the star, the greater its luminosity, and the more rapidly it fuses its hydrogen fuel into helium in its core. Over time, this hydrogen fuel is completely converted into helium, and the star begins to evolve. The fusion of helium requires a higher core temperature. A star with a high enough core temperature will push its outer layers outward while increasing its core density. The resulting red giant formed by the expanding outer layers enjoys a brief life span, before the helium fuel in the core is in turn consumed. Very massive stars can also undergo a series of evolutionary phases, as they fuse increasingly heavier elements.[91]
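The "more massive means more luminous but shorter-lived" behaviour is often summarised by the rough empirical mass–luminosity relation L ∝ M^3.5 (valid only for roughly Sun-like main-sequence stars), which implies a main-sequence lifetime scaling as t ∝ M/L ∝ M^−2.5. A hedged sketch of that scaling:

```python
# Approximate main-sequence mass-luminosity relation: L/Lsun ~ (M/Msun)^3.5.
# The exponent 3.5 is a rough empirical fit for near-solar-mass stars only.

def luminosity_solar(mass_solar):
    """Approximate luminosity (solar units) of a main-sequence star."""
    return mass_solar ** 3.5

def lifetime_gyr(mass_solar, solar_lifetime_gyr=10.0):
    """Rough main-sequence lifetime: fuel over burn rate scales as M / L."""
    return solar_lifetime_gyr * mass_solar / luminosity_solar(mass_solar)

# A 2-solar-mass star is ~11x more luminous than the Sun
# and exhausts its core hydrogen in only ~2 Gyr.
print(round(luminosity_solar(2.0), 1))
print(round(lifetime_gyr(2.0), 1))
```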
The final fate of the star depends on its mass, with stars of mass greater than about eight times the Sun becoming core collapse supernovae;[92] while smaller stars blow off their outer layers and leave behind the inert core in the form of a white dwarf. The ejection of the outer layers forms a planetary nebula.[93] The remnant of a supernova is a dense neutron star, or, if the stellar mass was at least three times that of the Sun, a black hole.[94] Closely orbiting binary stars can follow more complex evolutionary paths, such as mass transfer onto a white dwarf companion that can potentially cause a supernova.[95] Planetary nebulae and supernovae distribute the "metals" produced in the star by fusion to the interstellar medium; without them, all new stars (and their planetary systems) would be formed from hydrogen and helium alone.[96]
At a distance of about eight light-minutes, the most frequently studied star is the Sun, a typical main-sequence dwarf star of stellar class G2 V, and about 4.6 billion years (Gyr) old. The Sun is not considered a variable star, but it does undergo periodic changes in activity known as the sunspot cycle. This is an 11-year oscillation in sunspot number. Sunspots are regions of lower-than-average temperatures that are associated with intense magnetic activity.[97]
The Sun has steadily increased in luminosity by 40% since it first became a main-sequence star. The Sun has also undergone periodic changes in luminosity that can have a significant impact on the Earth.[98] The Maunder minimum, for example, is believed to have caused the Little Ice Age phenomenon.[99]
The visible outer surface of the Sun is called the photosphere. Above this layer is a thin region known as the chromosphere. This is surrounded by a transition region of rapidly increasing temperatures, and finally by the super-heated corona.
At the center of the Sun is the core region, a volume of sufficient temperature and pressure for nuclear fusion to occur. Above the core is the radiation zone, where the plasma conveys the energy flux by means of radiation. Above that is the convection zone where the gas material transports energy primarily through physical displacement of the gas known as convection. It is believed that the movement of mass within the convection zone creates the magnetic activity that generates sunspots.[97]
A solar wind of plasma particles constantly streams outward from the Sun until, at the outermost limit of the Solar System, it reaches the heliopause. As the solar wind passes the Earth, it interacts with the Earth's magnetic field (the magnetosphere), which deflects most of the solar wind but traps some particles, creating the Van Allen radiation belts that envelop the Earth. The aurorae are created when solar wind particles are guided by the magnetic flux lines into the Earth's polar regions, where the lines then descend into the atmosphere.[100]
Planetary science is the study of the assemblage of planets, moons, dwarf planets, comets, asteroids, and other bodies orbiting the Sun, as well as extrasolar planets. The Solar System has been relatively well-studied, initially through telescopes and then later by spacecraft. This has provided a good overall understanding of the formation and evolution of the Sun's planetary system, although many new discoveries are still being made.[101]
The Solar System is subdivided into the inner planets, the asteroid belt, and the outer planets. The inner terrestrial planets consist of Mercury, Venus, Earth, and Mars. The outer giant planets are Jupiter, Saturn, Uranus, and Neptune.[102] Beyond Neptune lies the Kuiper belt, and finally the Oort Cloud, which may extend as far as a light-year.
The planets were formed 4.6 billion years ago in the protoplanetary disk that surrounded the early Sun. Through a process that included gravitational attraction, collision, and accretion, the disk formed clumps of matter that, with time, became protoplanets. The radiation pressure of the solar wind then expelled most of the unaccreted matter, and only those planets with sufficient mass retained their gaseous atmosphere. The planets continued to sweep up, or eject, the remaining matter during a period of intense bombardment, evidenced by the many impact craters on the Moon. During this period, some of the protoplanets may have collided and one such collision may have formed the Moon.[103]
Once a planet reaches sufficient mass, the materials of different densities segregate within, during planetary differentiation. This process can form a stony or metallic core, surrounded by a mantle and an outer crust. The core may include solid and liquid regions, and some planetary cores generate their own magnetic field, which can protect their atmospheres from solar wind stripping.[104]
A planet or moon's interior heat is produced from the collisions that created the body, by the decay of radioactive materials (e.g. uranium, thorium, and ²⁶Al), or by tidal heating caused by interactions with other bodies. Some planets and moons accumulate enough heat to drive geologic processes such as volcanism and tectonics. Those that accumulate or retain an atmosphere can also undergo surface erosion from wind or water. Smaller bodies, without tidal heating, cool more quickly, and their geological activity ceases with the exception of impact cratering.[105]
Astronomy and astrophysics have developed significant interdisciplinary links with other major scientific fields. Archaeoastronomy is the study of ancient or traditional astronomies in their cultural context, utilizing archaeological and anthropological evidence. Astrobiology is the study of the advent and evolution of biological systems in the Universe, with particular emphasis on the possibility of non-terrestrial life. Astrostatistics is the application of statistics to the analysis of the vast amounts of observational astrophysical data.
The study of chemicals found in space, including their formation, interaction and destruction, is called astrochemistry. These substances are usually found in molecular clouds, although they may also appear in low temperature stars, brown dwarfs and planets. Cosmochemistry is the study of the chemicals found within the Solar System, including the origins of the elements and variations in the isotope ratios. Both of these fields represent an overlap of the disciplines of astronomy and chemistry. As "forensic astronomy", finally, methods from astronomy have been used to solve problems of law and history.
Astronomy is one of the sciences to which amateurs can contribute the most.[106]
Collectively, amateur astronomers observe a variety of celestial objects and phenomena sometimes with equipment that they build themselves. Common targets of amateur astronomers include the Sun, the Moon, planets, stars, comets, meteor showers, and a variety of deep-sky objects such as star clusters, galaxies, and nebulae. Astronomy clubs are located throughout the world and many have programs to help their members set up and complete observational programs including those to observe all the objects in the Messier (110 objects) or Herschel 400 catalogues of points of interest in the night sky. One branch of amateur astronomy, amateur astrophotography, involves the taking of photos of the night sky. Many amateurs like to specialize in the observation of particular objects, types of objects, or types of events which interest them.[107][108]
Most amateurs work at visible wavelengths, but a small minority experiment with wavelengths outside the visible spectrum. This includes the use of infrared filters on conventional telescopes, and also the use of radio telescopes. The pioneer of amateur radio astronomy was Karl Jansky, who started observing the sky at radio wavelengths in the 1930s. A number of amateur astronomers use either homemade telescopes or use radio telescopes which were originally built for astronomy research but which are now available to amateurs (e.g. the One-Mile Telescope).[109][110]
Amateur astronomers continue to make scientific contributions to the field of astronomy and it is one of the few scientific disciplines where amateurs can still make significant contributions. Amateurs can make occultation measurements that are used to refine the orbits of minor planets. They can also discover comets, and perform regular observations of variable stars. Improvements in digital technology have allowed amateurs to make impressive advances in the field of astrophotography.[111][112][113]
Although the scientific discipline of astronomy has made tremendous strides in understanding the nature of the Universe and its contents, there remain some important unanswered questions. Answers to these may require the construction of new ground- and space-based instruments, and possibly new developments in theoretical and experimental physics.
Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
en/4200.html.txt
Night or nighttime (also spelled night-time or night time) is the period of ambient darkness from sunset to sunrise in each twenty-four hours,[1] when the Sun is below the horizon. The exact time when night begins and ends (equally true of evening) depends on the location and varies throughout the year.[2] When night is considered as the period that follows evening, it is usually considered to start around 8 pm and to last until about 4 am.[3] Night ends with the coming of morning at sunrise.[4]
The word can be used in a different sense as the time between bedtime and morning.[5] In everyday communication the word 'night' is used as a farewell ('good night'), sometimes shortened to 'night', mainly when someone is going to sleep or leaving.[6] For example: "It was nice to see you. Good night!" Unlike 'good morning', 'good afternoon', and 'good evening', 'good night' (or 'goodnight') is not used as a greeting.
Complete darkness or astronomical night is the period between astronomical dusk and astronomical dawn when the Sun is between 18 and 90 degrees below the horizon and does not illuminate the sky. As seen from latitudes between 48.5607189° and 65.7273855666...° north or south of the Equator, complete darkness does not occur around the summer solstice because although the Sun sets, it is never more than 18° below the horizon at lower culmination.
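The two latitude limits quoted above follow directly from Earth's axial tilt. As a quick check (a sketch, assuming the standard obliquity value of 23.4392811° and the usual 50-arcminute sunset correction for the solar disk plus refraction, neither stated explicitly above):

```python
# Latitude bounds for "no astronomical night at the summer solstice",
# derived from the axial tilt (obliquity). The specific constants are
# assumptions: the standard obliquity figure and a 50' sunset correction.
OBLIQUITY = 23.4392811            # Earth's axial tilt, degrees
TWILIGHT_LIMIT = 18.0             # astronomical twilight: Sun 18 deg below horizon
SUNSET_CORRECTION = 50.0 / 60.0   # solar radius + refraction, ~50 arcminutes

# Below this latitude the Sun dips more than 18 deg below the horizon
# even at the solstice, so astronomical night still occurs:
lower = 90.0 - OBLIQUITY - TWILIGHT_LIMIT      # ≈ 48.5607189

# Above this latitude the Sun never sets at all at the solstice (midnight sun):
upper = 90.0 - OBLIQUITY - SUNSET_CORRECTION   # ≈ 65.7273856
```

Between these two latitudes the Sun sets but stays within 18° of the horizon at lower culmination around the solstice, which is exactly the situation the text describes.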
The opposite of night is day (or "daytime", to distinguish it from "day" referring to a 24-hour period). The start and end points of time for a night vary, based on factors such as season and latitude. Twilight is the period of night after sunset or before sunrise when the Sun still illuminates the sky when it is below the horizon. At any given time, one side of Earth is bathed in sunlight (the daytime) while the other side is in the shadow caused by Earth blocking the sunlight. The central part of the shadow is called the umbra.
Natural illumination at night is still provided by a combination of moonlight, planetary light, starlight, zodiacal light, gegenschein, and airglow. In some circumstances, aurorae, lightning, and bioluminescence can provide some illumination. The glow provided by artificial lighting is sometimes referred to as light pollution because it can interfere with observational astronomy and ecosystems.
On Earth, an average night is shorter than daytime due to two factors. Firstly, the Sun's apparent disk is not a point, but has an angular diameter of about 32 arcminutes (32'). Secondly, the atmosphere refracts sunlight so that some of it reaches the ground even when the Sun is below the horizon by about 34'. The combination of these two factors means that light reaches the ground when the center of the solar disk is below the horizon by about 50'. Without these effects, daytime and night would be the same length on both equinoxes, the moments when the Sun appears to contact the celestial equator. On the equinoxes, daytime actually lasts almost 14 minutes longer than night does at the Equator, and even longer towards the poles.
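These corrections can be folded into the standard sunrise equation to estimate daytime length. The following is a sketch under the stated 50' altitude assumption, not an authoritative ephemeris calculation:

```python
import math

def day_length_hours(lat_deg, decl_deg, altitude_deg=-50.0 / 60.0):
    """Approximate daytime length from the standard sunrise equation.

    altitude_deg = -50' accounts for the solar disk (~32'/2) plus
    atmospheric refraction (~34'), as described above.
    """
    lat, dec, h0 = map(math.radians, (lat_deg, decl_deg, altitude_deg))
    cos_h = (math.sin(h0) - math.sin(lat) * math.sin(dec)) / (
        math.cos(lat) * math.cos(dec))
    cos_h = max(-1.0, min(1.0, cos_h))   # clamp for midnight sun / polar night
    hour_angle = math.degrees(math.acos(cos_h))  # sunrise hour angle, degrees
    return 2.0 * hour_angle / 15.0       # Earth turns 15 degrees per hour
```

At the Equator on an equinox (declination 0°) this gives roughly 12 h 7 min of daytime, i.e. daytime about 13-14 minutes longer than night, consistent with the figure above; with the clamp it also returns 24 h for midnight sun and 0 h for polar night.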
The summer and winter solstices mark the shortest and longest nights, respectively. The closer a location is to either the North Pole or the South Pole, the wider the range of variation in the night's duration. Although daytime and night nearly equalize in length on the equinoxes, the ratio of night to day changes more rapidly at high latitudes than at low latitudes before and after an equinox. In the Northern Hemisphere, Denmark experiences shorter nights in June than India. In the Southern Hemisphere, Antarctica sees longer nights in June than Chile. Both hemispheres experience the same patterns of night length at the same latitudes, but the cycles are 6 months apart so that one hemisphere experiences long nights (winter) while the other is experiencing short nights (summer).
In the region within either polar circle, the variation in daylight hours is so extreme that part of summer sees a period without night intervening between consecutive days, while part of winter sees a period without daytime intervening between consecutive nights.
The phenomenon of day and night is due to the rotation of a celestial body about its axis, creating the illusion of the Sun rising and setting. Different bodies spin at very different rates, however. Some may spin much faster than Earth, while others spin extremely slowly, leading to very long days and nights. The planet Venus rotates once every 243 days – by far the slowest rotation period of any of the major planets. In contrast, the gas giant Jupiter's sidereal day is only 9 hours and 56 minutes.[7] However, it is not just the sidereal rotation period which determines the length of a planet's day-night cycle but the length of its orbital period as well: Venus has a sidereal rotation period of 243 days, but a day-night cycle just 116.75 days long due to its retrograde rotation and orbital motion around the Sun. Mercury has the longest day-night cycle as a result of the 3:2 resonance between its orbital period and rotation period; this resonance gives it a day-night cycle that is 176 days long. A planet may experience large temperature variations between day and night, such as Mercury, the planet closest to the Sun. This is one consideration in terms of planetary habitability or the possibility of extraterrestrial life.
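The length of the day-night (solar) cycle follows from combining the sidereal rotation period with the orbital period. A minimal sketch, using standard period values for Venus (243.0-day retrograde sidereal rotation, 224.7-day orbit) and Mercury (58.646-day rotation, 87.969-day orbit) that are not all stated above:

```python
def solar_day(rotation_days, orbit_days, retrograde=False):
    """Day-night cycle length from sidereal rotation and orbital periods.

    1/solar = 1/rotation - 1/orbit for prograde spin (orbital motion
    partly cancels the rotation); the sign flips for retrograde spin.
    """
    if retrograde:
        return 1.0 / (1.0 / rotation_days + 1.0 / orbit_days)
    return 1.0 / (1.0 / rotation_days - 1.0 / orbit_days)

venus = solar_day(243.0, 224.7, retrograde=True)   # about 116.75 days
mercury = solar_day(58.646, 87.969)                # about 176 days
```

Both results match the cycle lengths quoted in the text, even though neither planet's sidereal rotation period equals its day-night cycle.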
The disappearance of sunlight, the primary energy source for life on Earth, has dramatic effects on the morphology, physiology and behavior of almost every organism. Some animals sleep during the night, while other nocturnal animals including moths and crickets are active during this time. The effects of day and night are not seen in the animal kingdom alone; plants have also evolved adaptations to cope best with the lack of sunlight during this time. For example, crassulacean acid metabolism is a unique type of carbon fixation which allows photosynthetic plants to store carbon dioxide in their tissues as organic acids during the night, which can then be used during the day to synthesize carbohydrates. This allows them to keep their stomata closed during the daytime, preventing transpiration of precious water.
As artificial lighting has improved, especially after the Industrial Revolution, night time activity has increased and become a significant part of the economy in most places. Many establishments, such as nightclubs, bars, convenience stores, fast-food restaurants, gas stations, distribution facilities, and police stations now operate 24 hours a day or stay open as late as 1 or 2 a.m. Even without artificial light, moonlight sometimes makes it possible to travel or work outdoors at night.
Night is often associated with danger and evil, because of the psychological connection of night's all-encompassing darkness to the fear of the unknown and darkness's obstruction of a major sensory system (the sense of sight). Nighttime is naturally associated with vulnerability and danger for human physical survival. Criminals, animals, and other potential dangers can be concealed by darkness. Midnight has a particular importance in human imagination and culture.
The belief in magic often includes the idea that magic and magicians are more powerful at night. Séances of spiritualism are usually conducted closer to midnight. Similarly, mythical and folkloric creatures such as vampires and werewolves are described as being more active at night. Ghosts are believed to wander around almost exclusively during night-time. In almost all cultures, there exist stories and legends warning of the dangers of night-time. In fact, the Saxons called the darkness of night the 'death mist'.
In literature, night and the lack of light are often color-associated with blackness which is historically symbolic in many cultures for villainy, non-existence, or a lack of knowledge (with the knowledge usually symbolized by light or illumination).
The cultural significance of the night in Islam differs from that in Western culture. The Quran was revealed during the Night of Power, the most significant night according to Islam. Muhammad made his famous journey from Mecca to Jerusalem and then to heaven in the night. Another prophet, Abraham, came to a realization of the supreme being in charge of the universe at night.
en/4201.html.txt
Roman numerals are a numeral system that originated in ancient Rome and remained the usual way of writing numbers throughout Europe well into the Late Middle Ages. Numbers in this system are represented by combinations of letters from the Latin alphabet. Modern usage employs seven symbols, each with a fixed integer value: I = 1, V = 5, X = 10, L = 50, C = 100, D = 500, and M = 1,000.[1]
The use of Roman numerals continued long after the decline of the Roman Empire. From the 14th century on, Roman numerals began to be replaced in most contexts by the more convenient Arabic numerals; however, this process was gradual, and the use of Roman numerals persists in some minor applications to this day.
One place they are often seen is on clock faces. For instance, on the clock of Big Ben (designed in 1852), the hours from 1 to 12 are written as: I, II, III, IV, V, VI, VII, VIII, IX, X, XI, XII.
The notations IV and IX can be read as "one less than five" (4) and "one less than ten" (9), although there is a tradition favouring representation of "4" as "IIII" on Roman numeral clocks.[2]
Other common uses include year numbers on monuments and buildings and copyright dates on the title screens of movies and television programs. MCM, signifying "a thousand, and a hundred less than another thousand", means 1900, so 1912 is written MCMXII. For the years of this century, MM indicates 2000. The current year is MMXX (2020).
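The reading rule illustrated by these examples (subtract a symbol's value when a larger symbol follows it, otherwise add it) can be sketched as a small parser:

```python
VALUES = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(numeral):
    """Read a Roman numeral left to right: a symbol smaller than the
    one after it is subtracted (IV = 4), otherwise it is added."""
    total = 0
    for i, ch in enumerate(numeral):
        value = VALUES[ch]
        if i + 1 < len(numeral) and value < VALUES[numeral[i + 1]]:
            total -= value
        else:
            total += value
    return total
```

For example, roman_to_int("MCMXII") yields 1912, reading MCM as "a thousand, and a hundred less than another thousand" plus XII.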
There has never been an officially "binding", or universally accepted standard for Roman numerals. Usage in ancient Rome varied greatly and became thoroughly chaotic in medieval times. Even the post-renaissance restoration of a largely "classical" notation has failed to produce total consistency: variant forms are even defended by some modern writers as offering improved "flexibility".[3]
On the other hand, especially where a Roman numeral is considered a legally binding expression of a number, as in U.S. Copyright law (where an "incorrect" or ambiguous numeral may invalidate a copyright claim, or affect the termination date of the copyright period)[4] it is desirable to strictly follow the usual modern standardized orthography.
This section defines a standard form of Roman numerals that is in current, more or less universal use, is unambiguous, and permits only one representation for each value.[5] It does not attempt to either endorse or refute every combination of Roman numeral symbols that has been, could be, or is used.
Roman numerals are essentially a decimal or "base 10" number system, with a "digit" for each power of ten – thousands, hundreds, tens and units. Each digit is represented by a fixed symbol or combination of symbols. In the absence of "place keeping" zeros, different symbols are used for each power of ten but each follows the same pattern,[6] as summarised in this table:

Thousands: M, MM, MMM
Hundreds: C, CC, CCC, CD, D, DC, DCC, DCCC, CM
Tens: X, XX, XXX, XL, L, LX, LXX, LXXX, XC
Units: I, II, III, IV, V, VI, VII, VIII, IX
The numerals for 4 (IV) and 9 (IX) are written using "subtractive notation",[7] where the first symbol (I) is subtracted from the larger one (V, or X), thus avoiding the clumsier (IIII, and VIIII).[a] Subtractive notation is also used for 40 (XL) and 90 (XC), as well as 400 (CD) and 900 (CM).[8] These are the only subtractive forms in standard use.
A number containing several decimal digits is built by appending them from highest to lowest, as in the following examples:
Any missing place (represented by a zero in the Arabic equivalent) is omitted, as in Latin (and English) speech:
Roman numerals for large numbers are nowadays seen mainly in the form of year numbers, as in these examples:
The largest number that can be represented in this notation is 3,999 (MMMCMXCIX). Since the largest Roman numeral likely to be required today is MMXX (the current year) any pressing need for larger Roman numerals is hypothetical. Ancient and medieval users of the system used various means to write larger numbers, two of which are described below, under "Large numbers".
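The digit-by-digit construction described above, with subtractive pairs for the 4s and 9s, can be sketched as a converter covering the standard range 1 to 3,999:

```python
PAIRS = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
         (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
         (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]

def int_to_roman(n):
    """Standard-form Roman numeral for 1..3999: emit each decimal place
    from highest to lowest, using subtractive pairs for 4s and 9s."""
    if not 1 <= n <= 3999:
        raise ValueError("standard notation covers 1..3999")
    out = []
    for value, symbol in PAIRS:
        count, n = divmod(n, value)
        out.append(symbol * count)
    return ''.join(out)
```

Greedy selection from this fixed table produces exactly one representation per value, which is what makes the standard form unambiguous; int_to_roman(3999) gives the maximal numeral MMMCMXCIX.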
Forms exist that vary in one way or another from the general "standard" described above.
While subtractive notation for 4, 40 and 400 (IV, XL and CD) has been the usual form since Roman times, additive notation (IIII, XXXX,[11] and CCCC[11]) continued to be used, including in compound numbers like XXIIII,[12] LXXIIII,[13] and CCCCLXXXX.[14] The additive forms for 9, 90, and 900 (VIIII,[11] LXXXX,[15] and DCCCC[16]) have also been used, although less frequently.
The two conventions could be mixed in the same document or inscription, even in the same numeral. On the numbered gates to the Colosseum, for instance, IIII is systematically used instead of IV, but subtractive notation is used for other digits; so that gate 44 is labelled XLIIII.[17] Isaac Asimov speculates that the use of IV, as the initial letters of IVPITTER (a classical Latin spelling of the name of the Roman god Jupiter), may have been felt to have been impious in this context.[18]
Modern clock faces that use Roman numerals still usually employ IIII for four o'clock but IX for nine o'clock, a practice that goes back to very early clocks such as the Wells Cathedral clock of the late 14th century.[19][20][21] However, this is far from universal: for example, the clock on the Palace of Westminster tower, "Big Ben", uses a subtractive IV for 4 o'clock.[20]
Several monumental inscriptions created in the early 20th century use variant forms for "1900" (usually written MCM). These vary from MDCCCCX – a classical use of additive notation for MCMX (1910), as seen on Admiralty Arch, London, to the more unusual, if not unique MDCDIII for MCMIII (1903), on the north entrance to the Saint Louis Art Museum.[22]
Especially on tombstones and other funerary inscriptions 5 and 50 have been occasionally written IIIII and XXXXX instead of V and L, and there are instances such as IIIIII and XXXXXX rather than VI or LX.[23][24]
The irregular use of subtractive notation, such as IIIXX for 17,[25] IIXX for 18,[26] IIIC for 97,[27] IIC for 98,[28][29] and IC for 99[30] have been occasionally used. A possible explanation is that the word for 18 in Latin is duodeviginti, literally "two from twenty". Similarly, the words for 98 and 99 were duodecentum (two from hundred) and undecentum (one from hundred), respectively.[31] However, the explanation does not seem to apply to IIIXX and IIIC, since the Latin words for 17 and 97 were septendecim (seven ten) and nonaginta septem (ninety seven), respectively.
Another example of irregular subtractive notation is the use of XIIX for 18. It was used by officers of the XVIII Roman Legion to write their number.[32][33] The notation appears prominently on the cenotaph of their senior centurion Marcus Caelius (c. 45 BC – AD 9). There does not seem to be a linguistic explanation for this use, although it is one stroke shorter than XVIII.
On the publicly displayed official Roman calendars known as Fasti, the numbers 18 and 28 could be represented by XIIX and XXIIX respectively; the XIIX for 18 days to the next Kalends, and XXIIX for the number of days in February. The latter can be seen on the sole extant pre-Julian calendar, the Fasti Antiates Maiores.[34]
While irregular subtractive and additive notation has been used at least occasionally throughout history, some Roman numerals have been observed in documents and inscriptions that do not fit either system. Some of these variants do not seem to have been used outside specific contexts, and may have been regarded as errors even by contemporaries.
As Roman numerals are composed of ordinary alphabetic characters, there may sometimes be confusion with other uses of the same letters. For example, "XXX" and "XL" have other connotations in addition to their values as Roman numerals, while "IXL" more often than not is a gramogram of "I excel", and is in any case not an unambiguous Roman numeral.
The number zero did not originally have its own Roman numeral, but the word nulla (the Latin word meaning "none") was used by medieval scholars to represent 0. Dionysius Exiguus was known to use nulla alongside Roman numerals in 525.[40][41] About 725, Bede or one of his colleagues used the letter N, the initial of nulla or of nihil (the Latin word for "nothing") for 0, in a table of epacts, all written in Roman numerals.[42]
Though the Romans used a decimal system for whole numbers, reflecting how they counted in Latin, they used a duodecimal system for fractions, because the divisibility of twelve (12 = 2² × 3) makes it easier to handle the common fractions of 1⁄3 and 1⁄4 than does a system based on ten (10 = 2 × 5). On coins, many of which had values that were duodecimal fractions of the unit as, they used a tally-like notational system based on twelfths and halves. A dot (·) indicated an uncia "twelfth", the source of the English words inch and ounce; dots were repeated for fractions up to five twelfths. Six twelfths (one half) was abbreviated as the letter S for semis "half". Uncia dots were added to S for fractions from seven to eleven twelfths, just as tallies were added to V for whole numbers from six to nine.[43]
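The tally-like duodecimal scheme (one dot per twelfth, S covering six twelfths) can be sketched as:

```python
def roman_twelfths(n):
    """Notation for n twelfths (0 <= n < 12): 'S' covers six twelfths,
    each dot one twelfth, following the tally-like scheme above."""
    if not 0 <= n < 12:
        raise ValueError("expected 0..11 twelfths")
    return 'S' * (n // 6) + '·' * (n % 6)
```

So five twelfths is five dots, one half is S, and seven twelfths is S followed by a single dot, mirroring how tallies are added to V for six through nine.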
Each fraction from 1⁄12 to 12⁄12 had a name in Roman times; these corresponded to the names of the related coins:
The arrangement of the dots was variable and not necessarily linear. Five dots arranged like (⁙) (as on the face of a die) are known as a quincunx, from the name of the Roman fraction/coin. The Latin words sextans and quadrans are the source of the English words sextant and quadrant.
Other Roman fractional notations included the following:
During the centuries that Roman numerals remained the standard way of writing numbers throughout Europe, there were various extensions to the system designed to indicate larger numbers, none of which were ever standardised.
One of these was the apostrophus,[44] in which 500 (usually written as "D") was written as IↃ, while 1,000, was written as CIↃ instead of "M".[18] This is a system of encasing numbers to denote thousands (imagine the Cs and Ↄs as parentheses), which has its origins in Etruscan numeral usage. The IↃ and CIↃ used to represent 500 and 1,000 most likely preceded, and subsequently influenced, the adoption of "D" and "M" in conventional Roman numerals.
In this system, an extra Ↄ denoted 500, and multiple extra Ↄs are used to denote 5,000, 50,000, etc. For example:
Sometimes CIↃ was reduced to ↀ for 1,000. John Wallis is often credited for introducing the symbol for infinity (modern ∞), and one conjecture is that he based it on this usage, since 1,000 was hyperbolically used to represent very large numbers. Similarly, IↃↃ for 5,000 was reduced to ↁ; CCIↃↃ for 10,000 to ↂ; IↃↃↃ for 50,000 to ↇ; and CCCIↃↃↃ for 100,000 to ↈ.[45]
Another system was the vinculum, in which conventional Roman numerals were multiplied by 1,000 by adding a "bar" or "overline".[45] Although mathematical historian David Eugene Smith disputes that this was part of ancient Roman usage,[46] the notation was certainly in use in the Middle Ages. Although modern usage is largely hypothetical, it is certainly easier for a modern user to decode than the apostrophus.
Another inconsistent medieval usage was the addition of vertical lines (or brackets) before and after the numeral to multiply it by 10 (or 100): thus |M| for 10,000 as an alternative form for an overlined X. In combination with the overline, the bracketed forms might be used to raise the multiplier to (say) ten (or one hundred) thousand, thus:
This use of lines is distinct from the custom, once very common, of adding both underline and overline (or very large serifs) to a Roman numeral, simply to make it clear that it is a number, e.g. MCMLXVII (1967).
The system is closely associated with the ancient city-state of Rome and the Empire that it created. However, due to the scarcity of surviving examples, the origins of the system are obscure and there are several competing theories, all largely conjectural.
Rome was founded sometime between 850 and 750 BC. At the time, the region was inhabited by diverse populations of which the Etruscans were the most advanced. The ancient Romans themselves admitted that the basis of much of their civilization was Etruscan. Rome itself was located next to the southern edge of the Etruscan domain, which covered a large part of north-central Italy.
The Roman numerals, in particular, are directly derived from the Etruscan number symbols: "𐌠", "𐌡", "𐌢", "𐌣", and "𐌟" for 1, 5, 10, 50, and 100 (They had more symbols for larger numbers, but it is unknown which symbol represents which number). As in the basic Roman system, the Etruscans wrote the symbols that added to the desired number, from higher to lower value. Thus the number 87, for example, would be written 50 + 10 + 10 + 10 + 5 + 1 + 1 = 𐌣𐌢𐌢𐌢𐌡𐌠𐌠 (this would appear as 𐌠𐌠𐌡𐌢𐌢𐌢𐌣 since Etruscan was written from right to left.)[47]
The symbols "𐌠" and "𐌡" resembled letters of the Etruscan alphabet, but "𐌢", "𐌣", and "𐌟" did not. The Etruscans used the subtractive notation, too, but not like the Romans. They wrote 17, 18, and 19 as "𐌠𐌠𐌠𐌢𐌢", "𐌠𐌠𐌢𐌢", and 𐌠𐌢𐌢, mirroring the way they spoke those numbers ("three from twenty", etc.); and similarly for 27, 28, 29, 37, 38, etc. However they did not write "𐌠𐌡" for 4 (or "𐌢𐌣" for 40), and wrote "𐌡𐌠𐌠", "𐌡𐌠𐌠𐌠" and "𐌡𐌠𐌠𐌠𐌠" for 7, 8, and 9, respectively.[47]
The early Roman numerals for 1, 10, and 100 were the Etruscan ones: "I", "X", and "Ж". The symbols for 5 and 50 changed from Ʌ and "𐌣" to V and ↆ at some point. The latter had flattened to ⊥ (an inverted T) by the time of Augustus, and soon afterwards became identified with the graphically similar letter L.[48]
The symbol for 100 was written variously as >I< or ƆIC, was then abbreviated to Ɔ or C, with C (which matched a Latin letter) finally winning out. It may have helped that C is the initial of centum, Latin for "hundred".
The numbers 500 and 1000 were denoted by V or X overlaid with a box or circle. Thus 500 was like a Ɔ superimposed on a Þ. It became D or Ð by the time of Augustus, under the graphic influence of the letter D. It was later identified as the letter D; an alternative symbol for "thousand" was a CIƆ, and half of a thousand or "five hundred" is the right half of the symbol, IƆ, and this may have been converted into D.[18]
The notation for 1000 was a circled or boxed X: Ⓧ, ⊗, ⊕, and by Augustan times was partially identified with the Greek letter Φ phi. Over time, the symbol changed to Ψ and ↀ. The latter symbol further evolved into ∞, then ⋈, and eventually changed to M under the influence of the Latin word mille "thousand".[48]
According to Paul Kayser, the basic numerical symbols were I, X, C and Φ (or ⊕) and the intermediate ones were derived by taking half of those (half an X is V, half a C is L and half a Φ/⊕ is D).[49]
Lower case, minuscule, letters were developed in the Middle Ages, well after the demise of the Western Roman Empire, and since that time lower-case versions of Roman numbers have also been commonly used: i, ii, iii, iv, and so on.
Since the Middle Ages, a "j" has sometimes been substituted for the final "i" of a "lower-case" Roman numeral, such as "iij" for 3 or "vij" for 7. This "j" can be considered a swash variant of "i". The use of a final "j" is still used in medical prescriptions to prevent tampering with or misinterpretation of a number after it is written.[50][51]
Numerals in documents and inscriptions from the Middle Ages sometimes include additional symbols, which today are called "medieval Roman numerals". Some simply substitute another letter for the standard one (such as "A" for "V", or "Q" for "D"), while others serve as abbreviations for compound numerals ("O" for "XI", or "F" for "XL"). Although they are still listed today in some dictionaries, they are long out of use.[52]
Chronograms, messages with dates encoded into them, were popular during the Renaissance era. The chronogram would be a phrase containing the letters I, V, X, L, C, D, and M. By putting these letters together, the reader would obtain a number, usually indicating a particular year.
By the 11th century, Arabic numerals had been introduced into Europe from al-Andalus, by way of Arab traders and arithmetic treatises. Roman numerals, however, proved very persistent, remaining in common use in the West well into the 14th and 15th centuries, even in accounting and other business records (where the actual calculations would have been made using an abacus). Replacement by their more convenient "Arabic" equivalents was quite gradual, and Roman numerals are still used today in certain contexts. A few examples of their current use are:
In astronomy, the natural satellites or "moons" of the planets are traditionally designated by capital Roman numerals appended to the planet's name. For example, Titan's designation is Saturn VI.
In chemistry, Roman numerals are often used to denote the groups of the periodic table.
They are also used in the IUPAC nomenclature of inorganic chemistry, for the oxidation number of cations which can take on several different positive charges. They are also used for naming phases of polymorphic crystals, such as ice.
In education, school grades (in the sense of year-groups rather than test scores) are sometimes referred to by a Roman numeral; for example, "grade IX" is sometimes seen for "grade 9".
In entomology, the broods of the thirteen and seventeen year periodical cicadas are identified by Roman numerals.
In advanced mathematics (including trigonometry, statistics, and calculus), when a graph includes negative numbers, its quadrants are named using I, II, III, and IV. These quadrant names signify positive numbers on both axes, negative numbers on the X axis, negative numbers on both axes, and negative numbers on the Y axis, respectively. The use of Roman numerals to designate quadrants avoids confusion, since Arabic numerals are used for the actual data represented in the graph.
In military unit designation, Roman numerals are often used to distinguish between units at different levels. This reduces possible confusion, especially when viewing operational or strategic level maps. In particular, army corps are often numbered using Roman numerals (for example the American XVIII Airborne Corps or the WW2-era German III Panzerkorps) with Arabic numerals being used for divisions and armies.
In music, Roman numerals are used in several contexts:
In pharmacy, Roman numerals are used in some contexts, including S to denote "one half" and N to denote "zero".[57]
In photography, Roman numerals (with zero) are used to denote varying levels of brightness when using the Zone System.
In seismology, Roman numerals are used to designate degrees of the Mercalli intensity scale of earthquakes.
In sport, the team containing the "top" players and representing a nation or province, a club or a school at the highest level in (say) rugby union is often called the "1st XV", while a lower-ranking cricket or association football team might be the "3rd XI".
In tarot, Roman numerals (with zero) are used to denote the cards of the Major Arcana.
In theology and biblical scholarship, the Septuagint is often referred to as LXX, as this translation of the Old Testament into Greek is named for the legendary number of its translators (septuaginta being Latin for "seventy").
Some uses that are rare or never seen in English speaking countries may be relatively common in parts of continental Europe. For instance:
Capital or small capital Roman numerals are widely used in Romance languages to denote centuries, e.g. the French xviiie siècle[58] and the Spanish siglo XVIII mean "18th century". Slavic languages in and adjacent to Russia similarly favor Roman numerals (xviii век). On the other hand, in Slavic languages in Central Europe, like most Germanic languages, one writes "18." (with a period) before the local word for "century".
Mixed Roman and Arabic numerals are sometimes used in numeric representations of dates (especially in formal letters and official documents, but also on tombstones). The month is written in Roman numerals, while the day is in Arabic numerals: "14.VI.1789" and "VI.14.1789" both refer unambiguously to 14 June 1789.
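A formatting sketch of this day.MONTH.year convention, using Python's standard datetime (the helper name is an assumption):

```python
# Hypothetical helper: render a date with the month as a Roman numeral.
from datetime import date

ROMAN_MONTHS = ["I", "II", "III", "IV", "V", "VI",
                "VII", "VIII", "IX", "X", "XI", "XII"]

def roman_month_date(d: date) -> str:
    return f"{d.day}.{ROMAN_MONTHS[d.month - 1]}.{d.year}"

print(roman_month_date(date(1789, 6, 14)))  # 14.VI.1789
```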
Roman numerals are sometimes used to represent the days of the week in hours-of-operation signs displayed in windows or on doors of businesses,[59] and also sometimes in railway and bus timetables. Monday, taken as the first day of the week, is represented by I. Sunday is represented by VII. The hours of operation signs are tables composed of two columns where the left column is the day of the week in Roman numerals and the right column is a range of hours of operation from starting time to closing time. In the example case (left), the business opens from 10 AM to 7 PM on weekdays, 10 AM to 5 PM on Saturdays and is closed on Sundays. Note that the listing uses 24-hour time.
Roman numerals may also be used for floor numbering.[60][61] For instance, apartments in central Amsterdam are indicated as 138-III, with both an Arabic numeral (number of the block or house) and a Roman numeral (floor number). The apartment on the ground floor is indicated as 138-huis.
In Italy, where roads outside built-up areas have kilometre signs, major roads and motorways also mark 100-metre subdivisionals, using Roman numerals from I to IX for the smaller intervals. The sign "IX | 17" thus marks 17.9 km.
A notable exception to the use of Roman numerals in Europe is in Greece, where Greek numerals (based on the Greek alphabet) are generally used in contexts where Roman numerals would be used elsewhere.
The "Number Forms" block of the Unicode computer character set standard has a number of Roman numeral symbols in the range of code points from U+2160 to U+2188.[62] This range includes both upper- and lowercase numerals, as well as pre-combined characters for numbers up to 12 (Ⅻ or XII). One justification for the existence of pre-combined numbers is to facilitate the setting of multiple-letter numbers (such as VIII) on a single horizontal line in Asian vertical text. The Unicode standard, however, includes special Roman numeral code points for compatibility only, stating that "[f]or most purposes, it is preferable to compose the Roman numerals from sequences of the appropriate Latin letters".[63]
The block also includes some apostrophus symbols for large numbers, an old variant of "L" (50) similar to the Etruscan character, the Claudian letter "reversed C", etc.
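The code points can be inspected directly; for example (illustrative Python, showing the precomposed capitals at U+2160 through U+216B alongside the recommended Latin-letter composition):

```python
# The precomposed capital numerals run from U+2160 (one) to U+216B (twelve);
# the Unicode standard recommends ordinary Latin letters instead.
precomposed_twelve = chr(0x216B)  # "Ⅻ"
composed_twelve = "XII"
print(precomposed_twelve, composed_twelve)
print(hex(ord("Ⅰ")))  # 0x2160, ROMAN NUMERAL ONE
```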
en/4202.html.txt
ADDED
@@ -0,0 +1,150 @@
Roman numerals are a numeral system that originated in ancient Rome and remained the usual way of writing numbers throughout Europe well into the Late Middle Ages. Numbers in this system are represented by combinations of letters from the Latin alphabet. Modern usage employs seven symbols, each with a fixed integer value:[1]
The use of Roman numerals continued long after the decline of the Roman Empire. From the 14th century on, Roman numerals began to be replaced in most contexts by the more convenient Arabic numerals; however, this process was gradual, and the use of Roman numerals persists in some minor applications to this day.
One place they are often seen is on clock faces. For instance, on the clock of Big Ben (designed in 1852), the hours from 1 to 12 are written as:
The notations IV and IX can be read as "one less than five" (4) and "one less than ten" (9), although there is a tradition favouring representation of "4" as "IIII" on Roman numeral clocks.[2]
Other common uses include year numbers on monuments and buildings and copyright dates on the title screens of movies and television programs. MCM, signifying "a thousand, and a hundred less than another thousand", means 1900, so 1912 is written MCMXII. For the years of this century, MM indicates 2000. The current year is MMXX (2020).
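Forms like MCM follow a simple left-to-right rule: a symbol written before a larger one is subtracted, so MCM reads 1000 + (1000 − 100). A minimal parser sketch (illustrative Python, not part of the source) makes the rule concrete:

```python
# Illustrative sketch: values of the seven modern symbols, and a
# left-to-right reading with the subtractive rule.
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    total = 0
    for i, ch in enumerate(numeral):
        value = VALUES[ch]
        # Subtract when a strictly larger symbol follows; add otherwise.
        if i + 1 < len(numeral) and VALUES[numeral[i + 1]] > value:
            total -= value
        else:
            total += value
    return total

print(roman_to_int("MCMXII"))  # 1912
print(roman_to_int("MM"))      # 2000
```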
There has never been an officially "binding", or universally accepted standard for Roman numerals. Usage in ancient Rome varied greatly and became thoroughly chaotic in medieval times. Even the post-renaissance restoration of a largely "classical" notation has failed to produce total consistency: variant forms are even defended by some modern writers as offering improved "flexibility".[3]
On the other hand, especially where a Roman numeral is considered a legally binding expression of a number, as in U.S. Copyright law (where an "incorrect" or ambiguous numeral may invalidate a copyright claim, or affect the termination date of the copyright period)[4] it is desirable to strictly follow the usual modern standardized orthography.
This section defines a standard form of Roman numerals that is in current, more or less universal use, is unambiguous, and permits only one representation for each value.[5] It does not attempt to either endorse or refute every combination of Roman numeral symbols that has been, could be, or is used.
Roman numerals are essentially a decimal or "base 10" number system, with a "digit" for each power of ten – thousands, hundreds, tens and units. Each digit is represented by a fixed symbol or combination of symbols. In the absence of "place keeping" zeros, different symbols are used for each power of ten but each follows the same pattern,[6] as summarised in this table
The numerals for 4 (IV) and 9 (IX) are written using "subtractive notation",[7] where the first symbol (I) is subtracted from the larger one (V, or X), thus avoiding the clumsier (IIII, and VIIII).[a] Subtractive notation is also used for 40 (XL) and 90 (XC), as well as 400 (CD) and 900 (CM).[8] These are the only subtractive forms in standard use.
A number containing several decimal digits is built by appending them from highest to lowest, as in the following examples:
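The highest-to-lowest construction can be sketched as a greedy conversion over a value table that includes the subtractive pairs (an illustrative sketch, not an official algorithm):

```python
# Illustrative sketch: each decimal digit has a fixed rendering, so a
# greedy walk over a highest-to-lowest table that includes the
# subtractive pairs reproduces the standard form (valid for 1-3999).
DIGIT_TABLE = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def int_to_roman(n: int) -> str:
    if not 0 < n < 4000:
        raise ValueError("standard notation covers 1 to 3,999 only")
    parts = []
    for value, symbol in DIGIT_TABLE:
        while n >= value:
            parts.append(symbol)
            n -= value
    return "".join(parts)

print(int_to_roman(39))    # XXXIX
print(int_to_roman(1912))  # MCMXII
```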
Any missing place (represented by a zero in the Arabic equivalent) is omitted, as in Latin (and English) speech:
Roman numerals for large numbers are nowadays seen mainly in the form of year numbers, as in these examples:
The largest number that can be represented in this notation is 3,999 (MMMCMXCIX). Since the largest Roman numeral likely to be required today is MMXX (the current year) any pressing need for larger Roman numerals is hypothetical. Ancient and medieval users of the system used various means to write larger numbers, two of which are described below, under "Large numbers".
Forms exist that vary in one way or another from the general "standard" described above.
While subtractive notation for 4, 40 and 400 (IV, XL and CD) has been the usual form since Roman times, additive notation (IIII, XXXX,[11] and CCCC[11]) continued to be used, including in compound numbers like XXIIII,[12] LXXIIII,[13] and CCCCLXXXX.[14] The additive forms for 9, 90, and 900 (VIIII,[11] LXXXX,[15] and DCCCC[16]) have also been used, although less frequently.
The two conventions could be mixed in the same document or inscription, even in the same numeral. On the numbered gates to the Colosseum, for instance, IIII is systematically used instead of IV, but subtractive notation is used for other digits; so that gate 44 is labelled XLIIII.[17] Isaac Asimov speculates that the use of IV, as the initial letters of IVPITTER (a classical Latin spelling of the name of the Roman god Jupiter), may have been felt to have been impious in this context.[18]
Modern clock faces that use Roman numerals still usually employ IIII for four o'clock but IX for nine o'clock, a practice that goes back to very early clocks such as the Wells Cathedral clock of the late 14th century.[19][20][21] However, this is far from universal: for example, the clock on the Palace of Westminster tower, "Big Ben", uses a subtractive IV for 4 o'clock.[20]
Several monumental inscriptions created in the early 20th century use variant forms for "1900" (usually written MCM). These vary from MDCCCCX – a classical use of additive notation for MCMX (1910), as seen on Admiralty Arch, London, to the more unusual, if not unique MDCDIII for MCMIII (1903), on the north entrance to the Saint Louis Art Museum.[22]
Especially on tombstones and other funerary inscriptions 5 and 50 have been occasionally written IIIII and XXXXX instead of V and L, and there are instances such as IIIIII and XXXXXX rather than VI or LX.[23][24]
The irregular use of subtractive notation, such as IIIXX for 17,[25] IIXX for 18,[26] IIIC for 97,[27] IIC for 98,[28][29] and IC for 99[30] have been occasionally used. A possible explanation is that the word for 18 in Latin is duodeviginti, literally "two from twenty". Similarly, the words for 98 and 99 were duodecentum (two from hundred) and undecentum (one from hundred), respectively.[31] However, the explanation does not seem to apply to IIIXX and IIIC, since the Latin words for 17 and 97 were septendecim (seven ten) and nonaginta septem (ninety seven), respectively.
Another example of irregular subtractive notation is the use of XIIX for 18. It was used by officers of the XVIII Roman Legion to write their number.[32][33] The notation appears prominently on the cenotaph of their senior centurion Marcus Caelius (c. 45 BC – AD 9). There does not seem to be a linguistic explanation for this use, although it is one stroke shorter than XVIII.
On the publicly displayed official Roman calendars known as Fasti, the numbers 18 and 28 could be represented by XIIX and XXIIX respectively; the XIIX for 18 days to the next Kalends, and XXIIX for the number of days in February. The latter can be seen on the sole extant pre-Julian calendar, the Fasti Antiates Maiores.[34]
While irregular subtractive and additive notation has been used at least occasionally throughout history, some Roman numerals have been observed in documents and inscriptions that do not fit either system. Some of these variants do not seem to have been used outside specific contexts, and may have been regarded as errors even by contemporaries.
As Roman numerals are composed of ordinary alphabetic characters, there may sometimes be confusion with other uses of the same letters. For example, "XXX" and "XL" have other connotations in addition to their values as Roman numerals, while "IXL" more often than not is a gramogram of "I excel", and is in any case not an unambiguous Roman numeral.
The number zero did not originally have its own Roman numeral, but the word nulla (the Latin word meaning "none") was used by medieval scholars to represent 0. Dionysius Exiguus was known to use nulla alongside Roman numerals in 525.[40][41] About 725, Bede or one of his colleagues used the letter N, the initial of nulla or of nihil (the Latin word for "nothing") for 0, in a table of epacts, all written in Roman numerals.[42]
Though the Romans used a decimal system for whole numbers, reflecting how they counted in Latin, they used a duodecimal system for fractions, because the divisibility of twelve (12 = 22 × 3) makes it easier to handle the common fractions of 1⁄3 and 1⁄4 than does a system based on ten (10 = 2 × 5). On coins, many of which had values that were duodecimal fractions of the unit as, they used a tally-like notational system based on twelfths and halves. A dot (·) indicated an uncia "twelfth", the source of the English words inch and ounce; dots were repeated for fractions up to five twelfths. Six twelfths (one half) was abbreviated as the letter S for semis "half". Uncia dots were added to S for fractions from seven to eleven twelfths, just as tallies were added to V for whole numbers from six to nine.[43]
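The tally-like scheme for twelfths can be sketched as follows (illustrative Python; rendering repeated uncia marks as middle dots is an assumption):

```python
# Sketch of the fraction notation described above: S for six twelfths
# (semis), plus one dot per remaining uncia.
def roman_twelfths(n: int) -> str:
    if not 0 < n < 12:
        raise ValueError("expected a count of twelfths from 1 to 11")
    halves, dots = divmod(n, 6)
    return "S" * halves + "\N{MIDDLE DOT}" * dots

print(roman_twelfths(1))  # one dot: an uncia
print(roman_twelfths(7))  # S plus one dot
```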
Each fraction from 1⁄12 to 12⁄12 had a name in Roman times; these corresponded to the names of the related coins:
The arrangement of the dots was variable and not necessarily linear. Five dots arranged like (⁙) (as on the face of a die) are known as a quincunx, from the name of the Roman fraction/coin. The Latin words sextans and quadrans are the source of the English words sextant and quadrant.
Other Roman fractional notations included the following:
During the centuries that Roman numerals remained the standard way of writing numbers throughout Europe, there were various extensions to the system designed to indicate larger numbers, none of which were ever standardised.
One of these was the apostrophus,[44] in which 500 (usually written as "D") was written as IↃ, while 1,000, was written as CIↃ instead of "M".[18] This is a system of encasing numbers to denote thousands (imagine the Cs and Ↄs as parentheses), which has its origins in Etruscan numeral usage. The IↃ and CIↃ used to represent 500 and 1,000 most likely preceded, and subsequently influenced, the adoption of "D" and "M" in conventional Roman numerals.
In this system, an extra Ↄ denoted 500, and multiple extra Ↄs are used to denote 5,000, 50,000, etc. For example:
Sometimes CIↃ was reduced to ↀ for 1,000. John Wallis is often credited for introducing the symbol for infinity (modern ∞), and one conjecture is that he based it on this usage, since 1,000 was hyperbolically used to represent very large numbers. Similarly, IↃↃ for 5,000 was reduced to ↁ; CCIↃↃ for 10,000 to ↂ; IↃↃↃ for 50,000 to ↇ; and CCCIↃↃↃ for 100,000 to ↈ.[45]
Another system was the vinculum, in which conventional Roman numerals were multiplied by 1,000 by adding a "bar" or "overline".[45] Although mathematical historian David Eugene Smith disputes that this was part of ancient Roman usage,[46] the notation was certainly in use in the Middle Ages. Although modern usage is largely hypothetical, it is certainly easier for a modern user to decode than the apostrophus.
Another inconsistent medieval usage was the addition of vertical lines (or brackets) before and after the numeral to multiply it by 10 (or 100): thus M for 10,000 as an alternative form for X. In combination with the overline the bracketed forms might be used to raise the multiplier to (say) ten (or one hundred) thousand, thus:
This use of lines is distinct from the custom, once very common, of adding both underline and overline (or very large serifs) to a Roman numeral, simply to make it clear that it is a number, e.g. MCMLXVII (1967).
The system is closely associated with the ancient city-state of Rome and the Empire that it created. However, due to the scarcity of surviving examples, the origins of the system are obscure and there are several competing theories, all largely conjectural.
Rome was founded sometime between 850 and 750 BC. At the time, the region was inhabited by diverse populations of which the Etruscans were the most advanced. The ancient Romans themselves admitted that the basis of much of their civilization was Etruscan. Rome itself was located next to the southern edge of the Etruscan domain, which covered a large part of north-central Italy.
The Roman numerals, in particular, are directly derived from the Etruscan number symbols: "𐌠", "𐌡", "𐌢", "𐌣", and "𐌟" for 1, 5, 10, 50, and 100 (They had more symbols for larger numbers, but it is unknown which symbol represents which number). As in the basic Roman system, the Etruscans wrote the symbols that added to the desired number, from higher to lower value. Thus the number 87, for example, would be written 50 + 10 + 10 + 10 + 5 + 1 + 1 = 𐌣𐌢𐌢𐌢𐌡𐌠𐌠 (this would appear as 𐌠𐌠𐌡𐌢𐌢𐌢𐌣 since Etruscan was written from right to left.)[47]
The symbols "𐌠" and "𐌡" resembled letters of the Etruscan alphabet, but "𐌢", "𐌣", and "𐌟" did not. The Etruscans used the subtractive notation, too, but not like the Romans. They wrote 17, 18, and 19 as "𐌠𐌠𐌠𐌢𐌢", "𐌠𐌠𐌢𐌢", and 𐌠𐌢𐌢, mirroring the way they spoke those numbers ("three from twenty", etc.); and similarly for 27, 28, 29, 37, 38, etc. However they did not write "𐌠𐌡" for 4 (or "𐌢𐌣" for 40), and wrote "𐌡𐌠𐌠", "𐌡𐌠𐌠𐌠" and "𐌡𐌠𐌠𐌠𐌠" for 7, 8, and 9, respectively.[47]
The early Roman numerals for 1, 10, and 100 were the Etruscan ones: "I", "X", and "Ж". The symbols for 5 and 50 changed from Ʌ and "𐌣" to V and ↆ at some point. The latter had flattened to ⊥ (an inverted T) by the time of Augustus, and soon afterwards became identified with the graphically similar letter L.[48]
The symbol for 100 was written variously as >I< or ƆIC, was then abbreviated to Ɔ or C, with C (which matched a Latin letter) finally winning out. It may have helped that C is the initial of centum, Latin for "hundred".
The numbers 500 and 1000 were denoted by V or X overlaid with a box or circle. Thus 500 was like a Ɔ superimposed on a Þ. It became D or Ð by the time of Augustus, under the graphic influence of the letter D. It was later identified as the letter D; an alternative symbol for "thousand" was a CIƆ, and half of a thousand or "five hundred" is the right half of the symbol, IƆ, and this may have been converted into D.[18]
The notation for 1000 was a circled or boxed X: Ⓧ, ⊗, ⊕, and by Augustan times was partially identified with the Greek letter Φ phi. Over time, the symbol changed to Ψ and ↀ. The latter symbol further evolved into ∞, then ⋈, and eventually changed to M under the influence of the Latin word mille "thousand".[48]
According to Paul Kayser, the basic numerical symbols were I, X, C and Φ (or ⊕) and the intermediate ones were derived by taking half of those (half an X is V, half a C is L and half a Φ/⊕ is D).[49]
Lower case, minuscule, letters were developed in the Middle Ages, well after the demise of the Western Roman Empire, and since that time lower-case versions of Roman numbers have also been commonly used: i, ii, iii, iv, and so on.
Since the Middle Ages, a "j" has sometimes been substituted for the final "i" of a "lower-case" Roman numeral, such as "iij" for 3 or "vij" for 7. This "j" can be considered a swash variant of "i". The use of a final "j" is still used in medical prescriptions to prevent tampering with or misinterpretation of a number after it is written.[50][51]
Numerals in documents and inscriptions from the Middle Ages sometimes include additional symbols, which today are called "medieval Roman numerals". Some simply substitute another letter for the standard one (such as "A" for "V", or "Q" for "D"), while others serve as abbreviations for compound numerals ("O" for "XI", or "F" for "XL"). Although they are still listed today in some dictionaries, they are long out of use.[52]
en/4203.html.txt
ADDED
@@ -0,0 +1,35 @@
The atomic number or proton number (symbol Z) of a chemical element is the number of protons found in the nucleus of every atom of that element. The atomic number uniquely identifies a chemical element. It is identical to the charge number of the nucleus. In an uncharged atom, the atomic number is also equal to the number of electrons.
The sum of the atomic number Z and the number of neutrons N gives the mass number A of an atom. Since protons and neutrons have approximately the same mass (and the mass of the electrons is negligible for many purposes) and the mass defect of nucleon binding is always small compared to the nucleon mass, the atomic mass of any atom, when expressed in unified atomic mass units (making a quantity called the "relative isotopic mass"), is within 1% of the whole number A.
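A small numeric check of these relations (illustrative Python; the chlorine-35 figures are standard reference values, not taken from this article):

```python
# A = Z + N, and the relative isotopic mass stays within 1% of the
# whole number A when expressed in unified atomic mass units (u).
def mass_number(protons: int, neutrons: int) -> int:
    return protons + neutrons

A = mass_number(17, 18)           # chlorine-35: Z = 17, N = 18
relative_isotopic_mass = 34.969   # in u (reference value)
deviation = abs(relative_isotopic_mass - A) / A
print(A, deviation < 0.01)        # 35 True
```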
Atoms with the same atomic number but different neutron numbers, and hence different mass numbers, are known as isotopes. A little more than three-quarters of naturally occurring elements exist as a mixture of isotopes (see monoisotopic elements), and the average isotopic mass of an isotopic mixture for an element (called the relative atomic mass) in a defined environment on Earth, determines the element's standard atomic weight. Historically, it was these atomic weights of elements (in comparison to hydrogen) that were the quantities measurable by chemists in the 19th century.
The conventional symbol Z comes from the German word Zahl meaning number, which, before the modern synthesis of ideas from chemistry and physics, merely denoted an element's numerical place in the periodic table, whose order is approximately, but not completely, consistent with the order of the elements by atomic weights. Only after 1915, with the suggestion and evidence that this Z number was also the nuclear charge and a physical characteristic of atoms, did the word Atomzahl (and its English equivalent atomic number) come into common use in this context.
Loosely speaking, the existence or construction of a periodic table of elements creates an ordering of the elements, and so they can be numbered in order.
Dmitri Mendeleev claimed that he arranged his first periodic tables (first published on March 6, 1869) in order of atomic weight ("Atomgewicht").[1] However, in consideration of the elements' observed chemical properties, he changed the order slightly and placed tellurium (atomic weight 127.6) ahead of iodine (atomic weight 126.9).[1][2] This placement is consistent with the modern practice of ordering the elements by proton number, Z, but that number was not known or suspected at the time.
A simple numbering based on periodic table position was never entirely satisfactory, however. Besides the case of iodine and tellurium, several other pairs of elements (such as argon and potassium, cobalt and nickel) were later known to have nearly identical or reversed atomic weights, thus requiring their placement in the periodic table to be determined by their chemical properties. However, the gradual identification of more and more chemically similar lanthanide elements, whose atomic number was not obvious, led to inconsistency and uncertainty in the periodic numbering of elements at least from lutetium (element 71) onward (hafnium was not known at this time).
In 1911, Ernest Rutherford gave a model of the atom in which a central nucleus held most of the atom's mass and a positive charge which, in units of the electron's charge, was approximately equal to half of the atom's atomic weight, expressed in numbers of hydrogen atoms. This central charge would thus be approximately half the atomic weight (though it differed by almost 25% from the atomic number of gold (Z = 79, A = 197), the single element from which Rutherford made his guess). Nevertheless, in spite of Rutherford's estimation that gold had a central charge of about 100 (whereas gold was element Z = 79 on the periodic table), a month after Rutherford's paper appeared, Antonius van den Broek first formally suggested that the central charge and number of electrons in an atom were exactly equal to its place in the periodic table (also known as element number, atomic number, and symbolized Z). This eventually proved to be the case.
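A quick worked check of this "about half the atomic weight" estimate for gold, using only the numbers quoted in the text:

```python
# Rutherford-style estimate: central charge ~ A / 2 (in units of e).
Z, A = 79, 197                # gold: atomic number and mass number
estimate = A / 2              # 98.5, i.e. "about 100"
error = (estimate - Z) / Z    # relative error versus the true Z
print(f"estimate = {estimate}, error = {error:.0%}")  # roughly 25%
```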
The experimental position improved dramatically after research by Henry Moseley in 1913.[3] Moseley, after discussions with Bohr who was at the same lab (and who had used Van den Broek's hypothesis in his Bohr model of the atom), decided to test Van den Broek's and Bohr's hypothesis directly, by seeing if spectral lines emitted from excited atoms fitted the Bohr theory's postulation that the frequency of the spectral lines be proportional to the square of Z.
To do this, Moseley measured the wavelengths of the innermost photon transitions (K and L lines) produced by the elements from aluminum (Z = 13) to gold (Z = 79) used as a series of movable anodic targets inside an x-ray tube.[4] The square root of the frequency of these photons (x-rays) increased from one target to the next in an arithmetic progression. This led to the conclusion (Moseley's law) that the atomic number does closely correspond (with an offset of one unit for K-lines, in Moseley's work) to the calculated electric charge of the nucleus, i.e. the element number Z. Among other things, Moseley demonstrated that the lanthanide series (from lanthanum to lutetium inclusive) must have 15 members—no fewer and no more—which was far from obvious from known chemistry at that time.
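The arithmetic progression of the square roots can be sketched numerically. A minimal illustration, assuming the standard K-alpha form of Moseley's law, f = (3/4)·R·(Z − 1)², with R the Rydberg frequency (the constant's value here is approximate):

```python
import math

RYDBERG_HZ = 3.29e15  # Rydberg frequency in Hz (approximate)

def k_alpha_frequency(Z):
    # Moseley's law for K-alpha lines; note the offset of one unit from Z
    return 0.75 * RYDBERG_HZ * (Z - 1) ** 2

# sqrt(f) increases by the same amount from each element to the next:
roots = [math.sqrt(k_alpha_frequency(Z)) for Z in range(13, 18)]  # Al..Cl
steps = [b - a for a, b in zip(roots, roots[1:])]
print(steps)  # every step equals sqrt(0.75 * R), a constant
assert all(abs(s - steps[0]) < 1e-3 * steps[0] for s in steps)
```

Since sqrt(f) = (Z − 1)·sqrt(0.75·R), plotting sqrt(f) against element number gives the straight line that let Moseley assign each element its Z unambiguously.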
After Moseley's death in 1915, the atomic numbers of all known elements from hydrogen to uranium (Z = 92) were examined by his method. There were seven elements (with Z < 92) which were not found and therefore identified as still undiscovered, corresponding to atomic numbers 43, 61, 72, 75, 85, 87 and 91.[5] From 1918 to 1947, all seven of these missing elements were discovered.[6] By this time, the first four transuranium elements had also been discovered, so that the periodic table was complete with no gaps as far as curium (Z = 96).
In 1915, the reason for nuclear charge being quantized in units of Z, which were now recognized to be the same as the element number, was not understood. An old idea called Prout's hypothesis had postulated that the elements were all made of residues (or "protyles") of the lightest element hydrogen, which in the Bohr-Rutherford model had a single electron and a nuclear charge of one. However, as early as 1907, Rutherford and Thomas Royds had shown that alpha particles, which had a charge of +2, were the nuclei of helium atoms, which had a mass four times that of hydrogen, not two times. If Prout's hypothesis were true, something had to be neutralizing some of the charge of the hydrogen nuclei present in the nuclei of heavier atoms.
In 1917, Rutherford succeeded in generating hydrogen nuclei from a nuclear reaction between alpha particles and nitrogen gas,[7] and believed he had proven Prout's hypothesis. He called the new heavy nuclear particles protons in 1920 (alternate names being proutons and protyles). It had been immediately apparent from the work of Moseley that the nuclei of heavy atoms have more than twice as much mass as would be expected from their being made of hydrogen nuclei, and thus a hypothesis was required for the neutralization of the extra protons presumed present in all heavy nuclei. A helium nucleus was presumed to be composed of four protons plus two "nuclear electrons" (electrons bound inside the nucleus) to cancel two of the charges. At the other end of the periodic table, a nucleus of gold with a mass 197 times that of hydrogen was thought to contain 118 nuclear electrons to give it a residual charge of +79, consistent with its atomic number.
All consideration of nuclear electrons ended with James Chadwick's discovery of the neutron in 1932. An atom of gold now was seen as containing 118 neutrons rather than 118 nuclear electrons, and its positive charge now was realized to come entirely from a content of 79 protons. After 1932, therefore, an element's atomic number Z was also realized to be identical to the proton number of its nuclei.
The conventional symbol Z possibly comes from the German word Atomzahl (atomic number).[8] However, prior to 1915, the word Zahl (simply number) was used for an element's assigned number in the periodic table.
Each element has a specific set of chemical properties as a consequence of the number of electrons present in the neutral atom, which is Z (the atomic number). The configuration of these electrons follows from the principles of quantum mechanics. The number of electrons in each element's electron shells, particularly the outermost valence shell, is the primary factor in determining its chemical bonding behavior. Hence, it is the atomic number alone that determines the chemical properties of an element; and it is for this reason that an element can be defined as consisting of any mixture of atoms with a given atomic number.
The quest for new elements is usually described using atomic numbers. As of 2019, all elements with atomic numbers 1 to 118 have been observed. Synthesis of new elements is accomplished by bombarding target atoms of heavy elements with ions, such that the sum of the atomic numbers of the target and ion elements equals the atomic number of the element being created. In general, the half-life becomes shorter as atomic number increases, though an "island of stability" may exist for undiscovered isotopes with certain numbers of protons and neutrons.
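The bookkeeping described here can be illustrated with two well-known fusion reactions (only the Z arithmetic is shown; neutrons, isotopes, and cross-sections are ignored in this sketch):

```python
# Target Z + beam-ion Z = Z of the element produced.
reactions = [
    ("lead", 82, "zinc", 30, "copernicium", 112),
    ("californium", 98, "calcium", 20, "oganesson", 118),
]

for target, z_t, ion, z_i, product, z_p in reactions:
    assert z_t + z_i == z_p  # proton numbers are conserved in the fusion
    print(f"{target} (Z={z_t}) + {ion} (Z={z_i}) -> {product} (Z={z_p})")
```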
en/4204.html.txt
ADDED
@@ -0,0 +1,35 @@
The atomic number or proton number (symbol Z) of a chemical element is the number of protons found in the nucleus of every atom of that element. The atomic number uniquely identifies a chemical element. It is identical to the charge number of the nucleus. In an uncharged atom, the atomic number is also equal to the number of electrons.
en/4205.html.txt
ADDED
@@ -0,0 +1,130 @@
Nunavut (/ˈnʊnəvʌt/ (listen) NUUN-ə-vut; French: [nynavy(t)]; Inuktitut syllabics: ᓄᓇᕗᑦ [ˈnunavut]) is the newest, largest, and most northerly territory of Canada. It was separated officially from the Northwest Territories on April 1, 1999, via the Nunavut Act[8] and the Nunavut Land Claims Agreement Act,[9] though the boundaries had been drawn in 1993. The creation of Nunavut resulted in the first major change to Canada's political map since incorporating the province of Newfoundland in 1949.
Nunavut comprises a major portion of Northern Canada, and most of the Canadian Arctic Archipelago. Its vast territory makes it the fifth-largest country subdivision in the world, as well as North America's second-largest (after Greenland). The capital Iqaluit (formerly "Frobisher Bay"), on Baffin Island in the east, was chosen by the 1995 capital plebiscite. Other major communities include the regional centres of Rankin Inlet and Cambridge Bay.
Nunavut also includes Ellesmere Island to the far north, as well as the eastern and southern portions of Victoria Island in the west, and all islands in Hudson, James and Ungava Bays, including Akimiski Island far to the southeast of the rest of the territory. It is Canada's only geo-political region that is not connected to the rest of North America by highway.[10]
Nunavut is the second-least populous of Canada's provinces and territories. One of the world's most remote, sparsely settled regions, it has a population of 35,944,[1] mostly Inuit, spread over a land area of just over 1,877,787 km2 (725,018 sq mi), or slightly smaller than Mexico (excluding water surface area). Nunavut is also home to the world's northernmost permanently inhabited place, Alert.[11] Eureka, a weather station also on Ellesmere Island, has the lowest average annual temperature of any Canadian weather station.[12]
Nunavut means "our land" in the native language Inuktitut.[13]
Nunavut covers 1,877,787 km2 (725,018 sq mi)[1] of land and 160,930 km2 (62,137 sq mi)[14] of water in Northern Canada. The territory includes part of the mainland, most of the Arctic Archipelago, and all of the islands in Hudson Bay, James Bay, and Ungava Bay, including the Belcher Islands, all of which belonged to the Northwest Territories from which Nunavut was separated. This makes it the fifth-largest subnational entity (or administrative division) in the world. If Nunavut were a country, it would rank 15th in area.[15]
Nunavut has long land borders with the Northwest Territories on the mainland and a few Arctic islands, and with Manitoba to the south of the Nunavut mainland; it also meets Saskatchewan to the southwest at a quadripoint, and has a short land border with Newfoundland and Labrador on Killiniq Island. Nunavut also shares maritime borders with Greenland and the provinces of Quebec, Ontario, and Manitoba.
Nunavut's highest point is Barbeau Peak (2,616 m (8,583 ft)) on Ellesmere Island. The population density is 0.019 persons/km2 (0.05 persons/sq mi), one of the lowest in the world. By comparison, Greenland has approximately the same area and nearly twice the population.[16]
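A quick arithmetic check of the density figure, using the population and land area quoted elsewhere in this article:

```python
population = 35944         # 2016 census count
land_area_km2 = 1877787    # land area in square kilometres
density = population / land_area_km2
print(f"{density:.3f} persons/km^2")  # ~0.019, among the lowest in the world
```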
Nunavut experiences a polar climate in most regions, owing to its high latitude and weaker continental summertime influence than areas to the west. In its more southerly continental areas, very cold subarctic climates occur, because mean July temperatures there slightly exceed the 10 °C (50 °F) threshold required for a subarctic classification.
The region which is now mainland Nunavut was first populated approximately 4500 years ago by the Pre-Dorset, a diverse Paleo-Eskimo culture that migrated eastward from the Bering Strait region.[18]
The Pre-Dorset culture was succeeded by the Dorset culture about 2800 years ago.[19] The Dorset culture is assumed to have developed from the Pre-Dorset; however, the relationship between the two remains unclear.[19]
Helluland, a location that Norse explorers describe visiting in the Sagas of Icelanders, has been connected to Nunavut's Baffin Island. Claims of contact between the Dorset and the Norse, however, remain controversial.[20][21]
The Thule people, ancestors of the modern Inuit, began migrating into the Northwest Territories and Nunavut from Alaska in the 11th century. By 1300, the geographic extent of Thule settlement included most of modern Nunavut.
The migration of the Thule people coincides with the decline of the Dorset, who died out between 800 and 1500.[22]
The written historical accounts of the area begin in 1576, with an account by English explorer Martin Frobisher. While leading an expedition to find the Northwest Passage, Frobisher thought he had discovered gold ore around the body of water now known as Frobisher Bay on the coast of Baffin Island.[23] The ore turned out to be worthless, but Frobisher made the first recorded European contact with the Inuit. Other explorers in search of the elusive Northwest Passage followed in the 17th century, including Henry Hudson, William Baffin and Robert Bylot.
Cornwallis and Ellesmere Islands featured in the history of the Cold War in the 1950s. Concerned about the area's strategic geopolitical position, the federal government relocated Inuit from Nunavik (northern Quebec) to Resolute and Grise Fiord. In the unfamiliar and hostile conditions, they faced starvation[24] but were forced to stay.[25] Forty years later, the Royal Commission on Aboriginal Peoples issued a report titled The High Arctic Relocation: A Report on the 1953–55 Relocation.[26] The government paid compensation to those affected and their descendants and on August 18, 2010, in Inukjuak, the Honourable John Duncan, PC, MP, previous Minister of Indian Affairs and Northern Development and Federal Interlocutor for Métis and Non-Status Indians apologized on behalf of the Government of Canada for the relocation of Inuit to the High Arctic.[27][28]
Discussions on dividing the Northwest Territories along ethnic lines began in the 1950s, and legislation to do this was introduced in 1963. After its failure, a federal commission recommended against such a measure.[29]
In 1976, as part of the land claims negotiations between the Inuit Tapiriit Kanatami (then called the "Inuit Tapirisat of Canada") and the federal government, the parties discussed division of the Northwest Territories to provide a separate territory for the Inuit. On April 14, 1982, a plebiscite on division was held throughout the Northwest Territories. A majority of the residents voted in favour and the federal government gave a conditional agreement seven months later.[30]
The land claims agreement was completed in September 1992 and ratified by nearly 85% of the voters in Nunavut in a referendum. On July 9, 1993, the Nunavut Land Claims Agreement Act[9] and the Nunavut Act[8] were passed by the Canadian Parliament. The transition to establish Nunavut Territory was completed on April 1, 1999.[31] The creation of Nunavut has been followed by considerable population growth in the capital Iqaluit, from 5,200 in 2001 to 6,600 in 2011, a 27% increase.
Visible minority and indigenous identity (2016):[32][33]
As of the 2016 Canada Census, the population of Nunavut was 35,944, a 12.7% increase from 2011.[1] In 2006, 24,640 people identified themselves as Inuit (83.6% of the total population), 100 as First Nations (0.3%), 130 Métis (0.4%) and 4,410 as non-aboriginal (15.0%).[34]
The population growth rate of Nunavut has been well above the Canadian average for several decades, mostly due to birth rates significantly higher than the Canadian average—a trend that continues. Between 2011 and 2016, Nunavut had the highest population growth rate of any Canadian province or territory, at a rate of 12.7%.[1] The second-highest was Alberta, with a growth rate of 11.6%.
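The 12.7% figure can be reproduced with a short check. The 2011 census count of 31,906 used below is not stated in this article and is an assumption of this sketch:

```python
pop_2016 = 35944   # 2016 census, quoted above
pop_2011 = 31906   # assumed 2011 census count (not stated in this article)
growth = pop_2016 / pop_2011 - 1
print(f"{growth:.1%}")  # matches the 12.7% quoted above
```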
Nunavut has the highest smoking rate in all of Canada, with more than half of its adult population smoking cigarettes.[35] Smoking affects both men and women equally, and the overwhelming majority (90%) of pregnant women are smokers.[36]
Along with the Inuit language (Inuktitut and Inuinnaqtun), sometimes collectively called Inuktut,[37] English and French are also official languages.[4][38]
In his 2000 commissioned report (Aajiiqatigiingniq Language of Instruction Research Paper) to the Nunavut Department of Education, Ian Martin of York University stated a "long-term threat to Inuit languages from English is found everywhere, and current school language policies and practices on language are contributing to that threat" if Nunavut schools follow the Northwest Territories model. He provided a 20-year language plan to create a "fully functional bilingual society, in Inuktitut and English" by 2020. The plan provides different models, including:
Of the 34,960 responses to the census question concerning "mother tongue" in the 2016 census, the most commonly reported languages were:
At the time of the census, only English and French were counted as official languages. Figures shown are for single-language responses and the percentage of total single-language responses.[40]
In the 2016 census it was reported that 2,045 people (5.8%) living in Nunavut had no knowledge of either official language of Canada (English or French).[41] The 2016 census also reported that of the 30,135 Inuit people in Nunavut, 90.7% could speak either Inuktitut or Inuinnaqtun.[citation needed]
In the 2011 census, Christians formed 86% of Nunavut's population. About 13% of the population were non-religious and 0.44% followed Aboriginal spirituality. There are small minorities of Muslims, Hindus, Buddhists and Jews.[42]
The economy of Nunavut is driven by the Inuit and territorial governments, mining, oil, gas, and mineral exploration, arts, crafts, hunting, fishing, whaling, tourism, transportation, housing development, the military, research, and education. One college, Nunavut Arctic College, currently operates in Nunavut, and several Arctic research stations are located within the territory. The new Canadian High Arctic Research Station (CHARS) is planned for Cambridge Bay, in addition to the station at Alert in the far north.
Iqaluit hosts the annual Nunavut Mining Symposium every April,[43] a tradeshow that showcases the many economic activities ongoing in Nunavut.
There are currently three major mines in operation in Nunavut. The first is Agnico Eagle Mines Ltd's Meadowbank Division: Meadowbank Gold Mine is an open-pit gold mine with an estimated mine life of 2010–2020, employing 680 people.
The second recently opened mine in production is the Mary River iron ore mine operated by Baffinland Iron Mines. Located close to Pond Inlet on northern Baffin Island, it produces a high-grade, direct-shipping iron ore.
The most recent mine to open is Doris North, or the Hope Bay Mine, operated near Hope Bay Aerodrome by TMAC Resources Ltd. This new high-grade gold mine is the first in a series of potential mines at gold occurrences along the Hope Bay greenstone belt.
Nunavut's people rely primarily on diesel fuel[45] to run generators and heat homes, with fossil fuel shipments from southern Canada by plane or boat because there are few to no roads or rail links to the region.[46] There is a government effort to use more renewable energy sources,[47] which is generally supported by the community.[48]
This support comes from Nunavut feeling the effects of global warming.[49][50] Former Nunavut Premier Eva Aariak said in 2011, "Climate change is very much upon us. It is affecting our hunters, the animals, the thinning of the ice is a big concern, as well as erosion from permafrost melting."[46] The region is warming about twice as fast as the global average, according to the UN's Intergovernmental Panel on Climate Change.
Nunavut has a Commissioner appointed by the federal Minister of Indigenous and Northern Affairs. As in the other territories, the commissioner's role is symbolic and is analogous to that of a lieutenant governor.[56] While the Commissioner is not formally a representative of the Canadian monarch, a role roughly analogous to representing the Crown has accrued to the position.
Nunavut elects a single member of the House of Commons of Canada. This makes Nunavut the largest electoral district in the world by area.
The members of the unicameral Legislative Assembly of Nunavut are elected individually; there are no parties and the legislature is consensus-based.[57] The head of government, the premier of Nunavut, is elected by and from the members of the legislative assembly. On June 14, 2018, Joe Savikataaq was elected as the Premier of Nunavut, after his predecessor Paul Quassa lost a non-confidence motion.[58][59] Former Premier Paul Okalik set up an advisory council of eleven elders, whose function is to help incorporate "Inuit Qaujimajatuqangit" (Inuit culture and traditional knowledge, often referred to in English as "IQ") into the territory's political and governmental decisions.[60]
Due to the territory's small population, and the fact that there are only a few hundred voters in each electoral district, the possibility of two election candidates finishing in an exact tie is significantly higher than in any Canadian province. This has actually happened twice in the five elections to date, with exact ties in Akulliq in the 2008 Nunavut general election and in Rankin Inlet South in the 2013 Nunavut general election. In such an event, Nunavut's practice is to schedule a follow-up by-election rather than choosing the winning candidate by an arbitrary method. The territory has also had numerous instances where MLAs were directly acclaimed to office as the only person to register their candidacy by the deadline, as well as one instance where a follow-up by-election had to be held due to no candidates registering for the regular election in their district at all.
Owing to Nunavut's vast size, the stated goal of the territorial government has been to decentralize governance beyond the region's capital. Three regions—Kitikmeot, Kivalliq and Qikiqtaaluk/Baffin—are the basis for more localized administration, although they lack autonomous governments of their own.[citation needed]
The territory has an annual budget of C$700 million, provided almost entirely by the federal government. Former Prime Minister Paul Martin designated support for Northern Canada as one of his priorities in 2004, with an extra $500 million to be divided among the three territories.[citation needed]
In 2001, the government of New Brunswick[citation needed] collaborated with the federal government and the technology firm SSI Micro to launch Qiniq, a unique network that uses satellite delivery to provide broadband Internet access to 24 communities in Nunavut. As a result, the territory was named one of the world's "Smart 25 Communities" in 2006 by the Intelligent Community Forum, a worldwide organization that honours innovation in broadband technologies. The Nunavut Public Library Services, the public library system serving the territory, also provides various information services to the territory.
In September 2012, Premier Aariak welcomed Prince Edward and Sophie, Countess of Wessex, to Nunavut as part of the events marking the Diamond Jubilee of Queen Elizabeth II.[61]
Nunavut is divided into three administrative regions:
The Nunavut licence plate was originally created for the Northwest Territories in the 1970s. The plate has long been famous worldwide for its unique design in the shape of a polar bear. Nunavut was licensed by the NWT to use the same licence plate design in 1999 when it became a separate territory,[62] but adopted its own plate design in March 2012 for launch in August 2012—a rectangle that prominently features the northern lights, a polar bear and an inuksuk.[62][63]
The flag and the coat of arms of Nunavut were designed by Andrew Qappik from Pangnirtung.[64]
A long-simmering dispute between Canada and the U.S. involves the issue of Canadian sovereignty over the Northwest Passage.[65]
Due to prohibition laws influenced by local and traditional beliefs, Nunavut has a highly regulated alcohol market. It is the last outpost of prohibition in Canada, and it is often easier to obtain firearms than alcohol.[66] Regulations vary slightly from community to community, but as a whole the territory remains very restrictive: seven communities ban alcohol outright, and another 14 restrict it through orders administered by local committees. Because of these laws, a lucrative bootlegging market has appeared in which bottles are resold at extraordinary markups.[67] The RCMP estimate Nunavut's bootleg liquor market rakes in some $10 million a year.[66]
Despite the restrictions, alcohol's availability leads to widespread alcohol-related crime. One lawyer estimated that some 95% of police calls are alcohol-related.[68] Alcohol is also believed to be a contributing factor in the territory's high rates of violence, suicide, and homicide. A special task force created in 2010 to study and address the territory's increasing alcohol-related problems recommended that the government ease alcohol restrictions, since prohibition has historically proven highly ineffective and these laws are believed to contribute to the territory's widespread social ills. However, many residents are skeptical about the effectiveness of liberalizing liquor sales and want alcohol banned completely. In 2014, Nunavut's government nevertheless decided to move towards liberalization, and in 2017 a liquor store opened in Iqaluit, the capital, for the first time in 38 years.[66]
The Inuit Broadcasting Corporation is based in Nunavut. The Canadian Broadcasting Corporation (CBC) serves Nunavut through a radio and television production centre in Iqaluit, and a bureau in Rankin Inlet. Iqaluit is served by private commercial radio stations CKIQ-FM and CKGC-FM, both owned by Northern Lights Entertainment Inc. (CKIQ-FM had a rebroadcaster in Rankin Inlet that was discontinued in 2009.)
Nunavut is served by two regional weekly newspapers, Nunatsiaq News published by Nortext, and Nunavut News/North, published by Northern News Services, who also publish the multi-territory regional Kivalliq News.[69]
The film production company Isuma is based in Igloolik. Co-founded by Zacharias Kunuk and Norman Cohn in 1990, the company produced the 1999 feature Atanarjuat: The Fast Runner, winner of the Caméra d'Or for Best First Feature Film at the 2001 Cannes Film Festival. It was the first feature film written, directed, and acted entirely in Inuktitut.
In November 2006, the National Film Board of Canada (NFB) and the Inuit Broadcasting Corporation announced the start of the Nunavut Animation Lab, offering animation training to Nunavut artists at workshops in Iqaluit, Cape Dorset and Pangnirtung.[70] Films from the Nunavut Animation Lab include Alethea Arnaquq-Baril's 2010 digital animation short Lumaajuuq, winner of the Best Aboriginal Award at the Golden Sheaf Awards and named Best Canadian Short Drama at the imagineNATIVE Film + Media Arts Festival.[71]
In November 2011, the Government of Nunavut and the NFB jointly announced the launch of a DVD and online collection entitled Unikkausivut (Inuktitut: Sharing Our Stories), which will make over 100 NFB films by and about Inuit available in Inuktitut, Inuinnaqtun and other Inuit languages, as well as English and French. The Government of Nunavut is distributing Unikkausivut to every school in the territory.[72][73]
The indigenous music of Nunavut includes Inuit throat singing and drum-led dancing, along with country music, bluegrass, square dancing, the button accordion and the fiddle, an infusion of European influence.
Artcirq is a collective of Inuit circus performers based in Igloolik.[74] The group has performed around the world, including at the 2010 Olympic Winter Games in Vancouver, British Columbia.
Nunavut competes at the Arctic Winter Games. Iqaluit co-hosted the 2002 edition in partnership with Nuuk, Greenland.
Hockey Nunavut was founded in 1999 and competes in the Maritime-Hockey North Junior C Championship.
Susan Aglukark is an Inuk singer and songwriter. She has released six albums and has won several Juno Awards. She blends the Inuktitut and English languages with contemporary pop music arrangements to tell the stories of her people, the Inuit of the Arctic.
On May 3, 2008, the Kronos Quartet premiered a collaborative piece with Inuit throat singer Tanya Tagaq, entitled Nunavut, based on an Inuit folk story. Tagaq is also known internationally for her collaborations with Icelandic pop star Björk, and for her 2018 novel Split Tooth, which was longlisted for the Scotiabank Giller Prize.
Jordin John Kudluk Tootoo (Inuktitut syllabics: ᔪᐊᑕᓐ ᑐᑐ; born February 2, 1983, in Churchill, Manitoba, Canada) was a professional ice hockey player with the Chicago Blackhawks of the National Hockey League (NHL). Although born in Manitoba, Tootoo grew up in Rankin Inlet, where he was taught to skate and play hockey by his father, Barney.
Hunter Tootoo, Member of Parliament for the Territory of Nunavut, was elected to the Liberal government in 2015. He served as the Minister of Fisheries, Oceans, and the Canadian Coast Guard until his resignation from the post on May 31, 2016.
Tourism
Journalism
en/4206.html.txt
ADDED
@@ -0,0 +1,17 @@
Nutrition is the science that interprets the nutrients and other substances in food in relation to maintenance, growth, reproduction, health and disease of an organism. It includes ingestion, absorption, assimilation, biosynthesis, catabolism and excretion.[1]
The diet of an organism is what it eats, which is largely determined by the availability and palatability of foods. For humans, a healthy diet includes preparation of food and storage methods that preserve nutrients from oxidation, heat or leaching, and that reduces risk of foodborne illnesses. The seven major classes of human nutrients are carbohydrates, fats, fiber, minerals, proteins, vitamins, and water. Nutrients can be grouped as either macronutrients or micronutrients (needed in small quantities).
In humans, an unhealthy diet can cause deficiency-related diseases such as blindness, anemia, scurvy, preterm birth, stillbirth and cretinism,[2] or nutrient excess health-threatening conditions such as obesity[3][4] and metabolic syndrome;[5] and such common chronic systemic diseases as cardiovascular disease,[6] diabetes,[7][8] and osteoporosis.[9][10][11] Undernutrition can lead to wasting in acute cases, and the stunting of marasmus in chronic cases of malnutrition.[2]
Carnivore and herbivore diets contrast, with basic nitrogen and carbon proportions varying according to their particular foods. Many herbivores rely on bacterial fermentation to create digestible nutrients from indigestible plant cellulose, while obligate carnivores must eat animal meat to obtain certain vitamins or nutrients their bodies cannot otherwise synthesize. Animals generally have a higher energy requirement than plants.[12]
Plant nutrition is the study of the chemical elements that are necessary for plant growth.[13] Several principles apply to plant nutrition. One is that an element is essential if it is directly involved in plant metabolism. However, this principle does not account for the so-called beneficial elements, whose presence, while not required, has clear positive effects on plant growth.
A nutrient that is able to limit plant growth according to Liebig's law of the minimum is considered an essential plant nutrient if the plant cannot complete its full life cycle without it. There are 16 essential plant soil nutrients, in addition to the three major elemental nutrients: carbon and oxygen, which photosynthetic plants obtain from carbon dioxide in the air, and hydrogen, which is obtained from water.
Plants uptake essential elements from the soil through their roots and from the air (consisting of mainly nitrogen and oxygen) through their leaves. Green plants obtain their carbohydrate supply from the carbon dioxide in the air by the process of photosynthesis. Carbon and oxygen are absorbed from the air, while other nutrients are absorbed from the soil. Nutrient uptake in the soil is achieved by cation exchange, wherein root hairs pump hydrogen ions (H+) into the soil through proton pumps. These hydrogen ions displace cations attached to negatively charged soil particles so that the cations are available for uptake by the root. In the leaves, stomata open to take in carbon dioxide and expel oxygen. The carbon dioxide molecules are used as the carbon source in photosynthesis.
Although nitrogen is plentiful in the Earth's atmosphere, very few plants can use this directly. Most plants, therefore, require nitrogen compounds to be present in the soil in which they grow. This is made possible by the fact that largely inert atmospheric nitrogen is changed in a nitrogen fixation process to biologically usable forms in the soil by bacteria.[14]
Plant nutrition is a difficult subject to understand completely, partially because of the variation between different plants and even between different species or individuals of a given clone. Elements present at low levels may cause deficiency symptoms, and toxicity is possible at levels that are too high. Furthermore, deficiency of one element may present as symptoms of toxicity from another element, and vice versa.[citation needed]
en/4207.html.txt
ADDED
@@ -0,0 +1,31 @@
In geography, an oasis (/oʊˈeɪsɪs/, plural oases, /oʊˈeɪsiːz/) is a fertile area (often having a date palm grove) in a desert or semi-desert environment.[1] Oases also provide habitats for animals and plants.
The word oasis came into English from Latin: oasis, from Ancient Greek: ὄασις, óasis, which in turn is a direct borrowing from Demotic Egyptian. The word for oasis in the later attested Coptic language (the descendant of Demotic Egyptian) is wahe or ouahe which means a "dwelling place".[2]
Oases are made fertile when sources of freshwater, such as underground rivers or aquifers, irrigate the surface naturally or via man-made wells.[3]
The presence of water on the surface or underground is necessary, and the local or regional management of this essential resource is strategic, but water alone is not sufficient to create an oasis: continuous human work and know-how (a technical and social culture) are essential to maintain such ecosystems.[4][5]
Rain showers provide subterranean water to sustain natural oases, such as the Tuat. Substrata of impermeable rock and stone can trap water and retain it in pockets, or water can collect and percolate to the surface along long faulting subsurface ridges or volcanic dikes. Any incidence of water is then used by migrating birds, which also pass seeds in their droppings; these grow at the water's edge, forming an oasis. The water can also be used to irrigate crops.
The location of oases has been of critical importance for trade and transportation routes in desert areas; caravans must travel via oases so that supplies of water and food can be replenished. Thus, political or military control of an oasis has in many cases meant control of trade on a particular route. For example, the oases of Awjila, Ghadames and Kufra, situated in modern-day Libya, have at various times been vital to both north–south and east–west trade in the Sahara Desert. The Silk Road across Central Asia also incorporated several oases.
In North American history, oases have been less prominent since the desert regions are smaller; however, several areas in the deep southwestern United States contain oasis regions that served as important links through the hot deserts and vast rural areas. While present-day desert cities like Las Vegas, Phoenix, Palm Springs, and Tucson are large modern cities, many of these locations were once small, isolated farming areas where travelers through the western desert stopped for food and supplies. Even today, several roads through western deserts, such as U.S. Route 50 through southern Nevada and the Mojave Desert, feature small green fields, citrus groves and small isolated supply towns.
People who live in an oasis must manage land and water use carefully; fields must be irrigated to grow plants like apricots, dates, figs, and olives. The most important plant in an oasis is the date palm, which forms the upper layer. These palm trees provide shade for smaller trees like peach trees, which form the middle layer. By growing plants in different layers, the farmers make best use of the soil and water. Many vegetables are also grown and some cereals, such as barley, millet, and wheat, are grown where there is more moisture.[6]
In summary, an oasis palm grove is a highly anthropized and irrigated area that supports a traditionally intensive and polyculture-based agriculture.[1] The oasis is integrated into its desert environment through an often close association with nomadic transhumant livestock farming (pastoral and sedentary populations are very often clearly distinguished), yet it is set apart from the desert by a very particular social and ecosystem structure. Responding to environmental constraints, its agriculture is conducted with the superposition (in its typical form) of two or three vegetation strata, creating what is called the "oasis effect".[1]
Al Ain Oasis in the city of Al Ain, Arabian Peninsula
Taghit in Algeria, North Africa
Ein Gedi in Israel, Middle East
Fish Springs National Wildlife Refuge in Utah, United States
Rubaksa in a dry limestone environment in north Ethiopia is an oasis thanks to the existence of karstic springs
Twentynine Palms sign
Creosote (Larrea tridentata) on alluvium at Red Rock Canyon National Conservation Area, southern Nevada. United States
Crescent Lake (Yueyaquan) in the Gobi Desert
en/4208.html.txt
ADDED
@@ -0,0 +1,181 @@
Obesity is a medical condition in which excess body fat has accumulated to an extent that it may have a negative effect on health.[1] People are generally considered obese when their body mass index (BMI), a measurement obtained by dividing a person's weight by the square of the person's height, is over 30 kg/m2; the range 25–30 kg/m2 is defined as overweight.[1] Some East Asian countries use lower values.[8] Obesity increases the likelihood of various diseases and conditions, particularly cardiovascular diseases, type 2 diabetes, obstructive sleep apnea, certain types of cancer, osteoarthritis, and depression.[2][3]
Obesity is most commonly caused by a combination of excessive food intake, lack of physical activity, and genetic susceptibility.[1][4] A few cases are caused primarily by genes, endocrine disorders, medications, or mental disorder.[9] The view that obese people eat little yet gain weight due to a slow metabolism is not medically supported.[10] On average, obese people have a greater energy expenditure than their normal counterparts due to the energy required to maintain an increased body mass.[10][11]
Obesity is mostly preventable through a combination of social changes and personal choices.[1] Changes to diet and exercising are the main treatments.[2] Diet quality can be improved by reducing the consumption of energy-dense foods, such as those high in fat or sugars, and by increasing the intake of dietary fiber.[1] Medications can be used, along with a suitable diet, to reduce appetite or decrease fat absorption.[5] If diet, exercise, and medication are not effective, a gastric balloon or surgery may be performed to reduce stomach volume or length of the intestines, leading to feeling full earlier or a reduced ability to absorb nutrients from food.[6][12]
Obesity is a leading preventable cause of death worldwide, with increasing rates in adults and children.[1][13] In 2015, 600 million adults (12%) and 100 million children were obese in 195 countries.[7] Obesity is more common in women than men.[1] Authorities view it as one of the most serious public health problems of the 21st century.[14] Obesity is stigmatized in much of the modern world (particularly in the Western world), though it was seen as a symbol of wealth and fertility at other times in history and still is in some parts of the world.[2][15] In 2013, several medical societies, including the American Medical Association and the American Heart Association, classified obesity as a disease.[16][17][18]
Obesity is a medical condition in which excess body fat has accumulated to the extent that it may have an adverse effect on health.[20] It is defined by body mass index (BMI) and further evaluated in terms of fat distribution via the waist–hip ratio and total cardiovascular risk factors.[21][22] BMI is closely related to both percentage body fat and total body fat.[23]
In children, a healthy weight varies with age and sex. Obesity in children and adolescents is defined not as an absolute number but in relation to a historical normal group, such that obesity is a BMI greater than the 95th percentile.[24] The reference data on which these percentiles were based date from 1963 to 1994, and thus have not been affected by the recent increases in weight.[25] BMI is defined as the subject's weight in kilograms divided by the square of their height in metres: BMI = weight (kg) / height (m)².
BMI is usually expressed in kilograms of weight per metre squared of height. To convert from pounds per square inch, multiply by 703 (kg/m2)/(lb/sq in).[26]
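As a worked illustration of the BMI formula, the 703 imperial conversion factor, and the WHO adult cut-offs described in this article (a hedged sketch, not part of the source; the function names are illustrative), the calculation can be written as:

```python
def bmi_metric(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

def bmi_imperial(weight_lb: float, height_in: float) -> float:
    """Same formula in pounds and inches, scaled by 703 to give kg/m2."""
    return 703 * weight_lb / height_in ** 2

def classify(bmi: float) -> str:
    """Map a BMI value to the WHO adult category (underweight < 18.5,
    normal 18.5-25, overweight 25-30, obese >= 30 kg/m2)."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal weight"
    if bmi < 30:
        return "overweight"
    return "obese"

# Example: 85 kg at 1.75 m gives a BMI of about 27.8, in the overweight range.
print(round(bmi_metric(85, 1.75), 1))   # 27.8
print(classify(bmi_metric(85, 1.75)))   # overweight
```

Note that some East Asian countries apply lower thresholds, as mentioned above (e.g. Japan uses 25 kg/m² and China 28 kg/m² for obesity), so the cut-offs in `classify` are the WHO defaults only.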
The most commonly used definitions, established by the World Health Organization (WHO) in 1997 and published in 2000, provide the values listed in the table.[27][28]
Some modifications to the WHO definitions have been made by particular organizations.[29] The surgical literature breaks down class II and III obesity into further categories whose exact values are still disputed.[30]
As Asian populations develop negative health consequences at a lower BMI than Caucasians, some nations have redefined obesity; Japan has defined obesity as any BMI greater than 25 kg/m2[8] while China uses a BMI of greater than 28 kg/m2.[29]
Excessive body weight is associated with various diseases and conditions, particularly cardiovascular diseases, diabetes mellitus type 2, obstructive sleep apnea, certain types of cancer, osteoarthritis,[2] and asthma.[2][31] As a result, obesity has been found to reduce life expectancy.[2]
Obesity is one of the leading preventable causes of death worldwide.[33][34][35] A number of reviews have found that mortality risk is lowest at a BMI of 20–25 kg/m2[36][37][38] in non-smokers and at 24–27 kg/m2 in current smokers, with risk increasing along with changes in either direction.[39][40] This appears to apply in at least four continents.[38] In contrast, a 2013 review found that grade 1 obesity (BMI 30–35) was not associated with higher mortality than normal weight, and that overweight (BMI 25–30) was associated with "lower" mortality than was normal weight (BMI 18.5–25).[41] Other evidence suggests that the association of BMI and waist circumference with mortality is U- or J-shaped, while the association between waist-to-hip ratio and waist-to-height ratio with mortality is more positive.[42] In Asians the risk of negative health effects begins to increase between 22–25 kg/m2.[43] A BMI above 32 kg/m2 has been associated with a doubled mortality rate among women over a 16-year period.[44] In the United States, obesity is estimated to cause 111,909 to 365,000 deaths per year,[2][35] while 1 million (7.7%) of deaths in Europe are attributed to excess weight.[45][46] On average, obesity reduces life expectancy by six to seven years,[2][47] a BMI of 30–35 kg/m2 reduces life expectancy by two to four years,[37] while severe obesity (BMI > 40 kg/m2) reduces life expectancy by ten years.[37]
Obesity increases the risk of many physical and mental conditions. These comorbidities are most commonly shown in metabolic syndrome,[2] a combination of medical disorders which includes: diabetes mellitus type 2, high blood pressure, high blood cholesterol, and high triglyceride levels.[48]
Complications are either directly caused by obesity or indirectly related through mechanisms sharing a common cause such as a poor diet or a sedentary lifestyle. The strength of the link between obesity and specific conditions varies. One of the strongest is the link with type 2 diabetes. Excess body fat underlies 64% of cases of diabetes in men and 77% of cases in women.[49]
Health consequences fall into two broad categories: those attributable to the effects of increased fat mass (such as osteoarthritis, obstructive sleep apnea, social stigmatization) and those due to the increased number of fat cells (diabetes, cancer, cardiovascular disease, non-alcoholic fatty liver disease).[2][50] Increases in body fat alter the body's response to insulin, potentially leading to insulin resistance. Increased fat also creates a proinflammatory state,[51][52] and a prothrombotic state.[50][53]
Although the negative health consequences of obesity in the general population are well supported by the available evidence, health outcomes in certain subgroups seem to be improved at an increased BMI, a phenomenon known as the obesity survival paradox.[75] The paradox was first described in 1999 in overweight and obese people undergoing hemodialysis,[75] and has subsequently been found in those with heart failure and peripheral artery disease (PAD).[76]
In people with heart failure, those with a BMI between 30.0 and 34.9 had lower mortality than those with a normal weight. This has been attributed to the fact that people often lose weight as they become progressively more ill.[77] Similar findings have been made in other types of heart disease. People with class I obesity and heart disease do not have greater rates of further heart problems than people of normal weight who also have heart disease. In people with greater degrees of obesity, however, the risk of further cardiovascular events is increased.[78][79] Even after cardiac bypass surgery, no increase in mortality is seen in the overweight and obese.[80] One study found that the improved survival could be explained by the more aggressive treatment obese people receive after a cardiac event.[81] Another study found that if one takes into account chronic obstructive pulmonary disease (COPD) in those with PAD, the benefit of obesity no longer exists.[76]
At an individual level, a combination of excessive food energy intake and a lack of physical activity is thought to explain most cases of obesity.[82] A limited number of cases are due primarily to genetics, medical reasons, or psychiatric illness.[9] In contrast, increasing rates of obesity at a societal level are felt to be due to an easily accessible and palatable diet,[83] increased reliance on cars, and mechanized manufacturing.[84][85]
A 2006 review identified ten other possible contributors to the recent increase of obesity: (1) insufficient sleep, (2) endocrine disruptors (environmental pollutants that interfere with lipid metabolism), (3) decreased variability in ambient temperature, (4) decreased rates of smoking, because smoking suppresses appetite, (5) increased use of medications that can cause weight gain (e.g., atypical antipsychotics), (6) proportional increases in ethnic and age groups that tend to be heavier, (7) pregnancy at a later age (which may cause susceptibility to obesity in children), (8) epigenetic risk factors passed on generationally, (9) natural selection for higher BMI, and (10) assortative mating leading to increased concentration of obesity risk factors (this would increase the number of obese people by increasing population variance in weight).[86] According to the Endocrine Society, there is "growing evidence suggesting that obesity is a disorder of the energy homeostasis system, rather than simply arising from the passive accumulation of excess weight".[87]
A 2016 review supported excess food as the primary factor.[89][90] Dietary energy supply per capita varies markedly between different regions and countries. It has also changed significantly over time.[88] From the early 1970s to the late 1990s the average food energy available per person per day (the amount of food bought) increased in all parts of the world except Eastern Europe. The United States had the highest availability with 3,654 calories (15,290 kJ) per person in 1996.[88] This increased further in 2003 to 3,754 calories (15,710 kJ).[88] During the late 1990s Europeans had 3,394 calories (14,200 kJ) per person, in the developing areas of Asia there were 2,648 calories (11,080 kJ) per person, and in sub-Saharan Africa people had 2,176 calories (9,100 kJ) per person.[88][91] Total food energy consumption has been found to be related to obesity.[92]
The widespread availability of nutritional guidelines[93] has done little to address the problems of overeating and poor dietary choice.[94] From 1971 to 2000, obesity rates in the United States increased from 14.5% to 30.9%.[95] During the same period, an increase occurred in the average amount of food energy consumed. For women, the average increase was 335 calories (1,400 kJ) per day (1,542 calories (6,450 kJ) in 1971 and 1,877 calories (7,850 kJ) in 2004), while for men the average increase was 168 calories (700 kJ) per day (2,450 calories (10,300 kJ) in 1971 and 2,618 calories (10,950 kJ) in 2004). Most of this extra food energy came from an increase in carbohydrate consumption rather than fat consumption.[96] The primary sources of these extra carbohydrates are sweetened beverages, which now account for almost 25 percent of daily food energy in young adults in America,[97] and potato chips.[98] Consumption of sweetened drinks such as soft drinks, fruit drinks, iced tea, and energy and vitamin water drinks is believed to be contributing to the rising rates of obesity[99][100] and to an increased risk of metabolic syndrome and type 2 diabetes.[101] Vitamin D deficiency is related to diseases associated with obesity.[102]
As societies become increasingly reliant on energy-dense, large-portion fast-food meals, the association between fast-food consumption and obesity becomes more concerning.[103] In the United States, consumption of fast-food meals tripled and food energy intake from these meals quadrupled between 1977 and 1995.[104]
Agricultural policy and techniques in the United States and Europe have led to lower food prices. In the United States, subsidization of corn, soy, wheat, and rice through the U.S. farm bill has made the main sources of processed food cheap compared to fruits and vegetables.[105] Calorie count laws and nutrition facts labels attempt to steer people toward making healthier food choices, including awareness of how much food energy is being consumed.
Obese people consistently under-report their food consumption as compared to people of normal weight.[106] This is supported both by tests of people carried out in a calorimeter room[107] and by direct observation.
A sedentary lifestyle plays a significant role in obesity.[108] Worldwide there has been a large shift towards less physically demanding work,[109][110][111] and currently at least 30% of the world's population gets insufficient exercise.[110] This is primarily due to increasing use of mechanized transportation and a greater prevalence of labor-saving technology in the home.[109][110][111] In children, there appear to be declines in levels of physical activity due to less walking and physical education.[112] World trends in active leisure time physical activity are less clear. The World Health Organization indicates people worldwide are taking up less active recreational pursuits, while a study from Finland[113] found an increase and a study from the United States found leisure-time physical activity has not changed significantly.[114] A 2011 review of physical activity in children found that it may not be a significant contributor.[115]
In both children and adults, there is an association between television viewing time and the risk of obesity.[116][117][118] A review found 63 of 73 studies (86%) showed an increased rate of childhood obesity with increased media exposure, with rates increasing proportionally to time spent watching television.[119]
Like many other medical conditions, obesity is the result of an interplay between genetic and environmental factors.[121] Polymorphisms in various genes controlling appetite and metabolism predispose to obesity when sufficient food energy is present. As of 2006, more than 41 of these sites on the human genome have been linked to the development of obesity when a favorable environment is present.[122] People with two copies of the FTO gene (fat mass and obesity associated gene) have been found on average to weigh 3–4 kg more and have a 1.67-fold greater risk of obesity compared with those without the risk allele.[123] The percentage of variation in BMI between people that is attributable to genetics ranges from 6% to 85%, depending on the population examined.[124]
Obesity is a major feature in several syndromes, such as Prader–Willi syndrome, Bardet–Biedl syndrome, Cohen syndrome, and MOMO syndrome. (The term "non-syndromic obesity" is sometimes used to exclude these conditions.)[125] In people with early-onset severe obesity (defined by an onset before 10 years of age and body mass index over three standard deviations above normal), 7% harbor a single point DNA mutation.[126]
Studies that have focused on inheritance patterns rather than on specific genes have found that 80% of the offspring of two obese parents were also obese, in contrast to less than 10% of the offspring of two parents who were of normal weight.[127] Different people exposed to the same environment have different risks of obesity due to their underlying genetics.[128]
The thrifty gene hypothesis postulates that, due to dietary scarcity during human evolution, people are prone to obesity. Their ability to take advantage of rare periods of abundance by storing energy as fat would be advantageous during times of varying food availability, and individuals with greater adipose reserves would be more likely to survive famine. This tendency to store fat, however, would be maladaptive in societies with stable food supplies.[129] This theory has received various criticisms, and other evolutionarily-based theories such as the drifty gene hypothesis and the thrifty phenotype hypothesis have also been proposed.[130][131]
Certain physical and mental illnesses and the pharmaceutical substances used to treat them can increase risk of obesity. Medical illnesses that increase obesity risk include several rare genetic syndromes (listed above) as well as some congenital or acquired conditions: hypothyroidism, Cushing's syndrome, growth hormone deficiency,[132] and some eating disorders such as binge eating disorder and night eating syndrome.[2] However, obesity is not regarded as a psychiatric disorder, and therefore is not listed in the DSM-IVR as a psychiatric illness.[133] The risk of overweight and obesity is higher in patients with psychiatric disorders than in persons without psychiatric disorders.[134]
Certain medications may cause weight gain or changes in body composition; these include insulin, sulfonylureas, thiazolidinediones, atypical antipsychotics, antidepressants, steroids, certain anticonvulsants (phenytoin and valproate), pizotifen, and some forms of hormonal contraception.[2]
While genetic influences are important to understanding obesity, they cannot explain the current dramatic increase seen within specific countries or globally.[135] Though it is accepted that energy consumption in excess of energy expenditure leads to obesity on an individual basis, the cause of the shifts in these two factors on the societal scale is much debated. There are a number of theories as to the cause, but most believe it is a combination of various factors.
The correlation between social class and BMI varies globally. A review in 1989 found that in developed countries women of a high social class were less likely to be obese. No significant differences were seen among men of different social classes. In the developing world, women, men, and children from high social classes had greater rates of obesity.[136] An update of this review carried out in 2007 found the same relationships, but they were weaker. The decrease in strength of correlation was felt to be due to the effects of globalization.[137] Among developed countries, levels of adult obesity, and percentage of teenage children who are overweight, are correlated with income inequality. A similar relationship is seen among US states: more adults, even in higher social classes, are obese in more unequal states.[138]
Many explanations have been put forth for associations between BMI and social class. It is thought that in developed countries, the wealthy are able to afford more nutritious food, they are under greater social pressure to remain slim, and have more opportunities along with greater expectations for physical fitness. In undeveloped countries the ability to afford food, high energy expenditure with physical labor, and cultural values favoring a larger body size are believed to contribute to the observed patterns.[137] Attitudes toward body weight held by people in one's life may also play a role in obesity. A correlation in BMI changes over time has been found among friends, siblings, and spouses.[139] Stress and perceived low social status appear to increase risk of obesity.[138][140][141]
Smoking has a significant effect on an individual's weight. Those who quit smoking gain an average of 4.4 kilograms (9.7 lb) for men and 5.0 kilograms (11.0 lb) for women over ten years.[142] However, changing rates of smoking have had little effect on the overall rates of obesity.[143]
In the United States the number of children a person has is related to their risk of obesity. A woman's risk increases by 7% per child, while a man's risk increases by 4% per child.[144] This could be partly explained by the fact that having dependent children decreases physical activity in Western parents.[145]
In the developing world, urbanization is playing a role in increasing rates of obesity. In China overall rates of obesity are below 5%; however, in some cities rates of obesity are greater than 20%.[146]
Malnutrition in early life is believed to play a role in the rising rates of obesity in the developing world.[147] Endocrine changes that occur during periods of malnutrition may promote the storage of fat once more food energy becomes available.[147]
Consistent with cognitive epidemiological data, numerous studies confirm that obesity is associated with cognitive deficits.[148][149]
Whether obesity causes cognitive deficits, or vice versa, is unclear at present.
The study of the effect of infectious agents on metabolism is still in its early stages. Gut flora has been shown to differ between lean and obese people. There is an indication that gut flora can affect the metabolic potential. This apparent alteration is believed to confer a greater capacity to harvest energy contributing to obesity. Whether these differences are the direct cause or the result of obesity has yet to be determined unequivocally.[150] The use of antibiotics among children has also been associated with obesity later in life.[151][152]
An association between viruses and obesity has been found in humans and several different animal species. The amount that these associations may have contributed to the rising rate of obesity is yet to be determined.[153]
A number of reviews have found an association between short duration of sleep and obesity.[154][155] Whether one causes the other is unclear.[154] Even if short sleep does increase weight gain, it is unclear whether this occurs to a meaningful degree or whether increasing sleep would be of benefit.[156]
Certain aspects of personality are associated with being obese.[157] Neuroticism, impulsivity, and sensitivity to reward are more common in people who are obese while conscientiousness and self-control are less common in people who are obese.[157][158] Loneliness is also a risk factor.[159]
There are many possible pathophysiological mechanisms involved in the development and maintenance of obesity.[160] This field of research had been almost unapproached until the leptin gene was discovered in 1994 by J. M. Friedman's laboratory.[161] While leptin and ghrelin are produced peripherally, they control appetite through their actions on the central nervous system. In particular, they and other appetite-related hormones act on the hypothalamus, a region of the brain central to the regulation of food intake and energy expenditure. There are several circuits within the hypothalamus that contribute to its role in integrating appetite, the melanocortin pathway being the most well understood.[160] The circuit begins with an area of the hypothalamus, the arcuate nucleus, that has outputs to the lateral hypothalamus (LH) and ventromedial hypothalamus (VMH), the brain's feeding and satiety centers, respectively.[162]
The arcuate nucleus contains two distinct groups of neurons.[160] The first group coexpresses neuropeptide Y (NPY) and agouti-related peptide (AgRP) and has stimulatory inputs to the LH and inhibitory inputs to the VMH. The second group coexpresses pro-opiomelanocortin (POMC) and cocaine- and amphetamine-regulated transcript (CART) and has stimulatory inputs to the VMH and inhibitory inputs to the LH. Consequently, NPY/AgRP neurons stimulate feeding and inhibit satiety, while POMC/CART neurons stimulate satiety and inhibit feeding. Both groups of arcuate nucleus neurons are regulated in part by leptin. Leptin inhibits the NPY/AgRP group while stimulating the POMC/CART group. Thus a deficiency in leptin signaling, either via leptin deficiency or leptin resistance, leads to overfeeding and may account for some genetic and acquired forms of obesity.[160]
The World Health Organization (WHO) predicts that overweight and obesity may soon replace more traditional public health concerns such as undernutrition and infectious diseases as the most significant cause of poor health.[163] Obesity is a public health and policy problem because of its prevalence, costs, and health effects.[164] The United States Preventive Services Task Force recommends screening for all adults followed by behavioral interventions in those who are obese.[165] Public health efforts seek to understand and correct the environmental factors responsible for the increasing prevalence of obesity in the population. Solutions look at changing the factors that cause excess food energy consumption and inhibit physical activity. Efforts include federally reimbursed meal programs in schools, limiting direct junk food marketing to children,[166] and decreasing access to sugar-sweetened beverages in schools.[167] The World Health Organization recommends the taxing of sugary drinks.[168] When constructing urban environments, efforts have been made to increase access to parks and to develop pedestrian routes.[169] There is low quality evidence that nutritional labelling with energy information on menus can help to reduce energy intake while dining in restaurants.[170]
Many organizations have published reports pertaining to obesity. In 1998, the first US Federal guidelines were published, titled "Clinical Guidelines on the Identification, Evaluation, and Treatment of Overweight and Obesity in Adults: The Evidence Report".[171] In 2006 the Canadian Obesity Network, now known as Obesity Canada, published the "Canadian Clinical Practice Guidelines (CPG) on the Management and Prevention of Obesity in Adults and Children", a comprehensive evidence-based guideline addressing the management and prevention of overweight and obesity in adults and children.[82]
In 2004, the United Kingdom Royal College of Physicians, the Faculty of Public Health and the Royal College of Paediatrics and Child Health released the report "Storing up Problems", which highlighted the growing problem of obesity in the UK.[172] The same year, the House of Commons Health Select Committee published its "most comprehensive inquiry [...] ever undertaken" into the impact of obesity on health and society in the UK and possible approaches to the problem.[173] In 2006, the National Institute for Health and Clinical Excellence (NICE) issued a guideline on the diagnosis and management of obesity, as well as policy implications for non-healthcare organizations such as local councils.[174] A 2007 report produced by Derek Wanless for the King's Fund warned that unless further action was taken, obesity had the capacity to cripple the National Health Service financially.[175]
Comprehensive approaches are being looked at to address the rising rates of obesity. The Obesity Policy Action (OPA) framework divides measures into 'upstream', 'midstream', and 'downstream' policies: 'upstream' policies look at changing society, 'midstream' policies try to alter individuals' behavior to prevent obesity, and 'downstream' policies try to treat currently afflicted people.[176]
The main treatment for obesity consists of weight loss via dieting and physical exercise.[16][82][177][178] Dieting, as part of a lifestyle change, produces sustained weight loss, despite slow weight regain over time.[16][179][180][181] Intensive behavioral interventions combining both dietary changes and exercise are recommended.[16][177][182]
Several diets are effective.[16] In the short term, low-carbohydrate diets appear better than low-fat diets for weight loss.[183] In the long term, however, all types of low-carbohydrate and low-fat diets appear equally beneficial.[183][184] A 2014 review found that the heart disease and diabetes risks associated with different diets appear to be similar.[185] Promotion of the Mediterranean diet among the obese may lower the risk of heart disease.[183] Decreased intake of sweet drinks is also related to weight loss.[183] Success rates of long-term weight loss maintenance with lifestyle changes are low, ranging from 2–20%.[186] Dietary and lifestyle changes are effective in limiting excessive weight gain in pregnancy and improve outcomes for both the mother and the child.[187] Intensive behavioral counseling is recommended in those who are both obese and have other risk factors for heart disease.[188]
Five medications have evidence for long-term use: orlistat, lorcaserin, liraglutide, phentermine–topiramate, and naltrexone–bupropion.[189] They result in weight loss after one year ranging from 3.0 to 6.7 kg (6.6–14.8 lb) over placebo.[189] Orlistat, liraglutide, and naltrexone–bupropion are available in both the United States and Europe, whereas lorcaserin and phentermine–topiramate are available only in the United States.[190] European regulatory authorities rejected the latter two drugs, in part because of associations of heart valve problems with lorcaserin and more general heart and blood vessel problems with phentermine–topiramate.[190] Lorcaserin was available in the United States but was removed from the market in 2020 due to its association with cancer.[191] Orlistat use is associated with high rates of gastrointestinal side effects,[192] and concerns have been raised about negative effects on the kidneys.[193] There is no information on how these drugs affect longer-term complications of obesity such as cardiovascular disease or death.[5]
The most effective treatment for obesity is bariatric surgery.[6][16] The types of procedures include laparoscopic adjustable gastric banding, Roux-en-Y gastric bypass, vertical-sleeve gastrectomy, and biliopancreatic diversion.[189] Surgery for severe obesity is associated with long-term weight loss, improvement in obesity-related conditions,[194] and decreased overall mortality. One study found a weight loss of between 14% and 25% (depending on the type of procedure performed) at 10 years, and a 29% reduction in all cause mortality when compared to standard weight loss measures.[195] Complications occur in about 17% of cases and reoperation is needed in 7% of cases.[194] Due to its cost and risks, researchers are searching for other effective yet less invasive treatments including devices that occupy space in the stomach.[196] For adults who have not responded to behavioral treatments with or without medication, the US guidelines on obesity recommend informing them about bariatric surgery.[177]
In earlier historical periods obesity was rare and achievable only by a small elite, although it was already recognised as a problem for health. But as prosperity increased in the Early Modern period, it affected increasingly larger groups of the population.[199]
In 1997 the WHO formally recognized obesity as a global epidemic.[97] As of 2008 the WHO estimates that at least 500 million adults (greater than 10%) are obese, with higher rates among women than men.[200] The percentage of adults affected in the United States as of 2015–2016 is about 39.6% overall (37.9% of males and 41.1% of females).[201]
The rate of obesity also increases with age at least up to 50 or 60 years old[202] and severe obesity in the United States, Australia, and Canada is increasing faster than the overall rate of obesity.[30][203][204] The OECD has projected an increase in obesity rates until at least 2030, especially in the United States, Mexico and England with rates reaching 47%, 39% and 35% respectively.[205]
Once considered a problem only of high-income countries, obesity is now rising worldwide, affecting both the developed and the developing world.[45] These increases have been felt most dramatically in urban settings.[200] The only remaining region of the world where obesity is not common is sub-Saharan Africa.[2]
Obesity is from the Latin obesitas, which means "stout, fat, or plump". Ēsus is the past participle of edere (to eat), with ob (over) added to it.[206] The Oxford English Dictionary documents its first usage in 1611 by Randle Cotgrave.[207]
Ancient Greek medicine recognized obesity as a medical disorder, and recorded that the Ancient Egyptians saw it in the same way.[199] Hippocrates wrote that "Corpulence is not only a disease itself, but the harbinger of others".[2] The Indian surgeon Sushruta (6th century BCE) related obesity to diabetes and heart disorders.[209] He recommended physical work to help cure it and its side effects.[209] For most of human history mankind struggled with food scarcity.[210] Obesity has thus historically been viewed as a sign of wealth and prosperity. It was common among high officials in Europe in the Middle Ages and the Renaissance[208] as well as in Ancient East Asian civilizations.[211] In the 17th century, English medical author Tobias Venner is credited with being one of the first to refer to obesity as a societal disease in a published English-language book.[199][212]
With the onset of the Industrial Revolution it was realized that the military and economic might of nations were dependent on both the body size and strength of their soldiers and workers.[97] Increasing the average body mass index from what is now considered underweight to what is now the normal range played a significant role in the development of industrialized societies.[97] Height and weight thus both increased through the 19th century in the developed world. During the 20th century, as populations reached their genetic potential for height, weight began increasing much more than height, resulting in obesity.[97] In the 1950s increasing wealth in the developed world decreased child mortality, but as body weight increased heart and kidney disease became more common.[97][213]
During this time period, insurance companies realized the connection between weight and life expectancy and increased premiums for the obese.[2]
Many cultures throughout history have viewed obesity as the result of a character flaw. The obesus or fat character in Ancient Greek comedy was a glutton and figure of mockery. During Christian times food was viewed as a gateway to the sins of sloth and lust.[15] In modern Western culture, excess weight is often regarded as unattractive, and obesity is commonly associated with various negative stereotypes. People of all ages can face social stigmatization, and may be targeted by bullies or shunned by their peers.[214]
Public perceptions in Western society regarding healthy body weight differ from those regarding the weight that is considered ideal – and both have changed since the beginning of the 20th century. The weight that is viewed as an ideal has become lower since the 1920s. This is illustrated by the fact that the average height of Miss America pageant winners increased by 2% from 1922 to 1999, while their average weight decreased by 12%.[215] On the other hand, people's views concerning healthy weight have changed in the opposite direction. In Britain, the weight at which people considered themselves to be overweight was significantly higher in 2007 than in 1999.[216] These changes are believed to be due to increasing rates of adiposity leading to increased acceptance of extra body fat as being normal.[216]
Obesity is still seen as a sign of wealth and well-being in many parts of Africa. This has become particularly common since the HIV epidemic began.[2]
The first sculptural representations of the human body 20,000–35,000 years ago depict obese females. Some attribute the Venus figurines to the tendency to emphasize fertility while others feel they represent "fatness" in the people of the time.[15] Corpulence is, however, absent in both Greek and Roman art, probably in keeping with their ideals regarding moderation. This continued through much of Christian European history, with only those of low socioeconomic status being depicted as obese.[15]
During the Renaissance some of the upper class began flaunting their large size, as can be seen in portraits of Henry VIII of England and Alessandro dal Borro.[15] Rubens (1577–1640) regularly depicted full-bodied women in his pictures, from which derives the term Rubenesque. These women, however, still maintained the "hourglass" shape with its relationship to fertility.[217] During the 19th century, views on obesity changed in the Western world. After centuries of obesity being synonymous with wealth and social status, slimness began to be seen as the desirable standard.[15]
In addition to its health impacts, obesity leads to many problems including disadvantages in employment[218][219] and increased business costs. These effects are felt by all levels of society from individuals, to corporations, to governments.
In 2005, the medical costs attributable to obesity in the US were an estimated $190.2 billion or 20.6% of all medical expenditures,[220][221][222] while the cost of obesity in Canada was estimated at CA$2 billion in 1997 (2.4% of total health costs).[82] The total annual direct cost of overweight and obesity in Australia in 2005 was A$21 billion. Overweight and obese Australians also received A$35.6 billion in government subsidies.[223] The estimate range for annual expenditures on diet products is $40 billion to $100 billion in the US alone.[224]
The Lancet Commission on Obesity in 2019 called for a global treaty, modelled on the WHO Framework Convention on Tobacco Control, committing countries to address obesity and undernutrition, explicitly excluding the food industry from policy development. The commission estimated the global cost of obesity at $2 trillion a year, or about 2.8% of world GDP.[225]
Obesity prevention programs have been found to reduce the cost of treating obesity-related disease. However, the longer people live, the more medical costs they incur. Researchers, therefore, conclude that reducing obesity may improve the public's health, but it is unlikely to reduce overall health spending.[226]
Obesity can lead to social stigmatization and disadvantages in employment.[218] When compared to their normal weight counterparts, obese workers on average have higher rates of absenteeism from work and take more disability leave, thus increasing costs for employers and decreasing productivity.[228] A study examining Duke University employees found that people with a BMI over 40 kg/m2 filed twice as many workers' compensation claims as those whose BMI was 18.5–24.9 kg/m2. They also had more than 12 times as many lost work days. The most common injuries in this group were due to falls and lifting, thus affecting the lower extremities, wrists or hands, and backs.[229] The Alabama State Employees' Insurance Board approved a controversial plan to charge obese workers $25 a month for health insurance that would otherwise be free unless they take steps to lose weight and improve their health. These measures started in January 2010 and apply to those state workers whose BMI exceeds 35 kg/m2 and who fail to make improvements in their health after one year.[230]
Some research shows that obese people are less likely to be hired for a job and are less likely to be promoted.[214] Obese people are also paid less than their non-obese counterparts for an equivalent job; obese women on average make 6% less and obese men make 3% less.[231]
Specific industries, such as the airline, healthcare and food industries, have special concerns. Due to rising rates of obesity, airlines face higher fuel costs and pressures to increase seating width.[232] In 2000, the extra weight of obese passengers cost airlines US$275 million.[233] The healthcare industry has had to invest in special facilities for handling severely obese patients, including special lifting equipment and bariatric ambulances.[234] Costs for restaurants are increased by litigation accusing them of causing obesity.[235] In 2005 the US Congress discussed legislation to prevent civil lawsuits against the food industry in relation to obesity; however, it did not become law.[235]
With the American Medical Association's 2013 classification of obesity as a chronic disease,[17] it is thought that health insurance companies will more likely pay for obesity treatment, counseling and surgery, and the cost of research and development of fat treatment pills or gene therapy treatments should be more affordable if insurers help to subsidize their cost.[236] The AMA classification is not legally binding, however, so health insurers still have the right to reject coverage for a treatment or procedure.[236]
In 2014, the European Court of Justice ruled that morbid obesity is a disability. The Court said that if an employee's obesity prevents "full and effective participation of that person in professional life on an equal basis with other workers", then it shall be considered a disability, and that firing someone on such grounds is discriminatory.[237]
The principal goal of the fat acceptance movement is to decrease discrimination against people who are overweight and obese.[238][239] However, some in the movement are also attempting to challenge the established relationship between obesity and negative health outcomes.[240]
A number of organizations exist that promote the acceptance of obesity. They have increased in prominence in the latter half of the 20th century.[241] The US-based National Association to Advance Fat Acceptance (NAAFA) was formed in 1969 and describes itself as a civil rights organization dedicated to ending size discrimination.[242]
The International Size Acceptance Association (ISAA) is a non-governmental organization (NGO) which was founded in 1997. It has more of a global orientation and describes its mission as promoting size acceptance and helping to end weight-based discrimination.[243] These groups often argue for the recognition of obesity as a disability under the US Americans With Disabilities Act (ADA). The American legal system, however, has decided that the potential public health costs exceed the benefits of extending this anti-discrimination law to cover obesity.[240]
In 2015 the New York Times published an article on the Global Energy Balance Network, a nonprofit founded in 2014 that advocated for people to focus on increasing exercise rather than reducing calorie intake to avoid obesity and to be healthy. The organization was founded with at least $1.5M in funding from the Coca-Cola Company, and the company has provided $4M in research funding to the two founding scientists Gregory A. Hand and Steven N. Blair since 2008.[244][245]
The healthy BMI range varies with the age and sex of the child. Obesity in children and adolescents is defined as a BMI greater than the 95th percentile.[24] The reference data that these percentiles are based on is from 1963 to 1994 and thus has not been affected by the recent increases in rates of obesity.[25] Childhood obesity has reached epidemic proportions in the 21st century, with rising rates in both the developed and the developing world. Rates of obesity in Canadian boys have increased from 11% in the 1980s to over 30% in the 1990s, while during this same time period rates increased from 4 to 14% in Brazilian children.[246] In the UK, there were 60% more obese children in 2005 compared to 1989.[247] In the US, the percentage of overweight and obese children increased to 16% in 2008, a 300% increase over the prior 30 years.[248]
As with obesity in adults, many factors contribute to the rising rates of childhood obesity. Changing diet and decreasing physical activity are believed to be the two most important causes for the recent increase in the incidence of child obesity.[249] Antibiotics in the first six months of life have been associated with excess weight at seven to twelve years of age.[152] Because childhood obesity often persists into adulthood and is associated with numerous chronic illnesses, children who are obese are often tested for hypertension, diabetes, hyperlipidemia, and fatty liver disease.[82] Treatments used in children are primarily lifestyle interventions and behavioral techniques, although efforts to increase activity in children have had little success.[250] In the United States, medications are not FDA approved for use in this age group.[246] Multi-component behaviour change interventions that include changes to diet and physical activity may reduce BMI in the short term in children aged 6 to 11 years, although the benefits are small and the quality of evidence is low.[251]
Obesity in pets is common in many countries. In the United States, 23–41% of dogs are overweight, and about 5.1% are obese.[252] The rate of obesity in cats was slightly higher at 6.4%.[252] In Australia the rate of obesity among dogs in a veterinary setting has been found to be 7.6%.[253] The risk of obesity in dogs is related to whether or not their owners are obese; however, there is no similar correlation between cats and their owners.[254]
en/4209.html.txt
ADDED
@@ -0,0 +1,181 @@
Obesity is a medical condition in which excess body fat has accumulated to an extent that it may have a negative effect on health.[1] People are generally considered obese when their body mass index (BMI), a measurement obtained by dividing a person's weight by the square of the person's height, is over 30 kg/m2; the range 25–30 kg/m2 is defined as overweight.[1] Some East Asian countries use lower values.[8] Obesity increases the likelihood of various diseases and conditions, particularly cardiovascular diseases, type 2 diabetes, obstructive sleep apnea, certain types of cancer, osteoarthritis, and depression.[2][3]
Obesity is most commonly caused by a combination of excessive food intake, lack of physical activity, and genetic susceptibility.[1][4] A few cases are caused primarily by genes, endocrine disorders, medications, or mental disorder.[9] The view that obese people eat little yet gain weight due to a slow metabolism is not medically supported.[10] On average, obese people have a greater energy expenditure than their normal counterparts due to the energy required to maintain an increased body mass.[10][11]
Obesity is mostly preventable through a combination of social changes and personal choices.[1] Changes to diet and exercising are the main treatments.[2] Diet quality can be improved by reducing the consumption of energy-dense foods, such as those high in fat or sugars, and by increasing the intake of dietary fiber.[1] Medications can be used, along with a suitable diet, to reduce appetite or decrease fat absorption.[5] If diet, exercise, and medication are not effective, a gastric balloon or surgery may be performed to reduce stomach volume or length of the intestines, leading to feeling full earlier or a reduced ability to absorb nutrients from food.[6][12]
Obesity is a leading preventable cause of death worldwide, with increasing rates in adults and children.[1][13] In 2015, 600 million adults (12%) and 100 million children were obese in 195 countries.[7] Obesity is more common in women than men.[1] Authorities view it as one of the most serious public health problems of the 21st century.[14] Obesity is stigmatized in much of the modern world (particularly in the Western world), though it was seen as a symbol of wealth and fertility at other times in history and still is in some parts of the world.[2][15] In 2013, several medical societies, including the American Medical Association and the American Heart Association, classified obesity as a disease.[16][17][18]
Obesity is a medical condition in which excess body fat has accumulated to the extent that it may have an adverse effect on health.[20] It is defined by body mass index (BMI) and further evaluated in terms of fat distribution via the waist–hip ratio and total cardiovascular risk factors.[21][22] BMI is closely related to both percentage body fat and total body fat.[23]
In children, a healthy weight varies with age and sex. Obesity in children and adolescents is defined not as an absolute number but in relation to a historical normal group, such that obesity is a BMI greater than the 95th percentile.[24] The reference data on which these percentiles were based date from 1963 to 1994, and thus have not been affected by the recent increases in weight.[25] BMI is defined as the subject's weight in kilograms divided by the square of their height in metres.
BMI is usually expressed in kilograms of weight per metre squared of height. To convert from pounds per inch squared multiply by 703 (kg/m2)/(lb/sq in).[26]
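As a quick illustration of the definition and the unit conversion above, here is a minimal sketch (the function names are our own, not from any standard library):

```python
def bmi_metric(weight_kg: float, height_m: float) -> float:
    """BMI = weight in kilograms divided by the square of height in metres."""
    return weight_kg / height_m ** 2

def bmi_imperial(weight_lb: float, height_in: float) -> float:
    """Same quantity from pounds and inches, using the 703 conversion factor."""
    return 703 * weight_lb / height_in ** 2

# A 95 kg person who is 1.75 m tall has a BMI of about 31 kg/m2:
print(round(bmi_metric(95, 1.75), 1))  # 31.0
```

The two functions agree to within rounding: 70 kg at 1.75 m gives essentially the same value as the equivalent 154.3 lb at 68.9 in.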
The most commonly used definitions, established by the World Health Organization (WHO) in 1997 and published in 2000, provide the values listed in the table.[27][28]
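The WHO cut-off values referred to in that table are widely reproduced (18.5, 25, 30, 35, and 40 kg/m2); the following sketch shows how they partition the BMI scale — the category labels here are common shorthand rather than quotations from the WHO report:

```python
def who_category(bmi: float) -> str:
    """Map a BMI value (kg/m2) onto the WHO 2000 weight classes."""
    if bmi < 18.5:
        return "underweight"
    elif bmi < 25:
        return "normal weight"
    elif bmi < 30:
        return "overweight"
    elif bmi < 35:
        return "obese class I"
    elif bmi < 40:
        return "obese class II"
    return "obese class III"

print(who_category(27.3))  # overweight
print(who_category(42.0))  # obese class III
```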
Some modifications to the WHO definitions have been made by particular organizations.[29] The surgical literature breaks down class II and III obesity into further categories whose exact values are still disputed.[30]
As Asian populations develop negative health consequences at a lower BMI than Caucasians, some nations have redefined obesity; Japan has defined obesity as any BMI greater than 25 kg/m2[8] while China uses a BMI of greater than 28 kg/m2.[29]
Excessive body weight is associated with various diseases and conditions, particularly cardiovascular diseases, diabetes mellitus type 2, obstructive sleep apnea, certain types of cancer, osteoarthritis,[2] and asthma.[2][31] As a result, obesity has been found to reduce life expectancy.[2]
Obesity is one of the leading preventable causes of death worldwide.[33][34][35] A number of reviews have found that mortality risk is lowest at a BMI of 20–25 kg/m2[36][37][38] in non-smokers and at 24–27 kg/m2 in current smokers, with risk increasing along with changes in either direction.[39][40] This appears to apply in at least four continents.[38] In contrast, a 2013 review found that grade 1 obesity (BMI 30–35) was not associated with higher mortality than normal weight, and that overweight (BMI 25–30) was associated with "lower" mortality than was normal weight (BMI 18.5–25).[41] Other evidence suggests that the association of BMI and waist circumference with mortality is U- or J-shaped, while the association between waist-to-hip ratio and waist-to-height ratio with mortality is more positive.[42] In Asians the risk of negative health effects begins to increase between 22–25 kg/m2.[43] A BMI above 32 kg/m2 has been associated with a doubled mortality rate among women over a 16-year period.[44] In the United States, obesity is estimated to cause 111,909 to 365,000 deaths per year,[2][35] while 1 million (7.7%) of deaths in Europe are attributed to excess weight.[45][46] On average, obesity reduces life expectancy by six to seven years,[2][47] a BMI of 30–35 kg/m2 reduces life expectancy by two to four years,[37] while severe obesity (BMI > 40 kg/m2) reduces life expectancy by ten years.[37]
Obesity increases the risk of many physical and mental conditions. These comorbidities are most commonly shown in metabolic syndrome,[2] a combination of medical disorders which includes: diabetes mellitus type 2, high blood pressure, high blood cholesterol, and high triglyceride levels.[48]
Complications are either directly caused by obesity or indirectly related through mechanisms sharing a common cause such as a poor diet or a sedentary lifestyle. The strength of the link between obesity and specific conditions varies. One of the strongest is the link with type 2 diabetes. Excess body fat underlies 64% of cases of diabetes in men and 77% of cases in women.[49]
Health consequences fall into two broad categories: those attributable to the effects of increased fat mass (such as osteoarthritis, obstructive sleep apnea, social stigmatization) and those due to the increased number of fat cells (diabetes, cancer, cardiovascular disease, non-alcoholic fatty liver disease).[2][50] Increases in body fat alter the body's response to insulin, potentially leading to insulin resistance. Increased fat also creates a proinflammatory state,[51][52] and a prothrombotic state.[50][53]
Although the negative health consequences of obesity in the general population are well supported by the available evidence, health outcomes in certain subgroups seem to be improved at an increased BMI, a phenomenon known as the obesity survival paradox.[75] The paradox was first described in 1999 in overweight and obese people undergoing hemodialysis,[75] and has subsequently been found in those with heart failure and peripheral artery disease (PAD).[76]
In people with heart failure, those with a BMI between 30.0 and 34.9 had lower mortality than those with a normal weight. This has been attributed to the fact that people often lose weight as they become progressively more ill.[77] Similar findings have been made in other types of heart disease. People with class I obesity and heart disease do not have greater rates of further heart problems than people of normal weight who also have heart disease. In people with greater degrees of obesity, however, the risk of further cardiovascular events is increased.[78][79] Even after cardiac bypass surgery, no increase in mortality is seen in the overweight and obese.[80] One study found that the improved survival could be explained by the more aggressive treatment obese people receive after a cardiac event.[81] Another study found that if one takes into account chronic obstructive pulmonary disease (COPD) in those with PAD, the benefit of obesity no longer exists.[76]
At an individual level, a combination of excessive food energy intake and a lack of physical activity is thought to explain most cases of obesity.[82] A limited number of cases are due primarily to genetics, medical reasons, or psychiatric illness.[9] In contrast, increasing rates of obesity at a societal level are felt to be due to an easily accessible and palatable diet,[83] increased reliance on cars, and mechanized manufacturing.[84][85]
A 2006 review identified ten other possible contributors to the recent increase of obesity: (1) insufficient sleep, (2) endocrine disruptors (environmental pollutants that interfere with lipid metabolism), (3) decreased variability in ambient temperature, (4) decreased rates of smoking, because smoking suppresses appetite, (5) increased use of medications that can cause weight gain (e.g., atypical antipsychotics), (6) proportional increases in ethnic and age groups that tend to be heavier, (7) pregnancy at a later age (which may cause susceptibility to obesity in children), (8) epigenetic risk factors passed on generationally, (9) natural selection for higher BMI, and (10) assortative mating leading to increased concentration of obesity risk factors (this would increase the number of obese people by increasing population variance in weight).[86] According to the Endocrine Society, there is "growing evidence suggesting that obesity is a disorder of the energy homeostasis system, rather than simply arising from the passive accumulation of excess weight".[87]
A 2016 review supported excess food as the primary factor.[89][90] Dietary energy supply per capita varies markedly between different regions and countries. It has also changed significantly over time.[88] From the early 1970s to the late 1990s the average food energy available per person per day (the amount of food bought) increased in all parts of the world except Eastern Europe. The United States had the highest availability with 3,654 calories (15,290 kJ) per person in 1996.[88] This increased further in 2003 to 3,754 calories (15,710 kJ).[88] During the late 1990s Europeans had 3,394 calories (14,200 kJ) per person, in the developing areas of Asia there were 2,648 calories (11,080 kJ) per person, and in sub-Saharan Africa people had 2,176 calories (9,100 kJ) per person.[88][91] Total food energy consumption has been found to be related to obesity.[92]
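The paired calorie and kilojoule figures quoted above follow the standard conversion of roughly 4.184 kJ per (kilo)calorie; a quick check of the 1996 US figure:

```python
KJ_PER_KCAL = 4.184  # standard thermochemical conversion factor

def kcal_to_kj(kcal: float) -> float:
    """Convert food calories (kcal) to kilojoules."""
    return kcal * KJ_PER_KCAL

# 3,654 kcal per person per day (US, 1996):
print(round(kcal_to_kj(3654)))  # 15288, i.e. the ~15,290 kJ quoted, before rounding
```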
The widespread availability of nutritional guidelines[93] has done little to address the problems of overeating and poor dietary choice.[94] From 1971 to 2000, obesity rates in the United States increased from 14.5% to 30.9%.[95] During the same period, an increase occurred in the average amount of food energy consumed. For women, the average increase was 335 calories (1,400 kJ) per day (1,542 calories (6,450 kJ) in 1971 and 1,877 calories (7,850 kJ) in 2004), while for men the average increase was 168 calories (700 kJ) per day (2,450 calories (10,300 kJ) in 1971 and 2,618 calories (10,950 kJ) in 2004). Most of this extra food energy came from an increase in carbohydrate consumption rather than fat consumption.[96] The primary sources of these extra carbohydrates are sweetened beverages, which now account for almost 25 percent of daily food energy in young adults in America,[97] and potato chips.[98] Consumption of sweetened drinks such as soft drinks, fruit drinks, iced tea, and energy and vitamin water drinks is believed to be contributing to the rising rates of obesity[99][100] and to an increased risk of metabolic syndrome and type 2 diabetes.[101] Vitamin D deficiency is related to diseases associated with obesity.[102]
As societies become increasingly reliant on energy-dense, large-portion fast-food meals, the association between fast-food consumption and obesity becomes more concerning.[103] In the United States consumption of fast-food meals tripled and food energy intake from these meals quadrupled between 1977 and 1995.[104]
Agricultural policy and techniques in the United States and Europe have led to lower food prices. In the United States, subsidization of corn, soy, wheat, and rice through the U.S. farm bill has made the main sources of processed food cheap compared to fruits and vegetables.[105] Calorie count laws and nutrition facts labels attempt to steer people toward making healthier food choices, including awareness of how much food energy is being consumed.
Obese people consistently under-report their food consumption as compared to people of normal weight.[106] This is supported both by tests of people carried out in a calorimeter room[107] and by direct observation.
A sedentary lifestyle plays a significant role in obesity.[108] Worldwide there has been a large shift towards less physically demanding work,[109][110][111] and currently at least 30% of the world's population gets insufficient exercise.[110] This is primarily due to increasing use of mechanized transportation and a greater prevalence of labor-saving technology in the home.[109][110][111] In children, there appear to be declines in levels of physical activity due to less walking and physical education.[112] World trends in active leisure time physical activity are less clear. The World Health Organization indicates people worldwide are taking up less active recreational pursuits, while a study from Finland[113] found an increase and a study from the United States found leisure-time physical activity has not changed significantly.[114] A 2011 review of physical activity in children found that it may not be a significant contributor.[115]
In both children and adults, there is an association between television viewing time and the risk of obesity.[116][117][118] A review found 63 of 73 studies (86%) showed an increased rate of childhood obesity with increased media exposure, with rates increasing proportionally to time spent watching television.[119]
Like many other medical conditions, obesity is the result of an interplay between genetic and environmental factors.[121] Polymorphisms in various genes controlling appetite and metabolism predispose to obesity when sufficient food energy is present. As of 2006, more than 41 of these sites on the human genome have been linked to the development of obesity when a favorable environment is present.[122] People with two copies of the FTO gene (fat mass and obesity associated gene) have been found on average to weigh 3–4 kg more and have a 1.67-fold greater risk of obesity compared with those without the risk allele.[123] The percentage of variation in BMI between people that is attributable to genetics varies, depending on the population examined, from 6% to 85%.[124]
Obesity is a major feature in several syndromes, such as Prader–Willi syndrome, Bardet–Biedl syndrome, Cohen syndrome, and MOMO syndrome. (The term "non-syndromic obesity" is sometimes used to exclude these conditions.)[125] In people with early-onset severe obesity (defined by an onset before 10 years of age and body mass index over three standard deviations above normal), 7% harbor a single point DNA mutation.[126]
Studies that have focused on inheritance patterns rather than on specific genes have found that 80% of the offspring of two obese parents were also obese, in contrast to less than 10% of the offspring of two parents who were of normal weight.[127] Different people exposed to the same environment have different risks of obesity due to their underlying genetics.[128]
The thrifty gene hypothesis postulates that, due to dietary scarcity during human evolution, people are prone to obesity. Their ability to take advantage of rare periods of abundance by storing energy as fat would be advantageous during times of varying food availability, and individuals with greater adipose reserves would be more likely to survive famine. This tendency to store fat, however, would be maladaptive in societies with stable food supplies.[129] This theory has received various criticisms, and other evolutionarily-based theories such as the drifty gene hypothesis and the thrifty phenotype hypothesis have also been proposed.[130][131]
Certain physical and mental illnesses and the pharmaceutical substances used to treat them can increase risk of obesity. Medical illnesses that increase obesity risk include several rare genetic syndromes (listed above) as well as some congenital or acquired conditions: hypothyroidism, Cushing's syndrome, growth hormone deficiency,[132] and some eating disorders such as binge eating disorder and night eating syndrome.[2] However, obesity is not regarded as a psychiatric disorder, and therefore is not listed in the DSM-IVR as a psychiatric illness.[133] The risk of overweight and obesity is higher in patients with psychiatric disorders than in persons without psychiatric disorders.[134]
Certain medications may cause weight gain or changes in body composition; these include insulin, sulfonylureas, thiazolidinediones, atypical antipsychotics, antidepressants, steroids, certain anticonvulsants (phenytoin and valproate), pizotifen, and some forms of hormonal contraception.[2]
While genetic influences are important to understanding obesity, they cannot explain the current dramatic increase seen within specific countries or globally.[135] Though it is accepted that energy consumption in excess of energy expenditure leads to obesity on an individual basis, the cause of the shifts in these two factors on the societal scale is much debated. There are a number of theories as to the cause but most believe it is a combination of various factors.
The correlation between social class and BMI varies globally. A review in 1989 found that in developed countries women of a high social class were less likely to be obese. No significant differences were seen among men of different social classes. In the developing world, women, men, and children from high social classes had greater rates of obesity.[136] An update of this review carried out in 2007 found the same relationships, but they were weaker. The decrease in strength of correlation was felt to be due to the effects of globalization.[137] Among developed countries, levels of adult obesity, and percentage of teenage children who are overweight, are correlated with income inequality. A similar relationship is seen among US states: more adults, even in higher social classes, are obese in more unequal states.[138]
Many explanations have been put forth for associations between BMI and social class. It is thought that in developed countries, the wealthy are able to afford more nutritious food, they are under greater social pressure to remain slim, and have more opportunities along with greater expectations for physical fitness. In undeveloped countries the ability to afford food, high energy expenditure with physical labor, and cultural values favoring a larger body size are believed to contribute to the observed patterns.[137] Attitudes toward body weight held by people in one's life may also play a role in obesity. A correlation in BMI changes over time has been found among friends, siblings, and spouses.[139] Stress and perceived low social status appear to increase risk of obesity.[138][140][141]
Smoking has a significant effect on an individual's weight. Those who quit smoking gain an average of 4.4 kilograms (9.7 lb) for men and 5.0 kilograms (11.0 lb) for women over ten years.[142] However, changing rates of smoking have had little effect on the overall rates of obesity.[143]
In the United States the number of children a person has is related to their risk of obesity. A woman's risk increases by 7% per child, while a man's risk increases by 4% per child.[144] This could be partly explained by the fact that having dependent children decreases physical activity in Western parents.[145]
In the developing world urbanization is playing a role in the increasing rate of obesity. In China overall rates of obesity are below 5%; however, in some cities rates of obesity are greater than 20%.[146]
Malnutrition in early life is believed to play a role in the rising rates of obesity in the developing world.[147] Endocrine changes that occur during periods of malnutrition may promote the storage of fat once more food energy becomes available.[147]
Consistent with cognitive epidemiological data, numerous studies confirm that obesity is associated with cognitive deficits.[148][149]
Whether obesity causes cognitive deficits, or vice versa, is unclear at present.
The study of the effect of infectious agents on metabolism is still in its early stages. Gut flora has been shown to differ between lean and obese people. There is an indication that gut flora can affect the metabolic potential. This apparent alteration is believed to confer a greater capacity to harvest energy contributing to obesity. Whether these differences are the direct cause or the result of obesity has yet to be determined unequivocally.[150] The use of antibiotics among children has also been associated with obesity later in life.[151][152]
An association between viruses and obesity has been found in humans and several different animal species. The amount that these associations may have contributed to the rising rate of obesity is yet to be determined.[153]
A number of reviews have found an association between short duration of sleep and obesity.[154][155] Whether one causes the other is unclear.[154] Even if short sleep does increase weight gain, it is unclear whether this is to a meaningful degree or whether increasing sleep would be of benefit.[156]
Certain aspects of personality are associated with being obese.[157] Neuroticism, impulsivity, and sensitivity to reward are more common in people who are obese while conscientiousness and self-control are less common in people who are obese.[157][158] Loneliness is also a risk factor.[159]
There are many possible pathophysiological mechanisms involved in the development and maintenance of obesity.[160] This field of research had been almost unapproached until the leptin gene was discovered in 1994 by J. M. Friedman's laboratory.[161] While leptin and ghrelin are produced peripherally, they control appetite through their actions on the central nervous system. In particular, they and other appetite-related hormones act on the hypothalamus, a region of the brain central to the regulation of food intake and energy expenditure. There are several circuits within the hypothalamus that contribute to its role in integrating appetite, the melanocortin pathway being the most well understood.[160] The circuit begins with an area of the hypothalamus, the arcuate nucleus, that has outputs to the lateral hypothalamus (LH) and ventromedial hypothalamus (VMH), the brain's feeding and satiety centers, respectively.[162]
The arcuate nucleus contains two distinct groups of neurons.[160] The first group coexpresses neuropeptide Y (NPY) and agouti-related peptide (AgRP) and has stimulatory inputs to the LH and inhibitory inputs to the VMH. The second group coexpresses pro-opiomelanocortin (POMC) and cocaine- and amphetamine-regulated transcript (CART) and has stimulatory inputs to the VMH and inhibitory inputs to the LH. Consequently, NPY/AgRP neurons stimulate feeding and inhibit satiety, while POMC/CART neurons stimulate satiety and inhibit feeding. Both groups of arcuate nucleus neurons are regulated in part by leptin. Leptin inhibits the NPY/AgRP group while stimulating the POMC/CART group. Thus a deficiency in leptin signaling, either via leptin deficiency or leptin resistance, leads to overfeeding and may account for some genetic and acquired forms of obesity.[160]
The World Health Organization (WHO) predicts that overweight and obesity may soon replace more traditional public health concerns such as undernutrition and infectious diseases as the most significant cause of poor health.[163] Obesity is a public health and policy problem because of its prevalence, costs, and health effects.[164] The United States Preventive Services Task Force recommends screening for all adults followed by behavioral interventions in those who are obese.[165] Public health efforts seek to understand and correct the environmental factors responsible for the increasing prevalence of obesity in the population. Solutions look at changing the factors that cause excess food energy consumption and inhibit physical activity. Efforts include federally reimbursed meal programs in schools, limiting direct junk food marketing to children,[166] and decreasing access to sugar-sweetened beverages in schools.[167] The World Health Organization recommends the taxing of sugary drinks.[168] When constructing urban environments, efforts have been made to increase access to parks and to develop pedestrian routes.[169] There is low quality evidence that nutritional labelling with energy information on menus can help to reduce energy intake while dining in restaurants.[170]
Many organizations have published reports pertaining to obesity. In 1998, the first US Federal guidelines were published, titled "Clinical Guidelines on the Identification, Evaluation, and Treatment of Overweight and Obesity in Adults: The Evidence Report".[171] In 2006 the Canadian Obesity Network, now known as Obesity Canada, published the "Canadian Clinical Practice Guidelines (CPG) on the Management and Prevention of Obesity in Adults and Children", a comprehensive evidence-based guideline addressing the management and prevention of overweight and obesity in adults and children.[82]
In 2004, the United Kingdom Royal College of Physicians, the Faculty of Public Health and the Royal College of Paediatrics and Child Health released the report "Storing up Problems", which highlighted the growing problem of obesity in the UK.[172] The same year, the House of Commons Health Select Committee published its "most comprehensive inquiry [...] ever undertaken" into the impact of obesity on health and society in the UK and possible approaches to the problem.[173] In 2006, the National Institute for Health and Clinical Excellence (NICE) issued a guideline on the diagnosis and management of obesity, as well as policy implications for non-healthcare organizations such as local councils.[174] A 2007 report produced by Derek Wanless for the King's Fund warned that unless further action was taken, obesity had the capacity to cripple the National Health Service financially.[175]
Comprehensive approaches are being looked at to address the rising rates of obesity. The Obesity Policy Action (OPA) framework divides measures into 'upstream', 'midstream', and 'downstream' policies. 'Upstream' policies look at changing society, 'midstream' policies try to alter individuals' behavior to prevent obesity, and 'downstream' policies try to treat currently afflicted people.[176]
The main treatment for obesity consists of weight loss via dieting and physical exercise.[16][82][177][178] Dieting, as part of a lifestyle change, produces sustained weight loss, despite slow weight regain over time.[16][179][180][181] Intensive behavioral interventions combining both dietary changes and exercise are recommended.[16][177][182]
Several diets are effective.[16] In the short term low-carbohydrate diets appear better than low-fat diets for weight loss.[183] In the long term, however, all types of low-carbohydrate and low-fat diets appear equally beneficial.[183][184] A 2014 review found that the heart disease and diabetes risks associated with different diets appear to be similar.[185] Promotion of the Mediterranean diet among the obese may lower the risk of heart disease.[183] Decreased intake of sweet drinks is also related to weight loss.[183] Success rates of long-term weight loss maintenance with lifestyle changes are low, ranging from 2–20%.[186] Dietary and lifestyle changes are effective in limiting excessive weight gain in pregnancy and improve outcomes for both the mother and the child.[187] Intensive behavioral counseling is recommended in those who are both obese and have other risk factors for heart disease.[188]
Five medications have evidence for long-term use: orlistat, lorcaserin, liraglutide, phentermine–topiramate, and naltrexone–bupropion.[189] They result in weight loss after one year of 3.0 to 6.7 kg (6.6–14.8 lb) over placebo.[189] Orlistat, liraglutide, and naltrexone–bupropion are available in both the United States and Europe, whereas lorcaserin and phentermine–topiramate are available only in the United States.[190] European regulatory authorities rejected the latter two drugs, in part because of associations of heart valve problems with lorcaserin and more general heart and blood vessel problems with phentermine–topiramate.[190] Lorcaserin was subsequently removed from the United States market in 2020 due to its association with cancer.[191] Orlistat use is associated with high rates of gastrointestinal side effects,[192] and concerns have been raised about negative effects on the kidneys.[193] There is no information on how these drugs affect longer-term complications of obesity such as cardiovascular disease or death.[5]
The most effective treatment for obesity is bariatric surgery.[6][16] The types of procedures include laparoscopic adjustable gastric banding, Roux-en-Y gastric bypass, vertical-sleeve gastrectomy, and biliopancreatic diversion.[189] Surgery for severe obesity is associated with long-term weight loss, improvement in obesity-related conditions,[194] and decreased overall mortality. One study found a weight loss of between 14% and 25% (depending on the type of procedure performed) at 10 years, and a 29% reduction in all cause mortality when compared to standard weight loss measures.[195] Complications occur in about 17% of cases and reoperation is needed in 7% of cases.[194] Due to its cost and risks, researchers are searching for other effective yet less invasive treatments including devices that occupy space in the stomach.[196] For adults who have not responded to behavioral treatments with or without medication, the US guidelines on obesity recommend informing them about bariatric surgery.[177]
In earlier historical periods obesity was rare, and achievable only by a small elite, although it was already recognized as a problem for health. As prosperity increased in the Early Modern period, it affected increasingly larger groups of the population.[199]
In 1997 the WHO formally recognized obesity as a global epidemic.[97] As of 2008 the WHO estimates that at least 500 million adults (greater than 10%) are obese, with higher rates among women than men.[200] The percentage of adults affected in the United States as of 2015–2016 is about 39.6% overall (37.9% of males and 41.1% of females).[201]
The rate of obesity also increases with age at least up to 50 or 60 years old[202] and severe obesity in the United States, Australia, and Canada is increasing faster than the overall rate of obesity.[30][203][204] The OECD has projected an increase in obesity rates until at least 2030, especially in the United States, Mexico and England with rates reaching 47%, 39% and 35% respectively.[205]
Once considered a problem only of high-income countries, obesity is now rising worldwide, affecting both the developed and developing world.[45] These increases have been felt most dramatically in urban settings.[200] The only remaining region of the world where obesity is not common is sub-Saharan Africa.[2]
Obesity is from the Latin obesitas, which means "stout, fat, or plump". Ēsus is the past participle of edere (to eat), with ob (over) added to it.[206] The Oxford English Dictionary documents its first usage in 1611 by Randle Cotgrave.[207]
Ancient Greek medicine recognized obesity as a medical disorder, and recorded that the Ancient Egyptians saw it in the same way.[199] Hippocrates wrote that "Corpulence is not only a disease itself, but the harbinger of others".[2] The Indian surgeon Sushruta (6th century BCE) related obesity to diabetes and heart disorders.[209] He recommended physical work to help cure it and its side effects.[209] For most of human history, mankind struggled with food scarcity.[210] Obesity has thus historically been viewed as a sign of wealth and prosperity. It was common among high officials in Europe in the Middle Ages and the Renaissance[208] as well as in Ancient East Asian civilizations.[211] In the 17th century, the English medical author Tobias Venner is credited with being one of the first to refer to obesity as a societal disease in a published English-language book.[199][212]
With the onset of the Industrial Revolution, it was realized that the military and economic might of nations was dependent on both the body size and strength of their soldiers and workers.[97] Increasing the average body mass index from what is now considered underweight to what is now the normal range played a significant role in the development of industrialized societies.[97] Height and weight thus both increased through the 19th century in the developed world. During the 20th century, as populations reached their genetic potential for height, weight began increasing much more than height, resulting in obesity.[97] In the 1950s, increasing wealth in the developed world decreased child mortality, but as body weight increased, heart and kidney disease became more common.[97][213]
During this time period, insurance companies realized the connection between weight and life expectancy and increased premiums for the obese.[2]
Many cultures throughout history have viewed obesity as the result of a character flaw. The obesus or fat character in Ancient Greek comedy was a glutton and figure of mockery. During Christian times, food was viewed as a gateway to the sins of sloth and lust.[15] In modern Western culture, excess weight is often regarded as unattractive, and obesity is commonly associated with various negative stereotypes. People of all ages can face social stigmatization, and may be targeted by bullies or shunned by their peers.[214]
Public perceptions in Western society regarding healthy body weight differ from those regarding the weight that is considered ideal – and both have changed since the beginning of the 20th century. The weight that is viewed as an ideal has become lower since the 1920s. This is illustrated by the fact that the average height of Miss America pageant winners increased by 2% from 1922 to 1999, while their average weight decreased by 12%.[215] On the other hand, people's views concerning healthy weight have changed in the opposite direction. In Britain, the weight at which people considered themselves to be overweight was significantly higher in 2007 than in 1999.[216] These changes are believed to be due to increasing rates of adiposity leading to increased acceptance of extra body fat as being normal.[216]
Obesity is still seen as a sign of wealth and well-being in many parts of Africa. This has become particularly common since the HIV epidemic began.[2]
The first sculptural representations of the human body 20,000–35,000 years ago depict obese females. Some attribute the Venus figurines to the tendency to emphasize fertility while others feel they represent "fatness" in the people of the time.[15] Corpulence is, however, absent in both Greek and Roman art, probably in keeping with their ideals regarding moderation. This continued through much of Christian European history, with only those of low socioeconomic status being depicted as obese.[15]
During the Renaissance some of the upper class began flaunting their large size, as can be seen in portraits of Henry VIII of England and Alessandro dal Borro.[15] Rubens (1577–1640) regularly depicted full-bodied women in his pictures, from which derives the term Rubenesque. These women, however, still maintained the "hourglass" shape with its relationship to fertility.[217] During the 19th century, views on obesity changed in the Western world. After centuries of obesity being synonymous with wealth and social status, slimness began to be seen as the desirable standard.[15]
In addition to its health impacts, obesity leads to many problems including disadvantages in employment[218][219] and increased business costs. These effects are felt by all levels of society from individuals, to corporations, to governments.
In 2005, the medical costs attributable to obesity in the US were an estimated $190.2 billion or 20.6% of all medical expenditures,[220][221][222] while the cost of obesity in Canada was estimated at CA$2 billion in 1997 (2.4% of total health costs).[82] The total annual direct cost of overweight and obesity in Australia in 2005 was A$21 billion. Overweight and obese Australians also received A$35.6 billion in government subsidies.[223] The estimate range for annual expenditures on diet products is $40 billion to $100 billion in the US alone.[224]
The Lancet Commission on Obesity in 2019 called for a global treaty, modelled on the WHO Framework Convention on Tobacco Control, committing countries to address obesity and undernutrition, explicitly excluding the food industry from policy development. They estimate the global cost of obesity at $2 trillion a year, or about 2.8% of world GDP.[225]
Obesity prevention programs have been found to reduce the cost of treating obesity-related disease. However, the longer people live, the more medical costs they incur. Researchers, therefore, conclude that reducing obesity may improve the public's health, but it is unlikely to reduce overall health spending.[226]
Obesity can lead to social stigmatization and disadvantages in employment.[218] When compared to their normal weight counterparts, obese workers on average have higher rates of absenteeism from work and take more disability leave, thus increasing costs for employers and decreasing productivity.[228] A study examining Duke University employees found that people with a BMI over 40 kg/m2 filed twice as many workers' compensation claims as those whose BMI was 18.5–24.9 kg/m2. They also had more than 12 times as many lost work days. The most common injuries in this group were due to falls and lifting, thus affecting the lower extremities, wrists or hands, and backs.[229] The Alabama State Employees' Insurance Board approved a controversial plan to charge obese workers $25 a month for health insurance that would otherwise be free unless they take steps to lose weight and improve their health. These measures started in January 2010 and apply to those state workers whose BMI exceeds 35 kg/m2 and who fail to make improvements in their health after one year.[230]
Some research shows that obese people are less likely to be hired for a job and are less likely to be promoted.[214] Obese people are also paid less than their non-obese counterparts for an equivalent job; obese women on average make 6% less and obese men make 3% less.[231]
Specific industries, such as the airline, healthcare and food industries, have special concerns. Due to rising rates of obesity, airlines face higher fuel costs and pressures to increase seating width.[232] In 2000, the extra weight of obese passengers cost airlines US$275 million.[233] The healthcare industry has had to invest in special facilities for handling severely obese patients, including special lifting equipment and bariatric ambulances.[234] Costs for restaurants are increased by litigation accusing them of causing obesity.[235] In 2005 the US Congress discussed legislation to prevent civil lawsuits against the food industry in relation to obesity; however, it did not become law.[235]
With the American Medical Association's 2013 classification of obesity as a chronic disease,[17] it is thought that health insurance companies will more likely pay for obesity treatment, counseling and surgery, and the cost of research and development of fat treatment pills or gene therapy treatments should be more affordable if insurers help to subsidize their cost.[236] The AMA classification is not legally binding, however, so health insurers still have the right to reject coverage for a treatment or procedure.[236]
In 2014, the European Court of Justice ruled that morbid obesity is a disability. The Court said that if an employee's obesity prevents them from "full and effective participation of that person in professional life on an equal basis with other workers", then it shall be considered a disability and that firing someone on such grounds is discriminatory.[237]
The principal goal of the fat acceptance movement is to decrease discrimination against people who are overweight and obese.[238][239] However, some in the movement are also attempting to challenge the established relationship between obesity and negative health outcomes.[240]
A number of organizations exist that promote the acceptance of obesity. They have increased in prominence in the latter half of the 20th century.[241] The US-based National Association to Advance Fat Acceptance (NAAFA) was formed in 1969 and describes itself as a civil rights organization dedicated to ending size discrimination.[242]
The International Size Acceptance Association (ISAA) is a non-governmental organization (NGO) which was founded in 1997. It has more of a global orientation and describes its mission as promoting size acceptance and helping to end weight-based discrimination.[243] These groups often argue for the recognition of obesity as a disability under the US Americans With Disabilities Act (ADA). The American legal system, however, has decided that the potential public health costs exceed the benefits of extending this anti-discrimination law to cover obesity.[240]
In 2015 the New York Times published an article on the Global Energy Balance Network, a nonprofit founded in 2014 that advocated for people to focus on increasing exercise rather than reducing calorie intake to avoid obesity and to be healthy. The organization was founded with at least $1.5M in funding from the Coca-Cola Company, and the company has provided $4M in research funding to the two founding scientists Gregory A. Hand and Steven N. Blair since 2008.[244][245]
The healthy BMI range varies with the age and sex of the child. Obesity in children and adolescents is defined as a BMI greater than the 95th percentile.[24] The reference data that these percentiles are based on is from 1963 to 1994 and thus has not been affected by the recent increases in rates of obesity.[25] Childhood obesity has reached epidemic proportions in the 21st century, with rising rates in both the developed and the developing world. Rates of obesity in Canadian boys have increased from 11% in the 1980s to over 30% in the 1990s, while during this same time period rates increased from 4 to 14% in Brazilian children.[246] In the UK, there were 60% more obese children in 2005 compared to 1989.[247] In the US, the percentage of overweight and obese children increased to 16% in 2008, a 300% increase over the prior 30 years.[248]
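As a sketch of the arithmetic behind these definitions: BMI is weight in kilograms divided by the square of height in metres, and a child counts as obese when their BMI exceeds the 95th-percentile value for their age and sex. The cutoff used below (22.0 kg/m²) is a made-up number for illustration only; real cutoffs come from the age- and sex-specific reference tables mentioned above.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def is_obese_child(child_bmi: float, p95_cutoff: float) -> bool:
    """Childhood obesity: BMI above the 95th percentile for age and sex."""
    return child_bmi > p95_cutoff

# A hypothetical 10-year-old weighing 45 kg at 1.40 m tall; 22.0 kg/m^2
# stands in for a 95th-percentile cutoff purely for illustration.
child = bmi(45, 1.40)
print(round(child, 2))              # 22.96
print(is_obese_child(child, 22.0))  # True
```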
As with obesity in adults, many factors contribute to the rising rates of childhood obesity. Changing diet and decreasing physical activity are believed to be the two most important causes for the recent increase in the incidence of child obesity.[249] Antibiotics in the first 6 months of life have been associated with excess weight at seven to twelve years of age.[152] Because childhood obesity often persists into adulthood and is associated with numerous chronic illnesses, children who are obese are often tested for hypertension, diabetes, hyperlipidemia, and fatty liver disease.[82] Treatments used in children are primarily lifestyle interventions and behavioral techniques, although efforts to increase activity in children have had little success.[250] In the United States, medications are not FDA approved for use in this age group.[246] Multi-component behaviour change interventions that include changes to dietary and physical activity may reduce BMI in the short term in children aged 6 to 11 years, although the benefits are small and the quality of evidence is low.[251]
Obesity in pets is common in many countries. In the United States, 23–41% of dogs are overweight, and about 5.1% are obese.[252] The rate of obesity in cats was slightly higher at 6.4%.[252] In Australia the rate of obesity among dogs in a veterinary setting has been found to be 7.6%.[253] The risk of obesity in dogs is related to whether or not their owners are obese; however, there is no similar correlation between cats and their owners.[254]
en/421.html.txt
Astronomy (from Greek: ἀστρονομία) is a natural science that studies celestial objects and phenomena. It uses mathematics, physics, and chemistry in order to explain their origin and evolution. Objects of interest include planets, moons, stars, nebulae, galaxies, and comets. Relevant phenomena include supernova explosions, gamma ray bursts, quasars, blazars, pulsars, and cosmic microwave background radiation. More generally, astronomy studies everything that originates outside Earth's atmosphere. Cosmology, a branch of astronomy, studies the Universe as a whole.[1]
Astronomy is one of the oldest natural sciences. The early civilizations in recorded history made methodical observations of the night sky. These include the Babylonians, Greeks, Indians, Egyptians, Chinese, Maya, and many ancient indigenous peoples of the Americas. In the past, astronomy included disciplines as diverse as astrometry, celestial navigation, observational astronomy, and the making of calendars. Nowadays, professional astronomy is often said to be the same as astrophysics.[2]
Professional astronomy is split into observational and theoretical branches. Observational astronomy is focused on acquiring data from observations of astronomical objects. This data is then analyzed using basic principles of physics. Theoretical astronomy is oriented toward the development of computer or analytical models to describe astronomical objects and phenomena. These two fields complement each other. Theoretical astronomy seeks to explain observational results and observations are used to confirm theoretical results.
Astronomy is one of the few sciences in which amateurs play an active role. This is especially true for the discovery and observation of transient events. Amateur astronomers have helped with many important discoveries, such as finding new comets.
Astronomy (from the Greek ἀστρονομία from ἄστρον astron, "star" and -νομία -nomia from νόμος nomos, "law" or "culture") means "law of the stars" (or "culture of the stars" depending on the translation). Astronomy should not be confused with astrology, the belief system which claims that human affairs are correlated with the positions of celestial objects.[4] Although the two fields share a common origin, they are now entirely distinct.[5]
"Astronomy" and "astrophysics" are synonyms.[6][7][8] Based on strict dictionary definitions, "astronomy" refers to "the study of objects and matter outside the Earth's atmosphere and of their physical and chemical properties,"[9] while "astrophysics" refers to the branch of astronomy dealing with "the behavior, physical properties, and dynamic processes of celestial objects and phenomena".[10] In some cases, as in the introduction of the introductory textbook The Physical Universe by Frank Shu, "astronomy" may be used to describe the qualitative study of the subject, whereas "astrophysics" is used to describe the physics-oriented version of the subject.[11] However, since most modern astronomical research deals with subjects related to physics, modern astronomy could actually be called astrophysics.[6] Some fields, such as astrometry, are purely astronomy rather than also astrophysics. Various departments in which scientists carry out research on this subject may use "astronomy" and "astrophysics", partly depending on whether the department is historically affiliated with a physics department,[7] and many professional astronomers have physics rather than astronomy degrees.[8] Some titles of the leading scientific journals in this field include The Astronomical Journal, The Astrophysical Journal, and Astronomy & Astrophysics.
In early historic times, astronomy only consisted of the observation and predictions of the motions of objects visible to the naked eye. In some locations, early cultures assembled massive artifacts that possibly had some astronomical purpose. In addition to their ceremonial uses, these observatories could be employed to determine the seasons, an important factor in knowing when to plant crops and in understanding the length of the year.[12]
Before tools such as the telescope were invented, early study of the stars was conducted using the naked eye. As civilizations developed, most notably in Mesopotamia, Greece, Persia, India, China, Egypt, and Central America, astronomical observatories were assembled and ideas on the nature of the Universe began to develop. Most early astronomy consisted of mapping the positions of the stars and planets, a science now referred to as astrometry. From these observations, early ideas about the motions of the planets were formed, and the nature of the Sun, Moon and the Earth in the Universe were explored philosophically. The Earth was believed to be the center of the Universe with the Sun, the Moon and the stars rotating around it. This is known as the geocentric model of the Universe, or the Ptolemaic system, named after Ptolemy.[13]
A particularly important early development was the beginning of mathematical and scientific astronomy, which began among the Babylonians, who laid the foundations for the later astronomical traditions that developed in many other civilizations.[15] The Babylonians discovered that lunar eclipses recurred in a repeating cycle known as a saros.[16]
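The length of the cycle the Babylonians identified can be reproduced from the mean synodic month. This is an illustrative calculation using the modern mean value, not a reconstruction of Babylonian methods:

```python
SYNODIC_MONTH_DAYS = 29.530589  # modern mean interval between new moons

# A saros is 223 synodic months; after that interval the Sun, Earth and
# Moon return to nearly the same relative geometry, so eclipses repeat.
saros_days = 223 * SYNODIC_MONTH_DAYS
print(round(saros_days, 2))            # 6585.32 days
print(round(saros_days / 365.25, 2))   # about 18.03 years
```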
Following the Babylonians, significant advances in astronomy were made in ancient Greece and the Hellenistic world. Greek astronomy is characterized from the start by seeking a rational, physical explanation for celestial phenomena.[17] In the 3rd century BC, Aristarchus of Samos estimated the size and distance of the Moon and Sun, and he proposed a model of the Solar System where the Earth and planets rotated around the Sun, now called the heliocentric model.[18] In the 2nd century BC, Hipparchus discovered precession, calculated the size and distance of the Moon and invented the earliest known astronomical devices such as the astrolabe.[19] Hipparchus also created a comprehensive catalog of 1020 stars, and most of the constellations of the northern hemisphere derive from Greek astronomy.[20] The Antikythera mechanism (c. 150–80 BC) was an early analog computer designed to calculate the location of the Sun, Moon, and planets for a given date. Technological artifacts of similar complexity did not reappear until the 14th century, when mechanical astronomical clocks appeared in Europe.[21]
Medieval Europe housed a number of important astronomers. Richard of Wallingford (1292–1336) made major contributions to astronomy and horology, including the invention of the first astronomical clock, the Rectangulus which allowed for the measurement of angles between planets and other astronomical bodies, as well as an equatorium called the Albion which could be used for astronomical calculations such as lunar, solar and planetary longitudes and could predict eclipses. Nicole Oresme (1320–1382) and Jean Buridan (1300–1361) first discussed evidence for the rotation of the Earth, furthermore, Buridan also developed the theory of impetus (predecessor of the modern scientific theory of inertia) which was able to show planets were capable of motion without the intervention of angels.[22] Georg von Peuerbach (1423–1461) and Regiomontanus (1436–1476) helped make astronomical progress instrumental to Copernicus's development of the heliocentric model decades later.
Astronomy flourished in the Islamic world and other parts of the world. This led to the emergence of the first astronomical observatories in the Muslim world by the early 9th century.[23][24][25] In 964, the Andromeda Galaxy, the largest galaxy in the Local Group, was described by the Persian Muslim astronomer Abd al-Rahman al-Sufi in his Book of Fixed Stars.[26] The SN 1006 supernova, the brightest apparent magnitude stellar event in recorded history, was observed by the Egyptian Arabic astronomer Ali ibn Ridwan and Chinese astronomers in 1006. Some of the prominent Islamic (mostly Persian and Arab) astronomers who made significant contributions to the science include Al-Battani, Thebit, Abd al-Rahman al-Sufi, Biruni, Abū Ishāq Ibrāhīm al-Zarqālī, Al-Birjandi, and the astronomers of the Maragheh and Samarkand observatories. Astronomers during that time introduced many Arabic names now used for individual stars.[27][28] It is also believed that the ruins at Great Zimbabwe and Timbuktu[29] may have housed astronomical observatories.[30] Europeans had previously believed that there had been no astronomical observation in sub-Saharan Africa during the pre-colonial Middle Ages, but modern discoveries show otherwise.[31][32][33][34]
For over six centuries (from the recovery of ancient learning during the late Middle Ages into the Enlightenment), the Roman Catholic Church gave more financial and social support to the study of astronomy than probably all other institutions. Among the Church's motives was finding the date for Easter.[35]
During the Renaissance, Nicolaus Copernicus proposed a heliocentric model of the solar system. His work was defended by Galileo Galilei and expanded upon by Johannes Kepler. Kepler was the first to devise a system that correctly described the details of the motion of the planets around the Sun. However, Kepler did not succeed in formulating a theory behind the laws he wrote down.[36] It was Isaac Newton, with his invention of celestial dynamics and his law of gravitation, who finally explained the motions of the planets. Newton also developed the reflecting telescope.[37]
Improvements in the size and quality of the telescope led to further discoveries. The English astronomer John Flamsteed catalogued over 3000 stars.[38] More extensive star catalogues were produced by Nicolas Louis de Lacaille. The astronomer William Herschel made a detailed catalog of nebulosity and clusters, and in 1781 discovered the planet Uranus, the first new planet found.[39]
During the 18–19th centuries, the study of the three-body problem by Leonhard Euler, Alexis Claude Clairaut, and Jean le Rond d'Alembert led to more accurate predictions about the motions of the Moon and planets. This work was further refined by Joseph-Louis Lagrange and Pierre Simon Laplace, allowing the masses of the planets and moons to be estimated from their perturbations.[40]
Significant advances in astronomy came about with the introduction of new technology, including the spectroscope and photography. Joseph von Fraunhofer discovered about 600 bands in the spectrum of the Sun in 1814–15, which, in 1859, Gustav Kirchhoff ascribed to the presence of different elements. Stars were proven to be similar to the Earth's own Sun, but with a wide range of temperatures, masses, and sizes.[27]
The existence of the Earth's galaxy, the Milky Way, as its own group of stars was only proved in the 20th century, along with the existence of "external" galaxies. The observed recession of those galaxies led to the discovery of the expansion of the Universe.[41] Theoretical astronomy led to speculations on the existence of objects such as black holes and neutron stars, which have been used to explain such observed phenomena as quasars, pulsars, blazars, and radio galaxies. Physical cosmology made huge advances during the 20th century. In the early 1900s the model of the Big Bang theory was formulated, heavily evidenced by cosmic microwave background radiation, Hubble's law, and the cosmological abundances of elements. Space telescopes have enabled measurements in parts of the electromagnetic spectrum normally blocked or blurred by the atmosphere.[citation needed] In February 2016, it was revealed that the LIGO project had detected evidence of gravitational waves in the previous September.[42][43]
The main source of information about celestial bodies and other objects is visible light, or more generally electromagnetic radiation.[44] Observational astronomy may be categorized according to the corresponding region of the electromagnetic spectrum on which the observations are made. Some parts of the spectrum can be observed from the Earth's surface, while other parts are only observable from either high altitudes or outside the Earth's atmosphere. Specific information on these subfields is given below.
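This categorization by wavelength can be sketched as a simple lookup. The band boundaries below are approximate illustrations drawn from the figures quoted in this section (visible roughly 400–700 nm, ultraviolet down to roughly 10 nm, radio beyond roughly 1 mm); in practice the edges are conventions rather than sharp physical limits.

```python
# Approximate wavelength bands, in nanometres, from shortest to longest.
BANDS_NM = [
    ("gamma ray", 0.0, 0.01),          # boundary with X-ray is a convention
    ("X-ray", 0.01, 10.0),
    ("ultraviolet", 10.0, 400.0),
    ("visible", 400.0, 700.0),
    ("infrared", 700.0, 1_000_000.0),  # up to roughly 1 mm
    ("radio", 1_000_000.0, float("inf")),
]

def band(wavelength_nm: float) -> str:
    """Return the spectral band containing the given wavelength."""
    for name, lo, hi in BANDS_NM:
        if lo <= wavelength_nm < hi:
            return name
    return "unknown"

print(band(550))     # visible (green light)
print(band(2.1e8))   # radio (21 cm, the hydrogen line)
```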
Radio astronomy uses radiation with wavelengths greater than approximately one millimeter, outside the visible range.[45] Radio astronomy is different from most other forms of observational astronomy in that the observed radio waves can be treated as waves rather than as discrete photons. Hence, it is relatively easier to measure both the amplitude and phase of radio waves, whereas this is not as easily done at shorter wavelengths.[45]
Although some radio waves are emitted directly by astronomical objects, a product of thermal emission, most of the radio emission that is observed is the result of synchrotron radiation, which is produced when electrons orbit magnetic fields.[45] Additionally, a number of spectral lines produced by interstellar gas, notably the hydrogen spectral line at 21 cm, are observable at radio wavelengths.[11][45]
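The frequency corresponding to a spectral line such as the 21 cm line follows from f = c/λ. A quick check, using the precise wavelength of the hydrogen line (about 21.106 cm):

```python
C = 299_792_458.0  # speed of light in m/s

def frequency_hz(wavelength_m: float) -> float:
    """Frequency of an electromagnetic wave of the given wavelength."""
    return C / wavelength_m

# The hydrogen spectral line at ~21.106 cm corresponds to ~1420.4 MHz.
print(round(frequency_hz(0.21106) / 1e6, 1))
```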
A wide variety of other objects are observable at radio wavelengths, including supernovae, interstellar gas, pulsars, and active galactic nuclei.[11][45]
Infrared astronomy is founded on the detection and analysis of infrared radiation, wavelengths longer than red light and outside the range of our vision. The infrared spectrum is useful for studying objects that are too cold to radiate visible light, such as planets, circumstellar disks or nebulae whose light is blocked by dust. The longer wavelengths of infrared can penetrate clouds of dust that block visible light, allowing the observation of young stars embedded in molecular clouds and the cores of galaxies. Observations from the Wide-field Infrared Survey Explorer (WISE) have been particularly effective at unveiling numerous Galactic protostars and their host star clusters.[47][48]
With the exception of infrared wavelengths close to visible light, such radiation is heavily absorbed by the atmosphere, or masked, as the atmosphere itself produces significant infrared emission. Consequently, infrared observatories have to be located in high, dry places on Earth or in space.[49] Some molecules radiate strongly in the infrared. This allows the study of the chemistry of space; more specifically it can detect water in comets.[50]
Historically, optical astronomy, also called visible light astronomy, is the oldest form of astronomy.[51] Images of observations were originally drawn by hand. In the late 19th century and most of the 20th century, images were made using photographic equipment. Modern images are made using digital detectors, particularly charge-coupled devices (CCDs), and recorded on modern media. Although visible light itself extends from approximately 4000 Å to 7000 Å (400 nm to 700 nm),[51] the same equipment can be used to observe some near-ultraviolet and near-infrared radiation.
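The unit conversion quoted above is simply a factor of ten, since one angstrom is a tenth of a nanometre:

```python
def angstrom_to_nm(wavelength_angstrom: float) -> float:
    """1 Å = 0.1 nm."""
    return wavelength_angstrom * 0.1

# The visible band quoted above, 4000-7000 Å, in nanometres:
print(angstrom_to_nm(4000))  # 400.0
print(angstrom_to_nm(7000))  # 700.0
```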
Ultraviolet astronomy employs ultraviolet wavelengths between approximately 100 and 3200 Å (10 to 320 nm).[45] Light at those wavelengths is absorbed by the Earth's atmosphere, requiring observations at these wavelengths to be performed from the upper atmosphere or from space. Ultraviolet astronomy is best suited to the study of thermal radiation and spectral emission lines from hot blue stars (OB stars) that are very bright in this wave band. This includes the blue stars in other galaxies, which have been the targets of several ultraviolet surveys. Other objects commonly observed in ultraviolet light include planetary nebulae, supernova remnants, and active galactic nuclei.[45] However, as ultraviolet light is easily absorbed by interstellar dust, an adjustment of ultraviolet measurements is necessary.[45]
X-ray astronomy uses X-ray wavelengths. Typically, X-ray radiation is produced by synchrotron emission (the result of electrons orbiting magnetic field lines), thermal emission from thin gases above 10⁷ (10 million) kelvins, and thermal emission from thick gases above 10⁷ kelvins.[45] Since X-rays are absorbed by the Earth's atmosphere, all X-ray observations must be performed from high-altitude balloons, rockets, or X-ray astronomy satellites. Notable X-ray sources include X-ray binaries, pulsars, supernova remnants, elliptical galaxies, clusters of galaxies, and active galactic nuclei.[45]
Gamma ray astronomy observes astronomical objects at the shortest wavelengths of the electromagnetic spectrum. Gamma rays may be observed directly by satellites such as the Compton Gamma Ray Observatory or by specialized telescopes called atmospheric Cherenkov telescopes.[45] The Cherenkov telescopes do not detect the gamma rays directly but instead detect the flashes of visible light produced when gamma rays are absorbed by the Earth's atmosphere.[52]
Most gamma-ray emitting sources are actually gamma-ray bursts, objects which only produce gamma radiation for a few milliseconds to thousands of seconds before fading away. Only 10% of gamma-ray sources are non-transient sources. These steady gamma-ray emitters include pulsars, neutron stars, and black hole candidates such as active galactic nuclei.[45]
In addition to electromagnetic radiation, a few other events originating from great distances may be observed from the Earth.
In neutrino astronomy, astronomers use heavily shielded underground facilities such as SAGE, GALLEX, and Kamioka II/III for the detection of neutrinos. The vast majority of the neutrinos streaming through the Earth originate from the Sun, but 24 neutrinos were also detected from supernova 1987A.[45] Cosmic rays, which consist of very high energy particles (atomic nuclei) that can decay or be absorbed when they enter the Earth's atmosphere, result in a cascade of secondary particles which can be detected by current observatories.[53] Some future neutrino detectors may also be sensitive to the particles produced when cosmic rays hit the Earth's atmosphere.[45]
Gravitational-wave astronomy is an emerging field of astronomy that employs gravitational-wave detectors to collect observational data about distant massive objects. A few observatories have been constructed, such as the Laser Interferometer Gravitational-Wave Observatory (LIGO). LIGO made its first detection on 14 September 2015, observing gravitational waves from a binary black hole.[54] A second gravitational wave was detected on 26 December 2015, and additional observations are expected to continue, although detecting gravitational waves requires extremely sensitive instruments.[55][56]
The combination of observations made using electromagnetic radiation, neutrinos or gravitational waves and other complementary information, is known as multi-messenger astronomy.[57][58]
One of the oldest fields in astronomy, and in all of science, is the measurement of the positions of celestial objects. Historically, accurate knowledge of the positions of the Sun, Moon, planets and stars has been essential in celestial navigation (the use of celestial objects to guide navigation) and in the making of calendars.
Careful measurement of the positions of the planets has led to a solid understanding of gravitational perturbations, and an ability to determine past and future positions of the planets with great accuracy, a field known as celestial mechanics. More recently, the tracking of near-Earth objects allows predictions of close encounters or potential collisions between those objects and the Earth.[59]
The measurement of stellar parallax of nearby stars provides a fundamental baseline in the cosmic distance ladder that is used to measure the scale of the Universe. Parallax measurements of nearby stars provide an absolute baseline for the properties of more distant stars, whose properties can then be compared. Measurements of the radial velocity and proper motion of stars allow astronomers to plot the movement of these systems through the Milky Way galaxy. Astrometric results are the basis used to calculate the distribution of speculated dark matter in the galaxy.[60]
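The first rung of that distance ladder is simple arithmetic: a star whose annual parallax angle is p arcseconds lies at a distance of 1/p parsecs. A minimal sketch (the Proxima Centauri parallax figure is an approximate published value used for illustration, not taken from this article):

```python
def distance_parsecs(parallax_arcsec: float) -> float:
    """Distance in parsecs from an annual parallax angle in arcseconds."""
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    return 1.0 / parallax_arcsec

# Proxima Centauri's parallax is roughly 0.768 arcsec,
# placing it at about 1.3 parsecs (~4.2 light-years).
print(distance_parsecs(0.768))
```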
During the 1990s, the measurement of the stellar wobble of nearby stars was used to detect large extrasolar planets orbiting those stars.[61]
Theoretical astronomers use several tools, including analytical models and computational numerical simulations; each has its particular advantages. Analytical models of a process are better for giving insight into the heart of what is going on. Numerical models can reveal the existence of phenomena and effects that would otherwise be unobserved.[62][63]
Theorists in astronomy endeavor to create theoretical models and from the results predict observational consequences of those models. The observation of a phenomenon predicted by a model allows astronomers to select between several alternate or conflicting models as the one best able to describe the phenomena.
Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency between the data and model's results, the general tendency is to try to make minimal modifications to the model so that it produces results that fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model.
Phenomena modeled by theoretical astronomers include: stellar dynamics and evolution; galaxy formation; large-scale distribution of matter in the Universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics. Astrophysical relativity serves as a tool to gauge the properties of large scale structures for which gravitation plays a significant role in physical phenomena investigated and as the basis for black hole (astro)physics and the study of gravitational waves.
Some widely accepted and studied theories and models in astronomy, now included in the Lambda-CDM model, are the Big Bang, dark matter, and fundamental theories of physics.
Along with cosmic inflation, dark matter and dark energy are the current leading topics in astronomy,[64] as their discovery and controversy originated during the study of galaxies.
Astrophysics is the branch of astronomy that employs the principles of physics and chemistry "to ascertain the nature of the astronomical objects, rather than their positions or motions in space".[65][66] Among the objects studied are the Sun, other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background.[67][68] Their emissions are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists typically apply many disciplines of physics, including mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.
In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to determine the properties of dark matter, dark energy, and black holes; whether or not time travel is possible, wormholes can form, or the multiverse exists; and the origin and ultimate fate of the universe.[67] Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics.
Astrochemistry is the study of the abundance and reactions of molecules in the Universe, and their interaction with radiation.[69] The discipline is an overlap of astronomy and chemistry. The word "astrochemistry" may be applied to both the Solar System and the interstellar medium. The study of the abundance of elements and isotope ratios in Solar System objects, such as meteorites, is also called cosmochemistry, while the study of interstellar atoms and molecules and their interaction with radiation is sometimes called molecular astrophysics. The formation, atomic and chemical composition, evolution and fate of molecular gas clouds is of special interest, because it is from these clouds that solar systems form.
Studies in this field contribute to the understanding of the formation of the Solar System, Earth's origin and geology, abiogenesis, and the origin of climate and oceans.
Astrobiology is an interdisciplinary scientific field concerned with the origins, early evolution, distribution, and future of life in the universe. Astrobiology considers the question of whether extraterrestrial life exists, and how humans can detect it if it does.[70] The term exobiology is similar.[71]
Astrobiology makes use of molecular biology, biophysics, biochemistry, chemistry, astronomy, physical cosmology, exoplanetology and geology to investigate the possibility of life on other worlds and help recognize biospheres that might be different from that on Earth.[72] The origin and early evolution of life is an inseparable part of the discipline of astrobiology.[73] Astrobiology concerns itself with interpretation of existing scientific data, and although speculation is entertained to give context, astrobiology concerns itself primarily with hypotheses that fit firmly into existing scientific theories.
This interdisciplinary field encompasses research on the origin of planetary systems, origins of organic compounds in space, rock-water-carbon interactions, abiogenesis on Earth, planetary habitability, research on biosignatures for life detection, and studies on the potential for life to adapt to challenges on Earth and in outer space.[74][75][76]
Cosmology (from the Greek κόσμος (kosmos) "world, universe" and λόγος (logos) "word, study" or literally "logic") could be considered the study of the Universe as a whole.
Observations of the large-scale structure of the Universe, a branch known as physical cosmology, have provided a deep understanding of the formation and evolution of the cosmos. Fundamental to modern cosmology is the well-accepted theory of the Big Bang, wherein our Universe began at a single point in time, and thereafter expanded over the course of 13.8 billion years[77] to its present condition.[78] The concept of the Big Bang can be traced back to the discovery of the microwave background radiation in 1965.[78]
In the course of this expansion, the Universe underwent several evolutionary stages. In the very early moments, it is theorized that the Universe experienced a very rapid cosmic inflation, which homogenized the starting conditions. Thereafter, nucleosynthesis produced the elemental abundance of the early Universe.[78] (See also nucleocosmochronology.)
When the first neutral atoms formed from a sea of primordial ions, space became transparent to radiation, releasing the energy viewed today as the microwave background radiation. The expanding Universe then underwent a Dark Age due to the lack of stellar energy sources.[79]
A hierarchical structure of matter began to form from minute variations in the mass density of space. Matter accumulated in the densest regions, forming clouds of gas and the earliest stars, the Population III stars. These massive stars triggered the reionization process and are believed to have created many of the heavy elements in the early Universe, which, through nuclear decay, create lighter elements, allowing the cycle of nucleosynthesis to continue longer.[80]
Gravitational aggregations clustered into filaments, leaving voids in the gaps. Gradually, organizations of gas and dust merged to form the first primitive galaxies. Over time, these pulled in more matter, and were often organized into groups and clusters of galaxies, then into larger-scale superclusters.[81]
Various fields of physics are crucial to studying the universe. Interdisciplinary studies involve the fields of quantum mechanics, particle physics, plasma physics, condensed matter physics, statistical mechanics, optics, and nuclear physics.
Fundamental to the structure of the Universe is the existence of dark matter and dark energy. These are now thought to be its dominant components, forming 96% of the mass of the Universe. For this reason, much effort is expended in trying to understand the physics of these components.[82]
The study of objects outside our galaxy is a branch of astronomy concerned with the formation and evolution of galaxies; their morphology (description) and classification; the observation of active galaxies; and, at a larger scale, the groups and clusters of galaxies. The last of these is important for the understanding of the large-scale structure of the cosmos.
Most galaxies are organized into distinct shapes that allow for classification schemes. They are commonly divided into spiral, elliptical, and irregular galaxies.[83]
As the name suggests, an elliptical galaxy has the cross-sectional shape of an ellipse. The stars move along random orbits with no preferred direction. These galaxies contain little or no interstellar dust, few star-forming regions, and older stars. Elliptical galaxies are more commonly found at the core of galactic clusters, and may have been formed through mergers of large galaxies.
A spiral galaxy is organized into a flat, rotating disk, usually with a prominent bulge or bar at the center, and trailing bright arms that spiral outward. The arms are dusty regions of star formation within which massive young stars produce a blue tint. Spiral galaxies are typically surrounded by a halo of older stars. Both the Milky Way and one of our nearest galaxy neighbors, the Andromeda Galaxy, are spiral galaxies.
Irregular galaxies are chaotic in appearance, and are neither spiral nor elliptical. About a quarter of all galaxies are irregular, and the peculiar shapes of such galaxies may be the result of gravitational interaction.
An active galaxy is a formation that emits a significant amount of its energy from a source other than its stars, dust and gas. It is powered by a compact region at the core, thought to be a super-massive black hole that is emitting radiation from in-falling material.
A radio galaxy is an active galaxy that is very luminous in the radio portion of the spectrum, and is emitting immense plumes or lobes of gas. Active galaxies that emit higher-frequency, high-energy radiation include Seyfert galaxies, quasars, and blazars. Quasars are believed to be the most consistently luminous objects in the known universe.[84]
The large-scale structure of the cosmos is represented by groups and clusters of galaxies. This structure is organized into a hierarchy of groupings, with the largest being the superclusters. The collective matter is formed into filaments and walls, leaving large voids between.[85]
The Solar System orbits within the Milky Way, a barred spiral galaxy that is a prominent member of the Local Group of galaxies. It is a rotating mass of gas, dust, stars and other objects, held together by mutual gravitational attraction. As the Earth is located within the dusty outer arms, there are large portions of the Milky Way that are obscured from view.
In the center of the Milky Way is the core, a bar-shaped bulge with what is believed to be a supermassive black hole at its center. This is surrounded by four primary arms that spiral from the core. This is a region of active star formation that contains many younger, population I stars. The disk is surrounded by a spheroid halo of older, population II stars, as well as relatively dense concentrations of stars known as globular clusters.[86]
Between the stars lies the interstellar medium, a region of sparse matter. In the densest regions, molecular clouds of molecular hydrogen and other elements create star-forming regions. These begin as compact pre-stellar cores or dark nebulae, which concentrate and collapse (in volumes determined by the Jeans length) to form compact protostars.[87]
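The Jeans length mentioned in parentheses can be sketched quantitatively. One common form writes it as λ_J = c_s·sqrt(π/(Gρ)), with an isothermal sound speed c_s = sqrt(k_B·T/(μ·m_H)); different derivations place slightly different order-unity prefactors in front, so this is an illustrative sketch rather than a definitive formula:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_H = 1.6735e-27    # mass of a hydrogen atom, kg

def jeans_length_m(temperature_k: float, number_density_m3: float,
                   mean_particle_mass_amu: float = 2.0) -> float:
    """Jeans length in metres: lambda_J = c_s * sqrt(pi / (G * rho)),
    with c_s the isothermal sound speed. Conventions differ by
    order-unity factors; default particle mass ~2 amu for H2 gas."""
    m = mean_particle_mass_amu * M_H       # mean particle mass, kg
    rho = number_density_m3 * m            # mass density, kg/m^3
    c_s = math.sqrt(K_B * temperature_k / m)
    return c_s * math.sqrt(math.pi / (G * rho))
```

Regions larger than λ_J collapse under their own gravity; since λ_J scales as ρ^(-1/2), denser cloud cores fragment on smaller scales.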
As the more massive stars appear, they transform the cloud into an H II region (ionized atomic hydrogen) of glowing gas and plasma. The stellar wind and supernova explosions from these stars eventually cause the cloud to disperse, often leaving behind one or more young open clusters of stars. These clusters gradually disperse, and the stars join the population of the Milky Way.[88]
Kinematic studies of matter in the Milky Way and other galaxies have demonstrated that there is more mass than can be accounted for by visible matter. A dark matter halo appears to dominate the mass, although the nature of this dark matter remains undetermined.[89]
The study of stars and stellar evolution is fundamental to our understanding of the Universe. The astrophysics of stars has been determined through observation and theoretical understanding, and from computer simulations of the interior.[90] Star formation occurs in dense regions of dust and gas, known as giant molecular clouds. When destabilized, cloud fragments can collapse under the influence of gravity to form a protostar. A sufficiently dense and hot core region will trigger nuclear fusion, thus creating a main-sequence star.[87]
Almost all elements heavier than hydrogen and helium were created inside the cores of stars.[90]
The characteristics of the resulting star depend primarily upon its starting mass. The more massive the star, the greater its luminosity, and the more rapidly it fuses its hydrogen fuel into helium in its core. Over time, this hydrogen fuel is completely converted into helium, and the star begins to evolve. The fusion of helium requires a higher core temperature. A star with a high enough core temperature will push its outer layers outward while increasing its core density. The resulting red giant formed by the expanding outer layers enjoys a brief life span, before the helium fuel in the core is in turn consumed. Very massive stars can also undergo a series of evolutionary phases, as they fuse increasingly heavier elements.[91]
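The mass dependence described above is often captured by the rough mass-luminosity relation L ∝ M^3.5 for main-sequence stars, which also implies that lifetime (fuel ∝ M, burn rate ∝ L) falls steeply with mass. A hedged sketch (the 3.5 exponent and the ~10 Gyr solar normalisation are common textbook approximations, not figures from this article):

```python
def main_sequence_luminosity(mass_solar: float) -> float:
    """Approximate luminosity in solar units: L/Lsun ~ (M/Msun)**3.5.
    The exponent actually varies with mass; 3.5 is a common middle value."""
    return mass_solar ** 3.5

def main_sequence_lifetime_gyr(mass_solar: float) -> float:
    """Rough main-sequence lifetime: fuel ~ M divided by burn rate ~ L,
    normalised so a 1-solar-mass star lasts ~10 Gyr."""
    return 10.0 * mass_solar / main_sequence_luminosity(mass_solar)
```

On this estimate, a 10-solar-mass star is thousands of times more luminous than the Sun but exhausts its core hydrogen in tens of millions of years rather than billions.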
The final fate of the star depends on its mass, with stars of mass greater than about eight times the Sun becoming core collapse supernovae;[92] while smaller stars blow off their outer layers and leave behind the inert core in the form of a white dwarf. The ejection of the outer layers forms a planetary nebula.[93] The remnant of a supernova is a dense neutron star, or, if the stellar mass was at least three times that of the Sun, a black hole.[94] Closely orbiting binary stars can follow more complex evolutionary paths, such as mass transfer onto a white dwarf companion that can potentially cause a supernova.[95] Planetary nebulae and supernovae distribute the "metals" produced in the star by fusion to the interstellar medium; without them, all new stars (and their planetary systems) would be formed from hydrogen and helium alone.[96]
At a distance of about eight light-minutes, the most frequently studied star is the Sun, a typical main-sequence dwarf star of stellar class G2 V, and about 4.6 billion years (Gyr) old. The Sun is not considered a variable star, but it does undergo periodic changes in activity known as the sunspot cycle. This is an 11-year oscillation in sunspot number. Sunspots are regions of lower-than-average temperatures that are associated with intense magnetic activity.[97]
The Sun has steadily increased in luminosity by 40% since it first became a main-sequence star. The Sun has also undergone periodic changes in luminosity that can have a significant impact on the Earth.[98] The Maunder minimum, for example, is believed to have caused the Little Ice Age phenomenon.[99]
The visible outer surface of the Sun is called the photosphere. Above this layer is a thin region known as the chromosphere. This is surrounded by a transition region of rapidly increasing temperatures, and finally by the super-heated corona.
At the center of the Sun is the core region, a volume of sufficient temperature and pressure for nuclear fusion to occur. Above the core is the radiation zone, where the plasma conveys the energy flux by means of radiation. Above that is the convection zone where the gas material transports energy primarily through physical displacement of the gas known as convection. It is believed that the movement of mass within the convection zone creates the magnetic activity that generates sunspots.[97]
A solar wind of plasma particles constantly streams outward from the Sun until, at the outermost limit of the Solar System, it reaches the heliopause. As the solar wind passes the Earth, it interacts with the Earth's magnetic field (the magnetosphere), which deflects most of the solar wind but traps some of its particles, creating the Van Allen radiation belts that envelop the Earth. The aurorae are created when solar wind particles are guided by magnetic flux lines into the Earth's polar regions, where the lines descend into the atmosphere.[100]
Planetary science is the study of the assemblage of planets, moons, dwarf planets, comets, asteroids, and other bodies orbiting the Sun, as well as extrasolar planets. The Solar System has been relatively well-studied, initially through telescopes and then later by spacecraft. This has provided a good overall understanding of the formation and evolution of the Sun's planetary system, although many new discoveries are still being made.[101]
The Solar System is subdivided into the inner planets, the asteroid belt, and the outer planets. The inner terrestrial planets consist of Mercury, Venus, Earth, and Mars. The outer gas giant planets are Jupiter, Saturn, Uranus, and Neptune.[102] Beyond Neptune lies the Kuiper belt, and finally the Oort Cloud, which may extend as far as a light-year.
The planets were formed 4.6 billion years ago in the protoplanetary disk that surrounded the early Sun. Through a process that included gravitational attraction, collision, and accretion, the disk formed clumps of matter that, with time, became protoplanets. The radiation pressure of the solar wind then expelled most of the unaccreted matter, and only those planets with sufficient mass retained their gaseous atmosphere. The planets continued to sweep up, or eject, the remaining matter during a period of intense bombardment, evidenced by the many impact craters on the Moon. During this period, some of the protoplanets may have collided and one such collision may have formed the Moon.[103]
Once a planet reaches sufficient mass, the materials of different densities segregate within, during planetary differentiation. This process can form a stony or metallic core, surrounded by a mantle and an outer crust. The core may include solid and liquid regions, and some planetary cores generate their own magnetic field, which can protect their atmospheres from solar wind stripping.[104]
A planet or moon's interior heat is produced from the collisions that created the body, by the decay of radioactive materials (e.g. uranium, thorium, and 26Al), or tidal heating caused by interactions with other bodies. Some planets and moons accumulate enough heat to drive geologic processes such as volcanism and tectonics. Those that accumulate or retain an atmosphere can also undergo surface erosion from wind or water. Smaller bodies, without tidal heating, cool more quickly; and their geological activity ceases with the exception of impact cratering.[105]
Astronomy and astrophysics have developed significant interdisciplinary links with other major scientific fields. Archaeoastronomy is the study of ancient or traditional astronomies in their cultural context, utilizing archaeological and anthropological evidence. Astrobiology is the study of the advent and evolution of biological systems in the Universe, with particular emphasis on the possibility of non-terrestrial life. Astrostatistics is the application of statistics to the analysis of the vast amounts of observational astrophysical data.
The study of chemicals found in space, including their formation, interaction and destruction, is called astrochemistry. These substances are usually found in molecular clouds, although they may also appear in low temperature stars, brown dwarfs and planets. Cosmochemistry is the study of the chemicals found within the Solar System, including the origins of the elements and variations in the isotope ratios. Both of these fields represent an overlap of the disciplines of astronomy and chemistry. Finally, in "forensic astronomy", methods from astronomy have been used to solve problems of law and history.
Astronomy is one of the sciences to which amateurs can contribute the most.[106]
Collectively, amateur astronomers observe a variety of celestial objects and phenomena, sometimes with equipment that they build themselves. Common targets of amateur astronomers include the Sun, the Moon, planets, stars, comets, meteor showers, and a variety of deep-sky objects such as star clusters, galaxies, and nebulae. Astronomy clubs are located throughout the world, and many have programs to help their members set up and complete observational programs, including those to observe all the objects in the Messier (110 objects) or Herschel 400 catalogues of points of interest in the night sky. One branch of amateur astronomy, amateur astrophotography, involves the taking of photos of the night sky. Many amateurs like to specialize in the observation of particular objects, types of objects, or types of events which interest them.[107][108]
Most amateurs work at visible wavelengths, but a small minority experiment with wavelengths outside the visible spectrum. This includes the use of infrared filters on conventional telescopes, and also the use of radio telescopes. The pioneer of amateur radio astronomy was Karl Jansky, who started observing the sky at radio wavelengths in the 1930s. A number of amateur astronomers use either homemade telescopes or radio telescopes that were originally built for astronomy research but are now available to amateurs (e.g. the One-Mile Telescope).[109][110]
Amateur astronomers continue to make scientific contributions to the field of astronomy and it is one of the few scientific disciplines where amateurs can still make significant contributions. Amateurs can make occultation measurements that are used to refine the orbits of minor planets. They can also discover comets, and perform regular observations of variable stars. Improvements in digital technology have allowed amateurs to make impressive advances in the field of astrophotography.[111][112][113]
Although the scientific discipline of astronomy has made tremendous strides in understanding the nature of the Universe and its contents, there remain some important unanswered questions. Answers to these may require the construction of new ground- and space-based instruments, and possibly new developments in theoretical and experimental physics.
Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
en/4210.html.txt
Other reasons this message may be displayed:
en/4211.html.txt
The Southern Ocean, also known as the Antarctic Ocean[1] or the Austral Ocean,[2][note 4] comprises the southernmost waters of the World Ocean, generally taken to be south of 60° S latitude and encircling Antarctica.[5] As such, it is regarded as the second-smallest of the five principal oceanic divisions: smaller than the Pacific, Atlantic, and Indian Oceans but larger than the Arctic Ocean.[6] This oceanic zone is where cold, northward flowing waters from the Antarctic mix with warmer subantarctic waters.
By way of his voyages in the 1770s, James Cook proved that waters encompassed the southern latitudes of the globe. Since then, geographers have disagreed on the Southern Ocean's northern boundary or even existence, considering the waters as various parts of the Pacific, Atlantic, and Indian Oceans, instead. However, according to Commodore John Leech of the International Hydrographic Organization (IHO), recent oceanographic research has discovered the importance of Southern Circulation, and the term Southern Ocean has been used to define the body of water which lies south of the northern limit of that circulation.[7] This remains the current official policy of the IHO, since a 2000 revision of its definitions including the Southern Ocean as the waters south of the 60th parallel has not yet been adopted. Others regard the seasonally-fluctuating Antarctic Convergence as the natural boundary.[8]
The maximum depth of the Southern Ocean, using the definition that it lies south of the 60th parallel, was surveyed by the Five Deeps Expedition in early February 2019. The expedition's multibeam sonar team identified the deepest point at 60° 28' 46"S, 025° 32' 32"W, with a depth of 7,434 meters. The expedition's leader and chief submersible pilot, Victor Vescovo, has proposed naming this deepest point in the Southern Ocean the "Factorian Deep", based on the name of the manned submersible DSV Limiting Factor, in which he successfully visited the bottom for the first time on February 3, 2019.[9]
Borders and names for oceans and seas were internationally agreed when the International Hydrographic Bureau, the precursor to the IHO, convened the First International Conference on 24 July 1919. The IHO then published these in its Limits of Oceans and Seas, the first edition being 1928. Since the first edition, the limits of the Southern Ocean have moved progressively southwards; since 1953, it has been omitted from the official publication and left to local hydrographic offices to determine their own limits.
The IHO included the ocean and its definition as the waters south of the 60th parallel south in its 2000 revisions, but this has not been formally adopted, due to continuing impasses about some of the content, such as the naming dispute over the Sea of Japan. The 2000 IHO definition, however, was circulated in a draft edition in 2002, and is used by some within the IHO and by some other organizations such as the CIA World Factbook and Merriam-Webster.[6][10]
The Australian Government regards the Southern Ocean as lying immediately south of Australia (see § Australian standpoint).[11][12]
The National Geographic Society does not recognize the ocean,[2] depicting it in a typeface different from the other world oceans; instead, it shows the Pacific, Atlantic, and Indian Oceans extending to Antarctica on both its print and online maps.[13] Map publishers using the term Southern Ocean on their maps include Hema Maps[14] and GeoNova.[15]
"Southern Ocean" is an obsolete name for the Pacific Ocean or South Pacific, coined by Vasco Núñez de Balboa, the first European to discover it, who approached it from the north.[16] The "South Seas" is a less archaic synonym. A 1745 British Act of Parliament established a prize for discovering a Northwest Passage to "the Western and Southern Ocean of America".[17]
Authors using "Southern Ocean" to name the waters encircling the unknown southern polar regions used varying limits. James Cook's account of his second voyage implies New Caledonia borders it.[18] Peacock's 1795 Geographical Dictionary said it lay "to the southward of America and Africa";[19] John Payne in 1796 used 40 degrees as the northern limit;[20] the 1827 Edinburgh Gazetteer used 50 degrees.[21] The Family Magazine in 1835 divided the "Great Southern Ocean" into the "Southern Ocean" and the "Antarctick [sic] Ocean" along the Antarctic Circle, with the northern limit of the Southern Ocean being lines joining Cape Horn, the Cape of Good Hope, Van Diemen's Land and the south of New Zealand.[22]
The United Kingdom's South Australia Act 1834 described the waters forming the southern limit of the new province of South Australia as "the Southern Ocean". The Colony of Victoria's Legislative Council Act 1881 delimited part of the division of Bairnsdale as "along the New South Wales boundary to the Southern ocean".[23]
In the 1928 first edition of Limits of Oceans and Seas, the Southern Ocean was delineated by land-based limits: Antarctica to the south, and South America, Africa, Australia, and Broughton Island, New Zealand to the north.
The detailed land-limits used were from Cape Horn in Chile eastwards to Cape Agulhas in Africa, then further eastwards to the southern coast of mainland Australia to Cape Leeuwin, Western Australia. From Cape Leeuwin, the limit then followed eastwards along the coast of mainland Australia to Cape Otway, Victoria, then southwards across Bass Strait to Cape Wickham, King Island, along the west coast of King Island, then the remainder of the way south across Bass Strait to Cape Grim, Tasmania.
The limit then followed the west coast of Tasmania southwards to the South East Cape and then went eastwards to Broughton Island, New Zealand, before returning to Cape Horn.[24]
The northern limits of the Southern Ocean were moved southwards in the IHO's 1937 second edition of the Limits of Oceans and Seas. From this edition, much of the ocean's northern limit ceased to abut land masses.
In the second edition, the Southern Ocean then extended from Antarctica northwards to latitude 40°S between Cape Agulhas in Africa (long. 20°E) and Cape Leeuwin in Western Australia (long. 115°E), and extended to latitude 55°S between Auckland Island of New Zealand (165° or 166°E) and Cape Horn in South America (67°W).[25]
As is discussed in more detail below, prior to the 2002 edition the limits of oceans explicitly excluded the seas lying within each of them. The Great Australian Bight was unnamed in the 1928 edition, and delineated as shown in the figure above in the 1937 edition. It therefore encompassed former Southern Ocean waters—as designated in 1928—but was technically not inside any of the three adjacent oceans by 1937.
In the 2002 draft edition, the IHO designated 'seas' as subdivisions within 'oceans', so the Bight would still have been within the Southern Ocean in 1937 had the 2002 convention been in place then. To perform direct comparisons of current and former ocean limits, it is necessary to consider, or at least be aware of, how the 2002 change in IHO terminology for 'seas' can affect the comparison.
The Southern Ocean did not appear in the 1953 third edition of Limits of Oceans and Seas; a note in the publication read:
The Antarctic or Southern Ocean has been omitted from this publication as the majority of opinions received since the issue of the 2nd Edition in 1937 are to the effect that there exists no real justification for applying the term Ocean to this body of water, the northern limits of which are difficult to lay down owing to their seasonal change. The limits of the Atlantic, Pacific and Indian Oceans have therefore been extended South to the Antarctic Continent. Hydrographic Offices who issue separate publications dealing with this area are therefore left to decide their own northern limits (Great Britain uses Latitude of 55 South.)[26]:4
Instead, in the IHO 1953 publication, the Atlantic, Indian and Pacific Oceans were extended southward. The Indian and Pacific Oceans (which had not touched before 1953, as per the first and second editions) now abutted at the meridian of South East Cape, and the southern limits of the Great Australian Bight and the Tasman Sea were moved northwards.[26]
The IHO readdressed the question of the Southern Ocean in a survey in 2000. Of its 68 member nations, 28 responded, and all responding members except Argentina agreed to redefine the ocean, reflecting the importance placed by oceanographers on ocean currents. The proposal for the name Southern Ocean won 18 votes, beating the alternative Antarctic Ocean. Half of the votes supported a definition of the ocean's northern limit at the 60th parallel south—with no land interruptions at this latitude—with the other 14 votes cast for other definitions, mostly the 50th parallel south, but a few for as far north as the 35th parallel south.
A draft fourth edition of Limits of Oceans and Seas was circulated to IHO member states in August 2002 (sometimes referred to as the "2000 edition" as it summarized the progress to 2000).[28] It has yet to be published due to 'areas of concern' raised by several countries relating to various naming issues around the world – primarily the Sea of Japan naming dispute – and there have been various changes: 60 seas were given new names, and even the name of the publication was changed.[29] A reservation had also been lodged by Australia regarding the Southern Ocean limits.[30] Effectively, the third edition—which did not delineate the Southern Ocean, leaving delineation to local hydrographic offices—has yet to be superseded.
Despite this, the fourth edition definition has partial de facto usage by many nations, scientists and organisations – such as the U.S. (the CIA World Factbook uses "Southern Ocean" but none of the other new sea names within the "Southern Ocean", such as "Cosmonauts Sea") and Merriam-Webster[6][10][13] – and even by some within the IHO.[31] Some nations' hydrographic offices have defined their own boundaries; the United Kingdom used the 55th parallel south, for example.[26] Other organisations favour more northerly limits for the Southern Ocean. For example, Encyclopædia Britannica describes the Southern Ocean as extending as far north as South America, and confers great significance on the Antarctic Convergence, yet its description of the Indian Ocean contradicts this, describing the Indian Ocean as extending south to Antarctica.[32][33]
Other sources, such as the National Geographic Society, show the Atlantic, Pacific and Indian Oceans as extending to Antarctica on its maps, although articles on the National Geographic web site have begun to reference the Southern Ocean.[13]
A radical shift from past IHO practices (1928–1953) was also seen in the 2002 draft edition when the IHO delineated 'seas' as subdivisions lying within the boundaries of 'oceans'. While the IHO is often considered the authority for such conventions, the shift brought it into line with the practices of other publications (e.g. the CIA World Fact Book) which had already adopted the principle that seas are contained within oceans. This difference in practice is markedly seen for the Pacific Ocean in the adjacent figure. Thus, for example, previously the Tasman Sea between Australia and New Zealand was not regarded by the IHO as being part of the Pacific, but as of the 2002 draft edition it is.
The new delineation of seas as subdivisions of oceans has avoided the need to interrupt the northern boundary of the Southern Ocean where it is intersected by Drake Passage, which includes all of the waters from South America to the Antarctic coast, or to interrupt it for the Scotia Sea, which also extends below the 60th parallel south. The new delineation of seas has also meant that the long-named seas around Antarctica, excluded from the 1953 edition (the 1953 map did not even extend that far south), are 'automatically' part of the Southern Ocean.
In Australia, cartographical authorities define the Southern Ocean as including the entire body of water between Antarctica and the south coasts of Australia and New Zealand, and up to 60°S elsewhere.[34] Coastal maps of Tasmania and South Australia label the sea areas as Southern Ocean[35] and Cape Leeuwin in Western Australia is described as the point where the Indian and Southern Oceans meet.[36]
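The difference between the Australian convention and the 60th-parallel definition can be made concrete with a small sketch (Python; the function name and the boolean flag are illustrative simplifications that ignore the detailed coastline geometry):

```python
def in_southern_ocean(lat, near_australia=False):
    """Rough classification of a point as Southern Ocean water.

    IHO 2002 draft: everything south of 60 degrees S.
    Australian convention: additionally, all water between the
    south coasts of Australia/New Zealand and Antarctica
    (simplified here to a caller-supplied flag).
    Latitudes are signed: south is negative.
    """
    if lat <= -60:
        return True          # inside the 60th-parallel definition
    return near_australia    # Australian convention only

# A point at 55 degrees S, south of Tasmania:
print(in_southern_ocean(-55, near_australia=True))   # True (Australian view)
print(in_southern_ocean(-55, near_australia=False))  # False (IHO 2002 draft)
```

Both conventions agree for any point south of 60°S; they diverge only in the band between the 60th parallel and the Australian and New Zealand coasts.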
Exploration of the Southern Ocean was inspired by a belief in the existence of a Terra Australis – a vast continent in the far south of the globe to "balance" the northern lands of Eurasia and North Africa – which had existed since the times of Ptolemy. The doubling of the Cape of Good Hope in 1487 by Bartolomeu Dias first brought explorers within touch of the Antarctic cold, and proved that there was an ocean separating Africa from any Antarctic land that might exist.[37] Ferdinand Magellan, who passed through the Strait of Magellan in 1520, assumed that the islands of Tierra del Fuego to the south were an extension of this unknown southern land. In 1564, Abraham Ortelius published his first map, Typus Orbis Terrarum, an eight-leaved wall map of the world, on which he identified the Regio Patalis with Locach as a northward extension of the Terra Australis, reaching as far as New Guinea.[38][39]
European geographers continued to connect the coast of Tierra del Fuego with the coast of New Guinea on their globes, and allowing their imaginations to run riot in the vast unknown spaces of the south Atlantic, south Indian and Pacific oceans they sketched the outlines of the Terra Australis Incognita ("Unknown Southern Land"), a vast continent stretching in parts into the tropics. The search for this great south land was a leading motive of explorers in the 16th and the early part of the 17th centuries.[37]
The Spaniard Gabriel de Castilla, who claimed to have sighted "snow-covered mountains" beyond 64° S in 1603, is recognized as the first explorer to discover the continent of Antarctica, although he was ignored in his time.
In 1606, Pedro Fernández de Quirós took possession, for the king of Spain, of all of the lands he had discovered in Australia del Espiritu Santo (the New Hebrides) and those he would discover "even to the Pole".[37]
Francis Drake, like Spanish explorers before him, had speculated that there might be an open channel south of Tierra del Fuego. When Willem Schouten and Jacob Le Maire discovered the southern extremity of Tierra del Fuego and named it Cape Horn in 1615, they proved that the Tierra del Fuego archipelago was of small extent and not connected to the southern land, as previously thought. Subsequently, in 1642, Abel Tasman showed that even New Holland (Australia) was separated by sea from any continuous southern continent.[37]
The visit to South Georgia by Anthony de la Roché in 1675 was the first ever discovery of land south of the Antarctic Convergence, i.e. in the Southern Ocean/Antarctic.[40][41] Soon after the voyage, cartographers started to depict ‘Roché Island’, honouring the discoverer. James Cook was aware of la Roché's discovery when surveying and mapping the island in 1775.[42]
Edmond Halley's voyage in HMS Paramour for magnetic investigations in the South Atlantic met the pack ice in 52° S in January 1700, but that latitude (he reached 140 mi off the north coast of South Georgia) was his farthest south. A determined effort on the part of the French naval officer Jean-Baptiste Charles Bouvet de Lozier to discover the "South Land" – described by a half legendary "sieur de Gonneyville" – resulted in the discovery of Bouvet Island in 54°10′ S, and in the navigation of 48° of longitude of ice-cumbered sea nearly in 55° S in 1730.[37]
In 1771, Yves Joseph Kerguelen sailed from France with instructions to proceed south from Mauritius in search of "a very large continent". He lighted upon a land in 50° S which he called South France, and believed to be the central mass of the southern continent. He was sent out again to complete the exploration of the new land, and found it to be only an inhospitable island which he renamed the Isle of Desolation, but which was ultimately named after him.[37]
The obsession with the undiscovered continent culminated in the brain of Alexander Dalrymple, the brilliant and erratic hydrographer who was nominated by the Royal Society to command the Transit of Venus expedition to Tahiti in 1769. The command of the expedition was given by the admiralty to Captain James Cook. Sailing in 1772 with Resolution, a vessel of 462 tons under his own command, and Adventure of 336 tons under Captain Tobias Furneaux, Cook first searched in vain for Bouvet Island, then sailed for 20 degrees of longitude to the westward in latitude 58° S, and then 30° eastward for the most part south of 60° S, a lower southern latitude than had ever been voluntarily entered before by any vessel. On 17 January 1773 the Antarctic Circle was crossed for the first time in history and the two ships reached 67° 15' S by 39° 35' E, where their course was stopped by ice.[37]
Cook then turned northward to look for French Southern and Antarctic Lands, of the discovery of which he had received news at Cape Town, but from the rough determination of his longitude by Kerguelen, Cook reached the assigned latitude 10° too far east and did not see it. He turned south again and was stopped by ice in 61° 52′ S by 95° E and continued eastward nearly on the parallel of 60° S to 147° E. On 16 March, the approaching winter drove him northward for rest to New Zealand and the tropical islands of the Pacific. In November 1773, Cook left New Zealand, having parted company with the Adventure, and reached 60° S by 177° W, whence he sailed eastward keeping as far south as the floating ice allowed. The Antarctic Circle was crossed on 20 December and Cook remained south of it for three days, being compelled after reaching 67° 31′ S to stand north again in 135° W.[37]
A long detour to 47° 50′ S served to show that there was no land connection between New Zealand and Tierra del Fuego. Turning south again, Cook crossed the Antarctic Circle for the third time at 109° 30′ W before his progress was once again blocked by ice four days later at 71° 10′ S by 106° 54′ W. This point, reached on 30 January 1774, was the farthest south attained in the 18th century. With a great detour to the east, almost to the coast of South America, the expedition regained Tahiti for refreshment. In November 1774, Cook started from New Zealand and crossed the South Pacific without sighting land between 53° and 57° S to Tierra del Fuego; then, passing Cape Horn on 29 December, he rediscovered Roché Island renaming it Isle of Georgia, and discovered the South Sandwich Islands (named Sandwich Land by him), the only ice-clad land he had seen, before crossing the South Atlantic to the Cape of Good Hope between 55° and 60°. He thereby laid open the way for future Antarctic exploration by exploding the myth of a habitable southern continent. Cook's most southerly discovery of land lay on the temperate side of the 60th parallel, and he convinced himself that if land lay farther south it was practically inaccessible and without economic value.[37]
Voyagers rounding Cape Horn frequently met with contrary winds and were driven southward into snowy skies and ice-encumbered seas; but, so far as can be ascertained, none of them before 1770 reached the Antarctic Circle, or knew it if they did.
In a voyage from 1822 to 1824, James Weddell commanded the 160-ton brig Jane, accompanied by his second ship Beaufoy captained by Matthew Brisbane. Together they sailed to the South Orkneys, where sealing proved disappointing. They turned south in the hope of finding a better sealing ground. The season was unusually mild and tranquil, and on 20 February 1823 the two ships reached latitude 74°15' S and longitude 34°16'45″ W, the southernmost position any ship had ever reached up to that time. A few icebergs were sighted but there was still no sight of land, leading Weddell to theorize that the sea continued as far as the South Pole. Another two days' sailing would have brought him to Coats Land (to the east of the Weddell Sea) but Weddell decided to turn back.[44]
The first land south of the parallel 60° south latitude was discovered by the Englishman William Smith, who sighted Livingston Island on 19 February 1819. A few months later Smith returned to explore the other islands of the South Shetlands archipelago, landed on King George Island, and claimed the new territories for Britain.
In the meantime, the Spanish Navy ship San Telmo sank in September 1819 while trying to cross Cape Horn. Parts of her wreckage were found months later by sealers on the north coast of Livingston Island (South Shetlands). It is unknown whether any survivors managed to be the first to set foot on these Antarctic islands.
The first confirmed sighting of mainland Antarctica cannot be accurately attributed to one single person. It can, however, be narrowed down to three individuals. According to various sources,[45][46][47] three men all sighted the ice shelf or the continent within days or months of each other: Fabian Gottlieb von Bellingshausen, a captain in the Russian Imperial Navy; Edward Bransfield, a captain in the Royal Navy; and Nathaniel Palmer, an American sealer out of Stonington, Connecticut. It is certain that the expedition, led by von Bellingshausen and Lazarev on the ships Vostok and Mirny, reached a point within 32 km (20 mi) from Princess Martha Coast and recorded the sight of an ice shelf at 69°21′28″S 2°14′50″W / 69.35778°S 2.24722°W / -69.35778; -2.24722[48] that became known as the Fimbul Ice Shelf. On 30 January 1820, Bransfield sighted Trinity Peninsula, the northernmost point of the Antarctic mainland, while Palmer sighted the mainland in the area south of Trinity Peninsula in November 1820. Von Bellingshausen's expedition also discovered Peter I Island and Alexander I Island, the first islands to be discovered south of the circle.
1683 map by French cartographer Alain Manesson Mallet from his publication Description de L'Univers. Shows a sea below both the Atlantic and Pacific oceans at a time when Tierra del Fuego was believed joined to Antarctica. Sea is named Mer Magellanique after Ferdinand Magellan.
Samuel Dunn's 1794 General Map of the World or Terraqueous Globe shows a Southern Ocean (but meaning what is today named the South Atlantic) and a Southern Icy Ocean.
A New Map of Asia, from the Latest Authorities, by John Cary, Engraver, 1806, shows the Southern Ocean lying to the south of both the Indian Ocean and Australia.
Freycinet Map of 1811 – resulted from the 1800–1803 French Baudin expedition to Australia and was the first full map of Australia ever to be published. In French, the map named the ocean immediately below Australia as the Grand Océan Austral (‘Great Southern Ocean’).
1863 map of Australia shows the Southern Ocean lying immediately to the south of Australia.
1906 map by German publisher Justus Perthes showing Antarctica encompassed by an Antarktischer (Sudl. Eismeer) Ocean – the ‘Antarctic (South Arctic) Ocean’.
Map of The World in 1922 by the National Geographic Society showing the Antarctic (Southern) Ocean.
In December 1839, as part of the United States Exploring Expedition of 1838–42 conducted by the United States Navy (sometimes called "the Wilkes Expedition"), an expedition sailed from Sydney, Australia, on the sloops-of-war USS Vincennes and USS Peacock, the brig USS Porpoise, the full-rigged ship Relief, and two schooners Sea Gull and USS Flying Fish. They sailed into the Antarctic Ocean, as it was then known, and reported the discovery "of an Antarctic continent west of the Balleny Islands" on 25 January 1840. That part of Antarctica was later named "Wilkes Land", a name it maintains to this day.
Explorer James Clark Ross passed through what is now known as the Ross Sea and discovered Ross Island (both of which were named for him) in 1841. He sailed along a huge wall of ice that was later named the Ross Ice Shelf. Mount Erebus and Mount Terror are named after two ships from his expedition: HMS Erebus and HMS Terror.[49]
The Imperial Trans-Antarctic Expedition of 1914, led by Ernest Shackleton, set out to cross the continent via the pole, but their ship, Endurance, was trapped and crushed by pack ice before they even landed. The expedition members survived after an epic journey on sledges over pack ice to Elephant Island. Then Shackleton and five others crossed the Southern Ocean, in an open boat called James Caird, and then trekked over South Georgia to raise the alarm at the whaling station Grytviken.
In 1946, US Navy Rear Admiral Richard E. Byrd and more than 4,700 military personnel visited the Antarctic in an expedition called Operation Highjump. Reported to the public as a scientific mission, the details were kept secret and it may have actually been a training or testing mission for the military. The expedition was, in both military and scientific planning terms, put together very quickly. The group contained an unusually large amount of military equipment, including an aircraft carrier, submarines, military support ships, assault troops and military vehicles. The expedition was planned to last for eight months but was unexpectedly terminated after only two months. With the exception of some eccentric entries in Admiral Byrd's diaries, no real explanation for the early termination has ever been officially given.
Captain Finn Ronne, Byrd's executive officer, returned to Antarctica with his own expedition in 1947–1948, with Navy support, three planes, and dogs. Ronne disproved the notion that the continent was divided in two and established that East and West Antarctica were a single continent, i.e. that the Weddell Sea and the Ross Sea are not connected.[50] The expedition explored and mapped large parts of Palmer Land and the Weddell Sea coastline, and identified the Ronne Ice Shelf, named by Ronne after his wife Edith "Jackie" Ronne.[51] Ronne covered 3,600 miles (5,790 km) by ski and dog sled – more than any other explorer in history.[52] The Ronne Antarctic Research Expedition discovered and mapped the last unknown coastline in the world and was the first Antarctic expedition ever to include women.[53]
The Antarctic Treaty was signed on 1 December 1959 and came into force on 23 June 1961. Among other provisions, this treaty limits military activity in the Antarctic to the support of scientific research.
The first person to sail single-handed to Antarctica was the New Zealander David Henry Lewis, in 1972, in the 10-metre (30 ft) steel sloop Ice Bird.
A baby, named Emilio Marcos de Palma, was born near Hope Bay on 7 January 1978, becoming the first baby born on the continent. He also was born further south than anyone in history.[54]
The MV Explorer was a cruise ship operated by the Swedish explorer Lars-Eric Lindblad. Observers point to Explorer's 1969 expeditionary cruise to Antarctica as the frontrunner for today's sea-based tourism in that region.[55][56] Explorer was the first cruise ship used specifically to sail the icy waters of the Antarctic Ocean and the first to sink there[57] when she struck an unidentified submerged object on 23 November 2007, reported to be ice, which caused a 10 by 4 inches (25 by 10 cm) gash in the hull.[58] Explorer was abandoned in the early hours of 23 November 2007 after taking on water near the South Shetland Islands in the Southern Ocean, an area which is usually stormy but was calm at the time.[59] Explorer was confirmed by the Chilean Navy to have sunk at approximately 62° 24′ South, 57° 16′ West,[60] in roughly 600 m of water.[61]
British engineer Richard Jenkins designed an unmanned surface vehicle called a "saildrone"[62] that completed the first autonomous circumnavigation of the Southern Ocean on 3 August 2019 after 196 days at sea.[63]
The first completely human-powered expedition on the Southern Ocean was accomplished on 25 December 2019 by a team of rowers comprising captain Fiann Paul (Iceland), first mate Colin O'Brady (US), Andrew Towne (US), Cameron Bellamy (South Africa), Jamie Douglas-Hamilton (UK) and John Petersen (US).[64]
The Southern Ocean, geologically the youngest of the oceans, was formed when Antarctica and South America moved apart, opening the Drake Passage, roughly 30 million years ago. The separation of the continents allowed the formation of the Antarctic Circumpolar Current.
With a northern limit at 60°S, the Southern Ocean differs from the other oceans in that its largest boundary, the northern boundary, does not abut a landmass (as it did with the first edition of Limits of Oceans and Seas). Instead, the northern limit is with the Atlantic, Indian and Pacific Oceans.
One reason for considering it as a separate ocean stems from the fact that much of the water of the Southern Ocean differs from the water in the other oceans. Water gets transported around the Southern Ocean fairly rapidly because of the Antarctic Circumpolar Current which circulates around Antarctica. Water in the Southern Ocean south of, for example, New Zealand, resembles the water in the Southern Ocean south of South America more closely than it resembles the water in the Pacific Ocean.
The Southern Ocean has typical depths of between 4,000 and 5,000 m (13,000 and 16,000 ft) over most of its extent with only limited areas of shallow water. The Southern Ocean's greatest depth of 7,236 m (23,740 ft) occurs at the southern end of the South Sandwich Trench, at 60°00'S, 024°W. The Antarctic continental shelf appears generally narrow and unusually deep, its edge lying at depths up to 800 m (2,600 ft), compared to a global mean of 133 m (436 ft).
Equinox to equinox, in line with the sun's seasonal influence, the Antarctic ice pack fluctuates from an average minimum of 2.6 million square kilometres (1.0×10^6 sq mi) in March to about 18.8 million square kilometres (7.3×10^6 sq mi) in September, more than a sevenfold increase in area.
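The quoted "sevenfold" figure follows directly from the two areas (a quick arithmetic check in Python; variable names are illustrative):

```python
# Average Antarctic sea-ice extents quoted in the text
min_area_km2 = 2.6e6   # square kilometres, March minimum
max_area_km2 = 18.8e6  # square kilometres, September maximum

ratio = max_area_km2 / min_area_km2
print(f"{ratio:.2f}x")  # about 7.23, i.e. "more than a sevenfold increase"
```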
Sub-divisions of oceans are geographical features such as "seas", "straits", "bays", "channels", and "gulfs". There are many sub-divisions of the Southern Ocean defined in the never-approved 2002 draft fourth edition of the IHO publication Limits of Oceans and Seas. In clockwise order these include (with sector):
A number of these, such as the 2002 Russian-proposed "Cosmonauts Sea", "Cooperation Sea", and "Somov (mid-1950s Russian polar explorer) Sea", are not included in the 1953 IHO document which remains currently in force,[26] because their names largely originated from 1962 onward. Leading geographic authorities and atlases do not use these latter three names, including the 2014 10th edition World Atlas from the United States' National Geographic Society and the 2014 12th edition of the British Times Atlas of the World, but Soviet and Russian-issued maps do.[65][66]
The Southern Ocean probably contains large, and possibly giant, oil and gas fields on the continental margin. Placer deposits, accumulations of valuable minerals such as gold formed by gravity separation during sedimentary processes, are also expected to exist in the Southern Ocean.[5]
Manganese nodules are expected to exist in the Southern Ocean. Manganese nodules are rock concretions on the sea bottom formed of concentric layers of iron and manganese hydroxides around a core. The core may be microscopically small and is sometimes completely transformed into manganese minerals by crystallization. Interest in the potential exploitation of polymetallic nodules generated a great deal of activity among prospective mining consortia in the 1960s and 1970s.[5]
The icebergs that form each year in the Southern Ocean hold enough fresh water to meet the needs of every person on Earth for several months. For several decades there have been proposals, none yet feasible or successful, to tow Southern Ocean icebergs to more arid northern regions (such as Australia) where they could be harvested.[67]
Icebergs can occur at any time of year throughout the ocean. Some may have drafts up to several hundred meters; smaller icebergs, iceberg fragments and sea-ice (generally 0.5 to 1 m thick) also pose problems for ships. The deep continental shelf has a floor of glacial deposits varying widely over short distances.
Sailors know latitudes from 40 to 70 degrees south as the "Roaring Forties", "Furious Fifties" and "Shrieking Sixties" due to high winds and large waves that form as winds blow around the entire globe unimpeded by any land-mass. Icebergs, especially in May to October, make the area even more dangerous. The remoteness of the region makes sources of search and rescue scarce.
The Antarctic Circumpolar Current moves perpetually eastward, chasing and joining itself. At 21,000 km (13,000 mi) in length, it is the world's longest ocean current, transporting 130 million cubic metres per second (4.6×10^9 cu ft/s) of water, 100 times the flow of all the world's rivers.
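As a quick arithmetic check of the transport figure above, the current can be compared against a total world river discharge of roughly 1.3×10^6 m³/s; the river figure is an illustrative assumption, not a value from the article.

```python
# Back-of-the-envelope comparison of Antarctic Circumpolar Current
# transport against an assumed total world river discharge.

ACC_TRANSPORT_M3_S = 130e6          # 130 million cubic metres per second
WORLD_RIVER_DISCHARGE_M3_S = 1.3e6  # assumed combined flow of all rivers

ratio = ACC_TRANSPORT_M3_S / WORLD_RIVER_DISCHARGE_M3_S
print(f"ACC transport is ~{ratio:.0f}x total river flow")  # prints "ACC transport is ~100x total river flow"
```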
Several processes operate along the coast of Antarctica to produce, in the Southern Ocean, types of water masses not produced elsewhere in the oceans of the Southern Hemisphere. One of these is the Antarctic Bottom Water, a very cold, highly saline, dense water that forms under sea ice.
Associated with the Circumpolar Current is the Antarctic Convergence encircling Antarctica, where cold northward-flowing Antarctic waters meet the relatively warmer waters of the subantarctic. Antarctic waters predominantly sink beneath subantarctic waters, while associated zones of mixing and upwelling create a zone very high in nutrients. These nurture high levels of phytoplankton with associated copepods and Antarctic krill, and resultant food chains supporting fish, whales, seals, penguins, albatrosses and a wealth of other species.[68]
The Antarctic Convergence is considered to be the best natural definition of the northern extent of the Southern Ocean.[5]
Large-scale upwelling is found in the Southern Ocean. Strong westerly (eastward) winds blow around Antarctica, driving a significant flow of water northwards. This is actually a type of coastal upwelling. Since there are no continents in a band of open latitudes between South America and the tip of the Antarctic Peninsula, some of this water is drawn up from great depths. In many numerical models and observational syntheses, the Southern Ocean upwelling represents the primary means by which deep dense water is brought to the surface. Shallower, wind-driven upwelling is also found off the west coasts of North and South America, northwest and southwest Africa, and southwest and southeast Australia, all associated with oceanic subtropical high pressure circulations.
Some models of the ocean circulation suggest that broad-scale upwelling occurs in the tropics, as pressure driven flows converge water toward the low latitudes where it is diffusively warmed from above. The required diffusion coefficients, however, appear to be larger than are observed in the real ocean. Nonetheless, some diffusive upwelling does probably occur.
The Ross Gyre and Weddell Gyre are two gyres that exist within the Southern Ocean. The gyres are located in the Ross Sea and Weddell Sea respectively, and both rotate clockwise. The gyres are formed by interactions between the Antarctic Circumpolar Current and the Antarctic Continental Shelf.
Sea ice has been noted to persist in the central area of the Ross Gyre.[69] There is some evidence that global warming has resulted in some decrease of the salinity of the waters of the Ross Gyre since the 1950s.[70]
Due to the Coriolis effect acting to the left in the Southern Hemisphere and the resulting Ekman transport away from the centres of the Weddell Gyre, these regions are very productive due to upwelling of cold, nutrient rich water.
Sea temperatures vary from about −2 to 10 °C (28 to 50 °F). Cyclonic storms travel eastward around the continent and frequently become intense because of the temperature contrast between ice and open ocean. The ocean-area from about latitude 40 south to the Antarctic Circle has the strongest average winds found anywhere on Earth.[71] In winter the ocean freezes outward to 65 degrees south latitude in the Pacific sector and 55 degrees south latitude in the Atlantic sector, lowering surface temperatures well below 0 degrees Celsius. At some coastal points, however, persistent intense drainage winds from the interior keep the shoreline ice-free throughout the winter.
A variety of marine animals exist and rely, directly or indirectly, on the phytoplankton in the Southern Ocean. Antarctic sea life includes penguins, blue whales, orcas, colossal squids and fur seals. The emperor penguin is the only penguin that breeds during the winter in Antarctica, while the Adélie penguin breeds farther south than any other penguin. The rockhopper penguin has distinctive feathers around the eyes, giving the appearance of elaborate eyelashes. King penguins, chinstrap penguins, and gentoo penguins also breed in the Antarctic.
The Antarctic fur seal was very heavily hunted in the 18th and 19th centuries for its pelt by sealers from the United States and the United Kingdom. The Weddell seal, a "true seal", is named after Sir James Weddell, commander of British sealing expeditions in the Weddell Sea. Antarctic krill, which congregates in large schools, is the keystone species of the ecosystem of the Southern Ocean, and is an important food organism for whales, seals, leopard seals, fur seals, squid, icefish, penguins, albatrosses and many other birds.[72]
The benthic communities of the seafloor are diverse and dense, with up to 155,000 animals found in 1 square metre (10.8 sq ft). As the seafloor environment is very similar all around the Antarctic, hundreds of species can be found all the way around the mainland, which is a uniquely wide distribution for such a large community. Deep-sea gigantism is common among these animals.[73]
A census of sea life carried out during the International Polar Year and which involved some 500 researchers was released in 2010. The research is part of the global Census of Marine Life (CoML) and has disclosed some remarkable findings. More than 235 marine organisms live in both polar regions, having bridged the gap of 12,000 km (7,500 mi). Large animals such as some cetaceans and birds make the round trip annually. More surprising are small forms of life such as mudworms, sea cucumbers and free-swimming snails found in both polar oceans. Various factors may aid in their distribution – fairly uniform temperatures of the deep ocean at the poles and the equator which differ by no more than 5 °C (9.0 °F), and the major current systems or marine conveyor belt which transport egg and larva stages.[74] However, among smaller marine animals generally assumed to be the same in the Antarctic and the Arctic, more detailed studies of each population have often—but not always—revealed differences, showing that they are closely related cryptic species rather than a single bipolar species.[75][76][77]
The rocky shores of mainland Antarctica and its offshore islands provide nesting space for over 100 million birds every spring. These nesters include species of albatrosses, petrels, skuas, gulls and terns.[78] The insectivorous South Georgia pipit is endemic to South Georgia and some smaller surrounding islands. Freshwater ducks inhabit South Georgia and the Kerguelen Islands.[79]
The flightless penguins are all located in the Southern Hemisphere, with the greatest concentration located on and around Antarctica. Four of the 18 penguin species live and breed on the mainland and its close offshore islands. Another four species live on the subantarctic islands.[80] Emperor penguins have four overlapping layers of feathers, keeping them warm. They are the only Antarctic animal to breed during the winter.[81]
There are relatively few fish species, in few families, in the Southern Ocean. The most species-rich family is the snailfish (Liparidae), followed by the cod icefish (Nototheniidae)[82] and eelpout (Zoarcidae). Together the snailfish, eelpouts and notothenioids (which includes cod icefish and several other families) account for almost 9⁄10 of the more than 320 described fish species of the Southern Ocean (tens of undescribed species also occur in the region, especially among the snailfish).[83] Southern Ocean snailfish are generally found in deep waters, while the icefish also occur in shallower waters.[82]
Cod icefish (Nototheniidae), as well as several other families, are part of the Notothenioidei suborder, collectively sometimes referred to as icefish. The suborder contains many species with antifreeze proteins in their blood and tissue, allowing them to live in water that is around or slightly below 0 °C (32 °F).[84][85] Antifreeze proteins are also known from Southern Ocean snailfish.[86]
The crocodile icefish (family Channichthyidae), also known as white-blooded fish, are only found in the Southern Ocean. They lack hemoglobin in their blood, resulting in their blood being colourless. One Channichthyidae species, the mackerel icefish (Champsocephalus gunnari), was once the most common fish in coastal waters less than 400 metres (1,312 ft) deep, but was overfished in the 1970s and 1980s. Schools of icefish spend the day at the seafloor and the night higher in the water column eating plankton and smaller fish.[84]
There are two species from the genus Dissostichus, the Antarctic toothfish (Dissostichus mawsoni) and the Patagonian toothfish (Dissostichus eleginoides). These two species live on the seafloor 100–3,000 metres (328–9,843 ft) deep, and can grow to around 2 metres (7 ft) long weighing up to 100 kilograms (220 lb), living up to 45 years. The Antarctic toothfish lives close to the Antarctic mainland, whereas the Patagonian toothfish lives in the relatively warmer subantarctic waters. Toothfish are commercially fished, and overfishing has reduced toothfish populations.[84]
Another abundant fish group is the genus Notothenia, which like the Antarctic toothfish have antifreeze in their bodies.[84]
An unusual species of icefish is the Antarctic silverfish (Pleuragramma antarcticum), which is the only truly pelagic fish in the waters near Antarctica.[87]
Seven pinniped species inhabit Antarctica. The largest, the elephant seal (Mirounga leonina), can reach up to 4,000 kilograms (8,818 lb), while females of the smallest, the Antarctic fur seal (Arctocephalus gazella), reach only 150 kilograms (331 lb). These two species live north of the sea ice, and breed in harems on beaches. The other four species can live on the sea ice. Crabeater seals (Lobodon carcinophagus) and Weddell seals (Leptonychotes weddellii) form breeding colonies, whereas leopard seals (Hydrurga leptonyx) and Ross seals (Ommatophoca rossii) live solitary lives. Although these species hunt underwater, they breed on land or ice and spend a great deal of time there, as they have no terrestrial predators.[88]
The four species that inhabit sea ice are thought to make up 50% of the total biomass of the world's seals.[89] Crabeater seals have a population of around 15 million, making them one of the most numerous large animals on the planet.[90] The New Zealand sea lion (Phocarctos hookeri), one of the rarest and most localised pinnipeds, breeds almost exclusively on the subantarctic Auckland Islands, although historically it had a wider range.[91] Out of all permanent mammalian residents, the Weddell seals live the furthest south.[92]
There are 10 cetacean species found in the Southern Ocean; six baleen whales, and four toothed whales. The largest of these, the blue whale (Balaenoptera musculus), grows to 24 metres (79 ft) long weighing 84 tonnes. Many of these species are migratory, and travel to tropical waters during the Antarctic winter.[93]
Five species of krill, small free-swimming crustaceans, have been found in the Southern Ocean.[94] The Antarctic krill (Euphausia superba) is one of the most abundant animal species on earth, with a biomass of around 500 million tonnes. Each individual is 6 centimetres (2.4 in) long and weighs over 1 gram (0.035 oz).[95] The swarms that form can stretch for kilometres, with up to 30,000 individuals per 1 cubic metre (35 cu ft), turning the water red.[94] Swarms usually remain in deep water during the day, ascending during the night to feed on plankton. Many larger animals depend on krill for their own survival.[95] During the winter when food is scarce, adult Antarctic krill can revert to a smaller juvenile stage, using their own body as nutrition.[94]
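The population implied by the biomass figures above follows from simple division; the calculation below uses the article's ~500 million tonne total and ~1 gram individual mass.

```python
# Order-of-magnitude estimate of the Antarctic krill population:
# total biomass divided by the mass of one adult individual.

BIOMASS_TONNES = 500e6   # ~500 million tonnes of Antarctic krill
INDIVIDUAL_MASS_G = 1.0  # each adult weighs roughly 1 gram

individuals = BIOMASS_TONNES * 1e6 / INDIVIDUAL_MASS_G  # tonnes -> grams
print(f"~{individuals:.0e} individual krill")  # prints "~5e+14 individual krill"
```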
Many benthic crustaceans have a non-seasonal breeding cycle, and some raise their young in a brood pouch. Glyptonotus antarcticus is an unusually large benthic isopod, reaching 20 centimetres (8 in) in length weighing 70 grams (2.47 oz). Amphipods are abundant in soft sediments, eating a range of items, from algae to other animals.[73] The amphipods are highly diverse with more than 600 recognized species found south of the Antarctic Convergence and there are indications that many undescribed species remain. Among these are several "giants", such as the iconic epimeriids that are up to 8 cm (3.1 in) long.[96]
Slow moving sea spiders are common, sometimes growing as large as a human hand. They feed on the corals, sponges, and bryozoans that litter the seabed.[73]
Many aquatic molluscs are present in Antarctica. Bivalves such as Adamussium colbecki move around on the seafloor, while others such as Laternula elliptica live in burrows filtering the water above.[73] There are around 70 cephalopod species in the Southern Ocean,[97] the largest of which is the colossal squid (Mesonychoteuthis hamiltoni), which at up to 14 metres (46 ft) is among the largest invertebrates in the world.[98] Squid makes up most of the diet of some animals, such as grey-headed albatrosses and sperm whales, and the warty squid (Moroteuthis ingens) is one of the subantarctic's most preyed upon species by vertebrates.[97]
Sea urchins of the genus Abatus burrow through the sediment, eating the nutrients they find in it.[73] Two species of salps are common in Antarctic waters, Salpa thompsoni and Ihlea racovitzai. Salpa thompsoni is found in ice-free areas, whereas Ihlea racovitzai is found in the high latitude areas near ice. Due to their low nutritional value, they are normally only eaten by fish, with larger animals such as birds and marine mammals only eating them when other food is scarce.[99]
Antarctic sponges are long lived, and sensitive to environmental changes due to the specificity of the symbiotic microbial communities within them. As a result, they function as indicators of environmental health.[100]
Increased solar ultraviolet radiation resulting from the Antarctic ozone hole has reduced marine primary productivity (phytoplankton) by as much as 15% and has started damaging the DNA of some fish.[101] Illegal, unreported and unregulated fishing, especially the landing of an estimated five to six times more Patagonian toothfish than the regulated fishery, likely affects the sustainability of the stock. Long-line fishing for toothfish causes a high incidence of seabird mortality.
All international agreements regarding the world's oceans apply to the Southern Ocean. In addition, it is subject to these agreements specific to the region:
Many nations prohibit the exploration for and the exploitation of mineral resources south of the fluctuating Antarctic Convergence,[104] which lies in the middle of the Antarctic Circumpolar Current and serves as the dividing line between the very cold polar surface waters to the south and the warmer waters to the north. The Antarctic Treaty covers the portion of the globe south of sixty degrees south,[105] and it prohibits new claims to Antarctica.[106]
The Convention for the Conservation of Antarctic Marine Living Resources applies to the area south of 60° South latitude as well as the areas further north up to the limit of the Antarctic Convergence.[107]
Between 1 July 1998 and 30 June 1999, fisheries landed 119,898 tonnes, of which 85% consisted of krill and 14% of Patagonian toothfish. International agreements came into force in late 1999 to reduce illegal, unreported, and unregulated fishing, which in the 1998–99 season landed five to six times more Patagonian toothfish than the regulated fishery.
Major operational ports include: Rothera Station, Palmer Station, Villa Las Estrellas, Esperanza Base, Mawson Station, McMurdo Station, and offshore anchorages in Antarctica.
Few ports or harbors exist on the southern (Antarctic) coast of the Southern Ocean, since ice conditions limit use of most shores to short periods in midsummer; even then some require icebreaker escort for access. Most Antarctic ports are operated by government research stations and, except in an emergency, remain closed to commercial or private vessels; vessels in any port south of 60 degrees south are subject to inspection by Antarctic Treaty observers.
The Southern Ocean's southernmost port operates at McMurdo Station at 77°50′S 166°40′E. Winter Quarters Bay forms a small harbor on the southern tip of Ross Island, where a floating ice pier makes port operations possible in summer. Operation Deep Freeze personnel constructed the first ice pier at McMurdo in 1973.[108]
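Geographic coordinates such as McMurdo's are routinely converted between degrees-minutes notation and decimal degrees. A minimal converter is sketched below; by convention, south latitudes and west longitudes are negative.

```python
# Convert degrees/minutes/seconds to signed decimal degrees.
def dms_to_decimal(degrees: float, minutes: float = 0.0,
                   seconds: float = 0.0, negative: bool = False) -> float:
    """negative=True marks a south latitude or west longitude."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if negative else value

# McMurdo Station, 77°50'S 166°40'E
lat = dms_to_decimal(77, 50, negative=True)
lon = dms_to_decimal(166, 40)
print(round(lat, 3), round(lon, 3))  # prints "-77.833 166.667"
```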
Based on the original 1928 IHO delineation of the Southern Ocean (and the 1937 delineation if the Great Australian Bight is considered integral), Australian ports and harbors between Cape Leeuwin and Cape Otway on the Australian mainland and along the west coast of Tasmania would also be identified as ports and harbors existing in the Southern Ocean. These would include the larger ports and harbors of Albany, Thevenard, Port Lincoln, Whyalla, Port Augusta, Port Adelaide, Portland, Warrnambool, and Macquarie Harbour.
Even though the organizers of several yacht races define their routes as involving the Southern Ocean, these routes do not actually enter the geographical boundaries of the Southern Ocean; instead they cross the South Atlantic, South Pacific and Indian Oceans.[109][110][111]
en/4212.html.txt
ADDED
@@ -0,0 +1,117 @@
The Arctic Ocean is the smallest and shallowest of the world's five major oceans.[1] It is also the coldest of all the oceans. The International Hydrographic Organization (IHO) recognizes it as an ocean, although some oceanographers call it the Arctic Sea. It is sometimes classified as an estuary of the Atlantic Ocean,[2][3] and it is also seen as the northernmost part of the all-encompassing World Ocean.
Located mostly in the Arctic north polar region in the middle of the Northern Hemisphere, the Arctic Ocean is surrounded by Eurasia and North America. It is partly covered by sea ice throughout the year and almost completely in winter. The Arctic Ocean's surface temperature and salinity vary seasonally as the ice cover melts and freezes;[4] its salinity is the lowest on average of the five major oceans, due to low evaporation, heavy fresh water inflow from rivers and streams, and limited connection and outflow to surrounding oceanic waters with higher salinities. The summer shrinking of the ice has been quoted at 50%.[1] The US National Snow and Ice Data Center (NSIDC) uses satellite data to provide a daily record of Arctic sea ice cover and the rate of melting compared to an average period and specific past years.
Human habitation in the North American polar region goes back at least 50,000–17,000 years ago, during the Wisconsin glaciation. At this time, falling sea levels allowed people to move across the Bering land bridge that joined Siberia to northwestern North America (Alaska), leading to the Settlement of the Americas.[5]
Paleo-Eskimo groups included the Pre-Dorset (c. 3200–850 BC); the Saqqaq culture of Greenland (2500–800 BC); the Independence I and Independence II cultures of northeastern Canada and Greenland (c. 2400–1800 BC and c. 800–1 BC); the Groswater of Labrador and Nunavik, and the Dorset culture (500 BC to AD 1500), which spread across Arctic North America. The Dorset were the last major Paleo-Eskimo culture in the Arctic before the migration east from present-day Alaska of the Thule, the ancestors of the modern Inuit.[6]
The Thule Tradition lasted from about 200 BC to AD 1600 around the Bering Strait, the Thule people being the prehistoric ancestors of the Inuit who now live in Northern Labrador.[7]
For much of European history, the north polar regions remained largely unexplored and their geography conjectural. Pytheas of Massilia recorded an account of a journey northward in 325 BC, to a land he called "Eschate Thule", where the Sun only set for three hours each day and the water was replaced by a congealed substance "on which one can neither walk nor sail". He was probably describing loose sea ice known today as "growlers" or "bergy bits"; his "Thule" was probably Norway, though the Faroe Islands or Shetland have also been suggested.[8]
Early cartographers were unsure whether to draw the region around the North Pole as land (as in Johannes Ruysch's map of 1507, or Gerardus Mercator's map of 1595) or water (as with Martin Waldseemüller's world map of 1507). The fervent desire of European merchants for a northern passage, the Northern Sea Route or the Northwest Passage, to "Cathay" (China) caused water to win out, and by 1723 mapmakers such as Johann Homann featured an extensive "Oceanus Septentrionalis" at the northern edge of their charts.
The few expeditions to penetrate much beyond the Arctic Circle in this era added only small islands, such as Novaya Zemlya (11th century) and Spitzbergen (1596), though since these were often surrounded by pack-ice, their northern limits were not so clear. The makers of navigational charts, more conservative than some of the more fanciful cartographers, tended to leave the region blank, with only fragments of known coastline sketched in.
This lack of knowledge of what lay north of the shifting barrier of ice gave rise to a number of conjectures. In England and other European nations, the myth of an "Open Polar Sea" was persistent. John Barrow, longtime Second Secretary of the British Admiralty, promoted exploration of the region from 1818 to 1845 in search of this.
In the United States in the 1850s and 1860s, the explorers Elisha Kane and Isaac Israel Hayes both claimed to have seen part of this elusive body of water. Even quite late in the century, the eminent authority Matthew Fontaine Maury included a description of the Open Polar Sea in his textbook The Physical Geography of the Sea (1883). Nevertheless, as all the explorers who travelled closer and closer to the pole reported, the polar ice cap is quite thick, and persists year-round.
Fridtjof Nansen was the first to make a nautical crossing of the Arctic Ocean, in 1896. The first surface crossing of the ocean was led by Wally Herbert in 1969, in a dog sled expedition from Alaska to Svalbard, with air support.[9] The first nautical transit of the north pole was made in 1958 by the submarine USS Nautilus, and the first surface nautical transit occurred in 1977 by the icebreaker NS Arktika.
Since 1937, Soviet and Russian manned drifting ice stations have extensively monitored the Arctic Ocean. Scientific settlements were established on the drift ice and carried thousands of kilometers by ice floes.[10]
In World War II, the European region of the Arctic Ocean was heavily contested: the Allied commitment to resupply the Soviet Union via its northern ports was opposed by German naval and air forces.
Since 1954 commercial airlines have flown over the Arctic Ocean (see Polar route).
The Arctic Ocean occupies a roughly circular basin and covers an area of about 14,056,000 km2 (5,427,000 sq mi), almost the size of Antarctica.[11][12] The coastline is 45,390 km (28,200 mi) long.[11][13] It is the only ocean smaller than Russia, which has a land area of 16,377,742 km2 (6,323,482 sq mi). It is surrounded by the land masses of Eurasia, North America, Greenland, and by several islands.
It is generally taken to include Baffin Bay, Barents Sea, Beaufort Sea, Chukchi Sea, East Siberian Sea, Greenland Sea, Hudson Bay, Hudson Strait, Kara Sea, Laptev Sea, White Sea and other tributary bodies of water. It is connected to the Pacific Ocean by the Bering Strait and to the Atlantic Ocean through the Greenland Sea and Labrador Sea.[1]
Countries bordering the Arctic Ocean are: Russia, Norway, Iceland, Greenland (territory of the Kingdom of Denmark), Canada and the United States.
There are several ports and harbors around the Arctic Ocean.[14]
In Alaska, the main ports are Barrow (71°17′44″N 156°45′59″W) and Prudhoe Bay (70°19′32″N 148°42′41″W).
In Canada, ships may anchor at Churchill (Port of Churchill, 58°46′28″N 094°11′37″W) in Manitoba, Nanisivik (Nanisivik Naval Facility, 73°04′08″N 084°32′57″W) in Nunavut,[15] Tuktoyaktuk (69°26′34″N 133°01′52″W) or Inuvik (68°21′42″N 133°43′50″W) in the Northwest Territories.
In Greenland, the main port is at Nuuk (Nuuk Port and Harbour, 64°10′15″N 051°43′15″W).
In Norway, Kirkenes (69°43′37″N 030°02′44″E) and Vardø (70°22′14″N 031°06′27″E) are ports on the mainland. There is also Longyearbyen (78°13′12″N 15°39′00″E) on Svalbard, a Norwegian archipelago, next to Fram Strait.
In Russia, major ports sorted by the different sea areas are:
The ocean's Arctic shelf comprises a number of continental shelves, including the Canadian Arctic shelf, underlying the Canadian Arctic Archipelago, and the Russian continental shelf, which is sometimes simply called the "Arctic Shelf" because it is greater in extent. The Russian continental shelf consists of three separate, smaller shelves, the Barents Shelf, Chukchi Sea Shelf and Siberian Shelf. Of these three, the Siberian Shelf is the largest such shelf in the world. The Siberian Shelf holds large oil and gas reserves, and the Chukchi shelf forms the border between Russia and the United States as stated in the USSR–USA Maritime Boundary Agreement. The whole area is subject to international territorial claims.
An underwater ridge, the Lomonosov Ridge, divides the deep sea North Polar Basin into two oceanic basins: the Eurasian Basin, which is between 4,000 and 4,500 m (13,100 and 14,800 ft) deep, and the Amerasian Basin (sometimes called the North American, or Hyperborean Basin), which is about 4,000 m (13,000 ft) deep. The bathymetry of the ocean bottom is marked by fault block ridges, abyssal plains, ocean deeps, and basins. The average depth of the Arctic Ocean is 1,038 m (3,406 ft).[16] The deepest point is Molloy Hole in the Fram Strait, at about 5,550 m (18,210 ft).[17]
The two major basins are further subdivided by ridges into the Canada Basin (between Alaska/Canada and the Alpha Ridge), Makarov Basin (between the Alpha and Lomonosov Ridges), Amundsen Basin (between Lomonosov and Gakkel ridges), and Nansen Basin (between the Gakkel Ridge and the continental shelf that includes the Franz Josef Land).
The crystalline basement rocks of mountains around the Arctic Ocean were recrystallized or formed during the Ellesmerian orogeny, the regional phase of the larger Caledonian orogeny in the Paleozoic. Regional subsidence in the Jurassic and Triassic led to significant sediment deposition, creating many of the reservoirs for present-day oil and gas deposits. During the Cretaceous the Canadian Basin opened, and tectonic activity due to the assembly of Alaska caused hydrocarbons to migrate toward what is now Prudhoe Bay. At the same time, sediments shed from the rising Canadian Rockies built out the large Mackenzie Delta.
The rifting apart of the supercontinent Pangea, beginning in the Triassic, opened the early Atlantic Ocean. Rifting then extended northward, opening the Arctic Ocean as mafic oceanic crust material erupted out of a branch of the Mid-Atlantic Ridge. The Amerasia Basin may have opened first, with the Chukchi Borderland moved along to the northeast by transform faults. Additional spreading helped to create the "triple-junction" of the Alpha-Mendeleev Ridge in the Late Cretaceous.
Throughout the Cenozoic, the subduction of the Pacific plate, the collision of India with Eurasia and the continued opening of the North Atlantic created new hydrocarbon traps. The seafloor began spreading from the Gakkel Ridge in the Paleocene and Eocene, causing the Lomonosov Ridge to move farther from land and subside.
Because of sea ice and remote conditions, the geology of the Arctic Ocean is still poorly explored. ACEX drilling shed some light on the Lomonosov Ridge, which appears to be continental crust that separated from the Barents-Kara Shelf in the Paleocene and was then starved of sediment. It may contain up to 10 billion barrels of oil. The Gakkel Ridge rift is also poorly understood and may extend into the Laptev Sea.[18][19]
In large parts of the Arctic Ocean, the top layer (about 50 m (160 ft)) is of lower salinity and lower temperature than the rest. It remains relatively stable, because the salinity effect on density is bigger than the temperature effect. It is fed by the freshwater input of the big Siberian and Canadian streams (Ob, Yenisei, Lena, Mackenzie), the water of which quasi floats on the saltier, denser, deeper ocean water. Between this lower salinity layer and the bulk of the ocean lies the so-called halocline, in which both salinity and temperature rise with increasing depth.
Because of its relative isolation from other oceans, the Arctic Ocean has a uniquely complex system of water flow. It resembles the Mediterranean Sea in some hydrological features, its deep waters having only limited communication through the Fram Strait with the Atlantic Basin, "where the circulation is dominated by thermohaline forcing".[20] The Arctic Ocean has a total volume of 18.07×106 km3, equal to about 1.3% of the World Ocean. Mean surface circulation is predominately cyclonic on the Eurasian side and anticyclonic in the Canadian Basin.[21]
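The quoted share of the World Ocean can be verified with simple division; the ~1.35×10^9 km³ total World Ocean volume used below is an assumed reference value, not a figure from the article.

```python
# Sanity check of the Arctic Ocean's share of World Ocean volume.

ARCTIC_VOLUME_KM3 = 18.07e6       # from the text above
WORLD_OCEAN_VOLUME_KM3 = 1.35e9   # assumed reference value

share = ARCTIC_VOLUME_KM3 / WORLD_OCEAN_VOLUME_KM3 * 100
print(f"Arctic Ocean share: {share:.1f}%")  # prints "Arctic Ocean share: 1.3%"
```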
Water enters from both the Pacific and Atlantic Oceans and can be divided into three unique water masses. The deepest water mass is called Arctic Bottom Water and begins around 900 metres (3,000 feet) depth.[20] It is composed of the densest water in the World Ocean and has two main sources: Arctic shelf water and Greenland Sea Deep Water. Water in the shelf region that begins as inflow from the Pacific passes through the narrow Bering Strait at an average rate of 0.8 Sverdrups and reaches the Chukchi Sea.[22] During the winter, cold Alaskan winds blow over the Chukchi Sea, freezing the surface water and pushing this newly formed ice out to the Pacific. The speed of the ice drift is roughly 1–4 cm/s.[21] This process leaves dense, salty waters in the sea that sink over the continental shelf into the western Arctic Ocean and create a halocline.[23]
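Transports here are quoted in sverdrups (Sv), where 1 Sv = 10^6 m³/s. A short sketch of the unit, comparing the Bering Strait inflow above with the Atlantic inflow that the text later describes as roughly ten times larger:

```python
# The sverdrup: the standard unit for ocean volume transport.
SV = 1e6  # cubic metres per second

bering_inflow = 0.8 * SV               # Pacific water through the Bering Strait
atlantic_inflow = 10 * bering_inflow   # ~10x larger, per the text (~8 Sv)

print(f"Bering: {bering_inflow:.0f} m^3/s, Atlantic: ~{atlantic_inflow / SV:.0f} Sv")
```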
This water is met by Greenland Sea Deep Water, which forms during the passage of winter storms. As temperatures cool dramatically in the winter, ice forms and intense vertical convection makes the water dense enough to sink below the warm, saline water beneath it.[20] Arctic Bottom Water is critically important because of its outflow, which contributes to the formation of Atlantic Deep Water. The overturning of this water plays a key role in global circulation and the moderation of climate.
In the depth range of 150–900 metres (490–2,950 feet) is a water mass referred to as Atlantic Water. Inflow from the North Atlantic Current enters through the Fram Strait, cooling and sinking to form the deepest layer of the halocline, where it circles the Arctic Basin counter-clockwise. This is the highest volumetric inflow to the Arctic Ocean, equalling about 10 times that of the Pacific inflow, and it creates the Arctic Ocean Boundary Current.[22] It flows slowly, at about 0.02 m/s.[20] Atlantic Water has the same salinity as Arctic Bottom Water but is much warmer (up to 3 °C); it is actually warmer than the surface water and remains submerged only because of the effect of salinity on density.[20] When water reaches the basin, it is pushed by strong winds into a large circular current called the Beaufort Gyre. Water in the Beaufort Gyre is far less saline than that of the Chukchi Sea because of inflow from large Canadian and Siberian rivers.[23]
The final defined water mass in the Arctic Ocean is called Arctic Surface Water and is found from 150 to 200 metres (490–660 feet). The most important feature of this water mass is a section referred to as the sub-surface layer. It is a product of Atlantic water that enters through canyons and is subjected to intense mixing on the Siberian Shelf.[20] As it is entrained, it cools and acts as a heat shield for the surface layer. This insulation keeps the warm Atlantic Water from melting the surface ice. Additionally, this water forms the swiftest currents of the Arctic, with speeds of around 0.3–0.6 m/s.[20] Complementing the water from the canyons, some Pacific water that does not sink to the shelf region after passing through the Bering Strait also contributes to this water mass.
Waters originating in the Pacific and Atlantic both exit through the Fram Strait between Greenland and Svalbard, which is about 2,700 metres (8,900 feet) deep and 350 kilometres (220 miles) wide. This outflow is about 9 Sv.[22] The width of the Fram Strait is what allows for both inflow and outflow on the Atlantic side of the Arctic Ocean. Because of this, it is influenced by the Coriolis force, which concentrates outflow to the East Greenland Current on the western side and inflow to the Norwegian Current on the eastern side.[20] Pacific water also exits along the west coast of Greenland and the Hudson Strait (1–2 Sv), providing nutrients to the Canadian Archipelago.[22]
As noted, the process of ice formation and movement is a key driver in Arctic Ocean circulation and the formation of water masses. With this dependence, the Arctic Ocean experiences variations due to seasonal changes in sea ice cover. Sea ice movement is the result of wind forcing, which is related to a number of meteorological conditions that the Arctic experiences throughout the year. For example, the Beaufort High—an extension of the Siberian High system—is a pressure system that drives the anticyclonic motion of the Beaufort Gyre.[21] During the summer, this area of high pressure is pushed out closer to its Siberian and Canadian sides. In addition, there is a sea level pressure (SLP) ridge over Greenland that drives strong northerly winds through the Fram Strait, facilitating ice export. In the summer, the SLP contrast is smaller, producing weaker winds. A final example of seasonal pressure system movement is the low pressure system that exists over the Nordic and Barents Seas. It is an extension of the Icelandic Low, which creates cyclonic ocean circulation in this area. The low shifts to center over the North Pole in the summer. These variations in the Arctic all contribute to ice drift reaching its weakest point during the summer months. There is also evidence that the drift is associated with the phase of the Arctic Oscillation and Atlantic Multidecadal Oscillation.[21]
Much of the Arctic Ocean is covered by sea ice that varies in extent and thickness seasonally. The mean extent of the ice has been decreasing since 1980 from the average winter value of 15,600,000 km2 (6,023,200 sq mi) at a rate of 3% per decade. The seasonal variations are about 7,000,000 km2 (2,702,700 sq mi) with the maximum in April and minimum in September. The sea ice is affected by wind and ocean currents, which can move and rotate very large areas of ice. Zones of compression also arise, where the ice piles up to form pack ice.[25][26][27]
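Taking the stated figures at face value, a 3% per decade decline compounds as follows (a simplifying illustration; the text gives only a rate, not a model of the decline):

```python
winter_extent_km2 = 15_600_000   # average winter maximum extent (from the text)
decline_per_decade = 0.03        # 3% decline per decade (from the text)

def extent_after(decades, start=winter_extent_km2, rate=decline_per_decade):
    """Project extent assuming the per-decade percentage decline compounds."""
    return start * (1 - rate) ** decades

# After four decades at 3% per decade, roughly 13.8 million km² would remain.
print(round(extent_after(4)))
```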
Icebergs occasionally break away from northern Ellesmere Island, and icebergs are formed from glaciers in western Greenland and extreme northeastern Canada. Icebergs are not sea ice but may become embedded in the pack ice. They pose a hazard to ships, the sinking of the Titanic being one of the most famous examples. The ocean is virtually icelocked from October to June, and the superstructures of ships are subject to icing from October to May.[14] Before the advent of modern icebreakers, ships sailing the Arctic Ocean risked being trapped or crushed by sea ice (although the Baychimo drifted through the Arctic Ocean untended for decades despite these hazards).
Under the influence of the Quaternary glaciation, the Arctic Ocean lies within a polar climate characterized by persistent cold and relatively narrow annual temperature ranges. Winters are characterized by the polar night, extreme cold, frequent low-level temperature inversions, and stable weather conditions.[28] Cyclones are common only on the Atlantic side.[29] Summers are characterized by continuous daylight (midnight sun), and temperatures can rise above the melting point (0 °C (32 °F)). Cyclones are more frequent in summer and may bring rain or snow.[29] It is cloudy year-round, with mean cloud cover ranging from 60% in winter to over 80% in summer.[30]
The temperature of the surface of the Arctic Ocean is fairly constant, near the freezing point of seawater. Because the Arctic Ocean consists of saltwater, the temperature must reach −1.8 °C (28.8 °F) before freezing occurs.
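The quoted −1.8 °C can be related to salinity with the standard empirical freezing-point formula for seawater (the EOS-80 polynomial of Millero 1978; the formula itself is an outside reference, not given in this article). At a salinity of about 33, typical of the relatively fresh Arctic surface layer, it yields roughly −1.8 °C:

```python
def freezing_point_c(salinity_psu, pressure_dbar=0.0):
    """Freezing point of seawater in °C (EOS-80 / Millero 1978 polynomial)."""
    s = salinity_psu
    return (-0.0575 * s
            + 1.710523e-3 * s ** 1.5
            - 2.154996e-4 * s ** 2
            - 7.53e-4 * pressure_dbar)

print(round(freezing_point_c(33), 2))   # ≈ -1.81 °C, matching the -1.8 °C quoted above
print(round(freezing_point_c(35), 2))   # ≈ -1.92 °C for typical open-ocean salinity
```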
The density of sea water, in contrast to fresh water, increases as it nears the freezing point and thus it tends to sink. It is generally necessary that the upper 100–150 m (330–490 ft) of ocean water cools to the freezing point for sea ice to form.[31] In the winter the relatively warm ocean water exerts a moderating influence, even when covered by ice. This is one reason why the Arctic does not experience the extreme temperatures seen on the Antarctic continent.
There is considerable seasonal variation in how much pack ice of the Arctic ice pack covers the Arctic Ocean. Much of the Arctic ice pack is also covered in snow for about 10 months of the year. The maximum snow cover is in March or April — about 20 to 50 cm (7.9 to 19.7 in) over the frozen ocean.
The climate of the Arctic region has varied significantly in the past. As recently as 55 million years ago, during the Paleocene–Eocene Thermal Maximum, the region reached an average annual temperature of 10–20 °C (50–68 °F).[32] The surface waters of the northernmost[33] Arctic Ocean warmed, seasonally at least, enough to support tropical lifeforms (the dinoflagellate Apectodinium augustum) requiring surface temperatures of over 22 °C (72 °F).[34]
Due to the pronounced seasonality of 2–6 months of midnight sun and polar night[35] in the Arctic Ocean, the primary production of photosynthesizing organisms such as ice algae and phytoplankton is limited to the spring and summer months (March/April to September[36]). Important consumers of primary producers in the central Arctic Ocean and the adjacent shelf seas include zooplankton, especially copepods (Calanus finmarchicus, Calanus glacialis, and Calanus hyperboreus[37]) and euphausiids,[38] as well as ice-associated fauna (e.g., amphipods[37]). These primary consumers form an important link between the primary producers and higher trophic levels. The composition of higher trophic levels in the Arctic Ocean varies with region (Atlantic side vs. Pacific side) and with the sea-ice cover. Secondary consumers in the Barents Sea, an Atlantic-influenced Arctic shelf sea, are mainly sub-Arctic species including herring, young cod, and capelin.[38] In ice-covered regions of the central Arctic Ocean, polar cod is a central predator of primary consumers. The apex predators in the Arctic Ocean, marine mammals such as seals, whales, and polar bears, prey upon fish.
Endangered marine species in the Arctic Ocean include walruses and whales. The area has a fragile ecosystem, and it is especially exposed to climate change, because it warms faster than the rest of the world. Lion's mane jellyfish are abundant in the waters of the Arctic, and the banded gunnel is the only species of gunnel that lives in the ocean.
Petroleum and natural gas fields, placer deposits, polymetallic nodules, sand and gravel aggregates, fish, seals and whales can all be found in abundance in the region.[14][27]
The political dead zone near the center of the sea is also the focus of a mounting dispute between the United States, Russia, Canada, Norway, and Denmark.[39] It is significant for the global energy market because it may hold 25% or more of the world's undiscovered oil and gas resources.[40]
The Arctic ice pack is thinning, and a seasonal hole in the ozone layer frequently occurs.[41] Reduction of the area of Arctic sea ice reduces the planet's average albedo, possibly resulting in global warming in a positive feedback mechanism.[27][42] Research shows that the Arctic may become ice-free in the summer for the first time in human history by 2040.[43][44] Estimates of when the Arctic was last ice-free vary: from 65 million years ago, when fossils indicate that plants existed there, to as recently as 5,500 years ago; ice and ocean cores suggest it may have been ice-free 8,000 years ago, during the last warm period, or 125,000 years ago, during the last interglacial period.[45]
Warming temperatures in the Arctic may cause large amounts of fresh meltwater to enter the north Atlantic, possibly disrupting global ocean current patterns. Potentially severe changes in the Earth's climate might then ensue.[42]
As the extent of sea ice diminishes and sea level rises, the effect of storms such as the Great Arctic Cyclone of 2012 on open water increases, as does possible salt-water damage to vegetation on shore at locations such as the Mackenzie River delta, as stronger storm surges become more likely.[46]
Global warming has increased encounters between polar bears and humans. Reduced sea ice due to melting is causing polar bears to search for new sources of food.[47] Beginning in December 2018 and peaking in February 2019, a mass invasion of polar bears into the archipelago of Novaya Zemlya caused local authorities to declare a state of emergency. Dozens of polar bears were seen entering homes, public buildings, and other inhabited areas.[48][49]
Sea ice, and the cold conditions it sustains, serves to stabilize methane deposits on and near the shoreline,[50] preventing methane clathrates from breaking down and outgassing methane into the atmosphere. Melting of this ice may release large quantities of methane, a powerful greenhouse gas, into the atmosphere, causing further warming in a strong positive feedback cycle and threatening marine genera and species with extinction.[50][51]
Other environmental concerns relate to the radioactive contamination of the Arctic Ocean from, for example, Russian radioactive waste dump sites in the Kara Sea,[52] Cold War nuclear test sites such as Novaya Zemlya,[53] Camp Century's contaminants in Greenland,[54] and radioactive contamination from Fukushima.[55]
On 16 July 2015, five nations (United States of America, Russia, Canada, Norway, Denmark/Greenland) signed a declaration committing to keep their fishing vessels out of a 1.1 million square mile zone in the central Arctic Ocean near the North Pole. The agreement calls for those nations to refrain from fishing there until there is better scientific knowledge about the marine resources and until a regulatory system is in place to protect those resources.[56][57]
Coordinates: 90°N 0°E
en/4213.html.txt
The Arctic Ocean is the smallest and shallowest of the world's five major oceans.[1] It is also known as the coldest of all the oceans. The International Hydrographic Organization (IHO) recognizes it as an ocean, although some oceanographers call it the Arctic Sea. It is sometimes classified as an estuary of the Atlantic Ocean,[2][3] and it is also seen as the northernmost part of the all-encompassing World Ocean.
Located mostly in the Arctic north polar region in the middle of the Northern Hemisphere, the Arctic Ocean is surrounded by Eurasia and North America. It is partly covered by sea ice throughout the year and almost completely in winter. The Arctic Ocean's surface temperature and salinity vary seasonally as the ice cover melts and freezes;[4] its salinity is the lowest on average of the five major oceans, due to low evaporation, heavy fresh water inflow from rivers and streams, and limited connection and outflow to surrounding oceanic waters with higher salinities. The summer shrinking of the ice has been quoted at 50%.[1] The US National Snow and Ice Data Center (NSIDC) uses satellite data to provide a daily record of Arctic sea ice cover and the rate of melting compared to an average period and specific past years.
Human habitation in the North American polar region goes back at least 50,000–17,000 years ago, during the Wisconsin glaciation. At this time, falling sea levels allowed people to move across the Bering land bridge that joined Siberia to northwestern North America (Alaska), leading to the Settlement of the Americas.[5]
Paleo-Eskimo groups included the Pre-Dorset (c. 3200–850 BC); the Saqqaq culture of Greenland (2500–800 BC); the Independence I and Independence II cultures of northeastern Canada and Greenland (c. 2400–1800 BC and c. 800–1 BC); the Groswater of Labrador and Nunavik, and the Dorset culture (500 BC to AD 1500), which spread across Arctic North America. The Dorset were the last major Paleo-Eskimo culture in the Arctic before the migration east from present-day Alaska of the Thule, the ancestors of the modern Inuit.[6]
The Thule Tradition lasted from about 200 BC to AD 1600 around the Bering Strait, the Thule people being the prehistoric ancestors of the Inuit who now live in Northern Labrador.[7]
For much of European history, the north polar regions remained largely unexplored and their geography conjectural. Pytheas of Massilia recorded an account of a journey northward in 325 BC, to a land he called "Eschate Thule", where the Sun only set for three hours each day and the water was replaced by a congealed substance "on which one can neither walk nor sail". He was probably describing loose sea ice known today as "growlers" or "bergy bits"; his "Thule" was probably Norway, though the Faroe Islands or Shetland have also been suggested.[8]
Early cartographers were unsure whether to draw the region around the North Pole as land (as in Johannes Ruysch's map of 1507, or Gerardus Mercator's map of 1595) or water (as with Martin Waldseemüller's world map of 1507). The fervent desire of European merchants for a northern passage, the Northern Sea Route or the Northwest Passage, to "Cathay" (China) caused water to win out, and by 1723 mapmakers such as Johann Homann featured an extensive "Oceanus Septentrionalis" at the northern edge of their charts.
The few expeditions to penetrate much beyond the Arctic Circle in this era added only small islands, such as Novaya Zemlya (11th century) and Spitzbergen (1596), though since these were often surrounded by pack-ice, their northern limits were not so clear. The makers of navigational charts, more conservative than some of the more fanciful cartographers, tended to leave the region blank, with only fragments of known coastline sketched in.
This lack of knowledge of what lay north of the shifting barrier of ice gave rise to a number of conjectures. In England and other European nations, the myth of an "Open Polar Sea" was persistent. John Barrow, longtime Second Secretary of the British Admiralty, promoted exploration of the region from 1818 to 1845 in search of this.
In the United States in the 1850s and 1860s, the explorers Elisha Kane and Isaac Israel Hayes both claimed to have seen part of this elusive body of water. Even quite late in the century, the eminent authority Matthew Fontaine Maury included a description of the Open Polar Sea in his textbook The Physical Geography of the Sea (1883). Nevertheless, as all the explorers who travelled closer and closer to the pole reported, the polar ice cap is quite thick, and persists year-round.
Fridtjof Nansen was the first to make a nautical crossing of the Arctic Ocean, in 1896. The first surface crossing of the ocean was led by Wally Herbert in 1969, in a dog sled expedition from Alaska to Svalbard, with air support.[9] The first nautical transit of the North Pole was made in 1958 by the submarine USS Nautilus, and the first surface nautical transit occurred in 1977 by the icebreaker NS Arktika.
Since 1937, Soviet and Russian manned drifting ice stations have extensively monitored the Arctic Ocean. Scientific settlements were established on the drift ice and carried thousands of kilometers by ice floes.[10]
In World War II, the European region of the Arctic Ocean was heavily contested: the Allied commitment to resupply the Soviet Union via its northern ports was opposed by German naval and air forces.
Since 1954 commercial airlines have flown over the Arctic Ocean (see Polar route).
The Arctic Ocean occupies a roughly circular basin and covers an area of about 14,056,000 km2 (5,427,000 sq mi), almost the size of Antarctica.[11][12] The coastline is 45,390 km (28,200 mi) long.[11][13] It is the only ocean smaller than Russia, which has a land area of 16,377,742 km2 (6,323,482 sq mi). It is surrounded by the land masses of Eurasia, North America, Greenland, and by several islands.
It is generally taken to include Baffin Bay, Barents Sea, Beaufort Sea, Chukchi Sea, East Siberian Sea, Greenland Sea, Hudson Bay, Hudson Strait, Kara Sea, Laptev Sea, White Sea and other tributary bodies of water. It is connected to the Pacific Ocean by the Bering Strait and to the Atlantic Ocean through the Greenland Sea and Labrador Sea.[1]
Countries bordering the Arctic Ocean are: Russia, Norway, Iceland, Greenland (territory of the Kingdom of Denmark), Canada and the United States.
There are several ports and harbors around the Arctic Ocean.[14]
In Alaska, the main ports are Barrow (71°17′44″N 156°45′59″W) and Prudhoe Bay (70°19′32″N 148°42′41″W).
In Canada, ships may anchor at Churchill (Port of Churchill, 58°46′28″N 094°11′37″W) in Manitoba, Nanisivik (Nanisivik Naval Facility, 73°04′08″N 084°32′57″W) in Nunavut,[15] Tuktoyaktuk (69°26′34″N 133°01′52″W) or Inuvik (68°21′42″N 133°43′50″W) in the Northwest Territories.
In Greenland, the main port is at Nuuk (Nuuk Port and Harbour, 64°10′15″N 051°43′15″W).
In Norway, Kirkenes (69°43′37″N 030°02′44″E) and Vardø (70°22′14″N 031°06′27″E) are ports on the mainland. There is also Longyearbyen (78°13′12″N 15°39′00″E) on Svalbard, a Norwegian archipelago, next to Fram Strait.
In Russia, major ports sorted by the different sea areas are:
The ocean's Arctic shelf comprises a number of continental shelves, including the Canadian Arctic shelf, underlying the Canadian Arctic Archipelago, and the Russian continental shelf, which is sometimes simply called the "Arctic Shelf" because it is greater in extent. The Russian continental shelf consists of three separate, smaller shelves: the Barents Shelf, Chukchi Sea Shelf, and Siberian Shelf. Of these three, the Siberian Shelf is the largest such shelf in the world. The Siberian Shelf holds large oil and gas reserves, and the Chukchi shelf forms the border between Russia and the United States as stated in the USSR–USA Maritime Boundary Agreement. The whole area is subject to international territorial claims.
An underwater ridge, the Lomonosov Ridge, divides the deep sea North Polar Basin into two oceanic basins: the Eurasian Basin, which is between 4,000 and 4,500 m (13,100 and 14,800 ft) deep, and the Amerasian Basin (sometimes called the North American, or Hyperborean Basin), which is about 4,000 m (13,000 ft) deep. The bathymetry of the ocean bottom is marked by fault block ridges, abyssal plains, ocean deeps, and basins. The average depth of the Arctic Ocean is 1,038 m (3,406 ft).[16] The deepest point is Molloy Hole in the Fram Strait, at about 5,550 m (18,210 ft).[17]
The two major basins are further subdivided by ridges into the Canada Basin (between Alaska/Canada and the Alpha Ridge), Makarov Basin (between the Alpha and Lomonosov ridges), Amundsen Basin (between the Lomonosov and Gakkel ridges), and Nansen Basin (between the Gakkel Ridge and the continental shelf that includes Franz Josef Land).
The crystalline basement rocks of mountains around the Arctic Ocean were recrystallized or formed during the Ellesmerian orogeny, the regional phase of the larger Caledonian orogeny in the Paleozoic. Regional subsidence in the Jurassic and Triassic led to significant sediment deposition, creating many of the reservoirs for present-day oil and gas deposits. During the Cretaceous the Canadian Basin opened, and tectonic activity due to the assembly of Alaska caused hydrocarbons to migrate toward what is now Prudhoe Bay. At the same time, sediments shed from the rising Canadian Rockies built out the large Mackenzie Delta.
The rifting apart of the supercontinent Pangea, beginning in the Triassic, opened the early Atlantic Ocean. Rifting then extended northward, opening the Arctic Ocean as mafic oceanic crust erupted from a branch of the Mid-Atlantic Ridge. The Amerasia Basin may have opened first, with the Chukchi Borderland moving to the northeast along transform faults. Additional spreading helped to create the "triple-junction" of the Alpha-Mendeleev Ridge in the Late Cretaceous.
Throughout the Cenozoic, the subduction of the Pacific plate, the collision of India with Eurasia and the continued opening of the North Atlantic created new hydrocarbon traps. The seafloor began spreading from the Gakkel Ridge in the Paleocene and Eocene, causing the Lomonosov Ridge to move farther from land and subside.
Because of sea ice and remote conditions, the geology of the Arctic Ocean is still poorly explored. ACEX drilling shed some light on the Lomonosov Ridge, which appears to be continental crust that separated from the Barents–Kara Shelf in the Paleocene and was then starved of sediment. It may contain up to 10 billion barrels of oil. The Gakkel Ridge rift is also poorly understood and may extend into the Laptev Sea.[18][19]
|
64 |
+
|
65 |
+
In large parts of the Arctic Ocean, the top layer (about 50 m (160 ft)) is of lower salinity and lower temperature than the rest. It remains relatively stable, because the salinity effect on density is bigger than the temperature effect. It is fed by the freshwater input of the big Siberian and Canadian streams (Ob, Yenisei, Lena, Mackenzie), the water of which quasi floats on the saltier, denser, deeper ocean water. Between this lower salinity layer and the bulk of the ocean lies the so-called halocline, in which both salinity and temperature rise with increasing depth.
|
66 |
+
|
67 |
+
Because of its relative isolation from other oceans, the Arctic Ocean has a uniquely complex system of water flow. It resembles some hydrological features of the Mediterranean Sea, referring to its deep waters having only limited communication through the Fram Strait with the Atlantic Basin, "where the circulation is dominated by thermohaline forcing”.[20] The Arctic Ocean has a total volume of 18.07×106 km3, equal to about 1.3% of the World Ocean. Mean surface circulation is predominately cyclonic on the Eurasian side and anticyclonic in the Canadian Basin.[21]
|
68 |
+
|
69 |
+
Water enters from both the Pacific and Atlantic Oceans and can be divided into three unique water masses. The deepest water mass is called Arctic Bottom Water and begins around 900 metres (3,000 feet) depth.[20] It is composed of the densest water in the World Ocean and has two main sources: Arctic shelf water and Greenland Sea Deep Water. Water in the shelf region that begins as inflow from the Pacific passes through the narrow Bering Strait at an average rate of 0.8 Sverdrups and reaches the Chukchi Sea.[22] During the winter, cold Alaskan winds blow over the Chukchi Sea, freezing the surface water and pushing this newly formed ice out to the Pacific. The speed of the ice drift is roughly 1–4 cm/s.[21] This process leaves dense, salty waters in the sea that sink over the continental shelf into the western Arctic Ocean and create a halocline.[23]
|
70 |
+
|
71 |
+
This water is met by Greenland Sea Deep Water, which forms during the passage of winter storms. As temperatures cool dramatically in the winter, ice forms and intense vertical convection allows the water to become dense enough to sink below the warm saline water below.[20] Arctic Bottom Water is critically important because of its outflow, which contributes to the formation of Atlantic Deep Water. The overturning of this water plays a key role in global circulation and the moderation of climate.
|
72 |
+
|
73 |
+
In the depth range of 150–900 metres (490–2,950 feet) is a water mass referred to as Atlantic Water. Inflow from the North Atlantic Current enters through the Fram Strait, cooling and sinking to form the deepest layer of the halocline, where it circles the Arctic Basin counter-clockwise. This is the highest volumetric inflow to the Arctic Ocean, equalling about 10 times that of the Pacific inflow, and it creates the Arctic Ocean Boundary Current.[22] It flows slowly, at about 0.02 m/s.[20] Atlantic Water has the same salinity as Arctic Bottom Water but is much warmer (up to 3 °C). In fact, this water mass is actually warmer than the surface water, and remains submerged only due to the role of salinity in density.[20] When water reaches the basin it is pushed by strong winds into a large circular current called the Beaufort Gyre. Water in the Beaufort Gyre is far less saline than that of the Chukchi Sea due to inflow from large Canadian and Siberian rivers.[23]
|
74 |
+
|
75 |
+
The final defined water mass in the Arctic Ocean is called Arctic Surface Water and is found from 150–200 metres (490–660 feet). The most important feature of this water mass is a section referred to as the sub-surface layer. It is a product of Atlantic water that enters through canyons and is subjected to intense mixing on the Siberian Shelf.[20] As it is entrained, it cools and acts a heat shield for the surface layer. This insulation keeps the warm Atlantic Water from melting the surface ice. Additionally, this water forms the swiftest currents of the Arctic, with speed of around 0.3–0.6 m/s.[20] Complementing the water from the canyons, some Pacific water that does not sink to the shelf region after passing through the Bering Strait also contributes to this water mass.
|
76 |
+
|
77 |
+
Waters originating in the Pacific and Atlantic both exit through the Fram Strait between Greenland and Svalbard Island, which is about 2,700 metres (8,900 feet) deep and 350 kilometres (220 miles) wide. This outflow is about 9 Sv.[22] The width of the Fram Strait is what allows for both inflow and outflow on the Atlantic side of the Arctic Ocean. Because of this, it is influenced by the Coriolis force, which concentrates outflow to the East Greenland Current on the western side and inflow to the Norwegian Current on the eastern side.[20] Pacific water also exits along the west coast of Greenland and the Hudson Strait (1–2 Sv), providing nutrients to the Canadian Archipelago.[22]
|
78 |
+
|
79 |
+
As noted, the process of ice formation and movement is a key driver in Arctic Ocean circulation and the formation of water masses. With this dependence, the Arctic Ocean experiences variations due to seasonal changes in sea ice cover. Sea ice movement is the result of wind forcing, which is related to a number of meteorological conditions that the Arctic experiences throughout the year. For example, the Beaufort High—an extension of the Siberian High system—is a pressure system that drives the anticyclonic motion of the Beaufort Gyre.[21] During the summer, this area of high pressure is pushed out closer to its Siberian and Canadian sides. In addition, there is a sea level pressure (SLP) ridge over Greenland that drives strong northerly winds through the Fram Strait, facilitating ice export. In the summer, the SLP contrast is smaller, producing weaker winds. A final example of seasonal pressure system movement is the low pressure system that exists over the Nordic and Barents Seas. It is an extension of the Icelandic Low, which creates cyclonic ocean circulation in this area. The low shifts to center over the North Pole in the summer. These variations in the Arctic all contribute to ice drift reaching its weakest point during the summer months. There is also evidence that the drift is associated with the phase of the Arctic Oscillation and Atlantic Multidecadal Oscillation.[21]
|
80 |
+
|
81 |
+
Much of the Arctic Ocean is covered by sea ice that varies in extent and thickness seasonally. The mean extent of the ice has been decreasing since 1980 from the average winter value of 15,600,000 km2 (6,023,200 sq mi) at a rate of 3% per decade. The seasonal variations are about 7,000,000 km2 (2,702,700 sq mi) with the maximum in April and minimum in September. The sea ice is affected by wind and ocean currents, which can move and rotate very large areas of ice. Zones of compression also arise, where the ice piles up to form pack ice.[25][26][27]
Icebergs occasionally break away from northern Ellesmere Island, and icebergs are formed from glaciers in western Greenland and extreme northeastern Canada. Icebergs are not sea ice but may become embedded in the pack ice. Icebergs pose a hazard to ships; the sinking of the Titanic is one of the most famous examples. The ocean is virtually icelocked from October to June, and the superstructures of ships are subject to icing from October to May.[14] Before the advent of modern icebreakers, ships sailing the Arctic Ocean risked being trapped or crushed by sea ice (although the Baychimo drifted through the Arctic Ocean untended for decades despite these hazards).
Under the influence of the Quaternary glaciation, the Arctic Ocean is contained in a polar climate characterized by persistent cold and relatively narrow annual temperature ranges. Winters are characterized by the polar night, extreme cold, frequent low-level temperature inversions, and stable weather conditions.[28] Cyclones are only common on the Atlantic side.[29] Summers are characterized by continuous daylight (midnight sun), and temperatures can rise above the melting point (0 °C (32 °F)). Cyclones are more frequent in summer and may bring rain or snow.[29] It is cloudy year-round, with mean cloud cover ranging from 60% in winter to over 80% in summer.[30]
The temperature of the surface of the Arctic Ocean is fairly constant, near the freezing point of seawater. Because the Arctic Ocean consists of saltwater, the temperature must reach −1.8 °C (28.8 °F) before freezing occurs.
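The −1.8 °C figure follows from the dissolved salt depressing the freezing point. A commonly used linear rule of thumb (an approximation I am assuming here, not stated in the article) puts the freezing point at roughly −0.054 °C per unit of salinity:

```python
def freezing_point_c(salinity_psu):
    # Rough linear approximation: T_f ~ -0.054 * S, with S in practical
    # salinity units (grams of salt per kg of seawater). Assumed, not exact.
    return -0.054 * salinity_psu

tf = freezing_point_c(33.5)  # close to the -1.8 C quoted above
```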
The density of sea water, in contrast to fresh water, increases as it nears the freezing point and thus it tends to sink. It is generally necessary that the upper 100–150 m (330–490 ft) of ocean water cools to the freezing point for sea ice to form.[31] In the winter the relatively warm ocean water exerts a moderating influence, even when covered by ice. This is one reason why the Arctic does not experience the extreme temperatures seen on the Antarctic continent.
There is considerable seasonal variation in how much pack ice of the Arctic ice pack covers the Arctic Ocean. Much of the Arctic ice pack is also covered in snow for about 10 months of the year. The maximum snow cover is in March or April — about 20 to 50 cm (7.9 to 19.7 in) over the frozen ocean.
The climate of the Arctic region has varied significantly in the past. As recently as 55 million years ago, during the Paleocene–Eocene Thermal Maximum, the region reached an average annual temperature of 10–20 °C (50–68 °F).[32] The surface waters of the northernmost[33] Arctic Ocean warmed, seasonally at least, enough to support tropical lifeforms (the dinoflagellates Apectodinium augustum) requiring surface temperatures of over 22 °C (72 °F).[34]
Due to the pronounced seasonality of 2–6 months of midnight sun and polar night[35] in the Arctic Ocean, the primary production of photosynthesizing organisms such as ice algae and phytoplankton is limited to the spring and summer months (March/April to September[36]). Important consumers of primary producers in the central Arctic Ocean and the adjacent shelf seas include zooplankton, especially copepods (Calanus finmarchicus, Calanus glacialis, and Calanus hyperboreus[37]) and euphausiids,[38] as well as ice-associated fauna (e.g., amphipods[37]). These primary consumers form an important link between the primary producers and higher trophic levels. The composition of higher trophic levels in the Arctic Ocean varies with region (Atlantic side vs. Pacific side) and with the sea-ice cover. Secondary consumers in the Barents Sea, an Atlantic-influenced Arctic shelf sea, are mainly sub-Arctic species including herring, young cod, and capelin.[38] In ice-covered regions of the central Arctic Ocean, polar cod is a central predator of primary consumers. The apex predators in the Arctic Ocean, marine mammals such as seals, whales, and polar bears, prey upon fish.
Endangered marine species in the Arctic Ocean include walruses and whales. The area has a fragile ecosystem, and it is especially exposed to climate change, because it warms faster than the rest of the world. Lion's mane jellyfish are abundant in the waters of the Arctic, and the banded gunnel is the only species of gunnel that lives in the ocean.
Petroleum and natural gas fields, placer deposits, polymetallic nodules, sand and gravel aggregates, fish, seals and whales can all be found in abundance in the region.[14][27]
The political dead zone near the center of the sea is also the focus of a mounting dispute between the United States, Russia, Canada, Norway, and Denmark.[39] It is significant for the global energy market because it may hold 25% or more of the world's undiscovered oil and gas resources.[40]
The Arctic ice pack is thinning, and a seasonal hole in the ozone layer frequently occurs.[41] Reduction of the area of Arctic sea ice reduces the planet's average albedo, possibly resulting in global warming in a positive feedback mechanism.[27][42] Research shows that the Arctic may become ice-free in the summer for the first time in human history by 2040.[43][44] Estimates of when the Arctic was last ice-free vary: from 65 million years ago, when fossils indicate that plants existed there, to as recently as 5,500 years ago; ice and ocean cores go back 8,000 years to the last warm period, or 125,000 years to the last interglacial period.[45]
Warming temperatures in the Arctic may cause large amounts of fresh meltwater to enter the north Atlantic, possibly disrupting global ocean current patterns. Potentially severe changes in the Earth's climate might then ensue.[42]
As the extent of sea ice diminishes and sea level rises, the effect of storms such as the Great Arctic Cyclone of 2012 on open water increases, as does possible salt-water damage to vegetation on shore at locations such as the Mackenzie River delta, as stronger storm surges become more likely.[46]
Global warming has increased encounters between polar bears and humans. Reduced sea ice due to melting is causing polar bears to search for new sources of food.[47] Beginning in December 2018 and coming to an apex in February 2019, a mass invasion of polar bears into the archipelago of Novaya Zemlya caused local authorities to declare a state of emergency. Dozens of polar bears were seen entering homes and public buildings and inhabited areas.[48][49]
Sea ice, and the cold conditions it sustains, serves to stabilize methane deposits on and near the shoreline,[50] preventing methane clathrate from breaking down and outgassing methane into the atmosphere, which would cause further warming. Melting of this ice may release large quantities of methane, a powerful greenhouse gas, into the atmosphere, causing further warming in a strong positive feedback cycle and driving marine genera and species to extinction.[50][51]
Other environmental concerns relate to the radioactive contamination of the Arctic Ocean from, for example, Russian radioactive waste dump sites in the Kara Sea,[52] Cold War nuclear test sites such as Novaya Zemlya,[53] Camp Century's contaminants in Greenland,[54] or radioactive contamination from Fukushima.[55]
On 16 July 2015, five nations (United States of America, Russia, Canada, Norway, Denmark/Greenland) signed a declaration committing to keep their fishing vessels out of a 1.1 million square mile zone in the central Arctic Ocean near the North Pole. The agreement calls for those nations to refrain from fishing there until there is better scientific knowledge about the marine resources and until a regulatory system is in place to protect those resources.[56][57]
Coordinates: 90°N 0°E
en/4214.html.txt
The Atlantic Ocean is the second-largest of the world's oceans, with an area of about 106,460,000 km2 (41,100,000 sq mi).[2][3] It covers approximately 20 percent of Earth's surface and about 29 percent of its water surface area. It separates the "Old World" from the "New World".
The Atlantic Ocean occupies an elongated, S-shaped basin extending longitudinally between Europe and Africa to the east, and the Americas to the west. As one component of the interconnected World Ocean, it is connected in the north to the Arctic Ocean, to the Pacific Ocean in the southwest, the Indian Ocean in the southeast, and the Southern Ocean in the south (other definitions describe the Atlantic as extending southward to Antarctica). The Equatorial Counter Current subdivides it into the North(ern) Atlantic Ocean and the South(ern) Atlantic Ocean at about 8°N.[6]
Scientific explorations of the Atlantic include the Challenger expedition, the German Meteor expedition, Columbia University's Lamont-Doherty Earth Observatory and the United States Navy Hydrographic Office.[6]
The oldest known mentions of an "Atlantic" sea come from Stesichorus around mid-sixth century BC (Sch. A. R. 1. 211):[7] Atlantikôi pelágei (Greek: Ἀτλαντικῷ πελάγει; English: 'the Atlantic sea'; etym. 'Sea of Atlantis') and in The Histories of Herodotus around 450 BC (Hdt. 1.202.4): Atlantis thalassa (Greek: Ἀτλαντὶς θάλασσα; English: 'Sea of Atlantis' or 'the Atlantis sea'[8]) where the name refers to "the sea beyond the pillars of Heracles" which is said to be part of the sea that surrounds all land.[9] In these uses, the name refers to Atlas, the Titan in Greek mythology, who supported the heavens and who later appeared as a frontispiece in Medieval maps and also lent his name to modern atlases.[10] On the other hand, to early Greek sailors and in Ancient Greek mythological literature such as the Iliad and the Odyssey, this all-encompassing ocean was instead known as Oceanus, the gigantic river that encircled the world; in contrast to the enclosed seas well known to the Greeks: the Mediterranean and the Black Sea.[11] In contrast, the term "Atlantic" originally referred specifically to the Atlas Mountains in Morocco and the sea off the Strait of Gibraltar and the North African coast.[10] The Greek word thalassa has been reused by scientists for the huge Panthalassa ocean that surrounded the supercontinent Pangaea hundreds of millions of years ago.
The term "Aethiopian Ocean", derived from Ancient Ethiopia, was applied to the Southern Atlantic as late as the mid-19th century.[12] During the Age of Discovery, the Atlantic was also known to English cartographers as the Great Western Ocean.[13]
The term The Pond is often used by British and American speakers in reference to the Atlantic Ocean, as a form of meiosis, or sarcastic understatement. The term dates to as early as 1640, first appearing in print in a pamphlet released during the reign of Charles I, and reproduced in 1869 in Nehemiah Wallington's Historical Notices of Events Occurring Chiefly in The Reign of Charles I, where "great Pond" is used in reference to the Atlantic Ocean by Francis Windebank, Charles I's Secretary of State.[14][15][16]
The International Hydrographic Organization (IHO) defined the limits of the oceans and seas in 1953,[17] but some of these definitions have been revised since then and some are not used by various authorities, institutions, and countries, see for example the CIA World Factbook. Correspondingly, the extent and number of oceans and seas varies.
The Atlantic Ocean is bounded on the west by North and South America. It connects to the Arctic Ocean through the Denmark Strait, Greenland Sea, Norwegian Sea and Barents Sea. To the east, the boundaries of the ocean proper are Europe: the Strait of Gibraltar (where it connects with the Mediterranean Sea—one of its marginal seas—and, in turn, the Black Sea, both of which also touch upon Asia) and Africa.
In the southeast, the Atlantic merges into the Indian Ocean. The 20° East meridian, running south from Cape Agulhas to Antarctica, defines its border. In the 1953 definition it extends south to Antarctica, while in later maps it is bounded at the 60° parallel by the Southern Ocean.[17]
The Atlantic has irregular coasts indented by numerous bays, gulfs and seas. These include the Baltic Sea, Black Sea, Caribbean Sea, Davis Strait, Denmark Strait, part of the Drake Passage, Gulf of Mexico, Labrador Sea, Mediterranean Sea, North Sea, Norwegian Sea, almost all of the Scotia Sea, and other tributary water bodies.[1] Including these marginal seas the coast line of the Atlantic measures 111,866 km (69,510 mi) compared to 135,663 km (84,297 mi) for the Pacific.[1][18]
Including its marginal seas, the Atlantic covers an area of 106,460,000 km2 (41,100,000 sq mi) or 23.5% of the global ocean and has a volume of 310,410,900 km3 (74,471,500 cu mi) or 23.3% of the total volume of the earth's oceans. Excluding its marginal seas, the Atlantic covers 81,760,000 km2 (31,570,000 sq mi) and has a volume of 305,811,900 km3 (73,368,200 cu mi). The North Atlantic covers 41,490,000 km2 (16,020,000 sq mi) (11.5%) and the South Atlantic 40,270,000 km2 (15,550,000 sq mi) (11.1%).[4] The average depth is 3,646 m (11,962 ft) and the maximum depth, the Milwaukee Deep in the Puerto Rico Trench, is 8,376 m (27,480 ft).[19][20]
The bathymetry of the Atlantic is dominated by a submarine mountain range called the Mid-Atlantic Ridge (MAR). It runs from 87°N or 300 km (190 mi) south of the North Pole to the subantarctic Bouvet Island at 54°S.[21]
The MAR divides the Atlantic longitudinally into two halves, in each of which a series of basins are delimited by secondary, transverse ridges. The MAR reaches above 2,000 m (6,600 ft) along most of its length, but is interrupted by larger transform faults at two places: the Romanche Trench near the Equator and the Gibbs Fracture Zone at 53°N. The MAR is a barrier for bottom water, but at these two transform faults deep water currents can pass from one side to the other.[22]
The MAR rises 2–3 km (1.2–1.9 mi) above the surrounding ocean floor, and its rift valley is the divergent boundary between the North American and Eurasian plates in the North Atlantic and the South American and African plates in the South Atlantic. The MAR produces basaltic volcanoes such as Eyjafjallajökull in Iceland, and pillow lava on the ocean floor.[23] The depth of water at the apex of the ridge is less than 2,700 m (1,500 fathoms; 8,900 ft) in most places, while the bottom of the ridge is three times as deep.[24]
The MAR is intersected by two perpendicular ridges: the Azores–Gibraltar Transform Fault, the boundary between the Nubian and Eurasian plates, intersects the MAR at the Azores Triple Junction, on either side of the Azores microplate, near 40°N.[25] A much vaguer, nameless boundary, between the North American and South American plates, intersects the MAR near or just north of the Fifteen-Twenty Fracture Zone, approximately at 16°N.[26]
In the 1870s, the Challenger expedition discovered parts of what is now known as the Mid-Atlantic Ridge, or:
An elevated ridge rising to an average height of about 1,900 fathoms [3,500 m; 11,400 ft] below the surface traverses the basins of the North and South Atlantic in a meridianal direction from Cape Farewell, probably extending at least as far south as Gough Island, following roughly the outlines of the coasts of the Old and the New Worlds.[27]
The remainder of the ridge was discovered in the 1920s by the German Meteor expedition using echo-sounding equipment.[28] The exploration of the MAR in the 1950s led to the general acceptance of seafloor spreading and plate tectonics.[21]
Most of the MAR runs under water, but where it reaches the surface it has produced volcanic islands. While nine of these have collectively been nominated a World Heritage Site for their geological value, four of them are considered of "Outstanding Universal Value" based on their cultural and natural criteria: Þingvellir, Iceland; Landscape of the Pico Island Vineyard Culture, Portugal; Gough and Inaccessible Islands, United Kingdom; and Brazilian Atlantic Islands: Fernando de Noronha and Atol das Rocas Reserves, Brazil.[21]
Continental shelves in the Atlantic are wide off Newfoundland, southern-most South America, and north-eastern Europe.
In the western Atlantic carbonate platforms dominate large areas, for example, the Blake Plateau and Bermuda Rise.
The Atlantic is surrounded by passive margins except at a few locations where active margins form deep trenches: the Puerto Rico Trench (8,376 m or 27,480 ft maximum depth) in the western Atlantic and South Sandwich Trench (8,264 m or 27,113 ft) in the South Atlantic. There are numerous submarine canyons off north-eastern North America, western Europe, and north-western Africa. Some of these canyons extend along the continental rises and farther into the abyssal plains as deep-sea channels.[22]
In 1922 a historic moment in cartography and oceanography occurred. The USS Stewart used a Navy Sonic Depth Finder to draw a continuous map across the bed of the Atlantic. This involved little guesswork because the idea of sonar is straightforward, with pulses being sent from the vessel, bouncing off the ocean floor, and returning to the vessel.[29] The deep ocean floor is thought to be fairly flat with occasional deeps, abyssal plains, trenches, seamounts, basins, plateaus, canyons, and some guyots. Various shelves along the margins of the continents constitute about 11% of the bottom topography, with few deep channels cut across the continental rise.
The mean depth between 60°N and 60°S is 3,730 m (12,240 ft), or close to the average for the global ocean, with a modal depth between 4,000 and 5,000 m (13,000 and 16,000 ft).[22]
In the South Atlantic the Walvis Ridge and Rio Grande Rise form barriers to ocean currents.
The Laurentian Abyss is found off the eastern coast of Canada.
Surface water temperatures, which vary with latitude, current systems, and season and reflect the latitudinal distribution of solar energy, range from below −2 °C (28 °F) to over 30 °C (86 °F). Maximum temperatures occur north of the equator, and minimum values are found in the polar regions. In the middle latitudes, the area of maximum temperature variations, values may vary by 7–8 °C (13–14 °F).[6]
From October to June the surface is usually covered with sea ice in the Labrador Sea, Denmark Strait, and Baltic Sea.[6]
The Coriolis effect circulates North Atlantic water in a clockwise direction, whereas South Atlantic water circulates counter-clockwise. Tides in the Atlantic Ocean are semi-diurnal; that is, two high tides occur during every 24 lunar hours. In latitudes above 40° North some east-west oscillation, known as the North Atlantic oscillation, occurs.[6]
On average, the Atlantic is the saltiest major ocean; surface water salinity in the open ocean ranges from 33 to 37 parts per thousand (3.3–3.7%) by mass and varies with latitude and season. Evaporation, precipitation, river inflow and sea ice melting influence surface salinity values. Although the lowest salinity values are just north of the equator (because of heavy tropical rainfall), in general, the lowest values are in the high latitudes and along coasts where large rivers enter. Maximum salinity values occur at about 25° north and south, in subtropical regions with low rainfall and high evaporation.[6]
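Since salinity is quoted by mass, parts per thousand translate directly into grams of dissolved salt per kilogram of seawater. A trivial sketch of the conversion (my own arithmetic, not from the article):

```python
def ppt_to_percent(ppt):
    # parts per thousand -> percent by mass
    return ppt / 10.0

def salt_grams_per_kg(ppt):
    # ppt by mass is numerically grams of salt per kg of seawater
    return float(ppt)

# The open-ocean range cited above: 33-37 ppt, i.e. 3.3-3.7% by mass
low, high = ppt_to_percent(33), ppt_to_percent(37)
```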
The high surface salinity in the Atlantic, on which the Atlantic thermohaline circulation is dependent, is maintained by two processes: the Agulhas Leakage/Rings, which brings salty Indian Ocean waters into the South Atlantic, and the "Atmospheric Bridge", which evaporates subtropical Atlantic waters and exports it to the Pacific.[30]
The Atlantic Ocean consists of four major, upper water masses with distinct temperature and salinity. The Atlantic Subarctic Upper Water in the northern-most North Atlantic is the source for Subarctic Intermediate Water and North Atlantic Intermediate Water. North Atlantic Central Water can be divided into the Eastern and Western North Atlantic central Water since the western part is strongly affected by the Gulf Stream and therefore the upper layer is closer to underlying fresher subpolar intermediate water. The eastern water is saltier because of its proximity to Mediterranean Water. North Atlantic Central Water flows into South Atlantic Central Water at 15°N.[32]
There are five intermediate waters: four low-salinity waters formed at subpolar latitudes and one high-salinity water formed through evaporation. Arctic Intermediate Water flows from the north to become the source for North Atlantic Deep Water south of the Greenland-Scotland sill. These intermediate waters have different salinity in the western and eastern basins. The wide range of salinities in the North Atlantic is caused by the asymmetry of the northern subtropical gyre and the large number of contributions from a wide range of sources: Labrador Sea, Norwegian-Greenland Sea, Mediterranean, and South Atlantic Intermediate Water.[32]
The North Atlantic Deep Water (NADW) is a complex of four water masses, two that form by deep convection in the open ocean — Classical and Upper Labrador Sea Water — and two that form from the inflow of dense water across the Greenland-Iceland-Scotland sill — Denmark Strait and Iceland-Scotland Overflow Water. Along its path across Earth the composition of the NADW is affected by other water masses, especially Antarctic Bottom Water and Mediterranean Overflow Water.[33]
The NADW is fed by a flow of warm shallow water into the northern North Atlantic which is responsible for the anomalous warm climate in Europe. Changes in the formation of NADW have been linked to global climate changes in the past. Since man-made substances were introduced into the environment, the path of the NADW can be traced throughout its course by measuring tritium and radiocarbon from nuclear weapon tests in the 1960s and CFCs.[34]
The clockwise warm-water North Atlantic Gyre occupies the northern Atlantic, and the counter-clockwise warm-water South Atlantic Gyre appears in the southern Atlantic.[6]
In the North Atlantic, surface circulation is dominated by three inter-connected currents: the Gulf Stream which flows north-east from the North American coast at Cape Hatteras; the North Atlantic Current, a branch of the Gulf Stream which flows northward from the Grand Banks; and the Subpolar Front, an extension of the North Atlantic Current, a wide, vaguely defined region separating the subtropical gyre from the subpolar gyre. This system of currents transport warm water into the North Atlantic, without which temperatures in the North Atlantic and Europe would plunge dramatically.[35]
North of the North Atlantic Gyre, the cyclonic North Atlantic Subpolar Gyre plays a key role in climate variability. It is governed by ocean currents from marginal seas and regional topography, rather than being steered by wind, both in the deep ocean and at sea level.[36]
The subpolar gyre forms an important part of the global thermohaline circulation. Its eastern portion includes eddying branches of the North Atlantic Current which transport warm, saline waters from the subtropics to the north-eastern Atlantic. There this water is cooled during winter and forms return currents that merge along the eastern continental slope of Greenland where they form an intense (40–50 Sv) current which flows around the continental margins of the Labrador Sea. A third of this water becomes part of the deep portion of the North Atlantic Deep Water (NADW). The NADW, in its turn, feeds the meridional overturning circulation (MOC), the northward heat transport of which is threatened by anthropogenic climate change. Large variations in the subpolar gyre on a decade-century scale, associated with the North Atlantic oscillation, are especially pronounced in Labrador Sea Water, the upper layers of the MOC.[37]
The South Atlantic is dominated by the anti-cyclonic southern subtropical gyre. The South Atlantic Central Water originates in this gyre, while Antarctic Intermediate Water originates in the upper layers of the circumpolar region, near the Drake Passage and the Falkland Islands. Both these currents receive some contribution from the Indian Ocean. On the African west coast the small cyclonic Angola Gyre lies embedded in the large subtropical gyre.[38]
The southern subtropical gyre is partly masked by a wind-induced Ekman layer. The residence time of the gyre is 4.4–8.5 years. North Atlantic Deep Water flows southward below the thermocline of the subtropical gyre.[39]
The Sargasso Sea in the western North Atlantic can be defined as the area where two species of Sargassum (S. fluitans and natans) float, an area 4,000 km (2,500 mi) wide and encircled by the Gulf Stream, North Atlantic Drift, and North Equatorial Current. This population of seaweed probably originated from Tertiary ancestors on the European shores of the former Tethys Ocean and has, if so, maintained itself by vegetative growth, floating in the ocean for millions of years.[40]
Other species endemic to the Sargasso Sea include the sargassum fish, a predator with algae-like appendages which hovers motionless among the Sargassum. Fossils of similar fishes have been found in fossil bays of the former Tethys Ocean, in what is now the Carpathian region, that were similar to the Sargasso Sea. It is possible that the population in the Sargasso Sea migrated to the Atlantic as the Tethys closed at the end of the Miocene around 17 Ma.[40] The origin of the Sargasso fauna and flora remained enigmatic for centuries. The fossils found in the Carpathians in the mid-20th century, often called the "quasi-Sargasso assemblage", finally showed that this assemblage originated in the Carpathian Basin from where it migrated over Sicily to the Central Atlantic where it evolved into modern species of the Sargasso Sea.[41]
The location of the spawning ground for European eels remained unknown for decades. In the early 20th century it was discovered that the southern Sargasso Sea is the spawning ground for both the European and American eel, and that the former migrate more than 5,000 km (3,100 mi) and the latter 2,000 km (1,200 mi). Ocean currents such as the Gulf Stream transport eel larvae from the Sargasso Sea to foraging areas in North America, Europe, and Northern Africa.[42] Recent but disputed research suggests that eels possibly use Earth's magnetic field to navigate through the ocean both as larvae and as adults.[43]
Climate is influenced by the temperatures of the surface waters and water currents as well as winds. Because of the ocean's great capacity to store and release heat, maritime climates are more moderate and have less extreme seasonal variations than inland climates. Precipitation can be approximated from coastal weather data and air temperature from water temperatures.[6]
The oceans are the major source of the atmospheric moisture that is obtained through evaporation. Climatic zones vary with latitude; the warmest zones stretch across the Atlantic north of the equator. The coldest zones are in high latitudes, with the coldest regions corresponding to the areas covered by sea ice. Ocean currents influence the climate by transporting warm and cold waters to other regions. The winds that are cooled or warmed when blowing over these currents influence adjacent land areas.[6]
The Gulf Stream and its northern extension towards Europe, the North Atlantic Drift, are thought to have at least some influence on climate. For example, the Gulf Stream helps moderate winter temperatures along the coastline of southeastern North America, keeping it warmer in winter along the coast than inland areas. The Gulf Stream also keeps extreme temperatures from occurring on the Florida Peninsula. In the higher latitudes, the North Atlantic Drift warms the atmosphere over the oceans, keeping the British Isles and north-western Europe mild and cloudy, and not severely cold in winter like other locations at the same high latitude. The cold water currents contribute to heavy fog off the coast of eastern Canada (the Grand Banks of Newfoundland area) and Africa's north-western coast. In general, winds transport moisture and air over land areas.[6]
Icebergs are common from early February to the end of July across the shipping lanes near the Grand Banks of Newfoundland. The ice season is longer in the polar regions, but there is little shipping in those areas.[44]
Hurricanes are a hazard in the western parts of the North Atlantic during the summer and autumn. Due to a consistently strong wind shear and a weak Intertropical Convergence Zone, South Atlantic tropical cyclones are rare.[45]
The Atlantic Ocean is underlain mostly by dense mafic oceanic crust made up of basalt and gabbro and overlain by fine clay, silt and siliceous ooze on the abyssal plain. The continental margins and continental shelf mark lower-density but thicker felsic continental rock that is often much older than that of the seafloor. The oldest oceanic crust in the Atlantic is up to 145 million years old and is situated off the west coast of Africa and the east coast of North America, or on either side of the South Atlantic.[46]
In many places, the continental shelf and continental slope are covered in thick sedimentary layers. For instance, on the North American side of the ocean, large carbonate deposits formed in warm shallow waters such as Florida and the Bahamas, while coarse river outwash sands and silt are common in shallow shelf areas like the Georges Bank. Coarse sand, boulders, and rocks were transported into some areas, such as off the coast of Nova Scotia or the Gulf of Maine during the Pleistocene ice ages.[47]
The break-up of Pangaea began in the Central Atlantic, between North America and Northwest Africa, where rift basins opened during the Late Triassic and Early Jurassic. This period also saw the first stages of the uplift of the Atlas Mountains. The exact timing is controversial with estimates ranging from 200 to 170 Ma.[48]
The opening of the Atlantic Ocean coincided with the initial break-up of the supercontinent Pangaea, both of which were initiated by the eruption of the Central Atlantic Magmatic Province (CAMP), one of the most extensive and voluminous large igneous provinces in Earth's history associated with the Triassic–Jurassic extinction event, one of Earth's major extinction events.[49]
Tholeiitic dikes, sills, and lava flows from the CAMP eruption at 200 Ma have been found in West Africa, eastern North America, and northern South America. The extent of the volcanism has been estimated at 4.5 million km2 (1.7 million sq mi), of which 2.5 million km2 (970,000 sq mi) covered what is now northern and central Brazil.[50]
The formation of the Central American Isthmus closed the Central American Seaway at the end of the Pliocene 2.8 Ma ago. The formation of the isthmus resulted in the migration and extinction of many land-living animals, known as the Great American Interchange, but the closure of the seaway resulted in a "Great American Schism" as it affected ocean currents, salinity, and temperatures in both the Atlantic and Pacific. Marine organisms on both sides of the isthmus became isolated and either diverged or went extinct.[51]
Geologically, the Northern Atlantic is the area delimited to the south by two conjugate margins, Newfoundland and Iberia, and to the north by the Arctic Eurasian Basin. The opening of the Northern Atlantic closely followed the margins of its predecessor, the Iapetus Ocean, and spread from the Central Atlantic in six stages: Iberia–Newfoundland, Porcupine–North America, Eurasia–Greenland, Eurasia–North America. Active and inactive spreading systems in this area are marked by the interaction with the Iceland hotspot.[52]
Seafloor spreading led to the extension of the crust and the formation of troughs and sedimentary basins. The Rockall Trough opened between 105 and 84 million years ago, although spreading along the rift failed, along with a rift leading into the Bay of Biscay.[53]
Spreading began opening the Labrador Sea around 61 million years ago, continuing until 36 million years ago. Geologists distinguish two magmatic phases. One from 62 to 58 million years ago predates the separation of Greenland from northern Europe while the second from 56 to 52 million years ago happened as the separation occurred.
Iceland began to form 62 million years ago due to a particularly concentrated mantle plume. Large quantities of basalt erupted during this period are found on Baffin Island, Greenland, the Faroe Islands, and Scotland, with ash falls in Western Europe acting as a stratigraphic marker.[54] The opening of the North Atlantic caused significant uplift of continental crust along the coast. For instance, despite its 7 km-thick basalt pile, Gunnbjørn Fjeld in East Greenland is the highest point on the island, elevated enough that it exposes older Mesozoic sedimentary rocks at its base, similar to old lava fields above sedimentary rocks in the uplifted Hebrides of western Scotland.[55]
West Gondwana (South America and Africa) broke up in the Early Cretaceous to form the South Atlantic. The apparent fit between the coastlines of the two continents was noted on the first maps that included the South Atlantic and it was also the subject of the first computer-assisted plate tectonic reconstructions in 1965.[56][57] This magnificent fit, however, has since then proven problematic and later reconstructions have introduced various deformation zones along the shorelines to accommodate the northward-propagating break-up.[56] Intra-continental rifts and deformations have also been introduced to subdivide both continental plates into sub-plates.[58]
Geologically the South Atlantic can be divided into four segments: the Equatorial segment, from 10°N to the Romanche Fracture Zone (RFZ); the Central segment, from the RFZ to the Florianopolis Fracture Zone (FFZ, north of the Walvis Ridge and Rio Grande Rise); the Southern segment, from the FFZ to the Agulhas-Falkland Fracture Zone (AFFZ); and the Falkland segment, south of the AFFZ.[59]
In the southern segment the Early Cretaceous (133–130 Ma) intensive magmatism of the Paraná–Etendeka Large Igneous Province produced by the Tristan hotspot resulted in an estimated volume of 1.5–2.0 million km3 (360,000–480,000 cu mi). It covered an area of 1.2–1.6 million km2 (460,000–620,000 sq mi) in Brazil, Paraguay, and Uruguay and 80,000 km2 (31,000 sq mi) in Africa. Dyke swarms in Brazil, Angola, eastern Paraguay, and Namibia, however, suggest the LIP originally covered a much larger area and also indicate failed rifts in all these areas. Associated offshore basaltic flows reach as far south as the Falkland Islands and South Africa. Traces of magmatism in both offshore and onshore basins in the central and southern segments have been dated to 147–49 Ma with two peaks between 143–121 Ma and 90–60 Ma.[59]
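The metric-to-imperial figure pairs quoted for the Paraná–Etendeka province can be checked directly. A minimal sketch, assuming standard conversion factors (the factors themselves are not from the text):

```python
# Check the km^3 -> cu mi and km^2 -> sq mi conversions quoted above.
CU_MI_PER_KM3 = 0.239913   # standard conversion factor (assumed)
SQ_MI_PER_KM2 = 0.386102   # standard conversion factor (assumed)

volume_km3 = (1.5e6, 2.0e6)   # quoted erupted-volume estimate
area_km2 = (1.2e6, 1.6e6)     # quoted area covered in South America

for v in volume_km3:
    print(f"{v:.1e} km^3 = {v * CU_MI_PER_KM3:.2e} cu mi")
for a in area_km2:
    print(f"{a:.1e} km^2 = {a * SQ_MI_PER_KM2:.2e} sq mi")
# The results land close to the quoted 3.6e5-4.8e5 cu mi
# and 4.6e5-6.2e5 sq mi ranges.
```

Each converted value agrees with the quoted imperial figure to within rounding, which suggests the pairs were rounded independently rather than derived from each other.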
In the Falkland segment rifting began with dextral movements between the Patagonia and Colorado sub-plates between the Early Jurassic (190 Ma) and the Early Cretaceous (126.7 Ma). Around 150 Ma sea-floor spreading propagated northward into the southern segment. No later than 130 Ma rifting had reached the Walvis Ridge–Rio Grande Rise.[58]
In the central segment rifting started to break Africa in two by opening the Benue Trough around 118 Ma. Rifting in the central segment, however, coincided with the Cretaceous Normal Superchron (also known as the Cretaceous quiet period), a 40 Ma period without magnetic reversals, which makes it difficult to date sea-floor spreading in this segment.[58]
The equatorial segment is the last phase of the break-up, but, because it is located on the Equator, magnetic anomalies cannot be used for dating. Various estimates date the propagation of sea-floor spreading in this segment to the period 120–96 Ma. This final stage, nevertheless, coincided with or resulted in the end of continental extension in Africa.[58]
About 50 Ma the opening of the Drake Passage resulted from a change in the motions and separation rate of the South American and Antarctic plates. First, small ocean basins opened and a shallow gateway appeared during the Middle Eocene. Between 34 and 30 Ma a deeper seaway developed, followed by an Eocene–Oligocene climatic deterioration and the growth of the Antarctic ice sheet.[60]
An embryonic subduction margin is potentially developing west of Gibraltar. The Gibraltar Arc in the western Mediterranean is migrating westward into the Central Atlantic where it joins the converging African and Eurasian plates. Together these three tectonic forces are slowly developing into a new subduction system in the eastern Atlantic Basin. Meanwhile, the Scotia Arc and Caribbean Plate in the western Atlantic Basin are eastward-propagating subduction systems that might, together with the Gibraltar system, represent the beginning of the closure of the Atlantic Ocean and the final stage of the Atlantic Wilson cycle.[61]
Humans evolved in Africa; first by diverging from other apes around 7 mya; then developing stone tools around 2.6 mya; to finally evolve as modern humans around 100 kya. The earliest evidence for the complex behavior associated with this behavioral modernity has been found in the Greater Cape Floristic Region (GCFR) along the coast of South Africa. During the latest glacial stages, the now-submerged plains of the Agulhas Bank were exposed above sea level, extending the South African coastline farther south by hundreds of kilometers. A small population of modern humans — probably fewer than a thousand reproducing individuals — survived glacial maxima by exploring the high diversity offered by these Palaeo-Agulhas plains. The GCFR is delimited to the north by the Cape Fold Belt and the limited space south of it resulted in the development of social networks out of which complex Stone Age technologies emerged.[62] Human history thus begins on the coasts of South Africa where the Atlantic Benguela Upwelling and Indian Ocean Agulhas Current meet to produce an intertidal zone on which shellfish, fur seal, fish and sea birds provided the necessary protein sources.[63]
The African origin of this modern behaviour is evidenced by 70,000-year-old engravings from Blombos Cave, South Africa.[64]
Mitochondrial DNA (mtDNA) studies indicate that 80–60,000 years ago a major demographic expansion within Africa, derived from a single, small population, coincided with the emergence of behavioral complexity and the rapid MIS 5–4 environmental changes. This group of people not only expanded over the whole of Africa, but also started to disperse out of Africa into Asia, Europe, and Australasia around 65,000 years ago and quickly replaced the archaic humans in these regions.[65] During the Last Glacial Maximum (LGM) 20,000 years ago humans had to abandon their initial settlements along the European North Atlantic coast and retreat to the Mediterranean. Following rapid climate changes at the end of the LGM this region was repopulated by Magdalenian culture. Other hunter-gatherers followed in waves interrupted by large-scale hazards such as the Laacher See volcanic eruption, the inundation of Doggerland (now the North Sea), and the formation of the Baltic Sea.[66] The European coasts of the North Atlantic were permanently populated about 9–8.5 thousand years ago.[67]
This human dispersal left abundant traces along the coasts of the Atlantic Ocean. 50 kya-old, deeply stratified shell middens found in Ysterfontein on the western coast of South Africa are associated with the Middle Stone Age (MSA). The MSA population was small and dispersed and the rate of their reproduction and exploitation was less intense than those of later generations. While their middens resemble 12–11 kya-old Late Stone Age (LSA) middens found on every inhabited continent, the 50–45 kya-old Enkapune Ya Muto in Kenya probably represents the oldest traces of the first modern humans to disperse out of Africa.[68]
The same development can be seen in Europe. In La Riera Cave (23–13 kya) in Asturias, Spain, only some 26,600 molluscs were deposited over 10 kya. In contrast, 8–7 kya-old shell middens in Portugal, Denmark, and Brazil generated thousands of tons of debris and artefacts. The Ertebølle middens in Denmark, for example, accumulated 2,000 m3 (71,000 cu ft) of shell deposits representing some 50 million molluscs over only a thousand years. This intensification in the exploitation of marine resources has been described as accompanied by new technologies — such as boats, harpoons, and fish-hooks — because many caves found in the Mediterranean and on the European Atlantic coast have increased quantities of marine shells in their upper levels and reduced quantities in their lower. The earliest exploitation, however, took place on the now submerged shelves, and most settlements now excavated were then located several kilometers from these shelves. The reduced quantities of shells in the lower levels can represent the few shells that were exported inland.[69]
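The scale of this intensification can be made concrete from the figures quoted above. A rough back-of-the-envelope sketch; the per-year rates are derived here, not stated in the source:

```python
# Implied average mollusc-deposition rates from the figures in the text.
la_riera_molluscs = 26_600        # deposited over roughly 10,000 years
la_riera_years = 10_000
ertebolle_molluscs = 50_000_000   # deposited over roughly 1,000 years
ertebolle_years = 1_000

la_riera_rate = la_riera_molluscs / la_riera_years      # molluscs per year
ertebolle_rate = ertebolle_molluscs / ertebolle_years   # molluscs per year

print(f"La Riera:  {la_riera_rate:.1f} molluscs/year")    # ~2.7/year
print(f"Ertebolle: {ertebolle_rate:,.0f} molluscs/year")  # 50,000/year
print(f"Ratio: ~{ertebolle_rate / la_riera_rate:,.0f}x")
```

Even as a crude average, the Ertebølle middens imply a deposition rate roughly four orders of magnitude higher than La Riera, which is the intensification the paragraph describes.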
During the LGM the Laurentide Ice Sheet covered most of northern North America while Beringia connected Siberia to Alaska. In 1973 the late American geoscientist Paul S. Martin proposed a "blitzkrieg" colonization of the Americas by which Clovis hunters migrated into North America around 13,000 years ago in a single wave through an ice-free corridor in the ice sheet and "spread southward explosively, briefly attaining a density sufficiently large to overkill much of their prey."[70] Others later proposed a "three-wave" migration over the Bering Land Bridge.[71] These hypotheses remained the long-held view regarding the settlement of the Americas, a view challenged by more recent archaeological discoveries: the oldest archaeological sites in the Americas have been found in South America; sites in north-east Siberia report virtually no human presence there during the LGM; and most Clovis artefacts have been found in eastern North America along the Atlantic coast.[72] Furthermore, colonisation models based on mtDNA, yDNA, and atDNA data respectively support neither the "blitzkrieg" nor the "three-wave" hypotheses but they also deliver mutually ambiguous results. Contradictory data from archaeology and genetics will most likely deliver future hypotheses that will, eventually, confirm each other.[73] A proposed route across the Pacific to South America could explain early South American finds and another hypothesis proposes a northern path, through the Canadian Arctic and down the North American Atlantic coast.[74]
Early settlements across the Atlantic have been suggested by alternative theories, ranging from purely hypothetical to mostly disputed, including the Solutrean hypothesis and some of the Pre-Columbian trans-oceanic contact theories.
The Norse settlement of the Faroe Islands and Iceland began during the 9th and 10th centuries. A settlement on Greenland was established before 1000 CE, but contact with it was lost in 1409 and it was finally abandoned during the early Little Ice Age. This setback was caused by a range of factors: an unsustainable economy resulted in erosion and denudation, while conflicts with the local Inuit resulted in the failure to adapt their Arctic technologies; a colder climate resulted in starvation, and the colony became economically marginalized as the Great Plague and Barbary pirates took their victims on Iceland in the 15th century.[75]
Iceland was initially settled 865–930 CE following a warm period when winter temperatures hovered around 2 °C (36 °F), which made farming favorable at high latitudes. This did not last, however, and temperatures quickly dropped; by 1080 CE summer temperatures had reached a maximum of 5 °C (41 °F). The Landnámabók (Book of Settlement) records disastrous famines during the first century of settlement — "men ate foxes and ravens" and "the old and helpless were killed and thrown over cliffs" — and by the early 1200s hay had to be abandoned for short-season crops such as barley.[76]
Christopher Columbus reached the Americas in 1492 under the Spanish flag.[77] Six years later Vasco da Gama reached India under the Portuguese flag, by navigating south around the Cape of Good Hope, thus proving that the Atlantic and Indian Oceans are connected. In 1500, on his voyage to India following Vasco da Gama, Pedro Alvares Cabral reached Brazil, carried by the currents of the South Atlantic Gyre. Following these explorations, Spain and Portugal quickly conquered and colonized large territories in the New World and forced the Amerindian population into slavery in order to exploit the vast quantities of silver and gold they found. Spain and Portugal monopolized this trade in order to keep other European nations out, but conflicting interests nevertheless led to a series of Spanish-Portuguese wars. A peace treaty mediated by the Pope divided the conquered territories into Spanish and Portuguese sectors while keeping other colonial powers away. England, France, and the Dutch Republic enviously watched the Spanish and Portuguese wealth grow and allied themselves with pirates such as Henry Mainwaring and Alexandre Exquemelin. They could intercept the convoys leaving the Americas because prevailing winds and currents made the transport of heavy metals slow and predictable.[77]
In the colonies of the Americas, depredation, smallpox and other diseases, and slavery quickly reduced the indigenous population of the Americas to the extent that the Atlantic slave trade had to be introduced to replace them — a trade that became the norm and an integral part of the colonization. Between the 15th century and 1888, when Brazil became the last part of the Americas to end the slave trade, an estimated ten million Africans were exported as slaves, most of them destined for agricultural labour. The slave trade was officially abolished in the British Empire and the United States in 1808, and slavery itself was abolished in the British Empire in 1838 and in the United States in 1865 after the Civil War.[78][79]
From Columbus to the Industrial Revolution, Trans-Atlantic trade, including colonialism and slavery, became crucial for Western Europe. For European countries with direct access to the Atlantic (including Britain, France, the Netherlands, Portugal, and Spain) 1500–1800 was a period of sustained growth during which these countries grew richer than those in Eastern Europe and Asia. Colonialism evolved as part of the Trans-Atlantic trade, but this trade also strengthened the position of merchant groups at the expense of monarchs. Growth was more rapid in non-absolutist countries, such as Britain and the Netherlands, and more limited in absolutist monarchies, such as Portugal, Spain, and France, where profit mostly or exclusively benefited the monarchy and its allies.[80]
Trans-Atlantic trade also resulted in increasing urbanization: in European countries facing the Atlantic, urbanization grew from 8% in 1300 and 10.1% in 1500 to 24.5% in 1850; in other European countries from 10% in 1300 and 11.4% in 1500 to 17% in 1850. Likewise, GDP doubled in Atlantic countries but rose by only 30% in the rest of Europe. By the end of the 17th century, the volume of the Trans-Atlantic trade had surpassed that of the Mediterranean trade.[80]
The Atlantic has contributed significantly to the development and economy of surrounding countries. Besides major transatlantic transportation and communication routes, the Atlantic offers abundant petroleum deposits in the sedimentary rocks of the continental shelves.[6]
The Atlantic harbors petroleum and gas fields, fish, marine mammals (seals and whales), sand and gravel aggregates, placer deposits, polymetallic nodules, and precious stones.[81]
Gold deposits lie a mile or two under water on the ocean floor; however, the deposits are encased in rock that must be mined through. Currently, there is no cost-effective way to mine or extract gold from the ocean at a profit.[82]
Various international treaties attempt to reduce pollution caused by environmental threats such as oil spills, marine debris, and the incineration of toxic wastes at sea.[6]
The shelves of the Atlantic host one of the world's richest fishing resources. The most productive areas include the Grand Banks of Newfoundland, the Scotian Shelf, Georges Bank off Cape Cod, the Bahama Banks, the waters around Iceland, the Irish Sea, the Bay of Fundy, the Dogger Bank of the North Sea, and the Falkland Banks.[6]
Fisheries have, however, undergone significant changes since the 1950s and global catches can now be divided into three groups of which only two are observed in the Atlantic: fisheries in the Eastern Central and South-West Atlantic oscillate around a globally stable value, the rest of the Atlantic is in overall decline following historical peaks. The third group, "continuously increasing trend since 1950", is only found in the Indian Ocean and Western Pacific.[83]
In the North-East Atlantic total catches decreased between the mid-1970s and the 1990s and reached 8.7 million tons in 2013. Blue whiting reached a 2.4 million tons peak in 2004 but was down to 628,000 tons in 2013. Recovery plans for cod, sole, and plaice have reduced mortality in these species. Arctic cod reached its lowest levels in the 1960s–1980s but has now recovered. Arctic saithe and haddock are considered fully fished; sand eel is overfished, as was capelin, which has now recovered to fully fished. Limited data makes the state of redfishes and deep-water species difficult to assess, but most likely they remain vulnerable to overfishing. Stocks of northern shrimp and Norwegian lobster are in good condition. In the North-East Atlantic 21% of stocks are considered overfished.[83]
In the North-West Atlantic landings have decreased from 4.2 million tons in the early 1970s to 1.9 million tons in 2013. During the 21st century some species have shown weak signs of recovery, including Greenland halibut, yellowtail flounder, Atlantic halibut, haddock, and spiny dogfish, while other stocks have shown no such signs, including cod, witch flounder, and redfish. Stocks of invertebrates, in contrast, remain at record levels of abundance. 31% of stocks are overfished in the North-West Atlantic.[83]
In 1497 John Cabot became the first Western European since the Vikings to explore mainland North America, and one of his major discoveries was the abundant resources of Atlantic cod off Newfoundland. Referred to as "Newfoundland Currency", this discovery yielded some 200 million tons of fish over five centuries. In the late 19th and early 20th centuries new fisheries started to exploit haddock, mackerel, and lobster. From the 1950s to the 1970s the introduction of European and Asian distant-water fleets in the area dramatically increased the fishing capacity and the number of exploited species. It also expanded the exploited areas from near-shore to the open sea and to great depths to include deep-water species such as redfish, Greenland halibut, witch flounder, and grenadiers. Overfishing in the area was recognised as early as the 1960s but, because this was occurring on international waters, it took until the late 1970s before any attempts at regulation were made. In the early 1990s, this finally resulted in the collapse of the Atlantic northwest cod fishery. The populations of a number of deep-sea fishes also collapsed in the process, including American plaice, redfish, and Greenland halibut, together with flounder and grenadier.[84]
In the Eastern Central Atlantic small pelagic fishes constitute about 50% of landings with sardine reaching 0.6–1.0 million tons per year. Pelagic fish stocks are considered fully fished or overfished, with sardines south of Cape Bojador the notable exception. Almost half of the stocks are fished at biologically unsustainable levels. Total catches have been fluctuating since the 1970s; reaching 3.9 million tons in 2013 or slightly less than the peak production in 2010.[83]
In the Western Central Atlantic, catches have been decreasing since 2000 and reached 1.3 million tons in 2013. The most important species in the area, Gulf menhaden, reached a million tons in the mid-1980s but only half a million tons in 2013 and is now considered fully fished. Round sardinella was an important species in the 1990s but is now considered overfished. Groupers and snappers are overfished and northern brown shrimp and American cupped oyster are considered fully fished approaching overfished. 44% of stocks are being fished at unsustainable levels.[83]
In the South-East Atlantic catches have decreased from 3.3 million tons in the early 1970s to 1.3 million tons in 2013. Horse mackerel and hake are the most important species, together representing almost half of the landings. Off South Africa and Namibia deep-water hake and shallow-water Cape hake have recovered to sustainable levels since regulations were introduced in 2006 and the states of Southern African pilchard and anchovy have improved to fully fished in 2013.[83]
In the South-West Atlantic, a peak was reached in the mid-1980s and catches now fluctuate between 1.7 and 2.6 million tons. The most important species, the Argentine shortfin squid, which reached half a million tons in 2013 or half the peak value, is considered fully fished to overfished. Another important species was the Brazilian sardinella; with a production of 100,000 tons in 2013 it is now considered overfished. Half the stocks in this area are being fished at unsustainable levels: Whitehead's round herring has not yet reached fully fished but Cunene horse mackerel is overfished. The sea snail perlemoen abalone is targeted by illegal fishing and remains overfished.[83]
Endangered marine species include the manatee, seals, sea lions, turtles, and whales. Drift net fishing can kill dolphins, albatrosses and other seabirds (petrels, auks), hastening the fish stock decline and contributing to international disputes.[85] Municipal pollution comes from the eastern United States, southern Brazil, and eastern Argentina; oil pollution in the Caribbean Sea, Gulf of Mexico, Lake Maracaibo, Mediterranean Sea, and North Sea; and industrial waste and municipal sewage pollution in the Baltic Sea, North Sea, and Mediterranean Sea.
North Atlantic hurricane activity has increased over past decades because of increased sea surface temperature (SST) at tropical latitudes, changes that can be attributed to either the natural Atlantic Multidecadal Oscillation (AMO) or to anthropogenic climate change.[86]
A 2005 report indicated that the Atlantic meridional overturning circulation (AMOC) slowed down by 30% between 1957 and 2004.[87] If the AMO were responsible for SST variability, the AMOC would have increased in strength, which is apparently not the case. Furthermore, it is clear from statistical analyses of annual tropical cyclones that these changes do not display multidecadal cyclicity.[86] Therefore, these changes in SST must be caused by human activities.[88]
The ocean mixed layer plays an important role in heat storage over seasonal and decadal time-scales, whereas deeper layers are affected over millennia and have a heat capacity about 50 times that of the mixed layer. This heat uptake provides a time-lag for climate change but it also results in thermal expansion of the oceans which contributes to sea-level rise. 21st-century global warming will probably result in an equilibrium sea-level rise five times greater than today, whilst melting of glaciers, including that of the Greenland ice-sheet, expected to have virtually no effect during the 21st century, will probably result in a sea-level rise of 3–6 m over a millennium.[89]
A USAF C-124 aircraft from Dover Air Force Base, Delaware was carrying three nuclear bombs over the Atlantic Ocean when it experienced a loss of power. For their own safety, the crew jettisoned two nuclear bombs, which were never recovered.[90]
On 7 June 2006, Florida's wildlife commission voted to take the manatee off the state's endangered species list. Some environmentalists worry that this could erode safeguards for the popular sea creature.
Marine pollution is a generic term for the entry into the ocean of potentially hazardous chemicals or particles. The biggest culprits are rivers, which carry with them many agricultural fertilizer chemicals as well as livestock and human waste. The excess of oxygen-depleting chemicals leads to hypoxia and the creation of a dead zone.[91]
Marine debris, which is also known as marine litter, describes human-created waste floating in a body of water. Oceanic debris tends to accumulate at the center of gyres and coastlines, frequently washing aground where it is known as beach litter. The North Atlantic garbage patch is estimated to be hundreds of kilometers across in size.[92]
en/4215.html.txt
ADDED
@@ -0,0 +1,190 @@
The Atlantic Ocean is the second-largest of the world's oceans, with an area of about 106,460,000 km2 (41,100,000 sq mi).[2][3] It covers approximately 20 percent of Earth's surface and about 29 percent of its water surface area. It separates the "Old World" from the "New World".
The Atlantic Ocean occupies an elongated, S-shaped basin extending longitudinally between Europe and Africa to the east, and the Americas to the west. As one component of the interconnected World Ocean, it is connected in the north to the Arctic Ocean, to the Pacific Ocean in the southwest, the Indian Ocean in the southeast, and the Southern Ocean in the south (other definitions describe the Atlantic as extending southward to Antarctica). The Equatorial Counter Current subdivides it into the North(ern) Atlantic Ocean and the South(ern) Atlantic Ocean at about 8°N.[6]
Scientific explorations of the Atlantic include the Challenger expedition, the German Meteor expedition, Columbia University's Lamont-Doherty Earth Observatory and the United States Navy Hydrographic Office.[6]
The oldest known mentions of an "Atlantic" sea come from Stesichorus around mid-sixth century BC (Sch. A. R. 1. 211):[7] Atlantikôi pelágei (Greek: Ἀτλαντικῷ πελάγει; English: 'the Atlantic sea'; etym. 'Sea of Atlantis') and in The Histories of Herodotus around 450 BC (Hdt. 1.202.4): Atlantis thalassa (Greek: Ἀτλαντὶς θάλασσα; English: 'Sea of Atlantis' or 'the Atlantis sea'[8]) where the name refers to "the sea beyond the pillars of Heracles" which is said to be part of the sea that surrounds all land.[9] In these uses, the name refers to Atlas, the Titan in Greek mythology, who supported the heavens and who later appeared as a frontispiece in Medieval maps and also lent his name to modern atlases.[10] On the other hand, to early Greek sailors and in Ancient Greek mythological literature such as the Iliad and the Odyssey, this all-encompassing ocean was instead known as Oceanus, the gigantic river that encircled the world; in contrast to the enclosed seas well known to the Greeks: the Mediterranean and the Black Sea.[11] In contrast, the term "Atlantic" originally referred specifically to the Atlas Mountains in Morocco and the sea off the Strait of Gibraltar and the North African coast.[10] The Greek word thalassa has been reused by scientists for the huge Panthalassa ocean that surrounded the supercontinent Pangaea hundreds of millions of years ago.
The term "Aethiopian Ocean", derived from Ancient Ethiopia, was applied to the Southern Atlantic as late as the mid-19th century.[12] During the Age of Discovery, the Atlantic was also known to English cartographers as the Great Western Ocean.[13]
The term The Pond is often used by British and American speakers in reference to the Atlantic Ocean, as a form of meiosis, or sarcastic understatement. The term dates to as early as 1640, first appearing in print in a pamphlet released during the reign of Charles I, and reproduced in 1869 in Nehemiah Wallington's Historical Notices of Events Occurring Chiefly in The Reign of Charles I, where "great Pond" is used in reference to the Atlantic Ocean by Francis Windebank, Charles I's Secretary of State.[14][15][16]
The International Hydrographic Organization (IHO) defined the limits of the oceans and seas in 1953,[17] but some of these definitions have been revised since then and some are not used by various authorities, institutions, and countries, see for example the CIA World Factbook. Correspondingly, the extent and number of oceans and seas varies.
The Atlantic Ocean is bounded on the west by North and South America. It connects to the Arctic Ocean through the Denmark Strait, Greenland Sea, Norwegian Sea and Barents Sea. To the east, the boundaries of the ocean proper are Europe: the Strait of Gibraltar (where it connects with the Mediterranean Sea—one of its marginal seas—and, in turn, the Black Sea, both of which also touch upon Asia) and Africa.
In the southeast, the Atlantic merges into the Indian Ocean. The 20° East meridian, running south from Cape Agulhas to Antarctica, defines its border. In the 1953 definition it extends south to Antarctica, while in later maps it is bounded at the 60° parallel by the Southern Ocean.[17]
The Atlantic has irregular coasts indented by numerous bays, gulfs and seas. These include the Baltic Sea, Black Sea, Caribbean Sea, Davis Strait, Denmark Strait, part of the Drake Passage, Gulf of Mexico, Labrador Sea, Mediterranean Sea, North Sea, Norwegian Sea, almost all of the Scotia Sea, and other tributary water bodies.[1] Including these marginal seas the coast line of the Atlantic measures 111,866 km (69,510 mi) compared to 135,663 km (84,297 mi) for the Pacific.[1][18]
Including its marginal seas, the Atlantic covers an area of 106,460,000 km2 (41,100,000 sq mi) or 23.5% of the global ocean and has a volume of 310,410,900 km3 (74,471,500 cu mi) or 23.3% of the total volume of the earth's oceans. Excluding its marginal seas, the Atlantic covers 81,760,000 km2 (31,570,000 sq mi) and has a volume of 305,811,900 km3 (73,368,200 cu mi). The North Atlantic covers 41,490,000 km2 (16,020,000 sq mi) (11.5%) and the South Atlantic 40,270,000 km2 (15,550,000 sq mi) (11.1%).[4] The average depth is 3,646 m (11,962 ft) and the maximum depth, the Milwaukee Deep in the Puerto Rico Trench, is 8,376 m (27,480 ft).[19][20]
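As a rough consistency check on these figures (a back-of-the-envelope sketch, not a sourced calculation), dividing the quoted volume by the quoted area for the Atlantic excluding its marginal seas gives a mean depth of the same order as the stated 3,646 m average:

```python
# Figures quoted above for the Atlantic excluding its marginal seas
area_km2 = 81_760_000       # surface area
volume_km3 = 305_811_900    # water volume

# Mean depth = volume / area, converted from km to m
mean_depth_m = volume_km3 / area_km2 * 1000
print(round(mean_depth_m))  # ~3740
```

The modest difference from the quoted 3,646 m average presumably reflects rounding and differing boundary conventions in the underlying sources.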
The bathymetry of the Atlantic is dominated by a submarine mountain range called the Mid-Atlantic Ridge (MAR). It runs from 87°N or 300 km (190 mi) south of the North Pole to the subantarctic Bouvet Island at 54°S.[21]
The MAR divides the Atlantic longitudinally into two halves, in each of which a series of basins are delimited by secondary, transverse ridges. The MAR reaches above 2,000 m (6,600 ft) along most of its length, but is interrupted by larger transform faults at two places: the Romanche Trench near the Equator and the Gibbs Fracture Zone at 53°N. The MAR is a barrier for bottom water, but at these two transform faults deep water currents can pass from one side to the other.[22]
The MAR rises 2–3 km (1.2–1.9 mi) above the surrounding ocean floor and its rift valley is the divergent boundary between the North American and Eurasian plates in the North Atlantic and the South American and African plates in the South Atlantic. The MAR produces basaltic volcanoes, such as Eyjafjallajökull in Iceland, and pillow lava on the ocean floor.[23] The depth of water at the apex of the ridge is less than 2,700 m (1,500 fathoms; 8,900 ft) in most places, while the bottom of the ridge is three times as deep.[24]
The MAR is intersected by two perpendicular ridges: the Azores–Gibraltar Transform Fault, the boundary between the Nubian and Eurasian plates, intersects the MAR at the Azores Triple Junction, on either side of the Azores microplate, near 40°N.[25] A much vaguer, nameless boundary, between the North American and South American plates, intersects the MAR near or just north of the Fifteen-Twenty Fracture Zone, approximately at 16°N.[26]
In the 1870s, the Challenger expedition discovered parts of what is now known as the Mid-Atlantic Ridge, or:
An elevated ridge rising to an average height of about 1,900 fathoms [3,500 m; 11,400 ft] below the surface traverses the basins of the North and South Atlantic in a meridional direction from Cape Farewell, probably as far south at least as Gough Island, following roughly the outlines of the coasts of the Old and the New Worlds.[27]
The remainder of the ridge was discovered in the 1920s by the German Meteor expedition using echo-sounding equipment.[28] The exploration of the MAR in the 1950s led to the general acceptance of seafloor spreading and plate tectonics.[21]
Most of the MAR runs under water, but where it reaches the surface it has produced volcanic islands. While nine of these have collectively been nominated a World Heritage Site for their geological value, four of them are considered of "Outstanding Universal Value" based on their cultural and natural criteria: Þingvellir, Iceland; Landscape of the Pico Island Vineyard Culture, Portugal; Gough and Inaccessible Islands, United Kingdom; and Brazilian Atlantic Islands: Fernando de Noronha and Atol das Rocas Reserves, Brazil.[21]
Continental shelves in the Atlantic are wide off Newfoundland, southern-most South America, and north-eastern Europe.
In the western Atlantic carbonate platforms dominate large areas, for example, the Blake Plateau and Bermuda Rise.
The Atlantic is surrounded by passive margins except at a few locations where active margins form deep trenches: the Puerto Rico Trench (8,376 m or 27,480 ft maximum depth) in the western Atlantic and South Sandwich Trench (8,264 m or 27,113 ft) in the South Atlantic. There are numerous submarine canyons off north-eastern North America, western Europe, and north-western Africa. Some of these canyons extend along the continental rises and farther into the abyssal plains as deep-sea channels.[22]
In 1922 a historic moment in cartography and oceanography occurred when the USS Stewart used a Navy Sonic Depth Finder to draw a continuous map across the bed of the Atlantic. This involved little guesswork because the principle of sonar is straightforward: pulses sent from the vessel bounce off the ocean floor and return to the vessel.[29] The deep ocean floor is thought to be fairly flat with occasional deeps, abyssal plains, trenches, seamounts, basins, plateaus, canyons, and some guyots. Various shelves along the margins of the continents constitute about 11% of the bottom topography, with few deep channels cut across the continental rise.
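The echo-sounding principle behind such surveys reduces to a one-line calculation: depth is half the round-trip travel time multiplied by the speed of sound in seawater. A minimal sketch, assuming a nominal sound speed of 1,500 m/s (the function name is illustrative, not from any instrument's documentation):

```python
def echo_depth(round_trip_s: float, sound_speed_m_s: float = 1500.0) -> float:
    """Estimate water depth from a sonar pulse's round-trip travel time.

    The pulse travels from the vessel to the seabed and back, so the
    one-way distance is half the total path length.
    """
    return sound_speed_m_s * round_trip_s / 2.0

# An echo returning after 5 s implies about 3,750 m of water,
# close to the Atlantic's average depth.
print(echo_depth(5.0))  # 3750.0
```

In practice the sound speed varies with temperature, salinity, and pressure, so real depth finders apply a correction profile rather than a single constant.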
The mean depth between 60°N and 60°S is 3,730 m (12,240 ft), or close to the average for the global ocean, with a modal depth between 4,000 and 5,000 m (13,000 and 16,000 ft).[22]
In the South Atlantic the Walvis Ridge and Rio Grande Rise form barriers to ocean currents.
The Laurentian Abyss is found off the eastern coast of Canada.
Surface water temperatures, which vary with latitude, current systems, and season and reflect the latitudinal distribution of solar energy, range from below −2 °C (28 °F) to over 30 °C (86 °F). Maximum temperatures occur north of the equator, and minimum values are found in the polar regions. In the middle latitudes, the area of maximum temperature variations, values may vary by 7–8 °C (13–14 °F).[6]
From October to June the surface is usually covered with sea ice in the Labrador Sea, Denmark Strait, and Baltic Sea.[6]
The Coriolis effect circulates North Atlantic water in a clockwise direction, whereas South Atlantic water circulates counter-clockwise. The tides in the Atlantic Ocean are semi-diurnal; that is, two high tides occur during every 24 lunar hours. In latitudes above 40° North some east-west oscillation, known as the North Atlantic oscillation, occurs.[6]
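The semi-diurnal pattern follows from the lunar day of roughly 24 hours 50 minutes (the "24 lunar hours" above): with two high tides per lunar day, successive high tides arrive about 12 hours 25 minutes apart. A sketch of that arithmetic:

```python
# Lunar day: time between successive lunar transits, ~24 h 50 min
lunar_day_min = 24 * 60 + 50  # 1490 minutes

# Semi-diurnal tides: two high tides per lunar day
high_tide_interval_min = lunar_day_min / 2

hours, minutes = divmod(high_tide_interval_min, 60)
print(f"high tides roughly every {int(hours)} h {int(minutes)} min")  # 12 h 25 min
```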
On average, the Atlantic is the saltiest major ocean; surface water salinity in the open ocean ranges from 33 to 37 parts per thousand (3.3–3.7%) by mass and varies with latitude and season. Evaporation, precipitation, river inflow and sea ice melting influence surface salinity values. Although the lowest salinity values are just north of the equator (because of heavy tropical rainfall), in general, the lowest values are in the high latitudes and along coasts where large rivers enter. Maximum salinity values occur at about 25° north and south, in subtropical regions with low rainfall and high evaporation.[6]
The high surface salinity in the Atlantic, on which the Atlantic thermohaline circulation is dependent, is maintained by two processes: the Agulhas Leakage/Rings, which brings salty Indian Ocean waters into the South Atlantic, and the "Atmospheric Bridge", which evaporates subtropical Atlantic waters and exports the moisture to the Pacific.[30]
The Atlantic Ocean consists of four major, upper water masses with distinct temperature and salinity. The Atlantic Subarctic Upper Water in the northern-most North Atlantic is the source for Subarctic Intermediate Water and North Atlantic Intermediate Water. North Atlantic Central Water can be divided into Eastern and Western North Atlantic Central Water, since the western part is strongly affected by the Gulf Stream and therefore its upper layer is closer to the underlying fresher subpolar intermediate water. The eastern water is saltier because of its proximity to Mediterranean Water. North Atlantic Central Water flows into South Atlantic Central Water at 15°N.[32]
There are five intermediate waters: four low-salinity waters formed at subpolar latitudes and one high-salinity water formed through evaporation. Arctic Intermediate Water flows from the north to become the source for North Atlantic Deep Water south of the Greenland-Scotland sill. These intermediate waters have different salinities in the western and eastern basins. The wide range of salinities in the North Atlantic is caused by the asymmetry of the northern subtropical gyre and the large number of contributions from a wide range of sources: Labrador Sea, Norwegian-Greenland Sea, Mediterranean, and South Atlantic Intermediate Water.[32]
The North Atlantic Deep Water (NADW) is a complex of four water masses, two that form by deep convection in the open ocean — Classical and Upper Labrador Sea Water — and two that form from the inflow of dense water across the Greenland-Iceland-Scotland sill — Denmark Strait and Iceland-Scotland Overflow Water. Along its path across Earth the composition of the NADW is affected by other water masses, especially Antarctic Bottom Water and Mediterranean Overflow Water.[33]
The NADW is fed by a flow of warm shallow water into the northern North Atlantic which is responsible for the anomalous warm climate in Europe. Changes in the formation of NADW have been linked to global climate changes in the past. Since man-made substances were introduced into the environment, the path of the NADW can be traced throughout its course by measuring tritium and radiocarbon from nuclear weapon tests in the 1960s and CFCs.[34]
The clockwise warm-water North Atlantic Gyre occupies the northern Atlantic, and the counter-clockwise warm-water South Atlantic Gyre appears in the southern Atlantic.[6]
In the North Atlantic, surface circulation is dominated by three inter-connected currents: the Gulf Stream which flows north-east from the North American coast at Cape Hatteras; the North Atlantic Current, a branch of the Gulf Stream which flows northward from the Grand Banks; and the Subpolar Front, an extension of the North Atlantic Current, a wide, vaguely defined region separating the subtropical gyre from the subpolar gyre. This system of currents transports warm water into the North Atlantic, without which temperatures in the North Atlantic and Europe would plunge dramatically.[35]
North of the North Atlantic Gyre, the cyclonic North Atlantic Subpolar Gyre plays a key role in climate variability. It is governed by ocean currents from marginal seas and regional topography, rather than being steered by wind, both in the deep ocean and at sea level.[36]
The subpolar gyre forms an important part of the global thermohaline circulation. Its eastern portion includes eddying branches of the North Atlantic Current which transport warm, saline waters from the subtropics to the north-eastern Atlantic. There this water is cooled during winter and forms return currents that merge along the eastern continental slope of Greenland where they form an intense (40–50 Sv) current which flows around the continental margins of the Labrador Sea. A third of this water becomes part of the deep portion of the North Atlantic Deep Water (NADW). The NADW, in its turn, feeds the meridional overturning circulation (MOC), the northward heat transport of which is threatened by anthropogenic climate change. Large variations in the subpolar gyre on a decade-century scale, associated with the North Atlantic oscillation, are especially pronounced in Labrador Sea Water, the upper layers of the MOC.[37]
The South Atlantic is dominated by the anti-cyclonic southern subtropical gyre. The South Atlantic Central Water originates in this gyre, while Antarctic Intermediate Water originates in the upper layers of the circumpolar region, near the Drake Passage and the Falkland Islands. Both these currents receive some contribution from the Indian Ocean. On the African east coast the small cyclonic Angola Gyre lies embedded in the large subtropical gyre.[38]
The southern subtropical gyre is partly masked by a wind-induced Ekman layer. The residence time of the gyre is 4.4–8.5 years. North Atlantic Deep Water flows southward below the thermocline of the subtropical gyre.[39]
The Sargasso Sea in the western North Atlantic can be defined as the area where two species of Sargassum (S. fluitans and S. natans) float, an area 4,000 km (2,500 mi) wide and encircled by the Gulf Stream, North Atlantic Drift, and North Equatorial Current. This population of seaweed probably originated from Tertiary ancestors on the European shores of the former Tethys Ocean and has, if so, maintained itself by vegetative growth, floating in the ocean for millions of years.[40]
Other species endemic to the Sargasso Sea include the sargassum fish, a predator with algae-like appendages which hovers motionless among the Sargassum. Fossils of similar fishes have been found in fossil bays of the former Tethys Ocean, in what is now the Carpathian region, bays that resembled the Sargasso Sea. It is possible that the population in the Sargasso Sea migrated to the Atlantic as the Tethys closed during the Miocene, around 17 Ma.[40] The origin of the Sargasso fauna and flora remained enigmatic for centuries. The fossils found in the Carpathians in the mid-20th century, often called the "quasi-Sargasso assemblage", finally showed that this assemblage originated in the Carpathian Basin from where it migrated over Sicily to the Central Atlantic where it evolved into modern species of the Sargasso Sea.[41]
The location of the spawning ground for European eels remained unknown for decades. In the early 20th century it was discovered that the southern Sargasso Sea is the spawning ground for both the European and American eel and that the former migrate more than 5,000 km (3,100 mi) and the latter 2,000 km (1,200 mi). Ocean currents such as the Gulf Stream transport eel larvae from the Sargasso Sea to foraging areas in North America, Europe, and Northern Africa.[42] Recent but disputed research suggests that eels possibly use Earth's magnetic field to navigate through the ocean both as larvae and as adults.[43]
Climate is influenced by the temperatures of the surface waters and water currents as well as winds. Because of the ocean's great capacity to store and release heat, maritime climates are more moderate and have less extreme seasonal variations than inland climates. Precipitation can be approximated from coastal weather data and air temperature from water temperatures.[6]
The oceans are the major source of the atmospheric moisture that is obtained through evaporation. Climatic zones vary with latitude; the warmest zones stretch across the Atlantic north of the equator. The coldest zones are in high latitudes, with the coldest regions corresponding to the areas covered by sea ice. Ocean currents influence the climate by transporting warm and cold waters to other regions. The winds that are cooled or warmed when blowing over these currents influence adjacent land areas.[6]
The Gulf Stream and its northern extension towards Europe, the North Atlantic Drift, is thought to have at least some influence on climate. For example, the Gulf Stream helps moderate winter temperatures along the coastline of southeastern North America, keeping it warmer in winter along the coast than inland areas. The Gulf Stream also keeps extreme temperatures from occurring on the Florida Peninsula. In the higher latitudes, the North Atlantic Drift warms the atmosphere over the oceans, keeping the British Isles and north-western Europe mild and cloudy, and not severely cold in winter like other locations at the same high latitude. The cold water currents contribute to heavy fog off the coast of eastern Canada (the Grand Banks of Newfoundland area) and Africa's north-western coast. In general, winds transport moisture and air over land areas.[6]
Icebergs are common from early February to the end of July across the shipping lanes near the Grand Banks of Newfoundland. The ice season is longer in the polar regions, but there is little shipping in those areas.[44]
Hurricanes are a hazard in the western parts of the North Atlantic during the summer and autumn. Due to a consistently strong wind shear and a weak Intertropical Convergence Zone, South Atlantic tropical cyclones are rare.[45]
The Atlantic Ocean is underlain mostly by dense mafic oceanic crust made up of basalt and gabbro and overlain by fine clay, silt and siliceous ooze on the abyssal plain. The continental margins and continental shelf mark lower-density but thicker felsic continental rock that is often much older than the seafloor. The oldest oceanic crust in the Atlantic is up to 145 million years old and situated off the west coast of Africa and east coast of North America, or on either side of the South Atlantic.[46]
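Crust age translates directly into a spreading rate. As a hedged illustration (the ~2,900 km ridge-to-margin distance is an assumed round figure, not taken from the source), 145-million-year-old crust lying that far from the Mid-Atlantic Ridge implies a half-spreading rate of about 20 mm per year:

```python
# Assumed, illustrative distance from the Mid-Atlantic Ridge axis to the
# oldest crust off the North American margin (round figure, not sourced).
distance_km = 2900
age_myr = 145  # age of the oldest Atlantic oceanic crust, as quoted above

# Half-spreading rate: distance travelled by one plate flank per year,
# expressed in mm/yr (km -> mm is *1e6; Myr -> yr is *1e6)
half_spreading_rate_mm_per_yr = (distance_km * 1e6) / (age_myr * 1e6)
print(half_spreading_rate_mm_per_yr)  # 20.0
```

Rates of this order, tens of millimetres per year or less, are characteristic of slow-spreading ridges like the MAR, though the true rate varies along the ridge and over time.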
In many places, the continental shelf and continental slope are covered in thick sedimentary layers. For instance, on the North American side of the ocean, large carbonate deposits formed in warm shallow waters such as Florida and the Bahamas, while coarse river outwash sands and silt are common in shallow shelf areas like the Georges Bank. Coarse sand, boulders, and rocks were transported into some areas, such as off the coast of Nova Scotia or the Gulf of Maine during the Pleistocene ice ages.[47]
The break-up of Pangaea began in the Central Atlantic, between North America and Northwest Africa, where rift basins opened during the Late Triassic and Early Jurassic. This period also saw the first stages of the uplift of the Atlas Mountains. The exact timing is controversial with estimates ranging from 200 to 170 Ma.[48]
The opening of the Atlantic Ocean coincided with the initial break-up of the supercontinent Pangaea, both of which were initiated by the eruption of the Central Atlantic Magmatic Province (CAMP), one of the most extensive and voluminous large igneous provinces in Earth's history associated with the Triassic–Jurassic extinction event, one of Earth's major extinction events.[49]
Tholeiitic dikes, sills, and lava flows from the CAMP eruption at 200 Ma have been found in West Africa, eastern North America, and northern South America. The extent of the volcanism has been estimated at 4.5 million km2 (1.7 million sq mi), of which 2.5 million km2 (970,000 sq mi) covered what is now northern and central Brazil.[50]
The formation of the Central American Isthmus closed the Central American Seaway at the end of the Pliocene, 2.8 Ma. The formation of the isthmus resulted in the migration and extinction of many land-living animals, known as the Great American Interchange, but the closure of the seaway resulted in a "Great American Schism" as it affected ocean currents, salinity, and temperatures in both the Atlantic and Pacific. Marine organisms on both sides of the isthmus became isolated and either diverged or went extinct.[51]
Geologically, the Northern Atlantic is the area delimited to the south by two conjugate margins, Newfoundland and Iberia, and to the north by the Arctic Eurasian Basin. The opening of the Northern Atlantic closely followed the margins of its predecessor, the Iapetus Ocean, and spread from the Central Atlantic in stages: Iberia–Newfoundland, Porcupine–North America, Eurasia–Greenland, and Eurasia–North America. Active and inactive spreading systems in this area are marked by the interaction with the Iceland hotspot.[52]
Seafloor spreading led to the extension of the crust and the formation of troughs and sedimentary basins. The Rockall Trough opened between 105 and 84 million years ago, although spreading along this rift failed, along with a rift leading into the Bay of Biscay.[53]
Spreading began opening the Labrador Sea around 61 million years ago, continuing until 36 million years ago. Geologists distinguish two magmatic phases. One from 62 to 58 million years ago predates the separation of Greenland from northern Europe while the second from 56 to 52 million years ago happened as the separation occurred.
Iceland began to form 62 million years ago due to a particularly concentrated mantle plume. Large quantities of basalt erupted during this period are found on Baffin Island, Greenland, the Faroe Islands, and Scotland, with ash falls in Western Europe acting as a stratigraphic marker.[54] The opening of the North Atlantic caused significant uplift of continental crust along the coast. For instance, despite the 7 km-thick pile of basalt, Gunnbjørn Fjeld in East Greenland is the highest point on the island, elevated enough that it exposes older Mesozoic sedimentary rocks at its base, similar to the old lava fields above sedimentary rocks in the uplifted Hebrides of western Scotland.[55]
West Gondwana (South America and Africa) broke up in the Early Cretaceous to form the South Atlantic. The apparent fit between the coastlines of the two continents was noted on the first maps that included the South Atlantic, and it was also the subject of the first computer-assisted plate tectonic reconstructions in 1965.[56][57] This fit, however, has since proven problematic, and later reconstructions have introduced various deformation zones along the shorelines to accommodate the northward-propagating break-up.[56] Intra-continental rifts and deformations have also been introduced to subdivide both continental plates into sub-plates.[58]
Geologically the South Atlantic can be divided into four segments: Equatorial segment, from 10°N to the Romanche Fracture Zone (RFZ); Central segment, from RFZ to Florianopolis Fracture Zone (FFZ, north of Walvis Ridge and Rio Grande Rise); Southern segment, from FFZ to the Agulhas-Falkland Fracture Zone (AFFZ); and Falkland segment, south of AFFZ.[59]
In the southern segment, the intensive Early Cretaceous (133–130 Ma) magmatism of the Paraná–Etendeka Large Igneous Province, produced by the Tristan hotspot, resulted in an estimated volume of 1.5 to 2.0 million km3 (360,000 to 480,000 cu mi). It covered an area of 1.2 to 1.6 million km2 (460,000 to 620,000 sq mi) in Brazil, Paraguay, and Uruguay and 80,000 km2 (31,000 sq mi) in Africa. Dyke swarms in Brazil, Angola, eastern Paraguay, and Namibia, however, suggest the LIP originally covered a much larger area and also indicate failed rifts in all these areas. Associated offshore basaltic flows reach as far south as the Falkland Islands and South Africa. Traces of magmatism in both offshore and onshore basins in the central and southern segments have been dated to 147–49 Ma with two peaks between 143–121 Ma and 90–60 Ma.[59]
In the Falkland segment rifting began with dextral movements between the Patagonia and Colorado sub-plates between the Early Jurassic (190 Ma) and the Early Cretaceous (126.7 Ma). Around 150 Ma sea-floor spreading propagated northward into the southern segment. No later than 130 Ma rifting had reached the Walvis Ridge–Rio Grande Rise.[58]
In the central segment rifting started to break Africa in two by opening the Benue Trough around 118 Ma. Rifting in the central segment, however, coincided with the Cretaceous Normal Superchron (also known as the Cretaceous quiet period), a 40 Ma period without magnetic reversals, which makes it difficult to date sea-floor spreading in this segment.[58]
The equatorial segment is the last phase of the break-up, but, because it is located on the Equator, magnetic anomalies cannot be used for dating. Various estimates date the propagation of sea-floor spreading in this segment to the period 120–96 Ma. This final stage, nevertheless, coincided with or resulted in the end of continental extension in Africa.[58]
About 50 Ma the opening of the Drake Passage resulted from a change in the motions and separation rate of the South American and Antarctic plates. First, small ocean basins opened and a shallow gateway appeared during the Middle Eocene. During 34–30 Ma a deeper seaway developed, followed by an Eocene–Oligocene climatic deterioration and the growth of the Antarctic ice sheet.[60]
An embryonic subduction margin is potentially developing west of Gibraltar. The Gibraltar Arc in the western Mediterranean is migrating westward into the Central Atlantic where it joins the converging African and Eurasian plates. Together these three tectonic forces are slowly developing into a new subduction system in the eastern Atlantic Basin. Meanwhile, the Scotia Arc and Caribbean Plate in the western Atlantic Basin are eastward-propagating subduction systems that might, together with the Gibraltar system, represent the beginning of the closure of the Atlantic Ocean and the final stage of the Atlantic Wilson cycle.[61]
Humans evolved in Africa: first diverging from other apes around 7 mya, then developing stone tools around 2.6 mya, and finally evolving into modern humans around 100 kya. The earliest evidence for the complex behavior associated with this behavioral modernity has been found in the Greater Cape Floristic Region (GCFR) along the coast of South Africa. During the latest glacial stages, the now-submerged plains of the Agulhas Bank were exposed above sea level, extending the South African coastline farther south by hundreds of kilometers. A small population of modern humans — probably fewer than a thousand reproducing individuals — survived glacial maxima by exploiting the high diversity offered by these Palaeo-Agulhas plains. The GCFR is delimited to the north by the Cape Fold Belt and the limited space south of it resulted in the development of social networks out of which complex Stone Age technologies emerged.[62] Human history thus begins on the coasts of South Africa where the Atlantic Benguela Upwelling and Indian Ocean Agulhas Current meet to produce an intertidal zone on which shellfish, fur seal, fish and sea birds provided the necessary protein sources.[63]
The African origin of this modern behaviour is evidenced by 70,000-year-old engravings from Blombos Cave, South Africa.[64]
Mitochondrial DNA (mtDNA) studies indicate that 80,000–60,000 years ago a major demographic expansion within Africa, derived from a single, small population, coincided with the emergence of behavioral complexity and the rapid MIS 5–4 environmental changes. This group of people not only expanded over the whole of Africa, but also started to disperse out of Africa into Asia, Europe, and Australasia around 65,000 years ago and quickly replaced the archaic humans in these regions.[65] During the Last Glacial Maximum (LGM) 20,000 years ago humans had to abandon their initial settlements along the European North Atlantic coast and retreat to the Mediterranean. Following rapid climate changes at the end of the LGM this region was repopulated by the Magdalenian culture. Other hunter-gatherers followed in waves interrupted by large-scale hazards such as the Laacher See volcanic eruption, the inundation of Doggerland (now the North Sea), and the formation of the Baltic Sea.[66] The European coasts of the North Atlantic were permanently populated about 9–8.5 thousand years ago.[67]
This human dispersal left abundant traces along the coasts of the Atlantic Ocean. 50 kya-old, deeply stratified shell middens found in Ysterfontein on the western coast of South Africa are associated with the Middle Stone Age (MSA). The MSA population was small and dispersed and the rate of their reproduction and exploitation was less intense than those of later generations. While their middens resemble 12–11 kya-old Late Stone Age (LSA) middens found on every inhabited continent, the 50–45 kya-old Enkapune Ya Muto in Kenya probably represents the oldest traces of the first modern humans to disperse out of Africa.[68]
The same development can be seen in Europe. In La Riera Cave (23–13 kya) in Asturias, Spain, only some 26,600 molluscs were deposited over 10,000 years. In contrast, 8–7 kya-old shell middens in Portugal, Denmark, and Brazil generated thousands of tons of debris and artefacts. The Ertebølle middens in Denmark, for example, accumulated 2,000 m3 (71,000 cu ft) of shell deposits representing some 50 million molluscs over only a thousand years. This intensification in the exploitation of marine resources has been described as accompanied by new technologies — such as boats, harpoons, and fish-hooks — because many caves found in the Mediterranean and on the European Atlantic coast have increased quantities of marine shells in their upper levels and reduced quantities in their lower. The earliest exploitation, however, took place on the now submerged shelves, and most settlements now excavated were then located several kilometers from these shelves. The reduced quantities of shells in the lower levels can represent the few shells that were exported inland.[69]
During the LGM the Laurentide Ice Sheet covered most of northern North America while Beringia connected Siberia to Alaska. In 1973 the American geoscientist Paul S. Martin proposed a "blitzkrieg" colonization of the Americas by which Clovis hunters migrated into North America around 13,000 years ago in a single wave through an ice-free corridor in the ice sheet and "spread southward explosively, briefly attaining a density sufficiently large to overkill much of their prey."[70] Others later proposed a "three-wave" migration over the Bering Land Bridge.[71] These hypotheses remained the long-held view regarding the settlement of the Americas, a view challenged by more recent archaeological discoveries: the oldest archaeological sites in the Americas have been found in South America; sites in north-east Siberia report virtually no human presence there during the LGM; and most Clovis artefacts have been found in eastern North America along the Atlantic coast.[72] Furthermore, colonisation models based on mtDNA, yDNA, and atDNA data respectively support neither the "blitzkrieg" nor the "three-wave" hypotheses but they also deliver mutually ambiguous results. Contradictory data from archaeology and genetics will most likely deliver future hypotheses that will, eventually, confirm each other.[73] A proposed route across the Pacific to South America could explain early South American finds and another hypothesis proposes a northern path, through the Canadian Arctic and down the North American Atlantic coast.[74]
Early settlements across the Atlantic have been suggested by alternative theories, ranging from purely hypothetical to mostly disputed, including the Solutrean hypothesis and some of the Pre-Columbian trans-oceanic contact theories.
The Norse settlement of the Faroe Islands and Iceland began during the 9th and 10th centuries. A settlement on Greenland was established before 1000 CE, but contact with it was lost in 1409 and it was finally abandoned during the early Little Ice Age. This setback was caused by a range of factors: an unsustainable economy resulted in erosion and denudation, while conflicts with the local Inuit coincided with a failure to adopt their Arctic technologies; a colder climate resulted in starvation, and the colony became economically marginalised as the Great Plague and Barbary pirates claimed victims in Iceland in the 15th century.[75]
Iceland was initially settled 865–930 CE following a warm period when winter temperatures hovered around 2 °C (36 °F), which made farming favorable at high latitudes. This did not last, however, and temperatures quickly dropped; by 1080 CE summer temperatures had reached a maximum of 5 °C (41 °F). The Landnámabók (Book of Settlement) records disastrous famines during the first century of settlement — "men ate foxes and ravens" and "the old and helpless were killed and thrown over cliffs" — and by the early 1200s hay had to be abandoned for short-season crops such as barley.[76]
Christopher Columbus reached the Americas in 1492 under the Spanish flag.[77] Six years later Vasco da Gama reached India under the Portuguese flag, by navigating south around the Cape of Good Hope, thus proving that the Atlantic and Indian Oceans are connected. In 1500, on his voyage to India following Vasco da Gama, Pedro Álvares Cabral reached Brazil, carried by the currents of the South Atlantic Gyre. Following these explorations, Spain and Portugal quickly conquered and colonized large territories in the New World and forced the Amerindian population into slavery in order to exploit the vast quantities of silver and gold they found. Spain and Portugal monopolized this trade in order to keep other European nations out, but conflicting interests nevertheless led to a series of Spanish-Portuguese wars. A peace treaty mediated by the Pope divided the conquered territories into Spanish and Portuguese sectors while keeping other colonial powers away. England, France, and the Dutch Republic enviously watched the Spanish and Portuguese wealth grow and allied themselves with pirates such as Henry Mainwaring and Alexandre Exquemelin. They could prey on the convoys leaving the Americas because prevailing winds and currents made the transport of heavy metals slow and predictable.[77]
In the colonies of the Americas, depredation, smallpox and other diseases, and slavery quickly reduced the indigenous population of the Americas to the extent that the Atlantic slave trade had to be introduced to replace them — a trade that became the norm and an integral part of the colonization. Between the 15th century and 1888, when Brazil became the last part of the Americas to end the slave trade, an estimated ten million Africans were exported as slaves, most of them destined for agricultural labour. The slave trade was officially abolished in the British Empire and the United States in 1808, and slavery itself was abolished in the British Empire in 1838 and in the United States in 1865 after the Civil War.[78][79]
From Columbus to the Industrial Revolution, Trans-Atlantic trade, including colonialism and slavery, became crucial for Western Europe. For European countries with direct access to the Atlantic (including Britain, France, the Netherlands, Portugal, and Spain) 1500–1800 was a period of sustained growth during which these countries grew richer than those in Eastern Europe and Asia. Colonialism evolved as part of the Trans-Atlantic trade, but this trade also strengthened the position of merchant groups at the expense of monarchs. Growth was more rapid in non-absolutist countries, such as Britain and the Netherlands, and more limited in absolutist monarchies, such as Portugal, Spain, and France, where profit mostly or exclusively benefited the monarchy and its allies.[80]
Trans-Atlantic trade also resulted in increasing urbanization: in European countries facing the Atlantic, urbanization grew from 8% in 1300, 10.1% in 1500, to 24.5% in 1850; in other European countries from 10% in 1300, 11.4% in 1500, to 17% in 1850. Likewise, GDP doubled in Atlantic countries but rose by only 30% in the rest of Europe. By the end of the 17th century, the volume of the Trans-Atlantic trade had surpassed that of the Mediterranean trade.[80]
The Atlantic has contributed significantly to the development and economy of surrounding countries. Besides major transatlantic transportation and communication routes, the Atlantic offers abundant petroleum deposits in the sedimentary rocks of the continental shelves.[6]
The Atlantic harbors petroleum and gas fields, fish, marine mammals (seals and whales), sand and gravel aggregates, placer deposits, polymetallic nodules, and precious stones.[81]
Gold deposits lie a mile or two under water on the ocean floor; however, the deposits are also encased in rock that must be mined through. Currently, there is no cost-effective way to mine or extract gold from the ocean at a profit.[82]
Various international treaties attempt to reduce pollution caused by environmental threats such as oil spills, marine debris, and the incineration of toxic wastes at sea.[6]
The shelves of the Atlantic host one of the world's richest fishing resources. The most productive areas include the Grand Banks of Newfoundland, the Scotian Shelf, Georges Bank off Cape Cod, the Bahama Banks, the waters around Iceland, the Irish Sea, the Bay of Fundy, the Dogger Bank of the North Sea, and the Falkland Banks.[6]
Fisheries have, however, undergone significant changes since the 1950s and global catches can now be divided into three groups of which only two are observed in the Atlantic: fisheries in the Eastern Central and South-West Atlantic oscillate around a globally stable value, the rest of the Atlantic is in overall decline following historical peaks. The third group, "continuously increasing trend since 1950", is only found in the Indian Ocean and Western Pacific.[83]
In the North-East Atlantic total catches decreased between the mid-1970s and the 1990s and reached 8.7 million tons in 2013. Blue whiting reached a 2.4 million tons peak in 2004 but was down to 628,000 tons in 2013. Recovery plans for cod, sole, and plaice have reduced mortality in these species. Arctic cod reached its lowest levels in the 1960s–1980s but has now recovered. Arctic saithe and haddock are considered fully fished; sand eel is overfished, as was capelin, which has now recovered to fully fished. Limited data makes the state of redfishes and deep-water species difficult to assess, but most likely they remain vulnerable to overfishing. Stocks of northern shrimp and Norway lobster are in good condition. In the North-East Atlantic, 21% of stocks are considered overfished.[83]
In the North-West Atlantic landings have decreased from 4.2 million tons in the early 1970s to 1.9 million tons in 2013. During the 21st century some species have shown weak signs of recovery, including Greenland halibut, yellowtail flounder, Atlantic halibut, haddock, and spiny dogfish, while other stocks have shown no such signs, including cod, witch flounder, and redfish. Stocks of invertebrates, in contrast, remain at record levels of abundance. 31% of stocks are overfished in the North-West Atlantic.[83]
In 1497 John Cabot became the first Western European since the Vikings to explore mainland North America, and one of his major discoveries was the abundant resource of Atlantic cod off Newfoundland. Referred to as "Newfoundland Currency", this discovery yielded some 200 million tons of fish over five centuries. In the late 19th and early 20th centuries new fisheries started to exploit haddock, mackerel, and lobster. From the 1950s to the 1970s the introduction of European and Asian distant-water fleets in the area dramatically increased the fishing capacity and the number of exploited species. It also expanded the exploited areas from near-shore to the open sea and to great depths, to include deep-water species such as redfish, Greenland halibut, witch flounder, and grenadiers. Overfishing in the area was recognised as early as the 1960s but, because this was occurring in international waters, it took until the late 1970s before any attempts at regulation were made. In the early 1990s, this finally resulted in the collapse of the Atlantic northwest cod fishery. The populations of a number of deep-sea fishes also collapsed in the process, including American plaice, redfish, and Greenland halibut, together with flounder and grenadier.[84]
In the Eastern Central Atlantic small pelagic fishes constitute about 50% of landings, with sardine reaching 0.6–1.0 million tons per year. Pelagic fish stocks are considered fully fished or overfished, with sardines south of Cape Bojador the notable exception. Almost half of the stocks are fished at biologically unsustainable levels. Total catches have fluctuated since the 1970s, reaching 3.9 million tons in 2013, slightly less than the peak production in 2010.[83]
In the Western Central Atlantic, catches have been decreasing since 2000 and reached 1.3 million tons in 2013. The most important species in the area, Gulf menhaden, reached a million tons in the mid-1980s but only half a million tons in 2013 and is now considered fully fished. Round sardinella was an important species in the 1990s but is now considered overfished. Groupers and snappers are overfished, and northern brown shrimp and American cupped oyster are considered fully fished, approaching overfished. 44% of stocks are being fished at unsustainable levels.[83]
In the South-East Atlantic catches have decreased from 3.3 million tons in the early 1970s to 1.3 million tons in 2013. Horse mackerel and hake are the most important species, together representing almost half of the landings. Off South Africa and Namibia deep-water hake and shallow-water Cape hake have recovered to sustainable levels since regulations were introduced in 2006 and the states of Southern African pilchard and anchovy have improved to fully fished in 2013.[83]
In the South-West Atlantic, a peak was reached in the mid-1980s and catches now fluctuate between 1.7 and 2.6 million tons. The most important species, the Argentine shortfin squid, which reached half a million tons in 2013, or half the peak value, is considered fully fished to overfished. Another important species was the Brazilian sardinella; with a production of 100,000 tons in 2013 it is now considered overfished. Half the stocks in this area are being fished at unsustainable levels: Whitehead's round herring has not yet reached fully fished, but Cunene horse mackerel is overfished. The sea snail perlemoen abalone is targeted by illegal fishing and remains overfished.[83]
Endangered marine species include the manatee, seals, sea lions, turtles, and whales. Drift net fishing can kill dolphins, albatrosses and other seabirds (petrels, auks), hastening the fish stock decline and contributing to international disputes.[85] Municipal pollution comes from the eastern United States, southern Brazil, and eastern Argentina; oil pollution in the Caribbean Sea, Gulf of Mexico, Lake Maracaibo, Mediterranean Sea, and North Sea; and industrial waste and municipal sewage pollution in the Baltic Sea, North Sea, and Mediterranean Sea.
North Atlantic hurricane activity has increased over past decades because of increased sea surface temperature (SST) at tropical latitudes, changes that can be attributed to either the natural Atlantic Multidecadal Oscillation (AMO) or to anthropogenic climate change.[86]
A 2005 report indicated that the Atlantic meridional overturning circulation (AMOC) slowed down by 30% between 1957 and 2004.[87] If the AMO were responsible for SST variability, the AMOC would have increased in strength, which is apparently not the case. Furthermore, it is clear from statistical analyses of annual tropical cyclones that these changes do not display multidecadal cyclicity.[86] Therefore, these changes in SST must be caused by human activities.[88]
The ocean mixed layer plays an important role in heat storage over seasonal and decadal time-scales, whereas deeper layers are affected over millennia; the deep ocean has a heat capacity about 50 times that of the mixed layer. This heat uptake provides a time-lag for climate change, but it also results in thermal expansion of the oceans, which contributes to sea-level rise. 21st-century global warming will probably result in an equilibrium sea-level rise five times greater than today, whilst melting of glaciers, including that of the Greenland ice-sheet, expected to have virtually no effect during the 21st century, will probably result in a sea-level rise of 3–6 m over a millennium.[89]
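The link between heat uptake and sea-level rise can be made concrete with a back-of-envelope thermosteric estimate. This is an illustrative sketch, not a figure from the text: the expansion coefficient and column depth below are assumed typical values (the real coefficient varies with temperature, salinity, and pressure).

```python
# Back-of-envelope estimate of sea-level rise from thermal expansion alone.
# alpha_per_k ~ 2e-4 per kelvin is an assumed, typical value for seawater.

def thermosteric_rise(warming_k: float, column_depth_m: float,
                      alpha_per_k: float = 2.0e-4) -> float:
    """Sea-level rise (m) if a column of the given depth warms uniformly."""
    return alpha_per_k * warming_k * column_depth_m

# Warming the top 1,000 m of the ocean by 1 K expands the column by ~0.2 m,
# which is why heat stored at depth matters for sea level over centuries.
print(thermosteric_rise(1.0, 1000.0))
```

The same product shows why the deep ocean's 50-fold heat capacity implies a long tail: the heat already taken up keeps expanding the water column long after surface warming stops.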
A USAF C-124 aircraft from Dover Air Force Base, Delaware was carrying three nuclear bombs over the Atlantic Ocean when it experienced a loss of power. For their own safety, the crew jettisoned two nuclear bombs, which were never recovered.[90]
On 7 June 2006, Florida's wildlife commission voted to take the manatee off the state's endangered species list. Some environmentalists worry that this could erode safeguards for the popular sea creature.
Marine pollution is a generic term for the entry into the ocean of potentially hazardous chemicals or particles. The biggest culprits are rivers, which carry with them many agricultural fertilizer chemicals as well as livestock and human waste. The excess of oxygen-depleting chemicals leads to hypoxia and the creation of a dead zone.[91]
Marine debris, which is also known as marine litter, describes human-created waste floating in a body of water. Oceanic debris tends to accumulate at the center of gyres and coastlines, frequently washing aground where it is known as beach litter. The North Atlantic garbage patch is estimated to be hundreds of kilometers across in size.[92]
en/4216.html.txt
ADDED
@@ -0,0 +1,231 @@
The Southern Ocean, also known as the Antarctic Ocean[1] or the Austral Ocean,[2][note 4] comprises the southernmost waters of the World Ocean, generally taken to be south of 60° S latitude and encircling Antarctica.[5] As such, it is regarded as the second-smallest of the five principal oceanic divisions: smaller than the Pacific, Atlantic, and Indian Oceans but larger than the Arctic Ocean.[6] This oceanic zone is where cold, northward flowing waters from the Antarctic mix with warmer subantarctic waters.
By way of his voyages in the 1770s, James Cook proved that waters encompassed the southern latitudes of the globe. Since then, geographers have disagreed on the Southern Ocean's northern boundary or even existence, considering the waters as various parts of the Pacific, Atlantic, and Indian Oceans, instead. However, according to Commodore John Leech of the International Hydrographic Organization (IHO), recent oceanographic research has discovered the importance of Southern Circulation, and the term Southern Ocean has been used to define the body of water which lies south of the northern limit of that circulation.[7] This remains the current official policy of the IHO, since a 2000 revision of its definitions including the Southern Ocean as the waters south of the 60th parallel has not yet been adopted. Others regard the seasonally-fluctuating Antarctic Convergence as the natural boundary.[8]
The maximum depth of the Southern Ocean, using the definition that it lies south of the 60th parallel, was surveyed by the Five Deeps Expedition in early February 2019. The expedition's multibeam sonar team identified the deepest point at 60° 28' 46"S, 025° 32' 32"W, with a depth of 7,434 meters. The expedition leader and chief submersible pilot, Victor Vescovo, has proposed naming this deepest point in the Southern Ocean the "Factorian Deep", based on the name of the manned submersible DSV Limiting Factor, in which he successfully visited the bottom for the first time on February 3, 2019.[9]
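The position above is given in the degrees–minutes–seconds notation used throughout this article. As a small aside, a hypothetical helper (not part of the expedition's tooling) shows how such coordinates convert to the signed decimal degrees most mapping software expects:

```python
def dms_to_decimal(degrees: int, minutes: int, seconds: float,
                   hemisphere: str) -> float:
    """Convert degrees/minutes/seconds plus a hemisphere letter to signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    # By convention, South and West latitudes/longitudes are negative.
    return -value if hemisphere.upper() in ("S", "W") else value

# The Factorian Deep at 60° 28' 46" S, 025° 32' 32" W:
lat = dms_to_decimal(60, 28, 46, "S")
lon = dms_to_decimal(25, 32, 32, "W")
print(round(lat, 4), round(lon, 4))  # -60.4794 -25.5422
```

Each minute contributes 1/60 of a degree and each second 1/3600, so the deepest point lies just south of the 60th parallel that defines the ocean here.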
Borders and names for oceans and seas were internationally agreed when the International Hydrographic Bureau, the precursor to the IHO, convened the First International Conference on 24 July 1919. The IHO then published these in its Limits of Oceans and Seas, the first edition being 1928. Since the first edition, the limits of the Southern Ocean have moved progressively southwards; since 1953, it has been omitted from the official publication and left to local hydrographic offices to determine their own limits.
The IHO included the ocean and its definition as the waters south of the 60th parallel south in its 2000 revisions, but this has not been formally adopted, due to continuing impasses about some of the content, such as the naming dispute over the Sea of Japan. The 2000 IHO definition, however, was circulated in a draft edition in 2002, and is used by some within the IHO and by some other organizations such as the CIA World Factbook and Merriam-Webster.[6][10]
The Australian Government regards the Southern Ocean as lying immediately south of Australia (see § Australian standpoint).[11][12]
The National Geographic Society does not recognize the ocean,[2] depicting it in a typeface different from the other world oceans; instead, it shows the Pacific, Atlantic, and Indian Oceans extending to Antarctica on both its print and online maps.[13] Map publishers using the term Southern Ocean on their maps include Hema Maps[14] and GeoNova.[15]
"Southern Ocean" is an obsolete name for the Pacific Ocean or South Pacific, coined by Vasco Núñez de Balboa, the first European to discover it, who approached it from the north.[16] The "South Seas" is a less archaic synonym. A 1745 British Act of Parliament established a prize for discovering a Northwest Passage to "the Western and Southern Ocean of America".[17]
Authors using "Southern Ocean" to name the waters encircling the unknown southern polar regions used varying limits. James Cook's account of his second voyage implies New Caledonia borders it.[18] Peacock's 1795 Geographical Dictionary said it lay "to the southward of America and Africa";[19] John Payne in 1796 used 40 degrees as the northern limit;[20] the 1827 Edinburgh Gazetteer used 50 degrees.[21] The Family Magazine in 1835 divided the "Great Southern Ocean" into the "Southern Ocean" and the "Antarctick [sic] Ocean" along the Antarctic Circle, with the northern limit of the Southern Ocean being lines joining Cape Horn, the Cape of Good Hope, Van Diemen's Land and the south of New Zealand.[22]
The United Kingdom's South Australia Act 1834 described the waters forming the southern limit of the new province of South Australia as "the Southern Ocean". The Colony of Victoria's Legislative Council Act 1881 delimited part of the division of Bairnsdale as "along the New South Wales boundary to the Southern ocean".[23]
In the 1928 first edition of Limits of Oceans and Seas, the Southern Ocean was delineated by land-based limits: Antarctica to the south, and South America, Africa, Australia, and Broughton Island, New Zealand to the north.
The detailed land-limits used were from Cape Horn in Chile eastwards to Cape Agulhas in Africa, then further eastwards to the southern coast of mainland Australia to Cape Leeuwin, Western Australia. From Cape Leeuwin, the limit then followed eastwards along the coast of mainland Australia to Cape Otway, Victoria, then southwards across Bass Strait to Cape Wickham, King Island, along the west coast of King Island, then the remainder of the way south across Bass Strait to Cape Grim, Tasmania.
The limit then followed the west coast of Tasmania southwards to the South East Cape and then went eastwards to Broughton Island, New Zealand, before returning to Cape Horn.[24]
The northern limits of the Southern Ocean were moved southwards in the IHO's 1937 second edition of the Limits of Oceans and Seas. From this edition, much of the ocean's northern limit ceased to abut land masses.
In the second edition, the Southern Ocean then extended from Antarctica northwards to latitude 40°S between Cape Agulhas in Africa (long. 20°E) and Cape Leeuwin in Western Australia (long. 115°E), and extended to latitude 55°S between Auckland Island of New Zealand (long. 165° or 166°E) and Cape Horn in South America (long. 67°W).[25]
As is discussed in more detail below, prior to the 2002 edition the limits of oceans explicitly excluded the seas lying within each of them. The Great Australian Bight was unnamed in the 1928 edition, and delineated as shown in the figure above in the 1937 edition. It therefore encompassed former Southern Ocean waters—as designated in 1928—but was technically not inside any of the three adjacent oceans by 1937.
In the 2002 draft edition, the IHO have designated 'seas' as being subdivisions within 'oceans', so the Bight would have still been within the Southern Ocean in 1937 if the 2002 convention were in place then. To perform direct comparisons of current and former limits of oceans it is necessary to consider, or at least be aware of, how the 2002 change in IHO terminology for 'seas' can affect the comparison.
The Southern Ocean did not appear in the 1953 third edition of Limits of Oceans and Seas, a note in the publication read:
The Antarctic or Southern Ocean has been omitted from this publication as the majority of opinions received since the issue of the 2nd Edition in 1937 are to the effect that there exists no real justification for applying the term Ocean to this body of water, the northern limits of which are difficult to lay down owing to their seasonal change. The limits of the Atlantic, Pacific and Indian Oceans have therefore been extended South to the Antarctic Continent.

Hydrographic Offices who issue separate publications dealing with this area are therefore left to decide their own northern limits (Great Britain uses Latitude of 55 South.)[26]:4
Instead, in the IHO 1953 publication, the Atlantic, Indian and Pacific Oceans were extended southward; the Indian and Pacific Oceans (which had not touched prior to 1953, as per the first and second editions) now abutted at the meridian of South East Cape, and the southern limits of the Great Australian Bight and the Tasman Sea were moved northwards.[26]
The IHO readdressed the question of the Southern Ocean in a survey in 2000. Of its 68 member nations, 28 responded, and all responding members except Argentina agreed to redefine the ocean, reflecting the importance placed by oceanographers on ocean currents. The proposal for the name Southern Ocean won 18 votes, beating the alternative Antarctic Ocean. Half of the votes supported a definition of the ocean's northern limit at the 60th parallel south—with no land interruptions at this latitude—with the other 14 votes cast for other definitions, mostly the 50th parallel south, but a few for as far north as the 35th parallel south.
A draft fourth edition of Limits of Oceans and Seas was circulated to IHO member states in August 2002 (sometimes referred to as the "2000 edition" as it summarized the progress to 2000).[28] It has yet to be published due to 'areas of concern' raised by several countries relating to various naming issues around the world – primarily the Sea of Japan naming dispute – and there have been various changes: 60 seas were given new names, and even the name of the publication was changed.[29] A reservation had also been lodged by Australia regarding the Southern Ocean limits.[30] Effectively, the third edition – which did not delineate the Southern Ocean, leaving delineation to local hydrographic offices – has yet to be superseded.
Despite this, the fourth edition definition has partial de facto usage by many nations, scientists, and organisations such as the U.S. (the CIA World Factbook uses "Southern Ocean" but none of the other new sea names within the "Southern Ocean", such as "Cosmonauts Sea") and Merriam-Webster[6][10][13] – and even by some within the IHO.[31] Some nations' hydrographic offices have defined their own boundaries; the United Kingdom used the 55th parallel south, for example.[26] Other organisations favour more northerly limits for the Southern Ocean. For example, Encyclopædia Britannica describes the Southern Ocean as extending as far north as South America, and confers great significance on the Antarctic Convergence, yet its description of the Indian Ocean contradicts this, describing the Indian Ocean as extending south to Antarctica.[32][33]
Other sources, such as the National Geographic Society, show the Atlantic, Pacific and Indian Oceans as extending to Antarctica on its maps, although articles on the National Geographic web site have begun to reference the Southern Ocean.[13]
A radical shift from past IHO practices (1928–1953) was also seen in the 2002 draft edition when the IHO delineated 'seas' as being subdivisions that lay within the boundaries of 'oceans'. While the IHO are often considered the authority for such conventions, the shift brought them into line with the practices of other publications (e.g. the CIA World Factbook) which had already adopted the principle that seas are contained within oceans. This difference in practice is markedly seen for the Pacific Ocean in the adjacent figure. Thus, for example, previously the Tasman Sea between Australia and New Zealand was not regarded by the IHO as being part of the Pacific, but as of the 2002 draft edition it is.
The new delineation of seas as subdivisions of oceans has avoided the need to interrupt the northern boundary of the Southern Ocean where it is intersected by Drake Passage, which includes all of the waters from South America to the Antarctic coast, or for the Scotia Sea, which also extends below the 60th parallel south. The new delineation of seas has also meant that the long-time named seas around Antarctica, excluded from the 1953 edition (the 1953 map did not even extend that far south), are 'automatically' part of the Southern Ocean.
In Australia, cartographical authorities define the Southern Ocean as including the entire body of water between Antarctica and the south coasts of Australia and New Zealand, and up to 60°S elsewhere.[34] Coastal maps of Tasmania and South Australia label the sea areas as Southern Ocean[35] and Cape Leeuwin in Western Australia is described as the point where the Indian and Southern Oceans meet.[36]
Exploration of the Southern Ocean was inspired by a belief in the existence of a Terra Australis – a vast continent in the far south of the globe to "balance" the northern lands of Eurasia and North Africa – which had existed since the times of Ptolemy. The doubling of the Cape of Good Hope in 1487 by Bartolomeu Dias first brought explorers within touch of the Antarctic cold, and proved that there was an ocean separating Africa from any Antarctic land that might exist.[37] Ferdinand Magellan, who passed through the Strait of Magellan in 1520, assumed that the islands of Tierra del Fuego to the south were an extension of this unknown southern land. In 1564, Abraham Ortelius published his first map, Typus Orbis Terrarum, an eight-leaved wall map of the world, on which he identified the Regio Patalis with Locach as a northward extension of the Terra Australis, reaching as far as New Guinea.[38][39]
European geographers continued to connect the coast of Tierra del Fuego with the coast of New Guinea on their globes, and allowing their imaginations to run riot in the vast unknown spaces of the south Atlantic, south Indian and Pacific oceans they sketched the outlines of the Terra Australis Incognita ("Unknown Southern Land"), a vast continent stretching in parts into the tropics. The search for this great south land was a leading motive of explorers in the 16th and the early part of the 17th centuries.[37]
The Spaniard Gabriel de Castilla, who claimed to have sighted "snow-covered mountains" beyond the 64° S in 1603, is recognized as the first explorer to have discovered the continent of Antarctica, although he was ignored in his time.
In 1606, Pedro Fernández de Quirós took possession, for the king of Spain, of all of the lands he had discovered in Australia del Espiritu Santo (the New Hebrides) and those he would discover "even to the Pole".[37]
Francis Drake, like Spanish explorers before him, had speculated that there might be an open channel south of Tierra del Fuego. When Willem Schouten and Jacob Le Maire discovered the southern extremity of Tierra del Fuego and named it Cape Horn in 1615, they proved that the Tierra del Fuego archipelago was of small extent and not connected to the southern land, as previously thought. Subsequently, in 1642, Abel Tasman showed that even New Holland (Australia) was separated by sea from any continuous southern continent.[37]
The visit to South Georgia by Anthony de la Roché in 1675 was the first ever discovery of land south of the Antarctic Convergence, i.e. in the Southern Ocean/Antarctic.[40][41] Soon after the voyage, cartographers started to depict ‘Roché Island’, honouring the discoverer. James Cook was aware of la Roché's discovery when surveying and mapping the island in 1775.[42]
Edmond Halley's voyage in HMS Paramour for magnetic investigations in the South Atlantic met the pack ice in 52° S in January 1700, but that latitude (he reached 140 mi off the north coast of South Georgia) was his farthest south. A determined effort on the part of the French naval officer Jean-Baptiste Charles Bouvet de Lozier to discover the "South Land" – described by a half legendary "sieur de Gonneyville" – resulted in the discovery of Bouvet Island in 54°10′ S, and in the navigation of 48° of longitude of ice-cumbered sea nearly in 55° S in 1730.[37]
In 1771, Yves Joseph Kerguelen sailed from France with instructions to proceed south from Mauritius in search of "a very large continent". He lighted upon a land in 50° S which he called South France, and believed to be the central mass of the southern continent. He was sent out again to complete the exploration of the new land, and found it to be only an inhospitable island which he renamed the Isle of Desolation, but which was ultimately named after him.[37]
The obsession with the undiscovered continent culminated in the brain of Alexander Dalrymple, the brilliant and erratic hydrographer who was nominated by the Royal Society to command the Transit of Venus expedition to Tahiti in 1769. The command of the expedition was given by the Admiralty to Captain James Cook. Sailing in 1772 with Resolution, a vessel of 462 tons under his own command, and Adventure of 336 tons under Captain Tobias Furneaux, Cook first searched in vain for Bouvet Island, then sailed for 20 degrees of longitude to the westward in latitude 58° S, and then 30° eastward for the most part south of 60° S, a lower southern latitude than had ever been voluntarily entered before by any vessel. On 17 January 1773 the Antarctic Circle was crossed for the first time in history and the two ships reached 67° 15' S by 39° 35' E, where their course was stopped by ice.[37]
Cook then turned northward to look for French Southern and Antarctic Lands, of the discovery of which he had received news at Cape Town, but from the rough determination of his longitude by Kerguelen, Cook reached the assigned latitude 10° too far east and did not see it. He turned south again and was stopped by ice in 61° 52′ S by 95° E and continued eastward nearly on the parallel of 60° S to 147° E. On 16 March, the approaching winter drove him northward for rest to New Zealand and the tropical islands of the Pacific. In November 1773, Cook left New Zealand, having parted company with the Adventure, and reached 60° S by 177° W, whence he sailed eastward keeping as far south as the floating ice allowed. The Antarctic Circle was crossed on 20 December and Cook remained south of it for three days, being compelled after reaching 67° 31′ S to stand north again in 135° W.[37]
A long detour to 47° 50′ S served to show that there was no land connection between New Zealand and Tierra del Fuego. Turning south again, Cook crossed the Antarctic Circle for the third time at 109° 30′ W before his progress was once again blocked by ice four days later at 71° 10′ S by 106° 54′ W. This point, reached on 30 January 1774, was the farthest south attained in the 18th century. With a great detour to the east, almost to the coast of South America, the expedition regained Tahiti for refreshment. In November 1774, Cook started from New Zealand and crossed the South Pacific without sighting land between 53° and 57° S to Tierra del Fuego; then, passing Cape Horn on 29 December, he rediscovered Roché Island renaming it Isle of Georgia, and discovered the South Sandwich Islands (named Sandwich Land by him), the only ice-clad land he had seen, before crossing the South Atlantic to the Cape of Good Hope between 55° and 60°. He thereby laid open the way for future Antarctic exploration by exploding the myth of a habitable southern continent. Cook's most southerly discovery of land lay on the temperate side of the 60th parallel, and he convinced himself that if land lay farther south it was practically inaccessible and without economic value.[37]
Voyagers rounding Cape Horn frequently met with contrary winds and were driven southward into snowy skies and ice-encumbered seas; but so far as can be ascertained none of them before 1770 reached the Antarctic Circle, or knew it, if they did.
In a voyage from 1822 to 1824, James Weddell commanded the 160-ton brig Jane, accompanied by his second ship Beaufoy captained by Matthew Brisbane. Together they sailed to the South Orkneys, where sealing proved disappointing. They turned south in the hope of finding a better sealing ground. The season was unusually mild and tranquil, and on 20 February 1823 the two ships reached latitude 74°15′ S and longitude 34°16′45″ W, the southernmost position any ship had ever reached up to that time. A few icebergs were sighted but there was still no sight of land, leading Weddell to theorize that the sea continued as far as the South Pole. Another two days' sailing would have brought him to Coats Land (to the east of the Weddell Sea) but Weddell decided to turn back.[44]
The first land south of the parallel 60° south latitude was discovered by the Englishman William Smith, who sighted Livingston Island on 19 February 1819. A few months later Smith returned to explore the other islands of the South Shetlands archipelago, landed on King George Island, and claimed the new territories for Britain.
In the meantime, the Spanish Navy ship San Telmo sank in September 1819 while trying to cross Cape Horn. Parts of her wreckage were found months later by sealers on the north coast of Livingston Island (South Shetlands). It is unknown whether any survivors managed to be the first to set foot on these Antarctic islands.
The first confirmed sighting of mainland Antarctica cannot be accurately attributed to one single person. It can, however, be narrowed down to three individuals. According to various sources,[45][46][47] three men all sighted the ice shelf or the continent within days or months of each other: Fabian Gottlieb von Bellingshausen, a captain in the Russian Imperial Navy; Edward Bransfield, a captain in the Royal Navy; and Nathaniel Palmer, an American sealer out of Stonington, Connecticut. It is certain that the expedition, led by von Bellingshausen and Lazarev on the ships Vostok and Mirny, reached a point within 32 km (20 mi) of Princess Martha Coast and recorded the sight of an ice shelf at 69°21′28″S 2°14′50″W[48] that became known as the Fimbul Ice Shelf. On 30 January 1820, Bransfield sighted Trinity Peninsula, the northernmost point of the Antarctic mainland, while Palmer sighted the mainland in the area south of Trinity Peninsula in November 1820. Von Bellingshausen's expedition also discovered Peter I Island and Alexander I Island, the first islands to be discovered south of the circle.
1683 map by French cartographer Alain Manesson Mallet from his publication Description de L'Univers. Shows a sea below both the Atlantic and Pacific oceans at a time when Tierra del Fuego was believed joined to Antarctica. Sea is named Mer Magellanique after Ferdinand Magellan.
Samuel Dunn's 1794 General Map of the World or Terraqueous Globe shows a Southern Ocean (but meaning what is today named the South Atlantic) and a Southern Icy Ocean.
A New Map of Asia, from the Latest Authorities, by John Cary, Engraver, 1806, shows the Southern Ocean lying to the south of both the Indian Ocean and Australia.
Freycinet Map of 1811 – resulted from the 1800–1803 French Baudin expedition to Australia and was the first full map of Australia ever to be published. In French, the map named the ocean immediately below Australia as the Grand Océan Austral (‘Great Southern Ocean’).
1863 map of Australia shows the Southern Ocean lying immediately to the south of Australia.
1906 map by German publisher Justus Perthes showing Antarctica encompassed by an Antarktischer (Sudl. Eismeer) Ocean – the ‘Antarctic (South Arctic) Ocean’.
Map of The World in 1922 by the National Geographic Society showing the Antarctic (Southern) Ocean.
In December 1839, as part of the United States Exploring Expedition of 1838–42 conducted by the United States Navy (sometimes called "the Wilkes Expedition"), an expedition sailed from Sydney, Australia, on the sloops-of-war USS Vincennes and USS Peacock, the brig USS Porpoise, the full-rigged ship Relief, and two schooners Sea Gull and USS Flying Fish. They sailed into the Antarctic Ocean, as it was then known, and reported the discovery "of an Antarctic continent west of the Balleny Islands" on 25 January 1840. That part of Antarctica was later named "Wilkes Land", a name it maintains to this day.
Explorer James Clark Ross passed through what is now known as the Ross Sea and discovered Ross Island (both of which were named for him) in 1841. He sailed along a huge wall of ice that was later named the Ross Ice Shelf. Mount Erebus and Mount Terror are named after two ships from his expedition: HMS Erebus and HMS Terror.[49]
The Imperial Trans-Antarctic Expedition of 1914, led by Ernest Shackleton, set out to cross the continent via the pole, but their ship, Endurance, was trapped and crushed by pack ice before they even landed. The expedition members survived after an epic journey on sledges over pack ice to Elephant Island. Shackleton and five others then crossed the Southern Ocean in the open boat James Caird and trekked over South Georgia to raise the alarm at the whaling station Grytviken.
In 1946, US Navy Rear Admiral Richard E. Byrd and more than 4,700 military personnel visited the Antarctic in an expedition called Operation Highjump. Reported to the public as a scientific mission, the details were kept secret and it may actually have been a training or testing mission for the military. The expedition was, in both military and scientific planning terms, put together very quickly. The group contained an unusually large amount of military equipment, including an aircraft carrier, submarines, military support ships, assault troops and military vehicles. The expedition was planned to last for eight months but was unexpectedly terminated after only two months. With the exception of some eccentric entries in Admiral Byrd's diaries, no real explanation for the early termination has ever been officially given.
Captain Finn Ronne, Byrd's executive officer, returned to Antarctica with his own expedition in 1947–1948, with Navy support, three planes, and dogs. Ronne disproved the notion that the continent was divided in two and established that East and West Antarctica were a single continent, i.e. that the Weddell Sea and the Ross Sea are not connected.[50] The expedition explored and mapped large parts of Palmer Land and the Weddell Sea coastline, and identified the Ronne Ice Shelf, named by Ronne after his wife Edith "Jackie" Ronne.[51] Ronne covered 3,600 miles (5,790 km) by ski and dog sled – more than any other explorer in history.[52] The Ronne Antarctic Research Expedition discovered and mapped the last unknown coastline in the world and was the first Antarctic expedition ever to include women.[53]
The Antarctic Treaty was signed on 1 December 1959 and came into force on 23 June 1961. Among other provisions, this treaty limits military activity in the Antarctic to the support of scientific research.
The first person to sail single-handed to Antarctica was the New Zealander David Henry Lewis, in 1972, in the 10-metre (30 ft) steel sloop Ice Bird.
A baby, named Emilio Marcos de Palma, was born near Hope Bay on 7 January 1978, becoming the first baby born on the continent. He was also born further south than anyone in history.[54]
The MV Explorer was a cruise ship operated by the Swedish explorer Lars-Eric Lindblad. Observers point to Explorer's 1969 expeditionary cruise to Antarctica as the frontrunner for today's sea-based tourism in that region.[55][56] Explorer was the first cruise ship used specifically to sail the icy waters of the Antarctic Ocean and the first to sink there[57] when she struck an unidentified submerged object on 23 November 2007, reported to be ice, which caused a 10 by 4 inches (25 by 10 cm) gash in the hull.[58] Explorer was abandoned in the early hours of 23 November 2007 after taking on water near the South Shetland Islands in the Southern Ocean, an area which is usually stormy but was calm at the time.[59] Explorer was confirmed by the Chilean Navy to have sunk at approximately 62°24′S, 57°16′W,[60] in roughly 600 m of water.[61]
British engineer Richard Jenkins designed an unmanned surface vehicle called a "saildrone"[62] that completed the first autonomous circumnavigation of the Southern Ocean on 3 August 2019 after 196 days at sea.[63]
The first completely human-powered expedition on the Southern Ocean was accomplished on 25 December 2019 by a team of rowers comprising captain Fiann Paul (Iceland), first mate Colin O'Brady (US), Andrew Towne (US), Cameron Bellamy (South Africa), Jamie Douglas-Hamilton (UK) and John Petersen (US).[64]
The Southern Ocean, geologically the youngest of the oceans, was formed when Antarctica and South America moved apart, opening the Drake Passage, roughly 30 million years ago. The separation of the continents allowed the formation of the Antarctic Circumpolar Current.
With a northern limit at 60°S, the Southern Ocean differs from the other oceans in that its largest boundary, the northern boundary, does not abut a landmass (as it did in the first edition of Limits of Oceans and Seas). Instead, the northern limit is with the Atlantic, Indian and Pacific Oceans.
One reason for considering it as a separate ocean stems from the fact that much of the water of the Southern Ocean differs from the water in the other oceans. Water gets transported around the Southern Ocean fairly rapidly because of the Antarctic Circumpolar Current which circulates around Antarctica. Water in the Southern Ocean south of, for example, New Zealand, resembles the water in the Southern Ocean south of South America more closely than it resembles the water in the Pacific Ocean.
The Southern Ocean has typical depths of between 4,000 and 5,000 m (13,000 and 16,000 ft) over most of its extent with only limited areas of shallow water. The Southern Ocean's greatest depth of 7,236 m (23,740 ft) occurs at the southern end of the South Sandwich Trench, at 60°00'S, 024°W. The Antarctic continental shelf appears generally narrow and unusually deep, its edge lying at depths up to 800 m (2,600 ft), compared to a global mean of 133 m (436 ft).
In line with the sun's seasonal influence, the Antarctic ice pack fluctuates from an average minimum of 2.6 million square kilometres (1.0×10^6 sq mi) in March to about 18.8 million square kilometres (7.3×10^6 sq mi) in September, more than a sevenfold increase in area.
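The seasonal figures quoted above can be cross-checked with a quick calculation (a sketch; the standard conversion factor 1 km² ≈ 0.386102 sq mi is assumed):

```python
# Cross-check the Antarctic ice-pack extent figures quoted above.
KM2_TO_SQMI = 0.386102  # square kilometres to square miles

min_extent_km2 = 2.6e6   # average March minimum
max_extent_km2 = 18.8e6  # average September maximum

print(round(min_extent_km2 * KM2_TO_SQMI / 1e6, 1))  # ~1.0 million sq mi
print(round(max_extent_km2 * KM2_TO_SQMI / 1e6, 1))  # ~7.3 million sq mi
print(round(max_extent_km2 / min_extent_km2, 2))     # ~7.23, "more than sevenfold"
```

Both unit conversions and the sevenfold ratio are consistent with the figures in the text.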
Sub-divisions of oceans are geographical features such as "seas", "straits", "bays", "channels", and "gulfs". There are many sub-divisions of the Southern Ocean defined in the never-approved 2002 draft fourth edition of the IHO publication Limits of Oceans and Seas. In clockwise order these include (with sector):
A number of these, such as the 2002 Russian-proposed "Cosmonauts Sea", "Cooperation Sea", and "Somov (mid-1950s Russian polar explorer) Sea", are not included in the 1953 IHO document, which remains currently in force,[26] because their names largely originated from 1962 onward. Leading geographic authorities and atlases, including the 2014 10th edition World Atlas from the United States' National Geographic Society and the 2014 12th edition of the British Times Atlas of the World, do not use these latter three names, but Soviet and Russian-issued maps do.[65][66]
The Southern Ocean probably contains large, and possibly giant, oil and gas fields on the continental margin. Placer deposits, accumulations of valuable minerals such as gold formed by gravity separation during sedimentary processes, are also expected to exist in the Southern Ocean.[5]
Manganese nodules are expected to exist in the Southern Ocean. Manganese nodules are rock concretions on the sea bottom formed of concentric layers of iron and manganese hydroxides around a core. The core may be microscopically small and is sometimes completely transformed into manganese minerals by crystallization. Interest in the potential exploitation of polymetallic nodules generated a great deal of activity among prospective mining consortia in the 1960s and 1970s.[5]
The icebergs that form each year in the Southern Ocean hold enough fresh water to meet the needs of every person on Earth for several months. For several decades there have been proposals, none yet feasible or successful, to tow Southern Ocean icebergs to more arid northern regions (such as Australia) where they could be harvested.[67]
Icebergs can occur at any time of year throughout the ocean. Some may have drafts up to several hundred meters; smaller icebergs, iceberg fragments and sea-ice (generally 0.5 to 1 m thick) also pose problems for ships. The deep continental shelf has a floor of glacial deposits varying widely over short distances.
Sailors know latitudes from 40 to 70 degrees south as the "Roaring Forties", "Furious Fifties" and "Shrieking Sixties" due to high winds and large waves that form as winds blow around the entire globe unimpeded by any land-mass. Icebergs, especially in May to October, make the area even more dangerous. The remoteness of the region makes sources of search and rescue scarce.
The Antarctic Circumpolar Current moves perpetually eastward, chasing and joining itself. At 21,000 km (13,000 mi) in length, it is the world's longest ocean current, transporting 130 million cubic metres per second (4.6×10^9 cu ft/s) of water, 100 times the flow of all the world's rivers.
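The transport figure above can be sanity-checked against its imperial equivalent; oceanographers also commonly express such transports in sverdrups (1 Sv = 10^6 m³/s). A quick sketch, assuming the standard conversion 1 m³ ≈ 35.3147 cu ft:

```python
# Verify the Antarctic Circumpolar Current transport figures quoted above.
M3_TO_CUFT = 35.3147  # cubic metres to cubic feet

transport_m3_s = 130e6  # cubic metres per second

print(transport_m3_s / 1e6)                         # 130.0 Sv (sverdrups)
print(round(transport_m3_s * M3_TO_CUFT / 1e9, 2))  # ~4.59, matching 4.6x10^9 cu ft/s
```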
Several processes operate along the coast of Antarctica to produce, in the Southern Ocean, types of water masses not produced elsewhere in the oceans of the Southern Hemisphere. One of these is the Antarctic Bottom Water, a very cold, highly saline, dense water that forms under sea ice.
Associated with the Circumpolar Current is the Antarctic Convergence encircling Antarctica, where cold northward-flowing Antarctic waters meet the relatively warmer waters of the subantarctic. Antarctic waters predominantly sink beneath subantarctic waters, while associated zones of mixing and upwelling create a zone very high in nutrients. These nurture high levels of phytoplankton with associated copepods and Antarctic krill, and the resultant food chains support fish, whales, seals, penguins, albatrosses and a wealth of other species.[68]
The Antarctic Convergence is considered to be the best natural definition of the northern extent of the Southern Ocean.[5]
Large-scale upwelling is found in the Southern Ocean. Strong westerly (eastward) winds blow around Antarctica, driving a significant flow of water northwards. This is actually a type of coastal upwelling. Since there are no continents in a band of open latitudes between South America and the tip of the Antarctic Peninsula, some of this water is drawn up from great depths. In many numerical models and observational syntheses, the Southern Ocean upwelling represents the primary means by which deep dense water is brought to the surface. Shallower, wind-driven upwelling is also found off the west coasts of North and South America, northwest and southwest Africa, and southwest and southeast Australia, all associated with oceanic subtropical high pressure circulations.
Some models of the ocean circulation suggest that broad-scale upwelling occurs in the tropics, as pressure driven flows converge water toward the low latitudes where it is diffusively warmed from above. The required diffusion coefficients, however, appear to be larger than are observed in the real ocean. Nonetheless, some diffusive upwelling does probably occur.
The Ross Gyre and Weddell Gyre are two gyres that exist within the Southern Ocean. The gyres are located in the Ross Sea and Weddell Sea respectively, and both rotate clockwise. The gyres are formed by interactions between the Antarctic Circumpolar Current and the Antarctic Continental Shelf.
Sea ice has been noted to persist in the central area of the Ross Gyre.[69] There is some evidence that global warming has resulted in some decrease of the salinity of the waters of the Ross Gyre since the 1950s.[70]
Because the Coriolis effect acts to the left in the Southern Hemisphere, the resulting Ekman transport moves water away from the centres of these gyres, making the regions very productive through upwelling of cold, nutrient-rich water.
Sea temperatures vary from about −2 to 10 °C (28 to 50 °F). Cyclonic storms travel eastward around the continent and frequently become intense because of the temperature contrast between ice and open ocean. The ocean-area from about latitude 40 south to the Antarctic Circle has the strongest average winds found anywhere on Earth.[71] In winter the ocean freezes outward to 65 degrees south latitude in the Pacific sector and 55 degrees south latitude in the Atlantic sector, lowering surface temperatures well below 0 degrees Celsius. At some coastal points, however, persistent intense drainage winds from the interior keep the shoreline ice-free throughout the winter.
A variety of marine animals exist and rely, directly or indirectly, on the phytoplankton in the Southern Ocean. Antarctic sea life includes penguins, blue whales, orcas, colossal squids and fur seals. The emperor penguin is the only penguin that breeds during the winter in Antarctica, while the Adélie penguin breeds farther south than any other penguin. The rockhopper penguin has distinctive feathers around the eyes, giving the appearance of elaborate eyelashes. King penguins, chinstrap penguins, and gentoo penguins also breed in the Antarctic.
The Antarctic fur seal was very heavily hunted in the 18th and 19th centuries for its pelt by sealers from the United States and the United Kingdom. The Weddell seal, a "true seal", is named after Sir James Weddell, commander of British sealing expeditions in the Weddell Sea. Antarctic krill, which congregates in large schools, is the keystone species of the ecosystem of the Southern Ocean, and is an important food organism for whales, seals, leopard seals, fur seals, squid, icefish, penguins, albatrosses and many other birds.[72]
The benthic communities of the seafloor are diverse and dense, with up to 155,000 animals found in 1 square metre (10.8 sq ft). As the seafloor environment is very similar all around the Antarctic, hundreds of species can be found all the way around the mainland, which is a uniquely wide distribution for such a large community. Deep-sea gigantism is common among these animals.[73]
A census of sea life carried out during the International Polar Year and which involved some 500 researchers was released in 2010. The research is part of the global Census of Marine Life (CoML) and has disclosed some remarkable findings. More than 235 marine organisms live in both polar regions, having bridged the gap of 12,000 km (7,500 mi). Large animals such as some cetaceans and birds make the round trip annually. More surprising are small forms of life such as mudworms, sea cucumbers and free-swimming snails found in both polar oceans. Various factors may aid in their distribution: fairly uniform temperatures of the deep ocean at the poles and the equator, which differ by no more than 5 °C (9.0 °F), and the major current systems or marine conveyor belt, which transport egg and larva stages.[74] However, among smaller marine animals generally assumed to be the same in the Antarctic and the Arctic, more detailed studies of each population have often, but not always, revealed differences, showing that they are closely related cryptic species rather than a single bipolar species.[75][76][77]
The rocky shores of mainland Antarctica and its offshore islands provide nesting space for over 100 million birds every spring. These nesters include species of albatrosses, petrels, skuas, gulls and terns.[78] The insectivorous South Georgia pipit is endemic to South Georgia and some smaller surrounding islands. Freshwater ducks inhabit South Georgia and the Kerguelen Islands.[79]
The flightless penguins are all located in the Southern Hemisphere, with the greatest concentration located on and around Antarctica. Four of the 18 penguin species live and breed on the mainland and its close offshore islands. Another four species live on the subantarctic islands.[80] Emperor penguins have four overlapping layers of feathers, keeping them warm. They are the only Antarctic animal to breed during the winter.[81]
There are relatively few fish species, in few families, in the Southern Ocean. The most species-rich family is the snailfish (Liparidae), followed by the cod icefish (Nototheniidae)[82] and eelpouts (Zoarcidae). Together the snailfish, eelpouts and notothenioids (which include cod icefish and several other families) account for almost 9⁄10 of the more than 320 described fish species of the Southern Ocean (tens of undescribed species also occur in the region, especially among the snailfish).[83] Southern Ocean snailfish are generally found in deep waters, while the icefish also occur in shallower waters.[82]
Cod icefish (Nototheniidae), as well as several other families, are part of the Notothenioidei suborder, collectively sometimes referred to as icefish. The suborder contains many species with antifreeze proteins in their blood and tissue, allowing them to live in water that is around or slightly below 0 °C (32 °F).[84][85] Antifreeze proteins are also known from Southern Ocean snailfish.[86]
The crocodile icefish (family Channichthyidae), also known as white-blooded fish, are only found in the Southern Ocean. They lack hemoglobin in their blood, resulting in their blood being colourless. One Channichthyidae species, the mackerel icefish (Champsocephalus gunnari), was once the most common fish in coastal waters less than 400 metres (1,312 ft) deep, but was overfished in the 1970s and 1980s. Schools of icefish spend the day at the seafloor and the night higher in the water column eating plankton and smaller fish.[84]
There are two species from the genus Dissostichus, the Antarctic toothfish (Dissostichus mawsoni) and the Patagonian toothfish (Dissostichus eleginoides). These two species live on the seafloor 100–3,000 metres (328–9,843 ft) deep, and can grow to around 2 metres (7 ft) long weighing up to 100 kilograms (220 lb), living up to 45 years. The Antarctic toothfish lives close to the Antarctic mainland, whereas the Patagonian toothfish lives in the relatively warmer subantarctic waters. Toothfish are commercially fished, and overfishing has reduced toothfish populations.[84]
Another abundant fish group is the genus Notothenia, which like the Antarctic toothfish have antifreeze in their bodies.[84]
An unusual species of icefish is the Antarctic silverfish (Pleuragramma antarcticum), which is the only truly pelagic fish in the waters near Antarctica.[87]
Seven pinniped species inhabit Antarctica. The largest, the elephant seal (Mirounga leonina), can reach up to 4,000 kilograms (8,818 lb), while females of the smallest, the Antarctic fur seal (Arctocephalus gazella), reach only 150 kilograms (331 lb). These two species live north of the sea ice, and breed in harems on beaches. The other four species can live on the sea ice. Crabeater seals (Lobodon carcinophagus) and Weddell seals (Leptonychotes weddellii) form breeding colonies, whereas leopard seals (Hydrurga leptonyx) and Ross seals (Ommatophoca rossii) live solitary lives. Although these species hunt underwater, they breed on land or ice and spend a great deal of time there, as they have no terrestrial predators.[88]
The four species that inhabit sea ice are thought to make up 50% of the total biomass of the world's seals.[89] Crabeater seals have a population of around 15 million, making them one of the most numerous large animals on the planet.[90] The New Zealand sea lion (Phocarctos hookeri), one of the rarest and most localised pinnipeds, breeds almost exclusively on the subantarctic Auckland Islands, although historically it had a wider range.[91] Out of all permanent mammalian residents, the Weddell seals live the furthest south.[92]
There are 10 cetacean species found in the Southern Ocean; six baleen whales, and four toothed whales. The largest of these, the blue whale (Balaenoptera musculus), grows to 24 metres (79 ft) long weighing 84 tonnes. Many of these species are migratory, and travel to tropical waters during the Antarctic winter.[93]
Five species of krill, small free-swimming crustaceans, have been found in the Southern Ocean.[94] The Antarctic krill (Euphausia superba) is one of the most abundant animal species on earth, with a biomass of around 500 million tonnes. Each individual is 6 centimetres (2.4 in) long and weighs over 1 gram (0.035 oz).[95] The swarms that form can stretch for kilometres, with up to 30,000 individuals per 1 cubic metre (35 cu ft), turning the water red.[94] Swarms usually remain in deep water during the day, ascending during the night to feed on plankton. Many larger animals depend on krill for their own survival.[95] During the winter when food is scarce, adult Antarctic krill can revert to a smaller juvenile stage, using their own body as nutrition.[94]
Many benthic crustaceans have a non-seasonal breeding cycle, and some raise their young in a brood pouch. Glyptonotus antarcticus is an unusually large benthic isopod, reaching 20 centimetres (8 in) in length weighing 70 grams (2.47 oz). Amphipods are abundant in soft sediments, eating a range of items, from algae to other animals.[73] The amphipods are highly diverse with more than 600 recognized species found south of the Antarctic Convergence and there are indications that many undescribed species remain. Among these are several "giants", such as the iconic epimeriids that are up to 8 cm (3.1 in) long.[96]
Slow-moving sea spiders are common, sometimes growing as large as a human hand. They feed on the corals, sponges, and bryozoans that litter the seabed.[73]
Many aquatic molluscs are present in Antarctica. Bivalves such as Adamussium colbecki move around on the seafloor, while others such as Laternula elliptica live in burrows filtering the water above.[73] There are around 70 cephalopod species in the Southern Ocean,[97] the largest of which is the colossal squid (Mesonychoteuthis hamiltoni), which at up to 14 metres (46 ft) is among the largest invertebrates in the world.[98] Squid makes up most of the diet of some animals, such as grey-headed albatrosses and sperm whales, and the warty squid (Moroteuthis ingens) is one of the subantarctic species most preyed upon by vertebrates.[97]
Sea urchins of the genus Abatus burrow through the sediment, eating the nutrients they find in it.[73] Two species of salps are common in Antarctic waters, Salpa thompsoni and Ihlea racovitzai. Salpa thompsoni is found in ice-free areas, whereas Ihlea racovitzai is found in the high latitude areas near ice. Due to their low nutritional value, they are normally only eaten by fish, with larger animals such as birds and marine mammals only eating them when other food is scarce.[99]
Antarctic sponges are long lived, and sensitive to environmental changes due to the specificity of the symbiotic microbial communities within them. As a result, they function as indicators of environmental health.[100]
Increased solar ultraviolet radiation resulting from the Antarctic ozone hole has reduced marine primary productivity (phytoplankton) by as much as 15% and has started damaging the DNA of some fish.[101] Illegal, unreported and unregulated fishing, especially the landing of an estimated five to six times more Patagonian toothfish than the regulated fishery, likely affects the sustainability of the stock. Long-line fishing for toothfish causes a high incidence of seabird mortality.
All international agreements regarding the world's oceans apply to the Southern Ocean. In addition, it is subject to these agreements specific to the region:
Many nations prohibit the exploration for and the exploitation of mineral resources south of the fluctuating Antarctic Convergence,[104] which lies in the middle of the Antarctic Circumpolar Current and serves as the dividing line between the very cold polar surface waters to the south and the warmer waters to the north. The Antarctic Treaty covers the portion of the globe south of sixty degrees south;[105] it prohibits new claims to Antarctica.[106]
The Convention for the Conservation of Antarctic Marine Living Resources applies to the area south of 60° South latitude as well as the areas further north up to the limit of the Antarctic Convergence.[107]
Between 1 July 1998 and 30 June 1999, fisheries landed 119,898 tonnes, of which 85% consisted of krill and 14% of Patagonian toothfish. International agreements came into force in late 1999 to reduce illegal, unreported, and unregulated fishing, which in the 1998–99 season landed five to six times more Patagonian toothfish than the regulated fishery.
Major operational ports include: Rothera Station, Palmer Station, Villa Las Estrellas, Esperanza Base, Mawson Station, McMurdo Station, and offshore anchorages in Antarctica.
Few ports or harbors exist on the southern (Antarctic) coast of the Southern Ocean, since ice conditions limit use of most shores to short periods in midsummer; even then some require icebreaker escort for access. Most Antarctic ports are operated by government research stations and, except in an emergency, remain closed to commercial or private vessels; vessels in any port south of 60 degrees south are subject to inspection by Antarctic Treaty observers.
The Southern Ocean's southernmost port operates at McMurdo Station at 77°50′S 166°40′E. Winter Quarters Bay forms a small harbor at the southern tip of Ross Island, where a floating ice pier makes port operations possible in summer. Operation Deep Freeze personnel constructed the first ice pier at McMurdo in 1973.[108]
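Coordinates in this article appear in degrees–minutes–seconds form (e.g. McMurdo's 77°50′S 166°40′E). As an illustrative aside, the conversion to the signed decimal degrees used by most mapping software can be sketched as follows (the helper name is ours, not from any particular library):

```python
def dms_to_decimal(degrees, minutes=0.0, seconds=0.0, hemisphere="N"):
    """Convert degrees/minutes/seconds to signed decimal degrees.
    Southern and western hemispheres are negative by convention."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if hemisphere in ("S", "W") else value

# McMurdo Station's harbor, Winter Quarters Bay (77°50'S 166°40'E):
lat = dms_to_decimal(77, 50, hemisphere="S")   # ≈ -77.833
lon = dms_to_decimal(166, 40, hemisphere="E")  # ≈ 166.667
```

The same conversion applies to any of the positions quoted in this article, such as the Factorian Deep or the wreck site of the Explorer.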
Based on the original 1928 IHO delineation of the Southern Ocean (and the 1937 delineation if the Great Australian Bight is considered integral), Australian ports and harbors between Cape Leeuwin and Cape Otway on the Australian mainland and along the west coast of Tasmania would also be identified as ports and harbors existing in the Southern Ocean. These would include the larger ports and harbors of Albany, Thevenard, Port Lincoln, Whyalla, Port Augusta, Port Adelaide, Portland, Warrnambool, and Macquarie Harbour.
Although the organizers of several yacht races define their routes as involving the Southern Ocean, the routes do not actually enter the geographical boundaries of the Southern Ocean; they instead pass through the South Atlantic, the South Pacific and the Indian Ocean.[109][110][111]
en/4217.html.txt ADDED (+231 lines)
The Southern Ocean, also known as the Antarctic Ocean[1] or the Austral Ocean,[2][note 4] comprises the southernmost waters of the World Ocean, generally taken to be south of 60° S latitude and encircling Antarctica.[5] As such, it is regarded as the second-smallest of the five principal oceanic divisions: smaller than the Pacific, Atlantic, and Indian Oceans but larger than the Arctic Ocean.[6] This oceanic zone is where cold, northward flowing waters from the Antarctic mix with warmer subantarctic waters.
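Because the boundary in this definition is a single parallel, testing whether a position falls in the Southern Ocean reduces to a latitude comparison. A minimal sketch (the helper name is hypothetical; signed latitudes, with south negative, are assumed):

```python
def in_southern_ocean(latitude_deg: float) -> bool:
    """True if a position lies south of 60°S, the northern limit of the
    Southern Ocean under the draft 2000 IHO definition.
    Southern latitudes are negative in this signed convention."""
    return latitude_deg < -60.0

print(in_southern_ocean(-77.85))  # McMurdo Sound: True
print(in_southern_ocean(-55.98))  # Cape Horn: False (north of 60°S)
```

Definitions based on the Antarctic Convergence instead would need a seasonally varying boundary rather than a fixed parallel.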
By way of his voyages in the 1770s, James Cook proved that waters encompassed the southern latitudes of the globe. Since then, geographers have disagreed on the Southern Ocean's northern boundary or even existence, considering the waters as various parts of the Pacific, Atlantic, and Indian Oceans, instead. However, according to Commodore John Leech of the International Hydrographic Organization (IHO), recent oceanographic research has discovered the importance of Southern Circulation, and the term Southern Ocean has been used to define the body of water which lies south of the northern limit of that circulation.[7] This remains the current official policy of the IHO, since a 2000 revision of its definitions including the Southern Ocean as the waters south of the 60th parallel has not yet been adopted. Others regard the seasonally-fluctuating Antarctic Convergence as the natural boundary.[8]
The maximum depth of the Southern Ocean, using the definition that it lies south of the 60th parallel, was surveyed by the Five Deeps Expedition in early February 2019. The expedition's multibeam sonar team identified the deepest point at 60° 28' 46"S, 025° 32' 32"W, with a depth of 7,434 metres. The expedition leader and chief submersible pilot, Victor Vescovo, has proposed naming this deepest point in the Southern Ocean the "Factorian Deep", based on the name of the manned submersible DSV Limiting Factor, in which he successfully visited the bottom for the first time on February 3, 2019.[9]
Borders and names for oceans and seas were internationally agreed when the International Hydrographic Bureau, the precursor to the IHO, convened the First International Conference on 24 July 1919. The IHO then published these in its Limits of Oceans and Seas, the first edition being 1928. Since the first edition, the limits of the Southern Ocean have moved progressively southwards; since 1953, it has been omitted from the official publication and left to local hydrographic offices to determine their own limits.
The IHO included the ocean and its definition as the waters south of the 60th parallel south in its 2000 revisions, but this has not been formally adopted, due to continuing impasses about some of the content, such as the naming dispute over the Sea of Japan. The 2000 IHO definition, however, was circulated in a draft edition in 2002, and is used by some within the IHO and by some other organizations such as the CIA World Factbook and Merriam-Webster.[6][10]
The Australian Government regards the Southern Ocean as lying immediately south of Australia (see § Australian standpoint).[11][12]
The National Geographic Society does not recognize the ocean,[2] depicting it in a typeface different from the other world oceans; instead, it shows the Pacific, Atlantic, and Indian Oceans extending to Antarctica on both its print and online maps.[13] Map publishers using the term Southern Ocean on their maps include Hema Maps[14] and GeoNova.[15]
"Southern Ocean" is an obsolete name for the Pacific Ocean or South Pacific, coined by Vasco Núñez de Balboa, the first European to discover it, who approached it from the north.[16] The "South Seas" is a less archaic synonym. A 1745 British Act of Parliament established a prize for discovering a Northwest Passage to "the Western and Southern Ocean of America".[17]
Authors using "Southern Ocean" to name the waters encircling the unknown southern polar regions used varying limits. James Cook's account of his second voyage implies New Caledonia borders it.[18] Peacock's 1795 Geographical Dictionary said it lay "to the southward of America and Africa";[19] John Payne in 1796 used 40 degrees as the northern limit;[20] the 1827 Edinburgh Gazetteer used 50 degrees.[21] The Family Magazine in 1835 divided the "Great Southern Ocean" into the "Southern Ocean" and the "Antarctick [sic] Ocean" along the Antarctic Circle, with the northern limit of the Southern Ocean being lines joining Cape Horn, the Cape of Good Hope, Van Diemen's Land and the south of New Zealand.[22]
The United Kingdom's South Australia Act 1834 described the waters forming the southern limit of the new province of South Australia as "the Southern Ocean". The Colony of Victoria's Legislative Council Act 1881 delimited part of the division of Bairnsdale as "along the New South Wales boundary to the Southern ocean".[23]
In the 1928 first edition of Limits of Oceans and Seas, the Southern Ocean was delineated by land-based limits: Antarctica to the south, and South America, Africa, Australia, and Broughton Island, New Zealand to the north.
The detailed land-limits used were from Cape Horn in Chile eastwards to Cape Agulhas in Africa, then further eastwards to the southern coast of mainland Australia to Cape Leeuwin, Western Australia. From Cape Leeuwin, the limit then followed eastwards along the coast of mainland Australia to Cape Otway, Victoria, then southwards across Bass Strait to Cape Wickham, King Island, along the west coast of King Island, then the remainder of the way south across Bass Strait to Cape Grim, Tasmania.
The limit then followed the west coast of Tasmania southwards to the South East Cape and then went eastwards to Broughton Island, New Zealand, before returning to Cape Horn.[24]
The northern limits of the Southern Ocean were moved southwards in the IHO's 1937 second edition of the Limits of Oceans and Seas. From this edition, much of the ocean's northern limit ceased to abut land masses.
In the second edition, the Southern Ocean then extended from Antarctica northwards to latitude 40°S between Cape Agulhas in Africa (long. 20°E) and Cape Leeuwin in Western Australia (long. 115°E), and extended to latitude 55°S between Auckland Island of New Zealand (165 or 166°E east) and Cape Horn in South America (67°W).[25]
As is discussed in more detail below, prior to the 2002 edition the limits of oceans explicitly excluded the seas lying within each of them. The Great Australian Bight was unnamed in the 1928 edition, and delineated as shown in the figure above in the 1937 edition. It therefore encompassed former Southern Ocean waters—as designated in 1928—but was technically not inside any of the three adjacent oceans by 1937.
In the 2002 draft edition, the IHO have designated 'seas' as being subdivisions within 'oceans', so the Bight would have still been within the Southern Ocean in 1937 if the 2002 convention were in place then. To perform direct comparisons of current and former limits of oceans it is necessary to consider, or at least be aware of, how the 2002 change in IHO terminology for 'seas' can affect the comparison.
The Southern Ocean did not appear in the 1953 third edition of Limits of Oceans and Seas, a note in the publication read:
The Antarctic or Southern Ocean has been omitted from this publication as the majority of opinions received since the issue of the 2nd Edition in 1937 are to the effect that there exists no real justification for applying the term Ocean to this body of water, the northern limits of which are difficult to lay down owing to their seasonal change. The limits of the Atlantic, Pacific and Indian Oceans have therefore been extended South to the Antarctic Continent. Hydrographic Offices who issue separate publications dealing with this area are therefore left to decide their own northern limits (Great Britain uses Latitude of 55 South.)[26]:4
Instead, in the IHO 1953 publication, the Atlantic, Indian and Pacific Oceans were extended southward. The Indian and Pacific Oceans (which, per the first and second editions, had not previously touched) now abutted at the meridian of South East Cape, and the southern limits of the Great Australian Bight and the Tasman Sea were moved northwards.[26]
The IHO readdressed the question of the Southern Ocean in a survey in 2000. Of its 68 member nations, 28 responded, and all responding members except Argentina agreed to redefine the ocean, reflecting the importance placed by oceanographers on ocean currents. The proposal for the name Southern Ocean won 18 votes, beating the alternative Antarctic Ocean. Half of the votes supported a definition of the ocean's northern limit at the 60th parallel south—with no land interruptions at this latitude—with the other 14 votes cast for other definitions, mostly the 50th parallel south, but a few for as far north as the 35th parallel south.
A draft fourth edition of Limits of Oceans and Seas was circulated to IHO member states in August 2002 (sometimes referred to as the "2000 edition" as it summarized the progress to 2000).[28] It has yet to be published due to 'areas of concern' raised by several countries relating to various naming issues around the world – primarily the Sea of Japan naming dispute – and there have been various changes: 60 seas were given new names, and even the name of the publication was changed.[29] A reservation had also been lodged by Australia regarding the Southern Ocean limits.[30] Effectively, the third edition – which did not delineate the Southern Ocean, leaving delineation to local hydrographic offices – has yet to be superseded.
Despite this, the fourth edition definition has partial de facto usage by many nations, scientists and organisations – such as the U.S. (the CIA World Factbook uses "Southern Ocean" but none of the other new sea names within it, such as "Cosmonauts Sea") and Merriam-Webster[6][10][13] – and even by some within the IHO.[31] Some nations' hydrographic offices have defined their own boundaries; the United Kingdom used the 55th parallel south, for example.[26] Other organisations favour more northerly limits for the Southern Ocean. For example, Encyclopædia Britannica describes the Southern Ocean as extending as far north as South America, and confers great significance on the Antarctic Convergence, yet its description of the Indian Ocean contradicts this, describing the Indian Ocean as extending south to Antarctica.[32][33]
Other sources, such as the National Geographic Society, show the Atlantic, Pacific and Indian Oceans as extending to Antarctica on its maps, although articles on the National Geographic web site have begun to reference the Southern Ocean.[13]
A radical shift from past IHO practices (1928–1953) was also seen in the 2002 draft edition when the IHO delineated 'seas' as being subdivisions that lay within the boundaries of 'oceans'. While the IHO is often considered the authority for such conventions, the shift brought it into line with the practices of other publications (e.g. the CIA World Factbook) which had already adopted the principle that seas are contained within oceans. This difference in practice is markedly seen for the Pacific Ocean in the adjacent figure. Thus, for example, previously the Tasman Sea between Australia and New Zealand was not regarded by the IHO as being part of the Pacific, but as of the 2002 draft edition it is.
The new delineation of seas as subdivisions of oceans has avoided the need to interrupt the northern boundary of the Southern Ocean either where it is intersected by the Drake Passage, which includes all of the waters from South America to the Antarctic coast, or for the Scotia Sea, which also extends below the 60th parallel south. The new delineation has also meant that the long-time named seas around Antarctica, excluded from the 1953 edition (the 1953 map did not even extend that far south), are 'automatically' part of the Southern Ocean.
In Australia, cartographical authorities define the Southern Ocean as including the entire body of water between Antarctica and the south coasts of Australia and New Zealand, and up to 60°S elsewhere.[34] Coastal maps of Tasmania and South Australia label the sea areas as Southern Ocean[35] and Cape Leeuwin in Western Australia is described as the point where the Indian and Southern Oceans meet.[36]
Exploration of the Southern Ocean was inspired by a belief in the existence of a Terra Australis – a vast continent in the far south of the globe to "balance" the northern lands of Eurasia and North Africa – which had existed since the times of Ptolemy. The doubling of the Cape of Good Hope in 1487 by Bartolomeu Dias first brought explorers within touch of the Antarctic cold, and proved that there was an ocean separating Africa from any Antarctic land that might exist.[37] Ferdinand Magellan, who passed through the Strait of Magellan in 1520, assumed that the islands of Tierra del Fuego to the south were an extension of this unknown southern land. In 1564, Abraham Ortelius published his first map, Typus Orbis Terrarum, an eight-leaved wall map of the world, on which he identified the Regio Patalis with Locach as a northward extension of the Terra Australis, reaching as far as New Guinea.[38][39]
European geographers continued to connect the coast of Tierra del Fuego with the coast of New Guinea on their globes, and allowing their imaginations to run riot in the vast unknown spaces of the south Atlantic, south Indian and Pacific oceans they sketched the outlines of the Terra Australis Incognita ("Unknown Southern Land"), a vast continent stretching in parts into the tropics. The search for this great south land was a leading motive of explorers in the 16th and the early part of the 17th centuries.[37]
The Spaniard Gabriel de Castilla, who claimed to have sighted "snow-covered mountains" beyond 64° S in 1603, is recognized as the first explorer to discover the continent of Antarctica, although he was ignored in his time.
In 1606, Pedro Fernández de Quirós took possession, for the king of Spain, of all the lands he had discovered in Australia del Espiritu Santo (the New Hebrides) and those he would discover "even to the Pole".[37]
Francis Drake, like Spanish explorers before him, had speculated that there might be an open channel south of Tierra del Fuego. When Willem Schouten and Jacob Le Maire discovered the southern extremity of Tierra del Fuego and named it Cape Horn in 1615, they proved that the Tierra del Fuego archipelago was of small extent and not connected to the southern land, as previously thought. Subsequently, in 1642, Abel Tasman showed that even New Holland (Australia) was separated by sea from any continuous southern continent.[37]
The visit to South Georgia by Anthony de la Roché in 1675 was the first ever discovery of land south of the Antarctic Convergence, i.e. in the Southern Ocean/Antarctic.[40][41] Soon after the voyage, cartographers started to depict ‘Roché Island’, honouring the discoverer. James Cook was aware of la Roché's discovery when surveying and mapping the island in 1775.[42]
Edmond Halley's voyage in HMS Paramour for magnetic investigations in the South Atlantic met the pack ice in 52° S in January 1700, but that latitude (he reached 140 mi off the north coast of South Georgia) was his farthest south. A determined effort on the part of the French naval officer Jean-Baptiste Charles Bouvet de Lozier to discover the "South Land" – described by a half legendary "sieur de Gonneyville" – resulted in the discovery of Bouvet Island in 54°10′ S, and in the navigation of 48° of longitude of ice-cumbered sea nearly in 55° S in 1730.[37]
In 1771, Yves Joseph Kerguelen sailed from France with instructions to proceed south from Mauritius in search of "a very large continent". He lighted upon a land in 50° S which he called South France, and believed to be the central mass of the southern continent. He was sent out again to complete the exploration of the new land, and found it to be only an inhospitable island which he renamed the Isle of Desolation, but which was ultimately named after him.[37]
The obsession with the undiscovered continent culminated in the mind of Alexander Dalrymple, the brilliant and erratic hydrographer who was nominated by the Royal Society to command the Transit of Venus expedition to Tahiti in 1769. The command of the expedition was given by the Admiralty to Captain James Cook. Sailing in 1772 with Resolution, a vessel of 462 tons under his own command, and Adventure of 336 tons under Captain Tobias Furneaux, Cook first searched in vain for Bouvet Island, then sailed for 20 degrees of longitude to the westward in latitude 58° S, and then 30° eastward for the most part south of 60° S, a lower southern latitude than had ever been voluntarily entered before by any vessel. On 17 January 1773 the Antarctic Circle was crossed for the first time in history and the two ships reached 67° 15' S by 39° 35' E, where their course was stopped by ice.[37]
Cook then turned northward to look for French Southern and Antarctic Lands, of the discovery of which he had received news at Cape Town, but from the rough determination of his longitude by Kerguelen, Cook reached the assigned latitude 10° too far east and did not see it. He turned south again and was stopped by ice in 61° 52′ S by 95° E and continued eastward nearly on the parallel of 60° S to 147° E. On 16 March, the approaching winter drove him northward for rest to New Zealand and the tropical islands of the Pacific. In November 1773, Cook left New Zealand, having parted company with the Adventure, and reached 60° S by 177° W, whence he sailed eastward keeping as far south as the floating ice allowed. The Antarctic Circle was crossed on 20 December and Cook remained south of it for three days, being compelled after reaching 67° 31′ S to stand north again in 135° W.[37]
A long detour to 47° 50′ S served to show that there was no land connection between New Zealand and Tierra del Fuego. Turning south again, Cook crossed the Antarctic Circle for the third time at 109° 30′ W before his progress was once again blocked by ice four days later at 71° 10′ S by 106° 54′ W. This point, reached on 30 January 1774, was the farthest south attained in the 18th century. With a great detour to the east, almost to the coast of South America, the expedition regained Tahiti for refreshment. In November 1774, Cook started from New Zealand and crossed the South Pacific without sighting land between 53° and 57° S to Tierra del Fuego; then, passing Cape Horn on 29 December, he rediscovered Roché Island renaming it Isle of Georgia, and discovered the South Sandwich Islands (named Sandwich Land by him), the only ice-clad land he had seen, before crossing the South Atlantic to the Cape of Good Hope between 55° and 60°. He thereby laid open the way for future Antarctic exploration by exploding the myth of a habitable southern continent. Cook's most southerly discovery of land lay on the temperate side of the 60th parallel, and he convinced himself that if land lay farther south it was practically inaccessible and without economic value.[37]
Voyagers rounding Cape Horn frequently met with contrary winds and were driven southward into snowy skies and ice-encumbered seas; but so far as can be ascertained none of them before 1770 reached the Antarctic Circle, or knew it, if they did.
In a voyage from 1822 to 1824, James Weddell commanded the 160-ton brig Jane, accompanied by his second ship Beaufoy captained by Matthew Brisbane. Together they sailed to the South Orkneys, where sealing proved disappointing. They turned south in the hope of finding a better sealing ground. The season was unusually mild and tranquil, and on 20 February 1823 the two ships reached latitude 74°15' S and longitude 34°16'45″ W, the southernmost position any ship had ever reached up to that time. A few icebergs were sighted but there was still no sight of land, leading Weddell to theorize that the sea continued as far as the South Pole. Another two days' sailing would have brought him to Coats Land (to the east of the Weddell Sea), but Weddell decided to turn back.[44]
The first land south of the parallel 60° south latitude was discovered by the Englishman William Smith, who sighted Livingston Island on 19 February 1819. A few months later Smith returned to explore the other islands of the South Shetlands archipelago, landed on King George Island, and claimed the new territories for Britain.
In the meantime, the Spanish Navy ship San Telmo sank in September 1819 while trying to cross Cape Horn. Parts of her wreckage were found months later by sealers on the north coast of Livingston Island (South Shetlands). It is unknown whether any survivors managed to be the first to set foot on these Antarctic islands.
The first confirmed sighting of mainland Antarctica cannot be accurately attributed to one single person. It can, however, be narrowed down to three individuals. According to various sources,[45][46][47] three men all sighted the ice shelf or the continent within days or months of each other: Fabian Gottlieb von Bellingshausen, a captain in the Russian Imperial Navy; Edward Bransfield, a captain in the Royal Navy; and Nathaniel Palmer, an American sealer out of Stonington, Connecticut. It is certain that the expedition, led by von Bellingshausen and Lazarev on the ships Vostok and Mirny, reached a point within 32 km (20 mi) from Princess Martha Coast and recorded the sight of an ice shelf at 69°21′28″S 2°14′50″W[48] that became known as the Fimbul Ice Shelf. On 30 January 1820, Bransfield sighted Trinity Peninsula, the northernmost point of the Antarctic mainland, while Palmer sighted the mainland in the area south of Trinity Peninsula in November 1820. Von Bellingshausen's expedition also discovered Peter I Island and Alexander I Island, the first islands to be discovered south of the circle.
1683 map by French cartographer Alain Manesson Mallet from his publication Description de L'Univers. Shows a sea below both the Atlantic and Pacific oceans at a time when Tierra del Fuego was believed joined to Antarctica. Sea is named Mer Magellanique after Ferdinand Magellan.
Samuel Dunn's 1794 General Map of the World or Terraqueous Globe shows a Southern Ocean (but meaning what is today named the South Atlantic) and a Southern Icy Ocean.
A New Map of Asia, from the Latest Authorities, by John Cary, Engraver, 1806, shows the Southern Ocean lying to the south of both the Indian Ocean and Australia.
Freycinet Map of 1811 – resulted from the 1800–1803 French Baudin expedition to Australia and was the first full map of Australia ever to be published. In French, the map named the ocean immediately below Australia as the Grand Océan Austral (‘Great Southern Ocean’).
1863 map of Australia shows the Southern Ocean lying immediately to the south of Australia.
1906 map by German publisher Justus Perthes showing Antarctica encompassed by an Antarktischer (Südl. Eismeer) Ocean – the ‘Antarctic (South Arctic) Ocean’.
Map of The World in 1922 by the National Geographic Society showing the Antarctic (Southern) Ocean.
In December 1839, as part of the United States Exploring Expedition of 1838–42 conducted by the United States Navy (sometimes called "the Wilkes Expedition"), an expedition sailed from Sydney, Australia, on the sloops-of-war USS Vincennes and USS Peacock, the brig USS Porpoise, the full-rigged ship Relief, and two schooners Sea Gull and USS Flying Fish. They sailed into the Antarctic Ocean, as it was then known, and reported the discovery "of an Antarctic continent west of the Balleny Islands" on 25 January 1840. That part of Antarctica was later named "Wilkes Land", a name it maintains to this day.
Explorer James Clark Ross passed through what is now known as the Ross Sea and discovered Ross Island (both of which were named for him) in 1841. He sailed along a huge wall of ice that was later named the Ross Ice Shelf. Mount Erebus and Mount Terror are named after two ships from his expedition: HMS Erebus and HMS Terror.[49]
The Imperial Trans-Antarctic Expedition of 1914, led by Ernest Shackleton, set out to cross the continent via the pole, but their ship, Endurance, was trapped and crushed by pack ice before they even landed. The expedition members survived after an epic journey on sledges over pack ice to Elephant Island. Then Shackleton and five others crossed the Southern Ocean, in an open boat called James Caird, and then trekked over South Georgia to raise the alarm at the whaling station Grytviken.
In 1946, US Navy Rear Admiral Richard E. Byrd and more than 4,700 military personnel visited the Antarctic in an expedition called Operation Highjump. Reported to the public as a scientific mission, the details were kept secret and it may actually have been a training or testing mission for the military. The expedition was, in both military and scientific planning terms, put together very quickly. The group contained an unusually large amount of military equipment, including an aircraft carrier, submarines, military support ships, assault troops and military vehicles. The expedition was planned to last for eight months but was unexpectedly terminated after only two months. With the exception of some eccentric entries in Admiral Byrd's diaries, no real explanation for the early termination has ever been officially given.
Captain Finn Ronne, Byrd's executive officer, returned to Antarctica with his own expedition in 1947–1948, with Navy support, three planes, and dogs. Ronne disproved the notion that the continent was divided in two and established that East and West Antarctica were a single continent, i.e. that the Weddell Sea and the Ross Sea are not connected.[50] The expedition explored and mapped large parts of Palmer Land and the Weddell Sea coastline, and identified the Ronne Ice Shelf, named by Ronne after his wife Edith "Jackie" Ronne.[51] Ronne covered 3,600 miles (5,790 km) by ski and dog sled – more than any other explorer in history.[52] The Ronne Antarctic Research Expedition discovered and mapped the last unknown coastline in the world and was the first Antarctic expedition ever to include women.[53]
The Antarctic Treaty was signed on 1 December 1959 and came into force on 23 June 1961. Among other provisions, this treaty limits military activity in the Antarctic to the support of scientific research.
The first person to sail single-handed to Antarctica was the New Zealander David Henry Lewis, in 1972, in a 10-metre (30 ft) steel sloop Ice Bird.
A baby, named Emilio Marcos de Palma, was born near Hope Bay on 7 January 1978, becoming the first baby born on the continent. He was also born farther south than anyone in history.[54]
The MV Explorer was a cruise ship operated by the Swedish explorer Lars-Eric Lindblad. Observers point to Explorer's 1969 expeditionary cruise to Antarctica as the forerunner of today's sea-based tourism in that region.[55][56] Explorer was the first cruise ship used specifically to sail the icy waters of the Antarctic Ocean and the first to sink there,[57] when she struck an unidentified submerged object, reported to be ice, on 23 November 2007, which caused a 10 by 4 inches (25 by 10 cm) gash in the hull.[58] Explorer was abandoned in the early hours of 23 November 2007 after taking on water near the South Shetland Islands in the Southern Ocean, an area which is usually stormy but was calm at the time.[59] The Chilean Navy confirmed that Explorer sank at approximately 62°24′S 57°16′W,[60] in roughly 600 m of water.[61]
British engineer Richard Jenkins designed an unmanned surface vehicle called a "saildrone"[62] that completed the first autonomous circumnavigation of the Southern Ocean on 3 August 2019 after 196 days at sea.[63]
The first completely human-powered expedition on the Southern Ocean was accomplished on 25 December 2019 by a team of rowers comprising captain Fiann Paul (Iceland), first mate Colin O'Brady (US), Andrew Towne (US), Cameron Bellamy (South Africa), Jamie Douglas-Hamilton (UK) and John Petersen (US).[64]
The Southern Ocean, geologically the youngest of the oceans, was formed when Antarctica and South America moved apart, opening the Drake Passage, roughly 30 million years ago. The separation of the continents allowed the formation of the Antarctic Circumpolar Current.
With a northern limit at 60°S, the Southern Ocean differs from the other oceans in that its largest boundary, the northern boundary, does not abut a landmass (as it did with the first edition of Limits of Oceans and Seas). Instead, the northern limit is with the Atlantic, Indian and Pacific Oceans.
One reason for considering it as a separate ocean stems from the fact that much of the water of the Southern Ocean differs from the water in the other oceans. Water gets transported around the Southern Ocean fairly rapidly because of the Antarctic Circumpolar Current which circulates around Antarctica. Water in the Southern Ocean south of, for example, New Zealand, resembles the water in the Southern Ocean south of South America more closely than it resembles the water in the Pacific Ocean.
The Southern Ocean has typical depths of between 4,000 and 5,000 m (13,000 and 16,000 ft) over most of its extent with only limited areas of shallow water. The Southern Ocean's greatest depth of 7,236 m (23,740 ft) occurs at the southern end of the South Sandwich Trench, at 60°00'S, 024°W. The Antarctic continental shelf appears generally narrow and unusually deep, its edge lying at depths up to 800 m (2,600 ft), compared to a global mean of 133 m (436 ft).
In line with the sun's seasonal influence from equinox to equinox, the Antarctic ice pack fluctuates from an average minimum of 2.6 million square kilometres (1.0×10^6 sq mi) in March to about 18.8 million square kilometres (7.3×10^6 sq mi) in September, more than a sevenfold increase in area.
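The "more than sevenfold" figure follows directly from the two area values quoted above; a minimal arithmetic sketch:

```python
# Average seasonal extent of the Antarctic ice pack, using the figures quoted above.
march_min_km2 = 2.6e6       # average March minimum, square kilometres
september_max_km2 = 18.8e6  # average September maximum, square kilometres

ratio = september_max_km2 / march_min_km2
print(f"September extent is about {ratio:.1f} times the March extent")
# prints "September extent is about 7.2 times the March extent"
```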
Sub-divisions of oceans are geographical features such as "seas", "straits", "bays", "channels", and "gulfs". There are many sub-divisions of the Southern Ocean defined in the never-approved 2002 draft fourth edition of the IHO publication Limits of Oceans and Seas. In clockwise order these include (with sector):
A number of these, such as the Russian-proposed "Cosmonauts Sea", "Cooperation Sea", and "Somov Sea" (named after a mid-1950s Russian polar explorer) from 2002, are not included in the 1953 IHO document, which remains currently in force,[26] because their names largely originated from 1962 onward. Leading geographic authorities and atlases do not use these latter three names, including the 2014 10th edition World Atlas from the United States' National Geographic Society and the 2014 12th edition of the British Times Atlas of the World, but Soviet and Russian-issued maps do.[65][66]
The Southern Ocean probably contains large, and possibly giant, oil and gas fields on the continental margin. Placer deposits, accumulation of valuable minerals such as gold, formed by gravity separation during sedimentary processes are also expected to exist in the Southern Ocean.[5]
Manganese nodules are expected to exist in the Southern Ocean. Manganese nodules are rock concretions on the sea bottom formed of concentric layers of iron and manganese hydroxides around a core. The core may be microscopically small and is sometimes completely transformed into manganese minerals by crystallization. Interest in the potential exploitation of polymetallic nodules generated a great deal of activity among prospective mining consortia in the 1960s and 1970s.[5]
The icebergs that form each year in the Southern Ocean hold enough fresh water to meet the needs of every person on Earth for several months. For several decades there have been proposals, none yet feasible or successful, to tow Southern Ocean icebergs to more arid northern regions (such as Australia) where they could be harvested.[67]
Icebergs can occur at any time of year throughout the ocean. Some may have drafts up to several hundred meters; smaller icebergs, iceberg fragments and sea-ice (generally 0.5 to 1 m thick) also pose problems for ships. The deep continental shelf has a floor of glacial deposits varying widely over short distances.
Sailors know latitudes from 40 to 70 degrees south as the "Roaring Forties", "Furious Fifties" and "Shrieking Sixties" due to high winds and large waves that form as winds blow around the entire globe unimpeded by any land-mass. Icebergs, especially in May to October, make the area even more dangerous. The remoteness of the region makes sources of search and rescue scarce.
The Antarctic Circumpolar Current moves perpetually eastward, chasing and joining itself. At 21,000 km (13,000 mi) in length, it is the world's longest ocean current, transporting 130 million cubic metres per second (4.6×10^9 cu ft/s) of water – 100 times the flow of all the world's rivers.
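As a consistency check, the metric and imperial transport figures quoted above can be reconciled with a standard unit conversion (a sketch; only the conversion factor is introduced here):

```python
# Reconcile the two transport figures quoted for the Antarctic Circumpolar Current.
CU_FT_PER_M3 = 35.3147      # cubic feet in one cubic metre (standard conversion)
transport_m3_s = 130e6      # 130 million cubic metres per second

transport_cuft_s = transport_m3_s * CU_FT_PER_M3
print(f"{transport_cuft_s:.2e} cu ft/s")  # ~4.59e9, consistent with the 4.6e9 quoted
```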
Several processes operate along the coast of Antarctica to produce, in the Southern Ocean, types of water masses not produced elsewhere in the oceans of the Southern Hemisphere. One of these is the Antarctic Bottom Water, a very cold, highly saline, dense water that forms under sea ice.
Associated with the Circumpolar Current is the Antarctic Convergence encircling Antarctica, where cold northward-flowing Antarctic waters meet the relatively warmer waters of the subantarctic. Antarctic waters predominantly sink beneath subantarctic waters, while associated zones of mixing and upwelling create a zone very high in nutrients. These nurture high levels of phytoplankton, with associated copepods and Antarctic krill, and resultant food chains supporting fish, whales, seals, penguins, albatrosses and a wealth of other species.[68]
The Antarctic Convergence is considered to be the best natural definition of the northern extent of the Southern Ocean.[5]
Large-scale upwelling is found in the Southern Ocean. Strong westerly (eastward) winds blow around Antarctica, driving a significant flow of water northwards. This is actually a type of coastal upwelling. Since there are no continents in a band of open latitudes between South America and the tip of the Antarctic Peninsula, some of this water is drawn up from great depths. In many numerical models and observational syntheses, the Southern Ocean upwelling represents the primary means by which deep dense water is brought to the surface. Shallower, wind-driven upwelling is also found off the west coasts of North and South America, northwest and southwest Africa, and southwest and southeast Australia, all associated with oceanic subtropical high pressure circulations.
Some models of the ocean circulation suggest that broad-scale upwelling occurs in the tropics, as pressure driven flows converge water toward the low latitudes where it is diffusively warmed from above. The required diffusion coefficients, however, appear to be larger than are observed in the real ocean. Nonetheless, some diffusive upwelling does probably occur.
The Ross Gyre and Weddell Gyre are two gyres that exist within the Southern Ocean. The gyres are located in the Ross Sea and Weddell Sea respectively, and both rotate clockwise. The gyres are formed by interactions between the Antarctic Circumpolar Current and the Antarctic Continental Shelf.
Sea ice has been noted to persist in the central area of the Ross Gyre.[69] There is some evidence that global warming has resulted in some decrease of the salinity of the waters of the Ross Gyre since the 1950s.[70]
Due to the Coriolis effect acting to the left in the Southern Hemisphere and the resulting Ekman transport away from the centres of these gyres, the regions are very productive because of the upwelling of cold, nutrient-rich water.
Sea temperatures vary from about −2 to 10 °C (28 to 50 °F). Cyclonic storms travel eastward around the continent and frequently become intense because of the temperature contrast between ice and open ocean. The ocean area from about latitude 40° south to the Antarctic Circle has the strongest average winds found anywhere on Earth.[71] In winter the ocean freezes outward to 65° south latitude in the Pacific sector and 55° south latitude in the Atlantic sector, lowering surface temperatures well below 0 °C. At some coastal points, however, persistent intense drainage winds from the interior keep the shoreline ice-free throughout the winter.
A variety of marine animals exist and rely, directly or indirectly, on the phytoplankton in the Southern Ocean. Antarctic sea life includes penguins, blue whales, orcas, colossal squids and fur seals. The emperor penguin is the only penguin that breeds during the winter in Antarctica, while the Adélie penguin breeds farther south than any other penguin. The rockhopper penguin has distinctive feathers around the eyes, giving the appearance of elaborate eyelashes. King penguins, chinstrap penguins, and gentoo penguins also breed in the Antarctic.
The Antarctic fur seal was very heavily hunted in the 18th and 19th centuries for its pelt by sealers from the United States and the United Kingdom. The Weddell seal, a "true seal", is named after Sir James Weddell, commander of British sealing expeditions in the Weddell Sea. Antarctic krill, which congregates in large schools, is the keystone species of the ecosystem of the Southern Ocean, and is an important food organism for whales, seals, leopard seals, fur seals, squid, icefish, penguins, albatrosses and many other birds.[72]
The benthic communities of the seafloor are diverse and dense, with up to 155,000 animals found in 1 square metre (10.8 sq ft). As the seafloor environment is very similar all around the Antarctic, hundreds of species can be found all the way around the mainland, which is a uniquely wide distribution for such a large community. Deep-sea gigantism is common among these animals.[73]
A census of sea life carried out during the International Polar Year, involving some 500 researchers, was released in 2010. The research is part of the global Census of Marine Life (CoML) and has disclosed some remarkable findings. More than 235 marine organisms live in both polar regions, having bridged the gap of 12,000 km (7,500 mi). Large animals such as some cetaceans and birds make the round trip annually. More surprising are small forms of life such as mudworms, sea cucumbers and free-swimming snails found in both polar oceans. Various factors may aid in their distribution – fairly uniform temperatures of the deep ocean at the poles and the equator, which differ by no more than 5 °C (9.0 °F), and the major current systems or marine conveyor belt which transport egg and larva stages.[74] However, among smaller marine animals generally assumed to be the same in the Antarctic and the Arctic, more detailed studies of each population have often—but not always—revealed differences, showing that they are closely related cryptic species rather than a single bipolar species.[75][76][77]
The rocky shores of mainland Antarctica and its offshore islands provide nesting space for over 100 million birds every spring. These nesters include species of albatrosses, petrels, skuas, gulls and terns.[78] The insectivorous South Georgia pipit is endemic to South Georgia and some smaller surrounding islands. Freshwater ducks inhabit South Georgia and the Kerguelen Islands.[79]
The flightless penguins are all located in the Southern Hemisphere, with the greatest concentration located on and around Antarctica. Four of the 18 penguin species live and breed on the mainland and its close offshore islands. Another four species live on the subantarctic islands.[80] Emperor penguins have four overlapping layers of feathers, keeping them warm. They are the only Antarctic animal to breed during the winter.[81]
There are relatively few fish species, in a small number of families, in the Southern Ocean. The most species-rich family is the snailfish (Liparidae), followed by the cod icefish (Nototheniidae)[82] and eelpout (Zoarcidae). Together the snailfish, eelpouts and notothenioids (which include the cod icefish and several other families) account for almost 9⁄10 of the more than 320 described fish species of the Southern Ocean (tens of undescribed species also occur in the region, especially among the snailfish).[83] Southern Ocean snailfish are generally found in deep waters, while the icefish also occur in shallower waters.[82]
Cod icefish (Nototheniidae), as well as several other families, are part of the Notothenioidei suborder, collectively sometimes referred to as icefish. The suborder contains many species with antifreeze proteins in their blood and tissue, allowing them to live in water that is around or slightly below 0 °C (32 °F).[84][85] Antifreeze proteins are also known from Southern Ocean snailfish.[86]
The crocodile icefish (family Channichthyidae), also known as white-blooded fish, are only found in the Southern Ocean. They lack hemoglobin in their blood, resulting in their blood being colourless. One Channichthyidae species, the mackerel icefish (Champsocephalus gunnari), was once the most common fish in coastal waters less than 400 metres (1,312 ft) deep, but was overfished in the 1970s and 1980s. Schools of icefish spend the day at the seafloor and the night higher in the water column eating plankton and smaller fish.[84]
There are two species from the genus Dissostichus, the Antarctic toothfish (Dissostichus mawsoni) and the Patagonian toothfish (Dissostichus eleginoides). These two species live on the seafloor 100–3,000 metres (328–9,843 ft) deep, and can grow to around 2 metres (7 ft) long weighing up to 100 kilograms (220 lb), living up to 45 years. The Antarctic toothfish lives close to the Antarctic mainland, whereas the Patagonian toothfish lives in the relatively warmer subantarctic waters. Toothfish are commercially fished, and overfishing has reduced toothfish populations.[84]
Another abundant fish group is the genus Notothenia, whose members, like the Antarctic toothfish, have antifreeze in their bodies.[84]
An unusual species of icefish is the Antarctic silverfish (Pleuragramma antarcticum), which is the only truly pelagic fish in the waters near Antarctica.[87]
Seven pinniped species inhabit Antarctica. The largest, the elephant seal (Mirounga leonina), can reach up to 4,000 kilograms (8,818 lb), while females of the smallest, the Antarctic fur seal (Arctocephalus gazella), reach only 150 kilograms (331 lb). These two species live north of the sea ice, and breed in harems on beaches. The other four species can live on the sea ice. Crabeater seals (Lobodon carcinophagus) and Weddell seals (Leptonychotes weddellii) form breeding colonies, whereas leopard seals (Hydrurga leptonyx) and Ross seals (Ommatophoca rossii) live solitary lives. Although these species hunt underwater, they breed on land or ice and spend a great deal of time there, as they have no terrestrial predators.[88]
The four species that inhabit sea ice are thought to make up 50% of the total biomass of the world's seals.[89] Crabeater seals have a population of around 15 million, making them one of the most numerous large animals on the planet.[90] The New Zealand sea lion (Phocarctos hookeri), one of the rarest and most localised pinnipeds, breeds almost exclusively on the subantarctic Auckland Islands, although historically it had a wider range.[91] Out of all permanent mammalian residents, the Weddell seals live the furthest south.[92]
There are 10 cetacean species found in the Southern Ocean; six baleen whales, and four toothed whales. The largest of these, the blue whale (Balaenoptera musculus), grows to 24 metres (79 ft) long weighing 84 tonnes. Many of these species are migratory, and travel to tropical waters during the Antarctic winter.[93]
Five species of krill, small free-swimming crustaceans, have been found in the Southern Ocean.[94] The Antarctic krill (Euphausia superba) is one of the most abundant animal species on earth, with a biomass of around 500 million tonnes. Each individual is 6 centimetres (2.4 in) long and weighs over 1 gram (0.035 oz).[95] The swarms that form can stretch for kilometres, with up to 30,000 individuals per 1 cubic metre (35 cu ft), turning the water red.[94] Swarms usually remain in deep water during the day, ascending during the night to feed on plankton. Many larger animals depend on krill for their own survival.[95] During the winter when food is scarce, adult Antarctic krill can revert to a smaller juvenile stage, using their own body as nutrition.[94]
Many benthic crustaceans have a non-seasonal breeding cycle, and some raise their young in a brood pouch. Glyptonotus antarcticus is an unusually large benthic isopod, reaching 20 centimetres (8 in) in length weighing 70 grams (2.47 oz). Amphipods are abundant in soft sediments, eating a range of items, from algae to other animals.[73] The amphipods are highly diverse with more than 600 recognized species found south of the Antarctic Convergence and there are indications that many undescribed species remain. Among these are several "giants", such as the iconic epimeriids that are up to 8 cm (3.1 in) long.[96]
Slow moving sea spiders are common, sometimes growing as large as a human hand. They feed on the corals, sponges, and bryozoans that litter the seabed.[73]
Many aquatic molluscs are present in Antarctica. Bivalves such as Adamussium colbecki move around on the seafloor, while others such as Laternula elliptica live in burrows filtering the water above.[73] There are around 70 cephalopod species in the Southern Ocean,[97] the largest of which is the colossal squid (Mesonychoteuthis hamiltoni), which at up to 14 metres (46 ft) is among the largest invertebrates in the world.[98] Squid makes up most of the diet of some animals, such as grey-headed albatrosses and sperm whales, and the warty squid (Moroteuthis ingens) is one of the subantarctic species most preyed upon by vertebrates.[97]
Sea urchins of the genus Abatus burrow through the sediment, eating the nutrients they find in it.[73] Two species of salps are common in Antarctic waters, Salpa thompsoni and Ihlea racovitzai. Salpa thompsoni is found in ice-free areas, whereas Ihlea racovitzai is found in the high-latitude areas near ice. Due to their low nutritional value, they are normally only eaten by fish, with larger animals such as birds and marine mammals only eating them when other food is scarce.[99]
Antarctic sponges are long lived, and sensitive to environmental changes due to the specificity of the symbiotic microbial communities within them. As a result, they function as indicators of environmental health.[100]
Increased solar ultraviolet radiation resulting from the Antarctic ozone hole has reduced marine primary productivity (phytoplankton) by as much as 15% and has started damaging the DNA of some fish.[101] Illegal, unreported and unregulated fishing, especially the landing of an estimated five to six times more Patagonian toothfish than the regulated fishery, likely affects the sustainability of the stock. Long-line fishing for toothfish causes a high incidence of seabird mortality.
All international agreements regarding the world's oceans apply to the Southern Ocean. In addition, it is subject to these agreements specific to the region:
Many nations prohibit the exploration for and the exploitation of mineral resources south of the fluctuating Antarctic Convergence,[104] which lies in the middle of the Antarctic Circumpolar Current and serves as the dividing line between the very cold polar surface waters to the south and the warmer waters to the north. The Antarctic Treaty covers the portion of the globe south of sixty degrees south.[105]
It prohibits new claims to Antarctica.[106]
The Convention for the Conservation of Antarctic Marine Living Resources applies to the area south of 60° South latitude as well as the areas further north up to the limit of the Antarctic Convergence.[107]
Between 1 July 1998 and 30 June 1999, fisheries landed 119,898 tonnes, of which 85% consisted of krill and 14% of Patagonian toothfish. International agreements came into force in late 1999 to reduce illegal, unreported, and unregulated fishing, which in the 1998–99 season landed five to six times more Patagonian toothfish than the regulated fishery.
Major operational ports include: Rothera Station, Palmer Station, Villa Las Estrellas, Esperanza Base, Mawson Station, McMurdo Station, and offshore anchorages in Antarctica.
Few ports or harbors exist on the southern (Antarctic) coast of the Southern Ocean, since ice conditions limit use of most shores to short periods in midsummer; even then some require icebreaker escort for access. Most Antarctic ports are operated by government research stations and, except in an emergency, remain closed to commercial or private vessels; vessels in any port south of 60 degrees south are subject to inspection by Antarctic Treaty observers.
The Southern Ocean's southernmost port operates at McMurdo Station at 77°50′S 166°40′E. Winter Quarters Bay forms a small harbor on the southern tip of Ross Island, where a floating ice pier makes port operations possible in summer. Operation Deep Freeze personnel constructed the first ice pier at McMurdo in 1973.[108]
Based on the original 1928 IHO delineation of the Southern Ocean (and the 1937 delineation if the Great Australian Bight is considered integral), Australian ports and harbors between Cape Leeuwin and Cape Otway on the Australian mainland and along the west coast of Tasmania would also be identified as ports and harbors existing in the Southern Ocean. These would include the larger ports and harbors of Albany, Thevenard, Port Lincoln, Whyalla, Port Augusta, Port Adelaide, Portland, Warrnambool, and Macquarie Harbour.
Although the organizers of several yacht races define their routes as involving the Southern Ocean, the actual routes do not enter the geographical boundaries of the Southern Ocean; they instead cross the South Atlantic, South Pacific and Indian Oceans.[109][110][111]
The Arctic Ocean is the smallest and shallowest of the world's five major oceans.[1] It is also known as the coldest of all the oceans. The International Hydrographic Organization (IHO) recognizes it as an ocean, although some oceanographers call it the Arctic Sea. It is sometimes classified as an estuary of the Atlantic Ocean,[2][3] and it is also seen as the northernmost part of the all-encompassing World Ocean.
Located mostly in the Arctic north polar region in the middle of the Northern Hemisphere, the Arctic Ocean is almost completely surrounded by Eurasia and North America. It is partly covered by sea ice throughout the year and almost completely in winter. The Arctic Ocean's surface temperature and salinity vary seasonally as the ice cover melts and freezes;[4] its salinity is the lowest on average of the five major oceans, due to low evaporation, heavy fresh water inflow from rivers and streams, and limited connection and outflow to surrounding oceanic waters with higher salinities. The summer shrinking of the ice has been quoted at 50%.[1] The US National Snow and Ice Data Center (NSIDC) uses satellite data to provide a daily record of Arctic sea ice cover and the rate of melting compared to an average period and specific past years.
Human habitation in the North American polar region goes back at least 50,000–17,000 years ago, during the Wisconsin glaciation. At this time, falling sea levels allowed people to move across the Bering land bridge that joined Siberia to northwestern North America (Alaska), leading to the Settlement of the Americas.[5]
Paleo-Eskimo groups included the Pre-Dorset (c. 3200–850 BC); the Saqqaq culture of Greenland (2500–800 BC); the Independence I and Independence II cultures of northeastern Canada and Greenland (c. 2400–1800 BC and c. 800–1 BC); the Groswater of Labrador and Nunavik, and the Dorset culture (500 BC to AD 1500), which spread across Arctic North America. The Dorset were the last major Paleo-Eskimo culture in the Arctic before the migration east from present-day Alaska of the Thule, the ancestors of the modern Inuit.[6]
The Thule Tradition lasted from about 200 BC to AD 1600 around the Bering Strait, the Thule people being the prehistoric ancestors of the Inuit who now live in Northern Labrador.[7]
For much of European history, the north polar regions remained largely unexplored and their geography conjectural. Pytheas of Massilia recorded an account of a journey northward in 325 BC, to a land he called "Eschate Thule", where the Sun only set for three hours each day and the water was replaced by a congealed substance "on which one can neither walk nor sail". He was probably describing loose sea ice known today as "growlers" or "bergy bits"; his "Thule" was probably Norway, though the Faroe Islands or Shetland have also been suggested.[8]
Early cartographers were unsure whether to draw the region around the North Pole as land (as in Johannes Ruysch's map of 1507, or Gerardus Mercator's map of 1595) or water (as with Martin Waldseemüller's world map of 1507). The fervent desire of European merchants for a northern passage, the Northern Sea Route or the Northwest Passage, to "Cathay" (China) caused water to win out, and by 1723 mapmakers such as Johann Homann featured an extensive "Oceanus Septentrionalis" at the northern edge of their charts.
The few expeditions to penetrate much beyond the Arctic Circle in this era added only small islands, such as Novaya Zemlya (11th century) and Spitzbergen (1596), though since these were often surrounded by pack-ice, their northern limits were not so clear. The makers of navigational charts, more conservative than some of the more fanciful cartographers, tended to leave the region blank, with only fragments of known coastline sketched in.
This lack of knowledge of what lay north of the shifting barrier of ice gave rise to a number of conjectures. In England and other European nations, the myth of an "Open Polar Sea" was persistent. John Barrow, longtime Second Secretary of the British Admiralty, promoted exploration of the region from 1818 to 1845 in search of this.
In the United States in the 1850s and 1860s, the explorers Elisha Kane and Isaac Israel Hayes both claimed to have seen part of this elusive body of water. Even quite late in the century, the eminent authority Matthew Fontaine Maury included a description of the Open Polar Sea in his textbook The Physical Geography of the Sea (1883). Nevertheless, as all the explorers who travelled closer and closer to the pole reported, the polar ice cap is quite thick, and persists year-round.
Fridtjof Nansen was the first to make a nautical crossing of the Arctic Ocean, in 1896. The first surface crossing of the ocean was led by Wally Herbert in 1969, in a dog sled expedition from Alaska to Svalbard, with air support.[9] The first nautical transit of the north pole was made in 1958 by the submarine USS Nautilus, and the first surface nautical transit occurred in 1977 by the icebreaker NS Arktika.
Since 1937, Soviet and Russian manned drifting ice stations have extensively monitored the Arctic Ocean. Scientific settlements were established on the drift ice and carried thousands of kilometers by ice floes.[10]
In World War II, the European region of the Arctic Ocean was heavily contested: the Allied commitment to resupply the Soviet Union via its northern ports was opposed by German naval and air forces.
Since 1954 commercial airlines have flown over the Arctic Ocean (see Polar route).
The Arctic Ocean occupies a roughly circular basin and covers an area of about 14,056,000 km2 (5,427,000 sq mi), almost the size of Antarctica.[11][12] The coastline is 45,390 km (28,200 mi) long.[11][13] It is the only ocean smaller than Russia, which has a land area of 16,377,742 km2 (6,323,482 sq mi). It is surrounded by the land masses of Eurasia, North America, Greenland, and by several islands.
It is generally taken to include Baffin Bay, Barents Sea, Beaufort Sea, Chukchi Sea, East Siberian Sea, Greenland Sea, Hudson Bay, Hudson Strait, Kara Sea, Laptev Sea, White Sea and other tributary bodies of water. It is connected to the Pacific Ocean by the Bering Strait and to the Atlantic Ocean through the Greenland Sea and Labrador Sea.[1]
Countries bordering the Arctic Ocean are: Russia, Norway, Iceland, Greenland (territory of the Kingdom of Denmark), Canada and the United States.
There are several ports and harbors around the Arctic Ocean.[14]
In Alaska, the main ports are Barrow (71°17′44″N 156°45′59″W) and Prudhoe Bay (70°19′32″N 148°42′41″W).
In Canada, ships may anchor at Churchill (Port of Churchill) (58°46′28″N 094°11′37″W) in Manitoba, Nanisivik (Nanisivik Naval Facility) (73°04′08″N 084°32′57″W) in Nunavut,[15] Tuktoyaktuk (69°26′34″N 133°01′52″W) or Inuvik (68°21′42″N 133°43′50″W) in the Northwest Territories.
In Greenland, the main port is at Nuuk (Nuuk Port and Harbour) (64°10′15″N 051°43′15″W).
In Norway, Kirkenes (69°43′37″N 030°02′44″E) and Vardø (70°22′14″N 031°06′27″E) are ports on the mainland. There is also Longyearbyen (78°13′12″N 15°39′00″E) on Svalbard, a Norwegian archipelago, next to the Fram Strait.
In Russia, major ports sorted by the different sea areas are:
The ocean's Arctic shelf comprises a number of continental shelves, including the Canadian Arctic shelf, underlying the Canadian Arctic Archipelago, and the Russian continental shelf, which is sometimes simply called the "Arctic Shelf" because it is greater in extent. The Russian continental shelf consists of three separate, smaller shelves: the Barents Shelf, Chukchi Sea Shelf, and Siberian Shelf. Of these three, the Siberian Shelf is the largest such shelf in the world. The Siberian Shelf holds large oil and gas reserves, and the Chukchi shelf forms the border between Russia and the United States as stated in the USSR–USA Maritime Boundary Agreement. The whole area is subject to international territorial claims.
An underwater ridge, the Lomonosov Ridge, divides the deep sea North Polar Basin into two oceanic basins: the Eurasian Basin, which is between 4,000 and 4,500 m (13,100 and 14,800 ft) deep, and the Amerasian Basin (sometimes called the North American, or Hyperborean Basin), which is about 4,000 m (13,000 ft) deep. The bathymetry of the ocean bottom is marked by fault block ridges, abyssal plains, ocean deeps, and basins. The average depth of the Arctic Ocean is 1,038 m (3,406 ft).[16] The deepest point is Molloy Hole in the Fram Strait, at about 5,550 m (18,210 ft).[17]
The two major basins are further subdivided by ridges into the Canada Basin (between Alaska/Canada and the Alpha Ridge), Makarov Basin (between the Alpha and Lomonosov Ridges), Amundsen Basin (between the Lomonosov and Gakkel ridges), and Nansen Basin (between the Gakkel Ridge and the continental shelf that includes Franz Josef Land).
The crystalline basement rocks of mountains around the Arctic Ocean were recrystallized or formed during the Ellesmerian orogeny, the regional phase of the larger Caledonian orogeny in the Paleozoic. Regional subsidence in the Jurassic and Triassic led to significant sediment deposition, creating many of the reservoirs for present-day oil and gas deposits. During the Cretaceous, the Canadian Basin opened, and tectonic activity due to the assembly of Alaska caused hydrocarbons to migrate toward what is now Prudhoe Bay. At the same time, sediments shed from the rising Canadian Rockies built out the large Mackenzie Delta.
The rifting apart of the supercontinent Pangea, beginning in the Triassic, opened the early Atlantic Ocean. Rifting then extended northward, opening the Arctic Ocean as mafic oceanic crust erupted out of a branch of the Mid-Atlantic Ridge. The Amerasia Basin may have opened first, with the Chukchi Borderland moving to the northeast along transform faults. Additional spreading helped to create the "triple-junction" of the Alpha-Mendeleev Ridge in the Late Cretaceous.
Throughout the Cenozoic, the subduction of the Pacific plate, the collision of India with Eurasia and the continued opening of the North Atlantic created new hydrocarbon traps. The seafloor began spreading from the Gakkel Ridge in the Paleocene and Eocene, causing the Lomonosov Ridge to move farther from land and subside.
Because of sea ice and remote conditions, the geology of the Arctic Ocean is still poorly explored. ACEX drilling shed some light on the Lomonosov Ridge, which appears to be continental crust separated from the Barents-Kara Shelf in the Paleocene and then starved of sediment. It may contain up to 10 billion barrels of oil. The Gakkel Ridge rift is also poorly understood and may extend into the Laptev Sea.[18][19]
In large parts of the Arctic Ocean, the top layer (about 50 m (160 ft)) is of lower salinity and lower temperature than the rest. It remains relatively stable because the salinity effect on density is greater than the temperature effect. It is fed by the freshwater input of the large Siberian and Canadian rivers (Ob, Yenisei, Lena, Mackenzie), whose water essentially floats on the saltier, denser, deeper ocean water. Between this lower-salinity layer and the bulk of the ocean lies the so-called halocline, in which both salinity and temperature rise with increasing depth.
Because of its relative isolation from other oceans, the Arctic Ocean has a uniquely complex system of water flow. It resembles some hydrological features of the Mediterranean Sea, in that its deep waters have only limited communication, through the Fram Strait, with the Atlantic Basin, "where the circulation is dominated by thermohaline forcing".[20] The Arctic Ocean has a total volume of 18.07×10⁶ km³, equal to about 1.3% of the World Ocean. Mean surface circulation is predominantly cyclonic on the Eurasian side and anticyclonic in the Canadian Basin.[21]
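The 1.3% figure can be sanity-checked with simple arithmetic. This sketch assumes a total World Ocean volume of about 1.335×10⁹ km³ (a commonly cited approximate estimate, not a figure from this article):

```python
# Check the Arctic Ocean's share of the World Ocean by volume.
ARCTIC_VOLUME_KM3 = 18.07e6        # total Arctic Ocean volume quoted in the text
WORLD_OCEAN_VOLUME_KM3 = 1.335e9   # assumed global figure (approximate)

arctic_share_pct = 100 * ARCTIC_VOLUME_KM3 / WORLD_OCEAN_VOLUME_KM3
# about 1.35%, consistent with the "about 1.3%" quoted above
```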
Water enters from both the Pacific and Atlantic Oceans and can be divided into three unique water masses. The deepest water mass is called Arctic Bottom Water and begins around 900 metres (3,000 feet) depth.[20] It is composed of the densest water in the World Ocean and has two main sources: Arctic shelf water and Greenland Sea Deep Water. Water in the shelf region that begins as inflow from the Pacific passes through the narrow Bering Strait at an average rate of 0.8 Sverdrups and reaches the Chukchi Sea.[22] During the winter, cold Alaskan winds blow over the Chukchi Sea, freezing the surface water and pushing this newly formed ice out to the Pacific. The speed of the ice drift is roughly 1–4 cm/s.[21] This process leaves dense, salty waters in the sea that sink over the continental shelf into the western Arctic Ocean and create a halocline.[23]
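The transport figures here are given in Sverdrups; a small arithmetic sketch of what they mean in annual volume (1 Sv = 10⁶ m³/s is the standard oceanographic definition):

```python
# Convert an ocean transport in Sverdrups to an annual volume in cubic kilometres.
SV_IN_M3_PER_S = 1e6                  # 1 Sverdrup = 1e6 cubic metres per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def annual_volume_km3(transport_sv: float) -> float:
    """Annual volume (km3) moved by a steady transport given in Sverdrups."""
    m3_per_year = transport_sv * SV_IN_M3_PER_S * SECONDS_PER_YEAR
    return m3_per_year / 1e9          # 1 km3 = 1e9 m3

bering_inflow_km3 = annual_volume_km3(0.8)  # Pacific inflow through the Bering Strait
```

At the quoted 0.8 Sv, the Bering Strait admits roughly 25,000 km³ of Pacific water per year.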
This water is met by Greenland Sea Deep Water, which forms during the passage of winter storms. As temperatures cool dramatically in the winter, ice forms, and the intense vertical convection that results allows the water to become dense enough to sink below the warm saline water beneath.[20] Arctic Bottom Water is critically important because of its outflow, which contributes to the formation of Atlantic Deep Water. The overturning of this water plays a key role in global circulation and the moderation of climate.
In the depth range of 150–900 metres (490–2,950 feet) is a water mass referred to as Atlantic Water. Inflow from the North Atlantic Current enters through the Fram Strait, cooling and sinking to form the deepest layer of the halocline, where it circles the Arctic Basin counter-clockwise. This is the highest volumetric inflow to the Arctic Ocean, equalling about 10 times that of the Pacific inflow, and it creates the Arctic Ocean Boundary Current.[22] It flows slowly, at about 0.02 m/s.[20] Atlantic Water has the same salinity as Arctic Bottom Water but is much warmer (up to 3 °C). In fact, this water mass is actually warmer than the surface water, and remains submerged only due to the role of salinity in density.[20] When water reaches the basin it is pushed by strong winds into a large circular current called the Beaufort Gyre. Water in the Beaufort Gyre is far less saline than that of the Chukchi Sea due to inflow from large Canadian and Siberian rivers.[23]
The final defined water mass in the Arctic Ocean is called Arctic Surface Water and occupies the upper 150–200 metres (490–660 feet). The most important feature of this water mass is a section referred to as the sub-surface layer. It is a product of Atlantic water that enters through canyons and is subjected to intense mixing on the Siberian Shelf.[20] As it is entrained, it cools and acts as a heat shield for the surface layer. This insulation keeps the warm Atlantic Water from melting the surface ice. Additionally, this water forms the swiftest currents of the Arctic, with speeds of around 0.3–0.6 m/s.[20] Complementing the water from the canyons, some Pacific water that does not sink to the shelf region after passing through the Bering Strait also contributes to this water mass.
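The three water masses described above can be summarized as a simple depth lookup. The cutoffs (surface layer to roughly 150 m, Atlantic Water to roughly 900 m) are the approximate depths quoted in the text; real water-mass boundaries vary with location and season, so this is only an illustrative sketch:

```python
def arctic_water_mass(depth_m: float) -> str:
    """Rough water-mass label by depth, using the approximate
    boundaries quoted in the text (not a rigorous definition)."""
    if depth_m < 0:
        raise ValueError("depth must be non-negative")
    if depth_m < 150:
        return "Arctic Surface Water"   # upper ~150-200 m
    if depth_m < 900:
        return "Atlantic Water"         # ~150-900 m, warm but saline
    return "Arctic Bottom Water"        # below ~900 m, densest water

# arctic_water_mass(50)   -> "Arctic Surface Water"
# arctic_water_mass(500)  -> "Atlantic Water"
# arctic_water_mass(2000) -> "Arctic Bottom Water"
```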
Waters originating in the Pacific and Atlantic both exit through the Fram Strait between Greenland and Svalbard Island, which is about 2,700 metres (8,900 feet) deep and 350 kilometres (220 miles) wide. This outflow is about 9 Sv.[22] The width of the Fram Strait is what allows for both inflow and outflow on the Atlantic side of the Arctic Ocean. Because of this, it is influenced by the Coriolis force, which concentrates outflow to the East Greenland Current on the western side and inflow to the Norwegian Current on the eastern side.[20] Pacific water also exits along the west coast of Greenland and the Hudson Strait (1–2 Sv), providing nutrients to the Canadian Archipelago.[22]
As noted, the process of ice formation and movement is a key driver in Arctic Ocean circulation and the formation of water masses. With this dependence, the Arctic Ocean experiences variations due to seasonal changes in sea ice cover. Sea ice movement is the result of wind forcing, which is related to a number of meteorological conditions that the Arctic experiences throughout the year. For example, the Beaufort High—an extension of the Siberian High system—is a pressure system that drives the anticyclonic motion of the Beaufort Gyre.[21] During the summer, this area of high pressure is pushed out closer to its Siberian and Canadian sides. In addition, there is a sea level pressure (SLP) ridge over Greenland that drives strong northerly winds through the Fram Strait, facilitating ice export. In the summer, the SLP contrast is smaller, producing weaker winds. A final example of seasonal pressure system movement is the low pressure system that exists over the Nordic and Barents Seas. It is an extension of the Icelandic Low, which creates cyclonic ocean circulation in this area. The low shifts to center over the North Pole in the summer. These variations in the Arctic all contribute to ice drift reaching its weakest point during the summer months. There is also evidence that the drift is associated with the phase of the Arctic Oscillation and Atlantic Multidecadal Oscillation.[21]
Much of the Arctic Ocean is covered by sea ice that varies in extent and thickness seasonally. The mean extent of the ice has been decreasing since 1980 from the average winter value of 15,600,000 km2 (6,023,200 sq mi) at a rate of 3% per decade. The seasonal variations are about 7,000,000 km2 (2,702,700 sq mi) with the maximum in April and minimum in September. The sea ice is affected by wind and ocean currents, which can move and rotate very large areas of ice. Zones of compression also arise, where the ice piles up to form pack ice.[25][26][27]
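The quoted decline can be applied as simple compound arithmetic; a minimal sketch assuming the 3%-per-decade rate compounds from the 1980 winter baseline (the article gives only the rate, not a projection):

```python
def winter_extent_km2(baseline_km2: float, decades: int,
                      rate_per_decade: float = 0.03) -> float:
    """Project winter sea-ice extent assuming a constant fractional
    decline per decade, compounded."""
    return baseline_km2 * (1 - rate_per_decade) ** decades

# From the ~15,600,000 km2 winter average: after four decades the
# extent would be about 13.8 million km2 under this assumption.
extent_after_4_decades = winter_extent_km2(15_600_000, 4)
```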
Icebergs occasionally break away from northern Ellesmere Island, and icebergs are formed from glaciers in western Greenland and extreme northeastern Canada. Icebergs are not sea ice but may become embedded in the pack ice. Icebergs pose a hazard to ships, the sinking of the Titanic being one of the most famous examples. The ocean is virtually icelocked from October to June, and the superstructures of ships are subject to icing from October to May.[14] Before the advent of modern icebreakers, ships sailing the Arctic Ocean risked being trapped or crushed by sea ice (although the Baychimo drifted through the Arctic Ocean untended for decades despite these hazards).
Under the influence of the Quaternary glaciation, the Arctic Ocean is contained in a polar climate characterized by persistent cold and relatively narrow annual temperature ranges. Winters are characterized by the polar night, extreme cold, frequent low-level temperature inversions, and stable weather conditions.[28] Cyclones are only common on the Atlantic side.[29] Summers are characterized by continuous daylight (midnight sun), and temperatures can rise above the melting point (0 °C (32 °F)). Cyclones are more frequent in summer and may bring rain or snow.[29] It is cloudy year-round, with mean cloud cover ranging from 60% in winter to over 80% in summer.[30]
The temperature of the surface of the Arctic Ocean is fairly constant, near the freezing point of seawater. Because the Arctic Ocean consists of saltwater, the temperature must reach −1.8 °C (28.8 °F) before freezing occurs.
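The −1.8 °C figure reflects the freezing-point depression of seawater, which standard oceanographic practice computes from salinity. A sketch using the UNESCO (Millero 1978) polynomial at atmospheric pressure; at a salinity of about 33, typical of Arctic surface water, it gives roughly −1.8 °C:

```python
def seawater_freezing_point(salinity_psu: float, pressure_dbar: float = 0.0) -> float:
    """Freezing point of seawater in degrees Celsius
    (UNESCO / Millero 1978 formula; valid for salinity ~4-40)."""
    s = salinity_psu
    return (-0.0575 * s
            + 1.710523e-3 * s ** 1.5
            - 2.154996e-4 * s ** 2
            - 7.53e-4 * pressure_dbar)

tf = seawater_freezing_point(33.0)  # about -1.81 degrees C
```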
The density of sea water, in contrast to fresh water, increases as it nears the freezing point and thus it tends to sink. It is generally necessary that the upper 100–150 m (330–490 ft) of ocean water cools to the freezing point for sea ice to form.[31] In the winter the relatively warm ocean water exerts a moderating influence, even when covered by ice. This is one reason why the Arctic does not experience the extreme temperatures seen on the Antarctic continent.
There is considerable seasonal variation in how much of the Arctic Ocean the pack ice covers. Much of the Arctic ice pack is also covered in snow for about 10 months of the year. The maximum snow cover is in March or April, about 20 to 50 cm (7.9 to 19.7 in) over the frozen ocean.
The climate of the Arctic region has varied significantly in the past. As recently as 55 million years ago, during the Paleocene–Eocene Thermal Maximum, the region reached an average annual temperature of 10–20 °C (50–68 °F).[32] The surface waters of the northernmost[33] Arctic Ocean warmed, seasonally at least, enough to support tropical lifeforms (the dinoflagellates Apectodinium augustum) requiring surface temperatures of over 22 °C (72 °F).[34]
Due to the pronounced seasonality of 2–6 months of midnight sun and polar night[35] in the Arctic Ocean, the primary production of photosynthesizing organisms such as ice algae and phytoplankton is limited to the spring and summer months (March/April to September[36]). Important consumers of primary producers in the central Arctic Ocean and the adjacent shelf seas include zooplankton, especially copepods (Calanus finmarchicus, Calanus glacialis, and Calanus hyperboreus[37]) and euphausiids,[38] as well as ice-associated fauna (e.g., amphipods[37]). These primary consumers form an important link between the primary producers and higher trophic levels. The composition of higher trophic levels in the Arctic Ocean varies with region (Atlantic side vs. Pacific side) and with the sea-ice cover. Secondary consumers in the Barents Sea, an Atlantic-influenced Arctic shelf sea, are mainly sub-Arctic species including herring, young cod, and capelin.[38] In ice-covered regions of the central Arctic Ocean, polar cod is a central predator of primary consumers. The apex predators in the Arctic Ocean, marine mammals such as seals, whales, and polar bears, prey upon fish.
Endangered marine species in the Arctic Ocean include walruses and whales. The area has a fragile ecosystem, and it is especially exposed to climate change, because it warms faster than the rest of the world. Lion's mane jellyfish are abundant in the waters of the Arctic, and the banded gunnel is the only species of gunnel that lives in the ocean.
Petroleum and natural gas fields, placer deposits, polymetallic nodules, sand and gravel aggregates, fish, seals and whales can all be found in abundance in the region.[14][27]
The political dead zone near the center of the sea is also the focus of a mounting dispute between the United States, Russia, Canada, Norway, and Denmark.[39] It is significant for the global energy market because it may hold 25% or more of the world's undiscovered oil and gas resources.[40]
The Arctic ice pack is thinning, and a seasonal hole in the ozone layer frequently occurs.[41] Reduction of the area of Arctic sea ice reduces the planet's average albedo, possibly resulting in global warming in a positive feedback mechanism.[27][42] Research shows that the Arctic may become ice-free in the summer for the first time in human history by 2040.[43][44] Estimates vary for when the Arctic was last ice-free: from 65 million years ago, when fossils indicate that plants existed there, to as recently as 5,500 years ago; ice and ocean cores go back 8,000 years, to the last warm period, or 125,000 years, to the last interglacial period.[45]
Warming temperatures in the Arctic may cause large amounts of fresh meltwater to enter the north Atlantic, possibly disrupting global ocean current patterns. Potentially severe changes in the Earth's climate might then ensue.[42]
As the extent of sea ice diminishes and sea level rises, the effect of storms such as the Great Arctic Cyclone of 2012 on open water increases, as does possible salt-water damage to vegetation on shore at locations such as the Mackenzie River delta, as stronger storm surges become more likely.[46]
Global warming has increased encounters between polar bears and humans. Reduced sea ice due to melting is causing polar bears to search for new sources of food.[47] Beginning in December 2018 and coming to an apex in February 2019, a mass invasion of polar bears into the archipelago of Novaya Zemlya caused local authorities to declare a state of emergency. Dozens of polar bears were seen entering homes and public buildings and inhabited areas.[48][49]
Sea ice, and the cold conditions it sustains, serves to stabilize methane deposits on and near the shoreline,[50] preventing methane clathrate from breaking down and outgassing methane into the atmosphere. Melting of this ice may release large quantities of methane, a powerful greenhouse gas, into the atmosphere, causing further warming in a strong positive feedback cycle and causing marine genera and species to become extinct.[50][51]
Other environmental concerns relate to the radioactive contamination of the Arctic Ocean from, for example, Russian radioactive waste dump sites in the Kara Sea,[52] Cold War nuclear test sites such as Novaya Zemlya,[53] Camp Century's contaminants in Greenland,[54] and radioactive contamination from Fukushima.[55]
On 16 July 2015, five nations (United States of America, Russia, Canada, Norway, Denmark/Greenland) signed a declaration committing to keep their fishing vessels out of a 1.1 million square mile zone in the central Arctic Ocean near the North Pole. The agreement calls for those nations to refrain from fishing there until there is better scientific knowledge about the marine resources and until a regulatory system is in place to protect those resources.[56][57]
Coordinates: 90°N 0°E
en/4219.html.txt
ADDED
@@ -0,0 +1,117 @@
The Arctic Ocean is the smallest and shallowest of the world's five major oceans.[1] It is also known as the coldest of all the oceans. The International Hydrographic Organization (IHO) recognizes it as an ocean, although some oceanographers call it the Arctic Sea. It is sometimes classified as an estuary of the Atlantic Ocean,[2][3] and it is also seen as the northernmost part of the all-encompassing World Ocean.
Located mostly in the Arctic north polar region in the middle of the Northern Hemisphere, the Arctic Ocean is surrounded by Eurasia and North America. It is partly covered by sea ice throughout the year and almost completely in winter. The Arctic Ocean's surface temperature and salinity vary seasonally as the ice cover melts and freezes;[4] its salinity is the lowest on average of the five major oceans, due to low evaporation, heavy freshwater inflow from rivers and streams, and limited connection and outflow to surrounding oceanic waters with higher salinities. The summer shrinking of the ice has been quoted at 50%.[1] The US National Snow and Ice Data Center (NSIDC) uses satellite data to provide a daily record of Arctic sea ice cover and the rate of melting compared to an average period and specific past years.
Human habitation in the North American polar region goes back at least 50,000–17,000 years ago, during the Wisconsin glaciation. At this time, falling sea levels allowed people to move across the Bering land bridge that joined Siberia to northwestern North America (Alaska), leading to the Settlement of the Americas.[5]
Paleo-Eskimo groups included the Pre-Dorset (c. 3200–850 BC); the Saqqaq culture of Greenland (2500–800 BC); the Independence I and Independence II cultures of northeastern Canada and Greenland (c. 2400–1800 BC and c. 800–1 BC); the Groswater of Labrador and Nunavik, and the Dorset culture (500 BC to AD 1500), which spread across Arctic North America. The Dorset were the last major Paleo-Eskimo culture in the Arctic before the migration east from present-day Alaska of the Thule, the ancestors of the modern Inuit.[6]
The Thule Tradition lasted from about 200 BC to AD 1600 around the Bering Strait, the Thule people being the prehistoric ancestors of the Inuit who now live in Northern Labrador.[7]
For much of European history, the north polar regions remained largely unexplored and their geography conjectural. Pytheas of Massilia recorded an account of a journey northward in 325 BC, to a land he called "Eschate Thule", where the Sun only set for three hours each day and the water was replaced by a congealed substance "on which one can neither walk nor sail". He was probably describing loose sea ice known today as "growlers" or "bergy bits"; his "Thule" was probably Norway, though the Faroe Islands or Shetland have also been suggested.[8]
Early cartographers were unsure whether to draw the region around the North Pole as land (as in Johannes Ruysch's map of 1507, or Gerardus Mercator's map of 1595) or water (as with Martin Waldseemüller's world map of 1507). The fervent desire of European merchants for a northern passage, the Northern Sea Route or the Northwest Passage, to "Cathay" (China) caused water to win out, and by 1723 mapmakers such as Johann Homann featured an extensive "Oceanus Septentrionalis" at the northern edge of their charts.
The few expeditions to penetrate much beyond the Arctic Circle in this era added only small islands, such as Novaya Zemlya (11th century) and Spitzbergen (1596), though since these were often surrounded by pack-ice, their northern limits were not so clear. The makers of navigational charts, more conservative than some of the more fanciful cartographers, tended to leave the region blank, with only fragments of known coastline sketched in.
This lack of knowledge of what lay north of the shifting barrier of ice gave rise to a number of conjectures. In England and other European nations, the myth of an "Open Polar Sea" was persistent. John Barrow, longtime Second Secretary of the British Admiralty, promoted exploration of the region from 1818 to 1845 in search of this.
|
22 |
+
|
23 |
+
In the United States in the 1850s and 1860s, the explorers Elisha Kane and Isaac Israel Hayes both claimed to have seen part of this elusive body of water. Even quite late in the century, the eminent authority Matthew Fontaine Maury included a description of the Open Polar Sea in his textbook The Physical Geography of the Sea (1883). Nevertheless, as all the explorers who travelled closer and closer to the pole reported, the polar ice cap is quite thick, and persists year-round.
|
24 |
+
|
25 |
+
Fridtjof Nansen was the first to make a nautical crossing of the Arctic Ocean, in 1896. The first surface crossing of the ocean was led by Wally Herbert in 1969, in a dog sled expedition from Alaska to Svalbard, with air support.[9] The first nautical transit of the north pole was made in 1958 by the submarine USS Nautilus, and the first surface nautical transit occurred in 1977 by the icebreaker NS Arktika.
|
26 |
+
|
27 |
+
Since 1937, Soviet and Russian manned drifting ice stations have extensively monitored the Arctic Ocean. Scientific settlements were established on the drift ice and carried thousands of kilometers by ice floes.[10]
|
28 |
+
|
29 |
+
In World War II, the European region of the Arctic Ocean was heavily contested: the Allied commitment to resupply the Soviet Union via its northern ports was opposed by German naval and air forces.
|
30 |
+
|
31 |
+
Since 1954 commercial airlines have flown over the Arctic Ocean (see Polar route).
|
32 |
+
|
33 |
+
The Arctic Ocean occupies a roughly circular basin and covers an area of about 14,056,000 km2 (5,427,000 sq mi), almost the size of Antarctica.[11][12] The coastline is 45,390 km (28,200 mi) long.[11][13] It is the only ocean smaller than Russia, which has a land area of 16,377,742 km2 (6,323,482 sq mi). It is surrounded by the land masses of Eurasia, North America, Greenland, and by several islands.
|
34 |
+
|
35 |
+
It is generally taken to include Baffin Bay, Barents Sea, Beaufort Sea, Chukchi Sea, East Siberian Sea, Greenland Sea, Hudson Bay, Hudson Strait, Kara Sea, Laptev Sea, White Sea and other tributary bodies of water. It is connected to the Pacific Ocean by the Bering Strait and to the Atlantic Ocean through the Greenland Sea and Labrador Sea.[1]
|
36 |
+
|
37 |
+
Countries bordering the Arctic Ocean are: Russia, Norway, Iceland, Greenland (territory of the Kingdom of Denmark), Canada and the United States.
|
38 |
+
|
39 |
+
There are several ports and harbors around the Arctic Ocean[14]
|
40 |
+
|
41 |
+
In Alaska, the main ports are Barrow (71°17′44″N 156°45′59″W / 71.29556°N 156.76639°W / 71.29556; -156.76639 (Barrow)) and Prudhoe Bay (70°19′32″N 148°42′41″W / 70.32556°N 148.71139°W / 70.32556; -148.71139 (Prudhoe)).
|
42 |
+
|
43 |
+
In Canada, ships may anchor at Churchill (Port of Churchill) (58°46′28″N 094°11′37″W / 58.77444°N 94.19361°W / 58.77444; -94.19361 (Port of Churchill)) in Manitoba, Nanisivik (Nanisivik Naval Facility) (73°04′08″N 084°32′57″W / 73.06889°N 84.54917°W / 73.06889; -84.54917 (Nanisivik Naval Facility)) in Nunavut,[15] Tuktoyaktuk (69°26′34″N 133°01′52″W / 69.44278°N 133.03111°W / 69.44278; -133.03111 (Tuktoyaktuk)) or Inuvik (68°21′42″N 133°43′50″W / 68.36167°N 133.73056°W / 68.36167; -133.73056 (Inuvik)) in the Northwest Territories.
|
44 |
+
|
45 |
+
In Greenland, the main port is at Nuuk (Nuuk Port and Harbour) (64°10′15″N 051°43′15″W / 64.17083°N 51.72083°W / 64.17083; -51.72083 (Nuuk Port and Harbour)).
|
46 |
+
|
47 |
+
In Norway, Kirkenes (69°43′37″N 030°02′44″E / 69.72694°N 30.04556°E / 69.72694; 30.04556 (Kirkenes)) and Vardø (70°22′14″N 031°06′27″E / 70.37056°N 31.10750°E / 70.37056; 31.10750 (Vardø)) are ports on the mainland. Also, there is Longyearbyen (78°13′12″N 15°39′00″E / 78.22000°N 15.65000°E / 78.22000; 15.65000 (Longyearbyen)) on Svalbard, a Norwegian archipelago, next to Fram Strait.
|
48 |
+
|
49 |
+
In Russia, major ports sorted by the different sea areas are:
|
50 |
+
|
51 |
+
The ocean's Arctic shelf comprises a number of continental shelves, including the Canadian Arctic shelf, underlying the Canadian Arctic Archipelago, and the Russian continental shelf, which is sometimes simply called the "Arctic Shelf" because it is greater in extent. The Russian continental shelf consists of three separate, smaller shelves, the Barents Shelf, Chukchi Sea Shelf and Siberian Shelf. Of these three, the Siberian Shelf is the largest such shelf in the world. The Siberian Shelf holds large oil and gas reserves, and the Chukchi shelf forms the border between Russian and the United States as stated in the USSR–USA Maritime Boundary Agreement. The whole area is subject to international territorial claims.
|
52 |
+
|
53 |
+
An underwater ridge, the Lomonosov Ridge, divides the deep sea North Polar Basin into two oceanic basins: the Eurasian Basin, which is between 4,000 and 4,500 m (13,100 and 14,800 ft) deep, and the Amerasian Basin (sometimes called the North American, or Hyperborean Basin), which is about 4,000 m (13,000 ft) deep. The bathymetry of the ocean bottom is marked by fault block ridges, abyssal plains, ocean deeps, and basins. The average depth of the Arctic Ocean is 1,038 m (3,406 ft).[16] The deepest point is Molloy Hole in the Fram Strait, at about 5,550 m (18,210 ft).[17]
|
54 |
+
|
55 |
+
The two major basins are further subdivided by ridges into the Canada Basin (between Alaska/Canada and the Alpha Ridge), Makarov Basin (between the Alpha and Lomonosov Ridges), Amundsen Basin (between Lomonosov and Gakkel ridges), and Nansen Basin (between the Gakkel Ridge and the continental shelf that includes the Franz Josef Land).
|
56 |
+
|
57 |
+
The crystalline basement rocks of mountains around the Arctic Ocean were recrystallized or formed during the Ellesmerian orogeny, the regional phase of the larger Caledonian orogeny in the Paleozoic. Regional subsidence in the Jurassic and Triassic led to significant sediment deposition, creating many of the reservoir for current day oil and gas deposits. During the Cretaceous the Canadian Basin opened and tectonic activity due to the assembly of Alaska caused hydrocarbons to migrate toward what is now Prudhoe Bay. At the same time, sediments shed off the rising Canadian Rockies building out the large Mackenzie Delta.
|
58 |
+
|
59 |
+
The rifting apart of the supercontinent Pangea, beginning in the Triassic opened the early Atlantic Ocean. Rifting then extended northward, opening the Arctic Ocean as mafic oceanic crust material erupted out of a branch of Mid-Atlantic Ridge. The Amerasia Basin may have opened first, with the Chulkchi Borderland moved along to the northeast by transform faults. Additional spreading helped to create the "triple-junction" of the Alpha-Mendeleev Ridge in the Late Cretaceous.
Throughout the Cenozoic, the subduction of the Pacific plate, the collision of India with Eurasia and the continued opening of the North Atlantic created new hydrocarbon traps. The seafloor began spreading from the Gakkel Ridge in the Paleocene and Eocene, causing the Lomonosov Ridge to move farther from land and subside.
Because of sea ice and remote conditions, the geology of the Arctic Ocean is still poorly explored. ACEX drilling shed some light on the Lomonosov Ridge, which appears to be continental crust that separated from the Barents-Kara Shelf in the Paleocene and was then starved of sediment. It may contain up to 10 billion barrels of oil. The Gakkel Ridge rift is also poorly understood and may extend into the Laptev Sea.[18][19]
In large parts of the Arctic Ocean, the top layer (about 50 m (160 ft)) is of lower salinity and lower temperature than the rest. It remains relatively stable because the effect of salinity on density is greater than that of temperature. It is fed by the freshwater input of the large Siberian and Canadian rivers (Ob, Yenisei, Lena, Mackenzie), whose water effectively floats on the saltier, denser, deeper ocean water. Between this lower-salinity layer and the bulk of the ocean lies the so-called halocline, in which both salinity and temperature rise with increasing depth.
Because of its relative isolation from other oceans, the Arctic Ocean has a uniquely complex system of water flow. It shares some hydrological features with the Mediterranean Sea: its deep waters have only limited communication, through the Fram Strait, with the Atlantic Basin, "where the circulation is dominated by thermohaline forcing".[20] The Arctic Ocean has a total volume of 18.07 million km3, equal to about 1.3% of the World Ocean. Mean surface circulation is predominantly cyclonic on the Eurasian side and anticyclonic in the Canadian Basin.[21]
Water enters from both the Pacific and Atlantic Oceans and can be divided into three unique water masses. The deepest water mass is called Arctic Bottom Water and begins around 900 metres (3,000 feet) depth.[20] It is composed of the densest water in the World Ocean and has two main sources: Arctic shelf water and Greenland Sea Deep Water. Water in the shelf region that begins as inflow from the Pacific passes through the narrow Bering Strait at an average rate of 0.8 Sverdrups and reaches the Chukchi Sea.[22] During the winter, cold Alaskan winds blow over the Chukchi Sea, freezing the surface water and pushing this newly formed ice out to the Pacific. The speed of the ice drift is roughly 1–4 cm/s.[21] This process leaves dense, salty waters in the sea that sink over the continental shelf into the western Arctic Ocean and create a halocline.[23]
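The transport figures above are given in sverdrups (Sv), the standard unit of volume transport in oceanography, where 1 Sv = 10⁶ m³/s. A minimal sketch of the conversion, using the Bering Strait inflow value from the text (the function name is illustrative):

```python
SV_IN_M3_PER_S = 1_000_000  # 1 sverdrup = 10^6 cubic metres per second

def sv_to_m3_per_year(sv: float) -> float:
    """Convert a transport in sverdrups to cubic metres per year."""
    seconds_per_year = 365.25 * 24 * 3600
    return sv * SV_IN_M3_PER_S * seconds_per_year

# Bering Strait inflow of 0.8 Sv (value from the text)
bering = sv_to_m3_per_year(0.8)
print(f"{bering:.3e} m^3/yr")  # on the order of 2.5e13 m^3 per year
```

Even this modest-sounding 0.8 Sv moves tens of trillions of cubic metres of Pacific water into the Chukchi Sea each year.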
This water is met by Greenland Sea Deep Water, which forms during the passage of winter storms. As temperatures cool dramatically in the winter, ice forms and intense vertical convection allows the water to become dense enough to sink below the warm saline water.[20] Arctic Bottom Water is critically important because of its outflow, which contributes to the formation of Atlantic Deep Water. The overturning of this water plays a key role in global circulation and the moderation of climate.
In the depth range of 150–900 metres (490–2,950 feet) is a water mass referred to as Atlantic Water. Inflow from the North Atlantic Current enters through the Fram Strait, cooling and sinking to form the deepest layer of the halocline, where it circles the Arctic Basin counter-clockwise. This is the highest volumetric inflow to the Arctic Ocean, equalling about 10 times that of the Pacific inflow, and it creates the Arctic Ocean Boundary Current.[22] It flows slowly, at about 0.02 m/s.[20] Atlantic Water has the same salinity as Arctic Bottom Water but is much warmer (up to 3 °C). In fact, this water mass is actually warmer than the surface water, and remains submerged only due to the role of salinity in density.[20] When water reaches the basin it is pushed by strong winds into a large circular current called the Beaufort Gyre. Water in the Beaufort Gyre is far less saline than that of the Chukchi Sea due to inflow from large Canadian and Siberian rivers.[23]
The final defined water mass in the Arctic Ocean is called Arctic Surface Water and is found from 150–200 metres (490–660 feet). The most important feature of this water mass is a section referred to as the sub-surface layer. It is a product of Atlantic water that enters through canyons and is subjected to intense mixing on the Siberian Shelf.[20] As it is entrained, it cools and acts as a heat shield for the surface layer. This insulation keeps the warm Atlantic Water from melting the surface ice. Additionally, this water forms the swiftest currents of the Arctic, with speeds of around 0.3–0.6 m/s.[20] Complementing the water from the canyons, some Pacific water that does not sink to the shelf region after passing through the Bering Strait also contributes to this water mass.
Waters originating in the Pacific and Atlantic both exit through the Fram Strait between Greenland and Svalbard Island, which is about 2,700 metres (8,900 feet) deep and 350 kilometres (220 miles) wide. This outflow is about 9 Sv.[22] The width of the Fram Strait is what allows for both inflow and outflow on the Atlantic side of the Arctic Ocean. Because of this, it is influenced by the Coriolis force, which concentrates outflow to the East Greenland Current on the western side and inflow to the Norwegian Current on the eastern side.[20] Pacific water also exits along the west coast of Greenland and the Hudson Strait (1–2 Sv), providing nutrients to the Canadian Archipelago.[22]
As noted, the process of ice formation and movement is a key driver in Arctic Ocean circulation and the formation of water masses. With this dependence, the Arctic Ocean experiences variations due to seasonal changes in sea ice cover. Sea ice movement is the result of wind forcing, which is related to a number of meteorological conditions that the Arctic experiences throughout the year. For example, the Beaufort High—an extension of the Siberian High system—is a pressure system that drives the anticyclonic motion of the Beaufort Gyre.[21] During the summer, this area of high pressure is pushed out closer to its Siberian and Canadian sides. In addition, there is a sea level pressure (SLP) ridge over Greenland that drives strong northerly winds through the Fram Strait, facilitating ice export. In the summer, the SLP contrast is smaller, producing weaker winds. A final example of seasonal pressure system movement is the low pressure system that exists over the Nordic and Barents Seas. It is an extension of the Icelandic Low, which creates cyclonic ocean circulation in this area. The low shifts to center over the North Pole in the summer. These variations in the Arctic all contribute to ice drift reaching its weakest point during the summer months. There is also evidence that the drift is associated with the phase of the Arctic Oscillation and Atlantic Multidecadal Oscillation.[21]
Much of the Arctic Ocean is covered by sea ice that varies in extent and thickness seasonally. The mean extent of the ice has been decreasing since 1980 from the average winter value of 15,600,000 km2 (6,023,200 sq mi) at a rate of 3% per decade. The seasonal variations are about 7,000,000 km2 (2,702,700 sq mi) with the maximum in April and minimum in September. The sea ice is affected by wind and ocean currents, which can move and rotate very large areas of ice. Zones of compression also arise, where the ice piles up to form pack ice.[25][26][27]
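The 3%-per-decade decline quoted above can be turned into a rough extrapolation. A sketch, assuming the decline compounds decade over decade (the 1980 baseline and rate are from the text; the compounding assumption and function name are mine):

```python
def extent_after(years: float, baseline_km2: float = 15_600_000,
                 rate_per_decade: float = 0.03) -> float:
    """Winter sea-ice extent (km^2) after `years`, assuming the
    3%-per-decade decline compounds decade over decade."""
    decades = years / 10
    return baseline_km2 * (1 - rate_per_decade) ** decades

# Extent four decades after the 1980 baseline, roughly 13.8 million km^2
print(round(extent_after(40)))
```

A linear (non-compounding) decline would give a slightly lower figure; the quoted rate is too coarse to distinguish the two over a few decades.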
Icebergs occasionally break away from northern Ellesmere Island, and icebergs are formed from glaciers in western Greenland and extreme northeastern Canada. Icebergs are not sea ice but may become embedded in the pack ice. Icebergs pose a hazard to ships, the Titanic being one of the most famous casualties. The ocean is virtually icelocked from October to June, and the superstructures of ships are subject to icing from October to May.[14] Before the advent of modern icebreakers, ships sailing the Arctic Ocean risked being trapped or crushed by sea ice (although the Baychimo drifted through the Arctic Ocean untended for decades despite these hazards).
Under the influence of the Quaternary glaciation, the Arctic Ocean is contained in a polar climate characterized by persistent cold and relatively narrow annual temperature ranges. Winters are characterized by the polar night, extreme cold, frequent low-level temperature inversions, and stable weather conditions.[28] Cyclones are only common on the Atlantic side.[29] Summers are characterized by continuous daylight (midnight sun), and temperatures can rise above the melting point (0 °C (32 °F)). Cyclones are more frequent in summer and may bring rain or snow.[29] It is cloudy year-round, with mean cloud cover ranging from 60% in winter to over 80% in summer.[30]
The temperature of the surface of the Arctic Ocean is fairly constant, near the freezing point of seawater. Because the Arctic Ocean consists of saltwater, the temperature must reach −1.8 °C (28.8 °F) before freezing occurs.
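The −1.8 °C figure follows from freezing-point depression: for seawater it is roughly proportional to salinity, with T_f ≈ −0.054 °C per g/kg of salt as a common linear rule of thumb (the coefficient and the sample salinity are standard values, not from the text):

```python
def freezing_point_c(salinity_g_per_kg: float) -> float:
    """Approximate freezing point of seawater in degrees Celsius,
    using the linear rule of thumb T_f = -0.054 * S."""
    return -0.054 * salinity_g_per_kg

# A typical Arctic surface salinity of ~33.5 g/kg gives roughly -1.8 degC
print(round(freezing_point_c(33.5), 2))
```

Fresh river water (salinity near zero) freezes at 0 °C, which is one reason ice forms readily where the great Siberian and Canadian rivers freshen the surface layer.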
The density of sea water, in contrast to fresh water, increases as it nears the freezing point and thus it tends to sink. It is generally necessary that the upper 100–150 m (330–490 ft) of ocean water cools to the freezing point for sea ice to form.[31] In the winter the relatively warm ocean water exerts a moderating influence, even when covered by ice. This is one reason why the Arctic does not experience the extreme temperatures seen on the Antarctic continent.
There is considerable seasonal variation in how much of the Arctic Ocean is covered by the pack ice. Much of the Arctic ice pack is also covered in snow for about 10 months of the year. The maximum snow cover, in March or April, is about 20 to 50 cm (7.9 to 19.7 in) over the frozen ocean.
The climate of the Arctic region has varied significantly in the past. As recently as 55 million years ago, during the Paleocene–Eocene Thermal Maximum, the region reached an average annual temperature of 10–20 °C (50–68 °F).[32] The surface waters of the northernmost[33] Arctic Ocean warmed, seasonally at least, enough to support tropical lifeforms (the dinoflagellates Apectodinium augustum) requiring surface temperatures of over 22 °C (72 °F).[34]
Due to the pronounced seasonality of 2–6 months of midnight sun and polar night[35] in the Arctic Ocean, the primary production of photosynthesizing organisms such as ice algae and phytoplankton is limited to the spring and summer months (March/April to September[36]). Important consumers of primary producers in the central Arctic Ocean and the adjacent shelf seas include zooplankton, especially copepods (Calanus finmarchicus, Calanus glacialis, and Calanus hyperboreus[37]) and euphausiids,[38] as well as ice-associated fauna (e.g., amphipods[37]). These primary consumers form an important link between the primary producers and higher trophic levels. The composition of higher trophic levels in the Arctic Ocean varies with region (Atlantic side vs. Pacific side) and with the sea-ice cover. Secondary consumers in the Barents Sea, an Atlantic-influenced Arctic shelf sea, are mainly sub-Arctic species including herring, young cod, and capelin.[38] In ice-covered regions of the central Arctic Ocean, polar cod is a central predator of primary consumers. Marine mammals such as seals, whales, and polar bears are the apex predators of the Arctic Ocean, preying upon fish.
Endangered marine species in the Arctic Ocean include walruses and whales. The area has a fragile ecosystem, and it is especially exposed to climate change, because it warms faster than the rest of the world. Lion's mane jellyfish are abundant in the waters of the Arctic, and the banded gunnel is the only species of gunnel that lives in the ocean.
Petroleum and natural gas fields, placer deposits, polymetallic nodules, sand and gravel aggregates, fish, seals and whales can all be found in abundance in the region.[14][27]
The political dead zone near the center of the sea is also the focus of a mounting dispute between the United States, Russia, Canada, Norway, and Denmark.[39] It is significant for the global energy market because it may hold 25% or more of the world's undiscovered oil and gas resources.[40]
The Arctic ice pack is thinning, and a seasonal hole in the ozone layer frequently occurs.[41] Reduction of the area of Arctic sea ice reduces the planet's average albedo, possibly resulting in global warming in a positive feedback mechanism.[27][42] Research shows that the Arctic may become ice-free in the summer for the first time in human history by 2040.[43][44] Estimates of when the Arctic was last ice-free vary: from 65 million years ago, when fossils indicate that plants existed there, to as recently as 5,500 years ago; ice and ocean cores going back 8,000 years point to the last warm period, or 125,000 years ago during the last interglacial period.[45]
Warming temperatures in the Arctic may cause large amounts of fresh meltwater to enter the north Atlantic, possibly disrupting global ocean current patterns. Potentially severe changes in the Earth's climate might then ensue.[42]
As the extent of sea ice diminishes and sea level rises, the effect of storms such as the Great Arctic Cyclone of 2012 on open water increases, as does possible salt-water damage to vegetation on shore at locations such as the Mackenzie River delta, as stronger storm surges become more likely.[46]
Global warming has increased encounters between polar bears and humans. Reduced sea ice due to melting is causing polar bears to search for new sources of food.[47] Beginning in December 2018 and coming to an apex in February 2019, a mass invasion of polar bears into the archipelago of Novaya Zemlya caused local authorities to declare a state of emergency. Dozens of polar bears were seen entering homes and public buildings and inhabited areas.[48][49]
Sea ice, and the cold conditions it sustains, serves to stabilize methane deposits on and near the shoreline,[50] preventing the clathrate from breaking down and outgassing methane into the atmosphere. Melting of this ice may release large quantities of methane, a powerful greenhouse gas, into the atmosphere, causing further warming in a strong positive feedback cycle and driving marine genera and species to extinction.[50][51]
Other environmental concerns relate to the radioactive contamination of the Arctic Ocean from, for example, Russian radioactive waste dump sites in the Kara Sea,[52] Cold War nuclear test sites such as Novaya Zemlya,[53] Camp Century's contaminants in Greenland,[54] and radioactive contamination from Fukushima.[55]
On 16 July 2015, five nations (United States of America, Russia, Canada, Norway, Denmark/Greenland) signed a declaration committing to keep their fishing vessels out of a 1.1 million square mile zone in the central Arctic Ocean near the North Pole. The agreement calls for those nations to refrain from fishing there until there is better scientific knowledge about the marine resources and until a regulatory system is in place to protect those resources.[56][57]
Coordinates: 90°N 0°E
en/422.html.txt
ADDED
@@ -0,0 +1,191 @@
Astronomy (from Greek: ἀστρονομία) is a natural science that studies celestial objects and phenomena. It uses mathematics, physics, and chemistry in order to explain their origin and evolution. Objects of interest include planets, moons, stars, nebulae, galaxies, and comets. Relevant phenomena include supernova explosions, gamma ray bursts, quasars, blazars, pulsars, and cosmic microwave background radiation. More generally, astronomy studies everything that originates outside Earth's atmosphere. Cosmology, a branch of astronomy, studies the Universe as a whole.[1]
Astronomy is one of the oldest natural sciences. The early civilizations in recorded history made methodical observations of the night sky. These include the Babylonians, Greeks, Indians, Egyptians, Chinese, Maya, and many ancient indigenous peoples of the Americas. In the past, astronomy included disciplines as diverse as astrometry, celestial navigation, observational astronomy, and the making of calendars. Nowadays, professional astronomy is often said to be the same as astrophysics.[2]
Professional astronomy is split into observational and theoretical branches. Observational astronomy is focused on acquiring data from observations of astronomical objects. This data is then analyzed using basic principles of physics. Theoretical astronomy is oriented toward the development of computer or analytical models to describe astronomical objects and phenomena. These two fields complement each other. Theoretical astronomy seeks to explain observational results and observations are used to confirm theoretical results.
Astronomy is one of the few sciences in which amateurs play an active role. This is especially true for the discovery and observation of transient events. Amateur astronomers have helped with many important discoveries, such as finding new comets.
Astronomy (from the Greek ἀστρονομία from ἄστρον astron, "star" and -νομία -nomia from νόμος nomos, "law" or "culture") means "law of the stars" (or "culture of the stars" depending on the translation). Astronomy should not be confused with astrology, the belief system which claims that human affairs are correlated with the positions of celestial objects.[4] Although the two fields share a common origin, they are now entirely distinct.[5]
"Astronomy" and "astrophysics" are synonyms.[6][7][8] Based on strict dictionary definitions, "astronomy" refers to "the study of objects and matter outside the Earth's atmosphere and of their physical and chemical properties,"[9] while "astrophysics" refers to the branch of astronomy dealing with "the behavior, physical properties, and dynamic processes of celestial objects and phenomena".[10] In some cases, as in the introduction of the introductory textbook The Physical Universe by Frank Shu, "astronomy" may be used to describe the qualitative study of the subject, whereas "astrophysics" is used to describe the physics-oriented version of the subject.[11] However, since most modern astronomical research deals with subjects related to physics, modern astronomy could actually be called astrophysics.[6] Some fields, such as astrometry, are purely astronomy rather than also astrophysics. Various departments in which scientists carry out research on this subject may use "astronomy" and "astrophysics", partly depending on whether the department is historically affiliated with a physics department,[7] and many professional astronomers have physics rather than astronomy degrees.[8] Some titles of the leading scientific journals in this field include The Astronomical Journal, The Astrophysical Journal, and Astronomy & Astrophysics.
In early historic times, astronomy only consisted of the observation and predictions of the motions of objects visible to the naked eye. In some locations, early cultures assembled massive artifacts that possibly had some astronomical purpose. In addition to their ceremonial uses, these observatories could be employed to determine the seasons, an important factor in knowing when to plant crops and in understanding the length of the year.[12]
Before tools such as the telescope were invented, early study of the stars was conducted using the naked eye. As civilizations developed, most notably in Mesopotamia, Greece, Persia, India, China, Egypt, and Central America, astronomical observatories were assembled and ideas on the nature of the Universe began to develop. Most early astronomy consisted of mapping the positions of the stars and planets, a science now referred to as astrometry. From these observations, early ideas about the motions of the planets were formed, and the nature of the Sun, Moon and the Earth in the Universe were explored philosophically. The Earth was believed to be the center of the Universe with the Sun, the Moon and the stars rotating around it. This is known as the geocentric model of the Universe, or the Ptolemaic system, named after Ptolemy.[13]
A particularly important early development was the beginning of mathematical and scientific astronomy, which began among the Babylonians, who laid the foundations for the later astronomical traditions that developed in many other civilizations.[15] The Babylonians discovered that lunar eclipses recurred in a repeating cycle known as a saros.[16]
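The saros the Babylonians discovered works because 223 synodic (new-moon-to-new-moon) months almost exactly equal 242 draconic (node-to-node) months, so eclipses recur on that cycle. A quick check of the near-coincidence (the month lengths are standard modern astronomical values, not from the text):

```python
SYNODIC_MONTH = 29.530589   # days, new moon to new moon
DRACONIC_MONTH = 27.212221  # days, node crossing to node crossing

saros_days = 223 * SYNODIC_MONTH
print(round(saros_days, 1))                          # about 6585.3 days, ~18 years 11 days
print(round(242 * DRACONIC_MONTH - saros_days, 2))   # mismatch of only ~0.04 days
```

Because the two counts agree to within about an hour, the Sun-Moon-node geometry, and hence the eclipse pattern, repeats after each saros.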
Following the Babylonians, significant advances in astronomy were made in ancient Greece and the Hellenistic world. Greek astronomy is characterized from the start by seeking a rational, physical explanation for celestial phenomena.[17] In the 3rd century BC, Aristarchus of Samos estimated the size and distance of the Moon and Sun, and he proposed a model of the Solar System where the Earth and planets rotated around the Sun, now called the heliocentric model.[18] In the 2nd century BC, Hipparchus discovered precession, calculated the size and distance of the Moon and invented the earliest known astronomical devices such as the astrolabe.[19] Hipparchus also created a comprehensive catalog of 1020 stars, and most of the constellations of the northern hemisphere derive from Greek astronomy.[20] The Antikythera mechanism (c. 150–80 BC) was an early analog computer designed to calculate the location of the Sun, Moon, and planets for a given date. Technological artifacts of similar complexity did not reappear until the 14th century, when mechanical astronomical clocks appeared in Europe.[21]
Medieval Europe housed a number of important astronomers. Richard of Wallingford (1292–1336) made major contributions to astronomy and horology, including the invention of the first astronomical clock; the Rectangulus, which allowed for the measurement of angles between planets and other astronomical bodies; and an equatorium called the Albion, which could be used for astronomical calculations such as lunar, solar and planetary longitudes and could predict eclipses. Nicole Oresme (1320–1382) and Jean Buridan (1300–1361) first discussed evidence for the rotation of the Earth; Buridan also developed the theory of impetus (a predecessor of the modern scientific theory of inertia), which showed that planets were capable of motion without the intervention of angels.[22] Georg von Peuerbach (1423–1461) and Regiomontanus (1436–1476) helped make astronomical progress instrumental to Copernicus's development of the heliocentric model decades later.
Astronomy flourished in the Islamic world and other parts of the world. This led to the emergence of the first astronomical observatories in the Muslim world by the early 9th century.[23][24][25] In 964, the Andromeda Galaxy, the largest galaxy in the Local Group, was described by the Persian Muslim astronomer Abd al-Rahman al-Sufi in his Book of Fixed Stars.[26] The SN 1006 supernova, the brightest apparent magnitude stellar event in recorded history, was observed by the Egyptian Arabic astronomer Ali ibn Ridwan and Chinese astronomers in 1006. Some of the prominent Islamic (mostly Persian and Arab) astronomers who made significant contributions to the science include Al-Battani, Thebit, Abd al-Rahman al-Sufi, Biruni, Abū Ishāq Ibrāhīm al-Zarqālī, Al-Birjandi, and the astronomers of the Maragheh and Samarkand observatories. Astronomers during that time introduced many Arabic names now used for individual stars.[27][28] It is also believed that the ruins at Great Zimbabwe and Timbuktu[29] may have housed astronomical observatories.[30] Europeans had previously believed that there had been no astronomical observation in sub-Saharan Africa during the pre-colonial Middle Ages, but modern discoveries show otherwise.[31][32][33][34]
For over six centuries (from the recovery of ancient learning during the late Middle Ages into the Enlightenment), the Roman Catholic Church gave more financial and social support to the study of astronomy than probably all other institutions. Among the Church's motives was finding the date for Easter.[35]
During the Renaissance, Nicolaus Copernicus proposed a heliocentric model of the solar system. His work was defended by Galileo Galilei and expanded upon by Johannes Kepler. Kepler was the first to devise a system that correctly described the details of the motion of the planets around the Sun. However, Kepler did not succeed in formulating a theory behind the laws he wrote down.[36] It was Isaac Newton, with his invention of celestial dynamics and his law of gravitation, who finally explained the motions of the planets. Newton also developed the reflecting telescope.[37]
Improvements in the size and quality of the telescope led to further discoveries. The English astronomer John Flamsteed catalogued over 3000 stars.[38] More extensive star catalogues were produced by Nicolas Louis de Lacaille. The astronomer William Herschel made a detailed catalog of nebulosity and clusters, and in 1781 discovered the planet Uranus, the first new planet found.[39]
During the 18–19th centuries, the study of the three-body problem by Leonhard Euler, Alexis Claude Clairaut, and Jean le Rond d'Alembert led to more accurate predictions about the motions of the Moon and planets. This work was further refined by Joseph-Louis Lagrange and Pierre Simon Laplace, allowing the masses of the planets and moons to be estimated from their perturbations.[40]
Significant advances in astronomy came about with the introduction of new technology, including the spectroscope and photography. Joseph von Fraunhofer discovered about 600 bands in the spectrum of the Sun in 1814–15, which, in 1859, Gustav Kirchhoff ascribed to the presence of different elements. Stars were proven to be similar to the Earth's own Sun, but with a wide range of temperatures, masses, and sizes.[27]
The existence of the Earth's galaxy, the Milky Way, as its own group of stars was only proved in the 20th century, along with the existence of "external" galaxies. The observed recession of those galaxies led to the discovery of the expansion of the Universe.[41] Theoretical astronomy led to speculations on the existence of objects such as black holes and neutron stars, which have been used to explain such observed phenomena as quasars, pulsars, blazars, and radio galaxies. Physical cosmology made huge advances during the 20th century. In the early 1900s the model of the Big Bang theory was formulated, heavily evidenced by cosmic microwave background radiation, Hubble's law, and the cosmological abundances of elements. Space telescopes have enabled measurements in parts of the electromagnetic spectrum normally blocked or blurred by the atmosphere.[citation needed] In February 2016, it was revealed that the LIGO project had detected evidence of gravitational waves in the previous September.[42][43]
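Hubble's law, mentioned above as evidence for the expanding Universe, relates a galaxy's recession velocity to its distance as v = H₀d. A minimal sketch using a representative present-day value of H₀ ≈ 70 km/s per megaparsec (the round-number constant and function name are illustrative choices, not from the text):

```python
H0 = 70.0  # Hubble constant, km/s per megaparsec (representative round value)

def recession_velocity_km_s(distance_mpc: float) -> float:
    """Recession velocity implied by Hubble's law, v = H0 * d."""
    return H0 * distance_mpc

# A galaxy 100 Mpc away recedes at about 7000 km/s
print(recession_velocity_km_s(100))
```

The linearity of this relation across distant galaxies is what made the expansion interpretation compelling: every observer sees every other galaxy receding at a speed proportional to its distance.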
The main source of information about celestial bodies and other objects is visible light, or more generally electromagnetic radiation.[44] Observational astronomy may be categorized according to the corresponding region of the electromagnetic spectrum on which the observations are made. Some parts of the spectrum can be observed from the Earth's surface, while other parts are only observable from either high altitudes or outside the Earth's atmosphere. Specific information on these subfields is given below.
Radio astronomy uses radiation with wavelengths greater than approximately one millimeter, outside the visible range.[45] Radio astronomy is different from most other forms of observational astronomy in that the observed radio waves can be treated as waves rather than as discrete photons. Hence, it is relatively easier to measure both the amplitude and phase of radio waves, whereas this is not as easily done at shorter wavelengths.[45]
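Wavelength and frequency are tied by c = λν, so the ~1 mm boundary quoted above corresponds to a frequency of roughly 300 GHz. A sketch of the conversion (the function name is illustrative; the 21 cm hydrogen line is a well-known radio-astronomy example):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_to_frequency_ghz(wavelength_m: float) -> float:
    """Frequency in GHz for a given free-space wavelength in metres."""
    return C / wavelength_m / 1e9

print(round(wavelength_to_frequency_ghz(1e-3)))     # 1 mm boundary -> ~300 GHz
print(round(wavelength_to_frequency_ghz(0.21), 1))  # 21 cm hydrogen line -> ~1.4 GHz
```

Everything below that ~300 GHz boundary is the domain of radio astronomy; the famous 21 cm line sits comfortably inside it at about 1.4 GHz.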
Although some radio waves are emitted directly by astronomical objects, a product of thermal emission, most of the radio emission that is observed is the result of synchrotron radiation, which is produced when electrons orbit magnetic fields.[45] Additionally, a number of spectral lines produced by interstellar gas, notably the hydrogen spectral line at 21 cm, are observable at radio wavelengths.[11][45]
A wide variety of other objects are observable at radio wavelengths, including supernovae, interstellar gas, pulsars, and active galactic nuclei.[11][45]
Infrared astronomy is founded on the detection and analysis of infrared radiation, wavelengths longer than red light and outside the range of our vision. The infrared spectrum is useful for studying objects that are too cold to radiate visible light, such as planets, circumstellar disks or nebulae whose light is blocked by dust. The longer wavelengths of infrared can penetrate clouds of dust that block visible light, allowing the observation of young stars embedded in molecular clouds and the cores of galaxies. Observations from the Wide-field Infrared Survey Explorer (WISE) have been particularly effective at unveiling numerous Galactic protostars and their host star clusters.[47][48]
With the exception of infrared wavelengths close to visible light, such radiation is heavily absorbed by the atmosphere, or masked, as the atmosphere itself produces significant infrared emission. Consequently, infrared observatories have to be located in high, dry places on Earth or in space.[49] Some molecules radiate strongly in the infrared. This allows the study of the chemistry of space; more specifically it can detect water in comets.[50]
Historically, optical astronomy, also called visible light astronomy, is the oldest form of astronomy.[51] Images of observations were originally drawn by hand. In the late 19th century and most of the 20th century, images were made using photographic equipment. Modern images are made using digital detectors, particularly using charge-coupled devices (CCDs) and recorded on modern medium. Although visible light itself extends from approximately 4000 Å to 7000 Å (400 nm to 700 nm),[51] that same equipment can be used to observe some near-ultraviolet and near-infrared radiation.
Ultraviolet astronomy employs ultraviolet wavelengths between approximately 100 and 3200 Å (10 to 320 nm).[45] Light at those wavelengths is absorbed by the Earth's atmosphere, requiring observations at these wavelengths to be performed from the upper atmosphere or from space. Ultraviolet astronomy is best suited to the study of thermal radiation and spectral emission lines from hot blue stars (OB stars) that are very bright in this wave band. This includes the blue stars in other galaxies, which have been the targets of several ultraviolet surveys. Other objects commonly observed in ultraviolet light include planetary nebulae, supernova remnants, and active galactic nuclei.[45] However, as ultraviolet light is easily absorbed by interstellar dust, an adjustment of ultraviolet measurements is necessary.[45]
X-ray astronomy uses X-ray wavelengths. Typically, X-ray radiation is produced by synchrotron emission (the result of electrons orbiting magnetic field lines), thermal emission from thin gases above 10⁷ (10 million) kelvins, and thermal emission from thick gases above 10⁷ kelvins.[45] Since X-rays are absorbed by the Earth's atmosphere, all X-ray observations must be performed from high-altitude balloons, rockets, or X-ray astronomy satellites. Notable X-ray sources include X-ray binaries, pulsars, supernova remnants, elliptical galaxies, clusters of galaxies, and active galactic nuclei.[45]
Gamma ray astronomy observes astronomical objects at the shortest wavelengths of the electromagnetic spectrum. Gamma rays may be observed directly by satellites such as the Compton Gamma Ray Observatory or by specialized telescopes called atmospheric Cherenkov telescopes.[45] The Cherenkov telescopes do not detect the gamma rays directly but instead detect the flashes of visible light produced when gamma rays are absorbed by the Earth's atmosphere.[52]
Most gamma-ray emitting sources are actually gamma-ray bursts, objects which only produce gamma radiation for a few milliseconds to thousands of seconds before fading away. Only 10% of gamma-ray sources are non-transient sources. These steady gamma-ray emitters include pulsars, neutron stars, and black hole candidates such as active galactic nuclei.[45]
In addition to electromagnetic radiation, a few other events originating from great distances may be observed from the Earth.
In neutrino astronomy, astronomers use heavily shielded underground facilities such as SAGE, GALLEX, and Kamioka II/III for the detection of neutrinos. The vast majority of the neutrinos streaming through the Earth originate from the Sun, but 24 neutrinos were also detected from supernova 1987A.[45] Cosmic rays, which consist of very high energy particles (atomic nuclei) that can decay or be absorbed when they enter the Earth's atmosphere, result in a cascade of secondary particles which can be detected by current observatories.[53] Some future neutrino detectors may also be sensitive to the particles produced when cosmic rays hit the Earth's atmosphere.[45]
Gravitational-wave astronomy is an emerging field of astronomy that employs gravitational-wave detectors to collect observational data about distant massive objects. A few observatories have been constructed, such as the Laser Interferometer Gravitational-Wave Observatory (LIGO). LIGO made its first detection on 14 September 2015, observing gravitational waves from a binary black hole.[54] A second gravitational wave was detected on 26 December 2015, and additional observations are expected to continue, although detecting gravitational waves requires extremely sensitive instruments.[55][56]
The combination of observations made using electromagnetic radiation, neutrinos or gravitational waves and other complementary information, is known as multi-messenger astronomy.[57][58]
One of the oldest fields in astronomy, and in all of science, is the measurement of the positions of celestial objects. Historically, accurate knowledge of the positions of the Sun, Moon, planets and stars has been essential in celestial navigation (the use of celestial objects to guide navigation) and in the making of calendars.
Careful measurement of the positions of the planets has led to a solid understanding of gravitational perturbations, and an ability to determine past and future positions of the planets with great accuracy, a field known as celestial mechanics. More recently, the tracking of near-Earth objects allows for predictions of close encounters or potential collisions of the Earth with those objects.[59]
The measurement of stellar parallax of nearby stars provides a fundamental baseline in the cosmic distance ladder that is used to measure the scale of the Universe. Parallax measurements of nearby stars provide an absolute baseline for the properties of more distant stars, as their properties can be compared. Measurements of the radial velocity and proper motion of stars allow astronomers to plot the movement of these systems through the Milky Way galaxy. Astrometric results are the basis used to calculate the distribution of speculated dark matter in the galaxy.[60]
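The parallax relation above can be made concrete: a star's distance in parsecs is the reciprocal of its annual parallax angle in arcseconds. A minimal sketch (the parallax value for Proxima Centauri, roughly 0.768 arcseconds, is taken from published measurements and used only as an illustration):

```python
def parallax_distance_pc(parallax_arcsec: float) -> float:
    """Distance in parsecs from an annual parallax angle in arcseconds (d = 1/p)."""
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    return 1.0 / parallax_arcsec

PC_IN_LIGHT_YEARS = 3.2616  # one parsec expressed in light-years

# Proxima Centauri, the nearest star beyond the Sun: parallax ~0.768 arcseconds
d_pc = parallax_distance_pc(0.768)
print(f"{d_pc:.2f} pc = {d_pc * PC_IN_LIGHT_YEARS:.2f} light-years")
```

This reproduces the familiar figure of roughly 4.2 light-years to the nearest star.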
During the 1990s, the measurement of the stellar wobble of nearby stars was used to detect large extrasolar planets orbiting those stars.[61]
Theoretical astronomers use several tools including analytical models and computational numerical simulations; each has its particular advantages. Analytical models of a process are better for giving broader insight into the heart of what is going on. Numerical models reveal the existence of phenomena and effects otherwise unobserved.[62][63]
Theorists in astronomy endeavor to create theoretical models and from the results predict observational consequences of those models. The observation of a phenomenon predicted by a model allows astronomers to select between several alternate or conflicting models as the one best able to describe the phenomena.
Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency between the data and model's results, the general tendency is to try to make minimal modifications to the model so that it produces results that fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model.
Phenomena modeled by theoretical astronomers include: stellar dynamics and evolution; galaxy formation; large-scale distribution of matter in the Universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics. Astrophysical relativity serves as a tool to gauge the properties of large scale structures for which gravitation plays a significant role in physical phenomena investigated and as the basis for black hole (astro)physics and the study of gravitational waves.
Some widely accepted and studied theories and models in astronomy, now included in the Lambda-CDM model, are the Big Bang, dark matter, and fundamental theories of physics.
A few examples of this process:
Along with cosmic inflation, dark matter and dark energy are the current leading topics in astronomy,[64] as their discovery and controversy originated during the study of galaxies.
Astrophysics is the branch of astronomy that employs the principles of physics and chemistry "to ascertain the nature of the astronomical objects, rather than their positions or motions in space".[65][66] Among the objects studied are the Sun, other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background.[67][68] Their emissions are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists typically apply many disciplines of physics, including mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.
In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to determine the properties of dark matter, dark energy, and black holes; whether or not time travel is possible, wormholes can form, or the multiverse exists; and the origin and ultimate fate of the universe.[67] Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics.
Astrochemistry is the study of the abundance and reactions of molecules in the Universe, and their interaction with radiation.[69] The discipline is an overlap of astronomy and chemistry. The word "astrochemistry" may be applied to both the Solar System and the interstellar medium. The study of the abundance of elements and isotope ratios in Solar System objects, such as meteorites, is also called cosmochemistry, while the study of interstellar atoms and molecules and their interaction with radiation is sometimes called molecular astrophysics. The formation, atomic and chemical composition, evolution and fate of molecular gas clouds is of special interest, because it is from these clouds that solar systems form.
Studies in this field contribute to the understanding of the formation of the Solar System, Earth's origin and geology, abiogenesis, and the origin of climate and oceans.
Astrobiology is an interdisciplinary scientific field concerned with the origins, early evolution, distribution, and future of life in the universe. Astrobiology considers the question of whether extraterrestrial life exists, and how humans can detect it if it does.[70] The term exobiology is similar.[71]
Astrobiology makes use of molecular biology, biophysics, biochemistry, chemistry, astronomy, physical cosmology, exoplanetology and geology to investigate the possibility of life on other worlds and help recognize biospheres that might be different from that on Earth.[72] The origin and early evolution of life is an inseparable part of the discipline of astrobiology.[73] Astrobiology concerns itself with interpretation of existing scientific data, and although speculation is entertained to give context, astrobiology concerns itself primarily with hypotheses that fit firmly into existing scientific theories.
This interdisciplinary field encompasses research on the origin of planetary systems, origins of organic compounds in space, rock-water-carbon interactions, abiogenesis on Earth, planetary habitability, research on biosignatures for life detection, and studies on the potential for life to adapt to challenges on Earth and in outer space.[74][75][76]
Cosmology (from the Greek κόσμος (kosmos) "world, universe" and λόγος (logos) "word, study" or literally "logic") could be considered the study of the Universe as a whole.
Observations of the large-scale structure of the Universe, a branch known as physical cosmology, have provided a deep understanding of the formation and evolution of the cosmos. Fundamental to modern cosmology is the well-accepted theory of the Big Bang, wherein our Universe began at a single point in time, and thereafter expanded over the course of 13.8 billion years[77] to its present condition.[78] The concept of the Big Bang can be traced back to the discovery of the microwave background radiation in 1965.[78]
In the course of this expansion, the Universe underwent several evolutionary stages. In the very early moments, it is theorized that the Universe experienced a very rapid cosmic inflation, which homogenized the starting conditions. Thereafter, nucleosynthesis produced the elemental abundance of the early Universe.[78] (See also nucleocosmochronology.)
When the first neutral atoms formed from a sea of primordial ions, space became transparent to radiation, releasing the energy viewed today as the microwave background radiation. The expanding Universe then underwent a Dark Age due to the lack of stellar energy sources.[79]
A hierarchical structure of matter began to form from minute variations in the mass density of space. Matter accumulated in the densest regions, forming clouds of gas and the earliest stars, the Population III stars. These massive stars triggered the reionization process and are believed to have created many of the heavy elements in the early Universe, which, through nuclear decay, create lighter elements, allowing the cycle of nucleosynthesis to continue longer.[80]
Gravitational aggregations clustered into filaments, leaving voids in the gaps. Gradually, organizations of gas and dust merged to form the first primitive galaxies. Over time, these pulled in more matter, and were often organized into groups and clusters of galaxies, then into larger-scale superclusters.[81]
Various fields of physics are crucial to studying the universe. Interdisciplinary studies involve the fields of quantum mechanics, particle physics, plasma physics, condensed matter physics, statistical mechanics, optics, and nuclear physics.
Fundamental to the structure of the Universe is the existence of dark matter and dark energy. These are now thought to be its dominant components, forming 96% of the mass of the Universe. For this reason, much effort is expended in trying to understand the physics of these components.[82]
The study of objects outside our galaxy is a branch of astronomy concerned with the formation and evolution of galaxies, their morphology (description) and classification, the observation of active galaxies, and, at a larger scale, the groups and clusters of galaxies. The latter is important for the understanding of the large-scale structure of the cosmos.
Most galaxies are organized into distinct shapes that allow for classification schemes. They are commonly divided into spiral, elliptical, and irregular galaxies.[83]
As the name suggests, an elliptical galaxy has the cross-sectional shape of an ellipse. The stars move along random orbits with no preferred direction. These galaxies contain little or no interstellar dust, few star-forming regions, and older stars. Elliptical galaxies are more commonly found at the core of galactic clusters, and may have been formed through mergers of large galaxies.
A spiral galaxy is organized into a flat, rotating disk, usually with a prominent bulge or bar at the center, and trailing bright arms that spiral outward. The arms are dusty regions of star formation within which massive young stars produce a blue tint. Spiral galaxies are typically surrounded by a halo of older stars. Both the Milky Way and one of our nearest galaxy neighbors, the Andromeda Galaxy, are spiral galaxies.
Irregular galaxies are chaotic in appearance, and are neither spiral nor elliptical. About a quarter of all galaxies are irregular, and the peculiar shapes of such galaxies may be the result of gravitational interaction.
An active galaxy is a formation that emits a significant amount of its energy from a source other than its stars, dust and gas. It is powered by a compact region at the core, thought to be a super-massive black hole that is emitting radiation from in-falling material.
A radio galaxy is an active galaxy that is very luminous in the radio portion of the spectrum, and is emitting immense plumes or lobes of gas. Active galaxies that emit higher-frequency, high-energy radiation include Seyfert galaxies, quasars, and blazars. Quasars are believed to be the most consistently luminous objects in the known universe.[84]
The large-scale structure of the cosmos is represented by groups and clusters of galaxies. This structure is organized into a hierarchy of groupings, with the largest being the superclusters. The collective matter is formed into filaments and walls, leaving large voids between.[85]
The Solar System orbits within the Milky Way, a barred spiral galaxy that is a prominent member of the Local Group of galaxies. It is a rotating mass of gas, dust, stars and other objects, held together by mutual gravitational attraction. As the Earth is located within the dusty outer arms, there are large portions of the Milky Way that are obscured from view.
In the center of the Milky Way is the core, a bar-shaped bulge with what is believed to be a supermassive black hole at its center. This is surrounded by four primary arms that spiral from the core. This is a region of active star formation that contains many younger, population I stars. The disk is surrounded by a spheroid halo of older, population II stars, as well as relatively dense concentrations of stars known as globular clusters.[86]
Between the stars lies the interstellar medium, a region of sparse matter. In the densest regions, molecular clouds of molecular hydrogen and other elements create star-forming regions. These begin as a compact pre-stellar core or dark nebulae, which concentrate and collapse (in volumes determined by the Jeans length) to form compact protostars.[87]
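The Jeans length mentioned above sets the scale at which a cloud's thermal pressure can no longer resist gravitational collapse. A hedged sketch using one common form, λ_J = √(15 k_B T / (4π G μ ρ)), where μ is the mass per particle and ρ the mass density; the temperature and density below are illustrative values for a cold molecular-cloud core, not figures from the text:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_H = 1.6735e-27     # mass of a hydrogen atom, kg

def jeans_length_m(temperature_k: float, number_density_m3: float,
                   mean_particle_mass_kg: float) -> float:
    """Jeans length in metres: lambda_J = sqrt(15 k_B T / (4 pi G mu rho))."""
    rho = number_density_m3 * mean_particle_mass_kg  # mass density, kg/m^3
    return math.sqrt(15 * K_B * temperature_k /
                     (4 * math.pi * G * mean_particle_mass_kg * rho))

# Illustrative cold core: T ~ 10 K, n ~ 1e10 molecules/m^3 (1e4 cm^-3), H2 gas
lam = jeans_length_m(10.0, 1e10, 2 * M_H)
print(f"Jeans length ~ {lam / 3.086e16:.2f} pc")  # 3.086e16 m per parsec
```

For these assumed conditions the collapse scale comes out at a few tenths of a parsec, consistent with the sizes of observed pre-stellar cores.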
As the more massive stars appear, they transform the cloud into an H II region (ionized atomic hydrogen) of glowing gas and plasma. The stellar wind and supernova explosions from these stars eventually cause the cloud to disperse, often leaving behind one or more young open clusters of stars. These clusters gradually disperse, and the stars join the population of the Milky Way.[88]
Kinematic studies of matter in the Milky Way and other galaxies have demonstrated that there is more mass than can be accounted for by visible matter. A dark matter halo appears to dominate the mass, although the nature of this dark matter remains undetermined.[89]
The study of stars and stellar evolution is fundamental to our understanding of the Universe. The astrophysics of stars has been determined through observation, theoretical understanding, and computer simulations of the interior.[90] Star formation occurs in dense regions of dust and gas, known as giant molecular clouds. When destabilized, cloud fragments can collapse under the influence of gravity to form a protostar. A sufficiently dense and hot core region will trigger nuclear fusion, thus creating a main-sequence star.[87]
Almost all elements heavier than hydrogen and helium were created inside the cores of stars.[90]
The characteristics of the resulting star depend primarily upon its starting mass. The more massive the star, the greater its luminosity, and the more rapidly it fuses its hydrogen fuel into helium in its core. Over time, this hydrogen fuel is completely converted into helium, and the star begins to evolve. The fusion of helium requires a higher core temperature. A star with a high enough core temperature will push its outer layers outward while increasing its core density. The resulting red giant formed by the expanding outer layers enjoys a brief life span, before the helium fuel in the core is in turn consumed. Very massive stars can also undergo a series of evolutionary phases, as they fuse increasingly heavier elements.[91]
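The mass–luminosity relationship behind this paragraph can be illustrated with a rough rule of thumb: main-sequence luminosity scales roughly as M^3.5, and lifetime roughly as fuel over burn rate (M/L), giving lifetime ≈ 10 Gyr × (M/M_☉)^−2.5. This is only an order-of-magnitude approximation, not a precise stellar model:

```python
def main_sequence_lifetime_gyr(mass_solar: float) -> float:
    """Rough main-sequence lifetime from the scaling t ~ 10 Gyr * M^-2.5.

    Assumes L ~ M^3.5 and t ~ M / L; useful only as an order-of-magnitude
    estimate for stars of roughly 0.5-20 solar masses.
    """
    return 10.0 * mass_solar ** -2.5

for m in (0.5, 1.0, 2.0, 10.0):
    print(f"{m:>4} M_sun -> ~{main_sequence_lifetime_gyr(m):.3g} Gyr")
```

The steep exponent is why a 10-solar-mass star exhausts its core hydrogen in tens of millions of years while low-mass dwarfs outlast the current age of the Universe.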
The final fate of the star depends on its mass, with stars of mass greater than about eight times the Sun becoming core collapse supernovae;[92] while smaller stars blow off their outer layers and leave behind the inert core in the form of a white dwarf. The ejection of the outer layers forms a planetary nebula.[93] The remnant of a supernova is a dense neutron star, or, if the stellar mass was at least three times that of the Sun, a black hole.[94] Closely orbiting binary stars can follow more complex evolutionary paths, such as mass transfer onto a white dwarf companion that can potentially cause a supernova.[95] Planetary nebulae and supernovae distribute the "metals" produced in the star by fusion to the interstellar medium; without them, all new stars (and their planetary systems) would be formed from hydrogen and helium alone.[96]
At a distance of about eight light-minutes, the most frequently studied star is the Sun, a typical main-sequence dwarf star of stellar class G2 V, and about 4.6 billion years (Gyr) old. The Sun is not considered a variable star, but it does undergo periodic changes in activity known as the sunspot cycle. This is an 11-year oscillation in sunspot number. Sunspots are regions of lower-than-average temperatures that are associated with intense magnetic activity.[97]
The Sun has steadily increased in luminosity by 40% since it first became a main-sequence star. The Sun has also undergone periodic changes in luminosity that can have a significant impact on the Earth.[98] The Maunder minimum, for example, is believed to have caused the Little Ice Age phenomenon during the Middle Ages.[99]
The visible outer surface of the Sun is called the photosphere. Above this layer is a thin region known as the chromosphere. This is surrounded by a transition region of rapidly increasing temperatures, and finally by the super-heated corona.
At the center of the Sun is the core region, a volume of sufficient temperature and pressure for nuclear fusion to occur. Above the core is the radiation zone, where the plasma conveys the energy flux by means of radiation. Above that is the convection zone where the gas material transports energy primarily through physical displacement of the gas known as convection. It is believed that the movement of mass within the convection zone creates the magnetic activity that generates sunspots.[97]
A solar wind of plasma particles constantly streams outward from the Sun until, at the outermost limit of the Solar System, it reaches the heliopause. As the solar wind passes the Earth, it interacts with the Earth's magnetic field (magnetosphere), which deflects most of the solar wind but traps some particles, creating the Van Allen radiation belts that envelop the Earth. The aurorae are created when solar wind particles are guided by the magnetic flux lines into the Earth's polar regions, where the lines then descend into the atmosphere.[100]
Planetary science is the study of the assemblage of planets, moons, dwarf planets, comets, asteroids, and other bodies orbiting the Sun, as well as extrasolar planets. The Solar System has been relatively well-studied, initially through telescopes and then later by spacecraft. This has provided a good overall understanding of the formation and evolution of the Sun's planetary system, although many new discoveries are still being made.[101]
The Solar System is subdivided into the inner planets, the asteroid belt, and the outer planets. The inner terrestrial planets consist of Mercury, Venus, Earth, and Mars. The outer gas giant planets are Jupiter, Saturn, Uranus, and Neptune.[102] Beyond Neptune lies the Kuiper belt, and finally the Oort Cloud, which may extend as far as a light-year.
The planets were formed 4.6 billion years ago in the protoplanetary disk that surrounded the early Sun. Through a process that included gravitational attraction, collision, and accretion, the disk formed clumps of matter that, with time, became protoplanets. The radiation pressure of the solar wind then expelled most of the unaccreted matter, and only those planets with sufficient mass retained their gaseous atmosphere. The planets continued to sweep up, or eject, the remaining matter during a period of intense bombardment, evidenced by the many impact craters on the Moon. During this period, some of the protoplanets may have collided and one such collision may have formed the Moon.[103]
Once a planet reaches sufficient mass, the materials of different densities segregate within, during planetary differentiation. This process can form a stony or metallic core, surrounded by a mantle and an outer crust. The core may include solid and liquid regions, and some planetary cores generate their own magnetic field, which can protect their atmospheres from solar wind stripping.[104]
A planet or moon's interior heat is produced from the collisions that created the body, by the decay of radioactive materials (e.g. uranium, thorium, and ²⁶Al), or by tidal heating caused by interactions with other bodies. Some planets and moons accumulate enough heat to drive geologic processes such as volcanism and tectonics. Those that accumulate or retain an atmosphere can also undergo surface erosion from wind or water. Smaller bodies, without tidal heating, cool more quickly, and their geological activity ceases with the exception of impact cratering.[105]
Astronomy and astrophysics have developed significant interdisciplinary links with other major scientific fields. Archaeoastronomy is the study of ancient or traditional astronomies in their cultural context, utilizing archaeological and anthropological evidence. Astrobiology is the study of the advent and evolution of biological systems in the Universe, with particular emphasis on the possibility of non-terrestrial life. Astrostatistics is the application of statistics to the analysis of the vast amounts of observational astrophysical data.
The study of chemicals found in space, including their formation, interaction and destruction, is called astrochemistry. These substances are usually found in molecular clouds, although they may also appear in low-temperature stars, brown dwarfs and planets. Cosmochemistry is the study of the chemicals found within the Solar System, including the origins of the elements and variations in the isotope ratios. Both of these fields represent an overlap of the disciplines of astronomy and chemistry. Finally, in "forensic astronomy", methods from astronomy have been used to solve problems of law and history.
Astronomy is one of the sciences to which amateurs can contribute the most.[106]
Collectively, amateur astronomers observe a variety of celestial objects and phenomena sometimes with equipment that they build themselves. Common targets of amateur astronomers include the Sun, the Moon, planets, stars, comets, meteor showers, and a variety of deep-sky objects such as star clusters, galaxies, and nebulae. Astronomy clubs are located throughout the world and many have programs to help their members set up and complete observational programs including those to observe all the objects in the Messier (110 objects) or Herschel 400 catalogues of points of interest in the night sky. One branch of amateur astronomy, amateur astrophotography, involves the taking of photos of the night sky. Many amateurs like to specialize in the observation of particular objects, types of objects, or types of events which interest them.[107][108]
Most amateurs work at visible wavelengths, but a small minority experiment with wavelengths outside the visible spectrum. This includes the use of infrared filters on conventional telescopes, and also the use of radio telescopes. The pioneer of amateur radio astronomy was Karl Jansky, who started observing the sky at radio wavelengths in the 1930s. A number of amateur astronomers use either homemade telescopes or use radio telescopes which were originally built for astronomy research but which are now available to amateurs (e.g. the One-Mile Telescope).[109][110]
Amateur astronomers continue to make scientific contributions to the field of astronomy and it is one of the few scientific disciplines where amateurs can still make significant contributions. Amateurs can make occultation measurements that are used to refine the orbits of minor planets. They can also discover comets, and perform regular observations of variable stars. Improvements in digital technology have allowed amateurs to make impressive advances in the field of astrophotography.[111][112][113]
Although the scientific discipline of astronomy has made tremendous strides in understanding the nature of the Universe and its contents, there remain some important unanswered questions. Answers to these may require the construction of new ground- and space-based instruments, and possibly new developments in theoretical and experimental physics.
Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Virgo Supercluster → Laniakea Supercluster → Observable universe → Universe. Each arrow (→) may be read as "within" or "part of".
en/4220.html.txt
ADDED
@@ -0,0 +1,117 @@
The Arctic Ocean is the smallest and shallowest of the world's five major oceans.[1] It is also known as the coldest of all the oceans. The International Hydrographic Organization (IHO) recognizes it as an ocean, although some oceanographers call it the Arctic Sea. It is sometimes classified as an estuary of the Atlantic Ocean,[2][3] and it is also seen as the northernmost part of the all-encompassing World Ocean.
Located mostly in the Arctic north polar region in the middle of the Northern Hemisphere, the Arctic Ocean is almost completely surrounded by Eurasia and North America. It is partly covered by sea ice throughout the year and almost completely in winter. The Arctic Ocean's surface temperature and salinity vary seasonally as the ice cover melts and freezes;[4] its salinity is the lowest on average of the five major oceans, due to low evaporation, heavy fresh water inflow from rivers and streams, and limited connection and outflow to surrounding oceanic waters with higher salinities. The summer shrinking of the ice has been quoted at 50%.[1] The US National Snow and Ice Data Center (NSIDC) uses satellite data to provide a daily record of Arctic sea ice cover and the rate of melting compared to an average period and specific past years.
Human habitation in the North American polar region goes back at least 50,000–17,000 years ago, during the Wisconsin glaciation. At this time, falling sea levels allowed people to move across the Bering land bridge that joined Siberia to northwestern North America (Alaska), leading to the Settlement of the Americas.[5]
Paleo-Eskimo groups included the Pre-Dorset (c. 3200–850 BC); the Saqqaq culture of Greenland (2500–800 BC); the Independence I and Independence II cultures of northeastern Canada and Greenland (c. 2400–1800 BC and c. 800–1 BC); the Groswater of Labrador and Nunavik, and the Dorset culture (500 BC to AD 1500), which spread across Arctic North America. The Dorset were the last major Paleo-Eskimo culture in the Arctic before the migration east from present-day Alaska of the Thule, the ancestors of the modern Inuit.[6]
The Thule Tradition lasted from about 200 BC to AD 1600 around the Bering Strait, the Thule people being the prehistoric ancestors of the Inuit who now live in Northern Labrador.[7]
For much of European history, the north polar regions remained largely unexplored and their geography conjectural. Pytheas of Massilia recorded an account of a journey northward in 325 BC, to a land he called "Eschate Thule", where the Sun only set for three hours each day and the water was replaced by a congealed substance "on which one can neither walk nor sail". He was probably describing loose sea ice known today as "growlers" or "bergy bits"; his "Thule" was probably Norway, though the Faroe Islands or Shetland have also been suggested.[8]
Early cartographers were unsure whether to draw the region around the North Pole as land (as in Johannes Ruysch's map of 1507, or Gerardus Mercator's map of 1595) or water (as with Martin Waldseemüller's world map of 1507). The fervent desire of European merchants for a northern passage, the Northern Sea Route or the Northwest Passage, to "Cathay" (China) caused water to win out, and by 1723 mapmakers such as Johann Homann featured an extensive "Oceanus Septentrionalis" at the northern edge of their charts.
The few expeditions to penetrate much beyond the Arctic Circle in this era added only small islands, such as Novaya Zemlya (11th century) and Spitzbergen (1596), though since these were often surrounded by pack-ice, their northern limits were not so clear. The makers of navigational charts, more conservative than some of the more fanciful cartographers, tended to leave the region blank, with only fragments of known coastline sketched in.
This lack of knowledge of what lay north of the shifting barrier of ice gave rise to a number of conjectures. In England and other European nations, the myth of an "Open Polar Sea" was persistent. John Barrow, longtime Second Secretary of the British Admiralty, promoted exploration of the region from 1818 to 1845 in search of this.
In the United States in the 1850s and 1860s, the explorers Elisha Kane and Isaac Israel Hayes both claimed to have seen part of this elusive body of water. Even quite late in the century, the eminent authority Matthew Fontaine Maury included a description of the Open Polar Sea in his textbook The Physical Geography of the Sea (1883). Nevertheless, as all the explorers who travelled closer and closer to the pole reported, the polar ice cap is quite thick, and persists year-round.
Fridtjof Nansen was the first to make a nautical crossing of the Arctic Ocean, in 1896. The first surface crossing of the ocean was led by Wally Herbert in 1969, in a dog sled expedition from Alaska to Svalbard, with air support.[9] The first nautical transit of the north pole was made in 1958 by the submarine USS Nautilus, and the first surface nautical transit occurred in 1977 by the icebreaker NS Arktika.
Since 1937, Soviet and Russian manned drifting ice stations have extensively monitored the Arctic Ocean. Scientific settlements were established on the drift ice and carried thousands of kilometers by ice floes.[10]
In World War II, the European region of the Arctic Ocean was heavily contested: the Allied commitment to resupply the Soviet Union via its northern ports was opposed by German naval and air forces.
Since 1954 commercial airlines have flown over the Arctic Ocean (see Polar route).
The Arctic Ocean occupies a roughly circular basin and covers an area of about 14,056,000 km2 (5,427,000 sq mi), almost the size of Antarctica.[11][12] The coastline is 45,390 km (28,200 mi) long.[11][13] It is the only ocean smaller than Russia, which has a land area of 16,377,742 km2 (6,323,482 sq mi). It is surrounded by the land masses of Eurasia, North America, Greenland, and by several islands.
It is generally taken to include Baffin Bay, Barents Sea, Beaufort Sea, Chukchi Sea, East Siberian Sea, Greenland Sea, Hudson Bay, Hudson Strait, Kara Sea, Laptev Sea, White Sea and other tributary bodies of water. It is connected to the Pacific Ocean by the Bering Strait and to the Atlantic Ocean through the Greenland Sea and Labrador Sea.[1]
Countries bordering the Arctic Ocean are: Russia, Norway, Iceland, Greenland (territory of the Kingdom of Denmark), Canada and the United States.
There are several ports and harbors around the Arctic Ocean.[14]
In Alaska, the main ports are Barrow (71°17′44″N 156°45′59″W) and Prudhoe Bay (70°19′32″N 148°42′41″W).
In Canada, ships may anchor at Churchill (Port of Churchill, 58°46′28″N 094°11′37″W) in Manitoba, Nanisivik (Nanisivik Naval Facility, 73°04′08″N 084°32′57″W) in Nunavut,[15] Tuktoyaktuk (69°26′34″N 133°01′52″W) or Inuvik (68°21′42″N 133°43′50″W) in the Northwest Territories.
In Greenland, the main port is at Nuuk (Nuuk Port and Harbour, 64°10′15″N 051°43′15″W).
In Norway, Kirkenes (69°43′37″N 030°02′44″E) and Vardø (70°22′14″N 031°06′27″E) are ports on the mainland. There is also Longyearbyen (78°13′12″N 15°39′00″E) on Svalbard, a Norwegian archipelago, next to the Fram Strait.
In Russia, major ports sorted by the different sea areas are:
The ocean's Arctic shelf comprises a number of continental shelves, including the Canadian Arctic shelf, underlying the Canadian Arctic Archipelago, and the Russian continental shelf, which is sometimes simply called the "Arctic Shelf" because it is greater in extent. The Russian continental shelf consists of three separate, smaller shelves: the Barents Shelf, Chukchi Sea Shelf and Siberian Shelf. Of these three, the Siberian Shelf is the largest such shelf in the world. The Siberian Shelf holds large oil and gas reserves, and the Chukchi shelf forms the border between Russia and the United States as stated in the USSR–USA Maritime Boundary Agreement. The whole area is subject to international territorial claims.
An underwater ridge, the Lomonosov Ridge, divides the deep sea North Polar Basin into two oceanic basins: the Eurasian Basin, which is between 4,000 and 4,500 m (13,100 and 14,800 ft) deep, and the Amerasian Basin (sometimes called the North American, or Hyperborean Basin), which is about 4,000 m (13,000 ft) deep. The bathymetry of the ocean bottom is marked by fault block ridges, abyssal plains, ocean deeps, and basins. The average depth of the Arctic Ocean is 1,038 m (3,406 ft).[16] The deepest point is Molloy Hole in the Fram Strait, at about 5,550 m (18,210 ft).[17]
The two major basins are further subdivided by ridges into the Canada Basin (between Alaska/Canada and the Alpha Ridge), Makarov Basin (between the Alpha and Lomonosov Ridges), Amundsen Basin (between the Lomonosov and Gakkel Ridges), and Nansen Basin (between the Gakkel Ridge and the continental shelf that includes Franz Josef Land).
The crystalline basement rocks of mountains around the Arctic Ocean were recrystallized or formed during the Ellesmerian orogeny, the regional phase of the larger Caledonian orogeny in the Paleozoic. Regional subsidence in the Jurassic and Triassic led to significant sediment deposition, creating many of the reservoir rocks for present-day oil and gas deposits. During the Cretaceous, the Canadian Basin opened, and tectonic activity due to the assembly of Alaska caused hydrocarbons to migrate toward what is now Prudhoe Bay. At the same time, sediments shed from the rising Canadian Rockies built out the large Mackenzie Delta.
The rifting apart of the supercontinent Pangea, beginning in the Triassic, opened the early Atlantic Ocean. Rifting then extended northward, opening the Arctic Ocean as mafic oceanic crust erupted from a branch of the Mid-Atlantic Ridge. The Amerasia Basin may have opened first, with the Chukchi Borderland moved to the northeast along transform faults. Additional spreading helped to create the "triple junction" of the Alpha–Mendeleev Ridge in the Late Cretaceous.
Throughout the Cenozoic, the subduction of the Pacific plate, the collision of India with Eurasia and the continued opening of the North Atlantic created new hydrocarbon traps. The seafloor began spreading from the Gakkel Ridge in the Paleocene and Eocene, causing the Lomonosov Ridge to move farther from land and subside.
Because of sea ice and remote conditions, the geology of the Arctic Ocean is still poorly explored. ACEX drilling shed some light on the Lomonosov Ridge, which appears to be continental crust separated from the Barents–Kara Shelf in the Paleocene and then starved of sediment. It may contain up to 10 billion barrels of oil. The Gakkel Ridge rift is also poorly understood and may extend into the Laptev Sea.[18][19]
In large parts of the Arctic Ocean, the top layer (about 50 m (160 ft)) has lower salinity and lower temperature than the rest. It remains relatively stable because the effect of salinity on density outweighs that of temperature. It is fed by the freshwater input of the large Siberian and Canadian rivers (Ob, Yenisei, Lena, Mackenzie), whose water essentially floats on the saltier, denser, deeper ocean water. Between this lower-salinity layer and the bulk of the ocean lies the so-called halocline, in which both salinity and temperature rise with increasing depth.
Because of its relative isolation from other oceans, the Arctic Ocean has a uniquely complex system of water flow. It resembles the Mediterranean Sea in some hydrological respects: its deep waters have only limited communication, through the Fram Strait, with the Atlantic Basin, "where the circulation is dominated by thermohaline forcing".[20] The Arctic Ocean has a total volume of 18.07 million km3, equal to about 1.3% of the World Ocean. Mean surface circulation is predominantly cyclonic on the Eurasian side and anticyclonic in the Canadian Basin.[21]
Water enters from both the Pacific and Atlantic Oceans and can be divided into three unique water masses. The deepest water mass is called Arctic Bottom Water and begins around 900 metres (3,000 feet) depth.[20] It is composed of the densest water in the World Ocean and has two main sources: Arctic shelf water and Greenland Sea Deep Water. Water in the shelf region that begins as inflow from the Pacific passes through the narrow Bering Strait at an average rate of 0.8 Sverdrups and reaches the Chukchi Sea.[22] During the winter, cold Alaskan winds blow over the Chukchi Sea, freezing the surface water and pushing this newly formed ice out to the Pacific. The speed of the ice drift is roughly 1–4 cm/s.[21] This process leaves dense, salty waters in the sea that sink over the continental shelf into the western Arctic Ocean and create a halocline.[23]
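As a rough unit check (illustrative, not part of the source text), the sverdrup used above is defined as one million cubic metres per second, so the quoted 0.8 Sv Bering Strait inflow can be converted into an annual volume:

```python
# Convert a volume transport in sverdrups (Sv) to cubic kilometres
# per year. By definition, 1 Sv = 1e6 cubic metres per second.
SV_IN_M3_PER_S = 1_000_000


def sv_to_annual_km3(sv: float) -> float:
    """Volume transported in one (Julian) year, in cubic kilometres."""
    seconds_per_year = 365.25 * 24 * 3600
    m3_per_year = sv * SV_IN_M3_PER_S * seconds_per_year
    return m3_per_year / 1e9  # 1 km^3 = 1e9 m^3


# The 0.8 Sv Bering Strait inflow works out to roughly 25,000 km^3/year.
print(round(sv_to_annual_km3(0.8)))
```

The same helper applied to the ~9 Sv Fram Strait outflow mentioned later shows why the Atlantic side dominates the exchange budget.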
This water is met by Greenland Sea Deep Water, which forms during the passage of winter storms. As temperatures cool dramatically in the winter, ice forms, and intense vertical convection allows the water to become dense enough to sink below the warm, saline water beneath it.[20] Arctic Bottom Water is critically important because of its outflow, which contributes to the formation of Atlantic Deep Water. The overturning of this water plays a key role in global circulation and the moderation of climate.
In the depth range of 150–900 metres (490–2,950 feet) is a water mass referred to as Atlantic Water. Inflow from the North Atlantic Current enters through the Fram Strait, cooling and sinking to form the deepest layer of the halocline, where it circles the Arctic Basin counter-clockwise. This is the highest volumetric inflow to the Arctic Ocean, equalling about 10 times that of the Pacific inflow, and it creates the Arctic Ocean Boundary Current.[22] It flows slowly, at about 0.02 m/s.[20] Atlantic Water has the same salinity as Arctic Bottom Water but is much warmer (up to 3 °C). In fact, this water mass is actually warmer than the surface water, and remains submerged only due to the role of salinity in density.[20] When water reaches the basin it is pushed by strong winds into a large circular current called the Beaufort Gyre. Water in the Beaufort Gyre is far less saline than that of the Chukchi Sea due to inflow from large Canadian and Siberian rivers.[23]
The final defined water mass in the Arctic Ocean is called Arctic Surface Water and is found from 150 to 200 metres (490 to 660 feet). The most important feature of this water mass is a section referred to as the sub-surface layer. It is a product of Atlantic water that enters through canyons and is subjected to intense mixing on the Siberian Shelf.[20] As it is entrained, it cools and acts as a heat shield for the surface layer. This insulation keeps the warm Atlantic Water from melting the surface ice. Additionally, this water forms the swiftest currents of the Arctic, with speeds of around 0.3–0.6 m/s.[20] Complementing the water from the canyons, some Pacific water that does not sink to the shelf region after passing through the Bering Strait also contributes to this water mass.
Waters originating in the Pacific and Atlantic both exit through the Fram Strait between Greenland and Svalbard, which is about 2,700 metres (8,900 feet) deep and 350 kilometres (220 miles) wide. This outflow is about 9 Sv.[22] The width of the Fram Strait is what allows for both inflow and outflow on the Atlantic side of the Arctic Ocean. Because of this, it is influenced by the Coriolis force, which concentrates outflow to the East Greenland Current on the western side and inflow to the Norwegian Current on the eastern side.[20] Pacific water also exits along the west coast of Greenland and through the Hudson Strait (1–2 Sv), providing nutrients to the Canadian Archipelago.[22]
As noted, the process of ice formation and movement is a key driver in Arctic Ocean circulation and the formation of water masses. With this dependence, the Arctic Ocean experiences variations due to seasonal changes in sea ice cover. Sea ice movement is the result of wind forcing, which is related to a number of meteorological conditions that the Arctic experiences throughout the year. For example, the Beaufort High—an extension of the Siberian High system—is a pressure system that drives the anticyclonic motion of the Beaufort Gyre.[21] During the summer, this area of high pressure is pushed out closer to its Siberian and Canadian sides. In addition, there is a sea level pressure (SLP) ridge over Greenland that drives strong northerly winds through the Fram Strait, facilitating ice export. In the summer, the SLP contrast is smaller, producing weaker winds. A final example of seasonal pressure system movement is the low pressure system that exists over the Nordic and Barents Seas. It is an extension of the Icelandic Low, which creates cyclonic ocean circulation in this area. The low shifts to center over the North Pole in the summer. These variations in the Arctic all contribute to ice drift reaching its weakest point during the summer months. There is also evidence that the drift is associated with the phase of the Arctic Oscillation and Atlantic Multidecadal Oscillation.[21]
Much of the Arctic Ocean is covered by sea ice that varies in extent and thickness seasonally. The mean extent of the ice has been decreasing since 1980 from the average winter value of 15,600,000 km2 (6,023,200 sq mi) at a rate of 3% per decade. The seasonal variations are about 7,000,000 km2 (2,702,700 sq mi) with the maximum in April and minimum in September. The sea ice is affected by wind and ocean currents, which can move and rotate very large areas of ice. Zones of compression also arise, where the ice piles up to form pack ice.[25][26][27]
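The quoted 3%-per-decade decline can be illustrated with a simple compounding calculation. This is only an arithmetic sketch of the stated rate, not a climate projection:

```python
# Illustrate a constant percentage decline per decade in mean winter
# sea-ice extent, starting from the 1980 average of 15,600,000 km^2.
def projected_extent(start_km2: float, decades: float, rate: float = 0.03) -> float:
    """Extent after `decades` decades of compounding decline at `rate`."""
    return start_km2 * (1 - rate) ** decades


# Four decades (1980 -> 2020) at 3% per decade:
print(round(projected_extent(15_600_000, 4)))  # about 13,800,000 km^2
```

Observed September minima have in fact fallen faster than this winter-average rate, which is why the seasonal swing of about 7,000,000 km2 matters more than the mean.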
Icebergs occasionally break away from northern Ellesmere Island, and icebergs are formed from glaciers in western Greenland and extreme northeastern Canada. Icebergs are not sea ice but may become embedded in the pack ice. Icebergs pose a hazard to ships; the sinking of the Titanic is one of the most famous examples. The ocean is virtually ice-locked from October to June, and the superstructures of ships are subject to icing from October to May.[14] Before the advent of modern icebreakers, ships sailing the Arctic Ocean risked being trapped or crushed by sea ice (although the Baychimo drifted through the Arctic Ocean untended for decades despite these hazards).
Under the influence of the Quaternary glaciation, the Arctic Ocean is contained in a polar climate characterized by persistent cold and relatively narrow annual temperature ranges. Winters are characterized by the polar night, extreme cold, frequent low-level temperature inversions, and stable weather conditions.[28] Cyclones are only common on the Atlantic side.[29] Summers are characterized by continuous daylight (midnight sun), and temperatures can rise above the melting point (0 °C (32 °F)). Cyclones are more frequent in summer and may bring rain or snow.[29] It is cloudy year-round, with mean cloud cover ranging from 60% in winter to over 80% in summer.[30]
The temperature of the surface of the Arctic Ocean is fairly constant, near the freezing point of seawater. Because the Arctic Ocean consists of saltwater, the temperature must reach −1.8 °C (28.8 °F) before freezing occurs.
The density of sea water, in contrast to fresh water, increases as it nears the freezing point and thus it tends to sink. It is generally necessary that the upper 100–150 m (330–490 ft) of ocean water cools to the freezing point for sea ice to form.[31] In the winter the relatively warm ocean water exerts a moderating influence, even when covered by ice. This is one reason why the Arctic does not experience the extreme temperatures seen on the Antarctic continent.
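The salinity dependence of the freezing point mentioned above can be sketched with a commonly used linear rule of thumb, T_f ≈ −0.054 × S (an approximation only; precise oceanographic formulas such as the UNESCO 1983 algorithms add higher-order and pressure terms):

```python
# Approximate freezing point of seawater from salinity, using the
# linear rule of thumb T_f ≈ -0.054 * S, with S in practical salinity
# units and T_f in degrees Celsius at surface pressure.
def freezing_point_c(salinity_psu: float) -> float:
    return -0.054 * salinity_psu


# Typical open-ocean salinity of 35 gives about -1.89 °C, close to the
# -1.8 °C figure quoted above; the fresher Arctic surface layer
# (salinity ~30) freezes at a slightly higher temperature.
print(round(freezing_point_c(35), 2))  # -1.89
print(round(freezing_point_c(30), 2))  # -1.62
```

This is one reason the low-salinity surface layer fed by river inflow freezes more readily than Atlantic-derived water at the same temperature.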
There is considerable seasonal variation in how much pack ice of the Arctic ice pack covers the Arctic Ocean. Much of the Arctic ice pack is also covered in snow for about 10 months of the year. The maximum snow cover is in March or April — about 20 to 50 cm (7.9 to 19.7 in) over the frozen ocean.
The climate of the Arctic region has varied significantly in the past. As recently as 55 million years ago, during the Paleocene–Eocene Thermal Maximum, the region reached an average annual temperature of 10–20 °C (50–68 °F).[32] The surface waters of the northernmost[33] Arctic Ocean warmed, seasonally at least, enough to support tropical lifeforms (the dinoflagellates Apectodinium augustum) requiring surface temperatures of over 22 °C (72 °F).[34]
Due to the pronounced seasonality of 2–6 months of midnight sun and polar night[35] in the Arctic Ocean, the primary production of photosynthesizing organisms such as ice algae and phytoplankton is limited to the spring and summer months (March/April to September[36]). Important consumers of primary producers in the central Arctic Ocean and the adjacent shelf seas include zooplankton, especially copepods (Calanus finmarchicus, Calanus glacialis, and Calanus hyperboreus[37]) and euphausiids,[38] as well as ice-associated fauna (e.g., amphipods[37]). These primary consumers form an important link between the primary producers and higher trophic levels. The composition of higher trophic levels in the Arctic Ocean varies with region (Atlantic side vs. Pacific side) and with the sea-ice cover. Secondary consumers in the Barents Sea, an Atlantic-influenced Arctic shelf sea, are mainly sub-Arctic species including herring, young cod, and capelin.[38] In ice-covered regions of the central Arctic Ocean, polar cod is a central predator of primary consumers. The apex predators in the Arctic Ocean, marine mammals such as seals, whales, and polar bears, prey upon fish.
Endangered marine species in the Arctic Ocean include walruses and whales. The area has a fragile ecosystem, and it is especially exposed to climate change, because it warms faster than the rest of the world. Lion's mane jellyfish are abundant in the waters of the Arctic, and the banded gunnel is the only species of gunnel that lives in the ocean.
Petroleum and natural gas fields, placer deposits, polymetallic nodules, sand and gravel aggregates, fish, seals and whales can all be found in abundance in the region.[14][27]
The political dead zone near the center of the sea is also the focus of a mounting dispute between the United States, Russia, Canada, Norway, and Denmark.[39] It is significant for the global energy market because it may hold 25% or more of the world's undiscovered oil and gas resources.[40]
The Arctic ice pack is thinning, and a seasonal hole in the ozone layer frequently occurs.[41] Reduction of the area of Arctic sea ice reduces the planet's average albedo, possibly resulting in global warming in a positive feedback mechanism.[27][42] Research shows that the Arctic may become ice-free in the summer for the first time in human history by 2040.[43][44] Estimates of when the Arctic was last ice-free vary: from 65 million years ago, when fossils indicate that plants grew there, to as recently as 5,500 years ago; ice and ocean cores point to the last warm period about 8,000 years ago, or to 125,000 years ago during the last interglacial period.[45]
Warming temperatures in the Arctic may cause large amounts of fresh meltwater to enter the north Atlantic, possibly disrupting global ocean current patterns. Potentially severe changes in the Earth's climate might then ensue.[42]
As the extent of sea ice diminishes and sea level rises, the effect of storms such as the Great Arctic Cyclone of 2012 on open water increases, as does possible salt-water damage to vegetation on shore at locations such as the Mackenzie River delta, as stronger storm surges become more likely.[46]
Global warming has increased encounters between polar bears and humans. Reduced sea ice due to melting is causing polar bears to search for new sources of food.[47] Beginning in December 2018 and coming to a peak in February 2019, a mass invasion of polar bears into the archipelago of Novaya Zemlya caused local authorities to declare a state of emergency. Dozens of polar bears were seen entering homes, public buildings, and inhabited areas.[48][49]
Sea ice, and the cold conditions it sustains, serves to stabilize methane deposits on and near the shoreline,[50] preventing the clathrate from breaking down and outgassing methane into the atmosphere. Melting of this ice may release large quantities of methane, a powerful greenhouse gas, into the atmosphere, causing further warming in a strong positive feedback cycle and threatening marine genera and species with extinction.[50][51]
Other environmental concerns relate to the radioactive contamination of the Arctic Ocean from, for example, Russian radioactive waste dump sites in the Kara Sea,[52] Cold War nuclear test sites such as Novaya Zemlya,[53] Camp Century's contaminants in Greenland,[54] and radioactive contamination from Fukushima.[55]
On 16 July 2015, five nations (United States of America, Russia, Canada, Norway, Denmark/Greenland) signed a declaration committing to keep their fishing vessels out of a 1.1 million square mile zone in the central Arctic Ocean near the North Pole. The agreement calls for those nations to refrain from fishing there until there is better scientific knowledge about the marine resources and until a regulatory system is in place to protect those resources.[56][57]
Coordinates: 90°N 0°E
|
en/4221.html.txt
ADDED
@@ -0,0 +1,117 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
The Arctic Ocean is the smallest and shallowest of the world's five major oceans [1] It is also known as the coldest of all the oceans. The International Hydrographic Organization (IHO) recognizes it as an ocean, although some oceanographers call it the Arctic Sea. It is sometimes classified as an estuary of the Atlantic Ocean,[2][3] and it is also seen as the northernmost part of the all-encompassing World Ocean.
|
6 |
+
|
7 |
+
Located mostly in the Arctic north polar region in the middle of the Northern Hemisphere, besides its surrounding waters the Arctic Ocean is surrounded by Eurasia and North America. It is partly covered by sea ice throughout the year and almost completely in winter. The Arctic Ocean's surface temperature and salinity vary seasonally as the ice cover melts and freezes;[4] its salinity is the lowest on average of the five major oceans, due to low evaporation, heavy fresh water inflow from rivers and streams, and limited connection and outflow to surrounding oceanic waters with higher salinities. The summer shrinking of the ice has been quoted at 50%.[1] The US National Snow and Ice Data Center (NSIDC) uses satellite data to provide a daily record of Arctic sea ice cover and the rate of melting compared to an average period and specific past years.
|
8 |
+
|
9 |
+
Human habitation in the North American polar region goes back at least 50,000–17,000 years ago, during the Wisconsin glaciation. At this time, falling sea levels allowed people to move across the Bering land bridge that joined Siberia to northwestern North America (Alaska), leading to the Settlement of the Americas.[5]
|
10 |
+
|
11 |
+
Paleo-Eskimo groups included the Pre-Dorset (c. 3200–850 BC); the Saqqaq culture of Greenland (2500–800 BC); the Independence I and Independence II cultures of northeastern Canada and Greenland (c. 2400–1800 BC and c. 800–1 BC); the Groswater of Labrador and Nunavik, and the Dorset culture (500 BC to AD 1500), which spread across Arctic North America. The Dorset were the last major Paleo-Eskimo culture in the Arctic before the migration east from present-day Alaska of the Thule, the ancestors of the modern Inuit.[6]
|
12 |
+
|
13 |
+
The Thule Tradition lasted from about 200 BC to AD 1600 around the Bering Strait, the Thule people being the prehistoric ancestors of the Inuit who now live in Northern Labrador.[7]
|
14 |
+
|
15 |
+
For much of European history, the north polar regions remained largely unexplored and their geography conjectural. Pytheas of Massilia recorded an account of a journey northward in 325 BC, to a land he called "Eschate Thule", where the Sun only set for three hours each day and the water was replaced by a congealed substance "on which one can neither walk nor sail". He was probably describing loose sea ice known today as "growlers" or "bergy bits"; his "Thule" was probably Norway, though the Faroe Islands or Shetland have also been suggested.[8]
|
16 |
+
|
17 |
+
Early cartographers were unsure whether to draw the region around the North Pole as land (as in Johannes Ruysch's map of 1507, or Gerardus Mercator's map of 1595) or water (as with Martin Waldseemüller's world map of 1507). The fervent desire of European merchants for a northern passage, the Northern Sea Route or the Northwest Passage, to "Cathay" (China) caused water to win out, and by 1723 mapmakers such as Johann Homann featured an extensive "Oceanus Septentrionalis" at the northern edge of their charts.
|
18 |
+
|
19 |
+
The few expeditions to penetrate much beyond the Arctic Circle in this era added only small islands, such as Novaya Zemlya (11th century) and Spitzbergen (1596), though since these were often surrounded by pack-ice, their northern limits were not so clear. The makers of navigational charts, more conservative than some of the more fanciful cartographers, tended to leave the region blank, with only fragments of known coastline sketched in.
|
20 |
+
|
21 |
+
This lack of knowledge of what lay north of the shifting barrier of ice gave rise to a number of conjectures. In England and other European nations, the myth of an "Open Polar Sea" was persistent. John Barrow, longtime Second Secretary of the British Admiralty, promoted exploration of the region from 1818 to 1845 in search of this.
|
22 |
+
|
23 |
+
In the United States in the 1850s and 1860s, the explorers Elisha Kane and Isaac Israel Hayes both claimed to have seen part of this elusive body of water. Even quite late in the century, the eminent authority Matthew Fontaine Maury included a description of the Open Polar Sea in his textbook The Physical Geography of the Sea (1883). Nevertheless, as all the explorers who travelled closer and closer to the pole reported, the polar ice cap is quite thick, and persists year-round.
|
24 |
+
|
25 |
+
Fridtjof Nansen was the first to make a nautical crossing of the Arctic Ocean, in 1896. The first surface crossing of the ocean was led by Wally Herbert in 1969, in a dog sled expedition from Alaska to Svalbard, with air support.[9] The first nautical transit of the north pole was made in 1958 by the submarine USS Nautilus, and the first surface nautical transit occurred in 1977 by the icebreaker NS Arktika.
|
26 |
+
|
27 |
+
Since 1937, Soviet and Russian manned drifting ice stations have extensively monitored the Arctic Ocean. Scientific settlements were established on the drift ice and carried thousands of kilometers by ice floes.[10]
|
28 |
+
|
29 |
+
In World War II, the European region of the Arctic Ocean was heavily contested: the Allied commitment to resupply the Soviet Union via its northern ports was opposed by German naval and air forces.
|
30 |
+
|
31 |
+
Since 1954 commercial airlines have flown over the Arctic Ocean (see Polar route).
|
32 |
+
|
33 |
+
The Arctic Ocean occupies a roughly circular basin and covers an area of about 14,056,000 km2 (5,427,000 sq mi), almost the size of Antarctica.[11][12] The coastline is 45,390 km (28,200 mi) long.[11][13] It is the only ocean smaller than Russia, which has a land area of 16,377,742 km2 (6,323,482 sq mi). It is surrounded by the land masses of Eurasia, North America, Greenland, and by several islands.
|
34 |
+
|
35 |
+
It is generally taken to include Baffin Bay, Barents Sea, Beaufort Sea, Chukchi Sea, East Siberian Sea, Greenland Sea, Hudson Bay, Hudson Strait, Kara Sea, Laptev Sea, White Sea and other tributary bodies of water. It is connected to the Pacific Ocean by the Bering Strait and to the Atlantic Ocean through the Greenland Sea and Labrador Sea.[1]
|
36 |
+
|
37 |
+
Countries bordering the Arctic Ocean are: Russia, Norway, Iceland, Greenland (territory of the Kingdom of Denmark), Canada and the United States.
There are several ports and harbors around the Arctic Ocean.[14]
In Alaska, the main ports are Barrow (71°17′44″N 156°45′59″W) and Prudhoe Bay (70°19′32″N 148°42′41″W).
In Canada, ships may anchor at Churchill (Port of Churchill) (58°46′28″N 94°11′37″W) in Manitoba, Nanisivik (Nanisivik Naval Facility) (73°04′08″N 84°32′57″W) in Nunavut,[15] and Tuktoyaktuk (69°26′34″N 133°01′52″W) or Inuvik (68°21′42″N 133°43′50″W) in the Northwest Territories.
In Greenland, the main port is at Nuuk (Nuuk Port and Harbour) (64°10′15″N 51°43′15″W).
In Norway, Kirkenes (69°43′37″N 30°02′44″E) and Vardø (70°22′14″N 31°06′27″E) are ports on the mainland. There is also Longyearbyen (78°13′12″N 15°39′00″E) on Svalbard, a Norwegian archipelago, next to the Fram Strait.
In Russia, major ports sorted by the different sea areas are:
The ocean's Arctic shelf comprises a number of continental shelves, including the Canadian Arctic shelf, underlying the Canadian Arctic Archipelago, and the Russian continental shelf, which is sometimes simply called the "Arctic Shelf" because it is greater in extent. The Russian continental shelf consists of three separate, smaller shelves: the Barents Shelf, Chukchi Sea Shelf, and Siberian Shelf. Of these three, the Siberian Shelf is the largest such shelf in the world, and it holds large oil and gas reserves. The Chukchi shelf forms the border between Russia and the United States, as stated in the USSR–USA Maritime Boundary Agreement. The whole area is subject to international territorial claims.
An underwater ridge, the Lomonosov Ridge, divides the deep sea North Polar Basin into two oceanic basins: the Eurasian Basin, which is between 4,000 and 4,500 m (13,100 and 14,800 ft) deep, and the Amerasian Basin (sometimes called the North American, or Hyperborean Basin), which is about 4,000 m (13,000 ft) deep. The bathymetry of the ocean bottom is marked by fault block ridges, abyssal plains, ocean deeps, and basins. The average depth of the Arctic Ocean is 1,038 m (3,406 ft).[16] The deepest point is Molloy Hole in the Fram Strait, at about 5,550 m (18,210 ft).[17]
The two major basins are further subdivided by ridges into the Canada Basin (between Alaska/Canada and the Alpha Ridge), Makarov Basin (between the Alpha and Lomonosov Ridges), Amundsen Basin (between the Lomonosov and Gakkel Ridges), and Nansen Basin (between the Gakkel Ridge and the continental shelf that includes Franz Josef Land).
The crystalline basement rocks of mountains around the Arctic Ocean were recrystallized or formed during the Ellesmerian orogeny, the regional phase of the larger Caledonian orogeny in the Paleozoic. Regional subsidence in the Jurassic and Triassic led to significant sediment deposition, creating many of the reservoirs for present-day oil and gas deposits. During the Cretaceous, the Canadian Basin opened, and tectonic activity due to the assembly of Alaska caused hydrocarbons to migrate toward what is now Prudhoe Bay. At the same time, sediments shed from the rising Canadian Rockies built out the large Mackenzie Delta.
The rifting apart of the supercontinent Pangea, beginning in the Triassic, opened the early Atlantic Ocean. Rifting then extended northward, opening the Arctic Ocean as mafic oceanic crust erupted out of a branch of the Mid-Atlantic Ridge. The Amerasia Basin may have opened first, with the Chukchi Borderland moving to the northeast along transform faults. Additional spreading helped to create the "triple junction" of the Alpha-Mendeleev Ridge in the Late Cretaceous.
Throughout the Cenozoic, the subduction of the Pacific plate, the collision of India with Eurasia and the continued opening of the North Atlantic created new hydrocarbon traps. The seafloor began spreading from the Gakkel Ridge in the Paleocene and Eocene, causing the Lomonosov Ridge to move farther from land and subside.
Because of sea ice and remote conditions, the geology of the Arctic Ocean is still poorly explored. ACEX drilling shed some light on the Lomonosov Ridge, which appears to be continental crust that separated from the Barents–Kara Shelf in the Paleocene and was then starved of sediment. It may contain up to 10 billion barrels of oil. The Gakkel Ridge rift is also poorly understood and may extend into the Laptev Sea.[18][19]
In large parts of the Arctic Ocean, the top layer (about 50 m (160 ft)) is of lower salinity and lower temperature than the rest. It remains relatively stable because the salinity effect on density is greater than the temperature effect. It is fed by the freshwater input of the big Siberian and Canadian rivers (Ob, Yenisei, Lena, Mackenzie), whose water essentially floats on the saltier, denser, deeper ocean water. Between this lower-salinity layer and the bulk of the ocean lies the so-called halocline, in which both salinity and temperature rise with increasing depth.
Because of its relative isolation from other oceans, the Arctic Ocean has a uniquely complex system of water flow. It resembles the Mediterranean Sea in some hydrological respects: its deep waters have only limited communication with the Atlantic Basin, through the Fram Strait, "where the circulation is dominated by thermohaline forcing".[20] The Arctic Ocean has a total volume of 18.07×106 km3, equal to about 1.3% of the World Ocean. Mean surface circulation is predominantly cyclonic on the Eurasian side and anticyclonic in the Canadian Basin.[21]
Water enters from both the Pacific and Atlantic Oceans and can be divided into three unique water masses. The deepest water mass is called Arctic Bottom Water and begins around 900 metres (3,000 feet) depth.[20] It is composed of the densest water in the World Ocean and has two main sources: Arctic shelf water and Greenland Sea Deep Water. Water in the shelf region that begins as inflow from the Pacific passes through the narrow Bering Strait at an average rate of 0.8 Sverdrups and reaches the Chukchi Sea.[22] During the winter, cold Alaskan winds blow over the Chukchi Sea, freezing the surface water and pushing this newly formed ice out to the Pacific. The speed of the ice drift is roughly 1–4 cm/s.[21] This process leaves dense, salty waters in the sea that sink over the continental shelf into the western Arctic Ocean and create a halocline.[23]
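The transports above and below are quoted in sverdrups, the standard unit of volume transport in oceanography (1 Sv = 10^6 m³/s). As a quick sanity check on the scale of these flows, the sketch below converts the quoted Bering Strait inflow (0.8 Sv) and the Fram Strait outflow (9 Sv, mentioned later in the text) into annual volumes; the function name and helper are illustrative, not from the source.

```python
# Convert volume transports in sverdrups (1 Sv = 1e6 m^3/s) into
# the volume of water moved per year, in cubic kilometres.
SV = 1e6  # m^3/s per sverdrup

def annual_volume_km3(transport_sv):
    """Volume moved in one Julian year, in km^3."""
    seconds_per_year = 365.25 * 24 * 3600
    return transport_sv * SV * seconds_per_year / 1e9  # m^3 -> km^3

print(f"Bering Strait inflow (0.8 Sv): {annual_volume_km3(0.8):,.0f} km^3/yr")
print(f"Fram Strait outflow (9 Sv):    {annual_volume_km3(9):,.0f} km^3/yr")
```

The roughly tenfold difference between the two figures mirrors the text's point that the Atlantic inflow dominates the Arctic's water budget.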
This water is met by Greenland Sea Deep Water, which forms during the passage of winter storms. As temperatures cool dramatically in the winter, ice forms and intense vertical convection allows the water to become dense enough to sink below the warm, saline water beneath it.[20] Arctic Bottom Water is critically important because of its outflow, which contributes to the formation of Atlantic Deep Water. The overturning of this water plays a key role in global circulation and the moderation of climate.
In the depth range of 150–900 metres (490–2,950 feet) is a water mass referred to as Atlantic Water. Inflow from the North Atlantic Current enters through the Fram Strait, cooling and sinking to form the deepest layer of the halocline, where it circles the Arctic Basin counter-clockwise. This is the highest volumetric inflow to the Arctic Ocean, equalling about 10 times that of the Pacific inflow, and it creates the Arctic Ocean Boundary Current.[22] It flows slowly, at about 0.02 m/s.[20] Atlantic Water has the same salinity as Arctic Bottom Water but is much warmer (up to 3 °C). In fact, this water mass is actually warmer than the surface water, and remains submerged only due to the role of salinity in density.[20] When water reaches the basin it is pushed by strong winds into a large circular current called the Beaufort Gyre. Water in the Beaufort Gyre is far less saline than that of the Chukchi Sea due to inflow from large Canadian and Siberian rivers.[23]
The final defined water mass in the Arctic Ocean is called Arctic Surface Water and is found in the upper 150–200 metres (490–660 feet). The most important feature of this water mass is a section referred to as the sub-surface layer. It is a product of Atlantic water that enters through canyons and is subjected to intense mixing on the Siberian Shelf.[20] As it is entrained, it cools and acts as a heat shield for the surface layer. This insulation keeps the warm Atlantic Water from melting the surface ice. Additionally, this water forms the swiftest currents of the Arctic, with speeds of around 0.3–0.6 m/s.[20] Complementing the water from the canyons, some Pacific water that does not sink to the shelf region after passing through the Bering Strait also contributes to this water mass.
Waters originating in the Pacific and Atlantic both exit through the Fram Strait between Greenland and Svalbard, which is about 2,700 metres (8,900 feet) deep and 350 kilometres (220 miles) wide. This outflow is about 9 Sv.[22] The width of the Fram Strait is what allows for both inflow and outflow on the Atlantic side of the Arctic Ocean. Because of this, it is influenced by the Coriolis force, which concentrates outflow in the East Greenland Current on the western side and inflow in the Norwegian Current on the eastern side.[20] Pacific water also exits along the west coast of Greenland and through the Hudson Strait (1–2 Sv), providing nutrients to the Canadian Archipelago.[22]
As noted, the process of ice formation and movement is a key driver in Arctic Ocean circulation and the formation of water masses. With this dependence, the Arctic Ocean experiences variations due to seasonal changes in sea ice cover. Sea ice movement is the result of wind forcing, which is related to a number of meteorological conditions that the Arctic experiences throughout the year. For example, the Beaufort High—an extension of the Siberian High system—is a pressure system that drives the anticyclonic motion of the Beaufort Gyre.[21] During the summer, this area of high pressure is pushed out closer to its Siberian and Canadian sides. In addition, there is a sea level pressure (SLP) ridge over Greenland that drives strong northerly winds through the Fram Strait, facilitating ice export. In the summer, the SLP contrast is smaller, producing weaker winds. A final example of seasonal pressure system movement is the low pressure system that exists over the Nordic and Barents Seas. It is an extension of the Icelandic Low, which creates cyclonic ocean circulation in this area. The low shifts to center over the North Pole in the summer. These variations in the Arctic all contribute to ice drift reaching its weakest point during the summer months. There is also evidence that the drift is associated with the phase of the Arctic Oscillation and Atlantic Multidecadal Oscillation.[21]
Much of the Arctic Ocean is covered by sea ice that varies in extent and thickness seasonally. The mean extent of the ice has been decreasing since 1980 from the average winter value of 15,600,000 km2 (6,023,200 sq mi) at a rate of 3% per decade. The seasonal variations are about 7,000,000 km2 (2,702,700 sq mi) with the maximum in April and minimum in September. The sea ice is affected by wind and ocean currents, which can move and rotate very large areas of ice. Zones of compression also arise, where the ice piles up to form pack ice.[25][26][27]
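One illustrative way to read the quoted figures is to compound the 3%-per-decade decline from the 1980 winter mean. This is a naive geometric extrapolation for the sake of the arithmetic, not a climate projection, and the function is an assumption of this sketch rather than anything in the source.

```python
# Naive geometric extrapolation of the quoted 3%-per-decade decline
# in mean winter sea-ice extent, starting from the 1980 value.
WINTER_MEAN_1980 = 15.6e6  # km^2, quoted winter mean

def winter_extent(year):
    """Extrapolated winter extent in km^2 (illustrative only)."""
    decades = (year - 1980) / 10
    return WINTER_MEAN_1980 * (1 - 0.03) ** decades

print(f"extrapolated 2020 winter extent: {winter_extent(2020):,.0f} km^2")
```

Four decades at that rate removes roughly 1.8 million km², a loss smaller than the quoted seasonal swing of about 7 million km² but persistent rather than cyclical.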
Icebergs occasionally break away from northern Ellesmere Island, and icebergs are formed from glaciers in western Greenland and extreme northeastern Canada. Icebergs are not sea ice but may become embedded in the pack ice. Icebergs pose a hazard to ships, the Titanic being one of the most famous victims. The ocean is virtually icelocked from October to June, and the superstructures of ships are subject to icing from October to May.[14] Before the advent of modern icebreakers, ships sailing the Arctic Ocean risked being trapped or crushed by sea ice (although the Baychimo drifted through the Arctic Ocean untended for decades despite these hazards).
Under the influence of the Quaternary glaciation, the Arctic Ocean lies within a polar climate characterized by persistent cold and relatively narrow annual temperature ranges. Winters are characterized by the polar night, extreme cold, frequent low-level temperature inversions, and stable weather conditions.[28] Cyclones are only common on the Atlantic side.[29] Summers are characterized by continuous daylight (midnight sun), and temperatures can rise above the melting point (0 °C (32 °F)). Cyclones are more frequent in summer and may bring rain or snow.[29] It is cloudy year-round, with mean cloud cover ranging from 60% in winter to over 80% in summer.[30]
The temperature of the surface of the Arctic Ocean is fairly constant, near the freezing point of seawater. Because the Arctic Ocean consists of saltwater, the temperature must reach −1.8 °C (28.8 °F) before freezing occurs.
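The quoted −1.8 °C can be checked against the standard EOS-80 empirical freezing-point formula (UNESCO, 1983) at surface pressure; the formula and the salinity values below are assumptions of this sketch, not figures from the article. Full-strength seawater (35 g/kg) freezes nearer −1.9 °C, while the commonly cited −1.8 °C corresponds to a salinity of roughly 33 g/kg, closer to Arctic surface water freshened by river inflow.

```python
# Freezing point of seawater vs. salinity, EOS-80 empirical formula
# (UNESCO 1983) at zero pressure. Coefficients are for practical
# salinity in the range ~4-40.
def freezing_point(salinity):
    """Freezing temperature in degrees C at surface pressure."""
    s = salinity
    return -0.0575 * s + 1.710523e-3 * s ** 1.5 - 2.154996e-4 * s ** 2

print(f"S = 33 g/kg: {freezing_point(33):.2f} C")
print(f"S = 35 g/kg: {freezing_point(35):.2f} C")
```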
The density of sea water, in contrast to fresh water, increases as it nears the freezing point and thus it tends to sink. It is generally necessary that the upper 100–150 m (330–490 ft) of ocean water cools to the freezing point for sea ice to form.[31] In the winter the relatively warm ocean water exerts a moderating influence, even when covered by ice. This is one reason why the Arctic does not experience the extreme temperatures seen on the Antarctic continent.
There is considerable seasonal variation in how much pack ice of the Arctic ice pack covers the Arctic Ocean. Much of the Arctic ice pack is also covered in snow for about 10 months of the year. The maximum snow cover is in March or April — about 20 to 50 cm (7.9 to 19.7 in) over the frozen ocean.
The climate of the Arctic region has varied significantly in the past. As recently as 55 million years ago, during the Paleocene–Eocene Thermal Maximum, the region reached an average annual temperature of 10–20 °C (50–68 °F).[32] The surface waters of the northernmost[33] Arctic Ocean warmed, seasonally at least, enough to support tropical lifeforms (the dinoflagellates Apectodinium augustum) requiring surface temperatures of over 22 °C (72 °F).[34]
Due to the pronounced seasonality of 2–6 months of midnight sun and polar night[35] in the Arctic Ocean, the primary production of photosynthesizing organisms such as ice algae and phytoplankton is limited to the spring and summer months (March/April to September[36]). Important consumers of primary producers in the central Arctic Ocean and the adjacent shelf seas include zooplankton, especially copepods (Calanus finmarchicus, Calanus glacialis, and Calanus hyperboreus[37]) and euphausiids,[38] as well as ice-associated fauna (e.g., amphipods[37]). These primary consumers form an important link between the primary producers and higher trophic levels. The composition of higher trophic levels in the Arctic Ocean varies with region (Atlantic side vs. Pacific side) and with the sea-ice cover. Secondary consumers in the Barents Sea, an Atlantic-influenced Arctic shelf sea, are mainly sub-Arctic species including herring, young cod, and capelin.[38] In ice-covered regions of the central Arctic Ocean, polar cod is a central predator of primary consumers. The apex predators of the Arctic Ocean, marine mammals such as seals, whales, and polar bears, prey upon fish.
Endangered marine species in the Arctic Ocean include walruses and whales. The area has a fragile ecosystem, and it is especially exposed to climate change, because it warms faster than the rest of the world. Lion's mane jellyfish are abundant in the waters of the Arctic, and the banded gunnel is the only species of gunnel that lives in the ocean.
Petroleum and natural gas fields, placer deposits, polymetallic nodules, sand and gravel aggregates, fish, seals and whales can all be found in abundance in the region.[14][27]
The political dead zone near the center of the sea is also the focus of a mounting dispute between the United States, Russia, Canada, Norway, and Denmark.[39] It is significant for the global energy market because it may hold 25% or more of the world's undiscovered oil and gas resources.[40]
The Arctic ice pack is thinning, and a seasonal hole in the ozone layer frequently occurs.[41] Reduction of the area of Arctic sea ice reduces the planet's average albedo, possibly resulting in global warming in a positive feedback mechanism.[27][42] Research shows that the Arctic may become ice-free in the summer for the first time in human history by 2040.[43][44] Estimates of when the Arctic was last ice-free vary: from 65 million years ago, when fossils indicate that plants grew there, to as recently as 5,500 years ago; ice and ocean cores suggest dates ranging from 8,000 years ago, during the last warm period, to 125,000 years ago, during the last interglacial period.[45]
Warming temperatures in the Arctic may cause large amounts of fresh meltwater to enter the north Atlantic, possibly disrupting global ocean current patterns. Potentially severe changes in the Earth's climate might then ensue.[42]
As the extent of sea ice diminishes and sea level rises, the effect of storms such as the Great Arctic Cyclone of 2012 on open water increases, as does possible salt-water damage to vegetation on shore at locations such as the Mackenzie River delta, as stronger storm surges become more likely.[46]
Global warming has increased encounters between polar bears and humans. Reduced sea ice due to melting is causing polar bears to search for new sources of food.[47] Beginning in December 2018 and coming to an apex in February 2019, a mass invasion of polar bears into the archipelago of Novaya Zemlya caused local authorities to declare a state of emergency. Dozens of polar bears were seen entering homes, public buildings, and inhabited areas.[48][49]
Sea ice, and the cold conditions it sustains, serves to stabilize methane deposits on and near the shoreline,[50] preventing the clathrate from breaking down and outgassing methane into the atmosphere. Melting of this ice may release large quantities of methane, a powerful greenhouse gas, into the atmosphere, causing further warming in a strong positive feedback cycle and driving marine genera and species toward extinction.[50][51]
Other environmental concerns relate to the radioactive contamination of the Arctic Ocean from, for example, Russian radioactive waste dump sites in the Kara Sea,[52] Cold War nuclear test sites such as Novaya Zemlya,[53] Camp Century's contaminants in Greenland,[54] or radioactive contamination from Fukushima.[55]
On 16 July 2015, five nations (United States of America, Russia, Canada, Norway, Denmark/Greenland) signed a declaration committing to keep their fishing vessels out of a 1.1 million square mile zone in the central Arctic Ocean near the North Pole. The agreement calls for those nations to refrain from fishing there until there is better scientific knowledge about the marine resources and until a regulatory system is in place to protect those resources.[56][57]
Coordinates: 90°N 0°E
en/4222.html.txt
An ocean is a body of water that composes much of a planet's hydrosphere.[1] On Earth, an ocean is one of the major conventional divisions of the World Ocean. These are, in descending order by area, the Pacific, Atlantic, Indian, Southern (Antarctic), and Arctic Oceans.[2][3] The phrases "the ocean" or "the sea" used without specification refer to the interconnected body of salt water covering the majority of the Earth's surface.[2][3] As a general term, "the ocean" is mostly interchangeable with "the sea" in American English, but not in British English.[4] Strictly speaking, a sea is a body of water (generally a division of the world ocean) partly or fully enclosed by land.[5]
Saline seawater covers approximately 361,000,000 km2 (139,000,000 sq mi) and is customarily divided into several principal oceans and smaller seas, with the ocean covering approximately 71% of Earth's surface and 90% of the Earth's biosphere.[6] The ocean contains 97% of Earth's water, and oceanographers have stated that less than 20% of the World Ocean has been mapped.[6] The total volume is approximately 1.35 billion cubic kilometers (320 million cu mi) with an average depth of nearly 3,700 meters (12,100 ft).[7][8][9]
As the world ocean is the principal component of Earth's hydrosphere, it is integral to life, forms part of the carbon cycle, and influences climate and weather patterns. The World Ocean is the habitat of 230,000 known species, but because much of it is unexplored, the number of species that exist in the ocean is much larger, possibly over two million.[10] The origin of Earth's oceans is unknown; oceans are thought to have formed in the Hadean eon and may have been the cause for the emergence of life.
Extraterrestrial oceans may be composed of water or other elements and compounds. The only confirmed large stable bodies of extraterrestrial surface liquids are the lakes of Titan, although there is evidence for the existence of oceans elsewhere in the Solar System. Early in their geologic histories, Mars and Venus are theorized to have had large water oceans. The Mars ocean hypothesis suggests that nearly a third of the surface of Mars was once covered by water, and a runaway greenhouse effect may have boiled away the global ocean of Venus. Compounds such as salts and ammonia dissolved in water lower its freezing point so that water might exist in large quantities in extraterrestrial environments as brine or convecting ice. Unconfirmed oceans are speculated beneath the surface of many dwarf planets and natural satellites; notably, the ocean of the moon Europa is estimated to have over twice the water volume of Earth. The Solar System's giant planets are also thought to have liquid atmospheric layers of yet to be confirmed compositions. Oceans may also exist on exoplanets and exomoons, including surface oceans of liquid water within a circumstellar habitable zone. Ocean planets are a hypothetical type of planet with a surface completely covered with liquid.[11][12]
The word ocean comes from the figure in classical antiquity, Oceanus (/oʊˈsiːənəs/; Greek: Ὠκεανός Ōkeanós,[13] pronounced [ɔːkeanós]), the elder of the Titans in classical Greek mythology, believed by the ancient Greeks and Romans to be the divine personification of the sea, an enormous river encircling the world.
The concept of Ōkeanós has an Indo-European connection. Greek Ōkeanós has been compared to the Vedic epithet ā-śáyāna-, predicated of the dragon Vṛtra-, who captured the cows/rivers. Related to this notion, the Okeanos is represented with a dragon-tail on some early Greek vases.[14]
Though generally described as several separate oceans, the global, interconnected body of salt water is sometimes referred to as the World Ocean or global ocean.[15][16] The concept of a continuous body of water with relatively free interchange among its parts is of fundamental importance to oceanography.[17]
The major oceanic divisions – listed below in descending order of area and volume – are defined in part by the continents, various archipelagos, and other criteria.[9][12][18]
Oceans are fringed by smaller, adjoining bodies of water such as seas, gulfs, bays, bights, and straits.
The mid-ocean ridges of the world are connected and form a single global mid-oceanic ridge system that is part of every ocean and the longest mountain range in the world. The continuous mountain range is 65,000 km (40,000 mi) long (several times longer than the Andes, the longest continental mountain range).[28]
The total mass of the hydrosphere is about 1.4 quintillion tonnes (1.4×1018 long tons or 1.5×1018 short tons), which is about 0.023% of Earth's total mass. Less than 3% is freshwater; the rest is saltwater, almost all of which is in the ocean. The area of the World Ocean is about 361.9 million square kilometers (139.7 million square miles),[9] which covers about 70.9% of Earth's surface, and its volume is approximately 1.335 billion cubic kilometers (320.3 million cubic miles).[9] This can be thought of as a cube of water with an edge length of 1,101 kilometers (684 mi). Its average depth is about 3,688 meters (12,100 ft),[9] and its maximum depth is 10,994 meters (6.831 mi) at the Mariana Trench.[29] Nearly half of the world's marine waters are over 3,000 meters (9,800 ft) deep.[16] The vast expanses of deep ocean (anything below 200 meters or 660 feet) cover about 66% of Earth's surface.[30] This does not include seas not connected to the World Ocean, such as the Caspian Sea.
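The cube analogy and the mean depth both follow directly from the quoted area and volume, which can be checked with a couple of lines of arithmetic (the variable names are this sketch's, not the article's):

```python
# Check the article's figures: the edge of a cube holding the World
# Ocean's volume, and the mean depth implied by volume / area.
AREA_KM2 = 361.9e6    # quoted surface area of the World Ocean
VOLUME_KM3 = 1.335e9  # quoted volume

edge_km = VOLUME_KM3 ** (1 / 3)              # equivalent cube edge
mean_depth_m = VOLUME_KM3 / AREA_KM2 * 1000  # km -> m

print(f"cube edge:  {edge_km:,.0f} km")
print(f"mean depth: {mean_depth_m:,.0f} m")
```

Both results land on the figures quoted in the text (about 1,101 km and roughly 3,688 m), confirming the numbers are mutually consistent.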
The bluish ocean color is a composite of several contributing agents. Prominent contributors include dissolved organic matter and chlorophyll.[31] Mariners and other seafarers have reported that the ocean often emits a visible glow which extends for miles at night. In 2005, scientists announced that for the first time, they had obtained photographic evidence of this glow.[32] It is most likely caused by bioluminescence.[33][34][35]
Oceanographers divide the ocean into different vertical zones defined by physical and biological conditions. The pelagic zone includes all open ocean regions, and can be divided into further regions categorized by depth and light abundance. The photic zone includes the oceans from the surface to a depth of 200 m; it is the region where photosynthesis can occur and is, therefore, the most biodiverse. Because photosynthesis requires light, life found deeper than the photic zone must either rely on material sinking from above (see marine snow) or find another energy source. Hydrothermal vents are the primary source of energy in what is known as the aphotic zone (depths exceeding 200 m). The pelagic part of the photic zone is known as the epipelagic.
The pelagic part of the aphotic zone can be further divided into vertical regions according to temperature.
The mesopelagic is the uppermost region. Its lowermost boundary is at a thermocline of 12 °C (54 °F), which in the tropics generally lies at 700–1,000 meters (2,300–3,300 ft). Next is the bathypelagic, lying between 10 and 4 °C (50 and 39 °F), typically between 700–1,000 meters (2,300–3,300 ft) and 2,000–4,000 meters (6,600–13,100 ft). Lying along the top of the abyssal plain is the abyssopelagic, whose lower boundary lies at about 6,000 meters (20,000 ft). The last zone includes the deep oceanic trenches and is known as the hadalpelagic. This lies between 6,000 and 11,000 meters (20,000 and 36,000 ft) and is the deepest oceanic zone.
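The depth ranges above can be summarized as a simple depth-to-zone lookup. The fixed boundaries below are representative picks within the quoted ranges (the real mesopelagic/bathypelagic boundary follows a thermocline, not a fixed depth), and the function is illustrative rather than an oceanographic standard.

```python
# Classify a depth (in metres) into the pelagic zones described
# above, using representative boundary depths from the text.
ZONES = [
    (200, "epipelagic"),       # photic zone
    (1000, "mesopelagic"),
    (4000, "bathypelagic"),
    (6000, "abyssopelagic"),   # top of the abyssal plain and below
    (11000, "hadalpelagic"),   # deep oceanic trenches
]

def pelagic_zone(depth_m):
    for limit, name in ZONES:
        if depth_m <= limit:
            return name
    raise ValueError("deeper than any known part of the ocean")

print(pelagic_zone(150), pelagic_zone(3000), pelagic_zone(10994))
```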
The benthic zones are aphotic and correspond to the three deepest zones of the deep-sea. The bathyal zone covers the continental slope down to about 4,000 meters (13,000 ft). The abyssal zone covers the abyssal plains between 4,000 and 6,000 m. Lastly, the hadal zone corresponds to the hadalpelagic zone, which is found in oceanic trenches.
The pelagic zone can be further subdivided into two subregions: the neritic zone and the oceanic zone. The neritic zone encompasses the water mass directly above the continental shelves whereas the oceanic zone includes all the completely open water.
In contrast, the littoral zone covers the region between low and high tide and represents the transitional area between marine and terrestrial conditions. It is also known as the intertidal zone because it is the area where tide level affects the conditions of the region.
If a zone undergoes dramatic changes in temperature with depth, it contains a thermocline. The tropical thermocline is typically deeper than the thermocline at higher latitudes. Polar waters, which receive relatively little solar energy, are not stratified by temperature and generally lack a thermocline because surface water at polar latitudes is nearly as cold as water at greater depths. Below the thermocline, water is very cold, ranging from −1 °C to 3 °C. Because this deep and cold layer contains the bulk of ocean water, the average temperature of the world ocean is 3.9 °C.
If a zone undergoes dramatic changes in salinity with depth, it contains a halocline. If a zone undergoes a strong, vertical chemistry gradient with depth, it contains a chemocline.
The halocline often coincides with the thermocline, and the combination produces a pronounced pycnocline.
The deepest point in the ocean is the Mariana Trench, located in the Pacific Ocean near the Northern Mariana Islands. Its maximum depth has been estimated to be 10,971 meters (35,994 ft) (plus or minus 11 meters; see the Mariana Trench article for discussion of the various estimates of the maximum depth.) The British naval vessel Challenger II surveyed the trench in 1951 and named the deepest part of the trench the "Challenger Deep". In 1960, the Trieste successfully reached the bottom of the trench, manned by a crew of two men.
Oceanic maritime currents have different origins. Tidal currents are in phase with the tide and hence quasiperiodic; they may reach considerable speeds in certain places, most notably around headlands. Non-periodic currents originate from waves, wind, and density differences.
The wind and waves create surface currents (designated as “drift currents”). These currents can be decomposed into one quasi-permanent current (which varies on an hourly scale) and one movement of Stokes drift under the effect of rapid wave motion (on the scale of a couple of seconds).[36] The quasi-permanent current is accelerated by the breaking of waves and, to a lesser extent, by the friction of the wind on the surface.[37]
This acceleration of the current takes place in the direction of the waves and the dominant wind. As depth increases, the rotation of the Earth progressively turns the direction of the current while friction reduces its speed, until at a certain depth the current flows opposite to the surface current and its speed approaches zero: this is known as the Ekman spiral. The influence of these currents is mainly felt in the mixed layer of the ocean surface, often down to 400 to 800 meters. These currents can alter considerably with the seasons. If the mixed layer is thinner (10 to 20 meters), the quasi-permanent current at the surface adopts an extremely oblique direction relative to the wind and becomes virtually homogeneous down to the thermocline.[38]
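The rotation-with-depth described above has a classical closed-form solution for the idealized case of a steady wind over deep water with constant eddy viscosity. The sketch below evaluates that textbook Ekman solution; the eddy viscosity, latitude, and surface speed are illustrative assumptions, not values from the article.

```python
import math

# Classical Ekman spiral (steady state, constant eddy viscosity Az)
# for a wind blowing along +y in the Northern Hemisphere.
Az = 1e-2   # eddy viscosity, m^2/s (assumed)
lat = 75.0  # latitude in degrees (assumed, Arctic)
f = 2 * 7.2921e-5 * math.sin(math.radians(lat))  # Coriolis parameter
D = math.pi * math.sqrt(2 * Az / f)              # Ekman depth, m
V0 = 0.1    # surface current speed, m/s (assumed)

def ekman(z):
    """Current components (u, v) at depth z <= 0 metres."""
    k = math.pi / D
    u = V0 * math.exp(k * z) * math.cos(math.pi / 4 + k * z)
    v = V0 * math.exp(k * z) * math.sin(math.pi / 4 + k * z)
    return u, v

for z in (0.0, -D / 2, -D):
    u, v = ekman(z)
    speed = math.hypot(u, v)
    angle = math.degrees(math.atan2(u, v))  # clockwise from the wind
    print(f"z = {z:7.1f} m: speed {speed:.3f} m/s, {angle:6.1f} deg from wind")
```

At the surface the current is 45° to the right of the wind; at the Ekman depth D it has rotated to flow opposite the surface current with its speed reduced by a factor of e^π, matching the qualitative description in the text.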
At depth, however, maritime currents are driven by gradients of temperature and salinity between water masses of differing density.
In littoral zones, breaking waves are so intense and the water so shallow that maritime currents often reach 1 to 2 knots.
Ocean currents greatly affect Earth's climate by transferring heat from the tropics to the polar regions. They carry warm or cold air and precipitation to coastal regions, from where winds may carry them inland. Surface heat and freshwater fluxes create global density gradients that drive the thermohaline circulation, part of large-scale ocean circulation. It plays an important role in supplying heat to the polar regions, and thus in sea ice regulation. Changes in the thermohaline circulation are thought to have significant impacts on Earth's energy budget. In so far as the thermohaline circulation governs the rate at which deep waters reach the surface, it may also significantly influence atmospheric carbon dioxide concentrations.
For a discussion of the possibilities of changes to the thermohaline circulation under global warming, see shutdown of thermohaline circulation.
The Antarctic Circumpolar Current encircles that continent, influencing the area's climate and connecting currents in several oceans.
One of the most dramatic forms of weather occurs over the oceans: tropical cyclones (also called "typhoons" and "hurricanes" depending upon where the system forms).
The ocean has a significant effect on the biosphere. Oceanic evaporation, as a phase of the water cycle, is the source of most rainfall, and ocean temperatures determine climate and wind patterns that affect life on land. Life within the ocean evolved 3 billion years prior to life on land. Both the depth and the distance from shore strongly influence the biodiversity of the plants and animals present in each region.[39]
As life is thought to have evolved in the ocean, the diversity of marine life is immense.
In addition, many land animals have adapted to spending a major part of their lives on the ocean. For instance, seabirds are a diverse group of birds that have adapted to a life mainly on the oceans. They feed on marine animals and spend most of their lifetime on the water, many going on land only to breed. Other birds that have adapted to the ocean as their living space are penguins, seagulls and pelicans. Seven species of turtles, the sea turtles, also spend most of their time in the oceans.
A zone of rapid salinity increase with depth is called a halocline. The temperature of maximum density of seawater decreases as its salt content increases. Freezing temperature of water decreases with salinity, and boiling temperature of water increases with salinity. Typical seawater freezes at around −2 °C at atmospheric pressure.[53] If precipitation exceeds evaporation, as is the case in polar and temperate regions, salinity will be lower. If evaporation exceeds precipitation, as is the case in tropical regions, salinity will be higher. Thus, oceanic waters in polar regions have lower salinity content than oceanic waters in temperate and tropical regions.[54]
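The salinity dependence of the freezing point can be made concrete with one standard empirical fit, the UNESCO (Millero) formula at atmospheric pressure; the coefficients come from that standard, not from the text above:

```python
def freezing_point(salinity: float) -> float:
    """Freezing point of seawater in deg C at atmospheric pressure,
    using the standard UNESCO (Millero) empirical formula; salinity
    is in practical salinity units (roughly per mille)."""
    s = salinity
    return -0.0575 * s + 1.710523e-3 * s ** 1.5 - 2.154996e-4 * s ** 2

# Typical open-ocean salinity of 35 gives a freezing point near -2 deg C,
# matching the figure quoted above.
print(round(freezing_point(35.0), 2))  # -1.92
```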
Salinity can be calculated from the chlorinity, which is a measure of the total mass of halogen ions (including fluorine, chlorine, bromine, and iodine) in seawater. By international agreement, the following formula is used to determine salinity: salinity (in ‰) = 1.80655 × chlorinity (in ‰).
The average chlorinity is about 19.2‰, and thus the average salinity is around 34.7‰.[54]
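The conversion implied by these averages uses the internationally agreed factor of 1.80655 (a minimal sketch, consistent with 19.2‰ chlorinity giving about 34.7‰ salinity):

```python
def salinity_from_chlorinity(chlorinity: float) -> float:
    """Salinity in per mille from chlorinity in per mille, using the
    internationally agreed relation S = 1.80655 * Cl."""
    return 1.80655 * chlorinity

# The average chlorinity of about 19.2 per mille yields the average
# salinity of about 34.7 per mille quoted above.
print(round(salinity_from_chlorinity(19.2), 1))  # 34.7
```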
Many of the world's goods are moved by ship between the world's seaports.[55] Oceans are also the major supply source for the fishing industry. Some of the major harvests are shrimp, fish, crabs, and lobster.[6]
The motions of the ocean surface, known as undulations or waves, are the partial and alternate rising and falling of the ocean surface. The series of mechanical waves that propagate along the interface between water and air is called swell.[citation needed]
Earth is the only known planet with large stable bodies of liquid water on its surface, but other celestial bodies are thought to have large oceans.[58] In June 2020, NASA scientists reported that, based on mathematical modeling studies, exoplanets with oceans may be common in the Milky Way galaxy.[59][60]
The gas giants, Jupiter and Saturn, are thought to lack surfaces and instead have a stratum of liquid hydrogen; however, their planetary geology is not well understood. The possibility of the ice giants Uranus and Neptune having hot, highly compressed, supercritical water under their thick atmospheres has been hypothesised. Although their composition is still not fully understood, a 2006 study by Wiktorowicz and Ingersoll ruled out the possibility of such a water "ocean" existing on Neptune,[61] though some studies have suggested that exotic oceans of liquid diamond are possible.[62]
The Mars ocean hypothesis suggests that nearly a third of the surface of Mars was once covered by water, though the water on Mars is no longer oceanic (much of it residing in the ice caps). The possibility continues to be studied along with reasons for its apparent disappearance. Astronomers now think that Venus may have had liquid water and perhaps oceans for over 2 billion years.[63]
A global layer of liquid water thick enough to decouple the crust from the mantle is thought to be present on the natural satellites Titan, Europa, Enceladus and, with less certainty, Callisto, Ganymede[64][65] and Triton.[66][67] A magma ocean is thought to be present on Io.[68] Geysers have been found on Saturn's moon Enceladus, possibly originating from an ocean about 10 kilometers (6.2 mi) beneath the surface ice shell.[56] Other icy moons may also have internal oceans, or may once have had internal oceans that have now frozen.[69]
Large bodies of liquid hydrocarbons are thought to be present on the surface of Titan, although they are not large enough to be considered oceans and are sometimes referred to as lakes or seas. The Cassini–Huygens space mission initially discovered only what appeared to be dry lakebeds and empty river channels, suggesting that Titan had lost what surface liquids it might have had. Later flybys of Titan provided radar and infrared images that showed a series of hydrocarbon lakes in the colder polar regions. Titan is thought to have a subsurface liquid-water ocean under the ice in addition to the hydrocarbon mix that forms atop its outer crust.
Ceres appears to be differentiated into a rocky core and icy mantle and may harbour a liquid-water ocean under its surface.[70][71]
Not enough is known of the larger trans-Neptunian objects to determine whether they are differentiated bodies capable of supporting oceans, although models of radioactive decay suggest that Pluto,[72] Eris, Sedna, and Orcus have oceans beneath solid icy crusts approximately 100 to 180 km thick.[69] In June 2020, astronomers reported evidence that the dwarf planet Pluto may have had a subsurface ocean, and consequently may have been habitable, when it was first formed.[73][74]
Some planets and natural satellites outside the Solar System are likely to have oceans, including possible water ocean planets similar to Earth in the habitable zone or "liquid-water belt". The detection of such oceans, however, even by spectroscopy, is likely to be extremely difficult and inconclusive.
Theoretical models have been used to predict with high probability that GJ 1214 b, detected by transit, is composed of an exotic form of ice VII making up 75% of its mass,[75] making it an ocean planet.
Other possible candidates are merely speculated on the basis of their mass and position in the habitable zone, though little is actually known of their composition. Some scientists speculate that Kepler-22b may be an "ocean-like" planet.[76] Models have been proposed for Gliese 581 d that could include surface oceans. Gliese 436 b is speculated to have an ocean of "hot ice".[77] Exomoons orbiting planets, particularly gas giants within their parent star's habitable zone, may theoretically have surface oceans.
Terrestrial planets acquire water during their accretion; some of it is buried in the magma ocean, but most goes into a steam atmosphere, and when the atmosphere cools it collapses onto the surface, forming an ocean. There will also be outgassing of water from the mantle as the magma solidifies; this will happen even for planets with a low percentage of their mass composed of water, so "super-Earth exoplanets may be expected to commonly produce water oceans within tens to hundreds of millions of years of their last major accretionary impact."[78]
Oceans, seas, lakes and other bodies of liquids can be composed of liquids other than water, for example the hydrocarbon lakes on Titan. The possibility of seas of nitrogen on Triton was also considered but ruled out.[79] There is evidence that the icy surfaces of the moons Ganymede, Callisto, Europa, Titan and Enceladus are shells floating on oceans of very dense liquid water or water–ammonia.[80][81][82][83][84] Earth is often called the ocean planet because it is 70% covered in water.[85][86] Extrasolar terrestrial planets that are extremely close to their parent star will be tidally locked and so one half of the planet will be a magma ocean.[87] It is also possible that terrestrial planets had magma oceans at some point during their formation as a result of giant impacts.[88] Hot Neptunes close to their star could lose their atmospheres via hydrodynamic escape, leaving behind their cores with various liquids on the surface.[89] Where there are suitable temperatures and pressures, volatile chemicals that might exist as liquids in abundant quantities on planets include ammonia, argon, carbon disulfide, ethane, hydrazine, hydrogen, hydrogen cyanide, hydrogen sulfide, methane, neon, nitrogen, nitric oxide, phosphine, silane, sulfuric acid, and water.[90]
Supercritical fluids, although not liquids, do share various properties with liquids. Underneath the thick atmospheres of the planets Uranus and Neptune, it is expected that these planets are composed of oceans of hot high-density fluid mixtures of water, ammonia and other volatiles.[91] The gaseous outer layers of Jupiter and Saturn transition smoothly into oceans of supercritical hydrogen.[92][93] The atmosphere of Venus is 96.5% carbon dioxide, which is a supercritical fluid at its surface.
On other bodies:
en/4223.html.txt
Oceania (UK: /ˌoʊsiˈɑːniə, ˌoʊʃi-, -ˈeɪn-/, US: /ˌoʊʃiˈæniə/ (listen), /-ˈɑːn-/)[4] is a geographic region that includes Australasia, Melanesia, Micronesia and Polynesia.[5] Spanning the eastern and western hemispheres, Oceania has a land area of 8,525,989 square kilometres (3,291,903 sq mi) and a population of over 41 million. When compared to continents, the region of Oceania is the smallest in land area and the second smallest in population after Antarctica.
Oceania has a diverse mix of economies from the highly developed and globally competitive financial markets of Australia and New Zealand, which rank high in quality of life and human development index,[6][7] to the much less developed economies that belong to countries such as Kiribati and Tuvalu,[8] while also including medium-sized economies of Pacific islands such as Palau, Fiji and Tonga.[9] The largest and most populous country in Oceania is Australia, with Sydney being the largest city of both Oceania and Australia.[10]
The first settlers of Australia, New Guinea, and the large islands just to the east arrived more than 60,000 years ago.[11] Oceania was first explored by Europeans from the 16th century onward. Portuguese navigators, between 1512 and 1526, reached the Tanimbar Islands, some of the Caroline Islands and west Papua New Guinea. On his first voyage in the 18th century, James Cook, who later arrived at the highly developed Hawaiian Islands, went to Tahiti and followed the east coast of Australia for the first time.[12] The Pacific front saw major action during the Second World War, mainly between Allied powers the United States and Australia, and Axis power Japan.
The arrival of European settlers in subsequent centuries resulted in a significant alteration in the social and political landscape of Oceania. In more contemporary times there has been increasing discussion on national flags and a desire by some Oceanians to display their distinct and individual identity.[13] The rock art of Australian Aborigines is the longest continuously practiced artistic tradition in the world.[14] Puncak Jaya in Papua is the highest peak in Oceania at 4,884 metres.[15] Most Oceanian countries have a parliamentary representative democratic multi-party system, with tourism being a large source of income for the Pacific Islands nations.[16]
Definitions of Oceania vary; however, the islands at the geographic extremes of Oceania are generally considered to be the Bonin Islands, a politically integral part of Japan; Hawaii, a state of the United States; Clipperton Island, a possession of France; the Juan Fernández Islands, belonging to Chile; and Macquarie Island, belonging to Australia.[citation needed] (The United Nations has its own geopolitical definition of Oceania, but this consists of discrete political entities, and so excludes the Bonin Islands, Hawaii, Clipperton Island, and the Juan Fernández Islands, along with Easter Island.)[17]
The geographer Conrad Malte-Brun coined the French term Océanie c. 1812.[18] Océanie derives from the Latin word oceanus, and this from the Greek word ὠκεανός (ōkeanós), "ocean". The term Oceania is used because, unlike the other continental groupings, it is the ocean that links the parts of the region together.[19][need quotation to verify]
In some countries (such as Brazil) however, Oceania is still regarded as a continent (Portuguese: continente) in the sense of "one of the parts of the world", and the concept of Australia as a continent does not exist.[23]
Some geographers group the Australian continental plate with other islands in the Pacific into one "quasi-continent" called Oceania.[24]
Indigenous Australians are the original inhabitants of the Australian continent and nearby islands who migrated from Africa to Asia around 70,000 years ago[25] and arrived in Australia around 50,000 years ago.[26] They are believed to be among the earliest human migrations out of Africa.[27] Although they likely migrated to Australia through Southeast Asia they are not demonstrably related to any known Asian or Polynesian population.[28] There is evidence of genetic and linguistic interchange between Australians in the far north and the Austronesian peoples of modern-day New Guinea and the islands, but this may be the result of recent trade and intermarriage.[29]
They reached Tasmania approximately 40,000 years ago by migrating across a land bridge from the mainland that existed during the last ice age.[30] It is believed that the first early human migration to Australia was achieved when this landmass formed part of the Sahul continent, connected to the island of New Guinea via a land bridge.[31] The Torres Strait Islanders are indigenous to the Torres Strait Islands, which are at the northernmost tip of Queensland near Papua New Guinea.[32] The earliest definite human remains found in Australia are that of Mungo Man, which have been dated at about 40,000 years old.[33]
The original inhabitants of the group of islands now named Melanesia were likely the ancestors of the present-day Papuan-speaking people. Migrating from South-East Asia, they appear to have occupied these islands as far east as the main islands in the Solomon Islands archipelago, including Makira and possibly the smaller islands farther to the east.[34]
Particularly along the north coast of New Guinea and in the islands north and east of New Guinea, the Austronesian people, who had migrated into the area somewhat more than 3,000 years ago, came into contact with these pre-existing populations of Papuan-speaking peoples. In the late 20th century, some scholars theorized a long period of interaction, which resulted in many complex changes in genetics, languages, and culture among the peoples.[35]
Micronesia began to be settled several millennia ago, although there are competing theories about the origin and arrival of the first settlers. There are numerous difficulties with conducting archaeological excavations in the islands, due to their size, settlement patterns and storm damage. As a result, much evidence is based on linguistic analysis.[36]
The earliest archaeological traces of civilization have been found on the island of Saipan, dated to 1500 BC or slightly before. The ancestors of the Micronesians settled there over 4,000 years ago. A decentralized chieftain-based system eventually evolved into a more centralized economic and religious culture centered on Yap and Pohnpei.[37] The prehistories of many Micronesian islands such as Yap are not known very well.[38]
The first people of the Northern Mariana Islands navigated to the islands and discovered them at some period between 4000 BC and 2000 BC, coming from South-East Asia. They became known as the Chamorros, and their language was named after them. The ancient Chamorro left a number of megalithic ruins, including Latte stones. The Refaluwasch, or Carolinian, people came to the Marianas in the 1800s from the Caroline Islands. Micronesian colonists gradually settled the Marshall Islands during the 2nd millennium BC, with inter-island navigation made possible using traditional stick charts.[39]
By linguistic, archaeological and human genetic evidence, the Polynesian people are considered a subset of the sea-migrating Austronesian people; tracing Polynesian languages places their prehistoric origins in the Malay Archipelago and, ultimately, in Taiwan. Between about 3000 and 1000 BCE, speakers of Austronesian languages began spreading from Taiwan into Island South-East Asia,[40][41][42] as tribes whose natives were thought to have arrived through South China about 8,000 years ago, reaching the edges of western Micronesia and moving on into Melanesia.
In the archaeological record there are well-defined traces of this expansion which allow the path it took to be followed and dated with some certainty. It is thought that by roughly 1400 BC,[43] "Lapita Peoples", so-named after their pottery tradition, appeared in the Bismarck Archipelago of north-west Melanesia.[44][45]
Easter Islanders claimed that a chief Hotu Matu'a[46] discovered the island in one or two large canoes with his wife and extended family.[47] They are believed to have been Polynesian. Around 1200, Tahitian explorers discovered and began settling the area. This date range is based on glottochronological calculations and on three radiocarbon dates from charcoal that appears to have been produced during forest clearance activities.[48] Moreover, a recent study which included radiocarbon dates from what is thought to be very early material suggests that the island was discovered and settled as recently as 1200.[49]
From 1527 to 1595 a number of other large Spanish expeditions crossed the Pacific Ocean, leading to the arrival in the Marshall Islands and Palau in the North Pacific, as well as Tuvalu, the Marquesas, the Solomon Islands archipelago, the Cook Islands and the Admiralty Islands in the South Pacific.[50]
In the quest for Terra Australis, Spanish explorations in the 17th century, such as the expedition led by the Portuguese navigator Pedro Fernandes de Queirós, sailed to the Pitcairn and Vanuatu archipelagos, and sailed the Torres Strait between Australia and New Guinea, named after navigator Luís Vaz de Torres. Willem Janszoon made the first completely documented European landing in Australia (1606), on Cape York Peninsula.[51] Abel Janszoon Tasman circumnavigated and landed on parts of the Australian continental coast and discovered Van Diemen's Land (now Tasmania), New Zealand in 1642, and the Fiji islands.[52] He was the first known European explorer to reach these islands.[53]
On 23 April 1770 British explorer James Cook made his first recorded direct observation of indigenous Australians at Brush Island near Bawley Point.[54] On 29 April, Cook and crew made their first landfall on the mainland of the continent at a place now known as the Kurnell Peninsula, where Cook made first contact with an Aboriginal tribe known as the Gweagal. His expedition thus became the first recorded Europeans to encounter Australia's eastern coastline.[55]
In 1789 the Mutiny on the Bounty against William Bligh led to several of the mutineers escaping the Royal Navy and settling on Pitcairn Islands, which later became a British colony. Britain also established colonies in Australia in 1788, New Zealand in 1840 and Fiji in 1872, with much of Oceania becoming part of the British Empire. The Gilbert Islands (now known as Kiribati) and the Ellice Islands (now known as Tuvalu) came under Britain's sphere of influence in the late 19th century.[56][57]
French Catholic missionaries arrived on Tahiti in 1834; their expulsion in 1836 caused France to send a gunboat in 1838. In 1842, Tahiti and Tahuata were declared a French protectorate, to allow Catholic missionaries to work undisturbed. The capital of Papeetē was founded in 1843.[58] On 24 September 1853, under orders from Napoleon III, Admiral Febvrier Despointes took formal possession of New Caledonia and Port-de-France (Nouméa) was founded 25 June 1854.[59]
The Spanish explorer Alonso de Salazar landed in the Marshall Islands in 1529. They were named by Krusenstern, after English explorer John Marshall, who visited them together with Thomas Gilbert in 1788, en route from Botany Bay to Canton (two ships of the First Fleet). In 1905 the British government transferred some administrative responsibility over south-east New Guinea to Australia (which renamed the area "Territory of Papua"); and in 1906, transferred all remaining responsibility to Australia. The Marshall Islands were claimed by Spain in 1874. Germany established colonies in New Guinea in 1884, and Samoa in 1900. The United States also expanded into the Pacific, beginning with Baker Island and Howland Island in 1857, and with Hawaii becoming a U.S. territory in 1898. Disagreements between the US, Germany and UK over Samoa led to the Tripartite Convention of 1899.[60]
One of the first land offensives in Oceania was the Occupation of German Samoa in August 1914 by New Zealand forces. The campaign to take Samoa ended without bloodshed after over 1,000 New Zealanders landed on the German colony. Australian forces attacked German New Guinea in September 1914. A company of Australians and a British warship besieged the Germans and their colonial subjects, ending with a German surrender.[61]
The attack on Pearl Harbor by the Japanese Imperial General Headquarters[62][63] was a surprise military strike conducted by the Imperial Japanese Navy against the United States naval base at Pearl Harbor, Hawaii, on the morning of 7 December 1941. The attack led to the United States' entry into World War II. The Japanese subsequently invaded New Guinea, the Solomon Islands and other Pacific islands. The Japanese were turned back at the Battle of the Coral Sea and the Kokoda Track campaign before they were finally defeated in 1945. Some of the most prominent Oceanic battlegrounds were the Battle of Bita Paka, the Solomon Islands campaign, the Air raids on Darwin, the Kokoda Track, and the Borneo campaign.[64][65] The United States fought the Battle of Guam from July 21 to August 10, 1944, to recapture the island from Japanese military occupation.[66]
Australia and New Zealand became dominions in the 20th century, adopting the Statute of Westminster Act in 1942 and 1947 respectively. In 1946, Polynesians were granted French citizenship and the islands' status was changed to an overseas territory; the islands' name was changed in 1957 to Polynésie Française (French Polynesia). Hawaii became a U.S. state in 1959. Fiji and Tonga became independent in 1970. On 1 May 1979, in recognition of the evolving political status of the Marshall Islands, the United States recognized the constitution of the Marshall Islands and the establishment of the Government of the Republic of the Marshall Islands. The South Pacific Forum was founded in 1971, which became the Pacific Islands Forum in 2000.[61]
Oceania was originally conceived as the lands of the Pacific Ocean, stretching from the Strait of Malacca to the coast of the Americas. It comprised four regions: Polynesia, Micronesia, Malaysia (now called the Malay Archipelago), and Melanesia.[67] Today, parts of three geological continents are included in the term "Oceania": Eurasia, Australia, and Zealandia, as well the non-continental volcanic islands of the Philippines, Wallacea, and the open Pacific.
Oceania extends to New Guinea in the west, the Bonin Islands in the northwest, the Hawaiian Islands in the northeast, Rapa Nui and Sala y Gómez Island in the east, and Macquarie Island in the south. Not included are the Pacific islands of Taiwan, the Ryukyu Islands, the Japanese archipelago, and the Maluku Islands, all on the margins of Asia, and the Aleutian Islands of North America. At its periphery, Oceania sprawls from 28 degrees north at the Bonin Islands in the northern hemisphere to 55 degrees south at Macquarie Island in the southern hemisphere.[68]
Oceanian islands are of four basic types: continental islands, high islands, coral reefs and uplifted coral platforms. High islands are of volcanic origin, and many contain active volcanoes. Among these are Bougainville, Hawaii, and Solomon Islands.[69]
Oceania is one of eight terrestrial ecozones, which constitute the major ecological regions of the planet. Related to these concepts are Near Oceania, that part of western Island Melanesia which has been inhabited for tens of millennia, and Remote Oceania which is more recently settled. Although the majority of the Oceanian islands lie in the South Pacific, a few of them are not restricted to the Pacific Ocean – Kangaroo Island and Ashmore and Cartier Islands, for instance, are situated in the Southern Ocean and Indian Ocean, respectively, and Tasmania's west coast faces the Southern Ocean.[70]
The coral reefs of the South Pacific are low-lying structures that have built up on basaltic lava flows under the ocean's surface. One of the most dramatic is the Great Barrier Reef off northeastern Australia with chains of reef patches. A second island type formed of coral is the uplifted coral platform, which is usually slightly larger than the low coral islands. Examples include Banaba (formerly Ocean Island) and Makatea in the Tuamotu group of French Polynesia.[71][72]
Micronesia, which lies north of the equator and west of the International Date Line, includes the Mariana Islands in the northwest, the Caroline Islands in the center, the Marshall Islands to the east and the islands of Kiribati in the southeast.[73][74]
Melanesia, to the southwest, includes New Guinea, the world's second largest island after Greenland and by far the largest of the Pacific islands. The other main Melanesian groups from north to south are the Bismarck Archipelago, the Solomon Islands archipelago, Santa Cruz, Vanuatu, Fiji and New Caledonia.[75]
Polynesia, stretching from Hawaii in the north to New Zealand in the south, also encompasses Tuvalu, Tokelau, Samoa, Tonga and the Kermadec Islands to the west, the Cook Islands, Society Islands and Austral Islands in the center, and the Marquesas Islands, Tuamotu, Mangareva Islands, and Easter Island to the east.[76]
Australasia comprises Australia, New Zealand, the island of New Guinea, and neighbouring islands in the Pacific Ocean. Along with India, most of Australasia lies on the Indo-Australian Plate, with Australasia occupying the plate's southern area. The region is flanked by the Indian Ocean to the west and the Southern Ocean to the south.[77][78]
The Pacific Plate, which makes up most of Oceania, is an oceanic tectonic plate that lies beneath the Pacific Ocean. At 103 million square kilometres (40,000,000 sq mi), it is the largest tectonic plate. The plate contains an interior hot spot forming the Hawaiian Islands.[79] It is composed almost entirely of oceanic crust.[80] Its oldest crust, now disappearing through the plate tectonic cycle, dates to the early Cretaceous (145 to 137 million years ago).[81]
Australia, being part of the Indo-Australian plate, is the lowest, flattest, and oldest landmass on Earth[82] and it has had a relatively stable geological history. Geological forces such as tectonic uplift of mountain ranges or clashes between tectonic plates occurred mainly in Australia's early history, when it was still a part of Gondwana. Australia is situated in the middle of the tectonic plate, and therefore currently has no active volcanism.[83]
The geology of New Zealand is noted for its volcanic activity, earthquakes and geothermal areas because of its position on the boundary of the Australian and Pacific Plates. Much of the basement rock of New Zealand was once part of the super-continent of Gondwana, along with South America, Africa, Madagascar, India, Antarctica and Australia. The rocks that now form the continent of Zealandia were nestled between Eastern Australia and Western Antarctica.[84]
The Australia-New Zealand continental fragment of Gondwana split from the rest of Gondwana in the late Cretaceous (95–90 Ma). By 75 Ma, Zealandia was essentially separate from Australia and Antarctica, although only shallow seas might have separated Zealandia and Australia in the north. The Tasman Sea, and part of Zealandia, then locked together with Australia to form the Australian Plate (40 Ma), and a new plate boundary was created between the Australian Plate and the Pacific Plate.
Most islands in the Pacific are high islands (volcanic islands), such as Easter Island, American Samoa and Fiji, with peaks of up to 1,300 m rising abruptly from the shore.[85] The Northwestern Hawaiian Islands were formed approximately 7 to 30 million years ago, as shield volcanoes over the same volcanic hotspot that formed the Emperor Seamounts to the north and the Main Hawaiian Islands to the south.[86] Hawaii's tallest mountain, Mauna Kea, is 4,205 m (13,796 ft) above mean sea level.[87]
The most diverse country of Oceania when it comes to the environment is Australia, with tropical rainforests in the north-east, mountain ranges in the south-east, south-west and east, and dry desert in the centre.[88] Desert or semi-arid land commonly known as the outback makes up by far the largest portion of land.[89] The coastal uplands and a belt of Brigalow grasslands lie between the coast and the mountains, while inland of the dividing range are large areas of grassland.[90] The northernmost point of the east coast is the tropical-rainforested Cape York Peninsula.[91][92][93][94][95]
Prominent features of the Australian flora are adaptations to aridity and fire, which include scleromorphy and serotiny. These adaptations are common in species from the large and well-known families Proteaceae (Banksia), Myrtaceae (Eucalyptus – gum trees), and Fabaceae (Acacia – wattle). The flora of Fiji, Solomon Islands, Vanuatu and New Caledonia is tropical dry forest, with tropical vegetation that includes palm trees, Premna protrusa, Psydrax odorata, Gyrocarpus americanus and Derris trifoliata.[96]
New Zealand's landscape ranges from the fjord-like sounds of the southwest to the tropical beaches of the far north. South Island is dominated by the Southern Alps. There are 18 peaks of more than 3000 metres (9800 ft) in the South Island. All summits over 2,900 m are within the Southern Alps, a chain that forms the backbone of the South Island; the highest peak of which is Aoraki / Mount Cook, at 3,754 metres (12,316 ft). Earthquakes are common, though usually not severe, averaging 3,000 per year.[97] There is a wide variety of native trees, adapted to all the various micro-climates in New Zealand.[98]
In Hawaii, one endemic plant, Brighamia, now requires hand-pollination because its natural pollinator is presumed to be extinct.[99] The two species of Brighamia – B. rockii and B. insignis – are represented in the wild by around 120 individual plants. To ensure these plants set seed, biologists rappel down 910-metre (3,000 ft) cliffs to brush pollen onto their stigmas.[100]
The aptly named Pacific kingfisher is found in the Pacific Islands,[102] as are the red-vented bulbul,[103] Polynesian starling,[104] brown goshawk,[105] Pacific swallow[106] and the cardinal myzomela, among others.[107] Birds breeding on Pitcairn include the fairy tern, common noddy and red-tailed tropicbird. The Pitcairn reed warbler, endemic to Pitcairn Island, was added to the endangered species list in 2008.[108]
Native to Hawaii is the Hawaiian crow, which has been extinct in the wild since 2002.[109] The brown tree snake is native to the northern and eastern coasts of Australia, Papua New Guinea, Guam and Solomon Islands.[110] Native to Australia, New Guinea and proximate islands are birds of paradise, honeyeaters, Australasian treecreepers, Australasian robins, kingfishers, butcherbirds and bowerbirds.[111][112]
A unique feature of Australia's fauna is the relative scarcity of native placental mammals and the dominance of the marsupials – a group of mammals that raise their young in a pouch, including the macropods, possums and dasyuromorphs. The passerines of Australia, also known as songbirds or perching birds, include wrens, the magpie group, thornbills, corvids, pardalotes and lyrebirds.[113] Predominant bird species in the country include the Australian magpie, Australian raven, the pied currawong, crested pigeons and the laughing kookaburra.[114] The koala, emu, platypus and kangaroo are national animals of Australia,[115] and the Tasmanian devil is also one of the well-known animals in the country.[116] The goanna is a predatory lizard native to the Australian mainland.[117]
The birds of New Zealand evolved into an avifauna that included a large number of endemic species. As an island archipelago, New Zealand accumulated bird diversity, and when Captain James Cook arrived in the 1770s he noted that the bird song was deafening. The mix includes species with unusual biology, such as the kakapo, the world's only flightless, nocturnal, lek-breeding parrot, but also many species that are similar to those of neighbouring land areas. Some of the better-known and distinctive bird species in New Zealand are the kiwi, kea, takahe, kakapo, mohua, tui and the bellbird.[118] The tuatara is a notable reptile endemic to New Zealand.[119]
The Pacific Islands are dominated by tropical rainforest and tropical savanna climates. In the tropical and subtropical Pacific, the El Niño Southern Oscillation (ENSO) affects weather conditions.[120] In the tropical western Pacific, the monsoon and the related wet season during the summer months contrast with dry winds in the winter which blow over the ocean from the Asian landmass.[121] November is the only month in which all the tropical cyclone basins are active.[122]
To the southwest of the region, in the Australian landmass, the climate is mostly desert or semi-arid, with the southern coastal corners having a temperate climate, such as an oceanic and humid subtropical climate on the east coast and a Mediterranean climate in the west. The northern parts of the country have a tropical climate.[123] Snow falls frequently on the highlands near the east coast, in the states of Victoria, New South Wales and Tasmania, and in the Australian Capital Territory.[124]
Most regions of New Zealand belong to the temperate zone with a maritime climate (Köppen climate classification: Cfb) characterised by four distinct seasons. Conditions vary from extremely wet on the West Coast of the South Island to almost semi-arid in Central Otago and subtropical in Northland.[125][126] Snow falls in New Zealand's South Island and at higher altitudes in the North Island. It is extremely rare at sea level in the North Island.[127]
Hawaii, although in the tropics, experiences many different climates, depending on latitude and geography. The island of Hawaii, for example, hosts four of the five Köppen climate groups – tropical, arid, temperate and polar – on an area as small as 10,430 km2 (4,028 sq mi). The Hawaiian Islands receive most of their precipitation during the winter months (October to April).[128] A few islands in the northwest, such as Guam, are susceptible to typhoons in the wet season.[129]
The highest recorded temperature in Oceania occurred in Oodnadatta, South Australia (2 January 1960), where the temperature reached 50.7 °C (123.3 °F).[130] The lowest temperature ever recorded in Oceania was −25.6 °C (−14.1 °F), at Ranfurly in Otago in 1903, with a more recent low of −21.6 °C (−6.9 °F) recorded in 1995 in nearby Ophir.[131] Pohnpei of the Senyavin Islands in Micronesia is the wettest settlement in Oceania, and one of the wettest places on earth, with annual recorded rainfall exceeding 7,600 mm (300 in) in certain mountainous locations.[132] The Big Bog on the island of Maui receives an average of 10,271 mm (404.4 in) of rain each year, making it one of the wettest locations on Earth.[133]
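The paired Celsius and Fahrenheit figures above can be cross-checked with the standard conversion formula, °F = °C × 9/5 + 32. A minimal sketch (the function name is illustrative, not from any source):

```python
def c_to_f(celsius: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# Oodnadatta record high: 50.7 °C
print(round(c_to_f(50.7), 1))   # 123.3
# Ranfurly record low: -25.6 °C
print(round(c_to_f(-25.6), 1))  # -14.1
# Ophir, 1995: -21.6 °C
print(round(c_to_f(-21.6), 1))  # -6.9
```

Each result matches the Fahrenheit value cited in the paragraph above.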
Australia
Hawaii
New Zealand
Papua New Guinea
Australasia and adjacent islands
The linked map below shows the exclusive economic zones (EEZs) of the islands of Oceania and neighbouring areas, as a guide to the following table (there are few land boundaries that can be drawn on a map of the Pacific at this scale).
The demographic table below shows the subregions and countries of geopolitical Oceania. The countries and territories in this table are categorised according to the scheme for geographic subregions used by the United Nations. The information shown follows sources in cross-referenced articles; where sources differ, provisos have been clearly indicated. These territories and regions are subject to various additional categorisations, depending on the source and purpose of each description.
Melbourne
Perth
The predominant religion in Oceania is Christianity (73%).[150][151] A 2011 survey found that 92% in Melanesia,[150] 93% in Micronesia[150] and 96% in Polynesia described themselves as Christians.[150] Traditional religions are often animist, and prevalent among traditional tribes is the belief in spirits (masalai in Tok Pisin) representing natural forces.[152] In the 2018 census, 37% of New Zealanders affiliated themselves with Christianity and 48% declared no religion.[153] In the 2016 Census, 52% of the Australian population declared some variety of Christianity and 30% stated "no religion".[154]
In recent Australian and New Zealand censuses, large proportions of the population say they belong to "no religion" (which includes atheism, agnosticism, deism and secular humanism). In Tonga, everyday life is heavily influenced by Polynesian traditions and especially by the Christian faith. The Ahmadiyya mosque in Marshall Islands is the only mosque in Micronesia.[155] Another mosque in Tuvalu belongs to the same sect. The Bahá'í House of Worship in Tiapapata, Samoa, is one of seven Houses of Worship administered by the Bahá'í Faith.
Other religions in the region include Islam, Buddhism and Hinduism, which are prominent minority religions in Australia and New Zealand. Judaism, Sikhism and Jainism are also present. Sir Isaac Isaacs was the first Australian-born Governor-General of Australia and the first Jewish vice-regal representative in the British Empire.[156] The Prince Philip Movement is followed around Yaohnanen village on the southern island of Tanna in Vanuatu.
Native languages of Oceania fall into three major geographic groups: the Austronesian languages, the Papuan languages, and the languages of Aboriginal Australia.
Colonial languages include English in Australia, New Zealand, Hawaii and many other territories; French in New Caledonia, French Polynesia, Wallis and Futuna, and Vanuatu; Japanese in the Bonin Islands; and Spanish in the Galápagos Islands and on Easter Island. There are also creoles formed from the interaction of Malay or the colonial languages with indigenous languages, such as Tok Pisin, Bislama, Chavacano, various Malay trade and creole languages, Hawaiian Pidgin, Norfuk and Pitkern. Contact between Austronesian and Papuan languages has in several instances resulted in mixed languages such as Maisin.
Immigrants brought their own languages to the region, such as Mandarin, Italian, Arabic, Polish, Hindi, German, Spanish, Korean, Cantonese and Greek, among others, namely in Australia and New Zealand,[157] or Fiji Hindi in Fiji.
The most multicultural areas in Oceania, which have a high degree of immigration, are Australia, New Zealand and Hawaii. Since 1945, more than 7 million people have settled in Australia. From the late 1970s, there was a significant increase in immigration from Asian and other non-European countries, making Australia a multicultural country.[158]
Sydney is the most multicultural city in Oceania, with more than 250 different languages spoken and about 40 percent of residents speaking a language other than English at home.[159] Furthermore, 36 percent of the population reported having been born overseas, with the top countries of birth including Italy, Lebanon, Vietnam and Iraq.[160][161] Melbourne is also notably multicultural, having the largest Greek-speaking population outside of Europe[162] and the second-largest Asian population in Australia after Sydney.[163][164][165]
European migration to New Zealand provided a major influx following the signing of the Treaty of Waitangi in 1840. Subsequent immigration has been chiefly from the British Isles, but also from continental Europe, the Pacific, the Americas and Asia.[166][167] Auckland is home to over half (51.6 percent) of New Zealand's overseas-born population, including 72 percent of the country's Pacific Island-born population, 64 percent of its Asian-born population, and 56 percent of its Middle Eastern and African-born population.[168]
Hawaii is a majority-minority state.[169] Chinese workers on Western trading ships settled in Hawaii starting in 1789. In 1820, the first American missionaries arrived to preach Christianity and teach the Hawaiians Western ways.[170] As of 2015, a large proportion of Hawaii's population have Asian ancestry – especially Filipino, Japanese, Korean and Chinese. Many are descendants of immigrants brought to work on the sugarcane plantations in the mid-to-late 19th century. Almost 13,000 Portuguese immigrants had arrived by 1899; they also worked on the sugarcane plantations.[171] Puerto Rican immigration to Hawaii began in 1899, when Puerto Rico's sugar industry was devastated by two hurricanes, causing a worldwide shortage of sugar and a huge demand for sugar from Hawaii.[172]
Between 2001 and 2007 Australia's Pacific Solution policy transferred asylum seekers to several Pacific nations, including the Nauru detention centre. Australia, New Zealand and other nations took part in the Regional Assistance Mission to Solomon Islands between 2003 and 2017 after a request for aid.[173]
Archaeology, linguistics, and existing genetic studies indicate that Oceania was settled by two major waves of migration. The first migration (Australo-Melanesian) took place approximately 40 to 80 thousand years ago, and these migrants, Papuans, colonised much of Near Oceania. Approximately 3.5 thousand years ago, a second expansion of Austronesian speakers arrived in Near Oceania, and the descendants of these people spread to the far corners of the Pacific, colonising Remote Oceania.[174]
Mitochondrial DNA (mtDNA) studies quantify the magnitude of the Austronesian expansion and demonstrate the homogenising effect of this expansion. With regards to Papuan influence, autochthonous haplogroups support the hypothesis of a long history in Near Oceania, with some lineages suggesting a time depth of 60 thousand years. Santa Cruz, a population located in Remote Oceania, is an anomaly with extreme frequencies of autochthonous haplogroups of Near Oceanian origin.[174]
Large areas of New Guinea are unexplored by scientists and anthropologists due to extensive forestation and mountainous terrain. Known indigenous tribes in Papua New Guinea have very little contact with local authorities aside from the authorities knowing who they are. Many remain preliterate and, at the national or international level, the names of tribes and information about them is extremely hard to obtain. The Indonesian provinces of Papua and West Papua on the island of New Guinea are home to an estimated 44 uncontacted tribal groups.[175]
Australia and New Zealand are the only developed nations in the region; the economy of Australia is by far the largest and most dominant in the region and one of the largest in the world. Australia's per-capita GDP is higher than that of the UK, Canada, Germany, and France in terms of purchasing power parity.[176] New Zealand is also one of the most globalised economies and depends greatly on international trade.[177][178]
The Australian Securities Exchange in Sydney is the largest stock exchange in Australia and in the South Pacific.[179] New Zealand is the 53rd-largest national economy in the world measured by nominal gross domestic product (GDP) and 68th-largest in the world measured by purchasing power parity (PPP). In 2012, Australia was the 12th largest national economy by nominal GDP and the 19th-largest measured by PPP-adjusted GDP.[180]
Mercer Quality of Living Survey ranks Sydney tenth in the world in terms of quality of living,[181] making it one of the most livable cities.[182] It is classified as an Alpha+ World City by GaWC.[183][184] Melbourne also ranked highly in the world's most liveable city list,[185] and is a leading financial centre in the Asia-Pacific region.[186][187] Auckland and Wellington, in New Zealand, are frequently ranked among the world's most liveable cities with Auckland being ranked 3rd according to the Mercer Quality of Living Survey.[188][189]
A large share of people living in Australia and, to a lesser extent, New Zealand also work in the mining, electrical and manufacturing sectors. Australia boasts the largest amount of manufacturing in the region, producing cars, electrical equipment, machinery and clothes.
The overwhelming majority of people living in the Pacific islands work in the service industry, which includes tourism, education and financial services. Oceania's largest export markets include Japan, China, the United States and South Korea. The smallest Pacific nations rely on trade with Australia, New Zealand and the United States for exporting goods and for accessing other products. Australia and New Zealand's trading arrangements are known as Closer Economic Relations. Australia and New Zealand, along with other countries, are members of Asia-Pacific Economic Cooperation (APEC) and the East Asia Summit (EAS), which may develop into trade blocs in the future, particularly the EAS.
The main produce from the Pacific is copra or coconut, but timber, beef, palm oil, cocoa, sugar and ginger are also commonly grown across the tropics of the Pacific. Fishing provides a major industry for many of the smaller nations in the Pacific, although many fishing areas are exploited by other larger countries, namely Japan. Natural resources, such as lead, zinc, nickel and gold, are mined in Australia and Solomon Islands. Oceania's largest export markets include Japan, China, the United States, India, South Korea and the European Union.
Endowed with forest, mineral, and fish resources, Fiji is one of the most developed of the Pacific island economies, though it remains a developing country with a large subsistence agriculture sector.[190] Agriculture accounts for 18% of gross domestic product, although it employed some 70% of the workforce as of 2001. Sugar exports and the growing tourist industry are the major sources of foreign exchange. Sugar cane processing makes up one-third of industrial activity. Coconuts, ginger, and copra are also significant.
The history of Hawaii's economy can be traced through a succession of dominant industries: sandalwood,[191] whaling,[192] sugarcane, pineapple, the military, tourism and education.[193] Hawaiian exports include food and clothing. These industries play a small role in the Hawaiian economy, due to the shipping distance to viable markets, such as the West Coast of the contiguous U.S. The state's food exports include coffee, macadamia nuts, pineapple, livestock, sugarcane and honey.[194] As of 2015, Honolulu was ranked high on world livability rankings, and was also ranked as the 2nd safest city in the U.S.[195][196]
Tourists mostly come from Japan, the United Kingdom and the United States. Fiji currently draws almost half a million tourists each year, more than a quarter of them from Australia. Since 1995, tourism has contributed $1 billion or more a year to Fiji's economy, although the Fijian government likely underestimates these figures because of the informal economy within the tourism industry.
Vanuatu is widely recognised as one of the premier vacation destinations for scuba divers wishing to explore coral reefs of the South Pacific region. Tourism has been promoted, in part, by Vanuatu being the site of several reality-TV shows. The ninth season of the reality TV series Survivor was filmed on Vanuatu, entitled Survivor: Vanuatu – Islands of Fire. Two years later, Australia's Celebrity Survivor was filmed at the same location used by the US version.[197]
Tourism in Australia is an important component of the Australian economy. In the financial year 2014/15, tourism represented 3% of Australia's GDP contributing A$47.5 billion to the national economy.[198] In 2015, there were 7.4 million visitor arrivals.[199] Popular Australian destinations include the Sydney Harbour (Sydney Opera House, Sydney Harbour Bridge, Royal Botanic Garden, etc.), Gold Coast (theme parks such as Warner Bros. Movie World, Dreamworld and Sea World), Walls of Jerusalem National Park and Mount Field National Park in Tasmania, Royal Exhibition Building in Melbourne, the Great Barrier Reef in Queensland, The Twelve Apostles in Victoria, Uluru (Ayers Rock) and the Australian outback.[200]
Tourism in New Zealand contributed NZ$7.3 billion (4%) of the country's GDP in 2013, as well as directly supporting 110,800 full-time equivalent jobs (nearly 6% of New Zealand's workforce). International tourist spending accounted for 16% of New Zealand's export earnings (nearly NZ$10 billion). International and domestic tourism contributes, in total, NZ$24 billion to New Zealand's economy every year. Tourism New Zealand, the country's official tourism agency, actively promotes the country as a destination worldwide.[201] Milford Sound in South Island is acclaimed as New Zealand's most famous tourist destination.[202]
In 2003 alone, according to state government data, there were over 6.4 million visitors to the Hawaiian Islands, with expenditures of over $10.6 billion.[203] Due to the mild year-round weather, tourist travel is popular throughout the year. In 2011, arrivals in Hawaii from Canada, Australia and China increased 13%, 24% and 21% respectively over 2010.[204]
Australia is a federal parliamentary constitutional monarchy[205] with Elizabeth II at its apex as the Queen of Australia, a role that is distinct from her position as monarch of the other Commonwealth realms. The Queen is represented in Australia by the Governor-General at the federal level and by the Governors at the state level, who by convention act on the advice of her ministers.[206][207] There are two major political groups that usually form government, federally and in the states: the Australian Labor Party and the Coalition which is a formal grouping of the Liberal Party and its minor partner, the National Party.[208][209] Within Australian political culture, the Coalition is considered centre-right and the Labor Party is considered centre-left.[210] The Australian Defence Force is by far the largest military force in Oceania.[211]
New Zealand is a constitutional monarchy with a parliamentary democracy,[212] although its constitution is not codified.[213] Elizabeth II is the Queen of New Zealand and the head of state.[214] The Queen is represented by the Governor-General, whom she appoints on the advice of the Prime Minister.[215] The New Zealand Parliament holds legislative power and consists of the Queen and the House of Representatives.[216] A parliamentary general election must be called no later than three years after the previous election.[217] New Zealand is identified as one of the world's most stable and well-governed states,[218][219] with high government transparency and among the lowest perceived levels of corruption.[220]
In Samoan politics, the Prime Minister of Samoa is the head of government. The 1960 constitution, which formally came into force with independence from New Zealand in 1962, builds on the British pattern of parliamentary democracy, modified to take account of Samoan customs. The national government (malo) generally controls the legislative assembly.[221] Politics of Tonga takes place in a framework of a constitutional monarchy, whereby the King is the Head of State.
Fiji has a multiparty system with the Prime Minister of Fiji as head of government. The executive power is exercised by the government. Legislative power is vested in both the government and the Parliament of Fiji. Fiji's Head of State is the President. He is elected by Parliament of Fiji after nomination by the Prime Minister or the Leader of the Opposition, for a three-year term.
In the politics of Papua New Guinea, the Prime Minister is the head of government. Kiribati, a parliamentary regime with a multi-party system, has a President who is both head of state and head of government.
New Caledonia remains an integral part of the French Republic. Inhabitants of New Caledonia are French citizens and carry French passports. They take part in the legislative and presidential French elections. New Caledonia sends two representatives to the French National Assembly and two senators to the French Senate.
Hawaii is dominated by the Democratic Party. As codified in the Constitution of Hawaii, there are three branches of government: executive, legislative and judicial. The governor is elected statewide. The lieutenant governor acts as the Secretary of State. The governor and lieutenant governor oversee twenty agencies and departments from offices in the State Capitol.
Since 1788, the primary influence behind Australian culture has been Anglo-Celtic Western culture, with some Indigenous influences.[223][224] The divergence and evolution that has occurred in the ensuing centuries has resulted in a distinctive Australian culture.[225][226] Since the mid-20th century, American popular culture has strongly influenced Australia, particularly through television and cinema.[227] Other cultural influences come from neighbouring Asian countries, and through large-scale immigration from non-English-speaking nations.[227][228] The Story of the Kelly Gang (1906), the world's first feature-length film, spurred a boom in Australian cinema during the silent film era.[229][230] The Australian Museum in Sydney and the National Gallery of Victoria in Melbourne are the oldest and largest museums in Oceania.[231][232] Sydney's New Year's Eve celebrations are the largest in Oceania.[233]
Australia is also known for its cafe and coffee culture in urban centres.[234] Australia and New Zealand were responsible for the flat white coffee. Most Indigenous Australian tribal groups subsisted on a simple hunter-gatherer diet of native fauna and flora, otherwise called bush tucker.[235][236] The first settlers introduced British food to the continent, much of which is now considered typical Australian food, such as the Sunday roast.[237][238] Multicultural immigration transformed Australian cuisine; post-World War II European migrants, particularly from the Mediterranean, helped to build a thriving Australian coffee culture, and the influence of Asian cultures has led to Australian variants of their staple foods, such as the Chinese-inspired dim sim and Chiko Roll.[239]
The music of Hawaii includes traditional and popular styles, ranging from native Hawaiian folk music to modern rock and hip hop. Hawaii's musical contributions to the music of the United States are out of proportion to the state's small size. Styles such as slack-key guitar are well known worldwide, while Hawaiian-tinged music is a frequent part of Hollywood soundtracks. Hawaii also made a major contribution to country music with the introduction of the steel guitar.[240] The Hawaiian religion is polytheistic and animistic, with a belief in many deities and spirits, including the belief that spirits are found in non-human beings and objects such as animals, the waves, and the sky.[241]
The cuisine of Hawaii is a fusion of many foods brought by immigrants to the Hawaiian Islands, including the cuisine of the earliest Polynesians and Native Hawaiians, along with foods of American, Chinese, Filipino, Japanese, Korean, Polynesian and Portuguese origin. Native Hawaiian musician and Hawaiian sovereignty activist Israel Kamakawiwoʻole, famous for his medley of "Somewhere Over the Rainbow/What a Wonderful World", was named "The Voice of Hawaii" by NPR in 2010 in its 50 great voices series.[242]
The culture of New Zealand is essentially Western, influenced by the cultural input of the indigenous Māori and the various waves of multi-ethnic migration that followed the British colonisation of New Zealand. Māori people constitute one of the major cultures of Polynesia. The country's culture has been broadened by globalisation and immigration from the Pacific Islands, East Asia and South Asia.[244] New Zealand marks two national days of remembrance, Waitangi Day and ANZAC Day, and also celebrates holidays during or close to the anniversaries of the founding dates of each province.[245]
The New Zealand recording industry began to develop from 1940 onwards and many New Zealand musicians have obtained success in Britain and the United States.[246] Some artists release Māori language songs and the Māori tradition-based art of kapa haka (song and dance) has made a resurgence.[247] The country's diverse scenery and compact size, plus government incentives,[248] have encouraged some producers to film big budget movies in New Zealand, including Avatar, The Lord of the Rings, The Hobbit, The Chronicles of Narnia, King Kong and The Last Samurai.[249]
The national cuisine has been described as Pacific Rim, incorporating the native Māori cuisine and diverse culinary traditions introduced by settlers and immigrants from Europe, Polynesia and Asia.[250] New Zealand yields produce from land and sea – most crops and livestock, such as maize, potatoes and pigs, were gradually introduced by the early European settlers.[251] Distinctive ingredients or dishes include lamb, salmon, koura (crayfish),[252] dredge oysters, whitebait, paua (abalone), mussels, scallops, pipi and tuatua (both are types of New Zealand shellfish),[253] kumara (sweet potato), kiwifruit, tamarillo and pavlova (considered a national dish).[254][250]
The fa'a Samoa, or traditional Samoan way, remains a strong force in Samoan life and politics. Despite centuries of European influence, Samoa maintains its historical customs, social and political systems, and language. Cultural customs such as the Samoa 'ava ceremony are significant and solemn rituals at important occasions including the bestowal of matai chiefly titles. Items of great cultural value include the finely woven 'ie toga.
The Samoan word for dance is siva, which consists of unique gentle movements of the body in time to music and which tell a story. Samoan male dances can be more snappy.[255] The sasa is also a traditional dance where rows of dancers perform rapid synchronised movements in time to the rhythm of wooden drums (pate) or rolled mats. Another dance performed by males is called the fa'ataupati or the slap dance, creating rhythmic sounds by slapping different parts of the body. As with other Polynesian cultures (Hawaiian, Tahitian and Māori) with significant and unique tattoos, Samoans have two gender specific and culturally significant tattoos.[256]
The artistic creations of native Oceanians vary greatly throughout the cultures and regions. The subject matter typically carries themes of fertility or the supernatural.
|
220 |
+
Petroglyphs, tattooing, painting, wood carving, stone carving and textile work are other common art forms.[257] Art of Oceania properly encompasses the artistic traditions of the people indigenous to Australia and the Pacific Islands.[258] These early peoples lacked a writing system and made works on perishable materials, so few records of them exist from this time.[259]
Indigenous Australian rock art is the oldest and richest unbroken tradition of art in the world, dating as far back as 60,000 years and spread across hundreds of thousands of sites.[260][261] These rock paintings served several functions. Some were used in magic, others to increase animal populations for hunting, while some were simply for amusement.[262] Sculpture in Oceania first appears on New Guinea as a series of stone figures found throughout the island, but mostly in mountainous highlands. Establishing a chronological timeframe for these pieces in most cases is difficult, but one has been dated to 1500 BC.[263]
By 1500 BC the Lapita culture, descendants of the second wave, would begin to expand and spread into the more remote islands. At around the same time, art began to appear in New Guinea, including the earliest examples of sculpture in Oceania. Starting around 1100 AD, the people of Easter Island would begin construction of nearly 900 moai (large stone statues). At about 1200 AD, the people of Pohnpei, a Micronesian island, would embark on another megalithic construction, building Nan Madol, a city of artificial islands and a system of canals.[264] Hawaiian art includes wood carvings, feather work, petroglyphs, bark cloth (called kapa in Hawaiian and tapa elsewhere in the Pacific) and tattoos. Native Hawaiians had neither metal nor woven cloth.[265]
Rugby union is one of the region's most prominent sports,[266] and is the national sport of New Zealand, Samoa, Fiji and Tonga. The most popular sport in Australia is cricket and the most popular sport among Australian women is netball, while Australian rules football is the most popular sport in terms of spectatorship and television ratings.[267][268][269] Rugby is the most popular sport among New Zealanders.[270] In Papua New Guinea, the most popular sport is rugby league.[271]
Australian rules football is the national sport in Nauru[272] and is the most popular football code in Australia in terms of attendance.[273] It has a large following in Papua New Guinea, where it is the second most popular sport after rugby league.[274][275][276] It attracts significant attention across New Zealand and the Pacific Islands.[277] Fiji's rugby sevens team is one of the most successful in the world, as is New Zealand's.[278]
Currently Vanuatu is the only country in Oceania to call association football its national sport. However, it is also the most popular sport in Kiribati, Solomon Islands and Tuvalu, and has a significant (and growing) popularity in Australia. In 2006 Australia joined the Asian Football Confederation and qualified for the 2010, 2014 and 2018 World Cups as an Asian entrant.[279]
Australia has hosted two Summer Olympics: Melbourne 1956 and Sydney 2000. Australia has also hosted five editions of the Commonwealth Games (Sydney 1938, Perth 1962, Brisbane 1982, Melbourne 2006, Gold Coast 2018), while New Zealand has hosted the Commonwealth Games three times: Auckland 1950, Christchurch 1974 and Auckland 1990. The Pacific Games (formerly known as the South Pacific Games) is a multi-sport event, much like the Olympics but on a much smaller scale, with participation exclusively from countries around the Pacific. It is held every four years and began in 1963.
Australia and New Zealand competed in the games for the first time in 2015.[280]
en/4224.html.txt
ADDED
@@ -0,0 +1,199 @@
The Indian Ocean is the third-largest of the world's oceanic divisions, covering 70,560,000 km2 (27,240,000 sq mi) or 19.8% of the water on Earth's surface.[5] It is bounded by Asia to the north, Africa to the west and Australia to the east. To the south it is bounded by the Southern Ocean or Antarctica, depending on the definition in use.[6] Along its core, the Indian Ocean has some large marginal or regional seas such as the Arabian Sea, the Laccadive Sea, the Somali Sea, Bay of Bengal, and the Andaman Sea.
The Indian Ocean has been known by its present name since at least 1515, when the Latin form Oceanus Orientalis Indicus ("Indian Eastern Ocean") is attested, named for India, which projects into it. It was earlier known as the Eastern Ocean, a term that was still in use during the mid-18th century (see map), as opposed to the Western Ocean (Atlantic) before the Pacific was surmised.[7]
Conversely, Chinese explorers in the Indian Ocean, during the 15th century, called it the Western Oceans.[8] The ocean has also been known as the Hindu Ocean and Indic Ocean in various languages.
In Ancient Greek geography the region of the Indian Ocean known to the Greeks was called the Erythraean Sea.[9]
A relatively new concept of an "Indian Ocean World" and attempts to rewrite its history has resulted in new proposed names, such as 'Asian Sea' and 'Afrasian Sea'.[10]
The borders of the Indian Ocean, as delineated by the International Hydrographic Organization in 1953, included the Southern Ocean but not the marginal seas along the northern rim; in 2000 the IHO delimited the Southern Ocean separately, which removed waters south of 60°S from the Indian Ocean but included the northern marginal seas.[11][12] Meridionally, the Indian Ocean is delimited from the Atlantic Ocean by the 20° east meridian, running south from Cape Agulhas, and from the Pacific Ocean by the meridian of 146°49'E, running south from the southernmost point of Tasmania. The northernmost extent of the Indian Ocean (including marginal seas) is approximately 30° north, in the Persian Gulf.[12]
The Indian Ocean covers 70,560,000 km2 (27,240,000 sq mi), including the Red Sea and the Persian Gulf but excluding the Southern Ocean, or 19.5% of the world's oceans; its volume is 264,000,000 km3 (63,000,000 cu mi) or 19.8% of the world's oceans' volume; it has an average depth of 3,741 m (12,274 ft) and a maximum depth of 7,906 m (25,938 ft).[5]
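As a quick arithmetic cross-check (not part of the source data), the quoted average depth follows directly from the area and volume figures above:

```python
# Cross-check: average depth should equal volume divided by surface area.
area_km2 = 70_560_000     # Indian Ocean surface area, as quoted above
volume_km3 = 264_000_000  # Indian Ocean volume, as quoted above

avg_depth_m = volume_km3 / area_km2 * 1000  # convert km to m

print(round(avg_depth_m))  # ≈ 3741, matching the quoted average depth of 3,741 m
```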
All of the Indian Ocean is in the Eastern Hemisphere and the centre of the Eastern Hemisphere, the 90th meridian east, passes through the Ninety East Ridge.
In contrast to the Atlantic and Pacific, the Indian Ocean is enclosed by major landmasses and an archipelago on three sides and does not stretch from pole to pole; it can be likened to an embayed ocean.
It is centred on the Indian Peninsula and although this subcontinent has played a major role in its history the Indian Ocean has foremostly been a cosmopolitan stage interlinking diverse regions by innovations, trade, and religion since early in human history.[10]
The active margins of the Indian Ocean have an average width (land to shelf break) of 19 ± 0.61 km (11.81 ± 0.38 mi), with a maximum width of 175 km (109 mi). The passive margins have an average width of 47.6 ± 0.8 km (29.58 ± 0.50 mi).[13]
The average width of the slopes of the continental shelves is 50.4 km (31.3 mi) for active margins and 52.4 km (32.6 mi) for passive margins, with maximum widths of 205.3–255.2 km (127.6–158.6 mi).[14]
Australia, Indonesia, and India are the three countries with the longest shorelines and exclusive economic zones. The continental shelf makes up 15% of the Indian Ocean.
More than two billion people live in countries bordering the Indian Ocean, compared to 1.7 billion for the Atlantic and 2.7 billion for the Pacific (some countries border more than one ocean).[2]
The Indian Ocean drainage basin covers 21,100,000 km2 (8,100,000 sq mi), virtually identical to that of the Pacific Ocean and half that of the Atlantic basin, or 30% of its ocean surface (compared to 15% for the Pacific). The Indian Ocean drainage basin is divided into roughly 800 individual basins, half that of the Pacific, of which 50% are located in Asia, 30% in Africa, and 20% in Australasia. The rivers of the Indian Ocean are shorter in average (740 km (460 mi)) than those of the other major oceans. The largest rivers are (order 5) the Zambezi, Ganges-Brahmaputra, Indus, Jubba, and Murray rivers and (order 4) the Shatt al-Arab, Wadi Ad Dawasir (a dried-out river system on the Arabian Peninsula) and Limpopo rivers.[15]
Marginal seas, gulfs, bays and straits of the Indian Ocean include:[12]
Along the east coast of Africa, the Mozambique Channel separates Madagascar from mainland Africa, while the Sea of Zanj is located north of Madagascar.
On the northern coast of the Arabian Sea, Gulf of Aden is connected to the Red Sea by the strait of Bab-el-Mandeb. In the Gulf of Aden, the Gulf of Tadjoura is located in Djibouti and the Guardafui Channel separates Socotra island from the Horn of Africa. The northern end of the Red Sea terminates in the Gulf of Aqaba and Gulf of Suez. The Indian Ocean is artificially connected to the Mediterranean Sea through the Suez Canal, which is accessible via the Red Sea.
The Arabian Sea is connected to the Persian Gulf by the Gulf of Oman and the Strait of Hormuz. In the Persian Gulf, the Gulf of Bahrain separates Qatar from the Arabian Peninsula.
Along the west coast of India, the Gulf of Kutch and Gulf of Khambhat are located in Gujarat at the northern end, while the Laccadive Sea separates the Maldives from the southern tip of India.
The Bay of Bengal is off the east coast of India. The Gulf of Mannar and the Palk Strait separate Sri Lanka from India, while Adam's Bridge separates the two. The Andaman Sea is located between the Bay of Bengal and the Andaman Islands.
In Indonesia, the so-called Indonesian Seaway is composed of the Malacca, Sunda and Torres Straits.
The Gulf of Carpentaria is located on the Australian north coast, while the Great Australian Bight constitutes a large part of its southern coast.
Several features make the Indian Ocean unique. It constitutes the core of the large-scale Tropical Warm Pool which, when interacting with the atmosphere, affects the climate both regionally and globally. Asia blocks heat export and prevents the ventilation of the Indian Ocean thermocline. That continent also drives the Indian Ocean monsoon, the strongest on Earth, which causes large-scale seasonal variations in ocean currents, including the reversal of the Somali Current and Indian Monsoon Current. Because of the Indian Ocean Walker circulation there are no continuous equatorial easterlies. Upwelling occurs near the Horn of Africa and the Arabian Peninsula in the Northern Hemisphere and north of the trade winds in the Southern Hemisphere. The Indonesian Throughflow is a unique Equatorial connection to the Pacific.[16]
The climate north of the equator is affected by a monsoon climate. Strong north-east winds blow from October until April; from May until October south and west winds prevail. In the Arabian Sea, the violent Monsoon brings rain to the Indian subcontinent. In the southern hemisphere, the winds are generally milder, but summer storms near Mauritius can be severe. When the monsoon winds change, cyclones sometimes strike the shores of the Arabian Sea and the Bay of Bengal.[17]
Some 80% of the total annual rainfall in India occurs during summer, and the region is so dependent on this rainfall that many civilisations perished when the monsoon failed in the past. The huge variability in the Indian Summer Monsoon has also occurred prehistorically, with a strong, wet phase 33,500–32,500 BP; a weak, dry phase 26,000–23,500 BP; and a very weak phase 17,000–15,000 BP, corresponding to a series of dramatic global events: Bølling-Allerød, Heinrich, and Younger Dryas.[18]
The Indian Ocean is the warmest ocean in the world.[19] Long-term ocean temperature records show a rapid, continuous warming in the Indian Ocean, of about 1.2 °C (2.2 °F) (compared to 0.7 °C (1.3 °F) for the warm pool region) during 1901–2012.[20] Research indicates that human-induced greenhouse warming, and changes in the frequency and magnitude of El Niño (or Indian Ocean Dipole) events, are triggers for this strong warming in the Indian Ocean.[20]
South of the Equator (20-5°S), the Indian Ocean is gaining heat from June to October, during the austral winter, while it is losing heat from November to March, during the austral summer.[21]
In 1999, the Indian Ocean Experiment showed that fossil fuel and biomass burning in South and Southeast Asia caused air pollution (also known as the Asian brown cloud) that reaches as far as the Intertropical Convergence Zone at 60°S. This pollution has implications on both a local and a global scale.[22]
40% of the sediment of the Indian Ocean is found in the Indus and Ganges fans. The oceanic basins adjacent to the continental slopes mostly contain terrigenous sediments. The ocean south of the polar front (roughly 50° south latitude) is high in biologic productivity and dominated by non-stratified sediment composed mostly of siliceous oozes. Near the three major mid-ocean ridges the ocean floor is relatively young and therefore bare of sediment, except for the Southwest Indian Ridge due to its ultra-slow spreading rate.[23]
The ocean's currents are mainly controlled by the monsoon. Two large gyres, one in the northern hemisphere flowing clockwise and one south of the equator moving anticlockwise (including the Agulhas Current and Agulhas Return Current), constitute the dominant flow pattern. During the winter monsoon (November–February), however, circulation is reversed north of 30°S and winds are weakened during winter and the transitional periods between the monsoons.[24]
The Indian Ocean contains the largest submarine fans of the world, the Bengal Fan and Indus Fan, and the largest areas of slope terraces and rift valleys.[25]
The inflow of deep water into the Indian Ocean is 11 Sv, most of which comes from the Circumpolar Deep Water (CDW). The CDW enters the Indian Ocean through the Crozet and Madagascar basins and crosses the Southwest Indian Ridge at 30°S. In the Mascarene Basin the CDW becomes a deep western boundary current before it is met by a re-circulated branch of itself, the North Indian Deep Water. This mixed water partly flows north into the Somali Basin whilst most of it flows clockwise in the Mascarene Basin where an oscillating flow is produced by Rossby waves.[26]
Water circulation in the Indian Ocean is dominated by the Subtropical Anticyclonic Gyre, the eastern extension of which is blocked by the Southeast Indian Ridge and the 90°E Ridge. Madagascar and the Southwest Indian Ridge separate three cells south of Madagascar and off South Africa. North Atlantic Deep Water reaches into the Indian Ocean south of Africa at a depth of 2,000–3,000 m (6,600–9,800 ft) and flows north along the eastern continental slope of Africa. Deeper than the NADW, Antarctic Bottom Water flows from the Enderby Basin to the Agulhas Basin across deep channels (<4,000 m (13,000 ft)) in the Southwest Indian Ridge, from where it continues into the Mozambique Channel and Prince Edward Fracture Zone.[27]
North of 20° south latitude the minimum surface temperature is 22 °C (72 °F), exceeding 28 °C (82 °F) to the east. Southward of 40° south latitude, temperatures drop quickly.[17]
The Bay of Bengal contributes more than half (2,950 km3 (710 cu mi)) of the runoff water to the Indian Ocean. Mainly in summer, this runoff flows into the Arabian Sea but also south across the Equator where it mixes with fresher seawater from the Indonesian Throughflow. This mixed freshwater joins the South Equatorial Current in the southern tropical Indian Ocean.[28]
Sea surface salinity is highest (more than 36 PSU) in the Arabian Sea because evaporation exceeds precipitation there. In the southeast Arabian Sea salinity drops to less than 34 PSU. It is lowest (c. 33 PSU) in the Bay of Bengal because of river runoff and precipitation. The Indonesian Throughflow and precipitation result in lower salinity (34 PSU) along the Sumatran west coast. Monsoonal variation results in eastward transport of saltier water from the Arabian Sea to the Bay of Bengal from June to September and in westward transport by the East India Coastal Current to the Arabian Sea from January to April.[29]
An Indian Ocean garbage patch was discovered in 2010 covering at least 5 million square kilometres (1.9 million square miles). Riding the southern Indian Ocean Gyre, this vortex of plastic garbage constantly circulates the ocean from Australia to Africa, down the Mozambique Channel, and back to Australia in a period of six years, except for debris that gets indefinitely stuck in the centre of the gyre.[30]
The garbage patch in the Indian Ocean will, according to a 2012 study, decrease in size after several decades to vanish completely over centuries. Over several millennia, however, the global system of garbage patches will accumulate in the North Pacific.[31]
There are two amphidromes of opposite rotation in the Indian Ocean, probably caused by Rossby wave propagation.[32]
Icebergs drift as far north as 55° south latitude, similar to the Pacific but less than in the Atlantic where icebergs reach up to 45°S. The volume of iceberg loss in the Indian Ocean between 2004 and 2012 was 24 Gt.[33]
Since the 1960s, anthropogenic warming of the global ocean, combined with contributions of freshwater from retreating land ice, has caused a global rise in sea level. Sea level rises in the Indian Ocean too, except in the south tropical Indian Ocean where it falls, a pattern most likely caused by rising levels of greenhouse gases.[34]
Among the tropical oceans, the western Indian Ocean hosts one of the largest concentrations of phytoplankton blooms in summer, due to the strong monsoon winds. The monsoonal wind forcing leads to strong coastal and open-ocean upwelling, which introduces nutrients into the upper zones where sufficient light is available for photosynthesis and phytoplankton production. These phytoplankton blooms support the marine ecosystem, as the base of the marine food web, and eventually the larger fish species. The Indian Ocean accounts for the second-largest share of the most economically valuable tuna catch.[35] Its fish are of great and growing importance to the bordering countries for domestic consumption and export. Fishing fleets from Russia, Japan, South Korea, and Taiwan also exploit the Indian Ocean, mainly for shrimp and tuna.[3]
Research indicates that increasing ocean temperatures are taking a toll on the marine ecosystem. A study on the phytoplankton changes in the Indian Ocean indicates a decline of up to 20% in the marine plankton in the Indian Ocean, during the past six decades. The tuna catch rates have also declined 50–90% during the past half century, mostly due to increased industrial fisheries, with the ocean warming adding further stress to the fish species.[36]
Endangered and vulnerable marine mammals and turtles:[37]
80% of the Indian Ocean is open ocean and includes nine large marine ecosystems: the Agulhas Current, Somali Coastal Current, Red Sea, Arabian Sea, Bay of Bengal, Gulf of Thailand, West Central Australian Shelf, Northwest Australian Shelf, and Southwest Australian Shelf. Coral reefs cover c. 200,000 km2 (77,000 sq mi). The coasts of the Indian Ocean include beaches and intertidal zones covering 3,000 km2 (1,200 sq mi) and 246 larger estuaries. Upwelling areas are small but important. The hypersaline salterns in India cover between 5,000 and 10,000 km2 (1,900–3,900 sq mi), and species adapted to this environment, such as Artemia salina and Dunaliella salina, are important to bird life.[38]
Coral reefs, sea grass beds, and mangrove forests are the most productive ecosystems of the Indian Ocean; coastal areas produce 20 tonnes of fish per square kilometre. These areas, however, are also being urbanised, with populations often exceeding several thousand people per square kilometre, and fishing techniques have become more effective and often destructive beyond sustainable levels, while increases in sea surface temperature spread coral bleaching.[39]
Mangroves cover 80,984 km2 (31,268 sq mi) in the Indian Ocean region, almost half of the world's mangrove habitat, of which 42,500 km2 (16,400 sq mi), or 50% of the mangroves in the Indian Ocean, is located in Indonesia. Mangroves originated in the Indian Ocean region and have adapted to a wide range of its habitats, but it is also where they suffer their biggest loss of habitat.[40]
In 2016 six new animal species were identified at hydrothermal vents in the Southwest Indian Ridge: a "Hoff" crab, a "giant peltospirid" snail, a whelk-like snail, a limpet, a scaleworm and a polychaete worm.[41]
The West Indian Ocean coelacanth was discovered in the Indian Ocean off South Africa in the 1930s, and in the late 1990s another species, the Indonesian coelacanth, was discovered off Sulawesi Island, Indonesia. Most extant coelacanths have been found in the Comoros. Although both species represent an order of lobe-finned fishes known from the Early Devonian (410 mya) and thought to have been extinct since 66 mya, they are morphologically distinct from their Devonian ancestors. Over millions of years, coelacanths evolved to inhabit different environments; lungs adapted for shallow, brackish waters evolved into gills adapted for deep marine waters.[42]
Of Earth's 36 biodiversity hotspots, nine (or 25%) are located on the margins of the Indian Ocean.
The origin of this diversity is debated; the break-up of Gondwana can explain vicariance older than 100 mya, but the diversity on the younger, smaller islands must have required a Cenozoic dispersal from the rims of the Indian Ocean to the islands. A "reverse colonisation", from islands to continents, apparently occurred more recently; the chameleons, for example, first diversified on Madagascar and then colonised Africa. Several species on the islands of the Indian Ocean are textbook cases of evolutionary processes; the dung beetles, day geckos, and lemurs are all examples of adaptive radiation.[citation needed]
Many bones (250 bones per square metre) of recently extinct vertebrates have been found in the Mare aux Songes swamp in Mauritius, including bones of the dodo (Raphus cucullatus) and the Cylindraspis giant tortoise. An analysis of these remains suggests that a process of aridification began in the southwest Indian Ocean around 4,000 years ago.[44]
Mammalian megafauna once widespread in the MPA was driven to near extinction in the early 20th century. Some species have since been successfully recovered; the population of white rhinoceros (Ceratotherium simum simum) increased from fewer than 20 individuals in 1895 to more than 17,000 as of 2013. Other species are still dependent on fenced areas and management programs, including black rhinoceros (Diceros bicornis minor), African wild dog (Lycaon pictus), cheetah (Acinonyx jubatus), elephant (Loxodonta africana), and lion (Panthera leo).[45]
This biodiversity hotspot (and namesake ecoregion and "Endemic Bird Area") is a patchwork of small forested areas, often with a unique assemblage of species within each, located within 200 km (120 mi) of the coast and covering a total area of c. 6,200 km2 (2,400 sq mi). It also encompasses coastal islands, including Zanzibar, Pemba, and Mafia.[46]
This area, one of the only two hotspots that are entirely arid, includes the Ethiopian Highlands, the East African Rift valley, the Socotra islands, as well as some small islands in the Red Sea and areas on the southern Arabic Peninsula. Endemic and threatened mammals include the dibatag (Ammodorcas clarkei) and Speke's gazelle (Gazella spekei); the Somali wild ass (Equus africanus somaliensis) and hamadryas baboon (Papio hamadryas). It also contains many reptiles.[47]
In Somalia, the centre of the 1,500,000 km2 (580,000 sq mi) hotspot, the landscape is dominated by Acacia-Commiphora deciduous bushland, but it also includes the Yeheb nut (Cordeauxia edulis) and species discovered more recently, such as the Somali cyclamen (Cyclamen somalense), the only cyclamen outside the Mediterranean. The Warsangli linnet (Carduelis johannis) is an endemic bird found only in northern Somalia. An unstable political regime has resulted in overgrazing, which has produced one of the most degraded hotspots, where only c. 5% of the original habitat remains.[48]
This hotspot encompasses the west coast of India and Sri Lanka. Until c. 10,000 years ago a land bridge connected Sri Lanka to the Indian subcontinent, hence the region shares a common community of species.[49]
Indo-Burma encompasses a series of mountain ranges, five of Asia's largest river systems, and a wide range of habitats. The region has a long and complex geological history, and long periods of rising sea levels and glaciations have isolated ecosystems and thus promoted a high degree of endemism and speciation. The region includes two centres of endemism: the Annamite Mountains and the northern highlands on the China-Vietnam border.[50]
Several distinct floristic regions, the Indian, Malesian, Sino-Himalayan, and Indochinese regions, meet in a unique way in Indo-Burma and the hotspot contains an estimated 15,000–25,000 species of vascular plants, many of them endemic.[51]
Sundaland encompasses 17,000 islands of which Borneo and Sumatra are the largest. Endangered mammals include the Bornean and Sumatran orangutans, the proboscis monkey, and the Javan and Sumatran rhinoceroses.[52]
Stretching from Shark Bay to Israelite Bay and isolated by the arid Nullarbor Plain, the southwestern corner of Australia is a floristic region with a stable climate in which one of the world's richest floral biodiversities, with 80% endemism, has evolved. From June to September it is an explosion of colour, and the Wildflower Festival in Perth in September attracts more than half a million visitors.[53]
As the youngest of the major oceans,[54] the Indian Ocean has active spreading ridges that are part of the worldwide system of mid-ocean ridges. In the Indian Ocean these spreading ridges meet at the Rodrigues Triple Point with the Central Indian Ridge, including the Carlsberg Ridge, separating the African Plate from the Indian Plate; the Southwest Indian Ridge separating the African Plate from the Antarctic Plate; and the Southeast Indian Ridge separating the Australian Plate from the Antarctic Plate. The Central Indian Ridge is intercepted by the Owen Fracture Zone.[55]
Since the late 1990s, however, it has become clear that this traditional definition of the Indo-Australian Plate cannot be correct; it consists of three plates — the Indian Plate, the Capricorn Plate, and the Australian Plate — separated by diffuse boundary zones.[56]
Since 20 Ma the African Plate has been divided by the East African Rift System into the Nubian and Somali plates.[57]
There are only two trenches in the Indian Ocean: the 6,000 km (3,700 mi)-long Java Trench between Java and the Sunda Trench and the 900 km (560 mi)-long Makran Trench south of Iran and Pakistan.[55]
A series of ridges and seamount chains produced by hotspots pass over the Indian Ocean. The Réunion hotspot (active 70–40 million years ago) connects Réunion and the Mascarene Plateau to the Chagos-Laccadive Ridge and the Deccan Traps in north-western India; the Kerguelen hotspot (100–35 million years ago) connects the Kerguelen Islands and Kerguelen Plateau to the Ninety East Ridge and the Rajmahal Traps in north-eastern India; the Marion hotspot (100–70 million years ago) possibly connects Prince Edward Islands to the Eighty Five East Ridge.[58] These hotspot tracks have been broken by the still active spreading ridges mentioned above.[55]
There are fewer seamounts in the Indian Ocean than in the Atlantic and Pacific. These are typically deeper than 3,000 m (9,800 ft) and located north of 55°S and west of 80°E. Most originated at spreading ridges but some are now located in basins far away from these ridges. The ridges of the Indian Ocean form ranges of seamounts, sometimes very long, including the Carlsberg Ridge, Madagascar Ridge, Central Indian Ridge, Southwest Indian Ridge, Chagos-Laccadive Ridge, 85°E Ridge, 90°E Ridge, Southeast Indian Ridge, Broken Ridge, and East Indiaman Ridge. The Agulhas Plateau and Mascarene Plateau are the two major shallow areas.[27]
The opening of the Indian Ocean began c. 156 Ma when Africa separated from East Gondwana. The Indian Subcontinent began to separate from Australia-Antarctica 135–125 Ma and as the Tethys Ocean north of India began to close 118–84 Ma the Indian Ocean opened behind it.[55]
The Indian Ocean, together with the Mediterranean, has connected people since ancient times, whereas the Atlantic and Pacific have had the roles of barriers or mare incognitum. The written history of the Indian Ocean, however, has been Eurocentric and largely dependent on the availability of written sources from the colonial era. This history is often divided into an ancient period followed by an Islamic period; the subsequent early modern and colonial/modern periods are often subdivided into Portuguese, Dutch, and British periods.[59]
A concept of an "Indian Ocean World" (IOW), similar to that of the "Atlantic World", exists but emerged much more recently and is not well established. The IOW is, nevertheless, sometimes referred to as the "first global economy" and was based on the monsoon which linked Asia, China, India, and Mesopotamia. It developed independently from the European global trade in the Mediterranean and Atlantic and remained largely independent from them until European 19th century colonial dominance.[60]
The diverse history of the Indian Ocean is a unique mix of cultures, ethnic groups, natural resources, and shipping routes. It grew in importance beginning in the 1960s and 1970s and, after the Cold War, it has undergone periods of political instability, most recently with the emergence of India and China as regional powers.[61]
Pleistocene fossils of Homo erectus and other pre-H. sapiens hominin fossils, similar to H. heidelbergensis in Europe, have been found in India. According to the Toba catastrophe theory, a supereruption c. 74,000 years ago at Lake Toba, Sumatra, covered India with volcanic ash and wiped out one or more lineages of such archaic humans in India and Southeast Asia.[62]
The Out of Africa theory states that Homo sapiens spread from Africa into mainland Eurasia. The more recent Southern Dispersal or Coastal hypothesis instead advocates that modern humans spread along the coasts of the Arabian Peninsula and southern Asia. This hypothesis is supported by mtDNA research which reveals a rapid dispersal event during the Late Pleistocene (11,000 years ago). This coastal dispersal, however, began in East Africa 75,000 years ago and occurred intermittently from estuary to estuary along the northern perimeter of the Indian Ocean at a rate of 0.7–4.0 km (0.43–2.49 mi) per year. It eventually resulted in modern humans migrating from Sunda over Wallacea to Sahul (Southeast Asia to Australia).[63]
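As an illustration of the quoted dispersal rates, a steady coastal advance can be converted into a traversal time. The 8,000 km coastline length used below is a hypothetical figure for illustration only, not taken from the source:

```python
# Illustrative sanity check of the quoted dispersal rates (0.7-4.0 km/yr).
def traversal_years(distance_km: float, rate_km_per_yr: float) -> float:
    """Years needed to cover a coastal distance at a constant dispersal rate."""
    return distance_km / rate_km_per_yr

distance_km = 8000.0  # assumed, illustrative coastline length
fastest = traversal_years(distance_km, 4.0)   # 2,000 years
slowest = traversal_years(distance_km, 0.7)   # roughly 11,400 years
print(f"{fastest:.0f} to {slowest:.0f} years")
```

Even at the slow end of the range, such an advance would cover a coastline of this scale within a few millennia, comfortably inside the Late Pleistocene timescales described above.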
Since then, waves of migration have resettled people and, clearly, the Indian Ocean littoral had been inhabited long before the first civilisations emerged. By 5,000–6,000 years ago, six distinct cultural centres had evolved around the Indian Ocean: East Africa, the Middle East, the Indian Subcontinent, South East Asia, the Malay World, and Australia, each interlinked with its neighbours.[64]
Food globalisation began on the Indian Ocean littoral c. 4,000 years ago. Five African crops — sorghum, pearl millet, finger millet, cowpea, and hyacinth bean — somehow found their way to Gujarat in India during the Late Harappan (2000–1700 BCE). Gujarati merchants evolved into the first explorers of the Indian Ocean as they traded African goods such as ivory, tortoise shells, and slaves. Broomcorn millet found its way from Central Asia to Africa, together with chicken and zebu cattle, although the exact timing is disputed. Around 2000 BCE black pepper and sesame, both native to Asia, appeared in Egypt, albeit in small quantities. Around the same time the black rat and the house mouse migrated from Asia to Egypt. Bananas reached Africa around 3,000 years ago.[65]
At least eleven prehistoric tsunamis have struck the Indian Ocean coast of Indonesia between 7400 and 2900 years ago. Analysing sand beds in caves in the Aceh region, scientists concluded that the intervals between these tsunamis have varied from series of minor tsunamis over a century to dormant periods of more than 2,000 years preceding megathrusts in the Sunda Trench. Although the risk of future tsunamis is high, a major megathrust such as the one in 2004 is likely to be followed by a long dormant period.[66]
A group of scientists have argued that two large-scale impact events have occurred in the Indian Ocean: the Burckle Crater in the southern Indian Ocean in 2800 BCE and the Kanmare and Tabban craters in the Gulf of Carpentaria in northern Australia in 536 CE. Evidence for these impacts, the team argues, includes micro-ejecta and chevron dunes in southern Madagascar and in the Australian gulf. Geological evidence suggests the tsunamis caused by these impacts reached 205 m (673 ft) above sea level and 45 km (28 mi) inland. The impact events must have disrupted human settlements and perhaps even contributed to major climate changes.[67]
The history of the Indian Ocean is marked by maritime trade; cultural and commercial exchange probably date back at least seven thousand years.[68] Human culture spread early on the shores of the Indian Ocean and was always linked to the cultures of the Mediterranean and Persian Gulf. Before c. 2000 BCE, however, cultures on its shores were only loosely tied to each other; bronze, for example, was developed in Mesopotamia c. 3000 BCE but remained uncommon in Egypt before 1800 BCE.[69]
During this period, independent, short-distance oversea communications along its littoral margins evolved into an all-embracing network. The début of this network was not the achievement of a centralised or advanced civilisation but of local and regional exchange in the Persian Gulf, the Red Sea, and the Arabian Sea. Sherds of Ubaid (2500–500 BCE) pottery have been found in the western Gulf at Dilmun, present-day Bahrain, traces of exchange between this trading centre and Mesopotamia. The Sumerians traded grain, pottery, and bitumen (used for reed boats) for copper, stone, timber, tin, dates, onions, and pearls.[70]
Coast-bound vessels transported goods between the Indus Valley Civilisation (2600–1900 BCE) in the Indian subcontinent (modern-day Pakistan and Northwest India) and the Persian Gulf and Egypt.[68]
The Red Sea, one of the main trade routes in Antiquity, was explored by Egyptians and Phoenicians during the last two millennia BCE. In the 6th century BCE the Greek explorer Scylax of Caryanda made a journey to India, working for the Persian king Darius, and his now lost account put the Indian Ocean on the maps of Greek geographers. The Greeks began to explore the Indian Ocean following the conquests of Alexander the Great, who ordered a circumnavigation of the Arabian Peninsula in 323 BCE. During the two centuries that followed, the reports of the explorers of Ptolemaic Egypt resulted in the best maps of the region until the Portuguese era many centuries later. The main interest in the region for the Ptolemies was not commercial but military; they explored Africa to hunt for war elephants.[71]
The Rub' al Khali desert isolates the southern parts of the Arabian Peninsula and the Indian Ocean from the Arab world. This encouraged the development of maritime trade in the region, linking the Red Sea and the Persian Gulf to East Africa and India. The monsoon (from mawsim, the Arabic word for season), however, was used by sailors long before being "discovered" by Hippalus in the 1st century. Indian wood has been found in Sumerian cities, there is evidence of Akkad coastal trade in the region, and contacts between India and the Red Sea date back to 2300 BCE. The archipelagoes of the central Indian Ocean, the Laccadive and Maldive islands, were probably populated during the 2nd century BCE from the Indian mainland. They appear in written history in the account of merchant Sulaiman al-Tajir in the 9th century, but the treacherous reefs of the islands were most likely cursed by the sailors of Aden long before the islands were even settled.[72]
Periplus of the Erythraean Sea, an Alexandrian guide to the world beyond the Red Sea — including Africa and India — from the first century CE, not only gives insights into trade in the region but also shows that Roman and Greek sailors had already gained knowledge about the monsoon winds.[68] The contemporaneous settlement of Madagascar by Austronesian sailors shows that the littoral margins of the Indian Ocean were both well-populated and regularly traversed at least by this time, though the monsoon must have been common knowledge in the Indian Ocean for centuries.[68]
The Indian Ocean's relatively calmer waters opened the areas bordering it to trade earlier than the Atlantic or Pacific oceans. The powerful monsoons also meant ships could easily sail west early in the season, then wait a few months and return eastwards. This allowed ancient Indonesian peoples to cross the Indian Ocean to settle in Madagascar around 1 CE.[73]
In the 2nd or 1st century BCE, Eudoxus of Cyzicus was the first Greek to cross the Indian Ocean. The probably fictitious sailor Hippalus is said to have learnt the direct route from Arabia to India around this time.[74] During the 1st and 2nd centuries CE, intensive trade relations developed between Roman Egypt and the Tamil kingdoms of the Cheras, Cholas and Pandyas in Southern India. Like the Indonesian peoples mentioned above, the western sailors used the monsoon to cross the ocean. The unknown author of the Periplus of the Erythraean Sea describes this route, as well as the commodities that were traded along various commercial ports on the coasts of the Horn of Africa and India circa 1 CE. Among these trading settlements were Mosylon and Opone on the Red Sea littoral.[9]
Unlike the Pacific Ocean, where the civilisation of the Polynesians reached and populated most of the far-flung islands and atolls, almost all the islands, archipelagos, and atolls of the Indian Ocean were uninhabited until colonial times. Although there were numerous ancient civilisations in the coastal states of Asia and parts of Africa, the Maldives were the only island group in the central Indian Ocean region where an ancient civilisation flourished.[75] Maldivians, on their annual trade trip, took their oceangoing trade ships to Sri Lanka rather than mainland India, which is much closer, because their ships were dependent on the Indian Monsoon Current.[76]
Arab missionaries and merchants began to spread Islam along the western shores of the Indian Ocean from the 8th century, if not earlier. A Swahili stone mosque dating to the 8th–15th centuries has been found in Shanga, Kenya. Trade across the Indian Ocean gradually introduced Arabic script and rice as a staple in Eastern Africa.[77]
Muslim merchants traded an estimated 1000 African slaves annually between 800 and 1700, a number that grew to c. 4000 during the 18th century, and 3700 during the period 1800–1870. Slave trade also occurred in the eastern Indian Ocean before the Dutch settled there around 1600 but the volume of this trade is unknown.[78]
From 1405 to 1433, Admiral Zheng He is said to have led large fleets of the Ming Dynasty on several treasure voyages through the Indian Ocean, ultimately reaching the coastal countries of East Africa.[79]
The Portuguese navigator Vasco da Gama rounded the Cape of Good Hope during his first voyage in 1497 and became the first European to sail to India. The Swahili people he encountered along the African east coast lived in a series of cities and had established trade routes to India and to China. The Portuguese kidnapped most of their pilots in coastal raids and onboard ships. A few of the pilots, however, were gifts from local Swahili rulers, including the sailor from Gujarat, a gift from a Malindi ruler in Kenya, who helped the Portuguese to reach India. In expeditions after 1500, the Portuguese attacked and colonised cities along the African coast.[80]
European slave trade in the Indian Ocean began when Portugal established Estado da Índia in the early 16th century. From then until the 1830s, c. 200 slaves were exported from Mozambique annually, and similar figures have been estimated for slaves brought from Asia to the Philippines during the Iberian Union (1580–1640).[78]
The Ottoman Empire began its expansion into the Indian Ocean in 1517 with the conquest of Egypt under Sultan Selim I. Although the Ottomans shared the same religion as the trading communities in the Indian Ocean, the region was unexplored by them. Maps that included the Indian Ocean had been produced by Muslim geographers centuries before the Ottoman conquests; Muslim scholars, such as Ibn Battuta in the 14th century, had visited most parts of the known world; contemporarily with Vasco da Gama, the Arab navigator Ahmad ibn Mājid had compiled a guide to navigation in the Indian Ocean; the Ottomans, nevertheless, began their own parallel era of discovery which rivalled the European expansion.[81]
The establishment of the Dutch East India Company in the early 17th century led to a rapid increase in trade volume; there were perhaps up to 500,000 slaves working in Dutch colonies during the 17th and 18th centuries, mostly in the Indian Ocean. For example, some 4,000 African slaves were used to build the Colombo fortress in Sri Lanka. Bali and neighbouring islands supplied regional networks with c. 100,000–150,000 slaves during 1620–1830. Indian and Chinese traders supplied Dutch Indonesia with perhaps 250,000 slaves during the 17th and 18th centuries.[78]
The British East India Company was established during the same period, and in 1622 one of its ships carried slaves from the Coromandel Coast to Indonesia. The British mostly brought slaves from Africa and islands in the Indian Ocean to India and Indonesia but also exported slaves from India. The French colonised Réunion and Mauritius in 1721; by 1735 some 7,200 slaves populated the Mascarene Islands, a number which had reached 133,000 in 1807. The British captured the islands in 1810, however, and because the British Parliament had prohibited slavery in 1807, a system of clandestine slave trade developed, resulting in 336,000–388,000 slaves exported to the Mascarene Islands during 1670–1848.[78]
In all, Europeans traded 567,900–733,200 slaves within the Indian Ocean between 1500 and 1850, and almost the same number were exported from the Indian Ocean to the Americas during the same period. Slave trade in the Indian Ocean was, nevertheless, very limited compared to the c. 12,000,000 slaves exported across the Atlantic.[78]
Scientifically, the Indian Ocean remained poorly explored before the International Indian Ocean Expedition in the early 1960s. The Challenger expedition of 1872–1876 only reported from south of the polar front. The Valdivia expedition of 1898–1899 made deep samples in the Indian Ocean. In the 1930s, the John Murray Expedition mainly studied shallow-water habitats. The Swedish Deep Sea Expedition of 1947–1948 also sampled the Indian Ocean on its global tour, and the Danish Galathea sampled deep-water fauna from Sri Lanka to South Africa on its second expedition of 1950–1952. The Soviet research vessel Vityaz also did research in the Indian Ocean.[1]
The Suez Canal opened in 1869, when the Industrial Revolution was dramatically changing global shipping: the sailing ship declined in importance, as did European trade, in favour of trade with East Asia and Australia.[82]
The construction of the canal introduced many non-indigenous species into the Mediterranean. For example, the goldband goatfish (Upeneus moluccensis) has replaced the red mullet (Mullus barbatus); since the 1980s huge swarms of scyphozoan jellyfish (Rhopilema nomadica) have affected tourism and fisheries along the Levantine coast and clogged power and desalination plants. Plans announced in 2014 to build a new, much larger Suez Canal parallel to the 19th-century canal will most likely boost the economy of the region but also cause ecological damage in a much wider area.[83]
Throughout the colonial era, islands such as Mauritius were important shipping nodes for the Dutch, French, and British. Mauritius, an uninhabited island, became populated by slaves from Africa and indentured labour from India. The end of World War II marked the end of the colonial era. The British left Mauritius in 1968 and, with 70% of the population of Indian descent, Mauritius became a close ally of India. In the 1980s, during the Cold War, the South African regime acted to destabilise several island nations in the Indian Ocean, including the Seychelles, Comoros, and Madagascar. India intervened in Mauritius to prevent a coup d'état, backed by the United States, who feared the Soviet Union could gain access to Port Louis and threaten the U.S. base on Diego Garcia.[84]
Iranrud is an unrealised plan by Iran and the Soviet Union to build a canal between the Caspian Sea and Persian Gulf.
Testimonies from the colonial era are stories of African slaves, Indian indentured labourers, and white settlers. But, while there was a clear racial line between free men and slaves in the Atlantic World, this delineation is less distinct in the Indian Ocean, where there were Indian slaves and settlers as well as black indentured labourers. There was also a string of prison camps across the Indian Ocean, from Robben Island in South Africa to Cellular Jail in the Andamans, in which prisoners, exiles, POWs, forced labourers, merchants, and people of different faiths were forcibly united. On the islands of the Indian Ocean, therefore, a trend of creolisation emerged.[85]
On 26 December 2004, fourteen countries around the Indian Ocean were hit by a wave of tsunamis caused by the 2004 Indian Ocean earthquake. The waves radiated across the ocean at speeds exceeding 500 km/h (310 mph), reached up to 20 m (66 ft) in height, and resulted in an estimated 236,000 deaths.[86]
In the late 2000s the ocean evolved into a hub of pirate activity. By 2013, attacks off the Horn region's coast had steadily declined due to active private security and international navy patrols, especially by the Indian Navy.[87]
Malaysia Airlines Flight 370, a Boeing 777 airliner with 239 persons on board, disappeared on 8 March 2014 and is believed to have crashed into the southeastern Indian Ocean about 2,000 km (1,200 mi) from the coast of southwest Western Australia. Despite an extensive search, the whereabouts of the remains of the aircraft are unknown.[88]
The sea lanes in the Indian Ocean are considered among the most strategically important in the world: more than 80 percent of the world's seaborne trade in oil transits the Indian Ocean and its vital chokepoints, with 40 percent passing through the Strait of Hormuz, 35 percent through the Strait of Malacca, and 8 percent through the Bab-el-Mandeb Strait.[89]
The Indian Ocean provides major sea routes connecting the Middle East, Africa, and East Asia with Europe and the Americas. It carries a particularly heavy traffic of petroleum and petroleum products from the oil fields of the Persian Gulf and Indonesia. Large reserves of hydrocarbons are being tapped in the offshore areas of Saudi Arabia, Iran, India, and Western Australia. An estimated 40% of the world's offshore oil production comes from the Indian Ocean.[3] Beach sands rich in heavy minerals, and offshore placer deposits are actively exploited by bordering countries, particularly India, Pakistan, South Africa, Indonesia, Sri Lanka, and Thailand.
Chinese companies have made investments in several Indian Ocean ports, including Gwadar, Hambantota, Colombo and Sonadia. This has sparked a debate about the strategic implications of these investments.[90] (See String of Pearls)
en/4225.html.txt
ADDED
@@ -0,0 +1,199 @@
The Indian Ocean is the third-largest of the world's oceanic divisions, covering 70,560,000 km2 (27,240,000 sq mi) or 19.8% of the water on Earth's surface.[5] It is bounded by Asia to the north, Africa to the west and Australia to the east. To the south it is bounded by the Southern Ocean or Antarctica, depending on the definition in use.[6] Along its core, the Indian Ocean has some large marginal or regional seas such as the Arabian Sea, the Laccadive Sea, the Somali Sea, Bay of Bengal, and the Andaman Sea.
The Indian Ocean has been known by its present name since at least 1515, when the Latin form Oceanus Orientalis Indicus ("Indian Eastern Ocean") is attested, named for India, which projects into it. It was earlier known as the Eastern Ocean, a term that was still in use during the mid-18th century, as opposed to the Western Ocean (Atlantic), before the Pacific was surmised.[7]
Conversely, Chinese explorers in the Indian Ocean, during the 15th century, called it the Western Oceans.[8] The ocean has also been known as the Hindu Ocean and Indic Ocean in various languages.
In Ancient Greek geography the region of the Indian Ocean known to the Greeks was called the Erythraean Sea.[9]
A relatively new concept of an "Indian Ocean World" and attempts to rewrite its history have resulted in new proposed names, such as 'Asian Sea' and 'Afrasian Sea'.[10]
The borders of the Indian Ocean, as delineated by the International Hydrographic Organization in 1953, included the Southern Ocean but not the marginal seas along the northern rim. In 2000, however, the IHO delimited the Southern Ocean separately, which removed waters south of 60°S from the Indian Ocean but included the northern marginal seas.[11][12] Meridionally, the Indian Ocean is delimited from the Atlantic Ocean by the 20° east meridian, running south from Cape Agulhas, and from the Pacific Ocean by the meridian of 146°49'E, running south from the southernmost point of Tasmania. The northernmost extent of the Indian Ocean (including marginal seas) is approximately 30° north in the Persian Gulf.[12]
The Indian Ocean covers 70,560,000 km2 (27,240,000 sq mi), including the Red Sea and the Persian Gulf but excluding the Southern Ocean, or 19.5% of the world's oceans; its volume is 264,000,000 km3 (63,000,000 cu mi) or 19.8% of the world's oceans' volume; it has an average depth of 3,741 m (12,274 ft) and a maximum depth of 7,906 m (25,938 ft).[5]
All of the Indian Ocean is in the Eastern Hemisphere and the centre of the Eastern Hemisphere, the 90th meridian east, passes through the Ninety East Ridge.
In contrast to the Atlantic and Pacific, the Indian Ocean is enclosed by major landmasses and an archipelago on three sides and does not stretch from pole to pole; it can be likened to an embayed ocean.
It is centred on the Indian Peninsula and, although this subcontinent has played a major role in its history, the Indian Ocean has foremost been a cosmopolitan stage, interlinking diverse regions by innovations, trade, and religion since early in human history.[10]
The active margins of the Indian Ocean have an average width (land to shelf break) of 19 ± 0.61 km (11.81 ± 0.38 mi) with a maximum width of 175 km (109 mi). The passive margins have an average width of 47.6 ± 0.8 km (29.58 ± 0.50 mi).[13]
The average width of the slopes of the continental shelves is 50.4 km (31.3 mi) for active margins and 52.4 km (32.6 mi) for passive margins, with maximum widths of 205.3 km (127.6 mi) and 255.2 km (158.6 mi) respectively.[14]
Australia, Indonesia, and India are the three countries with the longest shorelines and exclusive economic zones. The continental shelf makes up 15% of the Indian Ocean.
More than two billion people live in countries bordering the Indian Ocean, compared to 1.7 billion for the Atlantic and 2.7 billion for the Pacific (some countries border more than one ocean).[2]
The Indian Ocean drainage basin covers 21,100,000 km2 (8,100,000 sq mi), virtually identical to that of the Pacific Ocean and half that of the Atlantic basin, or 30% of its ocean surface (compared to 15% for the Pacific). The Indian Ocean drainage basin is divided into roughly 800 individual basins, half that of the Pacific, of which 50% are located in Asia, 30% in Africa, and 20% in Australasia. The rivers of the Indian Ocean are shorter on average (740 km (460 mi)) than those of the other major oceans. The largest rivers are (order 5) the Zambezi, Ganges-Brahmaputra, Indus, Jubba, and Murray rivers and (order 4) the Shatt al-Arab, Wadi Ad Dawasir (a dried-out river system on the Arabian Peninsula) and Limpopo rivers.[15]
Marginal seas, gulfs, bays and straits of the Indian Ocean include:[12]
Along the east coast of Africa, the Mozambique Channel separates Madagascar from mainland Africa, while the Sea of Zanj is located north of Madagascar.
On the northern coast of the Arabian Sea, Gulf of Aden is connected to the Red Sea by the strait of Bab-el-Mandeb. In the Gulf of Aden, the Gulf of Tadjoura is located in Djibouti and the Guardafui Channel separates Socotra island from the Horn of Africa. The northern end of the Red Sea terminates in the Gulf of Aqaba and Gulf of Suez. The Indian Ocean is artificially connected to the Mediterranean Sea through the Suez Canal, which is accessible via the Red Sea.
The Arabian Sea is connected to the Persian Gulf by the Gulf of Oman and the Strait of Hormuz. In the Persian Gulf, the Gulf of Bahrain separates Qatar from the Arabian Peninsula.
Along the west coast of India, the Gulf of Kutch and Gulf of Khambhat are located in Gujarat at the northern end, while the Laccadive Sea separates the Maldives from the southern tip of India.
The Bay of Bengal is off the east coast of India. The Gulf of Mannar and the Palk Strait separate Sri Lanka from India, with Adam's Bridge lying between the two. The Andaman Sea is located between the Bay of Bengal and the Andaman Islands.
In Indonesia, the so-called Indonesian Seaway is composed of the Malacca, Sunda and Torres Straits.
The Gulf of Carpentaria is located on the Australian north coast, while the Great Australian Bight constitutes a large part of its southern coast.
Several features make the Indian Ocean unique. It constitutes the core of the large-scale Tropical Warm Pool which, when interacting with the atmosphere, affects the climate both regionally and globally. Asia blocks heat export and prevents the ventilation of the Indian Ocean thermocline. That continent also drives the Indian Ocean monsoon, the strongest on Earth, which causes large-scale seasonal variations in ocean currents, including the reversal of the Somali Current and Indian Monsoon Current. Because of the Indian Ocean Walker circulation there are no continuous equatorial easterlies. Upwelling occurs near the Horn of Africa and the Arabian Peninsula in the Northern Hemisphere and north of the trade winds in the Southern Hemisphere. The Indonesian Throughflow is a unique Equatorial connection to the Pacific.[16]
The climate north of the equator is affected by a monsoon climate. Strong north-east winds blow from October until April; from May until October south and west winds prevail. In the Arabian Sea, the violent monsoon brings rain to the Indian subcontinent. In the southern hemisphere, the winds are generally milder, but summer storms near Mauritius can be severe. When the monsoon winds change, cyclones sometimes strike the shores of the Arabian Sea and the Bay of Bengal.[17]
Some 80% of the total annual rainfall in India occurs during summer, and the region is so dependent on this rainfall that many civilisations perished when the monsoon failed in the past. Huge variability in the Indian Summer Monsoon has also occurred prehistorically, with a strong, wet phase 33,500–32,500 BP; a weak, dry phase 26,000–23,500 BP; and a very weak phase 17,000–15,000 BP, corresponding to a series of dramatic global events: Bølling-Allerød, Heinrich, and Younger Dryas.[18]
The Indian Ocean is the warmest ocean in the world.[19] Long-term ocean temperature records show a rapid, continuous warming in the Indian Ocean, at about 1.2 °C (2.2 °F) (compared to 0.7 °C (1.3 °F) for the warm pool region) during 1901–2012.[20] Research indicates that human-induced greenhouse warming, and changes in the frequency and magnitude of El Niño (or the Indian Ocean Dipole) events, are a trigger to this strong warming in the Indian Ocean.[20]
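A note on units: a temperature *change* in Celsius converts to Fahrenheit with the 9/5 scale factor alone (a 1.2 °C rise is a 2.2 °F rise); the +32 offset applies only to absolute temperatures. A minimal sketch:

```python
def delta_c_to_f(delta_c: float) -> float:
    """Convert a temperature *difference* from Celsius to Fahrenheit.

    A change of dT degC corresponds to dT * 9/5 degF; the +32 offset
    used for absolute temperatures does not apply to differences.
    """
    return delta_c * 9.0 / 5.0

print(round(delta_c_to_f(1.2), 1))  # 2.2 degF, the basin-wide warming
print(round(delta_c_to_f(0.7), 1))  # 1.3 degF, the warm-pool warming
```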
South of the Equator (20-5°S), the Indian Ocean is gaining heat from June to October, during the austral winter, while it is losing heat from November to March, during the austral summer.[21]
In 1999, the Indian Ocean Experiment showed that fossil fuel and biomass burning in South and Southeast Asia caused air pollution (also known as the Asian brown cloud) that reached as far as the Intertropical Convergence Zone at 60°S. This pollution has implications on both a local and global scale.[22]
40% of the sediment of the Indian Ocean is found in the Indus and Ganges fans. The oceanic basins adjacent to the continental slopes mostly contain terrigenous sediments. The ocean south of the polar front (roughly 50° south latitude) is high in biologic productivity and dominated by non-stratified sediment composed mostly of siliceous oozes. Near the three major mid-ocean ridges the ocean floor is relatively young and therefore bare of sediment, except for the Southwest Indian Ridge due to its ultra-slow spreading rate.[23]
The ocean's currents are mainly controlled by the monsoon. Two large gyres, one in the northern hemisphere flowing clockwise and one south of the equator moving anticlockwise (including the Agulhas Current and Agulhas Return Current), constitute the dominant flow pattern. During the winter monsoon (November–February), however, circulation is reversed north of 30°S and winds are weakened during winter and the transitional periods between the monsoons.[24]
The Indian Ocean contains the largest submarine fans of the world, the Bengal Fan and Indus Fan, and the largest areas of slope terraces and rift valleys.[25]
The inflow of deep water into the Indian Ocean is 11 Sv, most of which comes from the Circumpolar Deep Water (CDW). The CDW enters the Indian Ocean through the Crozet and Madagascar basins and crosses the Southwest Indian Ridge at 30°S. In the Mascarene Basin the CDW becomes a deep western boundary current before it is met by a re-circulated branch of itself, the North Indian Deep Water. This mixed water partly flows north into the Somali Basin whilst most of it flows clockwise in the Mascarene Basin where an oscillating flow is produced by Rossby waves.[26]
Water circulation in the Indian Ocean is dominated by the Subtropical Anticyclonic Gyre, the eastern extension of which is blocked by the Southeast Indian Ridge and the 90°E Ridge. Madagascar and the Southwest Indian Ridge separate three cells south of Madagascar and off South Africa. North Atlantic Deep Water (NADW) reaches into the Indian Ocean south of Africa at a depth of 2,000–3,000 m (6,600–9,800 ft) and flows north along the eastern continental slope of Africa. Deeper than the NADW, Antarctic Bottom Water flows from the Enderby Basin to the Agulhas Basin across deep channels (<4,000 m (13,000 ft)) in the Southwest Indian Ridge, from where it continues into the Mozambique Channel and Prince Edward Fracture Zone.[27]
North of 20° south latitude the minimum surface temperature is 22 °C (72 °F), exceeding 28 °C (82 °F) to the east. Southward of 40° south latitude, temperatures drop quickly.[17]
The Bay of Bengal contributes more than half (2,950 km3 (710 cu mi)) of the runoff water to the Indian Ocean. Mainly in summer, this runoff flows into the Arabian Sea but also south across the Equator where it mixes with fresher seawater from the Indonesian Throughflow. This mixed freshwater joins the South Equatorial Current in the southern tropical Indian Ocean.[28]
Sea surface salinity is highest (more than 36 PSU) in the Arabian Sea because evaporation exceeds precipitation there. In the Southeast Arabian Sea salinity drops to less than 34 PSU. It is lowest (c. 33 PSU) in the Bay of Bengal because of river runoff and precipitation. The Indonesian Throughflow and precipitation result in lower salinity (34 PSU) along the Sumatran west coast. Monsoonal variation results in eastward transport of saltier water from the Arabian Sea to the Bay of Bengal from June to September and in westward transport by the East India Coastal Current to the Arabian Sea from January to April.[29]
An Indian Ocean garbage patch was discovered in 2010 covering at least 5 million square kilometres (1.9 million square miles). Riding the southern Indian Ocean Gyre, this vortex of plastic garbage constantly circulates the ocean from Australia to Africa, down the Mozambique Channel, and back to Australia in a period of six years, except for debris that gets indefinitely stuck in the centre of the gyre.[30]
According to a 2012 study, the garbage patch in the Indian Ocean will shrink over several decades and vanish completely within centuries. Over several millennia, however, the global system of garbage patches will accumulate in the North Pacific.[31]
There are two amphidromes of opposite rotation in the Indian Ocean, probably caused by Rossby wave propagation.[32]
Icebergs drift as far north as 55° south latitude, similar to the Pacific but less than in the Atlantic where icebergs reach up to 45°S. The volume of iceberg loss in the Indian Ocean between 2004 and 2012 was 24 Gt.[33]
Since the 1960s, anthropogenic warming of the global ocean, combined with contributions of freshwater from retreating land ice, has caused a global rise in sea level. Sea level is rising in the Indian Ocean too, except in the southern tropical Indian Ocean, where it is falling, a pattern most likely caused by rising levels of greenhouse gases.[34]
Among the tropical oceans, the western Indian Ocean hosts one of the largest concentrations of phytoplankton blooms in summer, due to the strong monsoon winds. The monsoonal wind forcing leads to strong coastal and open-ocean upwelling, which introduces nutrients into the upper zones where sufficient light is available for photosynthesis and phytoplankton production. These phytoplankton blooms support the marine ecosystem, as the base of the marine food web, and eventually the larger fish species. The Indian Ocean accounts for the second-largest share of the most economically valuable tuna catch.[35] Its fish are of great and growing importance to the bordering countries for domestic consumption and export. Fishing fleets from Russia, Japan, South Korea, and Taiwan also exploit the Indian Ocean, mainly for shrimp and tuna.[3]
Research indicates that increasing ocean temperatures are taking a toll on the marine ecosystem. A study of phytoplankton changes indicates a decline of up to 20% in marine plankton in the Indian Ocean during the past six decades. Tuna catch rates have also declined 50–90% during the past half century, mostly due to increased industrial fishing, with ocean warming adding further stress to the fish species.[36]
Endangered and vulnerable marine mammals and turtles:[37]
80% of the Indian Ocean is open ocean and includes nine large marine ecosystems: the Agulhas Current, Somali Coastal Current, Red Sea, Arabian Sea, Bay of Bengal, Gulf of Thailand, West Central Australian Shelf, Northwest Australian Shelf, and Southwest Australian Shelf. Coral reefs cover c. 200,000 km2 (77,000 sq mi). The coasts of the Indian Ocean include beaches and intertidal zones covering 3,000 km2 (1,200 sq mi) and 246 larger estuaries. Upwelling areas are small but important. The hypersaline salterns in India cover 5,000–10,000 km2 (1,900–3,900 sq mi), and species adapted to this environment, such as Artemia salina and Dunaliella salina, are important to bird life.[38]
Coral reefs, seagrass beds, and mangrove forests are the most productive ecosystems of the Indian Ocean; coastal areas produce 20 tonnes of fish per square kilometre. These areas, however, are also being urbanised, with populations often exceeding several thousand people per square kilometre, and fishing techniques are becoming more effective and often destructive beyond sustainable levels, while rising sea surface temperatures spread coral bleaching.[39]
Mangroves cover 80,984 km2 (31,268 sq mi) in the Indian Ocean region, or almost half of the world's mangrove habitat; 42,500 km2 (16,400 sq mi) of this, about half the mangroves in the Indian Ocean, is located in Indonesia. Mangroves originated in the Indian Ocean region and have adapted to a wide range of its habitats, but the region is also where they are suffering their biggest loss of habitat.[40]
In 2016 six new animal species were identified at hydrothermal vents in the Southwest Indian Ridge: a "Hoff" crab, a "giant peltospirid" snail, a whelk-like snail, a limpet, a scaleworm and a polychaete worm.[41]
The West Indian Ocean coelacanth was discovered in the Indian Ocean off South Africa in the 1930s, and in the late 1990s another species, the Indonesian coelacanth, was discovered off Sulawesi Island, Indonesia. Most extant coelacanths have been found in the Comoros. Although both species represent an order of lobe-finned fishes known from the Early Devonian (410 mya) and long thought to have gone extinct 66 mya, they are morphologically distinct from their Devonian ancestors. Over millions of years, coelacanths evolved to inhabit different environments: lungs adapted to shallow, brackish waters evolved into gills adapted to deep marine waters.[42]
Of Earth's 36 biodiversity hotspots, nine (25%) are located on the margins of the Indian Ocean.
The origin of this diversity is debated; the break-up of Gondwana can explain vicariance older than 100 mya, but the diversity on the younger, smaller islands must have required a Cenozoic dispersal from the rims of the Indian Ocean to the islands. A "reverse colonisation", from islands to continents, apparently occurred more recently; the chameleons, for example, first diversified on Madagascar and then colonised Africa. Several species on the islands of the Indian Ocean are textbook cases of evolutionary processes; the dung beetles, day geckos, and lemurs are all examples of adaptive radiation.[citation needed]
Many bones (250 bones per square metre) of recently extinct vertebrates have been found in the Mare aux Songes swamp in Mauritius, including bones of the dodo (Raphus cucullatus) and the Cylindraspis giant tortoise. An analysis of these remains suggests a process of aridification began in the southwest Indian Ocean around 4,000 years ago.[44]
Mammalian megafauna once widespread in the MPA was driven to near extinction in the early 20th century. Some species have since recovered successfully; the population of white rhinoceros (Ceratotherium simum simum) increased from fewer than 20 individuals in 1895 to more than 17,000 as of 2013. Other species are still dependent on fenced areas and management programmes, including the black rhinoceros (Diceros bicornis minor), African wild dog (Lycaon pictus), cheetah (Acinonyx jubatus), elephant (Loxodonta africana), and lion (Panthera leo).[45]
This biodiversity hotspot (and namesake ecoregion and "Endemic Bird Area") is a patchwork of small forested areas, often with a unique assemblage of species within each, located within 200 km (120 mi) of the coast and covering a total area of c. 6,200 km2 (2,400 sq mi). It also encompasses coastal islands, including Zanzibar, Pemba, and Mafia.[46]
This area, one of only two hotspots that are entirely arid, includes the Ethiopian Highlands, the East African Rift valley, the Socotra islands, as well as some small islands in the Red Sea and areas on the southern Arabian Peninsula. Endemic and threatened mammals include the dibatag (Ammodorcas clarkei) and Speke's gazelle (Gazella spekei), the Somali wild ass (Equus africanus somaliensis), and the hamadryas baboon (Papio hamadryas). It also contains many reptiles.[47]
In Somalia, the centre of the 1,500,000 km2 (580,000 sq mi) hotspot, the landscape is dominated by Acacia-Commiphora deciduous bushland, but it also includes the yeheb nut (Cordeauxia edulis) and species discovered more recently, such as the Somali cyclamen (Cyclamen somalense), the only cyclamen outside the Mediterranean. The Warsangli linnet (Carduelis johannis) is an endemic bird found only in northern Somalia. An unstable political regime has resulted in overgrazing, which has produced one of the most degraded hotspots, where only c. 5% of the original habitat remains.[48]
This hotspot encompasses the west coast of India and Sri Lanka. Until c. 10,000 years ago a land bridge connected Sri Lanka to the Indian Subcontinent, hence the region shares a common community of species.[49]
Indo-Burma encompasses a series of mountain ranges, five of Asia's largest river systems, and a wide range of habitats. The region has a long and complex geological history, and long periods of rising sea levels and glaciation have isolated ecosystems and thus promoted a high degree of endemism and speciation. The region includes two centres of endemism: the Annamite Mountains and the northern highlands on the China-Vietnam border.[50]
Several distinct floristic regions, the Indian, Malesian, Sino-Himalayan, and Indochinese regions, meet in a unique way in Indo-Burma and the hotspot contains an estimated 15,000–25,000 species of vascular plants, many of them endemic.[51]
Sundaland encompasses 17,000 islands of which Borneo and Sumatra are the largest. Endangered mammals include the Bornean and Sumatran orangutans, the proboscis monkey, and the Javan and Sumatran rhinoceroses.[52]
Stretching from Shark Bay to Israelite Bay and isolated by the arid Nullarbor Plain, the southwestern corner of Australia is a floristic region with a stable climate in which one of the world's richest floras, with 80% endemism, has evolved. From June to September the region is an explosion of colour, and the Wildflower Festival in Perth in September attracts more than half a million visitors.[53]
As the youngest of the major oceans,[54] the Indian Ocean has active spreading ridges that are part of the worldwide system of mid-ocean ridges. In the Indian Ocean these spreading ridges meet at the Rodrigues Triple Point with the Central Indian Ridge, including the Carlsberg Ridge, separating the African Plate from the Indian Plate; the Southwest Indian Ridge separating the African Plate from the Antarctic Plate; and the Southeast Indian Ridge separating the Australian Plate from the Antarctic Plate. The Central Indian Ridge is intercepted by the Owen Fracture Zone.[55]
Since the late 1990s, however, it has become clear that this traditional definition of the Indo-Australian Plate cannot be correct; it consists of three plates, the Indian Plate, the Capricorn Plate, and the Australian Plate, separated by diffuse boundary zones.[56]
Since 20 Ma the African Plate has been divided by the East African Rift System into the Nubian and Somali plates.[57]
There are only two trenches in the Indian Ocean: the 6,000 km (3,700 mi)-long Java Trench (also known as the Sunda Trench) south of Java and Sumatra, and the 900 km (560 mi)-long Makran Trench south of Iran and Pakistan.[55]
A series of ridges and seamount chains produced by hotspots pass over the Indian Ocean. The Réunion hotspot (active 70–40 million years ago) connects Réunion and the Mascarene Plateau to the Chagos-Laccadive Ridge and the Deccan Traps in north-western India; the Kerguelen hotspot (100–35 million years ago) connects the Kerguelen Islands and Kerguelen Plateau to the Ninety East Ridge and the Rajmahal Traps in north-eastern India; the Marion hotspot (100–70 million years ago) possibly connects Prince Edward Islands to the Eighty Five East Ridge.[58] These hotspot tracks have been broken by the still active spreading ridges mentioned above.[55]
There are fewer seamounts in the Indian Ocean than in the Atlantic and Pacific. These are typically deeper than 3,000 m (9,800 ft) and located north of 55°S and west of 80°E. Most originated at spreading ridges but some are now located in basins far away from these ridges. The ridges of the Indian Ocean form ranges of seamounts, sometimes very long, including the Carlsberg Ridge, Madagascar Ridge, Central Indian Ridge, Southwest Indian Ridge, Chagos-Laccadive Ridge, 85°E Ridge, 90°E Ridge, Southeast Indian Ridge, Broken Ridge, and East Indiaman Ridge. The Agulhas Plateau and Mascarene Plateau are the two major shallow areas.[27]
The opening of the Indian Ocean began c. 156 Ma when Africa separated from East Gondwana. The Indian Subcontinent began to separate from Australia-Antarctica 135–125 Ma and as the Tethys Ocean north of India began to close 118–84 Ma the Indian Ocean opened behind it.[55]
The Indian Ocean, together with the Mediterranean, has connected people since ancient times, whereas the Atlantic and Pacific have had the roles of barriers or mare incognitum. The written history of the Indian Ocean, however, has been Eurocentric and largely dependent on the availability of written sources from the colonial era. This history is often divided into an ancient period followed by an Islamic period; the subsequent early modern and colonial/modern periods are often subdivided into Portuguese, Dutch, and British periods.[59]
A concept of an "Indian Ocean World" (IOW), similar to that of the "Atlantic World", exists but emerged much more recently and is not well established. The IOW is, nevertheless, sometimes referred to as the "first global economy" and was based on the monsoon which linked Asia, China, India, and Mesopotamia. It developed independently from the European global trade in the Mediterranean and Atlantic and remained largely independent from them until European colonial dominance in the 19th century.[60]
The diverse history of the Indian Ocean is a unique mix of cultures, ethnic groups, natural resources, and shipping routes. It grew in importance beginning in the 1960s and 1970s, and after the Cold War it has undergone periods of political instability, most recently with the emergence of India and China as regional powers.[61]
Pleistocene fossils of Homo erectus and other pre-H. sapiens hominin fossils, similar to H. heidelbergensis in Europe, have been found in India. According to the Toba catastrophe theory, a supereruption c. 74,000 years ago at Lake Toba, Sumatra, covered India with volcanic ash and wiped out one or more lineages of such archaic humans in India and Southeast Asia.[62]
The Out of Africa theory states that Homo sapiens spread from Africa into mainland Eurasia. The more recent Southern Dispersal or Coastal hypothesis instead advocates that modern humans spread along the coasts of the Arabian Peninsula and southern Asia. This hypothesis is supported by mtDNA research which reveals a rapid dispersal event during the Late Pleistocene (11,000 years ago). This coastal dispersal, however, began in East Africa 75,000 years ago and occurred intermittently from estuary to estuary along the northern perimeter of the Indian Ocean at a rate of 0.7–4.0 km (0.43–2.49 mi) per year. It eventually resulted in modern humans migrating from Sunda over Wallacea to Sahul (Southeast Asia to Australia).[63]
Since then, waves of migration have resettled people and, clearly, the Indian Ocean littoral had been inhabited long before the first civilisations emerged. By 5,000–6,000 years ago, six distinct cultural centres had evolved around the Indian Ocean: East Africa, the Middle East, the Indian Subcontinent, South East Asia, the Malay World, and Australia, each interlinked with its neighbours.[64]
Food globalisation began on the Indian Ocean littoral c. 4,000 years ago. Five African crops (sorghum, pearl millet, finger millet, cowpea, and hyacinth bean) somehow found their way to Gujarat in India during the Late Harappan (2000–1700 BCE). Gujarati merchants became the first explorers of the Indian Ocean as they traded African goods such as ivory, tortoise shells, and slaves. Broomcorn millet found its way from Central Asia to Africa, together with chickens and zebu cattle, although the exact timing is disputed. Around 2000 BCE black pepper and sesame, both native to Asia, appeared in Egypt, albeit in small quantities. Around the same time the black rat and the house mouse migrated from Asia to Egypt. Bananas reached Africa around 3,000 years ago.[65]
At least eleven prehistoric tsunamis have struck the Indian Ocean coast of Indonesia between 7400 and 2900 years ago. Analysing sand beds in caves in the Aceh region, scientists concluded that the intervals between these tsunamis have varied from series of minor tsunamis over a century to dormant periods of more than 2000 years preceding megathrusts in the Sunda Trench. Although the risk for future tsunamis is high, a major megathrust such as the one in 2004 is likely to be followed by a long dormant period.[66]
A group of scientists have argued that two large-scale impact events have occurred in the Indian Ocean: the Burckle Crater in the southern Indian Ocean in 2800 BCE and the Kanmare and Tabban craters in the Gulf of Carpentaria in northern Australia in 536 CE. Evidence for these impacts, the team argues, includes micro-ejecta and chevron dunes in southern Madagascar and in the Australian gulf. Geological evidence suggests the tsunamis caused by these impacts reached 205 m (673 ft) above sea level and 45 km (28 mi) inland. The impact events must have disrupted human settlements and perhaps even contributed to major climate changes.[67]
The history of the Indian Ocean is marked by maritime trade; cultural and commercial exchange probably date back at least seven thousand years.[68] Human culture spread early on the shores of the Indian Ocean and was always linked to the cultures of the Mediterranean and Persian Gulf. Before c. 2000 BCE, however, cultures on its shores were only loosely tied to each other; bronze, for example, was developed in Mesopotamia c. 3000 BCE but remained uncommon in Egypt before 1800 BCE.[69]
During this period, independent, short-distance oversea communications along its littoral margins evolved into an all-embracing network. The début of this network was not the achievement of a centralised or advanced civilisation but of local and regional exchange in the Persian Gulf, the Red Sea, and the Arabian Sea. Sherds of Ubaid (2500–500 BCE) pottery have been found in the western Gulf at Dilmun, present-day Bahrain, traces of exchange between this trading centre and Mesopotamia. The Sumerians traded grain, pottery, and bitumen (used for reed boats) for copper, stone, timber, tin, dates, onions, and pearls.[70]
Coast-bound vessels transported goods between the Indus Valley Civilisation (2600–1900 BCE) in the Indian subcontinent (modern-day Pakistan and Northwest India) and the Persian Gulf and Egypt.[68]
The Red Sea, one of the main trade routes in Antiquity, was explored by Egyptians and Phoenicians during the last two millennia BCE. In the 6th century BCE Greek explorer Scylax of Caryanda made a journey to India, working for the Persian king Darius, and his now lost account put the Indian Ocean on the maps of Greek geographers. The Greeks began to explore the Indian Ocean following the conquests of Alexander the Great, who ordered a circumnavigation of the Arabian Peninsula in 323 BCE. During the two centuries that followed, the reports of the explorers of Ptolemaic Egypt resulted in the best maps of the region until the Portuguese era many centuries later. The main interest in the region for the Ptolemies was not commercial but military; they explored Africa to hunt for war elephants.[71]
The Rub' al Khali desert isolates the southern parts of the Arabian Peninsula and the Indian Ocean from the Arab world. This encouraged the development of maritime trade in the region, linking the Red Sea and the Persian Gulf to East Africa and India. The monsoon (from mawsim, the Arabic word for season), however, was used by sailors long before being "discovered" by Hippalus in the 1st century. Indian wood has been found in Sumerian cities, there is evidence of Akkadian coastal trade in the region, and contacts between India and the Red Sea date back to 2300 BCE. The archipelagoes of the central Indian Ocean, the Laccadive and Maldive islands, were probably populated from the Indian mainland during the 2nd century BCE. They appear in written history in the account of the merchant Sulaiman al-Tajir in the 9th century, but the treacherous reefs of the islands were most likely cursed by the sailors of Aden long before the islands were even settled.[72]
Periplus of the Erythraean Sea, an Alexandrian guide to the world beyond the Red Sea, including Africa and India, from the first century CE, not only gives insights into trade in the region but also shows that Roman and Greek sailors had already gained knowledge about the monsoon winds.[68] The contemporaneous settlement of Madagascar by Austronesian sailors shows that the littoral margins of the Indian Ocean were both well-populated and regularly traversed at least by this time, although the monsoon must have been common knowledge in the Indian Ocean for centuries.[68]
The Indian Ocean's relatively calmer waters opened the areas bordering it to trade earlier than the Atlantic or Pacific oceans. The powerful monsoons also meant ships could easily sail west early in the season, then wait a few months and return eastwards. This allowed ancient Indonesian peoples to cross the Indian Ocean to settle in Madagascar around 1 CE.[73]
In the 2nd or 1st century BCE, Eudoxus of Cyzicus was the first Greek to cross the Indian Ocean. The probably fictitious sailor Hippalus is said to have learnt the direct route from Arabia to India around this time.[74] During the 1st and 2nd centuries CE intensive trade relations developed between Roman Egypt and the Tamil kingdoms of the Cheras, Cholas and Pandyas in Southern India. Like the Indonesian people above, the western sailors used the monsoon to cross the ocean. The unknown author of the Periplus of the Erythraean Sea describes this route, as well as the commodities that were traded along various commercial ports on the coasts of the Horn of Africa and India circa 1 CE. Among these trading settlements were Mosylon and Opone on the Red Sea littoral.[9]
Unlike the Pacific Ocean, where the civilization of the Polynesians reached and populated most of the far-flung islands and atolls, almost all the islands, archipelagos, and atolls of the Indian Ocean were uninhabited until colonial times. Although there were numerous ancient civilizations in the coastal states of Asia and parts of Africa, the Maldives were the only island group in the central Indian Ocean region where an ancient civilization flourished.[75] Maldivians, on their annual trade trip, took their oceangoing trade ships to Sri Lanka rather than mainland India, which is much closer, because their ships were dependent on the Indian Monsoon Current.[76]
Arab missionaries and merchants began to spread Islam along the western shores of the Indian Ocean from the 8th century, if not earlier. A Swahili stone mosque dating to the 8th–15th centuries has been found in Shanga, Kenya. Trade across the Indian Ocean gradually introduced Arabic script and rice as a staple in Eastern Africa.[77]
Muslim merchants traded an estimated 1000 African slaves annually between 800 and 1700, a number that grew to c. 4000 during the 18th century, and 3700 during the period 1800–1870. Slave trade also occurred in the eastern Indian Ocean before the Dutch settled there around 1600 but the volume of this trade is unknown.[78]
From 1405 to 1433, Admiral Zheng He is said to have led large fleets of the Ming Dynasty on several treasure voyages through the Indian Ocean, ultimately reaching the coastal countries of East Africa.[79]
The Portuguese navigator Vasco da Gama rounded the Cape of Good Hope during his first voyage in 1497 and became the first European to sail to India. The Swahili people he encountered along the African east coast lived in a series of cities and had established trade routes to India and China. The Portuguese kidnapped most of their pilots in coastal raids and on board ships. A few of the pilots, however, were gifts from local Swahili rulers, including a sailor from Gujarat, a gift from a Malindi ruler in Kenya, who helped the Portuguese reach India. In expeditions after 1500 the Portuguese attacked and colonised cities along the African coast.[80]
European slave trade in the Indian Ocean began when Portugal established the Estado da Índia in the early 16th century. From then until the 1830s, c. 200 slaves were exported from Mozambique annually, and similar figures have been estimated for slaves brought from Asia to the Philippines during the Iberian Union (1580–1640).[78]
The Ottoman Empire began its expansion into the Indian Ocean in 1517 with the conquest of Egypt under Sultan Selim I. Although the Ottomans shared the same religion as the trading communities in the Indian Ocean, the region was unexplored by them. Maps that included the Indian Ocean had been produced by Muslim geographers centuries before the Ottoman conquests; Muslim scholars, such as Ibn Battuta in the 14th century, had visited most parts of the known world; contemporaneously with Vasco da Gama, the Arab navigator Ahmad ibn Mājid had compiled a guide to navigation in the Indian Ocean; the Ottomans, nevertheless, began their own parallel era of discovery which rivalled the European expansion.[81]
The establishment of the Dutch East India Company in the early 17th century led to a rapid increase in trade volume; there were perhaps up to 500,000 slaves working in Dutch colonies during the 17th and 18th centuries, mostly in the Indian Ocean. For example, some 4,000 African slaves were used to build the Colombo fortress in Sri Lanka. Bali and neighbouring islands supplied regional networks with c. 100,000–150,000 slaves between 1620 and 1830. Indian and Chinese traders supplied Dutch Indonesia with perhaps 250,000 slaves during the 17th and 18th centuries.[78]
The British East India Company was established during the same period, and in 1622 one of its ships first carried slaves from the Coromandel Coast of India to Indonesia. The British mostly brought slaves from Africa and islands in the Indian Ocean to India and Indonesia, but also exported slaves from India. The French colonised Réunion and Mauritius in 1721; by 1735 some 7,200 slaves populated the Mascarene Islands, a number which had reached 133,000 by 1807. The British captured the islands in 1810, however, and because the British Parliament had prohibited slavery in 1807, a system of clandestine slave trade developed, resulting in 336,000–388,000 slaves being exported to the Mascarene Islands between 1670 and 1848.[78]
In all, Europeans traded 567,900–733,200 slaves within the Indian Ocean between 1500 and 1850, and an almost equal number was exported from the Indian Ocean to the Americas during the same period. Slave trade in the Indian Ocean was, nevertheless, very limited compared to the c. 12,000,000 slaves exported across the Atlantic.[78]
Scientifically, the Indian Ocean remained poorly explored before the International Indian Ocean Expedition in the early 1960s. Earlier efforts were limited: the Challenger expedition of 1872–1876 only reported from south of the polar front; the Valdivia expedition of 1898–1899 made deep samples in the Indian Ocean; and in the 1930s the John Murray Expedition mainly studied shallow-water habitats. The Swedish Deep Sea Expedition of 1947–1948 also sampled the Indian Ocean on its global tour, and the Danish Galathea sampled deep-water fauna from Sri Lanka to South Africa on its second expedition of 1950–1952. The Soviet research vessel Vityaz also did research in the Indian Ocean.[1]
The Suez Canal opened in 1869 when the Industrial Revolution dramatically changed global shipping – the sailing ship declined in importance as did the importance of European trade in favour of trade in East Asia and Australia.[82]
The construction of the canal introduced many non-indigenous species into the Mediterranean. For example, the goldband goatfish (Upeneus moluccensis) has replaced the red mullet (Mullus barbatus), and since the 1980s huge swarms of scyphozoan jellyfish (Rhopilema nomadica) have affected tourism and fisheries along the Levantine coast and clogged power and desalination plants. Plans announced in 2014 to build a new, much larger Suez Canal parallel to the 19th-century canal will most likely boost the region's economy but also cause ecological damage over a much wider area.[83]
Throughout the colonial era, islands such as Mauritius were important shipping nodes for the Dutch, French, and British. Mauritius, an uninhabited island, became populated by slaves from Africa and indentured labourers from India. The end of World War II marked the end of the colonial era. The British left Mauritius in 1974, and with 70% of the population of Indian descent, Mauritius became a close ally of India. In the 1980s, during the Cold War, the South African regime acted to destabilise several island nations in the Indian Ocean, including the Seychelles, Comoros, and Madagascar. India intervened in Mauritius to prevent a coup d'état, backed by the United States, which feared the Soviet Union could gain access to Port Louis and threaten the U.S. base on Diego Garcia.[84]
Iranrud is an unrealised plan by Iran and the Soviet Union to build a canal between the Caspian Sea and Persian Gulf.
Testimonies from the colonial era are stories of African slaves, Indian indentured labourers, and white settlers. But while there was a clear racial line between free men and slaves in the Atlantic World, this delineation is less distinct in the Indian Ocean: there were Indian slaves and settlers as well as black indentured labourers. There was also a string of prison camps across the Indian Ocean, from Robben Island in South Africa to Cellular Jail in the Andamans, in which prisoners, exiles, POWs, forced labourers, merchants, and people of different faiths were forcefully united. On the islands of the Indian Ocean, therefore, a trend of creolisation emerged.[85]
On 26 December 2004, fourteen countries around the Indian Ocean were hit by a wave of tsunamis caused by the 2004 Indian Ocean earthquake. The waves radiated across the ocean at speeds exceeding 500 km/h (310 mph), reached up to 20 m (66 ft) in height, and resulted in an estimated 236,000 deaths.[86]
|
190 |
+
|
191 |
+
In the late 2000s the ocean evolved into a hub of pirate activity. By 2013, attacks off the Horn region's coast had steadily declined due to active private security and international navy patrols, especially by the Indian Navy.[87]
|
192 |
+
|
193 |
+
Malaysia Airlines Flight 370, a Boeing 777 airliner with 239 persons on board, disappeared on 8 March 2014 and is believed to have crashed into the southeastern Indian Ocean about 2,000 km (1,200 mi) from the coast of southwest Western Australia. Despite an extensive search, the whereabouts of the aircraft's wreckage remain unknown.[88]
|
194 |
+
|
195 |
+
The sea lanes in the Indian Ocean are considered among the most strategically important in the world: more than 80 percent of the world's seaborne trade in oil transits the Indian Ocean and its vital chokepoints, with 40 percent passing through the Strait of Hormuz, 35 percent through the Strait of Malacca, and 8 percent through the Bab el-Mandeb Strait.[89]
|
196 |
+
|
197 |
+
The Indian Ocean provides major sea routes connecting the Middle East, Africa, and East Asia with Europe and the Americas. It carries a particularly heavy traffic of petroleum and petroleum products from the oil fields of the Persian Gulf and Indonesia. Large reserves of hydrocarbons are being tapped in the offshore areas of Saudi Arabia, Iran, India, and Western Australia. An estimated 40% of the world's offshore oil production comes from the Indian Ocean.[3] Beach sands rich in heavy minerals and offshore placer deposits are actively exploited by bordering countries, particularly India, Pakistan, South Africa, Indonesia, Sri Lanka, and Thailand.
|
198 |
+
|
199 |
+
Chinese companies have made investments in several Indian Ocean ports, including Gwadar, Hambantota, Colombo and Sonadia. This has sparked a debate about the strategic implications of these investments.[90] (See String of Pearls)
|
en/4226.html.txt
ADDED
@@ -0,0 +1,111 @@
1 |
+
|
2 |
+
|
3 |
+
The Pacific Ocean is the largest and deepest of Earth's oceanic divisions. It extends from the Arctic Ocean in the north to the Southern Ocean (or, depending on definition, to Antarctica) in the south and is bounded by the continents of Asia and Australia in the west and the Americas in the east.
|
4 |
+
|
5 |
+
At 165,250,000 square kilometers (63,800,000 square miles) in area (as defined with an Antarctic southern border), this largest division of the World Ocean—and, in turn, the hydrosphere—covers about 46% of Earth's water surface and about 32% of its total surface area, making it larger than all of Earth's land area combined.[1] The centers of both the Water Hemisphere and the Western Hemisphere are in the Pacific Ocean. The equator subdivides it into the North(ern) Pacific Ocean and South(ern) Pacific Ocean, with two exceptions: the Galápagos and Gilbert Islands, while straddling the equator, are deemed wholly within the South Pacific.[2]
|
6 |
+
|
7 |
+
Its mean depth is 4,000 meters (13,000 feet).[3] Challenger Deep in the Mariana Trench, located in the western north Pacific, is the deepest point in the world, reaching a depth of 10,928 meters (35,853 feet).[4] The Pacific also contains the deepest point in the Southern Hemisphere, the Horizon Deep in the Tonga Trench, at 10,823 meters (35,509 feet).[5] The third deepest point on Earth, the Sirena Deep, is also located in the Mariana Trench.
|
8 |
+
|
9 |
+
The western Pacific has many major marginal seas, including the South China Sea, the East China Sea, the Sea of Japan, the Sea of Okhotsk, the Philippine Sea, the Coral Sea, and the Tasman Sea.
|
10 |
+
|
11 |
+
Though the peoples of Asia and Oceania have traveled the Pacific Ocean since prehistoric times, the eastern Pacific was first sighted by Europeans in the early 16th century when Spanish explorer Vasco Núñez de Balboa crossed the Isthmus of Panama in 1513 and discovered the great "southern sea" which he named Mar del Sur (in Spanish). The ocean's current name was coined by Portuguese explorer Ferdinand Magellan during the Spanish circumnavigation of the world in 1521, as he encountered favorable winds on reaching the ocean. He called it Mar Pacífico, which in both Portuguese and Spanish means "peaceful sea".[6]
|
12 |
+
|
13 |
+
Important human migrations occurred in the Pacific in prehistoric times. About 3000 BC, the Austronesian peoples on the island of Taiwan mastered the art of long-distance canoe travel and spread themselves and their languages south to the Philippines, Indonesia, and maritime Southeast Asia; west towards Madagascar; southeast towards New Guinea and Melanesia (intermarrying with native Papuans); and east to the islands of Micronesia, Oceania and Polynesia.[8]
|
14 |
+
|
15 |
+
Long-distance trade developed all along the coast from Mozambique to Japan. Trade, and therefore knowledge, extended to the Indonesian islands but apparently not Australia. By at least 878, when there was a significant Islamic settlement in Canton, much of this trade was controlled by Arabs or Muslims. Earlier, in 219 BC, Xu Fu sailed out into the Pacific searching for the elixir of immortality. From 1404 to 1433 Zheng He led expeditions into the Indian Ocean.
|
16 |
+
|
17 |
+
The first contact of European navigators with the western edge of the Pacific Ocean was made by the Portuguese expeditions of António de Abreu and Francisco Serrão, via the Lesser Sunda Islands, to the Maluku Islands, in 1512,[9][10] and with Jorge Álvares's expedition to southern China in 1513,[11] both ordered by Afonso de Albuquerque from Malacca.
|
18 |
+
|
19 |
+
The east side of the ocean was discovered by Spanish explorer Vasco Núñez de Balboa in 1513 after his expedition crossed the Isthmus of Panama and reached a new ocean.[12] He named it Mar del Sur (literally, "Sea of the South" or "South Sea") because the ocean was to the south of the coast of the isthmus where he first observed the Pacific.
|
20 |
+
|
21 |
+
In 1519, Portuguese explorer Ferdinand Magellan sailed the Pacific East to West on a Spanish expedition to the Spice Islands that would eventually result in the first world circumnavigation. Magellan called the ocean Pacífico (or "Pacific" meaning, "peaceful") because, after sailing through the stormy seas off Cape Horn, the expedition found calm waters. The ocean was often called the Sea of Magellan in his honor until the eighteenth century.[13] Magellan stopped at one uninhabited Pacific island before stopping at Guam in March 1521.[14] Although Magellan himself died in the Philippines in 1521, Spanish Basque navigator Juan Sebastián Elcano led the remains of the expedition back to Spain across the Indian Ocean and round the Cape of Good Hope, completing the first world circumnavigation in a single expedition in 1522.[15] Sailing around and east of the Moluccas, between 1525 and 1527, Portuguese expeditions discovered the Caroline Islands,[16] the Aru Islands,[17] and Papua New Guinea.[18] In 1542–43 the Portuguese also reached Japan.[19]
|
22 |
+
|
23 |
+
In 1564, five Spanish ships carrying 379 explorers crossed the ocean from Mexico led by Miguel López de Legazpi, and sailed to the Philippines and Mariana Islands.[20] For the remainder of the 16th century, Spanish influence was paramount, with ships sailing from Mexico and Peru across the Pacific Ocean to the Philippines via Guam, and establishing the Spanish East Indies. The Manila galleons operated for two and a half centuries, linking Manila and Acapulco, in one of the longest trade routes in history. Spanish expeditions also discovered Tuvalu, the Marquesas, the Cook Islands, the Solomon Islands, and the Admiralty Islands in the South Pacific.[21]
|
24 |
+
|
25 |
+
Later, in the quest for Terra Australis ("the [great] Southern Land"), Spanish explorations in the 17th century, such as the expedition led by the Portuguese navigator Pedro Fernandes de Queirós, discovered the Pitcairn and Vanuatu archipelagos, and sailed the Torres Strait between Australia and New Guinea, named after navigator Luís Vaz de Torres. Dutch explorers, sailing around southern Africa, also engaged in discovery and trade; Willem Janszoon made the first completely documented European landing in Australia (1606), in Cape York Peninsula,[22] and Abel Janszoon Tasman circumnavigated and landed on parts of the Australian continental coast and discovered Tasmania and New Zealand in 1642.[23]
|
26 |
+
|
27 |
+
In the 16th and 17th centuries, Spain considered the Pacific Ocean a mare clausum—a sea closed to other naval powers. As the only known entrance from the Atlantic, the Strait of Magellan was at times patrolled by fleets sent to prevent entrance of non-Spanish ships. On the western side of the Pacific Ocean the Dutch threatened the Spanish Philippines.[24]
|
28 |
+
|
29 |
+
The 18th century marked the beginning of major exploration by the Russians in Alaska and the Aleutian Islands, such as the First Kamchatka expedition and the Great Northern Expedition, led by the Danish Russian navy officer Vitus Bering. Spain also sent expeditions to the Pacific Northwest, reaching Vancouver Island in southern Canada, and Alaska. The French explored and settled Polynesia, and the British made three voyages with James Cook to the South Pacific and Australia, Hawaii, and the North American Pacific Northwest. In 1768, Pierre-Antoine Véron, a young astronomer accompanying Louis Antoine de Bougainville on his voyage of exploration, established the width of the Pacific with precision for the first time in history.[25] One of the earliest voyages of scientific exploration was organized by Spain in the Malaspina Expedition of 1789–1794. It sailed vast areas of the Pacific, from Cape Horn to Alaska, Guam and the Philippines, New Zealand, Australia, and the South Pacific.[21]
|
30 |
+
|
31 |
+
Growing imperialism during the 19th century resulted in the occupation of much of Oceania by European powers, and later Japan and the United States. Significant contributions to oceanographic knowledge were made by the voyages of HMS Beagle in the 1830s, with Charles Darwin aboard;[26] HMS Challenger during the 1870s;[27] the USS Tuscarora (1873–76);[28] and the German Gazelle (1874–76).[29]
|
32 |
+
|
33 |
+
In Oceania, France obtained a leading position as imperial power after making Tahiti and New Caledonia protectorates in 1842 and 1853, respectively.[30] After navy visits to Easter Island in 1875 and 1887, Chilean navy officer Policarpo Toro negotiated the incorporation of the island into Chile with native Rapanui in 1888. By occupying Easter Island, Chile joined the imperial nations.[31](p53) By 1900 nearly all Pacific islands were under the control of Britain, France, the United States, Germany, Japan, and Chile.[30]
|
34 |
+
|
35 |
+
Although the United States gained control of Guam and the Philippines from Spain in 1898,[32] Japan controlled most of the western Pacific by 1914 and occupied many other islands during the Pacific War; however, by the end of that war, Japan was defeated and the U.S. Pacific Fleet was the virtual master of the ocean. The Japanese-ruled Northern Mariana Islands came under the control of the United States.[33] Since the end of World War II, many former colonies in the Pacific have become independent states.
|
36 |
+
|
37 |
+
The Pacific separates Asia and Australia from the Americas. It may be further subdivided by the equator into northern (North Pacific) and southern (South Pacific) portions. It extends from the Antarctic region in the South to the Arctic in the north.[1] The Pacific Ocean encompasses approximately one-third of the Earth's surface, having an area of 165,200,000 km2 (63,800,000 sq mi)— larger than Earth's entire landmass combined, 150,000,000 km2 (58,000,000 sq mi).[34]
|
38 |
+
|
39 |
+
Extending approximately 15,500 km (9,600 mi) from the Bering Sea in the Arctic to the northern extent of the circumpolar Southern Ocean at 60°S (older definitions extend it to Antarctica's Ross Sea), the Pacific reaches its greatest east–west width at about 5°N latitude, where it stretches approximately 19,800 km (12,300 mi) from Indonesia to the coast of Colombia—halfway around the world, and more than five times the diameter of the Moon.[35] The lowest known point on Earth—the Mariana Trench—lies 10,911 m (35,797 ft; 5,966 fathoms) below sea level. Its average depth is 4,280 m (14,040 ft; 2,340 fathoms), putting the total water volume at roughly 710,000,000 km3 (170,000,000 cu mi).[1]
|
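As a quick sanity check on the figures above, the quoted mean depth times the surface area should roughly reproduce the quoted water volume. A minimal sketch (the variable names are illustrative, not from any source):

```python
# Rough consistency check: mean depth × surface area ≈ water volume.
# Figures are the ones quoted in the text above.
mean_depth_km = 4.28        # mean depth of 4,280 m, expressed in km
area_km2 = 165_250_000      # surface area in square kilometers

volume_km3 = mean_depth_km * area_km2
print(round(volume_km3 / 1e6))  # ≈ 707 (million km³), close to the quoted 710
```

The small gap between 707 and 710 million km³ reflects rounding in the published depth and area figures.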
40 |
+
|
41 |
+
Due to the effects of plate tectonics, the Pacific Ocean is currently shrinking by roughly 2.5 cm (1 in) per year on three sides, roughly averaging 0.52 km2 (0.20 sq mi) a year. By contrast, the Atlantic Ocean is increasing in size.[36][37]
|
42 |
+
|
43 |
+
Along the Pacific Ocean's irregular western margins lie many seas, the largest of which are the Celebes Sea, Coral Sea, East China Sea (East Sea), Philippine Sea, Sea of Japan, South China Sea (South Sea), Sulu Sea, Tasman Sea, and Yellow Sea (West Sea of Korea). The Indonesian Seaway (including the Strait of Malacca and Torres Strait) joins the Pacific and the Indian Ocean to the west, and Drake Passage and the Strait of Magellan link the Pacific with the Atlantic Ocean on the east. To the north, the Bering Strait connects the Pacific with the Arctic Ocean.[38]
|
44 |
+
|
45 |
+
As the Pacific straddles the 180th meridian, the West Pacific (or western Pacific, near Asia) is in the Eastern Hemisphere, while the East Pacific (or eastern Pacific, near the Americas) is in the Western Hemisphere.[39]
|
46 |
+
|
47 |
+
The South Pacific harbors the Southeast Indian Ridge, which crosses from south of Australia into the Pacific-Antarctic Ridge before merging, south of South America, with another ridge to form the East Pacific Rise; this in turn connects, south of North America, with the Juan de Fuca Ridge.
|
48 |
+
|
49 |
+
For most of Magellan's voyage from the Strait of Magellan to the Philippines, the explorer indeed found the ocean peaceful; however, the Pacific is not always peaceful. Many tropical storms batter the islands of the Pacific.[40] The lands around the Pacific Rim are full of volcanoes and often affected by earthquakes.[41] Tsunamis, caused by underwater earthquakes, have devastated many islands and in some cases destroyed entire towns.[42]
|
50 |
+
|
51 |
+
The Martin Waldseemüller map of 1507 was the first to show the Americas separating two distinct oceans.[43] Later, the Diogo Ribeiro map of 1529 was the first to show the Pacific at about its proper size.[44]
|
52 |
+
|
53 |
+
1 The status of Taiwan is disputed. For more information, see political status of Taiwan.
|
54 |
+
|
55 |
+
The ocean has most of the islands in the world. There are about 25,000 islands in the Pacific Ocean.[45][46][47] The islands entirely within the Pacific Ocean can be divided into three main groups known as Micronesia, Melanesia and Polynesia. Micronesia, which lies north of the equator and west of the International Date Line, includes the Mariana Islands in the northwest, the Caroline Islands in the center, the Marshall Islands to the east and the islands of Kiribati in the southeast.[48][49]
|
56 |
+
|
57 |
+
Melanesia, to the southwest, includes New Guinea, the world's second largest island after Greenland and by far the largest of the Pacific islands. The other main Melanesian groups from north to south are the Bismarck Archipelago, the Solomon Islands, Santa Cruz, Vanuatu, Fiji and New Caledonia.[50]
|
58 |
+
|
59 |
+
The largest area, Polynesia, stretching from Hawaii in the north to New Zealand in the south, also encompasses Tuvalu, Tokelau, Samoa, Tonga and the Kermadec Islands to the west, the Cook Islands, Society Islands and Austral Islands in the center, and the Marquesas Islands, Tuamotu, Mangareva Islands, and Easter Island to the east.[51]
|
60 |
+
|
61 |
+
Islands in the Pacific Ocean are of four basic types: continental islands, high islands, coral reefs and uplifted coral platforms. Continental islands lie outside the andesite line and include New Guinea, the islands of New Zealand, and the Philippines. Some of these islands are structurally associated with nearby continents. High islands are of volcanic origin, and many contain active volcanoes. Among these are Bougainville, Hawaii, and the Solomon Islands.[52]
|
62 |
+
|
63 |
+
The coral reefs of the South Pacific are low-lying structures that have built up on basaltic lava flows under the ocean's surface. One of the most dramatic is the Great Barrier Reef off northeastern Australia with chains of reef patches. A second island type formed of coral is the uplifted coral platform, which is usually slightly larger than the low coral islands. Examples include Banaba (formerly Ocean Island) and Makatea in the Tuamotu group of French Polynesia.[53][54]
|
64 |
+
|
65 |
+
Ladrilleros Beach in Colombia on the coast of Chocó natural region
|
66 |
+
|
67 |
+
Point Reyes headlands, Point Reyes National Seashore, California
|
68 |
+
|
69 |
+
Tahuna maru islet, French Polynesia
|
70 |
+
|
71 |
+
Los Molinos on the coast of Southern Chile
|
72 |
+
|
73 |
+
The volume of the Pacific Ocean, representing about 50.1 percent of the world's oceanic water, has been estimated at some 714 million cubic kilometers (171 million cubic miles).[55] Surface water temperatures in the Pacific can vary from −1.4 °C (29.5 °F), the freezing point of sea water, in the poleward areas to about 30 °C (86 °F) near the equator.[56] Salinity also varies latitudinally, reaching a maximum of 37 parts per thousand in the southeastern area. The water near the equator, which can have a salinity as low as 34 parts per thousand, is less salty than that found in the mid-latitudes because of abundant equatorial precipitation throughout the year. The lowest counts of less than 32 parts per thousand are found in the far north as less evaporation of seawater takes place in these frigid areas.[57] The motion of Pacific waters is generally clockwise in the Northern Hemisphere (the North Pacific gyre) and counter-clockwise in the Southern Hemisphere. The North Equatorial Current, driven westward along latitude 15°N by the trade winds, turns north near the Philippines to become the warm Japan or Kuroshio Current.[58]
|
74 |
+
|
75 |
+
Turning eastward at about 45°N, the Kuroshio forks and some water moves northward as the Aleutian Current, while the rest turns southward to rejoin the North Equatorial Current.[59] The Aleutian Current branches as it approaches North America and forms the base of a counter-clockwise circulation in the Bering Sea. Its southern arm becomes the chilled slow, south-flowing California Current.[60] The South Equatorial Current, flowing west along the equator, swings southward east of New Guinea, turns east at about 50°S, and joins the main westerly circulation of the South Pacific, which includes the Earth-circling Antarctic Circumpolar Current. As it approaches the Chilean coast, the South Equatorial Current divides; one branch flows around Cape Horn and the other turns north to form the Peru or Humboldt Current.[61]
|
76 |
+
|
77 |
+
The climate patterns of the Northern and Southern Hemispheres generally mirror each other. The trade winds in the southern and eastern Pacific are remarkably steady while conditions in the North Pacific are far more varied with, for example, cold winter temperatures on the east coast of Russia contrasting with the milder weather off British Columbia during the winter months due to the preferred flow of ocean currents.[62]
|
78 |
+
|
79 |
+
In the tropical and subtropical Pacific, the El Niño Southern Oscillation (ENSO) affects weather conditions. To determine the phase of ENSO, the most recent three-month sea surface temperature average for the area approximately 3,000 km (1,900 mi) to the southeast of Hawaii is computed, and if the region is more than 0.5 °C (0.9 °F) above or below normal for that period, then an El Niño or La Niña is considered in progress.[63]
|
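The threshold rule described above can be sketched as a small function. This is a minimal illustration of the "more than 0.5 °C above or below normal" criterion, not an official implementation; the function name and sample anomaly values are made up:

```python
def enso_phase(sst_anomaly_c: float) -> str:
    """Classify the ENSO phase from a three-month average sea surface
    temperature anomaly (in °C), using the ±0.5 °C threshold described
    in the text ("more than" 0.5 °C, hence strict comparisons)."""
    if sst_anomaly_c > 0.5:
        return "El Niño"
    if sst_anomaly_c < -0.5:
        return "La Niña"
    return "neutral"

print(enso_phase(0.8))   # El Niño
print(enso_phase(-0.6))  # La Niña
print(enso_phase(0.2))   # neutral
```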
80 |
+
|
81 |
+
In the tropical western Pacific, the monsoon and the related wet season during the summer months contrast with dry winds in the winter which blow over the ocean from the Asian landmass.[64] Worldwide, tropical cyclone activity peaks in late summer, when the difference between temperatures aloft and sea surface temperatures is the greatest; however, each particular basin has its own seasonal patterns. On a worldwide scale, May is the least active month, while September is the most active month. November is the only month in which all the tropical cyclone basins are active.[65] The Pacific hosts the two most active tropical cyclone basins, which are the northwestern Pacific and the eastern Pacific. Pacific hurricanes form south of Mexico, sometimes striking the western Mexican coast and occasionally the southwestern United States between June and October, while typhoons forming in the northwestern Pacific move into Southeast and East Asia from May to December. Tropical cyclones also form in the South Pacific basin, where they occasionally impact island nations.
|
82 |
+
|
83 |
+
In the arctic, icing from October to May can present a hazard for shipping while persistent fog occurs from June to December.[66] A climatological low in the Gulf of Alaska keeps the southern coast wet and mild during the winter months. The Westerlies and associated jet stream within the Mid-Latitudes can be particularly strong, especially in the Southern Hemisphere, due to the temperature difference between the tropics and Antarctica,[67] which records the coldest temperature readings on the planet. In the Southern hemisphere, because of the stormy and cloudy conditions associated with extratropical cyclones riding the jet stream, it is usual to refer to the Westerlies as the Roaring Forties, Furious Fifties and Shrieking Sixties according to the varying degrees of latitude.[68]
|
84 |
+
|
85 |
+
The ocean was first mapped by Abraham Ortelius; he called it Maris Pacifici following Ferdinand Magellan's description of it as "a pacific sea" during his circumnavigation from 1519 to 1522. To Magellan, it seemed much more calm (pacific) than the Atlantic.[69]
|
86 |
+
|
87 |
+
The andesite line is the most significant regional distinction in the Pacific. A petrologic boundary, it separates the deeper, mafic igneous rock of the Central Pacific Basin from the partially submerged continental areas of felsic igneous rock on its margins.[70] The andesite line follows the western edge of the islands off California and passes south of the Aleutian arc, along the eastern edge of the Kamchatka Peninsula, the Kuril Islands, Japan, the Mariana Islands, the Solomon Islands, and New Zealand's North Island.[71][72]
|
88 |
+
|
89 |
+
The dissimilarity continues northeastward along the western edge of the Andes Cordillera along South America to Mexico, returning then to the islands off California. Indonesia, the Philippines, Japan, New Guinea, and New Zealand lie outside the andesite line.
|
90 |
+
|
91 |
+
Within the closed loop of the andesite line are most of the deep troughs, submerged volcanic mountains, and oceanic volcanic islands that characterize the Pacific basin. Here basaltic lavas gently flow out of rifts to build huge dome-shaped volcanic mountains whose eroded summits form island arcs, chains, and clusters. Outside the andesite line, volcanism is of the explosive type, and the Pacific Ring of Fire is the world's foremost belt of explosive volcanism.[48] The Ring of Fire is named after the several hundred active volcanoes that sit above the various subduction zones.
|
92 |
+
|
93 |
+
The Pacific Ocean is the only ocean which is almost totally bounded by subduction zones. Only the Antarctic and Australian coasts have no nearby subduction zones.
|
94 |
+
|
95 |
+
The Pacific Ocean was born 750 million years ago at the breakup of Rodinia, although it is generally called the Panthalassic Ocean until the breakup of Pangea, about 200 million years ago.[73] The oldest Pacific Ocean floor is only around 180 Ma old, with older crust subducted by now.[74]
|
96 |
+
|
97 |
+
The Pacific Ocean contains several long seamount chains, formed by hotspot volcanism.[citation needed]
|
98 |
+
|
99 |
+
The exploitation of the Pacific's mineral wealth is hampered by the ocean's great depths. In shallow waters of the continental shelves off the coasts of Australia and New Zealand, petroleum and natural gas are extracted, and pearls are harvested along the coasts of Australia, Japan, Papua New Guinea, Nicaragua, Panama, and the Philippines, although in sharply declining volume in some cases.[75]
|
100 |
+
|
101 |
+
Fish are an important economic asset in the Pacific. The shallower shoreline waters of the continents and the more temperate islands yield herring, salmon, sardines, snapper, swordfish, and tuna, as well as shellfish.[76] Overfishing has become a serious problem in some areas. For example, catches in the rich fishing grounds of the Okhotsk Sea off the Russian coast have been reduced by at least half since the 1990s as a result of overfishing.[77]
|
102 |
+
|
103 |
+
The quantity of small plastic fragments floating in the north-east Pacific Ocean increased a hundredfold between 1972 and 2012.[79] The ever-growing Great Pacific garbage patch between California and Japan is three times the size of France.[80] An estimated 80,000 metric tons of plastic inhabit the patch, totaling 1.8 trillion pieces.[81]
|
104 |
+
|
105 |
+
Marine pollution is a generic term for the harmful entry into the ocean of chemicals or particles. The main culprits are those using the rivers for disposing of their waste.[82] The rivers then empty into the ocean, often also bringing chemicals used as fertilizers in agriculture. The excess of oxygen-depleting chemicals in the water leads to hypoxia and the creation of a dead zone.[83]
|
106 |
+
|
107 |
+
Marine debris, also known as marine litter, is human-created waste that has ended up floating in a lake, sea, ocean, or waterway. Oceanic debris tends to accumulate at the center of gyres and coastlines, frequently washing aground where it is known as beach litter.[82]
|
108 |
+
|
109 |
+
From 1946 to 1958, the Marshall Islands served as the Pacific Proving Grounds for the United States and were the site of 67 nuclear tests on various atolls.[84][85] Several nuclear weapons were lost in the Pacific Ocean,[86] including a one-megaton bomb lost during the 1965 Philippine Sea A-4 incident.[87]
|
110 |
+
|
111 |
+
In addition, the Pacific Ocean has served as the crash site of satellites, including Mars 96, Fobos-Grunt, and Upper Atmosphere Research Satellite.
|
en/4227.html.txt
ADDED
@@ -0,0 +1,111 @@
1 |
+
|
2 |
+
|
3 |
+
The Pacific Ocean is the largest and deepest of Earth's oceanic divisions. It extends from the Arctic Ocean in the north to the Southern Ocean (or, depending on definition, to Antarctica) in the south and is bounded by the continents of Asia and Australia in the west and the Americas in the east.
|
4 |
+
|
5 |
+
At 165,250,000 square kilometers (63,800,000 square miles) in area (as defined with an Antarctic southern border), this largest division of the World Ocean—and, in turn, the hydrosphere—covers about 46% of Earth's water surface and about 32% of its total surface area, making it larger than all of Earth's land area combined.[1] The centers of both the Water Hemisphere and the Western Hemisphere are in the Pacific Ocean. The equator subdivides it into the North(ern) Pacific Ocean and South(ern) Pacific Ocean, with two exceptions: the Galápagos and Gilbert Islands, while straddling the equator, are deemed wholly within the South Pacific.[2]
|
6 |
+
|
7 |
+
Its mean depth is 4,000 meters (13,000 feet).[3] Challenger Deep in the Mariana Trench, located in the western north Pacific, is the deepest point in the world, reaching a depth of 10,928 meters (35,853 feet).[4] The Pacific also contains the deepest point in the Southern Hemisphere, the Horizon Deep in the Tonga Trench, at 10,823 meters (35,509 feet).[5] The third deepest point on Earth, the Sirena Deep, is also located in the Mariana Trench.
|
8 |
+
|
9 |
+
The western Pacific has many major marginal seas, including the South China Sea, the East China Sea, the Sea of Japan, the Sea of Okhotsk, the Philippine Sea, the Coral Sea, and the Tasman Sea.
|
10 |
+
|
11 |
+
Though the peoples of Asia and Oceania have traveled the Pacific Ocean since prehistoric times, the eastern Pacific was first sighted by Europeans in the early 16th century when Spanish explorer Vasco Núñez de Balboa crossed the Isthmus of Panama in 1513 and discovered the great "southern sea" which he named Mar del Sur (in Spanish). The ocean's current name was coined by Portuguese explorer Ferdinand Magellan during the Spanish circumnavigation of the world in 1521, as he encountered favorable winds on reaching the ocean. He called it Mar Pacífico, which in both Portuguese and Spanish means "peaceful sea".[6]
|
12 |
+
|
13 |
+
Important human migrations occurred in the Pacific in prehistoric times. About 3000 BC, the Austronesian peoples on the island of Taiwan mastered the art of long-distance canoe travel and spread themselves and their languages south to the Philippines, Indonesia, and maritime Southeast Asia; west towards Madagascar; southeast towards New Guinea and Melanesia (intermarrying with native Papuans); and east to the islands of Micronesia, Oceania and Polynesia.[8]
|
14 |
+
|
15 |
+
Long-distance trade developed all along the coast from Mozambique to Japan. Trade, and therefore knowledge, extended to the Indonesian islands but apparently not Australia. By at least 878, when there was a significant Islamic settlement in Canton, much of this trade was controlled by Arabs or Muslims. Earlier, in 219 BC, Xu Fu sailed out into the Pacific searching for the elixir of immortality. From 1404 to 1433 Zheng He led expeditions into the Indian Ocean.
|
16 |
+
|
17 |
+
The first contact of European navigators with the western edge of the Pacific Ocean was made by the Portuguese expeditions of António de Abreu and Francisco Serrão, via the Lesser Sunda Islands, to the Maluku Islands, in 1512,[9][10] and with Jorge Álvares's expedition to southern China in 1513,[11] both ordered by Afonso de Albuquerque from Malacca.
|
18 |
+
|
19 |
+
The east side of the ocean was discovered by Spanish explorer Vasco Núñez de Balboa in 1513 after his expedition crossed the Isthmus of Panama and reached a new ocean.[12] He named it Mar del Sur (literally, "Sea of the South" or "South Sea") because the ocean was to the south of the coast of the isthmus where he first observed the Pacific.
|
20 |
+
|
21 |
+
In 1519, Portuguese explorer Ferdinand Magellan sailed the Pacific East to West on a Spanish expedition to the Spice Islands that would eventually result in the first world circumnavigation. Magellan called the ocean Pacífico (or "Pacific" meaning, "peaceful") because, after sailing through the stormy seas off Cape Horn, the expedition found calm waters. The ocean was often called the Sea of Magellan in his honor until the eighteenth century.[13] Magellan stopped at one uninhabited Pacific island before stopping at Guam in March 1521.[14] Although Magellan himself died in the Philippines in 1521, Spanish Basque navigator Juan Sebastián Elcano led the remains of the expedition back to Spain across the Indian Ocean and round the Cape of Good Hope, completing the first world circumnavigation in a single expedition in 1522.[15] Sailing around and east of the Moluccas, between 1525 and 1527, Portuguese expeditions discovered the Caroline Islands,[16] the Aru Islands,[17] and Papua New Guinea.[18] In 1542–43 the Portuguese also reached Japan.[19]
In 1564, five Spanish ships carrying 379 explorers crossed the ocean from Mexico led by Miguel López de Legazpi, and sailed to the Philippines and Mariana Islands.[20] For the remainder of the 16th century, Spanish influence was paramount, with ships sailing from Mexico and Peru across the Pacific Ocean to the Philippines via Guam, and establishing the Spanish East Indies. The Manila galleons operated for two and a half centuries, linking Manila and Acapulco, in one of the longest trade routes in history. Spanish expeditions also discovered Tuvalu, the Marquesas, the Cook Islands, the Solomon Islands, and the Admiralty Islands in the South Pacific.[21]
Later, in the quest for Terra Australis ("the [great] Southern Land"), Spanish explorations in the 17th century, such as the expedition led by the Portuguese navigator Pedro Fernandes de Queirós, discovered the Pitcairn and Vanuatu archipelagos, and sailed the Torres Strait between Australia and New Guinea, named after navigator Luís Vaz de Torres. Dutch explorers, sailing around southern Africa, also engaged in discovery and trade; Willem Janszoon made the first completely documented European landing in Australia (1606), on the Cape York Peninsula,[22] and Abel Janszoon Tasman circumnavigated and landed on parts of the Australian continental coast and discovered Tasmania and New Zealand in 1642.[23]
In the 16th and 17th centuries, Spain considered the Pacific Ocean a mare clausum—a sea closed to other naval powers. As the only known entrance from the Atlantic, the Strait of Magellan was at times patrolled by fleets sent to prevent entrance of non-Spanish ships. On the western side of the Pacific Ocean the Dutch threatened the Spanish Philippines.[24]
The 18th century marked the beginning of major exploration by the Russians in Alaska and the Aleutian Islands, such as the First Kamchatka expedition and the Great Northern Expedition, led by the Danish-born Russian navy officer Vitus Bering. Spain also sent expeditions to the Pacific Northwest, reaching Vancouver Island in southern Canada, and Alaska. The French explored and settled Polynesia, and the British made three voyages with James Cook to the South Pacific and Australia, Hawaii, and the North American Pacific Northwest. In 1768, Pierre-Antoine Véron, a young astronomer accompanying Louis Antoine de Bougainville on his voyage of exploration, established the width of the Pacific with precision for the first time in history.[25] One of the earliest voyages of scientific exploration was organized by Spain in the Malaspina Expedition of 1789–1794. It sailed vast areas of the Pacific, from Cape Horn to Alaska, Guam and the Philippines, New Zealand, Australia, and the South Pacific.[21]
Growing imperialism during the 19th century resulted in the occupation of much of Oceania by European powers, and later Japan and the United States. Significant contributions to oceanographic knowledge were made by the voyages of HMS Beagle in the 1830s, with Charles Darwin aboard;[26] HMS Challenger during the 1870s;[27] the USS Tuscarora (1873–76);[28] and the German Gazelle (1874–76).[29]
In Oceania, France obtained a leading position as imperial power after making Tahiti and New Caledonia protectorates in 1842 and 1853, respectively.[30] After navy visits to Easter Island in 1875 and 1887, Chilean navy officer Policarpo Toro negotiated the incorporation of the island into Chile with native Rapanui in 1888. By occupying Easter Island, Chile joined the imperial nations.[31](p53) By 1900 nearly all Pacific islands were under the control of Britain, France, the United States, Germany, Japan, and Chile.[30]
Although the United States gained control of Guam and the Philippines from Spain in 1898,[32] Japan controlled most of the western Pacific by 1914 and occupied many other islands during the Pacific War; however, by the end of that war, Japan was defeated and the U.S. Pacific Fleet was the virtual master of the ocean. The Japanese-ruled Northern Mariana Islands came under the control of the United States.[33] Since the end of World War II, many former colonies in the Pacific have become independent states.
The Pacific separates Asia and Australia from the Americas. It may be further subdivided by the equator into northern (North Pacific) and southern (South Pacific) portions. It extends from the Antarctic region in the south to the Arctic in the north.[1] The Pacific Ocean encompasses approximately one-third of the Earth's surface, having an area of 165,200,000 km2 (63,800,000 sq mi)—larger than all of Earth's land area combined, 150,000,000 km2 (58,000,000 sq mi).[34]
Extending approximately 15,500 km (9,600 mi) from the Bering Sea in the Arctic to the northern extent of the circumpolar Southern Ocean at 60°S (older definitions extend it to Antarctica's Ross Sea), the Pacific reaches its greatest east–west width at about 5°N latitude, where it stretches approximately 19,800 km (12,300 mi) from Indonesia to the coast of Colombia—halfway around the world, and more than five times the diameter of the Moon.[35] The lowest known point on Earth—the Mariana Trench—lies 10,911 m (35,797 ft; 5,966 fathoms) below sea level. Its average depth is 4,280 m (14,040 ft; 2,340 fathoms), putting the total water volume at roughly 710,000,000 km3 (170,000,000 cu mi).[1]
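The quoted figures are mutually consistent; a quick sanity check (plain arithmetic, not from the source) multiplies the quoted surface area by the quoted mean depth and recovers roughly the quoted total water volume:

```python
# Back-of-the-envelope check: area x mean depth should roughly reproduce
# the quoted total water volume of ~710,000,000 km^3.
area_km2 = 165_200_000    # Pacific surface area, km^2
mean_depth_km = 4.280     # mean depth, 4,280 m expressed in km

volume_km3 = area_km2 * mean_depth_km
print(f"{volume_km3:,.0f} km^3")  # ~707 million km^3, close to the quoted figure
```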
Due to the effects of plate tectonics, the Pacific Ocean is currently shrinking by roughly 2.5 cm (1 in) per year on three sides, an average of about 0.52 km2 (0.20 sq mi) a year. By contrast, the Atlantic Ocean is increasing in size.[36][37]
Along the Pacific Ocean's irregular western margins lie many seas, the largest of which are the Celebes Sea, Coral Sea, East China Sea (East Sea), Philippine Sea, Sea of Japan, South China Sea (South Sea), Sulu Sea, Tasman Sea, and Yellow Sea (West Sea of Korea). The Indonesian Seaway (including the Strait of Malacca and Torres Strait) joins the Pacific and the Indian Ocean to the west, and Drake Passage and the Strait of Magellan link the Pacific with the Atlantic Ocean on the east. To the north, the Bering Strait connects the Pacific with the Arctic Ocean.[38]
As the Pacific straddles the 180th meridian, the West Pacific (or western Pacific, near Asia) is in the Eastern Hemisphere, while the East Pacific (or eastern Pacific, near the Americas) is in the Western Hemisphere.[39]
The Southern Pacific Ocean harbors the Southeast Indian Ridge, which crosses from south of Australia and becomes the Pacific-Antarctic Ridge north of the South Pole; this merges with another ridge south of South America to form the East Pacific Rise, which in turn connects with yet another ridge south of North America that overlooks the Juan de Fuca Ridge.
For most of Magellan's voyage from the Strait of Magellan to the Philippines, the explorer indeed found the ocean peaceful; however, the Pacific is not always peaceful. Many tropical storms batter the islands of the Pacific.[40] The lands around the Pacific Rim are full of volcanoes and often affected by earthquakes.[41] Tsunamis, caused by underwater earthquakes, have devastated many islands and in some cases destroyed entire towns.[42]
The Martin Waldseemüller map of 1507 was the first to show the Americas separating two distinct oceans.[43] Later, the Diogo Ribeiro map of 1529 was the first to show the Pacific at about its proper size.[44]
The ocean has most of the islands in the world. There are about 25,000 islands in the Pacific Ocean.[45][46][47] The islands entirely within the Pacific Ocean can be divided into three main groups known as Micronesia, Melanesia and Polynesia. Micronesia, which lies north of the equator and west of the International Date Line, includes the Mariana Islands in the northwest, the Caroline Islands in the center, the Marshall Islands to the east and the islands of Kiribati in the southeast.[48][49]
Melanesia, to the southwest, includes New Guinea, the world's second largest island after Greenland and by far the largest of the Pacific islands. The other main Melanesian groups from north to south are the Bismarck Archipelago, the Solomon Islands, Santa Cruz, Vanuatu, Fiji and New Caledonia.[50]
The largest area, Polynesia, stretching from Hawaii in the north to New Zealand in the south, also encompasses Tuvalu, Tokelau, Samoa, Tonga and the Kermadec Islands to the west, the Cook Islands, Society Islands and Austral Islands in the center, and the Marquesas Islands, Tuamotu, Mangareva Islands, and Easter Island to the east.[51]
Islands in the Pacific Ocean are of four basic types: continental islands, high islands, coral reefs and uplifted coral platforms. Continental islands lie outside the andesite line and include New Guinea, the islands of New Zealand, and the Philippines. Some of these islands are structurally associated with nearby continents. High islands are of volcanic origin, and many contain active volcanoes. Among these are Bougainville, Hawaii, and the Solomon Islands.[52]
The coral reefs of the South Pacific are low-lying structures that have built up on basaltic lava flows under the ocean's surface. One of the most dramatic is the Great Barrier Reef off northeastern Australia with chains of reef patches. A second island type formed of coral is the uplifted coral platform, which is usually slightly larger than the low coral islands. Examples include Banaba (formerly Ocean Island) and Makatea in the Tuamotu group of French Polynesia.[53][54]
Ladrilleros Beach in Colombia on the coast of Chocó natural region
Point Reyes headlands, Point Reyes National Seashore, California
Tahuna maru islet, French Polynesia
Los Molinos on the coast of Southern Chile
The volume of the Pacific Ocean, representing about 50.1 percent of the world's oceanic water, has been estimated at some 714 million cubic kilometers (171 million cubic miles).[55] Surface water temperatures in the Pacific can vary from −1.4 °C (29.5 °F), the freezing point of sea water, in the poleward areas to about 30 °C (86 °F) near the equator.[56] Salinity also varies latitudinally, reaching a maximum of 37 parts per thousand in the southeastern area. The water near the equator, which can have a salinity as low as 34 parts per thousand, is less salty than that found in the mid-latitudes because of abundant equatorial precipitation throughout the year. The lowest salinities, less than 32 parts per thousand, are found in the far north, as less evaporation of seawater takes place in these frigid areas.[57] The motion of Pacific waters is generally clockwise in the Northern Hemisphere (the North Pacific gyre) and counter-clockwise in the Southern Hemisphere. The North Equatorial Current, driven westward along latitude 15°N by the trade winds, turns north near the Philippines to become the warm Japan or Kuroshio Current.[58]
Turning eastward at about 45°N, the Kuroshio forks and some water moves northward as the Aleutian Current, while the rest turns southward to rejoin the North Equatorial Current.[59] The Aleutian Current branches as it approaches North America and forms the base of a counter-clockwise circulation in the Bering Sea. Its southern arm becomes the chilled, slow, south-flowing California Current.[60] The South Equatorial Current, flowing west along the equator, swings southward east of New Guinea, turns east at about 50°S, and joins the main westerly circulation of the South Pacific, which includes the Earth-circling Antarctic Circumpolar Current. As it approaches the Chilean coast, the South Equatorial Current divides; one branch flows around Cape Horn and the other turns north to form the Peru or Humboldt Current.[61]
The climate patterns of the Northern and Southern Hemispheres generally mirror each other. The trade winds in the southern and eastern Pacific are remarkably steady while conditions in the North Pacific are far more varied with, for example, cold winter temperatures on the east coast of Russia contrasting with the milder weather off British Columbia during the winter months due to the preferred flow of ocean currents.[62]
In the tropical and subtropical Pacific, the El Niño Southern Oscillation (ENSO) affects weather conditions. To determine the phase of ENSO, the most recent three-month sea surface temperature average for the area approximately 3,000 km (1,900 mi) to the southeast of Hawaii is computed, and if the region is more than 0.5 °C (0.9 °F) above or below normal for that period, then an El Niño or La Niña is considered in progress.[63]
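The threshold test described above can be sketched as a small classifier. This is an illustration only: the function name is mine, and real ENSO declarations additionally require the anomaly to persist over several consecutive overlapping seasons.

```python
def enso_phase(sst_anomaly_c: float, threshold_c: float = 0.5) -> str:
    """Classify the ENSO phase from a three-month mean sea surface
    temperature anomaly (deg C) for the monitored region southeast of Hawaii.

    Illustrative sketch of the +/-0.5 C rule described in the text.
    """
    if sst_anomaly_c >= threshold_c:
        return "El Nino"   # warmer than normal by at least the threshold
    if sst_anomaly_c <= -threshold_c:
        return "La Nina"   # colder than normal by at least the threshold
    return "neutral"

print(enso_phase(0.8))   # El Nino
print(enso_phase(-0.6))  # La Nina
print(enso_phase(0.2))   # neutral
```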
In the tropical western Pacific, the monsoon and the related wet season during the summer months contrast with dry winds in the winter which blow over the ocean from the Asian landmass.[64] Worldwide, tropical cyclone activity peaks in late summer, when the difference between temperatures aloft and sea surface temperatures is the greatest; however, each particular basin has its own seasonal patterns. On a worldwide scale, May is the least active month, while September is the most active month. November is the only month in which all the tropical cyclone basins are active.[65] The Pacific hosts the two most active tropical cyclone basins: the northwestern Pacific and the eastern Pacific. Pacific hurricanes form south of Mexico, sometimes striking the western Mexican coast and occasionally the southwestern United States between June and October, while typhoons form in the northwestern Pacific and move into southeast and east Asia from May to December. Tropical cyclones also form in the South Pacific basin, where they occasionally impact island nations.
In the Arctic, icing from October to May can present a hazard for shipping, while persistent fog occurs from June to December.[66] A climatological low in the Gulf of Alaska keeps the southern coast wet and mild during the winter months. The Westerlies and associated jet stream within the mid-latitudes can be particularly strong, especially in the Southern Hemisphere, due to the temperature difference between the tropics and Antarctica,[67] which records the coldest temperature readings on the planet. In the Southern Hemisphere, because of the stormy and cloudy conditions associated with extratropical cyclones riding the jet stream, it is usual to refer to the Westerlies as the Roaring Forties, Furious Fifties and Shrieking Sixties according to the varying degrees of latitude.[68]
The ocean was first mapped by Abraham Ortelius; he called it Maris Pacifici following Ferdinand Magellan's description of it as "a pacific sea" during his circumnavigation from 1519 to 1522. To Magellan, it seemed much more calm (pacific) than the Atlantic.[69]
The andesite line is the most significant regional distinction in the Pacific. A petrologic boundary, it separates the deeper, mafic igneous rock of the Central Pacific Basin from the partially submerged continental areas of felsic igneous rock on its margins.[70] The andesite line follows the western edge of the islands off California and passes south of the Aleutian arc, along the eastern edge of the Kamchatka Peninsula, the Kuril Islands, Japan, the Mariana Islands, the Solomon Islands, and New Zealand's North Island.[71][72]
The dissimilarity continues northeastward along the western edge of the Andes Cordillera along South America to Mexico, returning then to the islands off California. Indonesia, the Philippines, Japan, New Guinea, and New Zealand lie outside the andesite line.
Within the closed loop of the andesite line are most of the deep troughs, submerged volcanic mountains, and oceanic volcanic islands that characterize the Pacific basin. Here basaltic lavas gently flow out of rifts to build huge dome-shaped volcanic mountains whose eroded summits form island arcs, chains, and clusters. Outside the andesite line, volcanism is of the explosive type, and the Pacific Ring of Fire is the world's foremost belt of explosive volcanism.[48] The Ring of Fire is named after the several hundred active volcanoes that sit above the various subduction zones.
The Pacific Ocean is the only ocean which is almost totally bounded by subduction zones. Only the Antarctic and Australian coasts have no nearby subduction zones.
The Pacific Ocean was born 750 million years ago at the breakup of Rodinia, although it is generally called the Panthalassic Ocean until the breakup of Pangea, about 200 million years ago.[73] The oldest Pacific Ocean floor is only around 180 million years old; older crust has since been subducted.[74]
The Pacific Ocean contains several long seamount chains, formed by hotspot volcanism.[citation needed]
The exploitation of the Pacific's mineral wealth is hampered by the ocean's great depths. In shallow waters of the continental shelves off the coasts of Australia and New Zealand, petroleum and natural gas are extracted, and pearls are harvested along the coasts of Australia, Japan, Papua New Guinea, Nicaragua, Panama, and the Philippines, although in sharply declining volume in some cases.[75]
Fish are an important economic asset in the Pacific. The shallower shoreline waters of the continents and the more temperate islands yield herring, salmon, sardines, snapper, swordfish, and tuna, as well as shellfish.[76] Overfishing has become a serious problem in some areas. For example, catches in the rich fishing grounds of the Okhotsk Sea off the Russian coast have been reduced by at least half since the 1990s as a result of overfishing.[77]
The quantity of small plastic fragments floating in the north-east Pacific Ocean increased a hundredfold between 1972 and 2012.[79] The ever-growing Great Pacific garbage patch between California and Japan is three times the size of France.[80] The patch contains an estimated 80,000 metric tons of plastic, totaling 1.8 trillion pieces.[81]
Marine pollution is a generic term for the harmful entry into the ocean of chemicals or particles. The main culprits are those who use rivers to dispose of their waste.[82] The rivers then empty into the ocean, often also bringing chemicals used as fertilizers in agriculture. The excess of oxygen-depleting chemicals in the water leads to hypoxia and the creation of a dead zone.[83]
Marine debris, also known as marine litter, is human-created waste that has ended up floating in a lake, sea, ocean, or waterway. Oceanic debris tends to accumulate at the center of gyres and coastlines, frequently washing aground where it is known as beach litter.[82]
From 1946 to 1958, the Marshall Islands served as the Pacific Proving Grounds for the United States and were the site of 67 nuclear tests on various atolls.[84][85] Several nuclear weapons were lost in the Pacific Ocean,[86] including a one-megaton bomb lost during the 1965 Philippine Sea A-4 incident.[87]
In addition, the Pacific Ocean has served as the crash site of satellites, including Mars 96, Fobos-Grunt, and Upper Atmosphere Research Satellite.
en/4228.html.txt
An ocean is a body of water that composes much of a planet's hydrosphere.[1] On Earth, an ocean is one of the major conventional divisions of the World Ocean. These are, in descending order by area, the Pacific, Atlantic, Indian, Southern (Antarctic), and Arctic Oceans.[2][3] The phrases "the ocean" or "the sea" used without specification refer to the interconnected body of salt water covering the majority of the Earth's surface.[2][3] As a general term, "the ocean" is mostly interchangeable with "the sea" in American English, but not in British English.[4] Strictly speaking, a sea is a body of water (generally a division of the world ocean) partly or fully enclosed by land.[5]
Saline seawater covers approximately 361,000,000 km2 (139,000,000 sq mi) and is customarily divided into several principal oceans and smaller seas, with the ocean covering approximately 71% of Earth's surface and 90% of the Earth's biosphere.[6] The ocean contains 97% of Earth's water, and oceanographers have stated that less than 20% of the World Ocean has been mapped.[6] The total volume is approximately 1.35 billion cubic kilometers (320 million cu mi) with an average depth of nearly 3,700 meters (12,100 ft).[7][8][9]
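The quoted area, coverage, and volume figures can be cross-checked with plain arithmetic (Earth's total surface area of about 510.1 million km², a standard value, is assumed here and is not quoted in the text):

```python
# Cross-checks on the quoted figures.
ocean_area_km2 = 361_000_000        # quoted area of saline seawater
ocean_volume_km3 = 1_350_000_000    # "approximately 1.35 billion cubic kilometers"
earth_area_km2 = 510_100_000        # assumed standard figure for Earth's surface

coverage = ocean_area_km2 / earth_area_km2
mean_depth_m = ocean_volume_km3 / ocean_area_km2 * 1000

print(f"{coverage:.1%}")         # ~70.8%, matching "approximately 71%"
print(f"{mean_depth_m:,.0f} m")  # ~3,740 m, close to the quoted ~3,700 m average
```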
As the world ocean is the principal component of Earth's hydrosphere, it is integral to life, forms part of the carbon cycle, and influences climate and weather patterns. The World Ocean is the habitat of 230,000 known species, but because much of it is unexplored, the number of species that exist in the ocean is much larger, possibly over two million.[10] The origin of Earth's oceans is unknown; oceans are thought to have formed in the Hadean eon and may have been the cause for the emergence of life.
Extraterrestrial oceans may be composed of water or other elements and compounds. The only confirmed large stable bodies of extraterrestrial surface liquids are the lakes of Titan, although there is evidence for the existence of oceans elsewhere in the Solar System. Early in their geologic histories, Mars and Venus are theorized to have had large water oceans. The Mars ocean hypothesis suggests that nearly a third of the surface of Mars was once covered by water, and a runaway greenhouse effect may have boiled away the global ocean of Venus. Compounds such as salts and ammonia dissolved in water lower its freezing point so that water might exist in large quantities in extraterrestrial environments as brine or convecting ice. Unconfirmed oceans are speculated beneath the surface of many dwarf planets and natural satellites; notably, the ocean of the moon Europa is estimated to have over twice the water volume of Earth. The Solar System's giant planets are also thought to have liquid atmospheric layers of yet to be confirmed compositions. Oceans may also exist on exoplanets and exomoons, including surface oceans of liquid water within a circumstellar habitable zone. Ocean planets are a hypothetical type of planet with a surface completely covered with liquid.[11][12]
The word ocean comes from the figure in classical antiquity, Oceanus (/oʊˈsiːənəs/; Greek: Ὠκεανός Ōkeanós,[13] pronounced [ɔːkeanós]), the elder of the Titans in classical Greek mythology, believed by the ancient Greeks and Romans to be the divine personification of the sea, an enormous river encircling the world.
The concept of Ōkeanós has an Indo-European connection. Greek Ōkeanós has been compared to the Vedic epithet ā-śáyāna-, predicated of the dragon Vṛtra-, who captured the cows/rivers. Related to this notion, the Okeanos is represented with a dragon-tail on some early Greek vases.[14]
Though generally described as several separate oceans, the global, interconnected body of salt water is sometimes referred to as the World Ocean or global ocean.[15][16] The concept of a continuous body of water with relatively free interchange among its parts is of fundamental importance to oceanography.[17]
The major oceanic divisions – listed below in descending order of area and volume – are defined in part by the continents, various archipelagos, and other criteria.[9][12][18]
Oceans are fringed by smaller, adjoining bodies of water such as seas, gulfs, bays, bights, and straits.
The mid-ocean ridges of the world are connected and form a single global mid-oceanic ridge system that is part of every ocean and the longest mountain range in the world. The continuous mountain range is 65,000 km (40,000 mi) long (several times longer than the Andes, the longest continental mountain range).[28]
The total mass of the hydrosphere is about 1.4 quintillion tonnes (1.4×1018 long tons or 1.5×1018 short tons), which is about 0.023% of Earth's total mass. Less than 3% is freshwater; the rest is saltwater, almost all of which is in the ocean. The area of the World Ocean is about 361.9 million square kilometers (139.7 million square miles),[9] which covers about 70.9% of Earth's surface, and its volume is approximately 1.335 billion cubic kilometers (320.3 million cubic miles).[9] This can be thought of as a cube of water with an edge length of 1,101 kilometers (684 mi). Its average depth is about 3,688 meters (12,100 ft),[9] and its maximum depth is 10,994 meters (6.831 mi) at the Mariana Trench.[29] Nearly half of the world's marine waters are over 3,000 meters (9,800 ft) deep.[16] The vast expanses of deep ocean (anything below 200 meters or 660 feet) cover about 66% of Earth's surface.[30] This does not include seas not connected to the World Ocean, such as the Caspian Sea.
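The "cube of water" comparison follows directly from the quoted volume; taking the cube root of 1.335 billion km³ recovers the stated edge length:

```python
# Verify the cube analogy: the edge of a cube holding the world ocean's
# volume should be about 1,101 km.
volume_km3 = 1_335_000_000
edge_km = volume_km3 ** (1 / 3)
print(f"{edge_km:,.0f} km")  # ~1,101 km
```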
The bluish ocean color is a composite of several contributing agents. Prominent contributors include dissolved organic matter and chlorophyll.[31] Mariners and other seafarers have reported that the ocean often emits a visible glow which extends for miles at night. In 2005, scientists announced that for the first time, they had obtained photographic evidence of this glow.[32] It is most likely caused by bioluminescence.[33][34][35]
Oceanographers divide the ocean into different vertical zones defined by physical and biological conditions. The pelagic zone includes all open ocean regions, and can be divided into further regions categorized by depth and light abundance. The photic zone includes the oceans from the surface to a depth of 200 m; it is the region where photosynthesis can occur and is, therefore, the most biodiverse. Because plants require photosynthesis, life found deeper than the photic zone must either rely on material sinking from above (see marine snow) or find another energy source. Hydrothermal vents are the primary source of energy in what is known as the aphotic zone (depths exceeding 200 m). The pelagic part of the photic zone is known as the epipelagic.
The pelagic part of the aphotic zone can be further divided into vertical regions according to temperature.
The mesopelagic is the uppermost region. Its lowermost boundary is at a thermocline of 12 °C (54 °F), which in the tropics generally lies at 700–1,000 meters (2,300–3,300 ft). Next is the bathypelagic, lying between 10 and 4 °C (50 and 39 °F), typically between 700–1,000 meters (2,300–3,300 ft) and 2,000–4,000 meters (6,600–13,100 ft). Lying along the top of the abyssal plain is the abyssopelagic, whose lower boundary lies at about 6,000 meters (20,000 ft). The last zone, which includes the deep oceanic trenches, is known as the hadalpelagic; it lies between 6,000 and 11,000 meters (20,000 and 36,000 ft) and is the deepest oceanic zone.
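The depth zones above can be sketched as a simple lookup. The function name and the round boundary depths are illustrative choices of mine: as the text notes, the mesopelagic/bathypelagic and bathypelagic/abyssopelagic boundaries are actually defined by temperature and vary with latitude.

```python
def pelagic_zone(depth_m: float) -> str:
    """Map a depth in meters to an approximate pelagic zone.

    Illustrative sketch only; boundaries are rounded and in reality
    the thermocline-based limits shift with latitude.
    """
    if depth_m < 200:
        return "epipelagic"    # photic zone, where photosynthesis occurs
    if depth_m < 1000:
        return "mesopelagic"
    if depth_m < 4000:
        return "bathypelagic"
    if depth_m < 6000:
        return "abyssopelagic"
    return "hadalpelagic"      # deep oceanic trenches

print(pelagic_zone(150))    # epipelagic
print(pelagic_zone(10900))  # hadalpelagic
```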
The benthic zones are aphotic and correspond to the three deepest zones of the deep-sea. The bathyal zone covers the continental slope down to about 4,000 meters (13,000 ft). The abyssal zone covers the abyssal plains between 4,000 and 6,000 m. Lastly, the hadal zone corresponds to the hadalpelagic zone, which is found in oceanic trenches.
The pelagic zone can be further subdivided into two subregions: the neritic zone and the oceanic zone. The neritic zone encompasses the water mass directly above the continental shelves whereas the oceanic zone includes all the completely open water.
In contrast, the littoral zone covers the region between low and high tide and represents the transitional area between marine and terrestrial conditions. It is also known as the intertidal zone because it is the area where tide level affects the conditions of the region.
If a zone undergoes dramatic changes in temperature with depth, it contains a thermocline. The tropical thermocline is typically deeper than the thermocline at higher latitudes. Polar waters, which receive relatively little solar energy, are not stratified by temperature and generally lack a thermocline because surface water at polar latitudes is nearly as cold as water at greater depths. Below the thermocline, water is very cold, ranging from −1 °C to 3 °C. Because this deep and cold layer contains the bulk of ocean water, the average temperature of the world ocean is 3.9 °C.[citation needed]
If a zone undergoes dramatic changes in salinity with depth, it contains a halocline. If a zone undergoes a strong, vertical chemistry gradient with depth, it contains a chemocline.
The halocline often coincides with the thermocline, and the combination produces a pronounced pycnocline.
The deepest point in the ocean is the Mariana Trench, located in the Pacific Ocean near the Northern Mariana Islands. Its maximum depth has been estimated to be 10,971 meters (35,994 ft) (plus or minus 11 meters; see the Mariana Trench article for discussion of the various estimates of the maximum depth.) The British naval vessel Challenger II surveyed the trench in 1951 and named the deepest part of the trench the "Challenger Deep". In 1960, the Trieste successfully reached the bottom of the trench, manned by a crew of two men.
Oceanic maritime currents have different origins. Tidal currents are in phase with the tide and are hence quasiperiodic; they may form eddies in certain places,[clarification needed] most notably around headlands. Non-periodic currents arise from waves, wind, and differences in density.
The wind and waves create surface currents (designated "drift currents"). These can be decomposed into a quasi-permanent current (which varies on an hourly scale) and a Stokes drift produced by rapid wave motion (on a scale of a few seconds).[36] The quasi-permanent current is accelerated by the breaking of waves and, to a lesser extent, by the friction of the wind on the surface.[37]
This acceleration of the current takes place in the direction of the waves and dominant wind. As depth increases, the rotation of the Earth progressively changes the direction of the current while friction lowers its speed; at a certain depth the current reverses direction and its speed falls to nearly zero, a pattern known as the Ekman spiral. The influence of these currents is mainly felt in the mixed layer of the ocean surface, often down to a maximum depth of 400 to 800 meters, and they can change considerably with the seasons. If the mixed layer is thinner (10 to 20 meters), the quasi-permanent surface current takes a strongly oblique direction relative to the wind and becomes virtually homogeneous down to the thermocline.[38]
In the deep ocean, however, currents are driven by gradients of temperature and salinity between water masses of differing density.
In littoral zones, breaking waves are so intense and the water so shallow that currents often reach 1 to 2 knots.
Ocean currents greatly affect Earth's climate by transferring heat from the tropics to the polar regions. They carry warm or cold air and precipitation to coastal regions, from where winds may carry them inland. Surface heat and freshwater fluxes create global density gradients that drive the thermohaline circulation, part of the large-scale ocean circulation. It plays an important role in supplying heat to the polar regions, and thus in sea ice regulation. Changes in the thermohaline circulation are thought to have significant impacts on Earth's energy budget. Insofar as the thermohaline circulation governs the rate at which deep waters reach the surface, it may also significantly influence atmospheric carbon dioxide concentrations.
For a discussion of the possibilities of changes to the thermohaline circulation under global warming, see shutdown of thermohaline circulation.
The Antarctic Circumpolar Current encircles that continent, influencing the area's climate and connecting currents in several oceans.
One of the most dramatic forms of weather occurs over the oceans: tropical cyclones (also called "typhoons" and "hurricanes" depending upon where the system forms).
The ocean has a significant effect on the biosphere. Oceanic evaporation, as a phase of the water cycle, is the source of most rainfall, and ocean temperatures determine climate and wind patterns that affect life on land. Life within the ocean evolved 3 billion years prior to life on land. Both the depth and the distance from shore strongly influence the biodiversity of the plants and animals present in each region.[39]
As it is thought that life evolved in the ocean, the diversity of life is immense, including:
In addition, many land animals have adapted to living a major part of their life on the oceans. For instance, seabirds are a diverse group of birds that have adapted to a life mainly on the oceans. They feed on marine animals and spend most of their lifetime on water, many only going on land for breeding. Other birds that have adapted to oceans as their living space are penguins, seagulls and pelicans. Seven species of turtles, the sea turtles, also spend most of their time in the oceans.
A zone of rapid salinity increase with depth is called a halocline. The temperature of maximum density of seawater decreases as its salt content increases. Freezing temperature of water decreases with salinity, and boiling temperature of water increases with salinity. Typical seawater freezes at around −2 °C at atmospheric pressure.[53] If precipitation exceeds evaporation, as is the case in polar and temperate regions, salinity will be lower. If evaporation exceeds precipitation, as is the case in tropical regions, salinity will be higher. Thus, oceanic waters in polar regions have lower salinity content than oceanic waters in temperate and tropical regions.[54]
Salinity can be calculated using the chlorinity, which is a measure of the total mass of halogen ions (including fluorine, chlorine, bromine, and iodine) in seawater. By international agreement, the following formula is used to determine salinity: S (‰) = 1.80655 × Cl (‰).
The average chlorinity is about 19.2‰ and thus the average salinity is around 34.7‰.[54]
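The quoted averages can be checked numerically. This minimal sketch assumes the standard conversion factor of 1.80655 between chlorinity and salinity (both in per mille):

```python
def salinity_from_chlorinity(cl_permille):
    """Convert chlorinity (per mille) to salinity (per mille) using the
    internationally agreed factor 1.80655."""
    return 1.80655 * cl_permille

# The average ocean chlorinity of about 19.2 per mille yields a salinity
# of roughly 34.7 per mille, matching the average figure quoted above.
print(round(salinity_from_chlorinity(19.2), 1))  # → 34.7
```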
Many of the world's goods are moved by ship between the world's seaports.[55] Oceans are also the major supply source for the fishing industry. Some of the major harvests are shrimp, fish, crabs, and lobster.[6]
The motions of the ocean surface, known as undulations or waves, are the partial and alternate rising and falling of the surface. The series of mechanical waves that propagate along the interface between water and air is called swell.[citation needed]
Although Earth is the only known planet with large stable bodies of liquid water on its surface, and the only such planet in the Solar System, other celestial bodies are thought to have large oceans.[58] In June 2020, NASA scientists reported that, based on mathematical modeling studies, exoplanets with oceans are likely common in the Milky Way galaxy.[59][60]
The gas giants, Jupiter and Saturn, are thought to lack surfaces and instead to have a stratum of liquid hydrogen; however, their planetary geology is not well understood. The possibility that the ice giants Uranus and Neptune have hot, highly compressed, supercritical water under their thick atmospheres has been hypothesised. Although their composition is still not fully understood, a 2006 study by Wiktorowicz and Ingersoll ruled out the possibility of such a water "ocean" existing on Neptune,[61] though some studies have suggested that exotic oceans of liquid diamond are possible.[62]
The Mars ocean hypothesis suggests that nearly a third of the surface of Mars was once covered by water, though the water on Mars is no longer oceanic (much of it residing in the ice caps). The possibility continues to be studied, along with the reasons for its apparent disappearance. Astronomers now think that Venus may have had liquid water and perhaps oceans for over 2 billion years.[63]
A global layer of liquid water thick enough to decouple the crust from the mantle is thought to be present on the natural satellites Titan, Europa, Enceladus and, with less certainty, Callisto, Ganymede[64][65] and Triton.[66][67] A magma ocean is thought to be present on Io.[68] Geysers have been found on Saturn's moon Enceladus, possibly originating from an ocean about 10 kilometers (6.2 mi) beneath the surface ice shell.[56] Other icy moons may also have internal oceans, or may once have had internal oceans that have now frozen.[69]
Large bodies of liquid hydrocarbons are thought to be present on the surface of Titan, although they are not large enough to be considered oceans and are sometimes referred to as lakes or seas. The Cassini–Huygens space mission initially discovered only what appeared to be dry lakebeds and empty river channels, suggesting that Titan had lost what surface liquids it might have had. Later flybys of Titan provided radar and infrared images that showed a series of hydrocarbon lakes in the colder polar regions. Titan is thought to have a subsurface liquid-water ocean under the ice in addition to the hydrocarbon mix that forms atop its outer crust.
Ceres appears to be differentiated into a rocky core and icy mantle and may harbour a liquid-water ocean under its surface.[70][71]
Not enough is known of the larger trans-Neptunian objects to determine whether they are differentiated bodies capable of supporting oceans, although models of radioactive decay suggest that Pluto,[72] Eris, Sedna, and Orcus have oceans beneath solid icy crusts approximately 100 to 180 km thick.[69] In June 2020, astronomers reported evidence that the dwarf planet Pluto may have had a subsurface ocean, and consequently may have been habitable, when it was first formed.[73][74]
Some planets and natural satellites outside the Solar System are likely to have oceans, including possible water ocean planets similar to Earth in the habitable zone or "liquid-water belt". The detection of oceans, however, even by spectroscopic methods, is likely to be extremely difficult and inconclusive.
Theoretical models have been used to predict with high probability that GJ 1214 b, detected by transit, is composed of an exotic form of ice VII making up 75% of its mass,[75] making it an ocean planet.
Other possible candidates are merely speculated to have oceans based on their mass and position in the habitable zone, though little is actually known of their composition. Some scientists speculate Kepler-22b may be an "ocean-like" planet.[76] Models have been proposed for Gliese 581 d that could include surface oceans. Gliese 436 b is speculated to have an ocean of "hot ice".[77] Exomoons orbiting planets, particularly gas giants within their parent star's habitable zone, may theoretically have surface oceans.
Terrestrial planets will acquire water during their accretion, some of which will be buried in the magma ocean but most of it will go into a steam atmosphere, and when the atmosphere cools it will collapse on to the surface forming an ocean. There will also be outgassing of water from the mantle as the magma solidifies—this will happen even for planets with a low percentage of their mass composed of water, so "super-Earth exoplanets may be expected to commonly produce water oceans within tens to hundreds of millions of years of their last major accretionary impact."[78]
Oceans, seas, lakes and other bodies of liquids can be composed of liquids other than water, for example the hydrocarbon lakes on Titan. The possibility of seas of nitrogen on Triton was also considered but ruled out.[79] There is evidence that the icy surfaces of the moons Ganymede, Callisto, Europa, Titan and Enceladus are shells floating on oceans of very dense liquid water or water–ammonia.[80][81][82][83][84] Earth is often called the ocean planet because it is 70% covered in water.[85][86] Extrasolar terrestrial planets that are extremely close to their parent star will be tidally locked and so one half of the planet will be a magma ocean.[87] It is also possible that terrestrial planets had magma oceans at some point during their formation as a result of giant impacts.[88] Hot Neptunes close to their star could lose their atmospheres via hydrodynamic escape, leaving behind their cores with various liquids on the surface.[89] Where there are suitable temperatures and pressures, volatile chemicals that might exist as liquids in abundant quantities on planets include ammonia, argon, carbon disulfide, ethane, hydrazine, hydrogen, hydrogen cyanide, hydrogen sulfide, methane, neon, nitrogen, nitric oxide, phosphine, silane, sulfuric acid, and water.[90]
Supercritical fluids, although not liquids, do share various properties with liquids. Underneath the thick atmospheres of the planets Uranus and Neptune, it is expected that these planets are composed of oceans of hot high-density fluid mixtures of water, ammonia and other volatiles.[91] The gaseous outer layers of Jupiter and Saturn transition smoothly into oceans of supercritical hydrogen.[92][93] The atmosphere of Venus is 96.5% carbon dioxide, which is a supercritical fluid at its surface.
On other bodies:
en/4229.html.txt
ADDED
@@ -0,0 +1,252 @@
Augustus (Imperator Caesar divi filius Augustus; 23 September 63 BC – 19 August AD 14) was a Roman statesman and military leader who became the first emperor of the Roman Empire, reigning from 27 BC until his death in AD 14.[nb 1] He was the first ruler of the Julio-Claudian dynasty. His status as the founder of the Roman Principate has consolidated an enduring legacy as one of the most effective and controversial leaders in human history.[1][2] The reign of Augustus initiated an era of relative peace known as the Pax Romana. The Roman world was largely free from large-scale conflict for more than two centuries, despite continuous wars of imperial expansion on the Empire's frontiers and the year-long civil war known as the "Year of the Four Emperors" over the imperial succession.
Augustus was born Gaius Octavius into an old and wealthy equestrian branch of the plebeian gens Octavia. His maternal great-uncle Julius Caesar was assassinated in 44 BC, and Octavius was named in Caesar's will as his adopted son and heir, taking the name Octavian (Latin: Gaius Julius Caesar Octavianus). Along with Mark Antony and Marcus Lepidus, he formed the Second Triumvirate to defeat the assassins of Caesar. Following their victory at the Battle of Philippi, the Triumvirate divided the Roman Republic among themselves and ruled as de facto dictators. The Triumvirate was eventually torn apart by the competing ambitions of its members. Lepidus was driven into exile and stripped of his position, and Antony committed suicide following his defeat at the Battle of Actium by Octavian in 31 BC.
After the demise of the Second Triumvirate, Augustus restored the outward façade of the free Republic, with governmental power vested in the Roman Senate, the executive magistrates, and the legislative assemblies. In reality, however, he retained his autocratic power over the Republic. By law, Augustus held a collection of powers granted to him for life by the Senate, including supreme military command, and those of tribune and censor. It took several years for Augustus to develop the framework within which a formally republican state could be led under his sole rule. He rejected monarchical titles, and instead called himself Princeps Civitatis ("First Citizen"). The resulting constitutional framework became known as the Principate, the first phase of the Roman Empire.
Augustus dramatically enlarged the Empire, annexing Egypt, Dalmatia, Pannonia, Noricum, and Raetia, expanding possessions in Africa, and completing the conquest of Hispania, but suffered a major setback in Germania. Beyond the frontiers, he secured the Empire with a buffer region of client states and made peace with the Parthian Empire through diplomacy. He reformed the Roman system of taxation, developed networks of roads with an official courier system, established a standing army, established the Praetorian Guard, created official police and fire-fighting services for Rome, and rebuilt much of the city during his reign. Augustus died in AD 14 at the age of 75, probably from natural causes. However, there were unconfirmed rumors that his wife Livia poisoned him. He was succeeded as emperor by his adopted son Tiberius (also stepson and former son-in-law).
As a consequence of Roman customs, society, and personal preference, Augustus (/ɔːˈɡʌstəs, əˈ-/ aw-GUST-əs, ə-, Latin: [au̯ˈɡʊstʊs]) was known by many names throughout his life:
While his paternal family was from the Volscian town of Velletri, approximately 40 kilometres (25 mi) to the south-east of Rome, Augustus was born in the city of Rome on 23 September 63 BC.[12] He was born at Ox Head, a small property on the Palatine Hill, very close to the Roman Forum. He was given the name Gaius Octavius Thurinus, his cognomen possibly commemorating his father's victory at Thurii over a rebellious band of slaves which occurred a few years after his birth.[13][14] Suetonius wrote: "There are many indications that the Octavian family was in days of old a distinguished one at Velitrae; for not only was a street in the most frequented part of town long ago called Octavian, but an altar was shown there besides, consecrated by an Octavius. This man was leader in a war with a neighbouring town ..." [15]
Due to the crowded nature of Rome at the time, Octavius was taken to his father's home village at Velletri to be raised. Octavius mentions his father's equestrian family only briefly in his memoirs. His paternal great-grandfather Gaius Octavius was a military tribune in Sicily during the Second Punic War. His grandfather had served in several local political offices. His father, also named Gaius Octavius, had been governor of Macedonia. His mother, Atia, was the niece of Julius Caesar.[16][17]
In 59 BC, when he was four years old, his father died.[18] His mother married a former governor of Syria, Lucius Marcius Philippus.[19] Philippus claimed descent from Alexander the Great, and was elected consul in 56 BC. Philippus never had much of an interest in young Octavius. Because of this, Octavius was raised by his grandmother, Julia, the sister of Julius Caesar. Julia died in 52 or 51 BC, and Octavius delivered the funeral oration for his grandmother.[20][21] From this point, his mother and stepfather took a more active role in raising him. He donned the toga virilis four years later,[22] and was elected to the College of Pontiffs in 47 BC.[23][24] The following year he was put in charge of the Greek games that were staged in honor of the Temple of Venus Genetrix, built by Julius Caesar.[24] According to Nicolaus of Damascus, Octavius wished to join Caesar's staff for his campaign in Africa, but gave way when his mother protested.[25] In 46 BC, she consented for him to join Caesar in Hispania, where he planned to fight the forces of Pompey, Caesar's late enemy, but Octavius fell ill and was unable to travel.
When he had recovered, he sailed to the front, but was shipwrecked; after coming ashore with a handful of companions, he crossed hostile territory to Caesar's camp, which impressed his great-uncle considerably.[22] Velleius Paterculus reports that after that time, Caesar allowed the young man to share his carriage.[26] When back in Rome, Caesar deposited a new will with the Vestal Virgins, naming Octavius as the prime beneficiary.[27]
Octavius was studying and undergoing military training in Apollonia, Illyria, when Julius Caesar was killed on the Ides of March (15 March) 44 BC. He rejected the advice of some army officers to take refuge with the troops in Macedonia and sailed to Italy to ascertain whether he had any potential political fortunes or security.[28] Caesar had no living legitimate children under Roman law,[nb 3] and so had adopted Octavius, his grand-nephew, making him his primary heir.[29] Mark Antony later charged that Octavian had earned his adoption by Caesar through sexual favours, though Suetonius describes Antony's accusation as political slander.[30] This form of slander was popular during this time in the Roman Republic to demean and discredit political opponents by accusing them of having an inappropriate sexual affair.[31][32] After landing at Lupiae near Brundisium, Octavius learned the contents of Caesar's will, and only then did he decide to become Caesar's political heir as well as heir to two-thirds of his estate.[24][28][33]
Upon his adoption, Octavius assumed his great-uncle's name Gaius Julius Caesar. Roman citizens adopted into a new family usually retained their old nomen in cognomen form (e.g., Octavianus for one who had been an Octavius, Aemilianus for one who had been an Aemilius, etc.). However, though some of his contemporaries did,[34] there is no evidence that Octavius ever himself officially used the name Octavianus, as it would have made his modest origins too obvious.[35][36][37] Historians usually refer to the new Caesar as Octavian during the time between his adoption and his assumption of the name Augustus in 27 BC in order to avoid confusing the dead dictator with his heir.[38]
Octavian could not rely on his limited funds to make a successful entry into the upper echelons of the Roman political hierarchy.[39] After a warm welcome by Caesar's soldiers at Brundisium,[40] Octavian demanded a portion of the funds that were allotted by Caesar for the intended war against the Parthian Empire in the Middle East.[39] This amounted to 700 million sesterces stored at Brundisium, the staging ground in Italy for military operations in the east.[41]
A later senatorial investigation into the disappearance of the public funds took no action against Octavian, since he subsequently used that money to raise troops against the Senate's arch enemy Mark Antony.[40] Octavian made another bold move in 44 BC when, without official permission, he appropriated the annual tribute that had been sent from Rome's Near Eastern province to Italy.[36][42]
Octavian began to bolster his personal forces with Caesar's veteran legionaries and with troops designated for the Parthian war, gathering support by emphasizing his status as heir to Caesar.[28][43] On his march to Rome through Italy, Octavian's presence and newly acquired funds attracted many, winning over Caesar's former veterans stationed in Campania.[36] By June, he had gathered an army of 3,000 loyal veterans, paying each a salary of 500 denarii.[44][45][46]
Arriving in Rome on 6 May 44 BC, Octavian found consul Mark Antony, Caesar's former colleague, in an uneasy truce with the dictator's assassins. They had been granted a general amnesty on 17 March, yet Antony had succeeded in driving most of them out of Rome with an inflammatory eulogy at Caesar's funeral, mounting public opinion against the assassins.[36]
Mark Antony was amassing political support, but Octavian still had opportunity to rival him as the leading member of the faction supporting Caesar. Mark Antony had lost the support of many Romans and supporters of Caesar when he initially opposed the motion to elevate Caesar to divine status.[47] Octavian failed to persuade Antony to relinquish Caesar's money to him. During the summer, he managed to win support from Caesarian sympathizers and also made common cause with the Optimates, the former enemies of Caesar, who saw him as the lesser evil and hoped to manipulate him.[48] In September, the leading Optimate orator Marcus Tullius Cicero began to attack Antony in a series of speeches portraying him as a threat to the Republican order.[49][50]
With opinion in Rome turning against him and his year of consular power nearing its end, Antony attempted to pass laws that would assign him the province of Cisalpine Gaul.[51][52] Octavian meanwhile built up a private army in Italy by recruiting Caesarian veterans and, on 28 November, he won over two of Antony's legions with the enticing offer of monetary gain.[53][54][55]
In the face of Octavian's large and capable force, Antony saw the danger of staying in Rome and, to the relief of the Senate, he left Rome for Cisalpine Gaul, which was to be handed to him on 1 January.[55] However, the province had earlier been assigned to Decimus Junius Brutus Albinus, one of Caesar's assassins, who now refused to yield to Antony. Antony besieged him at Mutina[56] and rejected the resolutions passed by the Senate to stop the fighting. The Senate had no army to enforce their resolutions. This provided an opportunity for Octavian, who already was known to have armed forces.[54] Cicero also defended Octavian against Antony's taunts about Octavian's lack of noble lineage and aping of Julius Caesar's name, stating "we have no more brilliant example of traditional piety among our youth."[57]
At the urging of Cicero, the Senate inducted Octavian as senator on 1 January 43 BC, yet he also was given the power to vote alongside the former consuls.[54][55] In addition, Octavian was granted propraetor imperium (commanding power) which legalized his command of troops, sending him to relieve the siege along with Hirtius and Pansa (the consuls for 43 BC).[54][58] In April 43 BC, Antony's forces were defeated at the battles of Forum Gallorum and Mutina, forcing Antony to retreat to Transalpine Gaul. Both consuls were killed, however, leaving Octavian in sole command of their armies.[59][60]
The senate heaped many more rewards on Decimus Brutus than on Octavian for defeating Antony, then attempted to give command of the consular legions to Decimus Brutus.[61] In response, Octavian stayed in the Po Valley and refused to aid any further offensive against Antony.[62] In July, an embassy of centurions sent by Octavian entered Rome and demanded the consulship left vacant by Hirtius and Pansa[63] and also that the decree should be rescinded which declared Antony a public enemy.[62] When this was refused, he marched on the city with eight legions.[62] He encountered no military opposition in Rome, and on 19 August 43 BC was elected consul with his relative Quintus Pedius as co-consul.[64][65] Meanwhile, Antony formed an alliance with Marcus Aemilius Lepidus, another leading Caesarian.[66]
In a meeting near Bologna in October 43 BC, Octavian, Antony, and Lepidus formed the Second Triumvirate.[68] This explicit arrogation of special powers lasting five years was then legalised by law passed by the plebs, unlike the unofficial First Triumvirate formed by Pompey, Julius Caesar, and Marcus Licinius Crassus.[68][69] The triumvirs then set in motion proscriptions, in which between 130 and 300 senators[nb 4] and 2,000 equites were branded as outlaws and deprived of their property and, for those who failed to escape, their lives.[71] This decree issued by the triumvirate was motivated in part by a need to raise money to pay the salaries of their troops for the upcoming conflict against Caesar's assassins, Marcus Junius Brutus and Gaius Cassius Longinus.[72] Rewards for their arrest gave incentive for Romans to capture those proscribed, while the assets and properties of those arrested were seized by the triumvirs.[71]
Contemporary Roman historians provide conflicting reports as to which triumvir was most responsible for the proscriptions and killing. However, the sources agree that enacting the proscriptions was a means by all three factions to eliminate political enemies.[73] Marcus Velleius Paterculus asserted that Octavian tried to avoid proscribing officials whereas Lepidus and Antony were to blame for initiating them. Cassius Dio defended Octavian as trying to spare as many as possible, whereas Antony and Lepidus, being older and involved in politics longer, had many more enemies to deal with.[74]
This claim was rejected by Appian, who maintained that Octavian shared an equal interest with Lepidus and Antony in eradicating his enemies.[75] Suetonius said that Octavian was reluctant to proscribe officials, but did pursue his enemies with more vigor than the other triumvirs.[73] Plutarch described the proscriptions as a ruthless and cutthroat swapping of friends and family among Antony, Lepidus, and Octavian. For example, Octavian allowed the proscription of his ally Cicero, Antony the proscription of his maternal uncle Lucius Julius Caesar (the consul of 64 BC), and Lepidus his brother Paullus.[74]
On 1 January 42 BC, the Senate posthumously recognized Julius Caesar as a divinity of the Roman state, Divus Iulius. Octavian was able to further his cause by emphasizing the fact that he was Divi filius, "Son of the Divine".[76] Antony and Octavian then sent 28 legions by sea to face the armies of Brutus and Cassius, who had built their base of power in Greece.[77] After two battles at Philippi in Macedonia in October 42, the Caesarian army was victorious and Brutus and Cassius committed suicide. Mark Antony later used the examples of these battles as a means to belittle Octavian, as both battles were decisively won with the use of Antony's forces. In addition to claiming responsibility for both victories, Antony also branded Octavian as a coward for handing over his direct military control to Marcus Vipsanius Agrippa instead.[78]
After Philippi, a new territorial arrangement was made among the members of the Second Triumvirate. Gaul and the province of Hispania were placed in the hands of Octavian. Antony traveled east to Egypt where he allied himself with Queen Cleopatra VII, the former lover of Julius Caesar and mother of Caesar's infant son Caesarion. Lepidus was left with the province of Africa, stymied by Antony, who conceded Hispania to Octavian instead.[79]
Octavian was left to decide where in Italy to settle the tens of thousands of veterans of the Macedonian campaign, whom the triumvirs had promised to discharge. The tens of thousands who had fought on the republican side with Brutus and Cassius could easily ally with a political opponent of Octavian if not appeased, and they also required land.[79] There was no more government-controlled land to allot as settlements for their soldiers, so Octavian had to choose one of two options: alienating many Roman citizens by confiscating their land, or alienating many Roman soldiers who could mount a considerable opposition against him in the Roman heartland. Octavian chose the former.[80] There were as many as eighteen Roman towns affected by the new settlements, with entire populations driven out or at least given partial evictions.[81]
There was widespread dissatisfaction with Octavian over these settlements of his soldiers, and this encouraged many to rally at the side of Lucius Antonius, who was brother of Mark Antony and supported by a majority in the Senate. Meanwhile, Octavian asked for a divorce from Clodia Pulchra, the daughter of Fulvia (Mark Antony's wife) and her first husband Publius Clodius Pulcher. He returned Clodia to her mother, claiming that their marriage had never been consummated. Fulvia decided to take action. Together with Lucius Antonius, she raised an army in Italy to fight for Antony's rights against Octavian. Lucius and Fulvia took a political and martial gamble in opposing Octavian, however, since the Roman army still depended on the triumvirs for their salaries. Lucius and his allies ended up in a defensive siege at Perusia (modern Perugia), where Octavian forced them into surrender in early 40 BC.[81]
Lucius and his army were spared, due to his kinship with Antony, the strongman of the East, while Fulvia was exiled to Sicyon.[82] Octavian showed no mercy, however, for the mass of allies loyal to Lucius; on 15 March, the anniversary of Julius Caesar's assassination, he had 300 Roman senators and equestrians executed for allying with Lucius.[83] Perusia also was pillaged and burned as a warning for others.[82] This bloody event sullied Octavian's reputation and was criticized by many, such as Augustan poet Sextus Propertius.[83]
Sextus Pompeius, the son of Pompey and still a renegade general following Julius Caesar's victory over his father, had established himself in Sicily and Sardinia as part of an agreement reached with the Second Triumvirate in 39 BC.[84] Both Antony and Octavian were vying for an alliance with Pompeius. Octavian succeeded in a temporary alliance in 40 BC when he married Scribonia, a sister or daughter of Pompeius's father-in-law Lucius Scribonius Libo. Scribonia gave birth to Octavian's only natural child, Julia, the same day that he divorced her to marry Livia Drusilla, little more than a year after their marriage.[83]
While in Egypt, Antony had been engaged in an affair with Cleopatra and had fathered three children with her.[nb 5] Aware of his deteriorating relationship with Octavian, Antony left Cleopatra; he sailed to Italy in 40 BC with a large force to oppose Octavian, laying siege to Brundisium. This new conflict proved untenable for both Octavian and Antony, however. Their centurions, who had become politically important figures, refused to fight their fellow Caesarians, and the legions under their command followed suit. Meanwhile, in Sicyon, Antony's wife Fulvia died of a sudden illness while Antony was en route to meet her. Fulvia's death and the mutiny of the centurions allowed the two remaining triumvirs to effect a reconciliation.[85][86]
In the autumn of 40, Octavian and Antony approved the Treaty of Brundisium, by which Lepidus would remain in Africa, Antony in the East, Octavian in the West. The Italian Peninsula was left open to all for the recruitment of soldiers, but in reality, this provision was useless for Antony in the East. To further cement relations of alliance with Mark Antony, Octavian gave his sister, Octavia Minor, in marriage to Antony in late 40 BC.[85]
Sextus Pompeius threatened Octavian in Italy by denying shipments of grain through the Mediterranean Sea to the peninsula. Pompeius's own son was put in charge as naval commander in the effort to cause widespread famine in Italy.[86] Pompeius's control over the sea prompted him to take on the name Neptuni filius, "son of Neptune".[87] A temporary peace agreement was reached in 39 BC with the treaty of Misenum; the blockade on Italy was lifted once Octavian granted Pompeius Sardinia, Corsica, Sicily, and the Peloponnese, and ensured him a future position as consul for 35 BC.[86][87]
The territorial agreement between the triumvirate and Sextus Pompeius began to crumble once Octavian divorced Scribonia and married Livia on 17 January 38 BC.[88] One of Pompeius's naval commanders betrayed him and handed over Corsica and Sardinia to Octavian. Octavian lacked the resources to confront Pompeius alone, however, so an agreement was reached with the Second Triumvirate's extension for another five-year period beginning in 37 BC.[69][89]
In supporting Octavian, Antony expected to gain support for his own campaign against the Parthian Empire, desiring to avenge Rome's defeat at Carrhae in 53 BC.[89] In an agreement reached at Tarentum, Antony provided 120 ships for Octavian to use against Pompeius, while Octavian was to send 20,000 legionaries to Antony for use against Parthia. Octavian sent only a tenth of those promised, however, which Antony viewed as an intentional provocation.[90]
Octavian and Lepidus launched a joint operation against Sextus in Sicily in 36 BC.[91] Despite setbacks for Octavian, the naval fleet of Sextus Pompeius was almost entirely destroyed on 3 September by General Agrippa at the naval Battle of Naulochus. Sextus fled to the east with his remaining forces, where he was captured and executed in Miletus by one of Antony's generals the following year. As Lepidus and Octavian accepted the surrender of Pompeius's troops, Lepidus attempted to claim Sicily for himself, ordering Octavian to leave. Lepidus's troops deserted him, however, and defected to Octavian since they were weary of fighting and were enticed by Octavian's promises of money.[92]
Lepidus surrendered to Octavian and was permitted to retain the office of Pontifex Maximus (head of the college of priests), but was ejected from the Triumvirate, his public career at an end, and effectively was exiled to a villa at Cape Circei in Italy.[72][92] The Roman dominions were now divided between Octavian in the West and Antony in the East. Octavian assured Rome's citizens of their rights to property in order to maintain peace and stability in his portion of the Empire. This time, he settled his discharged soldiers outside of Italy, while also returning 30,000 slaves to their former Roman owners—slaves who had fled to join Pompeius's army and navy.[93] Octavian had the Senate grant him, his wife, and his sister tribunician immunity, or sacrosanctitas, in order to ensure his own safety and that of Livia and Octavia once he returned to Rome.[94]
Meanwhile, Antony's campaign against Parthia turned disastrous, tarnishing his image as a leader, and the mere 2,000 legionaries sent by Octavian to Antony were hardly enough to replenish his forces.[95] On the other hand, Cleopatra could restore his army to full strength; he already was engaged in a romantic affair with her, so he decided to send Octavia back to Rome.[96] Octavian used this to spread propaganda implying that Antony was becoming less than Roman because he rejected a legitimate Roman spouse for an "Oriental paramour".[97] In 36 BC, Octavian used a political ploy to make himself look less autocratic and Antony more the villain by proclaiming that the civil wars were coming to an end, and that he would step down as triumvir—if only Antony would do the same. Antony refused.[98]
Roman troops captured the Kingdom of Armenia in 34 BC, and Antony made his son Alexander Helios the ruler of Armenia. He also awarded the title "Queen of Kings" to Cleopatra, acts that Octavian used to convince the Roman Senate that Antony had ambitions to diminish the preeminence of Rome.[97] Octavian became consul once again on 1 January 33 BC, and he opened the following session in the Senate with a vehement attack on Antony's grants of titles and territories to his relatives and to his queen.[99]
The breach between Antony and Octavian prompted a large portion of the Senators, as well as both of that year's consuls, to leave Rome and defect to Antony. However, Octavian received two key deserters from Antony in the autumn of 32 BC: Munatius Plancus and Marcus Titius.[100] These defectors gave Octavian the information that he needed to confirm with the Senate all the accusations that he made against Antony.[101]
Octavian forcibly entered the temple of the Vestal Virgins and seized Antony's secret will, which he promptly publicized. The will would have given away Roman-conquered territories as kingdoms for his sons to rule, and designated Alexandria as the site for a tomb for him and his queen.[102][103] In late 32 BC, the Senate officially revoked Antony's powers as consul and declared war on Cleopatra's regime in Egypt.[104][105]
In early 31 BC, Antony and Cleopatra were temporarily stationed in Greece when Octavian gained a preliminary victory: the navy successfully ferried troops across the Adriatic Sea under the command of Agrippa. Agrippa cut off Antony and Cleopatra's main force from their supply routes at sea, while Octavian landed on the mainland opposite the island of Corcyra (modern Corfu) and marched south. Trapped on land and sea, deserters of Antony's army fled to Octavian's side daily while Octavian's forces were comfortable enough to make preparations.[108]
Antony's fleet sailed through the bay of Actium on the western coast of Greece in a desperate attempt to break free of the naval blockade. It was there that Antony's fleet faced the much larger fleet of smaller, more maneuverable ships under commanders Agrippa and Gaius Sosius in the Battle of Actium on 2 September 31 BC.[109] Antony and his remaining forces were spared only due to a last-ditch effort by Cleopatra's fleet that had been waiting nearby.[110]
Octavian pursued them and defeated their forces in Alexandria on 1 August 30 BC—after which Antony and Cleopatra committed suicide. Antony fell on his own sword and was taken by his soldiers back to Alexandria where he died in Cleopatra's arms. Cleopatra died soon after, reputedly by the venomous bite of an asp or by poison.[111] Octavian had exploited his position as Caesar's heir to further his own political career, and he was well aware of the dangers in allowing another person to do the same. He therefore followed the advice of Arius Didymus that "two Caesars are one too many", ordering Caesarion, Julius Caesar's son by Cleopatra, killed, while sparing Cleopatra's children by Antony, with the exception of Antony's older son.[112][113] Octavian had previously shown little mercy to surrendered enemies and acted in ways that had proven unpopular with the Roman people, yet he was given credit for pardoning many of his opponents after the Battle of Actium.[114]
After Actium and the defeat of Antony and Cleopatra, Octavian was in a position to rule the entire Republic under an unofficial principate[115]—but he had to achieve this through incremental power gains. He did so by courting the Senate and the people while upholding the republican traditions of Rome, appearing that he was not aspiring to dictatorship or monarchy.[116][117] Marching into Rome, Octavian and Marcus Agrippa were elected as consuls by the Senate.[118]
Years of civil war had left Rome in a state of near lawlessness, but the Republic was not prepared to accept the control of Octavian as a despot. At the same time, Octavian could not simply give up his authority without risking further civil wars among the Roman generals and, even if he desired no position of authority whatsoever, his position demanded that he look to the well-being of the city of Rome and the Roman provinces. Octavian's aims from this point forward were to return Rome to a state of stability, traditional legality, and civility by lifting the overt political pressure imposed on the courts of law and ensuring free elections—in name at least.[119]
In 27 BC, Octavian made a show of returning full power to the Roman Senate and relinquishing his control of the Roman provinces and their armies. Under his consulship, however, the Senate had little power in initiating legislation by introducing bills for senatorial debate. Octavian was no longer in direct control of the provinces and their armies, but he retained the loyalty of active duty soldiers and veterans alike. The careers of many clients and adherents depended on his patronage, as his financial power was unrivaled in the Roman Republic.[118] Historian Werner Eck states:
The sum of his power derived first of all from various powers of office delegated to him by the Senate and people, secondly from his immense private fortune, and thirdly from numerous patron-client relationships he established with individuals and groups throughout the Empire. All of them taken together formed the basis of his auctoritas, which he himself emphasized as the foundation of his political actions.[120]
To a large extent, the public were aware of the vast financial resources that Octavian commanded. When he failed to encourage enough senators to finance the building and maintenance of networks of roads in Italy in 20 BC, he undertook direct responsibility for them himself. This was publicized on the Roman currency issued in 16 BC, after he donated vast amounts of money to the aerarium Saturni, the public treasury.[121]
According to H. H. Scullard, however, Octavian's power was based on the exercise of "a predominant military power and ... the ultimate sanction of his authority was force, however much the fact was disguised."[122] The Senate proposed to Octavian, the victor of Rome's civil wars, that he once again assume command of the provinces. The Senate's proposal was a ratification of Octavian's extra-constitutional power. Through the Senate, Octavian was able to continue the appearance of a still-functional constitution. Feigning reluctance, he accepted a ten-year responsibility of overseeing provinces that were considered chaotic.[123][124]
The provinces ceded to Augustus for that ten-year period comprised much of the conquered Roman world, including all of Hispania and Gaul, Syria, Cilicia, Cyprus, and Egypt.[123][125] Moreover, command of these provinces provided Octavian with control over the majority of Rome's legions.[125][126]
While Octavian acted as consul in Rome, he dispatched senators to the provinces under his command as his representatives to manage provincial affairs and ensure that his orders were carried out. The provinces not under Octavian's control were overseen by governors chosen by the Roman Senate.[126] Octavian became the most powerful political figure in the city of Rome and in most of its provinces, but he did not have a monopoly on political and martial power.[127]
The Senate still controlled North Africa, an important regional producer of grain, as well as Illyria and Macedonia, two strategic regions with several legions.[127] However, the Senate had control of only five or six legions distributed among three senatorial proconsuls, compared to the twenty legions under the control of Octavian, and their control of these regions did not amount to any political or military challenge to Octavian.[116][122] The Senate's control over some of the Roman provinces helped maintain a republican façade for the autocratic Principate. Also, Octavian's control of entire provinces followed Republican-era precedents for the objective of securing peace and creating stability, in which such prominent Romans as Pompey had been granted similar military powers in times of crisis and instability.[116]
On 16 January 27 BC the Senate gave Octavian the new titles of Augustus and Princeps.[128] Augustus derives from the Latin verb augere (meaning "to increase") and can be translated as "the illustrious one". It was a title of religious authority rather than political authority. The new title was also preferable to Romulus, a name he had earlier considered for himself in reference to the legendary founder of Rome, which would have symbolized a second founding of the city.[114] The title of Romulus was associated too strongly with notions of monarchy and kingship, an image that Octavian tried to avoid.[129] The title princeps senatus originally meant the member of the Senate with the highest precedence,[130] but in the case of Augustus, it became an almost regnal title for a leader who was first in charge.[131] Augustus also styled himself as Imperator Caesar divi filius, "Commander Caesar, son of the deified one". With this title, he boasted his familial link to the deified Julius Caesar, and the use of Imperator signified a permanent link to the Roman tradition of victory. He transformed Caesar, a cognomen for one branch of the Julian family, into a new family line that began with him.[128]
Augustus was granted the right to hang the corona civica above his door, the "civic crown" made from oak, and to have laurels drape his doorposts.[127] However, he renounced flaunting insignia of power such as holding a scepter, wearing a diadem, or wearing the golden crown and purple toga of his predecessor Julius Caesar.[132] If he refused to symbolize his power by donning and bearing these items on his person, the Senate nonetheless awarded him with a golden shield displayed in the meeting hall of the Curia, bearing the inscription virtus, pietas, clementia, iustitia—"valor, piety, clemency, and justice."[127][133]
By 23 BC, some of the un-Republican implications of the settlement of 27 BC were becoming apparent. Augustus's retention of an annual consulate drew attention to his de facto dominance over the Roman political system, and cut in half the opportunities for others to achieve what was still nominally the preeminent position in the Roman state.[134] Further, he was causing political problems by desiring to have his nephew Marcus Claudius Marcellus follow in his footsteps and eventually assume the Principate in his turn,[nb 6] alienating his three greatest supporters – Agrippa, Maecenas, and Livia.[135] He appointed the noted Republican Calpurnius Piso (who had fought against Julius Caesar and supported Cassius and Brutus[136]) as co-consul in 23 BC, after his first choice, Aulus Terentius Varro Murena, died unexpectedly.[137]
In the late spring Augustus suffered a severe illness, and on his supposed deathbed made arrangements that would ensure the continuation of the Principate in some form,[138] while allaying senators' suspicions of his anti-republicanism. Augustus prepared to hand down his signet ring to his favored general Agrippa. However, Augustus handed over to his co-consul Piso all of his official documents, an account of public finances, and authority over listed troops in the provinces while Augustus's supposedly favored nephew Marcellus came away empty-handed.[139][140] This was a surprise to many who believed Augustus would have named an heir to his position as an unofficial emperor.[141]
Augustus bestowed only properties and possessions to his designated heirs, as an obvious system of institutionalized imperial inheritance would have provoked resistance and hostility among the republican-minded Romans fearful of monarchy.[117] With regards to the Principate, it was obvious to Augustus that Marcellus was not ready to take on his position;[142] nonetheless, by giving his signet ring to Agrippa, Augustus intended to signal to the legions that Agrippa was to be his successor, and that constitutional procedure notwithstanding, they should continue to obey Agrippa.[143]
Soon after his bout of illness subsided, Augustus gave up his consulship. The only other times Augustus would serve as consul would be in the years 5 and 2 BC,[140][144] both times to introduce his grandsons into public life.[136] This was a clever ploy by Augustus; ceasing to serve as one of two annually elected consuls allowed aspiring senators a better chance to attain the consular position, while allowing Augustus to exercise wider patronage within the senatorial class.[145] Although Augustus had resigned as consul, he desired to retain his consular imperium not just in his provinces but throughout the empire. This desire, as well as the Marcus Primus Affair, led to a second compromise between him and the Senate known as the Second Settlement.[146]
The primary reasons for the Second Settlement were as follows. First, after Augustus relinquished the annual consulship, he was no longer in an official position to rule the state, yet his dominant position remained unchanged over his Roman, 'imperial' provinces where he was still a proconsul.[140][147] When he annually held the office of consul, he had the power to intervene with the affairs of the other provincial proconsuls appointed by the Senate throughout the empire, when he deemed necessary.[148]
A second problem later arose showing the need for the Second Settlement in what became known as the "Marcus Primus Affair".[149] In late 24 or early 23 BC, charges were brought against Marcus Primus, the former proconsul (governor) of Macedonia, for waging a war without prior approval of the Senate on the Odrysian kingdom of Thrace, whose king was a Roman ally.[150] He was defended by Lucius Licinius Varro Murena, who told the trial that his client had received specific instructions from Augustus, ordering him to attack the client state.[151] Later, Primus testified that the orders came from the recently deceased Marcellus.[152]
Such orders, had they been given, would have been considered a breach of the Senate's prerogative under the Constitutional settlement of 27 BC and its aftermath—i.e., before Augustus was granted imperium proconsulare maius—as Macedonia was a Senatorial province under the Senate's jurisdiction, not an imperial province under the authority of Augustus. Such an action would have ripped away the veneer of Republican restoration as promoted by Augustus, and exposed his fraud of merely being the first citizen, a first among equals.[151] Even worse, the involvement of Marcellus provided some measure of proof that Augustus's policy was to have the youth take his place as Princeps, instituting a form of monarchy – accusations that had already played out.[142]
The situation was so serious that Augustus himself appeared at the trial, even though he had not been called as a witness. Under oath, Augustus declared that he gave no such order.[153] Murena disbelieved Augustus's testimony and resented his attempt to subvert the trial by using his auctoritas. He rudely demanded to know why Augustus had turned up to a trial to which he had not been called; Augustus replied that he came in the public interest.[154] Although Primus was found guilty, some jurors voted to acquit, meaning that not everybody believed Augustus's testimony, an insult to the 'August One'.[155]
The Second Constitutional Settlement was completed in part to allay confusion and formalize Augustus's legal authority to intervene in Senatorial provinces. The Senate granted Augustus a form of general imperium proconsulare, or proconsular imperium (power) that applied throughout the empire, not solely to his provinces. Moreover, the Senate augmented Augustus's proconsular imperium into imperium proconsulare maius, or proconsular imperium applicable throughout the empire that was more (maius) or greater than that held by the other proconsuls. This in effect gave Augustus constitutional power superior to all other proconsuls in the empire.[146] Augustus stayed in Rome during the renewal process and provided veterans with lavish donations to gain their support, thereby ensuring that his status of proconsular imperium maius was renewed in 13 BC.[144]
During the second settlement, Augustus was also granted the power of a tribune (tribunicia potestas) for life, though not the official title of tribune.[146] For some years, Augustus had been awarded tribunicia sacrosanctitas, the immunity given to a Tribune of the Plebs. Now he decided to assume the full powers of the magistracy, renewed annually, in perpetuity. Legally, it was closed to patricians, a status that Augustus had acquired some years earlier when adopted by Julius Caesar.[145]
This power allowed him to convene the Senate and people at will and lay business before them, to veto the actions of either the Assembly or the Senate, to preside over elections, and to speak first at any meeting.[144][156] Also included in Augustus's tribunician authority were powers usually reserved for the Roman censor; these included the right to supervise public morals and scrutinize laws to ensure that they were in the public interest, as well as the ability to hold a census and determine the membership of the Senate.[157]
With the powers of a censor, Augustus appealed to virtues of Roman patriotism by banning all attire but the classic toga while entering the Forum.[158] There was no precedent within the Roman system for combining the powers of the tribune and the censor into a single position, nor was Augustus ever elected to the office of censor.[159] Julius Caesar had been granted similar powers, wherein he was charged with supervising the morals of the state. However, this position did not extend to the censor's ability to hold a census and determine the Senate's roster. The office of the tribunus plebis began to lose its prestige due to Augustus's amassing of tribunician powers, so he revived its importance by making it a mandatory appointment for any plebeian desiring the praetorship.[160]
Augustus was granted sole imperium within the city of Rome itself, in addition to being granted proconsular imperium maius and tribunician authority for life. Traditionally, proconsuls (Roman province governors) lost their proconsular "imperium" when they crossed the Pomerium – the sacred boundary of Rome – and entered the city. In these situations, Augustus would have power as part of his tribunician authority but his constitutional imperium within the Pomerium would be less than that of a serving consul. That would mean that, when he was in the city, he might not be the constitutional magistrate with the most authority. Thanks to his prestige or auctoritas, his wishes would usually be obeyed, but there might be some difficulty. To fill this power vacuum, the Senate voted that Augustus's imperium proconsulare maius (superior proconsular power) should not lapse when he was inside the city walls. All armed forces in the city had formerly been under the control of the urban praetors and consuls, but this situation now placed them under the sole authority of Augustus.[161]
In addition, credit was given to Augustus for each subsequent Roman military victory after this time, because the majority of Rome's armies were stationed in imperial provinces commanded by Augustus through legati, who were deputies of the princeps in the provinces. Moreover, if a battle was fought in a Senatorial province, Augustus's proconsular imperium maius allowed him to take command of (or credit for) any major military victory. This meant that Augustus was the only individual able to receive a triumph, a tradition that began with Romulus, Rome's first king and first triumphant general. Lucius Cornelius Balbus was the last man outside Augustus's family to receive this award, in 19 BC.[162] Tiberius, Augustus's eldest stepson by Livia, was the only other general to receive a triumph—for victories in Germania in 7 BC.[163]
Many of the political subtleties of the Second Settlement seem to have evaded the comprehension of the Plebeian class, who were Augustus's greatest supporters and clientele. This caused them to insist upon Augustus's participation in imperial affairs from time to time. Augustus failed to stand for election as consul in 22 BC, and fears arose once again that he was being forced from power by the aristocratic Senate. In 22, 21, and 19 BC, the people rioted in response, and only allowed a single consul to be elected for each of those years, ostensibly to leave the other position open for Augustus.[164]
Likewise, there was a food shortage in Rome in 22 BC which sparked panic, while many urban plebs called for Augustus to take on dictatorial powers to personally oversee the crisis. After a theatrical display of refusal before the Senate, Augustus finally accepted authority over Rome's grain supply "by virtue of his proconsular imperium", and ended the crisis almost immediately.[144] It was not until AD 8 that a food crisis of this sort prompted Augustus to establish a praefectus annonae, a permanent prefect who was in charge of procuring food supplies for Rome.[165]
There were some who were concerned by the expansion of powers granted to Augustus by the Second Settlement, and this came to a head with the apparent conspiracy of Fannius Caepio.[149] Some time prior to 1 September 22 BC, a certain Castricius provided Augustus with information about a conspiracy led by Fannius Caepio.[166] Murena, the outspoken Consul who defended Primus in the Marcus Primus Affair, was named among the conspirators. The conspirators were tried in absentia with Tiberius acting as prosecutor; the jury found them guilty, but it was not a unanimous verdict.[167] All the accused were sentenced to death for treason and executed as soon as they were captured—without ever giving testimony in their defence.[168] Augustus ensured that the facade of Republican government continued with an effective cover-up of the events.[169]
In 19 BC, the Senate granted Augustus a form of 'general consular imperium', which was probably 'imperium consulare maius', like the proconsular powers that he received in 23 BC. Like his tribune authority, the consular powers were another instance of gaining power from offices that he did not actually hold.[170] In addition, Augustus was allowed to wear the consul's insignia in public and before the Senate,[161] as well as to sit in the symbolic chair between the two consuls and hold the fasces, an emblem of consular authority.[170] This seems to have assuaged the populace; regardless of whether or not Augustus was a consul, the importance was that he both appeared as one before the people and could exercise consular power if necessary. On 6 March 12 BC, after the death of Lepidus, he additionally took up the position of pontifex maximus, the high priest of the college of the Pontiffs, the most important position in Roman religion.[171][172] On 5 February 2 BC, Augustus was also given the title pater patriae, or "father of the country".[173][174]
A final reason for the Second Settlement was to give the Principate constitutional stability and staying power in case something happened to Princeps Augustus. His illness of early 23 BC and the Caepio conspiracy showed that the regime's existence hung by the thin thread of the life of one man, Augustus himself, who suffered from several severe and dangerous illnesses throughout his life.[175] If he were to die from natural causes or fall victim to assassination, Rome could be subjected to another round of civil war. The memories of Pharsalus, the Ides of March, the proscriptions, Philippi, and Actium, barely twenty-five years distant, were still vivid in the minds of many citizens. Proconsular imperium was conferred upon Agrippa for five years, similar to Augustus's power, in order to accomplish this constitutional stability. The exact nature of the grant is uncertain but it probably covered Augustus's imperial provinces, east and west, perhaps lacking authority over the provinces of the Senate. That came later, as did the jealously guarded tribunicia potestas.[176] Augustus's accumulation of powers was now complete. In fact, he dated his 'reign' from the completion of the Second Settlement, 1 July 23 BC.[177]
Augustus chose Imperator ("victorious commander") to be his first name, since he wanted to make an emphatically clear connection between himself and the notion of victory, and consequently became known as Imperator Caesar Divi Filius Augustus. By the year 13, Augustus could boast of 21 occasions on which his troops had proclaimed him "imperator" after a successful battle. Almost the entire fourth chapter of his publicly released memoir of achievements, the Res Gestae, was devoted to his military victories and honors.[178]
Augustus also promoted the ideal of a superior Roman civilization with a task of ruling the world (to the extent to which the Romans knew it), a sentiment embodied in words that the contemporary poet Virgil attributes to a legendary ancestor of Augustus: tu regere imperio populos, Romane, memento—"Roman, remember by your strength to rule the Earth's peoples!"[158] The impulse for expansionism was apparently prominent among all classes at Rome, and it is accorded divine sanction by Virgil's Jupiter in Book 1 of the Aeneid, where Jupiter promises Rome imperium sine fine, "sovereignty without end".[179]
By the end of his reign, the armies of Augustus had conquered northern Hispania (modern Spain and Portugal) and the Alpine regions of Raetia and Noricum (modern Switzerland, Bavaria, Austria, Slovenia), Illyricum and Pannonia (modern Albania, Croatia, Hungary, Serbia, etc.), and had extended the borders of the Africa Province to the east and south. Judea was added to the province of Syria when Augustus deposed Herod Archelaus, successor to client king Herod the Great (73–4 BC). Syria (like Egypt after Antony) was governed by a high prefect of the equestrian class rather than by a proconsul or legate of Augustus.[180]
Again, no military effort was needed in 25 BC when Galatia (modern Turkey) was converted to a Roman province shortly after Amyntas of Galatia was killed by an avenging widow of a slain prince from Homonada.[180] The rebellious tribes of Asturias and Cantabria in modern-day Spain were finally quelled in 19 BC, and the territory fell under the provinces of Hispania and Lusitania. This region proved to be a major asset in funding Augustus's future military campaigns, as it was rich in mineral deposits that could be exploited in Roman mining projects, especially the very rich gold deposits at Las Medulas.[181]
Conquering the peoples of the Alps in 16 BC was another important victory for Rome, since it provided a large territorial buffer between the Roman citizens of Italy and Rome's enemies in Germania to the north.[182] Horace dedicated an ode to the victory, while the monumental Trophy of Augustus near Monaco was built to honor the occasion.[183] The capture of the Alpine region also served the next offensive in 12 BC, when Tiberius began the offensive against the Pannonian tribes of Illyricum, and his brother Nero Claudius Drusus moved against the Germanic tribes of the eastern Rhineland. Both campaigns were successful, as Drusus's forces reached the Elbe River by 9 BC—though he died shortly after by falling off his horse.[184] It was recorded that the pious Tiberius walked in front of his brother's body all the way back to Rome.[185]
To protect Rome's eastern territories from the Parthian Empire, Augustus relied on the client states of the east to act as territorial buffers and areas that could raise their own troops for defense. To ensure security of the Empire's eastern flank, Augustus stationed a Roman army in Syria, while his skilled stepson Tiberius negotiated with the Parthians as Rome's diplomat to the East.[186] Tiberius was responsible for restoring Tigranes V to the throne of the Kingdom of Armenia.[185]
Yet arguably his greatest diplomatic achievement was negotiating with Phraates IV of Parthia (37–2 BC) in 20 BC for the return of the battle standards lost by Crassus in the Battle of Carrhae, a symbolic victory and great boost of morale for Rome.[185][186][187] Werner Eck claims that this was a great disappointment for Romans seeking to avenge Crassus's defeat by military means.[188] However, Maria Brosius explains that Augustus used the return of the standards as propaganda symbolizing the submission of Parthia to Rome. The event was celebrated in art such as the breastplate design on the statue Augustus of Prima Porta and in monuments such as the Temple of Mars Ultor ('Mars the Avenger') built to house the standards.[189]
Parthia had always posed a threat to Rome in the east, but the real battlefront was along the Rhine and Danube rivers.[186] Before the final fight with Antony, Octavian's campaigns against the tribes in Dalmatia were the first step in expanding Roman dominions to the Danube.[190] Victory in battle was not always a permanent success, as newly conquered territories were constantly retaken by Rome's enemies in Germania.[186]
A prime example of Roman loss in battle was the Battle of Teutoburg Forest in AD 9, where three entire legions led by Publius Quinctilius Varus were destroyed by Arminius, leader of the Cherusci, an apparent Roman ally.[191] Augustus retaliated by dispatching Tiberius and Drusus to the Rhineland to pacify it, which had some success, although the battle of AD 9 brought an end to Roman expansion into Germany.[192] The Roman general Germanicus took advantage of a Cherusci civil war between Arminius and Segestes; he defeated Arminius, who fled the Battle of Idistaviso in AD 16 but was killed in 21 through treachery.[193]
The illness of Augustus in 23 BC brought the problem of succession to the forefront of politics and public attention. To ensure stability, he needed to designate an heir to his unique position in Roman society and government. This was to be achieved in small, undramatic, and incremental ways that did not stir senatorial fears of monarchy. If someone were to succeed to Augustus's unofficial position of power, he would have to earn it through his own publicly proven merits.[194]
Some Augustan historians argue that indications pointed toward his sister's son Marcellus, who had been quickly married to Augustus's daughter Julia the Elder.[195] Other historians dispute this due to Augustus's will being read aloud to the Senate while he was seriously ill in 23 BC,[196] instead indicating a preference for Marcus Agrippa, who was Augustus's second in charge and arguably the only one of his associates who could have controlled the legions and held the Empire together.[197]
After the death of Marcellus in 23 BC, Augustus married his daughter to Agrippa. This union produced five children, three sons and two daughters: Gaius Caesar, Lucius Caesar, Vipsania Julia, Agrippina the Elder, and Postumus Agrippa, so named because he was born after Marcus Agrippa died. Shortly after the Second Settlement, Agrippa was granted a five-year term of administering the eastern half of the Empire with the imperium of a proconsul and the same tribunicia potestas granted to Augustus (although not trumping Augustus's authority), his seat of governance stationed at Samos in the eastern Aegean.[197][198] This granting of power showed Augustus's favor for Agrippa, but it was also a measure to please members of his Caesarian party by allowing one of their members to share a considerable amount of power with him.[198]
Augustus's intent to make Gaius and Lucius Caesar his heirs became apparent when he adopted them as his own children.[199] He took the consulship in 5 and 2 BC so that he could personally usher them into their political careers,[200] and they were nominated for the consulships of AD 1 and 4.[201] Augustus also showed favor to his stepsons, Livia's children from her first marriage Nero Claudius Drusus Germanicus (henceforth referred to as Drusus) and Tiberius Claudius (henceforth Tiberius), granting them military commands and public office, though seeming to favor Drusus. After Agrippa died in 12 BC, Tiberius was ordered to divorce his own wife Vipsania Agrippina and marry Agrippa's widow, Augustus's daughter Julia—as soon as a period of mourning for Agrippa had ended.[202] Drusus's marriage to Augustus's niece Antonia was considered an unbreakable affair, whereas Vipsania was "only" the daughter of the late Agrippa from his first marriage.[202]
Tiberius shared in Augustus's tribune powers as of 6 BC, but shortly thereafter went into retirement, reportedly wanting no further role in politics while he exiled himself to Rhodes.[163][203] No specific reason is known for his departure, though it could have been a combination of reasons, including a failing marriage with Julia,[163][203] as well as a sense of envy and exclusion over Augustus's apparent favouring of his young grandchildren-turned-sons Gaius and Lucius. (Gaius and Lucius joined the college of priests at an early age, were presented to spectators in a more favorable light, and were introduced to the army in Gaul.)[204][205]
After the early deaths of both Lucius and Gaius in AD 2 and 4 respectively, and the earlier death of his brother Drusus (9 BC), Tiberius was recalled to Rome in June AD 4, where he was adopted by Augustus on the condition that he, in turn, adopt his nephew Germanicus.[206] This continued the tradition of presenting at least two generations of heirs.[202] In that year, Tiberius was also granted the powers of a tribune and proconsul, emissaries from foreign kings had to pay their respects to him, and by AD 13 he had been awarded his second triumph and a level of imperium equal to that of Augustus.[207]
The only other possible claimant as heir was Postumus Agrippa, who had been exiled by Augustus in AD 7; his banishment was made permanent by senatorial decree, and Augustus officially disowned him. He had certainly fallen out of Augustus's favor as an heir; the historian Erich S. Gruen notes various contemporary sources that state Postumus Agrippa was a "vulgar young man, brutal and brutish, and of depraved character".[208]
On 19 August AD 14, Augustus died while visiting Nola, where his father had died. Both Tacitus and Cassius Dio wrote that Livia was rumored to have brought about Augustus's death by poisoning fresh figs.[209][210] This element features in many modern works of historical fiction pertaining to Augustus's life, but some historians view it as likely to have been a salacious fabrication made by those who had favoured Postumus as heir, or by other political enemies of Tiberius. Livia had long been the target of similar rumors of poisoning on behalf of her son, most or all of which are unlikely to have been true.[211]
Alternatively, it is possible that Livia did supply a poisoned fig (she did cultivate a variety of fig named for her that Augustus is said to have enjoyed), but did so as a means of assisted suicide rather than murder. Augustus's health had been in decline in the months immediately before his death, and he had made significant preparations for a smooth transition in power, having at last reluctantly settled on Tiberius as his choice of heir.[212] It is likely that Augustus was not expected to return alive from Nola, but it seems that his health improved once there; it has therefore been speculated that Augustus and Livia conspired to end his life at the anticipated time, having committed all political process to accepting Tiberius, in order to not endanger that transition.[211]
Augustus's famous last words were, "Have I played the part well? Then applaud as I exit"—referring to the play-acting and regal authority that he had put on as emperor. Publicly, though, his last words were, "Behold, I found Rome of clay, and leave her to you of marble." An enormous funerary procession of mourners traveled with Augustus's body from Nola to Rome, and on the day of his burial all public and private businesses closed for the day.[212] Tiberius and his son Drusus delivered the eulogy while standing atop two rostra. Augustus's body was coffin-bound and cremated on a pyre close to his mausoleum. It was proclaimed that Augustus joined the company of the gods as a member of the Roman pantheon.[213]
Historian D. C. A. Shotter states that Augustus's policy of favoring the Julian family line over the Claudian might have afforded Tiberius sufficient cause to show open disdain for Augustus after the latter's death; instead, Tiberius was always quick to rebuke those who criticized Augustus.[214] Shotter suggests that Augustus's deification obliged Tiberius to suppress any open resentment that he might have harbored, coupled with Tiberius's "extremely conservative" attitude towards religion.[215] Also, historian R. Shaw-Smith points to letters of Augustus to Tiberius which display affection towards Tiberius and high regard for his military merits.[216] Shotter states that Tiberius focused his anger and criticism on Gaius Asinius Gallus (for marrying Vipsania after Augustus forced Tiberius to divorce her), as well as toward the two young Caesars, Gaius and Lucius—instead of Augustus, the real architect of his divorce and imperial demotion.[215]
Augustus's reign laid the foundations of a regime that lasted, in one form or another, for nearly fifteen hundred years through the ultimate decline of the Western Roman Empire and until the Fall of Constantinople in 1453. Both his adoptive surname, Caesar, and his title Augustus became the permanent titles of the rulers of the Roman Empire for fourteen centuries after his death, in use both at Old Rome and at New Rome. In many languages, Caesar became the word for Emperor, as in the German Kaiser and in the Bulgarian and subsequently Russian Tsar (sometimes Csar or Czar). The cult of Divus Augustus continued until the state religion of the Empire was changed to Christianity in 391 by Theodosius I. Consequently, there are many excellent statues and busts of the first emperor. He had composed an account of his achievements, the Res Gestae Divi Augusti, to be inscribed in bronze in front of his mausoleum.[218] Copies of the text were inscribed throughout the Empire upon his death.[219] The inscriptions in Latin featured translations in Greek beside it, and were inscribed on many public edifices, such as the temple in Ankara dubbed the Monumentum Ancyranum, called the "queen of inscriptions" by historian Theodor Mommsen.[220]
The Res Gestae is the only one of Augustus's works to have survived from antiquity, though he is also known to have composed poems entitled Sicily, Epiphanus, and Ajax, an autobiography of 13 books, a philosophical treatise, and a written rebuttal to Brutus's Eulogy of Cato.[221] Historians are able to analyze excerpts of letters penned by Augustus to others, preserved in other works, for additional facts or clues about his personal life.[216][222]
Many consider Augustus to be Rome's greatest emperor; his policies certainly extended the Empire's life span and initiated the celebrated Pax Romana or Pax Augusta. The Roman Senate wished subsequent emperors to "be more fortunate than Augustus and better than Trajan". Augustus was intelligent, decisive, and a shrewd politician, but he was not perhaps as charismatic as Julius Caesar and was influenced on occasion by Livia (sometimes for the worse). Nevertheless, his legacy proved more enduring. The city of Rome was utterly transformed under Augustus, with Rome's first institutionalized police force, fire fighting force, and the establishment of the municipal prefect as a permanent office. The police force was divided into cohorts of 500 men each, while the units of firemen ranged from 500 to 1,000 men each, with 7 units assigned to 14 divided city sectors.[223]
A praefectus vigilum, or "Prefect of the Watch" was put in charge of the vigiles, Rome's fire brigade and police.[224] With Rome's civil wars at an end, Augustus was also able to create a standing army for the Roman Empire, fixed at a size of 28 legions of about 170,000 soldiers.[225] This was supported by numerous auxiliary units of 500 non-citizen soldiers each, often recruited from recently conquered areas.[226]
With his finances securing the maintenance of roads throughout Italy, Augustus also installed an official courier system of relay stations overseen by a military officer known as the praefectus vehiculorum.[227] Besides the advent of swifter communication among Italian polities, his extensive building of roads throughout Italy also allowed Rome's armies to march swiftly and at an unprecedented pace across the country.[228] In the year 6 Augustus established the aerarium militare, donating 170 million sesterces to the new military treasury that provided for both active and retired soldiers.[229]
One of the most enduring institutions of Augustus was the establishment of the Praetorian Guard in 27 BC, originally a personal bodyguard unit on the battlefield that evolved into an imperial guard as well as an important political force in Rome.[230] They had the power to intimidate the Senate, install new emperors, and depose ones they disliked; the last emperor they served was Maxentius, as it was Constantine I who disbanded them in the early 4th century and destroyed their barracks, the Castra Praetoria.[231]
Although the most powerful individual in the Roman Empire, Augustus wished to embody the spirit of Republican virtue and norms. He also wanted to relate to and connect with the concerns of the plebs and lay people. He achieved this through various means of generosity and a cutting back of lavish excess. In the year 29 BC, Augustus gave 400 sesterces (equal to 1/10 of a Roman pound of gold) each to 250,000 citizens, 1,000 sesterces each to 120,000 veterans in the colonies, and spent 700 million sesterces in purchasing land for his soldiers to settle upon.[232] He also restored 82 different temples to display his care for the Roman pantheon of deities.[232] In 28 BC, he melted down 80 silver statues erected in his likeness and in honor of him, an attempt of his to appear frugal and modest.[232]
The longevity of Augustus's reign and its legacy to the Roman world should not be overlooked as a key factor in its success. As Tacitus wrote, the younger generations alive in AD 14 had never known any form of government other than the Principate.[233] Had Augustus died earlier (in 23 BC, for instance), matters might have turned out differently. The attrition of the civil wars on the old Republican oligarchy and the longevity of Augustus, therefore, must be seen as major contributing factors in the transformation of the Roman state into a de facto monarchy in these years. Augustus's own experience, his patience, his tact, and his political acumen also played their parts. He directed the future of the Empire down many lasting paths, from the existence of a standing professional army stationed at or near the frontiers, to the dynastic principle so often employed in the imperial succession, to the embellishment of the capital at the emperor's expense. Augustus's ultimate legacy was the peace and prosperity the Empire enjoyed for the next two centuries under the system he initiated. His memory was enshrined in the political ethos of the Imperial age as a paradigm of the good emperor. Every Emperor of Rome adopted his name, Caesar Augustus, which gradually lost its character as a name and eventually became a title.[213] The Augustan era poets Virgil and Horace praised Augustus as a defender of Rome, an upholder of moral justice, and an individual who bore the brunt of responsibility in maintaining the empire.[234]
However, for his rule of Rome and establishing the principate, Augustus has also been subjected to criticism throughout the ages. The contemporary Roman jurist Marcus Antistius Labeo (d. AD 10/11), fond of the days of pre-Augustan republican liberty in which he had been born, openly criticized the Augustan regime. In the beginning of his Annals, the Roman historian Tacitus (c. 56–c.117) wrote that Augustus had cunningly subverted Republican Rome into a position of slavery. He continued to say that, with Augustus's death and swearing of loyalty to Tiberius, the people of Rome simply traded one slaveholder for another.[235] Tacitus, however, records two contradictory but common views of Augustus:
Intelligent people praised or criticized him in varying ways. One opinion was as follows. Filial duty and a national emergency, in which there was no place for law-abiding conduct, had driven him to civil war—and this can neither be initiated nor maintained by decent methods. He had made many concessions to Anthony and to Lepidus for the sake of vengeance on his father's murderers. When Lepidus grew old and lazy, and Anthony's self-indulgence got the better of him, the only possible cure for the distracted country had been government by one man. However, Augustus had put the state in order not by making himself king or dictator, but by creating the Principate. The Empire's frontiers were on the ocean, or distant rivers. Armies, provinces, fleets, the whole system was interrelated. Roman citizens were protected by the law. Provincials were decently treated. Rome itself had been lavishly beautified. Force had been sparingly used—merely to preserve peace for the majority.[236]
According to the second opposing opinion:
filial duty and national crisis had been merely pretexts. In actual fact, the motive of Octavian, the future Augustus, was lust for power ... There had certainly been peace, but it was a blood-stained peace of disasters and assassinations.[237]
In a 2006 biography of Augustus, Anthony Everitt asserts that, through the centuries, judgments on Augustus's reign have oscillated between these two extremes, but stresses that:
Opposites do not have to be mutually exclusive, and we are not obliged to choose one or the other. The story of his career shows that Augustus was indeed ruthless, cruel, and ambitious for himself. This was only in part a personal trait, for upper-class Romans were educated to compete with one another and to excel. However, he combined an overriding concern for his personal interests with a deep-seated patriotism, based on a nostalgia of Rome's antique virtues. In his capacity as princeps, selfishness and selflessness coexisted in his mind. While fighting for dominance, he paid little attention to legality or to the normal civilities of political life. He was devious, untrustworthy, and bloodthirsty. But once he had established his authority, he governed efficiently and justly, generally allowed freedom of speech, and promoted the rule of law. He was immensely hardworking and tried as hard as any democratic parliamentarian to treat his senatorial colleagues with respect and sensitivity. He suffered from no delusions of grandeur.[238]
Tacitus was of the belief that Nerva (r. 96–98) successfully "mingled two formerly alien ideas, principate and liberty".[239] The 3rd-century historian Cassius Dio acknowledged Augustus as a benign, moderate ruler, yet like most other historians after the death of Augustus, Dio viewed Augustus as an autocrat.[235] The poet Marcus Annaeus Lucanus (AD 39–65) was of the opinion that Caesar's victory over Pompey and the fall of Cato the Younger (95 BC–46 BC) marked the end of traditional liberty in Rome; historian Chester G. Starr, Jr. writes of his avoidance of criticizing Augustus, "perhaps Augustus was too sacred a figure to accuse directly."[239]
The Anglo-Irish writer Jonathan Swift (1667–1745), in his Discourse on the Contests and Dissentions in Athens and Rome, criticized Augustus for installing tyranny over Rome, and likened what he saw as Great Britain's virtuous constitutional monarchy to Rome's moral Republic of the 2nd century BC. In his criticism of Augustus, the admiral and historian Thomas Gordon (1658–1741) compared Augustus to the puritanical tyrant Oliver Cromwell (1599–1658).[240] Thomas Gordon and the French political philosopher Montesquieu (1689–1755) both remarked that Augustus was a coward in battle.[241] In his Memoirs of the Court of Augustus, the Scottish scholar Thomas Blackwell (1701–1757) deemed Augustus a Machiavellian ruler, "a bloodthirsty vindicative usurper", "wicked and worthless", "a mean spirit", and a "tyrant".[241]
Augustus's public revenue reforms had a great impact on the subsequent success of the Empire. Augustus brought a far greater portion of the Empire's expanded land base under consistent, direct taxation from Rome, instead of exacting varying, intermittent, and somewhat arbitrary tributes from each local province as Augustus's predecessors had done. This reform greatly increased Rome's net revenue from its territorial acquisitions, stabilized its flow, and regularized the financial relationship between Rome and the provinces, rather than provoking fresh resentments with each new arbitrary exaction of tribute.[242]
The measures of taxation in the reign of Augustus were determined by population census, with fixed quotas for each province. Citizens of Rome and Italy paid indirect taxes, while direct taxes were exacted from the provinces. Indirect taxes included a 4% tax on the price of slaves, a 1% tax on goods sold at auction, and a 5% tax on the inheritance of estates valued at over 100,000 sesterces by persons other than the next of kin.[243]
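The three indirect levies listed above can be sketched as a small calculation. This is purely an illustrative aid, not a historical source: the function names and example figures are hypothetical, and it assumes one plausible reading of the 5% inheritance duty, namely that it fell on the full value of a qualifying estate rather than only on the excess over 100,000 sesterces.

```python
# Illustrative sketch of the Augustan indirect taxes described above.
# All names and example figures are hypothetical.

SLAVE_SALE_RATE = 0.04           # 4% on the price of slaves
AUCTION_RATE = 0.01              # 1% on goods sold at auction
INHERITANCE_RATE = 0.05          # 5% on estates over 100,000 sesterces
INHERITANCE_THRESHOLD = 100_000  # estates at or below this were exempt

def inheritance_tax(estate_value: int, next_of_kin: bool = False) -> float:
    """Duty owed on an inherited estate, in sesterces.

    Next of kin were exempt, as were estates valued at 100,000
    sesterces or less. Assumes (as a simplification) that the 5%
    applied to the full estate value, not just the excess.
    """
    if next_of_kin or estate_value <= INHERITANCE_THRESHOLD:
        return 0.0
    return estate_value * INHERITANCE_RATE

def auction_tax(sale_price: int) -> float:
    """1% levy on goods sold at auction, in sesterces."""
    return sale_price * AUCTION_RATE

# Under these assumptions, a 200,000-sesterce estate left to a
# non-relative owed 10,000 sesterces, while the same estate left
# to next of kin owed nothing.
print(inheritance_tax(200_000))
print(inheritance_tax(200_000, next_of_kin=True))
```

On this sketch, only citizens of Rome and Italy would meet these indirect levies at all; provincials, as the paragraph above notes, paid direct taxes assessed by census quota instead.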
An equally important reform was the abolition of private tax farming, which was replaced by salaried civil service tax collectors. Private contractors who collected taxes for the State were the norm in the Republican era. Some of them were powerful enough to influence the number of votes for men running for offices in Rome. These tax farmers called publicans were infamous for their depredations, great private wealth, and the right to tax local areas.[242]
The use of Egypt's immense land rents to finance the Empire's operations resulted from Augustus's conquest of Egypt and the shift to a Roman form of government.[244] As it was effectively considered Augustus's private property rather than a province of the Empire, it became part of each succeeding emperor's patrimonium.[245] Instead of a legate or proconsul, Augustus installed a prefect from the equestrian class to administer Egypt and maintain its lucrative seaports; this position became the highest political achievement for any equestrian besides becoming Prefect of the Praetorian Guard.[246] The highly productive agricultural land of Egypt yielded enormous revenues that were available to Augustus and his successors to pay for public works and military expeditions.[244] During his reign the circus games resulted in the killing of 3,500 elephants.[247]
The month of August (Latin: Augustus) is named after Augustus; until his time it was called Sextilis (named so because it had been the sixth month of the original Roman calendar and the Latin word for six is sex). Commonly repeated lore has it that August has 31 days because Augustus wanted his month to match the length of Julius Caesar's July, but this is an invention of the 13th century scholar Johannes de Sacrobosco. Sextilis in fact had 31 days before it was renamed, and it was not chosen for its length (see Julian calendar). According to a senatus consultum quoted by Macrobius, Sextilis was renamed to honor Augustus because several of the most significant events in his rise to power, culminating in the fall of Alexandria, fell in that month.[248]
On his deathbed, Augustus boasted "I found a Rome of bricks; I leave to you one of marble." Although there is some truth in the literal meaning of this, Cassius Dio asserts that it was a metaphor for the Empire's strength.[249] Marble could be found in buildings of Rome before Augustus, but it was not extensively used as a building material until the reign of Augustus.[250]
Although this did not apply to the Subura slums, which were still as rickety and fire-prone as ever, he did leave a mark on the monumental topography of the centre and of the Campus Martius, with the Ara Pacis (Altar of Peace) and monumental sundial, whose central gnomon was an obelisk taken from Egypt.[251] The relief sculptures decorating the Ara Pacis visually augmented the written record of Augustus's triumphs in the Res Gestae. Its reliefs depicted the imperial pageants of the praetorians, the Vestals, and the citizenry of Rome.[252]
He also built the Temple of Caesar, the Baths of Agrippa, and the Forum of Augustus with its Temple of Mars Ultor.[253] Other projects were either encouraged by him, such as the Theatre of Balbus, and Agrippa's construction of the Pantheon, or funded by him in the name of others, often relations (e.g. Portico of Octavia, Theatre of Marcellus). Even his Mausoleum of Augustus was built before his death to house members of his family.[254] To celebrate his victory at the Battle of Actium, the Arch of Augustus was built in 29 BC near the entrance of the Temple of Castor and Pollux, and widened in 19 BC to include a triple-arch design.[250]
After the death of Agrippa in 12 BC, a solution had to be found for maintaining Rome's water supply system, which Agrippa had overseen while serving as aedile and had even funded afterwards, as a private citizen, at his own expense. In that year, Augustus arranged a system whereby the Senate designated three of its members as prime commissioners in charge of the water supply, to ensure that Rome's aqueducts did not fall into disrepair.[223]
In the late Augustan era, the commission of five senators called the curatores locorum publicorum iudicandorum (translated as "Supervisors of Public Property") was put in charge of maintaining public buildings and temples of the state cult.[223] Augustus created the senatorial group of the curatores viarum (translated as "Supervisors for Roads") for the upkeep of roads; this senatorial commission worked with local officials and contractors to organize regular repairs.[227]
The Corinthian order of architectural style originating from ancient Greece was the dominant architectural style in the age of Augustus and the imperial phase of Rome. Suetonius once commented that Rome was unworthy of its status as an imperial capital, yet Augustus and Agrippa set out to dismantle this sentiment by transforming the appearance of Rome upon the classical Greek model.[250]
His biographer Suetonius, writing about a century after Augustus's death, described his appearance as: "... unusually handsome and exceedingly graceful at all periods of his life, though he cared nothing for personal adornment. He was so far from being particular about the dressing of his hair, that he would have several barbers working in a hurry at the same time, and as for his beard he now had it clipped and now shaved, while at the very same time he would either be reading or writing something ... He had clear, bright eyes ... His teeth were wide apart, small, and ill-kept; his hair was slightly curly and inclined to golden; his eyebrows met. His ears were of moderate size, and his nose projected a little at the top and then bent ever so slightly inward. His complexion was between dark and fair. He was short of stature, although Julius Marathus, his freedman and keeper of his records, says that he was five feet and nine inches (just under 5 ft. 7 in., or 1.70 meters, in modern height measurements), but this was concealed by the fine proportion and symmetry of his figure, and was noticeable only by comparison with some taller person standing beside him...",[255] adding that "his shoes [were] somewhat high-soled, to make him look taller than he really was".[256] Scientific analysis of traces of paint found in his official statues show that he most likely had light brown hair and eyes (his hair and eyes were depicted as the same color).[257]
His official images were very tightly controlled and idealized, drawing from a tradition of Hellenistic royal portraiture rather than the tradition of realism in Roman portraiture. He first appeared on coins at the age of 19, and from about 29 BC "the explosion in the number of Augustan portraits attests a concerted propaganda campaign aimed at dominating all aspects of civil, religious, economic and military life with Augustus's person."[258] The early images did indeed depict a young man, but although there were gradual changes his images remained youthful until he died in his seventies, by which time they had "a distanced air of ageless majesty".[259] Among the best known of many surviving portraits are the Augustus of Prima Porta, the image on the Ara Pacis, and the Via Labicana Augustus, which shows him as a priest. Several cameo portraits include the Blacas Cameo and Gemma Augustea.
Primary sources
Secondary source material
en/423.html.txt
ADDED
@@ -0,0 +1,91 @@
Asunción (UK: /əˌsʊnsiˈɒn/, US: /ɑːˌsuːnsiˈoʊn, ɑːsuːnˈsjoʊn/,[2][3][4] Spanish: [asunˈsjon]) is the capital and largest city of Paraguay.
The city is located on the left bank of the Paraguay River, almost at the confluence of this river with the River Pilcomayo, on the South American continent. The Paraguay River and the Bay of Asunción in the northwest separate the city from the Occidental Region of Paraguay and Argentina in the south part of the city. The rest of the city is surrounded by the Central Department.
The city is an autonomous capital district, not a part of any department. The metropolitan area, called Gran Asunción, includes the cities of San Lorenzo, Fernando de la Mora, Lambaré, Luque, Mariano Roque Alonso, Ñemby, San Antonio, Limpio, Capiatá and Villa Elisa, which are part of the Central Department. The Asunción metropolitan area has around two million inhabitants. The Municipality of Asunción is listed on the Asunción Stock Exchange, as BVPASA: MUA.
Asunción is one of the oldest cities in South America and the longest continually inhabited area in the Río de la Plata Basin; for this reason it is known as "the Mother of Cities". From Asunción the colonial expeditions departed to found other cities, including the second foundation of Buenos Aires and other important cities such as Villarrica, Corrientes, Santa Fe and Santa Cruz de la Sierra.
Asunción is considered a 'Gamma City' by the Globalization and World Cities Research Network.[5] It is the home of the national government, principal port, and the chief industrial and cultural center of the country. Near Asunción are the headquarters of the CONMEBOL, the continental governing body of association football in South America. Asunción is said to be one of the cheapest cities in the world for foreign visitors.[6]
The Spanish conquistador Juan de Ayolas (died c. 1537) may have first visited the site of the future city on his way north, up the Paraguay River, looking for a passage to the mines of Alto Perú (present-day Bolivia). Later, Juan de Salazar y Espinosa and Gonzalo de Mendoza, a relative of Pedro de Mendoza, were sent in search of Ayolas, but failed to find him. On his way up and then down the river, de Salazar stopped briefly at a bay in the left bank to resupply his ships. He found the natives friendly, and decided to found a fort there in August 1537. He named it Nuestra Señora Santa María de la Asunción (Our Lady Saint Mary of the Assumption – the Roman Catholic Church celebrates the Feast of the Assumption on August 15).[7]
In 1542 natives destroyed Buenos Aires, and the Spaniards there fled to Asunción. Thus the city became the center of a large Spanish colonial province comprising part of Brazil, present-day Paraguay and northeastern Argentina: the Giant Province of the Indies. In 1603 Asunción was the seat of the First Synod of Asunción, which set guidelines for the evangelization of the natives in their lingua franca, Guaraní.
In 1731 an uprising under José de Antequera y Castro was one of the first rebellions against Spanish colonial rule. The uprising failed, but it was the first sign of the independent spirit that was growing among the criollos, mestizos and natives of Paraguay. The event influenced the independence of Paraguay, which subsequently materialised in 1811. The secret meetings between the independence leaders to plan an ambush against the Spanish Governor in Paraguay (Bernardo de Velasco) took place at the home of Juana María de Lara, in downtown Asunción. On the night of May 14 and May 15, 1811, the rebels succeeded and forced governor Velasco to surrender. Today, Lara's former home, known as Casa de la Independencia (House of the Independence), operates as a museum and historical building.
After Paraguay became independent, significant change occurred in Asunción. Under the rule of Gaspar Rodríguez de Francia (in office 1813–1840) roads were built throughout the city and the streets were named. However, during the presidency of Carlos Antonio López (President 1844–1862) Asunción (and Paraguay) saw further progress as the new president implemented new economic policies. More than 400 schools, metallurgic factories and the first railroad service in South America were built during the López presidency. After López died (1862), his son Francisco Solano López became the new president and led the country through the disastrous Paraguayan War that lasted for five years (1864-1870). On 1 January 1869, the capital city Asunción fell to Brazilian forces led by Gen. João de Souza da Fonseca Costa. After the end of the armed conflict, Brazilian troops occupied Asunción until 1876.
Many historians[which?] have claimed that this war provoked a steady downfall of the city and country, since it killed two thirds of the country's population. Progress slowed down greatly afterwards, and the economy stagnated.
After the Paraguayan War, Asunción began a slow attempt at recovery. Towards the end of the 19th century and during the early years of the 20th century, a flow of immigrants from Europe and the Ottoman Empire came to the city. This led to a change in the appearance of the city as many new buildings were built and Asunción went through an era more prosperous than any since the war.
Asunción's Downtown in 1872
Asunción at night
Harbour of Asunción
Asunción is located between the parallels 25° 15' and 25° 20' of south latitude and between the meridians 57° 40' and 57° 30' of west longitude. The city sits on the left bank of the Paraguay River, almost at the confluence of this river with the River Pilcomayo. The Paraguay River and the Bay of Asunción in the northwest separate the city from the Occidental Region of Paraguay and Argentina in the south part of the city. The rest of the city is surrounded by the Central Department.
With its location along the Paraguay River, the city offers many landscapes; it spreads out over gentle hills in a pattern of rectangular blocks. Places such as Cerro Lambaré, a hill located in Lambaré, offer a spectacular show in the springtime because of the blossoming lapacho trees in the area. Parks such as Parque Independencia and Parque Carlos Antonio López offer large areas of typical Paraguayan vegetation and are frequented by tourists. There are several small hills and slightly elevated areas throughout the city, including Cabará, Clavel, Tarumá, Cachinga, and Tacumbú, among others.
Asunción, seen from the International Space Station
Mariscal Lopez Avenue, one of the largest in the city
Democracy Square
Asunción is organized geographically into districts, which in turn group together the city's different neighborhoods.
Asunción has a humid subtropical climate (Köppen: Cfa) that closely borders on a tropical savanna climate (Köppen Aw), characterized by hot, humid summers (average of 27.5 °C or 81.5 °F in January), and mild winters (average of 17.6 °C, 63.7 °F in July).[8] Relative humidity is high throughout the year, so the heat index is higher than the true air temperature in the summer, and in the winter it can feel cooler.[citation needed] The average annual temperature is 23 °C (73 °F). The average annual precipitation is high, with 1,400 millimetres (55 in) distributed in over 80 days yearly. The highest recorded temperature was 42.0 °C (107.6 °F) on 2 January 1934, while the lowest recorded temperature was −1.2 °C (29.8 °F) on 27 June 2011.
Snow is unknown in modern times, but it fell during the Little Ice Age, the last time in June 1751.[9]
Asunción generally has a very short dry season between May and September, but the coldest months are June, July and August. Slight frosts can occur on average one or two days a year. The wet season covers the remainder of the year.
During the wet season, Asunción is generally hot and humid though towards the end of this season, it becomes noticeably cooler. In contrast, Asunción's dry season is pleasantly mild. Asuncion's annual precipitation values observe a summer maximum, due to severe subtropical summer thunderstorms which travel southward from northern Paraguay, originating in the Gran Chaco region of the northwestern part of the country. The wettest and driest months of the year are April and July, on average receiving respectively 166 mm (6.54 in) and 39 mm (1.54 in) of precipitation.
The population is approximately 540,000 people in the city proper.[1] Roughly 30% of Paraguay's 6 million people live within Greater Asunción. Sixty-five percent of the total population in the city are under the age of 30.[13]
The population has increased greatly during the last few decades as a consequence of internal migration from other Departments of Paraguay, at first because of the economic boom in the 1970s, and later because of economic recession in the countryside. The adjacent cities in the Gran Asunción area, such as Luque, Lambaré, San Lorenzo, Fernando de la Mora and Mariano Roque Alonso, have absorbed most of this influx due to the low cost of the land and easy access to Asunción. Mercer Human Resource Consulting has ranked the city as the world's least expensive city to live in for five years running.[14]
Population by sex and age according to the 2002 census
Approximately 90% of the population of Asunción professes Catholicism.[17] The Catholic Archdiocese of Asunción covers an area of 2,582 square kilometres (997 square miles) including the city and surrounding area and has a total population of 1,780,000 of whom 1,612,000 are Catholic.[17] The Catholic Archbishop is Eustaquio Pastor Cuquejo Verga, C.SS.R.[17] In Paraguay's capital there are also places of worship of other Christian denominations, the Church of Jesus Christ of Latter-day Saints, as well as of other religions including Islam, Buddhism and Judaism.[citation needed]
Most people in Paraguay speak one of two languages as their principal language: Paraguayan Spanish (spoken by 56.9% of the population) and Guaraní (spoken by 90.1%). 27.4% of the population speaks the Jopará dialect, a mix of Guaraní with loanwords from Spanish (Creole). Other languages are represented by 4.5% of the population.[citation needed]
The city has a large number of both public and private schools. The best-known public schools are the Colegio Nacional de la Capital (one of the oldest schools in the city, founded in 1877), Colegio Técnico Nacional, Colegio Nacional Presidente Franco and Colegio Nacional Asunción Escalada. The best-known private schools are the American School of Asunción, Colegio San José, St. Annes School, Colegio del Sol, Colegio Santa Clara, Colegio Goethe, Colegio de la Asunción, Colegio Las Almenas, Colegio Campoalto, Colegio Dante Alighieri, Colegio San Francisco, Colegio San Ignacio de Loyola, Colegio Santa Teresa de Jesús, Colegio Inmaculado Corazón de María, Salesianito, Colegio Cristo Rey and Colegio Internacional.
The main universities in the city are the Universidad Americana and the Universidad Nacional de Asunción (state-run). The Universidad Nacional de Asunción was founded in 1889 and has an enrollment of just over 40,000 students. The Universidad Católica Nuestra Señora de la Asunción was founded in 1960 and has a current enrollment of around 21,000 students. The Católica has a small campus in the downtown area next to the Cathedral and a larger campus in the Santa Ana neighborhood, outwards toward the adjoining city of Lambaré, while the Universidad Nacional has its main campus in the city of San Lorenzo, some 5 km (3 mi) eastward from Asunción. There are also a number of smaller privately run universities such as Uninorte, Universidad Católica Nuestra Señora de la Asunción and Universidad Autónoma de Asunción, among others.
The commercial sector has grown considerably in recent years, stretching towards the suburbs, where shopping malls and supermarkets have been built. Paraguay's only stock exchange, the BVPASA, is located here. The city itself is listed on it, as BVPASA: MUA.[citation needed]
The most important companies, businesses and investment groups are headquartered in Asunción. The city's attractiveness can be attributed to its easy-going tax policies: it places no restrictions on investments or the movement of capital, and there is no income tax for investors in bonds on the Asunción Stock Exchange. The exchange traded up 485.7% in August 2012 relative to August 2011. Incentives like these attract significant foreign investment into the city. Many Latin American experts rank Paraguay among the top three countries in Latin America and the Caribbean for investment climate; it remains one of the hemisphere's most attractive nations for doing business, with a body of legislation that protects strategic investments and guarantees a friendly environment for the development of large industrial plants and infrastructure projects. The city is the economic center of Paraguay, followed by Ciudad del Este and Encarnación.
Because the Paraguay River runs right next to Asunción, the city is served by a river terminal in the downtown area. This port is strategically located inside a bay, and it is where most freight enters and leaves the country. There is a lesser terminal in the Sajonia neighbourhood, and a shuttle port in Ita Enramada, almost opposite the Argentine city of Clorinda, Formosa.
Public transportation is used heavily and is served through buses that reach all the regions of the city and surrounding dormitory communities. The main long-distance bus terminal is on the Avenida República Argentina and its bus services connect all of the Departments of Paraguay, as well as international routes to nearby countries such as Argentina, Brazil, Bolivia and Uruguay.
Silvio Pettirossi International Airport is Paraguay's main national and international gateway, located in Luque, a suburb of the capital Asunción. It is named after the Paraguayan aviator Silvio Pettirossi and was formerly known as Presidente Stroessner International Airport. As Paraguay's busiest airport, it is the hub of TAM Paraguayan Airlines.
The city is home to the Godoy Museum, the Museo Nacional de Bellas Artes (which contains paintings from the 19th century), the Church of La Encarnación, the Metropolitan Cathedral and the National Pantheon of the Heroes, a smaller version of Les Invalides in Paris, where many of the nation's heroes are entombed. Other landmarks include the Palacio de los López, the old Senate building (a modern building opened to house Congress in 2003) and the Casa de la Independencia (one of the few examples of colonial architecture remaining in the city).
Calle Palma is the main street downtown where several historical buildings, plazas, shops, restaurants and cafés are located. The "Manzana de la Rivera", located in front of the Presidential Palace, is a series of old traditional homes that have been restored and serve as a museum showcasing the architectural evolution of the city. The old railway station maintains the old trains that now are used in tourist trips to the cities of Luque and Areguá.
Asunción also has luxurious malls that contain shops selling well-known brands. The biggest shopping malls are Shopping del Sol; Mariscal López Shopping, Shopping Villa Morra in the central part of the city, Shopping Multiplaza on the outskirts of the city and the Mall Excelsior located downtown. Pinedo Shopping and San Lorenzo Shopping are the newest and also sizeable shopping malls located just 5.6 and 9.3 kilometres (3.5 and 5.8 mi) from Asunción's boundaries respectively, in the city of San Lorenzo, part of Greater Asunción. In 2016 a new shopping mall, La Galeria, was inaugurated. It is in between the blue towers, and is now the largest shopping mall in the country.
Association football is the main sport in Paraguay, and Asunción is home to some of the most important and traditional teams in the country. These include Olimpia, Cerro Porteño, Club Libertad, Club Nacional, Club Guaraní and Club Sol de América, which have their own stadiums and sport facilities for affiliated members. The Defensores del Chaco stadium is the main football stadium of the country and is located in the neighbourhood of Sajonia, just a few blocks away from the centre of Asunción. Since it is a national stadium sometimes it is used for other activities such as rock concerts. Asunción is also the heart of Paraguayan rugby union.
The nightlife revolves around two areas: one in the downtown part of the city and the other in the neighbourhoods of Manora and Las Carmelitas, which are full of nightclubs and bars.
Asunción also hosts several symphony orchestras, and ballet, opera and theater companies. The most well known orchestras are the City of Asunción's Symphony Orchestra (OSCA), the National Symphony Orchestra and the Northern University Symphony Orchestra. Among professional ballet companies, most renowned are the Asunción Classic and Modern Municipal Ballet, the National Ballet and the Northern University Ballet. The main opera company is the Northern University Opera Company. A long-standing theater company is Arlequín Theater Foundation's. Traditional venues include the Municipal Theater, the Paraguayan-Japanese Center, the Central Bank's Great Lyric Theater, the Juan de Salazar Cultural Center, the Americas Theater, the Tom Jobim Theater, the Arlequín Theater and the Manzana de la Rivera. Asunción is also the center of Architecture in Paraguay.
The selection of the seven treasures of Asunción's material cultural heritage was carried out during April and May 2009. Promoted by the "Organización Capital Americana de la Cultura", with the collaboration of the Paraguayan authorities, the election was held with the intention of publicizing the material cultural heritage of Asunción.[citation needed]
A total of 45 candidates were put forward to become one of the treasures of Asunción's material cultural heritage. The result of the vote, in which 12,417 people participated, is as follows:
The most widely read newspapers are Diario La Nación, Diario Hoy, Diario ABC, Diario Última Hora and Diario Crónica, although the most successful are ABC and Última Hora.
The most important TV channel is SNT, alongside the other free-to-air channels Paravisión, Sur TV, Unicanal, Latele, C9N, RPC and Telefuturo, and the public broadcaster Paraguay TV.
Asunción is twinned with:
en/4230.html.txt
ADDED
@@ -0,0 +1,252 @@
Augustus (Imperator Caesar divi filius Augustus; 23 September 63 BC – 19 August AD 14) was a Roman statesman and military leader who became the first emperor of the Roman Empire, reigning from 27 BC until his death in AD 14.[nb 1] He was the first ruler of the Julio-Claudian dynasty. His status as the founder of the Roman Principate has consolidated an enduring legacy as one of the most effective and controversial leaders in human history.[1][2] The reign of Augustus initiated an era of relative peace known as the Pax Romana. The Roman world was largely free from large-scale conflict for more than two centuries, despite continuous wars of imperial expansion on the Empire's frontiers and the year-long civil war known as the "Year of the Four Emperors" over the imperial succession.
Augustus was born Gaius Octavius into an old and wealthy equestrian branch of the plebeian gens Octavia. His maternal great-uncle Julius Caesar was assassinated in 44 BC, and Octavius was named in Caesar's will as his adopted son and heir, taking the name Octavian (Latin: Gaius Julius Caesar Octavianus). Along with Mark Antony and Marcus Lepidus, he formed the Second Triumvirate to defeat the assassins of Caesar. Following their victory at the Battle of Philippi, the Triumvirate divided the Roman Republic among themselves and ruled as de facto dictators. The Triumvirate was eventually torn apart by the competing ambitions of its members. Lepidus was driven into exile and stripped of his position, and Antony committed suicide following his defeat at the Battle of Actium by Octavian in 31 BC.
After the demise of the Second Triumvirate, Augustus restored the outward façade of the free Republic, with governmental power vested in the Roman Senate, the executive magistrates, and the legislative assemblies. In reality, however, he retained his autocratic power over the Republic. By law, Augustus held a collection of powers granted to him for life by the Senate, including supreme military command, and those of tribune and censor. It took several years for Augustus to develop the framework within which a formally republican state could be led under his sole rule. He rejected monarchical titles, and instead called himself Princeps Civitatis ("First Citizen"). The resulting constitutional framework became known as the Principate, the first phase of the Roman Empire.
Augustus dramatically enlarged the Empire, annexing Egypt, Dalmatia, Pannonia, Noricum, and Raetia, expanding possessions in Africa, and completing the conquest of Hispania, but suffered a major setback in Germania. Beyond the frontiers, he secured the Empire with a buffer region of client states and made peace with the Parthian Empire through diplomacy. He reformed the Roman system of taxation, developed networks of roads with an official courier system, established a standing army, established the Praetorian Guard, created official police and fire-fighting services for Rome, and rebuilt much of the city during his reign. Augustus died in AD 14 at the age of 75, probably from natural causes. However, there were unconfirmed rumors that his wife Livia poisoned him. He was succeeded as emperor by his adopted son Tiberius (also stepson and former son-in-law).
As a consequence of Roman customs, society, and personal preference, Augustus (/ɔːˈɡʌstəs, əˈ-/ aw-GUST-əs, ə-, Latin: [au̯ˈɡʊstʊs]) was known by many names throughout his life:
While his paternal family was from the Volscian town of Velletri, approximately 40 kilometres (25 mi) to the south-east of Rome, Augustus was born in the city of Rome on 23 September 63 BC.[12] He was born at Ox Head, a small property on the Palatine Hill, very close to the Roman Forum. He was given the name Gaius Octavius Thurinus, his cognomen possibly commemorating his father's victory at Thurii over a rebellious band of slaves which occurred a few years after his birth.[13][14] Suetonius wrote: "There are many indications that the Octavian family was in days of old a distinguished one at Velitrae; for not only was a street in the most frequented part of town long ago called Octavian, but an altar was shown there besides, consecrated by an Octavius. This man was leader in a war with a neighbouring town ..." [15]
Due to the crowded nature of Rome at the time, Octavius was taken to his father's home village at Velletri to be raised. Octavius mentions his father's equestrian family only briefly in his memoirs. His paternal great-grandfather Gaius Octavius was a military tribune in Sicily during the Second Punic War. His grandfather had served in several local political offices. His father, also named Gaius Octavius, had been governor of Macedonia. His mother, Atia, was the niece of Julius Caesar.[16][17]
In 59 BC, when he was four years old, his father died.[18] His mother married a former governor of Syria, Lucius Marcius Philippus.[19] Philippus claimed descent from Alexander the Great, and was elected consul in 56 BC. Philippus never had much of an interest in young Octavius. Because of this, Octavius was raised by his grandmother, Julia, the sister of Julius Caesar. Julia died in 52 or 51 BC, and Octavius delivered the funeral oration for his grandmother.[20][21] From this point, his mother and stepfather took a more active role in raising him. He donned the toga virilis four years later,[22] and was elected to the College of Pontiffs in 47 BC.[23][24] The following year he was put in charge of the Greek games that were staged in honor of the Temple of Venus Genetrix, built by Julius Caesar.[24] According to Nicolaus of Damascus, Octavius wished to join Caesar's staff for his campaign in Africa, but gave way when his mother protested.[25] In 46 BC, she consented for him to join Caesar in Hispania, where he planned to fight the forces of Pompey, Caesar's late enemy, but Octavius fell ill and was unable to travel.
When he had recovered, he sailed to the front, but was shipwrecked; after coming ashore with a handful of companions, he crossed hostile territory to Caesar's camp, which impressed his great-uncle considerably.[22] Velleius Paterculus reports that after that time, Caesar allowed the young man to share his carriage.[26] When back in Rome, Caesar deposited a new will with the Vestal Virgins, naming Octavius as the prime beneficiary.[27]
Octavius was studying and undergoing military training in Apollonia, Illyria, when Julius Caesar was killed on the Ides of March (15 March) 44 BC. He rejected the advice of some army officers to take refuge with the troops in Macedonia and sailed to Italy to ascertain whether he had any potential political fortunes or security.[28] Caesar had no living legitimate children under Roman law,[nb 3] and so had adopted Octavius, his grand-nephew, making him his primary heir.[29] Mark Antony later charged that Octavian had earned his adoption by Caesar through sexual favours, though Suetonius describes Antony's accusation as political slander.[30] This form of slander was popular during this time in the Roman Republic to demean and discredit political opponents by accusing them of having an inappropriate sexual affair.[31][32] After landing at Lupiae near Brundisium, Octavius learned the contents of Caesar's will, and only then did he decide to become Caesar's political heir as well as heir to two-thirds of his estate.[24][28][33]
Upon his adoption, Octavius assumed his great-uncle's name Gaius Julius Caesar. Roman citizens adopted into a new family usually retained their old nomen in cognomen form (e.g., Octavianus for one who had been an Octavius, Aemilianus for one who had been an Aemilius, etc.). However, though some of his contemporaries did,[34] there is no evidence that Octavius ever himself officially used the name Octavianus, as it would have made his modest origins too obvious.[35][36][37] Historians usually refer to the new Caesar as Octavian during the time between his adoption and his assumption of the name Augustus in 27 BC in order to avoid confusing the dead dictator with his heir.[38]
Octavian could not rely on his limited funds to make a successful entry into the upper echelons of the Roman political hierarchy.[39] After a warm welcome by Caesar's soldiers at Brundisium,[40] Octavian demanded a portion of the funds that were allotted by Caesar for the intended war against the Parthian Empire in the Middle East.[39] This amounted to 700 million sesterces stored at Brundisium, the staging ground in Italy for military operations in the east.[41]
A later senatorial investigation into the disappearance of the public funds took no action against Octavian, since he subsequently used that money to raise troops against the Senate's arch enemy Mark Antony.[40] Octavian made another bold move in 44 BC when, without official permission, he appropriated the annual tribute that had been sent from Rome's Near Eastern province to Italy.[36][42]
Octavian began to bolster his personal forces with Caesar's veteran legionaries and with troops designated for the Parthian war, gathering support by emphasizing his status as heir to Caesar.[28][43] On his march to Rome through Italy, Octavian's presence and newly acquired funds attracted many, winning over Caesar's former veterans stationed in Campania.[36] By June, he had gathered an army of 3,000 loyal veterans, paying each a salary of 500 denarii.[44][45][46]
Arriving in Rome on 6 May 44 BC, Octavian found consul Mark Antony, Caesar's former colleague, in an uneasy truce with the dictator's assassins. They had been granted a general amnesty on 17 March, yet Antony had succeeded in driving most of them out of Rome with an inflammatory eulogy at Caesar's funeral, mounting public opinion against the assassins.[36]
Mark Antony was amassing political support, but Octavian still had an opportunity to rival him as the leading member of the faction supporting Caesar. Mark Antony had lost the support of many Romans and supporters of Caesar when he initially opposed the motion to elevate Caesar to divine status.[47] Octavian failed to persuade Antony to relinquish Caesar's money to him. During the summer, he managed to win support from Caesarian sympathizers and also made common cause with the Optimates, the former enemies of Caesar, who saw him as the lesser evil and hoped to manipulate him.[48] In September, the leading Optimate orator Marcus Tullius Cicero began to attack Antony in a series of speeches portraying him as a threat to the Republican order.[49][50]
With opinion in Rome turning against him and his year of consular power nearing its end, Antony attempted to pass laws that would assign him the province of Cisalpine Gaul.[51][52] Octavian meanwhile built up a private army in Italy by recruiting Caesarian veterans and, on 28 November, he won over two of Antony's legions with the enticing offer of monetary gain.[53][54][55]
In the face of Octavian's large and capable force, Antony saw the danger of staying in Rome and, to the relief of the Senate, he left Rome for Cisalpine Gaul, which was to be handed to him on 1 January.[55] However, the province had earlier been assigned to Decimus Junius Brutus Albinus, one of Caesar's assassins, who now refused to yield to Antony. Antony besieged him at Mutina[56] and rejected the resolutions passed by the Senate to stop the fighting. The Senate had no army to enforce their resolutions. This provided an opportunity for Octavian, who already was known to have armed forces.[54] Cicero also defended Octavian against Antony's taunts about Octavian's lack of noble lineage and aping of Julius Caesar's name, stating "we have no more brilliant example of traditional piety among our youth."[57]
At the urging of Cicero, the Senate inducted Octavian as senator on 1 January 43 BC, and he was also given the power to vote alongside the former consuls.[54][55] In addition, Octavian was granted propraetorian imperium (commanding power), which legalized his command of troops, sending him to relieve the siege along with Hirtius and Pansa (the consuls for 43 BC).[54][58] In April 43 BC, Antony's forces were defeated at the battles of Forum Gallorum and Mutina, forcing Antony to retreat to Transalpine Gaul. Both consuls were killed, however, leaving Octavian in sole command of their armies.[59][60]
The Senate heaped many more rewards on Decimus Brutus than on Octavian for defeating Antony, then attempted to give command of the consular legions to Decimus Brutus.[61] In response, Octavian stayed in the Po Valley and refused to aid any further offensive against Antony.[62] In July, an embassy of centurions sent by Octavian entered Rome and demanded the consulship left vacant by Hirtius and Pansa,[63] and that the decree declaring Antony a public enemy be rescinded.[62] When this was refused, he marched on the city with eight legions.[62] He encountered no military opposition in Rome, and on 19 August 43 BC was elected consul with his relative Quintus Pedius as co-consul.[64][65] Meanwhile, Antony formed an alliance with Marcus Aemilius Lepidus, another leading Caesarian.[66]
In a meeting near Bologna in October 43 BC, Octavian, Antony, and Lepidus formed the Second Triumvirate.[68] This explicit arrogation of special powers lasting five years was then legalized by a law passed by the plebs, unlike the unofficial First Triumvirate formed by Pompey, Julius Caesar, and Marcus Licinius Crassus.[68][69] The triumvirs then set in motion proscriptions, in which between 130 and 300 senators[nb 4] and 2,000 equites were branded as outlaws and deprived of their property and, for those who failed to escape, their lives.[71] This decree issued by the triumvirate was motivated in part by a need to raise money to pay the salaries of their troops for the upcoming conflict against Caesar's assassins, Marcus Junius Brutus and Gaius Cassius Longinus.[72] Rewards for their arrest gave incentive for Romans to capture those proscribed, while the assets and properties of those arrested were seized by the triumvirs.[71]
Contemporary Roman historians provide conflicting reports as to which triumvir was most responsible for the proscriptions and killing. However, the sources agree that enacting the proscriptions was a means by all three factions to eliminate political enemies.[73] Marcus Velleius Paterculus asserted that Octavian tried to avoid proscribing officials whereas Lepidus and Antony were to blame for initiating them. Cassius Dio defended Octavian as trying to spare as many as possible, whereas Antony and Lepidus, being older and involved in politics longer, had many more enemies to deal with.[74]
This claim was rejected by Appian, who maintained that Octavian shared an equal interest with Lepidus and Antony in eradicating his enemies.[75] Suetonius said that Octavian was reluctant to proscribe officials, but did pursue his enemies with more vigor than the other triumvirs.[73] Plutarch described the proscriptions as a ruthless and cutthroat swapping of friends and family among Antony, Lepidus, and Octavian. For example, Octavian allowed the proscription of his ally Cicero, Antony the proscription of his maternal uncle Lucius Julius Caesar (the consul of 64 BC), and Lepidus his brother Paullus.[74]
On 1 January 42 BC, the Senate posthumously recognized Julius Caesar as a divinity of the Roman state, Divus Iulius. Octavian was able to further his cause by emphasizing the fact that he was Divi filius, "Son of the Divine".[76] Antony and Octavian then sent 28 legions by sea to face the armies of Brutus and Cassius, who had built their base of power in Greece.[77] After two battles at Philippi in Macedonia in October 42 BC, the Caesarian army was victorious and Brutus and Cassius committed suicide. Mark Antony later used the examples of these battles as a means to belittle Octavian, as both battles were decisively won with the use of Antony's forces. In addition to claiming responsibility for both victories, Antony also branded Octavian as a coward for handing over his direct military control to Marcus Vipsanius Agrippa instead.[78]
After Philippi, a new territorial arrangement was made among the members of the Second Triumvirate. Gaul and the province of Hispania were placed in the hands of Octavian. Antony traveled east to Egypt where he allied himself with Queen Cleopatra VII, the former lover of Julius Caesar and mother of Caesar's infant son Caesarion. Lepidus was left with the province of Africa, stymied by Antony, who conceded Hispania to Octavian instead.[79]
Octavian was left to decide where in Italy to settle the tens of thousands of veterans of the Macedonian campaign, whom the triumvirs had promised to discharge. The tens of thousands who had fought on the republican side with Brutus and Cassius could easily ally with a political opponent of Octavian if not appeased, and they also required land.[79] There was no more government-controlled land to allot as settlements for their soldiers, so Octavian had to choose one of two options: alienating many Roman citizens by confiscating their land, or alienating many Roman soldiers who could mount a considerable opposition against him in the Roman heartland. Octavian chose the former.[80] There were as many as eighteen Roman towns affected by the new settlements, with entire populations driven out or at least given partial evictions.[81]
There was widespread dissatisfaction with Octavian over these settlements of his soldiers, and this encouraged many to rally at the side of Lucius Antonius, who was brother of Mark Antony and supported by a majority in the Senate. Meanwhile, Octavian asked for a divorce from Clodia Pulchra, the daughter of Fulvia (Mark Antony's wife) and her first husband Publius Clodius Pulcher. He returned Clodia to her mother, claiming that their marriage had never been consummated. Fulvia decided to take action. Together with Lucius Antonius, she raised an army in Italy to fight for Antony's rights against Octavian. Lucius and Fulvia took a political and martial gamble in opposing Octavian, however, since the Roman army still depended on the triumvirs for their salaries. Lucius and his allies ended up in a defensive siege at Perusia (modern Perugia), where Octavian forced them into surrender in early 40 BC.[81]
Lucius and his army were spared, due to his kinship with Antony, the strongman of the East, while Fulvia was exiled to Sicyon.[82] Octavian showed no mercy, however, for the mass of allies loyal to Lucius; on 15 March, the anniversary of Julius Caesar's assassination, he had 300 Roman senators and equestrians executed for allying with Lucius.[83] Perusia also was pillaged and burned as a warning for others.[82] This bloody event sullied Octavian's reputation and was criticized by many, such as Augustan poet Sextus Propertius.[83]
Sextus Pompeius, the son of Pompey and still a renegade general following Julius Caesar's victory over his father, had established himself in Sicily and Sardinia as part of an agreement reached with the Second Triumvirate in 39 BC.[84] Both Antony and Octavian were vying for an alliance with Pompeius. Octavian succeeded in a temporary alliance in 40 BC when he married Scribonia, a sister or daughter of Pompeius's father-in-law Lucius Scribonius Libo. Scribonia gave birth to Octavian's only natural child, Julia, the same day that he divorced her to marry Livia Drusilla, little more than a year after their marriage.[83]
While in Egypt, Antony had been engaged in an affair with Cleopatra and had fathered three children with her.[nb 5] Aware of his deteriorating relationship with Octavian, Antony left Cleopatra; he sailed to Italy in 40 BC with a large force to oppose Octavian, laying siege to Brundisium. This new conflict proved untenable for both Octavian and Antony, however. Their centurions, who had become important figures politically, refused to fight against fellow partisans of the Caesarian cause, and the legions under their command followed suit. Meanwhile, in Sicyon, Antony's wife Fulvia died of a sudden illness while Antony was en route to meet her. Fulvia's death and the mutiny of their centurions allowed the two remaining triumvirs to effect a reconciliation.[85][86]
In the autumn of 40 BC, Octavian and Antony approved the Treaty of Brundisium, by which Lepidus would remain in Africa, Antony in the East, and Octavian in the West. The Italian Peninsula was left open to all for the recruitment of soldiers, but in reality, this provision was useless for Antony in the East. To further cement the alliance with Mark Antony, Octavian gave his sister, Octavia Minor, in marriage to Antony in late 40 BC.[85]
Sextus Pompeius threatened Octavian in Italy by denying shipments of grain through the Mediterranean Sea to the peninsula. Pompeius's own son was put in charge as naval commander in the effort to cause widespread famine in Italy.[86] Pompeius's control over the sea prompted him to take on the name Neptuni filius, "son of Neptune".[87] A temporary peace agreement was reached in 39 BC with the treaty of Misenum; the blockade on Italy was lifted once Octavian granted Pompeius Sardinia, Corsica, Sicily, and the Peloponnese, and ensured him a future position as consul for 35 BC.[86][87]
The territorial agreement between the triumvirate and Sextus Pompeius began to crumble once Octavian divorced Scribonia and married Livia on 17 January 38 BC.[88] One of Pompeius's naval commanders betrayed him and handed over Corsica and Sardinia to Octavian. Octavian lacked the resources to confront Pompeius alone, however, so an agreement was reached with the Second Triumvirate's extension for another five-year period beginning in 37 BC.[69][89]
In supporting Octavian, Antony expected to gain support for his own campaign against the Parthian Empire, desiring to avenge Rome's defeat at Carrhae in 53 BC.[89] In an agreement reached at Tarentum, Antony provided 120 ships for Octavian to use against Pompeius, while Octavian was to send 20,000 legionaries to Antony for use against Parthia. Octavian sent only a tenth of those promised, however, which Antony viewed as an intentional provocation.[90]
Octavian and Lepidus launched a joint operation against Sextus in Sicily in 36 BC.[91] Despite setbacks for Octavian, the naval fleet of Sextus Pompeius was almost entirely destroyed on 3 September by General Agrippa at the naval Battle of Naulochus. Sextus fled to the east with his remaining forces, where he was captured and executed in Miletus by one of Antony's generals the following year. As Lepidus and Octavian accepted the surrender of Pompeius's troops, Lepidus attempted to claim Sicily for himself, ordering Octavian to leave. Lepidus's troops deserted him, however, and defected to Octavian since they were weary of fighting and were enticed by Octavian's promises of money.[92]
Lepidus surrendered to Octavian and was permitted to retain the office of Pontifex Maximus (head of the college of priests), but was ejected from the Triumvirate, his public career at an end, and effectively was exiled to a villa at Cape Circei in Italy.[72][92] The Roman dominions were now divided between Octavian in the West and Antony in the East. Octavian ensured Rome's citizens of their rights to property in order to maintain peace and stability in his portion of the Empire. This time, he settled his discharged soldiers outside of Italy, while also returning 30,000 slaves to their former Roman owners—slaves who had fled to join Pompeius's army and navy.[93] Octavian had the Senate grant him, his wife, and his sister tribunician immunity, or sacrosanctitas, in order to ensure his own safety and that of Livia and Octavia once he returned to Rome.[94]
Meanwhile, Antony's campaign turned disastrous against Parthia, tarnishing his image as a leader, and the mere 2,000 legionaries sent by Octavian to Antony were hardly enough to replenish his forces.[95] On the other hand, Cleopatra could restore his army to full strength; he already was engaged in a romantic affair with her, so he decided to send Octavia back to Rome.[96] Octavian used this to spread propaganda implying that Antony was becoming less than Roman because he rejected a legitimate Roman spouse for an "Oriental paramour".[97] In 36 BC, Octavian used a political ploy to make himself look less autocratic and Antony more the villain by proclaiming that the civil wars were coming to an end, and that he would step down as triumvir—if only Antony would do the same. Antony refused.[98]
Roman troops captured the Kingdom of Armenia in 34 BC, and Antony made his son Alexander Helios the ruler of Armenia. He also awarded the title "Queen of Kings" to Cleopatra, acts that Octavian used to convince the Roman Senate that Antony had ambitions to diminish the preeminence of Rome.[97] Octavian became consul once again on 1 January 33 BC, and he opened the following session in the Senate with a vehement attack on Antony's grants of titles and territories to his relatives and to his queen.[99]
The breach between Antony and Octavian prompted a large portion of the Senators, as well as both of that year's consuls, to leave Rome and defect to Antony. However, Octavian received two key deserters from Antony in the autumn of 32 BC: Munatius Plancus and Marcus Titius.[100] These defectors gave Octavian the information that he needed to confirm with the Senate all the accusations that he made against Antony.[101]
Octavian forcibly entered the temple of the Vestal Virgins and seized Antony's secret will, which he promptly publicized. The will would have given away Roman-conquered territories as kingdoms for his sons to rule, and designated Alexandria as the site for a tomb for him and his queen.[102][103] In late 32 BC, the Senate officially revoked Antony's powers as consul and declared war on Cleopatra's regime in Egypt.[104][105]
In early 31 BC, Antony and Cleopatra were temporarily stationed in Greece when Octavian gained a preliminary victory: the navy successfully ferried troops across the Adriatic Sea under the command of Agrippa. Agrippa cut off Antony and Cleopatra's main force from their supply routes at sea, while Octavian landed on the mainland opposite the island of Corcyra (modern Corfu) and marched south. Trapped on land and sea, deserters of Antony's army fled to Octavian's side daily while Octavian's forces were comfortable enough to make preparations.[108]
Antony's fleet sailed through the bay of Actium on the western coast of Greece in a desperate attempt to break free of the naval blockade. It was there that Antony's fleet faced the much larger fleet of smaller, more maneuverable ships under commanders Agrippa and Gaius Sosius in the Battle of Actium on 2 September 31 BC.[109] Antony and his remaining forces were spared only due to a last-ditch effort by Cleopatra's fleet that had been waiting nearby.[110]
Octavian pursued them and defeated their forces in Alexandria on 1 August 30 BC—after which Antony and Cleopatra committed suicide. Antony fell on his own sword and was taken by his soldiers back to Alexandria where he died in Cleopatra's arms. Cleopatra died soon after, reputedly by the venomous bite of an asp or by poison.[111] Octavian had exploited his position as Caesar's heir to further his own political career, and he was well aware of the dangers in allowing another person to do the same. He therefore followed the advice of Arius Didymus that "two Caesars are one too many", ordering Caesarion, Julius Caesar's son by Cleopatra, killed, while sparing Cleopatra's children by Antony, with the exception of Antony's older son.[112][113] Octavian had previously shown little mercy to surrendered enemies and acted in ways that had proven unpopular with the Roman people, yet he was given credit for pardoning many of his opponents after the Battle of Actium.[114]
After Actium and the defeat of Antony and Cleopatra, Octavian was in a position to rule the entire Republic under an unofficial principate[115]—but he had to achieve this through incremental power gains. He did so by courting the Senate and the people while upholding the republican traditions of Rome, appearing that he was not aspiring to dictatorship or monarchy.[116][117] Marching into Rome, Octavian and Marcus Agrippa were elected as consuls by the Senate.[118]
Years of civil war had left Rome in a state of near lawlessness, but the Republic was not prepared to accept the control of Octavian as a despot. At the same time, Octavian could not simply give up his authority without risking further civil wars among the Roman generals and, even if he desired no position of authority whatsoever, his position demanded that he look to the well-being of the city of Rome and the Roman provinces. Octavian's aims from this point forward were to return Rome to a state of stability, traditional legality, and civility by lifting the overt political pressure imposed on the courts of law and ensuring free elections—in name at least.[119]
In 27 BC, Octavian made a show of returning full power to the Roman Senate and relinquishing his control of the Roman provinces and their armies. Under his consulship, however, the Senate had little power in initiating legislation by introducing bills for senatorial debate. Octavian was no longer in direct control of the provinces and their armies, but he retained the loyalty of active duty soldiers and veterans alike. The careers of many clients and adherents depended on his patronage, as his financial power was unrivaled in the Roman Republic.[118] Historian Werner Eck states:
The sum of his power derived first of all from various powers of office delegated to him by the Senate and people, secondly from his immense private fortune, and thirdly from numerous patron-client relationships he established with individuals and groups throughout the Empire. All of them taken together formed the basis of his auctoritas, which he himself emphasized as the foundation of his political actions.[120]
To a large extent, the public were aware of the vast financial resources that Octavian commanded. When he failed in 20 BC to encourage enough senators to finance the building and maintenance of Italy's road networks, he undertook direct responsibility for them himself. This was publicized on the Roman currency issued in 16 BC, after he donated vast amounts of money to the aerarium Saturni, the public treasury.[121]
According to H. H. Scullard, however, Octavian's power was based on the exercise of "a predominant military power and ... the ultimate sanction of his authority was force, however much the fact was disguised."[122] The Senate proposed to Octavian, the victor of Rome's civil wars, that he once again assume command of the provinces. The Senate's proposal was a ratification of Octavian's extra-constitutional power. Through the Senate, Octavian was able to continue the appearance of a still-functional constitution. Feigning reluctance, he accepted a ten-year responsibility of overseeing provinces that were considered chaotic.[123][124]
The provinces ceded to Augustus for that ten-year period comprised much of the conquered Roman world, including all of Hispania and Gaul, Syria, Cilicia, Cyprus, and Egypt.[123][125] Moreover, command of these provinces provided Octavian with control over the majority of Rome's legions.[125][126]
While Octavian acted as consul in Rome, he dispatched senators to the provinces under his command as his representatives to manage provincial affairs and ensure that his orders were carried out. The provinces not under Octavian's control were overseen by governors chosen by the Roman Senate.[126] Octavian became the most powerful political figure in the city of Rome and in most of its provinces, but he did not have a monopoly on political and martial power.[127]
The Senate still controlled North Africa, an important regional producer of grain, as well as Illyria and Macedonia, two strategic regions with several legions.[127] However, the Senate had control of only five or six legions distributed among three senatorial proconsuls, compared to the twenty legions under the control of Octavian, and their control of these regions did not amount to any political or military challenge to Octavian.[116][122] The Senate's control over some of the Roman provinces helped maintain a republican façade for the autocratic Principate. Also, Octavian's control of entire provinces followed Republican-era precedents for the objective of securing peace and creating stability, in which such prominent Romans as Pompey had been granted similar military powers in times of crisis and instability.[116]
On 16 January 27 BC the Senate gave Octavian the new titles of Augustus and Princeps.[128] Augustus is from the Latin word augere (meaning to increase) and can be translated as "the illustrious one". It was a title of religious authority rather than political authority. The new title was also preferable to Romulus, a name he had earlier considered for himself in reference to the legendary founder of Rome, which would have symbolized a second founding of the city.[114] The title of Romulus was associated too strongly with notions of monarchy and kingship, an image that Octavian tried to avoid.[129] The title princeps senatus originally meant the member of the Senate with the highest precedence,[130] but in the case of Augustus, it became an almost regnal title for a leader who was first in charge.[131] Augustus also styled himself as Imperator Caesar divi filius, "Commander Caesar son of the deified one". With this title, he boasted his familial link to deified Julius Caesar, and the use of Imperator signified a permanent link to the Roman tradition of victory. He transformed Caesar, a cognomen for one branch of the Julian family, into a new family line that began with him.[128]
Augustus was granted the right to hang the corona civica above his door, the "civic crown" made from oak, and to have laurels drape his doorposts.[127] However, he renounced flaunting insignia of power such as holding a scepter, wearing a diadem, or wearing the golden crown and purple toga of his predecessor Julius Caesar.[132] If he refused to symbolize his power by donning and bearing these items on his person, the Senate nonetheless awarded him with a golden shield displayed in the meeting hall of the Curia, bearing the inscription virtus, pietas, clementia, iustitia—"valor, piety, clemency, and justice."[127][133]
By 23 BC, some of the un-Republican implications were becoming apparent concerning the settlement of 27 BC. Augustus's retention of an annual consulate drew attention to his de facto dominance over the Roman political system, and cut in half the opportunities for others to achieve what was still nominally the preeminent position in the Roman state.[134] Further, he was causing political problems by desiring to have his nephew Marcus Claudius Marcellus follow in his footsteps and eventually assume the Principate in his turn,[nb 6] alienating his three greatest supporters – Agrippa, Maecenas, and Livia.[135] He appointed noted Republican Calpurnius Piso (who had fought against Julius Caesar and supported Cassius and Brutus[136]) as co-consul in 23 BC, after his choice Aulus Terentius Varro Murena died unexpectedly.[137]
In the late spring Augustus suffered a severe illness, and on his supposed deathbed made arrangements that would ensure the continuation of the Principate in some form,[138] while allaying senators' suspicions of his anti-republicanism. Augustus prepared to hand down his signet ring to his favored general Agrippa. However, Augustus handed over to his co-consul Piso all of his official documents, an account of public finances, and authority over listed troops in the provinces while Augustus's supposedly favored nephew Marcellus came away empty-handed.[139][140] This was a surprise to many who believed Augustus would have named an heir to his position as an unofficial emperor.[141]
Augustus bestowed only properties and possessions to his designated heirs, as an obvious system of institutionalized imperial inheritance would have provoked resistance and hostility among the republican-minded Romans fearful of monarchy.[117] With regards to the Principate, it was obvious to Augustus that Marcellus was not ready to take on his position;[142] nonetheless, by giving his signet ring to Agrippa, Augustus intended to signal to the legions that Agrippa was to be his successor, and that constitutional procedure notwithstanding, they should continue to obey Agrippa.[143]
Soon after his bout of illness subsided, Augustus gave up his consulship. The only other times Augustus would serve as consul would be in the years 5 and 2 BC,[140][144] both times to introduce his grandsons into public life.[136] This was a clever ploy by Augustus; ceasing to serve as one of two annually elected consuls allowed aspiring senators a better chance to attain the consular position, while allowing Augustus to exercise wider patronage within the senatorial class.[145] Although Augustus had resigned as consul, he desired to retain his consular imperium not just in his provinces but throughout the empire. This desire, as well as the Marcus Primus Affair, led to a second compromise between him and the Senate known as the Second Settlement.[146]
The primary reasons for the Second Settlement were as follows. First, after Augustus relinquished the annual consulship, he was no longer in an official position to rule the state, yet his dominant position remained unchanged over his Roman, 'imperial' provinces where he was still a proconsul.[140][147] When he annually held the office of consul, he had the power to intervene in the affairs of the other provincial proconsuls appointed by the Senate throughout the empire, when he deemed it necessary.[148]
A second problem later arose showing the need for the Second Settlement in what became known as the "Marcus Primus Affair".[149] In late 24 or early 23 BC, charges were brought against Marcus Primus, the former proconsul (governor) of Macedonia, for waging a war without prior approval of the Senate on the Odrysian kingdom of Thrace, whose king was a Roman ally.[150] He was defended by Lucius Licinius Varro Murena, who claimed at the trial that his client had received specific instructions from Augustus, ordering him to attack the client state.[151] Later, Primus testified that the orders came from the recently deceased Marcellus.[152]
Such orders, had they been given, would have been considered a breach of the Senate's prerogative under the Constitutional settlement of 27 BC and its aftermath—i.e., before Augustus was granted imperium proconsulare maius—as Macedonia was a Senatorial province under the Senate's jurisdiction, not an imperial province under the authority of Augustus. Such an action would have ripped away the veneer of Republican restoration as promoted by Augustus, and exposed as fraudulent his claim to be merely the first citizen, a first among equals.[151] Even worse, the involvement of Marcellus provided some measure of proof that Augustus's policy was to have the youth take his place as Princeps, instituting a form of monarchy – an accusation that had already been leveled at him.[142]
The situation was so serious that Augustus himself appeared at the trial, even though he had not been called as a witness. Under oath, Augustus declared that he gave no such order.[153] Murena disbelieved Augustus's testimony and resented his attempt to subvert the trial by using his auctoritas. He rudely demanded to know why Augustus had turned up to a trial to which he had not been called; Augustus replied that he came in the public interest.[154] Although Primus was found guilty, some jurors voted to acquit, meaning that not everybody believed Augustus's testimony, an insult to the 'August One'.[155]
The Second Constitutional Settlement was completed in part to allay confusion and formalize Augustus's legal authority to intervene in Senatorial provinces. The Senate granted Augustus a form of general imperium proconsulare, or proconsular imperium (power) that applied throughout the empire, not solely to his provinces. Moreover, the Senate augmented Augustus's proconsular imperium into imperium proconsulare maius, or proconsular imperium applicable throughout the empire that was more (maius) or greater than that held by the other proconsuls. This in effect gave Augustus constitutional power superior to all other proconsuls in the empire.[146] Augustus stayed in Rome during the renewal process and provided veterans with lavish donations to gain their support, thereby ensuring that his imperium proconsulare maius was renewed in 13 BC.[144]
During the second settlement, Augustus was also granted the power of a tribune (tribunicia potestas) for life, though not the official title of tribune.[146] For some years, Augustus had been awarded tribunicia sacrosanctitas, the immunity given to a Tribune of the Plebs. Now he decided to assume the full powers of the magistracy, renewed annually, in perpetuity. Legally, it was closed to patricians, a status that Augustus had acquired some years earlier when adopted by Julius Caesar.[145]
This power allowed him to convene the Senate and people at will and lay business before them, to veto the actions of either the Assembly or the Senate, to preside over elections, and to speak first at any meeting.[144][156] Also included in Augustus's tribunician authority were powers usually reserved for the Roman censor; these included the right to supervise public morals and scrutinize laws to ensure that they were in the public interest, as well as the ability to hold a census and determine the membership of the Senate.[157]
With the powers of a censor, Augustus appealed to virtues of Roman patriotism by banning all attire but the classic toga while entering the Forum.[158] There was no precedent within the Roman system for combining the powers of the tribune and the censor into a single position, nor was Augustus ever elected to the office of censor.[159] Julius Caesar had been granted similar powers, wherein he was charged with supervising the morals of the state. However, this position did not extend to the censor's ability to hold a census and determine the Senate's roster. The office of the tribunus plebis began to lose its prestige due to Augustus's amassing of tribunician powers, so he revived its importance by making it a mandatory appointment for any plebeian desiring the praetorship.[160]
Augustus was granted sole imperium within the city of Rome itself, in addition to being granted proconsular imperium maius and tribunician authority for life. Traditionally, proconsuls (Roman province governors) lost their proconsular "imperium" when they crossed the Pomerium – the sacred boundary of Rome – and entered the city. In these situations, Augustus would have power as part of his tribunician authority but his constitutional imperium within the Pomerium would be less than that of a serving consul. That would mean that, when he was in the city, he might not be the constitutional magistrate with the most authority. Thanks to his prestige or auctoritas, his wishes would usually be obeyed, but there might be some difficulty. To fill this power vacuum, the Senate voted that Augustus's imperium proconsulare maius (superior proconsular power) should not lapse when he was inside the city walls. All armed forces in the city had formerly been under the control of the urban praetors and consuls, but this situation now placed them under the sole authority of Augustus.[161]
In addition, credit was given to Augustus for each subsequent Roman military victory, because the majority of Rome's armies were stationed in imperial provinces commanded by Augustus through his legati, the deputies of the princeps in the provinces. Moreover, if a battle was fought in a Senatorial province, Augustus's proconsular imperium maius allowed him to take command of (or credit for) any major military victory. This meant that Augustus was effectively the only individual able to receive a triumph, a tradition that began with Romulus, Rome's first King and first triumphant general. Lucius Cornelius Balbus was the last man outside Augustus's family to receive this award, in 19 BC.[162] Tiberius, Augustus's eldest stepson by Livia, was the only other general to receive a triumph—for victories in Germania in 7 BC.[163]
Many of the political subtleties of the Second Settlement seem to have evaded the comprehension of the Plebeian class, who were Augustus's greatest supporters and clientele. This caused them to insist upon Augustus's participation in imperial affairs from time to time. Augustus failed to stand for election as consul in 22 BC, and fears arose once again that he was being forced from power by the aristocratic Senate. In 22, 21, and 19 BC, the people rioted in response, and only allowed a single consul to be elected for each of those years, ostensibly to leave the other position open for Augustus.[164]
Likewise, there was a food shortage in Rome in 22 BC which sparked panic, while many urban plebs called for Augustus to take on dictatorial powers to personally oversee the crisis. After a theatrical display of refusal before the Senate, Augustus finally accepted authority over Rome's grain supply "by virtue of his proconsular imperium", and ended the crisis almost immediately.[144] It was not until AD 8 that a food crisis of this sort prompted Augustus to establish a praefectus annonae, a permanent prefect who was in charge of procuring food supplies for Rome.[165]
There were some who were concerned by the expansion of powers granted to Augustus by the Second Settlement, and this came to a head with the apparent conspiracy of Fannius Caepio.[149] Some time prior to 1 September 22 BC, a certain Castricius provided Augustus with information about a conspiracy led by Fannius Caepio.[166] Murena, the outspoken Consul who defended Primus in the Marcus Primus Affair, was named among the conspirators. The conspirators were tried in absentia with Tiberius acting as prosecutor; the jury found them guilty, but it was not a unanimous verdict.[167] All the accused were sentenced to death for treason and executed as soon as they were captured—without ever giving testimony in their defence.[168] Augustus ensured that the facade of Republican government continued with an effective cover-up of the events.[169]
In 19 BC, the Senate granted Augustus a form of 'general consular imperium', which was probably 'imperium consulare maius', like the proconsular powers that he received in 23 BC. Like his tribunician authority, the consular powers were another instance of gaining power from offices that he did not actually hold.[170] In addition, Augustus was allowed to wear the consul's insignia in public and before the Senate,[161] as well as to sit in the symbolic chair between the two consuls and hold the fasces, an emblem of consular authority.[170] This seems to have assuaged the populace; regardless of whether or not Augustus was a consul, what mattered was that he both appeared as one before the people and could exercise consular power if necessary. On 6 March 12 BC, after the death of Lepidus, he additionally took up the position of pontifex maximus, the high priest of the college of the Pontiffs, the most important position in Roman religion.[171][172] On 5 February 2 BC, Augustus was also given the title pater patriae, or "father of the country".[173][174]
A final reason for the Second Settlement was to give the Principate constitutional stability and staying power in case something happened to Princeps Augustus. His illness of early 23 BC and the Caepio conspiracy showed that the regime's existence hung by the thin thread of the life of one man, Augustus himself, who suffered from several severe and dangerous illnesses throughout his life.[175] If he were to die from natural causes or fall victim to assassination, Rome could be subjected to another round of civil war. The memories of Pharsalus, the Ides of March, the proscriptions, Philippi, and Actium, barely twenty-five years distant, were still vivid in the minds of many citizens. Proconsular imperium was conferred upon Agrippa for five years, similar to Augustus's power, in order to accomplish this constitutional stability. The exact nature of the grant is uncertain but it probably covered Augustus's imperial provinces, east and west, perhaps lacking authority over the provinces of the Senate. That came later, as did the jealously guarded tribunicia potestas.[176] Augustus's accumulation of powers was now complete. In fact, he dated his 'reign' from the completion of the Second Settlement, 1 July 23 BC.[177]
Augustus chose Imperator ("victorious commander") to be his first name, since he wanted to make an emphatically clear connection between himself and the notion of victory, and consequently became known as Imperator Caesar Divi Filius Augustus. By AD 13, Augustus could boast of 21 occasions on which his troops had proclaimed him imperator after a successful battle. Almost the entire fourth chapter in his publicly released memoirs of achievements known as the Res Gestae was devoted to his military victories and honors.[178]
Augustus also promoted the ideal of a superior Roman civilization with a task of ruling the world (to the extent to which the Romans knew it), a sentiment embodied in words that the contemporary poet Virgil attributes to a legendary ancestor of Augustus: tu regere imperio populos, Romane, memento—"Roman, remember by your strength to rule the Earth's peoples!"[158] The impulse for expansionism was apparently prominent among all classes at Rome, and it is accorded divine sanction by Virgil's Jupiter in Book 1 of the Aeneid, where Jupiter promises Rome imperium sine fine, "sovereignty without end".[179]
By the end of his reign, the armies of Augustus had conquered northern Hispania (modern Spain and Portugal) and the Alpine regions of Raetia and Noricum (modern Switzerland, Bavaria, Austria, Slovenia), Illyricum and Pannonia (modern Albania, Croatia, Hungary, Serbia, etc.), and had extended the borders of the Africa Province to the east and south. Judea was added to the province of Syria when Augustus deposed Herod Archelaus, successor to client king Herod the Great (73–4 BC). Syria (like Egypt after Antony) was governed by a high prefect of the equestrian class rather than by a proconsul or legate of Augustus.[180]
Again, no military effort was needed in 25 BC when Galatia (modern Turkey) was converted to a Roman province shortly after Amyntas of Galatia was killed by an avenging widow of a slain prince from Homonada.[180] The rebellious tribes of Asturias and Cantabria in modern-day Spain were finally quelled in 19 BC, and the territory fell under the provinces of Hispania and Lusitania. This region proved to be a major asset in funding Augustus's future military campaigns, as it was rich in mineral deposits that could be exploited in Roman mining projects, especially the very rich gold deposits at Las Medulas.[181]
Conquering the peoples of the Alps in 16 BC was another important victory for Rome, since it provided a large territorial buffer between the Roman citizens of Italy and Rome's enemies in Germania to the north.[182] Horace dedicated an ode to the victory, while the monumental Trophy of Augustus near Monaco was built to honor the occasion.[183] The capture of the Alpine region also served the next offensive in 12 BC, when Tiberius moved against the Pannonian tribes of Illyricum, and his brother Nero Claudius Drusus against the Germanic tribes of the eastern Rhineland. Both campaigns were successful, as Drusus's forces reached the Elbe River by 9 BC—though he died shortly afterwards from a fall from his horse.[184] It was recorded that the pious Tiberius walked in front of his brother's body all the way back to Rome.[185]
To protect Rome's eastern territories from the Parthian Empire, Augustus relied on the client states of the east to act as territorial buffers and areas that could raise their own troops for defense. To ensure security of the Empire's eastern flank, Augustus stationed a Roman army in Syria, while his skilled stepson Tiberius negotiated with the Parthians as Rome's diplomat to the East.[186] Tiberius was responsible for restoring Tigranes V to the throne of the Kingdom of Armenia.[185]
Yet arguably his greatest diplomatic achievement was negotiating with Phraates IV of Parthia (37–2 BC) in 20 BC for the return of the battle standards lost by Crassus in the Battle of Carrhae, a symbolic victory and great boost of morale for Rome.[185][186][187] Werner Eck claims that this was a great disappointment for Romans seeking to avenge Crassus's defeat by military means.[188] However, Maria Brosius explains that Augustus used the return of the standards as propaganda symbolizing the submission of Parthia to Rome. The event was celebrated in art such as the breastplate design on the statue Augustus of Prima Porta and in monuments such as the Temple of Mars Ultor ('Mars the Avenger') built to house the standards.[189]
Parthia had always posed a threat to Rome in the east, but the real battlefront was along the Rhine and Danube rivers.[186] Before the final fight with Antony, Octavian's campaigns against the tribes in Dalmatia were the first step in expanding Roman dominions to the Danube.[190] Victory in battle was not always a permanent success, as newly conquered territories were constantly retaken by Rome's enemies in Germania.[186]
A prime example of Roman loss in battle was the Battle of Teutoburg Forest in AD 9, where three entire legions led by Publius Quinctilius Varus were destroyed by Arminius, leader of the Cherusci, an apparent Roman ally.[191] Augustus retaliated by dispatching Tiberius and Drusus to the Rhineland to pacify it, which had some success, although the battle of AD 9 brought an end to Roman expansion into Germany.[192] Roman general Germanicus took advantage of a Cherusci civil war between Arminius and Segestes; he defeated Arminius, who fled the Battle of Idistaviso in AD 16 but was killed later in 21 due to treachery.[193]
The illness of Augustus in 23 BC brought the problem of succession to the forefront of political debate and public concern. To ensure stability, he needed to designate an heir to his unique position in Roman society and government. This was to be achieved in small, undramatic, and incremental ways that did not stir senatorial fears of monarchy. If someone was to succeed to Augustus's unofficial position of power, he would have to earn it through his own publicly proven merits.[194]
Some Augustan historians argue that indications pointed toward his sister's son Marcellus, who had been quickly married to Augustus's daughter Julia the Elder.[195] Other historians dispute this due to Augustus's will being read aloud to the Senate while he was seriously ill in 23 BC,[196] instead indicating a preference for Marcus Agrippa, who was Augustus's second in charge and arguably the only one of his associates who could have controlled the legions and held the Empire together.[197]
After the death of Marcellus in 23 BC, Augustus married his daughter to Agrippa. This union produced five children, three sons and two daughters: Gaius Caesar, Lucius Caesar, Vipsania Julia, Agrippina the Elder, and Postumus Agrippa, so named because he was born after Marcus Agrippa died. Shortly after the Second Settlement, Agrippa was granted a five-year term of administering the eastern half of the Empire with the imperium of a proconsul and the same tribunicia potestas granted to Augustus (although not trumping Augustus's authority), his seat of governance stationed at Samos in the eastern Aegean.[197][198] This granting of power showed Augustus's favor for Agrippa, but it was also a measure to please members of his Caesarian party by allowing one of their members to share a considerable amount of power with him.[198]
Augustus's intent to make Gaius and Lucius Caesar his heirs became apparent when he adopted them as his own children.[199] He took the consulship in 5 and 2 BC so that he could personally usher them into their political careers,[200] and they were nominated for the consulships of AD 1 and 4.[201] Augustus also showed favor to his stepsons, Livia's children from her first marriage, Nero Claudius Drusus Germanicus (henceforth referred to as Drusus) and Tiberius Claudius (henceforth Tiberius), granting them military commands and public office, though seeming to favor Drusus. After Agrippa died in 12 BC, Tiberius was ordered to divorce his own wife Vipsania Agrippina and marry Agrippa's widow, Augustus's daughter Julia—as soon as a period of mourning for Agrippa had ended.[202] Drusus's marriage to Augustus's niece Antonia was considered an unbreakable affair, whereas Vipsania was "only" the daughter of the late Agrippa from his first marriage.[202]
Tiberius shared in Augustus's tribunician powers as of 6 BC, but shortly thereafter went into retirement, reportedly wanting no further role in politics, and exiled himself to Rhodes.[163][203] No specific reason is known for his departure, though it could have been a combination of reasons, including a failing marriage with Julia,[163][203] as well as a sense of envy and exclusion over Augustus's apparent favouring of his young grandchildren-turned-sons Gaius and Lucius. (Gaius and Lucius joined the college of priests at an early age, were presented to spectators in a more favorable light, and were introduced to the army in Gaul.)[204][205]
After the early deaths of both Lucius and Gaius in AD 2 and 4 respectively, and the earlier death of his brother Drusus (9 BC), Tiberius was recalled to Rome in June AD 4, where he was adopted by Augustus on the condition that he, in turn, adopt his nephew Germanicus.[206] This continued the tradition of presenting at least two generations of heirs.[202] In that year, Tiberius was also granted the powers of a tribune and proconsul, and emissaries from foreign kings had to pay their respects to him; by AD 13 he had been awarded his second triumph and a level of imperium equal to that of Augustus.[207]
The only other possible claimant as heir was Postumus Agrippa, who had been exiled by Augustus in AD 7, his banishment made permanent by senatorial decree, and Augustus officially disowned him. He certainly fell out of Augustus's favor as an heir; the historian Erich S. Gruen notes various contemporary sources that state Postumus Agrippa was a "vulgar young man, brutal and brutish, and of depraved character".[208]
On 19 August AD 14, Augustus died while visiting Nola where his father had died. Both Tacitus and Cassius Dio wrote that Livia was rumored to have brought about Augustus's death by poisoning fresh figs.[209][210] This element features in many modern works of historical fiction pertaining to Augustus's life, but some historians view it as likely to have been a salacious fabrication made by those who had favoured Postumus as heir, or other of Tiberius's political enemies. Livia had long been the target of similar rumors of poisoning on the behalf of her son, most or all of which are unlikely to have been true.[211]
Alternatively, it is possible that Livia did supply a poisoned fig (she did cultivate a variety of fig named for her that Augustus is said to have enjoyed), but did so as a means of assisted suicide rather than murder. Augustus's health had been in decline in the months immediately before his death, and he had made significant preparations for a smooth transition in power, having at last reluctantly settled on Tiberius as his choice of heir.[212] It is likely that Augustus was not expected to return alive from Nola, but it seems that his health improved once there; it has therefore been speculated that Augustus and Livia conspired to end his life at the anticipated time, having committed all political process to accepting Tiberius, in order to not endanger that transition.[211]
Augustus's famous last words were, "Have I played the part well? Then applaud as I exit"—referring to the play-acting and regal authority that he had put on as emperor. Publicly, though, his last words were, "Behold, I found Rome of clay, and leave her to you of marble." An enormous funerary procession of mourners traveled with Augustus's body from Nola to Rome, and on the day of his burial all public and private businesses closed for the day.[212] Tiberius and his son Drusus delivered the eulogy while standing atop two rostra. Augustus's body was coffin-bound and cremated on a pyre close to his mausoleum. It was proclaimed that Augustus joined the company of the gods as a member of the Roman pantheon.[213]
Historian D. C. A. Shotter states that Augustus's policy of favoring the Julian family line over the Claudian might have afforded Tiberius sufficient cause to show open disdain for Augustus after the latter's death; instead, Tiberius was always quick to rebuke those who criticized Augustus.[214] Shotter suggests that Augustus's deification obliged Tiberius to suppress any open resentment that he might have harbored, coupled with Tiberius's "extremely conservative" attitude towards religion.[215] Also, historian R. Shaw-Smith points to letters of Augustus to Tiberius which display affection towards Tiberius and high regard for his military merits.[216] Shotter states that Tiberius focused his anger and criticism on Gaius Asinius Gallus (for marrying Vipsania after Augustus forced Tiberius to divorce her), as well as toward the two young Caesars, Gaius and Lucius—instead of Augustus, the real architect of his divorce and imperial demotion.[215]
Augustus's reign laid the foundations of a regime that lasted, in one form or another, for nearly fifteen hundred years through the ultimate decline of the Western Roman Empire and until the Fall of Constantinople in 1453. Both his adoptive surname, Caesar, and his title Augustus became the permanent titles of the rulers of the Roman Empire for fourteen centuries after his death, in use both at Old Rome and at New Rome. In many languages, Caesar became the word for Emperor, as in the German Kaiser and in the Bulgarian and subsequently Russian Tsar (sometimes Csar or Czar). The cult of Divus Augustus continued until the state religion of the Empire was changed to Christianity in 391 by Theodosius I. Consequently, there are many excellent statues and busts of the first emperor. He had composed an account of his achievements, the Res Gestae Divi Augusti, to be inscribed in bronze in front of his mausoleum.[218] Copies of the text were inscribed throughout the Empire upon his death.[219] The inscriptions in Latin featured translations in Greek beside them, and were inscribed on many public edifices, such as the temple in Ankara dubbed the Monumentum Ancyranum, called the "queen of inscriptions" by historian Theodor Mommsen.[220]
The Res Gestae is the only work of his to have survived from antiquity, though Augustus is also known to have composed poems entitled Sicily, Epiphanus, and Ajax, an autobiography of 13 books, a philosophical treatise, and a written rebuttal to Brutus's Eulogy of Cato.[221] Historians are able to analyze excerpts of letters penned by Augustus to others, preserved in other works, for additional facts or clues about his personal life.[216][222]
Many consider Augustus to be Rome's greatest emperor; his policies certainly extended the Empire's life span and initiated the celebrated Pax Romana or Pax Augusta. The Roman Senate wished subsequent emperors to "be more fortunate than Augustus and better than Trajan". Augustus was intelligent, decisive, and a shrewd politician, but he was not perhaps as charismatic as Julius Caesar and was influenced on occasion by Livia (sometimes for the worse). Nevertheless, his legacy proved more enduring. The city of Rome was utterly transformed under Augustus, with Rome's first institutionalized police force, fire fighting force, and the establishment of the municipal prefect as a permanent office. The police force was divided into cohorts of 500 men each, while the units of firemen ranged from 500 to 1,000 men each, with 7 units assigned to 14 divided city sectors.[223]
A praefectus vigilum, or "Prefect of the Watch", was put in charge of the vigiles, Rome's fire brigade and police.[224] With Rome's civil wars at an end, Augustus was also able to create a standing army for the Roman Empire, fixed at a size of 28 legions of about 170,000 soldiers.[225] This was supported by numerous auxiliary units of 500 non-citizen soldiers each, often recruited from recently conquered areas.[226]
With his finances securing the maintenance of roads throughout Italy, Augustus also installed an official courier system of relay stations overseen by a military officer known as the praefectus vehiculorum.[227] Besides the advent of swifter communication among Italian polities, his extensive building of roads throughout Italy also allowed Rome's armies to march swiftly and at an unprecedented pace across the country.[228] In AD 6, Augustus established the aerarium militare, donating 170 million sesterces to the new military treasury that provided for both active and retired soldiers.[229]
One of the most enduring institutions of Augustus was the establishment of the Praetorian Guard in 27 BC, originally a personal bodyguard unit on the battlefield that evolved into an imperial guard as well as an important political force in Rome.[230] They had the power to intimidate the Senate, install new emperors, and depose ones they disliked; the last emperor they served was Maxentius, as it was Constantine I who disbanded them in the early 4th century and destroyed their barracks, the Castra Praetoria.[231]
Although the most powerful individual in the Roman Empire, Augustus wished to embody the spirit of Republican virtue and norms. He also wanted to relate to and connect with the concerns of the plebs and lay people. He achieved this through various means of generosity and a cutting back of lavish excess. In 29 BC, Augustus gave 400 sesterces (equal to 1/10 of a Roman pound of gold) each to 250,000 citizens, 1,000 sesterces each to 120,000 veterans in the colonies, and spent 700 million sesterces in purchasing land for his soldiers to settle upon.[232] He also restored 82 different temples to display his care for the Roman pantheon of deities.[232] In 28 BC, he melted down 80 silver statues erected in his likeness and in honor of him, in an attempt to appear frugal and modest.[232]
The longevity of Augustus's reign and its legacy to the Roman world should not be overlooked as a key factor in its success. As Tacitus wrote, the younger generations alive in AD 14 had never known any form of government other than the Principate.[233] Had Augustus died earlier (in 23 BC, for instance), matters might have turned out differently. The attrition of the civil wars on the old Republican oligarchy and the longevity of Augustus, therefore, must be seen as major contributing factors in the transformation of the Roman state into a de facto monarchy in these years. Augustus's own experience, his patience, his tact, and his political acumen also played their parts. He directed the future of the Empire down many lasting paths, from the existence of a standing professional army stationed at or near the frontiers, to the dynastic principle so often employed in the imperial succession, to the embellishment of the capital at the emperor's expense. Augustus's ultimate legacy was the peace and prosperity the Empire enjoyed for the next two centuries under the system he initiated. His memory was enshrined in the political ethos of the Imperial age as a paradigm of the good emperor. Every Emperor of Rome adopted his name, Caesar Augustus, which gradually lost its character as a name and eventually became a title.[213] The Augustan era poets Virgil and Horace praised Augustus as a defender of Rome, an upholder of moral justice, and an individual who bore the brunt of responsibility in maintaining the empire.[234]
However, for his rule of Rome and establishing the principate, Augustus has also been subjected to criticism throughout the ages. The contemporary Roman jurist Marcus Antistius Labeo (d. AD 10/11), fond of the days of pre-Augustan republican liberty in which he had been born, openly criticized the Augustan regime. In the beginning of his Annals, the Roman historian Tacitus (c. 56–c.117) wrote that Augustus had cunningly subverted Republican Rome into a position of slavery. He continued to say that, with Augustus's death and swearing of loyalty to Tiberius, the people of Rome simply traded one slaveholder for another.[235] Tacitus, however, records two contradictory but common views of Augustus:
Intelligent people praised or criticized him in varying ways. One opinion was as follows. Filial duty and a national emergency, in which there was no place for law-abiding conduct, had driven him to civil war—and this can neither be initiated nor maintained by decent methods. He had made many concessions to Antony and to Lepidus for the sake of vengeance on his father's murderers. When Lepidus grew old and lazy, and Antony's self-indulgence got the better of him, the only possible cure for the distracted country had been government by one man. However, Augustus had put the state in order not by making himself king or dictator, but by creating the Principate. The Empire's frontiers were on the ocean, or distant rivers. Armies, provinces, fleets, the whole system was interrelated. Roman citizens were protected by the law. Provincials were decently treated. Rome itself had been lavishly beautified. Force had been sparingly used—merely to preserve peace for the majority.[236]
According to the second opposing opinion:
filial duty and national crisis had been merely pretexts. In actual fact, the motive of Octavian, the future Augustus, was lust for power ... There had certainly been peace, but it was a blood-stained peace of disasters and assassinations.[237]
In his 2006 biography of Augustus, Anthony Everitt asserts that through the centuries, judgments on Augustus's reign have oscillated between these two extremes, but stresses that:
Opposites do not have to be mutually exclusive, and we are not obliged to choose one or the other. The story of his career shows that Augustus was indeed ruthless, cruel, and ambitious for himself. This was only in part a personal trait, for upper-class Romans were educated to compete with one another and to excel. However, he combined an overriding concern for his personal interests with a deep-seated patriotism, based on a nostalgia of Rome's antique virtues. In his capacity as princeps, selfishness and selflessness coexisted in his mind. While fighting for dominance, he paid little attention to legality or to the normal civilities of political life. He was devious, untrustworthy, and bloodthirsty. But once he had established his authority, he governed efficiently and justly, generally allowed freedom of speech, and promoted the rule of law. He was immensely hardworking and tried as hard as any democratic parliamentarian to treat his senatorial colleagues with respect and sensitivity. He suffered from no delusions of grandeur.[238]
Tacitus was of the belief that Nerva (r. 96–98) successfully "mingled two formerly alien ideas, principate and liberty".[239] The 3rd-century historian Cassius Dio acknowledged Augustus as a benign, moderate ruler, yet like most other historians after the death of Augustus, Dio viewed Augustus as an autocrat.[235] The poet Marcus Annaeus Lucanus (AD 39–65) was of the opinion that Caesar's victory over Pompey and the fall of Cato the Younger (95 BC–46 BC) marked the end of traditional liberty in Rome; historian Chester G. Starr, Jr. writes of Lucan's avoidance of criticizing Augustus, "perhaps Augustus was too sacred a figure to accuse directly."[239]
The Anglo-Irish writer Jonathan Swift (1667–1745), in his Discourse on the Contests and Dissentions in Athens and Rome, criticized Augustus for installing tyranny over Rome, and likened what he believed to be Great Britain's virtuous constitutional monarchy to Rome's moral Republic of the 2nd century BC. In his criticism of Augustus, the admiral and historian Thomas Gordon (1658–1741) compared Augustus to the puritanical tyrant Oliver Cromwell (1599–1658).[240] Thomas Gordon and the French political philosopher Montesquieu (1689–1755) both remarked that Augustus was a coward in battle.[241] In his Memoirs of the Court of Augustus, the Scottish scholar Thomas Blackwell (1701–1757) deemed Augustus a Machiavellian ruler, "a bloodthirsty vindicative usurper", "wicked and worthless", "a mean spirit", and a "tyrant".[241]
Augustus's public revenue reforms had a great impact on the subsequent success of the Empire. Augustus brought a far greater portion of the Empire's expanded land base under consistent, direct taxation from Rome, instead of exacting varying, intermittent, and somewhat arbitrary tributes from each local province as Augustus's predecessors had done. This reform greatly increased Rome's net revenue from its territorial acquisitions, stabilized its flow, and regularized the financial relationship between Rome and the provinces, rather than provoking fresh resentments with each new arbitrary exaction of tribute.[242]
The measures of taxation in the reign of Augustus were determined by population census, with fixed quotas for each province. Citizens of Rome and Italy paid indirect taxes, while direct taxes were exacted from the provinces. Indirect taxes included a 4% tax on the price of slaves, a 1% tax on goods sold at auction, and a 5% tax on the inheritance of estates valued at over 100,000 sesterces by persons other than the next of kin.[243]
An equally important reform was the abolition of private tax farming, which was replaced by salaried civil service tax collectors. Private contractors who collected taxes for the State were the norm in the Republican era. Some of them were powerful enough to influence the number of votes for men running for offices in Rome. These tax farmers called publicans were infamous for their depredations, great private wealth, and the right to tax local areas.[242]
The use of Egypt's immense land rents to finance the Empire's operations resulted from Augustus's conquest of Egypt and the shift to a Roman form of government.[244] As it was effectively considered Augustus's private property rather than a province of the Empire, it became part of each succeeding emperor's patrimonium.[245] Instead of a legate or proconsul, Augustus installed a prefect from the equestrian class to administer Egypt and maintain its lucrative seaports; this position became the highest political achievement for any equestrian besides becoming Prefect of the Praetorian Guard.[246] The highly productive agricultural land of Egypt yielded enormous revenues that were available to Augustus and his successors to pay for public works and military expeditions.[244] During his reign the circus games resulted in the killing of 3,500 elephants.[247]
The month of August (Latin: Augustus) is named after Augustus; until his time it was called Sextilis (named so because it had been the sixth month of the original Roman calendar and the Latin word for six is sex). Commonly repeated lore has it that August has 31 days because Augustus wanted his month to match the length of Julius Caesar's July, but this is an invention of the 13th century scholar Johannes de Sacrobosco. Sextilis in fact had 31 days before it was renamed, and it was not chosen for its length (see Julian calendar). According to a senatus consultum quoted by Macrobius, Sextilis was renamed to honor Augustus because several of the most significant events in his rise to power, culminating in the fall of Alexandria, fell in that month.[248]
On his deathbed, Augustus boasted "I found a Rome of bricks; I leave to you one of marble." Although there is some truth in the literal meaning of this, Cassius Dio asserts that it was a metaphor for the Empire's strength.[249] Marble could be found in buildings of Rome before Augustus, but it was not extensively used as a building material until the reign of Augustus.[250]
Although this did not apply to the Subura slums, which were still as rickety and fire-prone as ever, he did leave a mark on the monumental topography of the centre and of the Campus Martius, with the Ara Pacis (Altar of Peace) and monumental sundial, whose central gnomon was an obelisk taken from Egypt.[251] The relief sculptures decorating the Ara Pacis visually augmented the written record of Augustus's triumphs in the Res Gestae. Its reliefs depicted the imperial pageants of the praetorians, the Vestals, and the citizenry of Rome.[252]
He also built the Temple of Caesar, the Baths of Agrippa, and the Forum of Augustus with its Temple of Mars Ultor.[253] Other projects were either encouraged by him, such as the Theatre of Balbus, and Agrippa's construction of the Pantheon, or funded by him in the name of others, often relations (e.g. Portico of Octavia, Theatre of Marcellus). Even his Mausoleum of Augustus was built before his death to house members of his family.[254] To celebrate his victory at the Battle of Actium, the Arch of Augustus was built in 29 BC near the entrance of the Temple of Castor and Pollux, and widened in 19 BC to include a triple-arch design.[250]
After the death of Agrippa in 12 BC, a solution had to be found for maintaining Rome's water supply system, which Agrippa had overseen as aedile and had even funded afterwards at his own expense as a private citizen. In that year, Augustus arranged a system whereby the Senate designated three of its members as prime commissioners in charge of the water supply, to ensure that Rome's aqueducts did not fall into disrepair.[223]
In the late Augustan era, the commission of five senators called the curatores locorum publicorum iudicandorum (translated as "Supervisors of Public Property") was put in charge of maintaining public buildings and temples of the state cult.[223] Augustus created the senatorial group of the curatores viarum (translated as "Supervisors for Roads") for the upkeep of roads; this senatorial commission worked with local officials and contractors to organize regular repairs.[227]
The Corinthian order, an architectural style originating in ancient Greece, was the dominant style in the age of Augustus and the imperial phase of Rome. Suetonius once commented that Rome was unworthy of its status as an imperial capital, yet Augustus and Agrippa set out to dispel this sentiment by transforming the appearance of Rome upon the classical Greek model.[250]
His biographer Suetonius, writing about a century after Augustus's death, described his appearance as: "... unusually handsome and exceedingly graceful at all periods of his life, though he cared nothing for personal adornment. He was so far from being particular about the dressing of his hair, that he would have several barbers working in a hurry at the same time, and as for his beard he now had it clipped and now shaved, while at the very same time he would either be reading or writing something ... He had clear, bright eyes ... His teeth were wide apart, small, and ill-kept; his hair was slightly curly and inclined to golden; his eyebrows met. His ears were of moderate size, and his nose projected a little at the top and then bent ever so slightly inward. His complexion was between dark and fair. He was short of stature, although Julius Marathus, his freedman and keeper of his records, says that he was five feet and nine inches (just under 5 ft. 7 in., or 1.70 meters, in modern height measurements), but this was concealed by the fine proportion and symmetry of his figure, and was noticeable only by comparison with some taller person standing beside him...",[255] adding that "his shoes [were] somewhat high-soled, to make him look taller than he really was".[256] Scientific analysis of traces of paint found in his official statues show that he most likely had light brown hair and eyes (his hair and eyes were depicted as the same color).[257]
His official images were very tightly controlled and idealized, drawing from a tradition of Hellenistic royal portraiture rather than the tradition of realism in Roman portraiture. He first appeared on coins at the age of 19, and from about 29 BC "the explosion in the number of Augustan portraits attests a concerted propaganda campaign aimed at dominating all aspects of civil, religious, economic and military life with Augustus's person."[258] The early images did indeed depict a young man, but although there were gradual changes his images remained youthful until he died in his seventies, by which time they had "a distanced air of ageless majesty".[259] Among the best known of many surviving portraits are the Augustus of Prima Porta, the image on the Ara Pacis, and the Via Labicana Augustus, which shows him as a priest. Several cameo portraits include the Blacas Cameo and Gemma Augustea.
Augustus (Imperator Caesar divi filius Augustus; 23 September 63 BC – 19 August AD 14) was a Roman statesman and military leader who became the first emperor of the Roman Empire, reigning from 27 BC until his death in AD 14.[nb 1] He was the first ruler of the Julio-Claudian dynasty. His status as the founder of the Roman Principate has consolidated an enduring legacy as one of the most effective and controversial leaders in human history.[1][2] The reign of Augustus initiated an era of relative peace known as the Pax Romana. The Roman world was largely free from large-scale conflict for more than two centuries, despite continuous wars of imperial expansion on the Empire's frontiers and the year-long civil war known as the "Year of the Four Emperors" over the imperial succession.
Augustus was born Gaius Octavius into an old and wealthy equestrian branch of the plebeian gens Octavia. His maternal great-uncle Julius Caesar was assassinated in 44 BC, and Octavius was named in Caesar's will as his adopted son and heir, taking the name Octavian (Latin: Gaius Julius Caesar Octavianus). Along with Mark Antony and Marcus Lepidus, he formed the Second Triumvirate to defeat the assassins of Caesar. Following their victory at the Battle of Philippi, the Triumvirate divided the Roman Republic among themselves and ruled as de facto dictators. The Triumvirate was eventually torn apart by the competing ambitions of its members. Lepidus was driven into exile and stripped of his position, and Antony committed suicide following his defeat at the Battle of Actium by Octavian in 31 BC.
After the demise of the Second Triumvirate, Augustus restored the outward façade of the free Republic, with governmental power vested in the Roman Senate, the executive magistrates, and the legislative assemblies. In reality, however, he retained his autocratic power over the Republic. By law, Augustus held a collection of powers granted to him for life by the Senate, including supreme military command, and those of tribune and censor. It took several years for Augustus to develop the framework within which a formally republican state could be led under his sole rule. He rejected monarchical titles, and instead called himself Princeps Civitatis ("First Citizen"). The resulting constitutional framework became known as the Principate, the first phase of the Roman Empire.
Augustus dramatically enlarged the Empire, annexing Egypt, Dalmatia, Pannonia, Noricum, and Raetia, expanding possessions in Africa, and completing the conquest of Hispania, but suffered a major setback in Germania. Beyond the frontiers, he secured the Empire with a buffer region of client states and made peace with the Parthian Empire through diplomacy. He reformed the Roman system of taxation, developed networks of roads with an official courier system, established a standing army, established the Praetorian Guard, created official police and fire-fighting services for Rome, and rebuilt much of the city during his reign. Augustus died in AD 14 at the age of 75, probably from natural causes. However, there were unconfirmed rumors that his wife Livia poisoned him. He was succeeded as emperor by his adopted son Tiberius (also stepson and former son-in-law).
As a consequence of Roman customs, society, and personal preference, Augustus (/ɔːˈɡʌstəs, əˈ-/ aw-GUST-əs, ə-, Latin: [au̯ˈɡʊstʊs]) was known by many names throughout his life:
While his paternal family was from the Volscian town of Velletri, approximately 40 kilometres (25 mi) to the south-east of Rome, Augustus was born in the city of Rome on 23 September 63 BC.[12] He was born at Ox Head, a small property on the Palatine Hill, very close to the Roman Forum. He was given the name Gaius Octavius Thurinus, his cognomen possibly commemorating his father's victory at Thurii over a rebellious band of slaves which occurred a few years after his birth.[13][14] Suetonius wrote: "There are many indications that the Octavian family was in days of old a distinguished one at Velitrae; for not only was a street in the most frequented part of town long ago called Octavian, but an altar was shown there besides, consecrated by an Octavius. This man was leader in a war with a neighbouring town ..." [15]
Due to the crowded nature of Rome at the time, Octavius was taken to his father's home village at Velletri to be raised. Octavius mentions his father's equestrian family only briefly in his memoirs. His paternal great-grandfather Gaius Octavius was a military tribune in Sicily during the Second Punic War. His grandfather had served in several local political offices. His father, also named Gaius Octavius, had been governor of Macedonia. His mother, Atia, was the niece of Julius Caesar.[16][17]
In 59 BC, when he was four years old, his father died.[18] His mother married a former governor of Syria, Lucius Marcius Philippus.[19] Philippus claimed descent from Alexander the Great, and was elected consul in 56 BC. Philippus never had much of an interest in young Octavius. Because of this, Octavius was raised by his grandmother, Julia, the sister of Julius Caesar. Julia died in 52 or 51 BC, and Octavius delivered the funeral oration for his grandmother.[20][21] From this point, his mother and stepfather took a more active role in raising him. He donned the toga virilis four years later,[22] and was elected to the College of Pontiffs in 47 BC.[23][24] The following year he was put in charge of the Greek games that were staged in honor of the Temple of Venus Genetrix, built by Julius Caesar.[24] According to Nicolaus of Damascus, Octavius wished to join Caesar's staff for his campaign in Africa, but gave way when his mother protested.[25] In 46 BC, she consented for him to join Caesar in Hispania, where he planned to fight the forces of Pompey, Caesar's late enemy, but Octavius fell ill and was unable to travel.
When he had recovered, he sailed to the front, but was shipwrecked; after coming ashore with a handful of companions, he crossed hostile territory to Caesar's camp, which impressed his great-uncle considerably.[22] Velleius Paterculus reports that after that time, Caesar allowed the young man to share his carriage.[26] When back in Rome, Caesar deposited a new will with the Vestal Virgins, naming Octavius as the prime beneficiary.[27]
Octavius was studying and undergoing military training in Apollonia, Illyria, when Julius Caesar was killed on the Ides of March (15 March) 44 BC. He rejected the advice of some army officers to take refuge with the troops in Macedonia and sailed to Italy to ascertain whether he had any potential political fortunes or security.[28] Caesar had no living legitimate children under Roman law,[nb 3] and so had adopted Octavius, his grand-nephew, making him his primary heir.[29] Mark Antony later charged that Octavian had earned his adoption by Caesar through sexual favours, though Suetonius describes Antony's accusation as political slander.[30] This form of slander was popular during this time in the Roman Republic to demean and discredit political opponents by accusing them of having an inappropriate sexual affair.[31][32] After landing at Lupiae near Brundisium, Octavius learned the contents of Caesar's will, and only then did he decide to become Caesar's political heir as well as heir to two-thirds of his estate.[24][28][33]
Upon his adoption, Octavius assumed his great-uncle's name Gaius Julius Caesar. Roman citizens adopted into a new family usually retained their old nomen in cognomen form (e.g., Octavianus for one who had been an Octavius, Aemilianus for one who had been an Aemilius, etc.). However, though some of his contemporaries did,[34] there is no evidence that Octavius ever himself officially used the name Octavianus, as it would have made his modest origins too obvious.[35][36][37] Historians usually refer to the new Caesar as Octavian during the time between his adoption and his assumption of the name Augustus in 27 BC in order to avoid confusing the dead dictator with his heir.[38]
Octavian could not rely on his limited funds to make a successful entry into the upper echelons of the Roman political hierarchy.[39] After a warm welcome by Caesar's soldiers at Brundisium,[40] Octavian demanded a portion of the funds that were allotted by Caesar for the intended war against the Parthian Empire in the Middle East.[39] This amounted to 700 million sesterces stored at Brundisium, the staging ground in Italy for military operations in the east.[41]
A later senatorial investigation into the disappearance of the public funds took no action against Octavian, since he subsequently used that money to raise troops against the Senate's arch enemy Mark Antony.[40] Octavian made another bold move in 44 BC when, without official permission, he appropriated the annual tribute that had been sent from Rome's Near Eastern province to Italy.[36][42]
Octavian began to bolster his personal forces with Caesar's veteran legionaries and with troops designated for the Parthian war, gathering support by emphasizing his status as heir to Caesar.[28][43] On his march to Rome through Italy, Octavian's presence and newly acquired funds attracted many, winning over Caesar's former veterans stationed in Campania.[36] By June, he had gathered an army of 3,000 loyal veterans, paying each a salary of 500 denarii.[44][45][46]
Arriving in Rome on 6 May 44 BC, Octavian found consul Mark Antony, Caesar's former colleague, in an uneasy truce with the dictator's assassins. They had been granted a general amnesty on 17 March, yet Antony had succeeded in driving most of them out of Rome with an inflammatory eulogy at Caesar's funeral, mounting public opinion against the assassins.[36]
Mark Antony was amassing political support, but Octavian still had opportunity to rival him as the leading member of the faction supporting Caesar. Mark Antony had lost the support of many Romans and supporters of Caesar when he initially opposed the motion to elevate Caesar to divine status.[47] Octavian failed to persuade Antony to relinquish Caesar's money to him. During the summer, he managed to win support from Caesarian sympathizers and also made common cause with the Optimates, the former enemies of Caesar, who saw him as the lesser evil and hoped to manipulate him.[48] In September, the leading Optimate orator Marcus Tullius Cicero began to attack Antony in a series of speeches portraying him as a threat to the Republican order.[49][50]
With opinion in Rome turning against him and his year of consular power nearing its end, Antony attempted to pass laws that would assign him the province of Cisalpine Gaul.[51][52] Octavian meanwhile built up a private army in Italy by recruiting Caesarian veterans and, on 28 November, he won over two of Antony's legions with the enticing offer of monetary gain.[53][54][55]
In the face of Octavian's large and capable force, Antony saw the danger of staying in Rome and, to the relief of the Senate, he left Rome for Cisalpine Gaul, which was to be handed to him on 1 January.[55] However, the province had earlier been assigned to Decimus Junius Brutus Albinus, one of Caesar's assassins, who now refused to yield to Antony. Antony besieged him at Mutina[56] and rejected the resolutions passed by the Senate to stop the fighting. The Senate had no army to enforce their resolutions. This provided an opportunity for Octavian, who already was known to have armed forces.[54] Cicero also defended Octavian against Antony's taunts about Octavian's lack of noble lineage and aping of Julius Caesar's name, stating "we have no more brilliant example of traditional piety among our youth."[57]
At the urging of Cicero, the Senate inducted Octavian as senator on 1 January 43 BC, yet he also was given the power to vote alongside the former consuls.[54][55] In addition, Octavian was granted propraetor imperium (commanding power) which legalized his command of troops, sending him to relieve the siege along with Hirtius and Pansa (the consuls for 43 BC).[54][58] In April 43 BC, Antony's forces were defeated at the battles of Forum Gallorum and Mutina, forcing Antony to retreat to Transalpine Gaul. Both consuls were killed, however, leaving Octavian in sole command of their armies.[59][60]
The senate heaped many more rewards on Decimus Brutus than on Octavian for defeating Antony, then attempted to give command of the consular legions to Decimus Brutus.[61] In response, Octavian stayed in the Po Valley and refused to aid any further offensive against Antony.[62] In July, an embassy of centurions sent by Octavian entered Rome and demanded the consulship left vacant by Hirtius and Pansa[63] and also that the decree should be rescinded which declared Antony a public enemy.[62] When this was refused, he marched on the city with eight legions.[62] He encountered no military opposition in Rome, and on 19 August 43 BC was elected consul with his relative Quintus Pedius as co-consul.[64][65] Meanwhile, Antony formed an alliance with Marcus Aemilius Lepidus, another leading Caesarian.[66]
In a meeting near Bologna in October 43 BC, Octavian, Antony, and Lepidus formed the Second Triumvirate.[68] This explicit arrogation of special powers lasting five years was then legalised by law passed by the plebs, unlike the unofficial First Triumvirate formed by Pompey, Julius Caesar, and Marcus Licinius Crassus.[68][69] The triumvirs then set in motion proscriptions, in which between 130 and 300 senators[nb 4] and 2,000 equites were branded as outlaws and deprived of their property and, for those who failed to escape, their lives.[71] This decree issued by the triumvirate was motivated in part by a need to raise money to pay the salaries of their troops for the upcoming conflict against Caesar's assassins, Marcus Junius Brutus and Gaius Cassius Longinus.[72] Rewards for their arrest gave incentive for Romans to capture those proscribed, while the assets and properties of those arrested were seized by the triumvirs.[71]
Contemporary Roman historians provide conflicting reports as to which triumvir was most responsible for the proscriptions and killing. However, the sources agree that enacting the proscriptions was a means by all three factions to eliminate political enemies.[73] Marcus Velleius Paterculus asserted that Octavian tried to avoid proscribing officials whereas Lepidus and Antony were to blame for initiating them. Cassius Dio defended Octavian as trying to spare as many as possible, whereas Antony and Lepidus, being older and involved in politics longer, had many more enemies to deal with.[74]
This claim was rejected by Appian, who maintained that Octavian shared an equal interest with Lepidus and Antony in eradicating his enemies.[75] Suetonius said that Octavian was reluctant to proscribe officials, but did pursue his enemies with more vigor than the other triumvirs.[73] Plutarch described the proscriptions as a ruthless and cutthroat swapping of friends and family among Antony, Lepidus, and Octavian. For example, Octavian allowed the proscription of his ally Cicero, Antony the proscription of his maternal uncle Lucius Julius Caesar (the consul of 64 BC), and Lepidus his brother Paullus.[74]
On 1 January 42 BC, the Senate posthumously recognized Julius Caesar as a divinity of the Roman state, Divus Iulius. Octavian was able to further his cause by emphasizing the fact that he was Divi filius, "Son of the Divine".[76] Antony and Octavian then sent 28 legions by sea to face the armies of Brutus and Cassius, who had built their base of power in Greece.[77] After two battles at Philippi in Macedonia in October 42, the Caesarian army was victorious and Brutus and Cassius committed suicide. Mark Antony later used the examples of these battles as a means to belittle Octavian, as both battles were decisively won with the use of Antony's forces. In addition to claiming responsibility for both victories, Antony also branded Octavian as a coward for handing over his direct military control to Marcus Vipsanius Agrippa instead.[78]
After Philippi, a new territorial arrangement was made among the members of the Second Triumvirate. Gaul and the province of Hispania were placed in the hands of Octavian. Antony traveled east to Egypt where he allied himself with Queen Cleopatra VII, the former lover of Julius Caesar and mother of Caesar's infant son Caesarion. Lepidus was left with the province of Africa, stymied by Antony, who conceded Hispania to Octavian instead.[79]
Octavian was left to decide where in Italy to settle the tens of thousands of veterans of the Macedonian campaign, whom the triumvirs had promised to discharge. The tens of thousands who had fought on the republican side with Brutus and Cassius could easily ally with a political opponent of Octavian if not appeased, and they also required land.[79] There was no more government-controlled land to allot as settlements for their soldiers, so Octavian had to choose one of two options: alienating many Roman citizens by confiscating their land, or alienating many Roman soldiers who could mount a considerable opposition against him in the Roman heartland. Octavian chose the former.[80] There were as many as eighteen Roman towns affected by the new settlements, with entire populations driven out or at least given partial evictions.[81]
There was widespread dissatisfaction with Octavian over these settlements of his soldiers, and this encouraged many to rally at the side of Lucius Antonius, who was brother of Mark Antony and supported by a majority in the Senate. Meanwhile, Octavian asked for a divorce from Clodia Pulchra, the daughter of Fulvia (Mark Antony's wife) and her first husband Publius Clodius Pulcher. He returned Clodia to her mother, claiming that their marriage had never been consummated. Fulvia decided to take action. Together with Lucius Antonius, she raised an army in Italy to fight for Antony's rights against Octavian. Lucius and Fulvia took a political and martial gamble in opposing Octavian, however, since the Roman army still depended on the triumvirs for their salaries. Lucius and his allies ended up in a defensive siege at Perusia (modern Perugia), where Octavian forced them into surrender in early 40 BC.[81]
Lucius and his army were spared, due to his kinship with Antony, the strongman of the East, while Fulvia was exiled to Sicyon.[82] Octavian showed no mercy, however, for the mass of allies loyal to Lucius; on 15 March, the anniversary of Julius Caesar's assassination, he had 300 Roman senators and equestrians executed for allying with Lucius.[83] Perusia also was pillaged and burned as a warning for others.[82] This bloody event sullied Octavian's reputation and was criticized by many, such as Augustan poet Sextus Propertius.[83]
Sextus Pompeius, the son of Pompey and still a renegade general following Julius Caesar's victory over his father, had established himself in Sicily and Sardinia as part of an agreement reached with the Second Triumvirate in 39 BC.[84] Both Antony and Octavian were vying for an alliance with Pompeius. Octavian succeeded in a temporary alliance in 40 BC when he married Scribonia, a sister or daughter of Pompeius's father-in-law Lucius Scribonius Libo. Scribonia gave birth to Octavian's only natural child, Julia, the same day that he divorced her to marry Livia Drusilla, little more than a year after their marriage.[83]
While in Egypt, Antony had been engaged in an affair with Cleopatra and had fathered three children with her.[nb 5] Aware of his deteriorating relationship with Octavian, Antony left Cleopatra; he sailed to Italy in 40 BC with a large force to oppose Octavian, laying siege to Brundisium. This new conflict proved untenable for both Octavian and Antony, however. Their centurions, who had become important figures politically, refused to fight due to their Caesarian cause, while the legions under their command followed suit. Meanwhile, in Sicyon, Antony's wife Fulvia died of a sudden illness while Antony was en route to meet her. Fulvia's death and the mutiny of their centurions allowed the two remaining triumvirs to effect a reconciliation.[85][86]
In the autumn of 40, Octavian and Antony approved the Treaty of Brundisium, by which Lepidus would remain in Africa, Antony in the East, Octavian in the West. The Italian Peninsula was left open to all for the recruitment of soldiers, but in reality, this provision was useless for Antony in the East. To further cement relations of alliance with Mark Antony, Octavian gave his sister, Octavia Minor, in marriage to Antony in late 40 BC.[85]
Sextus Pompeius threatened Octavian in Italy by denying shipments of grain through the Mediterranean Sea to the peninsula. Pompeius's own son was put in charge as naval commander in the effort to cause widespread famine in Italy.[86] Pompeius's control over the sea prompted him to take on the name Neptuni filius, "son of Neptune".[87] A temporary peace agreement was reached in 39 BC with the treaty of Misenum; the blockade on Italy was lifted once Octavian granted Pompeius Sardinia, Corsica, Sicily, and the Peloponnese, and ensured him a future position as consul for 35 BC.[86][87]
The territorial agreement between the triumvirate and Sextus Pompeius began to crumble once Octavian divorced Scribonia and married Livia on 17 January 38 BC.[88] One of Pompeius's naval commanders betrayed him and handed over Corsica and Sardinia to Octavian. Octavian lacked the resources to confront Pompeius alone, however, so an agreement was reached with the Second Triumvirate's extension for another five-year period beginning in 37 BC.[69][89]
In supporting Octavian, Antony expected to gain support for his own campaign against the Parthian Empire, desiring to avenge Rome's defeat at Carrhae in 53 BC.[89] In an agreement reached at Tarentum, Antony provided 120 ships for Octavian to use against Pompeius, while Octavian was to send 20,000 legionaries to Antony for use against Parthia. Octavian sent only a tenth of those promised, however, which Antony viewed as an intentional provocation.[90]
Octavian and Lepidus launched a joint operation against Sextus in Sicily in 36 BC.[91] Despite setbacks for Octavian, the naval fleet of Sextus Pompeius was almost entirely destroyed on 3 September by General Agrippa at the naval Battle of Naulochus. Sextus fled to the east with his remaining forces, where he was captured and executed in Miletus by one of Antony's generals the following year. As Lepidus and Octavian accepted the surrender of Pompeius's troops, Lepidus attempted to claim Sicily for himself, ordering Octavian to leave. Lepidus's troops deserted him, however, and defected to Octavian since they were weary of fighting and were enticed by Octavian's promises of money.[92]
Lepidus surrendered to Octavian and was permitted to retain the office of Pontifex Maximus (head of the college of priests), but was ejected from the Triumvirate, his public career at an end, and effectively was exiled to a villa at Cape Circei in Italy.[72][92] The Roman dominions were now divided between Octavian in the West and Antony in the East. Octavian ensured Rome's citizens of their rights to property in order to maintain peace and stability in his portion of the Empire. This time, he settled his discharged soldiers outside of Italy, while also returning 30,000 slaves to their former Roman owners—slaves who had fled to join Pompeius's army and navy.[93] Octavian had the Senate grant him, his wife, and his sister tribunal immunity, or sacrosanctitas, in order to ensure his own safety and that of Livia and Octavia once he returned to Rome.[94]
Meanwhile, Antony's campaign turned disastrous against Parthia, tarnishing his image as a leader, and the mere 2,000 legionaries sent by Octavian to Antony were hardly enough to replenish his forces.[95] On the other hand, Cleopatra could restore his army to full strength; he already was engaged in a romantic affair with her, so he decided to send Octavia back to Rome.[96] Octavian used this to spread propaganda implying that Antony was becoming less than Roman because he rejected a legitimate Roman spouse for an "Oriental paramour".[97] In 36 BC, Octavian used a political ploy to make himself look less autocratic and Antony more the villain by proclaiming that the civil wars were coming to an end, and that he would step down as triumvir—if only Antony would do the same. Antony refused.[98]
Roman troops captured the Kingdom of Armenia in 34 BC, and Antony made his son Alexander Helios the ruler of Armenia. He also awarded the title "Queen of Kings" to Cleopatra, acts that Octavian used to convince the Roman Senate that Antony had ambitions to diminish the preeminence of Rome.[97] Octavian became consul once again on 1 January 33 BC, and he opened the following session in the Senate with a vehement attack on Antony's grants of titles and territories to his relatives and to his queen.[99]
The breach between Antony and Octavian prompted a large portion of the Senators, as well as both of that year's consuls, to leave Rome and defect to Antony. However, Octavian received two key deserters from Antony in the autumn of 32 BC: Munatius Plancus and Marcus Titius.[100] These defectors gave Octavian the information that he needed to confirm with the Senate all the accusations that he made against Antony.[101]
Octavian forcibly entered the temple of the Vestal Virgins and seized Antony's secret will, which he promptly publicized. The will would have given away Roman-conquered territories as kingdoms for his sons to rule, and designated Alexandria as the site for a tomb for him and his queen.[102][103] In late 32 BC, the Senate officially revoked Antony's powers as consul and declared war on Cleopatra's regime in Egypt.[104][105]
In early 31 BC, Antony and Cleopatra were temporarily stationed in Greece when Octavian gained a preliminary victory: his navy, under the command of Agrippa, successfully ferried troops across the Adriatic Sea. Agrippa cut off Antony and Cleopatra's main force from their supply routes at sea, while Octavian landed on the mainland opposite the island of Corcyra (modern Corfu) and marched south. Trapped on land and at sea, deserters from Antony's army fled to Octavian's side daily, while Octavian's forces were comfortable enough to make preparations.[108]
Antony's fleet sailed through the bay of Actium on the western coast of Greece in a desperate attempt to break free of the naval blockade. It was there that Antony's fleet faced the much larger fleet of smaller, more maneuverable ships under commanders Agrippa and Gaius Sosius in the Battle of Actium on 2 September 31 BC.[109] Antony and his remaining forces were spared only due to a last-ditch effort by Cleopatra's fleet that had been waiting nearby.[110]
Octavian pursued them and defeated their forces in Alexandria on 1 August 30 BC—after which Antony and Cleopatra committed suicide. Antony fell on his own sword and was taken by his soldiers back to Alexandria where he died in Cleopatra's arms. Cleopatra died soon after, reputedly by the venomous bite of an asp or by poison.[111] Octavian had exploited his position as Caesar's heir to further his own political career, and he was well aware of the dangers in allowing another person to do the same. He therefore followed the advice of Arius Didymus that "two Caesars are one too many", ordering Caesarion, Julius Caesar's son by Cleopatra, killed, while sparing Cleopatra's children by Antony, with the exception of Antony's older son.[112][113] Octavian had previously shown little mercy to surrendered enemies and acted in ways that had proven unpopular with the Roman people, yet he was given credit for pardoning many of his opponents after the Battle of Actium.[114]
After Actium and the defeat of Antony and Cleopatra, Octavian was in a position to rule the entire Republic under an unofficial principate[115]—but he had to achieve this through incremental power gains. He did so by courting the Senate and the people while upholding the republican traditions of Rome, appearing that he was not aspiring to dictatorship or monarchy.[116][117] Marching into Rome, Octavian and Marcus Agrippa were elected as consuls by the Senate.[118]
Years of civil war had left Rome in a state of near lawlessness, but the Republic was not prepared to accept the control of Octavian as a despot. At the same time, Octavian could not simply give up his authority without risking further civil wars among the Roman generals and, even if he desired no position of authority whatsoever, his position demanded that he look to the well-being of the city of Rome and the Roman provinces. Octavian's aims from this point forward were to return Rome to a state of stability, traditional legality, and civility by lifting the overt political pressure imposed on the courts of law and ensuring free elections—in name at least.[119]
In 27 BC, Octavian made a show of returning full power to the Roman Senate and relinquishing his control of the Roman provinces and their armies. Under his consulship, however, the Senate had little power to initiate legislation by introducing bills for senatorial debate. Octavian was no longer in direct control of the provinces and their armies, but he retained the loyalty of active duty soldiers and veterans alike. The careers of many clients and adherents depended on his patronage, as his financial power was unrivaled in the Roman Republic.[118] Historian Werner Eck states:
The sum of his power derived first of all from various powers of office delegated to him by the Senate and people, secondly from his immense private fortune, and thirdly from numerous patron-client relationships he established with individuals and groups throughout the Empire. All of them taken together formed the basis of his auctoritas, which he himself emphasized as the foundation of his political actions.[120]
To a large extent, the public were aware of the vast financial resources that Octavian commanded. When he failed to encourage enough senators to finance the building and maintenance of the road networks in Italy in 20 BC, he undertook direct responsibility for them himself. This was publicized on the Roman currency issued in 16 BC, after he donated vast amounts of money to the aerarium Saturni, the public treasury.[121]
According to H. H. Scullard, however, Octavian's power was based on the exercise of "a predominant military power and ... the ultimate sanction of his authority was force, however much the fact was disguised."[122] The Senate proposed to Octavian, the victor of Rome's civil wars, that he once again assume command of the provinces. The Senate's proposal was a ratification of Octavian's extra-constitutional power. Through the Senate, Octavian was able to continue the appearance of a still-functional constitution. Feigning reluctance, he accepted a ten-year responsibility of overseeing provinces that were considered chaotic.[123][124]
The provinces ceded to Augustus for that ten-year period comprised much of the conquered Roman world, including all of Hispania and Gaul, Syria, Cilicia, Cyprus, and Egypt.[123][125] Moreover, command of these provinces provided Octavian with control over the majority of Rome's legions.[125][126]
While Octavian acted as consul in Rome, he dispatched senators to the provinces under his command as his representatives to manage provincial affairs and ensure that his orders were carried out. The provinces not under Octavian's control were overseen by governors chosen by the Roman Senate.[126] Octavian became the most powerful political figure in the city of Rome and in most of its provinces, but he did not have a monopoly on political and martial power.[127]
The Senate still controlled North Africa, an important regional producer of grain, as well as Illyria and Macedonia, two strategic regions with several legions.[127] However, the Senate had control of only five or six legions distributed among three senatorial proconsuls, compared to the twenty legions under the control of Octavian, and their control of these regions did not amount to any political or military challenge to Octavian.[116][122] The Senate's control over some of the Roman provinces helped maintain a republican façade for the autocratic Principate. Also, Octavian's control of entire provinces followed Republican-era precedents for the objective of securing peace and creating stability, in which such prominent Romans as Pompey had been granted similar military powers in times of crisis and instability.[116]
On 16 January 27 BC the Senate gave Octavian the new titles of Augustus and Princeps.[128] Augustus is from the Latin word augere (meaning "to increase") and can be translated as "the illustrious one". It was a title of religious rather than political authority. The new title was also more favorable than Romulus, the name he had previously styled for himself in reference to the story of the legendary founder of Rome, which would have symbolized a second founding of the city.[114] The title of Romulus was associated too strongly with notions of monarchy and kingship, an image that Octavian tried to avoid.[129] The title princeps senatus originally meant the member of the Senate with the highest precedence,[130] but in the case of Augustus, it became an almost regnal title for a leader who was first in charge.[131] Augustus also styled himself as Imperator Caesar divi filius, "Commander Caesar son of the deified one". With this title, he boasted his familial link to the deified Julius Caesar, and the use of Imperator signified a permanent link to the Roman tradition of victory. He transformed Caesar, a cognomen for one branch of the Julian family, into a new family line that began with him.[128]
Augustus was granted the right to hang the corona civica above his door, the "civic crown" made from oak, and to have laurels drape his doorposts.[127] However, he renounced flaunting insignia of power such as holding a scepter, wearing a diadem, or wearing the golden crown and purple toga of his predecessor Julius Caesar.[132] If he refused to symbolize his power by donning and bearing these items on his person, the Senate nonetheless awarded him with a golden shield displayed in the meeting hall of the Curia, bearing the inscription virtus, pietas, clementia, iustitia—"valor, piety, clemency, and justice."[127][133]
By 23 BC, some of the un-Republican implications of the settlement of 27 BC were becoming apparent. Augustus's retention of an annual consulate drew attention to his de facto dominance over the Roman political system, and cut in half the opportunities for others to achieve what was still nominally the preeminent position in the Roman state.[134] Further, he was causing political problems by desiring to have his nephew Marcus Claudius Marcellus follow in his footsteps and eventually assume the Principate in his turn,[nb 6] alienating his three greatest supporters – Agrippa, Maecenas, and Livia.[135] He appointed the noted Republican Calpurnius Piso (who had fought against Julius Caesar and supported Cassius and Brutus[136]) as co-consul in 23 BC, after his first choice, Aulus Terentius Varro Murena, died unexpectedly.[137]
In the late spring Augustus suffered a severe illness, and on his supposed deathbed made arrangements that would ensure the continuation of the Principate in some form,[138] while allaying senators' suspicions of his anti-republicanism. Augustus prepared to hand down his signet ring to his favored general Agrippa. However, Augustus handed over to his co-consul Piso all of his official documents, an account of public finances, and authority over listed troops in the provinces while Augustus's supposedly favored nephew Marcellus came away empty-handed.[139][140] This was a surprise to many who believed Augustus would have named an heir to his position as an unofficial emperor.[141]
Augustus bestowed only properties and possessions to his designated heirs, as an obvious system of institutionalized imperial inheritance would have provoked resistance and hostility among the republican-minded Romans fearful of monarchy.[117] With regards to the Principate, it was obvious to Augustus that Marcellus was not ready to take on his position;[142] nonetheless, by giving his signet ring to Agrippa, Augustus intended to signal to the legions that Agrippa was to be his successor, and that constitutional procedure notwithstanding, they should continue to obey Agrippa.[143]
Soon after his bout of illness subsided, Augustus gave up his consulship. The only other times Augustus would serve as consul would be in the years 5 and 2 BC,[140][144] both times to introduce his grandsons into public life.[136] This was a clever ploy by Augustus; ceasing to serve as one of two annually elected consuls allowed aspiring senators a better chance to attain the consular position, while allowing Augustus to exercise wider patronage within the senatorial class.[145] Although Augustus had resigned as consul, he desired to retain his consular imperium not just in his provinces but throughout the empire. This desire, as well as the Marcus Primus Affair, led to a second compromise between him and the Senate known as the Second Settlement.[146]
The primary reasons for the Second Settlement were as follows. First, after Augustus relinquished the annual consulship, he was no longer in an official position to rule the state, yet his dominant position remained unchanged over his Roman, 'imperial' provinces, where he was still a proconsul.[140][147] When he annually held the office of consul, he had the power to intervene in the affairs of the other provincial proconsuls appointed by the Senate throughout the empire, whenever he deemed it necessary.[148]
A second problem later arose showing the need for the Second Settlement in what became known as the "Marcus Primus Affair".[149] In late 24 or early 23 BC, charges were brought against Marcus Primus, the former proconsul (governor) of Macedonia, for waging a war without prior approval of the Senate on the Odrysian kingdom of Thrace, whose king was a Roman ally.[150] He was defended by Lucius Licinius Varro Murena, who told the court that his client had received specific instructions from Augustus, ordering him to attack the client state.[151] Later, Primus testified that the orders came from the recently deceased Marcellus.[152]
Such orders, had they been given, would have been considered a breach of the Senate's prerogative under the Constitutional settlement of 27 BC and its aftermath—i.e., before Augustus was granted imperium proconsulare maius—as Macedonia was a Senatorial province under the Senate's jurisdiction, not an imperial province under the authority of Augustus. Such an action would have ripped away the veneer of Republican restoration as promoted by Augustus, and exposed his fraud of merely being the first citizen, a first among equals.[151] Even worse, the involvement of Marcellus provided some measure of proof that Augustus's policy was to have the youth take his place as Princeps, instituting a form of monarchy – accusations that had already played out.[142]
The situation was so serious that Augustus himself appeared at the trial, even though he had not been called as a witness. Under oath, Augustus declared that he gave no such order.[153] Murena disbelieved Augustus's testimony and resented his attempt to subvert the trial by using his auctoritas. He rudely demanded to know why Augustus had turned up to a trial to which he had not been called; Augustus replied that he came in the public interest.[154] Although Primus was found guilty, some jurors voted to acquit, meaning that not everybody believed Augustus's testimony, an insult to the 'August One'.[155]
The Second Constitutional Settlement was completed in part to allay confusion and formalize Augustus's legal authority to intervene in Senatorial provinces. The Senate granted Augustus a form of general imperium proconsulare, or proconsular imperium (power) that applied throughout the empire, not solely to his provinces. Moreover, the Senate augmented Augustus's proconsular imperium into imperium proconsulare maius, or proconsular imperium applicable throughout the empire that was more (maius) or greater than that held by the other proconsuls. This in effect gave Augustus constitutional power superior to all other proconsuls in the empire.[146] Augustus stayed in Rome during the renewal process and provided veterans with lavish donations to gain their support, thereby ensuring that his status of proconsular imperium maius was renewed in 13 BC.[144]
During the second settlement, Augustus was also granted the power of a tribune (tribunicia potestas) for life, though not the official title of tribune.[146] For some years, Augustus had been awarded tribunicia sacrosanctitas, the immunity given to a Tribune of the Plebs. Now he decided to assume the full powers of the magistracy, renewed annually, in perpetuity. Legally, the office of tribune was closed to patricians, a status that Augustus had acquired some years earlier when adopted by Julius Caesar.[145]
This power allowed him to convene the Senate and people at will and lay business before them, to veto the actions of either the Assembly or the Senate, to preside over elections, and to speak first at any meeting.[144][156] Also included in Augustus's tribunician authority were powers usually reserved for the Roman censor; these included the right to supervise public morals and scrutinize laws to ensure that they were in the public interest, as well as the ability to hold a census and determine the membership of the Senate.[157]
With the powers of a censor, Augustus appealed to virtues of Roman patriotism by banning all attire but the classic toga while entering the Forum.[158] There was no precedent within the Roman system for combining the powers of the tribune and the censor into a single position, nor was Augustus ever elected to the office of censor.[159] Julius Caesar had been granted similar powers, wherein he was charged with supervising the morals of the state. However, this position did not extend to the censor's ability to hold a census and determine the Senate's roster. The office of the tribunus plebis began to lose its prestige due to Augustus's amassing of tribunician powers, so he revived its importance by making it a mandatory appointment for any plebeian desiring the praetorship.[160]
Augustus was granted sole imperium within the city of Rome itself, in addition to being granted proconsular imperium maius and tribunician authority for life. Traditionally, proconsuls (Roman province governors) lost their proconsular "imperium" when they crossed the Pomerium – the sacred boundary of Rome – and entered the city. In these situations, Augustus would have power as part of his tribunician authority but his constitutional imperium within the Pomerium would be less than that of a serving consul. That would mean that, when he was in the city, he might not be the constitutional magistrate with the most authority. Thanks to his prestige or auctoritas, his wishes would usually be obeyed, but there might be some difficulty. To fill this power vacuum, the Senate voted that Augustus's imperium proconsulare maius (superior proconsular power) should not lapse when he was inside the city walls. All armed forces in the city had formerly been under the control of the urban praetors and consuls, but this situation now placed them under the sole authority of Augustus.[161]
In addition, Augustus was given credit for each subsequent Roman military victory after this time, because the majority of Rome's armies were stationed in imperial provinces commanded by Augustus through his legati, who were deputies of the princeps in the provinces. Moreover, if a battle was fought in a Senatorial province, Augustus's proconsular imperium maius allowed him to take command of (or credit for) any major military victory. This meant that Augustus was the only individual able to receive a triumph, a tradition that began with Romulus, Rome's first king and first triumphant general. Lucius Cornelius Balbus was the last man outside Augustus's family to receive this award, in 19 BC.[162] Tiberius, Augustus's eldest stepson by Livia, was the only other general to receive a triumph—for victories in Germania in 7 BC.[163]
Many of the political subtleties of the Second Settlement seem to have evaded the comprehension of the Plebeian class, who were Augustus's greatest supporters and clientele. This caused them to insist upon Augustus's participation in imperial affairs from time to time. When Augustus did not stand for election as consul in 22 BC, fears arose once again that he was being forced from power by the aristocratic Senate. In 22, 21, and 19 BC, the people rioted in response, and only allowed a single consul to be elected for each of those years, ostensibly to leave the other position open for Augustus.[164]
Likewise, there was a food shortage in Rome in 22 BC which sparked panic, and many of the urban plebs called for Augustus to take on dictatorial powers to personally oversee the crisis. After a theatrical display of refusal before the Senate, Augustus finally accepted authority over Rome's grain supply "by virtue of his proconsular imperium", and ended the crisis almost immediately.[144] It was not until AD 8 that a food crisis of this sort prompted Augustus to establish a praefectus annonae, a permanent prefect in charge of procuring food supplies for Rome.[165]
There were some who were concerned by the expansion of powers granted to Augustus by the Second Settlement, and this came to a head with the apparent conspiracy of Fannius Caepio.[149] Some time prior to 1 September 22 BC, a certain Castricius provided Augustus with information about a conspiracy led by Fannius Caepio.[166] Murena, the outspoken Consul who defended Primus in the Marcus Primus Affair, was named among the conspirators. The conspirators were tried in absentia with Tiberius acting as prosecutor; the jury found them guilty, but it was not a unanimous verdict.[167] All the accused were sentenced to death for treason and executed as soon as they were captured—without ever giving testimony in their defence.[168] Augustus ensured that the facade of Republican government continued with an effective cover-up of the events.[169]
In 19 BC, the Senate granted Augustus a form of 'general consular imperium', which was probably 'imperium consulare maius', like the proconsular powers that he received in 23 BC. Like his tribunician authority, the consular powers were another instance of gaining power from offices that he did not actually hold.[170] In addition, Augustus was allowed to wear the consul's insignia in public and before the Senate,[161] as well as to sit in the symbolic chair between the two consuls and hold the fasces, an emblem of consular authority.[170] This seems to have assuaged the populace; regardless of whether or not Augustus was a consul, what mattered was that he both appeared as one before the people and could exercise consular power if necessary. On 6 March 12 BC, after the death of Lepidus, he additionally took up the position of pontifex maximus, the high priest of the college of the Pontiffs, the most important position in Roman religion.[171][172] On 5 February 2 BC, Augustus was also given the title pater patriae, or "father of the country".[173][174]
A final reason for the Second Settlement was to give the Principate constitutional stability and staying power in case something happened to Princeps Augustus. His illness of early 23 BC and the Caepio conspiracy showed that the regime's existence hung by the thin thread of the life of one man, Augustus himself, who suffered from several severe and dangerous illnesses throughout his life.[175] If he were to die from natural causes or fall victim to assassination, Rome could be subjected to another round of civil war. The memories of Pharsalus, the Ides of March, the proscriptions, Philippi, and Actium, barely twenty-five years distant, were still vivid in the minds of many citizens. To provide this constitutional stability, proconsular imperium similar to Augustus's power was conferred upon Agrippa for five years. The exact nature of the grant is uncertain, but it probably covered Augustus's imperial provinces, east and west, perhaps lacking authority over the provinces of the Senate. That came later, as did the jealously guarded tribunicia potestas.[176] Augustus's accumulation of powers was now complete. In fact, he dated his 'reign' from the completion of the Second Settlement, 1 July 23 BC.[177]
Augustus chose Imperator ("victorious commander") to be his first name, since he wanted to make an emphatically clear connection between himself and the notion of victory, and consequently became known as Imperator Caesar Divi Filius Augustus. By AD 13, Augustus could boast 21 occasions on which his troops had proclaimed him "imperator" after a successful battle. Almost the entire fourth chapter of his publicly released memoir of achievements, the Res Gestae, was devoted to his military victories and honors.[178]
Augustus also promoted the ideal of a superior Roman civilization with a task of ruling the world (to the extent to which the Romans knew it), a sentiment embodied in words that the contemporary poet Virgil attributes to a legendary ancestor of Augustus: tu regere imperio populos, Romane, memento—"Roman, remember by your strength to rule the Earth's peoples!"[158] The impulse for expansionism was apparently prominent among all classes at Rome, and it is accorded divine sanction by Virgil's Jupiter in Book 1 of the Aeneid, where Jupiter promises Rome imperium sine fine, "sovereignty without end".[179]
By the end of his reign, the armies of Augustus had conquered northern Hispania (modern Spain and Portugal) and the Alpine regions of Raetia and Noricum (modern Switzerland, Bavaria, Austria, Slovenia), Illyricum and Pannonia (modern Albania, Croatia, Hungary, Serbia, etc.), and had extended the borders of the Africa Province to the east and south. Judea was added to the province of Syria when Augustus deposed Herod Archelaus, successor to client king Herod the Great (73–4 BC). Syria (like Egypt after Antony) was governed by a high prefect of the equestrian class rather than by a proconsul or legate of Augustus.[180]
Again, no military effort was needed in 25 BC when Galatia (in modern Turkey) was converted to a Roman province shortly after Amyntas of Galatia was killed by an avenging widow of a slain prince from Homonada.[180] The rebellious tribes of Asturias and Cantabria in modern-day Spain were finally quelled in 19 BC, and the territory fell under the provinces of Hispania and Lusitania. This region proved to be a major asset in funding Augustus's future military campaigns, as it was rich in mineral deposits that could be exploited in Roman mining projects, especially the very rich gold deposits at Las Médulas.[181]
Conquering the peoples of the Alps in 16 BC was another important victory for Rome, since it provided a large territorial buffer between the Roman citizens of Italy and Rome's enemies in Germania to the north.[182] Horace dedicated an ode to the victory, while the monumental Trophy of Augustus near Monaco was built to honor the occasion.[183] The capture of the Alpine region also served the next offensive in 12 BC, when Tiberius began the offensive against the Pannonian tribes of Illyricum, and his brother Nero Claudius Drusus moved against the Germanic tribes of the eastern Rhineland. Both campaigns were successful, as Drusus's forces reached the Elbe River by 9 BC—though he died shortly afterward from injuries sustained in a fall from his horse.[184] It was recorded that the pious Tiberius walked in front of his brother's body all the way back to Rome.[185]
To protect Rome's eastern territories from the Parthian Empire, Augustus relied on the client states of the east to act as territorial buffers and areas that could raise their own troops for defense. To ensure security of the Empire's eastern flank, Augustus stationed a Roman army in Syria, while his skilled stepson Tiberius negotiated with the Parthians as Rome's diplomat to the East.[186] Tiberius was responsible for restoring Tigranes V to the throne of the Kingdom of Armenia.[185]
Yet arguably his greatest diplomatic achievement was negotiating with Phraates IV of Parthia (37–2 BC) in 20 BC for the return of the battle standards lost by Crassus in the Battle of Carrhae, a symbolic victory and great boost of morale for Rome.[185][186][187] Werner Eck claims that this was a great disappointment for Romans seeking to avenge Crassus's defeat by military means.[188] However, Maria Brosius explains that Augustus used the return of the standards as propaganda symbolizing the submission of Parthia to Rome. The event was celebrated in art such as the breastplate design on the statue Augustus of Prima Porta and in monuments such as the Temple of Mars Ultor ('Mars the Avenger') built to house the standards.[189]
Parthia had always posed a threat to Rome in the east, but the real battlefront was along the Rhine and Danube rivers.[186] Before the final fight with Antony, Octavian's campaigns against the tribes in Dalmatia were the first step in expanding Roman dominions to the Danube.[190] Victory in battle was not always a permanent success, as newly conquered territories were constantly retaken by Rome's enemies in Germania.[186]
A prime example of Roman loss in battle was the Battle of Teutoburg Forest in AD 9, where three entire legions led by Publius Quinctilius Varus were destroyed by Arminius, leader of the Cherusci, an apparent Roman ally.[191] Augustus retaliated by dispatching Tiberius and Drusus to the Rhineland to pacify it, which had some success, although the battle of AD 9 brought an end to Roman expansion into Germany.[192] The Roman general Germanicus took advantage of a Cherusci civil war between Arminius and Segestes; Germanicus defeated Arminius, who fled the Battle of Idistaviso in AD 16 but was killed in AD 21 through treachery.[193]
The illness of Augustus in 23 BC brought the problem of succession to the forefront of political issues and public debate. To ensure stability, he needed to designate an heir to his unique position in Roman society and government. This was to be achieved in small, undramatic, and incremental ways that did not stir senatorial fears of monarchy. If someone were to succeed to Augustus's unofficial position of power, he would have to earn it through his own publicly proven merits.[194]
Some Augustan historians argue that indications pointed toward his sister's son Marcellus, who had been quickly married to Augustus's daughter Julia the Elder.[195] Other historians dispute this due to Augustus's will being read aloud to the Senate while he was seriously ill in 23 BC,[196] instead indicating a preference for Marcus Agrippa, who was Augustus's second in charge and arguably the only one of his associates who could have controlled the legions and held the Empire together.[197]
After the death of Marcellus in 23 BC, Augustus married his daughter to Agrippa. This union produced five children, three sons and two daughters: Gaius Caesar, Lucius Caesar, Vipsania Julia, Agrippina the Elder, and Postumus Agrippa, so named because he was born after Marcus Agrippa died. Shortly after the Second Settlement, Agrippa was granted a five-year term of administering the eastern half of the Empire with the imperium of a proconsul and the same tribunicia potestas granted to Augustus (although not trumping Augustus's authority), his seat of governance stationed at Samos in the eastern Aegean.[197][198] This granting of power showed Augustus's favor for Agrippa, but it was also a measure to please members of his Caesarian party by allowing one of their members to share a considerable amount of power with him.[198]
Augustus's intent to make Gaius and Lucius Caesar his heirs became apparent when he adopted them as his own children.[199] He took the consulship in 5 and 2 BC so that he could personally usher them into their political careers,[200] and they were nominated for the consulships of AD 1 and 4.[201] Augustus also showed favor to his stepsons, Livia's children from her first marriage, Nero Claudius Drusus Germanicus (henceforth referred to as Drusus) and Tiberius Claudius (henceforth Tiberius), granting them military commands and public office, though seeming to favor Drusus. After Agrippa died in 12 BC, Tiberius was ordered to divorce his own wife Vipsania Agrippina and marry Agrippa's widow, Augustus's daughter Julia—as soon as a period of mourning for Agrippa had ended.[202] Drusus's marriage to Augustus's niece Antonia was considered an unbreakable affair, whereas Vipsania was "only" the daughter of the late Agrippa from his first marriage.[202]
Tiberius shared in Augustus's tribune powers as of 6 BC, but shortly thereafter went into retirement, reportedly wanting no further role in politics while he exiled himself to Rhodes.[163][203] No specific reason is known for his departure, though it could have been a combination of reasons, including a failing marriage with Julia,[163][203] as well as a sense of envy and exclusion over Augustus's apparent favouring of his young grandchildren-turned-sons Gaius and Lucius. (Gaius and Lucius joined the college of priests at an early age, were presented to spectators in a more favorable light, and were introduced to the army in Gaul.)[204][205]
After the early deaths of both Lucius and Gaius in AD 2 and 4 respectively, and the earlier death of his brother Drusus (9 BC), Tiberius was recalled to Rome in June AD 4, where he was adopted by Augustus on the condition that he, in turn, adopt his nephew Germanicus.[206] This continued the tradition of presenting at least two generations of heirs.[202] In that year, Tiberius was also granted the powers of a tribune and proconsul, emissaries from foreign kings had to pay their respects to him, and by AD 13 was awarded with his second triumph and equal level of imperium with that of Augustus.[207]
The only other possible claimant as heir was Postumus Agrippa, who had been exiled by Augustus in AD 7, his banishment made permanent by senatorial decree, and Augustus officially disowned him. He certainly fell out of Augustus's favor as an heir; the historian Erich S. Gruen notes various contemporary sources that state Postumus Agrippa was a "vulgar young man, brutal and brutish, and of depraved character".[208]
On 19 August AD 14, Augustus died while visiting Nola where his father had died. Both Tacitus and Cassius Dio wrote that Livia was rumored to have brought about Augustus's death by poisoning fresh figs.[209][210] This element features in many modern works of historical fiction pertaining to Augustus's life, but some historians view it as likely to have been a salacious fabrication made by those who had favoured Postumus as heir, or other of Tiberius's political enemies. Livia had long been the target of similar rumors of poisoning on the behalf of her son, most or all of which are unlikely to have been true.[211]
Alternatively, it is possible that Livia did supply a poisoned fig (she did cultivate a variety of fig named for her that Augustus is said to have enjoyed), but did so as a means of assisted suicide rather than murder. Augustus's health had been in decline in the months immediately before his death, and he had made significant preparations for a smooth transition in power, having at last reluctantly settled on Tiberius as his choice of heir.[212] It is likely that Augustus was not expected to return alive from Nola, but it seems that his health improved once there; it has therefore been speculated that Augustus and Livia conspired to end his life at the anticipated time, having committed all political process to accepting Tiberius, in order to not endanger that transition.[211]
Augustus's famous last words were, "Have I played the part well? Then applaud as I exit"—referring to the play-acting and regal authority that he had put on as emperor. Publicly, though, his last words were, "Behold, I found Rome of clay, and leave her to you of marble." An enormous funerary procession of mourners traveled with Augustus's body from Nola to Rome, and on the day of his burial all public and private businesses closed for the day.[212] Tiberius and his son Drusus delivered the eulogy while standing atop two rostra. Augustus's body was coffin-bound and cremated on a pyre close to his mausoleum. It was proclaimed that Augustus joined the company of the gods as a member of the Roman pantheon.[213]
Historian D. C. A. Shotter states that Augustus's policy of favoring the Julian family line over the Claudian might have afforded Tiberius sufficient cause to show open disdain for Augustus after the latter's death; instead, Tiberius was always quick to rebuke those who criticized Augustus.[214] Shotter suggests that Augustus's deification obliged Tiberius to suppress any open resentment that he might have harbored, coupled with Tiberius's "extremely conservative" attitude towards religion.[215] Also, historian R. Shaw-Smith points to letters of Augustus to Tiberius which display affection towards Tiberius and high regard for his military merits.[216] Shotter states that Tiberius focused his anger and criticism on Gaius Asinius Gallus (for marrying Vipsania after Augustus forced Tiberius to divorce her), as well as toward the two young Caesars, Gaius and Lucius—instead of Augustus, the real architect of his divorce and imperial demotion.[215]
Augustus's reign laid the foundations of a regime that lasted, in one form or another, for nearly fifteen hundred years through the ultimate decline of the Western Roman Empire and until the Fall of Constantinople in 1453. Both his adoptive surname, Caesar, and his title Augustus became the permanent titles of the rulers of the Roman Empire for fourteen centuries after his death, in use both at Old Rome and at New Rome. In many languages, Caesar became the word for Emperor, as in the German Kaiser and in the Bulgarian and subsequently Russian Tsar (sometimes Csar or Czar). The cult of Divus Augustus continued until the state religion of the Empire was changed to Christianity in 391 by Theodosius I. Consequently, there are many excellent statues and busts of the first emperor. He had composed an account of his achievements, the Res Gestae Divi Augusti, to be inscribed in bronze in front of his mausoleum.[218] Copies of the text were inscribed throughout the Empire upon his death.[219] The inscriptions in Latin featured translations in Greek beside it, and were inscribed on many public edifices, such as the temple in Ankara dubbed the Monumentum Ancyranum, called the "queen of inscriptions" by historian Theodor Mommsen.[220]
The Res Gestae is the only one of his works to have survived from antiquity, though Augustus is also known to have composed poems entitled Sicily, Epiphanus, and Ajax, an autobiography of 13 books, a philosophical treatise, and a written rebuttal to Brutus's Eulogy of Cato.[221] Historians are able to analyze excerpts of letters that Augustus penned to others, preserved in other works, for additional facts or clues about his personal life.[216][222]
Many consider Augustus to be Rome's greatest emperor; his policies certainly extended the Empire's life span and initiated the celebrated Pax Romana or Pax Augusta. The Roman Senate wished subsequent emperors to "be more fortunate than Augustus and better than Trajan". Augustus was intelligent, decisive, and a shrewd politician, but he was not perhaps as charismatic as Julius Caesar and was influenced on occasion by Livia (sometimes for the worse). Nevertheless, his legacy proved more enduring. The city of Rome was utterly transformed under Augustus, with Rome's first institutionalized police force, fire fighting force, and the establishment of the municipal prefect as a permanent office. The police force was divided into cohorts of 500 men each, while the units of firemen ranged from 500 to 1,000 men each, with 7 units assigned to 14 divided city sectors.[223]
A praefectus vigilum, or "Prefect of the Watch" was put in charge of the vigiles, Rome's fire brigade and police.[224] With Rome's civil wars at an end, Augustus was also able to create a standing army for the Roman Empire, fixed at a size of 28 legions of about 170,000 soldiers.[225] This was supported by numerous auxiliary units of 500 non-citizen soldiers each, often recruited from recently conquered areas.[226]
With his finances securing the maintenance of roads throughout Italy, Augustus also installed an official courier system of relay stations overseen by a military officer known as the praefectus vehiculorum.[227] Besides the advent of swifter communication among Italian polities, his extensive building of roads throughout Italy also allowed Rome's armies to march swiftly and at an unprecedented pace across the country.[228] In the year 6 Augustus established the aerarium militare, donating 170 million sesterces to the new military treasury that provided for both active and retired soldiers.[229]
One of the most enduring institutions of Augustus was the establishment of the Praetorian Guard in 27 BC, originally a personal bodyguard unit on the battlefield that evolved into an imperial guard as well as an important political force in Rome.[230] They had the power to intimidate the Senate, install new emperors, and depose ones they disliked; the last emperor they served was Maxentius, as it was Constantine I who disbanded them in the early 4th century and destroyed their barracks, the Castra Praetoria.[231]
Although the most powerful individual in the Roman Empire, Augustus wished to embody the spirit of Republican virtue and norms. He also wanted to relate to and connect with the concerns of the plebs and lay people. He achieved this through various means of generosity and a cutting back of lavish excess. In the year 29 BC, Augustus gave 400 sesterces (equal to 1/10 of a Roman pound of gold) each to 250,000 citizens, 1,000 sesterces each to 120,000 veterans in the colonies, and spent 700 million sesterces in purchasing land for his soldiers to settle upon.[232] He also restored 82 different temples to display his care for the Roman pantheon of deities.[232] In 28 BC, he melted down 80 silver statues erected in his likeness and in honor of him, an attempt of his to appear frugal and modest.[232]
The longevity of Augustus's reign and its legacy to the Roman world should not be overlooked as a key factor in its success. As Tacitus wrote, the younger generations alive in AD 14 had never known any form of government other than the Principate.[233] Had Augustus died earlier (in 23 BC, for instance), matters might have turned out differently. The attrition of the civil wars on the old Republican oligarchy and the longevity of Augustus, therefore, must be seen as major contributing factors in the transformation of the Roman state into a de facto monarchy in these years. Augustus's own experience, his patience, his tact, and his political acumen also played their parts. He directed the future of the Empire down many lasting paths, from the existence of a standing professional army stationed at or near the frontiers, to the dynastic principle so often employed in the imperial succession, to the embellishment of the capital at the emperor's expense. Augustus's ultimate legacy was the peace and prosperity the Empire enjoyed for the next two centuries under the system he initiated. His memory was enshrined in the political ethos of the Imperial age as a paradigm of the good emperor. Every Emperor of Rome adopted his name, Caesar Augustus, which gradually lost its character as a name and eventually became a title.[213] The Augustan era poets Virgil and Horace praised Augustus as a defender of Rome, an upholder of moral justice, and an individual who bore the brunt of responsibility in maintaining the empire.[234]
However, for his rule of Rome and establishing the principate, Augustus has also been subjected to criticism throughout the ages. The contemporary Roman jurist Marcus Antistius Labeo (d. AD 10/11), fond of the days of pre-Augustan republican liberty in which he had been born, openly criticized the Augustan regime. In the beginning of his Annals, the Roman historian Tacitus (c. 56–c.117) wrote that Augustus had cunningly subverted Republican Rome into a position of slavery. He continued to say that, with Augustus's death and swearing of loyalty to Tiberius, the people of Rome simply traded one slaveholder for another.[235] Tacitus, however, records two contradictory but common views of Augustus:
Intelligent people praised or criticized him in varying ways. One opinion was as follows. Filial duty and a national emergency, in which there was no place for law-abiding conduct, had driven him to civil war—and this can neither be initiated nor maintained by decent methods. He had made many concessions to Anthony and to Lepidus for the sake of vengeance on his father's murderers. When Lepidus grew old and lazy, and Anthony's self-indulgence got the better of him, the only possible cure for the distracted country had been government by one man. However, Augustus had put the state in order not by making himself king or dictator, but by creating the Principate. The Empire's frontiers were on the ocean, or distant rivers. Armies, provinces, fleets, the whole system was interrelated. Roman citizens were protected by the law. Provincials were decently treated. Rome itself had been lavishly beautified. Force had been sparingly used—merely to preserve peace for the majority.[236]
According to the second opposing opinion:
filial duty and national crisis had been merely pretexts. In actual fact, the motive of Octavian, the future Augustus, was lust for power ... There had certainly been peace, but it was a blood-stained peace of disasters and assassinations.[237]
In a 2006 biography on Augustus, Anthony Everitt asserts that through the centuries, judgments on Augustus's reign have oscillated between these two extremes but stresses that:
Opposites do not have to be mutually exclusive, and we are not obliged to choose one or the other. The story of his career shows that Augustus was indeed ruthless, cruel, and ambitious for himself. This was only in part a personal trait, for upper-class Romans were educated to compete with one another and to excel. However, he combined an overriding concern for his personal interests with a deep-seated patriotism, based on a nostalgia of Rome's antique virtues. In his capacity as princeps, selfishness and selflessness coexisted in his mind. While fighting for dominance, he paid little attention to legality or to the normal civilities of political life. He was devious, untrustworthy, and bloodthirsty. But once he had established his authority, he governed efficiently and justly, generally allowed freedom of speech, and promoted the rule of law. He was immensely hardworking and tried as hard as any democratic parliamentarian to treat his senatorial colleagues with respect and sensitivity. He suffered from no delusions of grandeur.[238]
Tacitus was of the belief that Nerva (r. 96–98) successfully "mingled two formerly alien ideas, principate and liberty".[239] The 3rd-century historian Cassius Dio acknowledged Augustus as a benign, moderate ruler, yet like most other historians after the death of Augustus, Dio viewed Augustus as an autocrat.[235] The poet Marcus Annaeus Lucanus (AD 39–65) was of the opinion that Caesar's victory over Pompey and the fall of Cato the Younger (95 BC–46 BC) marked the end of traditional liberty in Rome; historian Chester G. Starr, Jr. writes of his avoidance of criticizing Augustus, "perhaps Augustus was too sacred a figure to accuse directly."[239]
The Anglo-Irish writer Jonathan Swift (1667–1745), in his Discourse on the Contests and Dissentions in Athens and Rome, criticized Augustus for installing tyranny over Rome, and likened what he believed to be Great Britain's virtuous constitutional monarchy to Rome's moral Republic of the 2nd century BC. In his criticism of Augustus, the admiral and historian Thomas Gordon (1658–1741) compared Augustus to the puritanical tyrant Oliver Cromwell (1599–1658).[240] Thomas Gordon and the French political philosopher Montesquieu (1689–1755) both remarked that Augustus was a coward in battle.[241] In his Memoirs of the Court of Augustus, the Scottish scholar Thomas Blackwell (1701–1757) deemed Augustus a Machiavellian ruler, "a bloodthirsty vindicative usurper", "wicked and worthless", "a mean spirit", and a "tyrant".[241]
Augustus's public revenue reforms had a great impact on the subsequent success of the Empire. Augustus brought a far greater portion of the Empire's expanded land base under consistent, direct taxation from Rome, instead of exacting varying, intermittent, and somewhat arbitrary tributes from each local province as Augustus's predecessors had done. This reform greatly increased Rome's net revenue from its territorial acquisitions, stabilized its flow, and regularized the financial relationship between Rome and the provinces, rather than provoking fresh resentments with each new arbitrary exaction of tribute.[242]
The measures of taxation in the reign of Augustus were determined by population census, with fixed quotas for each province. Citizens of Rome and Italy paid indirect taxes, while direct taxes were exacted from the provinces. Indirect taxes included a 4% tax on the price of slaves, a 1% tax on goods sold at auction, and a 5% tax on the inheritance of estates valued at over 100,000 sesterces by persons other than the next of kin.[243]
An equally important reform was the abolition of private tax farming, which was replaced by salaried civil service tax collectors. Private contractors who collected taxes for the State were the norm in the Republican era. Some of them were powerful enough to influence the number of votes for men running for offices in Rome. These tax farmers called publicans were infamous for their depredations, great private wealth, and the right to tax local areas.[242]
The use of Egypt's immense land rents to finance the Empire's operations resulted from Augustus's conquest of Egypt and the shift to a Roman form of government.[244] As it was effectively considered Augustus's private property rather than a province of the Empire, it became part of each succeeding emperor's patrimonium.[245] Instead of a legate or proconsul, Augustus installed a prefect from the equestrian class to administer Egypt and maintain its lucrative seaports; this position became the highest political achievement for any equestrian besides becoming Prefect of the Praetorian Guard.[246] The highly productive agricultural land of Egypt yielded enormous revenues that were available to Augustus and his successors to pay for public works and military expeditions.[244] During his reign the circus games resulted in the killing of 3,500 elephants.[247]
The month of August (Latin: Augustus) is named after Augustus; until his time it was called Sextilis (named so because it had been the sixth month of the original Roman calendar and the Latin word for six is sex). Commonly repeated lore has it that August has 31 days because Augustus wanted his month to match the length of Julius Caesar's July, but this is an invention of the 13th century scholar Johannes de Sacrobosco. Sextilis in fact had 31 days before it was renamed, and it was not chosen for its length (see Julian calendar). According to a senatus consultum quoted by Macrobius, Sextilis was renamed to honor Augustus because several of the most significant events in his rise to power, culminating in the fall of Alexandria, fell in that month.[248]
On his deathbed, Augustus boasted "I found a Rome of bricks; I leave to you one of marble." Although there is some truth in the literal meaning of this, Cassius Dio asserts that it was a metaphor for the Empire's strength.[249] Marble could be found in buildings of Rome before Augustus, but it was not extensively used as a building material until the reign of Augustus.[250]
Although this did not apply to the Subura slums, which were still as rickety and fire-prone as ever, he did leave a mark on the monumental topography of the centre and of the Campus Martius, with the Ara Pacis (Altar of Peace) and monumental sundial, whose central gnomon was an obelisk taken from Egypt.[251] The relief sculptures decorating the Ara Pacis visually augmented the written record of Augustus's triumphs in the Res Gestae. Its reliefs depicted the imperial pageants of the praetorians, the Vestals, and the citizenry of Rome.[252]
He also built the Temple of Caesar, the Baths of Agrippa, and the Forum of Augustus with its Temple of Mars Ultor.[253] Other projects were either encouraged by him, such as the Theatre of Balbus, and Agrippa's construction of the Pantheon, or funded by him in the name of others, often relations (e.g. Portico of Octavia, Theatre of Marcellus). Even his Mausoleum of Augustus was built before his death to house members of his family.[254] To celebrate his victory at the Battle of Actium, the Arch of Augustus was built in 29 BC near the entrance of the Temple of Castor and Pollux, and widened in 19 BC to include a triple-arch design.[250]
After the death of Agrippa in 12 BC, a solution had to be found in maintaining Rome's water supply system. This came about because it was overseen by Agrippa when he served as aedile, and was even funded by him afterwards when he was a private citizen paying at his own expense. In that year, Augustus arranged a system where the Senate designated three of its members as prime commissioners in charge of the water supply and to ensure that Rome's aqueducts did not fall into disrepair.[223]
In the late Augustan era, the commission of five senators called the curatores locorum publicorum iudicandorum (translated as "Supervisors of Public Property") was put in charge of maintaining public buildings and temples of the state cult.[223] Augustus created the senatorial group of the curatores viarum (translated as "Supervisors for Roads") for the upkeep of roads; this senatorial commission worked with local officials and contractors to organize regular repairs.[227]
The Corinthian order of architectural style originating from ancient Greece was the dominant architectural style in the age of Augustus and the imperial phase of Rome. Suetonius once commented that Rome was unworthy of its status as an imperial capital, yet Augustus and Agrippa set out to dismantle this sentiment by transforming the appearance of Rome upon the classical Greek model.[250]
His biographer Suetonius, writing about a century after Augustus's death, described his appearance as: "... unusually handsome and exceedingly graceful at all periods of his life, though he cared nothing for personal adornment. He was so far from being particular about the dressing of his hair, that he would have several barbers working in a hurry at the same time, and as for his beard he now had it clipped and now shaved, while at the very same time he would either be reading or writing something ... He had clear, bright eyes ... His teeth were wide apart, small, and ill-kept; his hair was slightly curly and inclined to golden; his eyebrows met. His ears were of moderate size, and his nose projected a little at the top and then bent ever so slightly inward. His complexion was between dark and fair. He was short of stature, although Julius Marathus, his freedman and keeper of his records, says that he was five feet and nine inches (just under 5 ft. 7 in., or 1.70 meters, in modern height measurements), but this was concealed by the fine proportion and symmetry of his figure, and was noticeable only by comparison with some taller person standing beside him...",[255] adding that "his shoes [were] somewhat high-soled, to make him look taller than he really was".[256] Scientific analysis of traces of paint found in his official statues show that he most likely had light brown hair and eyes (his hair and eyes were depicted as the same color).[257]
His official images were very tightly controlled and idealized, drawing from a tradition of Hellenistic royal portraiture rather than the tradition of realism in Roman portraiture. He first appeared on coins at the age of 19, and from about 29 BC "the explosion in the number of Augustan portraits attests a concerted propaganda campaign aimed at dominating all aspects of civil, religious, economic and military life with Augustus's person."[258] The early images did indeed depict a young man, but although there were gradual changes his images remained youthful until he died in his seventies, by which time they had "a distanced air of ageless majesty".[259] Among the best known of many surviving portraits are the Augustus of Prima Porta, the image on the Ara Pacis, and the Via Labicana Augustus, which shows him as a priest. Several cameo portraits include the Blacas Cameo and Gemma Augustea.
Primary sources
Secondary source material
en/4232.html.txt
October is the tenth month of the year in the Julian and Gregorian Calendars and the sixth of seven months to have a length of 31 days. The eighth month in the old calendar of Romulus c. 750 BC, October retained its name (from the Latin octō, meaning "eight") after January and February were inserted into the calendar that had originally been created by the Romans. In Ancient Rome, one of three Mundus patet would take place on October 5, Meditrinalia on October 11, Augustalia on October 12, the October Horse on October 15, and Armilustrium on October 19. These dates do not correspond to the modern Gregorian calendar. Among the Anglo-Saxons, it was known as Ƿinterfylleþ, because at this full moon (fylleþ) winter was supposed to begin.[1]
October is commonly associated with the season of autumn in the Northern hemisphere and with spring in the Southern hemisphere.
This list does not necessarily imply either official status or general observance.
(All Baha'i, Islamic, and Jewish observances begin at the sundown prior to the date listed, and end at sundown of the date in question unless otherwise noted.)
The last two to three weeks in October (and, occasionally, the first week of November) are the only time of the year during which all of the "Big Four" major professional sports leagues in the U.S. and Canada schedule games; the National Basketball Association begins its preseason and about two weeks later starts the regular season, the National Hockey League is about one month into its regular season, the National Football League is about halfway through its regular season, and Major League Baseball is in its postseason with the League Championship Series and World Series. There have been 19 occasions in which all four leagues have played games on the same day (an occurrence popularly termed a "sports equinox"), with the most recent of these taking place on October 27, 2019.[18] Additionally, the Canadian Football League is typically nearing the end of its regular season during this period, while Major League Soccer is beginning the MLS Cup Playoffs.
en/4233.html.txt
In geometry, a polygon (/ˈpɒlɪɡɒn/) is a plane figure that is described by a finite number of straight line segments connected to form a closed polygonal chain or polygonal circuit. The solid plane region, the bounding circuit, or the two together, may be called a polygon.
The segments of a polygonal circuit are called its edges or sides, and the points where two edges meet are the polygon's vertices (singular: vertex) or corners. The interior of a solid polygon is sometimes called its body. An n-gon is a polygon with n sides; for example, a triangle is a 3-gon.
A simple polygon is one which does not intersect itself. Mathematicians are often concerned only with the bounding polygonal chains of simple polygons and they often define a polygon accordingly. A polygonal boundary may be allowed to cross over itself, creating star polygons and other self-intersecting polygons.
A polygon is a 2-dimensional example of the more general polytope in any number of dimensions. There are many more generalizations of polygons defined for different purposes.
The word polygon derives from the Greek adjective πολύς (polús) "much", "many" and γωνία (gōnía) "corner" or "angle". It has been suggested that γόνυ (gónu) "knee" may be the origin of gon.[1]
Polygons are primarily classified by the number of sides. See the table below.
Polygons may be characterized by their convexity or type of non-convexity:
Euclidean geometry is assumed throughout.
Any polygon has as many corners as it has sides. Each corner has several angles. The two most important ones are:
In this section, the vertices of the polygon under consideration are taken to be (x0, y0), (x1, y1), ..., (xn−1, yn−1) in order. For convenience in some formulas, the notation (xn, yn) = (x0, y0) will also be used.
If the polygon is non-self-intersecting (that is, simple), the signed area is A = (1/2) Σ (xi yi+1 − xi+1 yi), where the sum runs over i = 0, …, n − 1.
or, using determinants
where Qi,j is the squared distance between (xi, yi) and (xj, yj).[3][4]
The signed area depends on the ordering of the vertices and of the orientation of the plane. Commonly, the positive orientation is defined by the (counterclockwise) rotation that maps the positive x-axis to the positive y-axis. If the vertices are ordered counterclockwise (that is, according to positive orientation), the signed area is positive; otherwise, it is negative. In either case, the area formula is correct in absolute value. This is commonly called the shoelace formula or Surveyor's formula.[5]
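The signed-area computation just described can be sketched in a few lines of Python (the function name and example values are illustrative, not from the article):

```python
def signed_area(vertices):
    """Signed area of a simple polygon via the shoelace formula.

    `vertices` is a list of (x, y) pairs in order; the result is
    positive when the vertices run counterclockwise, negative when
    they run clockwise, and its absolute value is the area either way.
    """
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]  # wrap around: (xn, yn) = (x0, y0)
        area += x0 * y1 - x1 * y0
    return area / 2.0

square_ccw = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(signed_area(square_ccw))        # 1.0
print(signed_area(square_ccw[::-1]))  # -1.0 (same square, clockwise order)
```

Reversing the vertex order flips only the sign, matching the orientation convention above.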
The area A of a simple polygon can also be computed if the lengths of the sides, a1, a2, ..., an and the exterior angles, θ1, θ2, ..., θn are known, from:
The formula was described by Lopshits in 1963.[6]
If the polygon can be drawn on an equally spaced grid such that all its vertices are grid points, Pick's theorem gives a simple formula for the polygon's area based on the numbers of interior and boundary grid points: the former number plus one-half the latter number, minus 1.
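Pick's theorem (area = interior points + half the boundary points, minus 1) can be exercised in a short sketch; the helper names are illustrative, and boundary points are counted per edge via a gcd, a standard fact not stated in the article:

```python
from math import gcd

def lattice_area(vertices):
    # Shoelace area of a lattice polygon (vertices given in order).
    n = len(vertices)
    s = 0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2

def boundary_points(vertices):
    # Each edge contributes gcd(|dx|, |dy|) lattice points (excluding one endpoint).
    n = len(vertices)
    total = 0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        total += gcd(abs(x1 - x0), abs(y1 - y0))
    return total

def interior_points(vertices):
    # Pick's theorem: A = I + B/2 - 1, hence I = A - B/2 + 1.
    return lattice_area(vertices) - boundary_points(vertices) / 2 + 1

rect = [(0, 0), (3, 0), (3, 2), (0, 2)]  # 3x2 lattice rectangle
print(lattice_area(rect), boundary_points(rect), interior_points(rect))  # 6.0 10 2.0
```

For the 3×2 rectangle: area 6, 10 boundary points, so Pick's theorem gives 6 − 5 + 1 = 2 interior points, which matches the two points (1, 1) and (2, 1).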
In every polygon with perimeter p and area A, the isoperimetric inequality p² > 4πA holds.[7]
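A quick numerical illustration of the inequality, using regular n-gons with unit side (function names and the closed-form n-gon area are standard but not from the article):

```python
import math

def isoperimetric_ratio(p, A):
    # p**2 / (4*pi*A): strictly greater than 1 for every polygon,
    # approaching 1 only in the limit of a circle.
    return p * p / (4 * math.pi * A)

def regular_ngon_ratio(n):
    # Regular n-gon with unit side: perimeter n, area n / (4 tan(pi/n)).
    return isoperimetric_ratio(n, n / (4 * math.tan(math.pi / n)))

# The ratio shrinks toward 1 as the n-gon approaches a circle.
for n in (3, 4, 6, 12, 96):
    print(n, round(regular_ngon_ratio(n), 4))
```

The printed ratios decrease monotonically toward 1 but never reach it, as the inequality requires.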
For any two simple polygons of equal area, the Bolyai–Gerwien theorem asserts that the first can be cut into polygonal pieces which can be reassembled to form the second polygon.
The lengths of the sides of a polygon do not in general determine its area.[8] However, if the polygon is cyclic then the sides do determine the area.[9] Of all n-gons with given side lengths, the one with the largest area is cyclic. Of all n-gons with a given perimeter, the one with the largest area is regular (and therefore cyclic).[10]
Many specialized formulas apply to the areas of regular polygons.
The area of a regular polygon is given in terms of the radius r of its inscribed circle and its perimeter p by A = rp/2.
This radius is also termed its apothem and is often represented as a.
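The apothem relation can be sketched directly (the function name is illustrative; the apothem of a regular n-gon with side s is s / (2 tan(π/n)), a standard identity not stated in the article):

```python
import math

def regular_polygon_area(n, side):
    """Area of a regular n-gon as (1/2) * apothem * perimeter."""
    apothem = side / (2 * math.tan(math.pi / n))
    perimeter = n * side
    return apothem * perimeter / 2

# Regular hexagon with unit side: 3*sqrt(3)/2, about 2.598
print(regular_polygon_area(6, 1.0))
```

As a sanity check, a "regular 4-gon" with side 2 gives area 4, the area of a 2×2 square.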
The area of a regular n-gon with side s inscribed in a unit circle is
The area of a regular n-gon in terms of the radius R of its circumscribed circle and its perimeter p is given by
The area of a regular n-gon inscribed in a unit-radius circle, with side s and interior angle α, can also be expressed trigonometrically as
The area of a self-intersecting polygon can be defined in two different ways, giving different answers:
Using the same convention for vertex coordinates as in the previous section, the coordinates of the centroid of a solid simple polygon are
In these formulas, the signed value of area A must be used.
For triangles (n = 3), the centroids of the vertices and of the solid shape are the same, but, in general, this is not true for n > 3. The centroid of the vertex set of a polygon with n vertices has the coordinates
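The contrast between the two centroids can be sketched as follows; the solid-polygon formula uses the signed area A as noted above, and the function names and L-shaped example are illustrative:

```python
def solid_centroid(vertices):
    """Centroid of a solid simple polygon; weights by the signed area A."""
    n = len(vertices)
    a = cx = cy = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a /= 2.0  # signed area; its sign cancels in the ratio below
    return cx / (6 * a), cy / (6 * a)

def vertex_centroid(vertices):
    """Arithmetic mean of the vertex coordinates alone."""
    n = len(vertices)
    return (sum(x for x, _ in vertices) / n,
            sum(y for _, y in vertices) / n)

# For a square both agree at (0.5, 0.5); for an L-shaped hexagon they differ:
ell = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
print(solid_centroid(ell))   # about (0.833, 0.833)
print(vertex_centroid(ell))  # (1.0, 1.0)
```

The L-shape shows why the distinction matters for n > 3: the solid centroid is pulled toward the larger lobe of the region, while the vertex centroid ignores the interior entirely.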
The idea of a polygon has been generalized in various ways. Some of the more important include:
The word polygon comes from Late Latin polygōnum (a noun), from Greek πολύγωνον (polygōnon/polugōnon), noun use of neuter of πολύγωνος (polygōnos/polugōnos, the masculine adjective), meaning "many-angled". Individual polygons are named (and sometimes classified) according to the number of sides, combining a Greek-derived numerical prefix with the suffix -gon, e.g. pentagon, dodecagon. The triangle, quadrilateral and nonagon are exceptions.
Beyond decagons (10-sided) and dodecagons (12-sided), mathematicians generally use numerical notation, for example 17-gon and 257-gon.[14]
Exceptions exist for side counts that are more easily expressed in verbal form (e.g. 20 and 30), or are used by non-mathematicians. Some special polygons also have their own names; for example the regular star pentagon is also known as the pentagram.
To construct the name of a polygon with more than 20 and less than 100 edges, combine the prefixes as follows.[18] The "kai" term applies to 13-gons and higher and was used by Kepler, and advocated by John H. Conway for clarity to concatenated prefix numbers in the naming of quasiregular polyhedra.[20]
Polygons have been known since ancient times. The regular polygons were known to the ancient Greeks, with the pentagram, a non-convex regular polygon (star polygon), appearing as early as the 7th century B.C. on a krater by Aristophanes, found at Caere and now in the Capitoline Museum.[35][36]
The first known systematic study of non-convex polygons in general was made by Thomas Bradwardine in the 14th century.[37]
In 1952, Geoffrey Colin Shephard generalized the idea of polygons to the complex plane, where each real dimension is accompanied by an imaginary one, to create complex polygons.[38]
Polygons appear in rock formations, most commonly as the flat facets of crystals, where the angles between the sides depend on the type of mineral from which the crystal is made.
Regular hexagons can occur when the cooling of lava forms areas of tightly packed columns of basalt, which may be seen at the Giant's Causeway in Northern Ireland, or at the Devil's Postpile in California.
In biology, the surface of the wax honeycomb made by bees is an array of hexagons, and the sides and base of each cell are also polygons.
In computer graphics, a polygon is a primitive used in modelling and rendering. They are defined in a database, containing arrays of vertices (the coordinates of the geometrical vertices, as well as other attributes of the polygon, such as color, shading and texture), connectivity information, and materials.[39][40]
Any surface is modelled as a tessellation called a polygon mesh. If a square mesh has n + 1 points (vertices) per side, there are n² squares in the mesh, or 2n² triangles, since there are two triangles in a square. There are (n + 1)² / 2(n²) vertices per triangle. Where n is large, this approaches one half. Or, each vertex inside the square mesh connects four edges (lines).
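These counts are easy to check numerically (the function name is illustrative):

```python
def square_mesh_counts(n):
    """Counts for a square mesh with n + 1 vertices per side."""
    vertices = (n + 1) ** 2
    squares = n ** 2
    triangles = 2 * n ** 2  # two triangles per square
    return vertices, squares, triangles

# The vertices-per-triangle ratio (n+1)^2 / (2n^2) tends to 1/2:
for n in (1, 10, 100):
    v, s, t = square_mesh_counts(n)
    print(n, v / t)
```

At n = 1 the ratio is 2.0; by n = 100 it is already about 0.51, illustrating the limit of one half.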
The imaging system calls up the structure of polygons needed for the scene to be created from the database. This is transferred to active memory and finally, to the display system (screen, TV monitors etc.) so that the scene can be viewed. During this process, the imaging system renders polygons in correct perspective ready for transmission of the processed data to the display system. Although polygons are two-dimensional, through the system computer they are placed in a visual scene in the correct three-dimensional orientation.
In computer graphics and computational geometry, it is often necessary to determine whether a given point P = (x0,y0) lies inside a simple polygon given by a sequence of line segments. This is called the point in polygon test.[41]
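One common implementation of the point in polygon test is ray casting: count how many polygon edges a horizontal ray from P crosses; an odd count means the point is inside. A minimal sketch, assuming a simple polygon and a point not lying exactly on an edge (one of several possible algorithms):

```python
def point_in_polygon(px, py, vertices):
    """Ray-casting test: cast a ray from (px, py) toward +x and
    toggle the inside flag at every edge crossing."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        # Does this edge straddle the horizontal line y = py?
        if (y0 > py) != (y1 > py):
            # x coordinate where the edge crosses that line
            x_cross = x0 + (py - y0) * (x1 - x0) / (y1 - y0)
            if px < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(point_in_polygon(0.5, 0.5, square))  # True
print(point_in_polygon(1.5, 0.5, square))  # False
```

The strict/non-strict comparison pair `(y0 > py) != (y1 > py)` handles vertices lying exactly at the ray's height without double-counting them.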
en/4234.html.txt
ADDED
@@ -0,0 +1,252 @@
en/4235.html.txt
ADDED
@@ -0,0 +1,242 @@
Odessa or Odesa (Ukrainian: Оде́са, romanized: Odesa [oˈdɛsɐ] (listen); Russian: Оде́сса, romanized: Odessa [ɐˈdʲesə]) is the third most populous city of Ukraine and a major tourism center, seaport and transport hub located on the northwestern shore of the Black Sea. It is also the administrative center of the Odessa Oblast and a multiethnic cultural center. Odessa is sometimes called the "pearl of the Black Sea",[2] the "South Capital" (under the Russian Empire and Soviet Union), and "Southern Palmyra".
Before the Tsarist establishment of Odessa, an ancient Greek settlement existed at its location. A more recent Tatar settlement was founded at the location in 1440 by Hacı I Giray, the Khan of Crimea, and was named Hacibey (or Khadjibey) after him.[3] After a period of Lithuanian Grand Duchy control, Hacibey and its surroundings became part of the domain of the Ottomans in 1529 and remained there until the empire's defeat in the Russo-Turkish War of 1792.
In 1794, the city of Odessa was founded by a decree of the Russian empress Catherine the Great. From 1819 to 1858, Odessa was a free port—a porto-franco. During the Soviet period, it was the most important port of trade in the Soviet Union and a Soviet naval base. On 1 January 2000, the Quarantine Pier at Odessa Commercial Sea Port was declared a free port and free economic zone for a period of 25 years.
During the 19th century, Odessa was the fourth largest city of Imperial Russia, after Moscow, Saint Petersburg and Warsaw.[4] Its historical architecture has a style more Mediterranean than Russian, having been heavily influenced by French and Italian styles. Some buildings are built in a mixture of different styles, including Art Nouveau, Renaissance and Classicist.[5]
Odessa is a warm-water port. The city of Odessa hosts both the Port of Odessa and Port Yuzhne, a significant oil terminal situated in the city's suburbs. Another notable port, Chornomorsk, is located in the same oblast, to the south-west of Odessa. Together they represent a major transport hub integrating with railways. Odessa's oil and chemical processing facilities are connected to Russian and European networks by strategic pipelines.
The city was named in compliance with the Greek Plan of Catherine the Great. It was named after the ancient Greek city of Odessos, which was mistakenly believed to have been located here. Odessa is located in between the ancient Greek cities of Tyras and Olbia, different from the ancient Odessos's location further west along the coast, which is at present day Varna, Bulgaria.[6]
Catherine's secretary of state Adrian Gribovsky [ru] claimed in his memoirs that the name was his suggestion. Some expressed doubts about this claim, while others noted the reputation of Gribovsky as an honest and modest man.[7]
Odessa was the site of a large Greek settlement no later than the middle of the 6th century BC (a necropolis from the 5th–3rd centuries BC has long been known in this area). Some scholars believe it to have been a trade settlement established by the Greek city of Histria. Whether the Bay of Odessa is the ancient "Port of the Histrians" cannot yet be considered a settled question based on the available evidence.[9] Archaeological artifacts confirm extensive links between the Odessa area and the eastern Mediterranean.
In the Middle Ages successive rulers of the Odessa region included various nomadic tribes (Petchenegs, Cumans), the Golden Horde, the Crimean Khanate, the Grand Duchy of Lithuania, and the Ottoman Empire. Yedisan Crimean Tatars traded there in the 14th century.
Since the middle of the 13th century the city's territory belonged to the Golden Horde domain.[10] On Italian navigational maps of the 14th century, the castle of Ginestra, part of Gazaria (the Genoese colonies), is indicated at the site of Odessa.[10] At times when the northern Black Sea littoral was controlled by the Grand Duchy of Lithuania, there existed a settlement of Kachibei, first mentioned in 1415.[10] By the middle of the 15th century the settlement was depopulated.[10]
During the reign of Khan Hacı I Giray of Crimea (1441–1466), the Khanate was endangered by the Golden Horde and the Ottoman Turks and, in search of allies, the khan agreed to cede the area to Lithuania. The site of present-day Odessa was then a fortress known as Khadjibey (named for Hacı I Giray, and also spelled Kocibey in English, Hacıbey or Hocabey in Turkish, and Hacıbey in Crimean Tatar).
Khadjibey came under direct control of the Ottoman Empire after 1529[10] as part of a region known as Yedisan, after one of the Nogay Hordes, and was administered in the Ottoman Silistra (Özi) Eyalet, Sanjak of Özi.[citation needed] In the mid-18th century, the Ottomans rebuilt the fortress at Khadjibey (also known as Hocabey), which was named Yeni Dünya[10] (literally "New World"). Hocabey was a sanjak centre of Silistre Province.[citation needed]
The sleepy fishing village that preceded Odessa saw a sea-change in its fortunes when the wealthy magnate and future Voivode of Kiev (1791), Antoni Protazy Potocki, established trade routes through the port for the Polish Black Sea Trading Company and set up the infrastructure in the 1780s.[11]
During the Russian-Turkish War of 1787–1792,[10] on 25 September 1789, a detachment of the Russian forces, including Zaporozhian Cossacks under Alexander Suvorov and Ivan Gudovich, took Khadjibey and Yeni Dünya for the Russian Empire. One section of the troops came under command of a Spaniard in Russian service, Major General José de Ribas (known in Russia as Osip Mikhailovich Deribas); today, the main street in Odessa, Deribasivska Street, is named after him. Russia formally gained possession of the Sanjak of Özi (Ochacov Oblast)[12] as a result of the Treaty of Jassy (Iaşi)[10] in 1792 and it became a part of Yekaterinoslav Viceroyalty. The newly acquired Ochakov Oblast was promised to the Cossacks by the Russian government for resettlement.[13] On permission of the Archbishop of Yekaterinoslav Amvrosiy, the Black Sea Kosh Host, that was located around the area between Bender and Ochakiv, built second after Sucleia wooden church of Saint Nicholas.[14]
By the Highest rescript of 17 June 1792, addressed to General Kakhovsky, it was ordered to establish the Dniester Border Line of fortresses.[14] Graf (Count) Suvorov-Rymnikskiy was appointed commander of the land forces in Ochakiv Oblast.[14] The main fortress was built near Sucleia at the mouth of the river Botna, as the Head Dniester Fortress, by Engineer-Major de Wollant.[14] Near the new fortress a new "Vorstadt" (suburb) formed, to which people moved from Sucleia and Parkan.[14] With the establishment of the Voznesensk Governorate on 27 January 1795, the Vorstadt was named Tiraspol.[14]
The city of Odessa, founded by the Russian Empress Catherine the Great, centers on the site of the Turkish fortress Khadzhibei, which was occupied by a Russian Army in 1789. The Flemish engineer working for the empress, Franz de Volan (François Sainte de Wollant) recommended the area of Khadzhibei fortress as the site for the region's basic port: it had an ice-free harbor, breakwaters could be cheaply constructed that would render the harbor safe and it would have the capacity to accommodate large fleets. The Namestnik of Yekaterinoslav and Voznesensk, Platon Zubov (one of Catherine's favorites) supported this proposal, and in 1794 Catherine approved the founding of the new port-city and invested the first money in constructing the city.
However, adjacent to the new official locality, a Moldavian colony already existed, which by the end of the 18th century was an independent settlement named Moldavanka. Some local historians consider that the settlement predates Odessa by about thirty years and assert that the locality was founded by Moldavians who came to build the fortress of Yeni Dunia for the Ottomans and eventually settled in the area in the late 1760s, right next to the settlement of Khadjibey (since 1795 Odessa proper), on what later became the Primorsky Boulevard. Another version posits that the settlement appeared after Odessa itself was founded, as a settlement of Moldavians, Greeks and Albanians fleeing the Ottoman yoke.[15]
In 1795 Khadjibey was officially renamed Odessa, after the Greek colony of Odessos that was supposedly located in the area.[10][16] In reality that colony was located at the mouth of the Tylihul Estuary (liman).[10] The first census conducted in Odessa, in 1797, counted 3,455 people.[10] Since 1795 the city had its own city magistrate, and since 1796 a city council of six members and the Odessa Commodity Exchange.[10] In 1801 the first commercial bank opened in Odessa.[10] By 1803 the city numbered 9,000 people.[16]
In their settlement, also known as Novaya Slobodka, the Moldavians owned relatively small plots on which they built village-style houses and cultivated vineyards and gardens. What became Mykhailovsky Square was the center of this settlement and the site of its first Orthodox church, the Church of the Dormition, built in 1821 close to the seashore, as well as of a cemetery. Nearby stood the military barracks and the country houses (dacha) of the city's wealthy residents, including that of the Duc de Richelieu, appointed by Tzar Alexander I as Governor of Odessa in 1803. Richelieu played a role during Ottoman plague epidemic which hit Odessa in the autumn 1812.[17][18] Dismissive of any attempt to forge a compromise between quarantine requirements and free trade, Prince Kuriakin (the Saint Petersburg-based High Commissioner for Sanitation) countermanded Richelieu's orders.[19]
In the period from 1795 to 1814 the population of Odessa increased 15 times over and reached almost 20 thousand people. The first city plan was designed by the engineer F. Devollan in the late 18th century.[5] Colonists of various ethnicities settled mainly in the area of the former colony, outside of the official boundaries, and as a consequence, in the first third of the 19th century, Moldavanka emerged as the dominant settlement. After planning by the official architects who designed buildings in Odessa's central district, such as the Italians Francesco Carlo Boffo and Giovanni Torricelli, Moldovanka was included in the general city plan, though the original grid-like plan of Moldovankan streets, lanes and squares remained unchanged.[15]
The new city quickly became a major success although initially it received little state funding and privileges.[20] Its early growth owed much to the work of the Duc de Richelieu, who served as the city's governor between 1803 and 1814. Having fled the French Revolution, he had served in Catherine's army against the Turks. He is credited with designing the city and organizing its amenities and infrastructure, and is considered[by whom?] one of the founding fathers of Odessa, together with another Frenchman, Count Andrault de Langeron, who succeeded him in office. Richelieu is commemorated by a bronze statue, unveiled in 1828 to a design by Ivan Martos. His contributions to the city are mentioned by Mark Twain in his travelogue Innocents Abroad: "I mention this statue and this stairway because they have their story. Richelieu founded Odessa – watched over it with paternal care – labored with a fertile brain and a wise understanding for its best interests – spent his fortune freely to the same end – endowed it with a sound prosperity, and one which will yet make it one of the great cities of the Old World".
In 1819, the city became a free port, a status it retained until 1859. It became home to an extremely diverse population of Albanians, Armenians, Azeris, Bulgarians, Crimean Tatars, Frenchmen, Germans (including Mennonites), Greeks, Italians, Jews, Poles, Romanians, Russians, Turks, Ukrainians, and traders representing many other nationalities (hence numerous "ethnic" names on the city's map, for example Frantsuzky (French) and Italiansky (Italian) Boulevards, Grecheskaya (Greek), Yevreyskaya (Jewish), Arnautskaya (Albanian) Streets). Its cosmopolitan nature was documented by the great Russian poet Alexander Pushkin, who lived in internal exile in Odessa between 1823 and 1824. In his letters he wrote that Odessa was a city where "the air is filled with all Europe, French is spoken and there are European papers and magazines to read".
Odessa's growth was interrupted by the Crimean War of 1853–1856, during which it was bombarded by British and Imperial French naval forces.[21] It soon recovered and the growth in trade made Odessa Russia's largest grain-exporting port. In 1866, the city was linked by rail with Kiev and Kharkiv as well as with Iaşi in Romania.
The city became the home of a large Jewish community during the 19th century, and by 1897 Jews were estimated to comprise some 37% of the population. The community, however, was repeatedly subjected to anti-Semitism and anti-Jewish agitation from almost all Christian segments of the population.[22] Pogroms were carried out in 1821, 1859, 1871, 1881 and 1905. Many Odessan Jews fled abroad after 1882, particularly to the Ottoman region that became Palestine, and the city became an important base of support for Zionism.
In 1905, Odessa was the site of a workers' uprising supported by the crew of the Russian battleship Potemkin and by the Mensheviks' Iskra. Sergei Eisenstein's famous motion picture The Battleship Potemkin commemorated the uprising and included a scene in which hundreds of Odessan citizens are murdered on the great stone staircase (now popularly known as the "Potemkin Steps"), one of the most famous scenes in motion picture history. At the top of the steps, which lead down to the port, stands a statue of the Duc de Richelieu. The actual massacre took place in nearby streets, not on the steps themselves, but the film drew many visitors to Odessa to see the site of the "slaughter". The "Odessa Steps" remain a tourist attraction. The film was made at Odessa's Cinema Factory, one of the oldest cinema studios in the former Soviet Union.
Following the Bolshevik Revolution of 1917, during the Ukrainian-Soviet War, Odessa saw two Bolshevik armed insurgencies, the second of which succeeded in establishing control over the city; for the following months the city was the centre of the Odessa Soviet Republic. After the signing of the Treaty of Brest-Litovsk, all Bolshevik forces were driven out by 13 March 1918 by the combined armed forces of the Austro-Hungarian Army, which provided support to the Ukrainian People's Republic.[23]
With the end of World War I and the withdrawal of the Central Powers' armies, Soviet forces fought the army of the Ukrainian People's Republic for control of the country. A few months later the city was occupied by the French and Greek armies, which supported the Russian White Army in its struggle with the Bolsheviks. The Ukrainian general Nikifor Grigoriev, who sided with the Bolsheviks, managed to drive the unwelcome Triple Entente forces out of the city, but Odessa was soon retaken by the Russian White Army. Finally, by 1920 the Soviet Red Army managed to overpower both the Ukrainian army and the Russian White Army and secure the city.
The people of Odessa suffered badly from the famine of 1921–1922, which followed the Russian Civil War and was aggravated by the Soviet policy of prodrazverstka (forced grain requisitioning).
Odessa during first days of Revolution - 1916
Revolutionary soldiers - 1916
Revolutionary soldiers, Odessa - 1916
Odessa was attacked by Romanian and German troops in August 1941. The defense of Odessa lasted 73 days from 5 August to 16 October 1941. The defense was organized on three lines with emplacements consisting of trenches, anti-tank ditches and pillboxes. The first line was 80 kilometres (50 miles) long and situated some 25 to 30 kilometres (16 to 19 miles) from the city. The second and main line of defense was situated 6 to 8 kilometres (3.7 to 5.0 miles) from the city and was about 30 kilometres (19 miles) long. The third and last line of defense was organized inside the city itself.
A medal, "For the Defence of Odessa", was established on 22 December 1942. Approximately 38,000 medals were awarded to servicemen of the Soviet Army, Navy and Ministry of Internal Affairs, and to civilians who took part in the city's defense. In 1945 Odessa became one of the first four Soviet cities to be awarded the title of "Hero City" (the others were Leningrad, Stalingrad and Sevastopol).
Lyudmila Pavlichenko, the famous female sniper, took part in the battle for Odessa. Her first two kills were effected near Belyayevka using a Mosin-Nagant bolt-action rifle with a P.E. 4-power scope. She recorded 187 confirmed kills during the defense of Odessa. Pavlichenko's confirmed kills during World War II totaled 309 (including 36 enemy snipers).
Before the city was occupied by Romanian troops in 1941, part of its population, industry and infrastructure, along with all the cultural valuables that could be moved, were evacuated to inner regions of the USSR, and the retreating Red Army units destroyed as much as they could of Odessa's remaining harbour facilities. The city was land-mined in the same way as Kiev.[citation needed]
During World War II, from 1941–1944, Odessa was subject to Romanian administration, as the city had been made part of Transnistria.[24] Partisan fighting continued, however, in the city's catacombs.
Following the Siege of Odessa, and the Axis occupation, approximately 25,000 Odessans were murdered in the outskirts of the city and over 35,000 deported; this came to be known as the Odessa massacre. Most of the atrocities were committed during the first six months of the occupation which officially began on 17 October 1941, when 80% of the 210,000 Jews in the region were killed,[25] compared to Jews in Romania proper where the majority survived.[26] After the Nazi forces began to lose ground on the Eastern Front, the Romanian administration changed its policy, refusing to deport the remaining Jewish population to extermination camps in German occupied Poland, and allowing Jews to work as hired labourers. As a result, despite the events of 1941, the survival of the Jewish population in this area was higher than in other areas of occupied eastern Europe.[25]
The city suffered severe damage and sustained many casualties over the course of the war. Many parts of Odessa were damaged during both its siege and recapture on 10 April 1944, when the city was finally liberated by the Red Army. Some of the Odessans had a more favourable view of the Romanian occupation, in contrast with the Soviet official view that the period was exclusively a time of hardship, deprivation, oppression and suffering – claims embodied in public monuments and disseminated through the media to this day.[27] Subsequent Soviet policies imprisoned and executed numerous Odessans (and deported most of the German population) on account of collaboration with the occupiers.[28]
Postage stamp of the USSR 1965 “Hero-City Odessa 1941-1945”
Obverse of the Soviet campaign medal "For the Defence of Odessa"
Reverse of the Soviet campaign medal "For the Defence of Odessa"; inscription reads “For our Soviet homeland”
Certificate confirming that Logvinov Petr Leontievich was awarded the medal "For the Defence of Odessa" for taking part in the heroic defense of the city.
During the 1960s and 1970s, the city grew. Nevertheless, between the 1970s and 1990s the majority of Odessa's Jews emigrated to Israel, the United States and other Western countries. Many ended up in the Brooklyn neighborhood of Brighton Beach, sometimes known as "Little Odessa". Domestic migration of the Odessan middle and upper classes to Moscow and Leningrad, cities that offered even greater opportunities for career advancement, also occurred on a large scale. Despite this, the city grew rapidly, as the void left by those who departed was filled by new migrants from rural Ukraine and industrial professionals invited from all over the Soviet Union.
As a part of the Ukrainian Soviet Socialist Republic, the city preserved and somewhat reinforced its unique cosmopolitan mix of Russian/Ukrainian/Jewish culture and a predominantly Russophone environment with the uniquely accented dialect of Russian spoken in the city. The city's unique identity has been formed largely thanks to its varied demography; all the city's communities have influenced aspects of Odessan life in some way or form.
Odessa is a city of more than 1 million people. The city's industries include shipbuilding, oil refining, chemicals, metalworking, and food processing. Odessa is also a Ukrainian naval base and home to a fishing fleet. It is known for its large outdoor market – the Seventh-Kilometer Market, the largest of its kind in Europe.
The city saw violence during the pro-Russian unrest in Ukraine in 2014. The clashes of 2 May 2014 between pro-Ukrainian and pro-Russian protesters killed 42 people. Four were killed during the protests, and at least 32 trade unionists died after a trade union building was set on fire and its exits blocked by Ukrainian nationalists.[29] Polls conducted from September to December 2014 found no support for joining Russia.[30]
Odessa was struck by three bomb blasts in December 2014, one of which killed one person (the injuries sustained by the victim indicated that he had dealt with explosives).[31][32] Internal Affairs Ministry advisor Zorian Shkiryak said on 25 December that Odessa and Kharkiv had become "cities which are being used to escalate tensions" in Ukraine. Shkiryak said that he suspected that these cities were singled out because of their "geographic position".[31] On 5 January 2015 the city's Euromaidan Coordination Center and a cargo train car were (non-lethally) bombed.[33]
Odessa is situated (46°28′N 30°44′E / 46.467°N 30.733°E / 46.467; 30.733) on terraced hills overlooking a small harbor on the Black Sea in the Gulf of Odessa, approximately 31 km (19 mi) north of the estuary of the Dniester river and some 443 km (275 mi) south of the Ukrainian capital Kiev. The average elevation at which the city is located is around 50 metres (160 feet), while the maximum is 65 metres (213 feet) and minimum (on the coast) amounts to 4.2 metres (13.8 feet) above sea level. The city currently covers a territory of 162.42 km2 (63 sq mi),[34] the population density for which is around 6,139 persons/km². Sources of running water in the city include the Dniester River, from which water is taken and then purified at a processing plant just outside the city. Being located in the south of Ukraine, the topography of the area surrounding the city is typically flat and there are no large mountains or hills for many kilometres around. Flora is of the deciduous variety and Odessa is known for its tree-lined avenues which, in the late 19th and early 20th centuries, made the city a favourite year-round retreat for the Russian aristocracy.[citation needed]
The city's location on the coast of the Black Sea has also helped to create a booming tourist industry in Odessa.[citation needed] The city's Arkadia beach, a large sandy beach located to the south of the city centre, has long been a favourite place for relaxation, both for the city's inhabitants and its visitors.[citation needed] Odessa's many sandy beaches are considered quite unique in Ukraine,[citation needed] as the country's southern coast (particularly in Crimea) tends to feature stony and pebble beaches.
The coastal cliffs adjacent to the city are prone to frequent landslides, which continually reshape this stretch of the Black Sea coastline. Because of the shifting slopes, city planners must monitor the stability of such areas and preserve potentially threatened buildings and other structures near the water above sea level.[35] The many underground cavities beneath the city also pose a potential danger to its infrastructure and architecture, as they can cause buildings to collapse. In addition, the effects of climate and weather on the sedimentary rocks beneath the city create instability under some buildings' foundations.
Odessa has a hot-summer humid continental climate (Dfa, using the 0 °C [32 °F] isotherm) that borders on the semi-arid climate (BSk) as well as the humid subtropical climate (Cfa). This has, over the past few centuries, aided the city greatly in creating the conditions necessary for the development of summer tourism. During the tsarist era, Odessa's climate was considered beneficial for the body, and many wealthy but sickly people were sent to the city to relax and recuperate. This resulted in the development of a spa culture and the establishment of a number of high-end hotels in the city. The average annual sea temperature is 13–14 °C (55–57 °F), whilst seasonal temperatures range from an average of 6 °C (43 °F) in the period from January to March to 23 °C (73 °F) in August. Typically, for a total of four months – from June to September – the average sea temperature in the Gulf of Odessa and the city's bay area exceeds 20 °C (68 °F).[36]
The city typically experiences dry, cold winters, which are relatively mild when compared with most of Ukraine, as temperatures rarely fall below −10 °C (14 °F). Summers, on the other hand, see an increased level of precipitation, and the city often experiences warm weather with temperatures reaching into the high 20s and low 30s Celsius. Snow cover is usually light or moderate, and municipal services rarely experience the problems often found in other, more northern Ukrainian cities. This is largely because the higher winter temperatures and coastal location of Odessa prevent significant snowfall. In addition, the sea at Odessa hardly ever freezes.
According to the 2001 census, Ukrainians make up a majority (62 percent) of Odessa's inhabitants, along with an ethnic Russian minority (29 percent).[39]
A 2015 study by the International Republican Institute found that 68% of Odessa was ethnic Ukrainian, and 25% ethnic Russian.[45]
Despite Odessa's Ukrainian majority, Russian is the dominant language in the city. In 2015, the main language spoken at home was Russian − around 78% of the total population − followed by Ukrainian at 6%, and an equal combination of Ukrainian and Russian, 15%.[45]
Odessa Oblast is also home to a number of other nationalities and minority ethnic groups, including Albanians, Armenians, Azeris, Crimean Tatars, Bulgarians, Georgians, Greeks, Jews, Poles, Romanians and Turks, among others.[39] Up until the early 1940s the city had a large Jewish population. As a result of mass deportation to extermination camps during the Second World War, the city's Jewish population declined considerably. Since the 1970s, the majority of the remaining Jewish population has emigrated to Israel and other countries, shrinking the Jewish community.
Through most of the 19th century and until the mid 20th century, the largest ethnic group in Odessa was Russians, with the second largest ethnic group being Jews.[46]
Whilst Odessa is the administrative centre of the Odessa Oblast, the city is also the main constituent of the Odessa Municipality. However, since Odessa is a city of regional significance, it is subject directly to the administration of the oblast's authorities, removing it from the responsibility of the municipality.
The city of Odessa is governed by a mayor and city council which work cooperatively to ensure the smooth-running of the city and procure its municipal bylaws. The city's budget is also controlled by the administration.
The mayoralty[51] plays the role of the executive in the city's municipal administration. At its head is the mayor, who is elected by the city's electorate for five years in a direct election. In the 2015 mayoral election, Gennadiy Trukhanov was re-elected in the first round with 52.9% of the vote.[1]
There are five deputy mayors, each responsible for a particular area of the city's public policy.
The City Council[52] makes up the administration's legislative branch, effectively making it a city 'parliament' or rada. The municipal council is made up of 120 elected members,[53] each of whom represents a district of the city for a four-year term. The current council is the fifth in the city's modern history, and was elected in January 2011. In the regular meetings of the municipal council, problems facing the city are discussed, and the city's budget is drawn up annually. The council has seventeen standing commissions[54] which play an important role in controlling the finances and trading practices of the city and its merchants.
The territory of Odessa is divided into four administrative raions (districts):
In addition, every raion has its own administration, subordinate to the Odessa City council, and with limited responsibilities.
Many of Odessa's buildings have, rather uniquely for a Ukrainian city, been influenced by the Mediterranean style of classical architecture. This is particularly noticeable in buildings by architects such as the Italian Francesco Boffo, who in the early 19th century built a palace and colonnade for the Governor of Odessa, Prince Mikhail Vorontsov, as well as the Potocki Palace and many other public buildings.
In 1887 one of the city's best-known architectural monuments was completed – the theatre, which still hosts a range of performances to this day and is widely regarded as one of the world's finest opera houses. The first opera house opened in 1810 and was destroyed by fire in 1873. The modern building was constructed by Fellner and Helmer in the neo-baroque style; its luxurious hall was built in the rococo style. It is said that, thanks to its unique acoustics, even a whisper from the stage can be heard in any part of the hall. The theatre was modelled on Dresden's Semperoper, built in 1878, with its nontraditional foyer following the curvature of the auditorium; the building's most recent renovation was completed in 2007.[55]
Odessa's most iconic symbol, the Potemkin Stairs, is a vast staircase that conjures an illusion so that those at the top only see a series of large steps, while at the bottom all the steps appear to merge into one pyramid-shaped mass. The original 200 steps (now reduced to 192) were designed by Italian architect Francesco Boffo and built between 1837 and 1841. The steps were made famous by Sergei Eisenstein in his film, Battleship Potemkin.
Most of the city's 19th-century houses were built of limestone mined nearby. The abandoned mines were later used and broadened by local smugglers, creating a giant, complicated labyrinth of tunnels beneath Odessa known as the "Odessa Catacombs". During World War II the catacombs served as a hiding place for partisans and as a natural shelter for civilians escaping aerial bombing.
Deribasivska Street, an attractive pedestrian avenue named after José de Ribas, the Spanish-born founder of Odessa and a decorated Russian Navy admiral of the Russo-Turkish War, is famous for its unique character and architecture.[citation needed] During the summer it is common to find large crowds of people leisurely sitting and talking on the outdoor terraces of its numerous cafés, bars and restaurants, or simply enjoying a walk along the cobblestone street, which is closed to vehicular traffic and kept shaded by the linden trees that line its route.[56] A similar streetscape can be found on Primorsky Bulvar, a grand thoroughfare which runs along the edge of the plateau upon which the city is situated, and where many of the city's most imposing buildings are to be found.
As one of the biggest ports on the Black Sea, Odessa's port is busy all year round. The Odessa Sea Port is located on an artificial stretch of Black Sea coast in the north-western part of the Gulf of Odessa. The total shoreline length of the port is around 7.23 kilometres (4.49 mi). The port – which includes an oil refinery, a container-handling facility, a passenger area and numerous areas for handling dry cargo – benefits from the fact that its work does not depend on seasonal weather; the harbour itself is sheltered from the elements by breakwaters. The port is able to handle up to 14 million tons of cargo and about 24 million tons of oil products annually, whilst its passenger terminals can cater for around 4 million passengers a year at full capacity.[57]
There are a number of public parks and gardens in Odessa, among these are the Preobrazhensky, Gorky and Victory parks, the latter of which is an arboretum. The city is also home to a university botanical garden, which recently celebrated its 200th anniversary, and a number of other smaller gardens.
The City Garden, or Gorodskoy Sad, is perhaps the most famous of Odessa's gardens. Laid out in 1803 by Felix De Ribas (brother of the founder of Odessa, José de Ribas) on a plot of urban land he owned, the garden is located right in the heart of the city. When Felix decided that he was no longer able to provide enough money for the garden's upkeep, he decided to present it to the people of Odessa.[58] The transfer of ownership took place on 10 November 1806. Nowadays the garden is home to a bandstand and is the traditional location for outdoor theater in the summertime. Numerous sculptures can also be found within the grounds as well as a musical fountain, the waters of which are computer controlled to coordinate with the musical melody being played.
Odessa's largest park, Shevchenko Park (previously Alexander Park), was founded in 1875, during a visit to the city by Emperor Alexander II. The park covers an area of around 700 by 900 metres (2,300 by 3,000 feet) and is located near the centre of the city, on the side closest to the sea. Within the park there are a variety of cultural and entertainment facilities, and wide pedestrian avenues. In the center of the park is the local top-flight football team's Chornomorets Stadium, the Alexander Column and municipal observatory. The Baryatinsky Bulvar is popular for its route, which starts at the park's gate before winding its way along the edge of the coastal plateau. There are a number of monuments and memorials in the park, one of which is dedicated to the park's namesake, the Ukrainian national poet Taras Shevchenko.
Odessa is home to several universities and other institutions of higher education. The city's best-known and most prestigious university is the Odessa 'I.I. Mechnikov' National University. This university is the oldest in the city, founded by an edict of Tsar Alexander II of Russia in 1865 as the Imperial Novorossian University. It has since developed into one of modern Ukraine's leading research and teaching universities, with a staff of around 1,800 and a total of thirteen academic faculties. Besides the National University, the city is also home to the Odessa National Economic University (inaugurated 1921), the Odessa National Medical University (founded 1900), the Odessa National Polytechnic University (founded 1918) and the Odessa National Maritime University (established 1930).
In addition to these universities, the city is home to the Odessa Law Academy, the National Academy of Telecommunications and the Odessa National Maritime Academy. The last of these institutions is a highly specialised and prestigious establishment for the preparation and training of merchant mariners which sees around 1,000 newly qualified officer cadets graduate each year and take up employment in the merchant marines of numerous countries around the world. The South Ukrainian National Pedagogical University is also based in the city, this is one of the largest institutions for the preparation of educational specialists in Ukraine and is recognised as one of the country's finest of such universities.
In addition to all the state-run universities mentioned above, Odessa is also home to many private educational institutes and academies which offer highly specialised courses in a range of subjects. These establishments, however, typically charge much higher fees than government-owned establishments and may not hold the same level of official accreditation as their state-run peers.[citation needed]
With regard to primary and secondary education, Odessa has many schools catering for all ages from kindergarten through to lyceum (final secondary school level) age. Most of these schools are state-owned and operated, and all schools have to be state-accredited in order to teach children.
The Fine Arts Museum is the biggest art gallery in the city; its collection includes canvases, mostly by Russian painters from the 17th to the 21st centuries, a collection of icons, and modern art. The Odessa Museum of Western and Eastern Art is a major art museum with large European collections from the 16th to 20th centuries on display alongside art from the East, including paintings by Caravaggio, Mignard, Hals, Teniers and Del Piombo. Also of note is the city's Alexander Pushkin Museum, dedicated to the short period Pushkin spent in exile in Odessa, during which he continued to write. The poet also has a city street named after him, as well as a statue.[59] Other museums in the city include the Odessa Archeological Museum, housed in a neoclassical building, the Odessa Numismatics Museum, the Odessa Museum of Regional History, and the Museum of the Heroic Defense of Odessa (411th Battery).
Among the city's public sculptures, two sets of Medici lions can be noted, at the Vorontsov Palace[60] as well as the Starosinnyi Garden.[61]
Jacob Adler, the major star of the Yiddish theatre in New York and father of the actor, director and teacher Stella Adler, was born and spent his youth in Odessa. The most popular Russian show-business people from Odessa are Yakov Smirnoff (comedian), Mikhail Zhvanetsky (legendary humorist writer, who began his career as a port engineer) and Roman Kartsev (comedian). Zhvanetsky's and Kartsev's success in the 1970s, along with that of Odessa's KVN team, contributed to Odessa's established status as "capital of Soviet humor", culminating in the annual Humoryna festival held around the beginning of April.
Odessa was also the home of the late Armenian painter Sarkis Ordyan (1918–2003), the Ukrainian painter Mickola Vorokhta and the Greek philologist, author and promoter of Demotic Greek Ioannis Psycharis (1854–1929). Yuri Siritsov, bass player of the Israeli metal band PallaneX, is originally from Odessa, as is production manager Igor Glazer. Baruch Agadati (1895–1976), the Israeli classical ballet dancer, choreographer, painter, and film producer and director, grew up in Odessa, as did the Israeli artist and author Nachum Gutman (1898–1980). The Israeli painter Avigdor Stematsky (1908–89) was born in Odessa.
Odessa produced one of the founders of the Soviet violin school, Pyotr Stolyarsky. It has also produced many musicians, including the violinists Nathan Milstein, David Oistrakh and Igor Oistrakh, Boris Goldstein, Zakhar Bron and pianists Sviatoslav Richter, Benno Moiseiwitsch, Vladimir de Pachmann, Shura Cherkassky, Emil Gilels, Maria Grinberg, Simon Barere, Leo Podolsky and Yakov Zak. (Note: Richter studied in Odessa but wasn't born there.)
The Odessa International Film Festival has also been held in the city annually since 2010.
The poet Anna Akhmatova was born in Bolshoy Fontan near Odessa,[62] though her later work was not connected with the city or its literary tradition. The city has produced many writers, including Isaac Babel, whose series of short stories, Odessa Tales, is set in the city. Other Odessites are the duo Ilf and Petrov, authors of The Twelve Chairs, and Yuri Olesha, author of The Three Fat Men. Vera Inber, a poet and writer, as well as the poet and journalist Margarita Aliger, were both born in Odessa. The Italian writer, Slavist and anti-fascist dissident Leone Ginzburg was born in Odessa into a Jewish family, and later moved to Italy, where he grew up and lived.
One of the most prominent pre-war Soviet writers, Valentin Kataev, was born here and began his writing career as early as high school (gymnasium). Before moving to Moscow in 1922, he made quite a few acquaintances here, including Yury Olesha and Ilya Ilf (Ilf's co-author Petrov was in fact Kataev's brother, Petrov being his pen name). Kataev became a benefactor of these young authors, who would become some of the most talented and popular Russian writers of the period. In 1955 Kataev became the first chief editor of Youth (Russian: Юность, Yunost'), one of the leading literary magazines of the Ottepel (Thaw) of the 1950s and 1960s.[citation needed]
These authors and comedians played a great role in establishing the "Odessa myth" in the Soviet Union. Odessites were, and are, viewed in the ethnic stereotype as sharp-witted, street-wise and eternally optimistic.[citation needed] These qualities are reflected in the "Odessa dialect", which borrows chiefly from the characteristic speech of the Odessan Jews and is enriched by a plethora of influences common to the port city. The "Odessite speech" became a staple of the "Soviet Jew" depicted in a multitude of jokes and comedy acts, in which the Jewish protagonist served as a wise and subtle dissenter and opportunist, always pursuing his own well-being but unwittingly pointing out the flaws and absurdities of the Soviet regime. The Odessan Jew in the jokes always "came out clean" and was, in the end, a lovable character – unlike some other jocular national stereotypes such as the Chukcha, the Ukrainian, the Estonian or the American.[63]
Odessa is a popular tourist destination, with many therapeutic resorts in and around the city. The city's Filatov Institute of Eye Diseases & Tissue Therapy is one of the world's leading ophthalmology clinics.
April Fools' Day, held annually on 1 April, is one of the most celebrated festivals in the city. Practical joking is a central theme throughout, and Odessans dress in unique, colorful attire to express their spontaneous and comedic selves. The tradition has been celebrated since the early 1970s, when the humor of Ukraine's citizens was drawn to television and the media, and the day developed into a mass festival. Large amounts of money are made from the festivities, supporting Odessa's local entertainers and shops.[64]
Pyotr Schmidt (better known as "Lieutenant Schmidt"), one of the leaders of the Sevastopol uprising, was born in Odessa.
Ze'ev Jabotinsky was born in Odessa, and largely developed his version of Zionism there in the early 1920s.[65] One Marshal of the Soviet Union, Rodion Yakovlevich Malinovsky, a military commander in World War II and Defense Minister of the Soviet Union, was born in Odessa, whilst renowned Nazi hunter Simon Wiesenthal lived in the city at one time.
Georgi Rosenblum, who was employed by William Melville as one of the first spies of the British Secret Service Bureau, was a native Odessan. Another intelligence agent from Odessa was Genrikh Lyushkov, who joined in the Odessa Cheka in 1920 and reached two-star rank in the NKVD before fleeing to Japanese-occupied Manchuria in 1938 to avoid being murdered.
The composer Jacob Weinberg (1879–1956) was born in Odessa. He composed over 135 works and was the founder of the Jewish National Conservatory in Jerusalem before immigrating to the U.S. where he became "an influential voice in the promotion of American Jewish music".[66]
Valeria Lukyanova, a girl from Odessa who looks very similar to a Barbie doll, has received attention on the Internet and from the media for her doll-like appearance.[67]
Mikhail Zhvanetsky, writer, satirist and performer best known for his shows targeting different aspects of the Soviet and post-Soviet everyday life is one of most famous living Odessans.[citation needed]
VitaliV (Vitali Vinogradov), an artist and sculptor based in London since 1989, was born in Odessa.[68]
Kostyantyn Mykolayovych Bocharov, better known by his stage name, Mélovin, is a native of Odessa. He is best known for winning season six of X-Factor Ukraine and for representing Ukraine in the Eurovision Song Contest 2018, singing the song "Under the Ladder".
Yaakov Dori, the first Chief of Staff of the Israel Defense Forces, and President of the Technion – Israel Institute of Technology, was born in Odessa, as was Israel Dostrovsky, Israeli physical chemist who was the fifth president of the Weizmann Institute of Science.
The economy of Odessa largely stems from its traditional role as a port city. The nearly ice-free port lies near the mouths of the Dnieper, the Southern Bug, the Dniester and the Danube rivers, which provide good links to the hinterland.[69]
During the Soviet period (until 1991) the city functioned as the USSR's largest trading port; it continues in a similar role as independent Ukraine's busiest international port. The port complex contains an oil and gas transfer and storage facility, a cargo-handling area and a large passenger port. In 2007 the Port of Odessa handled 31,368,000 tonnes of cargo.[70][71] The port of Odessa is also one of the Ukrainian Navy's most important bases on the Black Sea. Rail transport is another important sector of the economy in Odessa – largely due to the role it plays in delivering goods and imports to and from the city's port.
Industrial enterprises located in and around the city include those dedicated to fuel refining, machine building, metallurgy, and other types of light industry such as food preparation, timber processing and chemicals. Agriculture is a relatively important sector in the territories surrounding the city. The Seventh-Kilometer Market is a major commercial complex on the outskirts of the city, where private traders now operate one of the largest market complexes in Eastern Europe.[72] The market has roughly 6,000 traders and an estimated 150,000 customers per day. Daily sales, according to the Ukrainian periodical Zerkalo Nedeli, were believed to be as high as USD 20 million in 2004. With a staff of 1,200 (mostly guards and janitors), the market is also the region's largest employer. It is owned by local land and agriculture tycoon Viktor A. Dobriansky and three of his partners. Tavria-V is the most popular retail chain in Odessa; its key areas of business include retail, wholesale, catering, production, construction and development, and private labels. Consumer recognition is mainly attributed[by whom?] to its high level of service and quality. Tavria-V is also the biggest private company and the biggest taxpayer.
Deribasivska Street is one of the city's most important commercial streets, hosting many of the city's boutiques and higher-end shops. In addition to this there are a number of large commercial shopping centres in the city. The 19th-century shopping gallery Passage was, for a long time, the city's most upscale shopping district, and remains to this day[update] an important landmark of Odessa.
The tourism sector is of great importance to Odessa, which is currently[when?] the second most-visited Ukrainian city.[73] In 2003 this sector recorded a total revenue of 189.2 million UAH. Other sectors of the city's economy include banking: the city hosts a branch of the National Bank of Ukraine. Imexbank, one of Ukraine's largest commercial banks, was based in the city until May 27, 2015, when the Deposit Guarantee Fund of Ukraine decided to liquidate it. Foreign business ventures have thrived in the area: since 1 January 2000, much of the city and its surrounding area has been declared[by whom?] a free economic zone, which has aided the foundation of foreign companies' and corporations' Ukrainian divisions and allowed them to invest more easily in the Ukrainian manufacturing and service sectors. To date, a number of Japanese and Chinese companies, as well as a host of European enterprises, have invested in the development of the free economic zone, and private investors in the city have put a great deal of money into quality office real estate and modern manufacturing facilities such as warehouses and plant complexes.
Odessa also has a well-developed IT industry, with a large number of IT outsourcing companies and IT product startups. Among the most famous startups are Looksery[74] and AI Factory, both developed in Odessa and acquired by Snap Inc.[75]
A number of world-famous scientists have lived and worked in Odessa. They include: Illya Mechnikov (Nobel Prize in Medicine 1908),[76] Igor Tamm (Nobel Prize in Physics 1958), Selman Waksman (Nobel Prize in Medicine 1952), Dmitri Mendeleev, Nikolay Pirogov, Ivan Sechenov, Vladimir Filatov, Nikolay Umov, Leonid Mandelstam, Aleksandr Lyapunov, Mark Krein, Alexander Smakula, Waldemar Haffkine, Valentin Glushko, Israel Dostrovsky, and George Gamow.[77]
Odessa is a major maritime-transport hub with several ports, including the Port of Odessa, the Port of Chornomorsk (ferry and freight) and Yuzhne (freight only). The Port of Odessa became a provisional headquarters for the Ukrainian Navy following the Russian occupation of Crimea in 2014. Before the fall of the Soviet Union, the Port of Odessa harbored the major Soviet cruise line Black Sea Shipping Company.
Passenger ships and ferries connect Odessa with Istanbul, Haifa and Varna, whilst river cruises can occasionally be booked for travel up the Dnieper River to cities such as Kherson, Dnipro and Kiev.
The first car in the Russian Empire, a Mercedes-Benz belonging to V. Navrotsky, came to Odessa from France in 1891. Navrotsky was the publisher of the popular city newspaper The Odessa Leaf.
Odessa is linked to the Ukrainian capital, Kiev, by the M05 Highway, a high-quality multi-lane road which is set to be re-designated, after further reconstruction, as an 'Avtomagistral' (motorway) in the near future. Other routes of national significance passing through Odessa include the M16 Highway to Moldova, the M15 to Izmail and Romania, and the M14, which runs from Odessa through Mykolaiv and Kherson to Ukraine's eastern border with Russia. The M14 is of particular importance to Odessa's maritime and shipbuilding industries as it links the city with Ukraine's other large deep-water port, Mariupol, located in the southeast of the country.
Odessa also has a well-developed system of inter-urban municipal roads and minor beltways. However, the city is still lacking an extra-urban bypass for transit traffic which does not wish to proceed through the city centre.
Intercity bus services are available from Odessa to many cities in Russia (Moscow, Rostov-on-Don, Krasnodar, Pyatigorsk), Germany (Berlin, Hamburg and Munich), Greece (Thessaloniki and Athens), Bulgaria (Varna and Sofia) and several cities of Ukraine and Europe.
Odessa is served by a number of railway stations and halts, the largest of which is Odessa Holovna (Main Station), from where passenger train services connect Odessa with Warsaw, Prague, Bratislava, Vienna, Berlin, Moscow, St. Petersburg, the cities of Ukraine and many other cities of the former USSR. The city's first railway station was opened in the 1880s, however, during the Second World War, the iconic building of the main station, which had long been considered to be one of the Russian Empire's premier stations, was destroyed through enemy action. In 1952 the station was rebuilt to the designs of A Chuprina. The current station, which is characterised by its many socialist-realist architectural details and grand scale, was renovated by the state railway operator Ukrainian Railways in 2006.
In 1881 Odessa became the first city in Imperial Russia to have steam tramway lines, an innovation that came only one year after the establishment of horse tramway services in 1880 operated by the "Tramways d'Odessa", a Belgian owned company. The first metre gauge steam tramway line ran from Railway Station to Great Fontaine and the second one to Hadzhi Bey Liman. These routes were both operated by the same Belgian company. Electric tramway started to operate on 22 August 1907. Trams were imported from Germany.
The city's public transit system is currently made up of trams,[78] trolleybuses, buses and fixed-route taxis (marshrutkas). Odessa also has a cable car to Vidrada Beach,[79] and recreational ferry service. There are two routes of public transport which connect Odessa Airport with the city center: trolley-bus №14 and marshrutka №117.[80]
One additional mode of transport in Odessa is the Potemkin Stairs funicular railway, which runs between the city's Primorsky Bulvar and the sea terminal and has been in service since 1902. In 1998, after many years of neglect, the city decided to raise funds for a replacement track and cars. This project was delayed on multiple occasions but was finally completed eight years later in 2005.[81] The funicular has now become as much a part of historic Odessa as the staircase to which it runs parallel.
Odesa International Airport, which is located to the south-west of the city centre, is served by a number of airlines. The airport is also often used by citizens of neighbouring countries for whom Odessa is the nearest large city and who can travel visa-free to Ukraine.
Transit flights from the Americas, Africa, Asia, Europe and the Middle East to Odessa are offered by Ukraine International Airlines through their hub at Kiev's Boryspil International Airport. Additionally, Turkish Airlines' wide network and daily flights offer connections to more than 246 destinations worldwide.
The most popular sport in Odessa is football. The main professional football club in the city is FC Chornomorets Odesa, who play in the Ukrainian Premier League. Chornomorets play their home games at the Chornomorets Stadium, an elite-class stadium which has a maximum capacity of 34,164. The second football team in Odessa is FC Odessa.
Basketball is also a prominent sport in Odessa, with BC Odessa representing the city in the Ukrainian Basketball League, the highest-tier basketball league in Ukraine. Odessa was chosen as one of five Ukrainian cities to host the 39th European Basketball Championship in 2015.
The cyclist and aviator Sergei Utochkin was one of the most famous natives of Odessa in the years before the Russian Revolution. Chess player Efim Geller was born in the city. Gymnast Tatiana Gutsu (known as "The Painted Bird of Odessa") brought home Ukraine's first Olympic gold medal as an independent nation when she outscored the USA's Shannon Miller in the women's all-around event at the 1992 Summer Olympics in Barcelona, Spain. Figure skaters Oksana Grishuk and Evgeny Platov won the 1994 and 1998 Olympic gold medals as well as the 1994, 1995, 1996, and 1997 World Championships in ice dance. Both were born and raised in the city, though they skated at first for the Soviet Union, in the Unified Team, the Commonwealth of Independent States, and then Russia. Hennadiy Avdyeyenko won a 1988 Olympic gold medal in the high jump, setting an Olympic record at 2.38 metres (7.81 feet).
Other notable athletes:
Odessa is twinned with:[83]
Odessa has cooperated with:[84]
en/4236.html.txt
Olfaction is a chemoreception that, through the sensory olfactory system, forms the perception of smell.[1] Olfaction has many purposes, such as the detection of hazards, pheromones, and food.
Olfaction occurs when odorants bind to specific sites on olfactory receptors located in the nasal cavity.[2] Glomeruli aggregate signals from these receptors and transmit them to the olfactory bulb, where the sensory input will start to interact with parts of the brain responsible for smell identification, memory, and emotion.[3]
Olfactory dysfunction arises as the result of many different peripheral and central disturbances, including upper respiratory infections, traumatic brain injury, and neurodegenerative disease.[4][5]
Early scientific study of olfaction includes the extensive doctoral dissertation of Eleanor Gamble, published in 1898, which compared olfactory to other stimulus modalities, and implied that smell had a lower intensity discrimination.[6] As the Epicurean and atomistic Roman philosopher Lucretius (1st century BCE) speculated, different odors are attributed to different shapes and sizes of "atoms" (odor molecules in the modern understanding) that stimulate the olfactory organ [1]. A modern demonstration of that theory was the cloning of olfactory receptor proteins by Linda B. Buck and Richard Axel (who were awarded the Nobel Prize in 2004), and subsequent pairing of odor molecules to specific receptor proteins. Each odor receptor molecule recognizes only a particular molecular feature or class of odor molecules. Mammals have about a thousand genes that code for odor reception.[7] Of the genes that code for odor receptors, only a portion are functional. Humans have far fewer active odor receptor genes than other primates and other mammals.[8] In mammals, each olfactory receptor neuron expresses only one functional odor receptor.[9] Odor receptor nerve cells function like a key–lock system: if the airborne molecules of a certain chemical can fit into the lock, the nerve cell will respond. There are, at present, a number of competing theories regarding the mechanism of odor coding and perception. According to the shape theory, each receptor detects a feature of the odor molecule. The weak-shape theory, known as the odotope theory, suggests that different receptors detect only small pieces of molecules, and these minimal inputs are combined to form a larger olfactory perception (similar to the way visual perception is built up of smaller, information-poor sensations, combined and refined to create a detailed overall perception)[citation needed]. 
According to a new study, researchers have found that a functional relationship exists between molecular volume of odorants and the olfactory neural response.[10] An alternative theory, the vibration theory proposed by Luca Turin,[11][12] posits that odor receptors detect the frequencies of vibrations of odor molecules in the infrared range by quantum tunnelling. However, the behavioral predictions of this theory have been called into question.[13] There is no theory yet that explains olfactory perception completely.
In vertebrates, smells are sensed by olfactory sensory neurons in the olfactory epithelium. The olfactory epithelium is made up of at least six morphologically and biochemically different cell types.[14] The proportion of olfactory epithelium compared to respiratory epithelium (not innervated, or supplied with nerves) gives an indication of the animal's olfactory sensitivity. Humans have about 10 cm2 (1.6 sq in) of olfactory epithelium, whereas some dogs have 170 cm2 (26 sq in). A dog's olfactory epithelium is also considerably more densely innervated, with a hundred times more receptors per square centimeter.[15] The sensory olfactory system integrates with other senses to form the perception of flavor.[16] Often, land organisms will have separate olfaction systems for smell and taste (orthonasal smell and retronasal smell), but water-dwelling organisms usually have only one system.[17]
Molecules of odorants passing through the superior nasal concha of the nasal passages dissolve in the mucus that lines the superior portion of the cavity and are detected by olfactory receptors on the dendrites of the olfactory sensory neurons. This may occur by diffusion or by the binding of the odorant to odorant-binding proteins. The mucus overlying the epithelium contains mucopolysaccharides, salts, enzymes, and antibodies (these are highly important, as the olfactory neurons provide a direct passage for infection to pass to the brain). This mucus acts as a solvent for odor molecules, flows constantly, and is replaced approximately every ten minutes.
In insects, smells are sensed by olfactory sensory neurons in the chemosensory sensilla, which are present on insect antennae, palps, and tarsi, but also on other parts of the insect body. Odorants penetrate the cuticle pores of chemosensory sensilla and come into contact with insect odorant-binding proteins (OBPs) or chemosensory proteins (CSPs) before activating the sensory neurons.
The binding of the ligand (odor molecule or odorant) to the receptor leads to an action potential in the receptor neuron, via a second messenger pathway, depending on the organism. In mammals, the odorants stimulate adenylate cyclase to synthesize cAMP via a G protein called Golf. cAMP, which is the second messenger here, opens a cyclic nucleotide-gated ion channel (CNG), producing an influx of cations (largely Ca2+ with some Na+) into the cell, slightly depolarising it. The Ca2+ in turn opens a Ca2+-activated chloride channel, leading to efflux of Cl−, further depolarizing the cell and triggering an action potential. Ca2+ is then extruded through a sodium-calcium exchanger. A calcium-calmodulin complex also acts to inhibit the binding of cAMP to the cAMP-dependent channel, thus contributing to olfactory adaptation.
The main olfactory system of some mammals also contains small subpopulations of olfactory sensory neurons that detect and transduce odors somewhat differently. Olfactory sensory neurons that use trace amine-associated receptors (TAARs) to detect odors use the same second messenger signaling cascade as do the canonical olfactory sensory neurons.[18] Other subpopulations, such as those that express the receptor guanylyl cyclase GC-D (Gucy2d)[19] or the soluble guanylyl cyclase Gucy1b2,[20] use a cGMP cascade to transduce their odorant ligands.[21][22][23] These distinct subpopulations (olfactory subsystems) appear specialized for the detection of small groups of chemical stimuli.
This mechanism of transduction is somewhat unusual, in that cAMP works by directly binding to the ion channel rather than through activation of protein kinase A. It is similar to the transduction mechanism for photoreceptors, in which the second messenger cGMP works by directly binding to ion channels, suggesting that maybe one of these receptors was evolutionarily adapted into the other. There are also considerable similarities in the immediate processing of stimuli by lateral inhibition.
Averaged activity of the receptor neurons can be measured in several ways. In vertebrates, responses to an odor can be measured by an electro-olfactogram or through calcium imaging of receptor neuron terminals in the olfactory bulb. In insects, one can perform electroantennography or calcium imaging within the olfactory bulb.
Olfactory sensory neurons project axons to the brain within the olfactory nerve, (cranial nerve I). These nerve fibers, lacking myelin sheaths, pass to the olfactory bulb of the brain through perforations in the cribriform plate, which in turn projects olfactory information to the olfactory cortex and other areas.[24] The axons from the olfactory receptors converge in the outer layer of the olfactory bulb within small (≈50 micrometers in diameter) structures called glomeruli. Mitral cells, located in the inner layer of the olfactory bulb, form synapses with the axons of the sensory neurons within glomeruli and send the information about the odor to other parts of the olfactory system, where multiple signals may be processed to form a synthesized olfactory perception. A large degree of convergence occurs, with 25,000 axons synapsing on 25 or so mitral cells, and with each of these mitral cells projecting to multiple glomeruli. Mitral cells also project to periglomerular cells and granular cells that inhibit the mitral cells surrounding it (lateral inhibition). Granular cells also mediate inhibition and excitation of mitral cells through pathways from centrifugal fibers and the anterior olfactory nuclei. Neuromodulators like acetylcholine, serotonin and norepinephrine all send axons to the olfactory bulb and have been implicated in gain modulation,[25] pattern separation,[26] and memory functions,[27] respectively.
The mitral cells leave the olfactory bulb in the lateral olfactory tract, which synapses on five major regions of the cerebrum: the anterior olfactory nucleus, the olfactory tubercle, the amygdala, the piriform cortex, and the entorhinal cortex. The anterior olfactory nucleus projects, via the anterior commissure, to the contralateral olfactory bulb, inhibiting it. The piriform cortex has two major divisions with anatomically distinct organizations and functions. The anterior piriform cortex (APC) appears to be better at determining the chemical structure of the odorant molecules, and the posterior piriform cortex (PPC) has a strong role in categorizing odors and assessing similarities between odors (e.g. minty, woody, and citrus are odors that can, despite being highly variant chemicals, be distinguished via the PPC in a concentration-independent manner).[28] The piriform cortex projects to the medial dorsal nucleus of the thalamus, which then projects to the orbitofrontal cortex. The orbitofrontal cortex mediates conscious perception of the odor.[citation needed] The three-layered piriform cortex projects to a number of thalamic and hypothalamic nuclei, the hippocampus and amygdala and the orbitofrontal cortex, but its function is largely unknown. The entorhinal cortex projects to the amygdala and is involved in emotional and autonomic responses to odor. It also projects to the hippocampus and is involved in motivation and memory. Odor information is stored in long-term memory and has strong connections to emotional memory. This is possibly due to the olfactory system's close anatomical ties to the limbic system and hippocampus, areas of the brain that have long been known to be involved in emotion and place memory, respectively.
Since any one receptor is responsive to various odorants, and there is a great deal of convergence at the level of the olfactory bulb, it may seem strange that human beings are able to distinguish so many different odors. It seems that a highly complex form of processing must be occurring; however, it can be shown that, while many neurons in the olfactory bulb (and even the pyriform cortex and amygdala) are responsive to many different odors, half the neurons in the orbitofrontal cortex are responsive to only one odor, and the rest to only a few. It has been shown through microelectrode studies that each individual odor gives a particular spatial map of excitation in the olfactory bulb. It is possible that the brain is able to distinguish specific odors through spatial encoding, but temporal coding must also be taken into account. Over time, the spatial maps change, even for one particular odor, and the brain must be able to process these details as well.
Inputs from the two nostrils have separate inputs to the brain, with the result that, when each nostril takes up a different odorant, a person may experience perceptual rivalry in the olfactory sense akin to that of binocular rivalry.[29]
In insects, smells are sensed by sensilla located on the antenna and maxillary palp and first processed by the antennal lobe (analogous to the olfactory bulb), and next by the mushroom bodies and lateral horn.
Many animals, including most mammals and reptiles, but not humans[citation needed], have two distinct and segregated olfactory systems: a main olfactory system, which detects volatile stimuli, and an accessory olfactory system, which detects fluid-phase stimuli. Behavioral evidence suggests that these fluid-phase stimuli often function as pheromones, although pheromones can also be detected by the main olfactory system. In the accessory olfactory system, stimuli are detected by the vomeronasal organ, located in the vomer, between the nose and the mouth. Snakes use it to smell prey, sticking their tongue out and touching it to the organ. Some mammals make a facial expression called flehmen to direct stimuli to this organ.
The sensory receptors of the accessory olfactory system are located in the vomeronasal organ. As in the main olfactory system, the axons of these sensory neurons project from the vomeronasal organ to the accessory olfactory bulb, which in the mouse is located on the dorsal-posterior portion of the main olfactory bulb. Unlike in the main olfactory system, the axons that leave the accessory olfactory bulb do not project to the brain's cortex but rather to targets in the amygdala and bed nucleus of the stria terminalis, and from there to the hypothalamus, where they may influence aggression and mating behavior.
The MHC genes (known as HLA in humans) are a group of genes present in many animals and important for the immune system; in general, offspring from parents with differing MHC genes have a stronger immune system. Fish, mice, and female humans are able to smell some aspect of the MHC genes of potential sex partners and prefer partners with MHC genes different from their own.[30][31]
Humans can detect blood relatives from olfaction.[32] Mothers can identify by body odor their biological children but not their stepchildren. Pre-adolescent children can olfactorily detect their full siblings but not half-siblings or step siblings, and this might explain incest avoidance and the Westermarck effect.[33] Functional imaging shows that this olfactory kinship detection process involves the frontal-temporal junction, the insula, and the dorsomedial prefrontal cortex, but not the primary or secondary olfactory cortices, or the related piriform cortex or orbitofrontal cortex.[34]
The process by which olfactory information is coded in the brain to allow for proper perception is still being researched, and is not completely understood. When an odorant is detected by receptors, they in a sense break the odorant down, and then the brain puts the odorant back together for identification and perception.[35] The odorant binds to receptors that recognize only a specific functional group, or feature, of the odorant, which is why the chemical nature of the odorant is important.[36]
After binding the odorant, the receptor is activated and will send a signal to the glomeruli.[36] Each glomerulus receives signals from multiple receptors that detect similar odorant features. Because several receptor types are activated due to the different chemical features of the odorant, several glomeruli are activated as well. All of the signals from the glomeruli are then sent to the brain, where the combination of glomeruli activation encodes the different chemical features of the odorant. The brain then essentially puts the pieces of the activation pattern back together in order to identify and perceive the odorant.[36] This distributed code allows the brain to detect specific odors in mixtures of many background odors.[37]
It is a general idea that the layout of brain structures corresponds to physical features of stimuli (called topographic coding), and similar analogies have been made in olfaction with concepts such as a layout corresponding to chemical features (called chemotopy) or perceptual features.[38] While chemotopy remains a highly controversial concept,[39] evidence exists for perceptual information implemented in the spatial dimensions of olfactory networks.[38]
Although conventional wisdom and lay literature, based on impressionistic findings in the 1920s, have long presented human olfaction as capable of distinguishing between roughly 10,000 unique odors, recent research has suggested that the average individual is capable of distinguishing over one trillion unique odors.[40] Researchers in the most recent study, which tested the psychophysical responses to combinations of over 128 unique odor molecules with combinations composed of up to 30 different component molecules, noted that this estimate is "conservative" and that some subjects of their research might be capable of deciphering between a thousand trillion odorants, adding that their worst performer could probably still distinguish between 80 million scents.[41] Authors of the study concluded, "This is far more than previous estimates of distinguishable olfactory stimuli. It demonstrates that the human olfactory system, with its hundreds of different olfactory receptors, far outperforms the other senses in the number of physically different stimuli it can discriminate."[42] However, it was also noted by the authors that the ability to distinguish between smells is not analogous to being able to consistently identify them, and that subjects were not typically capable of identifying individual odor stimulants from within the odors the researchers had prepared from multiple odor molecules. In November 2014 the study was strongly criticized by Caltech scientist Markus Meister, who wrote that the study's "extravagant claims are based on errors of mathematical logic".[43][44] The logic of his paper has in turn been criticized by the authors of the original paper.[45]
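A back-of-envelope calculation shows why the stimulus space in such a study is so large: even before any discrimination testing, the number of distinct 30-component mixtures that can be drawn from a panel of 128 odor molecules is an exact binomial coefficient.

```python
# Count the distinct 30-component mixtures that can be formed from a panel
# of 128 odor molecules, as in the study design described above.
# math.comb returns the exact binomial coefficient C(128, 30).
import math

mixtures = math.comb(128, 30)

# The count is astronomically large (well over 10**28), so even coarse
# discrimination among such mixtures implies an enormous stimulus space.
print(f"C(128, 30) = {mixtures:.3e}")
```

This counts only the possible mixtures, not the odors a subject can actually tell apart; the disputed step in the study was extrapolating from pairwise discrimination performance to the total number of distinguishable stimuli.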
Different people smell different odors, and most of these differences are caused by genetic differences.[46] Although odorant receptor genes make up one of the largest gene families in the human genome, only a handful of genes have been linked conclusively to particular smells. For instance, the odorant receptor OR5A1 and its genetic variants (alleles) are responsible for our ability (or failure) to smell β-ionone, a key aroma in foods and beverages.[47] Similarly, the odorant receptor OR2J3 is associated with the ability to detect the "grassy" odor, cis-3-hexen-1-ol.[48] The preference (or dislike) of cilantro (coriander) has been linked to the olfactory receptor OR6A2.[49]
Flavor perception is an aggregation of auditory, taste, haptic, and smell sensory information.[16] Retronasal smell plays the biggest role in the sensation of flavor. During the process of mastication, the tongue manipulates food to release odorants. These odorants enter the nasal cavity during exhalation.[50] The olfaction of food has the sensation of being in the mouth because of co-activation of the motor cortex and olfactory epithelium during mastication.[16]
Olfaction, taste, and trigeminal receptors (also called chemesthesis) together contribute to flavor. The human tongue can distinguish only among five distinct qualities of taste, while the nose can distinguish among hundreds of substances, even in minute quantities. It is during exhalation that the olfaction contribution to flavor occurs, in contrast to that of proper smell, which occurs during the inhalation phase of breathing.[50] The olfactory system is the only human sense that bypasses the thalamus and connects directly to the forebrain.[51]
Olfaction and sound information has been shown to converge in the olfactory tubercles of rodents.[52] This neural convergence is proposed to give rise to a perception termed smound.[53] Whereas a flavor results from interactions between smell and taste, a smound may result from interactions between smell and sound.
The following are disorders associated with olfaction:
Scientists have devised methods for quantifying the intensity of odors, in particular for the purpose of analyzing unpleasant or objectionable odors released by an industrial source into a community. Since the 1800s, industrialized countries have encountered incidents in which proximity to an industrial source or landfill produced adverse reactions among nearby residents to airborne odor. The basic theory of odor analysis is to measure the extent of dilution with "pure" air required before the sample in question is rendered indistinguishable from the "pure" or reference standard. Since each person perceives odor differently, an "odor panel" composed of several different people is assembled, each sniffing the same sample of diluted specimen air. A field olfactometer can be used to determine the magnitude of an odor.
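A minimal sketch of the dilution-to-threshold idea follows; the panel procedure and numbers below are hypothetical illustrations, not those of any specific regulatory standard. Each panelist's detection threshold is recorded as a dilution ratio, and the panel result is conventionally summarized with a geometric mean, since dilution ratios are multiplicative quantities:

```python
from math import prod

def dilution_to_threshold(panel_thresholds):
    """Geometric mean of the dilution ratios at which each panelist can
    no longer distinguish the diluted sample from odor-free reference air.
    A higher D/T value indicates a stronger odor."""
    n = len(panel_thresholds)
    return prod(panel_thresholds) ** (1.0 / n)

# Hypothetical panel of six sniffers: volumes of clean air per volume
# of odorous air at each panelist's detection threshold.
panel = [120, 240, 180, 480, 160, 200]
dt = dilution_to_threshold(panel)
print(f"Panel odor strength: {dt:.0f} dilutions to threshold")
```

Using a geometric rather than arithmetic mean keeps one highly sensitive or insensitive panelist from dominating the result, which is why olfactometry conventions favor it.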
Many air management districts in the US have numerical standards of acceptability for the intensity of odor that is allowed to cross into a residential property. For example, the Bay Area Air Quality Management District has applied its standard in regulating numerous industries, landfills, and sewage treatment plants. Example applications this district has engaged are the San Mateo, California, wastewater treatment plant; the Shoreline Amphitheatre in Mountain View, California; and the IT Corporation waste ponds, Martinez, California.
The tendrils of plants are especially sensitive to airborne volatile organic compounds. Parasites such as dodder make use of this in locating their preferred hosts and locking on to them.[55] The emission of volatile compounds is detected when foliage is browsed by animals. Threatened plants are then able to take defensive chemical measures, such as moving tannin compounds to their foliage. (See Plant perception).
The importance and sensitivity of smell vary among different organisms; most mammals have a good sense of smell, whereas most birds do not, except the tubenoses (e.g., petrels and albatrosses), certain species of vultures, and the kiwis. However, recent analysis of the chemical composition of volatile organic compounds (VOCs) from King Penguin feathers suggests that VOCs may provide olfactory cues used by the penguins to locate their colony and recognise individuals.[56] Among mammals, the sense of smell is well developed in the carnivores and ungulates, which must always be aware of each other, and in those that smell for their food, such as moles. Having a strong sense of smell is referred to as macrosmatic.
Figures suggesting greater or lesser sensitivity in various species reflect experimental findings from the reactions of animals exposed to aromas in known extreme dilutions. These are, therefore, based on perceptions by these animals, rather than mere nasal function. That is, the brain's smell-recognizing centers must react to the stimulus detected for the animal to be said to show a response to the smell in question. It is estimated that dogs, in general, have an olfactory sense approximately ten thousand to a hundred thousand times more acute than a human's.[57] This does not mean they are overwhelmed by smells our noses can detect; rather, it means they can discern a molecular presence when it is in much greater dilution in the carrier, air.
Scenthounds as a group can smell one- to ten-million times more acutely than a human, and bloodhounds, which have the keenest sense of smell of any dogs,[citation needed] have noses ten- to one-hundred-million times more sensitive than a human's. They were bred for the specific purpose of tracking humans, and can detect a scent trail a few days old. The second-most-sensitive nose is possessed by the Basset Hound, which was bred to track and hunt rabbits and other small animals.
Bears, such as the Silvertip Grizzly found in parts of North America, have a sense of smell seven times stronger than that of the bloodhound, essential for locating food underground. Using their elongated claws, bears dig deep trenches in search of burrowing animals and nests as well as roots, bulbs, and insects. Bears can detect the scent of food from up to eighteen miles away; because of their immense size, they often scavenge new kills, driving away the predators (including packs of wolves and human hunters) in the process.
The sense of smell is less developed in the catarrhine primates, and nonexistent in cetaceans, which compensate with a well-developed sense of taste.[citation needed] In some strepsirrhines, such as the red-bellied lemur, scent glands occur atop the head. In many species, olfaction is highly tuned to pheromones; a male silkworm moth, for example, can sense a single molecule of bombykol.
Fish, too, have a well-developed sense of smell, even though they inhabit an aquatic environment. Salmon utilize their sense of smell to identify and return to their home stream waters. Catfish use their sense of smell to identify other individual catfish and to maintain a social hierarchy. Many fishes use the sense of smell to identify mating partners or to alert to the presence of food.
Since inbreeding is detrimental, it tends to be avoided. In the house mouse, the major urinary protein (MUP) gene cluster provides a highly polymorphic scent signal of genetic identity that appears to underlie kin recognition and inbreeding avoidance. Thus, there are fewer matings between mice sharing MUP haplotypes than would be expected if there were random mating.[58]
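The "fewer matings than expected" claim implies a random-mating baseline. A minimal simulation sketch (with hypothetical, equally frequent haplotypes, not data from the cited study) shows how to estimate the chance that two randomly paired diploid mice share at least one MUP haplotype; observed sharing below this baseline is the signature of inbreeding avoidance:

```python
import random

def random_mating_share_rate(n_haplotypes, trials=50_000, seed=1):
    """Fraction of randomly formed pairs in which two diploid mice
    share at least one haplotype, assuming equally frequent haplotypes."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Each diploid mouse carries two haplotypes drawn at random.
        a = {rng.randrange(n_haplotypes), rng.randrange(n_haplotypes)}
        b = {rng.randrange(n_haplotypes), rng.randrange(n_haplotypes)}
        if a & b:
            hits += 1
    return hits / trials

# The more polymorphic the locus, the rarer haplotype sharing
# becomes under random mating.
for n in (2, 10, 50):
    print(f"{n:>2} haplotypes: expected share rate {random_mating_share_rate(n):.3f}")
```

The real MUP cluster is highly polymorphic with unequal haplotype frequencies, so a study would compare observed sharing against a baseline computed from the measured frequencies rather than this uniform toy model.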
en/4237.html.txt
ADDED
Odysseus /oʊˈdɪsiːəs/ (Greek: Ὀδυσσεύς, Ὀδυσεύς, Ὀdysseús [odysse͜ús]), also known by the Latin variant Ulysses (US: /juːˈlɪsiːz/, UK: /ˈjuːlɪsiːz/; Latin: Ulyssēs, Ulixēs), is a legendary Greek king of Ithaca and the hero of Homer's epic poem the Odyssey. Odysseus also plays a key role in Homer's Iliad and other works in that same epic cycle.
Son of Laërtes and Anticlea, husband of Penelope, and father of Telemachus and Acusilaus,[1] Odysseus is renowned for his intellectual brilliance, guile, and versatility (polytropos), and is thus known by the epithet Odysseus the Cunning (Greek: μῆτις or mētis, "cunning intelligence"[2]). He is most famous for his nostos, or "homecoming", which took him ten eventful years after the decade-long Trojan War.
In Greek the name was used in various versions. Vase inscriptions show two groups of forms: Olyseus (Ὀλυσεύς), Olysseus (Ὀλυσσεύς) or Ōlysseus (Ὠλυσσεύς), and Olyteus (Ὀλυτεύς) or Olytteus (Ὀλυττεύς). The form Oulixēs (Οὐλίξης) probably dates from an early source in Magna Graecia, while a later grammarian has Oulixeus (Οὐλιξεύς).[3] In Latin, the figure was known as Ulixēs or (considered less correct) Ulyssēs. Some have supposed that "there may originally have been two separate figures, one called something like Odysseus, the other something like Ulixes, who were combined into one complex personality."[4] However, the change between d and l is common also in some Indo-European and Greek names,[5] and the Latin form is supposed to be derived from the Etruscan Uthuze (see below), which perhaps accounts for some of the phonetic innovations.
The etymology of the name is unknown. Ancient authors linked the name to the Greek verbs odussomai (ὀδύσσομαι) “to be wroth against, to hate”,[6] to oduromai (ὀδύρομαι) “to lament, bewail”,[7][8] or even to ollumi (ὄλλυμι) “to perish, to be lost”.[9][10] Homer relates it to various forms of this verb in references and puns. In Book 19 of the Odyssey, where Odysseus' early childhood is recounted, Euryclea asks the boy's grandfather Autolycus to name him. Euryclea seems to suggest a name like Polyaretos, "for he has much been prayed for" (πολυάρητος) but Autolycus "apparently in a sardonic mood" decided to give the child another name commemorative of "his own experience in life":[11] "Since I have been angered (ὀδυσσάμενος odyssamenos) with many, both men and women, let the name of the child be Odysseus".[12] Odysseus often receives the patronymic epithet Laertiades (Λαερτιάδης), "son of Laërtes". In the Iliad and Odyssey there are several further epithets used to describe Odysseus.
It has also been suggested that the name is of non-Greek origin, possibly not even Indo-European, with an unknown etymology.[13] Robert S. P. Beekes has suggested a Pre-Greek origin.[14] In Etruscan religion the name (and stories) of Odysseus were adopted under the name Uthuze (Uθuze), which has been interpreted as a parallel borrowing from a preceding Minoan form of the name (possibly *Oduze, pronounced /'ot͡θut͡se/); this theory is supposed to explain also the insecurity of the phonologies (d or l), since the affricate /t͡θ/, unknown to the Greek of that time, gave rise to different counterparts (i. e. δ or λ in Greek, θ in Etruscan).[15]
Relatively little is given of Odysseus' background other than that according to Pseudo-Apollodorus, his paternal grandfather or step-grandfather is Arcesius, son of Cephalus and grandson of Aeolus, while his maternal grandfather is the thief Autolycus, son of Hermes[16] and Chione. Hence, Odysseus was the great-grandson of the Olympian god Hermes.
According to the Iliad and Odyssey, his father is Laertes[17] and his mother Anticlea, although there was a non-Homeric tradition[18][19] that Sisyphus was his true father.[20] The rumour went that Laërtes bought Odysseus from the conniving king.[21] Odysseus is said to have a younger sister, Ctimene, who went to Same to be married and is mentioned by the swineherd Eumaeus, whom she grew up alongside, in book 15 of the Odyssey.[22]
The majority of sources for Odysseus' pre-war exploits—principally the mythographers Pseudo-Apollodorus and Hyginus—postdate Homer by many centuries. Two stories in particular are well known:
When Helen is abducted, Menelaus calls upon the other suitors to honour their oaths and help him to retrieve her, an attempt that leads to the Trojan War. Odysseus tries to avoid it by feigning lunacy, as an oracle had prophesied a long-delayed return home for him if he went. He yokes a donkey and an ox to his plow (as they have different stride lengths, hindering the efficiency of the plow) and (some modern sources add) starts sowing his fields with salt. Palamedes, at the behest of Menelaus' brother Agamemnon, seeks to disprove Odysseus' madness and places Telemachus, Odysseus' infant son, in front of the plow. Odysseus veers the plow away from his son, thus exposing his stratagem.[23] Odysseus holds a grudge against Palamedes during the war for dragging him away from his home.
Odysseus and other envoys of Agamemnon travel to Scyros to recruit Achilles because of a prophecy that Troy could not be taken without him. By most accounts, Thetis, Achilles' mother, disguises the youth as a woman to hide him from the recruiters because an oracle had predicted that Achilles would either live a long uneventful life or achieve everlasting glory while dying young. Odysseus cleverly discovers which among the women before him is Achilles when the youth is the only one of them to show interest in examining the weapons hidden among an array of adornment gifts for the daughters of their host. Odysseus arranges further for the sounding of a battle horn, which prompts Achilles to clutch a weapon and show his trained disposition. With his disguise foiled, he is exposed and joins Agamemnon's call to arms among the Hellenes.[24]
Odysseus is one of the most influential Greek champions during the Trojan War. Along with Nestor and Idomeneus he is one of the most trusted counsellors and advisors. He always champions the Achaean cause, especially when others question Agamemnon's command, as in one instance when Thersites speaks against him. When Agamemnon, to test the morale of the Achaeans, announces his intentions to depart Troy, Odysseus restores order to the Greek camp.[25] Later on, after many of the heroes leave the battlefield due to injuries (including Odysseus and Agamemnon), Odysseus once again persuades Agamemnon not to withdraw. Along with two other envoys, he is chosen in the failed embassy to try to persuade Achilles to return to combat.[26]
When Hector proposes a single combat duel, Odysseus is one of the Danaans who reluctantly volunteered to battle him. Telamonian Ajax ("The Greater"), however, is the volunteer who eventually fights Hector. Odysseus aids Diomedes during the night operations to kill Rhesus, because it had been foretold that if his horses drank from the Scamander River, Troy could not be taken.[27]
After Patroclus is slain, it is Odysseus who counsels Achilles to let the Achaean men eat and rest rather than follow his rage-driven desire to go back on the offensive—and kill Trojans—immediately. Eventually (and reluctantly), he consents. During the funeral games for Patroclus, Odysseus becomes involved in a wrestling match with Ajax "The Greater" and foot race with Ajax "The Lesser," son of Oileus and Nestor's son Antilochus. He draws the wrestling match, and with the help of the goddess Athena, he wins the race.[28]
Odysseus has traditionally been viewed as Achilles' antithesis in the Iliad:[29] while Achilles' anger is all-consuming and of a self-destructive nature, Odysseus is frequently viewed as a man of the mean, a voice of reason, renowned for his self-restraint and diplomatic skills. He is also in some respects antithetical to Telamonian Ajax (Shakespeare's "beef-witted" Ajax): while the latter has only brawn to recommend him, Odysseus is not only ingenious (as evidenced by his idea for the Trojan Horse), but an eloquent speaker, a skill perhaps best demonstrated in the embassy to Achilles in book 9 of the Iliad. The two are not only foils in the abstract but often opposed in practice since they have many duels and run-ins.
Since a prophecy suggested that the Trojan War would not be won without Achilles, Odysseus and several other Achaean leaders went to Skyros to find him. Odysseus discovered Achilles by offering gifts, adornments and musical instruments as well as weapons, to the king's daughters, and then having his companions imitate the noises of an enemy's attack on the island (most notably, making a blast of a trumpet heard), which prompted Achilles to reveal himself by picking a weapon to fight back, and together they departed for the Trojan War.[31]
The story of the death of Palamedes has many versions. According to some, Odysseus never forgives Palamedes for unmasking his feigned madness and plays a part in his downfall. One tradition says Odysseus convinces a Trojan captive to write a letter pretending to be from Palamedes. A sum of gold is mentioned to have been sent as a reward for Palamedes' treachery. Odysseus then kills the prisoner and hides the gold in Palamedes' tent. He ensures that the letter is found and acquired by Agamemnon, and also gives hints directing the Argives to the gold. This is evidence enough for the Greeks, and they have Palamedes stoned to death. Other sources say that Odysseus and Diomedes goad Palamedes into descending a well with the prospect of treasure being at the bottom. When Palamedes reaches the bottom, the two proceed to bury him with stones, killing him.[32]
When Achilles is slain in battle by Paris, it is Odysseus and Telamonian Ajax who retrieve the fallen warrior's body and armour in the thick of heavy fighting. During the funeral games for Achilles, Odysseus competes once again with Telamonian Ajax. Thetis says that the arms of Achilles will go to the bravest of the Greeks, but only these two warriors dare lay claim to that title. The two Argives became embroiled in a heavy dispute about one another's merits to receive the reward. The Greeks dither out of fear in deciding a winner, because they did not want to insult one and have him abandon the war effort. Nestor suggests that they allow the captive Trojans decide the winner.[33] The accounts of the Odyssey disagree, suggesting that the Greeks themselves hold a secret vote.[34] In any case, Odysseus is the winner. Enraged and humiliated, Ajax is driven mad by Athena. When he returns to his senses, in shame at how he has slaughtered livestock in his madness, Ajax kills himself by the sword that Hector had given him after their duel.[35]
Together with Diomedes, Odysseus fetches Achilles' son, Pyrrhus, to come to the aid of the Achaeans, because an oracle had stated that Troy could not be taken without him. A great warrior, Pyrrhus is also called Neoptolemus (Greek for "new warrior"). Upon the success of the mission, Odysseus gives Achilles' armour to him.
It is learned that the war cannot be won without the poisonous arrows of Heracles, which are owned by the abandoned Philoctetes. Odysseus and Diomedes (or, according to some accounts, Odysseus and Neoptolemus) leave to retrieve them. Upon their arrival, Philoctetes (still suffering from the wound) is still enraged at the Danaans, especially at Odysseus, for abandoning him. Although his first instinct is to shoot Odysseus, his anger is eventually defused by Odysseus' persuasive powers and the influence of the gods. Odysseus returns to the Argive camp with Philoctetes and his arrows.[36]
Perhaps Odysseus' most famous contribution to the Greek war effort is devising the stratagem of the Trojan Horse, which allows the Greek army to sneak into Troy under cover of darkness. It is built by Epeius and filled with Greek warriors, led by Odysseus.[37] Odysseus and Diomedes steal the Palladium that lay within Troy's walls, for the Greeks were told they could not sack the city without it. Some late Roman sources indicate that Odysseus schemed to kill his partner on the way back, but Diomedes thwarts this attempt.
Homer's Iliad and Odyssey portray Odysseus as a culture hero, but the Romans, who believed themselves the heirs of Prince Aeneas of Troy, considered him a villainous falsifier. In Virgil's Aeneid, written between 29 and 19 BC, he is constantly referred to as "cruel Odysseus" (Latin dirus Ulixes) or "deceitful Odysseus" (pellacis, fandi fictor). Turnus, in Aeneid, book 9, reproaches the Trojan Ascanius with images of rugged, forthright Latin virtues, declaring (in John Dryden's translation), "You shall not find the sons of Atreus here, nor need the frauds of sly Ulysses fear." While the Greeks admired his cunning and deceit, these qualities did not recommend themselves to the Romans, who possessed a rigid sense of honour. In Euripides' tragedy Iphigenia at Aulis, having convinced Agamemnon to consent to the sacrifice of his daughter, Iphigenia, to appease the goddess Artemis, Odysseus facilitates the immolation by telling Iphigenia's mother, Clytemnestra, that the girl is to be wed to Achilles. Odysseus' attempts to avoid his sacred oath to defend Menelaus and Helen offended Roman notions of duty, and the many stratagems and tricks that he employed to get his way offended Roman notions of honour.
Odysseus is probably best known as the eponymous hero of the Odyssey. This epic describes his travails, which lasted for 10 years, as he tries to return home after the Trojan War and reassert his place as rightful king of Ithaca.
On the way home from Troy, after a raid on Ismarus in the land of the Cicones, he and his twelve ships are driven off course by storms. They visit the lethargic Lotus-Eaters and are captured by the Cyclops Polyphemus while visiting his island. After Polyphemus eats several of his men, Odysseus tells him that his name is "Nobody". Odysseus offers the Cyclops strong wine, and Polyphemus drinks it and falls asleep. Odysseus and his men then sharpen a wooden stake, harden its point in the fire, and blind him. Polyphemus cries out in pain, and when the other Cyclopes ask him what is wrong, he shouts, "Nobody has blinded me!", so they conclude that he has gone mad. Odysseus and his crew escape, but Odysseus rashly reveals his real name, and Polyphemus prays to Poseidon, his father, to take revenge. They next stay with Aeolus, the master of the winds, who gives Odysseus a leather bag containing all the winds except the west wind, a gift that should have ensured a safe return home. However, the sailors foolishly open the bag while Odysseus sleeps, thinking that it contains gold. All of the winds fly out, and the resulting storm drives the ships back the way they had come, just as Ithaca comes into sight.
After pleading in vain with Aeolus to help them again, they re-embark and encounter the cannibalistic Laestrygonians. Odysseus' ship is the only one to escape. He sails on and visits the witch-goddess Circe. She turns half of his men into swine after feeding them cheese and wine. Hermes warns Odysseus about Circe and gives him a drug called moly, which resists Circe's magic. Circe, being attracted to Odysseus' resistance, falls in love with him and releases his men. Odysseus and his crew remain with her on the island for one year, while they feast and drink. Finally, Odysseus' men convince him to leave for Ithaca.
Guided by Circe's instructions, Odysseus and his crew cross the ocean and reach a harbor at the western edge of the world, where Odysseus sacrifices to the dead and summons the spirit of the old prophet Tiresias for advice. Next Odysseus meets the spirit of his own mother, who had died of grief during his long absence. From her, he learns for the first time news of his own household, threatened by the greed of Penelope's suitors. Odysseus also talks to his fallen war comrades and the mortal shade of Heracles.
Odysseus and his men return to Circe's island, and she advises them on the remaining stages of the journey. They skirt the land of the Sirens and pass between the six-headed monster Scylla and the whirlpool Charybdis, rowing directly between the two; even so, Scylla snatches six of the men and devours them.
They land on the island of Thrinacia. There, Odysseus' men ignore the warnings of Tiresias and Circe and hunt down the sacred cattle of the sun god Helios. Helios tells Zeus what happened and demands Odysseus' men be punished or else he will take the sun and shine it in the Underworld. Zeus fulfills Helios' demands by causing a shipwreck during a thunderstorm in which all but Odysseus drown. He washes ashore on the island of Ogygia, where Calypso compels him to remain as her lover for seven years. He finally escapes when Hermes tells Calypso to release Odysseus.
Odysseus is shipwrecked and befriended by the Phaeacians. After he tells them his story, the Phaeacians, led by King Alcinous, agree to help Odysseus get home. They deliver him at night, while he is fast asleep, to a hidden harbor on Ithaca. He finds his way to the hut of one of his own former slaves, the swineherd Eumaeus, and also meets up with Telemachus returning from Sparta. Athena disguises Odysseus as a wandering beggar to learn how things stand in his household.
When the disguised Odysseus returns after 20 years, he is recognized only by his faithful dog, Argos. Penelope announces in her long interview with the disguised hero that whoever can string Odysseus' rigid bow and shoot an arrow through twelve axe shafts may have her hand. According to Bernard Knox, "For the plot of the Odyssey, of course, her decision is the turning point, the move that makes possible the long-predicted triumph of the returning hero".[38] Odysseus' identity is discovered by the housekeeper, Eurycleia, as she is washing his feet and discovers an old scar Odysseus received during a boar hunt. Odysseus swears her to secrecy, threatening to kill her if she tells anyone.
When the contest of the bow begins, none of the suitors is able to string the bow. After all the suitors have given up, the disguised Odysseus asks to participate. Though the suitors refuse at first, Penelope intervenes and allows the "stranger" (the disguised Odysseus) to participate. Odysseus easily strings his bow and wins the contest. Having done so, he proceeds to slaughter the suitors (beginning with Antinous, whom he finds drinking from Odysseus' cup) with help from Telemachus and two of Odysseus' servants, Eumaeus the swineherd and Philoetius the cowherd. Odysseus tells the serving women who slept with the suitors to clean up the mess of corpses and then has those women hanged. He tells Telemachus that he will replenish his stocks by raiding nearby islands. Odysseus has now revealed himself in all his glory (with a little makeover by Athena); yet Penelope cannot believe that her husband has really returned, fearing that it is perhaps some god in disguise, as in the story of Alcmene (mother of Heracles), and tests him by ordering her servant Euryclea to move the bed in their wedding-chamber. Odysseus protests that this cannot be done, since he made the bed himself and knows that one of its legs is a living olive tree. Penelope finally accepts that he truly is her husband, a moment that highlights their homophrosýnē ("like-mindedness").
The next day Odysseus and Telemachus visit the country farm of his old father Laërtes. The citizens of Ithaca follow Odysseus on the road, planning to avenge the killing of the Suitors, their sons. The goddess Athena intervenes and persuades both sides to make peace.
Odysseus is one of the most recurrent characters in Western culture.
According to some late sources, most of them purely genealogical, Odysseus had many other children besides Telemachus, the most famous being:
Most such genealogies aimed to link Odysseus with the foundation of many Italic cities in remote antiquity.
He figures in the end of the story of King Telephus of Mysia.
The supposed last poem in the Epic Cycle is called the Telegony and is thought to tell the story of Odysseus' last voyage, and of his death at the hands of Telegonus, his son with Circe. The poem, like the others of the cycle, is "lost" in that no authentic version has been discovered.
In 5th century BC Athens, tales of the Trojan War were popular subjects for tragedies. Odysseus figures centrally or indirectly in a number of the extant plays by Aeschylus, Sophocles (Ajax, Philoctetes) and Euripides (Hecuba, Rhesus, Cyclops) and figured in still more that have not survived. In his Ajax, Sophocles portrays Odysseus as a modern voice of reasoning compared to the title character's rigid antiquity.
Plato in his dialogue Hippias Minor examines a literary question about whom Homer intended to portray as the better man, Achilles or Odysseus.
As Ulysses, he is mentioned regularly in Virgil's Aeneid written between 29 and 19 BC, and the poem's hero, Aeneas, rescues one of Ulysses' crew members who was left behind on the island of the Cyclopes. He in turn offers a first-person account of some of the same events Homer relates, in which Ulysses appears directly. Virgil's Ulysses typifies his view of the Greeks: he is cunning but impious, and ultimately malicious and hedonistic.
Ovid retells parts of Ulysses' journeys, focusing on his romantic involvements with Circe and Calypso, and recasts him as, in Harold Bloom's phrase, "one of the great wandering womanizers." Ovid also gives a detailed account of the contest between Ulysses and Ajax for the armour of Achilles.
Greek legend tells of Ulysses as the founder of Lisbon, Portugal, calling it Ulisipo or Ulisseya, during his twenty years of wanderings on the Mediterranean and Atlantic seas. Olisipo was Lisbon's name in the Roman Empire. This folk etymology is recounted by Strabo (based on Asclepiades of Myrleia's words), by Pomponius Mela, and by Gaius Julius Solinus (3rd century AD), and was later taken up by Camões in his epic poem Os Lusíadas (first printed in 1572).[citation needed]
Dante Alighieri, in the Canto XXVI of the Inferno segment of his Divine Comedy (1308–1320), encounters Odysseus ("Ulisse" in Italian) near the very bottom of Hell: with Diomedes, he walks wrapped in flame in the eighth ring (Counselors of Fraud) of the Eighth Circle (Sins of Malice), as punishment for his schemes and conspiracies that won the Trojan War. In a famous passage, Dante has Odysseus relate a different version of his voyage and death from the one told by Homer. He tells how he set out with his men from Circe's island for a journey of exploration to sail beyond the Pillars of Hercules and into the Western sea to find what adventures awaited them. Men, says Ulisse, are not made to live like brutes, but to follow virtue and knowledge.[40]
After travelling west and south for five months, they see in the distance a great mountain rising from the sea (this is Purgatory, in Dante's cosmology) before a storm sinks them. Dante did not have access to the original Greek texts of the Homeric epics, so his knowledge of their subject matter was based only on information from later sources, chiefly Virgil's Aeneid but also Ovid; hence the discrepancy between Dante and Homer.
He appears in Shakespeare's Troilus and Cressida (1602), set during the Trojan War.
In her poem "Site of the Castle of Ulysses" (published in 1836), Letitia Elizabeth Landon gives her version of The Song of the Sirens, with an explanation of its purpose, structure and meaning.
Alfred, Lord Tennyson's poem "Ulysses" (published in 1842) presents an aging king who has seen too much of the world to be happy sitting on a throne idling his days away. Leaving the task of civilizing his people to his son, he gathers together a band of old comrades "to sail beyond the sunset".
Frederick Rolfe's The Weird of the Wanderer (1912) has the hero Nicholas Crabbe (based on the author) travelling back in time, discovering that he is the reincarnation of Odysseus, marrying Helen, being deified and ending up as one of the three Magi.
James Joyce's novel Ulysses (first published 1918–1920) uses modern literary devices to narrate a single day in the life of a Dublin businessman named Leopold Bloom. Bloom's day turns out to bear many elaborate parallels to Odysseus' ten years of wandering.
In Virginia Woolf's response novel Mrs Dalloway (1925) the comparable character is Clarisse Dalloway, who also appears in The Voyage Out (1915) and several short stories.
Nikos Kazantzakis' The Odyssey: A Modern Sequel (1938), a 33,333 line epic poem, begins with Odysseus cleansing his body of the blood of Penelope's suitors. Odysseus soon leaves Ithaca in search of new adventures. Before his death he abducts Helen, incites revolutions in Crete and Egypt, communes with God, and meets representatives of such famous historical and literary figures as Vladimir Lenin, Don Quixote and Jesus.
Return to Ithaca (1946) by Eyvind Johnson is a more realistic retelling of the events that adds a deeper psychological study of the characters of Odysseus, Penelope, and Telemachus. Thematically, it uses Odysseus' backstory and struggle as a metaphor for dealing with the aftermath of war (the novel being written immediately after the end of the Second World War).
Odysseus is the hero of The Luck of Troy (1961) by Roger Lancelyn Green, whose title refers to the theft of the Palladium.
In 1986, Irish poet Eiléan Ní Chuilleanáin published "The Second Voyage", a poem in which she makes use of the story of Odysseus.
In S. M. Stirling's Island in the Sea of Time (1998), first part to his Nantucket series of alternate history novels, Odikweos ("Odysseus" in Mycenaean Greek) is a 'historical' figure who is every bit as cunning as his legendary self and is one of the few Bronze Age inhabitants who discerns the time-travellers' real background. Odikweos first aids William Walker's rise to power in Achaea and later helps bring Walker down after seeing his homeland turn into a police state.
The Penelopiad (2005) by Margaret Atwood retells his story from the point of view of his wife Penelope.
The literary theorist Núria Perpinyà conceived twenty different interpretations of the Odyssey in a 2008 study.[41]
Odysseus is also a character in David Gemmell's Troy trilogy (2005–2007), in which he is a good friend and mentor of Helikaon. He is known as the ugly king of Ithaka. His marriage with Penelope was arranged, but they grew to love each other. He is also a famous storyteller, known to exaggerate his stories and heralded as the greatest storyteller of his age. This is used as a plot device to explain the origins of such myths as those of Circe and the Gorgons. In the series, he is fairly old and an unwilling ally of Agamemnon.
In Madeline Miller's The Song of Achilles (2011), a retelling of the Trojan War and of the life of Patroclus and his romance with Achilles, Odysseus is a major character with much the same role he had in Homer's Iliad, though it is expanded upon. Miller's Circe (2018) tells of Odysseus's visit to Circe's island from Circe's point of view, the birth of their son Telegonus, and Odysseus' accidental death when Telegonus travels to Ithaca to meet him.
The actors who have portrayed Odysseus in feature films include Kirk Douglas in the Italian Ulysses (1955), John Drew Barrymore in The Trojan Horse (1961), Piero Lulli in The Fury of Achilles (1962), and Sean Bean in Troy (2004).
In TV miniseries he has been played by Bekim Fehmiu in L'Odissea (1968), Armand Assante in The Odyssey (1997), and by Joseph Mawle in Troy: Fall of a City (2018).
Ulysses 31 is a French-Japanese animated television series (1981) that updates the Greek mythology of Odysseus to the 31st century.[42]
Joel and Ethan Coen's film O Brother, Where Art Thou? (2000) is loosely based on the Odyssey. However, the Coens have stated that they had never read the epic. George Clooney plays Ulysses Everett McGill, leading a group of escapees from a chain gang through an adventure in search of the proceeds of an armoured truck heist. On their voyage, the gang encounter—amongst other characters—a trio of Sirens and a one-eyed bible salesman.
The British group Cream recorded the song "Tales of Brave Ulysses" in 1967. Suzanne Vega's song "Calypso", from her 1987 album Solitude Standing, shows Odysseus from Calypso's point of view, telling the tale of his arrival on the island and his departure.
Rolf Riehm composed an opera based on the myth, Sirenen – Bilder des Begehrens und des Vernichtens (Sirens – Images of Desire and Destruction) which premiered at the Oper Frankfurt in 2014.
Over time, comparisons between Odysseus and other heroes of different mythologies and religions have been made.
A similar story exists in Hindu mythology with Nala and Damayanti where Nala separates from Damayanti and is reunited with her.[43] The story of stringing a bow is similar to the description in Ramayana of Rama stringing the bow to win Sita's hand in marriage.[44]
The Aeneid tells the story of Aeneas and his travels to what would become Rome. On his journey he endures strife comparable to that of Odysseus. However, their motives differ: Aeneas was driven by a sense of duty granted to him by the gods and by concern for the future of his people, fitting for the future father of Rome.
Strabo writes that on the island of Meninx (Ancient Greek: Μῆνιγξ), modern Djerba in Tunisia, there was an altar of Odysseus.[45]
Prince Odysseas-Kimon of Greece and Denmark (born 2004) is the grandson of the deposed Greek king, Constantine II.
en/4238.html.txt
ADDED
A tropical cyclone is a rapidly rotating storm system characterized by a low-pressure center, a closed low-level atmospheric circulation, strong winds, and a spiral arrangement of thunderstorms that produce heavy rain or squalls. Depending on its location and strength, a tropical cyclone is referred to by different names, including hurricane (/ˈhʌrɪkən, -keɪn/),[1][2][3] typhoon (/taɪˈfuːn/), tropical storm, cyclonic storm, tropical depression, and simply cyclone.[4] A hurricane is a tropical cyclone that occurs in the Atlantic Ocean and northeastern Pacific Ocean, and a typhoon occurs in the northwestern Pacific Ocean; in the south Pacific or Indian Ocean, comparable storms are referred to simply as "tropical cyclones" or "severe cyclonic storms".[4]
"Tropical" refers to the geographical origin of these systems, which form almost exclusively over tropical seas. "Cyclone" refers to their winds moving in a circle,[5] whirling round their central clear eye, with their winds blowing counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere. The opposite direction of circulation is due to the Coriolis effect. Tropical cyclones typically form over large bodies of relatively warm water. They derive their energy through the evaporation of water from the ocean surface, which ultimately recondenses into clouds and rain when moist air rises and cools to saturation. This energy source differs from that of mid-latitude cyclonic storms, such as nor'easters and European windstorms, which are fueled primarily by horizontal temperature contrasts. Tropical cyclones are typically between 100 and 2,000 km (62 and 1,243 mi) in diameter.
The strong rotating winds of a tropical cyclone are a result of the conservation of angular momentum imparted by the Earth's rotation as air flows inwards toward the axis of rotation. As a result, they rarely form within 5° of the equator.[6] Tropical cyclones are almost unknown in the South Atlantic due to a consistently strong wind shear and a weak Intertropical Convergence Zone.[7] Conversely, the African easterly jet and areas of atmospheric instability give rise to cyclones in the Atlantic Ocean and Caribbean Sea, while cyclones near Australia owe their genesis to the Asian monsoon and Western Pacific Warm Pool.
Coastal regions are particularly vulnerable to the impact of a tropical cyclone, compared to inland regions. The primary energy source for these storms is warm ocean waters. These storms are therefore typically strongest when over or near water, and weaken quite rapidly over land. Coastal damage may be caused by strong winds and rain, high waves (due to winds), storm surges (due to wind and severe pressure changes), and the potential of spawning tornadoes. Tropical cyclones also draw in air from a large area—which can be a vast area for the most severe cyclones—and concentrate that air's water content (made up from atmospheric moisture and moisture evaporated from water) into precipitation over a much smaller area. This continual replacement of moisture-bearing air by new moisture-bearing air after its moisture has fallen as rain, may cause multi-hour or multi-day extremely heavy rain up to 40 kilometers (25 mi) from the coastline, far beyond the amount of water that the local atmosphere holds at any one time. This in turn can lead to river flooding, overland flooding, and a general overwhelming of local man-made water control structures across a large area.
Though their effects on human populations are often devastating, tropical cyclones can relieve drought conditions. They also carry heat energy away from the tropics and transport it toward temperate latitudes, which may play an important role in modulating regional and global climate.
Tropical cyclones are areas of relatively low pressure in the troposphere, with the largest pressure perturbations occurring at low altitudes near the surface. On Earth, the pressures recorded at the centers of tropical cyclones are among the lowest ever observed at sea level.[8] The environment near the center of tropical cyclones is warmer than the surroundings at all altitudes, thus they are characterized as "warm core" systems.[9]
The near-surface wind field of a tropical cyclone is characterized by air rotating rapidly around a center of circulation while also flowing radially inwards. At the outer edge of the storm, air may be nearly calm; however, due to the Earth's rotation, the air has non-zero absolute angular momentum. As air flows radially inward, it begins to rotate cyclonically (counter-clockwise in the Northern Hemisphere, and clockwise in the Southern Hemisphere) to conserve angular momentum. At an inner radius, air begins to ascend to the top of the troposphere. This radius is typically coincident with the inner radius of the eyewall, and has the strongest near-surface winds of the storm; consequently, it is known as the radius of maximum winds.[10] Once aloft, air flows away from the storm's center, producing a shield of cirrus clouds.[11]
The previously mentioned processes result in a nearly axisymmetric wind field: Wind speeds are low at the center, increase rapidly moving outwards to the radius of maximum winds, and then decay more gradually with radius to large radii. However, the wind field often exhibits additional spatial and temporal variability due to the effects of localized processes, such as thunderstorm activity and horizontal flow instabilities. In the vertical direction, winds are strongest near the surface and decay with height within the troposphere.[12]
At the center of a mature tropical cyclone, air sinks rather than rises. For a sufficiently strong storm, air may sink over a layer deep enough to suppress cloud formation, thereby creating a clear "eye". Weather in the eye is normally calm and free of clouds, although the sea may be extremely violent.[13] The eye is normally circular and is typically 30–65 km (19–40 mi) in diameter, though eyes as small as 3 km (1.9 mi) and as large as 370 km (230 mi) have been observed.[14][15]
The cloudy outer edge of the eye is called the "eyewall". The eyewall typically expands outward with height, resembling an arena football stadium; this phenomenon is sometimes referred to as the "stadium effect".[15] The eyewall is where the greatest wind speeds are found, air rises most rapidly, clouds reach their highest altitude, and precipitation is the heaviest. The heaviest wind damage occurs where a tropical cyclone's eyewall passes over land.[13]
In a weaker storm, the eye may be obscured by the central dense overcast, which is the upper-level cirrus shield that is associated with a concentrated area of strong thunderstorm activity near the center of a tropical cyclone.[16]
The eyewall may vary over time in the form of eyewall replacement cycles, particularly in intense tropical cyclones. Outer rainbands can organize into an outer ring of thunderstorms that slowly moves inward, which is believed to rob the primary eyewall of moisture and angular momentum. When the primary eyewall weakens, the tropical cyclone weakens temporarily. The outer eyewall eventually replaces the primary one at the end of the cycle, at which time the storm may return to its original intensity.[17]
On occasion, tropical cyclones may undergo a process known as rapid intensification, a period in which the maximum sustained winds of a tropical cyclone increase by 30 knots or more within 24 hours.[18] For rapid intensification to occur, several conditions must be in place. Water temperatures must be extremely high (near or above 30 °C, 86 °F), and water of this temperature must be sufficiently deep that waves do not upwell cooler water to the surface. Wind shear must be low; when wind shear is high, the convection and circulation in the cyclone will be disrupted. Usually, an anticyclone in the upper layers of the troposphere above the storm must be present as well—for extremely low surface pressures to develop, air must be rising very rapidly in the eyewall of the storm, and an upper-level anticyclone helps channel this air away from the cyclone efficiently.[19]
There are a variety of metrics commonly used to measure storm size. The most common metrics include the radius of maximum wind, the radius of 34-knot wind (i.e. gale force), the radius of outermost closed isobar (ROCI), and the radius of vanishing wind.[21][22] An additional metric is the radius at which the cyclone's relative vorticity field decreases to 1×10−5 s−1.[15]
On Earth, tropical cyclones span a large range of sizes, from 100–2,000 kilometres (62–1,243 mi) as measured by the radius of vanishing wind. They are largest on average in the northwest Pacific Ocean basin and smallest in the northeastern Pacific Ocean basin.[23] If the radius of outermost closed isobar is less than two degrees of latitude (222 km (138 mi)), then the cyclone is "very small" or a "midget". A radius of 3–6 latitude degrees (333–670 km (207–416 mi)) is considered "average sized". "Very large" tropical cyclones have a radius of greater than 8 degrees (888 km (552 mi)).[20] Observations indicate that size is only weakly correlated to variables such as storm intensity (i.e. maximum wind speed), radius of maximum wind, latitude, and maximum potential intensity.[22][23]
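To make these thresholds concrete, here is a minimal sketch applying the ROCI categories quoted above; the helper function and the conversion of 1 degree of latitude to roughly 111 km are assumptions for illustration, not part of the article:

```python
# Hypothetical helper (illustrative only) applying the ROCI size categories
# described in the text; assumes 1 degree of latitude ~ 111 km.
KM_PER_DEGREE = 111.0

def classify_by_roci(roci_km: float) -> str:
    """Classify a tropical cyclone by radius of outermost closed isobar (km)."""
    deg = roci_km / KM_PER_DEGREE
    if deg < 2:
        return "very small / midget"
    if 3 <= deg <= 6:
        return "average sized"
    if deg > 8:
        return "very large"
    return "between categories"

print(classify_by_roci(200))   # very small / midget
print(classify_by_roci(500))   # average sized
print(classify_by_roci(1000))  # very large
```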
Size plays an important role in modulating damage caused by a storm. All else equal, a larger storm will impact a larger area for a longer period of time. Additionally, a larger near-surface wind field can generate higher storm surge due to the combination of longer wind fetch, longer duration, and enhanced wave setup.[24]
The upper circulation of strong hurricanes extends into the tropopause of the atmosphere, which at low latitudes is 15,000–18,000 metres (50,000–60,000 ft).[25]
The three-dimensional wind field in a tropical cyclone can be separated into two components: a "primary circulation" and a "secondary circulation". The primary circulation is the rotational part of the flow; it is purely circular. The secondary circulation is the overturning (in-up-out-down) part of the flow; it is in the radial and vertical directions. The primary circulation is larger in magnitude, dominating the surface wind field, and is responsible for the majority of the damage a storm causes, while the secondary circulation is slower but governs the energetics of the storm.
A tropical cyclone's primary energy source is heat from the evaporation of water from the surface of a warm ocean, previously heated by sunshine. The energetics of the system may be idealized as an atmospheric Carnot heat engine.[27] First, inflowing air near the surface acquires heat primarily via evaporation of water (i.e. latent heat) at the temperature of the warm ocean surface (during evaporation, the ocean cools and the air warms). Second, the warmed air rises and cools within the eyewall while conserving total heat content (latent heat is simply converted to sensible heat during condensation). Third, air outflows and loses heat via infrared radiation to space at the temperature of the cold tropopause. Finally, air subsides and warms at the outer edge of the storm while conserving total heat content. The first and third legs are nearly isothermal, while the second and fourth legs are nearly isentropic. This in-up-out-down overturning flow is known as the secondary circulation. The Carnot perspective provides an upper bound on the maximum wind speed that a storm can attain.
Scientists estimate that a tropical cyclone releases heat energy at the rate of 50 to 200 exajoules (1018 J) per day,[28] equivalent to about 1 PW (1015 watt). This rate of energy release is equivalent to 70 times the world energy consumption of humans and 200 times the worldwide electrical generating capacity, or to exploding a 10-megaton nuclear bomb every 20 minutes.[28][29]
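A quick back-of-envelope conversion (a sketch, not from the article) confirms that 50–200 exajoules per day is indeed on the order of a petawatt:

```python
# Convert the quoted heat-release rate (50-200 EJ/day) into petawatts.
SECONDS_PER_DAY = 86_400

def ej_per_day_to_pw(ej_per_day: float) -> float:
    """Convert an energy release rate from exajoules/day to petawatts."""
    return ej_per_day * 1e18 / SECONDS_PER_DAY / 1e15

low, high = ej_per_day_to_pw(50), ej_per_day_to_pw(200)
print(f"{low:.2f} PW to {high:.2f} PW")  # roughly 0.58 to 2.31 PW
```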
The primary rotating flow in a tropical cyclone results from the conservation of angular momentum by the secondary circulation. Absolute angular momentum on a rotating planet M is given by

M = (1/2) f r^2 + v r

where f is the Coriolis parameter, v is the azimuthal (i.e. rotating) wind speed, and r is the radius to the axis of rotation. The first term on the right-hand side is the component of planetary angular momentum that projects onto the local vertical (i.e. the axis of rotation). The second term is the relative angular momentum of the circulation itself with respect to the axis of rotation. Because the planetary angular momentum term vanishes at the equator (where f = 0), tropical cyclones rarely form within 5° of the equator.[6][30]
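A minimal sketch of this spin-up, for an idealized parcel that fully conserves M = (1/2) f r^2 + v r while flowing inward (all numbers here are illustrative assumptions, not values from the article):

```python
import math

# Idealized angular-momentum conservation: nearly calm air 500 km from the
# center, brought inward to 50 km, keeping M = 0.5*f*r^2 + v*r constant.
OMEGA = 7.292e-5                             # Earth's rotation rate [rad/s]
f = 2 * OMEGA * math.sin(math.radians(20))   # Coriolis parameter at 20 deg N

def absolute_angular_momentum(v: float, r: float) -> float:
    """M = 0.5*f*r^2 + v*r for azimuthal wind v [m/s] at radius r [m]."""
    return 0.5 * f * r**2 + v * r

def azimuthal_wind(M: float, r: float) -> float:
    """Invert M = 0.5*f*r^2 + v*r for v at a new radius r."""
    return (M - 0.5 * f * r**2) / r

M = absolute_angular_momentum(v=1.0, r=500e3)  # nearly calm air 500 km out
v_inner = azimuthal_wind(M, 50e3)              # same parcel at 50 km radius
print(f"wind at 50 km if M were fully conserved: {v_inner:.0f} m/s")
```

Full conservation gives an unrealistically large speed (over 130 m/s here); in a real storm surface friction removes angular momentum on the way in, so actual winds are far weaker.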
As air flows radially inward at low levels, it begins to rotate cyclonically in order to conserve angular momentum. Similarly, as rapidly rotating air flows radially outward near the tropopause, its cyclonic rotation decreases and ultimately changes sign at large enough radius, resulting in an upper-level anti-cyclone. The result is a vertical structure characterized by a strong cyclone at low levels and a strong anti-cyclone near the tropopause; from thermal wind balance, this corresponds to a system that is warmer at its center than in the surrounding environment at all altitudes (i.e. "warm-core"). From hydrostatic balance, the warm core translates to lower pressure at the center at all altitudes, with the maximum pressure drop located at the surface.[12]
Due to surface friction, the inflow only partially conserves angular momentum. Thus, the sea surface lower boundary acts as both a source (evaporation) and sink (friction) of energy for the system. This fact leads to the existence of a theoretical upper bound on the strongest wind speed that a tropical cyclone can attain. Because evaporation increases linearly with wind speed (just as climbing out of a pool feels much colder on a windy day), there is a positive feedback on energy input into the system known as the Wind-Induced Surface Heat Exchange (WISHE) feedback.[27] This feedback is offset when frictional dissipation, which increases with the cube of the wind speed, becomes sufficiently large. This upper bound is called the "maximum potential intensity", v_p, and is given by

v_p^2 = (T_s − T_o)/T_o · (C_k/C_d) · Δk

where T_s is the temperature of the sea surface, T_o is the temperature of the outflow ([K]), Δk is the enthalpy difference between the surface and the overlying air ([J/kg]), and C_k and C_d are the surface exchange coefficients (dimensionless) of enthalpy and momentum, respectively.[31] The surface-air enthalpy difference is taken as Δk = k_s^* − k, where k_s^* is the saturation enthalpy of air at sea surface temperature and sea-level pressure and k is the enthalpy of boundary layer air overlying the surface.
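As a rough numerical illustration of the maximum potential intensity formula, using typical tropical values (every number below is an assumption chosen for illustration, not taken from the article):

```python
# Evaluate v_p**2 = (T_s - T_o)/T_o * (C_k/C_d) * delta_k with typical values.
T_s = 300.0        # sea surface temperature [K] (~27 °C), assumed
T_o = 200.0        # outflow temperature near the tropopause [K], assumed
Ck_over_Cd = 0.9   # ratio of exchange coefficients (dimensionless), assumed
delta_k = 5.0e3    # surface-air enthalpy difference [J/kg], assumed

v_p = ((T_s - T_o) / T_o * Ck_over_Cd * delta_k) ** 0.5
print(f"maximum potential intensity: {v_p:.0f} m/s")  # ~47 m/s, hurricane strength
```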
The maximum potential intensity is predominantly a function of the background environment alone (i.e. without a tropical cyclone), and thus this quantity can be used to determine which regions on Earth can support tropical cyclones of a given intensity, and how these regions may evolve in time.[32][33] Specifically, the maximum potential intensity has three components, but its variability in space and time is due predominantly to the variability in the surface-air enthalpy difference component Δk.
A tropical cyclone may be viewed as a heat engine that converts input heat energy from the surface into mechanical energy that can be used to do mechanical work against surface friction. At equilibrium, the rate of net energy production in the system must equal the rate of energy loss due to frictional dissipation at the surface, i.e. W_in = W_out.
The rate of energy loss per unit surface area from surface friction, W_out, is given by

W_out = C_d ρ |u|^3

where ρ is the density of near-surface air ([kg/m^3]) and |u| is the near-surface wind speed ([m/s]).
The rate of energy production per unit surface area, W_in, is given by

W_in = ε Q_in

where ε is the heat engine efficiency and Q_in is the total rate of heat input into the system per unit surface area. Given that a tropical cyclone may be idealized as a Carnot heat engine, the Carnot heat engine efficiency is given by

ε = (T_s − T_o)/T_s
Heat (enthalpy) per unit mass is given by

k = C_p T + L_v q

where C_p is the heat capacity of air, T is air temperature, L_v is the latent heat of vaporization, and q is the concentration of water vapor. The first component corresponds to sensible heat and the second to latent heat.
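Evaluated for typical near-surface tropical air (the numbers below are illustrative assumptions), the two components compare as follows:

```python
# Moist enthalpy per unit mass, k = C_p*T + L_v*q, for warm, humid surface air.
C_p = 1005.0    # heat capacity of dry air [J/(kg K)], assumed constant
L_v = 2.5e6     # latent heat of vaporization [J/kg], assumed constant

def enthalpy(T: float, q: float) -> float:
    """Moist enthalpy [J/kg]: sensible (C_p*T) plus latent (L_v*q)."""
    return C_p * T + L_v * q

T, q = 300.0, 0.018          # 300 K air carrying 18 g/kg of water vapor
k = enthalpy(T, q)
sensible, latent = C_p * T, L_v * q
print(f"k = {k:.0f} J/kg  (sensible {sensible:.0f}, latent {latent:.0f})")
```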
There are two sources of heat input. The dominant source is the input of heat at the surface, primarily due to evaporation. The bulk aerodynamic formula for the rate of heat input per unit area at the surface, Q_in:k, is given by

Q_in:k = C_k ρ |u| Δk

where Δk = k_s^* − k represents the enthalpy difference between the ocean surface and the overlying air. The second source is the internal sensible heat generated from frictional dissipation (equal to W_out), which occurs near the surface within the tropical cyclone and is recycled to the system.
|
439 |
+
|
440 |
+
Thus, the total rate of net energy production per unit surface area is given by

W_in = ϵ (Q_in:k + Q_in:friction)

Setting W_in = W_out and taking |u| ≈ v (i.e. the rotational wind speed is dominant) leads to the solution for v_p given above. This derivation assumes that total energy input and loss within the system can be approximated by their values at the radius of maximum wind. The inclusion of Q_in:friction acts to multiply the total heat input rate by the factor T_s / T_o. Mathematically, this has the effect of replacing T_s with T_o in the denominator of the Carnot efficiency.
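This balance can be checked numerically. In the sketch below, the exchange coefficients, air density, and enthalpy disequilibrium Δk are assumed, illustrative values (not from the source); with T_s = 300 K and T_o = 200 K they reproduce a characteristic intensity of 80 m/s, and the net heat-engine input equals the frictional output at v = v_p.

```python
import math

# Assumed inputs for the balance W_in = W_out (values are illustrative):
CK = CD = 1.2e-3          # surface exchange coefficients, ratio Ck/Cd = 1
RHO = 1.2                 # near-surface air density, kg/m^3
TS, TO = 300.0, 200.0     # sea surface and outflow temperatures, K
DK = 12.8e3               # air-sea enthalpy disequilibrium Delta-k, J/kg

eps = (TS - TO) / TS                               # Carnot efficiency, 1/3
vp = math.sqrt((CK / CD) * (TS - TO) / TO * DK)    # potential intensity, m/s

# With frictional heating recycled, W_in = eps * (Q_in:k + Q_in:friction):
q_surface = CK * RHO * vp * DK     # bulk surface heat input
q_friction = CD * RHO * vp ** 3    # dissipative heating, equal to W_out
w_in = eps * (q_surface + q_friction)
w_out = CD * RHO * vp ** 3

print(vp)                            # 80.0 m/s
print(abs(w_in - w_out) < 1e-6)      # the energy budget closes at v = vp
```

Note the T_o in the denominator of the v_p expression: that is exactly the "replacing T_s with T_o" effect of recycled frictional heating described above.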
An alternative definition for the maximum potential intensity, which is mathematically equivalent to the above formulation, is

v_p^2 = (C_k / C_d) (T_s / T_o) (CAPE_s* − CAPE_b)

where CAPE stands for the Convective Available Potential Energy, CAPE_s* is the CAPE of an air parcel lifted from saturation at sea level in reference to the environmental sounding, CAPE_b is the CAPE of the boundary layer air, and both quantities are calculated at the radius of maximum wind.[34]
On Earth, a characteristic temperature for T_s is 300 K and for T_o is 200 K, corresponding to a Carnot efficiency of ϵ = 1/3. The ratio of the surface exchange coefficients, C_k/C_d, is typically taken to be 1. However, observations suggest that the drag coefficient C_d varies with wind speed and may decrease at high wind speeds within the boundary layer of a mature hurricane.[35] Additionally, C_k may vary at high wind speeds due to the effect of sea spray on evaporation within the boundary layer.[36]
A characteristic value of the maximum potential intensity, v_p, is 80 metres per second (180 mph; 290 km/h). However, this quantity varies significantly across space and time, particularly within the seasonal cycle, spanning a range of 0 to 100 metres per second (0 to 224 mph; 0 to 360 km/h).[34] This variability is primarily due to variability in the surface enthalpy disequilibrium (Δk) as well as in the thermodynamic structure of the troposphere, which are controlled by the large-scale dynamics of the tropical climate. These processes are modulated by factors including the sea surface temperature (and underlying ocean dynamics), background near-surface wind speed, and the vertical structure of atmospheric radiative heating.[37] The nature of this modulation is complex, particularly on climate time-scales (decades or longer). On shorter time-scales, variability in the maximum potential intensity is commonly linked to sea surface temperature perturbations from the tropical mean, as regions with relatively warm water have thermodynamic states much more capable of sustaining a tropical cyclone than regions with relatively cold water.[38] However, this relationship is indirect via the large-scale dynamics of the tropics; the direct influence of the absolute sea surface temperature on v_p is weak in comparison.
The passage of a tropical cyclone over the ocean causes the upper layers of the ocean to cool substantially, which can influence subsequent cyclone development. This cooling is primarily caused by wind-driven mixing of cold water from deeper in the ocean with the warm surface waters. This effect results in a negative feedback process that can inhibit further development or lead to weakening. Additional cooling may come in the form of cold water from falling raindrops (this is because the atmosphere is cooler at higher altitudes). Cloud cover may also play a role in cooling the ocean, by shielding the ocean surface from direct sunlight before and slightly after the storm passage. All these effects can combine to produce a dramatic drop in sea surface temperature over a large area in just a few days.[39] Conversely, the mixing of the sea can result in heat being mixed into deeper waters, with potential effects on global climate.[40]
There are six Regional Specialized Meteorological Centres (RSMCs) worldwide. These organizations are designated by the World Meteorological Organization and are responsible for tracking and issuing bulletins, warnings, and advisories about tropical cyclones in their designated areas of responsibility. Also, there are six Tropical Cyclone Warning Centres (TCWCs) that provide information to smaller regions.[46]
The RSMCs and TCWCs are not the only organizations that provide information about tropical cyclones to the public. The Joint Typhoon Warning Center (JTWC) issues advisories in all basins except the Northern Atlantic for the United States Government.[47] The Philippine Atmospheric, Geophysical and Astronomical Services Administration (PAGASA) issues advisories and names for tropical cyclones that approach the Philippines in the Northwestern Pacific to protect the life and property of its citizens.[48] The Canadian Hurricane Center (CHC) issues advisories on hurricanes and their remnants for Canadian citizens when they affect Canada.[49]
On March 26, 2004, Hurricane Catarina became the first recorded South Atlantic cyclone, striking southern Brazil with winds equivalent to Category 2 on the Saffir-Simpson Hurricane Scale. As the cyclone formed outside the authority of another warning center, Brazilian meteorologists initially treated the system as an extratropical cyclone, but later classified it as tropical.[50]
Worldwide, tropical cyclone activity peaks in late summer, when the difference between temperatures aloft and sea surface temperatures is the greatest. However, each particular basin has its own seasonal patterns. On a worldwide scale, May is the least active month, while September is the most active month. November is the only month in which all the tropical cyclone basins are in season.[51]
In the Northern Atlantic Ocean, a distinct cyclone season occurs from June 1 to November 30, sharply peaking from late August through September.[51] The statistical peak of the Atlantic hurricane season is September 10. The Northeast Pacific Ocean has a broader period of activity, but in a similar time frame to the Atlantic.[52] The Northwest Pacific sees tropical cyclones year-round, with a minimum in February and March and a peak in early September.[51] In the North Indian basin, storms are most common from April to December, with peaks in May and November.[51] In the Southern Hemisphere, the tropical cyclone year begins on July 1 and runs through the following June, encompassing the tropical cyclone seasons, which run from November 1 until the end of April, with peaks in mid-February to early March.[51][45]
The formation of tropical cyclones is the topic of extensive ongoing research and is still not fully understood.[57] While six factors appear to be generally necessary, tropical cyclones may occasionally form without meeting all of the following conditions. In most situations, water temperatures of at least 26.5 °C (79.7 °F) are needed down to a depth of at least 50 m (160 ft);[58] waters of this temperature cause the overlying atmosphere to be unstable enough to sustain convection and thunderstorms.[59] For tropical transitioning cyclones (e.g. Hurricane Ophelia (2017)) a water temperature of at least 22.5 °C (72.5 °F) has been suggested.[60]
Another factor is rapid cooling with height, which allows the release of the heat of condensation that powers a tropical cyclone.[58] High humidity is needed, especially in the lower-to-mid troposphere; when there is a great deal of moisture in the atmosphere, conditions are more favorable for disturbances to develop.[58] Low amounts of wind shear are needed, as high shear is disruptive to the storm's circulation.[58] Tropical cyclones generally need to form more than 555 km (345 mi) or five degrees of latitude away from the equator, allowing the Coriolis effect to deflect winds blowing towards the low pressure center and creating a circulation.[58] Lastly, a formative tropical cyclone needs a preexisting system of disturbed weather. Tropical cyclones will not form spontaneously.[58] Low-latitude and low-level westerly wind bursts associated with the Madden–Julian oscillation can create favorable conditions for tropical cyclogenesis by initiating tropical disturbances.[61]
Most tropical cyclones form in a worldwide band of thunderstorm activity near the equator, referred to as the Intertropical Front (ITF), the Intertropical Convergence Zone (ITCZ), or the monsoon trough.[62][63][64] Another important source of atmospheric instability is found in tropical waves, which contribute to the development of about 85% of intense tropical cyclones in the Atlantic Ocean and become most of the tropical cyclones in the Eastern Pacific.[65][66][67] The majority form between 10 and 30 degrees of latitude from the equator,[68] and 87% form no farther than 20 degrees north or south.[69][70] Because the Coriolis effect initiates and maintains their rotation, tropical cyclones rarely form or move within 5 degrees of the equator, where the effect is weakest.[69] However, it is still possible for tropical systems to form within this boundary, as Tropical Storm Vamei and Cyclone Agni did in 2001 and 2004, respectively.[71][72]
The movement of a tropical cyclone (i.e. its "track") is typically approximated as the sum of two terms: "steering" by the background environmental wind and "beta drift".[73]
Environmental steering is the dominant term. Conceptually, it represents the movement of the storm due to prevailing winds and other wider environmental conditions, similar to "leaves carried along by a stream".[74] Physically, the winds, or flow field, in the vicinity of a tropical cyclone may be treated as having two parts: the flow associated with the storm itself, and the large-scale background flow of the environment in which the storm takes place. In this way, tropical cyclone motion may be represented to first-order simply as advection of the storm by the local environmental flow. This environmental flow is termed the "steering flow".
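This first-order view can be sketched in a few lines: the storm center is treated as a particle advected by the steering flow. The constant steering wind, time step, and starting position below are all hypothetical values for illustration.

```python
import math

M_PER_DEG = 111.0e3   # rough metres per degree of latitude

def advect(lat, lon, u, v, dt):
    """Advect a storm center (degrees) by a steering wind u, v (m/s, east/north components) over dt seconds."""
    new_lat = lat + (v * dt) / M_PER_DEG
    new_lon = lon + (u * dt) / (M_PER_DEG * math.cos(math.radians(lat)))
    return new_lat, new_lon

# Easterly trade-wind steering: 5 m/s toward the west, 1 m/s poleward.
lat, lon = 15.0, -40.0
for _ in range(4):                         # four 6-hour steps = one day
    lat, lon = advect(lat, lon, u=-5.0, v=1.0, dt=6 * 3600)
print(round(lat, 2), round(lon, 2))        # center drifts west and slightly poleward
```

Real steering flows vary in space and time, so operational models integrate over the full three-dimensional environmental wind field rather than a single constant vector.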
Climatologically, tropical cyclones are steered primarily westward by the east-to-west trade winds on the equatorial side of the subtropical ridge—a persistent high-pressure area over the world's subtropical oceans.[74] In the tropical North Atlantic and Northeast Pacific oceans, the trade winds steer tropical easterly waves westward from the African coast toward the Caribbean Sea, North America, and ultimately into the central Pacific Ocean before the waves dampen out.[66] These waves are the precursors to many tropical cyclones within this region.[65] In contrast, in the Indian Ocean and Western Pacific in both hemispheres, tropical cyclogenesis is influenced less by tropical easterly waves and more by the seasonal movement of the Inter-tropical Convergence Zone and the monsoon trough.[75] Additionally, tropical cyclone motion can be influenced by transient weather systems, such as extratropical cyclones.
In addition to environmental steering, a tropical cyclone will tend to drift slowly poleward and westward, a motion known as "beta drift". This motion is due to the superposition of a vortex, such as a tropical cyclone, onto an environment in which the Coriolis force varies with latitude, such as on a sphere or beta plane. It is induced indirectly by the storm itself, the result of a feedback between the cyclonic flow of the storm and its environment.
Physically, the cyclonic circulation of the storm advects environmental air poleward east of center and equatorward west of center. Because air must conserve its angular momentum, this flow configuration induces a cyclonic gyre equatorward and westward of the storm center and an anticyclonic gyre poleward and eastward of the storm center. The combined flow of these gyres acts to advect the storm slowly poleward and westward. This effect occurs even if there is zero environmental flow.
A third component of motion that occurs relatively infrequently involves the interaction of multiple tropical cyclones. When two cyclones approach one another, their centers will begin orbiting cyclonically about a point between the two systems. Depending on their separation distance and strength, the two vortices may simply orbit around one another or else may spiral into the center point and merge. When the two vortices are of unequal size, the larger vortex will tend to dominate the interaction, and the smaller vortex will orbit around it. This phenomenon is called the Fujiwhara effect, after Sakuhei Fujiwhara.[76]
Though a tropical cyclone typically moves from east to west in the tropics, its track may shift poleward and eastward either as it moves west of the subtropical ridge axis or else if it interacts with the mid-latitude flow, such as the jet stream or an extratropical cyclone. This motion, termed "recurvature", commonly occurs near the western edge of the major ocean basins, where the jet stream typically has a poleward component and extratropical cyclones are common.[77] An example of tropical cyclone recurvature was Typhoon Ioke in 2006.[78]
The landfall of a tropical cyclone occurs when a storm's surface center (the eye, in a stronger cyclone) moves over a coastline.[10] Storm conditions may be experienced on the coast and inland hours before landfall; in fact, a tropical cyclone's strongest winds can extend over land even though it does not make landfall. NOAA uses the term "direct hit" to describe when a location (on the left side of the eye) falls within the radius of maximum winds (or twice that radius if on the right side), whether or not the hurricane's eye made landfall.[10]
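The "direct hit" criterion above can be written as a small predicate. The function name and its inputs are illustrative, not an official NOAA API.

```python
def is_direct_hit(distance_km, radius_max_wind_km, on_right_side):
    """Direct hit: the location lies within the radius of maximum winds
    on the left side of the eye, or within twice that radius on the right."""
    limit = 2 * radius_max_wind_km if on_right_side else radius_max_wind_km
    return distance_km <= limit

# A town 40 km from the track of a storm whose radius of maximum winds is 30 km:
print(is_direct_hit(40, 30, on_right_side=True))   # True  (40 <= 2 * 30)
print(is_direct_hit(40, 30, on_right_side=False))  # False (40 > 30)
```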
Most tropical cyclones form on the side of the subtropical ridge closer to the equator, then move poleward past the ridge axis before recurving into the main belt of the Westerlies.[79] When the subtropical ridge position shifts due to El Niño, so will the preferred tropical cyclone tracks. Areas west of Japan and Korea tend to experience far fewer September–November tropical cyclone impacts during El Niño and neutral years. During El Niño years, the break in the subtropical ridge tends to lie near 130°E, which favors the Japanese archipelago.[80] During El Niño years, Guam's chance of a tropical cyclone impact is one-third more likely than the long-term average.[81] The tropical Atlantic Ocean experiences depressed activity due to increased vertical wind shear across the region during El Niño years.[82] During La Niña years, the formation of tropical cyclones, along with the subtropical ridge position, shifts westward across the western Pacific Ocean, which increases the landfall threat to China and the likelihood of much more intense storms in the Philippines.[80]
A tropical cyclone can cease to have tropical characteristics in several different ways. One such way is if it moves over land, thus depriving it of the warm water it needs to power itself, quickly losing strength.[83] Most strong storms lose their strength very rapidly after landfall and become disorganized areas of low pressure within a day or two, or evolve into extratropical cyclones. There is a chance a tropical cyclone could regenerate if it managed to get back over open warm water, such as with Hurricane Ivan. If it remains over mountains for even a short time, weakening will accelerate.[84] Many storm fatalities occur in mountainous terrain, when diminishing cyclones unleash their moisture as torrential rainfall.[85] This rainfall may lead to deadly floods and mudslides, as was the case with Hurricane Mitch around Honduras in October 1998.[86] Without warm surface water, the storm cannot survive.[87]
A tropical cyclone can dissipate when it moves over waters significantly below 26.5 °C (79.7 °F). This will cause the storm to lose its tropical characteristics, such as a warm core with thunderstorms near the center, and become a remnant low-pressure area. These remnant systems may persist for up to several days before losing their identity. This dissipation mechanism is most common in the eastern North Pacific.[88] Weakening or dissipation can occur if it experiences vertical wind shear, causing the convection and heat engine to move away from the center; this normally ceases development of a tropical cyclone.[89] In addition, its interaction with the main belt of the Westerlies, by means of merging with a nearby frontal zone, can cause tropical cyclones to evolve into extratropical cyclones. This transition can take 1–3 days.[90] Even after a tropical cyclone is said to be extratropical or dissipated, it can still have tropical storm force (or occasionally hurricane/typhoon force) winds and drop several inches of rainfall. In the Pacific Ocean and Atlantic Ocean, such tropical-derived cyclones of higher latitudes can be violent and may occasionally remain at hurricane or typhoon-force wind speeds when they reach the west coast of North America. These phenomena can also affect Europe, where they are known as European windstorms; Hurricane Iris's extratropical remnants are an example of such a windstorm from 1995.[91] A cyclone can also merge with another area of low pressure, becoming a larger area of low pressure. This can strengthen the resultant system, although it may no longer be a tropical cyclone.[89] Studies in the 2000s have given rise to the hypothesis that large amounts of dust reduce the strength of tropical cyclones.[92]
In the 1960s and 1970s, the United States government attempted to weaken hurricanes through Project Stormfury by seeding selected storms with silver iodide. It was thought that the seeding would cause supercooled water in the outer rainbands to freeze, causing the inner eyewall to collapse and thus reducing the winds.[93] The winds of Hurricane Debbie—a hurricane seeded in Project Stormfury—dropped as much as 31%, but Debbie regained its strength after each of two seeding forays.[94] In an earlier episode in 1947, disaster struck when a hurricane east of Jacksonville, Florida promptly changed its course after being seeded, and smashed into Savannah, Georgia.[95] Because there was so much uncertainty about the behavior of these storms, the federal government would not approve seeding operations unless the hurricane had a less than 10% chance of making landfall within 48 hours, greatly reducing the number of possible test storms. The project was dropped after it was discovered that eyewall replacement cycles occur naturally in strong hurricanes, casting doubt on the result of the earlier attempts. Today, it is known that silver iodide seeding is not likely to have an effect because the amount of supercooled water in the rainbands of a tropical cyclone is too low.[96]
Other approaches have been suggested over time, including cooling the water under a tropical cyclone by towing icebergs into the tropical oceans.[97] Other ideas range from covering the ocean in a substance that inhibits evaporation,[98] dropping large quantities of ice into the eye at very early stages of development (so that the latent heat is absorbed by the ice, instead of being converted to kinetic energy that would feed the positive feedback loop),[97] or blasting the cyclone apart with nuclear weapons.[99] Project Cirrus even involved throwing dry ice on a cyclone.[100] These approaches all suffer from one flaw above many others: tropical cyclones are simply too large and long-lived for any of the weakening techniques to be practical.[101]
Tropical cyclones out at sea cause large waves, heavy rain, flooding, and high winds, disrupting international shipping and, at times, causing shipwrecks.[102] Tropical cyclones stir up water, leaving a cool wake behind them, which causes the region to be less favorable for subsequent tropical cyclones.[39] On land, strong winds can damage or destroy vehicles, buildings, bridges, and other outside objects, turning loose debris into deadly flying projectiles. The storm surge, or the increase in sea level due to the cyclone, is typically the worst effect from landfalling tropical cyclones, historically resulting in 90% of tropical cyclone deaths.[103]
The broad rotation of a landfalling tropical cyclone, and vertical wind shear at its periphery, can spawn tornadoes. Tornadoes can also be spawned as a result of eyewall mesovortices, which persist until landfall.[104]
Over the past two centuries, tropical cyclones have been responsible for the deaths of about 1.9 million people worldwide. Large areas of standing water caused by flooding lead to infection, as well as contributing to mosquito-borne illnesses. Crowded evacuees in shelters increase the risk of disease propagation.[103] Tropical cyclones significantly disrupt infrastructure, leading to power outages, bridge destruction, and the hampering of reconstruction efforts.[103][105] On average, the Gulf and east coasts of the United States suffer approximately US$5 billion (1995 US$) in cyclone damage every year. The majority (83%) of tropical cyclone damage is caused by severe hurricanes, category 3 or greater. However, category 3 or greater hurricanes account for only about one-fifth of cyclones that make landfall every year.[106]
Although cyclones take an enormous toll in lives and personal property, they may be important factors in the precipitation regimes of places they impact, as they may bring much-needed precipitation to otherwise dry regions.[107] Tropical cyclones also help maintain the global heat balance by moving warm, moist tropical air to the middle latitudes and polar regions,[108] and by regulating the thermohaline circulation through upwelling.[109] The storm surge and winds of hurricanes may be destructive to human-made structures, but they also stir up the waters of coastal estuaries, which are typically important fish breeding locales. Tropical cyclone destruction spurs redevelopment, greatly increasing local property values.[110]
When hurricanes surge upon shore from the ocean, salt is introduced to many freshwater areas, raising salinity levels too high for some habitats to withstand. Some are able to cope with the salt and recycle it back into the ocean, but others cannot release the extra surface water quickly enough or do not have a large enough freshwater source to replace it. Because of this, some species of plants and vegetation die due to the excess salt.[111] In addition, hurricanes can carry toxins and acids onto shore when they make landfall. The flood water can pick up the toxins from different spills and contaminate the land that it passes over. The toxins are very harmful to the people and animals in the area, as well as the environment around them. The flooding water can also cause many dangerous oil spills.[112]
Hurricane preparedness encompasses the actions and planning taken before a tropical cyclone strikes to mitigate damage and injury from the storm. Knowledge of tropical cyclone impacts on an area helps plan for future possibilities. Preparedness may involve preparations made by individuals as well as centralized efforts by governments or other organizations. Tracking storms during the tropical cyclone season helps individuals know current threats. Regional Specialized Meteorological Centers and Tropical Cyclone Warning Centers provide current information and forecasts to help individuals make the best decision possible.
Hurricane response is the disaster response after a hurricane. Activities performed by hurricane responders include assessment, restoration, and demolition of buildings; removal of debris and waste; repairs to land-based and maritime infrastructure; and public health services including search and rescue operations.[113] Hurricane response requires coordination between federal, tribal, state, local, and private entities.[114] According to the National Voluntary Organizations Active in Disaster, potential response volunteers should affiliate with established organizations and should not self-deploy, so that proper training and support can be provided to mitigate the danger and stress of response work.[115]
Hurricane responders face many hazards. Hurricane responders may be exposed to chemical and biological contaminants including stored chemicals, sewage, human remains, and mold growth encouraged by flooding,[116][117][118] as well as asbestos and lead that may be present in older buildings.[117][119] Common injuries arise from falls from heights, such as from a ladder or from level surfaces; from electrocution in flooded areas, including from backfeed from portable generators; or from motor vehicle accidents.[116][119][120] Long and irregular shifts may lead to sleep deprivation and fatigue, increasing the risk of injuries, and workers may experience mental stress associated with a traumatic incident. Additionally, heat stress is a concern as workers are often exposed to hot and humid temperatures, wear protective clothing and equipment, and have physically difficult tasks.[116][119]
Intense tropical cyclones pose a particular observation challenge, as they are a dangerous oceanic phenomenon, and weather stations, being relatively sparse, are rarely available on the site of the storm itself. In general, surface observations are available only if the storm is passing over an island or a coastal area, or if there is a nearby ship. Real-time measurements are usually taken in the periphery of the cyclone, where conditions are less catastrophic and its true strength cannot be evaluated. For this reason, there are teams of meteorologists that move into the path of tropical cyclones to help evaluate their strength at the point of landfall.[121]
Tropical cyclones far from land are tracked by weather satellites capturing visible and infrared images from space, usually at half-hour to quarter-hour intervals. As a storm approaches land, it can be observed by land-based Doppler weather radar. Radar plays a crucial role around landfall by showing a storm's location and intensity every several minutes.[122]
In situ measurements, in real-time, can be taken by sending specially equipped reconnaissance flights into the cyclone. In the Atlantic basin, these flights are regularly flown by United States government hurricane hunters.[123] The aircraft used are WC-130 Hercules and WP-3D Orions, both four-engine turboprop cargo aircraft. These aircraft fly directly into the cyclone and take direct and remote-sensing measurements. The aircraft also launch GPS dropsondes inside the cyclone. These sondes measure temperature, humidity, pressure, and especially winds between flight level and the ocean's surface. A new era in hurricane observation began when a remotely piloted Aerosonde, a small drone aircraft, was flown through Tropical Storm Ophelia as it passed Virginia's Eastern Shore during the 2005 hurricane season. A similar mission was also completed successfully in the western Pacific Ocean. This demonstrated a new way to probe the storms at low altitudes that human pilots seldom dare.[124]
Because of the forces that affect tropical cyclone tracks, accurate track predictions depend on determining the position and strength of high- and low-pressure areas, and predicting how those areas will change during the life of a tropical system. The deep layer mean flow, or average wind through the depth of the troposphere, is considered the best tool in determining track direction and speed. If storms are significantly sheared, use of wind speed measurements at a lower altitude, such as at the 70 kPa pressure surface (3,000 metres or 9,800 feet above sea level) will produce better predictions. Tropical forecasters also consider smoothing out short-term wobbles of the storm as it allows them to determine a more accurate long-term trajectory.[125] High-speed computers and sophisticated simulation software allow forecasters to produce computer models that predict tropical cyclone tracks based on the future position and strength of high- and low-pressure systems. Combining forecast models with increased understanding of the forces that act on tropical cyclones, as well as with a wealth of data from Earth-orbiting satellites and other sensors, scientists have increased the accuracy of track forecasts over recent decades.[126] However, scientists are not as skillful at predicting the intensity of tropical cyclones.[127] The lack of improvement in intensity forecasting is attributed to the complexity of tropical systems and an incomplete understanding of factors that affect their development. New tropical cyclone position and forecast information is available at least every six hours from the various warning centers.[46][128][129][130][131]
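The deep layer mean flow mentioned above can be sketched as a pressure-weighted average of environmental winds through the troposphere. The layer thicknesses and winds below are an assumed sounding for illustration, not observed data.

```python
def deep_layer_mean(layers):
    """layers: (thickness_hPa, u, v) tuples; return the pressure-weighted mean wind."""
    total = sum(w for w, _, _ in layers)
    u_mean = sum(w * u for w, u, _ in layers) / total
    v_mean = sum(w * v for w, _, v in layers) / total
    return u_mean, v_mean

# Assumed sounding: easterlies at low levels weakening to westerlies aloft.
sounding = [
    (150, -8.0, 1.0),   # 850-700 hPa layer
    (200, -5.0, 2.0),   # 700-500 hPa layer
    (250,  2.0, 3.0),   # 500-250 hPa layer
]
u, v = deep_layer_mean(sounding)
print(round(u, 2), round(v, 2))  # net steering is westward and poleward
```

For a strongly sheared storm, the same average would instead be taken over a shallower, lower-altitude slice of the sounding, as the text describes.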
Around the world, tropical cyclones are classified in different ways, based on the location, the structure of the system and its intensity. For example, within the Northern Atlantic and Eastern Pacific basins, a tropical cyclone with wind speeds of over 65 kn (75 mph; 120 km/h) is called a hurricane, while it is called a typhoon or a severe cyclonic storm within the Western Pacific or North Indian Oceans.[41][42][43] Within the Southern Hemisphere, it is either called a hurricane, tropical cyclone or a severe tropical cyclone, depending on whether it is located within the South Atlantic, South-West Indian Ocean, Australian region, or the South Pacific Ocean.[44][45] If a tropical cyclone moves from one basin to another, it is generally reclassified using the destination basin's terminology.
Tropical cyclones that develop around the world are assigned an identification code consisting of a two-digit number and suffix letter by the warning centers that monitor them. These codes start at 01 every year and are assigned in order to systems that have the potential to develop further, that may cause significant impact to life and property, or for which the warning centers have started to write advisories.[45][132]
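The identification codes described above can be illustrated with a short sketch. The suffix letter here ("L", used operationally for the Atlantic) stands in for whichever letter a given warning center appends; the helper itself is hypothetical, not any agency's software.

```python
# Hypothetical sketch of sequential identification codes: a two-digit number
# starting at 01 each year, plus a basin suffix letter (e.g. "L" for Atlantic).

def assign_codes(n_systems, suffix="L"):
    """Return codes for the first n_systems monitored systems of a year."""
    return [f"{i:02d}{suffix}" for i in range(1, n_systems + 1)]

print(assign_codes(3))  # → ['01L', '02L', '03L']
```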
The practice of using names to identify tropical cyclones goes back many years, with systems named after places or things they hit before the formal start of naming.[141][142] The system currently used provides positive identification of severe weather systems in a brief form, that is readily understood and recognized by the public.[141][142] The credit for the first usage of personal names for weather systems is generally given to the Queensland Government Meteorologist Clement Wragge who named systems between 1887 and 1907.[141][142] This system of naming weather systems subsequently fell into disuse for several years after Wragge retired, until it was revived in the latter part of World War II for the Western Pacific.[141][142] Formal naming schemes have subsequently been introduced for the North and South Atlantic, Eastern, Central, Western and Southern Pacific basins as well as the Australian region and Indian Ocean.[142]
At present, tropical cyclones are officially named by one of eleven meteorological services and retain their names throughout their lifetimes to provide ease of communication between forecasters and the general public regarding forecasts, watches, and warnings.[141] Since the systems can last a week or longer and more than one can be occurring in the same basin at the same time, the names are thought to reduce confusion about which storm is being described.[141] Names are assigned in order from predetermined lists once storms have one-, three-, or ten-minute sustained wind speeds of more than 65 km/h (40 mph), depending on the basin in which they originate.[41][43][44] However, standards vary from basin to basin: some tropical depressions are named in the Western Pacific, while in the Southern Hemisphere tropical cyclones must have a significant amount of gale-force winds occurring around the center before they are named.[44][45] The names of significant tropical cyclones in the North Atlantic Ocean, Pacific Ocean, and Australian region are retired from the naming lists and replaced with another name.[41][42][45]
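A minimal sketch of the naming procedure above: names are drawn in order from a predetermined list, and a retired name is replaced before the list is reused. The names themselves are invented, not from any official list.

```python
# Minimal sketch, with invented names: storms are named in order from a
# predetermined list, and a name retired after a significant storm is
# replaced before the list cycles around again.

class NameList:
    def __init__(self, names):
        self.names = list(names)
        self.next_index = 0

    def assign(self):
        """Assign the next name in order, cycling back to reuse the list."""
        name = self.names[self.next_index % len(self.names)]
        self.next_index += 1
        return name

    def retire(self, old, replacement):
        """Replace a retired name with a new one in the same slot."""
        self.names[self.names.index(old)] = replacement

season = NameList(["Alpha", "Bravo", "Charlie"])
print(season.assign())           # first named storm of the season
season.retire("Bravo", "Benny")  # e.g. retired after a destructive storm
print(season.names)
```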
Tropical cyclones that cause extreme destruction are rare, although when they occur, they can cause great amounts of damage or thousands of fatalities.
The 1970 Bhola cyclone is considered to be the deadliest tropical cyclone on record, killing around 300,000 people after striking the densely populated Ganges Delta region of Bangladesh on November 13, 1970.[143] Its powerful storm surge was responsible for the high death toll.[144] The North Indian cyclone basin has historically been the deadliest basin.[103][145] Elsewhere, Typhoon Nina killed nearly 100,000 in China in 1975 due to a 100-year flood that caused 62 dams including the Banqiao Dam to fail.[146] The Great Hurricane of 1780 is the deadliest North Atlantic hurricane on record, killing about 22,000 people in the Lesser Antilles.[147] A tropical cyclone does not need to be particularly strong to cause memorable damage, particularly if the deaths are from rainfall or mudslides. Tropical Storm Thelma in November 1991 killed thousands in the Philippines,[148] while Typhoon Haiyan, the strongest typhoon ever to make landfall on record, caused widespread devastation in Eastern Visayas in November 2013 and killed at least 6,300 people in the Philippines alone. In 1982, the unnamed tropical depression that eventually became Hurricane Paul killed around 1,000 people in Central America.[149]
Hurricane Harvey and Hurricane Katrina are estimated to be the costliest tropical cyclones to impact the United States mainland, each causing damage estimated at $125 billion.[150] Harvey killed at least 90 people in August 2017 after making landfall in Texas as a low-end Category 4 hurricane. Hurricane Katrina is estimated as the second-costliest tropical cyclone worldwide,[151] causing $81.2 billion in property damage (2008 USD) alone,[152] with overall damage estimates exceeding $100 billion (2005 USD).[151] Katrina killed at least 1,836 people after striking Louisiana and Mississippi as a major hurricane in August 2005.[152] Hurricane Maria is the third most destructive tropical cyclone in U.S. history, with damage totaling $91.61 billion (2017 USD); Hurricane Sandy, with damage costs of $68.7 billion (2012 USD), is the fourth most destructive. The Galveston Hurricane of 1900 is the deadliest natural disaster in the United States, killing an estimated 6,000 to 12,000 people in Galveston, Texas.[153] Hurricane Mitch caused more than 10,000 fatalities in Central America, making it the second deadliest Atlantic hurricane in history. Hurricane Iniki in 1992 was the most powerful storm to strike Hawaii in recorded history, hitting Kauai as a Category 4 hurricane, killing six people, and causing US$3 billion in damage.[154] Other destructive Eastern Pacific hurricanes include Pauline and Kenna, both of which caused severe damage after striking Mexico as major hurricanes.[155][156] In March 2004, Cyclone Gafilo struck northeastern Madagascar as a powerful cyclone, killing 74, affecting more than 200,000 people, and becoming the worst cyclone to affect the nation in more than 20 years.[157]
The most intense storm on record is Typhoon Tip in the northwestern Pacific Ocean in 1979, which reached a minimum pressure of 870 hectopascals (25.69 inHg) and maximum sustained wind speeds of 165 knots (85 m/s) or 190 miles per hour (310 km/h).[158] The highest maximum sustained wind speed ever recorded was 185 knots (95 m/s) or 215 miles per hour (346 km/h) in Hurricane Patricia in 2015—the most intense cyclone ever recorded in the Western Hemisphere.[159] Typhoon Nancy in 1961 also had recorded wind speeds of 185 knots (95 m/s) or 215 miles per hour (346 km/h), but recent research indicates that wind speeds from the 1940s to the 1960s were gauged too high, so it is no longer considered the storm with the highest wind speed on record.[160] Likewise, a surface-level gust caused by Typhoon Paka on Guam in late 1997 was recorded at 205 knots (105 m/s) or 235 miles per hour (378 km/h). Had it been confirmed, it would have been the strongest non-tornadic wind ever recorded at the Earth's surface, but the reading had to be discarded because the anemometer was damaged by the storm.[161]
The World Meteorological Organization established Barrow Island (Western Australia) as the location of the highest non-tornado-related wind gust, at 408 kilometres per hour (254 mph),[162] recorded on April 10, 1996, during Severe Tropical Cyclone Olivia.[163]
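The wind-speed records above are quoted in knots, metres per second, miles per hour, and kilometres per hour; they can be cross-checked with the standard conversion factors (1 knot = 0.514444 m/s = 1.15078 mph = 1.852 km/h). The small helper below is written for illustration, not taken from any library.

```python
# Cross-checking the quoted wind-speed records using standard conversion
# factors (1 knot = 0.514444 m/s = 1.15078 mph = 1.852 km/h).

def convert_knots(kn):
    """Convert a wind speed in knots to m/s, mph, and km/h."""
    return {
        "m/s": kn * 0.514444,
        "mph": kn * 1.15078,
        "km/h": kn * 1.852,
    }

tip = convert_knots(165)       # Typhoon Tip's maximum sustained winds
print(round(tip["m/s"]))       # → 85, matching the 85 m/s quoted above
print(round(tip["km/h"], -1))  # → 310.0, matching the quoted 310 km/h
```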
In addition to being the most intense tropical cyclone on record based on pressure, Tip is the largest cyclone on record, with tropical storm-force winds 2,170 kilometres (1,350 mi) in diameter. The smallest storm on record, Tropical Storm Marco, formed during October 2008 and made landfall in Veracruz. Marco generated tropical storm-force winds only 37 kilometres (23 mi) in diameter.[164]
Hurricane John is the longest-lasting tropical cyclone on record, lasting 31 days in 1994. Before the advent of satellite imagery in 1961, however, many tropical cyclones were underestimated in their durations.[165] John is also the longest-tracked tropical cyclone in the Northern Hemisphere on record, with a path of 8,250 mi (13,280 km).[166] Cyclone Rewa of the 1993–94 South Pacific and Australian region cyclone seasons had one of the longest tracks observed within the Southern Hemisphere, traveling a distance of over 5,545 mi (8,920 km) during December 1993 and January 1994.[166]
While the number of storms in the Atlantic has increased since 1995, there is no obvious global trend; the annual number of tropical cyclones worldwide remains about 87 ± 10 (between 77 and 97 tropical cyclones annually). However, climatologists' ability to perform long-term data analysis in certain basins is limited by the lack of reliable historical data, primarily in the Southern Hemisphere,[167] although a significant downward trend in tropical cyclone numbers has been identified for the region near Australia (based on high-quality data and accounting for the influence of the El Niño–Southern Oscillation).[168] In spite of that, there is some evidence that the intensity of hurricanes is increasing. Kerry Emanuel stated, "Records of hurricane activity worldwide show an upswing of both the maximum wind speed in and the duration of hurricanes. The energy released by the average hurricane (again considering all hurricanes worldwide) seems to have increased by around 70% in the past 30 years or so, corresponding to about a 15% increase in the maximum wind speed and a 60% increase in storm lifetime."[169]
Atlantic storms are becoming more destructive financially, as evidenced by the fact that the ten most expensive storms in United States history have occurred since 1990. According to the World Meteorological Organization, "recent increase in societal impact from tropical cyclones has been caused largely by rising concentrations of population and infrastructure in coastal regions."[170] In a 2008 study, Pielke et al. normalized mainland US hurricane damage from 1900–2005 to 2005 values and found no remaining trend of increasing absolute damage. The 1970s and 1980s were notable for their extremely low amounts of damage compared to other decades. The decade 1996–2005 was the second most damaging of the past 11 decades, with only 1926–1935 surpassing its costs.
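The normalization applied by Pielke et al. adjusts a historical loss for how society has changed since the storm struck. The sketch below captures only the general shape of such an adjustment (factors for inflation, wealth per capita, and coastal population); the growth factors shown are invented for illustration, not figures from the study.

```python
# Hedged sketch of damage normalization in the spirit of Pielke et al.:
# a past loss is scaled by changes in inflation, wealth, and population.
# The growth factors below are invented for illustration.

def normalize_damage(damage_usd, inflation, wealth, population):
    """Scale a historical damage figure to present-day societal conditions."""
    return damage_usd * inflation * wealth * population

# e.g. a $100 million loss long ago, under assumed growth factors:
print(normalize_damage(100e6, inflation=10.0, wealth=3.0, population=5.0))
# → 15000000000.0, i.e. about $15 billion in normalized terms
```

The point of the adjustment is that a storm hitting a sparsely built coastline a century ago would cause far more absolute damage if it struck the same, now heavily developed, coastline today.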
Often in part because of the threat of hurricanes, many coastal regions had sparse population between major ports until the advent of automobile tourism; therefore, the most severe portions of hurricanes striking the coast may have gone unmeasured in some instances. The combined effects of ship destruction and remote landfall severely limit the number of intense hurricanes in the official record before the era of hurricane reconnaissance aircraft and satellite meteorology. Therefore, although the record shows a distinct increase in the number and strength of intense hurricanes, experts regard the early data as suspect.[171]
The number and strength of Atlantic hurricanes may undergo a 50–70 year cycle, also known as the Atlantic Multidecadal Oscillation. Nyberg et al. reconstructed Atlantic major hurricane activity back to the early 18th century and found five periods averaging 3–5 major hurricanes per year and lasting 40–60 years, and six other periods averaging 1.5–2.5 major hurricanes per year and lasting 10–20 years. These periods are associated with the Atlantic multidecadal oscillation. Throughout, a decadal oscillation related to solar irradiance was responsible for enhancing or dampening the number of major hurricanes by 1–2 per year.[172]
Although more common since 1995, few above-normal hurricane seasons occurred during 1970–94.[173] Destructive hurricanes struck frequently from 1926 to 1960, including many major New England hurricanes. Twenty-one Atlantic tropical storms formed in 1933, a record only recently exceeded in 2005, which saw 28 storms. Hurricanes occurred infrequently during the seasons of 1900–25; however, many intense storms formed during 1870–99. During the 1887 season, 19 tropical storms formed, of which a record 4 occurred after November 1 and 11 strengthened into hurricanes. Few hurricanes occurred in the 1840s to 1860s; however, many struck in the early 19th century, including an 1821 storm that made a direct hit on New York City. Some historical weather experts say these storms may have been as strong as Category 4.[174]
These active hurricane seasons predated satellite coverage of the Atlantic basin. Before the satellite era began in 1960, tropical storms or hurricanes went undetected unless a reconnaissance aircraft encountered one, a ship reported a voyage through the storm, or a storm hit land in a populated area.[171]
Proxy records based on paleotempestological research have revealed that major hurricane activity along the Gulf of Mexico coast varies on timescales of centuries to millennia.[175][176] Few major hurricanes struck the Gulf coast during 3000–1400 BC and again during the most recent millennium. These quiescent intervals were separated by a hyperactive period between 1400 BC and 1000 AD, when the Gulf coast was struck frequently by catastrophic hurricanes and their landfall probabilities increased by 3–5 times. This millennial-scale variability has been attributed to long-term shifts in the position of the Azores High,[176] which may also be linked to changes in the strength of the North Atlantic oscillation.[177]
According to the Azores High hypothesis, an anti-phase pattern is expected to exist between the Gulf of Mexico coast and the Atlantic coast. During the quiescent periods, a more northeasterly position of the Azores High would result in more hurricanes being steered towards the Atlantic coast. During the hyperactive period, more hurricanes were steered towards the Gulf coast as the Azores High was shifted to a more southwesterly position near the Caribbean. Such a displacement of the Azores High is consistent with paleoclimatic evidence that shows an abrupt onset of a drier climate in Haiti around 3200 14C years BP,[178] and a change towards more humid conditions in the Great Plains during the late-Holocene as more moisture was pumped up the Mississippi Valley through the Gulf coast. Preliminary data from the northern Atlantic coast seem to support the Azores High hypothesis. A 3000-year proxy record from a coastal lake in Cape Cod suggests that hurricane activity increased significantly during the past 500–1000 years, just as the Gulf coast was amid a quiescent period of the last millennium.
The 2007 IPCC report noted many observed changes in the climate, including atmospheric composition, global average temperatures, ocean conditions, and others. The report concluded the observed increase in tropical cyclone intensity is larger than climate models predict. In addition, the report considered that it is likely that storm intensity will continue to increase through the 21st century, and declared it more likely than not that there has been some human contribution to the increases in tropical cyclone intensity.[179]
In 2005, P.J. Webster and others published an article in Science examining the "changes in tropical cyclone number, duration, and intensity" over the past 35 years, the period for which satellite data have been available. Their main finding was that although the number of cyclones decreased throughout the planet, excluding the north Atlantic Ocean, there was a great increase in the number and proportion of very strong cyclones.[180] Projections currently show no consensus on how climate change will affect the overall frequency of tropical cyclones.[181]
According to 2006 studies by the National Oceanic and Atmospheric Administration, "the strongest hurricanes in the present climate may be upstaged by even more intense hurricanes over the next century as the earth's climate is warmed by increasing levels of greenhouse gases in the atmosphere".[182] A greater percentage (+13%) of tropical cyclones are expected to reach Category 4 and 5 strength with 2 °C warming.[181]
Studies published since 2008, by Kerry Emanuel from MIT, indicate that global warming is likely to increase the intensity but decrease the frequency of hurricane and cyclone activity.[183] In an article in Nature, Kerry Emanuel stated that potential hurricane destructiveness, a measure combining hurricane strength, duration, and frequency, "is highly correlated with tropical sea surface temperature, reflecting well-documented climate signals, including multidecadal oscillations in the North Atlantic and North Pacific, and global warming". Emanuel predicted "a substantial increase in hurricane-related losses in the twenty-first century".[184]
Research reported in the September 3, 2008 issue of Nature found that the strongest tropical cyclones are getting stronger, in particular over the North Atlantic and Indian oceans. Wind speeds for the strongest tropical storms increased from an average of 225 km/h (140 mph) in 1981 to 251 km/h (156 mph) in 2006, while the ocean temperature, averaged globally over all the regions where tropical cyclones form, increased from 28.2 °C (82.8 °F) to 28.5 °C (83.3 °F) during this period.[185][186]
A 2017 study looked at compounding effects from floods, storm surge, and terrestrial flooding (rivers), and projects an increase due to global warming.[187][188]
A study published in 2020 found that climate change is increasing the likelihood of storms reaching Category 3 or above by about 8% per decade.[189]
In addition to tropical cyclones, there are two other classes of cyclones within the spectrum of cyclone types. These kinds of cyclones, known as extratropical cyclones and subtropical cyclones, can be stages a tropical cyclone passes through during its formation or dissipation.[190] An extratropical cyclone is a storm that derives energy from horizontal temperature differences, which are typical in higher latitudes. A tropical cyclone can become extratropical as it moves toward higher latitudes if its energy source changes from heat released by condensation to differences in temperature between air masses; although not as frequently, an extratropical cyclone can transform into a subtropical storm, and from there into a tropical cyclone.[191] From space, extratropical storms have a characteristic "comma-shaped" cloud pattern.[192] Extratropical cyclones can also be dangerous when their low-pressure centers cause powerful winds and high seas.[193]
A subtropical cyclone is a weather system that has some characteristics of a tropical cyclone and some characteristics of an extratropical cyclone. They can form in a wide band of latitudes, from the equator to 50°. Although subtropical storms rarely have hurricane-force winds, they may become tropical in nature as their cores warm.[194] From an operational standpoint, a tropical cyclone is usually not considered to become subtropical during its extratropical transition.[195]
In popular culture, tropical cyclones have made several appearances in different types of media, including films, books, television, music, and electronic games.[196] These media often portray tropical cyclones that are either entirely fictional or based on real events.[196] For example, George Rippey Stewart's Storm, a best-seller published in 1941, is thought to have influenced meteorologists on their decision to assign female names to Pacific tropical cyclones.[142] Another example is the hurricane in The Perfect Storm, which describes the sinking of the Andrea Gail by the 1991 Perfect Storm.[197] Hurricanes have been featured in parts of the plots of series such as The Simpsons, Invasion, Family Guy, Seinfeld, Dawson's Creek, Burn Notice and CSI: Miami.[196][198][199][200][201] The 2004 film The Day After Tomorrow includes several mentions of actual tropical cyclones and features fantastical "hurricane-like", albeit non-tropical, Arctic storms.[202][203]