de-francophones committed · Commit 2b8aed0 · verified · 1 Parent(s): ff32bbf

en/2259.html.txt ADDED
@@ -0,0 +1,77 @@
+
+
+ The Great Pyramid of Giza (also known as the Pyramid of Khufu or the Pyramid of Cheops) is the oldest and largest of the three pyramids in the Giza pyramid complex bordering present-day Giza in Greater Cairo, Egypt. It is the oldest of the Seven Wonders of the Ancient World, and the only one to remain largely intact.
+
+ Based on a mark in an interior chamber naming the work gang and a reference to the Fourth Dynasty Egyptian pharaoh Khufu, Egyptologists believe that the pyramid was built as a tomb over a 10- to 20-year period concluding around 2560 BC. Initially standing at 146.5 metres (481 feet), the Great Pyramid was the tallest man-made structure in the world for more than 3,800 years until Lincoln Cathedral was finished in 1311 AD. It is estimated that the pyramid weighs approximately 6 million tonnes, and consists of 2.3 million blocks of limestone and granite, some weighing as much as 80 tonnes. Originally, the Great Pyramid was covered by limestone casing stones that formed a smooth outer surface; what is seen today is the underlying core structure. Some of the casing stones that once covered the structure can still be seen around the base. There have been varying scientific and alternative theories about the Great Pyramid's construction techniques. Most accepted construction hypotheses are based on the idea that it was built by moving huge stones from a quarry and dragging and lifting them into place.
+
+ There are three known chambers inside the Great Pyramid. The lowest chamber is cut into the bedrock upon which the pyramid was built and was unfinished. The so-called[2] Queen's Chamber and King's Chamber are higher up within the pyramid structure. The main part of the Giza complex is a set of buildings that included two mortuary temples in honour of Khufu (one close to the pyramid and one near the Nile), three smaller pyramids for Khufu's wives, an even smaller "satellite" pyramid, a raised causeway connecting the two temples, and small mastaba tombs for nobles surrounding the pyramid.
+
+ Egyptologists believe the pyramid was built as a tomb for the Fourth Dynasty Egyptian pharaoh Khufu (often Hellenized as "Cheops") and was constructed over a 20-year period. Khufu's vizier, Hemiunu (also called Hemon), is believed by some to be the architect of the Great Pyramid.[3] It is thought that, at construction, the Great Pyramid was originally 146.6 metres (481.0 ft) tall, but with the removal of its original casing, its present height is 137 metres (449.5 ft). The lengths of the sides at the base are difficult to reconstruct, given the absence of the casing, but recent analyses put them in a range between 230.26 metres (755.4 ft) and 230.44 metres (756.0 ft). The volume, including an internal hillock, is roughly 2,300,000 cubic metres (81,000,000 cu ft).[4]
+
+ The first precision measurements of the pyramid were made by Egyptologist Sir Flinders Petrie in 1880–82 and published as The Pyramids and Temples of Gizeh.[5] Almost all reports are based on his measurements. Many of the casing-stones and inner chamber blocks of the Great Pyramid fit together with extremely high precision. Based on measurements taken on the north-eastern casing stones, the mean opening of the joints is only 0.5 millimetres (0.020 in) wide.[6]
+
+ The pyramid remained the tallest man-made structure in the world for over 3,800 years,[7] unsurpassed until the 160-metre-tall (520 ft) spire of Lincoln Cathedral was completed c. 1300. The accuracy of the pyramid's workmanship is such that the four sides of the base have an average error of only 58 millimetres in length.[8] The base is horizontal and flat to within ±15 mm (0.6 in).[9] The sides of the square base are closely aligned to the four cardinal compass points (within four minutes of arc)[10] based on true north, not magnetic north,[11] and the finished base was squared to a mean corner error of only 12 seconds of arc.[12]
+
+ The completed design dimensions, as suggested by Petrie's survey and subsequent studies, are estimated to have originally been 280 Egyptian Royal cubits high by 440 cubits long at each of the four sides of its base. The ratio of the perimeter to height of 1760/280 Egyptian Royal cubits equates to 2π to an accuracy of better than 0.05 percent (corresponding to the well-known approximation of π as 22/7). Some Egyptologists consider this to have been the result of deliberate design proportion. Verner wrote, "We can conclude that although the ancient Egyptians could not precisely define the value of π, in practice they used it".[13] Petrie concluded: "but these relations of areas and of circular ratio are so systematic that we should grant that they were in the builder's design".[14] Others have argued that the ancient Egyptians had no concept of pi and would not have thought to encode it in their monuments. They believe that the observed pyramid slope may be based on a simple seked slope choice alone, with no regard to the overall size and proportions of the finished building.[15] In 2013, rolls of papyrus called the Diary of Merer were discovered, written by some of those who delivered limestone and other construction materials from Tura to Giza.[16]
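+
+ As a quick check of the arithmetic behind the 2π claim (an illustrative sketch, not part of the source text), the ratio and its error can be computed directly:
+
+ # Verify how closely the design ratio 1760/280 approximates 2*pi.
+ import math
+
+ ratio = 1760 / 280                      # perimeter / height, in royal cubits
+ error = abs(ratio - 2 * math.pi) / (2 * math.pi)
+ print(f"{ratio:.5f} vs {2 * math.pi:.5f}, error = {error:.2%}")
+ # 6.28571 vs 6.28319, error = 0.04% -- within the 0.05 percent quoted above;
+ # note 1760/280 = 2 * 22/7, the classic approximation of pi.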
+
+ The Great Pyramid consists of an estimated 2.3 million blocks which most believe to have been transported from nearby quarries. The Tura limestone used for the casing was quarried across the river. The largest granite stones in the pyramid, found in the "King's" chamber, weigh 25 to 80 tonnes and were transported from Aswan, more than 800 km (500 mi) away.[citation needed] Ancient Egyptians cut stone into rough blocks by hammering grooves into natural stone faces, inserting wooden wedges, then soaking these with water. As the water was absorbed, the wedges expanded, breaking off workable chunks. Once the blocks were cut, they were carried by boat either up or down the Nile River to the pyramid.[17] It is estimated that 5.5 million tonnes of limestone, 8,000 tonnes of granite (imported from Aswan), and 500,000 tonnes of mortar were used in the construction of the Great Pyramid.[18]
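+
+ The tonnage figures above can be cross-checked against the roughly 6-million-tonne total quoted earlier; a minimal sketch of the arithmetic (illustrative only):
+
+ # Sum the quoted material masses and compare with the ~6 million tonne figure.
+ limestone = 5_500_000   # tonnes
+ granite = 8_000         # tonnes, imported from Aswan
+ mortar = 500_000        # tonnes
+ print(f"{limestone + granite + mortar:,} tonnes")   # 6,008,000 -- consistent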
+
+ At completion, the Great Pyramid was surfaced with white "casing stones"—slant-faced, but flat-topped, blocks of highly polished white limestone. These were carefully cut to what is approximately a face slope with a seked of 5 1/2 palms to give the required dimensions. Visibly, all that remains is the underlying stepped core structure seen today.[citation needed] In 1303 AD, a massive earthquake loosened many of the outer casing stones, which in 1356 were carted away by Bahri Sultan An-Nasir Nasir-ad-Din al-Hasan to build mosques and fortresses in nearby Cairo.[citation needed] Many more casing stones were removed from the great pyramids by Muhammad Ali Pasha in the early 19th century to build the upper portion of his Alabaster Mosque in Cairo, not far from Giza.[citation needed] These limestone casings can still be seen as parts of these structures. Later explorers reported massive piles of rubble at the base of the pyramids left over from the continuing collapse of the casing stones, which were subsequently cleared away during continuing excavations of the site.[citation needed]
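+
+ A seked expresses slope as the horizontal run, in palms, per one royal cubit (seven palms) of vertical rise. Assuming that standard definition, a seked of 5 1/2 palms implies the face angle computed below (a sketch, not from the source):
+
+ # Convert a seked of 5.5 palms into a face slope angle.
+ import math
+
+ rise = 7.0   # one royal cubit of rise, expressed in palms
+ run = 5.5    # seked: horizontal run per cubit of rise
+ print(f"{math.degrees(math.atan(rise / run)):.2f} degrees")
+ # ~51.84 degrees, close to the Great Pyramid's measured slope of about 51°50'.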
+
+ Nevertheless, a few of the casing stones from the lowest course can be seen to this day in situ around the base of the Great Pyramid, and display the same workmanship and precision that have been reported for centuries. Petrie also found a difference in orientation between the core and the casing, measuring 193 centimetres ± 25 centimetres. He suggested that a redetermination of north was made after the construction of the core, but a mistake was made, and the casing was built with a different orientation.[5] Petrie described the precision of the casing stones as "equal to opticians' work of the present day, but on a scale of acres", noting that "to place such stones in exact contact would be careful work; but to do so with cement in the joints seems almost impossible".[20] It has been suggested that it was the mortar (Petrie's "cement") that made this seemingly impossible task possible, providing a level bed which enabled the masons to set the stones exactly.[21][22]
+
+ Many alternative, often contradictory, theories have been proposed regarding the pyramid's construction techniques.[23] Many disagree on whether the blocks were dragged, lifted, or even rolled into place. The Greeks believed that slave labour was used, but modern discoveries made at nearby workers' camps associated with construction at Giza suggest that it was built instead by tens of thousands of skilled workers. Verner posited that the labour was organized into a hierarchy, consisting of two gangs of 100,000 men, divided into five zaa or phyle of 20,000 men each, which may have been further divided according to the skills of the workers.[24]
+
+ One mystery of the pyramid's construction is its planning. John Romer suggests that the builders used the same method that had been used for earlier and later constructions, laying out parts of the plan on the ground at a 1-to-1 scale. He writes that "such a working diagram would also serve to generate the architecture of the pyramid with precision unmatched by any other means".[25] He also argues for a 14-year time-span for its construction.[26] A modern construction management study, in association with Mark Lehner and other Egyptologists, estimated that the total project required an average workforce of about 14,500 people and a peak workforce of roughly 40,000. Assuming the builders worked without pulleys, wheels, or iron tools, the study used critical path analysis to conclude that the Great Pyramid was completed from start to finish in approximately 10 years.[27]
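+
+ Critical path analysis, as used in the study above, finds the longest chain of dependent tasks, which sets the minimum possible project duration. A toy sketch with entirely hypothetical tasks and durations (not the study's actual model):
+
+ # Longest path through a small task-dependency graph.
+ tasks = {                 # task: (duration in years, prerequisites)
+     "quarry": (2.0, []),
+     "transport": (3.0, ["quarry"]),
+     "core": (6.0, ["transport"]),
+     "casing": (2.0, ["core"]),
+ }
+ finish = {}
+ for name, (dur, preds) in tasks.items():   # insertion order is topological here
+     finish[name] = dur + max((finish[p] for p in preds), default=0.0)
+ print(max(finish.values()), "years on the critical path")   # 13.0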
+
+ The original entrance to the Great Pyramid is on the north, 17 metres (56 ft) vertically above ground level and 7.29 metres (23.9 ft) east of the center line of the pyramid. From this original entrance, there is a Descending Passage 0.96 metres (3.1 ft) high and 1.04 metres (3.4 ft) wide, which goes down at an angle of 26° 31'23" through the masonry of the pyramid and then into the bedrock beneath it. After 105.23 metres (345.2 ft), the passage becomes level and continues for an additional 8.84 metres (29.0 ft) to the lower Chamber, which appears not to have been finished. There is a continuation of the horizontal passage in the south wall of the lower chamber; there is also a pit dug in the floor of the chamber. Some Egyptologists suggest that this Lower Chamber was intended to be the original burial chamber, but Pharaoh Khufu later changed his mind and wanted it to be higher up in the pyramid.[28]
+
+ 28.2 metres (93 ft) from the entrance is a square hole in the roof of the Descending Passage. Originally concealed with a slab of stone, this is the beginning of the Ascending Passage.[citation needed] The Ascending Passage is 39.3 metres (129 ft) long, as wide and high as the Descending Passage and slopes up at almost precisely the same angle to reach the Grand Gallery. The lower end of the Ascending Passage is closed by three huge blocks of granite, each about 1.5 metres (4.9 ft) long.[citation needed] One must use the Robbers' Tunnel (see below) to access the Ascending Passage.[citation needed] At the start of the Grand Gallery on the right-hand side there is a hole cut in the wall. This is the start of a vertical shaft which follows an irregular path through the masonry of the pyramid to join the Descending Passage. Also at the start of the Grand Gallery there is the Horizontal Passage leading to the "Queen's Chamber". The passage is 1.1 metres (3 ft 8 in) high for most of its length, but near the chamber there is a step in the floor, after which the passage is 1.73 metres (5.7 ft) high.[citation needed]
+
+ The "Queen's Chamber"[2] is exactly halfway between the north and south faces of the pyramid and measures 5.75 metres (18.9 ft) north to south, 5.23 metres (17.2 ft) east to west, and has a pointed roof with an apex 6.23 metres (20.4 ft) above the floor. At the eastern end of the chamber there is a niche 4.67 metres (15.3 ft) high. The original depth of the niche was 1.04 metres (3.4 ft), but has since been deepened by treasure hunters.[29]
+
+ In the north and south walls of the Queen's Chamber there are shafts, which, unlike those in the King's Chamber that immediately slope upwards (see below), are horizontal for around 2 m (6.6 ft) before sloping upwards. The horizontal distance was cut in 1872 by a British engineer, Waynman Dixon, who believed a shaft similar to those in the King's Chamber must also exist. He was proved right, but because the shafts are not connected to the outer faces of the pyramid or the Queen's Chamber, their purpose is unknown. At the end of one of his shafts, Dixon discovered a ball of black diorite (a type of rock) and a bronze implement of unknown purpose. Both objects are currently in the British Museum.[30]
+
+ The shafts in the Queen's Chamber were explored in 1993 by the German engineer Rudolf Gantenbrink using a crawler robot he designed, Upuaut 2. After a climb of 65 m (213 ft),[31] he discovered that one of the shafts was blocked by limestone "doors" with two eroded copper "handles". Some years later the National Geographic Society created a similar robot which, in September 2002, drilled a small hole in the southern door, only to find another door behind it.[32] The northern passage, which was difficult to navigate because of twists and turns, was also found to be blocked by a door.[33]
+
+ Research continued in 2011 with the Djedi Project. Realizing the problem was that the National Geographic Society's camera was only able to see straight ahead of it, they instead used a fibre-optic "micro snake camera" that could see around corners. With this they were able to penetrate the first door of the southern shaft through the hole drilled in 2002, and view all the sides of the small chamber behind it. They discovered hieroglyphs written in red paint. They were also able to scrutinize the inside of the two copper "handles" embedded in the door, and they now believe them to be for decorative purposes. They also found the reverse side of the "door" to be finished and polished, which suggests that it was not put there just to block the shaft from debris, but rather for a more specific reason.[34]
+
+ The Grand Gallery continues the slope of the Ascending Passage, but is 8.6 metres (28 ft) high and 46.68 metres (153.1 ft) long. At the base it is 2.06 metres (6.8 ft) wide, but after 2.29 metres (7.5 ft) the blocks of stone in the walls are corbelled inwards by 7.6 centimetres (3.0 in) on each side.[citation needed] There are seven of these steps, so, at the top, the Grand Gallery is only 1.04 metres (3.4 ft) wide. It is roofed by slabs of stone laid at a slightly steeper angle than the floor of the gallery, so that each stone fits into a slot cut in the top of the gallery like the teeth of a ratchet. The purpose was to have each block supported by the wall of the Gallery, rather than resting on the block beneath it, in order to prevent cumulative pressure.[35]
+
+ At the upper end of the Gallery on the right-hand side there is a hole near the roof that opens into a short tunnel by which access can be gained to the lowest of the Relieving Chambers.[citation needed] The other Relieving Chambers were discovered in 1837–1838 by Colonel Howard Vyse and J.S. Perring, who dug tunnels upwards using blasting powder.[citation needed]
+
+ The floor of the Grand Gallery consists of a shelf or step on either side, 51 centimetres (20 in) wide, leaving a lower ramp 1.04 metres (3.4 ft) wide between them. In the shelves there are 54 slots, 27 on each side matched by vertical and horizontal slots in the walls of the Gallery. These form a cross shape that rises out of the slot in the shelf.[citation needed] The purpose of these slots is not known, but the central gutter in the floor of the Gallery, which is the same width as the Ascending Passage, has led to speculation that the blocking stones were stored in the Grand Gallery and the slots held wooden beams to restrain them from sliding down the passage.[36] This, in turn, has led to the proposal that originally many more than 3 blocking stones were intended, to completely fill the Ascending Passage.[citation needed]
+
+ At the top of the Grand Gallery, there is a step giving onto a horizontal passage some metres long and approximately 1.02 metres (3.3 ft) in height and width, in which can be detected four slots, three of which were probably intended to hold granite portcullises.[citation needed] Fragments of granite found by Petrie in the Descending Passage may have come from these now-vanished doors.[citation needed]
+
+ In 2017, scientists from the ScanPyramids project discovered a large cavity above the Grand Gallery using muon radiography, which they called the "ScanPyramids Big Void". Its length is at least 30 metres (98 ft) and its cross-section is similar to that of the Grand Gallery. Its existence was confirmed by independent detection with three different technologies: nuclear emulsion films, scintillator hodoscopes, and gas detectors.[37][38] The purpose of the cavity is not known and it is not accessible but according to Zahi Hawass it may have been a gap used in the construction of the Grand Gallery.[39] The Japanese research team disputes this, however, saying that the huge void is completely different from the construction spaces previously identified.[40]
+ To verify and precisely locate the "ScanPyramids Big Void", a Japanese team of researchers from Kyushu University, Tohoku University, the University of Tokyo and the Chiba Institute of Technology plans to rescan the structure with a newly developed muon detector in 2020.[41]
+
+ The "King's Chamber"[2] is 20 Egyptian Royal cubits or 10.47 metres (34.4 ft) from east to west and 10 cubits or 5.234 metres (17.17 ft) north to south. It has a flat roof 11 cubits and 5 digits or 5.852 metres (19.20 ft) above the floor. 0.91 m (3.0 ft) above the floor there are two narrow shafts in the north and south walls (one is now filled by an extractor fan in an attempt to circulate air inside the pyramid).[citation needed] The purpose of these shafts is not clear: they appear to be aligned towards stars or areas of the northern and southern skies, yet one of them follows a dog-leg course through the masonry, indicating no intention to directly sight stars through them.[citation needed] They were long believed by Egyptologists to be "air shafts" for ventilation, but this idea has now been widely abandoned in favour of the shafts serving a ritualistic purpose associated with the ascension of the king's spirit to the heavens.[42]
+
+ The King's Chamber is entirely faced with granite. Above the roof, which is formed of nine slabs of stone weighing in total about 400 tons, are five compartments known as Relieving Chambers. The first four, like the King's Chamber, have flat roofs formed by the floor of the chamber above, but the final chamber has a pointed roof.[citation needed] Vyse suspected the presence of upper chambers when he found that he could push a long reed through a crack in the ceiling of the first chamber. From lower to upper, the chambers are known as "Davison's Chamber", "Wellington's Chamber", "Nelson's Chamber", "Lady Arbuthnot's Chamber", and "Campbell's Chamber". It is believed that the compartments were intended to safeguard the King's Chamber from the possibility of a roof collapsing under the weight of stone above the Chamber. As the chambers were not intended to be seen, they were not finished in any way and a few of the stones still retain masons' marks painted on them. One of the stones in Campbell's Chamber bears a mark, apparently the name of a work gang.[43][44]
+
+ The only object in the King's Chamber is a rectangular granite sarcophagus, one corner of which is damaged.[citation needed] The sarcophagus is slightly larger than the Ascending Passage, which indicates that it must have been placed in the Chamber before the roof was put in place.[citation needed] Unlike the fine masonry of the walls of the Chamber, the sarcophagus is roughly finished, with saw-marks visible in several places.[citation needed] This is in contrast with the finely finished and decorated sarcophagi found in other pyramids of the same period. Petrie suggested that such a sarcophagus was intended but was lost in the river on the way north from Aswan and a hurriedly made replacement was used instead.[citation needed]
+
+ Today tourists enter the Great Pyramid via the Robbers' Tunnel, which was long ago cut straight through the masonry of the pyramid for approximately 27 metres (89 ft), then turns sharply left to encounter the blocking stones in the Ascending Passage. It is possible to enter the Descending Passage from this point, but access is usually forbidden.[45] The origin of this Robbers' Tunnel is the subject of much scholarly discussion. According to tradition, the chasm was cut around 820 AD by Caliph al-Ma'mun's workmen using a battering ram. According to these accounts, al-Ma'mun's digging dislodged the stone fitted in the ceiling of the Descending Passage to hide the entrance to the Ascending Passage and it was the noise of that stone falling and then sliding down the Descending Passage, which alerted them to the need to turn left. Unable to remove these stones, however, the workmen tunneled up beside them through the softer limestone of the Pyramid until they reached the Ascending Passage.[46][47] Due to a number of historical and archaeological discrepancies, many scholars (with Antoine Isaac Silvestre de Sacy perhaps being the first) contend that this story is apocryphal. They argue that it is much more likely that the tunnel had been carved sometime after the pyramid was initially sealed. This tunnel, the scholars continue, was then resealed (likely during the Ramesside Restoration), and it was this plug that al-Ma'mun's ninth century expedition cleared away.[48]
+
+ The Great Pyramid is surrounded by a complex of several buildings including small pyramids. The Pyramid Temple, which stood on the east side of the pyramid and measured 52.2 metres (171 ft) north to south and 40 metres (130 ft) east to west, has almost entirely disappeared apart from the black basalt paving. There are only a few remnants of the causeway which linked the pyramid with the valley and the Valley Temple. The Valley Temple is buried beneath the village of Nazlet el-Samman; basalt paving and limestone walls have been found but the site has not been excavated.[49][50] The basalt blocks show "clear evidence" of having been cut with some kind of saw with an estimated cutting blade of 15 feet (4.6 m) in length, capable of cutting at a rate of 1.5 inches (38 mm) per minute. Romer suggests that this "super saw" may have had copper teeth and weighed up to 300 pounds (140 kg). He theorizes that such a saw could have been attached to a wooden trestle and possibly used in conjunction with vegetable oil, cutting sand, emery or pounded quartz to cut the blocks, which would have required the labour of at least a dozen men to operate it.[51]
+
+ On the south side are the subsidiary pyramids, popularly known as the Queens' Pyramids. Three remain standing to nearly full height but the fourth was so ruined that its existence was not suspected until the recent discovery of the first course of stones and the remains of the capstone. Hidden beneath the paving around the pyramid was the tomb of Queen Hetepheres I, sister-wife of Sneferu and mother of Khufu. Discovered by accident by the Reisner expedition, the burial was intact, though the carefully sealed coffin proved to be empty.
+
+ A notable construction flanking the Giza pyramid complex is a cyclopean stone wall, the Wall of the Crow.[52] Lehner has discovered a workers' town outside of the wall, otherwise known as "The Lost City", dated by pottery styles, seal impressions, and stratigraphy to have been constructed and occupied sometime during the reigns of Khafre (2520–2494 BC) and Menkaure (2490–2472 BC).[53][54] In the early 21st century, Mark Lehner and his team made several discoveries, including what appears to have been a thriving port, suggesting that the town and its associated living quarters, which consisted of barracks called "galleries", may not have been for the pyramid workers after all but rather for the soldiers and sailors who used the port. This discovery reopened the question of where the pyramid workers lived; Lehner suggested they may have camped on the ramps he believes were used to construct the pyramids, or possibly at nearby quarries.[55]
+
+ In the early 1970s, the Australian archaeologist Karl Kromer excavated a mound in the South Field of the plateau. This mound contained artefacts including mudbrick seals of Khufu, which he identified with an artisans' settlement.[56] Mudbrick buildings just south of Khufu's Valley Temple contained mud sealings of Khufu and have been suggested to be a settlement serving the cult of Khufu after his death.[57] A worker's cemetery used at least between Khufu's reign and the end of the Fifth Dynasty was discovered south of the Wall of the Crow by Hawass in 1990.[58]
+
+ There are three boat-shaped pits around the pyramid, of a size and shape to have held complete boats, though so shallow that any superstructure, if there ever was one, must have been removed or disassembled. In May 1954, the Egyptian archaeologist Kamal el-Mallakh discovered a fourth pit, a long, narrow rectangle, still covered with slabs of stone weighing up to 15 tons. Inside were 1,224 pieces of wood, the longest 23 metres (75 ft) long, the shortest 10 centimetres (0.33 ft). These were entrusted to a boat builder, Haj Ahmed Yusuf, who worked out how the pieces fit together. The entire process, including conservation and straightening of the warped wood, took fourteen years.
+
+ The result is a cedar-wood boat 43.6 metres (143 ft) long, its timbers held together by ropes, which is currently housed in a special boat-shaped, air-conditioned museum beside the pyramid. During construction of this museum, which stands above the boat pit, a second sealed boat pit was discovered. It was deliberately left unopened until 2011 when excavation began on the boat.[59]
+
+ Although succeeding pyramids were smaller, pyramid-building continued until the end of the Middle Kingdom. However, as authors Brier and Hobbs claim, "all the pyramids were robbed" by the New Kingdom, when the construction of royal tombs in a desert valley, now known as the Valley of the Kings, began.[60][61] Joyce Tyldesley states that the Great Pyramid itself "is known to have been opened and emptied by the Middle Kingdom", before the Arab caliph Al-Ma'mun entered the pyramid around 820 AD.[46]
+
+ I. E. S. Edwards discusses Strabo's mention that the pyramid "a little way up one side has a stone that may be taken out, which being raised up there is a sloping passage to the foundations". Edwards suggested that the pyramid was entered by robbers after the end of the Old Kingdom and sealed and then reopened more than once until Strabo's door was added. He adds: "If this highly speculative surmise be correct, it is also necessary to assume either that the existence of the door was forgotten or that the entrance was again blocked with facing stones", in order to explain why al-Ma'mun could not find the entrance.[62]
+
+ He also discusses a story told by Herodotus. Herodotus visited Egypt in the 5th century BC and recounts a story that he was told concerning vaults under the pyramid built on an island where the body of Cheops lies. Edwards notes that the pyramid had "almost certainly been opened and its contents plundered long before the time of Herodotus" and that it might have been closed again during the Twenty-sixth Dynasty of Egypt when other monuments were restored. He suggests that the story told to Herodotus could have been the result of almost two centuries of telling and retelling by Pyramid guides.[63]
+
+
+
en/226.html.txt ADDED
@@ -0,0 +1,214 @@
+
+
+ Andrew Johnson (December 29, 1808 – July 31, 1875) was the 17th president of the United States, serving from 1865 to 1869. He assumed the presidency as he was vice president at the time of the assassination of Abraham Lincoln. Johnson was a Democrat who ran with Lincoln on the National Union ticket, coming to office as the Civil War concluded. He favored quick restoration of the seceded states to the Union without protection for the former slaves. This led to conflict with the Republican-dominated Congress, culminating in his impeachment by the House of Representatives in 1868. He was acquitted in the Senate by one vote. His main accomplishment as president was the Alaska purchase.
+
+ Johnson was born in poverty in Raleigh, North Carolina and never attended school. He was apprenticed as a tailor and worked in several frontier towns before settling in Greeneville, Tennessee. He served as alderman and mayor there before being elected to the Tennessee House of Representatives in 1835. After brief service in the Tennessee Senate, Johnson was elected to the House of Representatives in 1843, where he served five two-year terms. He became governor of Tennessee for four years, and was elected by the legislature to the Senate in 1857. In his congressional service, he sought passage of the Homestead Bill which was enacted soon after he left his Senate seat in 1862. Southern slave states seceded to form the Confederate States of America, including Tennessee, but Johnson remained firmly with the Union. He was the only sitting senator from a Confederate state who did not resign his seat upon learning of his state's secession. In 1862, Lincoln appointed him as military governor of Tennessee after most of it had been retaken. In 1864, Johnson was a logical choice as running mate for Lincoln, who wished to send a message of national unity in his re-election campaign; their ticket easily won. Johnson was sworn in as vice president in March 1865 and gave a rambling speech, after which he secluded himself to avoid public ridicule. Six weeks later, the assassination of Lincoln made him president.
+
+ Johnson implemented his own form of Presidential Reconstruction, a series of proclamations directing the seceded states to hold conventions and elections to reform their civil governments. Southern states returned many of their old leaders and passed Black Codes to deprive the freedmen of many civil liberties, but Congressional Republicans refused to seat legislators from those states and advanced legislation to overrule the Southern actions. Johnson vetoed their bills, and Congressional Republicans overrode him, setting a pattern for the remainder of his presidency.[1] Johnson opposed the Fourteenth Amendment which gave citizenship to former slaves. In 1866, he went on an unprecedented national tour promoting his executive policies, seeking to break Republican opposition.[2] As the conflict grew between the branches of government, Congress passed the Tenure of Office Act restricting Johnson's ability to fire Cabinet officials. He persisted in trying to dismiss Secretary of War Edwin Stanton, but ended up being impeached by the House of Representatives and narrowly avoided conviction in the Senate. He did not win the 1868 Democratic presidential nomination and left office the following year.
+
+ Johnson returned to Tennessee after his presidency and gained some vindication when he was elected to the Senate in 1875, making him the only former president to serve in the Senate. He died five months into his term. Johnson's strong opposition to federally guaranteed rights for black Americans is widely criticized. He is regarded by many historians as one of the worst presidents in American history.
+
+ Andrew Johnson was born in Raleigh, North Carolina, on December 29, 1808, to Jacob Johnson (1778–1812) and Mary ("Polly") McDonough (1783–1856), a laundress. He was of English, Scots-Irish, and Irish ancestry.[3] He had a brother William, four years his senior, and an older sister Elizabeth, who died in childhood. Johnson's birth in a two-room shack was a political asset in the mid-19th century, and he would frequently remind voters of his humble origins.[4][5] Jacob Johnson was a poor man, as had been his father, William Johnson, but he became town constable of Raleigh before marrying and starting a family. Both Jacob and Mary were illiterate, and had worked as tavern servants, while Johnson never attended school[5] and grew up in poverty.[5] Jacob died of an apparent heart attack while ringing the town bell, shortly after rescuing three drowning men, when his son Andrew was three.[6] Polly Johnson worked as a washerwoman and became the sole support of her family. Her occupation was then looked down on, as it often took her into other homes unaccompanied. Since Andrew did not resemble either of his siblings, there are rumors that he may have been fathered by another man. Polly Johnson eventually remarried to a man named Turner Doughtry, who was as poor as she was.[7]
+
+ Johnson's mother apprenticed her son William to a tailor, James Selby. Andrew also became an apprentice in Selby's shop at age ten and was legally bound to serve until his 21st birthday. Johnson lived with his mother for part of his service, and one of Selby's employees taught him rudimentary literacy skills.[8] His education was augmented by citizens who would come to Selby's shop to read to the tailors as they worked. Even before he became an apprentice, Johnson came to listen. The readings caused a lifelong love of learning, and one of his biographers, Annette Gordon-Reed, suggests that Johnson, later a gifted public speaker, learned the art as he threaded needles and cut cloth.[9]
+
+ Johnson was not happy at James Selby's, and after about five years, both he and his brother ran away. Selby responded by placing a reward for their return: "Ten Dollars Reward. Ran away from the subscriber, two apprentice boys, legally bound, named William and Andrew Johnson ... [payment] to any person who will deliver said apprentices to me in Raleigh, or I will give the above reward for Andrew Johnson alone."[10] The brothers went to Carthage, North Carolina, where Andrew Johnson worked as a tailor for several months. Fearing he would be arrested and returned to Raleigh, Johnson moved to Laurens, South Carolina. He found work quickly, met his first love, Mary Wood, and made her a quilt as a gift. However, she rejected his marriage proposal. He returned to Raleigh, hoping to buy out his apprenticeship, but could not come to terms with Selby. Unable to stay in Raleigh, where he risked being apprehended for abandoning Selby, he decided to move west.[11][12]
+
+ Johnson left North Carolina for Tennessee, traveling mostly on foot. After a brief period in Knoxville, he moved to Mooresville, Alabama.[11][13] He then worked as a tailor in Columbia, Tennessee, but was called back to Raleigh by his mother and stepfather, who saw limited opportunities there and who wished to emigrate west. Johnson and his party traveled through the Blue Ridge Mountains to Greeneville, Tennessee. Andrew Johnson fell in love with the town at first sight, and when he became prosperous purchased the land where he had first camped and planted a tree in commemoration.[14]
+
+ In Greeneville, Johnson established a successful tailoring business in the front of his home. In 1827, at the age of 18, he married 16-year-old Eliza McCardle, the daughter of a local shoemaker. The pair were married by Justice of the Peace Mordecai Lincoln, first cousin of Thomas Lincoln, whose son would become president. The Johnsons were married for almost 50 years and had five children: Martha (1828), Charles (1830), Mary (1832), Robert (1834), and Andrew Jr. (1852). Though she suffered from tuberculosis, Eliza supported her husband's endeavors. She taught him mathematics skills and tutored him to improve his writing.[15][16] Shy and retiring by nature, Eliza Johnson usually remained in Greeneville during Johnson's political rise. She was not often seen during her husband's presidency; their daughter Martha usually served as official hostess.[17]
+
+ Johnson's tailoring business prospered during the early years of the marriage, enabling him to hire help and giving him the funds to invest profitably in real estate.[18] He later boasted of his talents as a tailor, "my work never ripped or gave way."[19] He was a voracious reader. Books about famous orators aroused his interest in political dialogue, and he had private debates on the issues of the day with customers who held opposing views. He also took part in debates at Greeneville College.[20]
+
+ In 1843, Johnson purchased his first slave, Dolly, who was 14 years old at the time. Soon after, he purchased Dolly's half-brother Sam. Dolly had three children—Liz, Florence and William. In 1857, Andrew Johnson purchased Henry, who was 13 at the time and would later accompany the Johnson family to the White House. Sam Johnson and his wife Margaret had nine children. Sam became a commissioner of the Freedmen's Bureau and was known for being a proud man who negotiated the nature of his work with the Johnson family. Notably, he received some monetary compensation for his labors and negotiated with Andrew Johnson to receive a tract of land which Andrew Johnson deeded to him in 1867. Ultimately, Johnson owned at least ten slaves. He was said to have had a compassionate and familial relationship toward them.[21]
+
+ Andrew Johnson's slaves were freed on August 8, 1863. A year later, all of Tennessee's slaves were freed. As a sign of appreciation, Andrew Johnson was given a watch by the newly-emancipated slaves inscribed with "…for his untiring energy in the cause of Freedom."[22]
+
+ Johnson helped organize a mechanics' (working men's) ticket in the 1829 Greeneville municipal election. He was elected town alderman, along with his friends Blackston McDannel and Mordecai Lincoln.[23][24] Following the 1831 Nat Turner slave rebellion, a state convention was called to pass a new constitution, including provisions to disenfranchise free people of color. The convention also wanted to reform real estate tax rates, and provide ways of funding improvements to Tennessee's infrastructure. The constitution was submitted for a public vote, and Johnson spoke widely for its adoption; the successful campaign provided him with statewide exposure. On January 4, 1834, his fellow aldermen elected him mayor of Greeneville.[25][26]
+
+ In 1835, Johnson made a bid for election to the "floater" seat which Greene County shared with neighboring Washington County in the Tennessee House of Representatives. According to his biographer, Hans L. Trefousse, Johnson "demolished" the opposition in debate and won the election with almost a two to one margin.[27][28] During his Greeneville days, Johnson joined the Tennessee Militia as a member of the 90th Regiment. He attained the rank of colonel, though while an enrolled member, Johnson was fined for an unknown offense.[29] Afterwards, he was often addressed or referred to by his rank.
+
+ In his first term in the legislature, which met in the state capital of Nashville, Johnson did not consistently vote with either the Democratic or the newly formed Whig Party, though he revered President Andrew Jackson, a Democrat and fellow Tennessean. The major parties were still determining their core values and policy proposals, with the party system in a state of flux. The Whig Party had organized in opposition to Jackson, fearing the concentration of power in the Executive Branch of the government; Johnson differed from the Whigs as he opposed more than minimal government spending and spoke against aid for the railroads, while his constituents hoped for improvements in transportation. After Brookins Campbell and the Whigs defeated Johnson for reelection in 1837, Johnson would not lose another race for thirty years. In 1839, he sought to regain his seat, initially as a Whig, but when another candidate sought the Whig nomination, he ran as a Democrat and was elected. From that time he supported the Democratic party and built a powerful political machine in Greene County.[30][31] Johnson became a strong advocate of the Democratic Party, noted for his oratory, and in an era when public speaking both informed the public and entertained it, people flocked to hear him.[32]
+
+ In 1840, Johnson was selected as a presidential elector for Tennessee, giving him more statewide publicity. Although Democratic President Martin Van Buren was defeated by former Ohio senator William Henry Harrison, Johnson was instrumental in keeping Tennessee and Greene County in the Democratic column.[33] He was elected to the Tennessee Senate in 1841, where he served a two-year term.[34] He had achieved financial success in his tailoring business, but sold it to concentrate on politics. He had also acquired additional real estate, including a larger home and a farm (where his mother and stepfather took residence), and among his assets numbered eight or nine slaves.[35]
+
+ Having served in both houses of the state legislature, Johnson saw election to Congress as the next step in his political career. He engaged in a number of political maneuvers to gain Democratic support, including the displacement of the Whig postmaster in Greeneville, and defeated Jonesborough lawyer John A. Aiken by 5,495 votes to 4,892.[36][37] In Washington, he joined a new Democratic majority in the House of Representatives. Johnson advocated for the interests of the poor, maintained an anti-abolitionist stance, argued for only limited spending by the government and opposed protective tariffs.[38] With Eliza remaining in Greeneville, Congressman Johnson shunned social functions in favor of study in the Library of Congress.[39] Although a fellow Tennessee Democrat, James K. Polk, was elected president in 1844, and Johnson had campaigned for him, the two men had difficult relations, and President Polk refused some of his patronage suggestions.[40]
+
+ Johnson believed, as did many Southern Democrats, that the Constitution protected private property, including slaves, and thus prohibited the federal and state governments from abolishing slavery.[41] He won a second term in 1845 against William G. Brownlow, presenting himself as the defender of the poor against the aristocracy. In his second term, Johnson supported the Polk administration's decision to fight the Mexican War, seen by some Northerners as an attempt to gain territory to expand slavery westward, and opposed the Wilmot Proviso, a proposal to ban slavery in any territory gained from Mexico. He introduced for the first time his Homestead Bill, to grant 160 acres (65 ha) to people willing to settle the land and gain title to it.[42][43] This issue was especially important to Johnson because of his own humble beginnings.[42][44]
+
+ In the presidential election of 1848, the Democrats split over the slavery issue, and abolitionists formed the Free Soil Party, with former president Van Buren as their nominee. Johnson supported the Democratic candidate, former Michigan senator Lewis Cass. With the party split, Whig nominee General Zachary Taylor was easily victorious, and carried Tennessee.[45] Johnson's relations with Polk remained poor; the President recorded of his final New Year's reception in 1849 that
+
+ Among the visitors I observed in the crowd today was Hon. Andrew Johnson of the Ho. Repts. [House of Representatives] Though he represents a Democratic District in Tennessee (my own State) this is the first time I have seen him during the present session of Congress. Professing to be a Democrat, he has been politically, if not personally hostile to me during my whole term. He is very vindictive and perverse in his temper and conduct. If he had the manliness and independence to declare his opposition openly, he knows he could not be elected by his constituents. I am not aware that I have ever given him cause for offense.[46]
+
+ Johnson, due to national interest in new railroad construction and in response to the need for better transportation in his own district, also supported government assistance for the East Tennessee and Virginia Railroad.[47]
+
+ In his campaign for a fourth term, Johnson concentrated on three issues: slavery, homesteads and judicial elections. He defeated his opponent, Nathaniel G. Taylor, in August 1849, with a greater margin of victory than in previous campaigns. When the House convened in December, the party division caused by the Free Soil Party precluded the formation of the majority needed to elect a Speaker. Johnson proposed adoption of a rule allowing election of a Speaker by a plurality; some weeks later others took up a similar proposal, and Democrat Howell Cobb was elected.[48]
+
+ Once the Speaker election had concluded and Congress was ready to conduct legislative business, the issue of slavery took center stage. Northerners sought to admit California, a free state, to the Union. Kentucky's Henry Clay introduced in the Senate a series of resolutions, the Compromise of 1850, to admit California and pass legislation sought by each side. Johnson voted for all the provisions except for the abolition of slavery in the nation's capital.[49] He pressed resolutions for constitutional amendments to provide for popular election of senators (then elected by state legislatures) and of the president (chosen by the Electoral College), and limiting the tenure of federal judges to 12 years. These were all defeated.[50]
+
+ A group of Democrats nominated Landon Carter Haynes to oppose Johnson as he sought a fifth term; the Whigs were so pleased with the internecine battle among the Democrats in the general election that they did not nominate a candidate of their own. The campaign included fierce debates: Johnson's main issue was the passage of the Homestead Bill; Haynes contended it would facilitate abolition. Johnson won the election by more than 1600 votes.[50] Though he was not enamored of the party's presidential nominee in 1852, former New Hampshire senator Franklin Pierce, Johnson campaigned for him. Pierce was elected, but he failed to carry Tennessee.[51] In 1852, Johnson managed to get the House to pass his Homestead Bill, but it failed in the Senate.[52] The Whigs had gained control of the Tennessee legislature, and, under the leadership of Gustavus Henry, redrew the boundaries of Johnson's First District to make it a safe seat for their party. The Nashville Union termed this "Henry-mandering";[b][53] lamented Johnson, "I have no political future."[54]
+
+ If Johnson considered retiring from politics upon deciding not to seek reelection, he soon changed his mind.[55] His political friends began to maneuver to get him the nomination for governor. The Democratic convention unanimously named him, though some party members were not happy at his selection. The Whigs had won the past two gubernatorial elections, and still controlled the legislature.[56] That party nominated Henry, making the "Henry-mandering" of the First District an immediate issue.[56] The two men debated in county seats the length of Tennessee before the meetings were called off two weeks before the August 1853 election due to illness in Henry's family.[55][57] Johnson won the election by 63,413 votes to 61,163; some votes for him were cast in return for his promise to support Whig Nathaniel Taylor for his old seat in Congress.[58][59]
+
+ Tennessee's governor had little power: Johnson could propose legislation but not veto it, and most appointments were made by the Whig-controlled legislature. Nevertheless, the office was a "bully pulpit" that allowed him to publicize himself and his political views.[60] He succeeded in getting the appointments he wanted in return for his endorsement of John Bell, a Whig, for one of the state's U.S. Senate seats. In his first biennial speech, Johnson urged simplification of the state judicial system, abolition of the Bank of Tennessee, and establishment of an agency to provide uniformity in weights and measures; the last was passed. Johnson was critical of the Tennessee common school system and suggested funding be increased via taxes, either statewide or county by county—a mixture of the two was passed.[61] Reforms carried out during Johnson's time as governor included the foundation of the State's public library (making books available to all) and its first public school system, and the initiation of regular state fairs to benefit craftsmen and farmers.[62]
+
+ Although the Whig Party was on its final decline nationally, it remained strong in Tennessee, and the outlook for Democrats there in 1855 was poor. Feeling that reelection as governor was necessary to give him a chance at the higher offices he sought, Johnson agreed to make the run. Meredith P. Gentry received the Whig nomination. A series of more than a dozen vitriolic debates ensued. The issues in the campaign were slavery, the prohibition of alcohol, and the nativist positions of the Know Nothing Party. Johnson favored the first, but opposed the others. Gentry was more equivocal on the alcohol question, and had gained the support of the Know Nothings, a group Johnson portrayed as a secret society.[63] Johnson was unexpectedly victorious, albeit with a narrower margin than in 1853.[64]
+
+ When the presidential election of 1856 approached, Johnson hoped to be nominated; some Tennessee county conventions designated him a "favorite son". His position that the best interests of the Union were served by slavery in some areas made him a practical compromise candidate for president. He was never a major contender; the nomination fell to former Pennsylvania senator James Buchanan. Though he was not impressed by either, Johnson campaigned for Buchanan and his running mate, John C. Breckinridge, who were elected.[65]
+
+ Johnson decided not to seek a third term as governor, with an eye towards election to the U.S. Senate. In 1857, while he was returning from Washington, his train derailed, causing serious damage to his right arm. This injury would trouble him in the years to come.[66]
+
+ The victors in the 1857 state legislative campaign would, once they convened in October, elect a United States Senator. Former Whig governor William B. Campbell wrote to his uncle, "The great anxiety of the Whigs is to elect a majority in the legislature so as to defeat Andrew Johnson for senator. Should the Democrats have the majority, he will certainly be their choice, and there is no man living to whom the Americans[c] and Whigs have as much antipathy as Johnson."[67] The governor spoke widely in the campaign, and his party won the gubernatorial race and control of the legislature.[68] Johnson's final address as governor gave him the chance to influence his electors, and he made proposals popular among Democrats. Two days later the legislature elected him to the Senate. The opposition was appalled, with the Richmond Whig newspaper referring to him as "the vilest radical and most unscrupulous demagogue in the Union".[69]
+
+ Johnson gained high office due to his proven record as a man popular among the small farmers and self-employed tradesmen who made up much of Tennessee's electorate. He called them the "plebeians"; he was less popular among the planters and lawyers who led the state Democratic Party, but none could match him as a vote-getter. After his death, one Tennessee voter wrote of him, "Johnson was always the same to everyone ... the honors heaped upon him did not make him forget to be kind to the humblest citizen."[70] Always seen in impeccably tailored clothing, he cut an impressive figure,[71] and had the stamina to endure lengthy campaigns with daily travel over bad roads leading to another speech or debate. Mostly denied the party's machinery, he relied on a network of friends, advisers, and contacts.[54] One friend, Hugh Douglas, stated in a letter to him, "you have been in the way of our would be great men for a long time. At heart many of us never wanted you to be Governor only none of the rest of us Could have been elected at the time and we only wanted to use you. Then we did not want you to go to the Senate but the people would send you."[72]
+
+ The new senator took his seat when Congress convened in December 1857 (the term of his predecessor, James C. Jones, had expired in March). He came to Washington as usual without his wife and family; Eliza would visit Washington only once during Johnson's first time as senator, in 1860. Johnson immediately set about introducing the Homestead Bill in the Senate, but as most senators who supported it were Northern (many associated with the newly founded Republican Party), the matter became caught up in suspicions over the slavery issue. Southern senators felt that those who took advantage of the provisions of the Homestead Bill were more likely to be Northern non-slaveholders. The issue of slavery had been complicated by the Supreme Court's ruling earlier in the year in Dred Scott v. Sandford that slavery could not be prohibited in the territories. Johnson, a slaveholding senator from a Southern state, made a major speech in the Senate the following May in an attempt to convince his colleagues that the Homestead Bill and slavery were not incompatible. Nevertheless, Southern opposition was key to defeating the legislation, 30–22.[73][74] In 1859, it failed on a procedural vote when Vice President Breckinridge broke a tie against the bill, and in 1860, a watered-down version passed both houses, only to be vetoed by Buchanan at the urging of Southerners.[75] Johnson continued his opposition to spending, chairing a committee to control it.
+
+ He argued against funding to build infrastructure in Washington, D.C., stating that it was unfair to expect state citizens to pay for the city's streets, even if it was the seat of government. He opposed spending money for troops to put down the revolt by the Mormons in Utah Territory, arguing for temporary volunteers as the United States should not have a standing army.[76]
+
+ In October 1859, abolitionist John Brown and sympathizers raided the federal arsenal at Harpers Ferry, Virginia (today West Virginia). Tensions in Washington between pro- and anti-slavery forces increased greatly. Johnson gave a major speech in the Senate in December, decrying Northerners who would endanger the Union by seeking to outlaw slavery. The Tennessee senator stated that "all men are created equal" from the Declaration of Independence did not apply to African Americans, since the Constitution of Illinois contained that phrase—and that document barred voting by African Americans.[77][78] Johnson, by this time, was a wealthy man who owned a few household slaves,[79] 14 slaves, according to the 1860 Federal Census.[80]
+
+ Johnson hoped that he would be a compromise candidate for the presidential nomination as the Democratic Party tore itself apart over the slavery question. Busy with the Homestead Bill during the 1860 Democratic National Convention in Charleston, South Carolina, he sent two of his sons and his chief political adviser to represent his interests in the backroom deal-making. The convention deadlocked, with no candidate able to gain the required two-thirds vote, but the sides were too far apart to consider Johnson as a compromise. The party split, with Northerners backing Illinois Senator Stephen Douglas while Southerners, including Johnson, supported Vice President Breckinridge for president. With former Tennessee senator John Bell running a fourth-party candidacy and further dividing the vote, the Republican Party elected its first president, former Illinois representative Abraham Lincoln. The election of Lincoln, known to be against the spread of slavery, was unacceptable to many in the South. Although secession from the Union had not been an issue in the campaign, talk of it began in the Southern states.[81][82]
+
+ Johnson took to the Senate floor after the election, giving a speech well received in the North, "I will not give up this government ... No; I intend to stand by it ... and I invite every man who is a patriot to ... rally around the altar of our common country ... and swear by our God, and all that is sacred and holy, that the Constitution shall be saved, and the Union preserved."[83][84] As Southern senators announced they would resign if their states seceded, he reminded Mississippi Senator Jefferson Davis that if Southerners would only hold to their seats, the Democrats would control the Senate, and could defend the South's interests against any infringement by Lincoln.[85] Gordon-Reed points out that while Johnson's belief in an indissoluble Union was sincere, he had alienated Southern leaders, including Davis, who would soon be the president of the Confederate States of America, formed by the seceding states. If the Tennessean had backed the Confederacy, he would have had small influence in its government.[86]
+
+ Johnson returned home when his state took up the issue of secession. His successor as governor, Isham G. Harris, and the legislature organized a referendum on whether to have a constitutional convention to authorize secession; when that failed, they put the question of leaving the Union to a popular vote. Despite threats on Johnson's life, and actual assaults, he campaigned against both questions, sometimes speaking with a gun on the lectern before him. Although Johnson's eastern region of Tennessee was largely against secession, the second referendum passed, and in June 1861, Tennessee joined the Confederacy. Believing he would be killed if he stayed, Johnson fled through the Cumberland Gap, where his party was in fact shot at. He left his wife and family in Greeneville.[87][88]
+
+ As the only member from a seceded state to remain in the Senate and the most prominent Southern Unionist, Johnson had Lincoln's ear in the early months of the war.[89] With most of Tennessee in Confederate hands, Johnson spent congressional recesses in Kentucky and Ohio, trying in vain to convince any Union commander who would listen to conduct an operation into East Tennessee.[90]
+
+ Johnson's first tenure in the Senate came to a conclusion in March 1862 when Lincoln appointed him military governor of Tennessee. Much of the central and western portion of that seceded state had been recovered. Although some argued that civil government should simply resume once the Confederates were defeated in an area, Lincoln chose to use his power as commander in chief to appoint military governors over Union-controlled Southern regions.[91] The Senate quickly confirmed Johnson's nomination along with the rank of brigadier general.[92] In response, the Confederates confiscated his land and his slaves, and turned his home into a military hospital.[93] Later in 1862, after his departure from the Senate and in the absence of most Southern legislators, the Homestead Bill was finally enacted. Along with legislation for land-grant colleges and for the transcontinental railroad, the Homestead Bill has been credited with opening the American West to settlement.[94]
+
+ As military governor, Johnson sought to eliminate rebel influence in the state. He demanded loyalty oaths from public officials, and shut down all newspapers owned by Confederate sympathizers. Much of eastern Tennessee remained in Confederate hands, and the ebb and flow of war during 1862 sometimes brought Confederate control again close to Nashville. However, the Confederates allowed his wife and family to pass through the lines to join him.[95][96] Johnson undertook the defense of Nashville as well as he could, though the city was continually harassed by cavalry raids led by General Nathan Bedford Forrest. Relief from Union regulars did not come until General William S. Rosecrans defeated the Confederates at Murfreesboro in early 1863. Much of eastern Tennessee was captured later that year.[97]
+
+ When Lincoln issued the Emancipation Proclamation in January 1863, declaring freedom for all slaves in Confederate-held areas, he exempted Tennessee at Johnson's request. The proclamation increased the debate over what should become of the slaves after the war, as not all Unionists supported abolition. Johnson finally decided that slavery had to end. He wrote, "If the institution of slavery ... seeks to overthrow it [the Government], then the Government has a clear right to destroy it."[98] He reluctantly supported efforts to enlist former slaves into the Union Army, feeling that African Americans should perform menial tasks to release white Americans to do the fighting.[99] Nevertheless, he succeeded in recruiting 20,000 black soldiers to serve the Union.[100]
+
+ In 1860, Lincoln's running mate had been Maine Senator Hannibal Hamlin. Vice President Hamlin had served competently, was in good health, and was willing to run again. Nevertheless, Johnson emerged as running mate for Lincoln's reelection bid in 1864.[101]
+
+ Lincoln considered several War Democrats for the ticket in 1864, and sent an agent to sound out General Benjamin Butler as a possible running mate. In May 1864, the President dispatched General Daniel Sickles to Nashville on a fact-finding mission. Although Sickles denied he was there either to investigate or interview the military governor, Johnson biographer Hans L. Trefousse believes Sickles's trip was connected to Johnson's subsequent nomination for vice president.[101] According to historian Albert Castel in his account of Johnson's presidency, Lincoln was impressed by Johnson's administration of Tennessee.[95] Gordon-Reed points out that while the Lincoln-Hamlin ticket might have been considered geographically balanced in 1860, "having Johnson, the southern War Democrat, on the ticket sent the right message about the folly of secession and the continuing capacity for union within the country."[102] Another factor was the desire of Secretary of State William Seward to frustrate the vice-presidential candidacy of his fellow New Yorker, former senator Daniel S. Dickinson, a War Democrat, as Seward would probably have had to yield his place if another New Yorker became vice president. Johnson, once he was told by reporters the likely purpose of Sickles' visit, was active on his own behalf, giving speeches and having his political friends work behind the scenes to boost his candidacy.[103]
+
+ To sound a theme of unity, Lincoln in 1864 ran under the banner of the National Union Party, rather than the Republicans.[102] At the party's convention in Baltimore in June, Lincoln was easily nominated, although there had been some talk of replacing him with a Cabinet officer or one of the more successful generals. After the convention backed Lincoln, former Secretary of War Simon Cameron offered a resolution to nominate Hamlin, but it was defeated. Johnson was nominated for vice president by C.M. Allen of Indiana with an Iowa delegate as seconder. On the first ballot, Johnson led with 200 votes to 150 for Hamlin and 108 for Dickinson. On the second ballot, Kentucky switched to vote for Johnson, beginning a stampede. Johnson was named on the second ballot with 491 votes to Hamlin's 17 and eight for Dickinson; the nomination was made unanimous. Lincoln expressed pleasure at the result, "Andy Johnson, I think, is a good man."[104] When word reached Nashville, a crowd assembled and the military governor obliged with a speech contending his selection as a Southerner meant that the rebel states had not actually left the Union.[104]
+
+ Although it was unusual at the time for a national candidate to actively campaign, Johnson gave a number of speeches in Tennessee, Kentucky, Ohio, and Indiana. He also sought to boost his chances in Tennessee, while reestablishing civil government there, by making the loyalty oath even more restrictive: voters would now have to swear that they opposed making a settlement with the Confederacy. The Democratic candidate for president, George McClellan, hoped to avoid additional bloodshed by negotiation, and so the stricter loyalty oath effectively disenfranchised his supporters. Lincoln declined to override Johnson, and their ticket took the state by 25,000 votes. Congress refused to count Tennessee's electoral votes, but Lincoln and Johnson did not need them, having won in most states that had voted, and easily secured the election.[105]
+
+ Now Vice President-elect, Johnson was anxious to complete the work of reestablishing civilian government in Tennessee, although the timetable for the election of a new governor did not allow it to take place until after Inauguration Day, March 4. He hoped to remain in Nashville to complete his task, but Lincoln's advisers told him that he could not stay and would have to be sworn in with Lincoln. In these months, Union troops finished the retaking of eastern Tennessee, including Greeneville. Just before his departure, on February 22, 1865, the voters of Tennessee ratified a new constitution abolishing slavery. One of Johnson's final acts as military governor was to certify the results.[106]
+
+ Johnson traveled to Washington to be sworn in, although according to Gordon-Reed, "in light of what happened on March 4, 1865, it might have been better if Johnson had stayed in Nashville."[107] He may have been ill; Castel cited typhoid fever,[95] though Gordon-Reed notes that there is no independent evidence for that diagnosis.[107] On the evening of March 3, Johnson attended a party in his honor; he drank heavily. Hung over the following morning at the Capitol, he asked Vice President Hamlin for some whiskey. Hamlin produced a bottle, and Johnson took two stiff drinks, stating "I need all the strength for the occasion I can have." In the Senate Chamber, Johnson delivered a rambling address as Lincoln, the Congress, and dignitaries looked on. Almost incoherent at times, he finally meandered to a halt, whereupon Hamlin hastily swore him in as vice president.[108] Lincoln, who had watched sadly during the debacle, then went to his own swearing-in outside the Capitol, and delivered his acclaimed Second Inaugural Address.[109]
+
+ In the weeks after the inauguration, Johnson presided over the Senate only briefly, and hid from public ridicule at the Maryland home of a friend, Francis Preston Blair. When he did return to Washington, it was with the intent of leaving for Tennessee to reestablish his family in Greeneville. Instead, he remained after word came that General Ulysses S. Grant had captured the Confederate capital of Richmond, Virginia, presaging the end of the war.[110] Lincoln stated, in response to criticism of Johnson's behavior, that "I have known Andy Johnson for many years; he made a bad slip the other day, but you need not be scared; Andy ain't a drunkard."[111]
+
+ On the afternoon of April 14, 1865, Lincoln and Johnson met for the first time since the inauguration. Trefousse states that Johnson wanted to "induce Lincoln not to be too lenient with traitors"; Gordon-Reed agrees.[112][113]
+
+ That night, President Lincoln was shot and mortally wounded by John Wilkes Booth, a Confederate sympathizer. The shooting of the President was part of a conspiracy to assassinate Lincoln, Johnson, and Seward the same night. Seward barely survived his wounds, while Johnson escaped attack as his would-be assassin, George Atzerodt, got drunk instead of killing the vice president. Leonard J. Farwell, a fellow boarder at the Kirkwood House, awoke Johnson with news of Lincoln's shooting at Ford's Theatre. Johnson rushed to the President's deathbed, where he remained a short time; on his return he promised, "They shall suffer for this. They shall suffer for this."[114] Lincoln died at 7:22 the next morning; Johnson's swearing-in occurred between 10 and 11 am with Chief Justice Salmon P. Chase presiding in the presence of most of the Cabinet. Johnson's demeanor was described by the newspapers as "solemn and dignified".[115] Some Cabinet members had last seen Johnson, apparently drunk, at the inauguration.[116] At noon, Johnson conducted his first Cabinet meeting in the Treasury Secretary's office, and asked all members to remain in their positions.[117]
+
+ The events of the assassination resulted in speculation, then and subsequently, concerning Johnson and what the conspirators might have intended for him. In the vain hope of having his life spared after his capture, Atzerodt spoke much about the conspiracy, but did not say anything to indicate that the plotted assassination of Johnson was merely a ruse. Conspiracy theorists point to the fact that on the day of the assassination, Booth came to the Kirkwood House and left one of his cards with Johnson's private secretary, William A. Browning. The message on it was: "Don't wish to disturb you. Are you at home? J. Wilkes Booth."[118]
+
+ Johnson presided with dignity over Lincoln's funeral ceremonies in Washington, before his predecessor's body was sent home to Springfield, Illinois, for interment.[119] Shortly after Lincoln's death, Union General William T. Sherman reported he had, without consulting Washington, reached an armistice agreement with Confederate General Joseph E. Johnston for the surrender of Confederate forces in North Carolina, in exchange for the existing state government remaining in power and private property rights, including property in slaves, being respected; the agreement thus did not even grant freedom to those in slavery. This was not acceptable to Johnson or the Cabinet, who sent word for Sherman to secure the surrender without making political deals, which he did. Further, Johnson placed a $100,000 bounty (equivalent to $1.67 million in 2019) on Confederate President Davis, then a fugitive, which gave Johnson the reputation of a man who would be tough on the South. More controversially, he permitted the execution of Mary Surratt for her part in Lincoln's assassination. Surratt was executed with three others, including Atzerodt, on July 7, 1865.[120]
+
+ Upon taking office, Johnson faced the question of what to do with the Confederacy. President Lincoln had authorized loyalist governments in Virginia, Arkansas, Louisiana, and Tennessee as the Union came to control large parts of those states, and had advocated a ten percent plan that would allow elections after ten percent of the voters in any state took an oath of future loyalty to the Union. Congress considered this too lenient; its own plan, requiring a majority of voters to take the loyalty oath, passed both houses in 1864, but Lincoln pocket vetoed it.[121]
+
+ Johnson had three goals in Reconstruction. He sought a speedy restoration of the states, on the grounds that they had never truly left the Union, and thus should again be recognized once loyal citizens formed a government. To Johnson, African-American suffrage was a delay and a distraction; it had always been a state responsibility to decide who should vote. Second, political power in the Southern states should pass from the planter class to his beloved "plebeians". Johnson feared that the freedmen, many of whom were still economically bound to their former masters, might vote at their direction. Johnson's third priority was election in his own right in 1868, a feat no one who had succeeded a deceased president had managed to accomplish; to that end, he sought to secure a Democratic coalition in the South opposed to Congressional Reconstruction.[122]
+
+ The Republicans had formed a number of factions. The Radical Republicans sought voting and other civil rights for African Americans. They believed that the freedmen could be induced to vote Republican in gratitude for emancipation, and that black votes could keep the Republicans in power and Southern Democrats, including former rebels, out of influence. They believed that top Confederates should be punished. The Moderate Republicans sought to keep the Democrats out of power at a national level, and prevent former rebels from resuming power. They were not as enthusiastic about the idea of African-American suffrage as their Radical colleagues, either because of their own local political concerns, or because they believed that the freedman would be likely to cast his vote badly. Northern Democrats favored the unconditional restoration of the Southern states. They did not support African-American suffrage, which might threaten Democratic control in the South.[123]
+
+ Johnson was initially left to devise a Reconstruction policy without legislative intervention, as Congress was not due to meet again until December 1865.[124] Radical Republicans told the President that the Southern states were economically in a state of chaos and urged him to use his leverage to insist on rights for freedmen as a condition of restoration to the Union. But Johnson, with the support of other officials including Seward, insisted that the franchise was a state, not a federal matter. The Cabinet was divided on the issue.[125]
+
+ Johnson's first Reconstruction actions were two proclamations, with the unanimous backing of his Cabinet, on May 29. One recognized the Virginia government led by provisional Governor Francis Pierpont. The second provided amnesty for all ex-rebels except those holding property valued at $20,000 or more; it also appointed a temporary governor for North Carolina and authorized elections. Neither of these proclamations included provisions regarding black suffrage or freedmen's rights. The President ordered constitutional conventions in other former rebel states.[126]
+
+ As Southern states began the process of forming governments, Johnson's policies received considerable public support in the North, which he took as unconditional backing for quick reinstatement of the South. While he received such support from the white South, he underestimated the determination of Northerners to ensure that the war had not been fought for nothing. It was important, in Northern public opinion, that the South acknowledge its defeat, that slavery be ended, and that the lot of African Americans be improved. Voting rights were less important—after all, only a handful of Northern states (mostly in New England) gave African-American men the right to vote on the same basis as whites, and in late 1865, Connecticut, Wisconsin, and Minnesota voted down African-American suffrage proposals by large margins. Northern public opinion tolerated Johnson's inaction on black suffrage as an experiment, to be allowed if it quickened Southern acceptance of defeat. Instead, white Southerners felt emboldened. A number of Southern states passed Black Codes, binding African-American laborers to farms on annual contracts they could not quit, and allowing law enforcement at whim to arrest them for vagrancy and rent out their labor. Most Southerners elected to Congress were former Confederates, with the most prominent being Georgia Senator-designate and former Confederate vice president Alexander Stephens. Congress assembled in early December 1865; Johnson's conciliatory annual message to them was well received. Nevertheless, Congress refused to seat the Southern legislators and established a committee to recommend appropriate Reconstruction legislation.[127]
+
+ Northerners were outraged at the idea of unrepentant Confederate leaders, such as Stephens, rejoining the federal government at a time when emotional wounds from the war remained raw. They saw the Black Codes placing African Americans in a position barely above slavery. Republicans also feared that restoration of the Southern states would return the Democrats to power.[128][129] In addition, according to David O. Stewart in his book on Johnson's impeachment, "the violence and poverty that oppressed the South would galvanize the opposition to Johnson".[130]
+
+ Congress was reluctant to confront the President, and initially only sought to fine-tune Johnson's policies towards the South.[131] According to Trefousse, "If there was a time when Johnson could have come to an agreement with the moderates of the Republican Party, it was the period following the return of Congress".[132] The President was unhappy about the provocative actions of the Southern states, and about the continued control by the antebellum elite there, but made no statement publicly, believing that Southerners had a right to act as they did, even if it was unwise to do so. By late January 1866, he was convinced that winning a showdown with the Radical Republicans was necessary to his political plans – both for the success of Reconstruction and for reelection in 1868. He would have preferred that the conflict arise over the legislative efforts to enfranchise African Americans in the District of Columbia, a proposal that had been defeated overwhelmingly in an all-white referendum. A bill to accomplish this passed the House of Representatives, but to Johnson's disappointment, stalled in the Senate before he could veto it.[133]
+
+ Illinois Senator Lyman Trumbull, leader of the Moderate Republicans and Chairman of the Judiciary Committee, was anxious to reach an understanding with the President. He ushered through Congress a bill extending the Freedmen's Bureau beyond its scheduled abolition in 1867, and the first Civil Rights Bill, to grant citizenship to the freedmen. Trumbull met several times with Johnson and was convinced the President would sign the measures (Johnson rarely contradicted visitors, often fooling those who met with him into thinking he was in accord). In fact, the President opposed both bills as infringements on state sovereignty. Additionally, both of Trumbull's bills were unpopular among white Southerners, whom Johnson hoped to include in his new party. Johnson vetoed the Freedmen's Bureau bill on February 18, 1866, to the delight of white Southerners and the puzzled anger of Republican legislators. He considered himself vindicated when a move to override his veto failed in the Senate the following day.[133] Johnson believed that the Radicals would now be isolated and defeated and that the moderate Republicans would form behind him; he did not understand that Moderates also wanted to see African Americans treated fairly.[134]
+
+ On February 22, 1866, Washington's Birthday, Johnson gave an impromptu speech to supporters who had marched to the White House and called for an address in honor of the first president. In his hour-long speech, he instead referred to himself over 200 times. More damagingly, he also spoke of "men ... still opposed to the Union" to whom he could not extend the hand of friendship he gave to the South.[135][136] When called upon by the crowd to say who they were, Johnson named Pennsylvania Congressman Thaddeus Stevens, Massachusetts Senator Charles Sumner, and abolitionist Wendell Phillips, and accused them of plotting his assassination. Republicans viewed the address as a declaration of war, while one Democratic ally estimated Johnson's speech cost the party 200,000 votes in the 1866 congressional midterm elections.[137]
+
+ Although strongly urged by moderates to sign the Civil Rights Act of 1866, Johnson broke decisively with them by vetoing it on March 27. In his veto message, he objected to the measure because it conferred citizenship on the freedmen at a time when 11 of the 36 states were unrepresented in Congress, and because, in his view, it discriminated in favor of African Americans and against whites.[138][139] Within three weeks, Congress had overridden his veto, the first time that had been done on a major bill in American history.[140] The veto, often seen as a key mistake of Johnson's presidency, convinced moderates there was no hope of working with him. Historian Eric Foner, in his volume on Reconstruction, views it as "the most disastrous miscalculation of his political career". According to Stewart, the veto was "for many his defining blunder, setting a tone of perpetual confrontation with Congress that prevailed for the rest of his presidency".[141]
+
+ Congress also proposed the Fourteenth Amendment to the states. Written by Trumbull and others, it was sent for ratification by state legislatures in a process in which the president plays no part, though Johnson opposed it. The amendment was designed to put the key provisions of the Civil Rights Act into the Constitution, but also went further. The amendment extended citizenship to every person born in the United States (except Indians on reservations), penalized states that did not give the vote to freedmen, and most importantly, created new federal civil rights that could be protected by federal courts. It also guaranteed that the federal debt would be paid and forbade repayment of Confederate war debts. Further, it disqualified many former Confederates from office, although the disability could be removed — by Congress, not the president.[142] Both houses passed the Freedmen's Bureau Act a second time, and again the President vetoed it; this time, the veto was overridden. By the summer of 1866, when Congress finally adjourned, Johnson's method of restoring states to the Union by executive fiat, without safeguards for the freedmen, was in deep trouble. His home state of Tennessee ratified the Fourteenth Amendment despite the President's opposition.[143] When Tennessee did so, Congress immediately seated its proposed delegation, embarrassing Johnson.[144]
+
+ Efforts to compromise failed,[145] and a political war ensued between the united Republicans on one side, and on the other, Johnson and his Northern and Southern allies in the Democratic Party. He called a convention of the National Union Party. Republicans had returned to using their previous identifier; Johnson intended to use the discarded name to unite his supporters and gain election to a full term, in 1868.[146] The battleground was the election of 1866; Southern states were not allowed to vote. Johnson campaigned vigorously, undertaking a public speaking tour, known as the "Swing Around the Circle". The trip, including speeches in Chicago, St. Louis, Indianapolis, and Columbus, proved politically disastrous, with the President making controversial comparisons between himself and Christ, and engaging in arguments with hecklers. These exchanges were attacked as beneath the dignity of the presidency. The Republicans won by a landslide, increasing their two-thirds majority in Congress, and made plans to control Reconstruction.[147] Johnson blamed the Democrats for giving only lukewarm support to the National Union movement.[148]
+
+ Even with the Republican victory in November 1866, Johnson considered himself in a strong position. The Fourteenth Amendment had been ratified by none of the Southern or border states except Tennessee, and had been rejected in Kentucky, Delaware, and Maryland. As the amendment required ratification by three-quarters of the states to become part of the Constitution, he believed the deadlock would be broken in his favor, leading to his election in 1868. Once it reconvened in December 1866, an energized Congress began passing legislation, often over a presidential veto; this included the District of Columbia voting bill. Congress admitted Nebraska to the Union over a veto, and the Republicans gained two senators and a state that promptly ratified the amendment. Johnson's veto of a bill for statehood for Colorado Territory was sustained: enough senators agreed that a district with a population of 30,000 was not yet worthy of statehood.[149]
+
+ In January 1867, Congressman Stevens introduced legislation to dissolve the Southern state governments and reconstitute them into five military districts, under martial law. The states would begin again by holding constitutional conventions. African Americans could vote for or become delegates; former Confederates could not. In the legislative process, Congress added to the bill that restoration to the Union would follow the state's ratification of the Fourteenth Amendment, and completion of the process of adding it to the Constitution. Johnson and the Southerners attempted a compromise, whereby the South would agree to a modified version of the amendment without the disqualification of former Confederates, and with limited black suffrage. The Republicans insisted on the full language of the amendment, and the deal fell through. Although Johnson could have pocket vetoed the First Reconstruction Act as it was presented to him less than ten days before the end of the Thirty-Ninth Congress, he chose to veto it directly on March 2, 1867; Congress overrode the veto the same day. Also on March 2, Congress passed the Tenure of Office Act over the President's veto, in response to statements during the Swing Around the Circle that he planned to fire Cabinet secretaries who did not agree with him. This bill, requiring Senate approval for the firing of Cabinet members during the tenure of the president who appointed them and for one month afterwards, was immediately controversial, with some senators doubting that it was constitutional or that its terms applied to Johnson, whose key Cabinet officers were Lincoln holdovers.[149]
+
+ Secretary of War Edwin Stanton was an able and hard-working man, but difficult to deal with.[150] Johnson both admired and was exasperated by his War Secretary, who, in combination with General of the Army Grant, worked to undermine the president's Southern policy from within his own administration. Johnson considered firing Stanton, but respected him for his wartime service as secretary. Stanton, for his part, feared allowing Johnson to appoint his successor and refused to resign, despite his public disagreements with his president.[151]
+
+ The new Congress met for a few weeks in March 1867, then adjourned, leaving the House Committee on the Judiciary behind, charged with reporting back to the full House whether there were grounds for Johnson to be impeached. This committee duly met, examined the President's bank accounts, and summoned members of the Cabinet to testify. When a federal court released former Confederate president Davis on bail on May 13 (he had been captured shortly after the war), the committee investigated whether the President had impeded the prosecution. It learned that Johnson was eager to have Davis tried. A bipartisan majority of the committee voted down impeachment charges; the committee adjourned on June 3.[152]
+
+ Later in June, Johnson and Stanton battled over the question of whether the military officers placed in command of the South could override the civil authorities. The President had Attorney General Henry Stanbery issue an opinion backing his position that they could not. Johnson sought to pin Stanton down as either for his position, and thus endorsing it, or against it, and thus openly opposed to his president and the rest of the Cabinet. Stanton evaded the point in meetings and written communications. When Congress reconvened in July, it passed a Reconstruction Act against Johnson's position, waited for his veto, overrode it, and went home. In addition to clarifying the powers of the generals, the legislation also deprived the President of control over the Army in the South. With Congress in recess until November, Johnson decided to fire Stanton and relieve one of the military commanders, General Philip Sheridan, who had dismissed the governor of Texas and installed a replacement with little popular support. Johnson was initially deterred by a strong objection from Grant, but on August 5, the President demanded Stanton's resignation; the secretary refused to quit with Congress out of session.[153] Johnson then suspended him pending the next meeting of Congress as permitted under the Tenure of Office Act; Grant agreed to serve as temporary replacement while continuing to lead the Army.[154]
+
+ Grant, under protest, followed Johnson's order transferring Sheridan and another of the district commanders, Daniel Sickles, who had angered Johnson by firmly following Congress's plan. The President also issued a proclamation pardoning most Confederates, exempting those who held office under the Confederacy, or who had served in federal office before the war but had breached their oaths. Although Republicans expressed anger with his actions, the 1867 elections generally went Democratic. No seats in Congress were directly elected in the polling, but the Democrats took control of the Ohio General Assembly, allowing them to deny reelection to one of Johnson's strongest opponents, Senator Benjamin Wade. Voters in Ohio, Connecticut, and Minnesota turned down propositions to grant African Americans the vote.[155]
+
+ The adverse results momentarily put a stop to Republican calls to impeach Johnson, who was elated by the elections.[156] Nevertheless, once Congress met in November, the Judiciary Committee reversed itself and passed a resolution of impeachment against Johnson. After much debate about whether anything the President had done was a high crime or misdemeanor, the standard under the Constitution, the resolution was defeated by the House of Representatives on December 7, 1867, by a vote of 57 in favor to 108 opposed.[157]
+
+ Johnson notified Congress of Stanton's suspension and Grant's interim appointment. In January 1868, the Senate disapproved of his action, and reinstated Stanton, contending the President had violated the Tenure of Office Act. Grant stepped aside over Johnson's objection, causing a complete break between them. Johnson then dismissed Stanton and appointed Lorenzo Thomas to replace him. Stanton refused to leave his office, and on February 24, 1868, the House impeached the President for intentionally violating the Tenure of Office Act, by a vote of 128 to 47. The House subsequently adopted eleven articles of impeachment, for the most part alleging that he had violated the Tenure of Office Act, and had questioned the legitimacy of Congress.[158]
+
+ On March 5, 1868, the impeachment trial began in the Senate and lasted almost three months; Congressmen George S. Boutwell, Benjamin Butler and Thaddeus Stevens acted as managers for the House, or prosecutors, and William M. Evarts, Benjamin R. Curtis and former Attorney General Stanbery were Johnson's counsel; Chief Justice Chase served as presiding judge.[159]
+
+ The defense relied on the provision of the Tenure of Office Act that made it applicable only to appointees of the current administration. Since Lincoln had appointed Stanton, the defense maintained Johnson had not violated the act, and also argued that the President had the right to test the constitutionality of an act of Congress.[160] Johnson's counsel insisted that he make no appearance at the trial, nor publicly comment about the proceedings, and except for a pair of interviews in April, he complied.[161]
+
+ Johnson maneuvered to gain an acquittal; for example, he pledged to Iowa Senator James W. Grimes that he would not interfere with Congress's Reconstruction efforts. Grimes reported to a group of Moderates, many of whom voted for acquittal, that he believed the President would keep his word. Johnson also promised to install the respected John Schofield as War Secretary.[162] Kansas Senator Edmund G. Ross received assurances that the new, Radical-influenced constitutions ratified in South Carolina and Arkansas would be transmitted to the Congress without delay, an action which would give him and other senators political cover to vote for acquittal.[163]
+
+ One reason senators were reluctant to remove the President was that his successor would have been Ohio Senator Wade, the president pro tempore of the Senate. Wade, a lame duck who left office in early 1869, was a Radical who supported such measures as women's suffrage, placing him beyond the pale politically in much of the nation.[164][165] Additionally, a President Wade was seen as an obstacle to Grant's ambitions.[166]
+
+ With the dealmaking, Johnson was confident of the result in advance of the verdict, and in the days leading up to the ballot, newspapers reported that Stevens and his Radicals had given up. On May 16, the Senate voted on the 11th article of impeachment, accusing Johnson of firing Stanton in violation of the Tenure of Office Act once the Senate had overturned his suspension. Thirty-five senators voted "guilty" and 19 "not guilty"; with 54 votes cast, this fell a single vote short of the 36 needed for the two-thirds majority required for conviction under the Constitution. Seven Republicans—Senators Grimes, Ross, Trumbull, William Pitt Fessenden, Joseph S. Fowler, John B. Henderson, and Peter G. Van Winkle—voted to acquit the President. With Stevens bitterly disappointed at the result, the Senate then adjourned for the Republican National Convention; Grant was nominated for president. The Senate returned on May 26 and voted on the second and third articles, with identical 35–19 results. Faced with those results, Johnson's opponents gave up and dismissed the proceedings.[167][168] Stanton "relinquished" his office on May 26, and the Senate subsequently confirmed Schofield.[169] When Johnson renominated Stanbery to return to his position as Attorney General after his service as defense counsel, the Senate refused to confirm him.[170]
+
+ Allegations were made at the time, and again later, that bribery dictated the outcome of the trial. Even while the trial was in progress, Representative Butler began an investigation, held contentious hearings, and issued a report, unendorsed by any other congressman. Butler focused on a New York–based "Astor House Group", supposedly led by political boss and editor Thurlow Weed. This organization was said to have raised large sums of money from whiskey interests through Cincinnati lawyer Charles Woolley to bribe senators to acquit Johnson. Butler went so far as to imprison Woolley in the Capitol building when he refused to answer questions, but failed to prove bribery.[171]
+
+ Soon after taking office as president, Johnson reached an accord with Secretary of State William H. Seward that there would be no change in foreign policy. In practice, this meant that Seward would continue to run things as he had under Lincoln. Seward and Lincoln had been rivals for the nomination in 1860; the victor hoped that Seward would succeed him as president in 1869. At the time of Johnson's accession, the French had intervened in Mexico, sending troops there. While many politicians had indulged in saber rattling over the Mexican matter, Seward preferred quiet diplomacy, warning the French through diplomatic channels that their presence in Mexico was not acceptable. Although the President preferred a more aggressive approach, Seward persuaded him to follow his lead. In April 1866, the French government informed Seward that its troops would be brought home in stages, to conclude by November 1867.[172]
+
+ Seward was an expansionist, and sought opportunities to gain territory for the United States. By 1867, the Russian government saw its North American colony (today Alaska) as a financial liability, and feared losing control as American settlement reached there. It instructed its minister in Washington, Baron Eduard de Stoeckl, to negotiate a sale. De Stoeckl did so deftly, getting Seward to raise his offer from $5 million (coincidentally, the minimum that Russia had instructed de Stoeckl to accept) to $7 million, and then getting $200,000 added by raising various objections.[173] This sum of $7.2 million is equivalent to $132 million in present-day terms.[174] On March 30, 1867, de Stoeckl and Seward signed the treaty, working quickly as the Senate was about to adjourn. Johnson and Seward took the signed document to the President's Room in the Capitol, only to be told there was no time to deal with the matter before adjournment. The President summoned the Senate into session to meet on April 1; that body approved the treaty, 37–2.[175] Emboldened by his success in Alaska, Seward sought acquisitions elsewhere. His only success was staking an American claim to uninhabited Wake Island in the Pacific, which would be officially claimed by the U.S. in 1898. He came close with the Danish West Indies as Denmark agreed to sell and the local population approved the transfer in a plebiscite, but the Senate never voted on the treaty and it expired.[176]
+
+ Another treaty that fared badly was the Johnson-Clarendon convention, negotiated in settlement of the Alabama Claims, for damages to American shipping from British-built Confederate raiders. Negotiated by the United States Minister to Britain, former Maryland senator Reverdy Johnson, in late 1868, it was ignored by the Senate during the remainder of the President's term. The treaty was rejected after he left office, and the Grant administration later negotiated considerably better terms from Britain.[177][178]
+
+ Johnson appointed nine Article III federal judges during his presidency, all to United States district courts; he did not appoint a justice to serve on the Supreme Court. In April 1866, he nominated Henry Stanbery to fill the vacancy left by the death of John Catron, but Congress eliminated the seat to prevent the appointment and, to ensure that Johnson did not get to make any appointments at all, eliminated the next vacancy as well, providing that the court would shrink by one justice when one next departed from office.[179] Johnson appointed his Greeneville crony, Samuel Milligan, to the United States Court of Claims, where he served from 1868 until his death in 1874.[180][181]
+
+ In June 1866, Johnson signed the Southern Homestead Act into law, believing that the legislation would assist poor whites. Around 28,000 land claims were successfully patented, although few former slaves benefitted from the law, fraud was rampant, and much of the best land was off-limits, reserved for grants to veterans or railroads.[182] In June 1868, Johnson signed an eight-hour law passed by Congress that established an eight-hour workday for laborers and mechanics employed by the Federal Government.[183] Although Johnson told members of a Workingmen's party delegation in Baltimore that he could not directly commit himself to an eight-hour day, he nevertheless assured the same delegation that he greatly favoured the "shortest number of hours consistent with the interests of all".[184] According to Richard F. Selcer, however, the good intentions behind the law were "immediately frustrated" as wages were cut by 20%.[183]
+
+ Johnson sought nomination by the 1868 Democratic National Convention in New York in July 1868. He remained very popular among Southern whites, and boosted that popularity by issuing, just before the convention, a pardon ending the possibility of criminal proceedings against any Confederate not already indicted, meaning that only Davis and a few others still might face trial. On the first ballot, Johnson was second to former Ohio representative George H. Pendleton, who had been his Democratic opponent for vice president in 1864. Johnson's support was mostly from the South, and fell away as the balloting continued. On the 22nd ballot, former New York governor Horatio Seymour was nominated, and the President received only four votes, all from Tennessee.[185]
+
+ The conflict with Congress continued. Johnson sent Congress proposals for amendments to limit the president to a single six-year term and make the president and the Senate directly elected, and for term limits for judges. Congress took no action on them. When the President was slow to officially report ratifications of the Fourteenth Amendment by the new Southern legislatures, Congress passed a bill, again over his veto, requiring him to do so within ten days of receipt. He still delayed as much as he could, but was required, in July 1868, to report the ratifications making the amendment part of the Constitution.[186]
+
+ Seymour's operatives sought Johnson's support, but he long remained silent on the presidential campaign. It was not until October, with the vote already having taken place in some states, that he mentioned Seymour at all, and he never endorsed him. Nevertheless, Johnson regretted Grant's victory, in part because of their animus from the Stanton affair. In his annual message to Congress in December, Johnson urged the repeal of the Tenure of Office Act and told legislators that had they admitted their Southern colleagues in 1865, all would have been well. He celebrated his 60th birthday in late December with a party for several hundred children, though not including those of President-elect Grant, who did not allow his children to go.[187]
+
+ On Christmas Day 1868, Johnson issued a final amnesty, this one covering everyone, including Davis. He also issued, in his final months in office, pardons for crimes, including one for Dr. Samuel Mudd, controversially convicted of involvement in the Lincoln assassination (he had set Booth's broken leg) and imprisoned in Fort Jefferson on Florida's Dry Tortugas.[187]
+
+ On March 3, his final full day in office, the President hosted a large public reception at the White House. Grant had made it known that he was unwilling to ride in the same carriage as Johnson, as was customary, and Johnson refused to go to the inauguration at all. Despite an effort by Seward to prompt a change of mind, he spent the morning of March 4 finishing last-minute business, and then shortly after noon rode from the White House to the home of a friend.[188][189]
+
+ After leaving the presidency, Johnson remained for some weeks in Washington, then returned to Greeneville for the first time in eight years. He was honored with large public celebrations along the way, especially in Tennessee, where cities hostile to him during the war hung out welcome banners. He had arranged to purchase a large farm near Greeneville to live on after his presidency.[190]
+
+ Some expected Johnson to run for Governor of Tennessee or for the Senate again, while others thought that he would become a railroad executive.[178] Johnson found Greeneville boring, and his private life was embittered by the suicide of his son Robert in 1869.[191] Seeking vindication for himself, and revenge against his political enemies, he launched a Senate bid soon after returning home. Tennessee had gone Republican, but court rulings restoring the vote to some whites and the violence of the Ku Klux Klan kept down the African-American vote, leading to a Democratic victory in the legislative elections in August 1869. Johnson was seen as a likely victor in the Senate election, although hated by Radical Republicans, and also by some Democrats because of his wartime activities. Although he was at one point within a single vote of victory in the legislature's balloting, the Republicans eventually elected Henry Cooper over Johnson, 54–51.[192] In 1872, there was a special election for an at-large congressional seat for Tennessee; Johnson initially sought the Democratic nomination, but when he saw that it would go to former Confederate general Benjamin F. Cheatham, he decided to run as an independent. The former president was defeated, finishing third, but the split in the Democratic vote cost Cheatham the seat, which went to an old Johnson Unionist ally, Horace Maynard.[193]
+
+ In 1873, Johnson contracted cholera during an epidemic but recovered; that year he lost about $73,000 when the First National Bank of Washington went under, though he was eventually repaid much of the sum.[194] He began looking towards the next Senate election, to take place in the legislature in early 1875. Johnson began to woo the farmers' Grange movement; with his Jeffersonian leanings, he easily gained their support. He spoke throughout the state in his final campaign tour. Few African Americans outside the large towns were now able to vote as Reconstruction faded in Tennessee, setting a pattern that would be repeated in the other Southern states; white domination would last almost a century. In the Tennessee legislative elections in August, the Democrats elected 92 legislators to the Republicans' eight, and Johnson went to Nashville for the legislative session. When the balloting for the Senate seat began on January 20, 1875, he led with 30 votes, but did not have the required majority as three former Confederate generals, one former colonel, and a former Democratic congressman split the vote with him. Johnson's opponents tried to agree on a single candidate who might gain majority support and defeat him, but failed, and he was elected on January 26 on the 54th ballot, with a margin of a single vote. Nashville erupted in rejoicing;[195][196] Johnson remarked, "Thank God for the vindication."[191]
+
+ Johnson's comeback garnered national attention, with the St. Louis Republican calling it "the most magnificent personal triumph which the history of American politics can show".[196] At his swearing-in in the Senate on March 5, 1875, he was greeted with flowers and sworn in alongside Hamlin, his predecessor as vice president, by the office's current incumbent, Henry Wilson, who as a senator had voted for Johnson's removal. Many Republicans ignored Senator Johnson, though some, such as Ohio's John Sherman (who had voted for conviction), shook his hand. Johnson remains the only former president to serve in the Senate. He spoke only once in the short session, on March 22, lambasting President Grant for his use of federal troops in support of Louisiana's Reconstruction government. The former president asked, "How far off is military despotism?" and concluded his speech, "may God bless this people and God save the Constitution."[197]
+
+ Johnson returned home after the special session concluded. In late July 1875, convinced some of his opponents were defaming him in the Ohio gubernatorial race, he decided to travel there to give speeches. He began the trip on July 28, and broke the journey at his daughter Mary's farm near Elizabethton, where his daughter Martha was also staying. That evening he suffered a stroke, but refused medical treatment until the next day; when he did not improve, two doctors were sent for from Elizabethton. He seemed to respond to their ministrations, but suffered another stroke on the evening of July 30, and died early the following morning at the age of 66. President Grant had the "painful duty" of announcing the death of the only surviving past president. Northern newspapers, in their obituaries, tended to focus on Johnson's loyalty during the war, while Southern ones paid tribute to his actions as president. Johnson's funeral was held on August 3 in Greeneville.[198][199] He was buried with his body wrapped in an American flag and a copy of the U.S. Constitution placed under his head, according to his wishes. The burial ground was dedicated as the Andrew Johnson National Cemetery in 1906, and, with his home and tailor's shop, is part of the Andrew Johnson National Historic Site.[200]
+
+ According to Castel, "historians [of Johnson's presidency] have tended to concentrate to the exclusion of practically everything else upon his role in that titanic event [Reconstruction]".[201] Through the remainder of the 19th century, there were few historical evaluations of Johnson and his presidency. Memoirs from Northerners who had dealt with him, such as former vice president Henry Wilson and Maine Senator James G. Blaine, depicted him as an obstinate boor who tried to favor the South in Reconstruction, but who was frustrated by Congress.[202] According to historian Howard K. Beale in his journal article about the historiography of Reconstruction, "Men of the postwar decades were more concerned with justifying their own position than they were with painstaking search for truth. Thus [Alabama congressman and historian] Hilary Herbert and his corroborators presented a Southern indictment of Northern policies, and Henry Wilson's history was a brief for the North."[203]
+
+ The turn of the 20th century saw the first significant historical evaluations of Johnson. Leading the wave was Pulitzer Prize-winning historian James Ford Rhodes, who wrote of the former president:[202]
+
+ Johnson acted in accordance with his nature. He had intellectual force but it worked in a groove. Obstinate rather than firm it undoubtedly seemed to him that following counsel and making concessions were a display of weakness. At all events from his December message to the veto of the Civil Rights Bill he yielded not a jot to Congress. The moderate senators and representatives (who constituted a majority of the Union party) asked him for only a slight compromise; their action was really an entreaty that he would unite with them to preserve Congress and the country from the policy of the radicals ... His quarrel with Congress prevented the readmission into the Union on generous terms of the members of the late Confederacy ... His pride of opinion, his desire to beat, blinded him to the real welfare of the South and of the whole country.[204]
+
+ Rhodes ascribed Johnson's faults to his personal weaknesses, and blamed him for the problems of the postbellum South.[203] Other early 20th-century historians, such as John Burgess, Woodrow Wilson (who later became president himself) and William Dunning, all Southerners, concurred with Rhodes, believing Johnson flawed and politically inept, but concluding that he had tried to carry out Lincoln's plans for the South in good faith.[205] Author and journalist Jay Tolson suggests that Wilson "depict[ed Reconstruction] as a vindictive program that hurt even repentant southerners while benefiting northern opportunists, the so-called Carpetbaggers, and cynical white southerners, or Scalawags, who exploited alliances with blacks for political gain".[206]
+
+ Even as Rhodes and his school wrote, another group of historians was setting out on the full rehabilitation of Johnson, using for the first time primary sources such as his papers, provided by his daughter Martha before her death in 1901, and the diaries of Johnson's Navy Secretary, Gideon Welles, first published in 1911. The resulting volumes, such as David Miller DeWitt's The Impeachment and Trial of President Andrew Johnson (1903), presented him far more favorably than they did those who had sought to oust him. In James Schouler's 1913 History of the Reconstruction Period, the author accused Rhodes of being "quite unfair to Johnson", though agreeing that the former president had created many of his own problems through inept political moves. These works had an effect; although historians continued to view Johnson as having deep flaws which sabotaged his presidency, they saw his Reconstruction policies as fundamentally correct.[207]
+
+ Castel writes:
+
+ at the end of the 1920s, an historiographical revolution took place. In the span of three years five widely read books appeared, all highly pro-Johnson. ... They differed in general approach and specific interpretations, but they all glorified Johnson and condemned his enemies. According to these writers, Johnson was a humane, enlightened, and liberal statesman who waged a courageous battle for the Constitution and democracy against scheming and unscrupulous Radicals, who were motivated by a vindictive hatred of the South, partisanship, and a desire to establish the supremacy of Northern "big business". In short, rather than a boor, Johnson was a martyr; instead of a villain, a hero.[208]
+
+ Beale wondered in 1940, "is it not time that we studied the history of Reconstruction without first assuming, at least subconsciously, that carpetbaggers and Southern white Republicans were wicked, that Negroes were illiterate incompetents, and that the whole white South owes a debt of gratitude to the restorers of 'white supremacy'?"[209] Despite these doubts, the favorable view of Johnson survived for a time. In 1942, Van Heflin portrayed the former president as a fighter for democracy in the Hollywood film Tennessee Johnson. In 1948, a poll of his colleagues by historian Arthur M. Schlesinger deemed Johnson among the average presidents; in 1956, one by Clinton L. Rossiter named him as one of the near-great Chief Executives.[210] Foner notes that at the time of these surveys, "the Reconstruction era that followed the Civil War was regarded as a time of corruption and misgovernment caused by granting black men the right to vote".[211]
+
+ Earlier historians, including Beale, believed that money drove events, and had seen Reconstruction as an economic struggle. They also accepted, for the most part, that reconciliation between North and South should have been the top priority of Reconstruction. In the 1950s, historians began to focus on the African-American experience as central to Reconstruction. They rejected completely any claim of black inferiority, which had marked many earlier historical works, and saw the developing civil rights movement as a second Reconstruction; some writers stated they hoped their work on the postbellum era would advance the cause of civil rights. These authors sympathized with the Radical Republicans for their desire to help the African American, and saw Johnson as callous towards the freedman. In a number of works from 1956 onwards by such historians as Fawn Brodie, the former president was depicted as a successful saboteur of efforts to better the freedman's lot. These volumes included major biographies of Stevens and Stanton.[212] Reconstruction was increasingly seen as a noble effort to integrate the freed slaves into society.[206][211]
208
+
209
+ In the early 21st century, Johnson is among those commonly mentioned as the worst presidents in U.S. history.[206] According to historian Glenn W. Lafantasie, who believes Buchanan the worst president, "Johnson is a particular favorite for the bottom of the pile because of his impeachment ... his complete mishandling of Reconstruction policy ... his bristling personality, and his enormous sense of self-importance."[213] Tolson suggests that "Johnson is now scorned for having resisted Radical Republican policies aimed at securing the rights and well-being of the newly emancipated African-Americans".[206] Gordon-Reed notes that Johnson, along with his contemporaries Pierce and Buchanan, is generally listed among the five worst presidents, but states, "there have never been more difficult times in the life of this nation. The problems these men had to confront were enormous. It would have taken a succession of Lincolns to do them justice."[214]
210
+
211
+ Trefousse considers Johnson's legacy to be "the maintenance of white supremacy. His boost to Southern conservatives by undermining Reconstruction was his legacy to the nation, one that would trouble the country for generations to come."[215] Gordon-Reed states of Johnson:
212
+
213
+ We know the results of Johnson's failures—that his preternatural stubbornness, his mean and crude racism, his primitive and instrumental understanding of the Constitution stunted his capacity for enlightened and forward-thinking leadership when those qualities were so desperately needed. At the same time, Johnson's story has a miraculous quality to it: the poor boy who systematically rose to the heights, fell from grace, and then fought his way back to a position of honor in the country. For good or ill, 'only in America,' as they say, could Johnson's story unfold in the way that it did.[216]
214
+
en/2260.html.txt ADDED
@@ -0,0 +1,77 @@
1
+
2
+
3
+ The Great Pyramid of Giza (also known as the Pyramid of Khufu or the Pyramid of Cheops) is the oldest and largest of the three pyramids in the Giza pyramid complex bordering present-day Giza in Greater Cairo, Egypt. It is the oldest of the Seven Wonders of the Ancient World, and the only one to remain largely intact.
4
+
5
+ Based on a mark in an interior chamber naming the work gang and a reference to the Fourth Dynasty Egyptian pharaoh Khufu, Egyptologists believe that the pyramid was built as a tomb over a 10- to 20-year period concluding around 2560 BC. Initially standing at 146.5 metres (481 feet), the Great Pyramid was the tallest man-made structure in the world for more than 3,800 years until Lincoln Cathedral was finished in 1311 AD. It is estimated that the pyramid weighs approximately 6 million tonnes, and consists of 2.3 million blocks of limestone and granite, some weighing as much as 80 tonnes. Originally, the Great Pyramid was covered by limestone casing stones that formed a smooth outer surface; what is seen today is the underlying core structure. Some of the casing stones that once covered the structure can still be seen around the base. There have been varying scientific and alternative theories about the Great Pyramid's construction techniques. Most accepted construction hypotheses are based on the idea that it was built by moving huge stones from a quarry and dragging and lifting them into place.
6
+
7
+ There are three known chambers inside the Great Pyramid. The lowest chamber is cut into the bedrock upon which the pyramid was built and was unfinished. The so-called[2] Queen's Chamber and King's Chamber are higher up within the pyramid structure. The main part of the Giza complex is a set of buildings that included two mortuary temples in honour of Khufu (one close to the pyramid and one near the Nile), three smaller pyramids for Khufu's wives, an even smaller "satellite" pyramid, a raised causeway connecting the two temples, and small mastaba tombs for nobles surrounding the pyramid.
8
+
9
+ Egyptologists believe the pyramid was built as a tomb for the Fourth Dynasty Egyptian pharaoh Khufu (often Hellenized as "Cheops") and was constructed over a 20-year period. Khufu's vizier, Hemiunu (also called Hemon), is believed by some to be the architect of the Great Pyramid.[3] It is thought that, at construction, the Great Pyramid was originally 146.6 metres (481.0 ft) tall, but with the removal of its original casing, its present height is 137 metres (449.5 ft). The lengths of the sides at the base are difficult to reconstruct, given the absence of the casing, but recent analyses put them in a range between 230.26 metres (755.4 ft) and 230.44 metres (756.0 ft). The volume, including an internal hillock, is roughly 2,300,000 cubic metres (81,000,000 cu ft).[4]
10
+
11
+ The first precision measurements of the pyramid were made by Egyptologist Sir Flinders Petrie in 1880–82 and published as The Pyramids and Temples of Gizeh.[5] Almost all reports are based on his measurements. Many of the casing-stones and inner chamber blocks of the Great Pyramid fit together with extremely high precision. Based on measurements taken on the north-eastern casing stones, the mean opening of the joints is only 0.5 millimetres (0.020 in) wide.[6]
12
+
13
+ The pyramid remained the tallest man-made structure in the world for over 3,800 years,[7] unsurpassed until the 160-metre-tall (520 ft) spire of Lincoln Cathedral was completed c. 1300. The accuracy of the pyramid's workmanship is such that the four sides of the base have an average error of only 58 millimetres in length.[8] The base is horizontal and flat to within ±15 mm (0.6 in).[9] The sides of the square base are closely aligned to the four cardinal compass points (within four minutes of arc)[10] based on true north, not magnetic north,[11] and the finished base was squared to a mean corner error of only 12 seconds of arc.[12]
14
+
15
+ The completed design dimensions, as suggested by Petrie's survey and subsequent studies, are estimated to have originally been 280 Egyptian Royal cubits high by 440 cubits long at each of the four sides of its base. The ratio of the perimeter to height of 1760/280 Egyptian Royal cubits equates to 2π to an accuracy of better than 0.05 percent (corresponding to the well-known approximation of π as 22/7). Some Egyptologists consider this to have been the result of deliberate design proportion. Verner wrote, "We can conclude that although the ancient Egyptians could not precisely define the value of π, in practice they used it".[13] Petrie concluded: "but these relations of areas and of circular ratio are so systematic that we should grant that they were in the builder's design".[14] Others have argued that the ancient Egyptians had no concept of pi and would not have thought to encode it in their monuments. They believe that the observed pyramid slope may be based on a simple seked slope choice alone, with no regard to the overall size and proportions of the finished building.[15] In 2013, rolls of papyrus called the Diary of Merer were discovered, written by some of those who delivered limestone and other construction materials from Tura to Giza.[16]
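+
+ The arithmetic behind this ratio is easy to check. Below is a minimal sketch in Python (an illustration, not part of the source; the figures 280 and 440 cubits are the survey estimates quoted above):
+
+ import math
+
+ # Design dimensions in Egyptian Royal cubits, per the survey estimates above.
+ height = 280
+ perimeter = 4 * 440  # four base sides of 440 cubits each, i.e. 1760 cubits
+
+ ratio = perimeter / height  # 1760/280 = 44/7, twice the 22/7 approximation of pi
+ error = abs(ratio - 2 * math.pi) / (2 * math.pi)
+
+ print(round(ratio, 6))        # 6.285714
+ print(round(2 * math.pi, 6))  # 6.283185
+ print(round(error * 100, 3))  # 0.04 (percent), better than the 0.05 percent quoted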
16
+
17
+ The Great Pyramid consists of an estimated 2.3 million blocks which most believe to have been transported from nearby quarries. The Tura limestone used for the casing was quarried across the river. The largest granite stones in the pyramid, found in the "King's" chamber, weigh 25 to 80 tonnes and were transported from Aswan, more than 800 km (500 mi) away.[citation needed] Ancient Egyptians cut stone into rough blocks by hammering grooves into natural stone faces, inserting wooden wedges, then soaking these with water. As the water was absorbed, the wedges expanded, breaking off workable chunks. Once the blocks were cut, they were carried by boat either up or down the Nile River to the pyramid.[17] It is estimated that 5.5 million tonnes of limestone, 8,000 tonnes of granite (imported from Aswan), and 500,000 tonnes of mortar were used in the construction of the Great Pyramid.[18]
18
+
19
+ At completion, the Great Pyramid was surfaced with white "casing stones"—slant-faced, but flat-topped, blocks of highly polished white limestone. These were carefully cut to what is approximately a face slope with a seked of 5½ palms to give the required dimensions. Visibly, all that remains is the underlying stepped core structure seen today.[citation needed] In 1303 AD, a massive earthquake loosened many of the outer casing stones, which in 1356 were carted away by Bahri Sultan An-Nasir Nasir-ad-Din al-Hasan to build mosques and fortresses in nearby Cairo.[citation needed] Many more casing stones were removed from the great pyramids by Muhammad Ali Pasha in the early 19th century to build the upper portion of his Alabaster Mosque in Cairo, not far from Giza.[citation needed] These limestone casings can still be seen as parts of these structures. Later explorers reported massive piles of rubble at the base of the pyramids left over from the continuing collapse of the casing stones, which were subsequently cleared away during continuing excavations of the site.[citation needed]
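+
+ A seked expresses a slope as the horizontal run, in palms, for a vertical rise of one royal cubit of seven palms. As an illustrative sketch (not from the source), the face angle implied by a seked of 5½ palms works out as follows in Python:
+
+ import math
+
+ PALMS_PER_CUBIT = 7  # one royal cubit = 7 palms
+ seked = 5.5          # horizontal run in palms per one-cubit rise
+
+ # The face slope is the rise (7 palms) over the run (5.5 palms).
+ angle = math.degrees(math.atan(PALMS_PER_CUBIT / seked))
+ print(round(angle, 2))  # 51.84 degrees, close to the measured slope of the faces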
20
+
21
+ Nevertheless, a few of the casing stones from the lowest course can be seen to this day in situ around the base of the Great Pyramid, and display the same workmanship and precision that has been reported for centuries. Petrie also found a different orientation in the core and in the casing, measuring 193 centimetres ± 25 centimetres. He suggested a redetermination of north was made after the construction of the core, but a mistake was made, and the casing was built with a different orientation.[5] Petrie described the precision of the casing stones as being "equal to opticians' work of the present day, but on a scale of acres" and noted that "to place such stones in exact contact would be careful work; but to do so with cement in the joints seems almost impossible".[20] It has been suggested it was the mortar (Petrie's "cement") that made this seemingly impossible task possible, providing a level bed, which enabled the masons to set the stones exactly.[21][22]
22
+
23
+ Many alternative, often contradictory, theories have been proposed regarding the pyramid's construction techniques.[23] Many disagree on whether the blocks were dragged, lifted, or even rolled into place. The Greeks believed that slave labour was used, but modern discoveries made at nearby workers' camps associated with construction at Giza suggest that it was built instead by tens of thousands of skilled workers. Verner posited that the labour was organized into a hierarchy, consisting of two gangs of 100,000 men, divided into five zaa or phyle of 20,000 men each, which may have been further divided according to the skills of the workers.[24]
24
+
25
+ One mystery of the pyramid's construction is its planning. John Romer suggests that the builders used the same method that had been used for earlier and later constructions, laying out parts of the plan on the ground at a 1-to-1 scale. He writes that "such a working diagram would also serve to generate the architecture of the pyramid with precision unmatched by any other means".[25] He also argues for a 14-year time-span for its construction.[26] A modern construction management study, in association with Mark Lehner and other Egyptologists, estimated that the total project required an average workforce of about 14,500 people and a peak workforce of roughly 40,000. Using critical path analysis methods, the study suggests that the Great Pyramid, built without the use of pulleys, wheels, or iron tools, was completed from start to finish in approximately 10 years.[27]
26
+
27
+ The original entrance to the Great Pyramid is on the north, 17 metres (56 ft) vertically above ground level and 7.29 metres (23.9 ft) east of the center line of the pyramid. From this original entrance, there is a Descending Passage 0.96 metres (3.1 ft) high and 1.04 metres (3.4 ft) wide, which goes down at an angle of 26° 31'23" through the masonry of the pyramid and then into the bedrock beneath it. After 105.23 metres (345.2 ft), the passage becomes level and continues for an additional 8.84 metres (29.0 ft) to the Lower Chamber, which appears not to have been finished. There is a continuation of the horizontal passage in the south wall of the Lower Chamber; there is also a pit dug in the floor of the chamber. Some Egyptologists suggest that this Lower Chamber was intended to be the original burial chamber, but Pharaoh Khufu later changed his mind and wanted it to be higher up in the pyramid.[28]
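+
+ As an illustrative calculation (not from the source), the figures above give the approximate depth the Descending Passage reaches:
+
+ import math
+
+ entrance_height = 17.0        # metres above ground level
+ passage_length = 105.23       # sloping length of the passage in metres
+ angle = 26 + 31/60 + 23/3600  # 26 deg 31' 23" in decimal degrees
+
+ vertical_drop = passage_length * math.sin(math.radians(angle))
+ print(round(vertical_drop, 1))                    # ~47.0 m of total vertical descent
+ print(round(vertical_drop - entrance_height, 1))  # ~30.0 m below ground at the level section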
28
+
29
+ 28.2 metres (93 ft) from the entrance is a square hole in the roof of the Descending Passage. Originally concealed with a slab of stone, this is the beginning of the Ascending Passage.[citation needed] The Ascending Passage is 39.3 metres (129 ft) long, as wide and high as the Descending Passage and slopes up at almost precisely the same angle to reach the Grand Gallery. The lower end of the Ascending Passage is closed by three huge blocks of granite, each about 1.5 metres (4.9 ft) long.[citation needed] One must use the Robbers' Tunnel (see below) to access the Ascending Passage.[citation needed] At the start of the Grand Gallery on the right-hand side there is a hole cut in the wall. This is the start of a vertical shaft which follows an irregular path through the masonry of the pyramid to join the Descending Passage. Also at the start of the Grand Gallery there is the Horizontal Passage leading to the "Queen's Chamber". The passage is 1.1 metres (3 ft 8 in) high for most of its length, but near the chamber there is a step in the floor, after which the passage is 1.73 metres (5.7 ft) high.[citation needed]
30
+
31
+ The "Queen's Chamber"[2] is exactly halfway between the north and south faces of the pyramid and measures 5.75 metres (18.9 ft) north to south, 5.23 metres (17.2 ft) east to west, and has a pointed roof with an apex 6.23 metres (20.4 ft) above the floor. At the eastern end of the chamber there is a niche 4.67 metres (15.3 ft) high. The original depth of the niche was 1.04 metres (3.4 ft), but has since been deepened by treasure hunters.[29]
32
+
33
+ In the north and south walls of the Queen's Chamber there are shafts, which, unlike those in the King's Chamber that immediately slope upwards (see below), are horizontal for around 2 m (6.6 ft) before sloping upwards. The horizontal sections were cut open in 1872 by a British engineer, Waynman Dixon, who believed that shafts similar to those in the King's Chamber must also exist. He was proved right, but because the shafts are not connected to the outer faces of the pyramid or the Queen's Chamber, their purpose is unknown. At the end of one of his shafts, Dixon discovered a ball of black diorite (a type of rock) and a bronze implement of unknown purpose. Both objects are currently in the British Museum.[30]
34
+
35
+ The shafts in the Queen's Chamber were explored in 1993 by the German engineer Rudolf Gantenbrink using a crawler robot he designed, Upuaut 2. After a climb of 65 m (213 ft),[31] he discovered that one of the shafts was blocked by limestone "doors" with two eroded copper "handles". Some years later the National Geographic Society created a similar robot which, in September 2002, drilled a small hole in the southern door, only to find another door behind it.[32] The northern passage, which was difficult to navigate because of twists and turns, was also found to be blocked by a door.[33]
36
+
37
+ Research continued in 2011 with the Djedi Project. Realizing that the National Geographic Society's camera had only been able to see straight ahead, the team instead used a fibre-optic "micro snake camera" that could see around corners. With this they were able to penetrate the first door of the southern shaft through the hole drilled in 2002, and view all the sides of the small chamber behind it. They discovered hieroglyphs written in red paint. They were also able to scrutinize the inside of the two copper "handles" embedded in the door, and they now believe them to be for decorative purposes. They also found the reverse side of the "door" to be finished and polished, which suggests that it was not put there just to block the shaft from debris, but rather for a more specific reason.[34]
38
+
39
+ The Grand Gallery continues the slope of the Ascending Passage, but is 8.6 metres (28 ft) high and 46.68 metres (153.1 ft) long. At the base it is 2.06 metres (6.8 ft) wide, but after 2.29 metres (7.5 ft) the blocks of stone in the walls are corbelled inwards by 7.6 centimetres (3.0 in) on each side.[citation needed] There are seven of these steps, so, at the top, the Grand Gallery is only 1.04 metres (3.4 ft) wide. It is roofed by slabs of stone laid at a slightly steeper angle than the floor of the gallery, so that each stone fits into a slot cut in the top of the gallery like the teeth of a ratchet. The purpose was to have each block supported by the wall of the Gallery, rather than resting on the block beneath it, in order to prevent cumulative pressure.[35]
40
+
41
+ At the upper end of the Gallery on the right-hand side there is a hole near the roof that opens into a short tunnel by which access can be gained to the lowest of the Relieving Chambers.[citation needed] The other Relieving Chambers were discovered in 1837–1838 by Colonel Howard Vyse and J.S. Perring, who dug tunnels upwards using blasting powder.[citation needed]
42
+
43
+ The floor of the Grand Gallery consists of a shelf or step on either side, 51 centimetres (20 in) wide, leaving a lower ramp 1.04 metres (3.4 ft) wide between them. In the shelves there are 54 slots, 27 on each side matched by vertical and horizontal slots in the walls of the Gallery. These form a cross shape that rises out of the slot in the shelf.[citation needed] The purpose of these slots is not known, but the central gutter in the floor of the Gallery, which is the same width as the Ascending Passage, has led to speculation that the blocking stones were stored in the Grand Gallery and the slots held wooden beams to restrain them from sliding down the passage.[36] This, in turn, has led to the proposal that originally many more than 3 blocking stones were intended, to completely fill the Ascending Passage.[citation needed]
44
+
45
+ At the top of the Grand Gallery, there is a step giving onto a horizontal passage some metres long and approximately 1.02 metres (3.3 ft) in height and width, in which can be detected four slots, three of which were probably intended to hold granite portcullises.[citation needed] Fragments of granite found by Petrie in the Descending Passage may have come from these now-vanished doors.[citation needed]
46
+
47
+ In 2017, scientists from the ScanPyramids project discovered a large cavity above the Grand Gallery using muon radiography, which they called the "ScanPyramids Big Void". Its length is at least 30 metres (98 ft) and its cross-section is similar to that of the Grand Gallery. Its existence was confirmed by independent detection with three different technologies: nuclear emulsion films, scintillator hodoscopes, and gas detectors.[37][38] The purpose of the cavity is not known and it is not accessible but according to Zahi Hawass it may have been a gap used in the construction of the Grand Gallery.[39] The Japanese research team disputes this, however, saying that the huge void is completely different from the construction spaces previously identified.[40]
48
+ To verify and pinpoint the "ScanPyramids Big Void", a Japanese team of researchers from Kyushu University, Tohoku University, the University of Tokyo and the Chiba Institute of Technology plans to rescan the structure with a newly developed muon detector in 2020.[41]
49
+
50
+ The "King's Chamber"[2] is 20 Egyptian Royal cubits or 10.47 metres (34.4 ft) from east to west and 10 cubits or 5.234 metres (17.17 ft) north to south. It has a flat roof 11 cubits and 5 digits or 5.852 metres (19.20 ft) above the floor. At 0.91 m (3.0 ft) above the floor there are two narrow shafts in the north and south walls (one is now filled by an extractor fan in an attempt to circulate air inside the pyramid).[citation needed] The purpose of these shafts is not clear: they appear to be aligned towards stars or areas of the northern and southern skies, yet one of them follows a dog-leg course through the masonry, indicating no intention to directly sight stars through them.[citation needed] They were long believed by Egyptologists to be "air shafts" for ventilation, but this idea has now been widely abandoned in favour of the shafts serving a ritualistic purpose associated with the ascension of the king's spirit to the heavens.[42]
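+
+ The cubit figures can be cross-checked against the metric values given, taking a royal cubit as 7 palms or 28 digits (a quick illustrative check, not from the source):
+
+ DIGITS_PER_CUBIT = 28
+
+ cubit_m = 10.47 / 20                          # 20 cubits = 10.47 m, so ~0.5235 m per cubit
+ width = 10 * cubit_m                          # the 10-cubit north-south span
+ roof = (11 + 5 / DIGITS_PER_CUBIT) * cubit_m  # 11 cubits and 5 digits
+
+ print(round(cubit_m, 4))  # 0.5235 m
+ print(round(width, 3))    # 5.235 m (the text gives 5.234 m)
+ print(round(roof, 3))     # 5.852 m, matching the roof height quoted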
51
+
52
+ The King's Chamber is entirely faced with granite. Above the roof, which is formed of nine slabs of stone weighing in total about 400 tons, are five compartments known as Relieving Chambers. The first four, like the King's Chamber, have flat roofs formed by the floor of the chamber above, but the final chamber has a pointed roof.[citation needed] Vyse suspected the presence of upper chambers when he found that he could push a long reed through a crack in the ceiling of the first chamber. From lower to upper, the chambers are known as "Davison's Chamber", "Wellington's Chamber", "Nelson's Chamber", "Lady Arbuthnot's Chamber", and "Campbell's Chamber". It is believed that the compartments were intended to safeguard the King's Chamber from the possibility of a roof collapsing under the weight of stone above the Chamber. As the chambers were not intended to be seen, they were not finished in any way and a few of the stones still retain masons' marks painted on them. One of the stones in Campbell's Chamber bears a mark, apparently the name of a work gang.[43][44]
53
+
54
+ The only object in the King's Chamber is a rectangular granite sarcophagus, one corner of which is damaged.[citation needed] The sarcophagus is slightly larger than the Ascending Passage, which indicates that it must have been placed in the Chamber before the roof was put in place.[citation needed] Unlike the fine masonry of the walls of the Chamber, the sarcophagus is roughly finished, with saw-marks visible in several places.[citation needed] This is in contrast with the finely finished and decorated sarcophagi found in other pyramids of the same period. Petrie suggested that such a sarcophagus was intended but was lost in the river on the way north from Aswan and a hurriedly made replacement was used instead.[citation needed]
55
+
56
+ Today tourists enter the Great Pyramid via the Robbers' Tunnel, which was long ago cut straight through the masonry of the pyramid for approximately 27 metres (89 ft), then turns sharply left to encounter the blocking stones in the Ascending Passage. It is possible to enter the Descending Passage from this point, but access is usually forbidden.[45] The origin of this Robbers' Tunnel is the subject of much scholarly discussion. According to tradition, the chasm was cut around 820 AD by Caliph al-Ma'mun's workmen using a battering ram. According to these accounts, al-Ma'mun's digging dislodged the stone fitted in the ceiling of the Descending Passage to hide the entrance to the Ascending Passage and it was the noise of that stone falling and then sliding down the Descending Passage, which alerted them to the need to turn left. Unable to remove these stones, however, the workmen tunneled up beside them through the softer limestone of the Pyramid until they reached the Ascending Passage.[46][47] Due to a number of historical and archaeological discrepancies, many scholars (with Antoine Isaac Silvestre de Sacy perhaps being the first) contend that this story is apocryphal. They argue that it is much more likely that the tunnel had been carved sometime after the pyramid was initially sealed. This tunnel, the scholars continue, was then resealed (likely during the Ramesside Restoration), and it was this plug that al-Ma'mun's ninth century expedition cleared away.[48]
57
+
58
+ The Great Pyramid is surrounded by a complex of several buildings including small pyramids. The Pyramid Temple, which stood on the east side of the pyramid and measured 52.2 metres (171 ft) north to south and 40 metres (130 ft) east to west, has almost entirely disappeared apart from the black basalt paving. There are only a few remnants of the causeway which linked the pyramid with the valley and the Valley Temple. The Valley Temple is buried beneath the village of Nazlet el-Samman; basalt paving and limestone walls have been found but the site has not been excavated.[49][50] The basalt blocks show "clear evidence" of having been cut with some kind of saw with an estimated cutting blade of 15 feet (4.6 m) in length, capable of cutting at a rate of 1.5 inches (38 mm) per minute. Romer suggests that this "super saw" may have had copper teeth and weighed up to 300 pounds (140 kg). He theorizes that such a saw could have been attached to a wooden trestle and possibly used in conjunction with vegetable oil, cutting sand, emery or pounded quartz to cut the blocks, which would have required the labour of at least a dozen men to operate.[51]
59
+
60
+ On the south side are the subsidiary pyramids, popularly known as the Queens' Pyramids. Three remain standing to nearly full height but the fourth was so ruined that its existence was not suspected until the recent discovery of the first course of stones and the remains of the capstone. Hidden beneath the paving around the pyramid was the tomb of Queen Hetepheres I, sister-wife of Sneferu and mother of Khufu. Discovered by accident by the Reisner expedition, the burial was intact, though the carefully sealed coffin proved to be empty.
61
+
62
+ A notable construction flanking the Giza pyramid complex is a cyclopean stone wall, the Wall of the Crow.[52] Lehner has discovered a workers' town outside of the wall, otherwise known as "The Lost City", dated by pottery styles, seal impressions, and stratigraphy to have been constructed and occupied sometime during the reigns of Khafre (2520–2494 BC) and Menkaure (2490–2472 BC).[53][54] In the early 21st century, Mark Lehner and his team made several discoveries, including what appears to have been a thriving port, suggesting the town and associated living quarters, which consisted of barracks called "galleries", may not have been for the pyramid workers after all but rather for the soldiers and sailors who utilized the port. In light of this new discovery, Lehner suggested that the pyramid workers may instead have camped on the ramps he believes were used to construct the pyramids, or possibly at nearby quarries.[55]
63
+
64
+ In the early 1970s, the Australian archaeologist Karl Kromer excavated a mound in the South Field of the plateau. This mound contained artefacts including mudbrick seals of Khufu, which he identified with an artisans' settlement.[56] Mudbrick buildings just south of Khufu's Valley Temple contained mud sealings of Khufu and have been suggested to be a settlement serving the cult of Khufu after his death.[57] A workers' cemetery used at least between Khufu's reign and the end of the Fifth Dynasty was discovered south of the Wall of the Crow by Hawass in 1990.[58]
65
+
66
+ There are three boat-shaped pits around the pyramid, of a size and shape to have held complete boats, though so shallow that any superstructure, if there ever was one, must have been removed or disassembled. In May 1954, the Egyptian archaeologist Kamal el-Mallakh discovered a fourth pit, a long, narrow rectangle, still covered with slabs of stone weighing up to 15 tons. Inside were 1,224 pieces of wood, the longest 23 metres (75 ft) long, the shortest 10 centimetres (0.33 ft). These were entrusted to a boat builder, Haj Ahmed Yusuf, who worked out how the pieces fit together. The entire process, including conservation and straightening of the warped wood, took fourteen years.
67
+
68
+ The result is a cedar-wood boat 43.6 metres (143 ft) long, its timbers held together by ropes, which is currently housed in a special boat-shaped, air-conditioned museum beside the pyramid. During construction of this museum, which stands above the boat pit, a second sealed boat pit was discovered. It was deliberately left unopened until 2011 when excavation began on the boat.[59]
69
+
70
+ Although succeeding pyramids were smaller, pyramid-building continued until the end of the Middle Kingdom. However, as authors Brier and Hobbs claim, "all the pyramids were robbed" by the New Kingdom, when the construction of royal tombs in a desert valley, now known as the Valley of the Kings, began.[60][61] Joyce Tyldesley states that the Great Pyramid itself "is known to have been opened and emptied by the Middle Kingdom", before the Arab caliph Al-Ma'mun entered the pyramid around 820 AD.[46]
71
+
72
+ I. E. S. Edwards discusses Strabo's mention that the pyramid "a little way up one side has a stone that may be taken out, which being raised up there is a sloping passage to the foundations". Edwards suggested that the pyramid was entered by robbers after the end of the Old Kingdom and sealed and then reopened more than once until Strabo's door was added. He adds: "If this highly speculative surmise be correct, it is also necessary to assume either that the existence of the door was forgotten or that the entrance was again blocked with facing stones", in order to explain why al-Ma'mun could not find the entrance.[62]
73
+
74
+ He also discusses a story told by Herodotus. Herodotus visited Egypt in the 5th century BC and recounts a story that he was told concerning vaults under the pyramid built on an island where the body of Cheops lies. Edwards notes that the pyramid had "almost certainly been opened and its contents plundered long before the time of Herodotus" and that it might have been closed again during the Twenty-sixth Dynasty of Egypt when other monuments were restored. He suggests that the story told to Herodotus could have been the result of almost two centuries of telling and retelling by Pyramid guides.[63]
75
+
76
+
77
+
en/2261.html.txt ADDED
The diff for this file is too large to render. See raw diff
 
en/2262.html.txt ADDED
@@ -0,0 +1,336 @@
1
+
2
+
3
+ London is the capital and largest city of England and the United Kingdom.[7][8] Standing on the River Thames in the south-east of England, at the head of its 50-mile (80 km) estuary leading to the North Sea, London has been a major settlement for two millennia. Londinium was founded by the Romans.[9] The City of London, London's ancient core − an area of just 1.12 square miles (2.9 km2) and colloquially known as the Square Mile − retains boundaries that closely follow its medieval limits.[10][11][12][13][14][note 1] The City of Westminster is also an Inner London borough holding city status. London is governed by the mayor of London and the London Assembly.[15][note 2][16]
4
+
5
+ London is considered to be one of the world's most important global cities[17][18][19] and has been called the world's most powerful,[20] most desirable,[21] most influential,[22] most visited,[23] most expensive,[24][25] sustainable,[26] most investment-friendly,[27] and most-popular-for-work[28] city. It exerts a considerable impact upon the arts, commerce, education, entertainment, fashion, finance, healthcare, media, professional services, research and development, tourism and transportation.[29][30] London ranks 26th out of 300 major cities for economic performance.[31] It is one of the largest financial centres[32] and has either the fifth- or the sixth-largest metropolitan area GDP.[note 3][33][34][35][36][37] It is the most-visited city as measured by international arrivals[38] and has the busiest city airport system as measured by passenger traffic.[39] It is the leading investment destination,[40][41][42][43] hosting more international retailers[44][45] than any other city. In 2017, London had the second highest number of ultra high-net-worth individuals in Europe after Paris.[46] London's universities form the largest concentration of higher education institutes in Europe,[47] and London is home to highly ranked institutions such as Imperial College London in natural and applied sciences, the London School of Economics in social sciences, and the comprehensive University College London and King's College London.[48][49][50] In 2012, London became the first city to have hosted three modern Summer Olympic Games.[51]
6
+
7
+ London has a diverse range of people and cultures, and more than 300 languages are spoken in the region.[52] Its estimated mid-2018 municipal population (corresponding to Greater London) was 8,908,081,[4] the third most populous of any city in Europe[53] and accounts for 13.4% of the UK population.[54] London's urban area is the fourth most populous in Europe, after Moscow, Istanbul, and Paris, with 9,787,426 inhabitants at the 2011 census.[55][56] The London commuter belt is the third-most populous in Europe, after the Moscow Metropolitan Area and Istanbul, with 14,040,163 inhabitants in 2016.[note 4][3][57]
8
+
9
+ London contains four World Heritage Sites: the Tower of London; Kew Gardens; the site comprising the Palace of Westminster, Westminster Abbey, and St Margaret's Church; and the historic settlement in Greenwich where the Royal Observatory, Greenwich defines the Prime Meridian (0° longitude) and Greenwich Mean Time.[58] Other landmarks include Buckingham Palace, the London Eye, Piccadilly Circus, St Paul's Cathedral, Tower Bridge, Trafalgar Square and The Shard. London has numerous museums, galleries, libraries and sporting events. These include the British Museum, National Gallery, Natural History Museum, Tate Modern, British Library and West End theatres.[59] The London Underground is the oldest underground railway network in the world.
10
+
11
+ London is an ancient name, already attested in the first century AD, usually in the Latinised form Londinium;[60] for example, handwritten Roman tablets recovered in the city originating from AD 65/70–80 include the word Londinio ('in London').[61]
12
+
13
+ Over the years, the name has attracted many mythicising explanations. The earliest attested appears in Geoffrey of Monmouth's Historia Regum Britanniae, written around 1136.[60] This had it that the name originated from a supposed King Lud, who had allegedly taken over the city and named it Kaerlud.[62]
14
+
15
+ Modern scientific analyses of the name must account for the origins of the different forms found in early sources: Latin (usually Londinium), Old English (usually Lunden), and Welsh (usually Llundein), with reference to the known developments over time of sounds in those different languages. It is agreed that the name came into these languages from Common Brythonic; recent work tends to reconstruct the lost Celtic form of the name as *Londonjon or something similar. This was adapted into Latin as Londinium and borrowed into Old English, the ancestor-language of English.[63]
16
+
17
+ The toponymy of the Common Brythonic form is much debated. A prominent explanation was Richard Coates's 1998 argument that the name derived from pre-Celtic Old European *(p)lowonida, meaning "river too wide to ford". Coates suggested that this was a name given to the part of the River Thames which flows through London; from this, the settlement gained the Celtic form of its name, *Lowonidonjon.[64] However, most work has accepted a Celtic origin for the name, and recent studies have favoured an explanation along the lines of a Celtic derivative of a Proto-Indo-European root *lendh- ('sink, cause to sink'), combined with the Celtic suffix *-injo- or *-onjo- (used to form place-names). Peter Schrijver has specifically suggested, on these grounds, that the name originally meant 'place that floods (periodically, tidally)'.[65][63]
18
+
19
+ Until 1889, the name "London" applied officially to the City of London, but since then it has also referred to the County of London and Greater London.[66]
20
+
21
+ In writing, "London" is occasionally contracted colloquially to "LDN".[67] The usage originated in SMS language, and is often found on social media user profiles, appended to an alias or handle.
22
+
23
+ In 1993, the remains of a Bronze Age bridge were found on the south foreshore, upstream of Vauxhall Bridge.[68] This bridge either crossed the Thames or reached a now lost island in it. Two of its timbers were radiocarbon dated to between 1750 BC and 1285 BC.[68]
24
+
25
+ In 2010, the foundations of a large timber structure, dated to between 4800 BC and 4500 BC,[69] were found on the Thames's south foreshore, downstream of Vauxhall Bridge.[70] The function of the mesolithic structure is not known. Both structures are on the south bank where the River Effra flows into the Thames.[70]
26
+
27
+ Although there is evidence of scattered Brythonic settlements in the area, the first major settlement was founded by the Romans about four years[1] after the invasion of AD 43.[71] This lasted only until around AD 61, when the Iceni tribe led by Queen Boudica stormed it, burning the settlement to the ground.[72] The next, heavily planned, incarnation of Londinium prospered, and it superseded Colchester as the capital of the Roman province of Britannia in 100. At its height in the 2nd century, Roman London had a population of around 60,000.[73]
28
+
29
+ With the collapse of Roman rule in the early 5th century, London ceased to be a capital, and the walled city of Londinium was effectively abandoned, although Roman civilisation continued in the area of St Martin-in-the-Fields until around 450.[74] From around 500, an Anglo-Saxon settlement known as Lundenwic developed slightly west of the old Roman city.[75] By about 680, the city had regrown into a major port, although there is little evidence of large-scale production. From the 820s repeated Viking assaults brought decline. Three are recorded; those in 851 and 886 succeeded, while the last, in 994, was rebuffed.[76]
30
+
31
+ The Vikings established Danelaw over much of eastern and northern England; its boundary stretched roughly from London to Chester. It was an area of political and geographical control imposed by the Viking incursions, which was formally agreed by the Danish warlord Guthrum and the West Saxon king Alfred the Great in 886. The Anglo-Saxon Chronicle recorded that Alfred "refounded" London in 886. Archaeological research shows that this involved abandonment of Lundenwic and a revival of life and trade within the old Roman walls. London then grew slowly until about 950, after which activity increased dramatically.[77]
32
+
33
+ By the 11th century, London was beyond all comparison the largest town in England. Westminster Abbey, rebuilt in the Romanesque style by King Edward the Confessor, was one of the grandest churches in Europe. Winchester had previously been the capital of Anglo-Saxon England, but from this time on, London became the main forum for foreign traders and the base for defence in time of war. In the view of Frank Stenton: "It had the resources, and it was rapidly developing the dignity and the political self-consciousness appropriate to a national capital."[78][79]
34
+
35
+ After winning the Battle of Hastings, William, Duke of Normandy was crowned King of England in the newly completed Westminster Abbey on Christmas Day 1066.[80] William constructed the Tower of London, the first of the many Norman castles in England to be rebuilt in stone, in the southeastern corner of the city, to intimidate the native inhabitants.[81] In 1097, William II began the building of Westminster Hall, close by the abbey of the same name. The hall became the basis of a new Palace of Westminster.[82][83]
36
+
37
+ In the 12th century, the institutions of central government, which had hitherto accompanied the royal English court as it moved around the country, grew in size and sophistication and became increasingly fixed in one place. For most purposes this was Westminster, although the royal treasury, having been moved from Winchester, came to rest in the Tower. While the City of Westminster developed into a true capital in governmental terms, its distinct neighbour, the City of London, remained England's largest city and principal commercial centre, and it flourished under its own unique administration, the Corporation of London. In 1100, its population was around 18,000; by 1300 it had grown to nearly 100,000.[84] Disaster struck in the form of the Black Death in the mid-14th century, when London lost nearly a third of its population.[85] London was the focus of the Peasants' Revolt in 1381.[86]
38
+
39
+ London was also a centre of England's Jewish population before their expulsion by Edward I in 1290. Violence against Jews took place in 1190, after it was rumoured that the new king had ordered their massacre after they had presented themselves at his coronation.[87] In 1264 during the Second Barons' War, Simon de Montfort's rebels killed 500 Jews while attempting to seize records of debts.[88]
40
+
41
+ During the Tudor period the Reformation produced a gradual shift to Protestantism, and much of London property passed from church to private ownership, which accelerated trade and business in the city.[89] In 1475, the Hanseatic League set up its main trading base (kontor) of England in London, called the Stalhof or Steelyard. It existed until 1853, when the Hanseatic cities of Lübeck, Bremen and Hamburg sold the property to South Eastern Railway.[90] Woollen cloth was shipped undyed and undressed from 14th/15th century London to the nearby shores of the Low Countries, where it was considered indispensable.[91]
42
+
43
+ But the reach of English maritime enterprise hardly extended beyond the seas of north-west Europe. The commercial route to Italy and the Mediterranean Sea normally lay through Antwerp and over the Alps; any ships passing through the Strait of Gibraltar to or from England were likely to be Italian or Ragusan. Upon the re-opening of the Netherlands to English shipping in January 1565, there ensued a strong outburst of commercial activity.[92] The Royal Exchange was founded.[93] Mercantilism grew, and monopoly trading companies such as the East India Company were established, with trade expanding to the New World. London became the principal North Sea port, with migrants arriving from England and abroad. The population rose from an estimated 50,000 in 1530 to about 225,000 in 1605.[89]
44
+
45
+ In the 16th century William Shakespeare and his contemporaries lived in London at a time of hostility to the development of the theatre. By the end of the Tudor period in 1603, London was still very compact. There was an assassination attempt on James I in Westminster, in the Gunpowder Plot on 5 November 1605.[94]
46
+
47
+ In 1637, the government of Charles I attempted to reform administration in the area of London. The plan called for the Corporation of the City to extend its jurisdiction and administration over expanding areas around the City. Fearing an attempt by the Crown to diminish the Liberties of London, a lack of interest in administering these additional areas, or concern by city guilds of having to share power, the Corporation refused. Later called "The Great Refusal", this decision largely continues to account for the unique governmental status of the City.[95]
48
+
49
+ In the English Civil War the majority of Londoners supported the Parliamentary cause. After an initial advance by the Royalists in 1642, culminating in the battles of Brentford and Turnham Green, London was surrounded by a defensive perimeter wall known as the Lines of Communication. The lines were built by up to 20,000 people, and were completed in under two months.[96]
50
+ The fortifications failed their only test when the New Model Army entered London in 1647,[97] and they were levelled by Parliament the same year.[98]
51
+
52
+ London was plagued by disease in the early 17th century,[99] culminating in the Great Plague of 1665–1666, which killed up to 100,000 people, or a fifth of the population.[100]
53
+
54
+ The Great Fire of London broke out in 1666 in Pudding Lane in the city and quickly swept through the wooden buildings.[101] Rebuilding took over ten years and was supervised by Robert Hooke[102][103][104] as Surveyor of London.[105] In 1708, Christopher Wren's masterpiece, St Paul's Cathedral, was completed. During the Georgian era, new districts such as Mayfair were formed in the west; new bridges over the Thames encouraged development in South London. In the east, the Port of London expanded downstream. London's development as an international financial centre matured for much of the 1700s.
55
+
56
+ In 1762, George III acquired Buckingham House and it was enlarged over the next 75 years. During the 18th century, London was dogged by crime, and the Bow Street Runners were established in 1750 as a professional police force.[106] In total, more than 200 offences were punishable by death,[107] including petty theft.[108] Most children born in the city died before reaching their third birthday.[109]
57
+
58
+ The coffeehouse became a popular place to debate ideas, with growing literacy and the development of the printing press making news widely available, and Fleet Street became the centre of the British press. Following the invasion of Amsterdam by Napoleonic armies, many financiers relocated to London, especially a large Jewish community, and the first London international issue was arranged in 1817. Around the same time, the Royal Navy became the world's leading war fleet, acting as a serious deterrent to potential economic adversaries of the United Kingdom. The repeal of the Corn Laws in 1846 was specifically aimed at weakening Dutch economic power. London then overtook Amsterdam as the leading international financial centre.[110] In 1888, London became the scene of a series of murders by a man known only as Jack the Ripper; the case has since become one of the world's most famous unsolved mysteries.
59
+
60
+ According to Samuel Johnson:
61
+
62
+ You find no man, at all intellectual, who is willing to leave London. No, Sir, when a man is tired of London, he is tired of life; for there is in London all that life can afford.
63
+
64
+ London was the world's largest city from c.1831 to 1925,[112] with a population density of 325 people per hectare.[113] London's overcrowded conditions led to cholera epidemics,[114] claiming 14,000 lives in 1848, and 6,000 in 1866.[115] Rising traffic congestion led to the creation of the world's first local urban rail network. The Metropolitan Board of Works oversaw infrastructure expansion in the capital and some of the surrounding counties; it was abolished in 1889 when the London County Council was created out of those areas of the counties surrounding the capital.
65
+
66
+ London was bombed by the Germans during the First World War,[116] and during the Second World War, the Blitz and other bombings by the German Luftwaffe killed over 30,000 Londoners, destroying large tracts of housing and other buildings across the city.[117]
67
+
68
+ Immediately after the War, the 1948 Summer Olympics were held at the original Wembley Stadium, at a time when London was still recovering from the war.[118] From the 1940s onwards, London became home to many immigrants, primarily from Commonwealth countries such as Jamaica, India, Bangladesh and Pakistan,[119] making London one of the most diverse cities worldwide. In 1951, the Festival of Britain was held on the South Bank.[120] The Great Smog of 1952 led to the Clean Air Act 1956, which ended the "pea soup fogs" for which London had been notorious.[121]
69
+
70
+ Primarily starting in the mid-1960s, London became a centre for the worldwide youth culture, exemplified by the Swinging London subculture[122] associated with the King's Road, Chelsea[123] and Carnaby Street.[124] The role of trendsetter was revived during the punk era.[125] In 1965 London's political boundaries were expanded to take into account the growth of the urban area and a new Greater London Council was created.[126] During The Troubles in Northern Ireland, London was subjected to bombing attacks by the Provisional Irish Republican Army[127] for two decades, starting with the Old Bailey bombing in 1973.[128][129] Racial inequality was highlighted by the 1981 Brixton riot.[130]
71
+
72
+ Greater London's population declined steadily in the decades after the Second World War, from an estimated peak of 8.6 million in 1939 to around 6.8 million in the 1980s.[131] The principal ports for London moved downstream to Felixstowe and Tilbury, with the London Docklands area becoming a focus for regeneration, including the Canary Wharf development. This was borne out of London's ever-increasing role as a major international financial centre during the 1980s.[132] The Thames Barrier was completed in the 1980s to protect London against tidal surges from the North Sea.[133]
73
+
74
+ The Greater London Council was abolished in 1986, which left London without a central administration until 2000 when London-wide government was restored, with the creation of the Greater London Authority.[134] To celebrate the start of the 21st century, the Millennium Dome, London Eye and Millennium Bridge were constructed.[135] On 6 July 2005 London was awarded the 2012 Summer Olympics, making London the first city to stage the Olympic Games three times.[136] On 7 July 2005, three London Underground trains and a double-decker bus were bombed in a series of terrorist attacks.[137]
75
+
76
+ In 2008, Time named London alongside New York City and Hong Kong as Nylonkong, hailing them as the world's three most influential global cities.[138] In January 2015, Greater London's population was estimated to be 8.63 million, the highest level since 1939.[139] During the Brexit referendum in 2016, the UK as a whole decided to leave the European Union, but a majority of London constituencies voted to remain in the EU.[140]
77
+
78
+ The administration of London is formed of two tiers: a citywide, strategic tier and a local tier. Citywide administration is coordinated by the Greater London Authority (GLA), while local administration is carried out by 33 smaller authorities.[141] The GLA consists of two elected components: the mayor of London, who has executive powers, and the London Assembly, which scrutinises the mayor's decisions and can accept or reject the mayor's budget proposals each year.
79
+ The headquarters of the GLA is City Hall, Southwark. The mayor since 2016 has been Sadiq Khan, the first Muslim mayor of a major Western capital.[142][143] The mayor's statutory planning strategy is published as the London Plan, which was most recently revised in 2011.[144] The local authorities are the councils of the 32 London boroughs and the City of London Corporation.[145] They are responsible for most local services, such as local planning, schools, social services, local roads and refuse collection. Certain functions, such as waste management, are provided through joint arrangements. In 2009–2010 the combined revenue expenditure by London councils and the GLA amounted to just over £22 billion (£14.7 billion for the boroughs and £7.4 billion for the GLA).[146]
80
+
81
+ The London Fire Brigade is the statutory fire and rescue service for Greater London. It is run by the London Fire and Emergency Planning Authority and is the third largest fire service in the world.[147] National Health Service ambulance services are provided by the London Ambulance Service (LAS) NHS Trust, the largest free-at-the-point-of-use emergency ambulance service in the world.[148] The London Air Ambulance charity operates in conjunction with the LAS where required. Her Majesty's Coastguard and the Royal National Lifeboat Institution operate on the River Thames,[149][150] which is under the jurisdiction of the Port of London Authority from Teddington Lock to the sea.[151]
82
+
83
+ London is the seat of the Government of the United Kingdom. Many government departments, as well as the prime minister's residence at 10 Downing Street, are based close to the Palace of Westminster, particularly along Whitehall.[152] There are 73 members of Parliament (MPs) from London, elected from local parliamentary constituencies in the national Parliament. As of December 2019, 49 are from the Labour Party, 21 are Conservatives, and three are Liberal Democrats.[153] The ministerial post of minister for London was created in 1994. The current Minister for London is Paul Scully MP.[154]
84
+
85
+ Policing in Greater London, with the exception of the City of London, is provided by the Metropolitan Police, overseen by the mayor through the Mayor's Office for Policing and Crime (MOPAC).[155][156] The City of London has its own police force – the City of London Police.[157] The British Transport Police are responsible for police services on National Rail, London Underground, Docklands Light Railway and Tramlink services.[158]
86
+ A fourth police force in London, the Ministry of Defence Police, does not generally become involved with policing the general public.
87
+
88
+ Crime rates vary widely by area, ranging from parts with serious issues to parts considered very safe. Today crime figures are made available nationally at Local Authority[159] and Ward level.[160] In 2015, there were 118 homicides, a 25.5% increase over 2014.[161] The Metropolitan Police have made detailed crime figures, broken down by category at borough and ward level, available on their website since 2000.[162]
89
+
90
+ Recorded crime has been rising in London; notably, violent crime and murders by stabbing and other means have risen. There were 50 murders from the start of 2018 to mid-April 2018. Funding cuts to police in London are likely to have contributed to this, though other factors are also involved.[163]
91
+
92
+ London, also referred to as Greater London, is one of nine regions of England and the top-level subdivision covering most of the city's metropolis.[note 5] The small ancient City of London at its core once comprised the whole settlement, but as its urban area grew, the Corporation of London resisted attempts to amalgamate the city with its suburbs, causing "London" to be defined in a number of ways for different purposes.[164]
93
+
94
+ Forty per cent of Greater London is covered by the London post town, within which 'LONDON' forms part of postal addresses.[165][166] The London telephone area code (020) covers a larger area, similar in size to Greater London, although some outer districts are excluded and some places just outside are included. The Greater London boundary has been aligned to the M25 motorway in places.[167]
95
+
96
+ Outward urban expansion is now prevented by the Metropolitan Green Belt,[168] although the built-up area extends beyond the boundary in places, resulting in a separately defined Greater London Urban Area. Beyond this is the vast London commuter belt.[169] Greater London is split for some purposes into Inner London and Outer London.[170] The city is split by the River Thames into North and South, with an informal central London area in its interior. The coordinates of the nominal centre of London, traditionally considered to be the original Eleanor Cross at Charing Cross near the junction of Trafalgar Square and Whitehall, are about 51°30′26″N 0°07′39″W.[171] However, the geographical centre of London, on one definition, is in the London Borough of Lambeth, just 0.1 miles to the northeast of Lambeth North tube station.[172]
97
Within London, both the City of London and the City of Westminster have city status, and both the City of London and the remainder of Greater London are counties for the purposes of lieutenancies.[173] The area of Greater London includes areas that are part of the historic counties of Middlesex, Kent, Surrey, Essex and Hertfordshire.[174] London's status as the capital of England, and later the United Kingdom, has never been granted or confirmed officially – by statute or in written form.[note 6]

Its position was formed through constitutional convention, making its status as de facto capital a part of the UK's uncodified constitution. The capital of England was moved to London from Winchester as the Palace of Westminster developed in the 12th and 13th centuries to become the permanent location of the royal court, and thus the political capital of the nation.[178] More recently, Greater London has been defined as a region of England and in this context is known as London.[13]

Greater London encompasses a total area of 1,583 square kilometres (611 sq mi), an area which had a population of 7,172,036 in 2001 and a population density of 4,542 inhabitants per square kilometre (11,760/sq mi). The extended area known as the London Metropolitan Region or the London Metropolitan Agglomeration comprises a total area of 8,382 square kilometres (3,236 sq mi) and has a population of 13,709,000 and a population density of 1,510 inhabitants per square kilometre (3,900/sq mi).[179] Modern London stands on the Thames, its primary geographical feature, a navigable river which crosses the city from the south-west to the east. The Thames Valley is a floodplain surrounded by gently rolling hills including Parliament Hill, Addington Hills, and Primrose Hill. Historically London grew up at the lowest bridging point on the Thames. The Thames was once a much broader, shallower river with extensive marshlands; at high tide, its shores reached five times their present width.[180]

Since the Victorian era the Thames has been extensively embanked, and many of its London tributaries now flow underground. The Thames is a tidal river, and London is vulnerable to flooding.[181] The threat has increased over time because of a slow but continuous rise in high water level, caused by the slow 'tilting' of the British Isles (up in Scotland and Northern Ireland and down in southern parts of England, Wales and Ireland) as a result of post-glacial rebound.[182][183]

In 1974 a decade of work began on the construction of the Thames Barrier across the Thames at Woolwich to deal with this threat. While the barrier is expected to function as designed until roughly 2070, concepts for its future enlargement or redesign are already being discussed.[184]

London has a temperate oceanic climate (Köppen: Cfb), receiving less precipitation than Rome, Bordeaux, Lisbon, Naples, Sydney or New York City.[185][186][187][188][189][190] Rainfall records have been kept in the city since at least 1697, when records began at Kew. At Kew, the most rainfall in one month is 7.44 inches (189.0 mm), in November 1755, and the least is 0 inches (0.00 mm), in both December 1788 and July 1800. Mile End also had 0 inches (0.00 mm) in April 1893.[191] The wettest year on record is 1903, with a total fall of 38.17 inches (969.4 mm), and the driest is 1921, with a total fall of 12.14 inches (308.3 mm).[192]

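The paired imperial and metric rainfall figures are unit conversions at 25.4 mm to the inch; the metric values appear to be primary. A quick check of the quoted extremes, illustrative only:

    MM_PER_INCH = 25.4

    for mm in (189.0, 969.4, 308.3):
        print(f"{mm} mm = {mm / MM_PER_INCH:.2f} in")
    # 189.0 mm = 7.44 in
    # 969.4 mm = 38.17 in
    # 308.3 mm = 12.14 in
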
Temperature extremes in London range from 38.1 °C (100.6 °F), recorded at Kew in August 2003,[193] down to −21.1 °C (−6.0 °F).[194] An unofficial reading of −24 °C (−11 °F) was reported on 3 January 1740.[195] Conversely, the highest unofficial temperature ever known to be recorded in the United Kingdom occurred in London during the 1808 heat wave: 105 °F (40.6 °C) on 13 July. If accurate, this is thought to be one of the highest temperatures of the millennium in the United Kingdom, with only days in 1513 and 1707 likely to have exceeded it.[196] Since records began in London (first at Greenwich in 1841[197]), the warmest month on record is July 1868, with a mean temperature of 22.5 °C (72.5 °F) at Greenwich, whereas the coldest month is December 2010, with a mean temperature of −6.7 °C (19.9 °F) at Northolt.[198] Records for atmospheric pressure have been kept at London since 1692. The highest pressure ever reported is 1,050 millibars (31 inHg) on 20 January 2020, and the lowest is 945.8 millibars (27.93 inHg) on 25 December 1821.[199][200]

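The twin Celsius/Fahrenheit figures above are standard conversions (F = C × 9/5 + 32). A short sketch reproducing the quoted extremes, for illustration:

    def c_to_f(c):
        return c * 9 / 5 + 32

    def f_to_c(f):
        return (f - 32) * 5 / 9

    print(f"{c_to_f(38.1):.1f} F")   # 100.6 F (Kew, August 2003)
    print(f"{c_to_f(-21.1):.1f} F")  # -6.0 F (record low)
    print(f"{f_to_c(105):.1f} C")    # 40.6 C (unofficial 1808 reading)
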
Summers are generally warm, sometimes hot. London's average July high is 24 °C (74 °F). On average each year, London experiences 31 days above 25 °C (77.0 °F) and 4.2 days above 30.0 °C (86.0 °F). During the 2003 European heat wave there were 14 consecutive days above 30 °C (86.0 °F) and 2 consecutive days when temperatures reached 38 °C (100 °F), leading to hundreds of heat-related deaths.[201] There was also a previous spell of 15 consecutive days above 32.2 °C (90.0 °F) in 1976, which also caused many heat-related deaths.[202] The previous record high was 38 °C (100 °F) in August 1911 at the Greenwich station.[197] Droughts can also, occasionally, be a problem, especially in summer, most recently in summer 2018,[203] when much drier than average conditions prevailed from May to December.[204] However, the most consecutive days without rain was 73 days in the spring of 1893.[205]

Winters are generally cool with little temperature variation. Heavy snow is rare, but snow usually falls at least once each winter. Spring and autumn can be pleasant. As a large city, London has a considerable urban heat island effect,[206] making the centre of London at times 5 °C (9 °F) warmer than the suburbs and outskirts. This can be seen below when comparing London Heathrow, 15 miles (24 km) west of London, with the London Weather Centre.[207]

Although London and the British Isles have a reputation for frequent rainfall, London's average of 602 millimetres (23.7 in) of precipitation annually actually makes it drier than the global average.[208][better source needed] The absence of heavy winter rainfall leads to many climates around the Mediterranean having more annual precipitation than London.

London's vast urban area is often described using a set of district names, such as Mayfair, Southwark, Wembley and Whitechapel. These are informal designations, names of villages that have been absorbed by sprawl, or superseded administrative units such as parishes or former boroughs.

Such names have remained in use through tradition, each referring to a local area with its own distinctive character, but without official boundaries. Since 1965 Greater London has been divided into 32 London boroughs in addition to the ancient City of London.[215][216] The City of London is the main financial district,[217] and Canary Wharf has recently developed into a new financial and commercial hub in the Docklands to the east.

The West End is London's main entertainment and shopping district, attracting tourists.[218] West London includes expensive residential areas where properties can sell for tens of millions of pounds.[219] The average price for properties in Kensington and Chelsea is over £2 million, with a similarly high outlay in most of central London.[220][221]

The East End is the area closest to the original Port of London, known for its high immigrant population, as well as for being one of the poorest areas in London.[222] The surrounding East London area saw much of London's early industrial development; now, brownfield sites throughout the area are being redeveloped as part of the Thames Gateway, including the London Riverside and Lower Lea Valley, which was developed into the Olympic Park for the 2012 Olympics and Paralympics.[222]

London's buildings are too diverse to be characterised by any particular architectural style, partly because of their varying ages. Many grand houses and public buildings, such as the National Gallery, are constructed from Portland stone. Some areas of the city, particularly those just west of the centre, are characterised by white stucco or whitewashed buildings. Few structures in central London pre-date the Great Fire of 1666: a few trace Roman remains, the Tower of London and a few scattered Tudor survivors in the City. Further out is, for example, the Tudor-period Hampton Court Palace, England's oldest surviving Tudor palace, built by Cardinal Thomas Wolsey c. 1515.[223]

The varied architectural heritage ranges from the 17th-century churches by Wren and neoclassical financial institutions such as the Royal Exchange and the Bank of England to the early 20th-century Old Bailey and the 1960s Barbican Estate.

The disused – but soon to be rejuvenated – 1939 Battersea Power Station by the river in the south-west is a local landmark, while some railway termini are excellent examples of Victorian architecture, most notably St. Pancras and Paddington.[224] The density of London varies, with high employment density in the central area and Canary Wharf, high residential densities in inner London, and lower densities in Outer London.

The Monument in the City of London provides views of the surrounding area while commemorating the Great Fire of London, which originated nearby. Marble Arch and Wellington Arch, at the north and south ends of Park Lane respectively, have royal connections, as do the Albert Memorial and Royal Albert Hall in Kensington. Nelson's Column is a nationally recognised monument in Trafalgar Square, one of the focal points of central London. Older buildings are mainly brick-built, most commonly of yellow London stock brick or a warm orange-red variety, often decorated with carvings and white plaster mouldings.[225]

In the dense areas, most of the concentration is achieved with medium- and high-rise buildings. London's skyscrapers, such as 30 St Mary Axe, Tower 42, the Broadgate Tower and One Canada Square, are mostly in the two financial districts, the City of London and Canary Wharf. High-rise development is restricted at certain sites if it would obstruct protected views of St Paul's Cathedral and other historic buildings. Nevertheless, there are a number of tall skyscrapers in central London (see Tall buildings in London), including the 95-storey Shard London Bridge, the tallest building in the United Kingdom.

Other notable modern buildings include City Hall in Southwark with its distinctive oval shape,[226] the Art Deco BBC Broadcasting House, the Postmodernist British Library in Somers Town/Kings Cross and No 1 Poultry by James Stirling. What was formerly the Millennium Dome, by the Thames to the east of Canary Wharf, is now an entertainment venue called the O2 Arena.

The London Natural History Society suggest that London is "one of the World's Greenest Cities", with more than 40 per cent green space or open water. They indicate that 2,000 species of flowering plant have been found growing there and that the tidal Thames supports 120 species of fish.[227] They also state that over 60 species of bird nest in central London and that their members have recorded 47 species of butterfly, 1,173 moths and more than 270 kinds of spider around London. London's wetland areas support nationally important populations of many water birds. London has 38 Sites of Special Scientific Interest (SSSIs), two national nature reserves and 76 local nature reserves.[228]

Amphibians are common in the capital, including smooth newts living by the Tate Modern, and common frogs, common toads, palmate newts and great crested newts. On the other hand, native reptiles such as slowworms, common lizards, barred grass snakes and adders are mostly only seen in Outer London.[229]

Among other inhabitants of London are 10,000 red foxes, meaning there are now around 16 foxes for every square mile (2.6 square kilometres) of London. These urban foxes are noticeably bolder than their country cousins, sharing the pavement with pedestrians and raising cubs in people's backyards. Foxes have even sneaked into the Houses of Parliament, where one was found asleep on a filing cabinet. Another broke into the grounds of Buckingham Palace, reportedly killing some of Queen Elizabeth II's prized pink flamingos. Generally, however, foxes and city folk appear to get along. A survey in 2001 by the London-based Mammal Society found that 80 per cent of 3,779 respondents who volunteered to keep a diary of garden mammal visits liked having them around, though this sample cannot be taken to represent Londoners as a whole.[230][231]

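The fox population and density quoted above are mutually consistent with Greater London's land area; a back-of-the-envelope check, using the figures as quoted and assuming rounding:

    foxes = 10_000
    foxes_per_sq_mile = 16

    implied_area = foxes / foxes_per_sq_mile
    print(f"{implied_area:.0f} sq mi")  # 625 sq mi, close to the 611 sq mi
                                        # quoted for Greater London
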
Other mammals found in Greater London are hedgehog, brown rat, mice, rabbit, shrew, vole, and grey squirrel.[232] In wilder areas of Outer London, such as Epping Forest, a wide variety of mammals are found, including European hare, badger, field, bank and water vole, wood mouse, yellow-necked mouse, mole, shrew, and weasel, in addition to red fox, grey squirrel and hedgehog. A dead otter was found at The Highway, in Wapping, about a mile from Tower Bridge, which would suggest that otters have begun to move back after an absence of a hundred years from the city.[233] Ten of England's eighteen species of bats have been recorded in Epping Forest: soprano, Nathusius' and common pipistrelles, common noctule, serotine, barbastelle, Daubenton's, brown long-eared, Natterer's and Leisler's.[234]

Among the stranger sights seen in London have been a whale in the Thames,[235] while the BBC Two programme "Natural World: Unnatural History of London" shows feral pigeons using the London Underground to get around the city, a seal that takes fish from fishmongers outside Billingsgate Fish Market, and foxes that will "sit" if given sausages.[236]

Herds of red and fallow deer also roam freely within much of Richmond and Bushy Park. A cull takes place each November and February to ensure numbers can be sustained.[237] Epping Forest is also known for its fallow deer, which can frequently be seen in herds to the north of the Forest. A rare population of melanistic, black fallow deer is also maintained at the Deer Sanctuary near Theydon Bois. Muntjac deer, which escaped from deer parks at the turn of the twentieth century, are also found in the forest. While Londoners are accustomed to wildlife such as birds and foxes sharing the city, more recently urban deer have started becoming a regular feature, and whole herds of fallow deer come into residential areas at night to take advantage of London's green spaces.[238][239]

The 2011 census recorded that 2,998,264 people, or 36.7% of London's population, are foreign-born, making London the city with the second-largest immigrant population in absolute terms, behind New York City. About 69% of children born in London in 2015 had at least one parent who was born abroad.[241] The table to the right shows the most common countries of birth of London residents. Note that some of the German-born population, in 18th position, are British citizens from birth, born to parents serving in the British Armed Forces in Germany.[242]

With increasing industrialisation, London's population grew rapidly throughout the 19th and early 20th centuries, and it was for some time in the late 19th and early 20th centuries the most populous city in the world. Its population peaked at 8,615,245 in 1939, immediately before the outbreak of the Second World War, but had declined to 7,192,091 at the 2001 Census. However, the population then grew by just over a million between the 2001 and 2011 Censuses, to reach 8,173,941 in the latter enumeration.[243]

London's continuous urban area extends beyond the borders of Greater London and was home to 9,787,426 people in 2011,[55] while its wider metropolitan area has a population of between 12 and 14 million, depending on the definition used.[244][245] According to Eurostat, London is the most populous city and metropolitan area of the European Union and the second most populous in Europe. During the period 1991–2001 a net 726,000 immigrants arrived in London.[246]

The region covers an area of 1,579 square kilometres (610 sq mi). The population density is 5,177 inhabitants per square kilometre (13,410/sq mi),[247] more than ten times that of any other British region.[248] In terms of population, London is the 19th largest city and the 18th largest metropolitan region.[249][250]

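That density figure follows from the 2011 population and the land area just quoted; a minimal sketch of the arithmetic (the unit-conversion factor is an assumption of this example, not from the source):

    population = 8_173_941    # 2011 Census
    area_km2 = 1_579
    KM2_PER_SQ_MI = 2.58999

    density_km2 = population / area_km2
    print(round(density_km2))                  # 5177 per km2
    print(round(density_km2 * KM2_PER_SQ_MI))  # 13407 per sq mi,
                                               # rounded above to 13,410
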
Children (aged younger than 14 years) constitute 21 per cent of the population in Outer London, and 28 per cent in Inner London; the age group aged between 15 and 24 years is 12 per cent in both Outer and Inner London; those aged between 25 and 44 years are 31 per cent in Outer London and 40 per cent in Inner London; those aged between 45 and 64 years form 26 per cent and 21 per cent in Outer and Inner London respectively; while in Outer London those aged 65 and older are 13 per cent, though in Inner London just 9 per cent.[251]

The median age of London's population in 2017 was 36.5 years.[252]

According to the Office for National Statistics, based on the 2011 Census estimates, 59.8 per cent of the 8,173,941 inhabitants of London were White, with 44.9 per cent White British, 2.2 per cent White Irish, 0.1 per cent Gypsy/Irish Traveller and 12.1 per cent classified as Other White.[253]

20.9 per cent of Londoners are of Asian and mixed-Asian descent. 19.7 per cent are of full Asian descent, with those of mixed-Asian heritage comprising 1.2 per cent of the population. Indians account for 6.6 per cent of the population, followed by Pakistanis and Bangladeshis at 2.7 per cent each. Chinese people account for 1.5 per cent of the population, and Arabs for 1.3 per cent. A further 4.9 per cent are classified as "Other Asian".[253]

15.6 per cent of London's population are of Black and mixed-Black descent. 13.3 per cent are of full Black descent, with those of mixed-Black heritage comprising 2.3 per cent. Black Africans account for 7.0 per cent of London's population, with 4.2 per cent as Black Caribbean and 2.1 per cent as "Other Black". 5.0 per cent are of mixed race.[253]

Across London, Black and Asian children outnumber White British children by about six to four in state schools.[254] Altogether at the 2011 census, of London's 1,624,768 population aged 0 to 15, 46.4 per cent were White, 19.8 per cent were Asian, 19 per cent were Black, 10.8 per cent were Mixed and 4 per cent represented another ethnic group.[255] In January 2005, a survey of London's ethnic and religious diversity claimed that there were more than 300 languages spoken in London and more than 50 non-indigenous communities with a population of more than 10,000.[256] Figures from the Office for National Statistics show that, in 2010, London's foreign-born population was 2,650,000 (33 per cent), up from 1,630,000 in 1997.

The 2011 census showed that 36.7 per cent of Greater London's population were born outside the UK.[257] A portion of the German-born population are likely to be British nationals born to parents serving in the British Armed Forces in Germany.[258] Estimates produced by the Office for National Statistics indicate that the five largest foreign-born groups living in London in the period July 2009 to June 2010 were those born in India, Poland, the Republic of Ireland, Bangladesh and Nigeria.[259]

According to the 2011 Census, the largest religious groupings are Christians (48.4 per cent), followed by those of no religion (20.7 per cent), Muslims (12.4 per cent), no response (8.5 per cent), Hindus (5.0 per cent), Jews (1.8 per cent), Sikhs (1.5 per cent), Buddhists (1.0 per cent) and other (0.6 per cent).

London has traditionally been Christian, and has a large number of churches, particularly in the City of London. The well-known St Paul's Cathedral in the City and Southwark Cathedral south of the river are Anglican administrative centres,[261] while the Archbishop of Canterbury, principal bishop of the Church of England and worldwide Anglican Communion, has his main residence at Lambeth Palace in the London Borough of Lambeth.[262]

Important national and royal ceremonies are shared between St Paul's and Westminster Abbey.[263] The Abbey is not to be confused with nearby Westminster Cathedral, which is the largest Roman Catholic cathedral in England and Wales.[264] Despite the prevalence of Anglican churches, observance is very low within the Anglican denomination. Church attendance continues on a long, slow, steady decline, according to Church of England statistics.[265]

London is also home to sizeable Muslim, Hindu, Sikh, and Jewish communities.

Notable mosques include the East London Mosque in Tower Hamlets, which is allowed to give the Islamic call to prayer through loudspeakers, the London Central Mosque on the edge of Regent's Park[266] and the Baitul Futuh of the Ahmadiyya Muslim Community. Following the oil boom, increasing numbers of wealthy Middle-Eastern Arab Muslims have based themselves around Mayfair, Kensington, and Knightsbridge in West London.[267][268][269] There are large Bengali Muslim communities in the eastern boroughs of Tower Hamlets and Newham.[270]

Large Hindu communities are in the north-western boroughs of Harrow and Brent, the latter of which hosts what was, until 2006,[271] Europe's largest Hindu temple, Neasden Temple.[272] London is also home to 44 Hindu temples, including the BAPS Shri Swaminarayan Mandir London. There are Sikh communities in East and West London, particularly in Southall, home to one of the largest Sikh populations and the largest Sikh temple outside India.[273]

The majority of British Jews live in London, with significant Jewish communities in Stamford Hill, Stanmore, Golders Green, Finchley, Hampstead, Hendon and Edgware in North London. Bevis Marks Synagogue in the City of London is affiliated to London's historic Sephardic Jewish community. It is the only synagogue in Europe which has held regular services continuously for over 300 years. Stanmore and Canons Park Synagogue has the largest membership of any single Orthodox synagogue in the whole of Europe, overtaking Ilford Synagogue (also in London) in 1998.[274] The community set up the London Jewish Forum in 2006 in response to the growing significance of devolved London government.[275]

The accent of a 21st-century Londoner varies widely; increasingly common amongst the under-30s, however, is a fusion of Cockney with a whole array of ethnic accents, in particular Caribbean, which together form an accent labelled Multicultural London English (MLE).[276] The other widely heard and spoken accent is Received Pronunciation (RP) in various forms, often heard in the media and in many traditional professions, although this accent is not limited to London and South East England and can also be heard selectively throughout the UK amongst certain social groupings. Since the turn of the century the Cockney dialect has become less common in the East End and has 'migrated' east to Havering and the county of Essex.[277][278]

London's gross regional product in 2018 was almost £500 billion, around a quarter of UK GDP.[280] London has five major business districts: the City, Westminster, Canary Wharf, Camden & Islington and Lambeth & Southwark. One way to get an idea of their relative importance is to look at relative amounts of office space: Greater London had 27 million m2 of office space in 2001, and the City contains the most space, with 8 million m2. London has some of the highest real estate prices in the world.[281][282] According to a World Property Journal report (2015), London was the world's most expensive office market for the previous three years.[283] As of 2015, residential property in London was worth $2.2 trillion – the same value as Brazil's annual GDP.[284] The city has the highest property prices of any European city according to the Office for National Statistics and the European Office of Statistics.[285] On average the price per square metre in central London is €24,252 (April 2014). This is higher than the property prices in other G8 European capital cities: Berlin €3,306, Rome €6,188 and Paris €11,229.[286]

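To put those per-square-metre prices in proportion, a small illustrative calculation over the figures quoted above (the ratios are not from the source):

    prices_eur_per_m2 = {"Paris": 11_229, "Rome": 6_188, "Berlin": 3_306}
    london = 24_252

    for city, price in prices_eur_per_m2.items():
        print(f"central London is {london / price:.1f}x {city}")
    # central London is 2.2x Paris
    # central London is 3.9x Rome
    # central London is 7.3x Berlin
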
London's finance industry is based in the City of London and Canary Wharf, the two major business districts in London. London is one of the pre-eminent financial centres of the world and the most important location for international finance.[287][288] London took over as a major financial centre shortly after 1795, when the Dutch Republic collapsed before the Napoleonic armies; for many bankers established in Amsterdam (e.g. Hope, Baring), it was time to move to London. The London financial elite was strengthened by a strong Jewish community from all over Europe capable of mastering the most sophisticated financial tools of the time.[110] This unique concentration of talents accelerated the transition from the Commercial Revolution to the Industrial Revolution. By the end of the 19th century, Britain was the wealthiest of all nations, and London a leading financial centre. As of 2016, London tops the world rankings on the Global Financial Centres Index (GFCI),[289] and it ranked second in A.T. Kearney's 2018 Global Cities Index.[290]

London's largest industry is finance, and its financial exports make it a large contributor to the UK's balance of payments. Around 325,000 people were employed in financial services in London until mid-2007. London has over 480 overseas banks, more than any other city in the world. It is also the world's biggest currency trading centre, accounting for some 37 per cent of the $5.1 trillion average daily volume, according to the BIS.[291] Over 85 per cent (3.2 million) of the employed population of Greater London works in the services industries. Because of its prominent global role, London's economy was affected by the financial crisis of 2007–2008, but by 2010 the City had recovered: new regulatory powers were put in place, lost ground was regained and London's economic dominance was re-established.[292] Along with professional services headquarters, the City of London is home to the Bank of England, London Stock Exchange, and Lloyd's of London insurance market.

Over half of the UK's top 100 listed companies (the FTSE 100) and over 100 of Europe's 500 largest companies have their headquarters in central London. Over 70 per cent of the FTSE 100 are within London's metropolitan area, and 75 per cent of Fortune 500 companies have offices in London.[293]

Media companies are concentrated in London, and the media distribution industry is London's second most competitive sector.[294] The BBC is a significant employer, while other broadcasters also have headquarters around the City. Many national newspapers are edited in London. London is a major retail centre, and in 2010 had the highest non-food retail sales of any city in the world, with a total spend of around £64.2 billion.[295] The Port of London is the second-largest in the United Kingdom, handling 45 million tonnes of cargo each year.[296]

A growing number of technology companies are based in London, notably in East London Tech City, also known as Silicon Roundabout. In April 2014, the city was among the first to receive a geoTLD.[297] In February 2014 London was ranked as the European City of the Future[298] in the 2014/15 list by FDi Magazine.[299]

The gas and electricity distribution networks that manage and operate the towers, cables and pressure systems that deliver energy to consumers across the city are managed by National Grid plc, SGN[300] and UK Power Networks.[301]

London is one of the leading tourist destinations in the world, and in 2015 was ranked the most visited city in the world, with over 65 million visits.[302][303] It is also the top city in the world by visitor cross-border spending, estimated at US$20.23 billion in 2015.[304] Tourism is one of London's prime industries, employing the equivalent of 350,000 full-time workers in 2003,[305] and the city accounts for 54% of all inbound visitor spending in the UK.[306] As of 2016, London was the world's top city destination as ranked by TripAdvisor users.[307]

In 2015, the top ten most-visited attractions in the UK were all in London.[308]

The number of hotel rooms in London in 2015 stood at 138,769, and is expected to grow over the years.[309]

Transport is one of the four main areas of policy administered by the Mayor of London;[311] however, the mayor's financial control does not extend to the longer-distance rail network that enters London. In 2007 he assumed responsibility for some local lines, which now form the London Overground network, adding to the existing responsibility for the London Underground, trams and buses. The public transport network is administered by Transport for London (TfL).

The lines that formed the London Underground, as well as trams and buses, became part of an integrated transport system in 1933 when the London Passenger Transport Board, or London Transport, was created. Transport for London is now the statutory corporation responsible for most aspects of the transport system in Greater London, and is run by a board and a commissioner appointed by the Mayor of London.[312]

London is a major international air transport hub with the busiest city airspace in the world. Eight airports use the word London in their name, but most traffic passes through six of these. Additionally, various other airports also serve London, catering primarily to general aviation flights.

The London Underground, commonly referred to as the Tube, is the oldest[328] and third-longest[329] metro system in the world. The system serves 270 stations[330] and was formed from several private companies, including the world's first underground electric line, the City and South London Railway.[331] It dates from 1863.[332]

Over four million journeys are made every day on the Underground network, over 1 billion each year.[333] An investment programme is attempting to reduce congestion and improve reliability, including £6.5 billion (€7.7 billion) spent before the 2012 Summer Olympics.[334] The Docklands Light Railway (DLR), which opened in 1987, is a second, more local metro system using smaller and lighter tram-type vehicles that serve the Docklands, Greenwich and Lewisham.

There are more than 360 railway stations in the London Travelcard Zones on an extensive above-ground suburban railway network. South London, particularly, has a high concentration of railways as it has fewer Underground lines. Most rail lines terminate around the centre of London, running into eighteen terminal stations, with the exception of the Thameslink trains connecting Bedford in the north and Brighton in the south via Luton and Gatwick airports.[335] London has Britain's busiest station by number of passengers – Waterloo, with over 184 million people using the interchange station complex (which includes Waterloo East station) each year.[336][337] Clapham Junction is the busiest station in Europe by the number of trains passing through.

With the need for more rail capacity in London, Crossrail is expected to open in 2021.[338] It will be a new railway line running east to west through London and into the Home Counties, with a branch to Heathrow Airport.[339] It is Europe's biggest construction project, with a £15 billion projected cost.[340][341]

London is the centre of the National Rail network, with 70 per cent of rail journeys starting or ending in London.[342] Like suburban rail services, regional and inter-city trains depart from several termini around the city centre, linking London with the rest of Britain, including Birmingham, Brighton, Bristol, Cambridge, Cardiff, Chester, Derby, Holyhead (for Dublin), Edinburgh, Exeter, Glasgow, Leeds, Liverpool, Nottingham, Manchester, Newcastle upon Tyne, Norwich, Reading, Sheffield and York.

Some international railway services to Continental Europe were operated during the 20th century as boat trains, such as the Admiraal de Ruijter to Amsterdam and the Night Ferry to Paris and Brussels. The opening of the Channel Tunnel in 1994 connected London directly to the continental rail network, allowing Eurostar services to begin. Since 2007, high-speed trains link St. Pancras International with Lille, Calais, Paris, Disneyland Paris, Brussels, Amsterdam and other European tourist destinations via the High Speed 1 rail link and the Channel Tunnel.[343] The first high-speed domestic trains started in June 2009, linking Kent to London.[344] There are plans for a second high-speed line linking London to the Midlands, North West England, and Yorkshire.

Although rail freight levels are far down compared with their peak, significant quantities of cargo are also carried into and out of London by rail, chiefly building materials and landfill waste.[345] As a major hub of the British railway network, London's tracks also carry large amounts of freight for the other regions, such as container freight from the Channel Tunnel and English Channel ports, and nuclear waste for reprocessing at Sellafield.[345]

London's bus network runs 24 hours a day, with about 8,500 buses, more than 700 bus routes and around 19,500 bus stops.[346][better source needed] In 2013, the network had more than 2 billion commuter trips per year, more than the Underground.[346][better source needed] Around £850 million is taken in revenue each year.[citation needed] London has the largest wheelchair-accessible network in the world[347] and, from the third quarter of 2007, became more accessible to hearing and visually impaired passengers as audio-visual announcements were introduced.[citation needed]

London has a modern tram network, known as Tramlink, centred on Croydon in South London. The network has 39 stops and four routes, and carried 28 million people in 2013.[348][better source needed] Since June 2008, Transport for London has completely owned Tramlink.[349][better source needed]

London's first and only cable car is the Emirates Air Line, which opened in June 2012. The cable car crosses the River Thames, linking Greenwich Peninsula and the Royal Docks in the east of the city. It is integrated with London's Oyster Card ticketing system, although special fares are charged.[citation needed] It cost £60 million to build and carries more than 3,500 passengers every day. Similar to the Santander Cycles bike hire scheme, the cable car is sponsored in a 10-year deal by the airline Emirates.

In the Greater London Area, around 650,000 people use a bike every day.[350][better source needed] Out of a total population of around 8.8 million,[351] this means that only around 7% of Greater London's population use a bike on an average day.[352] This relatively low share of bicycle users may be due to the low level of investment in cycling in London, about £110 million per year,[353] equating to around £12 per person, which can be compared to £22 in the Netherlands.[354]

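Both the 7% share and the £12-per-person figure follow from simple division; a minimal check using the numbers quoted above:

    daily_cyclists = 650_000
    population = 8_800_000
    annual_budget_gbp = 110_000_000

    print(f"{daily_cyclists / population:.1%}")      # 7.4% cycle on a given day
    print(f"£{annual_budget_gbp / population:.2f}")  # £12.50 per person per year
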
Cycling has become an increasingly popular way to get around London.[citation needed] The launch of a cycle hire scheme in July 2010 was successful and generally well received.[citation needed]

The Port of London, once the largest in the world, is now only the second-largest in the United Kingdom, handling 45 million tonnes of cargo each year as of 2009.[296] Most of this cargo passes through the Port of Tilbury, outside the boundary of Greater London.[296]

London has river boat services on the Thames known as Thames Clippers, which offer both commuter and tourist boat services.[355] These run every 20 minutes between Embankment Pier and North Greenwich Pier.[citation needed] The Woolwich Ferry, with 2.5 million passengers every year,[356] is a frequent service linking the North and South Circular Roads.

Although the majority of journeys in central London are made by public transport, car travel is common in the suburbs. The inner ring road (around the city centre), the North and South Circular roads (just within the suburbs), and the outer orbital motorway (the M25, just outside the built-up area in most places) encircle the city and are intersected by a number of busy radial routes – but very few motorways penetrate into inner London. A plan for a comprehensive network of motorways throughout the city (the Ringways Plan) was prepared in the 1960s but was mostly cancelled in the early 1970s.[357] The M25 is the second-longest ring-road motorway in Europe at 117 mi (188 km) long.[358] The A1 and M1 connect London to Leeds, Newcastle and Edinburgh.

London is notorious for its traffic congestion; in 2009, the average speed of a car in the rush hour was recorded at 10.6 mph (17.1 km/h).[359]

In 2003, a congestion charge was introduced to reduce traffic volumes in the city centre. With a few exceptions, motorists are required to pay to drive within a defined zone encompassing much of central London.[360] Motorists who are residents of the defined zone can buy a greatly reduced season pass.[361][362] The London government initially expected the Congestion Charge Zone to increase daily peak-period Underground and bus use, reduce road traffic, increase traffic speeds, and reduce queues;[363] however, the increase in private-hire vehicles has affected these expectations. Over the course of several years, the average number of cars entering the centre of London on a weekday was reduced from 195,000 to 125,000 – a 35 per cent reduction in vehicles driven per day.[364][365]

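The quoted reduction is straightforward to verify; a one-line check using the figures above:

    before, after = 195_000, 125_000
    print(f"{(before - after) / before:.1%}")  # 35.9%, rounded in the text
                                               # to 35 per cent
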
London is a major global centre of higher education teaching and research and has the largest concentration of higher education institutes in Europe.[47] According to the QS World University Rankings 2015/16, London has the greatest concentration of top-class universities in the world[366][367] and its international student population of around 110,000 is larger than that of any other city in the world.[368] A 2014 PricewaterhouseCoopers report termed London the global capital of higher education.[369]

A number of world-leading education institutions are based in London. In the 2014/15 QS World University Rankings, Imperial College London is ranked joint second in the world, University College London (UCL) is ranked fifth, and King's College London (KCL) is ranked 16th.[370][needs update] The London School of Economics has been described as the world's leading social science institution for both teaching and research.[371] The London Business School is considered one of the world's leading business schools and in 2015 its MBA programme was ranked second-best in the world by the Financial Times.[372] The city is also home to three of the world's top ten performing arts schools (as ranked by the 2020 QS World University Rankings[373]): the Royal College of Music (ranked 2nd in the world), the Royal Academy of Music (ranked 4th) and the Guildhall School of Music and Drama (ranked 6th).

With 178,735 students in London[374] and around 48,000 in University of London Worldwide,[375] the federal University of London is the largest contact teaching university in the UK.[376] It includes five multi-faculty universities – City, King's College London, Queen Mary, Royal Holloway and UCL – and a number of smaller and more specialised institutions including Birkbeck, the Courtauld Institute of Art, Goldsmiths, the London Business School, the London School of Economics, the London School of Hygiene & Tropical Medicine, the Royal Academy of Music, the Central School of Speech and Drama, the Royal Veterinary College and the School of Oriental and African Studies.[377] Members of the University of London have their own admissions procedures, and most award their own degrees.

A number of universities in London are outside the University of London system, including Brunel University, Imperial College London,[note 7] Kingston University, London Metropolitan University,[378] University of East London, University of West London, University of Westminster, London South Bank University, Middlesex University, and University of the Arts London (the largest university of art, design, fashion, communication and the performing arts in Europe).[379] In addition there are three international universities in London – Regent's University London, Richmond, The American International University in London and Schiller International University.

London is home to five major medical schools – Barts and The London School of Medicine and Dentistry (part of Queen Mary), King's College London School of Medicine (the largest medical school in Europe), Imperial College School of Medicine, UCL Medical School and St George's, University of London – and has many affiliated teaching hospitals. It is also a major centre for biomedical research, and three of the UK's eight academic health science centres are based in the city – Imperial College Healthcare, King's Health Partners and UCL Partners (the largest such centre in Europe).[380] Additionally, many biomedical and biotechnology spin-out companies from these research institutions are based around the city, most prominently in White City.

There are a number of business schools in London, including the London School of Business and Finance, Cass Business School (part of City University London), Hult International Business School, ESCP Europe, European Business School London, Imperial College Business School, the London Business School and the UCL School of Management. London is also home to many specialist arts education institutions, including the Academy of Live and Recorded Arts, Central School of Ballet, LAMDA, London College of Contemporary Arts (LCCA), London Contemporary Dance School, National Centre for Circus Arts, RADA, Rambert School of Ballet and Contemporary Dance, the Royal College of Art and Trinity Laban.

The majority of primary and secondary schools and further-education colleges in London are controlled by the London boroughs or otherwise state-funded; leading examples include Ashbourne College, Bethnal Green Academy, Brampton Manor Academy, City and Islington College, City of Westminster College, David Game College, Ealing, Hammersmith and West London College, Leyton Sixth Form College, London Academy of Excellence, Tower Hamlets College, and Newham Collegiate Sixth Form Centre. There are also a number of private schools and colleges in London, some old and famous, such as City of London School, Harrow, St Paul's School, Haberdashers' Aske's Boys' School, University College School, The John Lyon School, Highgate School and Westminster School.

Leisure is a major part of the London economy, with a 2003 report attributing a quarter of the entire UK leisure economy to London[381] at 25.6 events per 1,000 people.[382] Globally the city is amongst the big four fashion capitals of the world, and according to official statistics, London is the world's third-busiest film production centre, presents more live comedy than any other city,[383] and has the biggest theatre audience of any city in the world.[384]

Within the City of Westminster in London, the entertainment district of the West End has its focus around Leicester Square, where London and world film premieres are held, and Piccadilly Circus, with its giant electronic advertisements.[385] London's theatre district is here, as are many cinemas, bars, clubs, and restaurants, including the city's Chinatown district (in Soho), and just to the east is Covent Garden, an area housing speciality shops. The city is the home of Andrew Lloyd Webber, whose musicals have dominated West End theatre since the late 20th century.[386] The United Kingdom's Royal Ballet, English National Ballet, Royal Opera, and English National Opera are based in London and perform at the Royal Opera House, the London Coliseum, Sadler's Wells Theatre, and the Royal Albert Hall, as well as touring the country.[387]

Islington's 1 mile (1.6 km) long Upper Street, extending northwards from Angel, has more bars and restaurants than any other street in the United Kingdom.[388] Europe's busiest shopping area is Oxford Street, a shopping street nearly 1 mile (1.6 km) long, making it the longest shopping street in the UK. Oxford Street is home to vast numbers of retailers and department stores, including the world-famous Selfridges flagship store.[389] Knightsbridge, home to the equally renowned Harrods department store, lies to the south-west.

London is home to designers Vivienne Westwood, Galliano, Stella McCartney, Manolo Blahnik, and Jimmy Choo, among others; its renowned art and fashion schools make it an international centre of fashion alongside Paris, Milan, and New York City. London offers a great variety of cuisine as a result of its ethnically diverse population. Gastronomic centres include the Bangladeshi restaurants of Brick Lane and the Chinese restaurants of Chinatown.[390]

There is a variety of annual events, beginning with the relatively new New Year's Day Parade and a fireworks display at the London Eye; the world's second-largest street party, the Notting Hill Carnival, is held on the late August Bank Holiday each year. Traditional parades include November's Lord Mayor's Show, a centuries-old event celebrating the annual appointment of a new Lord Mayor of the City of London with a procession along the streets of the City, and June's Trooping the Colour, a formal military pageant performed by regiments of the Commonwealth and British armies to celebrate the Queen's Official Birthday.[391] The Boishakhi Mela is a Bengali New Year festival celebrated by the British Bangladeshi community. It is the largest open-air Asian festival in Europe. After the Notting Hill Carnival, it is the second-largest street festival in the United Kingdom, attracting over 80,000 visitors from across the country.[392]

London has been the setting for many works of literature. The pilgrims in Geoffrey Chaucer's late 14th-century Canterbury Tales set out for Canterbury from London – specifically, from the Tabard inn, Southwark. William Shakespeare spent a large part of his life living and working in London; his contemporary Ben Jonson was also based there, and some of his work, most notably his play The Alchemist, was set in the city.[393] A Journal of the Plague Year (1722) by Daniel Defoe is a fictionalisation of the events of the 1665 Great Plague.[393]

The literary centres of London have traditionally been hilly Hampstead and (since the early 20th century) Bloomsbury. Writers closely associated with the city are the diarist Samuel Pepys, noted for his eyewitness account of the Great Fire; Charles Dickens, whose representation of a foggy, snowy, grimy London of street sweepers and pickpockets has been a major influence on people's vision of early Victorian London; and Virginia Woolf, regarded as one of the foremost modernist literary figures of the 20th century.[393] Later important depictions of London from the 19th and early 20th centuries are Dickens' novels and Arthur Conan Doyle's Sherlock Holmes stories.[393] Also of significance is Letitia Elizabeth Landon's Calendar of the London Seasons (1834). Modern writers pervasively influenced by the city include Peter Ackroyd, author of a "biography" of London, and Iain Sinclair, who writes in the genre of psychogeography.

London has played a significant role in the film industry. Major studios within or bordering London include Twickenham, Ealing, Shepperton, Pinewood, Elstree and Borehamwood,[394] and a special effects and post-production community centred in Soho. Working Title Films has its headquarters in London.[395] London has been the setting for films including Oliver Twist (1948), Scrooge (1951), Peter Pan (1953), One Hundred and One Dalmatians (1961), My Fair Lady (1964), Mary Poppins (1964), Blowup (1966), The Long Good Friday (1980), The Great Mouse Detective (1986), Notting Hill (1999), Love Actually (2003), V for Vendetta (2005), Sweeney Todd: The Demon Barber of Fleet Street (2008) and The King's Speech (2010). Notable actors and filmmakers from London include Charlie Chaplin, Alfred Hitchcock, Michael Caine, Helen Mirren, Gary Oldman, Christopher Nolan, Jude Law, Benedict Cumberbatch, Tom Hardy, Keira Knightley and Daniel Day-Lewis. As of 2008, the British Academy Film Awards have taken place at the Royal Opera House. London is a major centre for television production, with studios including BBC Television Centre, The Fountain Studios and The London Studios. Many television programmes have been set in London, including the popular television soap opera EastEnders, broadcast by the BBC since 1985.

London is home to many museums, galleries, and other institutions, many of which are free of admission charges and are major tourist attractions as well as playing a research role. The first of these to be established was the British Museum in Bloomsbury, in 1753. Originally containing antiquities, natural history specimens, and the national library, the museum now has 7 million artefacts from around the globe. In 1824, the National Gallery was founded to house the British national collection of Western paintings; this now occupies a prominent position in Trafalgar Square.

The British Library is one of the largest libraries in the world, and the national library of the United Kingdom.[396] There are many other research libraries, including the Wellcome Library and Dana Centre, as well as university libraries, including the British Library of Political and Economic Science at LSE, the Central Library at Imperial, the Maughan Library at King's, and the Senate House Libraries at the University of London.[397][398]

In the latter half of the 19th century the locale of South Kensington was developed as "Albertopolis", a cultural and scientific quarter. Three major national museums are there: the Victoria and Albert Museum (for the applied arts), the Natural History Museum, and the Science Museum. The National Portrait Gallery was founded in 1856 to house depictions of figures from British history; its holdings now comprise the world's most extensive collection of portraits.[399] The national gallery of British art is at Tate Britain, originally established as an annexe of the National Gallery in 1897. The Tate Gallery, as it was formerly known, also became a major centre for modern art; in 2000, this collection moved to Tate Modern, a new gallery housed in the former Bankside Power Station.

London is one of the major classical and popular music capitals of the world and hosts major music corporations, such as Universal Music Group International and Warner Music Group, as well as countless bands, musicians and industry professionals. The city is also home to many orchestras and concert halls, such as the Barbican Arts Centre (principal base of the London Symphony Orchestra and the London Symphony Chorus), Cadogan Hall (Royal Philharmonic Orchestra) and the Royal Albert Hall (The Proms).[387] London's two main opera houses are the Royal Opera House and the London Coliseum.[387] The UK's largest pipe organ is at the Royal Albert Hall. Other significant instruments are at the cathedrals and major churches. Several conservatoires are within the city: Royal Academy of Music, Royal College of Music, Guildhall School of Music and Drama and Trinity Laban.

London has numerous venues for rock and pop concerts, including the world's busiest indoor venue, The O2 Arena,[400] and Wembley Arena, as well as many mid-sized venues, such as Brixton Academy, the Hammersmith Apollo and the Shepherd's Bush Empire.[387] Several music festivals, including the Wireless Festival, South West Four, Lovebox, and Hyde Park's British Summer Time, are held in London.[401] The city is home to the original Hard Rock Cafe and the Abbey Road Studios, where The Beatles recorded many of their hits. In the 1960s, 1970s and 1980s, musicians and groups like Elton John, Pink Floyd, Cliff Richard, David Bowie, Queen, The Kinks, The Rolling Stones, The Who, Eric Clapton, Led Zeppelin, The Small Faces, Iron Maiden, Fleetwood Mac, Elvis Costello, Cat Stevens, The Police, The Cure, Madness, The Jam, Ultravox, Spandau Ballet, Culture Club, Dusty Springfield, Phil Collins, Rod Stewart, Adam Ant, Status Quo and Sade derived their sound from the streets and rhythms of London.[402]

London was instrumental in the development of punk music,[403] with figures such as the Sex Pistols, The Clash[402] and Vivienne Westwood all based in the city. More recent artists to emerge from the London music scene include George Michael's Wham!, Kate Bush, Seal, the Pet Shop Boys, Bananarama, Siouxsie and the Banshees, Bush, the Spice Girls, Jamiroquai, Blur, McFly, The Prodigy, Gorillaz, Bloc Party, Mumford & Sons, Coldplay, Amy Winehouse, Adele, Sam Smith, Ed Sheeran, Paloma Faith, Ellie Goulding, One Direction and Florence and the Machine.[404][405][406] London is also a centre for urban music. In particular, the genres UK garage, drum and bass, dubstep and grime evolved in the city from the foreign genres of hip hop and reggae, alongside local drum and bass. Music station BBC Radio 1Xtra was set up to support the rise of local urban contemporary music both in London and in the rest of the United Kingdom.

The Royal Albert Hall hosts concerts and musical events.

Abbey Road Studios, 3 Abbey Road, St John's Wood, City of Westminster

+
297
+ A 2013 report by the City of London Corporation said that London is the "greenest city" in Europe with 35,000 acres of public parks, woodlands and gardens.[407] The largest parks in the central area of London are three of the eight Royal Parks, namely Hyde Park and its neighbour Kensington Gardens in the west, and Regent's Park to the north.[408] Hyde Park in particular is popular for sports and sometimes hosts open-air concerts. Regent's Park contains London Zoo, the world's oldest scientific zoo, and is near Madame Tussauds Wax Museum.[409][410] Primrose Hill, immediately to the north of Regent's Park, at 256 feet (78 m)[411] is a popular spot from which to view the city skyline.
298
+
299
+ Close to Hyde Park are smaller Royal Parks, Green Park and St. James's Park.[412] A number of large parks lie outside the city centre, including Hampstead Heath and the remaining Royal Parks of Greenwich Park to the southeast[413] and Bushy Park and Richmond Park (the largest) to the southwest,[414][415] Hampton Court Park is also a royal park, but, because it contains a palace, it is administered by the Historic Royal Palaces, unlike the eight Royal Parks.[416]
300
+
301
+ Close to Richmond Park is Kew Gardens which has the world's largest collection of living plants. In 2003, the gardens were put on the UNESCO list of World Heritage Sites.[417] There are also parks administered by London's borough Councils, including Victoria Park in the East End and Battersea Park in the centre. Some more informal, semi-natural open spaces also exist, including the 320-hectare (790-acre) Hampstead Heath of North London,[418] and Epping Forest, which covers 2,476 hectares (6,118 acres)[419] in the east. Both are controlled by the City of London Corporation.[420][421] Hampstead Heath incorporates Kenwood House, a former stately home and a popular location in the summer months when classical musical concerts are held by the lake, attracting thousands of people every weekend to enjoy the music, scenery and fireworks.[422]
302
+
303
+ Epping Forest is a popular venue for various outdoor activities, including mountain biking, walking, horse riding, golf, angling, and orienteering.[423]
304
+
305
+ Walking is a popular recreational activity in London. Areas that provide for walks include Wimbledon Common, Epping Forest, Hampton Court Park, Hampstead Heath, the eight Royal Parks, canals and disused railway tracks.[424] Access to canals and rivers has improved recently, including the creation of the Thames Path, some 28 miles (45 km) of which is within Greater London, and The Wandle Trail; this runs 12 miles (19 km) through South London along the River Wandle, a tributary of the River Thames.[425]
306
+
307
+ Other long distance paths, linking green spaces, have also been created, including the Capital Ring, the Green Chain Walk, London Outer Orbital Path ("Loop"), Jubilee Walkway, Lea Valley Walk, and the Diana, Princess of Wales Memorial Walk.[426]

Aerial view of Hyde Park

St. James's Park lake with the London Eye in the distance

The River Wandle, Carshalton, in the London Borough of Sutton

London has hosted the Summer Olympics three times: in 1908, 1948, and 2012,[427][428] making it the first city to host the modern Games three times.[51] The city was also the host of the British Empire Games in 1934.[429] In 2017, London hosted the World Championships in Athletics for the first time.[430]

London's most popular sport is football, and it has five clubs in the English Premier League as of the 2019–20 season: Arsenal, Chelsea, Crystal Palace, Tottenham Hotspur, and West Ham United.[431] Other professional teams in London are Fulham, Queens Park Rangers, Brentford, Millwall, Charlton Athletic, AFC Wimbledon, Leyton Orient, Barnet, Sutton United, Bromley and Dagenham & Redbridge.

From 1924, the original Wembley Stadium was the home of the England national football team. It hosted the 1966 FIFA World Cup Final, with England defeating West Germany, and served as the venue for the FA Cup Final as well as rugby league's Challenge Cup final.[432] The new Wembley Stadium serves the same purposes and has a capacity of 90,000.[433]

Two Aviva Premiership rugby union teams are based in London: Saracens and Harlequins.[434] London Scottish, London Welsh and London Irish play in the RFU Championship, and other rugby union clubs in the city include Richmond F.C., Rosslyn Park F.C., Westcombe Park R.F.C. and Blackheath F.C. Twickenham Stadium in south-west London hosts home matches for the England national rugby union team and has a capacity of 82,000 now that the new south stand has been completed.[435]

While rugby league is more popular in the north of England, there are two professional rugby league clubs in London: the London Broncos in the second-tier RFL Championship, who play at the Trailfinders Sports Ground in West Ealing, and the third-tier League 1 team, the London Skolars, from Wood Green, Haringey.

One of London's best-known annual sports competitions is the Wimbledon Tennis Championships, held at the All England Club in the south-western suburb of Wimbledon.[436] Played in late June and early July, it is the oldest tennis tournament in the world and widely considered the most prestigious.[437][438][439]

London has two Test cricket grounds, Lord's (home of Middlesex C.C.C.) in St John's Wood[440] and the Oval (home of Surrey C.C.C.) in Kennington.[441] Lord's has hosted four finals of the Cricket World Cup and is known as the Home of Cricket.[442] Other key events are the annual mass-participation London Marathon, in which some 35,000 runners attempt a 26.2-mile (42.2 km) course around the city,[443] and the University Boat Race on the River Thames from Putney to Mortlake.[444]

Wembley Stadium, home of the England football team, has a capacity of 90,000 and is the UK's biggest stadium.

Twickenham, home of the England rugby union team, has a capacity of 82,000, making it the world's largest rugby union stadium.

Centre Court at Wimbledon. First played in 1877, the Championships is the oldest tennis tournament in the world.[445]
en/2263.html.txt ADDED
@@ -0,0 +1,113 @@

The giant panda (Ailuropoda melanoleuca; Chinese: 大熊猫; pinyin: dàxióngmāo),[5] also known as the panda bear or simply the panda, is a bear[6] native to south central China.[1] It is characterised by large, black patches around its eyes, over the ears, and across its round body. The name "giant panda" is sometimes used to distinguish it from the red panda, a neighbouring musteloid. Though it belongs to the order Carnivora, the giant panda is a folivore, with bamboo shoots and leaves making up more than 99% of its diet.[7] Giant pandas in the wild will occasionally eat other grasses, wild tubers, or even meat in the form of birds, rodents, or carrion. In captivity, they may receive honey, eggs, fish, yams, shrub leaves, oranges, or bananas along with specially prepared food.[8][9]

The giant panda lives in a few mountain ranges in central China, mainly in Sichuan, but also in neighbouring Shaanxi and Gansu.[10] As a result of farming, deforestation, and other development, the giant panda has been driven out of the lowland areas where it once lived, and it is a conservation-reliant vulnerable species.[11][12] A 2007 report showed 239 pandas living in captivity inside China and another 27 outside the country.[13] As of December 2014, 49 giant pandas lived in captivity outside China, in 18 zoos in 13 different countries.[14] Wild population estimates vary; one estimate shows that there are about 1,590 individuals living in the wild,[13] while a 2006 study via DNA analysis estimated that this figure could be as high as 2,000 to 3,000.[15] Some reports also show that the number of giant pandas in the wild is on the rise.[16] In March 2015, conservation news site Mongabay stated that the wild giant panda population had increased by 268, or 16.8%, to 1,864.[17] In 2016, the IUCN reclassified the species from "endangered" to "vulnerable".[12]
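
As a quick plausibility check of these census figures (an illustrative Python sketch of our own, not taken from any cited source), the reported 268-animal rise and the 16.8% growth rate are mutually consistent:

    # Cross-check the Mongabay figures quoted above.
    population_2015 = 1864              # reported wild population
    increase = 268                      # reported absolute increase
    baseline = population_2015 - increase
    percent_rise = 100 * increase / baseline
    print(baseline)                     # 1596, close to the ~1,590 estimate cited earlier
    print(round(percent_rise, 1))       # 16.8, matching the reported percentage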

While the dragon has often served as China's national symbol, internationally the giant panda has often filled this role. As such, it is becoming widely used within China in international contexts, for example, appearing since 1982 on gold panda bullion coins and as one of the five Fuwa mascots of the Beijing Olympics.

For many decades, the precise taxonomic classification of the giant panda was under debate because it shares characteristics with both bears and raccoons.[18] However, molecular studies indicate the giant panda is a true bear, part of the family Ursidae.[6][19] These studies show it diverged about 19 million years ago from the common ancestor of the Ursidae;[20] it is the most basal member of this family and equidistant from all other extant bear species.[21][20] The giant panda has been referred to as a living fossil.[22]

The word panda was borrowed into English from French, but no conclusive explanation of the origin of the French word panda has been found.[23] The closest candidate is the Nepali word ponya, possibly referring to the adapted wrist bone of the red panda, which is native to Nepal. The Western world originally applied this name to the red panda.

In many older sources, the name "panda" or "common panda" refers to the lesser-known red panda,[24] thus necessitating the inclusion of "giant" and "lesser/red" prefixes in front of the names. Even in 2013, the Encyclopædia Britannica still used "giant panda" or "panda bear" for the bear,[25] and simply "panda" for the red panda,[26] despite the popular usage of the word "panda" to refer to giant pandas.

Since the earliest collection of Chinese writings, the Chinese language has given the bear 20 different names, such as huāxióng (花熊 "spotted bear") and zhúxióng (竹熊 "bamboo bear").[27] The most popular name in China today is dàxióngmāo (大熊貓, literally "giant bear cat"), or simply xióngmāo (熊貓 "bear cat"). The name xióngmāo was originally used to describe the red panda (Ailurus fulgens); because the giant panda was thought to be closely related to the red panda, it was named dàxióngmāo (大熊貓) by extension.[27]

In Taiwan, another popular name for the panda is the inverted dàmāoxióng (大貓熊 "giant cat bear"), though many encyclopedias and dictionaries in Taiwan still use the "bear cat" form as the correct name. Some linguists argue that, in this construction, "bear" rather than "cat" is the base noun, making this name more grammatically and logically correct, which may have led to the popular choice despite official writings.[27] This name did not gain its popularity until 1988, when a private zoo in Tainan painted a sun bear black and white and created the Tainan fake panda incident.[28][29]

Two subspecies of giant panda have been recognized on the basis of distinct cranial measurements, colour patterns, and population genetics.[30]

A detailed study of the giant panda's genetic history from 2012[32] confirms that the separation of the Qinling population occurred about 300,000 years ago, and reveals that the non-Qinling population further diverged into two groups, named the Minshan and the Qionglai-Daxiangling-Xiaoxiangling-Liangshan group respectively, about 2,800 years ago.[33]

The giant panda has luxuriant black-and-white fur. Adults measure around 1.2 to 1.9 metres (3 feet 11 inches to 6 feet 3 inches) long, including a tail of about 10–15 cm (4–6 in), and 60 to 90 cm (24 to 35 in) tall at the shoulder.[34][35] Males can weigh up to 160 kg (350 lb).[36] Females (generally 10–20% smaller than males)[37] can weigh as little as 70 kg (150 lb), but can also weigh up to 125 kg (276 lb).[11][34][38] Average adult weight is 100 to 115 kg (220 to 254 lb).[39]

The giant panda has a body shape typical of bears. It has black fur on its ears, eye patches, muzzle, legs, arms and shoulders. The rest of the animal's coat is white. Although scientists do not know why these unusual bears are black and white, speculation suggests that the bold colouring provides effective camouflage in their shade-dappled snowy and rocky habitat.[40] The giant panda's thick, woolly coat keeps it warm in the cool forests of its habitat.[40] The panda's skull shape is typical of durophagous carnivorans: it has evolved from previous ancestors to exhibit larger molars with increased complexity and an expanded temporal fossa.[41][42] A 110.45 kg (243.5 lb) giant panda has a 3D canine bite force of 2,603.47 newtons and a bite force quotient of 292.[citation needed] Another study measured a 117.5 kg (259 lb) giant panda's bite at 1,298.9 newtons (BFQ 151.4) at the canine teeth and 1,815.9 newtons (BFQ 141.8) at the carnassial teeth.[43]

The giant panda's paw has a "thumb" and five fingers; the "thumb" – actually a modified sesamoid bone – helps it to hold bamboo while eating.[44] Stephen Jay Gould discusses this feature in his book of essays on evolution and biology, The Panda's Thumb.

The giant panda's tail, measuring 10 to 15 cm (4 to 6 in), is the second-longest in the bear family (the longest belongs to the sloth bear).[37]

The giant panda typically lives around 20 years in the wild and up to 30 years in captivity.[45] The oldest giant panda ever in captivity was a female named Jia Jia, who was born in 1978 and died at the age of 38 on 16 October 2016.[46]

A seven-year-old female named Jin Yi died in 2014 in a zoo in Zhengzhou, China, after showing symptoms of gastroenteritis and respiratory disease. The cause of death was found to be toxoplasmosis, a disease caused by Toxoplasma gondii that infects most warm-blooded animals, including humans.[47]

The giant panda genome was sequenced in 2009 using Illumina dye sequencing.[48] Its genome contains 20 pairs of autosomes and one pair of sex chromosomes.

Despite its taxonomic classification as a carnivoran, the giant panda's diet is primarily herbivorous, consisting almost exclusively of bamboo.[45] However, the giant panda still has the digestive system of a carnivore, as well as carnivore-specific genes,[49] and thus derives little energy and little protein from consumption of bamboo. Its ability to digest cellulose is ascribed to the microbes in its gut.[50][51] Pandas are born with sterile intestines and require bacteria obtained from their mother's feces to digest vegetation.[52] The giant panda is a highly specialised animal with unique adaptations, and has lived in bamboo forests for millions of years.[53]

The average giant panda eats as much as 9 to 14 kg (20 to 30 lb) of bamboo shoots a day to compensate for the limited energy content of its diet. Ingestion of such a large quantity of material is possible and necessary because of the rapid passage of large amounts of indigestible plant material through the short, straight digestive tract.[54][55] It is also noted, however, that such rapid passage of digesta limits the potential of microbial digestion in the gastrointestinal tract,[54] limiting alternative forms of digestion. Given this voluminous diet, the giant panda defecates up to 40 times a day.[56] The limited energy input imposed on it by its diet has affected the panda's behavior: the giant panda tends to limit its social interactions and avoids steeply sloping terrain to limit its energy expenditures.[57]

Two of the panda's most distinctive features, its large size and round face, are adaptations to its bamboo diet. Anthropologist Russell Ciochon observed: "[much] like the vegetarian gorilla, the low body surface area to body volume [of the giant panda] is indicative of a lower metabolic rate. This lower metabolic rate and a more sedentary lifestyle allows the giant panda to subsist on nutrient poor resources such as bamboo."[57] Similarly, the giant panda's round face is the result of powerful jaw muscles, which attach from the top of the head to the jaw.[57] Large molars crush and grind fibrous plant material.

The morphological characteristics of extinct relatives of the giant panda suggest that while the ancient giant panda was omnivorous 7 million years ago (mya), it only became herbivorous some 2–2.4 mya with the emergence of A. microta.[58][59] Genome sequencing of the giant panda suggests that the dietary switch could have initiated from the loss of the sole T1R1/T1R3 umami taste receptor, resulting from two frameshift mutations within the T1R1 exons.[60] Umami taste corresponds to high levels of glutamate as found in meat and may have thus altered the food choice of the giant panda.[61] Although the pseudogenisation of the umami taste receptor in Ailuropoda coincides with the dietary switch to herbivory, it is likely a result of, and not the reason for, the dietary change.[59][60][61] The mutation time for the T1R1 gene in the giant panda is estimated at 4.2 mya,[59] while fossil evidence indicates bamboo consumption in the giant panda species at least 7 mya,[58] signifying that although complete herbivory occurred around 2 mya, the dietary switch was initiated prior to T1R1 loss-of-function.

Pandas eat any of 25 bamboo species in the wild, such as Fargesia dracocephala[62] and Fargesia rufa.[63] Only a few bamboo species are widespread at the high altitudes pandas now inhabit. Bamboo leaves contain the highest protein levels; stems have less.[64]

Because of the synchronous flowering, death, and regeneration of all bamboo within a species, the giant panda must have at least two different species available in its range to avoid starvation. While primarily herbivorous, the giant panda still retains decidedly ursine teeth and will eat meat, fish, and eggs when available. In captivity, zoos typically maintain the giant panda's bamboo diet, though some will provide specially formulated biscuits or other dietary supplements.[65]

Pandas will travel between different habitats to obtain the nutrients they need and to balance their diet for reproduction. For six years, scientists studied six pandas tagged with GPS collars at the Foping Reserve in the Qinling Mountains, taking note of their foraging and mating habits and analyzing samples of their food and feces. The pandas would move from the valleys into the Qinling Mountains and would only return to the valleys in autumn. During the summer months, bamboo shoots rich in protein are available only at higher altitudes, which causes low calcium levels in the pandas, so during the breeding season they would trek back down to eat bamboo leaves rich in calcium.[66]

Although adult giant pandas have few natural predators other than humans, young cubs are vulnerable to attacks by snow leopards, yellow-throated martens,[67] eagles, feral dogs, and the Asian black bear. Sub-adults weighing up to 50 kg (110 lb) may be vulnerable to predation by leopards.[68]

The giant panda is a terrestrial animal and primarily spends its life roaming and feeding in the bamboo forests of the Qinling Mountains and in the hilly province of Sichuan.[69] Giant pandas are generally solitary.[53] Each adult has a defined territory, and a female is not tolerant of other females in her range. Social encounters occur primarily during the brief breeding season, in which pandas in proximity to one another will gather.[70] After mating, the male leaves the female alone to raise the cub.[71]

Pandas were thought to fall into the crepuscular category, those that are active twice a day, at dawn and dusk; however, Jindong Zhang found that pandas may belong to a category all of their own, with activity peaks in the morning, afternoon and midnight. Due to their sheer size, they can be active at any time of the day.[72] Activity is highest in June and decreases in late summer to fall, with an increase from November through the following March.[73] Activity is also directly related to the amount of sunlight during colder days.[73]

Pandas communicate through vocalisation and scent marking, such as clawing trees or spraying urine.[11] They are able to climb and take shelter in hollow trees or rock crevices, but do not establish permanent dens. For this reason, pandas do not hibernate, similar to other subtropical mammals, and will instead move to elevations with warmer temperatures.[74] Pandas rely primarily on spatial memory rather than visual memory.[75]

Though the panda is often assumed to be docile, it has been known to attack humans, presumably out of irritation rather than aggression.[76][77][78]

Initially, the primary method of breeding giant pandas in captivity was by artificial insemination, as they seemed to lose interest in mating once they were captured.[80] This led some scientists to try extreme methods, such as showing them videos of giant pandas mating[81] and giving the males sildenafil (commonly known by the brand name Viagra).[82] Only recently have researchers started having success with captive breeding programs, and they have now determined that giant pandas have breeding rates comparable to those of some populations of the American black bear, a thriving bear species. The normal reproductive rate is considered to be one young every two years.[16][69]

Giant pandas reach sexual maturity between the ages of four and eight, and may be reproductive until age 20.[83] The mating season is between March and May, when a female goes into estrus, which lasts for two or three days and only occurs once a year.[84] When mating, the female is in a crouching, head-down position as the male mounts her from behind. Copulation time is short, ranging from 30 seconds to five minutes, but the male may mount her repeatedly to ensure successful fertilisation. The gestation period ranges from 95 to 160 days.[84]

Giant pandas give birth to twins in about half of pregnancies.[85] If twins are born, usually only one survives in the wild. The mother will select the stronger of the cubs, and the weaker cub will die due to starvation. The mother is thought to be unable to produce enough milk for two cubs, since she does not store fat.[86] The father has no part in helping raise the cub.

When the cub is first born, it is pink, blind, and toothless,[87] weighing only 90 to 130 g (3 1⁄4 to 4 1⁄2 oz), or about 1/800th of the mother's weight,[18] proportionally the smallest baby of any placental mammal.[88] It nurses from its mother's breast six to 14 times a day for up to 30 minutes at a time. For three to four hours, the mother may leave the den to feed, which leaves the cub defenseless. One to two weeks after birth, the cub's skin turns grey where its hair will eventually become black. A slight pink colour may appear on the cub's fur, as a result of a chemical reaction between the fur and its mother's saliva. A month after birth, the colour pattern of the cub's fur is fully developed. Its fur is very soft and coarsens with age. The cub begins to crawl at 75 to 80 days;[18] mothers play with their cubs by rolling and wrestling with them. The cubs can eat small quantities of bamboo after six months,[89] though mother's milk remains the primary food source for most of the first year. Giant panda cubs weigh 45 kg (100 pounds) at one year and live with their mothers until they are 18 months to two years old. The interval between births in the wild is generally two years.
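
The 1/800th figure can be sanity-checked against the adult weights given earlier (a hypothetical Python sketch; the midpoint value is our assumption, not from the source):

    # Compare the birth weight implied by the 1/800 ratio with the reported 90-130 g range.
    mother_g = 107.5 * 1000        # midpoint of the 100-115 kg average adult weight, in grams
    cub_g = mother_g / 800         # implied birth weight
    print(round(cub_g))            # 134, just above the reported 90-130 g range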

In July 2009, Chinese scientists confirmed the birth of the first cub to be successfully conceived through artificial insemination using frozen sperm.[90] The cub was born at 07:41 on 23 July that year in Sichuan as the third cub of You You, an 11-year-old.[90][91][92] The technique for freezing the sperm in liquid nitrogen was first developed in 1980, and the first birth was hailed as a solution to the dwindling availability of giant panda semen, which had led to inbreeding.[92][93] Panda semen, which can be frozen for decades, could be shared between different zoos to save the species.[90][91] It is expected that zoos in destinations such as San Diego in the United States and Mexico City will now be able to provide their own semen to inseminate more giant pandas.[93] In August 2014, a rare birth of panda triplets was announced in China; it was the fourth such birth ever reported.[94]

Attempts have also been made to reproduce giant pandas by interspecific pregnancy, by implanting cloned panda embryos into the uterus of an animal of another species. This has resulted in panda fetuses, but no live births.[95]

In the past, pandas were thought to be rare and noble creatures – the Empress Dowager Bo was buried with a panda skull in her vault. The grandson of Emperor Taizong of Tang is said to have given Japan two pandas and a sheet of panda skin as a sign of goodwill. Unlike many other animals in Ancient China, pandas were rarely thought to have medical uses. The few known uses include the Sichuan tribal peoples' use of panda urine to melt accidentally swallowed needles, and the use of panda pelts to control menses as described in the Qin Dynasty encyclopedia Erya.[96]

The creature named mo (貘) mentioned in some ancient books has been interpreted as the giant panda.[96] The dictionary Shuowen Jiezi (Eastern Han Dynasty) says that the mo, from Shu (Sichuan), is bear-like, but yellow-and-black,[97] although the older Erya describes mo simply as a "white leopard".[98] The interpretation of the legendary fierce creature pixiu (貔貅) as referring to the giant panda is also common.[99]

During the reign of the Yongle Emperor (early 15th century), his relative from Kaifeng sent him a captured zouyu (騶虞), and another zouyu was sighted in Shandong. The zouyu is a legendary "righteous" animal which, similarly to a qilin, only appears during the rule of a benevolent and sincere monarch. It is said to be fierce as a tiger, but gentle and strictly vegetarian, and is described in some books as a white tiger with black spots. Puzzled about the real zoological identity of the creature captured during the Yongle era, J.J.L. Duyvendak exclaims, "Can it possibly have been a Pandah?"[100]

The comparative obscurity of the giant panda throughout most of China's history is illustrated by the fact that, despite there being a number of depictions of bears in Chinese art starting from its most ancient times, and bamboo being one of the favorite subjects for Chinese painters, there are no known pre-20th-century artistic representations of giant pandas.[citation needed]

The West first learned of the giant panda on 11 March 1869, when the French missionary Armand David[18] received a skin from a hunter. The first Westerner known to have seen a living giant panda was the German zoologist Hugo Weigold, who purchased a cub in 1916. Kermit and Theodore Roosevelt, Jr., became the first Westerners to shoot a panda, on an expedition funded by the Field Museum of Natural History in the 1920s. In 1936, Ruth Harkness became the first Westerner to bring back a live giant panda, a cub named Su Lin,[101] which went to live at the Brookfield Zoo in Chicago. In 1938, Floyd Tangier Smith captured and delivered five giant pandas to London; they arrived on 23 December aboard the SS Antenor.[102][103] These five were the first on British soil and were transferred to London Zoo.[104] One, named Grandma, only lasted a few days. She was taxidermied by E. Gerrard and Sons and sold to Leeds City Museum, where she is currently on display to the public. Another, Ming, became London Zoo's first giant panda. Her skull is held by the Royal College of Surgeons of England.[105]

Gifts of giant pandas to American and Japanese zoos formed an important part of the diplomacy of the People's Republic of China (PRC) in the 1970s, as it marked some of the first cultural exchanges between China and the West. This practice has been termed "panda diplomacy".[106]

By 1984, however, pandas were no longer given as gifts. Instead, China began to offer pandas to other nations only on 10-year loans, under terms including a fee of up to US$1,000,000 per year and a provision that any cubs born during the loan are the property of China. Since 1998, because of a WWF lawsuit, the United States Fish and Wildlife Service only allows a US zoo to import a panda if the zoo can ensure that China will channel more than half of its loan fee into conservation efforts for the giant panda and its habitat.

In May 2005, China offered a breeding pair to Taiwan. The issue became embroiled in cross-Strait relations – both over the underlying symbolism, and over technical issues such as whether the transfer would be considered "domestic" or "international", or whether any true conservation purpose would be served by the exchange.[107] A contest in 2006 to name the pandas was held in the mainland, resulting in the politically charged names Tuan Tuan and Yuan Yuan (from tuanyuan, meaning "reunion", i.e. "reunification"). China's offer was initially rejected by Chen Shui-bian, then President of Taiwan. However, when Ma Ying-jeou assumed the presidency in 2008, the offer was accepted, and the pandas arrived in December of that year.[108]

Microbes in panda waste are being investigated for their use in creating biofuels from bamboo and other plant materials.[109]

The giant panda is a vulnerable species, threatened by continued habitat loss and habitat fragmentation,[110] and by a very low birthrate, both in the wild and in captivity.[45] Its range is currently confined to a small portion on the western edge of its historical range, which stretched through southern and eastern China, northern Myanmar, and northern Vietnam.[1]

The giant panda has been a target of poaching by locals since ancient times and by foreigners since it was introduced to the West. Starting in the 1930s, foreigners were unable to poach giant pandas in China because of the Second Sino-Japanese War and the Chinese Civil War, but pandas remained a source of soft furs for the locals. The population boom in China after 1949 created stress on the pandas' habitat, and the subsequent famines led to the increased hunting of wildlife, including pandas. During the Cultural Revolution, all studies and conservation activities on the pandas were stopped. After the Chinese economic reform, demand for panda skins from Hong Kong and Japan led to illegal poaching for the black market, acts generally ignored by the local officials at the time.

In 1963, the PRC government set up Wolong National Nature Reserve to save the declining panda population.[111] However, few advances in the conservation of pandas were made at the time, owing to inexperience and insufficient knowledge of ecology. Many believed the best way to save the pandas was to cage them. As a result, pandas were caged at any sign of decline and suffered from terrible conditions. Because of pollution and destruction of their natural habitat, along with segregation caused by caging, reproduction of wild pandas was severely limited. In the 1990s, however, several laws (including gun control and the removal of resident humans from the reserves) helped their chances of survival. With these renewed efforts and improved conservation methods, wild pandas have started to increase in numbers in some areas, though they are still classified as a rare species.[citation needed]

In 2006, scientists reported that the number of pandas living in the wild may have been underestimated at about 1,000. Previous population surveys had used conventional methods to estimate the size of the wild panda population, but using a new method that analyzes DNA from panda droppings, scientists believe the wild population may be as large as 3,000.[45] In 2006, there were 40 panda reserves in China, compared to just 13 reserves in 1998.[15] As the species has been reclassified as "vulnerable" since 2016, the conservation efforts are thought to be working. Furthermore, in response to this reclassification, the State Forestry Administration of China announced that it would not accordingly lower the conservation level for the panda, and would instead reinforce the conservation efforts.[112]

The giant panda is among the world's most adored and protected rare animals, and is one of the few in the world whose natural habitat was able to gain a UNESCO World Heritage Site designation. The Sichuan Giant Panda Sanctuaries, located in the southwest province of Sichuan and covering seven natural reserves, were inscribed onto the World Heritage List in 2006.[113][114][115]

Not all conservationists agree that the money spent on conserving pandas is well spent. Chris Packham has argued that the breeding of pandas in captivity is "pointless" because "there is not enough habitat left to sustain them".[116] Packham argues that the money spent on pandas would be better spent elsewhere,[116] and has said he would "eat the last panda if I could have all the money we have spent on panda conservation put back on the table for me to do more sensible things with",[117] though he has apologised for upsetting people who like pandas.[118] He said, "The panda is possibly one of the grossest wastes of conservation money in the last half century."[117] However, a 2015 paper found that the giant panda can serve as an umbrella species, as the preservation of its habitat also helps other endemic species in China, including 70% of the country's forest birds, 70% of mammals and 31% of amphibians.[119]

In 2012, Earthwatch Institute, a global nonprofit that teams volunteers with scientists to conduct important environmental research, launched a program called "On the Trail of Giant Panda". This program, based in the Wolong National Nature Reserve, allows volunteers to work up close with pandas cared for in captivity, and help them adapt to life in the wild, so that they may breed, and live longer and healthier lives.[120]

Pandas have been kept in zoos as early as the Western Han Dynasty in China, where the writer Sima Xiangru noted that the panda was the most treasured animal in the emperor's garden of exotic animals in the capital Chang'an (present-day Xi'an). Not until the 1950s were pandas again recorded to have been exhibited in China's zoos.[121]

Chi Chi at the London Zoo became very popular. This influenced the World Wildlife Fund to use a panda as its symbol.[122]

A 2006 New York Times article[123] outlined the economics of keeping pandas, which costs five times more than keeping the next most expensive animal, an elephant. American zoos generally pay the Chinese government $1 million a year in fees, as part of a typical ten-year contract. San Diego's contract with China was to expire in 2008, but got a five-year extension at about half of the previous yearly cost.[124] The last contract, with the Memphis Zoo in Memphis, Tennessee, ended in 2013.[123]

The Face of the Giant Panda Sign is an MRI sign in patients with Wilson's disease, named for the midbrain's resemblance to a giant panda's face.
en/2264.html.txt ADDED
@@ -0,0 +1,151 @@

The great white shark (Carcharodon carcharias), also known as the great white, white shark or "white pointer", is a species of large mackerel shark which can be found in the coastal surface waters of all the major oceans. It is notable for its size, with larger female individuals growing to 6.1 m (20 ft) in length and 1,905–2,268 kg (4,200–5,000 lb) in weight at maturity.[3][4][5] However, most are smaller; males measure 3.4 to 4.0 m (11 to 13 ft), and females 4.6 to 4.9 m (15 to 16 ft), on average.[4][6] According to a 2014 study, the lifespan of great white sharks is estimated to be as long as 70 years or more, well above previous estimates,[7] making it one of the longest-lived cartilaginous fishes currently known.[8] According to the same study, male great white sharks take 26 years to reach sexual maturity, while the females take 33 years to be ready to produce offspring.[9] Great white sharks can swim at speeds of 25 km/h (16 mph)[10] for short bursts and to depths of 1,200 m (3,900 ft).[11]

The great white shark has no known natural predators other than, on very rare occasions, the killer whale.[12] It is arguably the world's largest-known extant macropredatory fish, and is one of the primary predators of marine mammals, taking prey up to the size of large baleen whales. This shark is also known to prey upon a variety of other marine animals, including fish and seabirds. It is the only known surviving species of its genus Carcharodon, and is responsible for more recorded human bite incidents than any other shark.[13][14]

The species faces numerous ecological challenges, which have resulted in international protection. The IUCN lists the great white shark as a vulnerable species,[2] and it is included in Appendix II of CITES.[15] It is also protected by several national governments, such as Australia (as of 2018).[16]

The novel Jaws by Peter Benchley and its subsequent film adaptation by Steven Spielberg depicted the great white shark as a "ferocious man-eater". Humans are not the preferred prey of the great white shark,[17] but the great white is nevertheless responsible for the largest number of reported and identified fatal unprovoked shark attacks on humans.[18]

Due to their need to travel long distances for seasonal migration and their extremely demanding diet, it is not logistically feasible to keep great white sharks in captivity. No aquarium in the world is currently believed to own one.[19]

The great white shark was one of the many amphibia originally described by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae,[20] with its first scientific name, Squalus carcharias. Later, Sir Andrew Smith gave it Carcharodon as its generic name in 1833; in 1873, the generic name was identified with Linnaeus' specific name and the current scientific name, Carcharodon carcharias, was finalized. Carcharodon comes from the Ancient Greek words κάρχαρος (kárkharos, 'sharp' or 'jagged') and ὀδούς (odoús), ὀδών (odṓn, 'tooth').[21]

The earliest-known fossils of the great white shark are about 16 million years old, dating to the mid-Miocene epoch.[1] However, the phylogeny of the great white is still in dispute. The original hypothesis for the great white's origins is that it shares a common ancestor with a prehistoric shark, such as C. megalodon. C. megalodon had teeth that were superficially not too dissimilar to those of great white sharks, but its teeth were far larger. Although cartilaginous skeletons do not fossilize, C. megalodon is estimated to have been considerably larger than the great white shark, at up to 17 m (56 ft) and 59,413 kg (130,983 lb).[22] Similarities among the physical remains and the extreme size of both the great white and C. megalodon led many scientists to believe these sharks were closely related, and the name Carcharodon megalodon was applied to the latter.

However, a newer hypothesis proposes that the great white is more closely related to an ancient mako shark, Isurus hastalis, than to C. megalodon. This theory seems to be supported by the discovery in 1988 of a complete set of jaws with 222 teeth and 45 vertebrae of the extinct transitional species Carcharodon hubbelli.[23] In addition, the new hypothesis assigns C. megalodon to the genus Carcharocles, which also comprises the other megatoothed sharks.[24]

Great white sharks live in almost all coastal and offshore waters which have water temperatures between 12 and 24 °C (54 and 75 °F), with greater concentrations in the United States (Northeast and California), South Africa, Japan, Oceania, Chile, and the Mediterranean, including the Sea of Marmara and Bosphorus.[25][26] One of the densest-known populations is found around Dyer Island, South Africa.[27]

The great white is an epipelagic fish, observed mostly in the presence of rich game, such as fur seals (Arctocephalus ssp.), sea lions, cetaceans, other sharks, and large bony fish species. In the open ocean, it has been recorded at depths as great as 1,200 m (3,900 ft).[11] These findings challenge the traditional notion that the great white is a coastal species.[11]

According to a recent study, California great whites have migrated to an area between Baja California Peninsula and Hawaii known as the White Shark Café to spend at least 100 days before migrating back to Baja. On the journey out, they swim slowly and dive down to around 900 m (3,000 ft). After they arrive, they change behaviour and do short dives to about 300 m (980 ft) for up to ten minutes. Another white shark that was tagged off the South African coast swam to the southern coast of Australia and back within the year. A similar study tracked a different great white shark from South Africa swimming to Australia's northwestern coast and back, a journey of 20,000 km (12,000 mi; 11,000 nmi) in under nine months.[28] These observations argue against traditional theories that white sharks are coastal territorial predators, and open up the possibility of interaction between shark populations that were previously thought to have been discrete. The reasons for their migration and what they do at their destination are still unknown. Possibilities include seasonal feeding or mating.[29]

In the Northwest Atlantic, the white shark populations off the New England coast were nearly eradicated due to over-fishing.[30] In recent years, however, the populations have begun to grow greatly,[31] largely due to the increase in seal populations on Cape Cod, Massachusetts, since the enactment of the Marine Mammal Protection Act in 1972.[32] Currently, very little is known about the hunting and movement patterns of great whites off Cape Cod, but ongoing studies hope to offer insight into this growing shark population.[33]

A 2018 study indicated that white sharks prefer to congregate deep in anticyclonic eddies in the North Atlantic Ocean. The sharks studied tended to favour the warm-water eddies, spending the daytime hours at 450 metres and coming to the surface at night.[34]

The great white shark has a robust, large, conical snout. The upper and lower lobes on the tail fin are approximately the same size, similar to some mackerel sharks. A great white displays countershading, having a white underside and a grey dorsal area (sometimes in a brown or blue shade) that gives an overall mottled appearance. The coloration makes it difficult for prey to spot the shark, because it breaks up the shark's outline when seen from the side. From above, the darker shade blends with the sea, and from below it exposes a minimal silhouette against the sunlight. Leucism is extremely rare in this species, but has been documented in one great white shark (a pup that washed ashore in Australia and died).[35] Great white sharks, like many other sharks, have rows of serrated teeth behind the main ones, ready to replace any that break off. When the shark bites, it shakes its head side-to-side, helping the teeth saw off large chunks of flesh.[36] Great white sharks, like other mackerel sharks, have larger eyes than other shark species in proportion to their body size. The iris of the eye is a deep blue instead of black.[37]

Sexual dimorphism is present in great white sharks; females are generally larger than males. Male great whites on average measure 3.4 to 4.0 m (11 to 13 ft) long, while females measure 4.6 to 4.9 m (15 to 16 ft).[6] Adults of this species weigh 522–771 kg (1,151–1,700 lb) on average;[40] however, mature females can have an average mass of 680–1,110 kg (1,500–2,450 lb).[4] The largest females have been verified up to 6.1 m (20 ft) in length and an estimated 1,905 kg (4,200 lb) in weight,[3][4] perhaps up to 2,268 kg (5,000 lb).[5] The maximum size is subject to debate because some reports are rough estimations or speculations performed under questionable circumstances.[41] Among living cartilaginous fish, only the whale shark (Rhincodon typus), the basking shark (Cetorhinus maximus) and the giant manta ray (Manta birostris), in that order, are on average larger and heavier. These three species are generally quite docile in disposition and given to passively filter-feeding on very small organisms.[40] This makes the great white shark the largest extant macropredatory fish. Great white sharks are around 1.2 m (3.9 ft) long at birth, and grow about 25 cm (9.8 in) each year.[42]

According to J. E. Randall, the largest white shark reliably measured was a 6 m (20 ft) individual reported from Ledge Point, Western Australia, in 1987.[43] Another great white specimen of similar size has been verified by the Canadian Shark Research Center: a female caught by David McKendrick of Alberton, Prince Edward Island, in August 1988 in the Gulf of St. Lawrence off Prince Edward Island. This female great white was 6.1 m (20 ft) long.[4] There was also a report, considered reliable by some experts in the past, of a larger great white shark specimen from Cuba in 1945.[39][44][45][46] This specimen was reportedly 6.4 m (21 ft) long and had a body mass estimated at 3,324 kg (7,328 lb).[39][45] However, later studies revealed that this particular specimen was actually around 4.9 m (16 ft) in length, within the average maximum size range.[4]

The largest great white recognized by the International Game Fish Association (IGFA) is one caught by Alf Dean in south Australian waters in 1959, weighing 1,208 kg (2,663 lb).[41] Several larger great whites caught by anglers have since been verified, but were later disallowed from formal recognition by IGFA monitors for rules violations.

A number of very large unconfirmed great white shark specimens have been recorded.[47] For decades, many ichthyological works, as well as the Guinness Book of World Records, listed two great white sharks as the largest individuals: a 10.9 m (36 ft) great white captured in southern Australian waters near Port Fairy in the 1870s, and an 11.3 m (37 ft) shark trapped in a herring weir in New Brunswick, Canada, in the 1930s. However, these measurements were not obtained in a rigorous, scientifically valid manner, and researchers questioned their reliability for a long time, noting they were much larger than any other accurately reported sighting. Later studies proved these doubts to be well founded. The New Brunswick shark may have been a misidentified basking shark, as the two have similar body shapes. The question of the Port Fairy shark was settled in the 1970s when J. E. Randall examined the shark's jaws and "found that the Port Fairy shark was of the order of 5 m (16 ft) in length and suggested that a mistake had been made in the original record, in 1870, of the shark's length".[43] These wrong measurements would make the alleged shark more than five times heavier than it really was.

While these measurements have not been confirmed, some great white sharks caught in modern times have been estimated to be more than 7 m (23 ft) long,[48] but these claims have received some criticism.[41][48] However, J. E. Randall believed that the great white shark may have exceeded 6.1 m (20 ft) in length.[43] A great white shark was captured near Kangaroo Island in Australia on 1 April 1987. This shark was estimated to be more than 6.9 m (23 ft) long by Peter Resiley,[43][49] and has been designated KANGA.[48] Another great white shark was caught in Malta by Alfredo Cutajar on 16 April 1987. This shark was estimated to be around 7.13 m (23.4 ft) long by John Abela and has been designated MALTA.[48] However, Cappo drew criticism because he used shark size estimation methods proposed by J. E. Randall to suggest that the KANGA specimen was 5.8–6.4 m (19–21 ft) long.[48] In a similar fashion, I. K. Fergusson also used shark size estimation methods proposed by J. E. Randall to suggest that the MALTA specimen was 5.3–5.7 m (17–19 ft) long.[48] However, photographic evidence suggested that these specimens were larger than the size estimations yielded through Randall's methods.[48] Thus, a team of scientists – H. F. Mollet, G. M. Cailliet, A. P. Klimley, D. A. Ebert, A. D. Testi, and L. J. V. Compagno – reviewed the cases of the KANGA and MALTA specimens in 1996 to resolve the dispute, conducting a comprehensive morphometric analysis of the remains of these sharks and re-examining the photographic evidence in an attempt to validate the original size estimations; their findings were consistent with them. The findings indicated that the estimations by P. Resiley and J. Abela are reasonable and could not be ruled out.[48]

A particularly large female great white nicknamed "Deep Blue", estimated at 6.1 m (20 ft), was filmed off Guadalupe during shooting for the 2014 Shark Week episode "Jaws Strikes Back". Deep Blue would later gain significant attention when she was filmed interacting with researcher Mauricio Hoyas Pallida in a viral video that Mauricio posted on Facebook on 11 June 2015.[50] Deep Blue was seen again off Oahu in January 2019 while scavenging a sperm whale carcass, whereupon she was filmed swimming beside divers, including dive tourism operator and model Ocean Ramsey, in open water.[51][52][53] In July 2019, a fisherman, J. B. Currell, was on a trip to Cape Cod from Bermuda with Tom Brownell when they saw a large shark about 40 mi (64 km) southeast of Martha's Vineyard. Recording it on video, he said that it weighed about 5,000 lb (2,300 kg) and measured 25–30 ft (7.6–9.1 m), evoking a comparison with the fictional shark Jaws. The video was shared with the page "Troy Dando Fishing" on Facebook.[54][55]

A particularly infamous great white shark, supposedly of record proportions, which once patrolled the area of False Bay, South Africa, was said to be well over 7 m (23 ft) long during the early 1980s. This shark, known locally as the "Submarine", had a legendary reputation that was supposedly well founded. Though rumours have stated this shark was exaggerated in size or non-existent altogether, witness accounts by the then young Craig Anthony Ferreira, a notable shark expert in South Africa, and his father indicate an unusually large animal of considerable size and power (though it remains uncertain just how massive the shark was, as it escaped capture each time it was hooked). Ferreira describes in great detail, in his book "Great White Sharks On Their Best Behavior", the four encounters with the giant shark in which he participated.[56]

One contender in maximum size among the predatory sharks is the tiger shark (Galeocerdo cuvier). While tiger sharks, which are typically both a few feet smaller and leaner and less heavily built than white sharks, have been confirmed to reach at least 5.5 m (18 ft) in length, an unverified specimen was reported to have measured 7.4 m (24 ft) in length and weighed 3,110 kg (6,860 lb), more than two times heavier than the largest confirmed specimen at 1,524 kg (3,360 lb).[40][57][58] Some other macropredatory sharks, such as the Greenland shark (Somniosus microcephalus) and the Pacific sleeper shark (S. pacificus), are also reported to rival these sharks in length in exceptional cases (though they probably weigh a bit less, since they are more slender in build than a great white).[59][60] The question of maximum weight is complicated by the unresolved question of whether or not to include the shark's stomach contents when weighing the shark. With a single bite a great white can take in up to 14 kg (31 lb) of flesh, and it can also consume several hundred kilograms of food.

Great white sharks, like all other sharks, have an extra sense given by the ampullae of Lorenzini, which enables them to detect the electromagnetic field emitted by the movement of living animals. Great whites are so sensitive that they can detect variations of half a billionth of a volt. At close range, this allows the shark to locate even immobile animals by detecting their heartbeat. Most fish have a less-developed but similar sense using their body's lateral line.[75]

To more successfully hunt fast and agile prey such as sea lions, the great white has adapted to maintain a body temperature warmer than the surrounding water. One of these adaptations is a "rete mirabile" (Latin for "wonderful net"). This close web-like structure of veins and arteries, located along each lateral side of the shark, conserves heat by warming the cooler arterial blood with the venous blood that has been warmed by the working muscles. This keeps certain parts of the body (particularly the stomach) at temperatures up to 14 °C (25 °F)[76] above that of the surrounding water, while the heart and gills remain at sea temperature. When conserving energy, the core body temperature can drop to match the surroundings. A great white shark's success in raising its core temperature is an example of gigantothermy. Therefore, the great white shark can be considered an endothermic poikilotherm or mesotherm, because its body temperature is not constant but is internally regulated.[36][77] Great whites also rely on the fat and oils stored within their livers for long-distance migrations across nutrient-poor areas of the oceans.[78] Studies by Stanford University and the Monterey Bay Aquarium published on 17 July 2013 revealed that, in addition to controlling the sharks' buoyancy, the liver of great whites is essential in migration patterns. Sharks that sink faster during drift dives were revealed to use up their internal stores of energy quicker than those which sink in a dive at more leisurely rates.[79]

Toxicity from heavy metals seems to have little negative effect on great white sharks. Blood samples taken from forty-three individuals of varying size, age and sex off the South African coast, in a 2012 study led by biologists from the University of Miami, indicate that despite high levels of mercury, lead, and arsenic, there was no sign of raised white blood cell counts or granulocyte-to-lymphocyte ratios, indicating the sharks had healthy immune systems. This discovery suggests a previously unknown physiological defence against heavy metal poisoning. Great whites are known to have a propensity for "self-healing and avoiding age-related ailments".[80]

A 2007 study from the University of New South Wales in Sydney, Australia, used CT scans of a shark's skull and computer models to measure the shark's maximum bite force. The study reveals the forces and behaviours its skull is adapted to handle and resolves competing theories about its feeding behaviour.[81] In 2008, a team of scientists led by Stephen Wroe conducted an experiment to determine the great white shark's jaw power, and the findings indicated that a specimen massing 3,324 kg (7,328 lb) could exert a bite force of 18,216 newtons (4,095 lbf).[45]
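
For readers checking the unit conversion in that figure, one pound-force is defined as about 4.448 newtons; a minimal Python sketch (our own illustration, not part of the cited study):

    # Convert the reported bite force from newtons to pounds-force.
    bite_force_n = 18216
    newtons_per_lbf = 4.4482216    # standard conversion factor
    print(round(bite_force_n / newtons_per_lbf))   # 4095, matching the quoted 4,095 lbf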

This shark's behaviour and social structure are complex.[82] In South Africa, white sharks have a dominance hierarchy depending on size, sex and squatter's rights: females dominate males, larger sharks dominate smaller sharks, and residents dominate newcomers. When hunting, great whites tend to separate and resolve conflicts with rituals and displays. White sharks rarely resort to combat, although some individuals have been found with bite marks that match those of other white sharks. This suggests that when a great white approaches too closely to another, they react with a warning bite. Another possibility is that white sharks bite to show their dominance.

The great white shark is one of only a few sharks known to regularly lift its head above the sea surface to gaze at other objects such as prey. This is known as spy-hopping. This behaviour has also been seen in at least one group of blacktip reef sharks, but this might be learned from interaction with humans (it is theorized that the shark may also be able to smell better this way, because smell travels through air faster than through water). White sharks are generally very curious animals, display intelligence and may also turn to socializing if the situation demands it. At Seal Island, white sharks have been observed arriving and departing in stable "clans" of two to six individuals on a yearly basis. Whether clan members are related is unknown, but they get along peacefully enough. In fact, the social structure of a clan is probably most aptly compared to that of a wolf pack, in that each member has a clearly established rank and each clan has an alpha leader. When members of different clans meet, they establish social rank nonviolently through any of a variety of interactions.[83]

Great white sharks are carnivorous and prey upon fish (e.g. tuna, rays, other sharks),[83] cetaceans (i.e., dolphins, porpoises, whales), pinnipeds (e.g. seals, fur seals,[83] and sea lions), sea turtles,[83] sea otters (Enhydra lutris) and seabirds.[84] Great whites have also been known to eat objects that they are unable to digest. Juvenile white sharks predominantly prey on fish, including other elasmobranchs, as their jaws are not strong enough to withstand the forces required to attack larger prey such as pinnipeds and cetaceans until they reach a length of 3 m (9.8 ft) or more, at which point their jaw cartilage mineralizes enough to withstand the impact of biting into larger prey species.[85] Upon approaching a length of nearly 4 m (13 ft), great white sharks begin to target predominantly marine mammals for food, though individual sharks seem to specialize in different types of prey depending on their preferences.[86][87] They seem to be highly opportunistic.[88][89] These sharks prefer prey with a high content of energy-rich fat. Shark expert Peter Klimley used a rod-and-reel rig and trolled carcasses of a seal, a pig, and a sheep from his boat in the South Farallons. The sharks attacked all three baits but rejected the sheep carcass.[90]

Off California, sharks immobilize northern elephant seals (Mirounga angustirostris) with a large bite to the hindquarters (which is the main source of the seal's mobility) and wait for the seal to bleed to death. This technique is especially used on adult male elephant seals, which are typically larger than the shark, ranging between 1,500 and 2,000 kg (3,300 and 4,400 lb), and are potentially dangerous adversaries.[91][92] Most commonly, though, juvenile elephant seals are the most frequently eaten at elephant seal colonies.[93] Prey is normally attacked sub-surface. Harbor seals (Phoca vitulina) are taken from the surface and dragged down until they stop struggling. They are then eaten near the bottom. California sea lions (Zalophus californianus) are ambushed from below and struck mid-body before being dragged and eaten.[94]
61
+
62
+ In the Northwest Atlantic mature great whites are known to feed on both harbor and grey seals.[32] Unlike adults, juvenile white sharks in the area feed on smaller fish species until they are large enough to prey on marine mammals such as seals.[95]
63
+
64
+ White sharks also attack dolphins and porpoises from above, behind or below to avoid being detected by their echolocation. Targeted species include dusky dolphins (Lagenorhynchus obscurus),[48] Risso's dolphins (Grampus griseus),[48] bottlenose dolphins (Tursiops spp.),[48][96] humpback dolphins (Sousa spp.),[96] harbour porpoises (Phocoena phocoena),[48] and Dall's porpoises (Phocoenoides dalli).[48] Groups of dolphins have occasionally been observed defending themselves from sharks with mobbing behaviour.[96] White shark predation on other species of small cetacean has also been observed. In August 1989, a 1.8 m (5.9 ft) juvenile male pygmy sperm whale (Kogia breviceps) was found stranded in central California with a bite mark on its caudal peduncle from a great white shark.[97] In addition, white sharks attack and prey upon beaked whales.[48][96] Observed cases include an adult Stejneger's beaked whale (Mesoplodon stejnegeri), a species with a mean adult mass of around 1,100 kg (2,400 lb),[98] and a juvenile Cuvier's beaked whale (Ziphius cavirostris) estimated at 3 m (9.8 ft), both hunted and killed by great white sharks.[99] When hunting sea turtles, they appear to simply bite through the carapace around a flipper, immobilizing the turtle. The heaviest species of bony fish, the oceanic sunfish (Mola mola), has been found in great white shark stomachs.[88]
65
+
66
+ Off Seal Island, False Bay in South Africa, the sharks ambush brown fur seals (Arctocephalus pusillus) from below at high speeds, hitting the seal mid-body. They can go so fast that they completely leave the water. The peak burst speed is estimated to be above 40 km/h (25 mph).[100] They have also been observed chasing prey after a missed attack. Prey is usually attacked at the surface.[101] Shark attacks most often occur in the morning, within 2 hours of sunrise, when visibility is poor. Their success rate is 55% in the first 2 hours, falling to 40% in late morning after which hunting stops.[83]
67
+
68
+ Whale carcasses comprise an important part of the diet of white sharks, although this has rarely been observed because whales tend to die in remote areas. It has been estimated that 30 kg (66 lb) of whale blubber could feed a 4.5 m (15 ft) white shark for 1.5 months. Detailed observations were made of four whale carcasses in False Bay between 2000 and 2010. Sharks were drawn to the carcass by chemical and odour detection, spread by strong winds. After initially feeding on the whale caudal peduncle and fluke, the sharks would investigate the carcass by slowly swimming around it and mouthing several parts before selecting a blubber-rich area. During feeding bouts of 15–20 seconds the sharks removed flesh with lateral headshakes, without the protective ocular rotation they employ when attacking live prey. The sharks were frequently observed regurgitating chunks of blubber and immediately returning to feed, possibly in order to replace low energy yield pieces with high energy yield pieces, using their teeth as mechanoreceptors to distinguish them. After feeding for several hours, the sharks appeared to become lethargic, no longer swimming to the surface; they were observed mouthing the carcass but were apparently unable to bite hard enough to remove flesh; instead they would bounce off and slowly sink. Up to eight sharks were observed feeding simultaneously, bumping into each other without showing any signs of aggression; on one occasion a shark accidentally bit the head of a neighbouring shark, leaving two teeth embedded, but both continued to feed unperturbed. Smaller individuals hovered around the carcass, eating chunks that drifted away. Unusually for the area, large numbers of sharks over five metres long were observed, suggesting that the largest sharks change their behaviour to search for whales as they lose the manoeuvrability required to hunt seals. The investigating team concluded that the importance of whale carcasses, particularly for the largest white sharks, has been underestimated.[102] In another documented incident, white sharks were observed scavenging on a whale carcass alongside tiger sharks.[103] In 2020, marine biologists Dines and Gennari et al. published a documented incident, in the journal Marine and Freshwater Research, of a group of great white sharks exhibiting pack-like behaviour and successfully attacking and killing a live adult humpback whale. The sharks used the classic attack strategy employed on pinnipeds when attacking the whale, even using the bite-and-spit tactic they apply to smaller prey items. The whale was an entangled individual, heavily emaciated and thus more vulnerable to the sharks' attacks. The incident is the first known documentation of great whites actively killing a large baleen whale.[104][105] A second incident of great white sharks killing a humpback whale, this one involving a single large female great white nicknamed Helen, was documented off the coast of South Africa. Working alone, the shark attacked a 33 ft (10 m) emaciated and entangled humpback whale, crippling it by attacking its tail before drowning it by biting onto its head and pulling it underwater. The attack was witnessed via aerial drone by marine biologist Ryan Johnson, who said the attack went on for roughly 50 minutes before the shark successfully killed the whale. Johnson suggested that the shark may have strategized its attack in order to kill such a large animal.[106][107]
69
+
70
+ Stomach contents of great whites also indicate that both juvenile and adult whale sharks may be included on the animal's menu, though whether this reflects active hunting or scavenging is not known at present.[108][109]
71
+
72
+ Great white sharks were previously thought to reach sexual maturity at around 15 years of age, but are now believed to take far longer; male great white sharks reach sexual maturity at age 26, while females take 33 years to reach sexual maturity.[9][110][111] Maximum life span was originally believed to be more than 30 years, but a study by the Woods Hole Oceanographic Institution placed it at upwards of 70 years. Examinations of vertebral growth ring count gave a maximum male age of 73 years and a maximum female age of 40 years for the specimens studied. The shark's late sexual maturity, low reproductive rate, long gestation period of 11 months and slow growth make it vulnerable to pressures such as overfishing and environmental change.[8]
73
+
74
+ Little is known about the great white shark's mating habits, and mating behaviour has not yet been observed in this species. It is possible that whale carcasses are an important location for sexually mature sharks to meet for mating.[102] Birth has never been observed, but pregnant females have been examined. Great white sharks are ovoviviparous, which means eggs develop and hatch in the uterus and continue to develop until birth.[112] The great white has an 11-month gestation period. The shark pup's powerful jaws begin to develop in the first month. The unborn sharks participate in oophagy, in which they feed on ova produced by the mother. Delivery is in spring and summer.[113] The largest number of pups recorded for this species is 14, from a single mother measuring 4.5 m (15 ft) that was killed incidentally off Taiwan in 2019.[114] The Northern Pacific population of great whites is suspected to breed off the Sea of Cortez, as evidenced by local fishermen who report having caught them and by teeth found at dump sites for discarded parts from their catches.[citation needed]
75
+
76
+ A breach is the result of a high-speed approach to the surface, with the resulting momentum taking the shark partially or completely clear of the water. It is a technique employed by great white sharks when hunting seals, often used on cape fur seals at Seal Island in False Bay, South Africa. Because the behaviour is unpredictable, it is very hard to document. It was first photographed by Chris Fallows and Rob Lawrence, who developed the technique of towing a slow-moving seal decoy to trick the sharks into breaching.[115] Between April and September, scientists may observe around 600 breaches. The seals swim on the surface and the great white sharks launch their predatory attack from the deeper water below. They can reach speeds of up to 40 km/h (25 mph) and can at times launch themselves more than 3 m (10 ft) into the air. Just under half of observed breach attacks are successful.[116] In 2011, a 3-m-long shark jumped onto a seven-person research vessel off Seal Island in Mossel Bay. The crew were undertaking a population study using sardines as bait, and the incident was judged not to be an attack on the boat but an accident.[117]
77
+
78
+ Interspecific competition between the great white shark and the orca is probable in regions where the dietary preferences of both species may overlap.[96] An incident was documented on 4 October 1997, in the Farallon Islands off California in the United States. An estimated 4.7–5.3 m (15–17 ft) female orca immobilized an estimated 3–4 m (9.8–13.1 ft) great white shark.[118] The orca held the shark upside down to induce tonic immobility and kept the shark still for fifteen minutes, causing it to suffocate. The orca then proceeded to eat the dead shark's liver.[96][118][119] It is believed that the scent of the slain shark's carcass caused all the great whites in the region to flee, forfeiting an opportunity for a great seasonal feed.[120] Another similar attack apparently occurred there in 2000, but its outcome is not clear.[121] After both attacks, the local population of about 100 great whites vanished.[119][121] Following the 2000 incident, a great white with a satellite tag was found to have immediately submerged to a depth of 500 m (1,600 ft) and swum to Hawaii.[121] In 2015, a pod of orcas was recorded to have killed a great white shark off South Australia.[122] In 2017, three great whites were found washed ashore near Gansbaai, South Africa, with their body cavities torn open and the livers removed, most likely by killer whales.[123] Killer whales also generally affect great white distribution. Studies published in 2019 of killer whale and great white shark distribution and interactions around the Farallon Islands indicate that the cetaceans impact the sharks negatively, with brief appearances by killer whales causing the sharks to seek out new feeding areas until the next season.[124] Occasionally, however, some great whites have been seen to swim near orcas without fear.[125]
79
+
80
+ Of all shark species, the great white shark is responsible for by far the largest number of recorded shark bite incidents on humans, with 272 documented unprovoked bite incidents on humans as of 2012.[18]
81
+
82
+ More than any documented bite incident, Peter Benchley's best-selling novel Jaws and the subsequent 1975 film adaptation directed by Steven Spielberg provided the great white shark with the image of being a "man-eater" in the public mind.[126] While great white sharks have killed humans in at least 74 documented unprovoked bite incidents, they typically do not target them: for example, in the Mediterranean Sea there have been 31 confirmed bite incidents against humans in the last two centuries, most of which were non-fatal. Many of the incidents seemed to be "test-bites". Great white sharks also test-bite buoys, flotsam, and other unfamiliar objects, and they might grab a human or a surfboard to identify what it is.
83
+
84
+ Contrary to popular belief, great white sharks do not mistake humans for seals.[127] Many bite incidents occur in waters with low visibility or other situations that impair the shark's senses. The species appears not to like the taste of humans, or at least finds the taste unfamiliar. Further research shows that they can tell in one bite whether or not an object is worth preying upon. Humans, for the most part, are too bony for their liking; they much prefer seals, which are fat and rich in protein.[128]
85
+
86
+ Humans are not appropriate prey because the shark's digestion is too slow to cope with a human's high ratio of bone to muscle and fat. Accordingly, in most recorded shark bite incidents, great whites broke off contact after the first bite. Fatalities are usually caused by blood loss from the initial bite rather than from critical organ loss or from whole consumption. From 1990 to 2011, there were a total of 139 unprovoked great white shark bite incidents, 29 of which were fatal.[129]
87
+
88
+ However, some researchers have hypothesized that the reason the proportion of fatalities is low is not because sharks do not like human flesh, but because humans are often able to escape after the first bite. In the 1980s, John McCosker, chair of aquatic biology at the California Academy of Sciences, noted that divers who dove solo and were bitten by great whites were generally at least partially consumed, while divers who followed the buddy system were generally rescued by their companion. McCosker and Timothy C. Tricas, an author and professor at the University of Hawaii, suggest that a standard pattern for great whites is to make an initial devastating attack and then wait for the prey to weaken before consuming the wounded animal. Humans' ability to move out of reach with the help of others, thus foiling the attack, is unusual for a great white's prey.[130]
89
+
90
+ Shark culling is the deliberate killing of sharks by a government in an attempt to reduce shark attacks; shark culling is often called "shark control".[131] These programs have been criticized by environmentalists and scientists, who say they harm the marine ecosystem and are "outdated, cruel, and ineffective".[132] Many other species (dolphins, turtles, etc.) are also killed in these programs because of their use of shark nets and drum lines; 15,135 marine animals were killed in New South Wales' nets between 1950 and 2008,[131] and 84,000 marine animals were killed by Queensland authorities from 1962 to 2015.[133]
91
+
92
+ Great white sharks are currently killed in both Queensland and New South Wales in "shark control" (shark culling) programs.[131] Queensland uses shark nets and drum lines with baited hooks, while New South Wales only uses nets. From 1962 to 2018, Queensland authorities killed about 50,000 sharks, many of which were great whites.[134] From 2013 to 2014 alone, 667 sharks were killed by Queensland authorities, including great white sharks.[131] In Queensland, great white sharks found alive on the drum lines are shot.[135] In New South Wales, between 1950 and 2008, a total of 577 great white sharks were killed in nets.[131] Between September 2017 and April 2018, fourteen great white sharks were killed in New South Wales.[136]
93
+
94
+ KwaZulu-Natal (an area of South Africa) also has a "shark control" program that kills great white sharks and other marine life. In a 30-year period, more than 33,000 sharks were killed in KwaZulu-Natal's shark-killing program, including great whites.[137]
95
+
96
+ In 2014 the state government of Western Australia, led by Premier Colin Barnett, implemented a policy of killing large sharks. The policy, colloquially referred to as the Western Australian shark cull, was intended to protect users of the marine environment from shark bite incidents, following the deaths of seven people on the Western Australian coastline in the years 2010–2013.[138] Baited drum lines were deployed near popular beaches using hooks designed to catch great white sharks, as well as bull and tiger sharks. Large sharks found hooked but still alive were shot and their bodies discarded at sea.[139] The government claimed they were not culling the sharks, but were using a "targeted, localised, hazard mitigation strategy".[140] Barnett described opposition as "ludicrous" and "extreme", and said that nothing could change his mind.[141] The policy was met with widespread condemnation from the scientific community, which pointed out that the species responsible for bite incidents is notoriously hard to identify, that the drum lines failed to capture white sharks as intended, and that the government failed to show any correlation between its drum line policy and a decrease in shark bite incidents in the region.[142]
97
+
98
+ Great white sharks infrequently bite and sometimes even sink boats. Only five of the 108 authenticated unprovoked shark bite incidents reported from the Pacific Coast during the 20th century involved kayakers.[143] In a few cases they have bitten boats up to 10 m (33 ft) in length. They have bumped or knocked people overboard, usually biting the boat from the stern. In one case in 1936, a large shark leapt completely into the South African fishing boat Lucky Jim, knocking a crewman into the sea. Tricas and McCosker's underwater observations suggest that sharks are attracted to boats by the electrical fields they generate, which are picked up by the ampullae of Lorenzini and confuse the shark about whether or not wounded prey might be nearby.[144]
99
+
100
+ Prior to August 1981, no great white shark in captivity lived longer than 11 days. In August 1981, a great white survived for 16 days at SeaWorld San Diego before being released.[145] The idea of containing a live great white at SeaWorld Orlando was used in the 1983 film Jaws 3-D.
101
+
102
+ Monterey Bay Aquarium first attempted to display a great white in 1984, but the shark died after 11 days because it did not eat.[146] In July 2003, Monterey researchers captured a small female and kept it in a large netted pen near Malibu for five days. They had the rare success of getting the shark to feed in captivity before its release.[147] Not until September 2004 was the aquarium able to place a great white on long-term exhibit. A young female, which was caught off the coast of Ventura, was kept in the aquarium's 3.8 million l (1 million US gal) Outer Bay exhibit for 198 days before she was released in March 2005. She was tracked for 30 days after release.[148] On the evening of 31 August 2006, the aquarium introduced a juvenile male caught outside Santa Monica Bay.[149] His first meal as a captive was a large salmon steak on 8 September 2006, and as of that date, he was estimated to be 1.72 m (68 in) in length and to weigh approximately 47 kg (104 lb). He was released on 16 January 2007, after 137 days in captivity.
103
+
104
+ Monterey Bay Aquarium housed a third great white, a juvenile male, for 162 days between 27 August 2007 and 5 February 2008. On arrival, he was 1.4 m (4.6 ft) long and weighed 30.6 kg (67 lb). He grew to 1.8 m (5.9 ft) and 64 kg (141 lb) before release. A juvenile female came to the Outer Bay Exhibit on 27 August 2008. While she did swim well, the shark fed only one time during her stay and was tagged and released on 7 September 2008. Another juvenile female was captured near Malibu on 12 August 2009, introduced to the Outer Bay exhibit on 26 August 2009, and was successfully released into the wild on 4 November 2009.[150] The Monterey Bay Aquarium introduced a 1.4-m-long male into their redesigned "Open Sea" exhibit on 31 August 2011. He was exhibited for 55 days and released into the wild on 25 October the same year. However, data from an attached electronic tag showed that the shark died shortly after release; the cause of death is not known.[151][152][153]
105
+
106
+ The Monterey Bay Aquarium does not plan to exhibit any more great whites, as the main purpose of containing them was scientific. Because data from captive great whites are no longer needed, the institution has instead shifted its focus to studying wild sharks.[154]
107
+
108
+ One of the largest adult great whites ever exhibited was at Japan's Okinawa Churaumi Aquarium in 2016, where a 3.5 m (11 ft) male was exhibited for three days before dying.[155][156] Probably the most famous captive was a 2.4 m (7.9 ft) female named Sandy, which in August 1980 became the only great white to be housed at the California Academy of Sciences' Steinhart Aquarium in San Francisco, California. She was released because she would not eat and constantly bumped against the walls.[157]
109
+
110
+ Cage diving is most common at sites where great whites are frequent, including the coast of South Africa, the Neptune Islands in South Australia,[158] and Guadalupe Island in Baja California. The popularity of cage diving and swimming with sharks is the focus of a booming tourist industry.[159][160] A common practice is to chum the water with pieces of fish to attract the sharks. These practices may make sharks more accustomed to people in their environment and lead them to associate human activity with food, a potentially dangerous situation. By drawing bait on a wire towards the cage, some tour operators lure the shark to the cage, possibly striking it and exacerbating this problem. Other operators draw the bait away from the cage, causing the shark to swim past the divers.
111
+
112
+ At present, hang baits are illegal off Isla Guadalupe and reputable dive operators do not use them. Operators in South Africa and Australia continue to use hang baits and pinniped decoys.[161] In South Australia, playing rock music recordings underwater, including the AC/DC album Back in Black has also been used experimentally to attract sharks.[162]
113
+
114
+ Companies object to being blamed for shark bite incidents, pointing out that lightning tends to strike humans more often than sharks bite humans.[163] Their position is that further research needs to be done before banning practices such as chumming, which may alter natural behaviour.[164] One compromise is to use chum only in areas where whites actively patrol anyway, well away from human leisure areas. Also, responsible dive operators do not feed sharks. Only sharks that are willing to scavenge follow the chum trail, and if they find no food at the end, they soon swim off and do not associate chum with a meal. It has been suggested that government licensing strategies may help enforce responsible tourism practices.[161]
115
+
116
+ The shark tourist industry has some financial leverage in conserving this animal. A single set of great white jaws can fetch up to £20,000. That is a fraction of the tourism value of a live shark; tourism is a more sustainable economic activity than shark fishing. For example, the dive industry in Gansbaai, South Africa consists of six boat operators with each boat guiding 30 people each day. With fees between £50 and £150 per person, a single live shark that visits each boat can create anywhere between £9,000 and £27,000 of revenue daily.[citation needed]
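+ The daily revenue range quoted above follows directly from the stated figures; as a minimal illustrative sketch in Python (using only the six boats, 30 passengers per boat, and £50–£150 fees given in the text, with no other assumptions):
+
+ # Illustrative arithmetic only: the boat, passenger, and fee figures below
+ # are taken from the text above; nothing else is assumed.
+ boats = 6
+ people_per_boat = 30
+ fee_low_gbp, fee_high_gbp = 50, 150  # per-person fee range, GBP
+ daily_low = boats * people_per_boat * fee_low_gbp    # 6 * 30 * 50 = 9,000 GBP
+ daily_high = boats * people_per_boat * fee_high_gbp  # 6 * 30 * 150 = 27,000 GBP
+ print(f"Daily revenue range: {daily_low:,} to {daily_high:,} GBP")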
117
+
118
+ Putting chum in the water
119
+
120
+ A great white shark approaches divers in a cage off Dyer Island, Western Cape, South Africa
121
+
122
+ A great white shark approaches a cage
123
+
124
+ Tourists in a cage near Gansbaai
125
+
126
+ It is unclear how much a concurrent increase in fishing for great white sharks has contributed to the decline of great white shark populations from the 1970s to the present. No accurate global population numbers are available, but the great white shark is now considered vulnerable.[2] Sharks taken during the long interval between birth and sexual maturity never reproduce, making population recovery and growth difficult.
127
+
128
+ The IUCN notes that very little is known about the actual status of the great white shark, but as it appears uncommon compared to other widely distributed species, it is considered vulnerable.[2] It is included in Appendix II of CITES,[15] meaning that international trade in the species requires a permit.[165] As of March 2010, it has also been included in Annex I of the CMS Migratory Sharks MoU, which strives for increased international understanding and coordination for the protection of certain migratory sharks.[166] A February 2010 study by Barbara Block of Stanford University estimated the world population of great white sharks to be lower than 3,500 individuals, making the species more vulnerable to extinction than the tiger, whose population is in the same range.[167] According to another study from 2014 by George H. Burgess, Florida Museum of Natural History, University of Florida, there are about 2,000 great white sharks near the California coast, roughly ten times the previous estimate of 219 by Barbara Block.[168][169]
129
+
130
+ Fishermen target many sharks for their jaws, teeth, and fins, and as game fish in general. The great white shark, however, is rarely an object of commercial fishing, although its flesh is considered valuable. If incidentally captured (as happens, for example, in some tonnare, or tuna traps, in the Mediterranean), it is misleadingly sold as smooth-hound shark.[170]
131
+
132
+ The great white shark was declared vulnerable by the Australian Government in 1999 because of significant population decline and is currently protected under the Environment Protection and Biodiversity Conservation (EPBC) Act.[171] The causes of decline prior to protection included mortality from sport fishing harvests as well as being caught in beach protection netting.[172]
133
+
134
+ The national conservation status of the great white shark is reflected by all Australian states under their respective laws, granting the species full protection throughout Australia regardless of jurisdiction.[171] Many states had prohibited the killing or possession of great white sharks prior to national legislation coming into effect. The great white shark is further listed as threatened in Victoria under the Flora and Fauna Guarantee Act, and as rare or likely to become extinct under Schedule 5 of the Wildlife Conservation Act in Western Australia.[171]
135
+
136
+ In 2002, the Australian government created the White Shark Recovery Plan, implementing government-mandated conservation research and monitoring for conservation in addition to federal protection and stronger regulation of shark-related trade and tourism activities.[172] An updated recovery plan was published in 2013 to review progress, research findings, and to implement further conservation actions.[16] A study in 2012 revealed that Australia's white shark population was separated by Bass Strait into genetically distinct eastern and western populations, indicating a need for the development of regional conservation strategies.[173]
137
+
138
+ Presently, human-caused shark mortality is continuing, primarily from accidental and illegal catching in commercial and recreational fishing as well as from being caught in beach protection netting, and the populations of great white shark in Australia are yet to recover.[16]
139
+
140
+ In spite of official protections in Australia, great white sharks continue to be killed in state "shark control" programs within Australia. For example, the government of Queensland has a "shark control" program (shark culling) which kills great white sharks (as well as other marine life) using shark nets and drum lines with baited hooks.[174][131] In Queensland, great white sharks that are found alive on the baited hooks are shot.[135] The government of New South Wales also kills great white sharks in its "shark control" program.[131] Partly because of these programs, shark numbers in eastern Australia have decreased.[134]
141
+
142
+ The Australasian population of great white sharks is believed to be in excess of 8,000–10,000 individuals, according to genetic research studies done by CSIRO, with an adult population estimated at around 2,210 individuals across Eastern and Western Australia. The annual survival rate for juveniles in these two separate populations was estimated in the same study to be close to 73 percent, while adult sharks had a 93 percent annual survival rate. Whether mortality rates in great white sharks have declined, or the population has increased, as a result of the protection of this species in Australian waters is as yet unknown, owing to the species' slow growth rates.[175]
143
+
144
+ As of April 2007, great white sharks were fully protected within 370 km (230 mi) of New Zealand and additionally from fishing by New Zealand-flagged boats outside this range. The maximum penalty is a $250,000 fine and up to six months in prison.[176] In June 2018 the New Zealand Department of Conservation classified the great white shark under the New Zealand Threat Classification System as "Nationally Endangered". The species meets the criteria for this classification as there exists a moderate, stable population of between 1,000 and 5,000 mature individuals. This classification has the qualifiers "Data Poor" and "Threatened Overseas".[177]
145
+
146
+ In 2013, great white sharks were added to California's Endangered Species Act. From data collected, the population of great whites in the North Pacific was estimated to be fewer than 340 individuals. Research also reveals these sharks are genetically distinct from other members of their species elsewhere in Africa, Australia, and the east coast of North America, having been isolated from other populations.[178]
147
+
148
+ A 2014 study estimated the population of great white sharks along the California coastline to be approximately 2,400.[179][180]
149
+
150
+ In 2015 Massachusetts banned catching, cage diving, feeding, towing decoys, or baiting and chumming for its significant and highly predictable migratory great white population without an appropriate research permit. The goal of these restrictions is to protect both the sharks and public health.[181]
151
+
en/2265.html.txt ADDED
@@ -0,0 +1,37 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ The Western Schism, also called the Papal Schism, the Great Occidental Schism and the Schism of 1378 (Latin: Magnum schisma occidentale, Ecclesiae occidentalis schisma), was a split within the Catholic Church lasting from 1378 to 1417[1] in which two men (by 1410, three) simultaneously claimed to be the true pope, and each excommunicated the other. Driven by politics rather than any theological disagreement, the schism was ended by the Council of Constance (1414–1418). For a time these rival claims to the papal throne damaged the reputation of the office.[2]
2
+
3
+ The affair is sometimes referred to as the Great Schism, although this term is also used for the East–West Schism of 1054 between the Western Churches answering to the See of Rome and the Greek Orthodox Churches of the East.
4
+
5
+ The schism in the Western Roman Church resulted from the return of the papacy to Rome by Gregory XI on January 17, 1377.[3] The Avignon Papacy had developed a reputation for corruption that estranged major parts of Western Christendom. This reputation can be attributed to perceptions of predominant French influence, and to the papal curia's efforts to extend its powers of patronage and increase its revenues.[citation needed]
6
+
7
+ Pope Gregory announced his intention to return to Avignon just after the Easter celebrations of 1378.[4] This was at the entreaty of his relatives, his friends, and nearly everyone in his retinue. After Pope Gregory XI died in the Vatican palace on 27 March 1378,[5] the Romans put into operation a plan to ensure the election of a Roman pope. The pope and his Curia were back in Rome after seventy years in Avignon, and the Romans were prepared to do everything in their power to keep them there. They intended to use intimidation and violence (impressio et metus) as their weapons.[6] On April 8, 1378, when no viable Roman candidate presented himself, the cardinals elected a Neapolitan: Urban VI, born Bartolomeo Prignano, the archbishop of Bari. Urban had been a respected administrator in the papal chancery at Avignon, but as pope he proved suspicious, reformist, and prone to violent outbursts of temper.[7] Many of the cardinals who had elected him soon regretted their decision: the majority removed themselves from Rome to Anagni, where, even though Urban was still reigning, they elected Robert of Geneva as a rival pope on 20 September of the same year, claiming that the election of Urban was invalid because it had been done for fear of the rioting crowds.[8] Elected at Fondi by the French cardinals[9][10] but unable to maintain himself in Italy, Robert took the name Clement VII and reestablished a papal court in Avignon, where he became dependent on the French court.[11] Clement had the immediate support of Queen Joanna I of Naples and of several of the Italian barons. Charles V of France, who seems to have been sounded out beforehand on the choice of the Roman pontiff, soon became his warmest protector. Clement eventually succeeded in winning to his cause Castile, Aragon, Navarre, a great part of the Latin East, and Flanders. Scotland supported Clement because England supported Urban.[12]
8
+
9
+ The pair of elections threw the Church into turmoil. There had been rival antipope claimants to the papacy before, but most of them had been appointed by various rival factions; in this case, a single group of leaders of the Church had created both the pope and the antipope.[13]
10
+
11
+ The conflicts quickly escalated from a church problem to a diplomatic crisis that divided Europe, as secular leaders had to choose which claimant they would recognize.
12
+
13
+ In the Iberian Peninsula there were the Fernandine Wars (Guerras fernandinas) and the 1383–1385 Crisis in Portugal, during which dynastic opponents supported rival claimants to the papal office.
14
+
15
+ Sustained by such national and factional rivalries throughout Catholic Christianity, the schism continued after the deaths of both Urban VI in 1389 and Clement VII in 1394. Boniface IX, who was crowned at Rome in 1389, and Benedict XIII, who reigned in Avignon from 1394, maintained their rival courts. When Pope Boniface died in 1404, the eight cardinals of the Roman conclave offered to refrain from electing a new pope if Benedict would resign; but when Benedict's legates refused on his behalf, the Roman party then proceeded to elect Pope Innocent VII.
16
+
17
+ In the intense partisanship, characteristic of the Middle Ages, the schism engendered a fanatical hatred noted by Johan Huizinga:[14] when the town of Bruges went over to the "obedience" of Avignon, a great number of people left to follow their trade in a city of Urbanist allegiance; in the 1382 Battle of Roosebeke, the oriflamme, which might only be unfurled in a holy cause, was taken up against the Flemings, because they were Urbanists and thus viewed by the French as schismatics.[citation needed]
18
+
19
+ Efforts were made to end the Schism through force or diplomacy. The French crown even tried to coerce Benedict XIII, whom it supported, into resigning. None of these remedies worked. The suggestion that a church council should resolve the Schism, first made in 1378, was not adopted at first, because canon law required that a pope call a council.[citation needed] Eventually theologians like Pierre d'Ailly and Jean Gerson, as well as canon lawyers like Francesco Zabarella, adopted arguments that equity permitted the Church to act for its own welfare in defiance of the letter of the law.
20
+
21
+ Eventually the cardinals of both factions secured an agreement that Benedict and Pope Gregory XII (successor to Innocent VII) would meet at Savona. They balked at the last moment, and both groups of cardinals abandoned their preferred leaders. A church council was held at Pisa in 1409 under the auspices of the cardinals to try to resolve the dispute. At the fifteenth session, on 5 June 1409, the Council of Pisa attempted to depose both pope and antipope as schismatical, heretical, perjured and scandalous,[15] but it then added to the problem by electing a second antipope, Alexander V. He reigned briefly from June 26, 1409, to his death in 1410, when he was succeeded by antipope John XXIII, who won some but not universal support.
22
+
23
+ Finally, a council was convened at Constance in 1414 by the Pisan antipope John XXIII to resolve the issue. This was endorsed by Pope Gregory XII, thus ensuring the legitimacy of any election. The council, advised by the theologian Jean Gerson, secured the resignations of John XXIII and Pope Gregory XII, who resigned in 1415, while excommunicating the antipope Benedict XIII, who refused to step down. The council elected Pope Martin V in 1417, essentially ending the schism. Nonetheless, the Crown of Aragon did not recognize Pope Martin V and continued to recognize Benedict XIII. Archbishops loyal to Benedict XIII subsequently elected Antipope Benedict XIV (Bernard Garnier), and three followers simultaneously elected Antipope Clement VIII, but the Western Schism was by then practically over. Clement VIII resigned in 1429 and apparently recognized Martin V.
24
+
25
+ The line of Roman popes is now recognized as the legitimate line, but confusion on this point continued until the 20th century. Pope Pius II (died 1464) decreed that no appeal could be made from pope to council, to avoid any future attempts to undo a papal election by anyone but the elected pope. No such crisis has arisen since the 15th century, and so there has been no need to revisit this decision. The alternate papal claimants have become known in history as antipopes. The Avignon popes were dismissed by Rome early on, but the Pisan popes were included in the Annuario Pontificio as popes until the mid-20th century. Thus the Borgia pope Alexander VI took his regnal name in sequence after the Pisan Alexander V.
26
+
27
+ In 1942, the Annuario listed the last three popes of the schism as Gregory XII (1406–1409), Alexander V (1409–1410), and John XXIII (1410–1415).[16] However, the Western Schism was reinterpreted when Pope John XXIII (1958–1963) chose to reuse the ordinal XXIII, citing "twenty-two Johns of indisputable legitimacy."[17] This is reflected in modern editions of the Annuario Pontificio, which extend Gregory XII's reign to 1415. The Pisan popes Alexander V and John XXIII are now considered to be antipopes.
28
+
29
+ Gregory XII's resignation (in 1415) was the last time a pope resigned until Benedict XVI in 2013.
30
+
31
+ After its resolution, the Western Schism still affected the Catholic Church for years to come. One of the most significant effects was the emergence of the theory called conciliarism, founded on the success of the Council of Constance, which effectively ended the conflict. This new reform movement held that a general council is superior to the pope, on the strength of councils' ability to settle disputes even in the early church, as when Pope Honorius was condemned in 681 by the Third Council of Constantinople.[18] Theorists such as Jean Gerson argued that the priests and the church itself are the sources of papal power, and that the church should therefore be able to correct, punish, and, if necessary, depose a pope.[19] For years the conciliarists challenged the authority of the pope, and they became more relevant after the Council of Florence (1439–1445) became instrumental in achieving ecclesial union between the Catholic Church and the churches of the East.[20]
32
+
33
+ There was also a marked decline in morality and discipline within the church. Scholars note that although the Western Schism did not directly cause this phenomenon, it was a gradual development rooted in the conflict, effectively eroding the church's authority and its capacity to proclaim the gospel.[21] This was further aggravated by the dissension caused by the Protestant Reformation.
34
+
35
+ According to Broderick, in 1987:
36
+
37
+ Doubt still shrouds the validity of the three rival lines of pontiffs during the four decades subsequent to the still disputed papal election of 1378. This makes suspect the credentials of the cardinals created by the Roman, Avignon, and Pisan claimants to the Apostolic See. Unity was finally restored without a definitive solution to the question; for the Council of Constance succeeded in terminating the Western Schism, not by declaring which of the three claimants was the rightful one, but by eliminating all of them by forcing their abdication or deposition, and then setting up a novel arrangement for choosing a new pope acceptable to all sides. To this day the Church has never made any official, authoritative pronouncement about the papal lines of succession for this confusing period; nor has Martin V or any of his successors. Modern scholars are not agreed in their solutions, although they tend to favor the Roman line.[22]
en/2267.html.txt ADDED
@@ -0,0 +1 @@
 
 
1
+ Great Schism may refer to the East–West Schism of 1054 or the Western Schism of 1378–1417.
en/2268.html.txt ADDED
@@ -0,0 +1,169 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Coordinates: 45°N 84°W
2
+
3
+ The Great Lakes, or the Great Lakes of North America, are a series of interconnected freshwater lakes in the upper mid-east region of North America that connect to the Atlantic Ocean through the Saint Lawrence River. In general, they are on or near the Canada–United States border. They are lakes Superior, Michigan, Huron, Erie, and Ontario. Hydrologically, there are only four lakes, because lakes Michigan and Huron join at the Straits of Mackinac. The lakes form the basis for the Great Lakes Waterway.
4
+
5
+ The Great Lakes are the largest group of freshwater lakes on Earth by total area, and second-largest by total volume, containing 21% of the world's surface fresh water by volume.[1][2][3] The total surface is 94,250 square miles (244,106 km2), and the total volume (measured at the low water datum) is 5,439 cubic miles (22,671 km3),[4] slightly less than the volume of Lake Baikal (5,666 cu mi or 23,615 km3, 22–23% of the world's surface fresh water). Due to their sea-like characteristics (rolling waves, sustained winds, strong currents, great depths, and distant horizons) the five Great Lakes have also long been referred to as inland seas.[5] Lake Superior is the second-largest lake in the world by area, and the largest freshwater lake by surface area. Lake Michigan is the largest lake that is entirely within one country.[6][7][8][9]
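+ The cubic-mile and cubic-kilometre volumes quoted above can be cross-checked with a simple unit conversion; a minimal sketch in Python (the only assumption beyond the quoted numbers is the standard definition 1 mi = 1.609344 km):
+
+ # Cross-check of the volume figures quoted above; only the exact
+ # mile-to-kilometre definition (1 mi = 1.609344 km) is assumed.
+ CU_MI_TO_CU_KM = 1.609344 ** 3  # about 4.168 cubic km per cubic mile
+ great_lakes_cu_km = 5439 * CU_MI_TO_CU_KM  # about 22,671 km^3, as quoted
+ baikal_cu_km = 5666 * CU_MI_TO_CU_KM       # about 23,617 km^3, close to the quoted 23,615
+ print(f"Great Lakes: {great_lakes_cu_km:,.0f} km^3; Lake Baikal: {baikal_cu_km:,.0f} km^3")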
6
+
7
+ The Great Lakes began to form at the end of the last glacial period around 14,000 years ago, as retreating ice sheets exposed the basins they had carved into the land which then filled with meltwater.[10] The lakes have been a major source for transportation, migration, trade, and fishing, serving as a habitat to many aquatic species in a region with much biodiversity.
8
+
9
+ The surrounding region is called the Great Lakes region, which includes the Great Lakes Megalopolis.[11]
10
+
11
+ Though the five lakes lie in separate basins, they form a single, naturally interconnected body of fresh water, within the Great Lakes Basin. As a chain of lakes and rivers they connect the east-central interior of North America to the Atlantic Ocean. From the interior to the outlet at the Saint Lawrence River, water flows from Superior to Huron and Michigan, southward to Erie, and finally northward to Lake Ontario. The lakes drain a large watershed via many rivers, and are studded with approximately 35,000 islands.[12] There are also several thousand smaller lakes, often called "inland lakes", within the basin.[13] The surface area of the five primary lakes combined is roughly equal to the size of the United Kingdom, while the surface area of the entire basin (the lakes and the land they drain) is about the size of the UK and France combined.[14] Lake Michigan is the only one of the Great Lakes that is entirely within the United States; the others form a water boundary between the United States and Canada. The lakes are divided among the jurisdictions of the Canadian province of Ontario and the U.S. states of Michigan, Wisconsin, Minnesota, Illinois, Indiana, Ohio, Pennsylvania, and New York. Both the province of Ontario and the state of Michigan include in their boundaries portions of four of the lakes: The province of Ontario does not border Lake Michigan, and the state of Michigan does not border Lake Ontario. New York and Wisconsin's jurisdictions extend into two lakes, and each of the remaining states into one of the lakes.
12
+
13
+ As the surfaces of Lakes Superior, Huron, Michigan, and Erie are all approximately the same elevation above sea level, while Lake Ontario is significantly lower, and because the Niagara Escarpment precludes all natural navigation, the four upper lakes are commonly called the "upper great lakes". This designation is not universal. Those living on the shore of Lake Superior often refer to all the other lakes as "the lower lakes", because they are farther south. Sailors of bulk freighters transferring cargoes from Lake Superior and northern Lake Michigan and Lake Huron to ports on Lake Erie or Ontario commonly refer to the latter as the lower lakes and Lakes Michigan, Huron, and Superior as the upper lakes. This corresponds to thinking of lakes Erie and Ontario as "down south" and the others as "up north". Vessels sailing north on Lake Michigan are considered "upbound" even though they are sailing toward its effluent current.[24]
+
+ Lakes Huron and Michigan are sometimes considered a single lake, called Lake Michigan–Huron, because they are one hydrological body of water connected by the Straits of Mackinac.[25] The straits are five miles (8 km) wide[14] and 120 feet (37 m) deep; the water levels rise and fall together,[26] and the flow between Michigan and Huron frequently reverses direction.
+
+ Dispersed throughout the Great Lakes are approximately 35,000 islands.[12] The largest among them is Manitoulin Island in Lake Huron, the largest island in any inland body of water in the world.[34] The second-largest island is Isle Royale in Lake Superior.[35] Both of these islands are large enough to contain multiple lakes themselves—for instance, Manitoulin Island's Lake Manitou is the world's largest lake on a freshwater island.[36] Some of these lakes even have their own islands, like Treasure Island in Lake Mindemoya on Manitoulin Island.
+
+ The Great Lakes also have several peninsulas between them, including the Door Peninsula, the Peninsulas of Michigan, and the Ontario Peninsula. Some of these peninsulas even contain smaller peninsulas, such as the Keweenaw Peninsula, the Thumb Peninsula, the Bruce Peninsula, and the Niagara Peninsula. Population centers on the peninsulas include Grand Rapids and Detroit in Michigan along with London, Hamilton, Brantford, and Toronto in Ontario.
+
+ Although the Saint Lawrence Seaway and Great Lakes Waterway make the Great Lakes accessible to ocean-going vessels,[37] shifts in shipping to wider ocean-going container ships—which do not fit through the locks on these routes—have limited container shipping on the lakes. Most Great Lakes trade is of bulk material, and bulk freighters of Seawaymax-size or less can move throughout the entire lakes and out to the Atlantic.[38] Larger ships are confined to working in the lakes themselves. Only barges can access the Illinois Waterway system providing access to the Gulf of Mexico via the Mississippi River. Despite their vast size, large sections of the Great Lakes freeze over in winter, interrupting most shipping from January to March. Some icebreakers ply the lakes, keeping the shipping lanes open through other periods of ice on the lakes.
+
+ The Great Lakes are also connected by the Chicago Sanitary and Ship Canal to the Gulf of Mexico by way of the Illinois River (from the Chicago River) and the Mississippi River. An alternate track is via the Illinois River (from Chicago), to the Mississippi, up the Ohio, and then through the Tennessee–Tombigbee Waterway (a combination of a series of rivers and lakes and canals), to Mobile Bay and the Gulf of Mexico. Commercial tug-and-barge traffic on these waterways is heavy.[39]
+
+ Pleasure boats can also enter or exit the Great Lakes by way of the Erie Canal and Hudson River in New York. The Erie Canal connects to the Great Lakes at the east end of Lake Erie (at Buffalo, New York) and at the south side of Lake Ontario (at Oswego, New York).
+
+ In 2009, the lakes contained 84% of the surface freshwater of North America;[40] if the water were evenly distributed over the entire continent's land area, it would reach a depth of 5 feet (1.5 meters).[14] Most of the water in the lakes is what was left behind by the melting glaciers when the lakes took their present form. Annually, only about 1% is "new" water originating from rivers, precipitation, and groundwater springs that drain into the lakes. Historically, evaporation has been balanced by drainage, keeping the level of the lakes constant.[14]
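+
+ As a rough back-of-envelope illustration of that 1% figure (my arithmetic, not the cited atlas), a renewal rate of about 1% per year implies a mean residence time on the order of a century:
+
+ # Minimal sketch: the reciprocal of the annual renewal fraction gives the
+ # mean residence time of water in the system.
+ annual_renewal_fraction = 0.01   # "only about 1% is 'new' water" (from text)
+ mean_residence_years = 1 / annual_renewal_fraction
+ print(f"mean residence time ~ {mean_residence_years:.0f} years")  # ~100 years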
+
+ Intensive human population growth only began in the region in the 20th century and continues today.[14] At least two human water use activities have been identified as having the potential to affect the lakes' levels: diversion (the transfer of water to other watersheds) and consumption (substantially done today by the use of lake water to power and cool electric generation plants, resulting in evaporation).[41]
+
+ The physical impacts of climate change can be seen in water levels in the Great Lakes over the past century.[42] The United Nations' Intergovernmental Panel on Climate Change predicted in 1997: "the following lake level declines could occur: Lake Superior −0.2 to −0.5 m, Lakes Michigan and Huron −1.0 to −2.5 m, and Lake Erie −0.9 to −1.9 m."[43] In 2009, it was predicted that global warming would decrease water levels.[44] In 2013, record low water levels in the Great Lakes were attributed to climate change.[45]
+
+ The water level of Lake Michigan–Huron had remained fairly constant over the 20th century,[46] but has nevertheless dropped more than 6 feet from the record high in 1986 to the low of 2013.[47] In 2012, National Geographic tied the water level drop to climate change,[48] as did the Natural Resources Defense Council.[49] One newspaper reported that the long-term average level has gone down about 20 inches because of dredging and subsequent erosion in the St. Clair River. Lake Michigan–Huron hit all-time record low levels in 2013; according to the US Army Corps of Engineers, the previous record low had been set in 1964.[47] By April 2015 the water level had recovered to 7 inches (17.5 cm) more than the "long term monthly average".[50]
+
+ The Great Lakes contain 21% of the world's surface fresh water: 5,472 cubic miles (22,810 km3), or 6.0×10^15 (six quadrillion) U.S. gallons, that is 2.3×10^16 liters. This is enough water to cover the 48 contiguous U.S. states to a uniform depth of 9.5 feet (2.9 m). Although the lakes contain a large percentage of the world's fresh water, the Great Lakes supply only a small portion of U.S. drinking water on a national basis.[57]
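+
+ A minimal sketch checking those conversions (the contiguous-US area of roughly 8.1 million km2 is my assumption, not a figure from the text):
+
+ CU_MI_TO_KM3 = 4.168181              # one cubic mile in cubic kilometres
+ volume_km3 = 5_472 * CU_MI_TO_KM3    # ~22,808 km3, matching the quoted 22,810
+ volume_litres = volume_km3 * 1e12    # 1 km3 = 10^12 L -> ~2.3e16 L
+ volume_gallons = volume_litres / 3.78541       # -> ~6.0e15 U.S. gallons
+ depth_m = (volume_km3 * 1e9) / (8.1e6 * 1e6)   # spread over ~8.1e6 km2
+ print(f"{volume_km3:,.0f} km3, {volume_gallons:.1e} gal, depth {depth_m:.1f} m")
+ # depth ~2.8 m (~9.2 ft), consistent with the quoted 9.5 ft given rounding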
+
+ The total surface area of the lakes is approximately 94,250 square miles (244,100 km2)—nearly the same size as the United Kingdom, and larger than the U.S. states of New York, New Jersey, Connecticut, Rhode Island, Massachusetts, Vermont, and New Hampshire combined.[58]
+
+ The Great Lakes coast measures approximately 10,500 miles (16,900 km),[14] but the length of a coastline is impossible to measure exactly and is not a well-defined measure (see Coastline paradox). Of the total 10,500 miles (16,900 km) of shoreline, Canada borders approximately 5,200 miles (8,400 km), while the remaining 5,300 miles (8,500 km) are bordered by the United States. Michigan has the longest Great Lakes shoreline of any U.S. state, bordering roughly 3,288 miles (5,292 km) of shoreline, followed by Wisconsin (820 miles (1,320 km)), New York (473 miles (761 km)), and Ohio (312 miles (502 km)).[59] Traversing the shoreline of all the lakes would cover a distance roughly equivalent to travelling halfway around the world at the equator.[14]
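+
+ The coastline paradox can be made concrete with the classic Koch-curve model of a coast: every time the measuring ruler shrinks to a third of its length, the measured length grows by a factor of 4/3, without bound. A small illustrative sketch (an idealized fractal, not actual Great Lakes data):
+
+ ruler_km = 100.0        # hypothetical starting ruler length
+ length_km = 16_900.0    # start from the quoted shoreline figure in km
+ for _ in range(5):
+     print(f"ruler {ruler_km:8.2f} km -> measured length {length_km:8.0f} km")
+     ruler_km /= 3        # measure with a ruler one third as long ...
+     length_km *= 4 / 3   # ... and a Koch-like coast measures 4/3 longer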
+
+ It has been estimated that the foundational geology that created the conditions shaping the present-day upper Great Lakes was laid from 1.1 to 1.2 billion years ago,[14][60] when two previously fused tectonic plates split apart and created the Midcontinent Rift, which crossed the Great Lakes Tectonic Zone. A valley was formed, providing a basin that eventually became modern-day Lake Superior. When a second fault line, the Saint Lawrence rift, formed approximately 570 million years ago,[14] the basis for Lakes Ontario and Erie was created, along with what would become the Saint Lawrence River.
+
+ The Great Lakes are estimated to have been formed at the end of the last glacial period (the Wisconsin glaciation ended 10,000 to 12,000 years ago), when the Laurentide Ice Sheet receded.[10] The retreat of the ice sheet left behind a large amount of meltwater (see Lake Algonquin, Lake Chicago, Glacial Lake Iroquois, and Champlain Sea) that filled up the basins that the glaciers had carved, thus creating the Great Lakes as we know them today.[61] Because of the uneven nature of glacier erosion, some higher hills became Great Lakes islands. The Niagara Escarpment follows the contour of the Great Lakes between New York and Wisconsin. Land below the glaciers "rebounded" as it was uncovered.[62] Since the glaciers covered some areas longer than others, this glacial rebound occurred at different rates.
+
+ A notable modern phenomenon is the formation of ice volcanoes over the lakes during wintertime. Storm-generated waves carve the lakes' ice sheet and create conical mounds through the eruption of water and slush. The process is only well-documented in the Great Lakes, and has been credited with sparing the southern shorelines from worse rocky erosion.[63]
+
+ The Great Lakes have a humid continental climate, Köppen climate classification Dfa (in southern areas) and Dfb (in northern parts),[64] with varying influences from air masses from other regions, including dry, cold Arctic systems, mild Pacific air masses from the west, and warm, wet tropical systems from the south and the Gulf of Mexico.[65] The lakes themselves have a moderating effect on the climate; they can also increase precipitation totals and produce lake-effect snowfall.[64]
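+
+ For readers unfamiliar with the notation, the Dfa/Dfb split comes down to summer warmth. A minimal sketch of the rule, assuming the common 0 °C cold-month threshold and omitting the precipitation test behind the "f" (no dry season) letter:
+
+ def koppen_d_subtype(monthly_mean_temps_c):
+     """Return 'Dfa' or 'Dfb' for 12 monthly mean temperatures, else None."""
+     coldest, warmest = min(monthly_mean_temps_c), max(monthly_mean_temps_c)
+     if not (coldest < 0 and warmest > 10):   # continental (group D) test
+         return None
+     if warmest >= 22:                        # hot-summer subtype
+         return "Dfa"
+     if sum(t >= 10 for t in monthly_mean_temps_c) >= 4:  # warm-summer subtype
+         return "Dfb"
+     return None
+
+ # Hypothetical, roughly Chicago-like monthly means -> prints 'Dfa'
+ print(koppen_d_subtype([-4, -2, 3, 9, 15, 21, 24, 23, 19, 12, 5, -1]))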
+
+ The Great Lakes can have an effect on regional weather called lake-effect snow, which is sometimes very localized. Even late in winter, the lakes often have no icepack in the middle. The prevailing winds from the west pick up air and moisture from the lake surface, which is slightly warmer than the cold air above it. As the slightly warmer, moist air passes over the colder land surface, the moisture often produces concentrated, heavy snowfall that sets up in bands or "streamers". This is similar to the effect of warmer air dropping snow as it passes over mountain ranges. During freezing weather with high winds, the "snow belts" receive regular snowfall from this localized weather pattern, especially along the eastern shores of the lakes. Snow belts are found in Wisconsin, Michigan, Ohio, Pennsylvania, and New York, United States; and Ontario, Canada.
+
+ The lakes also moderate seasonal temperatures to some degree, but not with as large an influence as do large oceans; they absorb heat and cool the air in summer, then slowly radiate that heat in autumn. They protect against frost during transitional weather, and keep the summertime temperatures cooler than further inland. This effect can be very localized and overridden by offshore wind patterns. This temperature buffering produces areas known as "Fruit Belts", where fruit can be produced that is typically grown much farther south. For instance, Western Michigan has apple and cherry orchards, and vineyards cultivated adjacent to the lake shore as far north as the Grand Traverse Bay and Nottawasaga Bay in central Ontario. The eastern shore of Lake Michigan and the southern shore of Lake Erie have many successful wineries because of the moderating effect, as does the Niagara Peninsula between Lake Erie and Lake Ontario. A similar phenomenon allows wineries to flourish in the Finger Lakes region of New York, as well as in Prince Edward County, Ontario on Lake Ontario's northeast shore. Related to the lake effect is the regular occurrence of fog over medium-sized areas, particularly along the shorelines of the lakes. This is most noticeable along Lake Superior's shores.
+
+ The Great Lakes have been observed to help intensify storms, such as Hurricane Hazel in 1954, and the 2011 Goderich, Ontario tornado, which moved onshore as a tornadic waterspout. In 1996 a rare tropical or subtropical storm was observed forming in Lake Huron, dubbed the 1996 Lake Huron cyclone. Large severe thunderstorms covering wide areas are well known in the Great Lakes during mid-summer; these mesoscale convective complexes (MCCs)[66] can cause damage to wide swaths of forest and shatter glass in city buildings. These storms mainly occur during the night, and the systems sometimes have small embedded tornadoes, but more often straight-line winds accompanied by intense lightning.
+
+ Historically, the Great Lakes, in addition to their lake ecology, were surrounded by various forest ecoregions (except in a relatively small area of southeast Lake Michigan where savanna or prairie occasionally intruded). Logging, urbanization, and agricultural uses have changed that relationship. In the early 21st century, Lake Superior's shores are 91% forested, Lake Huron 68%, Lake Ontario 49%, Lake Michigan 41%, and Lake Erie, where logging and urbanization have been most extensive, 21%. Some of these forests are second or third growth (i.e. they have been logged before, changing their composition). At least 13 wildlife species are documented as having become extinct since the arrival of Europeans, and many more are threatened or endangered.[14] Meanwhile, exotic and invasive species have also been introduced.
+
+ While the organisms living on the bottom of shallow waters are similar to those found in smaller lakes, the deep waters contain organisms found only in deep, cold lakes of the northern latitudes. These include the delicate opossum shrimp (order Mysida), the deepwater scud (a crustacean of the order Amphipoda), two types of copepods, and the deepwater sculpin (a spiny, large-headed fish).[68]
+
+ The Great Lakes are an important source of fish. Early European settlers were astounded by both the variety and quantity of fish; there were 150 different species in the Great Lakes.[14] Throughout history, fish populations were the early indicator of the condition of the Lakes and have remained one of the key indicators even in the current era of sophisticated analyses and measuring instruments. According to the bi-national (U.S. and Canadian) resource book, The Great Lakes: An Environmental Atlas and Resource Book: "The largest Great Lakes fish harvests were recorded in 1889 and 1899 at some 67,000 tonnes (66,000 long tons; 74,000 short tons) [147 million pounds]."[69]
+
+ By 1801, the New York Legislature found it necessary to pass regulations curtailing obstructions to the natural migrations of Atlantic salmon from Lake Erie into their spawning channels. In the early 19th century, the government of Upper Canada found it necessary to introduce similar legislation prohibiting the use of weirs and nets at the mouths of Lake Ontario's tributaries. Other protective legislation was passed, as well, but enforcement remained difficult.[70]
+
+ On both sides of the Canada–United States border, dams and impoundments proliferated, necessitating more regulatory efforts. Concerns by the mid-19th century included obstructions in the rivers which prevented salmon and lake sturgeon from reaching their spawning grounds. The Wisconsin Fisheries Commission noted a reduction of roughly 25% in general fish harvests by 1875. The states have removed dams from rivers where necessary.[71]
+
+ Overfishing has been cited as a possible reason for the decline of various whitefish populations, important because of their culinary desirability and, hence, economic value. Between 1879 and 1899, reported whitefish harvests declined from some 24.3 million pounds (11 million kg) to just over 9 million pounds (4 million kg).[72] By 1900, commercial fishermen on Lake Michigan were hauling in an average of 41 million pounds of fish annually.[73] By 1938, Wisconsin's commercial fishing operations were motorized and mechanized, generating jobs for more than 2,000 workers, and hauling 14 million pounds per year.[73] The population of giant freshwater mussels was eliminated as the mussels were harvested for use as buttons by early Great Lakes entrepreneurs.[72] Since 2000, the invasive quagga mussel has smothered the bottom of Lake Michigan almost from shore to shore, and their numbers are estimated at 900 trillion.[73]
+
+ The influx of parasitic lamprey populations after the development of the Erie Canal and the much later Welland Canal led the two federal governments of the US and Canada to work on joint proposals to control them. By the mid-1950s, the lake trout populations of Lakes Michigan and Huron were reduced, with the lamprey deemed largely to blame. This led to the launch of the bi-national Great Lakes Fishery Commission.
+
+ The Great Lakes: An Environmental Atlas and Resource Book (1972) noted: "Only pockets remain of the once large commercial fishery."[69] But, water quality improvements realized during the 1970s and 1980s, combined with successful salmonid stocking programs, have enabled the growth of a large recreational fishery.[74] The last commercial fisherman left Milwaukee in 2011 because of overfishing and anthropogenic changes to the biosphere.[73]
+
+ Since the 19th century, an estimated 160 new species have found their way into the Great Lakes ecosystem, and many have become invasive; introductions via overseas ships' ballast water and hull fouling are causing severe economic and ecological impacts.[75][76] According to the Inland Seas Education Association, on average a new species enters the Great Lakes every eight months.[76]
+
+ Introductions into the Great Lakes include the zebra mussel, which was first discovered in 1988, and quagga mussel in 1989. The mollusks are efficient filter feeders, competing with native mussels and reducing available food and spawning grounds for fish. In addition, the mussels may be a nuisance to industries by clogging pipes. The U.S. Fish and Wildlife Service estimates that the economic impact of the zebra mussel could be about $5 billion over the next decade.[77]
+
+ The alewife first entered the system west of Lake Ontario via 19th-century canals. By the 1960s, the small silver fish had become a familiar nuisance to beachgoers across Lakes Michigan, Huron, and Erie. Periodic mass dieoffs result in vast numbers of the fish washing up on shore; estimates by various governments have placed the alewife's share of Lake Michigan's biomass in the early 1960s as high as 90%. In the late 1960s, the various state and federal governments began stocking several species of salmonids, including the native lake trout as well as non-native chinook and coho salmon; by the 1980s, alewife populations had dropped drastically.[78] The ruffe, a small percid fish from Eurasia, became the most abundant fish species in Lake Superior's Saint Louis River within five years of its detection in 1986. Its range, which has expanded to Lake Huron, poses a significant threat to the lower lake fishery.[79] Five years after first being observed in the St. Clair River, the round goby can now be found in all of the Great Lakes. The goby is considered undesirable for several reasons: it preys upon bottom-feeding fish, overruns optimal habitat, spawns multiple times a season, and can survive poor water quality conditions.[80]
+
+ Several species of exotic water fleas have accidentally been introduced into the Great Lakes, such as the spiny waterflea, Bythotrephes longimanus, and the fishhook waterflea, Cercopagis pengoi, potentially having an effect on the zooplankton population. Several species of crayfish have also been introduced that may compete with native crayfish populations. More recently an electric barrier has been set up across the Chicago Sanitary and Ship Canal in order to keep several species of invasive Asian carp out of the area. These fast-growing planktivorous fish have heavily colonized the Mississippi and Illinois river systems.[81] The sea lamprey, which has been particularly damaging to the native lake trout population, is another example of a marine invasive species in the Great Lakes.[82] Invasive species, particularly zebra and quagga mussels, may be at least partially responsible for the collapse of the deepwater demersal fish community in Lake Huron,[83] as well as drastic unprecedented changes in the zooplankton community of the lake.[84]
+
+ Scientists understand that the micro-aquatic life of the lakes is abundant, but know very little about some of the most plentiful microbes and their environmental effects in the Great Lakes. Although a drop of lake water may contain 1 million bacteria cells and 10 million viruses, only since 2012 has there been a long-term study of the lakes' micro-organisms. Between 2012 and 2019 more than 160 new species were discovered.[85]
+
+ Native habitats and ecoregions in the Great Lakes region include:
+
+ Plant lists include:
+
+ Logging
+
+ Logging of the extensive forests in the Great Lakes region removed riparian and adjacent tree cover over rivers and streams, which had provided shade, moderating water temperatures in fish spawning grounds. Removal of trees also destabilized the soil, with greater volumes washed into stream beds, causing siltation of gravel beds and more frequent flooding.
+
+ Running cut logs down the tributary rivers into the Great Lakes also dislocated sediments. In 1884, the New York Fish Commission determined that the dumping of sawmill waste (chips and sawdust) had impacted fish populations.[86]
+
+ The first U.S. Clean Water Act, passed by a Congressional override after being vetoed by US President Richard Nixon in 1972, was a key piece of legislation,[87] along with the bi-national Great Lakes Water Quality Agreement signed by Canada and the U.S. A variety of steps taken to process industrial and municipal pollution discharges into the system greatly improved water quality by the 1980s, and Lake Erie in particular is significantly cleaner.[88] Discharge of toxic substances has been sharply reduced. Federal and state regulations control substances like PCBs. The first of 43 "Great Lakes Areas of Concern" to be formally "de-listed" due to successful cleanup was Ontario's Collingwood Harbour in 1994; Ontario's Severn Sound followed in 2003.[89] Presque Isle Bay in Pennsylvania is formally listed as in recovery, as is Ontario's Spanish Harbour. Dozens of other Areas of Concern have received partial cleanups such as the Rouge River (Michigan) and Waukegan Harbor (Illinois).[90]
+
+ Phosphate detergents were historically a major source of the nutrients feeding algae blooms in the Great Lakes, in particular in the warmer and shallower portions of the system such as Lake Erie, Saginaw Bay, Green Bay, and the southernmost portion of Lake Michigan. By the mid-1980s, most jurisdictions bordering the Great Lakes had controlled phosphate detergents,[91] resulting in sharp reductions in the frequency and extent of the blooms.
+
+ Blue-green algae, or Cyanobacteria blooms,[92] have been problematic on Lake Erie since 2011.[93] "Not enough is being done to stop fertilizer and phosphorus from getting into the lake and causing blooms," said Michael McKay, executive director of the Great Lakes Institute for Environmental Research (GLIER) at the University of Windsor. The largest Lake Erie bloom to date occurred in 2015, with a severity index of 10.5, exceeding the 10 recorded in 2011.[94] In early August 2019, satellite images depicted a bloom stretching up to 1,300 square kilometres on Lake Erie, with the heaviest concentration near Toledo, Ohio. "A large bloom does not necessarily mean the cyanobacteria ... will produce toxins," said McKay. Water quality testing was underway in August 2019.[95][94]
+
+ Until 1970, mercury was not listed as a harmful chemical, according to the United States Federal Water Quality Administration. Since then, mercury has become more apparent in water tests. Mercury compounds have been used in paper mills to prevent slime from forming during production, and chemical companies have used mercury to separate chlorine from brine solutions. Studies conducted by the Environmental Protection Agency have shown that when mercury comes in contact with many of the bacteria and compounds in the fresh water, it forms the compound methylmercury, which has a much greater impact on human health than elemental mercury due to a higher propensity for absorption. This form of mercury is not detrimental to a majority of fish types, but is very detrimental to people and other wildlife that consume the fish. Mercury contamination has been linked to health problems such as birth defects in humans and animals, and to the near extinction of eagles in the Great Lakes region.[96]
+
+ The amount of raw sewage dumped into the waters was the primary focus of both the first Great Lakes Water Quality Agreement and federal laws passed in both countries during the 1970s. Implementation of secondary treatment of municipal sewage by major cities greatly reduced the routine discharge of untreated sewage during the 1970s and 1980s.[97] The International Joint Commission in 2009 summarized the change: "Since the early 1970s, the level of treatment to reduce pollution from waste water discharges to the Great Lakes has improved considerably. This is a result of significant expenditures to date on both infrastructure and technology, and robust regulatory systems that have proven to be, on the whole, quite effective."[98] The commission reported that all urban sewage treatment systems on the U.S. side of the lakes had implemented secondary treatment, as had all on the Canadian side except for five small systems.
+
+ Though contrary to federal laws in both countries, those treatment system upgrades have not yet eliminated combined sewer overflow (CSO) events. Such events occur when older sewerage systems, which combine storm water with sewage into single sewers heading to the treatment plant, are temporarily overwhelmed by heavy rainstorms. Local sewage treatment authorities then must release untreated effluent, a mix of rainwater and sewage, into local water bodies. While enormous public investments such as the Deep Tunnel projects in Chicago and Milwaukee have greatly reduced the frequency and volume of these events, they have not been eliminated. The number of such overflow events in Ontario, for example, is flat according to the International Joint Commission.[98] Reports about this issue on the U.S. side highlight five large municipal systems (those of Detroit, Cleveland, Buffalo, Milwaukee and Gary) as being the largest current periodic sources of untreated discharges into the Great Lakes.[99]
+
+ Algae such as diatoms, along with other phytoplankton, are photosynthetic primary producers supporting the food web of the Great Lakes,[100] and have been affected by global warming.[101] Changes in the size or in the function of the primary producers may have a direct or an indirect impact on the food web. Photosynthesis carried out by diatoms comprises about one fifth of the total photosynthesis. By taking CO2 out of the water to photosynthesize, diatoms help to stabilize the pH of the water; otherwise the CO2 would react with water, making it more acidic.
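+
+ The acidification step in that argument is standard carbonate chemistry (textbook material, not taken from the cited studies): dissolved CO2 forms carbonic acid, which dissociates and releases hydrogen ions:
+
+ \[
+ \mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^- \rightleftharpoons 2\,H^+ + CO_3^{2-}}
+ \]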
+
+ Diatoms acquire inorganic carbon through passive diffusion of CO2 and HCO3−, and they also use carbonic anhydrase-mediated active transport to speed up this process.[102] Large diatoms require more carbon uptake than smaller diatoms.[103] There is a positive correlation between the surface area and the chlorophyll concentration of diatom cells.[104]
+
+ Several Native American populations (Paleo-Indians) inhabited the region around 10,000 BC, after the end of the Wisconsin glaciation.[105][106] The peoples of the Great Lakes traded with the Hopewell culture from around 1000 AD, as copper nuggets have been extracted from the region, and fashioned into ornaments and weapons in the mounds of Southern Ohio. The brigantine Le Griffon, which was commissioned by René-Robert Cavelier, Sieur de La Salle, was built at Cayuga Creek, near the southern end of the Niagara River, and became the first known sailing ship to travel the upper Great Lakes on August 7, 1679.[107]
+
+ The Rush–Bagot Treaty, signed in 1818 after the War of 1812, and the later Treaty of Washington eventually led to a complete disarmament of naval vessels on the Great Lakes. Nonetheless, both nations maintain coast guard vessels in the Great Lakes.
+
+ During settlement, the Great Lakes and their rivers were the only practical means of moving people and freight. Barges from middle North America were able to reach the Atlantic Ocean from the Great Lakes when the Erie Canal opened in 1825, followed by the Welland Canal in 1829.[108] By 1848, with the opening of the Illinois and Michigan Canal at Chicago, direct access to the Mississippi River was possible from the lakes.[109] With the Erie and the Illinois and Michigan canals, an all-inland water route was provided between New York City and New Orleans.
+
+ The main business of many of the passenger lines in the 19th century was transporting immigrants. Many of the larger cities owe their existence to their position on the lakes as a freight destination as well as for being a magnet for immigrants. After railroads and surface roads developed, the freight and passenger businesses dwindled and, except for ferries and a few foreign cruise ships, have now vanished.
+ The immigration routes still have an effect today. Immigrants often formed their own communities and some areas have a pronounced ethnicity, such as Dutch, German, Polish, Finnish, and many others. Since many immigrants settled for a time in New England before moving westward, many areas on the U.S. side of the Great Lakes also have a New England feel, especially in home styles and accent.
+
+ Since general freight is today transported by railroads and trucks, domestic ships mostly move bulk cargoes, such as iron ore, coal and limestone for the steel industry. The domestic bulk trade developed because of the nearby mines: it was more economical to transport the ingredients for steel to centralized plants than to try to make steel on the spot. Grain exports are also a major cargo on the lakes.
+
+ In the 19th and early 20th centuries, iron and other ores such as copper were shipped south (on downbound ships), and supplies, food, and coal were shipped north (upbound). Because of the location of the coal fields in Pennsylvania and West Virginia, and the general northeast track of the Appalachian Mountains, railroads naturally developed shipping routes that went due north to ports such as Erie, Pennsylvania and Ashtabula, Ohio.
+
+ Because the lake maritime community largely developed independently, it has some distinctive vocabulary. Ships, no matter the size, are called boats. When the sailing ships gave way to steamships, they were called steamboats—the same term used on the Mississippi. The ships also have a distinctive design (see Lake freighter). Ships that primarily trade on the lakes are known as lakers. Foreign boats are known as salties. Since about 1950, one of the more common sights on the lakes has been the 1,000‑by‑105-foot (305-by-32-meter), 78,850-long-ton (80,120-metric-ton) self-unloader. This is a laker with a conveyor belt system that can unload itself by swinging a crane over the side.[110] Today, the Great Lakes fleet is much smaller in numbers than it once was because of the increased use of overland freight, and a few larger ships replacing many small ones.
+
+ During World War II, the risk of submarine attacks against coastal training facilities motivated the United States Navy to operate two aircraft carriers on the Great Lakes, USS Sable (IX-81) and USS Wolverine (IX-64). Both served as training ships to qualify naval aviators in carrier landing and takeoff.[111] Lake Champlain briefly became the sixth Great Lake of the United States on March 6, 1998, when President Clinton signed Senate Bill 927. This bill, which reauthorized the National Sea Grant Program, contained a line declaring Lake Champlain to be a Great Lake. Not coincidentally, this status allows neighboring states to apply for additional federal research and education funds allocated to these national resources.[112] Following a small uproar, the Senate voted to revoke the designation on March 24 (although New York and Vermont universities would continue to receive funds to monitor and study the lake).[113]
+
+ In the early years of the 21st century, water levels in the Great Lakes were a concern.[114] Researchers at the Mowat Centre said that low levels could cost $19 billion by 2050.[115] This was followed by record high levels in all lakes except Ontario in the late 2010s and 2020.[116]
+
+ Alan B. McCullough has written that the fishing industry of the Great Lakes got its start "on the American side of Lake Ontario in Chaumont Bay, near the Maumee River on Lake Erie, and on the Detroit River at about the time of the War of 1812." Although the region was sparsely populated until the 1830s, so there was not much local demand and transporting fish was still prohibitively costly, there were economic and infrastructure developments that were promising for the future of the fishing industry going into the 1830s, particularly the 1825 opening of the Erie Canal and of the Welland Canal a few years later. The fishing industry expanded particularly in the waters associated with the fur trade that connect Lake Erie and Lake Huron. In fact, two major suppliers of fish in the 1830s were the fur trading companies Hudson's Bay Company and the American Fur Company.[117]
+
+ The catch from these waters would be sent to the growing market for salted fish in Detroit, where merchants involved in the fur trade had already gained some experience handling salted fish. One such merchant was John P. Clark, a shipbuilder and merchant who began selling fish in the area of Manitowoc, Wisconsin where whitefish was abundant. Another operation cropped up in Georgian Bay, Canadian waters plentiful with trout as well as whitefish. In 1831, Alexander MacGregor from Goderich, Ontario found whitefish and herring in unusually abundant supply around the Fishing Islands. A contemporary account by Methodist missionary John Evans describes the fish as resembling a "bright cloud moving rapidly through the water".[117]
+
+ Except when the water is frozen during winter, more than 100 lake freighters operate continuously on the Great Lakes,[118] which remain a major water transport corridor for bulk goods. The Great Lakes Waterway connects all the lakes; the smaller Saint Lawrence Seaway connects the lakes to the Atlantic Ocean. Some lake freighters are too large to use the Seaway, and operate only on the Waterway and lakes.
+
+ In 2002, 162 million net tons of dry bulk cargo were moved on the Lakes. This was, in order of volume: iron ore, grain and potash.[119] The iron ore and much of the stone and coal are used in the steel industry. There is also some shipping of liquid and containerized cargo but most container ships cannot pass the locks on the Saint Lawrence Seaway because the ships are too wide.
+
+ Because of the cost of building structures high enough for ships to pass under, only four bridges span the Great Lakes other than Lake Ontario. The Blue Water Bridge is, for example, more than 150 feet high and more than a mile long.[118]
+
+ Major ports on the Great Lakes include Duluth-Superior, Chicago, Detroit, Cleveland, Twin Harbors, Hamilton and Thunder Bay.
+
+ The Great Lakes are used to supply drinking water to tens of millions of people in bordering areas. This valuable resource is collectively administered by the state and provincial governments adjacent to the lakes, who have agreed to the Great Lakes Compact to regulate water supply and use.
+
+ Tourism and recreation are major industries on the Great Lakes.[120] A few small cruise ships operate on the Great Lakes including a couple of sailing ships. Sport fishing, commercial fishing, and Native American fishing represent a U.S.$4 billion a year industry with salmon, whitefish, smelt, lake trout, bass and walleye being major catches. Many other water sports are practiced on the lakes such as yachting, sea kayaking, diving, kitesurfing, powerboating, and lake surfing.
+
+ The Great Lakes Circle Tour is a designated scenic road system connecting all of the Great Lakes and the Saint Lawrence River.[121]
+
+ From 1844 through 1857, palace steamers carried passengers and cargo around the Great Lakes.[122] In the first half of the 20th century large luxurious passenger steamers sailed the lakes in opulence.[123] The Detroit and Cleveland Navigation Company had several vessels at the time and hired workers from all walks of life to help operate these vessels.[124] Several ferries currently operate on the Great Lakes to carry passengers to various islands, including Isle Royale, Drummond Island, Pelee Island, Mackinac Island, Beaver Island, Bois Blanc Island (Ontario), Bois Blanc Island (Michigan), Kelleys Island, South Bass Island, North Manitou Island, South Manitou Island, Harsens Island, Manitoulin Island, and the Toronto Islands. As of 2007, four car ferry services cross the Great Lakes: two on Lake Michigan (a steamer from Ludington, Michigan, to Manitowoc, Wisconsin, and a high-speed catamaran from Milwaukee to Muskegon, Michigan); one on Lake Erie (a boat from Kingsville, Ontario, or Leamington, Ontario, to Pelee Island, Ontario, and on to Sandusky, Ohio); and one on Lake Huron (the M.S. Chi-Cheemaun,[125] which runs between Tobermory and South Baymouth, Manitoulin Island, operated by the Owen Sound Transportation Company). An international ferry across Lake Ontario from Rochester, New York, to Toronto ran during 2004 and 2005, but is no longer in operation.
+
+ The large size of the Great Lakes increases the risk of water travel; storms and reefs are common threats. The lakes are prone to sudden and severe storms, in particular in the autumn, from late October until early December. Hundreds of ships have met their end on the lakes. The greatest concentration of shipwrecks lies near Thunder Bay (Michigan), beneath Lake Huron, near the point where eastbound and westbound shipping lanes converge.
+
+ The Lake Superior shipwreck coast from Grand Marais, Michigan, to Whitefish Point became known as the "Graveyard of the Great Lakes". More vessels have been lost in the Whitefish Point area than any other part of Lake Superior.[126] The Whitefish Point Underwater Preserve serves as an underwater museum to protect the many shipwrecks in this area.
+
+ The first ship to sink in Lake Michigan was Le Griffon, also the first ship to sail the Great Lakes. Caught in a 1679 storm while trading furs between Green Bay and Michilimackinac, she was lost with all hands aboard.[127] Her wreck may have been found in 2004,[128] but a wreck subsequently discovered in a different location was also claimed in 2014 to be Le Griffon.[129]
+
+ The largest and last major freighter wrecked on the lakes was the SS Edmund Fitzgerald, which sank on November 10, 1975, just over 17 miles (27 km) offshore from Whitefish Point on Lake Superior. The largest loss of life in a shipwreck out on the lakes may have been that of Lady Elgin, wrecked in 1860 with the loss of around 400 lives on Lake Michigan. In an incident at a Chicago dock in 1915, the SS Eastland rolled over while loading passengers, killing 841.
+
+ In August 2007, the Great Lakes Shipwreck Historical Society announced that it had found the wreckage of Cyprus, a 420-foot (130 m) long, century-old ore carrier. Cyprus sank during a Lake Superior storm on October 11, 1907, during its second voyage while hauling iron ore from Superior, Wisconsin, to Buffalo, New York. The entire crew of 23 drowned, except one, Charles Pitz, who floated on a life raft for almost seven hours.[130]
+
+ In June 2008, deep sea divers in Lake Ontario found the wreck of the 1780 Royal Navy warship HMS Ontario in what has been described as an "archaeological miracle".[131] There are no plans to raise her as the site is being treated as a war grave.
+
+ In June 2010, L.R. Doty was found in Lake Michigan by an exploration diving team led by dive boat Captain Jitka Hanakova from her boat the Molly V.[132] The ship sank in October 1898, probably attempting to rescue a small schooner, Olive Jeanette, during a terrible storm.
+
+ Still missing are the last two warships to sink in the Great Lakes, the French minesweepers Inkerman and Cerisoles, which vanished in Lake Superior during a blizzard in 1918. Seventy-eight lives were lost, making it the largest loss of life on Lake Superior and the greatest unexplained loss of life on the Great Lakes.
+
+ In 1872, a treaty gave access to the St. Lawrence River to the United States, and access to Lake Michigan to the Dominion of Canada.[133] The International Joint Commission was established in 1909 to help prevent and resolve disputes relating to the use and quality of boundary waters, and to advise Canada and the United States on questions related to water resources. Diversion of lake water is a concern to both Americans and Canadians. Some water is diverted through the Chicago River to operate the Illinois Waterway, but the flow is limited by treaty. Possible schemes for bottled water plants and diversion to dry regions of the continent raise concerns. Under the U.S. "Water Resources Development Act",[134] diversion of water from the Great Lakes Basin requires the approval of all eight Great Lakes governors through the Great Lakes Commission, which rarely occurs. International treaties regulate large diversions.
+
+ In 1998, the Canadian company Nova Group won approval from the Province of Ontario to withdraw 158,000,000 U.S. gallons (600,000 m3) of Lake Superior water annually to ship by tanker to Asian countries. Public outcry forced the company to abandon the plan before it began. Since that time, the eight Great Lakes Governors and the Premiers of Ontario and Quebec have negotiated the Great Lakes-Saint Lawrence River Basin Sustainable Water Resources Agreement[135] and the Great Lakes-St. Lawrence River Basin Water Resources Compact[136] that would prevent most future diversion proposals and all long-distance ones. The agreements strengthen protection against abusive water withdrawal practices within the Great Lakes basin. On December 13, 2005, the Governors and Premiers signed these two agreements, the first of which is between all ten jurisdictions. It is somewhat more detailed and protective, though its legal strength has not yet been tested in court. The second, the Great Lakes Compact, has been approved by the state legislatures of all eight states that border the Great Lakes as well as the U.S. Congress, and was signed into law by President George W. Bush on October 3, 2008.[137]
+
+ The Great Lakes Restoration Initiative, described as "the largest investment in the Great Lakes in two decades",[138] was funded at $475 million in the U.S. federal government's Fiscal Year 2011 budget, and $300 million in the Fiscal Year 2012 budget. Through the program a coalition of federal agencies is making grants to local and state entities for toxics cleanups, wetlands and coastline restoration projects, and invasive species-related projects.
en/2269.html.txt ADDED
@@ -0,0 +1,169 @@
+ Coordinates: 45°N 84°W
+
+ The Great Lakes, or the Great Lakes of North America, are a series of interconnected freshwater lakes in the upper mid-east region of North America that connect to the Atlantic Ocean through the Saint Lawrence River. In general, they are on or near the Canada–United States border. They are lakes Superior, Michigan, Huron, Erie, and Ontario. Hydrologically, there are only four lakes, because lakes Michigan and Huron join at the Straits of Mackinac. The lakes form the basis for the Great Lakes Waterway.
4
+
5
+ The Great Lakes are the largest group of freshwater lakes on Earth by total area, and second-largest by total volume, containing 21% of the world's surface fresh water by volume.[1][2][3] The total surface is 94,250 square miles (244,106 km2), and the total volume (measured at the low water datum) is 5,439 cubic miles (22,671 km3),[4] slightly less than the volume of Lake Baikal (5,666 cu mi or 23,615 km3, 22–23% of the world's surface fresh water). Due to their sea-like characteristics (rolling waves, sustained winds, strong currents, great depths, and distant horizons) the five Great Lakes have also long been referred to as inland seas.[5] Lake Superior is the second-largest lake in the world by area, and the largest freshwater lake by surface area. Lake Michigan is the largest lake that is entirely within one country.[6][7][8][9]
6
+
7
+ The Great Lakes began to form at the end of the last glacial period around 14,000 years ago, as retreating ice sheets exposed the basins they had carved into the land which then filled with meltwater.[10] The lakes have been a major source for transportation, migration, trade, and fishing, serving as a habitat to many aquatic species in a region with much biodiversity.
8
+
9
+ The surrounding region is called the Great Lakes region, which includes the Great Lakes Megalopolis.[11]
10
+
11
+ Though the five lakes lie in separate basins, they form a single, naturally interconnected body of fresh water, within the Great Lakes Basin. As a chain of lakes and rivers they connect the east-central interior of North America to the Atlantic Ocean. From the interior to the outlet at the Saint Lawrence River, water flows from Superior to Huron and Michigan, southward to Erie, and finally northward to Lake Ontario. The lakes drain a large watershed via many rivers, and are studded with approximately 35,000 islands.[12] There are also several thousand smaller lakes, often called "inland lakes", within the basin.[13] The surface area of the five primary lakes combined is roughly equal to the size of the United Kingdom, while the surface area of the entire basin (the lakes and the land they drain) is about the size of the UK and France combined.[14] Lake Michigan is the only one of the Great Lakes that is entirely within the United States; the others form a water boundary between the United States and Canada. The lakes are divided among the jurisdictions of the Canadian province of Ontario and the U.S. states of Michigan, Wisconsin, Minnesota, Illinois, Indiana, Ohio, Pennsylvania, and New York. Both the province of Ontario and the state of Michigan include in their boundaries portions of four of the lakes: The province of Ontario does not border Lake Michigan, and the state of Michigan does not border Lake Ontario. New York and Wisconsin's jurisdictions extend into two lakes, and each of the remaining states into one of the lakes.
12
+
13
+ As the surfaces of Lakes Superior, Huron, Michigan, and Erie are all approximately the same elevation above sea level, while Lake Ontario is significantly lower, and because the Niagara Escarpment precludes all natural navigation, the four upper lakes are commonly called the "upper great lakes". This designation is not universal. Those living on the shore of Lake Superior often refer to all the other lakes as "the lower lakes", because they are farther south. Sailors of bulk freighters transferring cargoes from Lake Superior and northern Lake Michigan and Lake Huron to ports on Lake Erie or Ontario commonly refer to the latter as the lower lakes and Lakes Michigan, Huron, and Superior as the upper lakes. This corresponds to thinking of lakes Erie and Ontario as "down south" and the others as "up north". Vessels sailing north on Lake Michigan are considered "upbound" even though they are sailing toward its effluent current.[24]
14
+
15
+ Lakes Huron and Michigan are sometimes considered a single lake, called Lake Michigan–Huron, because they are one hydrological body of water connected by the Straits of Mackinac.[25] The straits are five miles (8 km) wide[14] and 120 feet (37 m) deep; the water levels rise and fall together,[26] and the flow between Michigan and Huron frequently reverses direction.
16
+
17
+ Dispersed throughout the Great Lakes are approximately 35,000 islands.[12] The largest among them is Manitoulin Island in Lake Huron, the largest island in any inland body of water in the world.[34] The second-largest island is Isle Royale in Lake Superior.[35] Both of these islands are large enough to contain multiple lakes themselves—for instance, Manitoulin Island's Lake Manitou is the world's largest lake on a freshwater island.[36] Some of these lakes even have their own islands, like Treasure Island in Lake Mindemoya in Manitoulin Island
18
+
19
+ The Great Lakes also have several peninsulas between them, including the Door Peninsula, the Peninsulas of Michigan, and the Ontario Peninsula. Some of these peninsulas even contain smaller peninsulas, such as the Keweenaw Peninsula, the Thumb Peninsula, the Bruce Peninsula, and the Niagara Peninsula. Population centers on the peninsulas include Grand Rapids and Detroit in Michigan along with London, Hamilton, Brantford, and Toronto in Ontario.
20
+
21
+ Although the Saint Lawrence Seaway and Great Lakes Waterway make the Great Lakes accessible to ocean-going vessels,[37] shifts in shipping to wider ocean-going container ships—which do not fit through the locks on these routes—have limited container shipping on the lakes. Most Great Lakes trade is of bulk material, and bulk freighters of Seawaymax-size or less can move throughout the entire lakes and out to the Atlantic.[38] Larger ships are confined to working in the lakes themselves. Only barges can access the Illinois Waterway system providing access to the Gulf of Mexico via the Mississippi River. Despite their vast size, large sections of the Great Lakes freeze over in winter, interrupting most shipping from January to March. Some icebreakers ply the lakes, keeping the shipping lanes open through other periods of ice on the lakes.
22
+
23
+ The Great Lakes are also connected by the Chicago Sanitary and Ship Canal to the Gulf of Mexico by way of the Illinois River (from the Chicago River) and the Mississippi River. An alternate track is via the Illinois River (from Chicago), to the Mississippi, up the Ohio, and then through the Tennessee–Tombigbee Waterway (a combination of a series of rivers and lakes and canals), to Mobile Bay and the Gulf of Mexico. Commercial tug-and-barge traffic on these waterways is heavy.[39]
24
+
25
+ Pleasure boats can also enter or exit the Great Lakes by way of the Erie Canal and Hudson River in New York. The Erie Canal connects to the Great Lakes at the east end of Lake Erie (at Buffalo, New York) and at the south side of Lake Ontario (at Oswego, New York).
26
+
27
+ In 2009, the lakes contained 84% of the surface freshwater of North America;[40] if the water were evenly distributed over the entire continent's land area, it would reach a depth of 5 feet (1.5 meters).[14] The source of water levels in the lakes is tied to what was left by melting glaciers when the lakes took their present form. Annually, only about 1% is "new" water originating from rivers, precipitation, and groundwater springs that drain into the lakes. Historically, evaporation has been balanced by drainage, making the level of the lakes constant.[14]
28
+
29
+ Intensive human population growth only began in the region in the 20th century and continues today.[14] At least two human water use activities have been identified as having the potential to affect the lakes' levels: diversion (the transfer of water to other watersheds) and consumption (substantially done today by the use of lake water to power and cool electric generation plants, resulting in evaporation).[41]
30
+
31
+ The physical impacts of climate change can be seen in water levels in the Great Lakes over the past century.[42] The United Nations' Intergovernmental Panel on Climate Change in 1997, 23 years ago, predicted: "the following lake level declines could occur: Lake Superior −0.2 to −0.5 m, Lakes Michigan and Huron −1.0 to −2.5 m, and Lake Erie −0.9 to −1.9 m."[43] In 2009, 11 years ago, it was predicted that global warming will decrease water levels.[44] In 2013, record low water levels in the Great Lakes were attributed to climate change.[45]
32
+
33
+ The water level of Lake Michigan–Huron had remained fairly constant over the 20th century,[46] but has nevertheless dropped more than 6 feet from the record high in 1986 to the low of 2013.[47] In 2012, National Geographic tied the water level drop to warming climate change.,[48] as did the Natural Resources Defense Council.[49] One newspaper reported that the long-term average level has gone down about 20 inches because of dredging and subsequent erosion in the St. Clair River. Lake Michigan–Huron hit all-time record low levels in 2013; according to the US Army Corps of Engineers, the previous record low had been set in 1964.[47] By April 2015 the water level had recovered to 7 inches (17.5 cm) more than the "long term monthly average".[50]
34
+
35
+ The Great Lakes contain 21% of the world's surface fresh water: 5,472 cubic miles (22,810 km3), or 6.0×1015 U.S. gallons, that is 6 quadrillion U.S gallons, (2.3×1016 liters). This is enough water to cover the 48 contiguous U.S. states to a uniform depth of 9.5 feet (2.9 m). Although the lakes contain a large percentage of the world's fresh water, the Great Lakes supply only a small portion of U.S. drinking water on a national basis.[57]
36
+
37
+ The total surface area of the lakes is approximately 94,250 square miles (244,100 km2)—nearly the same size as the United Kingdom, and larger than the U.S. states of New York, New Jersey, Connecticut, Rhode Island, Massachusetts, Vermont, and New Hampshire combined.[58]
38
+
39
+ The Great Lakes coast measures approximately 10,500 miles (16,900 km);,[14] but the length of a coastline is impossible to measure exactly and is not a well-defined measure (see Coastline paradox). Of the total 10,500 miles (16,900 km) of shoreline, Canada borders approximately 5,200 miles (8,400 km), while the remaining 5,300 miles (8,500 km) are bordered by the United States. Michigan has the longest shoreline of the United States, bordering roughly 3,288 miles (5,292 km) of shoreline, followed by Wisconsin (820 miles (1,320 km)), New York (473 miles (761 km)), and Ohio (312 miles (502 km)).[59] Traversing the shoreline of all the lakes would cover a distance roughly equivalent to travelling half-way around the world at the equator.[14]
40
+
41
+ It has been estimated that the foundational geology that created the conditions shaping the present day upper Great Lakes was laid from 1.1 to 1.2 billion years ago,[14][60] when two previously fused tectonic plates split apart and created the Midcontinent Rift, which crossed the Great Lakes Tectonic Zone. A valley was formed providing a basin that eventually became modern day Lake Superior. When a second fault line, the Saint Lawrence rift, formed approximately 570 million years ago,[14] the basis for Lakes Ontario and Erie were created, along with what would become the Saint Lawrence River.
42
+
43
+ The Great Lakes are estimated to have been formed at the end of the last glacial period (the Wisconsin glaciation ended 10,000 to 12,000 years ago), when the Laurentide Ice Sheet receded.[10] The retreat of the ice sheet left behind a large amount of meltwater (see Lake Algonquin, Lake Chicago, Glacial Lake Iroquois, and Champlain Sea) that filled up the basins that the glaciers had carved, thus creating the Great Lakes as we know them today.[61] Because of the uneven nature of glacier erosion, some higher hills became Great Lakes islands. The Niagara Escarpment follows the contour of the Great Lakes between New York and Wisconsin. Land below the glaciers "rebounded" as it was uncovered.[62] Since the glaciers covered some areas longer than others, this glacial rebound occurred at different rates.
44
+
45
+ A notable modern phenomenon is the formation of ice volcanoes over the lakes during wintertime. Storm-generated waves carve the lakes' ice sheet and create conical mounds through the eruption of water and slush. The process is only well-documented in the Great Lakes, and has been credited with sparing the southern shorelines from worse rocky erosion.[63]
46
+
47
+ The Great Lakes have a humid continental climate,
48
+ Köppen climate classification Dfa (in southern areas) and Dfb (in northern parts),[64] with varying influences from air masses from other regions, including dry, cold Arctic systems, mild Pacific air masses from the west, and warm, wet tropical systems from the south and the Gulf of Mexico.[65] The lakes themselves also have a moderating effect on the climate; they can increase precipitation totals and produce lake-effect snowfall.[64]
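+
+ The Dfa/Dfb split mentioned above comes down to summer warmth. A minimal sketch, assuming the standard Köppen thresholds (warmest month averaging at least 22 °C for "a"; otherwise at least four months averaging 10 °C or more for "b"):
+
+ ```python
+ # Classify the third Köppen letter for a Df station from monthly mean temps (°C).
+ def koppen_df_suffix(monthly_means):
+     if max(monthly_means) >= 22:
+         return "Dfa"  # hot summer (southern Great Lakes)
+     if sum(t >= 10 for t in monthly_means) >= 4:
+         return "Dfb"  # warm summer (northern Great Lakes)
+     return "Dfc"      # subarctic; included only for completeness
+
+ # Hypothetical monthly means for a southern-shore station:
+ print(koppen_df_suffix([-4, -2, 3, 10, 16, 22, 25, 23, 19, 12, 5, -1]))  # Dfa
+ ```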
49
+
50
+ The Great Lakes can have an effect on regional weather called lake-effect snow, which is sometimes very localized. Even late in winter, the lakes often have no icepack in the middle. The prevailing westerly winds pick up air and moisture from the lake surface, which is slightly warmer than the cold air above it. As the moist air passes over the colder land surface, the moisture often produces concentrated, heavy snowfall that sets up in bands or "streamers". This is similar to the effect of warmer air dropping snow as it passes over mountain ranges. During freezing weather with high winds, the "snow belts" receive regular snowfall from this localized weather pattern, especially along the eastern shores of the lakes. Snow belts are found in Wisconsin, Michigan, Ohio, Pennsylvania, and New York, United States; and Ontario, Canada.
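+
+ The driver is the temperature contrast between the open water and the cold air crossing it. Below is a toy illustration of one commonly cited forecasting rule of thumb (the ~13 °C lake-minus-850 hPa threshold is an assumption here, not taken from the text):
+
+ ```python
+ # Lake-effect snow is favored when the lake surface is much warmer than
+ # the air at the 850 hPa level (roughly 1.5 km aloft).
+ def lake_effect_favored(lake_surface_c, t850_c, threshold_c=13.0):
+     return (lake_surface_c - t850_c) >= threshold_c
+
+ print(lake_effect_favored(lake_surface_c=4.0, t850_c=-12.0))  # True: 16 °C spread
+ ```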
51
+
52
+ The lakes also moderate seasonal temperatures to some degree, though not as strongly as large oceans do; they absorb heat and cool the air in summer, then slowly radiate that heat in autumn. They protect against frost during transitional weather and keep summertime temperatures cooler than farther inland. This effect can be very localized and can be overridden by offshore wind patterns. This temperature buffering produces areas known as "Fruit Belts", where fruit typically grown much farther south can be produced. For instance, western Michigan has apple and cherry orchards, and vineyards are cultivated adjacent to the lake shore as far north as the Grand Traverse Bay and Nottawasaga Bay in central Ontario. The eastern shore of Lake Michigan and the southern shore of Lake Erie have many successful wineries because of the moderating effect, as does the Niagara Peninsula between Lake Erie and Lake Ontario. A similar phenomenon allows wineries to flourish in the Finger Lakes region of New York, as well as in Prince Edward County, Ontario, on Lake Ontario's northeast shore. Related to the lake effect is the regular occurrence of fog over medium-sized areas, particularly along the shorelines of the lakes. This is most noticeable along Lake Superior's shores.
53
+
54
+ The Great Lakes have been observed to help intensify storms, such as Hurricane Hazel in 1954 and the 2011 Goderich, Ontario tornado, which moved onshore as a tornadic waterspout. In 1996, a rare tropical or subtropical storm was observed forming in Lake Huron, dubbed the 1996 Lake Huron cyclone. Large severe thunderstorms covering wide areas are well known in the Great Lakes during mid-summer; these mesoscale convective complexes (MCCs)[66] can damage wide swaths of forest and shatter glass in city buildings. These storms mainly occur at night, and the systems sometimes have small embedded tornadoes, but more often straight-line winds accompanied by intense lightning.
55
+
56
+ Historically, the Great Lakes, in addition to their lake ecology, were surrounded by various forest ecoregions (except in a relatively small area of southeast Lake Michigan where savanna or prairie occasionally intruded). Logging, urbanization, and agricultural use have changed that relationship. In the early 21st century, Lake Superior's shores are 91% forested, Lake Huron's 68%, Lake Ontario's 49%, Lake Michigan's 41%, and Lake Erie's, where logging and urbanization have been most extensive, 21%. Some of these forests are second or third growth (i.e., they have been logged before, changing their composition). At least 13 wildlife species are documented as having become extinct since the arrival of Europeans, and many more are threatened or endangered.[14] Meanwhile, exotic and invasive species have also been introduced.
57
+
58
+ While the organisms living on the bottom of shallow waters are similar to those found in smaller lakes, the deep waters contain organisms found only in deep, cold lakes of the northern latitudes. These include the delicate opossum shrimp (order Mysida), the deepwater scud (a crustacean of the order Amphipoda), two types of copepods, and the deepwater sculpin (a spiny, large-headed fish).[68]
59
+
60
+ The Great Lakes are an important source of fishing. Early European settlers were astounded by both the variety and quantity of fish; there were 150 different species in the Great Lakes.[14] Throughout history, fish populations were the early indicator of the condition of the Lakes and have remained one of the key indicators even in the current era of sophisticated analyses and measuring instruments. According to the bi-national (U.S. and Canadian) resource book, The Great Lakes: An Environmental Atlas and Resource Book: "The largest Great Lakes fish harvests were recorded in 1889 and 1899 at some 67,000 tonnes (66,000 long tons; 74,000 short tons) [147 million pounds]."[69]
61
+
62
+ By 1801, the New York Legislature found it necessary to pass regulations curtailing obstructions to the natural migrations of Atlantic salmon from Lake Erie into their spawning channels. In the early 19th century, the government of Upper Canada found it necessary to introduce similar legislation prohibiting the use of weirs and nets at the mouths of Lake Ontario's tributaries. Other protective legislation was passed, as well, but enforcement remained difficult.[70]
63
+
64
+ On both sides of the Canada–United States border, dams and impoundments have proliferated, necessitating more regulatory efforts. Concerns by the mid-19th century included obstructions in the rivers which prevented salmon and lake sturgeon from reaching their spawning grounds. The Wisconsin Fisheries Commission noted a reduction of roughly 25% in general fish harvests by 1875. The states have removed dams from rivers where necessary.[clarification needed][71]
65
+
66
+ Overfishing has been cited as a possible reason for the decline in populations of various whitefish, which are important for their culinary desirability and, hence, their economic consequence. Between 1879 and 1899, reported whitefish harvests declined from some 24.3 million pounds (11 million kg) to just over 9 million pounds (4 million kg).[72] By 1900, commercial fishermen on Lake Michigan were hauling in an average of 41 million pounds of fish annually.[73] By 1938, Wisconsin's commercial fishing operations were motorized and mechanized, generating jobs for more than 2,000 workers and hauling 14 million pounds per year.[73] The population of giant freshwater mussels was eliminated as the mussels were harvested for use as buttons by early Great Lakes entrepreneurs.[72] Since 2000, the invasive quagga mussel has smothered the bottom of Lake Michigan almost from shore to shore, and its numbers are estimated at 900 trillion.[73]
67
+
68
+ The influx of parasitic lamprey populations after the development of the Erie Canal and the much later Welland Canal led the federal governments of the US and Canada to work on joint proposals to control them. By the mid-1950s, the lake trout populations of Lakes Michigan and Huron had been sharply reduced, with the lamprey deemed largely to blame. This led to the launch of the bi-national Great Lakes Fishery Commission.
69
+
70
+ The Great Lakes: An Environmental Atlas and Resource Book (1972) noted: "Only pockets remain of the once large commercial fishery."[69] But water quality improvements realized during the 1970s and 1980s, combined with successful salmonid stocking programs, have enabled the growth of a large recreational fishery.[74] The last commercial fisherman left Milwaukee in 2011 because of overfishing and anthropogenic changes to the biosphere.[73]
71
+
72
+ Since the 19th century, an estimated 160 new species have found their way into the Great Lakes ecosystem, and many have become invasive; introductions via overseas ships' ballast water and hull fouling are causing severe economic and ecological impacts.[75][76] According to the Inland Seas Education Association, on average a new species enters the Great Lakes every eight months.[76]
73
+
74
+ Introductions into the Great Lakes include the zebra mussel, first discovered in 1988, and the quagga mussel, first discovered in 1989. These mollusks are efficient filter feeders, competing with native mussels and reducing available food and spawning grounds for fish. In addition, the mussels can be a nuisance to industries by clogging pipes. The U.S. Fish and Wildlife Service estimates that the economic impact of the zebra mussel could be about $5 billion over the next decade.[77]
75
+
76
+ The alewife first entered the system west of Lake Ontario via 19th-century canals. By the 1960s, the small silver fish had become a familiar nuisance to beachgoers across Lakes Michigan, Huron, and Erie. Periodic mass die-offs result in vast numbers of the fish washing up on shore; estimates by various governments have placed the share of Lake Michigan's biomass made up of alewives in the early 1960s as high as 90%. In the late 1960s, the various state and federal governments began stocking several species of salmonids, including the native lake trout as well as non-native chinook and coho salmon; by the 1980s, alewife populations had dropped drastically.[78] The ruffe, a small percid fish from Eurasia, became the most abundant fish species in Lake Superior's Saint Louis River within five years of its detection in 1986. Its range, which has expanded to Lake Huron, poses a significant threat to the lower lake fishery.[79] Five years after first being observed in the St. Clair River, the round goby can now be found in all of the Great Lakes. The goby is considered undesirable for several reasons: it preys upon bottom-feeding fish, overruns optimal habitat, spawns multiple times a season, and can survive poor water quality conditions.[80]
77
+
78
+ Several species of exotic water fleas have accidentally been introduced into the Great Lakes, such as the spiny waterflea, Bythotrephes longimanus, and the fishhook waterflea, Cercopagis pengoi, potentially having an effect on the zooplankton population. Several species of crayfish have also been introduced that may contend with native crayfish populations. More recently an electric fence has been set up across the Chicago Sanitary and Ship Canal in order to keep several species of invasive Asian carp out of the area. These fast-growing planktivorous fish have heavily colonized the Mississippi and Illinois river systems.[81] The sea lamprey, which has been particularly damaging to the native lake trout population, is another example of a marine invasive species in the Great Lakes.[82] Invasive species, particularly zebra and quagga mussels, may be at least partially responsible for the collapse of the deepwater demersal fish community in Lake Huron,[83] as well as drastic unprecedented changes in the zooplankton community of the lake.[84]
79
+
80
+ Scientists understand that the micro-aquatic life of the lakes is abundant, but know very little about some of the most plentiful microbes and their environmental effects in the Great Lakes. Although a drop of lake water may contain 1 million bacterial cells and 10 million viruses, only since 2012 has there been a long-term study of the lakes' micro-organisms. Between 2012 and 2019, more than 160 new species were discovered.[85]
81
+
82
+ Native habitats and ecoregions in the Great Lakes region include:
83
+
84
+ Plant lists include:
85
+
86
+ Logging
87
+
88
+ Logging of the extensive forests in the Great Lakes region removed riparian and adjacent tree cover over rivers and streams, which had provided shade, moderating water temperatures in fish spawning grounds. Removal of trees also destabilized the soil, allowing greater volumes to wash into stream beds, causing siltation of gravel beds and more frequent flooding.
89
+
90
+ Running cut logs down the tributary rivers into the Great Lakes also dislocated sediments. In 1884, the New York Fish Commission determined that the dumping of sawmill waste (chips and sawdust) had impacted fish populations.[86]
91
+
92
+ The first U.S. Clean Water Act, passed by a Congressional override after being vetoed by US President Richard Nixon in 1972, was a key piece of legislation,[87] along with the bi-national Great Lakes Water Quality Agreement signed by Canada and the U.S. A variety of steps taken to process industrial and municipal pollution discharges into the system greatly improved water quality by the 1980s, and Lake Erie in particular is significantly cleaner.[88] Discharge of toxic substances has been sharply reduced. Federal and state regulations control substances like PCBs. The first of 43 "Great Lakes Areas of Concern" to be formally "de-listed" due to successful cleanup was Ontario's Collingwood Harbour in 1994; Ontario's Severn Sound followed in 2003.[89] Presque Isle Bay in Pennsylvania is formally listed as in recovery, as is Ontario's Spanish Harbour. Dozens of other Areas of Concern have received partial cleanups such as the Rouge River (Michigan) and Waukegan Harbor (Illinois).[90]
93
+
94
+ Phosphate detergents were historically a major source of the nutrients feeding Great Lakes algae blooms, particularly in the warmer and shallower portions of the system such as Lake Erie, Saginaw Bay, Green Bay, and the southernmost portion of Lake Michigan. By the mid-1980s, most jurisdictions bordering the Great Lakes had placed controls on phosphate detergents,[91] resulting in sharp reductions in the frequency and extent of the blooms.[citation needed]
95
+
96
+ Blue-green algae, or cyanobacteria, blooms[92] have been problematic on Lake Erie since 2011.[93] "Not enough is being done to stop fertilizer and phosphorus from getting into the lake and causing blooms," said Michael McKay, executive director of the Great Lakes Institute for Environmental Research (GLIER) at the University of Windsor. The largest Lake Erie bloom to date occurred in 2015, rating 10.5 on the severity index; the 2011 bloom rated 10.[94] In early August 2019, satellite images depicted a bloom stretching up to 1,300 square kilometres on Lake Erie, with the heaviest concentration near Toledo, Ohio. "A large bloom does not necessarily mean the cyanobacteria ... will produce toxins," McKay added. Water quality testing was underway in August 2019.[95][94]
97
+
98
+ Until 1970, mercury was not listed as a harmful chemical, according to the United States Federal Water Quality Administration. In the years since, mercury has become more apparent in water tests. Mercury compounds have been used in paper mills to prevent slime from forming during production, and chemical companies have used mercury to separate chlorine from brine solutions. Studies conducted by the Environmental Protection Agency have shown that when mercury comes in contact with many of the bacteria and compounds in fresh water, it forms the compound methylmercury, which has a much greater impact on human health than elemental mercury because of its higher propensity for absorption. This form of mercury is not detrimental to a majority of fish types, but is very detrimental to people and other animals that consume the fish. Mercury has been linked to health problems such as birth defects in humans and animals, and to the near extinction of eagles in the Great Lakes region.[96]
99
+
100
+ The amount of raw sewage dumped into the waters was the primary focus of both the first Great Lakes Water Quality Agreement and federal laws passed in both countries during the 1970s. Implementation of secondary treatment of municipal sewage by major cities greatly reduced the routine discharge of untreated sewage during the 1970s and 1980s.[97] The International Joint Commission in 2009 summarized the change: "Since the early 1970s, the level of treatment to reduce pollution from waste water discharges to the Great Lakes has improved considerably. This is a result of significant expenditures to date on both infrastructure and technology, and robust regulatory systems that have proven to be, on the whole, quite effective."[98] The commission reported that all urban sewage treatment systems on the U.S. side of the lakes had implemented secondary treatment, as had all on the Canadian side except for five small systems.[citation needed]
101
+
102
+ Though contrary to federal laws in both countries, those treatment system upgrades have not yet eliminated combined sewer overflow events.[citation needed] These occur when older sewerage systems, which carry storm water and sewage in a single pipe to the treatment plant, are temporarily overwhelmed by heavy rainstorms. Local sewage treatment authorities then must release untreated effluent, a mix of rainwater and sewage, into local water bodies. While enormous public investments such as the Deep Tunnel projects in Chicago and Milwaukee have greatly reduced the frequency and volume of these events, they have not eliminated them. The number of such overflow events in Ontario, for example, is flat, according to the International Joint Commission.[98] Reports about this issue on the U.S. side highlight five large municipal systems (those of Detroit, Cleveland, Buffalo, Milwaukee and Gary) as the largest current periodic sources of untreated discharges into the Great Lakes.[99]
103
+
104
+ Algae such as diatoms, along with other phytoplankton, are photosynthetic primary producers supporting the food web of the Great Lakes,[100] and have been affected by global warming.[101] Changes in the size or in the function of the primary producers may have a direct or an indirect impact on the food web. Photosynthesis carried out by diatoms comprises about one-fifth of the total photosynthesis. By taking CO2 out of the water to photosynthesize, diatoms help to stabilize the pH of the water; otherwise the dissolved CO2 would react with water, making it more acidic.
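+
+ The acidification pathway alluded to here is the ordinary carbonic-acid equilibrium (standard textbook chemistry, not from the cited source):
+
+ \[ \mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-} \]
+
+ By drawing dissolved CO2 down for photosynthesis, diatoms pull this equilibrium to the left, reducing the hydrogen-ion concentration and so buffering the water's pH.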
105
+
106
+ Diatoms acquire inorganic carbon through passive diffusion of CO2 and HCO3−, and they also use carbonic-anhydrase-mediated active transport to speed up this process.[102] Large diatoms require more carbon uptake than smaller diatoms.[103] There is a positive correlation between the surface area and the chlorophyll concentration of diatom cells.[104]
107
+
108
+ Several Native American populations (Paleo-Indians) inhabited the region around 10,000 BC, after the end of the Wisconsin glaciation.[105][106] The peoples of the Great Lakes traded with the Hopewell culture from around 1000 AD, as copper nuggets extracted from the region have been found fashioned into ornaments and weapons in the mounds of southern Ohio. The brigantine Le Griffon, commissioned by René-Robert Cavelier, Sieur de La Salle, was built at Cayuga Creek, near the southern end of the Niagara River, and became the first known sailing ship to travel the upper Great Lakes on August 7, 1679.[107]
109
+
110
+ The Rush–Bagot Treaty, signed in 1818 after the War of 1812, and the later Treaty of Washington eventually led to a complete disarmament of naval vessels on the Great Lakes. Nonetheless, both nations maintain coast guard vessels on the lakes.
111
+
112
+ During settlement, the Great Lakes and their rivers were the only practical means of moving people and freight. Barges from middle North America were able to reach the Atlantic Ocean from the Great Lakes once the Erie Canal opened in 1825 and the Welland Canal in 1829.[108] By 1848, with the opening of the Illinois and Michigan Canal at Chicago, direct access to the Mississippi River was possible from the lakes.[109] With these two canals, an all-inland water route was provided between New York City and New Orleans.
113
+
114
+ The main business of many of the passenger lines in the 19th century was transporting immigrants. Many of the larger cities owe their existence to their position on the lakes as a freight destination as well as for being a magnet for immigrants. After railroads and surface roads developed, the freight and passenger businesses dwindled and, except for ferries and a few foreign cruise ships, have now vanished.
115
+ The immigration routes still have an effect today. Immigrants often formed their own communities and some areas have a pronounced ethnicity, such as Dutch, German, Polish, Finnish, and many others. Since many immigrants settled for a time in New England before moving westward, many areas on the U.S. side of the Great Lakes also have a New England feel, especially in home styles and accent.
116
+
117
+ Since general freight is now transported by railroads and trucks, domestic ships mostly move bulk cargoes, such as iron ore, coal, and limestone for the steel industry. Domestic bulk freight developed because of the nearby mines; it was more economical to transport the ingredients for steel to centralized plants than to try to make steel on the spot. Grain exports are also a major cargo on the lakes.
118
+
119
+ In the 19th and early 20th centuries, iron and other ores such as copper were shipped south (on downbound ships), while supplies, food, and coal were shipped north (upbound). Because of the location of the coal fields in Pennsylvania and West Virginia, and the general northeast track of the Appalachian Mountains, railroads naturally developed shipping routes that went due north to ports such as Erie, Pennsylvania, and Ashtabula, Ohio.
120
+
121
+ Because the lake maritime community largely developed independently, it has some distinctive vocabulary. Ships, no matter the size, are called boats. When the sailing ships gave way to steamships, they were called steamboats, the same term used on the Mississippi. The ships also have a distinctive design (see Lake freighter). Ships that primarily trade on the lakes are known as lakers; foreign boats are known as salties. Since about 1950, one of the more common sights on the lakes has been the 1,000-by-105-foot (305-by-32-meter), 78,850-long-ton (80,120-metric-ton) self-unloader, a laker with a conveyor belt system that can unload itself by swinging a crane over the side.[110] Today, the Great Lakes fleet is much smaller in numbers than it once was because of the increased use of overland freight and a few larger ships replacing many small ones.
122
+
123
+ During World War II, the risk of submarine attacks against coastal training facilities motivated the United States Navy to operate two aircraft carriers on the Great Lakes, USS Sable (IX-81) and USS Wolverine (IX-64). Both served as training ships to qualify naval aviators in carrier landing and takeoff.[111] Lake Champlain briefly became the sixth Great Lake of the United States on March 6, 1998, when President Clinton signed Senate Bill 927. This bill, which reauthorized the National Sea Grant Program, contained a line declaring Lake Champlain to be a Great Lake. Not coincidentally, this status allows neighboring states to apply for additional federal research and education funds allocated to these national resources.[112] Following a small uproar, the Senate voted to revoke the designation on March 24 (although New York and Vermont universities would continue to receive funds to monitor and study the lake).[113]
124
+
125
+ In the early years of the 21st century, water levels in the Great Lakes were a concern.[114] Researchers at the Mowat Centre said that low levels could cost $19 billion by 2050.[115] This was followed by record high levels in all lakes except Ontario in the late 2010s and 2020.[116]
126
+
127
+ Alan B. McCullough has written that the fishing industry of the Great Lakes got its start "on the American side of Lake Ontario in Chaumont Bay, near the Maumee River on Lake Erie, and on the Detroit River at about the time of the War of 1812." Although the region was sparsely populated until the 1830s, meaning there was little local demand, and transporting fish was still prohibitively costly, there were economic and infrastructure developments going into the 1830s that were promising for the future of the fishing industry: particularly the 1825 opening of the Erie Canal and, a few years later, the Welland Canal. The fishing industry expanded particularly in the waters associated with the fur trade that connect Lake Erie and Lake Huron. In fact, two major suppliers of fish in the 1830s were the fur trading companies Hudson's Bay Company and the American Fur Company.[117]
128
+
129
+ The catch from these waters would be sent to the growing market for salted fish in Detroit, where merchants involved in the fur trade had already gained some experience handling salted fish. One such merchant was John P. Clark, a shipbuilder and merchant who began selling fish in the area of Manitowoc, Wisconsin where whitefish was abundant. Another operation cropped up in Georgian Bay, Canadian waters plentiful with trout as well as whitefish. In 1831, Alexander MacGregor from Goderich, Ontario found whitefish and herring in unusually abundant supply around the Fishing Islands. A contemporary account by Methodist missionary John Evans describes the fish as resembling a "bright cloud moving rapidly through the water".[117]
130
+
131
+ Except when the water is frozen during winter, more than 100 lake freighters operate continuously on the Great Lakes,[118] which remain a major water transport corridor for bulk goods. The Great Lakes Waterway connects all the lakes; the smaller Saint Lawrence Seaway connects the lakes to the Atlantic Ocean. Some lake freighters are too large to use the Seaway and operate only on the Waterway and lakes.
132
+
133
+ In 2002, 162 million net tons of dry bulk cargo were moved on the Lakes. This was, in order of volume: iron ore, grain and potash.[119] The iron ore and much of the stone and coal are used in the steel industry. There is also some shipping of liquid and containerized cargo but most container ships cannot pass the locks on the Saint Lawrence Seaway because the ships are too wide.
134
+
135
+ Because of the cost of building structures high enough for ships to pass under, only four bridges span the Great Lakes other than Lake Ontario. The Blue Water Bridge, for example, is more than 150 feet high and more than a mile long.[118]
136
+
137
+ Major ports on the Great Lakes include Duluth-Superior, Chicago, Detroit, Cleveland, Twin Harbors, Hamilton and Thunder Bay.
138
+
139
+ The Great Lakes are used to supply drinking water to tens of millions of people in bordering areas. This valuable resource is collectively administered by the state and provincial governments adjacent to the lakes, which have agreed to the Great Lakes Compact to regulate water supply and use.
140
+
141
+ Tourism and recreation are major industries on the Great Lakes.[120] A few small cruise ships operate on the Great Lakes, including a couple of sailing ships. Sport fishing, commercial fishing, and Native American fishing represent a US$4 billion-a-year industry, with salmon, whitefish, smelt, lake trout, bass, and walleye being major catches. Many other water sports are practiced on the lakes, such as yachting, sea kayaking, diving, kitesurfing, powerboating, and lake surfing.
142
+
143
+ The Great Lakes Circle Tour is a designated scenic road system connecting all of the Great Lakes and the Saint Lawrence River.[121]
144
+
145
+ From 1844 through 1857, palace steamers carried passengers and cargo around the Great Lakes.[122] In the first half of the 20th century, large luxurious passenger steamers sailed the lakes in opulence.[123] The Detroit and Cleveland Navigation Company had several vessels at the time and hired workers from all walks of life to help operate these vessels.[124] Several ferries currently operate on the Great Lakes to carry passengers to various islands, including Isle Royale, Drummond Island, Pelee Island, Mackinac Island, Beaver Island, Bois Blanc Island (Ontario), Bois Blanc Island (Michigan), Kelleys Island, South Bass Island, North Manitou Island, South Manitou Island, Harsens Island, Manitoulin Island, and the Toronto Islands. As of 2007, four car ferry services cross the Great Lakes: two on Lake Michigan, a steamer from Ludington, Michigan, to Manitowoc, Wisconsin, and a high-speed catamaran from Milwaukee to Muskegon, Michigan; one on Lake Erie, a boat from Kingsville, Ontario, or Leamington, Ontario, to Pelee Island, Ontario, then on to Sandusky, Ohio; and one on Lake Huron, the M.S. Chi-Cheemaun,[125] which runs between Tobermory and South Baymouth, Manitoulin Island, operated by the Owen Sound Transportation Company. An international ferry across Lake Ontario from Rochester, New York, to Toronto ran during 2004 and 2005, but is no longer in operation.
146
+
147
+ The large size of the Great Lakes increases the risk of water travel; storms and reefs are common threats. The lakes are prone to sudden and severe storms, in particular in the autumn, from late October until early December. Hundreds of ships have met their end on the lakes. The greatest concentration of shipwrecks lies near Thunder Bay (Michigan), beneath Lake Huron, near the point where eastbound and westbound shipping lanes converge.
148
+
149
+ The Lake Superior shipwreck coast from Grand Marais, Michigan, to Whitefish Point became known as the "Graveyard of the Great Lakes". More vessels have been lost in the Whitefish Point area than any other part of Lake Superior.[126] The Whitefish Point Underwater Preserve serves as an underwater museum to protect the many shipwrecks in this area.
150
+
151
+ The first ship to sink in Lake Michigan was Le Griffon, also the first ship to sail the Great Lakes. Caught in a 1679 storm while trading furs between Green Bay and Michilimackinac, she was lost with all hands aboard.[127] Her wreck may have been found in 2004,[128] but a wreck subsequently discovered in a different location was also claimed in 2014 to be Le Griffon.[129]
152
+
153
+ The largest and most recent major freighter wrecked on the lakes was the SS Edmund Fitzgerald, which sank on November 10, 1975, just over 17 miles (30 km) offshore from Whitefish Point on Lake Superior. The largest loss of life in a shipwreck on the open lakes may have been that of Lady Elgin, wrecked in 1860 with the loss of around 400 lives on Lake Michigan. In an incident at a Chicago dock in 1915, the SS Eastland rolled over while loading passengers, killing 841.
154
+
155
+ In August 2007, the Great Lakes Shipwreck Historical Society announced that it had found the wreckage of Cyprus, a 420-foot (130 m), century-old ore carrier. Cyprus sank during a Lake Superior storm on October 11, 1907, during its second voyage while hauling iron ore from Superior, Wisconsin, to Buffalo, New York. All but one of the crew of 23 drowned; the lone survivor, Charles Pitz, floated on a life raft for almost seven hours.[130]
156
+
157
+ In June 2008, deep sea divers in Lake Ontario found the wreck of the 1780 Royal Navy warship HMS Ontario in what has been described as an "archaeological miracle".[131] There are no plans to raise her as the site is being treated as a war grave.
158
+
159
+ In June 2010, L.R. Doty was found in Lake Michigan by an exploration diving team led by dive boat captain Jitka Hanakova from her boat, the Molly V.[132] The ship sank in October 1898, probably while attempting to rescue a small schooner, Olive Jeanette, during a terrible storm.
160
+
161
+ Still missing are the last two warships to sink in the Great Lakes, the French minesweepers Inkerman and Cerisoles, which vanished in Lake Superior during a blizzard in 1918. Seventy-eight lives were lost, making it the largest loss of life on Lake Superior and the greatest unexplained loss of life on the Great Lakes.
162
+
163
164
+
165
+ In 1872, a treaty gave access to the St. Lawrence River to the United States and access to Lake Michigan to the Dominion of Canada.[133] The International Joint Commission was established in 1909 to help prevent and resolve disputes relating to the use and quality of boundary waters, and to advise Canada and the United States on questions related to water resources. Diversion of lake water is a concern to both Americans and Canadians. Some water is diverted through the Chicago River to operate the Illinois Waterway, but the flow is limited by treaty. Possible schemes for bottled water plants and diversion to dry regions of the continent raise concerns. Under the U.S. "Water Resources Development Act",[134] diversion of water from the Great Lakes Basin requires the approval of all eight Great Lakes governors through the Great Lakes Commission, which rarely occurs. International treaties regulate large diversions.
166
+
167
+ In 1998, the Canadian company Nova Group won approval from the Province of Ontario to withdraw 158,000,000 U.S. gallons (600,000 m3) of Lake Superior water annually to ship by tanker to Asian countries. Public outcry forced the company to abandon the plan before it began. Since that time, the eight Great Lakes Governors and the Premiers of Ontario and Quebec have negotiated the Great Lakes-Saint Lawrence River Basin Sustainable Water Resources Agreement[135] and the Great Lakes-St. Lawrence River Basin Water Resources Compact[136] that would prevent most future diversion proposals and all long-distance ones. The agreements strengthen protection against abusive water withdrawal practices within the Great Lakes basin. On December 13, 2005, the Governors and Premiers signed these two agreements, the first of which is between all ten jurisdictions. It is somewhat more detailed and protective, though its legal strength has not yet been tested in court. The second, the Great Lakes Compact, has been approved by the state legislatures of all eight states that border the Great Lakes as well as the U.S. Congress, and was signed into law by President George W. Bush on October 3, 2008.[137]
168
+
169
+ The Great Lakes Restoration Initiative, described as "the largest investment in the Great Lakes in two decades",[138] was funded at $475 million in the U.S. federal government's Fiscal Year 2011 budget, and $300 million in the Fiscal Year 2012 budget. Through the program a coalition of federal agencies is making grants to local and state entities for toxics cleanups, wetlands and coastline restoration projects, and invasive species-related projects.
en/227.html.txt ADDED
@@ -0,0 +1,7 @@
 
 
 
 
 
 
 
 
1
+ Andrology (from Ancient Greek: ἀνήρ, anēr, genitive ἀνδρός, andros, "man"; and -λογία, -logia) is the medical specialty that deals with male health, particularly problems of the male reproductive system and urological problems that are unique to men. It is the counterpart to gynaecology, which deals with medical issues specific to female health, especially reproductive and urological health.
2
+
3
+ Andrology covers anomalies in the connective tissues pertaining to the genitalia, as well as changes in the volume of cells, such as in genital hypertrophy or macrogenitosomia.[1]
4
+
5
+ From reproductive and urologic viewpoints, male-specific medical and surgical procedures include vasectomy, vasovasostomy (one of the vasectomy reversal procedures), orchidopexy and circumcision as well as intervention to deal with male genitourinary disorders such as the following:
6
+
7
+ Unlike gynaecology, which has a plethora of medical board certification programs worldwide, andrology has none. Andrology has only been studied as a distinct specialty since the late 1960s: the first specialist journal on the subject was the German periodical Andrologie (now called Andrologia), published from 1969 onwards.[2]
en/2270.html.txt ADDED
@@ -0,0 +1,169 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Coordinates: 45°N 84°W
2
+
3
+ The Great Lakes, or the Great Lakes of North America, are a series of interconnected freshwater lakes in the upper mid-east region of North America that connect to the Atlantic Ocean through the Saint Lawrence River. In general, they are on or near the Canada–United States border. They are lakes Superior, Michigan, Huron, Erie, and Ontario. Hydrologically, there are only four lakes, because lakes Michigan and Huron join at the Straits of Mackinac. The lakes form the basis for the Great Lakes Waterway.
4
+
5
+ The Great Lakes are the largest group of freshwater lakes on Earth by total area, and second-largest by total volume, containing 21% of the world's surface fresh water by volume.[1][2][3] The total surface area is 94,250 square miles (244,106 km2), and the total volume (measured at the low water datum) is 5,439 cubic miles (22,671 km3),[4] slightly less than the volume of Lake Baikal (5,666 cu mi or 23,615 km3, 22–23% of the world's surface fresh water). Due to their sea-like characteristics (rolling waves, sustained winds, strong currents, great depths, and distant horizons), the five Great Lakes have long been referred to as inland seas.[5] Lake Superior is the second-largest lake in the world by area and the largest freshwater lake by surface area. Lake Michigan is the largest lake that is entirely within one country.[6][7][8][9]
6
+
7
+ The Great Lakes began to form at the end of the last glacial period around 14,000 years ago, as retreating ice sheets exposed the basins they had carved into the land which then filled with meltwater.[10] The lakes have been a major source for transportation, migration, trade, and fishing, serving as a habitat to many aquatic species in a region with much biodiversity.
8
+
9
+ The surrounding region is called the Great Lakes region, which includes the Great Lakes Megalopolis.[11]
10
+
11
+ Though the five lakes lie in separate basins, they form a single, naturally interconnected body of fresh water within the Great Lakes Basin. As a chain of lakes and rivers, they connect the east-central interior of North America to the Atlantic Ocean. From the interior to the outlet at the Saint Lawrence River, water flows from Superior to Huron and Michigan, southward to Erie, and finally northward to Lake Ontario. The lakes drain a large watershed via many rivers and are studded with approximately 35,000 islands.[12] There are also several thousand smaller lakes, often called "inland lakes", within the basin.[13] The surface area of the five primary lakes combined is roughly equal to the size of the United Kingdom, while the surface area of the entire basin (the lakes and the land they drain) is about the size of the UK and France combined.[14] Lake Michigan is the only one of the Great Lakes that is entirely within the United States; the others form a water boundary between the United States and Canada. The lakes are divided among the jurisdictions of the Canadian province of Ontario and the U.S. states of Michigan, Wisconsin, Minnesota, Illinois, Indiana, Ohio, Pennsylvania, and New York. Both the province of Ontario and the state of Michigan include in their boundaries portions of four of the lakes: Ontario does not border Lake Michigan, and Michigan does not border Lake Ontario. New York and Wisconsin's jurisdictions extend into two lakes, and each of the remaining states extends into one of the lakes.
12
+
13
+ As the surfaces of Lakes Superior, Huron, Michigan, and Erie are all at approximately the same elevation above sea level, while Lake Ontario is significantly lower, and because the Niagara Escarpment precludes all natural navigation, the four upper lakes are commonly called the "upper great lakes". This designation is not universal, however. Those living on the shore of Lake Superior often refer to all the other lakes as "the lower lakes", because they are farther south. Sailors of bulk freighters transferring cargoes from Lake Superior and northern Lake Michigan and Lake Huron to ports on Lake Erie or Ontario commonly refer to the latter as the lower lakes and Lakes Michigan, Huron, and Superior as the upper lakes. This corresponds to thinking of lakes Erie and Ontario as "down south" and the others as "up north". Vessels sailing north on Lake Michigan are considered "upbound" even though they are sailing toward its effluent current.[24]
14
+
15
+ Lakes Huron and Michigan are sometimes considered a single lake, called Lake Michigan–Huron, because they are one hydrological body of water connected by the Straits of Mackinac.[25] The straits are five miles (8 km) wide[14] and 120 feet (37 m) deep; the water levels rise and fall together,[26] and the flow between Michigan and Huron frequently reverses direction.
16
+
17
+ Dispersed throughout the Great Lakes are approximately 35,000 islands.[12] The largest among them is Manitoulin Island in Lake Huron, the largest island in any inland body of water in the world.[34] The second-largest island is Isle Royale in Lake Superior.[35] Both of these islands are large enough to contain multiple lakes themselves; for instance, Manitoulin Island's Lake Manitou is the world's largest lake on a freshwater island.[36] Some of these lakes even have their own islands, such as Treasure Island in Lake Mindemoya on Manitoulin Island.
18
+
19
+ The Great Lakes also have several peninsulas between them, including the Door Peninsula, the Peninsulas of Michigan, and the Ontario Peninsula. Some of these peninsulas even contain smaller peninsulas, such as the Keweenaw Peninsula, the Thumb Peninsula, the Bruce Peninsula, and the Niagara Peninsula. Population centers on the peninsulas include Grand Rapids and Detroit in Michigan along with London, Hamilton, Brantford, and Toronto in Ontario.
20
+
21
+ Although the Saint Lawrence Seaway and Great Lakes Waterway make the Great Lakes accessible to ocean-going vessels,[37] shifts in shipping to wider ocean-going container ships—which do not fit through the locks on these routes—have limited container shipping on the lakes. Most Great Lakes trade is of bulk material, and bulk freighters of Seawaymax-size or less can move throughout the entire lakes and out to the Atlantic.[38] Larger ships are confined to working in the lakes themselves. Only barges can access the Illinois Waterway system providing access to the Gulf of Mexico via the Mississippi River. Despite their vast size, large sections of the Great Lakes freeze over in winter, interrupting most shipping from January to March. Some icebreakers ply the lakes, keeping the shipping lanes open through other periods of ice on the lakes.
22
+
23
+ The Great Lakes are also connected by the Chicago Sanitary and Ship Canal to the Gulf of Mexico by way of the Illinois River (from the Chicago River) and the Mississippi River. An alternate track is via the Illinois River (from Chicago), to the Mississippi, up the Ohio, and then through the Tennessee–Tombigbee Waterway (a combination of a series of rivers and lakes and canals), to Mobile Bay and the Gulf of Mexico. Commercial tug-and-barge traffic on these waterways is heavy.[39]
24
+
25
+ Pleasure boats can also enter or exit the Great Lakes by way of the Erie Canal and Hudson River in New York. The Erie Canal connects to the Great Lakes at the east end of Lake Erie (at Buffalo, New York) and at the south side of Lake Ontario (at Oswego, New York).
26
+
27
+ In 2009, the lakes contained 84% of the surface freshwater of North America;[40] if the water were evenly distributed over the entire continent's land area, it would reach a depth of 5 feet (1.5 meters).[14] The source of water levels in the lakes is tied to what was left by melting glaciers when the lakes took their present form. Annually, only about 1% is "new" water originating from rivers, precipitation, and groundwater springs that drain into the lakes. Historically, evaporation has been balanced by drainage, making the level of the lakes constant.[14]
28
+
29
+ Intensive human population growth only began in the region in the 20th century and continues today.[14] At least two human water use activities have been identified as having the potential to affect the lakes' levels: diversion (the transfer of water to other watersheds) and consumption (substantially done today by the use of lake water to power and cool electric generation plants, resulting in evaporation).[41]
30
+
31
+ The physical impacts of climate change can be seen in water levels in the Great Lakes over the past century.[42] In 1997, the United Nations' Intergovernmental Panel on Climate Change predicted: "the following lake level declines could occur: Lake Superior −0.2 to −0.5 m, Lakes Michigan and Huron −1.0 to −2.5 m, and Lake Erie −0.9 to −1.9 m."[43] In 2009, it was predicted that global warming would decrease water levels.[44] In 2013, record low water levels in the Great Lakes were attributed to climate change.[45]
32
+
33
+ The water level of Lake Michigan–Huron remained fairly constant over the 20th century,[46] but nevertheless dropped more than 6 feet from the record high of 1986 to the record low of 2013.[47] In 2012, National Geographic tied the drop in water level to climate change,[48] as did the Natural Resources Defense Council.[49] One newspaper reported that the long-term average level has gone down about 20 inches because of dredging and subsequent erosion in the St. Clair River. Lake Michigan–Huron hit all-time record low levels in 2013; according to the US Army Corps of Engineers, the previous record low had been set in 1964.[47] By April 2015 the water level had recovered to 7 inches (17.8 cm) more than the "long term monthly average".[50]
34
+
35
+ The Great Lakes contain 21% of the world's surface fresh water: 5,472 cubic miles (22,810 km3), or about 6.0×10^15 (6 quadrillion) U.S. gallons (2.3×10^16 liters). This is enough water to cover the 48 contiguous U.S. states to a uniform depth of 9.5 feet (2.9 m). Although the lakes contain a large percentage of the world's fresh water, they supply only a small portion of U.S. drinking water on a national basis.[57]
36
+
37
+ The total surface area of the lakes is approximately 94,250 square miles (244,100 km2)—nearly the same size as the United Kingdom, and larger than the U.S. states of New York, New Jersey, Connecticut, Rhode Island, Massachusetts, Vermont, and New Hampshire combined.[58]
38
+
39
+ The Great Lakes coast measures approximately 10,500 miles (16,900 km),[14] but the length of a coastline is impossible to measure exactly and is not a well-defined quantity (see Coastline paradox). Of the total 10,500 miles (16,900 km) of shoreline, Canada borders approximately 5,200 miles (8,400 km), while the remaining 5,300 miles (8,500 km) are bordered by the United States. Michigan has the longest Great Lakes shoreline of any U.S. state, bordering roughly 3,288 miles (5,292 km), followed by Wisconsin (820 miles (1,320 km)), New York (473 miles (761 km)), and Ohio (312 miles (502 km)).[59] Traversing the shoreline of all the lakes would cover a distance roughly equivalent to travelling halfway around the world at the equator.[14]
40
+
41
+ It has been estimated that the foundational geology that created the conditions shaping the present-day upper Great Lakes was laid from 1.1 to 1.2 billion years ago,[14][60] when two previously fused tectonic plates split apart and created the Midcontinent Rift, which crossed the Great Lakes Tectonic Zone. A valley formed, providing a basin that eventually became modern-day Lake Superior. When a second fault line, the Saint Lawrence rift, formed approximately 570 million years ago,[14] the basis for Lakes Ontario and Erie was created, along with what would become the Saint Lawrence River.
42
+
43
+ The Great Lakes are estimated to have been formed at the end of the last glacial period (the Wisconsin glaciation ended 10,000 to 12,000 years ago), when the Laurentide Ice Sheet receded.[10] The retreat of the ice sheet left behind a large amount of meltwater (see Lake Algonquin, Lake Chicago, Glacial Lake Iroquois, and Champlain Sea) that filled up the basins that the glaciers had carved, thus creating the Great Lakes as we know them today.[61] Because of the uneven nature of glacier erosion, some higher hills became Great Lakes islands. The Niagara Escarpment follows the contour of the Great Lakes between New York and Wisconsin. Land below the glaciers "rebounded" as it was uncovered.[62] Since the glaciers covered some areas longer than others, this glacial rebound occurred at different rates.
44
+
45
+ A notable modern phenomenon is the formation of ice volcanoes over the lakes during wintertime. Storm-generated waves carve the lakes' ice sheet and create conical mounds through the eruption of water and slush. The process is only well-documented in the Great Lakes, and has been credited with sparing the southern shorelines from worse rocky erosion.[63]
46
+
47
+ The Great Lakes have a humid continental climate,
48
+ Köppen climate classification Dfa (in southern areas) and Dfb (in northern parts),[64] with varying influences from air masses from other regions, including dry, cold Arctic systems, mild Pacific air masses from the west, and warm, wet tropical systems from the south and the Gulf of Mexico.[65] The lakes themselves also have a moderating effect on the climate; they can increase precipitation totals and produce lake-effect snowfall.[64]
49
+
50
+ The Great Lakes can have an effect on regional weather called lake-effect snow, which is sometimes very localized. Even late in winter, the lakes often have no icepack in the middle. The prevailing westerly winds pick up air and moisture from the lake surface, which is slightly warmer than the cold air above it. As the moist air passes over the colder land surface, the moisture often produces concentrated, heavy snowfall that sets up in bands or "streamers". This is similar to the effect of warmer air dropping snow as it passes over mountain ranges. During freezing weather with high winds, the "snow belts" receive regular snowfall from this localized weather pattern, especially along the eastern shores of the lakes. Snow belts are found in Wisconsin, Michigan, Ohio, Pennsylvania, and New York, United States; and Ontario, Canada.
51
+
52
+ The lakes also moderate seasonal temperatures to some degree, though not as strongly as large oceans do; they absorb heat and cool the air in summer, then slowly radiate that heat in autumn. They protect against frost during transitional weather and keep summertime temperatures cooler than farther inland. This effect can be very localized and can be overridden by offshore wind patterns. This temperature buffering produces areas known as "Fruit Belts", where fruit typically grown much farther south can be produced. For instance, western Michigan has apple and cherry orchards, and vineyards are cultivated adjacent to the lake shore as far north as the Grand Traverse Bay and Nottawasaga Bay in central Ontario. The eastern shore of Lake Michigan and the southern shore of Lake Erie have many successful wineries because of the moderating effect, as does the Niagara Peninsula between Lake Erie and Lake Ontario. A similar phenomenon allows wineries to flourish in the Finger Lakes region of New York, as well as in Prince Edward County, Ontario, on Lake Ontario's northeast shore. Related to the lake effect is the regular occurrence of fog over medium-sized areas, particularly along the shorelines of the lakes. This is most noticeable along Lake Superior's shores.
53
+
54
+ The Great Lakes have been observed to help intensify storms, such as Hurricane Hazel in 1954 and the 2011 Goderich, Ontario tornado, which moved onshore as a tornadic waterspout. In 1996, a rare tropical or subtropical storm was observed forming in Lake Huron, dubbed the 1996 Lake Huron cyclone. Large severe thunderstorms covering wide areas are well known in the Great Lakes during mid-summer; these mesoscale convective complexes (MCCs)[66] can damage wide swaths of forest and shatter glass in city buildings. These storms mainly occur at night, and the systems sometimes have small embedded tornadoes, but more often straight-line winds accompanied by intense lightning.
55
+
56
+ Historically, the Great Lakes, in addition to their lake ecology, were surrounded by various forest ecoregions (except in a relatively small area of southeast Lake Michigan where savanna or prairie occasionally intruded). Logging, urbanization, and agricultural use have changed that relationship. In the early 21st century, Lake Superior's shores are 91% forested, Lake Huron's 68%, Lake Ontario's 49%, Lake Michigan's 41%, and Lake Erie's, where logging and urbanization have been most extensive, 21%. Some of these forests are second or third growth (i.e., they have been logged before, changing their composition). At least 13 wildlife species are documented as having become extinct since the arrival of Europeans, and many more are threatened or endangered.[14] Meanwhile, exotic and invasive species have also been introduced.
57
+
58
+ While the organisms living on the bottom of shallow waters are similar to those found in smaller lakes, the deep waters contain organisms found only in deep, cold lakes of the northern latitudes. These include the delicate opossum shrimp (order Mysida), the deepwater scud (a crustacean of the order Amphipoda), two types of copepods, and the deepwater sculpin (a spiny, large-headed fish).[68]
59
+
60
+ The Great Lakes are an important source of fishing. Early European settlers were astounded by both the variety and quantity of fish; there were 150 different species in the Great Lakes.[14] Throughout history, fish populations were the early indicator of the condition of the Lakes and have remained one of the key indicators even in the current era of sophisticated analyses and measuring instruments. According to the bi-national (U.S. and Canadian) resource book, The Great Lakes: An Environmental Atlas and Resource Book: "The largest Great Lakes fish harvests were recorded in 1889 and 1899 at some 67,000 tonnes (66,000 long tons; 74,000 short tons) [147 million pounds]."[69]
61
+
62
+ By 1801, the New York Legislature found it necessary to pass regulations curtailing obstructions to the natural migrations of Atlantic salmon from Lake Erie into their spawning channels. In the early 19th century, the government of Upper Canada found it necessary to introduce similar legislation prohibiting the use of weirs and nets at the mouths of Lake Ontario's tributaries. Other protective legislation was passed, as well, but enforcement remained difficult.[70]
63
+
64
+ On both sides of the Canada–United States border, dams and impoundments have proliferated, necessitating more regulatory efforts. Concerns by the mid-19th century included obstructions in the rivers which prevented salmon and lake sturgeon from reaching their spawning grounds. The Wisconsin Fisheries Commission noted a reduction of roughly 25% in general fish harvests by 1875. The states have removed dams from rivers where necessary.[clarification needed][71]
65
+
66
+ Overfishing has been cited as a possible reason for the decline in populations of various whitefish, which are important for their culinary desirability and, hence, their economic consequence. Between 1879 and 1899, reported whitefish harvests declined from some 24.3 million pounds (11 million kg) to just over 9 million pounds (4 million kg).[72] By 1900, commercial fishermen on Lake Michigan were hauling in an average of 41 million pounds of fish annually.[73] By 1938, Wisconsin's commercial fishing operations were motorized and mechanized, generating jobs for more than 2,000 workers and hauling 14 million pounds per year.[73] The population of giant freshwater mussels was eliminated as the mussels were harvested for use as buttons by early Great Lakes entrepreneurs.[72] Since 2000, the invasive quagga mussel has smothered the bottom of Lake Michigan almost from shore to shore, and its numbers are estimated at 900 trillion.[73]
67
+
68
+ The influx of parasitic lamprey populations after the development of the Erie Canal and the much later Welland Canal led the federal governments of the US and Canada to work on joint proposals to control them. By the mid-1950s, the lake trout populations of Lakes Michigan and Huron had been sharply reduced, with the lamprey deemed largely to blame. This led to the launch of the bi-national Great Lakes Fishery Commission.
69
+
70
The Great Lakes: An Environmental Atlas and Resource Book (1972) noted: "Only pockets remain of the once large commercial fishery."[69] But water quality improvements realized during the 1970s and 1980s, combined with successful salmonid stocking programs, have enabled the growth of a large recreational fishery.[74] The last commercial fisherman left Milwaukee in 2011 because of overfishing and anthropogenic changes to the biosphere.[73]

Since the 19th century, an estimated 160 new species have found their way into the Great Lakes ecosystem, and many have become invasive; introductions via overseas ships' ballast water and hull fouling are causing severe economic and ecological impacts.[75][76] According to the Inland Seas Education Association, on average a new species enters the Great Lakes every eight months.[76]

Introductions into the Great Lakes include the zebra mussel, first discovered in 1988, and the quagga mussel, first discovered in 1989. The mollusks are efficient filter feeders, competing with native mussels and reducing available food and spawning grounds for fish. In addition, the mussels may be a nuisance to industries by clogging pipes. The U.S. Fish and Wildlife Service estimates that the economic impact of the zebra mussel could be about $5 billion over the next decade.[77]

The alewife first entered the system west of Lake Ontario via 19th-century canals. By the 1960s, the small silver fish had become a familiar nuisance to beachgoers across Lakes Michigan, Huron, and Erie. Periodic mass die-offs result in vast numbers of the fish washing up on shore; estimates by various governments have placed the share of Lake Michigan's biomass made up of alewives in the early 1960s as high as 90%. In the late 1960s, the various state and federal governments began stocking several species of salmonids, including the native lake trout as well as non-native chinook and coho salmon; by the 1980s, alewife populations had dropped drastically.[78] The ruffe, a small percid fish from Eurasia, became the most abundant fish species in Lake Superior's Saint Louis River within five years of its detection in 1986. Its range, which has expanded to Lake Huron, poses a significant threat to the lower lake fishery.[79] Five years after first being observed in the St. Clair River, the round goby can now be found in all of the Great Lakes. The goby is considered undesirable for several reasons: it preys upon bottom-feeding fish, overruns optimal habitat, spawns multiple times a season, and can survive poor water quality conditions.[80]

Several species of exotic water fleas have accidentally been introduced into the Great Lakes, such as the spiny waterflea, Bythotrephes longimanus, and the fishhook waterflea, Cercopagis pengoi, potentially having an effect on the zooplankton population. Several species of crayfish have also been introduced that may compete with native crayfish populations. More recently an electric fence has been set up across the Chicago Sanitary and Ship Canal in order to keep several species of invasive Asian carp out of the area. These fast-growing planktivorous fish have heavily colonized the Mississippi and Illinois river systems.[81] The sea lamprey, which has been particularly damaging to the native lake trout population, is another example of an invasive species of marine origin in the Great Lakes.[82] Invasive species, particularly zebra and quagga mussels, may be at least partially responsible for the collapse of the deepwater demersal fish community in Lake Huron,[83] as well as drastic, unprecedented changes in the zooplankton community of the lake.[84]

Scientists understand that the microscopic aquatic life of the lakes is abundant, but know very little about some of the most plentiful microbes and their environmental effects in the Great Lakes. Although a drop of lake water may contain 1 million bacterial cells and 10 million viruses, only since 2012 has there been a long-term study of the lakes' micro-organisms. Between 2012 and 2019, more than 160 new species were discovered.[85]

Native habitats and ecoregions in the Great Lakes region include:

Plant lists include:

Logging

Logging of the extensive forests in the Great Lakes region removed riparian and adjacent tree cover over rivers and streams, which had provided shade, moderating water temperatures in fish spawning grounds. The removal of trees also destabilized the soil, with greater volumes washed into stream beds, causing siltation of gravel beds and more frequent flooding.

Running cut logs down the tributary rivers into the Great Lakes also dislocated sediments. In 1884, the New York Fish Commission determined that the dumping of sawmill waste (chips and sawdust) had impacted fish populations.[86]

The first U.S. Clean Water Act, passed in 1972 by a Congressional override of President Richard Nixon's veto, was a key piece of legislation,[87] along with the bi-national Great Lakes Water Quality Agreement signed by Canada and the U.S. A variety of steps taken to process industrial and municipal pollution discharges into the system greatly improved water quality by the 1980s, and Lake Erie in particular is significantly cleaner.[88] Discharge of toxic substances has been sharply reduced. Federal and state regulations control substances like PCBs. The first of 43 "Great Lakes Areas of Concern" to be formally "de-listed" following successful cleanup was Ontario's Collingwood Harbour in 1994; Ontario's Severn Sound followed in 2003.[89] Presque Isle Bay in Pennsylvania is formally listed as in recovery, as is Ontario's Spanish Harbour. Dozens of other Areas of Concern have received partial cleanups, such as the Rouge River (Michigan) and Waukegan Harbor (Illinois).[90]

Phosphate detergents were historically a major source of the nutrients feeding algae blooms in the Great Lakes, in particular in the warmer and shallower portions of the system such as Lake Erie, Saginaw Bay, Green Bay, and the southernmost portion of Lake Michigan. By the mid-1980s, most jurisdictions bordering the Great Lakes had restricted phosphate detergents,[91] resulting in sharp reductions in the frequency and extent of the blooms.[citation needed]

Blue-green algae, or cyanobacteria, blooms[92] have been problematic on Lake Erie since 2011.[93] "Not enough is being done to stop fertilizer and phosphorus from getting into the lake and causing blooms," said Michael McKay, executive director of the Great Lakes Institute for Environmental Research (GLIER) at the University of Windsor. The largest Lake Erie bloom to date occurred in 2015, rating 10.5 on the severity index; the 2011 bloom rated 10.[94] In early August 2019, satellite images depicted a bloom stretching up to 1,300 square kilometres on Lake Erie, with the heaviest concentration near Toledo, Ohio. "A large bloom does not necessarily mean the cyanobacteria ... will produce toxins," said McKay. Water quality testing was underway in August 2019.[95][94]

Until 1970, mercury was not listed as a harmful chemical, according to the United States Federal Water Quality Administration. Within the past ten years, mercury has become more apparent in water tests. Mercury compounds have been used in paper mills to prevent slime from forming during production, and chemical companies have used mercury to separate chlorine from brine solutions. Studies conducted by the Environmental Protection Agency have shown that when mercury comes in contact with many of the bacteria and compounds in fresh water, it forms methylmercury, a compound with a much greater impact on human health than elemental mercury due to its higher propensity for absorption. This form of mercury is not detrimental to a majority of fish types, but is very detrimental to the people and other wildlife who consume the fish. Mercury is known to cause health problems such as birth defects in humans and animals, and it contributed to the near extinction of eagles in the Great Lakes region.[96]

The amount of raw sewage dumped into the waters was the primary focus of both the first Great Lakes Water Quality Agreement and federal laws passed in both countries during the 1970s. Implementation of secondary treatment of municipal sewage by major cities greatly reduced the routine discharge of untreated sewage during the 1970s and 1980s.[97] The International Joint Commission in 2009 summarized the change: "Since the early 1970s, the level of treatment to reduce pollution from waste water discharges to the Great Lakes has improved considerably. This is a result of significant expenditures to date on both infrastructure and technology, and robust regulatory systems that have proven to be, on the whole, quite effective."[98] The commission reported that all urban sewage treatment systems on the U.S. side of the lakes had implemented secondary treatment, as had all on the Canadian side except for five small systems.[citation needed]

Though contrary to federal laws in both countries, those treatment system upgrades have not yet eliminated combined sewer overflow events.[citation needed] Such events occur when older sewerage systems, which carry storm water and sewage in a single pipe to the treatment plant, are temporarily overwhelmed by heavy rainstorms. Local sewage treatment authorities then must release untreated effluent, a mix of rainwater and sewage, into local water bodies. While enormous public investments such as the Deep Tunnel projects in Chicago and Milwaukee have greatly reduced the frequency and volume of these events, they have not been eliminated. The number of such overflow events in Ontario, for example, has remained flat, according to the International Joint Commission.[98] Reports about this issue on the U.S. side highlight five large municipal systems (those of Detroit, Cleveland, Buffalo, Milwaukee and Gary) as the largest current periodic sources of untreated discharges into the Great Lakes.[99]

Algae such as diatoms, along with other phytoplankton, are photosynthetic primary producers supporting the food web of the Great Lakes,[100] and have been affected by global warming.[101] Changes in the size or in the function of the primary producers may have a direct or an indirect impact on the food web. Photosynthesis carried out by diatoms comprises about one fifth of the total photosynthesis. By taking CO2 out of the water to photosynthesize, diatoms help to stabilize the pH of the water; otherwise the CO2 would react with water, making it more acidic.

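The chemistry behind that last point is the standard carbonate equilibrium of fresh water (a textbook relation, not one taken from the cited studies):

\[ \mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^- \rightleftharpoons 2H^+ + CO_3^{2-}} \]

When diatoms draw dissolved CO2 out of the water for photosynthesis, these equilibria shift to the left, consuming H+ and thereby counteracting acidification.
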
Diatoms acquire inorganic carbon through passive diffusion of CO2 and HCO3−, and they also use carbonic anhydrase-mediated active transport to speed up this process.[102] Large diatoms require more carbon uptake than smaller diatoms.[103] There is a positive correlation between the surface area and the chlorophyll concentration of diatom cells.[104]

Several Native American populations (Paleo-Indians) inhabited the region around 10,000 BC, after the end of the Wisconsin glaciation.[105][106] The peoples of the Great Lakes traded with the Hopewell culture from around 1000 AD, as copper nuggets have been extracted from the region, and fashioned into ornaments and weapons in the mounds of Southern Ohio. The brigantine Le Griffon, which was commissioned by René-Robert Cavelier, Sieur de La Salle, was built at Cayuga Creek, near the southern end of the Niagara River, and became the first known sailing ship to travel the upper Great Lakes on August 7, 1679.[107]

The Rush–Bagot Treaty, signed in 1818 after the War of 1812, and the later Treaty of Washington eventually led to a complete disarmament of naval vessels on the Great Lakes. Nonetheless, both nations maintain coast guard vessels on the Great Lakes.

During settlement, the Great Lakes and their rivers were the only practical means of moving people and freight. Barges from middle North America were able to reach the Atlantic Ocean from the Great Lakes once the Erie Canal opened in 1825 and the Welland Canal in 1829.[108] By 1848, with the opening of the Illinois and Michigan Canal at Chicago, direct access to the Mississippi River was possible from the lakes.[109] With these two canals an all-inland water route was provided between New York City and New Orleans.

The main business of many of the passenger lines in the 19th century was transporting immigrants. Many of the larger cities owe their existence to their position on the lakes as a freight destination as well as to being a magnet for immigrants. After railroads and surface roads developed, the freight and passenger businesses dwindled and, except for ferries and a few foreign cruise ships, have now vanished.
The immigration routes still have an effect today. Immigrants often formed their own communities, and some areas have a pronounced ethnic character, such as Dutch, German, Polish, Finnish, and many others. Since many immigrants settled for a time in New England before moving westward, many areas on the U.S. side of the Great Lakes also have a New England feel, especially in home styles and accent.

Since general freight these days is transported by railroads and trucks, domestic ships mostly move bulk cargoes, such as iron ore, coal and limestone for the steel industry. The domestic bulk freight developed because of the nearby mines. It was more economical to transport the ingredients for steel to centralized plants rather than try to make steel on the spot. Grain exports are also a major cargo on the lakes.

In the 19th and early 20th centuries, iron and other ores such as copper were shipped south (on downbound ships), and supplies, food, and coal were shipped north (upbound). Because of the location of the coal fields in Pennsylvania and West Virginia, and the general northeast track of the Appalachian Mountains, railroads naturally developed shipping routes that went due north to ports such as Erie, Pennsylvania and Ashtabula, Ohio.

Because the lake maritime community largely developed independently, it has some distinctive vocabulary. Ships, no matter the size, are called boats. When the sailing ships gave way to steamships, they were called steamboats—the same term used on the Mississippi. The ships also have a distinctive design (see Lake freighter). Ships that primarily trade on the lakes are known as lakers. Foreign boats are known as salties. Since about 1950, one of the more common sights on the lakes has been the 1,000-by-105-foot (305-by-32-meter), 78,850-long-ton (80,120-metric-ton) self-unloader. This is a laker with a conveyor belt system that can unload itself by swinging a crane over the side.[110] Today, the Great Lakes fleet is much smaller in numbers than it once was because of the increased use of overland freight, and a few larger ships replacing many small ones.

During World War II, the risk of submarine attacks against coastal training facilities motivated the United States Navy to operate two aircraft carriers on the Great Lakes, USS Sable (IX-81) and USS Wolverine (IX-64). Both served as training ships to qualify naval aviators in carrier landing and takeoff.[111] Lake Champlain briefly became the sixth Great Lake of the United States on March 6, 1998, when President Clinton signed Senate Bill 927. This bill, which reauthorized the National Sea Grant Program, contained a line declaring Lake Champlain to be a Great Lake. Not coincidentally, this status allows neighboring states to apply for additional federal research and education funds allocated to these national resources.[112] Following a small uproar, the Senate voted to revoke the designation on March 24 (although New York and Vermont universities would continue to receive funds to monitor and study the lake).[113]

In the early years of the 21st century, water levels in the Great Lakes were a concern.[114] Researchers at the Mowat Centre said that low levels could cost $19 billion by 2050.[115] This was followed by record high levels in all lakes except Ontario in the late 2010s and 2020.[116]

Alan B. McCullough has written that the fishing industry of the Great Lakes got its start "on the American side of Lake Ontario in Chaumont Bay, near the Maumee River on Lake Erie, and on the Detroit River at about the time of the War of 1812." Although the region was sparsely populated until the 1830s, meaning there was little local demand and transporting fish remained prohibitively costly, there were economic and infrastructure developments promising for the future of the fishing industry going into the 1830s, particularly the 1825 opening of the Erie Canal and that of the Welland Canal a few years later. The fishing industry expanded particularly in the waters associated with the fur trade that connect Lake Erie and Lake Huron. In fact, two major suppliers of fish in the 1830s were the fur trading companies Hudson's Bay Company and the American Fur Company.[117]

The catch from these waters would be sent to the growing market for salted fish in Detroit, where merchants involved in the fur trade had already gained some experience handling salted fish. One such merchant was John P. Clark, a shipbuilder and merchant who began selling fish in the area of Manitowoc, Wisconsin, where whitefish was abundant. Another operation cropped up in Georgian Bay, whose Canadian waters were plentiful with trout as well as whitefish. In 1831, Alexander MacGregor from Goderich, Ontario, found whitefish and herring in unusually abundant supply around the Fishing Islands. A contemporary account by Methodist missionary John Evans describes the fish as resembling a "bright cloud moving rapidly through the water".[117]

Except when the water is frozen during winter, more than 100 lake freighters operate continuously on the Great Lakes,[118] which remain a major water transport corridor for bulk goods. The Great Lakes Waterway connects all the lakes; the smaller Saint Lawrence Seaway connects the lakes to the Atlantic Ocean. Some lake freighters are too large to use the Seaway, and operate only on the Waterway and lakes.

In 2002, 162 million net tons of dry bulk cargo were moved on the Lakes. The largest cargoes, in order of volume, were iron ore, grain and potash.[119] The iron ore and much of the stone and coal are used in the steel industry. There is also some shipping of liquid and containerized cargo, but most container ships cannot pass the locks on the Saint Lawrence Seaway because the ships are too wide.

Only four bridges span the Great Lakes other than Lake Ontario, because of the cost of building structures high enough for ships to pass under. The Blue Water Bridge, for example, is more than 150 feet high and more than a mile long.[118]

Major ports on the Great Lakes include Duluth-Superior, Chicago, Detroit, Cleveland, Twin Harbors, Hamilton and Thunder Bay.

The Great Lakes are used to supply drinking water to tens of millions of people in bordering areas. This valuable resource is collectively administered by the state and provincial governments adjacent to the lakes, which have agreed to the Great Lakes Compact to regulate water supply and use.

Tourism and recreation are major industries on the Great Lakes.[120] A few small cruise ships operate on the Great Lakes, including a couple of sailing ships. Sport fishing, commercial fishing, and Native American fishing represent a U.S.$4 billion a year industry, with salmon, whitefish, smelt, lake trout, bass and walleye being major catches. Many other water sports are practiced on the lakes, such as yachting, sea kayaking, diving, kitesurfing, powerboating, and lake surfing.

The Great Lakes Circle Tour is a designated scenic road system connecting all of the Great Lakes and the Saint Lawrence River.[121]

From 1844 through 1857, palace steamers carried passengers and cargo around the Great Lakes.[122] In the first half of the 20th century, large luxurious passenger steamers sailed the lakes in opulence.[123] The Detroit and Cleveland Navigation Company had several vessels at the time and hired workers from all walks of life to help operate these vessels.[124] Several ferries currently operate on the Great Lakes to carry passengers to various islands, including Isle Royale, Drummond Island, Pelee Island, Mackinac Island, Beaver Island, Bois Blanc Island (Ontario), Bois Blanc Island (Michigan), Kelleys Island, South Bass Island, North Manitou Island, South Manitou Island, Harsens Island, Manitoulin Island, and the Toronto Islands. As of 2007, four car ferry services cross the Great Lakes: two on Lake Michigan, a steamer from Ludington, Michigan, to Manitowoc, Wisconsin, and a high-speed catamaran from Milwaukee to Muskegon, Michigan; one on Lake Erie, a boat from Kingsville, Ontario, or Leamington, Ontario, to Pelee Island, Ontario, then on to Sandusky, Ohio; and one on Lake Huron, the M.S. Chi-Cheemaun,[125] which runs between Tobermory and South Baymouth, Manitoulin Island, operated by the Owen Sound Transportation Company. An international ferry across Lake Ontario from Rochester, New York, to Toronto ran during 2004 and 2005, but is no longer in operation.

The large size of the Great Lakes increases the risk of water travel; storms and reefs are common threats. The lakes are prone to sudden and severe storms, in particular in the autumn, from late October until early December. Hundreds of ships have met their end on the lakes. The greatest concentration of shipwrecks lies near Thunder Bay (Michigan), beneath Lake Huron, near the point where eastbound and westbound shipping lanes converge.

The Lake Superior shipwreck coast from Grand Marais, Michigan, to Whitefish Point became known as the "Graveyard of the Great Lakes". More vessels have been lost in the Whitefish Point area than any other part of Lake Superior.[126] The Whitefish Point Underwater Preserve serves as an underwater museum to protect the many shipwrecks in this area.

The first ship to sink in Lake Michigan was Le Griffon, also the first ship to sail the Great Lakes. Caught in a 1679 storm while trading furs between Green Bay and Michilimackinac, she was lost with all hands aboard.[127] Her wreck may have been found in 2004,[128] but a wreck subsequently discovered in a different location was also claimed in 2014 to be Le Griffon.[129]

The largest and last major freighter wrecked on the lakes was the SS Edmund Fitzgerald, which sank on November 10, 1975, just over 17 miles (30 km) offshore from Whitefish Point on Lake Superior. The largest loss of life in a shipwreck out on the lakes may have been that of Lady Elgin, wrecked in 1860 with the loss of around 400 lives on Lake Michigan. In an incident at a Chicago dock in 1915, the SS Eastland rolled over while loading passengers, killing 841.

In August 2007, the Great Lakes Shipwreck Historical Society announced that it had found the wreckage of Cyprus, a 420-foot (130 m) long, century-old ore carrier. Cyprus sank during a Lake Superior storm on October 11, 1907, during its second voyage while hauling iron ore from Superior, Wisconsin, to Buffalo, New York. All but one of the crew of 23 drowned; the sole survivor, Charles Pitz, floated on a life raft for almost seven hours.[130]

In June 2008, deep sea divers in Lake Ontario found the wreck of the 1780 Royal Navy warship HMS Ontario in what has been described as an "archaeological miracle".[131] There are no plans to raise her, as the site is being treated as a war grave.

In June 2010, L.R. Doty was found in Lake Michigan by an exploration diving team led by dive boat Captain Jitka Hanakova from her boat the Molly V.[132] The ship sank in October 1898, probably while attempting to rescue a small schooner, Olive Jeanette, during a terrible storm.

Still missing are the last two warships to sink in the Great Lakes, the French minesweepers Inkerman and Cerisoles, which vanished in Lake Superior during a blizzard in 1918. Seventy-eight lives were lost, making it the largest loss of life on Lake Superior and the greatest unexplained loss of life on the Great Lakes.

In 1872, a treaty gave access to the St. Lawrence River to the United States, and access to Lake Michigan to the Dominion of Canada.[133] The International Joint Commission was established in 1909 to help prevent and resolve disputes relating to the use and quality of boundary waters, and to advise Canada and the United States on questions related to water resources. Diversion of lake water is a concern to both Americans and Canadians. Some water is diverted through the Chicago River to operate the Illinois Waterway, but the flow is limited by treaty. Possible schemes for bottled water plants and diversion to dry regions of the continent raise concerns. Under the U.S. "Water Resources Development Act",[134] diversion of water from the Great Lakes Basin requires the approval of all eight Great Lakes governors through the Great Lakes Commission, which rarely occurs. International treaties regulate large diversions.

In 1998, the Canadian company Nova Group won approval from the Province of Ontario to withdraw 158,000,000 U.S. gallons (600,000 m3) of Lake Superior water annually to ship by tanker to Asian countries. Public outcry forced the company to abandon the plan before it began. Since that time, the eight Great Lakes Governors and the Premiers of Ontario and Quebec have negotiated the Great Lakes-Saint Lawrence River Basin Sustainable Water Resources Agreement[135] and the Great Lakes-St. Lawrence River Basin Water Resources Compact[136] that would prevent most future diversion proposals and all long-distance ones. The agreements strengthen protection against abusive water withdrawal practices within the Great Lakes basin. On December 13, 2005, the Governors and Premiers signed these two agreements, the first of which is between all ten jurisdictions. It is somewhat more detailed and protective, though its legal strength has not yet been tested in court. The second, the Great Lakes Compact, has been approved by the state legislatures of all eight states that border the Great Lakes as well as the U.S. Congress, and was signed into law by President George W. Bush on October 3, 2008.[137]

The Great Lakes Restoration Initiative, described as "the largest investment in the Great Lakes in two decades",[138] was funded at $475 million in the U.S. federal government's Fiscal Year 2011 budget, and $300 million in the Fiscal Year 2012 budget. Through the program, a coalition of federal agencies makes grants to local and state entities for toxics cleanups, wetlands and coastline restoration projects, and invasive species-related projects.

en/2271.html.txt ADDED
@@ -0,0 +1,155 @@
A skyscraper is a continuously habitable high-rise building that has over 40 floors[1] and is taller than 150 m (492 ft).[2][3][4][5] Historically, the term first referred to buildings with 10 to 20 floors in the 1880s. The meaning shifted with advancing construction technology during the 20th century.[1] Skyscrapers may host offices, hotels, residential spaces, and retail spaces.

One common feature of skyscrapers is having a steel framework that supports curtain walls. These curtain walls either bear on the framework below or are suspended from the framework above, rather than resting on load-bearing walls of conventional construction. Some early skyscrapers have a steel frame that enables the construction of load-bearing walls taller than those made of reinforced concrete.

Modern skyscrapers' walls are not load-bearing, and most skyscrapers are characterised by large surface areas of windows made possible by steel frames and curtain walls. However, skyscrapers can have curtain walls that mimic conventional walls with a small surface area of windows. Modern skyscrapers often have a tubular structure, and are designed to act like a hollow cylinder to resist wind, seismic, and other lateral loads. To appear more slender, allow less wind exposure and transmit more daylight to the ground, many skyscrapers have a design with setbacks, which in some cases is also structurally required.

As of January 2020[update], only nine cities have more than 100 skyscrapers that are 150 m (492 ft) or taller: Hong Kong (355), New York City (284), Shenzhen (235), Dubai (199), Shanghai (163), Tokyo (155), Chongqing (127), Chicago (126), and Guangzhou (115).[6]

The term skyscraper was first applied to buildings of steel-framed construction of at least 10 storeys in the late 19th century, a result of public amazement at the tall buildings being built in major American cities like Chicago, New York City, Philadelphia, Detroit, and St. Louis.[7] The first steel-frame skyscraper was the Home Insurance Building (originally 10 storeys with a height of 42 m or 138 ft) in Chicago, Illinois in 1885. Some point to Philadelphia's 10-storey Jayne Building (1849–50) as a proto-skyscraper, or to New York's seven-floor Equitable Life Building, built in 1870, for its innovative use of a kind of skeletal frame, but such designation depends largely on what factors are chosen. Even the scholars making the argument find it to be purely academic.[8][9]

The structural definition of the word skyscraper was refined later by architectural historians, based on engineering developments of the 1880s that had enabled construction of tall multi-storey buildings. This definition was based on the steel skeleton—as opposed to constructions of load-bearing masonry, which passed their practical limit in 1891 with Chicago's Monadnock Building.

What is the chief characteristic of the tall office building? It is lofty. It must be tall. The force and power of altitude must be in it, the glory and pride of exaltation must be in it. It must be every inch a proud and soaring thing, rising in sheer exultation that from bottom to top it is a unit without a single dissenting line.

Some structural engineers define a high-rise as any vertical construction for which wind is a more significant load factor than earthquake or weight. Note that this criterion fits not only high-rises but some other tall structures, such as towers.

Different organizations from the United States and Europe define skyscrapers as buildings at least 150 metres (492 ft) in height,[10][11][2][12] with "supertall" skyscrapers for buildings higher than 300 m (984 ft) and "megatall" skyscrapers for those taller than 600 m (1,969 ft).[13]

The word skyscraper often carries a connotation of pride and achievement. The skyscraper, in name and social function, is a modern expression of the age-old symbol of the world center or axis mundi: a pillar that connects earth to heaven and the four compass directions to one another.[14]

The tallest building in ancient times was the 146 m (479 ft) Great Pyramid of Giza in ancient Egypt, built in the 26th century BC. It was not surpassed in height for thousands of years, until the 160 m (520 ft) Lincoln Cathedral exceeded it from 1311 to 1549, before its central spire collapsed.[15] The latter in turn was not surpassed until the 555-foot (169 m) Washington Monument in 1884. However, being uninhabited, none of these structures actually comply with the modern definition of a skyscraper.

High-rise apartments flourished in classical antiquity. Ancient Roman insulae in imperial cities reached 10 and more storeys.[16] Beginning with Augustus (r. 30 BC–14 AD), several emperors attempted to establish limits of 20–25 m for multi-storey buildings, but met with only limited success.[17][18] Lower floors were typically occupied by shops or wealthy families, the upper floors rented to the lower classes.[16] Surviving Oxyrhynchus Papyri indicate that seven-storey buildings existed in provincial towns, such as in 3rd-century AD Hermopolis in Roman Egypt.[19]

The skylines of many important medieval cities had large numbers of high-rise urban towers, built by the wealthy for defense and status. The residential towers of 12th-century Bologna numbered between 80 and 100 at a time, the tallest of which is the 97.2 m (319 ft) high Asinelli Tower. A Florentine law of 1251 decreed that all urban buildings be immediately reduced to less than 26 m.[20] Even medium-sized towns of the era are known to have had proliferations of towers, such as San Gimignano, with 72 towers of up to 51 m in height.[20]

The medieval Egyptian city of Fustat housed many high-rise residential buildings, which Al-Muqaddasi in the 10th century described as resembling minarets. Nasir Khusraw in the early 11th century described some of them rising up to 14 storeys, with roof gardens on the top floor complete with ox-drawn water wheels for irrigating them.[21] Cairo in the 16th century had high-rise apartment buildings where the two lower floors were for commercial and storage purposes and the multiple storeys above them were rented out to tenants.[22] An early example of a city consisting entirely of high-rise housing is the 16th-century city of Shibam in Yemen. Shibam was made up of over 500 tower houses,[23] each one rising 5 to 11 storeys high,[24] with each floor being an apartment occupied by a single family. The city was built in this way in order to protect it from Bedouin attacks.[23] Shibam still has the tallest mudbrick buildings in the world, with many of them over 30 m (98 ft) high.[25]

An early modern example of high-rise housing was in 17th-century Edinburgh, Scotland, where a defensive city wall defined the boundaries of the city. Due to the restricted land area available for development, the houses increased in height instead. Buildings of 11 storeys were common, and there are records of buildings as high as 14 storeys. Many of the stone-built structures can still be seen today in the old town of Edinburgh. The oldest iron-framed building in the world, although only partially iron-framed, is The Flaxmill (also locally known as the "Maltings"), in Shrewsbury, England. Built in 1797, it is seen as the "grandfather of skyscrapers", since its fireproof combination of cast iron columns and cast iron beams developed into the modern steel frame that made modern skyscrapers possible. In 2013 funding was confirmed to convert the derelict building into offices.[26]

In 1857, Elisha Otis introduced the safety elevator, allowing convenient and safe passenger movement to upper floors, at the E.V. Haughwout Building in New York City. Otis later installed the first commercial passenger elevators in the Equitable Life Building in 1870, considered by a portion of New Yorkers to be the first skyscraper. Another crucial development was the use of a steel frame instead of stone or brick; otherwise the walls on the lower floors of a tall building would be too thick to be practical. An early development in this area was Oriel Chambers in Liverpool, England. It was only five floors high.[27][28][29] Further developments led to what many individuals and organizations consider the world's first skyscraper, the ten-storey Home Insurance Building in Chicago, built in 1884–1885.[30] While its original height of 42.1 m (138 ft) is not considered very impressive today, it was at that time. The building of tall buildings in the 1880s gave the skyscraper its first architectural movement, the Chicago School, which developed what has been called the Commercial Style.[31]

The architect, Major William Le Baron Jenney, created a load-bearing structural frame. In this building, a steel frame supported the entire weight of the walls, instead of load-bearing walls carrying the weight of the building. This development led to the "Chicago skeleton" form of construction. In addition to the steel frame, the Home Insurance Building also utilized fireproofing, elevators, and electrical wiring, key elements in most skyscrapers today.[32]

Burnham and Root's 45 m (148 ft) Rand McNally Building in Chicago, 1889, was the first all-steel framed skyscraper,[33] while Louis Sullivan's 41 m (135 ft) Wainwright Building in St. Louis, Missouri, 1891, was the first steel-framed building with soaring vertical bands to emphasize the height of the building and is therefore considered to be the first early skyscraper.

In 1889, the Mole Antonelliana in Italy was 167 m (549 ft) tall.

Most early skyscrapers emerged in the land-strapped areas of Chicago and New York City toward the end of the 19th century. A land boom in Melbourne, Australia between 1888 and 1891 spurred the creation of a significant number of early skyscrapers, though none of these were steel reinforced and few remain today. Height limits and fire restrictions were later introduced. London builders soon found building heights limited by rules stemming from a complaint by Queen Victoria; these rules continued to exist with few exceptions.

Concerns about aesthetics and fire safety had likewise hampered the development of skyscrapers across continental Europe for the first half of the twentieth century. Some notable exceptions are the 43 m (141 ft) tall 1898 Witte Huis (White House) in Rotterdam; the Royal Liver Building in Liverpool, completed in 1911 and 90 m (300 ft) high;[34] the 57 m (187 ft) tall 1924 Marx House in Düsseldorf, Germany; the 61 m (200 ft) Kungstornen (Kings' Towers) in Stockholm, Sweden, which were built 1924–25;[35] the 89 m (292 ft) Edificio Telefónica in Madrid, Spain, built in 1929; the 87.5 m (287 ft) Boerentoren in Antwerp, Belgium, built in 1932; the 66 m (217 ft) Prudential Building in Warsaw, Poland, built in 1934; and the 108 m (354 ft) Torre Piacentini in Genoa, Italy, built in 1940.

After an early competition between Chicago and New York City for the world's tallest building, New York took the lead by 1895 with the completion of the 103 m (338 ft) tall American Surety Building, leaving New York with the title of the world's tallest building for many years.

Modern skyscrapers are built with steel or reinforced concrete frameworks and curtain walls of glass or polished stone. They use mechanical equipment such as water pumps and elevators. Since the 1960s, according to the CTBUH, the skyscraper has been reoriented away from a symbol for North American corporate power and toward communicating a city or nation's place in the world.[36]

Skyscraper construction entered a three-decades-long era of stagnation in 1930 due to the Great Depression and then World War II. Shortly after the war ended, the Soviet Union began construction on a series of skyscrapers in Moscow. Seven, dubbed the "Seven Sisters", were built between 1947 and 1953; one of them, the Main building of Moscow State University, was the tallest building in Europe for nearly four decades (1953–1990). Other skyscrapers in the style of Socialist Classicism were erected in East Germany (Frankfurter Tor), Poland (PKiN), Ukraine (Hotel Ukrayina), Latvia (Academy of Sciences) and other Eastern Bloc countries. Western European countries also began to permit taller skyscrapers during the years immediately following World War II. Early examples include Edificio España (Spain) and Torre Breda (Italy).

From the 1930s onward, skyscrapers began to appear in various cities in East and Southeast Asia as well as in Latin America. Finally, they also began to be constructed in cities of Africa, the Middle East, South Asia and Oceania (mainly Australia) from the late 1950s on.

Skyscraper projects after World War II typically rejected the classical designs of the early skyscrapers, instead embracing the uniform international style; many older skyscrapers were redesigned to suit contemporary tastes or even demolished—such as New York's Singer Building, once the world's tallest skyscraper.

German architect Ludwig Mies van der Rohe became one of the world's most renowned architects in the second half of the 20th century. He conceived of the glass façade skyscraper[37] and, along with Norwegian Fred Severud,[38] he designed the Seagram Building in 1958, a skyscraper that is often regarded as the pinnacle of modernist high-rise architecture.[39]

Skyscraper construction surged throughout the 1960s. The impetus behind the upswing was a series of transformative innovations[40] which made it possible for people to live and work in "cities in the sky".[41]

In the early 1960s, structural engineer Fazlur Rahman Khan, considered the "father of tubular designs" for high-rises,[42] discovered that the dominating rigid steel frame structure was not the only system apt for tall buildings, marking a new era of skyscraper construction in terms of multiple structural systems.[43] His central innovation in skyscraper design and construction was the concept of the "tube" structural system, including the "framed tube", "trussed tube", and "bundled tube".[44] His "tube concept", using all the exterior wall perimeter structure of a building to simulate a thin-walled tube, revolutionized tall building design.[45] These systems allow greater economic efficiency,[46] and also allow skyscrapers to take on various shapes, no longer needing to be rectangular and box-shaped.[47] The first building to employ the tube structure was the DeWitt-Chestnut apartment building,[40] which is considered to be a major development in modern architecture.[40] These new designs opened an economic door for contractors, engineers, architects, and investors, providing vast amounts of real estate space on minimal plots of land.[41] Over the next fifteen years, many towers were built by Fazlur Rahman Khan and the "Second Chicago School",[48] including the hundred-storey John Hancock Center and the massive 442 m (1,450 ft) Willis Tower.[49] Other pioneers of this field include Hal Iyengar, William LeMessurier, and Minoru Yamasaki, the architect of the World Trade Center.

Many buildings designed in the 1970s lacked a particular style and recalled ornamentation from buildings designed before the 1950s. These design plans ignored the environment and loaded structures with decorative elements and extravagant finishes.[50] This approach to design was opposed by Fazlur Khan, who considered the designs to be whimsical rather than rational and, moreover, a waste of precious natural resources.[51] Khan's work promoted structures integrated with architecture and the least use of material, resulting in the least carbon-emission impact on the environment.[52] The next era of skyscrapers will focus on the environment, including performance of structures, types of material, construction practices, absolute minimal use of materials and natural resources, embodied energy within the structures, and, more importantly, a holistically integrated building systems approach.[50]

Modern building practices regarding supertall structures have led to the study of "vanity height".[53][54] Vanity height, according to the CTBUH, is the distance between the highest floor and a building's architectural top (excluding antennae, flagpoles or other functional extensions). Vanity height first appeared in New York City skyscrapers as early as the 1920s and 1930s, but supertall buildings have relied on such uninhabitable extensions for on average 30% of their height, raising potential definitional and sustainability issues.[55][56][57] The current era of skyscrapers focuses on sustainability, its built and natural environments, including the performance of structures, types of materials, construction practices, absolute minimal use of materials and natural resources, energy within the structure, and a holistically integrated building systems approach. LEED is a current green building standard.[58]

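As a rough illustration of the CTBUH definition, the ratio can be computed directly; a minimal Python sketch, with made-up building figures rather than CTBUH data:

def vanity_height_ratio(architectural_top_m: float, highest_floor_m: float) -> float:
    """Vanity height (the uninhabited top) as a share of total architectural height."""
    return (architectural_top_m - highest_floor_m) / architectural_top_m

# Hypothetical example: a 300 m supertall whose highest occupied floor sits at 210 m.
print(f"{vanity_height_ratio(300.0, 210.0):.0%}")  # prints 30%
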
Architecturally, with the movements of Postmodernism, New Urbanism and New Classical Architecture that became established from the 1980s onward, a more classical approach returned to global skyscraper design, and it remains popular today.[59] Examples are the Wells Fargo Center, NBC Tower, Parkview Square, 30 Park Place, the Messeturm, the iconic Petronas Towers and Jin Mao Tower.

Other contemporary styles and movements in skyscraper design include organic, sustainable, neo-futurist, structuralist, high-tech, deconstructivist, blob, digital, streamline, novelty, critical regionalist, vernacular, Neo Art Deco and neo-historist, also known as revivalist.

3 September is the global commemorative day for skyscrapers, called "Skyscraper Day".[60]

New York City developers competed among themselves, with successively taller buildings claiming the title of "world's tallest" in the 1920s and early 1930s, culminating with the completion of the 318.9 m (1,046 ft) Chrysler Building in 1930 and the 443.2 m (1,454 ft) Empire State Building in 1931, the world's tallest building for forty years. The first World Trade Center tower, 417 m (1,368 ft) tall, became the world's tallest building when completed in 1972. However, it was overtaken by the Sears Tower (now Willis Tower) in Chicago within two years. The 442 m (1,450 ft) tall Sears Tower stood as the world's tallest building for 24 years, from 1974 until 1998, until it was edged out by the 452 m (1,483 ft) Petronas Twin Towers in Kuala Lumpur, which held the title for six years.

The design and construction of skyscrapers involves creating safe, habitable spaces in very tall buildings. The buildings must support their weight, resist wind and earthquakes, and protect occupants from fire. Yet they must also be conveniently accessible, even on the upper floors, and provide utilities and a comfortable climate for the occupants. The problems posed in skyscraper design are considered among the most complex encountered given the balances required between economics, engineering, and construction management.

One common feature of skyscrapers is a steel framework from which curtain walls are suspended, rather than load-bearing walls of conventional construction. Most skyscrapers have a steel frame that enables them to be built taller than typical load-bearing walls of reinforced concrete. Skyscrapers usually have a particularly small surface area of what are conventionally thought of as walls. Because the walls are not load-bearing, most skyscrapers are characterized by large surface areas of windows, made possible by the concept of steel frame and curtain wall. However, skyscrapers can also have curtain walls that mimic conventional walls and have a small surface area of windows.

The concept of a skyscraper is a product of the industrialized age, made possible by cheap fossil-fuel-derived energy and industrially refined raw materials such as steel and concrete. The construction of skyscrapers was enabled by steel frame construction, which began to surpass brick-and-mortar construction at the end of the 19th century and finally overtook it in the 20th century, together with reinforced concrete construction, as the price of steel decreased and labour costs increased.

Steel frames become inefficient and uneconomic for supertall buildings as usable floor space is reduced by progressively larger supporting columns.[61] Since about 1960, tubular designs have been used for high rises. This reduces the usage of material (being more efficient in economic terms: the Willis Tower uses a third less steel than the Empire State Building) yet allows greater height. It allows fewer interior columns, and so creates more usable floor space. It further enables buildings to take on various shapes.

Elevators are characteristic of skyscrapers. In 1852, Elisha Otis introduced the safety elevator, allowing convenient and safe passenger movement to upper floors. Another crucial development was the use of a steel frame instead of stone or brick; otherwise the walls on the lower floors of a tall building would be too thick to be practical. Today major manufacturers of elevators include Otis, ThyssenKrupp, Schindler, and KONE.

Advances in construction techniques have allowed skyscrapers to narrow in width while increasing in height. Some of these new techniques include mass dampers to reduce vibrations and swaying, and gaps to allow air to pass through, reducing wind shear.[62]

Good structural design is important in most building design, but particularly for skyscrapers since even a small chance of catastrophic failure is unacceptable given the high price. This presents a paradox to civil engineers: the only way to assure a lack of failure is to test for all modes of failure, in both the laboratory and the real world. But the only way to know of all modes of failure is to learn from previous failures. Thus, no engineer can be absolutely sure that a given structure will resist all loadings that could cause failure, but can only have large enough margins of safety such that a failure is acceptably unlikely. When buildings do fail, engineers question whether the failure was due to some lack of foresight or due to some unknowable factor.

The load a skyscraper experiences is largely from the force of the building material itself. In most building designs, the weight of the structure is much larger than the weight of the material that it will support beyond its own weight. In technical terms, the dead load, the load of the structure, is larger than the live load, the weight of things in the structure (people, furniture, vehicles, etc.). As such, the amount of structural material required within the lower levels of a skyscraper will be much larger than the material required within higher levels. This is not always visually apparent. The Empire State Building's setbacks are actually a result of the building code at the time (the 1916 Zoning Resolution), and were not structurally required. On the other hand, the John Hancock Center's shape is uniquely the result of how it supports loads. Vertical supports can come in several types, among which the most common for skyscrapers can be categorized as steel frames, concrete cores, tube within tube design, and shear walls.

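Why the lower levels need more material can be seen with a toy gravity-load model; a simplified Python sketch with invented per-floor loads, ignoring wind and dynamic effects:

# Toy model: cumulative gravity load carried by the columns at each storey.
FLOOR_DEAD_LOAD_KN = 8000.0  # illustrative self-weight of one floor (structure, cladding)
FLOOR_LIVE_LOAD_KN = 3000.0  # illustrative occupants, furniture, equipment per floor
STOREYS = 100

def load_at_storey(storey: int) -> float:
    """Total load (kN) carried at `storey`: the sum of every floor above it."""
    floors_above = STOREYS - storey
    return floors_above * (FLOOR_DEAD_LOAD_KN + FLOOR_LIVE_LOAD_KN)

for s in (99, 50, 0):  # near the top, mid-height, ground level
    print(f"storey {s:3d}: {load_at_storey(s):12,.0f} kN")
# Ground-level columns carry 100 times the load of the top storey's columns.
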
The wind loading on a skyscraper is also considerable. In fact, the lateral wind load imposed on supertall structures is generally the governing factor in the structural design. Wind pressure increases with height, so for very tall buildings, the loads associated with wind are larger than dead or live loads.

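In textbook wind engineering (a standard formulation, not a formula from this article's sources), the velocity pressure on the facade grows with elevation z through a power-law wind profile:

\[ q(z) = \tfrac{1}{2}\rho\, v(z)^{2}, \qquad v(z) = v_{\mathrm{ref}} \left( \frac{z}{z_{\mathrm{ref}}} \right)^{\alpha} \]

where \(\rho\) is the air density, \(v_{\mathrm{ref}}\) is the wind speed at a reference height \(z_{\mathrm{ref}}\), and \(\alpha\) is a terrain-roughness exponent (roughly 1/7 over open terrain). Integrating \(q(z)\) times the building width over the full height gives the base shear and overturning moment that the lateral system must resist.
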
Other vertical and horizontal loading factors come from varied, unpredictable sources, such as earthquakes.

By 1895, steel had replaced cast iron as skyscrapers' structural material. Its malleability allowed it to be formed into a variety of shapes, and it could be riveted, ensuring strong connections.[63] The simplicity of a steel frame eliminated the inefficient part of a shear wall, the central portion, and consolidated support members in a much stronger fashion by allowing both horizontal and vertical supports throughout. Among steel's drawbacks is that as more material must be supported as height increases, the distance between supporting members must decrease, which in turn increases the amount of material that must be supported. This becomes inefficient and uneconomic for buildings above 40 storeys tall, as usable floor space is reduced by supporting columns and by the greater usage of steel.[61]

A new structural system of framed tubes was developed by Fazlur Rahman Khan in 1963. The framed tube structure is defined as "a three dimensional space structure composed of three, four, or possibly more frames, braced frames, or shear walls, joined at or near their edges to form a vertical tube-like structural system capable of resisting lateral forces in any direction by cantilevering from the foundation".[64][65] Closely spaced interconnected exterior columns form the tube. Horizontal loads (primarily wind) are supported by the structure as a whole. Framed tubes allow fewer interior columns, and so create more usable floor space, and about half the exterior surface is available for windows. Where larger openings like garage doors are required, the tube frame must be interrupted, with transfer girders used to maintain structural integrity. Tube structures cut down costs, at the same time allowing buildings to reach greater heights. Concrete tube-frame construction[44] was first used in the DeWitt-Chestnut Apartment Building, completed in Chicago in 1963,[66] and soon after in the John Hancock Center and World Trade Center.

The tubular systems are fundamental to tall building design. Most buildings over 40 storeys constructed since the 1960s now use a tube design derived from Khan's structural engineering principles,[61][67] examples including the construction of the World Trade Center, Aon Center, Petronas Towers, Jin Mao Building, and most other supertall skyscrapers since the 1960s.[44] The strong influence of tube structure design is also evident in the construction of the current tallest skyscraper, the Burj Khalifa.[47]

Khan pioneered several other variations of the tube structure design.[citation needed] One of these was the concept of X-bracing, or the trussed tube, first employed for the John Hancock Center. This concept reduced the lateral load on the building by transferring the load into the exterior columns. This allows for a reduced need for interior columns, thus creating more floor space. This concept can be seen in the John Hancock Center, designed in 1965 and completed in 1969. One of the most famous buildings of the structural expressionist style, the skyscraper has a distinctive X-bracing exterior that hints that the structure's skin is indeed part of its 'tubular system'. This idea is one of the architectural techniques the building used to climb to record heights (the tubular system is essentially the spine that helps the building stand upright during wind and earthquake loads). This X-bracing allows for both higher performance from tall structures and the ability to open up the inside floorplan (and usable floor space) if the architect desires.

The John Hancock Center was far more efficient than earlier steel-frame structures. Where the Empire State Building (1931) required about 206 kilograms of steel per square metre and 28 Liberty Street (1961) required 275, the John Hancock Center required only 145.[46] The trussed tube concept was applied to many later skyscrapers, including the Onterie Center, Citigroup Center and Bank of China Tower.[68]

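In relative terms, simple arithmetic on the figures just quoted shows the scale of the savings:

# Steel intensity (kg of steel per square metre of floor), from the text above.
empire_state, liberty_28, hancock = 206.0, 275.0, 145.0
print(f"vs Empire State Building: {1 - hancock / empire_state:.0%} less steel")  # ~30% less
print(f"vs 28 Liberty Street:     {1 - hancock / liberty_28:.0%} less steel")    # ~47% less
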
An important variation on the tube frame is the bundled tube, which uses several interconnected tube frames. The Willis Tower in Chicago used this design, employing nine tubes of varying height to achieve its distinct appearance. The bundled tube structure meant that "buildings no longer need be boxlike in appearance: they could become sculpture."[47]

The invention of the elevator was a precondition for the invention of skyscrapers, given that most people would not (or could not) climb more than a few flights of stairs at a time. The elevators in a skyscraper are not simply a necessary utility, like running water and electricity, but are in fact closely related to the design of the whole structure: a taller building requires more elevators to service the additional floors, but the elevator shafts consume valuable floor space. If the service core, which contains the elevator shafts, becomes too big, it can reduce the profitability of the building. Architects must therefore balance the value gained by adding height against the value lost to the expanding service core.[69]

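That trade-off can be sketched numerically; a deliberately crude Python model with invented parameters (real core sizing involves elevator traffic analysis, zoning and codes):

import math

PLATE_M2 = 2000.0       # gross area of one floor plate (assumed)
SHAFT_M2 = 15.0         # floor area one elevator shaft consumes on every floor (assumed)
FLOORS_PER_SHAFT = 6.0  # floors one shaft can serve adequately (assumed)

def net_rentable_area(floors: int) -> float:
    """Total rentable area after subtracting the elevator core from every floor."""
    shafts = math.ceil(floors / FLOORS_PER_SHAFT)
    per_floor = max(PLATE_M2 - shafts * SHAFT_M2, 0.0)
    return per_floor * floors

for n in (20, 60, 100, 140):
    print(f"{n:3d} floors -> {net_rentable_area(n):9,.0f} m2 rentable")
# Each extra floor adds area, but the expanding core shrinks what every floor yields.
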
Many tall buildings use elevators in a non-standard configuration to reduce their footprint. Buildings such as the former World Trade Center Towers and Chicago's John Hancock Center use sky lobbies, where express elevators take passengers to upper floors which serve as the base for local elevators. This allows architects and engineers to place elevator shafts on top of each other, saving space. Sky lobbies and express elevators take up a significant amount of space, however, and add to the amount of time spent commuting between floors.

Other buildings, such as the Petronas Towers, use double-deck elevators, allowing more people to fit in a single elevator, and reaching two floors at every stop. It is possible to use even more than two levels on an elevator, although this has never been done. The main problem with double-deck elevators is that they cause everyone in the elevator to stop when only people on one level need to get off at a given floor.

+ Buildings with sky lobbies include the World Trade Center, Petronas Twin Towers, Willis Tower and Taipei 101. The 44th-floor sky lobby of the John Hancock Center also featured the first high-rise indoor swimming pool, which remains the highest in America.[70]
112
+
113
+ Skyscrapers are usually situated in city centers where the price of land is high. Constructing a skyscraper becomes justified if the price of land is so high that it makes economic sense to build upward as to minimize the cost of the land per the total floor area of a building. Thus the construction of skyscrapers is dictated by economics and results in skyscrapers in a certain part of a large city unless a building code restricts the height of buildings.
114
+
115
+ Skyscrapers are rarely seen in small cities and they are characteristic of large cities, because of the critical importance of high land prices for the construction of skyscrapers. Usually only office, commercial and hotel users can afford the rents in the city center and thus most tenants of skyscrapers are of these classes.
116
+
117
+ Today, skyscrapers are an increasingly common sight where land is expensive, as in the centers of big cities, because they provide such a high ratio of rentable floor space per unit area of land.
118
+
119
+ One problem with skyscrapers is car parking. In the largest cities most people commute via public transport, but in smaller cities many parking spaces are needed. Multi-storey car parks are impractical to build very tall, so much land area is needed.
120
+
121
+ There may be a correlation between skyscraper construction and great income inequality but this has not been conclusively proven.[71]
122
+
123
+ The amount of steel, concrete, and glass needed to construct a single skyscraper is large, and these materials represent a great deal of embodied energy. Skyscrapers are thus energy intensive buildings, but skyscrapers have a long lifespan, for example the Empire State Building in New York City, United States completed in 1931 and is still in active use.
124
+
125
+ Skyscrapers have considerable mass, which means that they must be built on a sturdier foundation than would be required for shorter, lighter buildings. Building materials must also be lifted to the top of a skyscraper during construction, requiring more energy than would be necessary at lower heights. Furthermore, a skyscraper consumes much electricity because potable and non-potable water have to be pumped to the highest occupied floors, skyscrapers are usually designed to be mechanically ventilated, elevators are generally used instead of stairs, and natural lighting cannot be utilized in rooms far from the windows and the windowless spaces such as elevators, bathrooms and stairwells.
126
+
127
+ Skyscrapers can be artificially lit and the energy requirements can be covered by renewable energy or other electricity generation with low greenhouse gas emissions. Heating and cooling of skyscrapers can be efficient, because of centralized HVAC systems, heat radiation blocking windows and small surface area of the building. There is Leadership in Energy and Environmental Design (LEED) certification for skyscrapers. For example, the Empire State Building received a gold Leadership in Energy and Environmental Design rating in September 2011 and the Empire State Building is the tallest LEED certified building in the United States,[72] proving that skyscrapers can be environmentally friendly. Also the 30 St Mary Axe in London, the United Kingdom is an environmentally friendly skyscraper.
128
+
129
+ In the lower levels of a skyscraper a larger percentage of the building cross section must be devoted to the building structure and services than is required for lower buildings:
130
+
131
+ In low-rise structures, the support rooms (chillers, transformers, boilers, pumps and air handling units) can be put in basements or roof space—areas which have low rental value. There is, however, a limit to how far this plant can be located from the area it serves. The farther away it is the larger the risers for ducts and pipes from this plant to the floors they serve and the more floor area these risers take. In practice this means that in highrise buildings this plant is located on 'plant levels' at intervals up the building.
132
+
133
+ At the beginning of the 20th century, New York City was a center for the Beaux-Arts architectural movement, attracting the talents of such great architects as Stanford White and Carrere and Hastings. As better construction and engineering technology became available as the century progressed, New York City and Chicago became the focal point of the competition for the tallest building in the world. Each city's striking skyline has been composed of numerous and varied skyscrapers, many of which are icons of 20th-century architecture:
134
+
135
+ Momentum in setting records passed from the United States to other nations with the opening of the Petronas Twin Towers in Kuala Lumpur, Malaysia, in 1998. The record for the world's tallest building has remained in Asia since the opening of Taipei 101 in Taipei, Taiwan, in 2004. A number of architectural records, including those of the world's tallest building and tallest free-standing structure, moved to the Middle East with the opening of the Burj Khalifa in Dubai, United Arab Emirates.
136
+
137
+ This geographical transition is accompanied by a change in approach to skyscraper design. For much of the twentieth century large buildings took the form of simple geometrical shapes. This reflected the "international style" or modernist philosophy shaped by Bauhaus architects early in the century. The last of these, the Willis Tower and World Trade Center towers in New York, erected in the 1970s, reflect the philosophy. Tastes shifted in the decade which followed, and new skyscrapers began to exhibit postmodernist influences. This approach to design avails itself of historical elements, often adapted and re-interpreted, in creating technologically modern structures. The Petronas Twin Towers recall Asian pagoda architecture and Islamic geometric principles. Taipei 101 likewise reflects the pagoda tradition as it incorporates ancient motifs such as the ruyi symbol. The Burj Khalifa draws inspiration from traditional Islamic art. Architects in recent years have sought to create structures that would not appear equally at home if set in any part of the world, but that reflect the culture thriving in the spot where they stand.[citation needed]
138
+
139
+ The following list measures height of the roof.[82][failed verification] The more common gauge is the "highest architectural detail"; such ranking would have included Petronas Towers, built in 1996.
140
+
141
+ The iconic World Trade Center twin towers were destroyed in 2001.
142
+
143
+ The Willis Tower in Chicago was the world's tallest building from 1974 to 1998; many still refer to it as the "Sears Tower", its name from inception to 2009.
144
+
145
+ The Petronas Twin Towers in Kuala Lumpur were the tallest from 1998 to 2004.
146
+
147
+ Taipei 101, the world's tallest skyscraper from 2004 to 2010, was the first to exceed the 500-metre mark.
148
+
149
+ Proposals for such structures have been put forward, including the Burj Mubarak Al Kabir in Kuwait and Azerbaijan Tower in Baku. Kilometer-plus structures present architectural challenges that may eventually place them in a new architectural category.[83] The first building under construction and planned to be over one kilometre tall is the Jeddah Tower.
150
+
151
+ Several wooden skyscraper designs have been designed and built. A 14-storey housing project in Bergen, Norway known as 'Treet' or 'The Tree' became the world's tallest wooden apartment block when it was completed in late 2015.[84] The Tree's record was eclipsed by Brock Commons, an 18-storey wooden dormitory at the University of British Columbia in Canada, when it was completed in September 2016.[85]
152
+
153
+ A 40-storey residential building 'Trätoppen' has been proposed by architect Anders Berensson to be built in Stockholm, Sweden.[86] Trätoppen would be the tallest building in Stockholm, though there are no immediate plans to begin construction.[87] The tallest currently-planned wooden skyscraper is the 70-storey W350 Project in Tokyo, to be built by the Japanese wood products company Sumitomo Forestry Co. to celebrate its 350th anniversary in 2041.[88] An 80-storey wooden skyscraper, the River Beech Tower, has been proposed by a team including architects Perkins + Will and the University of Cambridge. The River Beech Tower, on the banks of the Chicago River in Chicago, Illinois, would be 348 feet shorter than the W350 Project despite having 10 more storeys.[89][88]
154
+
155
+ Wooden skyscrapers are estimated to be around a quarter of the weight of an equivalent reinforced-concrete structure as well as reducing the building carbon footprint by 60–75 %. Buildings have been designed using cross-laminated timber (CLT) which gives a higher rigidity and strength to wooden structures.[90] CLT panels are prefabricated and can therefore speed up building time.[91]
en/2272.html.txt ADDED
@@ -0,0 +1,155 @@
1
+
2
+
3
+ A skyscraper is a continuously habitable high-rise building that has over 40 floors[1] and is taller than 150 m (492 ft).[2][3][4][5] Historically, the term first referred to buildings with 10 to 20 floors in the 1880s. The meaning shifted with advancing construction technology during the 20th century.[1] Skyscrapers may host offices, hotels, residential spaces, and retail spaces.
4
+
5
+ One common feature of skyscrapers is a steel framework that supports curtain walls. These curtain walls either bear on the framework below or are suspended from the framework above, rather than resting on load-bearing walls of conventional construction. Some early skyscrapers have a steel frame that enables the construction of load-bearing walls taller than those made of reinforced concrete.
6
+
7
+ Modern skyscrapers' walls are not load-bearing, and most skyscrapers are characterised by large surface areas of windows made possible by steel frames and curtain walls. However, skyscrapers can have curtain walls that mimic conventional walls with a small surface area of windows. Modern skyscrapers often have a tubular structure, and are designed to act like a hollow cylinder to resist wind, seismic, and other lateral loads. To appear more slender, allow less wind exposure and transmit more daylight to the ground, many skyscrapers have a design with setbacks, which in some cases is also structurally required.
8
+
9
+ As of January 2020[update], only nine cities have more than 100 skyscrapers that are 150 m (492 ft) or taller: Hong Kong (355), New York City (284), Shenzhen (235), Dubai (199), Shanghai (163), Tokyo (155), Chongqing (127), Chicago (126), and Guangzhou (115).[6]
10
+
11
+ The term skyscraper was first applied to buildings of steel framed construction of at least 10 storeys in the late 19th century, a result of public amazement at the tall buildings being built in major American cities like Chicago, New York City, Philadelphia, Detroit, and St. Louis.[7] The first steel-frame skyscraper was the Home Insurance Building (originally 10 storeys with a height of 42 m or 138 ft) in Chicago, Illinois in 1885. Some point to Philadelphia's 10-storey Jayne Building (1849–50) as a proto-skyscraper, or to New York's seven-floor Equitable Life Building, built in 1870, for its innovative use of a kind of skeletal frame, but such designation depends largely on what factors are chosen. Even the scholars making the argument find it to be purely academic.[8][9]
12
+
13
+ The structural definition of the word skyscraper was refined later by architectural historians, based on engineering developments of the 1880s that had enabled construction of tall multi-storey buildings. This definition was based on the steel skeleton—as opposed to constructions of load-bearing masonry, which passed their practical limit in 1891 with Chicago's Monadnock Building.
14
+
15
+ What is the chief characteristic of the tall office building? It is lofty. It must be tall. The force and power of altitude must be in it, the glory and pride of exaltation must be in it. It must be every inch a proud and soaring thing, rising in sheer exultation that from bottom to top it is a unit without a single dissenting line.
16
+
17
+ Some structural engineers define a high-rise as any vertical construction for which wind is a more significant load factor than earthquake or weight. Note that this criterion fits not only high-rises but some other tall structures, such as towers.
18
+
19
+ Different organizations from the United States and Europe define skyscrapers as buildings at least 150 m (492 ft) in height,[10][11][2][12] with "supertall" skyscrapers for buildings higher than 300 m (984 ft) and "megatall" skyscrapers for those taller than 600 m (1,969 ft).[13]
20
+
21
+ The word skyscraper often carries a connotation of pride and achievement. The skyscraper, in name and social function, is a modern expression of the age-old symbol of the world center or axis mundi: a pillar that connects earth to heaven and the four compass directions to one another.[14]
22
+
23
+ The tallest building in ancient times was the 146 m (479 ft) Great Pyramid of Giza in ancient Egypt, built in the 26th century BC. It was not surpassed in height for thousands of years, until the 160 m (520 ft) Lincoln Cathedral exceeded it from 1311 to 1549, when the cathedral's central spire collapsed.[15] The latter in turn was not surpassed until the 555-foot (169 m) Washington Monument in 1884. However, being uninhabited, none of these structures complies with the modern definition of a skyscraper.
24
+
25
+ High-rise apartments flourished in classical antiquity. Ancient Roman insulae in imperial cities reached 10 and more storeys.[16] Beginning with Augustus (r. 30 BC – 14 AD), several emperors attempted to establish limits of 20–25 m for multi-storey buildings, but met with only limited success.[17][18] Lower floors were typically occupied by shops or wealthy families, while the upper storeys were rented to the lower classes.[16] Surviving Oxyrhynchus Papyri indicate that seven-storey buildings existed in provincial towns, such as 3rd-century AD Hermopolis in Roman Egypt.[19]
26
+
27
+ The skylines of many important medieval cities had large numbers of high-rise urban towers, built by the wealthy for defense and status. The residential Towers of 12th century Bologna numbered between 80 and 100 at a time, the tallest of which is the 97.2 m (319 ft) high Asinelli Tower. A Florentine law of 1251 decreed that all urban buildings be immediately reduced to less than 26 m.[20] Even medium-sized towns of the era are known to have had proliferations of towers, such as San Gimignano, with its 72 towers of up to 51 m in height.[20]
28
+
29
+ The medieval Egyptian city of Fustat housed many high-rise residential buildings, which Al-Muqaddasi in the 10th century described as resembling minarets. Nasir Khusraw in the early 11th century described some of them rising up to 14 storeys, with roof gardens on the top floor complete with ox-drawn water wheels for irrigating them.[21] Cairo in the 16th century had high-rise apartment buildings where the two lower floors were for commercial and storage purposes and the multiple storeys above them were rented out to tenants.[22] An early example of a city consisting entirely of high-rise housing is the 16th-century city of Shibam in Yemen. Shibam was made up of over 500 tower houses,[23] each one rising 5 to 11 storeys high,[24] with each floor being an apartment occupied by a single family. The city was built in this way in order to protect it from Bedouin attacks.[23] Shibam still has the tallest mudbrick buildings in the world, with many of them over 30 m (98 ft) high.[25]
30
+
31
+ An early modern example of high-rise housing was in 17th-century Edinburgh, Scotland, where a defensive city wall defined the boundaries of the city. Due to the restricted land area available for development, the houses increased in height instead. Buildings of 11 storeys were common, and there are records of buildings as high as 14 storeys. Many of the stone-built structures can still be seen today in the old town of Edinburgh. The oldest iron framed building in the world, although only partially iron framed, is The Flaxmill (also locally known as the "Maltings"), in Shrewsbury, England. Built in 1797, it is seen as the "grandfather of skyscrapers", since its fireproof combination of cast iron columns and cast iron beams developed into the modern steel frame that made modern skyscrapers possible. In 2013 funding was confirmed to convert the derelict building into offices.[26]
32
+
33
+ In 1857, Elisha Otis introduced the safety elevator, allowing convenient and safe passenger movement to upper floors, at the E.V. Haughwout Building in New York City. Otis later introduced the first commercial passenger elevators to the Equitable Life Building in 1870, considered by a portion of New Yorkers to be the first skyscraper. Another crucial development was the use of a steel frame instead of stone or brick; otherwise the walls on the lower floors of a tall building would be too thick to be practical. An early development in this area was Oriel Chambers in Liverpool, England. It was only five floors high.[27][28][29] Further developments led to what many individuals and organizations consider the world's first skyscraper, the ten-story Home Insurance Building in Chicago, built in 1884–1885.[30] While its original height of 42.1 m (138 ft) is not considered very impressive today, it was at that time. The building of tall buildings in the 1880s gave the skyscraper its first architectural movement, the Chicago School, which developed what has been called the Commercial Style.[31]
34
+
35
+ The architect, Major William Le Baron Jenney, created a load-bearing structural frame. In this building, a steel frame supported the entire weight of the walls, instead of load-bearing walls carrying the weight of the building. This development led to the "Chicago skeleton" form of construction. In addition to the steel frame, the Home Insurance Building also utilized fireproofing, elevators, and electrical wiring, key elements in most skyscrapers today.[32]
36
+
37
+ Burnham and Root's 45 m (148 ft) Rand McNally Building in Chicago, 1889, was the first all-steel framed skyscraper,[33] while Louis Sullivan's 41 m (135 ft) Wainwright Building in St. Louis, Missouri, 1891, was the first steel-framed building with soaring vertical bands to emphasize the height of the building and is therefore considered to be the first early skyscraper.
38
+
39
+ Completed in 1889, the Mole Antonelliana in Italy stood 167 m (549 ft) tall.
40
+
41
+ Most early skyscrapers emerged in the land-strapped areas of Chicago and New York City toward the end of the 19th century. A land boom in Melbourne, Australia between 1888 and 1891 spurred the creation of a significant number of early skyscrapers, though none of these were steel reinforced and few remain today. Height limits and fire restrictions were later introduced. London builders soon found building heights limited after a complaint from Queen Victoria led to rules that continued to exist with few exceptions.
42
+
43
+ Concerns about aesthetics and fire safety had likewise hampered the development of skyscrapers across continental Europe for the first half of the twentieth century. Some notable exceptions are the 43 m (141 ft) tall 1898 Witte Huis (White House) in Rotterdam; the Royal Liver Building in Liverpool, completed in 1911 and 90 m (300 ft) high;[34] the 57 m (187 ft) tall 1924 Marx House in Düsseldorf, Germany; the 61 m (200 ft) Kungstornen (Kings' Towers) in Stockholm, Sweden, which were built 1924–25,[35] the 89 m (292 ft) Edificio Telefónica in Madrid, Spain, built in 1929; the 87.5 m (287 ft) Boerentoren in Antwerp, Belgium, built in 1932; the 66 m (217 ft) Prudential Building in Warsaw, Poland, built in 1934; and the 108 m (354 ft) Torre Piacentini in Genoa, Italy, built in 1940.
44
+
45
+ After an early competition between Chicago and New York City for the world's tallest building, New York took the lead by 1895 with the completion of the 103 m (338 ft) tall American Surety Building, leaving New York with the title of the world's tallest building for many years.
46
+
47
+ Modern skyscrapers are built with steel or reinforced concrete frameworks and curtain walls of glass or polished stone. They use mechanical equipment such as water pumps and elevators. Since the 1960s, according to the CTBUH, the skyscraper has been reoriented away from being a symbol of North American corporate power towards communicating a city's or nation's place in the world.[36]
48
+
49
+ Skyscraper construction entered a three-decades-long era of stagnation in 1930 due to the Great Depression and then World War II. Shortly after the war ended, the Soviet Union began construction on a series of skyscrapers in Moscow. Seven, dubbed the "Seven Sisters", were built between 1947 and 1953; and one, the Main building of Moscow State University, was the tallest building in Europe for nearly four decades (1953–1990). Other skyscrapers in the style of Socialist Classicism were erected in East Germany (Frankfurter Tor), Poland (PKiN), Ukraine (Hotel Ukrayina), Latvia (Academy of Sciences) and other Eastern Bloc countries. Western European countries also began to permit taller skyscrapers during the years immediately following World War II. Early examples include Edificio España (Spain) and Torre Breda (Italy).
50
+
51
+ From the 1930s onward, skyscrapers began to appear in various cities in East and Southeast Asia as well as in Latin America. Finally, they also began to be constructed in cities of Africa, the Middle East, South Asia and Oceania (mainly Australia) from the late 1950s on.
52
+
53
+ Skyscraper projects after World War II typically rejected the classical designs of the early skyscrapers, instead embracing the uniform international style; many older skyscrapers were redesigned to suit contemporary tastes or even demolished—such as New York's Singer Building, once the world's tallest skyscraper.
54
+
55
+ German architect Ludwig Mies van der Rohe became one of the world's most renowned architects in the second half of the 20th century. He conceived of the glass façade skyscraper[37] and, along with Norwegian Fred Severud,[38] he designed the Seagram Building in 1958, a skyscraper that is often regarded as the pinnacle of modernist high-rise architecture.[39]
56
+
57
+ Skyscraper construction surged throughout the 1960s. The impetus behind the upswing was a series of transformative innovations[40] which made it possible for people to live and work in "cities in the sky".[41]
58
+
59
+ In the early 1960s structural engineer Fazlur Rahman Khan, considered the "father of tubular designs" for high-rises,[42] discovered that the dominating rigid steel frame structure was not the only system apt for tall buildings, marking a new era of skyscraper construction in terms of multiple structural systems.[43] His central innovation in skyscraper design and construction was the concept of the "tube" structural system, including the "framed tube", "trussed tube", and "bundled tube".[44] His "tube concept", using all the exterior wall perimeter structure of a building to simulate a thin-walled tube, revolutionized tall building design.[45] These systems allow greater economic efficiency,[46] and also allow skyscrapers to take on various shapes, no longer needing to be rectangular and box-shaped.[47] The first building to employ the tube structure was the DeWitt-Chestnut apartment building;[40] it is considered to be a major development in modern architecture.[40] These new designs opened an economic door for contractors, engineers, architects, and investors, providing vast amounts of real estate space on minimal plots of land.[41] Over the next fifteen years, many towers were built by Fazlur Rahman Khan and the "Second Chicago School",[48] including the hundred-storey John Hancock Center and the massive 442 m (1,450 ft) Willis Tower.[49] Other pioneers of this field include Hal Iyengar, William LeMessurier, and Minoru Yamasaki, the architect of the World Trade Center.
60
+
61
+ Many buildings designed in the 1970s lacked a particular style and recalled ornamentation from buildings designed before the 1950s. These design plans ignored the environment and loaded structures with decorative elements and extravagant finishes.[50] Fazlur Khan opposed this approach, considering the designs whimsical rather than rational; moreover, he considered the work to be a waste of precious natural resources.[51] Khan's work promoted structures integrated with architecture and the least use of material, resulting in the least carbon-emission impact on the environment.[52] The next era of skyscrapers will focus on the environment, including the performance of structures, types of material, construction practices, absolute minimal use of materials and natural resources, embodied energy within the structures, and, more importantly, a holistically integrated building-systems approach.[50]
62
+
63
+ Modern building practices regarding supertall structures have led to the study of "vanity height".[53][54] Vanity height, according to the CTBUH, is the distance between the highest occupiable floor and the building's architectural top (excluding antennae, flagpoles or other functional extensions). Vanity height first appeared in New York City skyscrapers as early as the 1920s and 1930s, but supertall buildings have relied on such uninhabitable extensions for, on average, 30% of their height, raising potential definitional and sustainability issues.[55][56][57] The current era of skyscrapers focuses on sustainability and the built and natural environments, including the performance of structures, types of materials, construction practices, absolute minimal use of materials and natural resources, energy within the structure, and a holistically integrated building-systems approach. LEED is a current green building standard.[58]
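+ As a minimal illustration of the definition above, the following sketch computes a vanity-height ratio from two hypothetical figures; both heights are invented for illustration, not measurements of any particular tower.

```python
# Vanity height as defined above: architectural top minus highest occupiable
# floor, expressed as a share of total height. Both figures are hypothetical.
architectural_top_m = 600.0   # height to the architectural top (assumed)
highest_floor_m = 420.0       # height of the highest occupiable floor (assumed)

vanity_m = architectural_top_m - highest_floor_m
print(f"vanity height: {vanity_m:.0f} m "
      f"({vanity_m / architectural_top_m:.0%} of total height)")
```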
64
+
65
+ Architecturally, with the movements of Postmodernism, New Urbanism and New Classical Architecture that have established themselves since the 1980s, a more classical approach has returned to global skyscraper design, and it remains popular today.[59] Examples are the Wells Fargo Center, NBC Tower, Parkview Square, 30 Park Place, the Messeturm, the iconic Petronas Towers and the Jin Mao Tower.
66
+
67
+ Other contemporary styles and movements in skyscraper design include organic, sustainable, neo-futurist, structuralist, high-tech, deconstructivist, blob, digital, streamline, novelty, critical regionalist, vernacular, Neo Art Deco and neo-historist, also known as revivalist.
68
+
69
+ 3 September is the global commemorative day for skyscrapers, called "Skyscraper Day".[60]
70
+
71
+ New York City developers competed among themselves, with successively taller buildings claiming the title of "world's tallest" in the 1920s and early 1930s, culminating with the completion of the 318.9 m (1,046 ft) Chrysler Building in 1930 and the 443.2 m (1,454 ft) Empire State Building in 1931, the world's tallest building for forty years. The first completed 417 m (1,368 ft) tall World Trade Center tower became the world's tallest building in 1972. However, it was overtaken by the Sears Tower (now Willis Tower) in Chicago within two years. The 442 m (1,450 ft) tall Sears Tower stood as the world's tallest building for 24 years, from 1974 until 1998, until it was edged out by 452 m (1,483 ft) Petronas Twin Towers in Kuala Lumpur, which held the title for six years.
72
+
73
+ The design and construction of skyscrapers involves creating safe, habitable spaces in very tall buildings. The buildings must support their weight, resist wind and earthquakes, and protect occupants from fire. Yet they must also be conveniently accessible, even on the upper floors, and provide utilities and a comfortable climate for the occupants. The problems posed in skyscraper design are considered among the most complex encountered given the balances required between economics, engineering, and construction management.
74
+
75
+ One common feature of skyscrapers is a steel framework from which curtain walls are suspended, rather than load-bearing walls of conventional construction. Most skyscrapers have a steel frame that enables them to be built taller than typical load-bearing walls of reinforced concrete. Skyscrapers usually have a particularly small surface area of what are conventionally thought of as walls. Because the walls are not load-bearing, most skyscrapers are characterized by large surface areas of windows, made possible by the concepts of the steel frame and the curtain wall. However, skyscrapers can also have curtain walls that mimic conventional walls and have a small surface area of windows.
76
+
77
+ The concept of a skyscraper is a product of the industrialized age, made possible by cheap fossil-fuel-derived energy and industrially refined raw materials such as steel and concrete. The construction of skyscrapers was enabled by steel frame construction, which began to surpass brick-and-mortar construction at the end of the 19th century and, together with reinforced concrete construction, finally displaced it in the 20th century as the price of steel decreased and labour costs increased.
78
+
79
+ Steel frames become inefficient and uneconomic for supertall buildings because usable floor space is lost to progressively larger supporting columns.[61] Since about 1960, tubular designs have been used for high rises. This reduces the usage of material (being more efficient in economic terms – the Willis Tower uses a third less steel than the Empire State Building) yet allows greater height. It allows fewer interior columns, and so creates more usable floor space. It further enables buildings to take on various shapes.
80
+
81
+ Elevators are characteristic of skyscrapers. In 1852 Elisha Otis invented the safety elevator, allowing convenient and safe passenger movement to upper floors. Another crucial development was the use of a steel frame instead of stone or brick; otherwise the walls on the lower floors of a tall building would be too thick to be practical. Today major manufacturers of elevators include Otis, ThyssenKrupp, Schindler, and KONE.
82
+
83
+ Advances in construction techniques have allowed skyscrapers to narrow in width while increasing in height. Some of these new techniques include mass dampers to reduce vibrations and swaying, and gaps that allow air to pass through, reducing wind shear.[62]
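+ As a rough sketch of how such a mass damper might be sized, the snippet below applies Den Hartog's classic tuned-mass-damper tuning rules to invented building figures; the modal mass, natural frequency and mass ratio are assumptions for illustration, not data for any real tower.

```python
import math

# Hypothetical first-mode properties of a tall building (assumed values).
modal_mass = 2.0e7      # kg
building_freq = 0.15    # Hz

mass_ratio = 0.01       # damper mass as a fraction of modal mass (assumed)
damper_mass = mass_ratio * modal_mass

# Den Hartog's classic tuning rules for an undamped main system:
#   optimal frequency ratio  f_opt = 1 / (1 + mu)
#   optimal damping ratio    zeta_opt = sqrt(3*mu / (8*(1 + mu)**3))
damper_freq = building_freq / (1.0 + mass_ratio)
zeta_opt = math.sqrt(3 * mass_ratio / (8 * (1 + mass_ratio) ** 3))

# Spring stiffness giving the damper its target frequency: k = m*(2*pi*f)^2.
stiffness = damper_mass * (2 * math.pi * damper_freq) ** 2

print(f"damper mass: {damper_mass / 1000:.0f} t")
print(f"tuned to {damper_freq:.3f} Hz (building: {building_freq:.3f} Hz)")
print(f"damping ratio {zeta_opt:.3f}, stiffness {stiffness / 1e6:.2f} MN/m")
```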
84
+
85
+ Good structural design is important in most building design, but particularly for skyscrapers since even a small chance of catastrophic failure is unacceptable given the high price. This presents a paradox to civil engineers: the only way to assure a lack of failure is to test for all modes of failure, in both the laboratory and the real world. But the only way to know of all modes of failure is to learn from previous failures. Thus, no engineer can be absolutely sure that a given structure will resist all loadings that could cause failure, but can only have large enough margins of safety such that a failure is acceptably unlikely. When buildings do fail, engineers question whether the failure was due to some lack of foresight or due to some unknowable factor.
86
+
87
+ The load a skyscraper experiences is largely from the force of the building material itself. In most building designs, the weight of the structure is much larger than the weight of the material that it will support beyond its own weight. In technical terms, the dead load, the load of the structure, is larger than the live load, the weight of things in the structure (people, furniture, vehicles, etc.). As such, the amount of structural material required within the lower levels of a skyscraper will be much larger than the material required within higher levels. This is not always visually apparent. The Empire State Building's setbacks are actually a result of the building code at the time (1916 Zoning Resolution), and were not structurally required. On the other hand, John Hancock Center's shape is uniquely the result of how it supports loads. Vertical supports can come in several types, among which the most common for skyscrapers can be categorized as steel frames, concrete cores, tube within tube design, and shear walls.
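+ A toy calculation makes this concrete: summing an assumed dead and live load floor by floor from the roof down shows how quickly the axial load on a column stack grows toward the base. All loads below are invented for illustration.

```python
# Cumulative axial load carried at each level of a column stack.
# Per-floor loads are invented, illustrative values.
floors = 50
dead_load_per_floor = 8000.0   # kN, structure and finishes (dead load)
live_load_per_floor = 2000.0   # kN, occupants and contents (live load)

cumulative = 0.0
for level in range(floors, 0, -1):          # count down from the roof
    cumulative += dead_load_per_floor + live_load_per_floor
    if level in (floors, floors // 2, 1):   # report roof, mid-height, base
        print(f"columns just below floor {level:>2}: {cumulative:>9,.0f} kN")
```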
88
+
89
+ The wind loading on a skyscraper is also considerable. In fact, the lateral wind load imposed on supertall structures is generally the governing factor in the structural design. Wind pressure increases with height, so for very tall buildings, the loads associated with wind are larger than dead or live loads.
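+ The growth of wind pressure with height can be sketched with the common power-law velocity profile; the reference speed and terrain exponent below are assumptions, and real design codes use far more detailed procedures.

```python
# Wind speed and dynamic pressure versus height, using the power-law profile
# v(z) = v_ref * (z / z_ref) ** alpha. Reference speed and alpha are assumed.
rho = 1.25       # kg/m^3, air density
v_ref = 25.0     # m/s at the reference height (assumed design wind speed)
z_ref = 10.0     # m
alpha = 0.14     # typical open-terrain exponent (assumption)

for z in (10, 100, 200, 400):
    v = v_ref * (z / z_ref) ** alpha
    q = 0.5 * rho * v * v    # dynamic pressure in Pa
    print(f"z = {z:>3} m: v = {v:5.1f} m/s, q = {q:6.0f} Pa")
```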
90
+
91
+ Other vertical and horizontal loading factors come from varied, unpredictable sources, such as earthquakes.
92
+
93
+ By 1895, steel had replaced cast iron as skyscrapers' structural material. Its malleability allowed it to be formed into a variety of shapes, and it could be riveted, ensuring strong connections.[63] The simplicity of a steel frame eliminated the inefficient part of a shear wall, the central portion, and consolidated support members in a much stronger fashion by allowing both horizontal and vertical supports throughout. Among steel's drawbacks is that as more material must be supported as height increases, the distance between supporting members must decrease, which in turn increases the amount of material that must be supported. This becomes inefficient and uneconomic for buildings above 40 storeys, as usable floor space is lost to the supporting columns and to the greater usage of steel.[61]
94
+
95
+ A new structural system of framed tubes was developed by Fazlur Rahman Khan in 1963. The framed tube structure is defined as "a three dimensional space structure composed of three, four, or possibly more frames, braced frames, or shear walls, joined at or near their edges to form a vertical tube-like structural system capable of resisting lateral forces in any direction by cantilevering from the foundation".[64][65] Closely spaced interconnected exterior columns form the tube. Horizontal loads (primarily wind) are supported by the structure as a whole. Framed tubes allow fewer interior columns, and so create more usable floor space, and about half the exterior surface is available for windows. Where larger openings like garage doors are required, the tube frame must be interrupted, with transfer girders used to maintain structural integrity. Tube structures cut down costs, at the same time allowing buildings to reach greater heights. Concrete tube-frame construction[44] was first used in the DeWitt-Chestnut Apartment Building, completed in Chicago in 1963,[66] and soon after in the John Hancock Center and World Trade Center.
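+ The structural advantage of the tube can be illustrated with elementary beam theory. Treating the building as a vertical cantilever, the sketch below compares the second moment of area of a thin-walled perimeter tube with that of the same amount of material gathered into a solid central core; the plan dimensions are arbitrary assumptions.

```python
# Bending stiffness of a perimeter tube versus a central core built from the
# same amount of material. Dimensions are arbitrary, illustrative values.
b = 40.0   # m, outer width of a square plan / perimeter tube
t = 0.5    # m, equivalent thickness of the perimeter structure

# Hollow square tube: I = (b^4 - (b - 2t)^4) / 12
inner = b - 2 * t
I_tube = (b**4 - inner**4) / 12

# The same cross-sectional area concentrated in a solid square central core.
area = b**2 - inner**2
side = area ** 0.5
I_core = side**4 / 12

print(f"material area:      {area:.1f} m^2 in both cases")
print(f"I (perimeter tube): {I_tube:,.0f} m^4")
print(f"I (central core):   {I_core:,.0f} m^4")
print(f"stiffness ratio:    {I_tube / I_core:.0f}x")
```

+ With these arbitrary numbers the perimeter tube is roughly forty times stiffer in bending than a central core using the same material, which is the intuition behind cantilevering the whole perimeter from the foundation.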
96
+
97
+ Tubular systems are fundamental to tall building design. Most buildings over 40 storeys constructed since the 1960s now use a tube design derived from Khan's structural engineering principles,[61][67] examples including the World Trade Center, Aon Center, Petronas Towers, Jin Mao Building, and most other supertall skyscrapers built since the 1960s.[44] The strong influence of tube structure design is also evident in the current tallest skyscraper, the Burj Khalifa.[47]
98
+
99
+ Khan pioneered several other variations of the tube structure design.[citation needed] One of these was the concept of X-bracing, or the trussed tube, first employed in the John Hancock Center. This concept reduces the lateral load on the building by transferring it into the exterior columns, which in turn reduces the need for interior columns and creates more floor space. The John Hancock Center, designed in 1965 and completed in 1969, is one of the most famous buildings of the structural expressionist style; its distinctive X-bracing exterior is a visible hint that the structure's skin is part of its tubular system. The tubular system is essentially the spine that keeps the building upright under wind and earthquake loads, and it is one of the techniques that allowed the building to climb to record heights. X-bracing allows both higher performance from tall structures and the ability to open up the interior floorplan (and usable floor space) if the architect desires.
100
+
101
+ The John Hancock Center was far more efficient than earlier steel-frame structures: where the Empire State Building (1931) required about 206 kilograms of steel per square metre and 28 Liberty Street (1961) required 275, the John Hancock Center required only 145.[46] The trussed tube concept was applied to many later skyscrapers, including the Onterie Center, Citigroup Center and Bank of China Tower.[68]
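+ A quick check of the figures just quoted shows the scale of the saving; this is simple arithmetic on the numbers above.

```python
# Steel intensities quoted above, in kg of steel per m^2 of floor area.
esb, liberty, hancock = 206, 275, 145
print(f"vs Empire State Building: {100 * (1 - hancock / esb):.0f}% less steel")
print(f"vs 28 Liberty Street:     {100 * (1 - hancock / liberty):.0f}% less steel")
```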
102
+
103
+ An important variation on the tube frame is the bundled tube, which uses several interconnected tube frames. The Willis Tower in Chicago used this design, employing nine tubes of varying height to achieve its distinct appearance. The bundled tube structure meant that "buildings no longer need be boxlike in appearance: they could become sculpture."[47]
104
+
105
+ The invention of the elevator was a precondition for the invention of skyscrapers, given that most people would not (or could not) climb more than a few flights of stairs at a time. The elevators in a skyscraper are not simply a necessary utility, like running water and electricity, but are in fact closely related to the design of the whole structure: a taller building requires more elevators to service the additional floors, but the elevator shafts consume valuable floor space. If the service core, which contains the elevator shafts, becomes too big, it can reduce the profitability of the building. Architects must therefore balance the value gained by adding height against the value lost to the expanding service core.[69]
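+ The trade-off can be caricatured with a toy model in which every added block of floors demands another elevator shaft that eats into each floor plate; every parameter below is an invented assumption, not a real design rule.

```python
import math

# Toy model of the height-versus-core trade-off described above.
plate = 2000.0            # m^2, gross area of one floor plate (assumed)
shaft = 25.0              # m^2, plan area of one elevator shaft (assumed)
floors_per_elevator = 5   # assumed service capacity of one elevator

def rentable_area(n_floors: int) -> float:
    """Total rentable area after subtracting the elevator core."""
    core = math.ceil(n_floors / floors_per_elevator) * shaft
    return n_floors * max(plate - core, 0.0)

for n in (20, 40, 60, 80, 100):
    core = math.ceil(n / floors_per_elevator) * shaft
    print(f"{n:>3} floors: {1 - core / plate:.0%} of each plate rentable, "
          f"{rentable_area(n):>9,.0f} m^2 in total")
```

+ In this toy model total rentable area still grows with height, but the rentable share of each plate falls from 95% at 20 floors to 75% at 100 floors; sky lobbies, discussed next, are one response to exactly this pressure.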
106
+
107
+ Many tall buildings use elevators in a non-standard configuration to reduce their footprint. Buildings such as the former World Trade Center Towers and Chicago's John Hancock Center use sky lobbies, where express elevators take passengers to upper floors which serve as the base for local elevators. This allows architects and engineers to place elevator shafts on top of each other, saving space. Sky lobbies and express elevators take up a significant amount of space, however, and add to the amount of time spent commuting between floors.
108
+
109
+ Other buildings, such as the Petronas Towers, use double-deck elevators, allowing more people to fit in a single elevator, and reaching two floors at every stop. It is possible to use even more than two levels on an elevator, although this has never been done. The main problem with double-deck elevators is that they cause everyone in the elevator to stop when only people on one level need to get off at a given floor.
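+ A rough probabilistic model shows why pairing floors helps: if p passengers each head to a uniformly random floor, the classical expected-stops formula E[stops] = N * (1 - (1 - 1/N)**p) applies, and a double-deck cab effectively halves the number of stopping positions. The floor and passenger counts below are assumptions.

```python
# Expected number of stops for a cab with a given number of stopping
# positions, assuming each passenger picks a position uniformly at random.
def expected_stops(positions: int, passengers: int) -> float:
    return positions * (1 - (1 - 1 / positions) ** passengers)

floors, passengers = 30, 20   # assumed building and load
print(f"single deck: {expected_stops(floors, passengers):.1f} expected stops")
print(f"double deck: {expected_stops(floors // 2, passengers):.1f} expected stops")
```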
110
+
111
+ Buildings with sky lobbies include the World Trade Center, Petronas Twin Towers, Willis Tower and Taipei 101. The 44th-floor sky lobby of the John Hancock Center also featured the first high-rise indoor swimming pool, which remains the highest in America.[70]
112
+
113
+ Skyscrapers are usually situated in city centers where the price of land is high. Constructing a skyscraper becomes justified if the price of land is so high that it makes economic sense to build upward so as to minimize the cost of land per unit of floor area. Thus the construction of skyscrapers is dictated by economics, which concentrates them in certain parts of a large city unless a building code restricts the height of buildings.
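+ The economics can be sketched in a few lines: a fixed land cost spread over more storeys yields a falling land cost per square metre of floor. Prices and areas below are invented, and the model ignores cores, setbacks and the rising construction cost per floor.

```python
# Land cost amortized over total floor area, with invented figures.
plot_area = 2000.0     # m^2 of land
land_price = 20000.0   # currency units per m^2 of land (assumed)
land_cost = plot_area * land_price

for floors in (5, 20, 60):
    total_floor_area = plot_area * floors   # ignoring cores and setbacks
    print(f"{floors:>2} floors: land cost per m^2 of floor = "
          f"{land_cost / total_floor_area:,.0f}")
```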
114
+
115
+ Skyscrapers are rarely seen in small cities; they are characteristic of large cities because high land prices are critical to their economics. Usually only office, commercial and hotel users can afford the rents in the city center, so most skyscraper tenants are of these classes.
116
+
117
+ Today, skyscrapers are an increasingly common sight where land is expensive, as in the centers of big cities, because they provide such a high ratio of rentable floor space per unit area of land.
118
+
119
+ One problem with skyscrapers is car parking. In the largest cities most people commute via public transport, but in smaller cities many parking spaces are needed. Multi-storey car parks are impractical to build very tall, so much land area is needed.
120
+
121
+ There may be a correlation between skyscraper construction and great income inequality but this has not been conclusively proven.[71]
122
+
123
+ The amount of steel, concrete, and glass needed to construct a single skyscraper is large, and these materials represent a great deal of embodied energy. Skyscrapers are thus energy-intensive buildings, but they have a long lifespan: the Empire State Building in New York City, for example, was completed in 1931 and is still in active use.
124
+
125
+ Skyscrapers have considerable mass, which means that they must be built on a sturdier foundation than would be required for shorter, lighter buildings. Building materials must also be lifted to the top of a skyscraper during construction, requiring more energy than would be necessary at lower heights. Furthermore, a skyscraper consumes a great deal of electricity because potable and non-potable water have to be pumped to the highest occupied floors, skyscrapers are usually designed to be mechanically ventilated, elevators are generally used instead of stairs, and natural lighting cannot be utilized in rooms far from the windows or in windowless spaces such as elevators, bathrooms and stairwells.
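+ A back-of-envelope calculation gives a feel for the water-pumping load mentioned above, using the potential-energy relation E = mgh divided by an assumed pump efficiency; the delivery height and efficiency are assumptions.

```python
# Energy to lift one cubic metre of water to an upper floor: E = m*g*h.
g = 9.81                # m/s^2
height = 300.0          # m, delivery height (assumed)
mass = 1000.0           # kg, one cubic metre of water
pump_efficiency = 0.6   # assumed overall pump efficiency

energy_joules = mass * g * height / pump_efficiency
print(f"{energy_joules / 3.6e6:.2f} kWh per m^3 delivered at {height:.0f} m")
```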
126
+
127
+ Skyscrapers can be artificially lit, and their energy requirements can be covered by renewable energy or other electricity generation with low greenhouse gas emissions. Heating and cooling of skyscrapers can be efficient because of centralized HVAC systems, heat-radiation-blocking windows and the small surface area of the building. Leadership in Energy and Environmental Design (LEED) certification is available for skyscrapers. For example, the Empire State Building received a Gold LEED rating in September 2011 and is the tallest LEED-certified building in the United States,[72] showing that skyscrapers can be environmentally friendly. 30 St Mary Axe in London, United Kingdom, is another environmentally friendly skyscraper.
128
+
129
+ In the lower levels of a skyscraper a larger percentage of the building cross section must be devoted to the building structure and services than is required for lower buildings:
130
+
131
+ In low-rise structures, the support rooms (chillers, transformers, boilers, pumps and air handling units) can be put in basements or roof space—areas which have low rental value. There is, however, a limit to how far this plant can be located from the area it serves. The farther away it is, the larger the risers for ducts and pipes from this plant to the floors they serve, and the more floor area these risers take. In practice this means that in high-rise buildings this plant is located on 'plant levels' at intervals up the building.
132
+
133
+ At the beginning of the 20th century, New York City was a center for the Beaux-Arts architectural movement, attracting the talents of such great architects as Stanford White and Carrere and Hastings. As better construction and engineering technology became available as the century progressed, New York City and Chicago became the focal point of the competition for the tallest building in the world. Each city's striking skyline has been composed of numerous and varied skyscrapers, many of which are icons of 20th-century architecture:
134
+
135
+ Momentum in setting records passed from the United States to other nations with the opening of the Petronas Twin Towers in Kuala Lumpur, Malaysia, in 1998. The record for the world's tallest building has remained in Asia since the opening of Taipei 101 in Taipei, Taiwan, in 2004. A number of architectural records, including those of the world's tallest building and tallest free-standing structure, moved to the Middle East with the opening of the Burj Khalifa in Dubai, United Arab Emirates.
136
+
137
+ This geographical transition is accompanied by a change in approach to skyscraper design. For much of the twentieth century large buildings took the form of simple geometrical shapes. This reflected the "international style" or modernist philosophy shaped by Bauhaus architects early in the century. The last of these, the Willis Tower in Chicago and the World Trade Center towers in New York, erected in the 1970s, reflect that philosophy. Tastes shifted in the decade which followed, and new skyscrapers began to exhibit postmodernist influences. This approach to design avails itself of historical elements, often adapted and re-interpreted, in creating technologically modern structures. The Petronas Twin Towers recall Asian pagoda architecture and Islamic geometric principles. Taipei 101 likewise reflects the pagoda tradition as it incorporates ancient motifs such as the ruyi symbol. The Burj Khalifa draws inspiration from traditional Islamic art. Architects in recent years have sought to create structures that would not appear equally at home if set in any part of the world, but that reflect the culture thriving in the spot where they stand.[citation needed]
138
+
139
+ The following list measures height to the roof.[82][failed verification] The more common gauge is the "highest architectural detail"; such a ranking would have included the Petronas Towers, built in 1996.
140
+
141
+ The iconic World Trade Center twin towers were destroyed in 2001.
142
+
143
+ The Willis Tower in Chicago was the world's tallest building from 1974 to 1998; many still refer to it as the "Sears Tower", its name from inception to 2009.
144
+
145
+ The Petronas Twin Towers in Kuala Lumpur were the tallest from 1998 to 2004.
146
+
147
+ Taipei 101, the world's tallest skyscraper from 2004 to 2010, was the first to exceed the 500-metre mark.
148
+
149
+ Proposals for kilometre-plus structures have been put forward, including the Burj Mubarak Al Kabir in Kuwait and the Azerbaijan Tower in Baku. Structures of this height present architectural challenges that may eventually place them in a new architectural category.[83] The first building under construction that is planned to be over one kilometre tall is the Jeddah Tower.
150
+
151
+ Several wooden skyscrapers have been designed and built. A 14-storey housing project in Bergen, Norway known as 'Treet' or 'The Tree' became the world's tallest wooden apartment block when it was completed in late 2015.[84] The Tree's record was eclipsed by Brock Commons, an 18-storey wooden dormitory at the University of British Columbia in Canada, when it was completed in September 2016.[85]
152
+
153
+ A 40-storey residential building 'Trätoppen' has been proposed by architect Anders Berensson to be built in Stockholm, Sweden.[86] Trätoppen would be the tallest building in Stockholm, though there are no immediate plans to begin construction.[87] The tallest currently-planned wooden skyscraper is the 70-storey W350 Project in Tokyo, to be built by the Japanese wood products company Sumitomo Forestry Co. to celebrate its 350th anniversary in 2041.[88] An 80-storey wooden skyscraper, the River Beech Tower, has been proposed by a team including architects Perkins + Will and the University of Cambridge. The River Beech Tower, on the banks of the Chicago River in Chicago, Illinois, would be 348 feet shorter than the W350 Project despite having 10 more storeys.[89][88]
154
+
155
+ Wooden skyscrapers are estimated to be around a quarter of the weight of an equivalent reinforced-concrete structure, and to reduce a building's carbon footprint by 60–75%. Buildings have been designed using cross-laminated timber (CLT), which gives higher rigidity and strength to wooden structures.[90] CLT panels are prefabricated and can therefore speed up building time.[91]
en/2273.html.txt ADDED
@@ -0,0 +1,98 @@
1
+
2
+
3
+ Engraving is the practice of incising a design on to a hard, usually flat surface by cutting grooves into it with a burin. The result may be a decorated object in itself, as when silver, gold, steel, or glass are engraved, or may provide an intaglio printing plate, of copper or another metal, for printing images on paper as prints or illustrations; these images are also called "engravings". Engraving is one of the oldest and most important techniques in printmaking. Wood engraving is a form of relief printing and is not covered in this article.
4
+
5
+ Engraving was a historically important method of producing images on paper in artistic printmaking, in mapmaking, and also for commercial reproductions and illustrations for books and magazines. It has long been replaced by various photographic processes in its commercial applications and, partly because of the difficulty of learning the technique, is much less common in printmaking, where it has been largely replaced by etching and other techniques.
6
+
7
+ "Engraving" is also loosely but incorrectly used for any old black and white print; it requires a degree of expertise to distinguish engravings from prints using other techniques such as etching in particular, but also mezzotint and other techniques. Many old master prints also combine techniques on the same plate, further confusing matters. Line engraving and steel engraving cover use for reproductive prints, illustrations in books and magazines, and similar uses, mostly in the 19th century, and often not actually using engraving.
8
+ Traditional engraving, by burin or with the use of machines, continues to be practised by goldsmiths, glass engravers, gunsmiths and others, while modern industrial techniques such as photoengraving and laser engraving have many important applications. Engraved gems were an important art in the ancient world, revived at the Renaissance, although the term traditionally covers relief as well as intaglio carvings, and is essentially a branch of sculpture rather than engraving, as drills were the usual tools.
9
+
10
+ Other terms often used for printed engravings are copper engraving, copper-plate engraving or line engraving. Steel engraving is the same technique, on steel or steel-faced plates, and was mostly used for banknotes, illustrations for books, magazines and reproductive prints, letterheads and similar uses from about 1790 to the early 20th century, when the technique became less popular, except for banknotes and other forms of security printing. Especially in the past, "engraving" was often used very loosely to cover several printmaking techniques, so that many so-called engravings were in fact produced by totally different techniques, such as etching or mezzotint. "Hand engraving" is a term sometimes used for engraving objects other than printing plates, to inscribe or decorate jewellery, firearms, trophies, knives and other fine metal goods. Traditional engravings in printmaking are also "hand engraved", using just the same techniques to make the lines in the plate.
11
+
12
+ Each graver is different and has its own use. Engravers use a hardened steel tool called a burin, or graver, to cut the design into the surface, most traditionally a copper plate.[1] However, modern hand engraving artists use burins or gravers to cut a variety of metals such as silver, nickel, steel, brass, gold, titanium, and more, in applications from weaponry to jewellery to motorcycles to found objects. Modern professional engravers can engrave with a resolution of up to 40 lines per mm in high grade work creating game scenes and scrollwork. Dies used in mass production of molded parts are sometimes hand engraved to add special touches or certain information such as part numbers.
13
+
14
+ In addition to hand engraving, there are engraving machines that require less human finesse and are not directly controlled by hand. They are usually used for lettering, using a pantographic system. There are versions for the insides of rings and also the outsides of larger pieces. Such machines are commonly used for inscriptions on rings, lockets and presentation pieces.
15
+
16
+ Gravers come in a variety of shapes and sizes that yield different line types. The burin produces a unique and recognizable quality of line that is characterized by its steady, deliberate appearance and clean edges. The angle tint tool has a slightly curved tip and is commonly used in printmaking. Florentine liners are flat-bottomed tools with multiple lines incised into them, used to do fill work on larger areas or to create uniform shade lines that are fast to execute. Ring gravers are made with particular shapes that are used by jewelry engravers in order to cut inscriptions inside rings. Flat gravers are used for fill work on letters, for "wriggle" cuts on most musical instrument engraving work, to remove background, or to create bright cuts. Knife gravers are for line engraving and very deep cuts. Round gravers, and flat gravers with a radius, are commonly used on silver to create bright cuts (also called bright-cut engraving), as well as on other hard-to-cut metals such as nickel and steel. Square or V-point gravers are typically square or elongated diamond-shaped and used for cutting straight lines. The V-point can be anywhere from 60 to 130 degrees, depending on purpose and effect. These gravers have very small cutting points. Other tools such as mezzotint rockers, roulets and burnishers are used for texturing effects. Burnishing tools can also be used for certain stone setting techniques.
17
+
18
+ Musical instrument engraving on American-made brass instruments flourished in the 1920s and utilizes a specialized engraving technique where a flat graver is "walked" across the surface of the instrument to make zig-zag lines and patterns. The method for "walking" the graver may also be referred to as "wriggle" or "wiggle" cuts. This technique is necessary due to the thinness of metal used to make musical instruments versus firearms or jewelry. Wriggle cuts are commonly found on silver Western jewelry and other Western metal work.
19
+
20
+ Tool geometry is extremely important for accuracy in hand engraving. When sharpened for most applications, a graver has a "face", which is the top of the graver, and a "heel", which is the bottom of the graver; not all tools or applications require a heel. These two surfaces meet to form a point that cuts the metal. The geometry and length of the heel help to guide the graver smoothly as it cuts the surface of the metal. When the tool's point breaks or chips, even on a microscopic level, the graver can become hard to control and produces unexpected results. Modern innovations have brought about new types of carbide that resist chipping and breakage, and that hold a very sharp point longer between resharpenings than traditional metal tools.
21
+
22
+ Sharpening a graver or burin requires either a sharpening stone or wheel. Harder carbide and steel gravers require diamond-grade sharpening wheels; these gravers can be polished to a mirror finish using a ceramic or cast iron lap, which is essential in creating bright cuts. Several low-speed, reversible sharpening systems made specifically for hand engravers are available that reduce sharpening time. Fixtures that secure the tool in place at certain angles and geometries are also available to take the guesswork out of sharpening and produce accurate points. Very few master engravers exist today who rely solely on "feel" and muscle memory to sharpen tools. These master engravers typically worked for many years as apprentices, most often learning techniques decades before modern machinery was available for hand engravers. Such engravers typically trained in countries such as Italy and Belgium, where hand engraving has a rich and long heritage of masters.
23
+
24
+ Design or artwork is generally prepared in advance, although some professional and highly experienced hand engravers are able to draw out minimal outlines either on paper or directly on the metal surface just prior to engraving. The work to be engraved may be lightly scribed on the surface with a sharp point, laser marked, drawn with a fine permanent marker (removable with acetone) or pencil, transferred using various chemicals in conjunction with inkjet or laser printouts, or stippled. Engraving artists may rely on hand drawing skills, copyright-free designs and images, computer-generated artwork, or common design elements when creating artwork.
25
+
26
+ Originally, handpieces varied little in design as the common use was to push with the handle placed firmly in the center of the palm. With modern pneumatic engraving systems, handpieces are designed and created in a variety of shapes and power ranges. Handpieces are made using various methods and materials. Knobs may be handmade from wood, molded and engineered from plastic, or machine-made from brass, steel, or other metals. The most widely known hand engraving tool maker, GRS Tools in Kansas is an American-owned and operated company that manufacture handpieces as well as many other tools for various applications in metal engraving.
27
+
28
+ The actual engraving is traditionally done by a combination of pressure and manipulating the work-piece. The traditional "hand push" process is still practiced today, but modern technology has brought various mechanically assisted engraving systems. Most pneumatic engraving systems require an air source that drives air through a hose into a handpiece, which resembles a traditional engraving handle in many cases, that powers a mechanism (usually a piston). The air is actuated by either a foot control (like a gas pedal or sewing machine) or newer palm / hand control. This mechanism replaces either the "hand push" effort or the effects of a hammer. The internal mechanisms move at speeds up to 15,000 strokes per minute, thereby greatly reducing the effort needed in traditional hand engraving. These types of pneumatic systems are used for power assistance only and do not guide or control the engraving artist. One of the major benefits of using a pneumatic system for hand engraving is the reduction of fatigue and decrease in time spent working.
Hand engraving artists today employ a combination of hand push, pneumatic, rotary, or hammer and chisel methods. Hand push is still commonly used by modern hand engraving artists who create "bulino"-style work, which is highly detailed, delicate, fine work; the great majority, if not all, of traditional printmakers today rely solely on hand push methods. Pneumatic systems greatly reduce the effort required for removing large amounts of metal, such as in deep relief engraving or Western bright cut techniques.

Finishing the work is often necessary when working in a metal that may rust or where a colored finish is desirable, such as on a firearm. A variety of spray lacquers and finishing techniques exist to seal and protect the work from exposure to the elements and time. Finishing may also include lightly sanding the surface to remove small chips of metal called "burrs", which are very sharp and unsightly. Some engravers prefer to give the work or design high contrast, using black paints or inks to darken the removed (and lower) areas of exposed metal. The excess paint or ink is wiped away and allowed to dry before lacquering or sealing, which may or may not be desired by the artist.

Because of the high level of microscopic detail that can be achieved by a master engraver, counterfeiting of engraved designs is well-nigh impossible, and modern banknotes are almost always engraved, as are plates for printing money, checks, bonds and other security-sensitive papers. The engraving is so fine that a normal printer cannot recreate the detail of hand-engraved images, nor can it be scanned. At the Bureau of Engraving and Printing, more than one hand engraver will work on the same plate, making it nearly impossible for one person to duplicate all the engraving on a particular banknote or document.

The modern discipline of hand engraving, as it is called in a metalworking context, survives largely in a few specialized fields. The highest levels of the art are found on firearms and other metal weaponry, jewellery, and musical instruments.

In most commercial markets today, hand engraving has been replaced with milling using CNC engraving or milling machines. Still, there are certain applications where the use of hand engraving tools cannot be replaced.

In some instances, images or designs can be transferred to metal surfaces by a mechanical process. One such process is roll stamping or roller-die engraving, in which a hardened image die is pressed against the destination surface using extreme pressure to impart the image. In the 1800s, pistol cylinders were often decorated by this process to impart a continuous scene around their surface.

Engraving machines such as the K500 (packaging) or K6 (publication) by Hell Gravure Systems use a diamond stylus to cut cells. Each cell creates one printing dot later in the process. A K6 can have up to 18 engraving heads, each cutting 8,000 cells per second to an accuracy of 0.1 µm and below. They are fully computer-controlled, and the whole process of cylinder-making is fully automated.
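As a rough sense of the throughput these figures imply, the sketch below multiplies the cited head count and cutting rate; the cylinder dimensions and screen density are hypothetical values chosen purely for illustration (Python):

    # Back-of-the-envelope throughput for a rotogravure engraving machine,
    # using the K6 figures cited above (18 heads, 8,000 cells/s per head).
    # The cylinder size and cell density below are assumptions, not specs.
    HEADS = 18
    CELLS_PER_SECOND_PER_HEAD = 8_000

    # Hypothetical cylinder: 1.0 m circumference x 3.0 m face width,
    # engraved at 70 cells per square millimetre (assumed density).
    area_mm2 = 1_000 * 3_000
    cells_total = area_mm2 * 70

    seconds = cells_total / (HEADS * CELLS_PER_SECOND_PER_HEAD)
    print(f"{cells_total:,} cells -> {seconds / 60:.1f} minutes")
    # 210,000,000 cells -> 24.3 minutes

Under these assumptions a full cylinder takes on the order of half an hour, which is consistent with the process being fully automated end to end.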
It is now commonplace for retail stores (mostly jewellery, silverware or award stores) to have a small computer-controlled engraving machine on site, which enables them to personalise the products they sell. Retail engraving machines tend to be focused on ease of use for the operator and the ability to handle a wide variety of items, including flat metal plates, jewelry of different shapes and sizes, and cylindrical items such as mugs and tankards. They are typically equipped with a computer dedicated to graphic design that enables the operator to easily design a text or picture graphic, which the software translates into digital signals telling the engraving machine what to do. Unlike industrial engravers, retail machines are smaller and use only one diamond head. This head is interchangeable, so the operator can use differently shaped diamonds for different finishing effects. They can typically handle a variety of metals and plastics. Glass and crystal engraving is possible, but the brittle nature of the material makes the process more time-consuming.

Retail engravers mainly use two different processes. The first and most common, "diamond drag", pushes the diamond cutter through the surface of the material and then pulls it along to create scratches; the direction and depth are controlled by the computer input. The second is the "spindle cutter". This is similar to diamond drag, but the engraving head is shaped in a flat V, with a small diamond at the base. The machine uses an electronic spindle to quickly rotate the head as it pushes it into the material, then pulls it along while it continues to spin. This creates a much bolder impression than diamond drag. It is used mainly for brass plaques and pet tags.

With state-of-the-art machinery it is easy to have a simple, single item completed in under ten minutes. Engraving with diamonds has been state of the art since the 1960s.

Laser engraving machines are in development, but mechanical cutting has so far proven its strength in terms of economy and quality. More than 4,000 engravers make approximately 8 million printing cylinders worldwide per year.

For the printing process, see intaglio (printmaking). For the Western art history of engraved prints, see old master print and line engraving.
The first evidence for humans engraving patterns is a chiselled shell, dating back between 540,000 and 430,000 years, from Trinil in Java, Indonesia, where the first Homo erectus was discovered.[3] Hatched banding upon ostrich eggshells used as water containers, found in South Africa in the Diepkloof Rock Shelter and dated to the Middle Stone Age around 60,000 BC, is the next documented case of human engraving.[4] Engraving on bone and ivory is an important technique for the art of the Upper Paleolithic, and larger engraved petroglyphs on rocks are found from many prehistoric periods and cultures around the world.

In antiquity, the only engraving on metal that could be carried out was the shallow grooves found in some jewellery after the beginning of the 1st millennium BC. The majority of so-called engraved designs on ancient gold rings or other items were produced by chasing, or sometimes a combination of lost-wax casting and chasing. Engraved gem is a term for any carved or engraved semi-precious stone; this was an important small-scale art form in the ancient world, and remained popular until the 19th century.

However, the use of glass engraving, usually with a wheel, to cut decorative scenes or figures into glass vessels in imitation of hardstone carvings appears as early as the first century AD,[5] continuing into the fourth century AD at urban centers such as Cologne and Rome,[6] and appears to have ceased sometime in the fifth century. Decoration was first based on Greek mythology, before hunting and circus scenes became popular, along with imagery drawn from the Old and New Testaments.[6] It appears to have been used to mimic the appearance of precious metal wares during the same period, including the application of gold leaf, and could be cut free-hand or with lathes. As many as twenty separate stylistic workshops have been identified, and it seems likely that the engraver and vessel producer were separate craftsmen.[5]

In the European Middle Ages goldsmiths used engraving to decorate and inscribe metalwork. It is thought that they began to print impressions of their designs to record them. From this grew the engraving of copper printing plates to produce artistic images on paper, known as old master prints, in Germany in the 1430s. Italy soon followed. Many early engravers came from a goldsmithing background. The first and greatest period of engraving was from about 1470 to 1530, with such masters as Martin Schongauer,[7] Albrecht Dürer, and Lucas van Leyden.

Thereafter engraving tended to lose ground to etching, which was a much easier technique for the artist to learn. But many prints combined the two techniques: although Rembrandt's prints are generally all called etchings for convenience, many of them have some burin or drypoint work, and some have nothing else. By the nineteenth century, most engraving was for commercial illustration.

Before the advent of photography, engraving was used to reproduce other forms of art, for example paintings. Engravings continued to be common in newspapers and many books into the early 20th century, as they were cheaper to use in printing than photographic images.
Many classic postage stamps were engraved, although the practice is now mostly confined to particular countries, or used when a more "elegant" design is desired and a limited color range is acceptable.

Modifying the relief designs on coins is a craft dating back to the 18th century, and today modified coins are known colloquially as hobo nickels. In the United States, especially during the Great Depression, coin engraving on the large-faced Indian Head nickel became a way to help make ends meet. The craft continues today, and with modern equipment often produces stunning miniature sculptural artworks and floral scrollwork.[8]

During the mid-20th century, a renaissance in hand engraving began to take place. With the invention of pneumatic hand-engraving systems that aided hand engravers, the art and techniques of hand engraving became more accessible.

The first music printed from engraved plates dates from 1446, and most printed music was produced through engraving from roughly 1700 to 1860. From 1860 to 1990 most printed music was produced from engraved master plates reproduced through offset lithography.

The first comprehensive account is given by Mme Delusse in her article "Gravure en lettres, en géographie et en musique" in Diderot's Encyclopédie. The technique involved a five-pointed raster to score staff lines, various punches in the shapes of notes and standard musical symbols, and various burins and scorers for lines and slurs. For corrections, the plate was held on a bench by callipers, hit with a dot punch on the opposite side, and burnished to remove any signs of the defective work. The process involved intensive pre-planning of the layout, and many manuscript scores with engraver's planning marks survive from the 18th and 19th centuries.[9]

By 1837 pewter had replaced copper as a medium, and Berthiaud gives an account with an entire chapter devoted to music (Nouveau manuel complet de l'imprimeur en taille-douce, 1837). Printing from such plates required a separate inking carried out cold, and the printing press used less pressure. Generally, four pages of music were engraved on a single plate. Because music engraving houses trained engravers through years of apprenticeship, very little is known about the practice. Fewer than one dozen sets of tools survive in libraries and museums.[10] By 1900 music engravers were established in several hundred cities in the world, but the storage of plates was usually concentrated with publishers. Extensive bombing in 1944 of Leipzig, the home of most German engraving and printing firms, destroyed roughly half the world's engraved music plates.
Examples of contemporary uses for engraving include creating text on jewellery, such as pendants or the inside of engagement and wedding rings, to include text such as the name of the partner, or adding a winner's name to a sports trophy. Another application of modern engraving is found in the printing industry, where every day thousands of pages are mechanically engraved onto rotogravure cylinders, typically a steel base with a copper layer of about 0.1 mm into which the image is transferred. After engraving, the image is protected with an approximately 6 µm chrome layer. Using this process, the image will survive for over a million copies in high-speed printing presses.

Some schools throughout the world are renowned for their teaching of engraving, like the École Estienne in Paris.

In traditional engraving, which is a purely linear medium, the impression of half-tones was created by making many very thin parallel lines, a technique called hatching. When two sets of parallel-line hatchings intersected each other for higher density, the resulting pattern was known as cross-hatching. Patterns of dots were also used in a technique called stippling, first used around 1505 by Giulio Campagnola. Claude Mellan was one of many 17th-century engravers with a very well-developed technique of using parallel lines of varying thickness (known as the "swelling line") to give subtle effects of tone, as was Goltzius. One famous example is Mellan's Sudarium of Saint Veronica (1649), an engraving of the face of Jesus made from a single spiraling line that starts at the tip of Jesus's nose.

The earliest allusion to engraving in the Bible may be the reference to Judah's seal ring (Genesis 38:18), followed by Exodus 39:30. Engraving was commonly done with pointed tools of iron or even with diamond points (Jeremiah 17:1).

Each of the two onyx stones on the shoulder-pieces of the high priest's ephod was engraved with the names of six different tribes of Israel, and each of the 12 precious stones that adorned his breastpiece was engraved with the name of one of the tribes. The holy sign of dedication, the shining gold plate on the high priest's turban, was engraved with the words: "Holiness belongs to Adonai." Bezalel, along with Oholiab, was qualified to do this specialized engraving work as well as to train others.—Ex 35:30–35; 28:9–12; 39:6–14, 30.
en/2274.html.txt ADDED
@@ -0,0 +1,41 @@
A rock is any naturally occurring solid mass or aggregate of minerals or mineraloid matter. It is categorized by the minerals included, its chemical composition, and the way in which it is formed. Rocks are usually grouped into three main groups: igneous rocks, metamorphic rocks and sedimentary rocks. Rocks form the Earth's outer solid layer, the crust.

Igneous rocks are formed when magma cools in the Earth's crust, or lava cools on the ground surface or the seabed. Metamorphic rocks are formed when existing rocks are subjected to such large pressures and temperatures that they are transformed—something that occurs, for example, when continental plates collide. Sedimentary rocks are formed by diagenesis or lithification of sediments, which in turn are formed by the weathering, transport, and deposition of existing rocks.[1]

The scientific study of rocks is called petrology, which is an essential component of geology.[2]

Rocks are composed of grains of minerals, which are homogeneous solids formed from a chemical compound arranged in an orderly manner.[3][page needed] The aggregate minerals forming the rock are held together by chemical bonds. The types and abundance of minerals in a rock are determined by the manner in which it was formed.

Most rocks contain silicate minerals, compounds that include silicon oxide tetrahedra in their crystal lattice and account for about one-third of all known mineral species and about 95% of the Earth's crust.[4] The proportion of silica in rocks and minerals is a major factor in determining their names and properties.[5]

Rocks are classified according to characteristics such as mineral and chemical composition, permeability, texture of the constituent particles, and particle size. These physical properties are the result of the processes that formed the rocks.[6] Over the course of time, rocks can transform from one type into another, as described by a geological model called the rock cycle. This transformation produces three general classes of rock: igneous, sedimentary and metamorphic.

Those three classes are subdivided into many groups. There are, however, no hard-and-fast boundaries between allied rocks. By increase or decrease in the proportions of their minerals, they pass through gradations from one to the other; the distinctive structures of one kind of rock may thus be traced gradually merging into those of another. Hence the definitions adopted in rock names simply correspond to selected points in a continuously graduated series.[7]

Igneous rock (derived from the Latin word igneus, meaning of fire, from ignis meaning fire) is formed through the cooling and solidification of magma or lava. This magma may be derived from partial melts of pre-existing rocks in either a planet's mantle or crust. Typically, the melting of rocks is caused by one or more of three processes: an increase in temperature, a decrease in pressure, or a change in composition.

Igneous rocks are divided into two main categories: plutonic (intrusive) rock, which forms when magma cools slowly within the Earth's crust, and volcanic (extrusive) rock, which forms when magma reaches the surface as lava and cools quickly.

The chemical abundance and the rate of cooling of magma typically form a sequence known as Bowen's reaction series. Most major igneous rocks are found along this scale.[5]

About 65% of the Earth's crust by volume consists of igneous rocks, making it the most plentiful category. Of these, 66% are basalt and gabbro, 16% are granite, and 17% granodiorite and diorite. Only 0.6% are syenite and 0.3% are ultramafic. The oceanic crust is 99% basalt, which is an igneous rock of mafic composition. Granite and similar rocks, known as granitoids, dominate the continental crust.[8][9]
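Since igneous rocks are quoted as about 65% of the crust and the percentages above are shares within that igneous portion, the two can be multiplied to estimate each family's share of the whole crust; a minimal sketch in Python using only the figures quoted above:

    # Share of the whole crust, by volume, for each igneous rock family.
    # Combines the figures quoted above: igneous rocks are ~65% of the
    # crust, and each fraction below is that family's share of the
    # igneous part, so the product is its share of the whole crust.
    igneous_share = 0.65
    within_igneous = {
        "basalt and gabbro": 0.66,
        "granite": 0.16,
        "granodiorite and diorite": 0.17,
        "syenite": 0.006,
        "ultramafic": 0.003,
    }
    for rock, frac in within_igneous.items():
        print(f"{rock}: {igneous_share * frac:.1%} of the crust")
    # e.g. basalt and gabbro: 42.9% of the crust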
Sedimentary rocks are formed at the Earth's surface by the accumulation and cementation of fragments of earlier rocks, minerals, and organisms[10] or as chemical precipitates and organic growths in water (sedimentation). This process causes clastic sediments (pieces of rock) or organic particles (detritus) to settle and accumulate, or minerals to precipitate chemically (evaporite) from a solution. The particulate matter then undergoes compaction and cementation at moderate temperatures and pressures (diagenesis).

Before being deposited, sediments are formed by weathering of earlier rocks by erosion in a source area and then transported to the place of deposition by water, wind, ice, mass movement or glaciers (agents of denudation).[6] About 7.9% of the crust by volume is composed of sedimentary rocks, with 82% of those being shales, while the remainder consists of limestone (6%) and sandstone and arkoses (12%).[9] Sedimentary rocks often contain fossils. They form under the influence of gravity and typically are deposited in horizontal or near-horizontal layers or strata, and may be referred to as stratified rocks.[11]

Metamorphic rocks are formed by subjecting any rock type—sedimentary rock, igneous rock or another older metamorphic rock—to different temperature and pressure conditions than those in which the original rock was formed. This process is called metamorphism, meaning "change in form". The result is a profound change in the physical properties and chemistry of the stone. The original rock, known as the protolith, transforms into other mineral types or other forms of the same minerals, by recrystallization.[6] The temperatures and pressures required for this process are always higher than those found at the Earth's surface: temperatures greater than 150 to 200 °C and pressures of 1,500 bars.[12] Metamorphic rocks compose 27.4% of the crust by volume.[9]

The three major classes of metamorphic rock are based upon the formation mechanism. An intrusion of magma that heats the surrounding rock causes contact metamorphism—a temperature-dominated transformation. Pressure metamorphism occurs when sediments are buried deep under the ground; pressure is dominant, and temperature plays a smaller role. This is termed burial metamorphism, and it can result in rocks such as jade. Where both heat and pressure play a role, the mechanism is termed regional metamorphism. This is typically found in mountain-building regions.[5]

Depending on the structure, metamorphic rocks are divided into two general categories. Those that possess a layered or banded texture are referred to as foliated; the remainder are termed non-foliated. The name of the rock is then determined based on the types of minerals present. Schists are foliated rocks that are primarily composed of lamellar minerals such as micas. A gneiss has visible bands of differing lightness, a common example being the granite gneiss. Other varieties of foliated rock include slates, phyllites, and mylonite. Familiar examples of non-foliated metamorphic rocks include marble, soapstone, and serpentine. This branch contains quartzite—a metamorphosed form of sandstone—and hornfels.[5]

The use of rock has had a huge impact on the cultural and technological development of the human race. Rock has been used by humans and other hominids for at least 2.5 million years.[13] Lithic technology marks some of the oldest and most continuously used technologies. The mining of rock for its metal content has been one of the most important factors of human advancement, and has progressed at different rates in different places, in part because of the kinds of metals available from the rock of a region.

Mining is the extraction of valuable minerals or other geological materials from the earth, from an ore body, vein or seam.[14] The term also includes the removal of soil. Materials recovered by mining include base metals, precious metals, iron, uranium, coal, diamonds, limestone, oil shale, rock salt, potash, construction aggregate and dimension stone. Mining is required to obtain any material that cannot be grown through agricultural processes or created artificially in a laboratory or factory. Mining in a wider sense comprises the extraction of any resource (e.g. petroleum, natural gas, salt or even water) from the earth.[15]

Mining of rock and metals has been done since prehistoric times. Modern mining processes involve prospecting for mineral deposits, analysis of the profit potential of a proposed mine, extraction of the desired materials, and finally reclamation of the land to prepare it for other uses once mining ceases.[16]

Mining processes may create negative impacts on the environment, both during the mining operations and for years after mining has ceased. These potential impacts have led most of the world's nations to adopt regulations to manage the negative effects of mining operations.[17]
en/2275.html.txt ADDED
@@ -0,0 +1,139 @@
Gravity (from Latin gravitas, meaning 'weight'[1]), or gravitation, is a natural phenomenon by which all things with mass or energy—including planets, stars, galaxies, and even light[2]—are brought toward (or gravitate toward) one another. On Earth, gravity gives weight to physical objects, and the Moon's gravity causes the ocean tides. The gravitational attraction of the original gaseous matter present in the Universe caused it to begin coalescing and forming stars, and caused the stars to group together into galaxies, so gravity is responsible for many of the large-scale structures in the Universe. Gravity has an infinite range, although its effects become increasingly weaker as objects get farther away.

Gravity is most accurately described by the general theory of relativity (proposed by Albert Einstein in 1915), which describes gravity not as a force, but as a consequence of the curvature of spacetime caused by the uneven distribution of mass. The most extreme example of this curvature of spacetime is a black hole, from which nothing—not even light—can escape once past the black hole's event horizon.[3] However, for most applications, gravity is well approximated by Newton's law of universal gravitation, which describes gravity as a force causing any two bodies to be attracted to each other, with the force proportional to the product of their masses and inversely proportional to the square of the distance between them.

Gravity is the weakest of the four fundamental interactions of physics, approximately 10^38 times weaker than the strong interaction, 10^36 times weaker than the electromagnetic force and 10^29 times weaker than the weak interaction. As a consequence, it has no significant influence at the level of subatomic particles.[4] In contrast, it is the dominant interaction at the macroscopic scale, and is the cause of the formation, shape and trajectory (orbit) of astronomical bodies.

The earliest instance of gravity in the Universe, possibly in the form of quantum gravity, supergravity or a gravitational singularity, along with ordinary space and time, developed during the Planck epoch (up to 10^-43 seconds after the birth of the Universe), possibly from a primeval state, such as a false vacuum, quantum vacuum or virtual particle, in a currently unknown manner.[5] Attempts to develop a theory of gravity consistent with quantum mechanics, a quantum gravity theory, which would allow gravity to be united in a common mathematical framework (a theory of everything) with the other three fundamental interactions of physics, are a current area of research.

The ancient Greek philosopher Archimedes discovered the center of gravity of a triangle.[6] He also postulated that if two equal weights did not have the same center of gravity, the center of gravity of the two weights together would be in the middle of the line that joins their centers of gravity.[7]

The Roman architect and engineer Vitruvius in De Architectura postulated that the gravity of an object did not depend on weight but on its "nature".[8]

In ancient India, Aryabhata first identified the force to explain why objects are not thrown outward as the Earth rotates. Brahmagupta described gravity as an attractive force and used the term "gurutvaakarshan" for gravity.[9][10]

Modern work on gravitational theory began with the work of Galileo Galilei in the late 16th and early 17th centuries. In his famous (though possibly apocryphal[11]) experiment dropping balls from the Tower of Pisa, and later with careful measurements of balls rolling down inclines, Galileo showed that gravitational acceleration is the same for all objects. This was a major departure from Aristotle's belief that heavier objects have a higher gravitational acceleration.[12] Galileo postulated air resistance as the reason that objects with less mass fall more slowly in an atmosphere. Galileo's work set the stage for the formulation of Newton's theory of gravity.[13]

In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. In his own words, "I deduced that the forces which keep the planets in their orbs must [be] reciprocally as the squares of their distances from the centers about which they revolve: and thereby compared the force requisite to keep the Moon in her Orb with the force of gravity at the surface of the Earth; and found them answer pretty nearly."[14] The equation is the following:

F = G \frac{m_1 m_2}{r^2}
where F is the force, m1 and m2 are the masses of the objects interacting, r is the distance between the centers of the masses, and G is the gravitational constant.
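As a worked example of the formula, the sketch below evaluates the force between the Earth and the Moon; G, the masses, and the mean distance are standard reference values supplied here for illustration, not taken from this article (Python):

    # Newton's law of universal gravitation, F = G*m1*m2 / r^2,
    # evaluated for the Earth-Moon pair using standard reference values.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    m_earth = 5.972e24   # kg
    m_moon = 7.342e22    # kg
    r = 3.844e8          # m, mean Earth-Moon distance

    F = G * m_earth * m_moon / r**2
    print(f"Earth-Moon gravitational force: {F:.3e} N")  # ~1.98e20 N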
Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets. Calculations by both John Couch Adams and Urbain Le Verrier predicted the general position of the planet, and Le Verrier's calculations are what led Johann Gottfried Galle to the discovery of Neptune.

A discrepancy in Mercury's orbit pointed out flaws in Newton's theory. By the end of the 19th century, it was known that its orbit showed slight perturbations that could not be accounted for entirely under Newton's theory, but all searches for another perturbing body (such as a planet orbiting the Sun even closer than Mercury) had been fruitless. The issue was resolved in 1915 by Albert Einstein's new theory of general relativity, which accounted for the small discrepancy in Mercury's orbit: an advance in the perihelion of 42.98 arcseconds per century.[15]

Although Newton's theory has been superseded by Albert Einstein's general relativity, most modern non-relativistic gravitational calculations are still made using Newton's theory because it is simpler to work with and gives sufficiently accurate results for most applications involving sufficiently small masses, speeds and energies.

The equivalence principle, explored by a succession of researchers including Galileo, Loránd Eötvös, and Einstein, expresses the idea that all objects fall in the same way, and that the effects of gravity are indistinguishable from certain aspects of acceleration and deceleration. The simplest way to test the weak equivalence principle is to drop two objects of different masses or compositions in a vacuum and see whether they hit the ground at the same time. Such experiments demonstrate that all objects fall at the same rate when other forces (such as air resistance and electromagnetic effects) are negligible. More sophisticated tests use a torsion balance of a type invented by Eötvös. Satellite experiments, for example STEP, are planned for more accurate experiments in space.[16]

Formulations of the equivalence principle include the weak equivalence principle, the Einsteinian equivalence principle, and the strong equivalence principle.

In general relativity, the effects of gravitation are ascribed to spacetime curvature instead of a force. The starting point for general relativity is the equivalence principle, which equates free fall with inertial motion and describes free-falling inertial objects as being accelerated relative to non-inertial observers on the ground.[19][20] In Newtonian physics, however, no such acceleration can occur unless at least one of the objects is being acted upon by a force.

Einstein proposed that spacetime is curved by matter, and that free-falling objects are moving along locally straight paths in curved spacetime. These straight paths are called geodesics. Like Newton's first law of motion, Einstein's theory states that if a force is applied on an object, it would deviate from a geodesic. For instance, we are no longer following geodesics while standing because the mechanical resistance of the Earth exerts an upward force on us, and we are non-inertial on the ground as a result. This explains why moving along geodesics in spacetime is considered inertial.

Einstein discovered the field equations of general relativity, which relate the presence of matter and the curvature of spacetime and are named after him. The Einstein field equations are a set of 10 simultaneous, non-linear differential equations. The solutions of the field equations are the components of the metric tensor of spacetime, which describes the geometry of spacetime. The geodesic paths for a spacetime are calculated from the metric tensor.

Notable solutions of the Einstein field equations include the Schwarzschild solution, the Reissner–Nordström solution, the Kerr solution, the Kerr–Newman solution, and the Friedmann–Lemaître–Robertson–Walker cosmological solution.

The tests of general relativity included the following:[21] the anomalous perihelion precession of Mercury, the deflection of light by the Sun, and the gravitational redshift of light.

An open question is whether it is possible to describe the small-scale interactions of gravity within the same framework as quantum mechanics. General relativity describes large-scale bulk properties, whereas quantum mechanics is the framework for describing the smallest-scale interactions of matter. Without modifications these frameworks are incompatible.[29]
76
+
77
+ One path is to describe gravity in the framework of quantum field theory, which has been successful to accurately describe the other fundamental interactions. The electromagnetic force arises from an exchange of virtual photons, where the QFT description of gravity is that there is an exchange of virtual gravitons.[30][31] This description reproduces general relativity in the classical limit. However, this approach fails at short distances of the order of the Planck length,[29] where a more complete theory of quantum gravity (or a new approach to quantum mechanics) is required.
78
+
79
+ Every planetary body (including the Earth) is surrounded by its own gravitational field, which can be conceptualized with Newtonian physics as exerting an attractive force on all objects. Assuming a spherically symmetrical planet, the strength of this field at any given point above the surface is proportional to the planetary body's mass and inversely proportional to the square of the distance from the center of the body.
80
+
81
+ The strength of the gravitational field is numerically equal to the acceleration of objects under its influence.[32] The rate of acceleration of falling objects near the Earth's surface varies very slightly depending on latitude, surface features such as mountains and ridges, and perhaps unusually high or low sub-surface densities.[33] For purposes of weights and measures, a standard gravity value is defined by the International Bureau of Weights and Measures, under the International System of Units (SI).
82
+
83
+ That value, denoted g, is g = 9.80665 m/s2 (32.1740 ft/s2).[34][35]
84
+
85
+ The standard value of 9.80665 m/s2 is the one originally adopted by the International Committee on Weights and Measures in 1901 for 45° latitude, even though it has been shown to be too high by about five parts in ten thousand.[36] This value has persisted in meteorology and in some standard atmospheres as the value for 45° latitude even though it applies more precisely to latitude of 45°32'33".[37]
86
+
87
+ Assuming the standardized value for g and ignoring air resistance, this means that an object falling freely near the Earth's surface increases its velocity by 9.80665 m/s (32.1740 ft/s or 22 mph) for each second of its descent. Thus, an object starting from rest will attain a velocity of 9.80665 m/s (32.1740 ft/s) after one second, approximately 19.62 m/s (64.4 ft/s) after two seconds, and so on, adding 9.80665 m/s (32.1740 ft/s) to each resulting velocity. Also, again ignoring air resistance, any and all objects, when dropped from the same height, will hit the ground at the same time.
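A minimal sketch of this accumulation, using the standard value of g quoted above (Python):

    # Velocity of a body falling from rest near the Earth's surface,
    # ignoring air resistance: v = g * t with the standard g.
    g = 9.80665  # m/s^2

    for t in range(1, 5):
        v = g * t
        print(f"after {t} s: v = {v:.2f} m/s")
    # after 1 s: 9.81 m/s; after 2 s: 19.61 m/s; and so on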
88
+
89
+ According to Newton's 3rd Law, the Earth itself experiences a force equal in magnitude and opposite in direction to that which it exerts on a falling object. This means that the Earth also accelerates towards the object until they collide. Because the mass of the Earth is huge, however, the acceleration imparted to the Earth by this opposite force is negligible in comparison to the object's. If the object does not bounce after it has collided with the Earth, each of them then exerts a repulsive contact force on the other which effectively balances the attractive force of gravity and prevents further acceleration.
90
+
91
+ The force of gravity on Earth is the resultant (vector sum) of two forces:[38] (a) The gravitational attraction in accordance with Newton's universal law of gravitation, and (b) the centrifugal force, which results from the choice of an earthbound, rotating frame of reference. The force of gravity is weakest at the equator because of the centrifugal force caused by the Earth's rotation and because points on the equator are furthest from the center of the Earth. The force of gravity varies with latitude and increases from about 9.780 m/s2 at the Equator to about 9.832 m/s2 at the poles.
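A common closed-form approximation of this latitude dependence is the International Gravity Formula; the 1980-era coefficients below are quoted from standard references as an assumption for illustration, and they reproduce the endpoint values given above (Python):

    import math

    # International Gravity Formula (1980 coefficients, quoted here as
    # an assumption): normal gravity at sea level versus latitude.
    def g_of_latitude(lat_deg: float) -> float:
        s = math.sin(math.radians(lat_deg))
        s2 = math.sin(2 * math.radians(lat_deg))
        return 9.780327 * (1 + 0.0053024 * s**2 - 0.0000058 * s2**2)

    print(f"equator: {g_of_latitude(0):.4f} m/s^2")   # ~9.7803
    print(f"poles:   {g_of_latitude(90):.4f} m/s^2")  # ~9.8322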
92
+
93
+ Under an assumption of constant gravitational attraction, Newton's law of universal gravitation simplifies to F = mg, where m is the mass of the body and g is a constant vector with an average magnitude of 9.81 m/s2 on Earth. This resulting force is the object's weight. The acceleration due to gravity is equal to this g. An initially stationary object which is allowed to fall freely under gravity drops a distance which is proportional to the square of the elapsed time. The image on the right, spanning half a second, was captured with a stroboscopic flash at 20 flashes per second. During the first ​1⁄20 of a second the ball drops one unit of distance (here, a unit is about 12 mm); by ​2⁄20 it has dropped at total of 4 units; by ​3⁄20, 9 units and so on.
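The 1-4-9 pattern follows from d = (1/2) g t²; a short check, with one "unit" defined as the distance fallen in the first 1/20 s (Python):

    # Distance fallen from rest grows with the square of elapsed time:
    # d = 0.5 * g * t^2. This reproduces the 1-4-9 strobe pattern above.
    g = 9.80665
    dt = 1 / 20
    unit = 0.5 * g * dt**2  # ~12.3 mm, matching "about 12 mm" above

    for n in range(1, 4):
        d = 0.5 * g * (n * dt)**2
        print(f"after {n}/20 s: {d / unit:.0f} unit(s), {d * 1000:.1f} mm")
    # after 1/20 s: 1 unit; 2/20 s: 4 units; 3/20 s: 9 units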
Under the same constant-gravity assumptions, the potential energy, Ep, of a body at height h is given by Ep = mgh (or Ep = Wh, with W meaning weight). This expression is valid only over small distances h from the surface of the Earth. Similarly, the expression

h = \frac{v^2}{2g}
for the maximum height reached by a vertically projected body with initial velocity v is useful only for small heights and small initial velocities.
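A worked example of this maximum-height formula for a few assumed launch speeds (Python):

    # Maximum height of a vertically projected body, h = v^2 / (2g),
    # valid for small heights where g can be treated as constant.
    g = 9.80665

    for v in (5.0, 10.0, 20.0):  # assumed initial speeds, m/s
        h = v**2 / (2 * g)
        print(f"v = {v:4.1f} m/s -> h = {h:5.2f} m")
    # e.g. v = 10 m/s gives h of roughly 5.1 m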
The application of Newton's law of gravity has enabled the acquisition of much of the detailed information we have about the planets in the Solar System, the mass of the Sun, and details of quasars; even the existence of dark matter is inferred using Newton's law of gravity. Although we have not traveled to all the planets nor to the Sun, we know their masses, which are obtained by applying the laws of gravity to the measured characteristics of their orbits. In space an object maintains its orbit because of the force of gravity acting upon it. Planets orbit stars, stars orbit galactic centers, galaxies orbit a center of mass in clusters, and clusters orbit in superclusters. The force of gravity exerted on one object by another is directly proportional to the product of those objects' masses and inversely proportional to the square of the distance between them.
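One way such masses are obtained: for a body in a (nearly) circular orbit, equating the gravitational and centripetal forces gives M = 4π²r³/(GT²). The sketch below applies this to the Earth's orbit to estimate the Sun's mass; the orbital radius and period are standard values assumed here for illustration (Python):

    import math

    # Estimating the Sun's mass from the Earth's orbit: for a circular
    # orbit, G*M*m/r^2 = m*(2*pi/T)^2 * r, so M = 4*pi^2*r^3 / (G*T^2).
    G = 6.674e-11            # m^3 kg^-1 s^-2
    r = 1.496e11             # m, mean Earth-Sun distance (1 AU)
    T = 365.25 * 24 * 3600   # s, orbital period (one year)

    M_sun = 4 * math.pi**2 * r**3 / (G * T**2)
    print(f"Mass of the Sun: {M_sun:.3e} kg")  # ~1.99e30 kg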
The earliest gravity (possibly in the form of quantum gravity, supergravity or a gravitational singularity), along with ordinary space and time, developed during the Planck epoch (up to 10^-43 seconds after the birth of the Universe), possibly from a primeval state (such as a false vacuum, quantum vacuum or virtual particle), in a currently unknown manner.[5]

General relativity predicts that energy can be transported out of a system through gravitational radiation. Any accelerating matter can create curvatures in the space-time metric, which is how gravitational radiation is transported away from the system. Co-orbiting objects can generate curvatures in space-time, as in the Earth-Sun system, pairs of neutron stars, and pairs of black holes. Another astrophysical system predicted to lose energy in the form of gravitational radiation is an exploding supernova.

The first indirect evidence for gravitational radiation came through measurements of the Hulse–Taylor binary in 1973. This system consists of a pulsar and a neutron star in orbit around one another. Its orbital period has decreased since its initial discovery due to a loss of energy, which is consistent with the amount of energy loss expected from gravitational radiation. This research was awarded the Nobel Prize in Physics in 1993.

The first direct evidence for gravitational radiation was measured on 14 September 2015 by the LIGO detectors, which recorded the gravitational waves emitted during the collision of two black holes 1.3 billion light-years from Earth.[40][41] This observation confirms the theoretical predictions of Einstein and others that such waves exist. It also opens the way for practical observation and understanding of the nature of gravity and events in the Universe, including the Big Bang.[42] Neutron star and black hole formation also create detectable amounts of gravitational radiation.[43] This research was awarded the Nobel Prize in Physics in 2017.[44]

As of 2020[update], the gravitational radiation emitted by the Solar System is far too small to measure with current technology.

In December 2012, a research team in China announced that it had produced measurements of the phase lag of Earth tides during full and new moons which seem to prove that the speed of gravity is equal to the speed of light.[45] This means that if the Sun suddenly disappeared, the Earth would keep orbiting it normally for 8 minutes, which is the time light takes to travel that distance. The team's findings were released in the Chinese Science Bulletin in February 2013.[46]

In October 2017, the LIGO and Virgo detectors received gravitational wave signals within 2 seconds of gamma-ray satellites and optical telescopes seeing signals from the same direction. This confirmed that the speed of gravitational waves was the same as the speed of light.[47]

There are some observations that are not adequately accounted for, which may point to the need for better theories of gravity or perhaps be explained in other ways.
en/2276.html.txt ADDED
@@ -0,0 +1,139 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Gravity (from Latin gravitas, meaning 'weight'[1]), or gravitation, is a natural phenomenon by which all things with mass or energy—including planets, stars, galaxies, and even light[2]—are brought toward (or gravitate toward) one another. On Earth, gravity gives weight to physical objects, and the Moon's gravity causes the ocean tides. The gravitational attraction of the original gaseous matter present in the Universe caused it to begin coalescing and forming stars and caused the stars to group together into galaxies, so gravity is responsible for many of the large-scale structures in the Universe. Gravity has an infinite range, although its effects become increasingly weaker as objects get further away.
4
+
5
+ Gravity is most accurately described by the general theory of relativity (proposed by Albert Einstein in 1915), which describes gravity not as a force, but as a consequence of the curvature of spacetime caused by the uneven distribution of mass. The most extreme example of this curvature of spacetime is a black hole, from which nothing—not even light—can escape once past the black hole's event horizon.[3] However, for most applications, gravity is well approximated by Newton's law of universal gravitation, which describes gravity as a force, which causes any two bodies to be attracted to each other, with the force proportional to the product of their masses and inversely proportional to the square of the distance between them.
6
+
7
+ Gravity is the weakest of the four fundamental interactions of physics, approximately 1038 times weaker than the strong interaction, 1036 times weaker than the electromagnetic force and 1029 times weaker than the weak interaction. As a consequence, it has no significant influence at the level of subatomic particles.[4] In contrast, it is the dominant interaction at the macroscopic scale, and is the cause of the formation, shape and trajectory (orbit) of astronomical bodies.
8
+
9
+ The earliest instance of gravity in the Universe, possibly in the form of quantum gravity, supergravity or a gravitational singularity, along with ordinary space and time, developed during the Planck epoch (up to 10−43 seconds after the birth of the Universe), possibly from a primeval state, such as a false vacuum, quantum vacuum or virtual particle, in a currently unknown manner.[5] Attempts to develop a theory of gravity consistent with quantum mechanics, a quantum gravity theory, which would allow gravity to be united in a common mathematical framework (a theory of everything) with the other three fundamental interactions of physics, are a current area of research.
10
+
11
+ The ancient Greek philosopher Archimedes discovered the center of gravity of a triangle.[6] He also postulated that if two equal weights did not have the same center of gravity, the center of gravity of the two weights together would be in the middle of the line that joins their centers of gravity.[7]
12
+
13
+ The Roman architect and engineer Vitruvius in De Architectura postulated that gravity of an object did not depend on weight but its "nature".[8]
14
+
15
+ In ancient India, Aryabhata first identified the force to explain why objects are not thrown outward as the earth rotates. Brahmagupta described gravity as an attractive force and used the term "gurutvaakarshan" for gravity.[9][10]
16
+
17
+ Modern work on gravitational theory began with the work of Galileo Galilei in the late 16th and early 17th centuries. In his famous (though possibly apocryphal[11]) experiment dropping balls from the Tower of Pisa, and later with careful measurements of balls rolling down inclines, Galileo showed that gravitational acceleration is the same for all objects. This was a major departure from Aristotle's belief that heavier objects have a higher gravitational acceleration.[12] Galileo postulated air resistance as the reason that objects with less mass fall more slowly in an atmosphere. Galileo's work set the stage for the formulation of Newton's theory of gravity.[13]
18
+
19
+ In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. In his own words, "I deduced that the forces which keep the planets in their orbs must [be] reciprocally as the squares of their distances from the centers about which they revolve: and thereby compared the force requisite to keep the Moon in her Orb with the force of gravity at the surface of the Earth; and found them answer pretty nearly."[14] The equation is the following:
20
+
21
+ F
22
+ =
23
+ G
24
+
25
+
26
+
27
+
28
+ m
29
+
30
+ 1
31
+
32
+
33
+
34
+ m
35
+
36
+ 2
37
+
38
+
39
+
40
+
41
+ r
42
+
43
+ 2
44
+
45
+
46
+
47
+
48
+  
49
+
50
+
51
+ {\displaystyle F=G{\frac {m_{1}m_{2}}{r^{2}}}\ }
52
+
53
+ Where F is the force, m1 and m2 are the masses of the objects interacting, r is the distance between the centers of the masses and G is the gravitational constant.
54
+
55
+ Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets. Calculations by both John Couch Adams and Urbain Le Verrier predicted the general position of the planet, and Le Verrier's calculations are what led Johann Gottfried Galle to the discovery of Neptune.
56
+
57
+ A discrepancy in Mercury's orbit pointed out flaws in Newton's theory. By the end of the 19th century, it was known that its orbit showed slight perturbations that could not be accounted for entirely under Newton's theory, but all searches for another perturbing body (such as a planet orbiting the Sun even closer than Mercury) had been fruitless. The issue was resolved in 1915 by Albert Einstein's new theory of general relativity, which accounted for the small discrepancy in Mercury's orbit. This discrepancy was the advance in the perihelion of Mercury of 42.98 arcseconds per century.[15]
58
+
59
+ Although Newton's theory has been superseded by Albert Einstein's general relativity, most modern non-relativistic gravitational calculations are still made using Newton's theory because it is simpler to work with and it gives sufficiently accurate results for most applications involving sufficiently small masses, speeds and energies.
60
+
61
+ The equivalence principle, explored by a succession of researchers including Galileo, Loránd Eötvös, and Einstein, expresses the idea that all objects fall in the same way, and that the effects of gravity are indistinguishable from certain aspects of acceleration and deceleration. The simplest way to test the weak equivalence principle is to drop two objects of different masses or compositions in a vacuum and see whether they hit the ground at the same time. Such experiments demonstrate that all objects fall at the same rate when other forces (such as air resistance and electromagnetic effects) are negligible. More sophisticated tests use a torsion balance of a type invented by Eötvös. Satellite experiments, for example STEP, are planned for more accurate experiments in space.[16]
62
+
63
+ Formulations of the equivalence principle include:
64
+
65
+ In general relativity, the effects of gravitation are ascribed to spacetime curvature instead of a force. The starting point for general relativity is the equivalence principle, which equates free fall with inertial motion and describes free-falling inertial objects as being accelerated relative to non-inertial observers on the ground.[19][20] In Newtonian physics, however, no such acceleration can occur unless at least one of the objects is being operated on by a force.
66
+
67
+ Einstein proposed that spacetime is curved by matter, and that free-falling objects are moving along locally straight paths in curved spacetime. These straight paths are called geodesics. Like Newton's first law of motion, Einstein's theory states that if a force is applied on an object, it would deviate from a geodesic. For instance, we are no longer following geodesics while standing because the mechanical resistance of the Earth exerts an upward force on us, and we are non-inertial on the ground as a result. This explains why moving along the geodesics in spacetime is considered inertial.
68
+
69
+ Einstein discovered the field equations of general relativity, which relate the presence of matter and the curvature of spacetime and are named after him. The Einstein field equations are a set of 10 simultaneous, non-linear, differential equations. The solutions of the field equations are the components of the metric tensor of spacetime. A metric tensor describes a geometry of spacetime. The geodesic paths for a spacetime are calculated from the metric tensor.
70
+
71
+ Notable solutions of the Einstein field equations include:
72
+
73
+ The tests of general relativity included the following:[21]
74
+
75
+ An open question is whether it is possible to describe the small-scale interactions of gravity with the same framework as quantum mechanics. General relativity describes large-scale bulk properties whereas quantum mechanics is the framework to describe the smallest scale interactions of matter. Without modifications these frameworks are incompatible.[29]
76
+
77
+ One path is to describe gravity in the framework of quantum field theory, which has been successful to accurately describe the other fundamental interactions. The electromagnetic force arises from an exchange of virtual photons, where the QFT description of gravity is that there is an exchange of virtual gravitons.[30][31] This description reproduces general relativity in the classical limit. However, this approach fails at short distances of the order of the Planck length,[29] where a more complete theory of quantum gravity (or a new approach to quantum mechanics) is required.
78
+
79
+ Every planetary body (including the Earth) is surrounded by its own gravitational field, which can be conceptualized with Newtonian physics as exerting an attractive force on all objects. Assuming a spherically symmetrical planet, the strength of this field at any given point above the surface is proportional to the planetary body's mass and inversely proportional to the square of the distance from the center of the body.
80
+
81
+ The strength of the gravitational field is numerically equal to the acceleration of objects under its influence.[32] The rate of acceleration of falling objects near the Earth's surface varies very slightly depending on latitude, surface features such as mountains and ridges, and perhaps unusually high or low sub-surface densities.[33] For purposes of weights and measures, a standard gravity value is defined by the International Bureau of Weights and Measures, under the International System of Units (SI).
82
+
83
+ That value, denoted g, is g = 9.80665 m/s2 (32.1740 ft/s2).[34][35]
84
+
85
+ The standard value of 9.80665 m/s2 is the one originally adopted by the International Committee on Weights and Measures in 1901 for 45° latitude, even though it has been shown to be too high by about five parts in ten thousand.[36] This value has persisted in meteorology and in some standard atmospheres as the value for 45° latitude even though it applies more precisely to latitude of 45°32'33".[37]
86
+
87
+ Assuming the standardized value for g and ignoring air resistance, this means that an object falling freely near the Earth's surface increases its velocity by 9.80665 m/s (32.1740 ft/s or 22 mph) for each second of its descent. Thus, an object starting from rest will attain a velocity of 9.80665 m/s (32.1740 ft/s) after one second, approximately 19.62 m/s (64.4 ft/s) after two seconds, and so on, adding 9.80665 m/s (32.1740 ft/s) to each resulting velocity. Also, again ignoring air resistance, any and all objects, when dropped from the same height, will hit the ground at the same time.
88
+
89
+ According to Newton's 3rd Law, the Earth itself experiences a force equal in magnitude and opposite in direction to that which it exerts on a falling object. This means that the Earth also accelerates towards the object until they collide. Because the mass of the Earth is huge, however, the acceleration imparted to the Earth by this opposite force is negligible in comparison to the object's. If the object does not bounce after it has collided with the Earth, each of them then exerts a repulsive contact force on the other which effectively balances the attractive force of gravity and prevents further acceleration.
90
+
91
+ The force of gravity on Earth is the resultant (vector sum) of two forces:[38] (a) The gravitational attraction in accordance with Newton's universal law of gravitation, and (b) the centrifugal force, which results from the choice of an earthbound, rotating frame of reference. The force of gravity is weakest at the equator because of the centrifugal force caused by the Earth's rotation and because points on the equator are furthest from the center of the Earth. The force of gravity varies with latitude and increases from about 9.780 m/s2 at the Equator to about 9.832 m/s2 at the poles.
92
+
93
+ Under an assumption of constant gravitational attraction, Newton's law of universal gravitation simplifies to F = mg, where m is the mass of the body and g is a constant vector with an average magnitude of 9.81 m/s² on Earth. This resulting force is the object's weight. The acceleration due to gravity is equal to this g. An initially stationary object which is allowed to fall freely under gravity drops a distance which is proportional to the square of the elapsed time. The image on the right, spanning half a second, was captured with a stroboscopic flash at 20 flashes per second. During the first 1⁄20 of a second the ball drops one unit of distance (here, a unit is about 12 mm); by 2⁄20 it has dropped a total of 4 units; by 3⁄20, 9 units; and so on.
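+ The square-of-time relationship, d = ½gt², can be checked against the strobe description directly (a small sketch using the 20 flashes per second quoted above):

```python
# Distance fallen from rest: d = 0.5 * g * t^2.
g = 9.80665
unit = 0.5 * g * (1 / 20) ** 2   # distance in the first 1/20 s, ~12.3 mm

for n in range(1, 4):            # after n flash intervals of 1/20 s
    d = 0.5 * g * (n / 20) ** 2
    print(n, round(d / unit))    # prints 1 -> 1, 2 -> 4, 3 -> 9 units
```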
94
+
95
+ Under the same constant gravity assumptions, the potential energy, Ep, of a body at height h is given by Ep = mgh (or Ep = Wh, with W meaning weight). This expression is valid only over small distances h from the surface of the Earth. Similarly, the expression
96
+
97
+ h = v²/2g
120
+
121
+ for the maximum height reached by a vertically projected body with initial velocity v is useful for small heights and small initial velocities only.
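+ A brief worked example of both constant-gravity formulas (illustrative numbers only):

```python
g = 9.81  # m/s^2, the average magnitude used above

def potential_energy(m, h):
    """Ep = m*g*h, valid only for small heights h above the surface."""
    return m * g * h

def max_height(v):
    """h = v^2 / (2*g) for a body projected vertically at speed v."""
    return v ** 2 / (2 * g)

print(potential_energy(2.0, 10.0))  # 196.2 J for 2 kg raised 10 m
print(max_height(10.0))             # ~5.1 m for v = 10 m/s
```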
122
+
123
+ The application of Newton's law of gravity has enabled the acquisition of much of the detailed information we have about the planets in the Solar System, the mass of the Sun, and details of quasars; even the existence of dark matter is inferred using Newton's law of gravity. Although we have not traveled to all the planets nor to the Sun, we know their masses. These masses are obtained by applying the laws of gravity to the measured characteristics of the orbit. In space an object maintains its orbit because of the force of gravity acting upon it. Planets orbit stars, stars orbit galactic centers, galaxies orbit a center of mass in clusters, and clusters orbit in superclusters. The force of gravity exerted on one object by another is directly proportional to the product of those objects' masses and inversely proportional to the square of the distance between them.
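+ To sketch how such a mass is obtained: for a near-circular orbit, equating gravitational and centripetal force gives M = 4π²a³/(GT²). The values below are standard reference numbers, not taken from the article:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def central_mass(a_m, period_s):
    """Mass of the central body from a circular orbit's radius and period."""
    return 4 * math.pi ** 2 * a_m ** 3 / (G * period_s ** 2)

# Earth's orbit (a ~ 1 AU, T ~ 1 year) yields the mass of the Sun.
au = 1.496e11    # m
year = 3.156e7   # s
print(central_mass(au, year))  # ~2.0e30 kg
```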
124
+
125
+ The earliest gravity (possibly in the form of quantum gravity, supergravity or a gravitational singularity), along with ordinary space and time, developed during the Planck epoch (up to 10⁻⁴³ seconds after the birth of the Universe), possibly from a primeval state (such as a false vacuum, quantum vacuum or virtual particle), in a currently unknown manner.[5]
126
+
127
+ General relativity predicts that energy can be transported out of a system through gravitational radiation. Any accelerating matter can create curvatures in the space-time metric, and these propagating curvatures carry the gravitational radiation away from the system. Co-orbiting objects can generate curvatures in space-time, as in the Earth-Sun system, pairs of neutron stars, and pairs of black holes. Exploding supernovae are another astrophysical system predicted to lose energy in the form of gravitational radiation.
128
+
129
+ The first indirect evidence for gravitational radiation came through measurements of the Hulse–Taylor binary in 1973. This system consists of a pulsar and a neutron star in orbit around one another. Its orbital period has decreased since its initial discovery due to a loss of energy, which is consistent with the amount of energy lost to gravitational radiation. This research was awarded the Nobel Prize in Physics in 1993.
130
+
131
+ The first direct evidence for gravitational radiation was measured on 14 September 2015 by the LIGO detectors. The gravitational waves emitted during the collision of two black holes 1.3 billion light-years from Earth were measured.[40][41] This observation confirms the theoretical predictions of Einstein and others that such waves exist. It also opens the way for practical observation and understanding of the nature of gravity and events in the Universe, including the Big Bang.[42] Neutron star and black hole formation also create detectable amounts of gravitational radiation.[43] This research was awarded the Nobel Prize in Physics in 2017.[44]
132
+
133
+ As of 2020, the gravitational radiation emitted by the Solar System is far too small to measure with current technology.
134
+
135
+ In December 2012, a research team in China announced that it had produced measurements of the phase lag of Earth tides during full and new moons which seem to prove that the speed of gravity is equal to the speed of light.[45] This means that if the Sun suddenly disappeared, the Earth would keep orbiting it normally for 8 minutes, which is the time light takes to travel that distance. The team's findings were released in the Chinese Science Bulletin in February 2013.[46]
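+ The eight-minute figure is simply the light travel time over the Sun-Earth distance (standard values, shown for illustration):

```python
c = 2.998e8    # speed of light, m/s
au = 1.496e11  # mean Sun-Earth distance, m

print(au / c)        # ~499 s
print(au / c / 60)   # ~8.3 minutes
```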
136
+
137
+ In October 2017, the LIGO and Virgo detectors received gravitational wave signals within 2 seconds of gamma-ray satellites and optical telescopes seeing signals from the same direction. This confirmed that the speed of gravitational waves was the same as the speed of light.[47]
138
+
139
+ There are some observations that are not adequately accounted for, which may point to the need for better theories of gravity or perhaps be explained in other ways.
en/2277.html.txt ADDED
@@ -0,0 +1,98 @@
1
+
2
+
3
+ Engraving is the practice of incising a design onto a hard, usually flat surface by cutting grooves into it with a burin. The result may be a decorated object in itself, as when silver, gold, steel, or glass are engraved, or may provide an intaglio printing plate, of copper or another metal, for printing images on paper as prints or illustrations; these images are also called "engravings". Engraving is one of the oldest and most important techniques in printmaking. Wood engraving is a form of relief printing and is not covered in this article.
4
+
5
+ Engraving was a historically important method of producing images on paper in artistic printmaking, in mapmaking, and also for commercial reproductions and illustrations for books and magazines. It has long been replaced by various photographic processes in its commercial applications and, partly because of the difficulty of learning the technique, is much less common in printmaking, where it has been largely replaced by etching and other techniques.
6
+
7
+ "Engraving" is also loosely but incorrectly used for any old black and white print; it requires a degree of expertise to distinguish engravings from prints using other techniques such as etching in particular, but also mezzotint and other techniques. Many old master prints also combine techniques on the same plate, further confusing matters. Line engraving and steel engraving cover use for reproductive prints, illustrations in books and magazines, and similar uses, mostly in the 19th century, and often not actually using engraving.
8
+ Traditional engraving, by burin or with the use of machines, continues to be practised by goldsmiths, glass engravers, gunsmiths and others, while modern industrial techniques such as photoengraving and laser engraving have many important applications. Engraved gems were an important art in the ancient world, revived at the Renaissance, although the term traditionally covers relief as well as intaglio carvings, and is essentially a branch of sculpture rather than engraving, as drills were the usual tools.
9
+
10
+ Other terms often used for printed engravings are copper engraving, copper-plate engraving or line engraving. Steel engraving is the same technique, on steel or steel-faced plates, and was mostly used for banknotes, illustrations for books, magazines and reproductive prints, letterheads and similar uses from about 1790 to the early 20th century, when the technique became less popular, except for banknotes and other forms of security printing. Especially in the past, "engraving" was often used very loosely to cover several printmaking techniques, so that many so-called engravings were in fact produced by totally different techniques, such as etching or mezzotint. "Hand engraving" is a term sometimes used for engraving objects other than printing plates, to inscribe or decorate jewellery, firearms, trophies, knives and other fine metal goods. Traditional engravings in printmaking are also "hand engraved", using just the same techniques to make the lines in the plate.
11
+
12
+ Each graver is different and has its own use. Engravers use a hardened steel tool called a burin, or graver, to cut the design into the surface, most traditionally a copper plate.[1] However, modern hand engraving artists use burins or gravers to cut a variety of metals such as silver, nickel, steel, brass, gold, titanium, and more, in applications from weaponry to jewellery to motorcycles to found objects. Modern professional engravers can engrave with a resolution of up to 40 lines per mm in high grade work creating game scenes and scrollwork. Dies used in mass production of molded parts are sometimes hand engraved to add special touches or certain information such as part numbers.
13
+
14
+ In addition to hand engraving, there are engraving machines that require less human finesse and are not directly controlled by hand. They are usually used for lettering, using a pantographic system. There are versions for the insides of rings and also the outsides of larger pieces. Such machines are commonly used for inscriptions on rings, lockets and presentation pieces.
15
+
16
+ Gravers come in a variety of shapes and sizes that yield different line types. The burin produces a unique and recognizable quality of line that is characterized by its steady, deliberate appearance and clean edges. The angle tint tool has a slightly curved tip that is commonly used in printmaking. Florentine liners are flat-bottomed tools with multiple lines incised into them, used to do fill work on larger areas or to create uniform shade lines that are fast to execute. Ring gravers are made with particular shapes that are used by jewelry engravers in order to cut inscriptions inside rings. Flat gravers are used for fill work on letters, as well as "wriggle" cuts on most musical instrument engraving work, to remove background, or to create bright cuts. Knife gravers are for line engraving and very deep cuts. Round gravers, and flat gravers with a radius, are commonly used on silver to create bright cuts (also called bright-cut engraving), as well as on other hard-to-cut metals such as nickel and steel. Square or V-point gravers are typically square or elongated diamond-shaped and used for cutting straight lines. The V-point can be anywhere from 60 to 130 degrees, depending on purpose and effect. These gravers have very small cutting points. Other tools such as mezzotint rockers, roulets and burnishers are used for texturing effects. Burnishing tools can also be used for certain stone setting techniques.
17
+
18
+ Musical instrument engraving on American-made brass instruments flourished in the 1920s and utilizes a specialized engraving technique where a flat graver is "walked" across the surface of the instrument to make zig-zag lines and patterns. The method for "walking" the graver may also be referred to as "wriggle" or "wiggle" cuts. This technique is necessary due to the thinness of metal used to make musical instruments versus firearms or jewelry. Wriggle cuts are commonly found on silver Western jewelry and other Western metal work.
19
+
20
+ Tool geometry is extremely important for accuracy in hand engraving. When sharpened for most applications, a graver has a "face", which is the top of the graver, and a "heel", which is the bottom of the graver; not all tools or applications require a heel. These two surfaces meet to form a point that cuts the metal. The geometry and length of the heel help to guide the graver smoothly as it cuts the surface of the metal. When the tool's point breaks or chips, even on a microscopic level, the graver can become hard to control and produces unexpected results. Modern innovations have brought about new types of carbide that resist chipping and breakage, and that hold a very sharp point longer between resharpenings than traditional metal tools.
21
+
22
+ Sharpening a graver or burin requires either a sharpening stone or wheel. Harder carbide and steel gravers require diamond-grade sharpening wheels; these gravers can be polished to a mirror finish using a ceramic or cast iron lap, which is essential in creating bright cuts. Several low-speed, reversible sharpening systems made specifically for hand engravers are available that reduce sharpening time. Fixtures that secure the tool in place at certain angles and geometries are also available to take the guesswork out of sharpening and produce accurate points. Very few master engravers exist today who rely solely on "feel" and muscle memory to sharpen tools. These master engravers typically worked for many years as apprentices, most often learning techniques decades before modern machinery was available for hand engravers. These engravers typically trained in such countries as Italy and Belgium, where hand engraving has a rich and long heritage of masters.
23
+
24
+ Design or artwork is generally prepared in advance, although some professional and highly experienced hand engravers are able to draw out minimal outlines either on paper or directly on the metal surface just prior to engraving. The work to be engraved may be lightly scribed on the surface with a sharp point, laser marked, drawn with a fine permanent marker (removable with acetone) or pencil, transferred using various chemicals in conjunction with inkjet or laser printouts, or stippled. Engraving artists may rely on hand drawing skills, copyright-free designs and images, computer-generated artwork, or common design elements when creating artwork.
25
+
26
+ Originally, handpieces varied little in design, as the common use was to push with the handle placed firmly in the center of the palm. With modern pneumatic engraving systems, handpieces are designed and created in a variety of shapes and power ranges. Handpieces are made using various methods and materials. Knobs may be handmade from wood, molded and engineered from plastic, or machine-made from brass, steel, or other metals. The most widely known hand engraving tool maker, GRS Tools in Kansas, is an American-owned and operated company that manufactures handpieces as well as many other tools for various applications in metal engraving.
27
+
28
+ The actual engraving is traditionally done by a combination of pressure and manipulation of the work-piece. The traditional "hand push" process is still practiced today, but modern technology has brought various mechanically assisted engraving systems. Most pneumatic engraving systems require an air source that drives air through a hose into a handpiece, which in many cases resembles a traditional engraving handle and which powers an internal mechanism (usually a piston). The airflow is actuated by either a foot control (like a gas pedal or sewing machine) or a newer palm / hand control. This mechanism replaces either the "hand push" effort or the effects of a hammer. The internal mechanisms move at speeds up to 15,000 strokes per minute, thereby greatly reducing the effort needed in traditional hand engraving. These types of pneumatic systems are used for power assistance only and do not guide or control the engraving artist. One of the major benefits of using a pneumatic system for hand engraving is the reduction of fatigue and decrease in time spent working.
29
+
30
+ Hand engraving artists today employ a combination of hand push, pneumatic, rotary, or hammer and chisel methods. Hand push is still commonly used by modern hand engraving artists who create "bulino" style work, which is highly detailed and delicate, fine work; a great majority, if not all, traditional printmakers today rely solely upon hand push methods. Pneumatic systems greatly reduce the effort required for removing large amounts of metal, such as in deep relief engraving or Western bright cut techniques.
31
+
32
+ Finishing the work is often necessary when working in metal that may rust or where a colored finish is desirable, such as a firearm. A variety of spray lacquers and finishing techniques exist to seal and protect the work from exposure to the elements and time. Finishing also may include lightly sanding the surface to remove small chips of metal called "burrs" that are very sharp and unsightly. Some engravers prefer high contrast to the work or design, using black paints or inks to darken removed (and lower) areas of exposed metal. The excess paint or ink is wiped away and allowed to dry before lacquering or sealing, which may or may not be desired by the artist.
33
+
34
+ Because of the high level of microscopic detail that can be achieved by a master engraver, counterfeiting of engraved designs is well-nigh impossible, and modern banknotes are almost always engraved, as are plates for printing money, checks, bonds and other security-sensitive papers. The engraving is so fine that a normal printer cannot recreate the detail of hand engraved images, nor can it be scanned. In the Bureau of Engraving and Printing, more than one hand engraver will work on the same plate, making it nearly impossible for one person to duplicate all the engraving on a particular banknote or document.
35
+
36
+ The modern discipline of hand engraving, as it is called in a metalworking context, survives largely in a few specialized fields. The highest levels of the art are found on firearms and other metal weaponry, jewellery, and musical instruments.
37
+
38
+ In most commercial markets today, hand engraving has been replaced with milling using CNC engraving or milling machines. Still, there are certain applications where use of hand engraving tools cannot be replaced.
39
+
40
+ In some instances, images or designs can be transferred to metal surfaces via a mechanical process. One such process is roll stamping or roller-die engraving. In this process, a hardened image die is pressed against the destination surface using extreme pressure to impart the image. In the 1800s, pistol cylinders were often decorated via this process to impart a continuous scene around their surfaces.
41
+
42
+ Engraving machines such as the K500 (packaging) or K6 (publication) by Hell Gravure Systems use a diamond stylus to cut cells. Each cell creates one printing dot later in the process. A K6 can have up to 18 engraving heads, each cutting 8,000 cells per second to an accuracy of 0.1 µm and below. They are fully computer-controlled and the whole process of cylinder-making is fully automated.
43
+
44
+ It is now commonplace for retail stores (mostly jewellery, silverware or award stores) to have a small computer-controlled engraver on site. This enables them to personalise the products they sell. Retail engraving machines tend to be focused around ease of use for the operator and the ability to do a wide variety of items including flat metal plates, jewelry of different shapes and sizes, as well as cylindrical items such as mugs and tankards. They will typically be equipped with a computer dedicated to graphic design that will enable the operator to easily design a text or picture graphic which the software will translate into digital signals telling the engraver machine what to do. Unlike industrial engravers, retail machines are smaller and only use one diamond head. This is interchangeable so the operator can use differently shaped diamonds for different finishing effects. They will typically be able to do a variety of metals and plastics. Glass and crystal engraving is possible, but the brittle nature of the material makes the process more time consuming.
45
+
46
+ Retail engravers mainly use two different processes. The first and most common, 'diamond drag', pushes the diamond cutter through the surface of the material and then pulls it along to create scratches; the direction and depth are controlled by the computer input. The second is the 'spindle cutter'. This is similar to diamond drag, but the engraving head is shaped in a flat V, with a small diamond at the base. The machine uses an electronic spindle to quickly rotate the head as it pushes it into the material, then pulls it along whilst it continues to spin. This creates a much bolder impression than diamond drag. It is used mainly for brass plaques and pet tags.
47
+
48
+ With state-of-the-art machinery it is easy to have a simple, single item complete in under ten minutes.
49
+ The engraving process with diamonds has been state-of-the-art since the 1960s.
50
+
51
+ Today laser engraving machines are in development, but mechanical cutting has so far proven its strength in terms of both economy and quality. More than 4,000 engravers make approximately 8 million printing cylinders worldwide per year.
52
+
53
+ For the printing process, see intaglio (printmaking). For the Western art history of engraved prints, see old master print and line engraving.
54
+
55
+ The first evidence for humans engraving patterns is a chiselled shell, dating back between 540,000 and 430,000 years, from Trinil, in Java, Indonesia, where the first Homo erectus was discovered.[3] Hatched banding upon ostrich eggshells used as water containers found in South Africa in the Diepkloof Rock Shelter and dated to the Middle Stone Age around 60,000 BC are the next documented case of human engraving.[4] Engraving on bone and ivory is an important technique for the Art of the Upper Paleolithic, and larger engraved petroglyphs on rocks are found from many prehistoric periods and cultures around the world.
56
+
57
+ In antiquity, the only engraving on metal that could be carried out was the shallow grooves found in some jewellery after the beginning of the 1st millennium BC. The majority of so-called engraved designs on ancient gold rings or other items were produced by chasing or sometimes a combination of lost-wax casting and chasing. Engraved gem is a term for any carved or engraved semi-precious stone; this was an important small-scale art form in the ancient world, and remained popular until the 19th century.
58
+
59
+ However, the use of glass engraving, usually using a wheel, to cut decorative scenes or figures into glass vessels, in imitation of hardstone carvings, appears as early as the first century AD,[5] continuing into the fourth century AD at urban centers such as Cologne and Rome,[6] and appears to have ceased sometime in the fifth century. Decoration was first based on Greek mythology, before hunting and circus scenes became popular, as well as imagery drawn from the Old and New Testament.[6] It appears to have been used to mimic the appearance of precious metal wares during the same period, including the application of gold leaf, and could be cut free-hand or with lathes. As many as twenty separate stylistic workshops have been identified, and it seems likely that the engraver and vessel producer were separate craftsmen.[5]
60
+
61
+ In the European Middle Ages goldsmiths used engraving to decorate and inscribe metalwork. It is thought that they began to print impressions of their designs to record them. From this grew the engraving of copper printing plates to produce artistic images on paper, known as old master prints, in Germany in the 1430s. Italy soon followed. Many early engravers came from a goldsmithing background. The first and greatest period of engraving was from about 1470 to 1530, with such masters as Martin Schongauer,[7] Albrecht Dürer, and Lucas van Leiden.
62
+
63
+ Thereafter engraving tended to lose ground to etching, which was a much easier technique for the artist to learn. But many prints combined the two techniques: although Rembrandt's prints are generally all called etchings for convenience, many of them have some burin or drypoint work, and some have nothing else. By the nineteenth century, most engraving was for commercial illustration.
64
+
65
+ Before the advent of photography, engraving was used to reproduce other forms of art, for example paintings. Engravings continued to be common in newspapers and many books into the early 20th century, as they were cheaper to use in printing than photographic images.
66
+
67
+ Many classic postage stamps were engraved, although the practice is now mostly confined to particular countries, or used when a more "elegant" design is desired and a limited color range is acceptable.
68
+
69
+ Modifying the relief designs on coins is a craft dating back to the 18th century and today modified coins are known colloquially as hobo nickels. In the United States, especially during the Great Depression, coin engraving on the large-faced Indian Head nickel became a way to help make ends meet. The craft continues today, and with modern equipment often produces stunning miniature sculptural artworks and floral scrollwork.[8]
70
+
71
+ During the mid-20th century, a renaissance in hand-engraving began to take place. With the inventions of pneumatic hand-engraving systems that aided hand-engravers, the art and techniques of hand-engraving became more accessible.
72
+
73
+ The first music printed from engraved plates dates from 1446, and most printed music was produced through engraving from roughly 1700 to 1860. From 1860 to 1990 most printed music was produced from engraved master plates reproduced through offset lithography.
74
+
75
+ The first comprehensive account is given by Mme Delusse in her article "Gravure en lettres, en géographie et en musique" in Diderot's Encyclopedia. The technique involved a five-pointed raster to score staff lines, various punches in the shapes of notes and standard musical symbols, and various burins and scorers for lines and slurs. For correction, the plate was held on a bench by callipers, hit with a dot punch on the opposite side, and burnished to remove any signs of the defective work. The process involved intensive pre-planning of the layout, and many manuscript scores with engraver's planning marks survive from the 18th and 19th centuries.[9]
76
+
77
+ By 1837 pewter had replaced copper as a medium, and Berthiaud gives an account with an entire chapter devoted to music (Novel manuel complet de l'imprimeur en taille douce, 1837). Printing from such plates required a separate inking to be carried out cold, and the printing press used less pressure. Generally, four pages of music were engraved on a single plate. Because music engraving houses trained engravers through years of apprenticeship, very little is known about the practice. Fewer than one dozen sets of tools survive in libraries and museums.[10] By 1900 music engravers were established in several hundred cities in the world, but the art of storing plates was usually concentrated with publishers. Extensive bombing of Leipzig in 1944, the home of most German engraving and printing firms, destroyed roughly half the world's engraved music plates.
78
+
79
+ Examples of contemporary uses for engraving include creating text on jewellery, such as pendants or the inside of engagement and wedding rings to include text such as the name of the partner, or adding a winner's name to a sports trophy. Another application of modern engraving is found in the printing industry. There, every day thousands of pages are mechanically engraved onto rotogravure cylinders, typically a steel base with a copper layer of about 0.1 mm into which the image is transferred. After engraving, the image is protected with an approximately 6 µm chrome layer. Using this process the image will survive for over a million copies in high speed printing presses.
80
+ Some schools throughout the world are renowned for their teaching of engraving, like the École Estienne in Paris.
81
+
82
+ In traditional engraving, which is a purely linear medium, the impression of half-tones was created by making many very thin parallel lines, a technique called hatching. When two sets of parallel-line hatchings intersected each other for higher density, the resulting pattern was known as cross-hatching. Patterns of dots were also used in a technique called stippling, first used around 1505 by Giulio Campagnola. Claude Mellan was one of many 17th-century engravers with a very well-developed technique of using parallel lines of varying thickness (known as the "swelling line") to give subtle effects of tone (as was Goltzius) – see picture below. One famous example is his Sudarium of Saint Veronica (1649), an engraving of the face of Jesus made from a single spiraling line that starts at the tip of Jesus's nose.
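+ The principle behind hatching, that apparent tone is controlled by line density, can be shown with a toy sketch (purely illustrative; it does not describe any historical engraver's practice):

```python
# Toy hatching demo: darker tones use more closely spaced "lines".
def hatch_row(tone, width=40):
    """tone in [0, 1]: 0 is white, 1 is black."""
    spacing = max(1, round(8 * (1 - tone)))  # darker -> tighter spacing
    return "".join("|" if i % spacing == 0 else " " for i in range(width))

for tone in (0.1, 0.5, 0.9):
    print(f"tone {tone:.1f}: {hatch_row(tone)}")
```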
83
+
84
+ The earliest allusion to engraving in the Bible may be the reference to Judah's seal ring (Genesis 38:18), followed by the engraved gold plate of the high priest (Exodus 39:30). Engraving was commonly done with pointed tools of iron or even with diamond points (Jeremiah 17:1).
85
+
86
+ Each of the two onyx stones on the shoulder-pieces of the high priest’s ephod was engraved with the names of six different tribes of Israel, and each of the 12 precious stones that adorned his breastpiece was engraved with the name of one of the tribes. The holy sign of dedication, the shining gold plate on the high priest's turban, was engraved with the words: "Holiness belongs to Adonai." Bezalel, along with Oholiab, was qualified to do this specialized engraving work as well as to train others.—Ex 35:30–35; 28:9–12; 39:6–14, 30.
87
+
en/2278.html.txt ADDED
@@ -0,0 +1,124 @@
1
+
2
+
3
+ Graz (/ɡrɑːts/ GRAHTS, German: [ɡʁaːts]; Slovene: Gradec) is the capital city of Styria and Austria's second-largest city after Vienna. As of 1 January 2019, Graz had a population of 328,276 (292,269 of whom had principal-residence status).[3] In 2015, the population of the Graz larger urban zone (LUZ) stood at 633,168, based on principal-residence status.[4] Graz has a long tradition as a seat of higher education. It has four colleges and four universities with more than 60,000 students.[5] Its historic centre (Altstadt) is one of the best-preserved city centres in Central Europe.[6]
4
+
5
+ For centuries, Graz was more important to Slovenes and Croats, both politically and culturally, than their capitals, Ljubljana in Slovenia and Zagreb in Croatia; it remains influential to this day.[7] In 1999, the city's historic centre was added to the UNESCO list of World Heritage Sites, and in 2010 the designation was expanded to include Eggenberg Palace (German: Schloss Eggenberg) on the western edge of the city. Graz was designated the Cultural Capital of Europe in 2003 and became a City of Culinary Delights in 2008.
6
+
7
+ The name of the city, Graz, formerly spelled Gratz,[8] most likely stems from the Slavic gradec, which means "small castle". Some archaeological finds point to the erection of a small castle by Alpine Slavic people, which over time became a heavily defended fortification.[9] In literary Slovene and Croatian, gradec still means "small castle", forming a hypocoristic derivative of Proto-West-South Slavic *gradьcъ, which descends via liquid metathesis from Common Slavic *gardьcъ and via the Slavic third palatalisation from Proto-Slavic *gardiku, originally denoting "small town, settlement". The name thus follows the common South Slavic pattern for naming settlements as grad. The German name 'Graz' first appears in records in 1128.
8
+
9
+ Graz is situated on both sides of the Mur river in southeast Austria. It is about 200 km (120 mi) southwest of Vienna (Wien). The nearest larger urban centre is Maribor (Marburg) in Slovenia, which is about 50 km (31 mi) to the south. Graz is the state capital and largest city in Styria, a green and heavily forested region on the eastern edge of the Alps.
10
+
11
+ These towns and villages border Graz:
12
+
13
+ Graz is divided into 17 municipal districts (Stadtbezirke):
14
+
15
+ I. Innere Stadt (3,389)
16
+ II. St. Leonhard (16,122)
17
+ III. Geidorf (25,168)
18
+ IV. Lend (31,753)
19
+ V. Gries (29,308)
20
+ VI. Jakomini (33,554)
21
+ VII. Liebenau (14,562)
22
+ VIII. St. Peter (15,291)
23
+ IX. Waltendorf (12,066)
24
+
25
+ X. Ries (5,886)
26
+ XI. Mariatrost (9,737)
27
+ XII. Andritz (19,129)
28
+ XIII. Gösting (11,309)
29
+ XIV. Eggenberg (20,801)
30
+ XV. Wetzelsdorf (15,779)
31
+ XVI. Straßgang (16,341)
32
+ XVII. Puntigam (8,745)
33
+
34
+ The oldest settlement on the ground of the modern city of Graz dates back to the Copper Age. However, there is no evidence of continuous settlement before the Middle Ages.
35
+
36
+ During the 12th century, dukes under Babenberg rule made the town into an important commercial center. Later, Graz came under the rule of the Habsburgs and, in 1281, gained special privileges from King Rudolph I.
37
+
38
+ In the 14th century, Graz became the city of residence of the Inner Austrian line of the Habsburgs. The royalty lived in the Schlossberg castle and from there ruled Styria, Carinthia, most of today's Slovenia, and parts of Italy (Carniola, Gorizia and Gradisca, Trieste).
39
+
40
+ In the 16th century, the city's design and planning were primarily controlled by Italian Renaissance architects and artists. One of the most famous buildings representative of this style is the Landhaus, designed by Domenico dell'Allio, and used by the local rulers as a governmental headquarters.
41
+
42
+ The University of Graz, founded by Archduke Karl II in 1585, is the city's oldest university. For most of its existence, it was controlled by the Catholic church, and was closed in 1782 by Joseph II in an attempt to gain state control over educational institutions. Joseph II transformed it into a lyceum where civil servants and medical personnel were trained. In 1827 it was re-established as a university by Emperor Franz I, and was named 'Karl-Franzens Universität' or 'Charles-Francis University' in English. More than 30,000 students are currently enrolled at this university.
43
+
44
+ The astronomer Johannes Kepler lived in Graz for a short period. He worked as a math teacher and was a professor of mathematics at the University of Graz, but still found time to study astronomy. He left Graz for Prague when Lutherans were banned from the city.
45
+
46
+ Ludwig Boltzmann was Professor for Mathematical Physics from 1869 to 1890. During that time, Nikola Tesla studied electrical engineering at the Polytechnic in 1875. Nobel Laureate Otto Loewi taught at the University of Graz from 1909 until 1938. Ivo Andric, the 1961 Nobel laureate in literature, obtained his doctorate at the University of Graz. Erwin Schrödinger was briefly chancellor of the University of Graz in 1936.
47
+
48
+ Graz is centrally located within today's Bundesland (state) of Styria, or Steiermark in German. Mark is an old German word indicating a large area of land used as a defensive border, in which the peasantry was taught how to organize and fight in the case of an invasion. With a strategic location at the head of the open and fertile Mur valley, Graz was historically a target of invaders, such as the Hungarians under Matthias Corvinus in 1481, and the Ottoman Turks in 1529 and 1532. Apart from the Riegersburg Castle, the Schlossberg was the only fortification in the region that never fell to the Ottoman Turks. Graz is home to the region's provincial armory, which is the world's largest historical collection of late medieval and Renaissance weaponry. It has been preserved since 1551, and displays over 30,000 items.
49
+
50
+ From the earlier part of the 15th century, Graz was the residence of the younger branch of the Habsburgs, which succeeded to the imperial throne in 1619 in the person of Emperor Ferdinand II, who moved the capital to Vienna. New fortifications were built on the Schlossberg at the end of the 16th century. Napoleon's army occupied Graz in 1797. In 1809, the city withstood another assault by the French army. During this attack, the commanding officer in the fortress was ordered to defend it with about 900 men against Napoleon's army of about 3,000. He successfully defended the Schlossberg against eight attacks, but the garrison was forced to give up after the Grande Armée occupied Vienna and the Emperor ordered it to surrender. Following the defeat of Austria by Napoleonic forces at the Battle of Wagram in 1809, the fortifications were demolished using explosives, as stipulated in the Peace of Schönbrunn of the same year. The belltower (Glockenturm)[10] and the civic clock tower (Uhrturm),[11] which is a leading tourist attraction and serves as a symbol for Graz, were spared after the citizens of Graz paid a ransom for their preservation.[12]
51
+
52
+ Archduke Karl II of Inner Austria had 20,000 Protestant books burned in the square of what is now a mental hospital, and succeeded in returning Styria to the authority of the Holy See. Archduke Franz Ferdinand was born in Graz in what is now the Stadtmuseum (city museum).
53
+
54
+ The more recent population figures do not give the whole picture as only people with principal-residence status are counted and people with secondary residence status are not. Most of the people with secondary residence status in Graz are students. At the end of 2016 there were 33,473 people with secondary residence status in Graz.[13][14]
55
+
56
+ The city's climate is classified as oceanic,[16] but, due to the position of the 0 °C isotherm, it also borders on a humid continental climate in the Köppen system (Cfb/Dfb borderline). Wladimir Köppen himself spent time in the city and conducted studies on how the climate of the past influenced the theory of continental drift.[17] Due to its position southeast of the Alps, Graz is shielded from the prevailing westerly winds that bring weather fronts in from the North Atlantic to northwestern and central Europe. The weather in Graz is thus influenced by the Mediterranean, and it has more hours of sunshine per year than Vienna or Salzburg and also less wind or rain. Graz lies in a basin that is only open to the south, causing the climate to be warmer than would be expected at that latitude.[18] Plants are found in Graz that normally grow much further south.
57
+
58
+ Politically, culturally, scientifically and religiously, Graz was an important centre for all Slovenes, especially from the establishment of the University of Graz in 1586 until the establishment of the University of Ljubljana in 1919. In 1574, the first Slovene Catholic book was published in Graz, and in 1592, Hieronymus Megiser published in Graz the book Dictionarium quatuor linguarum, the first multilingual dictionary of Slovene.[20]
59
+
60
+ The Styrian Slovenes did not consider Graz a German-speaking city, but their own, a place to study while living at their relatives' homes and to fulfill their career ambitions. The student associations in Graz were a crucible of the Slovene identity, and the Slovene students in Graz were more nationally aware than some others. This led to fierce anti-Slovene efforts by German-speaking nationalists in Graz before and during World War II.[7]
61
+
62
+ Many Slovenian Styrians study there. Slovenes are among the professors at the Institute for Jazz in Graz. Numerous Slovenes who were formerly unemployed in Slovenia have found employment there.[7] For Slovene culture, Graz remains permanently important due to its university and the Universalmuseum Joanneum archives, which contain numerous documents from Slovenian Styria.[7]
63
+
64
+ A symposium on the relation of Graz and the Slovenes was held in Graz in 2010, on the occasion of the 200th anniversary of the establishment of the first and oldest chair of Slovene. It was established at the Lyzeum of Graz in July 1811 on the initiative of Janez Nepomuk Primic.[21] A collection of lectures on the topic was published. The Slovenian Post commemorated the anniversary with a stamp.[22]
65
+
66
+ For the year that Graz was Cultural Capital of Europe, new structures were erected. The Graz Museum of Contemporary Art (German: Kunsthaus) was designed by Peter Cook and Colin Fournier and is situated next to the Mur river. The Island in the Mur is a floating platform made of steel. It was designed by American architect Vito Acconci and contains a café, an open-air theatre and a playground.
67
+
68
+ The historic centre was added to the UNESCO World Heritage List in 1999[12] due to the harmonious co-existence of typical buildings from different epochs and in different architectural styles. Situated in a cultural borderland between Central Europe, Italy and the Balkan States, Graz absorbed various influences from the neighbouring regions and thus received its exceptional townscape. Today the historic centre consists of over 1,000 buildings, their age ranging from Gothic to contemporary.
69
+
70
+ The most important sights in the historic centre are:
71
+
72
+ During 2003 Graz held the title of "European Capital of Culture" and was one of the UNESCO "Cities of Design" in 2011.
73
+
74
+ The most important museums in Graz are:
75
+
76
+ The Old Town and the adjacent districts are characterized by the historic residential buildings and churches found there. In the outer districts buildings are predominantly of the architectural styles from the second half of the 20th century.
77
+
78
+ In 1965 the Grazer Schule (School of Graz) was founded. Several buildings around the universities are of this style, for example the green houses by Volker Giencke and the RESOWI center by Günther Domenig.
79
+
80
+ Before Graz became the European Capital of Culture in 2003, several new projects were realized, such as the Stadthalle, the Kindermuseum (museum for children), the Helmut-List-Halle, the Kunsthaus and the Murinsel.
81
+
82
+ Buildings in Graz which are at least 50 m tall:
83
+
84
+ SK Sturm Graz is the main football club of the city, with three Austrian championships and five runner-up seasons. The Grazer AK also won an Austrian championship, but went into administration in 2007 and was excluded from the professional league system.
85
+
86
+ In ice hockey, the ATSE Graz was the Austrian Hockey League champion in 1975 and 1978. The EC Graz was runner-up in 1991–92, 1992–93 and 1993–94. The Graz 99ers have played in the first division since 2000.
87
+
88
+ UBSC Raiffeisen Graz plays in the Austrian Basketball League.
89
+
90
+ The Graz Giants play in the Austrian Football League (American Football).
91
+
92
+ The city bid for the 2002 Winter Olympics in 1995, but lost the election to Salt Lake City. There is now a plan to bid for the 2026 Winter Olympics, with some venues in Bavaria, Germany, to cut costs by using existing venues across national borders. The bid still faces a referendum, a step that has ended many Olympic bids in Europe and North America since the 1970s.
93
+
94
+ Graz hosts the annual festival of classical music Styriarte, founded in 1985 to tie conductor Nikolaus Harnoncourt closer to his hometown. Events have been held at different venues in Graz and in the surrounding region.
95
+
96
+ Referred to as Steirisch by locals, the dialect of Graz belongs to the Austro-Bavarian group, more specifically a mix of Central Bavarian in the western part of Styria and Southern Bavarian in the eastern part.[23] The Grazer ORF, the Graz subsidiary of the Austrian Broadcasting Corporation, launched an initiative in 2008 called Scho wieda Steirisch g'redt in order to highlight the numerous dialects of Graz and Styria in general and to cultivate the pride many Styrians hold for their local culture. Two reasons are given for the melding of these dialects with Standard German: the influence of television and radio, which bring Standard German into the home, and industrialization, which has caused the disappearance of the small farmer, farming communities being seen as the true keepers of dialect.[24]
97
+
98
+ An extensive public transport network makes Graz an easy city to navigate without a car. The city has a comprehensive bus network, complementing the Graz tram network consisting of eight lines. Four lines pass through the underground tram stop at the central railway station (Hauptbahnhof) and on to the city centre before branching out. Furthermore, there are seven night-time bus routes, although these run only at weekends and on evenings preceding public holidays.
99
+
100
+ The Schlossbergbahn, a funicular railway, and the Schlossberg lift, a vertical lift, link the city centre to the Schlossberg.
101
+
102
+ From the central railway station (Hauptbahnhof), regional trains link to most of Styria. Direct trains run to most major cities nearby including Vienna, Salzburg, Innsbruck, Maribor and Ljubljana in Slovenia, Zagreb in Croatia, Budapest in Hungary, Prague and Brno in the Czech Republic, Zürich in Switzerland, as well as Munich, Stuttgart, Heidelberg, and Frankfurt in Germany. Trains for Vienna leave every hour. In recent years many railway stations within the city limits and in the suburbs have been rebuilt or modernised and are now part of the "S-Bahn Graz", a commuter train service connecting the city with its suburban area and towns nearby.
103
+
104
+ Graz airport is located about 10 km (6 mi) south of the city centre and is accessible by bus, railway, and car. Direct destinations include Amsterdam, Berlin, Düsseldorf, Frankfurt, Munich, Stuttgart, Istanbul, Vienna and Zurich.[25]
105
+
106
+ In Graz there are seven hospitals, several private hospitals and sanatoriums, as well as 44 pharmacies.
107
+
108
+ The University Hospital Graz (LKH-Universitäts-Klinikum Graz) is located in eastern Graz and has 1,556 beds and 7,190 employees. The Regional Hospital Graz II (LKH Graz II) has two sites in Graz. The western site (LKH Graz II Standort West) is located in Eggenberg and has 280 beds and about 500 employees; the southern site (LKH Graz II Standort Süd) specializes in neurology and psychiatry and is located in Straßgang, with 880 beds and 1,100 employees. The AUVA Accident Hospital (Unfallkrankenhaus der AUVA) is in Eggenberg and has 180 beds and a total of 444 employees.
109
+
110
+ The Albert Schweitzer Clinic in the western part of the city is a geriatric hospital with 304 beds, the Hospital of St. John of God (Krankenhaus der Barmherzigen Brüder) has two sites in Graz, one in Lend with 225 beds and one in Eggenberg with 260 beds. The Hospital of the Order of Saint Elizabeth (Krankenhaus der Elisabethinen) in Gries has 182 beds.
111
+
112
+ There are several private clinics as well: the Privatklinik Kastanienhof, the Privatklinik Leech, the Privatklinik der Kreuzschwestern, the Sanatorium St. Leonhard, the Sanatorium Hansa and the Privatklinik Graz-Ragnitz.
113
+
114
+ EMS in Graz is provided solely by the Austrian Red Cross. At all times, two emergency doctor's cars (NEF – Notarzteinsatzfahrzeug), two NAWs (Notarztwagen – ambulances staffed with a physician in addition to regular personnel) and about 30 RTWs (Rettungswagen – regular ambulances) are on standby. Furthermore, several non-emergency ambulances (KTW – Krankentransportwagen) and a Mobile Intensive Care Unit (MICU) are operated by the Red Cross to transport non-emergency patients to and between hospitals. In addition to the Red Cross, the Labourers'-Samaritan-Alliance (Arbeiter-Samariter-Bund Österreichs), the Austrian organisation of the Order of Malta Ambulance Corps (Malteser Hospitaldienst Austria) and the Green Cross (Grünes Kreuz) operate ambulances (KTW) for non-emergency patient transport. In addition to the cars, there is also the C12 air ambulance helicopter stationed at Graz airport, staffed with an emergency physician in addition to regular personnel.
115
+
116
+ Graz is twinned with:[26]
117
+
118
+ The following are past and present notable residents of Graz.
119
+
en/2279.html.txt ADDED
@@ -0,0 +1,136 @@
1
+
2
+
3
+
4
+
5
+ Ancient Greek includes the forms of the Greek language used in ancient Greece and the ancient world from around the 9th century BC to the 6th century AD. It is often roughly divided into the Archaic period (9th to 6th centuries BC), Classical period (5th and 4th centuries BC), and Hellenistic period (Koine Greek, 3rd century BC to 4th century AD).
6
+
7
+ It is preceded by Mycenaean Greek and succeeded by Medieval Greek. Koine is regarded as a separate historical stage although its earliest form closely resembles Attic Greek and its latest form approaches Medieval Greek. There were several regional dialects of ancient Greek, of which Attic Greek developed into Koine.
8
+
9
+ Ancient Greek was the language of Homer and of fifth-century Athenian historians, playwrights, and philosophers, as well as the original language of the New Testament of the Christian Bible, the best-selling book in world history. Ancient Greek has contributed many words to English vocabulary and has been a standard subject of study in educational institutions of the Western world since the Renaissance. This article primarily contains information about the Epic and Classical periods of the language.
10
+
11
+ Ancient Greek was a pluricentric language, divided into many dialects. The main dialect groups are Attic and Ionic, Aeolic, Arcadocypriot, and Doric, many of them with several subdivisions. Some dialects are found in standardized literary forms used in literature, while others are attested only in inscriptions.
12
+
13
+ There are also several historical forms. Homeric Greek is a literary form of Archaic Greek (derived primarily from Ionic and Aeolic) used in the epic poems, the Iliad and the Odyssey, and in later poems by other authors. Homeric Greek had significant differences in grammar and pronunciation from Classical Attic and other Classical-era dialects.
14
+
15
+ The origins, early form and development of the Hellenic language family are not well understood because of a lack of contemporaneous evidence. Several theories exist about what Hellenic dialect groups may have existed between the divergence of early Greek-like speech from the common Proto-Indo-European language and the Classical period. They have the same general outline, but differ in some of the detail. The only attested dialect from this period[a] is Mycenaean Greek, but its relationship to the historical dialects and the historical circumstances of the times imply that the overall groups already existed in some form.
16
+
17
+ Scholars assume that the major Ancient Greek dialect groups developed not later than 1120 BC, at the time of the Dorian invasions, and that their first appearances in precise alphabetic writing date to the 8th century BC. The invasion would not be "Dorian" unless the invaders had some cultural relationship to the historical Dorians. The invasion is known to have displaced population to the later Attic-Ionic regions, who regarded themselves as descendants of the population displaced by or contending with the Dorians.
18
+
19
+ The Greeks of this period believed there were three major divisions of all Greek people – Dorians, Aeolians, and Ionians (including Athenians), each with their own defining and distinctive dialects. Allowing for their oversight of Arcadian, an obscure mountain dialect, and Cypriot, far from the center of Greek scholarship, this division of people and language is quite similar to the results of modern archaeological-linguistic investigation.
20
+
21
+ One standard formulation for the dialects is:[2]
22
+
23
+ Western group:
24
+
25
+
26
+
27
+ Central group:
28
+
29
+
30
+
31
+ Eastern group:
32
+
33
+
34
+
35
+ Western group:
36
+
37
+
38
+
39
+ Eastern group:
40
+
41
+
42
+
43
+ West vs. non-West Greek is the strongest-marked and earliest division, with non-West in subsets of Ionic-Attic (or Attic-Ionic) and Aeolic vs. Arcadocypriot, or Aeolic and Arcado-Cypriot vs. Ionic-Attic. Often non-West is called 'East Greek'.
44
+
45
+ Arcadocypriot apparently descended more closely from the Mycenaean Greek of the Bronze Age.
46
+
47
+ Boeotian had come under a strong Northwest Greek influence, and can in some respects be considered a transitional dialect. Thessalian likewise had come under Northwest Greek influence, though to a lesser degree.
48
+
49
+ Pamphylian Greek, spoken in a small area on the southwestern coast of Anatolia and little preserved in inscriptions, may be either a fifth major dialect group or Mycenaean Greek overlaid by Doric, with a non-Greek native influence.
50
+
51
+ Most of the dialect sub-groups listed above had further subdivisions, generally equivalent to a city-state and its surrounding territory, or to an island. Doric notably had several intermediate divisions as well, into Island Doric (including Cretan Doric), Southern Peloponnesus Doric (including Laconian, the dialect of Sparta), and Northern Peloponnesus Doric (including Corinthian).
52
+
53
+ The Lesbian dialect was Aeolic Greek.
54
+
55
+ All the groups were represented by colonies beyond Greece proper as well, and these colonies generally developed local characteristics, often under the influence of settlers or neighbors speaking different Greek dialects.
56
+
57
+ The dialects outside the Ionic group are known mainly from inscriptions, notable exceptions being:
58
+
59
+ After the conquests of Alexander the Great in the late 4th century BC, a new international dialect known as Koine or Common Greek developed, largely based on Attic Greek, but with influence from other dialects. This dialect slowly replaced most of the older dialects, although the Doric dialect has survived in the Tsakonian language, which is spoken in the region of modern Sparta. Doric has also passed down its aorist terminations into most verbs of Demotic Greek. By about the 6th century AD, the Koine had slowly metamorphosed into Medieval Greek.
60
+
61
+ Ancient Macedonian was an Indo-European language. Because no sample texts survive, it is impossible to ascertain whether it was a Greek dialect or even related to the Greek language at all. Its exact relationship remains unclear. Macedonian could also be related to the Thracian and Phrygian languages to some extent. The Macedonian dialect (or language) appears to have been replaced by Attic Greek during the Hellenistic period. Late 20th century epigraphic discoveries in the Greek region of Macedonia, such as the Pella curse tablet, suggest that ancient Macedonian was a variety of north-western ancient Greek, or at least was replaced early on by a Greek dialect.[4]
62
+
63
+ Ancient Greek differs from Proto-Indo-European (PIE) and other Indo-European languages in certain ways. In phonotactics, ancient Greek words could end only in a vowel or /n s r/; final stops were lost, as in γάλα "milk", compared with γάλακτος "of milk" (genitive). Ancient Greek of the classical period also differed in both the inventory and distribution of original PIE phonemes due to numerous sound changes,[5] notably the following:
64
+
65
+ The pronunciation of ancient Greek was very different from that of Modern Greek. Ancient Greek had long and short vowels; many diphthongs; double and single consonants; voiced, voiceless, and aspirated stops; and a pitch accent. In Modern Greek, all vowels and consonants are short. Many vowels and diphthongs once pronounced distinctly are pronounced as /i/ (iotacism). Some of the stops and glides in diphthongs have become fricatives, and the pitch accent has changed to a stress accent. Many of the changes took place in the Koine Greek period. The writing system of Modern Greek, however, does not reflect all pronunciation changes.
66
+
67
+ The examples below represent Attic Greek in the 5th century BC. Ancient pronunciation cannot be reconstructed with certainty, but Greek from the period is well documented, and there is little disagreement among linguists as to the general nature of the sounds that the letters represent.
68
+
69
[ŋ] occurred as an allophone of /n/ before velars and as an allophone of /ɡ/ before nasals. /r/ was probably voiceless when word-initial (written ῥ). /s/ was assimilated to [z] before voiced consonants.

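Because these are purely positional rules, they can be illustrated mechanically. The following Python sketch is a didactic toy only, not a claim about standard practice in Greek phonology; the function name, the romanization, and the segment sets are all invented for the example.

```python
# Toy model of the three allophonic rules described above, applied to a
# broad IPA-style list of segments. Segment inventories are illustrative.

VELARS = {"k", "g", "kh"}        # κ, γ, χ
NASALS = {"m", "n"}
VOICED = {"b", "d", "g", "m", "n", "l", "r"}

def apply_allophones(segments):
    """Return a copy of `segments` with the three allophones applied."""
    out = list(segments)
    for i in range(len(out) - 1):
        seg, nxt = out[i], out[i + 1]
        if seg == "n" and nxt in VELARS:
            out[i] = "ŋ"         # /n/ -> [ŋ] before velars
        elif seg == "g" and nxt in NASALS:
            out[i] = "ŋ"         # /g/ -> [ŋ] before nasals
        elif seg == "s" and nxt in VOICED:
            out[i] = "z"         # /s/ -> [z] before voiced consonants
    if out and out[0] == "r":
        out[0] = "r̥"            # word-initial /r/ was probably voiceless (ῥ)
    return out

print(apply_allophones(["a", "n", "k", "u", "r", "a"]))  # ['a', 'ŋ', 'k', 'u', 'r', 'a']
print(apply_allophones(["s", "m", "e"]))                 # ['z', 'm', 'e']
```
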
/oː/ was raised to [uː], probably by the 4th century BC.

Greek, like all of the older Indo-European languages, is highly inflected, and it is very archaic in its preservation of Proto-Indo-European forms. In ancient Greek, nouns (including proper nouns) have five cases (nominative, genitive, dative, accusative, and vocative), three genders (masculine, feminine, and neuter), and three numbers (singular, dual, and plural). Verbs have four moods (indicative, imperative, subjunctive, and optative) and three voices (active, middle, and passive), as well as three persons (first, second, and third) and various other forms. Verbs are conjugated through seven combinations of tense and aspect (generally called simply "tenses"): the present, future, and imperfect are imperfective in aspect; the aorist is perfective; and the perfect, pluperfect, and future perfect are perfect in aspect. Most tenses display all four moods and three voices, although there is no future subjunctive or imperative, and no imperfect subjunctive, optative, or imperative. The infinitives and participles correspond to the finite combinations of tense, aspect, and voice.

The indicative of past tenses adds (conceptually, at least) a prefix /e-/, called the augment. This was probably originally a separate word, meaning something like "then", added because tenses in PIE had primarily aspectual meaning. The augment is added to the indicative of the aorist, imperfect, and pluperfect, but not to any of the other forms of the aorist (no other forms of the imperfect and pluperfect exist).

The two kinds of augment in Greek are syllabic and quantitative. The syllabic augment is added to stems beginning with consonants and simply prefixes e (stems beginning with r, however, add er). The quantitative augment is added to stems beginning with vowels and involves lengthening the vowel.

Some verbs augment irregularly; the most common variation is e → ei. The irregularity can be explained diachronically by the loss of s between vowels.

In verbs with a preposition as a prefix, the augment is placed not at the start of the word, but between the preposition and the original verb. For example, προσ(-)βάλλω (I attack) becomes προσέβαλον in the aorist. However, compound verbs consisting of a prefix that is not a preposition retain the augment at the start of the word: αὐτο(-)μολῶ becomes ηὐτομόλησα in the aorist.

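Taken together, the augment rules amount to a small deterministic procedure. The sketch below is a minimal illustration on romanized stems, assuming only the most common vowel lengthenings (a and e to ē, o to ō, i and u lengthened); the function names are invented, and irregular verbs and diphthongs are deliberately ignored.

```python
# Toy sketch of augment formation and placement on romanized stems.

LENGTHEN = {"a": "ē", "e": "ē", "o": "ō", "i": "ī", "u": "ū"}

def apply_augment(stem):
    """Prefix the past-indicative augment to a bare verb stem."""
    first = stem[0]
    if first in LENGTHEN:
        return LENGTHEN[first] + stem[1:]   # quantitative augment
    if first == "r":
        return "er" + stem                  # r-initial stems take er-
    return "e" + stem                       # syllabic augment

def augment_compound(prefix, stem, prefix_is_preposition):
    """Place the augment inside or outside a compound verb."""
    if prefix_is_preposition:
        # The augment sits between the preposition and the verb,
        # as in προσ-έ-βαλον from προσβάλλω.
        return prefix + apply_augment(stem)
    # Non-prepositional prefixes keep the augment word-initially.
    return apply_augment(prefix + stem)

print(apply_augment("lu"))                    # elu (cf. ἔλυον from λύω)
print(augment_compound("pros", "bal", True))  # prosebal
```
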
Following Homer's practice, the augment is sometimes omitted in poetry, especially epic poetry.

The augment sometimes substitutes for reduplication; see below.

Almost all forms of the perfect, pluperfect, and future perfect reduplicate the initial syllable of the verb stem. (A few irregular forms of the perfect do not reduplicate, whereas a handful of irregular aorists do.) The three types of reduplication are syllabic reduplication (most stems beginning with a single consonant prefix that consonant followed by e), the augment (stems beginning with a vowel, and most beginning with a consonant cluster, take the augment instead), and Attic reduplication (certain vowel-initial stems repeat their initial vowel and consonant and lengthen the following vowel).

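The default pattern, syllabic reduplication of a consonant-initial stem, can likewise be sketched in a few lines. This is again a hedged, invented illustration on romanized stems, covering only the simplest case.

```python
# Toy sketch of syllabic reduplication for the perfect stem.

VOWELS = set("aeiouāēīōū")

def reduplicate(stem):
    """Prefix 'initial consonant + e' to a consonant-initial stem."""
    first = stem[0]
    if first in VOWELS:
        # Vowel-initial stems take the augment instead (not modelled here).
        raise ValueError("vowel-initial stems lengthen the vowel instead")
    return first + "e" + stem

print(reduplicate("lu"))    # lelu   (cf. λέλυκα, perfect of λύω)
print(reduplicate("grap"))  # gegrap (cf. γέγραφα, perfect of γράφω)
```
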
Irregular reduplication can be understood diachronically. For example, lambanō (root lab) has the perfect stem eilēpha (not *lelēpha) because it was originally slambanō, with perfect seslēpha, becoming eilēpha through compensatory lengthening.

Reduplication is also visible in the present tense stems of certain verbs. These stems add a syllable consisting of the root's initial consonant followed by i. A nasal stop appears after the reduplication in some verbs.[6]

The earliest extant examples of ancient Greek writing (c. 1450 BC) are in the syllabic script Linear B. Beginning in the 8th century BC, however, the Greek alphabet became standard, albeit with some variation among dialects. Early texts are written in boustrophedon style, but left-to-right writing became standard during the Classical period. Modern editions of ancient Greek texts are usually written with accents and breathing marks, interword spacing, modern punctuation, and sometimes mixed case, but these were all introduced later.

The beginning of Homer's Iliad exemplifies the Archaic period of ancient Greek (see Homeric Greek for more details):

Μῆνιν ἄειδε, θεά, Πηληϊάδεω Ἀχιλῆος
οὐλομένην, ἣ μυρί' Ἀχαιοῖς ἄλγε' ἔθηκε,
πολλὰς δ' ἰφθίμους ψυχὰς Ἄϊδι προΐαψεν
ἡρώων, αὐτοὺς δὲ ἑλώρια τεῦχε κύνεσσιν
οἰωνοῖσί τε πᾶσι· Διὸς δ' ἐτελείετο βουλή·
ἐξ οὗ δὴ τὰ πρῶτα διαστήτην ἐρίσαντε
Ἀτρεΐδης τε ἄναξ ἀνδρῶν καὶ δῖος Ἀχιλλεύς.

The beginning of the Apology by Plato exemplifies Attic Greek from the Classical period of ancient Greek:

Using the IPA:

Transliterated into the Latin alphabet using a modern version of the Erasmian scheme:

Translated into English:

From the Renaissance until the beginning of the 20th century, the study of ancient Greek, alongside Latin, occupied an important place in the syllabus of European countries. Ancient Greek is still taught as a compulsory or optional subject, especially at traditional or elite schools throughout Europe, such as public schools and grammar schools in the United Kingdom. It is compulsory in the liceo classico in Italy, in the gymnasium in the Netherlands, in some classes in Austria, in the klasična gimnazija (grammar school oriented towards classical languages) in Croatia, and in Classical Studies in the ASO in Belgium, and it is optional in the humanities-oriented gymnasium in Germany (usually as a third language after Latin and English, from the age of 14 to 18). In 2006/07, 15,000 pupils studied ancient Greek in Germany according to the Federal Statistical Office of Germany, and 280,000 pupils studied it in Italy.[7] It is a compulsory subject alongside Latin in the humanities branch of the Spanish bachillerato. Ancient Greek is also taught at most major universities worldwide, often combined with Latin as part of the study of classics. It was also announced that it would be taught in state primary schools in the UK, to boost children's language skills,[8][9] and would be offered as a foreign language to pupils in all primary schools from 2014, as part of a major drive to boost education standards, together with Latin, Mandarin, French, German, Spanish, and Italian.[10][needs update]

In Christian education, especially at the post-graduate level, the study of ancient Greek is commonplace if not compulsory. Greek was a lingua franca of the Roman world at the time of Jesus, and the Bible's accounts of his life and the rest of the New Testament were written in it; since these books form a vital part of Christian theology, studying the language they were written in is commonplace for those studying to become pastors or priests.

Ancient Greek is also taught as a compulsory subject in all gymnasiums and lyceums in Greece.[11][12] Starting in 2001, an annual international competition, "Exploring the Ancient Greek Language and Culture" (Greek: Διαγωνισμός στην Αρχαία Ελληνική Γλώσσα και Γραμματεία), was run for upper secondary students by the Greek Ministry of National Education and Religious Affairs, with Greek language and cultural organisations as co-organisers.[13] It appears to have ceased in 2010, having failed to gain the recognition and acceptance of teachers.[14]

Modern authors rarely write in ancient Greek, though Jan Křesadlo wrote some poetry and prose in the language, and Harry Potter and the Philosopher's Stone,[15] some volumes of Asterix,[16] and The Adventures of Alix have been translated into ancient Greek. Ὀνόματα Κεχιασμένα (Onomata Kechiasmena) is the first magazine of crosswords and puzzles in ancient Greek.[17] Its first issue appeared in April 2015 as an annex to Hebdomada Aenigmatum. Alfred Rahlfs included a preface, a short history of the Septuagint text, and other front matter translated into ancient Greek in his 1935 edition of the Septuagint; Robert Hanhart also included the introductory remarks to the 2006 revised Rahlfs–Hanhart edition in the language.[18] Akropolis World News publishes a weekly summary of the most important news in ancient Greek.[19]

Ancient Greek is also used by organizations and individuals, mainly Greek, who wish to denote their respect, admiration or preference for the use of this language. This use is sometimes considered graphical, nationalistic or humorous. In any case, the fact that modern Greeks can still wholly or partly understand texts written in non-archaic forms of ancient Greek shows the affinity of the modern Greek language to its ancestor.[19]

An isolated community near Trabzon, Turkey, an area where Pontic Greek is spoken, has been found to speak a variety of Modern Greek, Ophitic, that has parallels, both structurally and in its vocabulary, to ancient Greek that are not present in other varieties (linguistic conservatism).[20] As few as 5,000 people speak the dialect, and linguists believe that it is the closest living language to ancient Greek.[21]

Ancient Greek is often used in the coinage of modern technical terms in the European languages: see English words of Greek origin. Latinized forms of ancient Greek roots are used in many of the scientific names of species and in scientific terminology.

Proto-Greek → Mycenaean → Ancient → Koine → Medieval → Modern

en/228.html.txt ADDED
@@ -0,0 +1,320 @@
Sir Andrew Barron Murray OBE (born 15 May 1987) is a British professional tennis player from Scotland. He represents Great Britain in international competition and is a three-time Grand Slam singles champion, two-time Olympic champion, Davis Cup champion, winner of the 2016 ATP World Tour Finals, and a former world No. 1.

Murray defeated Novak Djokovic in the 2012 US Open final, becoming the first British player since 1977, and the first British man since 1936, to win a Grand Slam singles tournament. He is also the first British man to win multiple Wimbledon singles titles since Fred Perry in 1936,[12] winning the tournament in 2013 and 2016.

Murray is the 2012 and 2016 Olympic gold medallist in men's singles, making him the only tennis player, male or female, to have won two Olympic singles titles. He featured in Great Britain's Davis Cup-winning team in 2015, going 11–0 in his matches (8 singles and 3 doubles) as they secured their first Davis Cup title since 1936.[12][13]

Andy Murray was born in Glasgow, Scotland, the son of Judy Murray (née Erskine) and William Murray.[4] His maternal grandfather, Roy Erskine, was a professional footballer in the late 1950s.[14] Murray is a supporter of Hibernian Football Club, one of the teams his grandfather represented,[15] and Arsenal Football Club.[16]

Murray began playing tennis at the age of three, when his mother Judy took him to play on the local courts.[17] He played in his first competitive tournament at age five, and by the time he was eight he was competing with adults in the Central District Tennis League.[18] Murray's elder brother, Jamie, is also a professional tennis player, playing on the doubles circuit, and became a multiple Grand Slam winner in the discipline (both men's and mixed).[19][20][21][22][23]

Murray grew up in Dunblane and attended Dunblane Primary School. He and his brother were present during the 1996 Dunblane school massacre,[24] when Thomas Hamilton killed 16 children and a teacher before shooting himself; Murray took cover in a classroom.[25] Murray says he was too young to understand what was happening and is reluctant to talk about it in interviews, but in his autobiography Hitting Back he states that he attended a youth group run by Hamilton and that his mother gave Hamilton lifts in her car.[26] Murray later attended Dunblane High School.[27][28]

Murray's parents split up when he was 10, with the boys living with their father while being mentored in tennis by their mother.[29] He believes the impact this had on him could be the reason behind his competitive spirit.[30] At 15, he was asked to train with Rangers Football Club at their School of Excellence, but declined, opting to focus on his tennis career instead.[31] He then decided to move to Barcelona, Spain, where he studied at the Schiller International School and trained on the clay courts of the Sánchez-Casal Academy, coached by Pato Alvarez.[32] Murray described this time as "a big sacrifice";[28] his parents had to find £40,000 to pay for his 18-month stay.[29] In Spain, he trained with Emilio Sánchez, a former world No. 1 doubles player.[28]

Murray was born with a bipartite patella, in which the kneecap remains as two separate bones instead of fusing together in early childhood, but he was not diagnosed until the age of 16. He has been seen holding his knee because of the pain caused by the condition and has pulled out of events because of it.[33]

In February 2013, Murray bought Cromlix House hotel near Dunblane for £1.8 million. His brother Jamie had celebrated his wedding there in 2010,[34] but it had since ceased trading. The venue re-opened as a 15-room five-star hotel in April 2014.[35] Later that month Murray was awarded the freedom of Stirling and received an honorary doctorate from the University of Stirling in recognition of his services to tennis.[36]

Murray began dating Kim Sears, daughter of player-turned-coach Nigel Sears, in 2005.[37][38] Their engagement was announced in November 2014,[38] and they married on 11 April 2015 at Dunblane Cathedral in his home town,[39] with the reception at his Cromlix House hotel. The couple live in Oxshott, Surrey,[1] with their son and two daughters.[40][41][42] He identifies himself as a feminist, and has been repeatedly vocal in his support for women players and coaches.[43][44]

Murray has invested in up to 30 UK businesses using Seedrs, a crowdfunding platform.[45]

Leon Smith, Murray's tennis coach from the ages of 11 to 17,[32] described Murray as "unbelievably competitive", while Murray attributes his abilities to the motivation gained from losing to his older brother Jamie. In 1999, at the age of 12, Murray won his age group at the Orange Bowl, a prestigious event for junior players.[46] He won it again at the age of 14, and is one of only nine players to have won the Junior Orange Bowl championship twice in its 70-year history, alongside the likes of Jimmy Connors, Jennifer Capriati, Monica Seles, and Yishai Oliel.[47]

In July 2003, Murray started out on the Challenger and Futures circuits. In his first tournament, he reached the quarter-finals of the Manchester Challenger.[48] In September, Murray won his first senior title by taking the Glasgow Futures event.[49] He also reached the semi-finals of the Edinburgh Futures event.[50]

For the first six months of 2004, Murray had a knee injury and could not play.[51]

In July 2004, Murray played a Challenger event in Nottingham, where he lost to the future Grand Slam finalist Jo-Wilfried Tsonga in the second round.[52] Murray then went on to win Futures events in Xàtiva[53] and Rome.[54]

In September 2004, he won the Junior US Open and was selected for the Davis Cup World Group play-off match against Austria later that month;[55] however, he was not selected to play. Later that year, he won the BBC Young Sports Personality of the Year award.[56]

As a junior, Murray reached as high as No. 6 in the world in 2003 (and No. 8 in doubles). In the combined junior rankings introduced in 2004, he reached No. 2 in the world.[57]

Junior Slam results:
Australian Open: -
French Open: SF (2005)
Wimbledon: 3R (2004)
US Open: W (2004)

Murray began 2005 ranked No. 407,[58] but while he was in South America in January, he hurt his back and had to take three months off.[51]

In March, he became the youngest Briton to play in the Davis Cup.[59] Murray turned professional in April and was given a wild card into a clay-court tournament in Barcelona, the Open SEAT, where he lost in three sets to Jan Hernych.[60] That month, Murray parted acrimoniously from his coach Pato Alvarez, complaining about his negative attitude.[61] Murray then reached the semi-finals of the boys' French Open, where he lost in straight sets to Marin Čilić.[62]

Mark Petchey agreed to coach Murray for four weeks until the end of Wimbledon, but the arrangement metamorphosed into a full-time position.[61] Given a wild card to Queen's,[63] Murray progressed past Santiago Ventura in straight sets for his first ATP match win.[64] After a second-round win against Taylor Dent,[65] he played the former Australian Open champion Thomas Johansson in the third round, losing in three sets after cramping and twisting his ankle.[66][67] Following his performance at Queen's, Murray received a wild card for Wimbledon. Ranked No. 312, he became the first Scot in the Open Era to reach the third round of the men's singles tournament at Wimbledon.[68] There he lost to the 2002 Wimbledon finalist David Nalbandian through cramping and fatigue, having led two sets to love.[69]

Following Wimbledon, Murray won Challenger events on the hard courts of Aptos and Binghamton, New York. He then played his first Masters event, at Cincinnati, where he beat Taylor Dent before losing in three sets to the then-No. 4 Marat Safin. With a wild card entry, Murray played Andrei Pavel in the opening round of the US Open, where he recovered from two sets to one down to win his first five-set match.[70] However, he lost in the second round to Arnaud Clément in another five-set contest.[71] Murray was again selected for the Davis Cup match, against Switzerland, and was picked for the opening singles rubbers, losing in straight sets to Stanislas Wawrinka.[72] Murray made his first ATP final at the Thailand Open, where he faced No. 1 Roger Federer and lost in straight sets.[73]

Murray beat Tim Henman in their first meeting, in the first round of the Basel Swiss Indoors, and eventually reached the quarter-finals.[74]

In November, Murray captained Scotland at the inaugural Aberdeen Cup against an England team led by Greg Rusedski.[75] This was an exhibition tournament and the only event at which Murray played Rusedski; they never met on the Tour. Rusedski beat Murray in the first match, but Murray won the second. This was also the first time that Andy and his brother Jamie Murray played doubles as seniors.[76] Scotland defeated England 4½–2½.[77] He completed the year ranked No. 64 and was named the 2005 BBC Scotland Sports Personality of the Year.[78]

The 2006 season saw Murray compete on the full circuit for the first time; he split with his coach Mark Petchey[79] and teamed up with Brad Gilbert.[80]

On 27 February, Murray became the British No. 1, ending Tim Henman's seven-year run. Murray was now world No. 42, Greg Rusedski No. 43, and Tim Henman No. 49.[81] Rusedski regained his British No. 1 status on 15 May[82] for eight weeks.[83]

Murray suffered first-round defeats at the Australian Open, in straight sets to the Argentine Juan Ignacio Chela,[84] and at the French Open, in five sets to Gaël Monfils.[85] He did reach the fourth round for the first time at both Wimbledon (beating third seed Andy Roddick in the third round) and the US Open.[86][87]

Murray played in Davis Cup ties against Serbia, Israel and Ukraine. He missed the opening singles matches before losing the doubles as Britain lost their tie against Serbia.[88] During the tie with Israel, Murray won his singles rubber and lost the doubles before pulling out with a neck injury ahead of the reverse singles, as Britain lost the tie.[89][90][91] Against Ukraine, Murray won both his singles rubbers but lost the doubles, as Britain won the tie.[92][93][94]

At the Masters events, Murray lost in the first round in Miami,[95] Monte Carlo and Rome,[96][97] and in the second round at Indian Wells and Hamburg.[98][99] He reached his first Masters semi-final at the Rogers Cup in Toronto, losing to Richard Gasquet.[100]

At Cincinnati, Murray became one of only two players, alongside Rafael Nadal, to defeat Roger Federer in 2006, breaking the Swiss star's 55-match winning streak on hard courts.[101] He lost two rounds later to Andy Roddick, but broke into the top 20 for the first time.[102][103] In the final two Masters events, in Madrid and Paris, Murray exited at the last-16 stage, with losses to Novak Djokovic and Dominik Hrbatý ending his season.[104][105] Earlier in the year, when the tour had reached San Jose, California, Murray defeated a top-ten player for the first time, Andy Roddick,[106] and went on to claim the SAP Open title by defeating No. 11 Lleyton Hewitt.[107] Murray was also a finalist at the Legg Mason Tennis Classic,[108] and, playing doubles with his brother in Bangkok, the pair reached the final.[109] After the French Open, where Murray was injured again, he revealed that his bones had not fully grown, causing him to suffer from cramps and back problems.[110]

In November, the Aberdeen Cup was held for the second time, with Murray leading team Scotland and Greg Rusedski captaining England.[111] Scotland won 6½–1½.[112]

Murray reached the fourth round of the Australian Open, where he lost a five-set match to No. 2 Rafael Nadal.[113]

Following the Miami Masters, where he reached the semi-finals,[114] Murray reached the No. 10 ranking on 16 April.[115]

The British No. 1 sustained tendon damage during his first-round match at the German Open in Hamburg. Murray was up 5–1 when he hit a forehand from the back of the court and snapped the tendons in his wrist, leaving him out of action from 15 May until 7 August and causing him to miss Wimbledon.[116] During this rest period, Murray rose to No. 8, but by 7 August he had dropped to No. 14.[115]

Murray suffered a third-round loss at the US Open. At the Masters tournaments, he reached the semi-finals at Indian Wells and Miami, exited in the first round at Rome and Cincinnati, and went out in the second round in Canada. In the final two Masters tournaments of the year, he exited in the third round in Madrid and in the quarter-finals in Paris. Murray won titles in San Jose and St. Petersburg,[117] reached the finals of tournaments in Doha and Metz, and finished the season ranked No. 11 in the world.[115]

In November, Murray split with his coach Brad Gilbert and added a team of experts along with Miles Maclagan, his main coach.[118][119]

In 2008, Murray suffered a first-round loss at the Australian Open to the eventual runner-up, Jo-Wilfried Tsonga, and a third-round loss at the French Open to Nicolás Almagro.[120] Murray then made his first Grand Slam quarter-final at Wimbledon before reaching his first final at the US Open. During the tournament in New York, Murray claimed his first win over Nadal, a victory that made him the first British player since Greg Rusedski in 1997 to reach a major final.[121] In his first Grand Slam final, Murray suffered a straight-sets loss to Federer.[122][123] At the Beijing Olympics, Murray suffered one of the worst defeats of his career, losing his first-round singles match to No. 77 Yen-hsun Lu of Taiwan in straight sets. That abject defeat was still on his mind in a BBC interview five years later, despite an intervening Olympic gold medal and a head-to-head win, when he met the same player (by then ranked No. 75) in the second round of Wimbledon 2013.[124]

In the Masters tournaments, Murray went out in round four at Indian Wells and in the first round in Miami. In the clay Masters, he made the third round at Monte Carlo and Hamburg and the second round in Rome. On the American hard-court swing, Murray made the semi-finals in Toronto before winning his first Masters shield in Cincinnati. He added another shield to his collection in Madrid,[125] before losing in the quarter-finals in Paris. Now at No. 4 in the world, Murray qualified for the Masters Cup for the first time; he played well in defeating an injured Federer,[126] but lost to Davydenko in the semi-finals.[127] Murray ended 2008 ranked No. 4, having also won tournaments in Doha, Marseille and St. Petersburg.

Murray opened the 2009 season with a successful defence of his title at the Qatar Open in Doha, defeating Andy Roddick in straight sets.[128] At the Australian Open, Murray made it to the fourth round, losing to Fernando Verdasco.[129] Murray won his eleventh career title in Rotterdam, defeating No. 1 Nadal in three sets.[130] He next went to Dubai but withdrew before the quarter-finals with a recurrence of a virus that had affected him at the Australian Open,[131] and which also caused him to miss a Davis Cup tie in Glasgow. Murray then lost the final to Nadal at Indian Wells,[132] but won a week later in Miami over Djokovic for another Masters title.

In the lead-up to the French Open, Murray beat No. 9 Nikolay Davydenko at the Monte Carlo Masters, the first time he had beaten a top-ten player on clay,[133] though he lost to Nadal in the semi-finals. Murray was upset in round two of the Rome Masters by the qualifier Juan Mónaco, and he reached the quarter-finals of the Madrid Masters, losing to Juan Martín del Potro. During this time, Murray achieved the highest ranking of any British male in the Open Era when he reached No. 3 on 11 May 2009.[134] Murray reached the quarter-finals of the French Open, but was defeated by Fernando González in four sets.

Murray won a title on grass for the first time at Queen's, becoming the first British winner of the tournament since 1938; in the final he defeated the American James Blake.[135] At Wimbledon, Murray's fourth-round match against Stanislas Wawrinka was the first match to be played entirely under Wimbledon's retractable roof, which also made it the then latest-finishing match ever at Wimbledon, a record he would go on to eclipse three years later in a second-round match against Marcos Baghdatis.[136] However, Murray lost a tight semi-final to Andy Roddick.

Murray returned to action in Montreal, defeating del Potro in three sets to take the title.[137] After this victory, he overtook Nadal in the rankings and held the No. 2 position until the start of the US Open.[138] Murray followed the Masters win with the Cincinnati Masters, where he lost to Federer. At the US Open, Murray was hampered by a wrist injury and suffered a straight-sets loss to Čilić.[139] He won both his singles matches and lost the doubles in the Davis Cup tie against Poland,[140] but then missed six weeks with a wrist injury.[141]

In November, Murray won at Valencia,[141][142] but bowed out in round two of the Paris Masters. To end the season, he did not make it out of the round robin at the World Tour Finals in London.[143] He ended the year ranked No. 4 for the second consecutive year.

Murray and Laura Robson represented Britain at the Hopman Cup. The pair progressed to the final, where they were beaten by Spain.[144] At the Australian Open, Murray beat Nadal and Čilić before losing in the final to No. 1 Roger Federer.[145]

(Pictured: Andy Murray during his runner-up speech at the 2010 Australian Open.)[146]

At the BNP Paribas Open in Indian Wells, Murray reached the quarter-finals, losing to Robin Söderling in straight sets. He next played at the 2010 Sony Ericsson Open, but lost his first match of the tournament to Mardy Fish, saying afterwards that his mind had not been fully on tennis.[147] At the Monte-Carlo Rolex Masters, Murray suffered another first-match loss, this time to Philipp Kohlschreiber. He entered the doubles competition with Ross Hutchins; the duo lost to the Bryan brothers in a champions tie-breaker. Murray reached the third round of the Rome Masters and the quarter-finals of the Madrid Masters, losing both times to David Ferrer.[148][149]

After playing an exhibition match, Murray started the French Open with three tough wins, before losing in straight sets to Tomáš Berdych in the fourth round.[150] In London, Murray progressed to the third round, where he faced Mardy Fish. At 3–3 in the final set, with momentum going Murray's way (he had just come back from 3–0 down), the match was called off for bad light, leaving Murray fuming. When play resumed the next day, Murray was edged out in a tie-breaker by the eventual finalist, his second defeat by Fish that year.[151][152] At Wimbledon, Murray progressed to the semi-finals, losing to Nadal in straight sets.[153] On 27 July 2010, Murray and his coach Maclagan split, and Murray replaced him with Àlex Corretja.[154]

Starting the US hard-court season at the 2010 Farmers Classic, Murray reached the final but lost to Sam Querrey in three sets, his first loss to Querrey in five career meetings.[155] In Canada, Murray became the first player since Andre Agassi in 1995 to defend the Canadian Masters, defeating Nadal and then Federer in straight sets to end his eight-month title drought.[156] At the Cincinnati Masters, Murray first complained about the speed of the court,[157] and then, ahead of a quarter-final match with Fish, that the organisers had refused to schedule the match later in the day.[158] With temperatures reaching 33 °C in the shade, Murray won the first set in a tie-breaker but began to feel ill, and the doctor was called on court to cool him down. Murray said after the match that he had considered retiring. He lost the second set, but forced a final-set tie-breaker, which Fish won.[159] After Murray lost to Stanislas Wawrinka in the third round of the US Open, questions arose about his conditioning, as he had called the trainer out twice during the match.[160]

His next event was the China Open in Beijing, where Murray reached the quarter-finals, losing to Ivan Ljubičić.[161] Murray then won the Shanghai Rolex Masters, dismissing Roger Federer in straight sets[162] and not dropping a single set throughout the event. He returned to Spain to defend his title at the Valencia Open 500 but lost in the second round to Juan Mónaco.[163] In the doubles, however, Murray partnered his brother Jamie to the final, where they defeated Mahesh Bhupathi and Max Mirnyi; the victory was Murray's first doubles title and the second time he had reached a final with his brother.[164]

Murray reached the quarter-finals of the BNP Paribas Masters, losing to Gaël Monfils in three sets. His exit, combined with Söderling's taking the title, pushed him down a spot in the rankings, from No. 4 to No. 5.[165] At the Tour Finals in London, Murray went 2–1 in round-robin play before facing Nadal in the semi-final. They battled for over three hours before Murray fell to the Spaniard in a final-set tie-breaker, bringing an end to his season.[166] He ended the year ranked No. 4 for the third consecutive year.

Murray and Laura Robson lost in the round-robin stage of the 2011 Hopman Cup, losing all three ties even though Murray won all of his singles matches. Murray, along with other stars such as Federer, Nadal, and Djokovic, then participated in the Rally for Relief event to help raise money for the victims of the Queensland floods.[167]

Seeded fifth at the 2011 Australian Open, Murray met the former champion Novak Djokovic in the final and was defeated in straight sets. In Rotterdam, he was beaten by Marcos Baghdatis in the first round,[168] though he reached the semi-finals of the doubles tournament with his brother Jamie. Murray then lost to qualifiers in the first rounds of the Masters Series events at Indian Wells and Miami, after which he split with coach Àlex Corretja.[169]

Murray returned to form at the Monte-Carlo Rolex Masters, but lost to Nadal in the semi-finals.[170] He had sustained an elbow injury before the match and subsequently withdrew from the 2011 Barcelona Open Banco Sabadell.[171] Murray lost in the third round of the Mutua Madrileña Madrid Open, but made it to the semi-finals of the Rome Masters, where he lost to Novak Djokovic.[172]

At the French Open, Murray won two tough early matches, before losing his first semi-final at Roland Garros to Rafael Nadal.[173][174][175]

Murray defeated Jo-Wilfried Tsonga to win his second Queen's Club title.[176] At Wimbledon, he lost the semi-final to Nadal, despite taking the first set.[177] In the Davis Cup tie between Great Britain and Luxembourg, Murray led the British team to victory.[178] Murray was the two-time defending champion at the 2011 Rogers Cup, but lost in the second round to the South African Kevin Anderson.[179] The following week, however, he won the 2011 Western & Southern Open after Novak Djokovic retired with an injury.[180] At the 2011 US Open, Murray battled back from two sets down to win a five-set second-round encounter with Robin Haase, but lost in the semi-finals to Rafael Nadal in four sets.[181] This was the first time in his career that Murray had reached the quarter-finals or better at all four slams in a calendar year.

Murray comfortably won the 250-level Thailand Open, and the following week he took his third title in four tournaments at the Rakuten Japan Open Tennis Championships, beating Rafael Nadal in the final in three sets for his first win over the Spaniard that year. Murray also won the doubles with his brother Jamie, becoming the first player in the 2011 season to capture both the singles and doubles titles at the same tournament. He then successfully defended his Shanghai Masters crown with a straight-sets victory over David Ferrer in the final. At the ATP World Tour Finals, Murray lost to David Ferrer in straight sets and withdrew from the tournament after the loss with a groin pull. He ended the year ranked No. 4, behind Djokovic, Nadal, and Federer, for the fourth consecutive year.

With Ivan Lendl as his new full-time coach,[182] Murray began the season at the 2012 Brisbane International. He overcame a slow start in his first two matches to win his 22nd title, beating Alexandr Dolgopolov in the final.[183] In the doubles, he lost a tight quarter-final against the second seeds Jürgen Melzer and Philipp Petzschner.[184] After an exhibition tournament,[185] Murray made it to the semi-finals of the 2012 Australian Open, where he was defeated by Djokovic in a match lasting four hours and 50 minutes.[186]

At the Dubai Duty Free Tennis Championships, Murray defeated Djokovic in the semi-finals, but lost the final to Roger Federer.[187] After an early defeat at the BNP Paribas Open, Murray made the final of the Miami Masters, losing to Djokovic.[188] He then suffered quarter-final losses at the Monte Carlo Masters and the Barcelona Open, and a third-round loss at the Italian Open.[189][190][191] Murray battled back spasms throughout the French Open, and in the quarter-finals he was beaten by David Ferrer.[192]

Murray lost in the opening round of the Queen's Club Championships to No. 65 Nicolas Mahut.[193] At Wimbledon, he set the then record for the latest finish at the Championships when he completed a four-set victory over Marcos Baghdatis at 23:02 BST, a record eclipsed in the 2018 Wimbledon men's singles semi-finals, when play was completed at 23:03 BST.[194] Murray beat Jo-Wilfried Tsonga in the semi-final in four sets to become the first male British player to reach the final of Wimbledon since Bunny Austin in 1938.[195] In the final, he faced Federer, but after taking the first set he lost the match in four.[196]

Murray returned to Wimbledon within weeks, this time to compete in the London 2012 Summer Olympics in singles, doubles, and mixed doubles. He partnered his brother Jamie in the doubles, suffering a first-round exit at the hands of the Austrians Jürgen Melzer and Alexander Peya in three sets.[197] In the mixed doubles, Murray partnered Laura Robson; they made it all the way to the final, where they lost to the Belarusian top seeds Victoria Azarenka and Max Mirnyi in three sets, settling for the silver medal. In the singles, Murray lost only one set on his way to the final, where he met Federer and defeated him in straight sets for the loss of just seven games.[188] By winning the Olympic gold medal, Murray became the first British man to win the Olympic singles gold in tennis since Josiah Ritchie in 1908, and only the seventh man in the Open Era to win two medals at the same Olympic Games.[198]
Murray retired early from the Rogers Cup with a knee injury, and was beaten in straight sets by the unseeded Jérémy Chardy at the Cincinnati Masters.

He next competed in the final major of the season, the US Open. He cruised through his opening two rounds in straight sets against Alex Bogomolov and Ivan Dodig, before surviving a tough four-set battle with Feliciano López in which he had to win three tie-breakers. In the fourth round, he defeated the Canadian Milos Raonic in straight sets, and in the quarter-finals he came back from a set and two breaks down against Marin Čilić to prevail in four. In the semi-finals, he defeated Tomáš Berdych in a hard-fought match lasting almost four hours, to reach his second consecutive Grand Slam final. Murray defeated Djokovic in five sets, becoming the first British man to win a Grand Slam final since Fred Perry in 1936,[199] and the first Scottish-born player to win one since Harold Mahony in 1896.[200] The win also set several records: it involved the longest tie-break in US Open final history, at 12–10 in the first set; it made Murray the first man ever to win an Olympic gold medal and the US Open in the same year; and it tied with the 1988 US Open final (in which Murray's coach Lendl competed) as the longest final in the tournament's history.[201] By defeating Djokovic, Murray also achieved the 100th Grand Slam match win of his career. The victory made Murray part of the "Big Four" according to many pundits and contemporaries, including Novak Djokovic.[202][203]

In his first tournament after the US Open, Murray reached the semi-finals of the Rakuten Japan Open as defending champion, where he was beaten by Milos Raonic in a close three-set match. He was also the defending champion in the doubles with his brother Jamie, but they were knocked out in the quarter-finals by the top seeds Leander Paes and Radek Štěpánek. At the penultimate Masters 1000 tournament of the year, in Shanghai, Murray received a bye into round two, where he was due to play Florian Mayer; Mayer pulled out injured, giving Murray a walkover into round three. After beating Alexandr Dolgopolov there, he overcame Radek Štěpánek in a three-set quarter-final and then defeated Roger Federer in straight sets in the semi-finals, setting up a second consecutive final against Djokovic and his third consecutive Shanghai final. After failing to capitalise on five match points, Murray eventually lost in three sets, ending his 12–0 winning streak at the event.[204][205]
When Nadal pulled out of both the Paris Masters and the year-end championships,[206] Murray finished the year at No. 3, after four years at No. 4, the first time he had finished the year higher than No. 4. At the BBC Sports Personality of the Year awards, Murray was voted third overall, ahead of Mo Farah.[207] He also won World Breakthrough of the Year at the Laureus World Sports Awards.[208]

Murray was appointed Officer of the Order of the British Empire (OBE) in the 2013 New Year Honours for services to tennis.[209][210][211]

Murray began his 2013 season by retaining his Brisbane International title, defeating Grigor Dimitrov in the final in straight sets.[212] Seeking his second major in a row, he began the 2013 Australian Open with a straight-sets victory over the Dutchman Robin Haase, following it up with straight-set wins over João Sousa, practice partner Ričardas Berankis, and the French No. 14 seed Gilles Simon. In the quarter-finals he cruised past Jérémy Chardy in straight sets to set up a semi-final clash with Roger Federer. After exchanging sets, Murray eventually prevailed in five, recording his first Grand Slam victory over Federer. With that win, each member of the ATP's most dominant quartet of the previous four years (Federer, Nadal, Djokovic and Murray) had beaten the other three at the majors.[213] The victory set up Murray's third consecutive major final appearance, and his second in a row against Djokovic. After taking the first set in a tie-break, Murray was eventually defeated in four sets.[214] The defeat made him only the second man in the Open Era to finish runner-up three times at the Australian Open, the other being Stefan Edberg.

At the BNP Paribas Open in Indian Wells, Murray lost at the quarter-final stage to Juan Martín del Potro in three sets.[215] At the Miami Masters, he made it through his first four matches without dropping a set, and after overcoming Richard Gasquet in the semi-finals, faced David Ferrer in the final. After losing the first set, and facing match point in the decider at 5–6, Murray took the match in a third-set tie-breaker to win his second Miami Masters title and leapfrog Roger Federer into second place in the rankings, ending a near decade-long period in which either Federer or Rafael Nadal had been ranked in the top two.[216] Murray briefly fell back to No. 3 following a third-round defeat by Stanislas Wawrinka in Monte-Carlo, but reclaimed the No. 2 ranking when Federer failed to defend his title at the Mutua Madrid Open, where Murray himself lost at the quarter-final stage to Tomáš Berdych in straight sets.[217]

At the Rome Masters, Murray retired with a hip injury during his second-round match against Marcel Granollers, on his 26th birthday, having just battled back to tie the match at one set all by winning the second set on a tie-break. This left him with only eleven days to get fit for the start of the French Open.[218]

Speaking at a press conference after the match, Murray said, "As it is, I'd be very surprised if I was playing in Paris. I need to make a plan as to what I do. I'll chat with the guys tonight and make a plan for the next few days then make a decision on Paris after the next five days."[219] He later withdrew from Roland Garros, citing a back injury.[220] After a four-week injury break, Murray made his comeback at the 2013 Aegon Championships as the top seed. After a rain-delayed first day, he had to complete his second-round match against Nicolas Mahut and his subsequent match against Marinko Matosevic on the same day, winning both in straight sets. After beating Benjamin Becker in the quarter-finals, Murray faced his first top-ten opponent since the loss to Tomáš Berdych in Madrid, taking on Jo-Wilfried Tsonga in the semi-finals. After dropping the first set against the Frenchman, Murray raised his level and won in three, setting up a final against Marin Čilić of Croatia, his third consecutive final on grass. He came from behind again to beat Čilić in three sets and claim his third title at Queen's Club.[221]

Going into Wimbledon, Murray had not lost on grass since the previous year's final, a winning streak of 11 matches. In the first two rounds, he defeated Benjamin Becker[222] and Yen-hsun Lu[223] in straight sets. His third-round match was against the 32nd seed Tommy Robredo, and despite the Spaniard's resurgence over the previous year, Murray beat him in straight sets to set up a clash with Mikhail Youzhny, the highest seed left in Murray's half following the unexpectedly early exits of Roger Federer and Rafael Nadal.[224] Despite facing a fightback in the second set, Murray won in straight sets to reach his tenth consecutive Grand Slam quarter-final,[225] in which he played Fernando Verdasco, the first left-hander he had faced since the 2012 US Open. For the seventh time in his career, Murray came back from two sets down to win in five,[226] setting up a semi-final clash with the 24th seed Jerzy Janowicz, the Pole who had beaten Murray in their previous encounter. After Murray failed to break Janowicz's serve, the Pole took the opening set in a tie-break, following a double fault from Murray. Murray then raised his level of play and won the next three sets, reaching his second consecutive Wimbledon final, and his third consecutive major final against Novak Djokovic.[227]

Although the Serb was the favourite for the title throughout the Championships, Murray overcame Djokovic in a straight-sets match that lasted over three hours, becoming the first British winner of the men's singles title since Fred Perry in 1936 and the first Scot of either sex to win a Wimbledon singles title since Harold Mahony in 1896, and extending his winning streak on grass to 18 matches.[228]

At the US Open, Murray entered a Grand Slam tournament as defending champion for the first time, and started strongly with a straight-sets win against Michaël Llodra. He backed this up with wins over Leonardo Mayer, Florian Mayer and Denis Istomin to reach the quarter-finals of a major for the 11th straight tournament. In the last eight, Murray faced Stanislas Wawrinka of Switzerland, but lost in straight sets, ending his streak of four consecutive major finals.[229] Following this disappointing run of form on hard courts, Murray joined the Great Britain Davis Cup team for their World Group play-off tie on clay against Croatia, playing two singles rubbers and the doubles. After defeating the 16-year-old Borna Ćorić in straight sets, Murray teamed up with Colin Fleming to defeat the Croatian No. 1 Ivan Dodig and Mate Pavić in the doubles and take a 2–1 lead in the tie. Murray then sealed Britain's return to the World Group by beating Dodig in straight sets.[230]

Following the Davis Cup, Murray's season was cut short by his decision to undergo surgery to address the lower back problems that had troubled him since the early stages of the previous season. The injury had forced him to withdraw from the French Open in May and flared up again during the US Open and the Davis Cup World Group play-offs, and Murray decided that surgery was the best long-term solution.[231] Following the conclusion of the 2013 season, Murray was voted the 2013 BBC Sports Personality of the Year, having been the heavy favourite since the nominees were announced.[232]

Murray started his season at the Qatar Open in Doha. In the first round, he defeated Mousa Shanan Zayed in straight sets in 37 minutes without dropping a game, but he was beaten in three sets by No. 40 Florian Mayer in the second round, despite being a set and a break up three games into the second set.[233] He then played a warm-up match at the 2014 AAMI Classic in Kooyong against No. 43 Lleyton Hewitt, losing in two close tie-breaks.

He next headed to Melbourne for the 2014 Australian Open, where he drew No. 112 Go Soeda of Japan. Despite worries that he was not match-fit, Murray got off to a strong start, dispatching the Japanese No. 2 in under 90 minutes while losing just five games. He went on to defeat Vincent Millot and Feliciano López in straight sets. In the fourth round, Murray dropped his first set of the tournament on the way to beating Stéphane Robert in four sets and setting up a quarter-final meeting with his long-standing rival Roger Federer. Despite saving two match points to take the third set, he ultimately went out in four, ending his streak of four consecutive Australian Open semi-finals.[234] As a result of losing before the final, Murray fell to No. 6, dropping out of the top five for the first time since 2008.

He next headed to the United States for Great Britain's Davis Cup World Group first-round tie, which Britain entered as outsiders. Murray won both of his singles rubbers, against Donald Young and Sam Querrey, helping Britain to their first Davis Cup quarter-final since 1986.[235] Murray's next tournament was the Rotterdam Open, after receiving a late wild card, but he lost to Marin Čilić in straight sets in the quarter-finals. His following event, the Mexican Open in Acapulco, ended in a semi-final defeat by Grigor Dimitrov in a thrilling three-setter that needed two tie-breakers to decide the final two sets.

At Indian Wells, Murray struggled in his first two matches against Lukáš Rosol and Jiří Veselý, coming through both in close three-set encounters to set up a fourth-round clash with the Canadian Milos Raonic, which he lost in three sets. Murray offered to partner the 2012 Wimbledon doubles champion Jonathan Marray, who had been unable to convince anyone to join him on court.[236] In their first competitive match together, they won a doubles clash against Gaël Monfils and Juan Mónaco, only to lose in the second round to the No. 2 seeds Alexander Peya and Bruno Soares.

In March, Murray split with coach Ivan Lendl, who had been widely praised for helping Murray achieve his goal of winning Grand Slam titles.[237] At the 2014 Miami Masters, Murray defeated Matthew Ebden, Feliciano López and Jo-Wilfried Tsonga but lost to Djokovic in the quarter-finals.[238] In the Davis Cup quarter-final against Italy, he beat Andreas Seppi in his first rubber, then teamed up with Colin Fleming to win the doubles rubber. Murray had beaten only one top-ten player on clay, Nikolay Davydenko back in 2009, and in his final singles match he was stunned in straight sets by Fabio Fognini, taking the tie to a deciding final rubber.[239] In that match his compatriot James Ward was defeated by Andreas Seppi, also in straight sets, knocking Murray and Great Britain out of the Davis Cup.[239]

Murray next competed at the Madrid Open and, following his opening win over Nicolás Almagro, dedicated the victory to the former player Elena Baltacha.[240][241] He then lost to the qualifier Santiago Giraldo in the following round. Murray reached the quarter-finals of the Rome Masters, where he lost a tight match to No. 1 Rafael Nadal in which he had been a break up in the final set.[242] At the French Open, Murray defeated Andrey Golubev and Marinko Matosevic before edging out the 28th seed Philipp Kohlschreiber 12–10 in the final set, the first time he had ever gone beyond 7–5 in a deciding set.[243] He followed this up with a straight-sets win over Fernando Verdasco and then a five-set victory over the Frenchman Gaël Monfils in the quarter-final, which saw Murray rise to No. 5 and equal his best French Open result by reaching the semi-finals. He subsequently lost to Nadal in straight sets, winning only six games in the match.[244] After the defeat, Murray appointed the former women's world No. 1 and two-time Grand Slam champion Amélie Mauresmo as his coach,[245] a "historic move" that made Mauresmo the first woman to coach a top male tennis player.[246]

After strong grass-court seasons in 2012 and 2013, Murray was seeded third for the 2014 Wimbledon Championships, behind Novak Djokovic and Rafael Nadal.[247] He began his title defence with straight-sets wins over David Goffin[248] and Blaž Rola, defeating the latter for the loss of just two games.[249] Murray continued his good form, defeating Roberto Bautista Agut[250] and Kevin Anderson,[251] the 27th and 20th seeds, again in straight sets, to reach his seventh consecutive Wimbledon quarter-final. His defence then came to a halt as Grigor Dimitrov ended his 17-match winning streak on the grass of Wimbledon (including the 2012 Olympics) with a straight-sets win, meaning Murray failed to reach the semi-finals for the first time since 2008.[252] After his defeat at the Championships, Murray dropped to No. 10, his lowest ranking since 2008.[253]

Prior to the North American hard-court swing, Murray announced that he was extending his partnership with Amélie Mauresmo until the end of the US Open, while ideally looking for a long-term arrangement.[254] He also revealed that he had only just returned to a full training schedule following his back surgery the previous September.[255] Murray reached back-to-back quarter-finals at the Canadian Open and the Cincinnati Masters, losing to the eventual champions Jo-Wilfried Tsonga,[256] after being a break up in the decider,[257] and Roger Federer, after being two breaks up in the second set.[258] He made the quarter-finals of the 2014 US Open, earning his first top-ten win of the year against Jo-Wilfried Tsonga in the fourth round before losing to Novak Djokovic.[259] This was the first season since 2009 in which Murray failed to reach a Grand Slam final, and as a consequence he fell outside the top ten in the rankings for the first time since June 2008.[260]

Murray took a wild card into the inaugural Shenzhen Open in China, entering as the No. 2 seed. Victories over Somdev Devvarman, Lukáš Lacko and Juan Mónaco saw Murray reach his first final of the season, breaking a drought of 14 months since his Wimbledon title. In the final he faced Tommy Robredo of Spain, their second meeting in a final. After saving five championship points in the second-set tie-break, Murray went on to win the title in three sets, Robredo's drop in fitness ultimately proving decisive.[261] He took his good form into Beijing, where he reached the semi-finals before losing to Djokovic in straight sets,[262] but he lost in the third round of the Shanghai Masters to David Ferrer despite being a set up.[263] Following his early exit in Shanghai, Murray took a wild card into the Vienna Open in an attempt to claim a place at the ATP World Tour Finals. He reached the final, where he once again faced Ferrer, and triumphed in three sets for his second title of the season and the 30th of his career.[264] Murray defeated Ferrer again in the semi-finals of the Valencia Open to move into his third final in five weeks and further strengthen his bid for a place at the season finale in London.[265] In a repeat of the Shenzhen Open final, Murray again saved five championship points as he overcame Tommy Robredo in three sets.[266] Murray then reached the quarter-finals of the Paris Masters, where he was eliminated by Djokovic in what was his 23rd match in the space of only 37 days;[267] his win over Dimitrov in the third round had already guaranteed him a spot at the ATP World Tour Finals.[268]

+ At the ATP World Tour Finals, Murray lost his opening round robin match to Kei Nishikori[269] but won his second match against Milos Raonic.[270] However, he lost his final group match against Federer in straight sets and only managed to win one game against him, marking his worst defeat since losing to Djokovic in the 2007 Miami Masters, eliminating him from the tournament.[271]
173
+
174
+ Following the conclusion of the season, Murray mutually agreed a split with long-term backroom staff, training partner Dani Vallverdu and fitness coach Jez Green. They had been with him for five and seven years respectively but were both reported to have been unhappy at the lack of consultation they had been given about the appointment of Mauresmo.[272] Murray also took part in the inaugural season of the International Premier Tennis League, representing the Manila Mavericks, who had drafted him as an icon player in February.[273] Murray took part in the first three matches of the tournament which were all played in Manila.[274]
+
+ Murray began his year by winning an exhibition event in Abu Dhabi.[275] He then played the Hopman Cup with Heather Watson and, despite winning all his singles matches in straight sets, they finished second in their group behind Poland.[276]
+
+ His first competitive tournament of the year was the Australian Open. He won his opening three matches in straight sets before defeating 11th seed Grigor Dimitrov to reach the quarter-finals.[277] Wins over Nick Kyrgios[278] and Tomáš Berdych followed as Murray reached his fourth final at the tournament (three of which were against Djokovic) and the eighth Grand Slam final of his career.[279] He lost the final to Novak Djokovic in four sets,[280] but his run to the final saw him return to the top four of the world rankings for the first time in 12 months.[281]
+
+ Murray next participated in the Rotterdam Open as the top seed, but he lost in the quarter-finals to Gilles Simon, who ended a 12-match losing streak against Murray.[282] Murray then played in the Dubai Championships but suffered another quarter-final defeat, this time to 18-year-old Borna Ćorić, and as a result slipped to No. 5 behind Rafael Nadal and Kei Nishikori.[283][284] Afterwards, Murray played the Davis Cup World Group tie in Glasgow against the United States. He won both his matches, against Donald Young and John Isner, as Great Britain progressed to the quarter-finals for the second consecutive time with a 3–2 victory over the United States.[285]
+
+ Murray then reached the semi-finals of the 2015 Indian Wells Masters, overtaking Tim Henman's record of 496 career wins to claim the most career wins by a British man in the Open Era.[286] However, he suffered a sixth consecutive defeat to Djokovic, in straight sets.[287] Murray then reached the final of the 2015 Miami Open, recording his 500th career win along the way to become the first British player with 500 or more wins in the Open Era.[288] He went on to lose the final to Djokovic, this time in three sets.[289] Murray added Jonas Björkman to his coaching staff in March, initially on a five-week trial, to help out in periods when Mauresmo was unavailable, as she had only agreed to work with him for 25 weeks.[290] However, at the end of the Australian Open, Mauresmo had informed Murray that she was pregnant, and he announced at the end of April that Björkman would be his main coach for all of the grass court season and the US hard court swing, while Mauresmo would only be with the team for Wimbledon.[291]
+
+ Murray won his first ATP clay court title at the 2015 BMW Open, defeating Germany's Philipp Kohlschreiber in three close sets to become the first Briton since Buster Mottram in 1976 to win a tour-level clay court event.[292][293] The following week he reached his second final on clay, at the Madrid Open, after recording only his second and third victories over top-10 opposition on clay, against Raonic and Nishikori.[294][295] In the final, he defeated Rafael Nadal in straight sets for his first Madrid title on clay and his first ever clay court Masters 1000 title. The win was Murray's first over Nadal, Federer or Djokovic since Wimbledon 2013, and his first over Nadal on a clay court.[296][297]
+
+ Murray continued his winning streak at the Italian Open, beating Jérémy Chardy in straight sets in his opening match, but then withdrew due to fatigue, having played nine matches in the space of 10 days. Murray then reached his third semi-final at the French Open, but lost to Djokovic in five sets after threatening a comeback from two sets to love down, ending his 15-match winning streak on clay.[298]
+
+ To start his grass court campaign, Murray won a record-tying fourth Queen's Club title, defeating the big-serving South African Kevin Anderson in straight sets in the final.[299] At the third Grand Slam of the year, the 2015 Wimbledon Championships, Murray dropped only two sets on his way to a semi-final clash with Roger Federer. Murray lost to the Swiss veteran in straight sets, earning only one break point in the entire match.[300]
+
+ After Wimbledon, Murray returned to Queen's Club to play for Great Britain against France in their Davis Cup quarter-final tie. Great Britain went 1–0 down when James Ward lost to Gilles Simon in straight sets, but Murray levelled the tie with a victory over Jo-Wilfried Tsonga. Murray then teamed up with his brother Jamie to win the doubles rubber, coming back from a set down to defeat Tsonga and Nicolas Mahut in four sets and give Britain a crucial 2–1 lead going into the final day. He then faced Simon in the fourth rubber; after initially being a set and a break down, he found his form again towards the end of the second set and eventually won in four sets, taking 12 of the last 15 games in the process (with Simon struggling with an ankle injury). The resulting 3–1 win over France saw Great Britain reach their first Davis Cup semi-final since 1981.[301]
+
+ Murray next participated at the Citi Open (for the first time since 2006) as the top seed and favourite to win the tournament. However, he lost his first match to No. 53 Teymuraz Gabashvili in a final-set tie-break, despite serving for the match.[302] In doubles, he partnered Daniel Nestor, but they lost in the first round to the fourth seeds, Rohan Bopanna and Florin Mergea, also in three sets.[303]
+
+ He bounced back from this defeat by winning the Rogers Cup in Montreal, defeating Tsonga and Nishikori in the quarter-finals and semi-finals respectively before prevailing in the final against Djokovic in three sets. This broke his eight-match, two-year losing streak against Djokovic, his last win over him having come in the 2013 Wimbledon final. In winning the title he also surpassed Federer in the rankings, becoming world No. 2 for the first time in over two years. In doubles, he partnered Leander Paes; they won their first match against Chardy and Anderson, but were then defeated by Murray's brother Jamie and John Peers in two sets – the first time the Murray brothers had competed against each other in a tour-level match, a situation which Andy described as "awkward" and Jamie as "a bit weird".[304]
+
+ In the second Masters tournament of the US hard court season, the Cincinnati Masters, Murray defeated veteran Mardy Fish in the second round, then beat both Grigor Dimitrov and Richard Gasquet in three-set matches, coming from a set down on both occasions; Dimitrov had even served for the match in the deciding set. In the semi-finals, he lost to defending champion Roger Federer in straight sets, and after Federer went on to win the tournament, this result saw Murray return to the No. 3 ranking and seeding for the US Open. At the US Open, Murray beat Nick Kyrgios in four sets before beating Adrian Mannarino in five sets after being two sets down, equalling Federer's mark of winning eight matches from two sets to love down. He then beat Thomaz Bellucci in straight sets but lost in the fourth round to Kevin Anderson in four sets. This ended Murray's five-year run of 18 consecutive Grand Slam quarter-finals (not counting his withdrawal from the 2013 French Open), dating back to his third-round loss to Stan Wawrinka at the 2010 US Open.[305]
+
+ Playing against Australia in the Davis Cup World Group semi-final in Glasgow, Murray won both his singles rubbers in straight sets, against Thanasi Kokkinakis and Bernard Tomic.[306] He also partnered his brother Jamie to a five-set win over the pairing of Sam Groth and Lleyton Hewitt, the results guiding Great Britain to a 3–2 victory over Australia and their first Davis Cup final since 1978.[307]
+
+ After losing in the semi-finals of the Shanghai Masters to Djokovic in straight sets, Murray reached the final of the Paris Masters for the loss of just one set, with victories over Borna Ćorić, David Goffin and David Ferrer. A three-set win over Richard Gasquet saw him join Novak Djokovic, Roger Federer and Rafael Nadal as the only players to reach the semi-finals (or better) at all nine ATP World Tour Masters 1000 tournaments, and also ensured that he compiled his best match record in a single season.[308] He then lost the final to Djokovic, again in straight sets.
+
+ As the world No. 2, Murray participated in the ATP World Tour Finals in London and was drawn into the Ilie Năstase group with David Ferrer, Rafael Nadal and Stan Wawrinka. He went out in the round-robin stage after defeating Ferrer but losing to Nadal and Wawrinka.[309] However, with Federer failing to win the tournament, Murray finished the season ranked No. 2 for the first time.[310]
+
+ In the Davis Cup final, played on indoor clay in Ghent, Murray's straight-sets victory over Ruben Bemelmans pulled Great Britain level after Kyle Edmund had lost the opening singles rubber in five sets. He then partnered his brother Jamie in a four-set victory over the pairing of Steve Darcis and David Goffin, before defeating Goffin again in the reverse singles on the Sunday, sealing a 3–1 victory for Great Britain, their first Davis Cup title since 1936 and their tenth overall.[311] Murray also became only the third player, after John McEnroe and Mats Wilander, to win all eight of his singles rubbers in a Davis Cup season since the current format was introduced.[312]
+
+ Murray began his 2016 season by playing in the Hopman Cup, pairing up with Heather Watson again. However, they finished second in their group after losing their tie to eventual champions Nick Kyrgios and Daria Gavrilova from Australia.[313]
+
+ Murray played his first competitive tournament of 2016 at the Australian Open, where he was aiming to win the title after four runner-up finishes. He reached his fifth Australian Open final with victories over Alexander Zverev, Sam Groth, João Sousa, Bernard Tomic, David Ferrer and Milos Raonic, dropping four sets along the way. However, in a rematch of the previous year's final, he lost in straight sets to an in-form Novak Djokovic, who won a record-equalling sixth title.[314] He became the second man in the Open Era (after Ivan Lendl) to lose five Grand Slam finals at one event, and the only one not to have won that title. Subsequently, in February, Murray appointed Jamie Delgado as an assistant coach.[315]
+
+ Murray then played in the 2016 Davis Cup, defeating Taro Daniel in straight sets and Kei Nishikori in five sets. He next competed in the first Masters 1000 event of the year, the 2016 Indian Wells Masters, where he defeated Marcel Granollers in the second round in straight sets but suffered an early loss to Federico Delbonis in the third round. Murray then played at the 2016 Miami Open as the 2nd seed, beating Denis Istomin in the second round in straight sets before another early loss, this time to 26th seed Grigor Dimitrov, despite taking the first set.[316]
+
+ Murray began his clay court season at the 2016 Monte-Carlo Rolex Masters as the 2nd seed. He struggled in his second round match against Pierre-Hugues Herbert but came through in three sets. He struggled again in his third round match against 16th seed Benoît Paire, falling a set and two breaks down; Paire even served for the match in the third set, but Murray again prevailed in three sets. He then defeated 10th seed Milos Raonic in straight sets in the quarter-finals before losing in the semi-finals to 5th seed and eventual champion Rafael Nadal, despite winning the first set. Murray then played at the 2016 Mutua Madrid Open as the 2nd seed and defending champion. He defeated qualifier Radek Štěpánek in three sets, then proceeded to the semi-finals with straight-sets wins over 16th seed Gilles Simon and 8th seed Tomáš Berdych. In the semi-finals Murray defeated Nadal, to whom he had lost earlier in the year, in straight sets. In the final he lost to number 1 seed Novak Djokovic in three sets, a defeat that dropped him from second to third in the ATP rankings. Shortly afterwards, Mauresmo and Murray issued a joint statement announcing that they had "mutually agreed" to end their coaching partnership.[317]
+
+ Murray regained his number two ranking after winning the 2016 Internazionali BNL d'Italia for his first title of the season and 36th overall, defeating Mikhail Kukushkin, Jérémy Chardy, 12th seed David Goffin, Lucas Pouille and number 1 seed Djokovic, all in straight sets. This was his first win over Djokovic on clay, and he became the first British player to win the title since Virginia Wade in 1971, and the first British man since George Patrick Hughes in 1931.[318] Murray then moved on to the French Open, where he struggled in the opening rounds, coming through two five-set matches against Štěpánek and French wildcard Mathias Bourgue. He then beat big servers Ivo Karlović and John Isner in straight sets to reach the quarter-finals, where he defeated home favourite Richard Gasquet in four sets to set up a semi-final clash with defending champion Stan Wawrinka. Murray beat Wawrinka in four sets to become the first British man since Bunny Austin in 1937 to reach a French Open final.[319] He was unable to win his maiden French Open title, losing the final to Djokovic in four sets.
+
+ In June 2016, Ivan Lendl agreed to return to his former role as Murray's coach.[320] Murray started his grass season at the 2016 Aegon Championships as the 1st seed and defending champion. He defeated Nicolas Mahut in straight sets despite facing a set point in the first set and three set points in the second, then beat his countryman Aljaž Bedene in straight sets. Three-set wins over another countryman, Kyle Edmund, and No. 5 seed Marin Čilić followed. In the final he fell a set and a break down to 3rd seed Milos Raonic, but came back to win a record fifth Queen's Club Championships title, his second title of 2016. Murray then played the third major of the year, the 2016 Wimbledon Championships, as the 2nd seed. He had straight-sets wins over Liam Broady, Lu Yen-hsun, John Millman and Nick Kyrgios in the first four rounds.[321] Murray then defeated 12th seed Jo-Wilfried Tsonga in five sets in the quarter-final[322] and 10th seed Tomáš Berdych in straight sets to reach his third straight major final. In the final on 10 July, Murray defeated Raonic in straight sets to win his second Wimbledon title and third major title overall,[323] his third title of the season and 38th career tour title.
+
+ Murray next played at the Rio Olympic Games, where he became the first player, male or female, to win two gold medals in the tennis singles events, defeating Juan Martín del Potro in a final which lasted over four hours.[324] The win was his third consecutive title and fourth of the season. Murray then entered the US Open and beat Lukáš Rosol, Marcel Granollers, Paolo Lorenzi and Grigor Dimitrov in the first four rounds. However, his run came to an end when he lost to sixth seed Kei Nishikori in five sets, despite holding a two sets to one lead.
+
+ His next activity was the 2016 Davis Cup semi-final in Glasgow against Argentina. He lost the opening rubber against Juan Martín del Potro in five sets.[325] After Great Britain lost the second rubber as well, he teamed up with his brother Jamie to beat del Potro and Leonardo Mayer in the third rubber in four sets.[326] He then won the fourth rubber against Guido Pella in straight sets,[327] though Great Britain eventually lost the tie.[328] Murray then won the China Open for his fifth title of 2016 and 40th career tour title, defeating Andreas Seppi, Andrey Kuznetsov, Kyle Edmund, David Ferrer and Grigor Dimitrov, all in straight sets. He backed this up with a tournament win at the Shanghai Rolex Masters, beating Steve Johnson, Lucas Pouille, David Goffin, Gilles Simon and Roberto Bautista Agut, all in straight sets, to capture his 13th Masters title and third title in Shanghai. This marked his sixth title of 2016 and drew him level with former No. 1 Stefan Edberg at No. 15 on the Open Era titles list, with 41 tour titles each.
+
+ Murray extended his streak to 15 consecutive match wins by taking the Erste Bank Open for his seventh tour title of the 2016 season. His tournament started slowly, with three-set wins over Martin Kližan and Gilles Simon in the first two rounds, but a decisive win over John Isner in the quarter-finals and a walkover following David Ferrer's withdrawal with a leg injury saw Murray reach the final.[329] There he defeated Jo-Wilfried Tsonga for his third title in succession.[330] The result saw Murray win seven titles in a single season for the first time in his career, and move to outright 15th on the all-time list of singles titles in the Open Era, breaking his tie with former world No. 1 Stefan Edberg.[331]
+
+ Murray entered the Paris Masters knowing that, if Djokovic did not reach the final, winning the title would be enough to see him crowned world No. 1 for the first time. After reaching the quarter-finals courtesy of wins over Fernando Verdasco and Lucas Pouille, Murray beat Berdych in straight sets for a place in the semi-finals. Meanwhile, Djokovic lost to Marin Čilić, meaning that Murray would replace Djokovic at the top of the rankings if he reached the final. He was due to face Milos Raonic in the semi-finals, but Raonic withdrew before the start of the match, giving Murray a walkover. As a result, Murray became the first British man to reach No. 1 since the introduction of the rankings in 1973.[332] Murray then defeated John Isner in the final in three sets to win his fourth consecutive tournament and first Paris Masters title.[333] In November 2016, Murray reached the final of the ATP World Tour Finals for the first time and beat Novak Djokovic in two sets to secure the year-end No. 1 ranking,[334] in doing so becoming the first player to win a Grand Slam, the ATP World Tour Finals, the men's singles at the Olympic Games and a Masters 1000 title in the same calendar year. The International Tennis Federation recognised Murray as its 2016 men's world champion, the first time he had achieved this honour.
+
+ Murray was knighted in the 2017 New Year Honours for services to tennis and charity.[335] He opened the season with a loss to David Goffin in the semi-finals of the Mubadala World Tennis Championship, following which he beat Milos Raonic in the third-place play-off.[336][337] Murray then reached the final of the Qatar Open, but lost to Novak Djokovic in three sets despite saving three championship points.[338][339] At the Australian Open he lost in the fourth round to Mischa Zverev in four sets.[340]
+
+ Murray returned to action at the Dubai Duty Free Tennis Championships in February, where he won his only tournament of the year, beating Fernando Verdasco in straight sets,[341] despite almost losing in the quarter-finals to Philipp Kohlschreiber, against whom he had to save seven match points.[342] The next week, he suffered a shock defeat in the second round of the Indian Wells Masters to Vasek Pospisil.[343]
+
+ After missing a month due to an elbow injury, Murray returned to compete in the Monte-Carlo Masters in April, losing in the third round to Albert Ramos-Viñolas.[344] He then competed in Barcelona, where he was beaten by Dominic Thiem in the semi-finals.[345] Murray continued to struggle in his next two tournaments, losing to Borna Ćorić in the third round in Madrid[346] and to Fabio Fognini in the second round in Rome, where he was the defending champion;[347] in both of these defeats he failed to win a set. At the 2017 French Open, following tough four-set victories over Andrey Kuznetsov and Martin Kližan in the opening rounds,[348][349] Murray defeated Juan Martín del Potro and Karen Khachanov in straight sets.[350][351] In the quarter-finals he defeated Kei Nishikori in four sets,[352] but lost in the semi-finals to Stan Wawrinka in five sets.[353]
+
+ As the five-time champion at Queen's, Murray pledged his prize money to the victims of the Grenfell Tower fire,[354] but he was defeated in straight sets by Jordan Thompson in the first round.[355] Despite concerns over a lingering hip injury, he returned to Wimbledon as the defending champion and progressed to the third round with straight-sets wins over Alexander Bublik and Dustin Brown.[356][357] He dropped his first set of the tournament to Fabio Fognini but proceeded to the fourth round in four sets,[358] then continued to the quarter-finals with a straight-sets victory over Benoît Paire.[359] However, he was defeated in the quarter-finals by Sam Querrey in five sets.[360]
+
+ Murray missed the Canadian Open and the Cincinnati Masters due to his hip injury, which led to him losing his No. 1 ranking to Rafael Nadal.[361][362] The injury then forced him to withdraw from the 2017 US Open two days before the start of the tournament, the first Grand Slam tournament he had missed since the 2013 French Open.[363] Murray then withdrew from the Asian hard court swing and said it was "most likely" that he would not play in a professional tournament again in 2017.[364] Ultimately he did not play again, withdrawing from Paris, which left him unable to qualify for the 2017 ATP Finals; that November, as a result of his inactivity, his ranking fell sharply to No. 16, his lowest since May 2008.[365][366] Murray returned to the court for a charity match against Federer in Glasgow and expressed his hope of returning to the tour in Brisbane.[367] The following week, he and Ivan Lendl announced that they had mutually ended their coaching arrangement for a second time.[368]
+
+ Murray withdrew from the Brisbane International and the Australian Open due to his hip injury.[369] In a post on Instagram, Murray explained that rehabilitation was one option for recovery, adding that hip surgery was also an option but that its chances of a successful outcome were not as high.[370][371] On 8 January, Murray announced on Instagram that he had undergone hip surgery.[370][372]
+
+ In March, Murray lost his British No. 1 ranking to Kyle Edmund, for the first time since 2006.[373] Later that month, Murray said he was making progress after several days of playing at the Mouratoglou Academy in Nice, posting pictures on Instagram of himself practising against Aidan McHugh, a British junior player.[374][375] He then announced he would play his first ATP tournament since hip surgery at the Rosmalen Grass Court Championships in June,[376][377] although he later withdrew, saying he was not quite ready and wanted to be 100%.[378] He instead made his return at the Queen's Club Championships, where he lost to Nick Kyrgios in the first round in three sets.[379] He was given a wildcard for the Eastbourne International, where he beat Stan Wawrinka in the first round before losing to Kyle Edmund in the second.[380] He withdrew from Wimbledon with a "heavy heart" a day before the tournament, saying it was too soon to play five-set matches.[381] As a result of this withdrawal, he dropped to 839th in the ATP rankings, his lowest position since he first entered the rankings on 21 July 2003.[382]
+
+ He then entered the Washington Open, where he won his first round match against Mackenzie McDonald in three sets.[383] He next faced Kyle Edmund, who had dealt him his most recent defeat at Eastbourne, and overcame him in three sets. His following match, a dramatic three-set victory over Marius Copil in the third round, lasted until just past 3:00 am local time, and Murray wept after its conclusion, overcome with emotion. He then withdrew from the tournament, and from the Canadian Open the following week, to continue his recovery and focus on the Cincinnati Masters, for which he had been awarded a wildcard. There he lost in the first round to France's Lucas Pouille in three sets.[384]
+
+ Murray made his Grand Slam return at the US Open, where he defeated the Australian James Duckworth in four sets.[385] However, he was unable to progress further, losing in the second round to Spain's Fernando Verdasco in four sets.[386]
+
+ Murray then withdrew from Great Britain's Davis Cup tie against Uzbekistan in Glasgow to continue his rehabilitation from his injury.[387]
+
+ He entered the Shenzhen Open as a wildcard. He advanced to the second round after Zhizhen Zhang retired in the third set of their first round match.[388] There, he faced defending champion and top seed David Goffin, whom Murray upset in straight sets.[389] He then faced Fernando Verdasco in the quarter-finals, but was defeated in straight sets.[390] Murray had been due to play at the China Open the following week, but after suffering a slight ankle problem he decided to end his season early to ensure he would be fit for the following year.[391][392]
+
+ Murray travelled to Brisbane early in order to better prepare for the Brisbane International.[393] He won his first round match against James Duckworth in straight sets but admitted post-match that he did not know how long he would be able to play top-class tennis.[394] Murray was defeated in the next round by Daniil Medvedev, at that time ranked 16th in the world.[395]
+
+ On 11 January 2019, at a press conference just prior to the Australian Open, an emotional Murray announced that he might retire from professional tennis because he had been struggling physically for a "long time", particularly with his hip injury. He said that he had been suffering hip pain on a daily basis, and that it caused him to struggle with tasks like putting on his shoes and socks.[396] He spoke of the possibility of a second hip surgery, but expressed doubt that this would be a viable option to prolong his career, suggesting it would merely allow him to "have a better quality of life, and be out of pain".[397] He hoped to make it through to Wimbledon,[397][398] but said that the Australian Open could be his final tournament if he was not able to last until the summer, stating: "I'm not sure I can play through the pain for another four or five months".[396] Active and retired tennis players, including Juan Martín del Potro, Kyle Edmund, Billie Jean King and the other members of the 'Big Four', paid tribute to Murray upon his announcement.[399][400][401]
+
+ Murray entered the singles draw of the Australian Open, but lost his opening match to 22nd seed Roberto Bautista Agut in a four-hour, five-set 'epic'. At its conclusion, a video montage of tributes from other top players, including Roger Federer, Novak Djokovic, Sloane Stephens and Caroline Wozniacki, was played in recognition of his impending retirement.[402] In his post-match interview, he stated that he was considering a second hip surgery, and had not yet ruled out a return to the sport upon recovering from the operation.[403]
+
+ Bob Bryan urged Murray to have the "Birmingham hip" (BHR) operation he himself had undergone in August 2018, which involves a cobalt-chrome metal cap being placed over the femur with a matching metal cup in the acetabulum (a conservative, bone-saving alternative to a traditional total hip replacement). Bryan told Murray that the BHR would improve his quality of life and might help him return to the professional tennis tour.[404] On 29 January, Murray announced on Instagram that he had undergone hip resurfacing surgery in London and hoped that it would "be the end of my hip pain."[405] On 4 February, in an interview with The Times, Professor Derek McMinn, who invented the BHR implant and procedure, gave the opinion that Murray's chances of returning to competitive tennis should be "in the high 90 per cent".[406] On 7 March, Murray stated in an interview that, as a result of the surgery, he was now free of pain in his hip and might therefore return to competitive tennis, but that any potential Wimbledon return would depend on how his hip felt; he would not rush his comeback, and might test his condition by playing doubles.[407]
+
+ On 16 May 2019, Murray received his knighthood from Prince Charles at Buckingham Palace, two years after he was awarded the honour.[408]
+
+ Murray returned to the professional tennis circuit in June, entering the doubles competition of the Queen's Club Championships alongside Feliciano López.[409] The duo won their first round match against top seeds Juan Sebastián Cabal and Robert Farah in straight sets and then beat defending champions John Peers and Henri Kontinen in the semi-finals.[410] Murray and López went on to win the tournament by defeating Rajeev Ram and Joe Salisbury in a final-set champions tie-break.[411] Following the win, Murray stated that his "hip felt great" and that "there was no pain."[412] Murray continued his comeback from injury by partnering Marcelo Melo in the doubles at the Eastbourne International, where they lost in the first round to Cabal and Farah.[413] At the 2019 Wimbledon Championships, Murray entered the men's doubles and mixed doubles events. In the men's doubles he partnered Pierre-Hugues Herbert and was eliminated in the second round, while his mixed doubles partnership with Serena Williams ended with a third round defeat to top seeds Bruno Soares and Nicole Melichar.[414]
+
+ After Murray's Wimbledon campaign, he and his brother Jamie participated in the Citi Open doubles, where they defeated Édouard Roger-Vasselin and Nicolas Mahut before losing to Michael Venus and Raven Klaasen in the round of 16. At his next tournament, the Canadian Open, he renewed his partnership with Feliciano López; they defeated Łukasz Kubot and Marcelo Melo before losing to Fabrice Martin and Jérémy Chardy. Following the conclusion of the tournament, Murray announced his return to singles competition at the Western & Southern Open and revealed his plans to play in China in the autumn.[415] In his first singles match since the 2019 Australian Open, Murray faced Richard Gasquet in the first round of the 2019 Cincinnati Masters, losing in straight sets.[416] In the quarter-finals of the Cincinnati doubles tournament, Andy Murray and Feliciano López met Jamie Murray and Neal Skupski in only the second match between the siblings in their senior careers; Jamie and Skupski won in three sets to progress, with Andy stating afterwards that he would now concentrate on returning to the singles tour.[417] Murray then played at the 2019 Winston-Salem Open, where he lost a close first round match to Tennys Sandgren in straight sets. Murray then contemplated dropping down to Challenger level, skipping the US Open entirely to focus on two tournaments happening concurrently, and opted to play at the 2019 Rafa Nadal Open Banc Sabadell Challenger event, the first time he had competed on the second-tier Challenger Tour since 2005.[418] In the first round of the event, Murray defeated 17-year-old Imran Sibille in straight sets in under 43 minutes to record his first singles victory since his hip surgery.[419] He lost to Matteo Viola in the third round.[420]
+
+ In September 2019, Murray participated in the inaugural Zhuhai Championships, losing to eventual champion Alex de Minaur in the second round. He also played the China Open, where he recorded a win over Matteo Berrettini, then ranked 13th in the world, before being eliminated by eventual champion Dominic Thiem in the quarter-finals.[421] Murray lost to 12th-ranked Fabio Fognini in the second round of the Shanghai Masters, before winning his first title since his operation at the European Open in October 2019, beating three-time Grand Slam winner Stan Wawrinka in the final.[422] In November 2019, he represented Great Britain for the first time since 2016 after being named in the squad for the 2019 Davis Cup finals;[423] however, he was only able to play one rubber in Great Britain's run to the semi-finals.
+
+ At the end of November 2019, a television documentary, Andy Murray: Resurfacing, was released on the Amazon Prime platform, detailing Murray's various attempts to overcome his hip injury over a two-year period from his defeat at Wimbledon in 2017 to his doubles victory at Queen's Club in 2019.[424][425] In late December, Murray's team confirmed that the pelvic injury which had curtailed his involvement in the Davis Cup would also prevent him from entering the upcoming 2020 Australian Open and the inaugural ATP Cup.[426]
+
+ Novak Djokovic and Murray have met 36 times, with Djokovic leading 25–11.[427][428] Djokovic leads 5–1 on clay and 20–8 on hard courts, while Murray leads 2–0 on grass. The two are almost exactly the same age, Murray being only a week older than Djokovic. They went to training camp together, and Murray won the first match they ever played as teenagers. The pair have met 19 times in finals, with Djokovic leading 11–8.[427] Ten of those finals were at ATP Masters 1000 events, where they are tied at 5–5. They have met in seven major finals: the 2011 Australian Open, the 2012 US Open, the 2013 Australian Open, the 2013 Wimbledon Championships, the 2015 Australian Open, the 2016 Australian Open and the 2016 French Open. Djokovic won the four finals in Australia and their single French Open final, while Murray emerged the victor at the US Open and Wimbledon. The first of Murray's victories was the joint-longest final ever played at the US Open, matching the 4 hours and 53 minutes of the 1988 final between Ivan Lendl and Mats Wilander, while the second was notable as the first home triumph in the men's singles at Wimbledon since 1936.
+
+ They also played a nearly five-hour semi-final at the 2012 Australian Open, which Djokovic won 7–5 in the fifth set after Murray had led two sets to one. Murray and Djokovic met again at the London 2012 Olympic Games, with Murray winning in straight sets. In the final of the 2012 Shanghai Masters, Murray held five championship points in the second set, but Djokovic saved each of them to force a deciding set and eventually prevailed to win his first Shanghai Masters title, ending Murray's 12–0 winning streak at the event. The three-set matches they played in Rome in 2011 and Shanghai in 2012 were each voted the ATP World Tour Match of the Year for their respective seasons.[429][430] Due to the tight competition between 2008 and 2013, many saw this as the sport's emerging rivalry.[431][432] Djokovic went on to dominate the rivalry after the 2013 Wimbledon final, winning 13 of their next 16 matches. In 2016, Murray suffered his fourth loss to Djokovic (and fifth in total) in an Australian Open final, and the Serbian then defeated him in four sets in the Roland Garros final, winning his first Roland Garros title and completing the Career Grand Slam.[433][434] Murray and Djokovic then met in the final of the ATP World Tour Finals for the first time in their rivalry, with the winner to be crowned year-end No. 1. Djokovic dropped only one set en route to the final, but lost in straight sets to Murray, who finished the year at No. 1 and became the first British player to achieve this feat.
+
+ Murray and Roger Federer have met 25 times, with Federer leading 14–11. Federer leads 12–10 on hard courts and 2–1 on grass, and they have never met on clay. They have met six times at Grand Slam level, with Federer leading 5–1. After Federer won the first professional match they played, Murray dominated the first half of the rivalry, holding an 8–5 lead by the end of 2010. The second half of the rivalry has been dominated by Federer, who leads 9–3 since 2011 and has led their overall head-to-head since the 2014 ATP World Tour Finals.[435] Federer leads 5–3 in finals, having won each of their Grand Slam final meetings: the 2008 US Open[122] and the 2010 Australian Open, both in straight sets, and the 2012 Wimbledon Championships, where Murray took the first set but lost in four. Murray leads 6–3 at Masters 1000 tournaments and 2–0 in finals at that level. They have met five times at the ATP World Tour Finals, with Murray winning in Shanghai in 2008[436] and Federer coming out victorious in London in 2009, 2010, 2012 and 2014.
+
+ In August 2012, Murray met Federer in the final of the London 2012 Olympics on Wimbledon's Centre Court, just four weeks after the 2012 Wimbledon final, in which Federer had beaten Murray to win his record-tying seventh title at the All England Club. Murray defeated Federer in straight sets to win the gold medal, denying Federer a Career Golden Slam. In 2013, Murray beat Federer in a major for the first time, in the semi-finals of the Australian Open, prevailing in five sets after Federer had twice come back from a set down.[437] Their last Grand Slam meeting was in the semi-finals of the 2015 Wimbledon Championships, where a dominant Federer defeated Murray in straight sets to earn a place in his 10th Wimbledon final. Murray is one of only three players to have recorded 10 or more victories over Federer, the other two being Nadal and Djokovic. Their most recent meeting came in the semi-finals of the 2015 Cincinnati Masters, with Federer winning in two close sets for his fifth consecutive victory over Murray.[435]
+
+ Murray has played Rafael Nadal on 24 occasions since 2007, with Nadal leading 17–7. Nadal leads 7–2 on clay, 3–0 on grass and 7–5 on hard courts. The pair have met regularly at Grand Slam level, with nine of their twenty-four meetings coming in slams and Nadal leading 7–2 (3–0 at Wimbledon, 2–0 at the French Open, 1–1 at the Australian Open and 1–1 at the US Open).[438] Eight of these nine meetings came at the quarter-final or semi-final stage. They have never met in a slam final; in ATP finals, Murray leads 3–1, with Nadal winning at Indian Wells in 2009[439] and Murray winning in Rotterdam the same year,[440] in Tokyo in 2011,[441] and in Madrid in 2015.
+
+ Murray lost three consecutive Grand Slam semi-finals to Nadal in 2011, from the French Open to the US Open. The pair then did not meet for three years, from the final of the 2011 Japan Open until the quarter-finals of the 2014 Rome Masters, although they had been scheduled to meet in the semi-finals of the 2012 Miami Masters before Nadal withdrew with injury.[442] In the semi-finals of the 2014 French Open, Nadal triumphed in a dominant straight-sets win for the loss of just six games. Murray beat Nadal for the first time on clay, and for the first time in a Masters 1000 final, at the Madrid Open in 2015.[443] Murray fell to Nadal in the semi-finals of the 2016 Monte Carlo Masters despite taking the first set,[444] but when they met again three weeks later in the semi-finals of the 2016 Madrid Open, Murray won in straight sets.[445]
+
+ Murray and Stan Wawrinka have played 20 times, with Murray leading 12–8. Murray leads 8–4 on hard courts and 3–0 on grass, while Wawrinka leads 4–1 on clay. They have also met six times in Grand Slam tournaments, with each player winning three matches.[446] They have contested some close matches; one of their most notable meetings was the 2009 Wimbledon fourth round, which Murray won in five sets in the first men's match to be played under the Wimbledon roof, which at the time had the latest finish of any Wimbledon match.[447] Wawrinka also ended Murray's title defence in the 2013 US Open quarter-finals with a comfortable straight-sets victory.[448] Other close matches between the two include three-set wins for Murray at the 2008 Canada Open and the 2011 Shanghai Masters, and a four-set win for Wawrinka at the 2010 US Open.
+
+ While Murray has led the majority of the rivalry, Wawrinka won their first two matches and beat Murray three times in succession between 2013 and 2015, all in straight sets, before Murray ended the losing streak at the 2016 French Open, beating the defending champion in four sets to reach his first ever French Open final. Around this time Wawrinka was identified by some, including Djokovic, as a potential contender to turn the Big Four quartet into a "Big Five", although Wawrinka himself downplayed those suggestions, stating that he was still far behind the others.[449][450] Their most recent match was the final of the 2019 European Open, which Murray won in three sets to claim his first title after undergoing hip surgery.[422]
+
+ Murray plays an all-court game with an emphasis on defensive baseline play; in 2009, professional tennis coach Paul Annacone stated that Murray "may be the best counterpuncher on tour today."[451][452] His strengths include groundstrokes with a low error rate, the ability to anticipate and react, and a fast transition from defence to offence, which enables him to hit winners from defensive positions. His playing style has been likened to that of Miloslav Mečíř. Murray also has one of the best two-handed backhands on the tour, with dynamic stroke execution,[453] while he primarily uses his more passive forehand and a sliced backhand to draw opponents into his defensive game before playing more offensively.[454] Tim Henman stated in 2013 that Murray may have the best lob in the game, succeeding Lleyton Hewitt. Murray's tactics often involve passive exchanges from the baseline, but he is capable of injecting sudden pace into his groundstrokes to surprise opponents who have settled into a slow rally. Murray is also one of the top returners in the game, often able to block back fast serves with his excellent reach and anticipation; for this reason, he is rarely aced.[455]
+
+ Murray is known as one of the most intelligent tacticians on the court, often constructing points.[456][457] Other strengths, although not central to his game, include his drop shot[458] and net play.[459] As he plays predominantly from the baseline, he usually approaches the net to volley when looking to finish points more quickly.[460] Murray is most proficient on a fast surface such as grass, on which he has won eight singles titles including the Wimbledon Championships and the 2012 Olympic gold medal, although hard courts are his preferred surface.[452] He has worked hard since 2008 on improving his clay court game,[461] ultimately winning his first clay titles in 2015 at Munich and Madrid, as well as reaching his first French Open final in 2016. While Murray's serve is a major weapon, with his first serve sometimes reaching speeds of 130 mph or higher and winning him many free points,[462] it can become inconsistent under pressure,[463] and his slower second serve is more vulnerable. Since the 2011 season, under Ivan Lendl's coaching, Murray has played a more offensive game and has also worked to improve his second serve, forehand, consistency and mental game, all of which have been crucial to his further success.[462][464][465][466]
+
+ In 2009, German manufacturer Adidas and Murray signed a five-year deal worth £30 million, which included wearing the company's range of tennis shoes.[467] The contract with Adidas allowed Murray to keep his shirt-sleeve sponsors Shiatzy Chen, Royal Bank of Scotland and Highland Spring. Before he was signed by Adidas in late 2009, he wore Fred Perry apparel.[468] At the end of the contract, Adidas decided not to re-sign Murray,[469] and in December 2014 he began a four-year partnership with the athletic apparel company Under Armour,[470] reportedly worth $25 million.[471] Murray then signed with Castore for the 2019 season, which he called his last deal before announcing his retirement.[472][473]
+
+ Murray uses Head rackets and regularly appears in advertisements for the brand.[474] He endorses the Head Radical Pro model, whereas his actual playing racket (underneath the various Radical Pro paintjobs) is reported to be a customised pro stock PT57A, derived from the original Pro Tour 630 model but with a 16×19 string pattern.[475][476] The racket was set up extremely heavy at the beginning of his career, but its weight was lowered after a 2007 wrist injury.[477]
+
+ In June 2012, the Swiss watch manufacturer Rado announced that Murray had signed a deal to wear their D-Star 200 model.[478]
+
+ Murray's coaching staff has changed through the years and are as follows: Leon Smith (1998–2004), Pato Álvarez (2003–2005), Mark Petchey (2005–2006), Brad Gilbert (2006–2007), Miles Maclagan (2007–2010), Àlex Corretja (2010–2011), Ivan Lendl (2011–2014, 2016–2017), Amélie Mauresmo (2014–2016), Jonas Björkman (2015),[479] Jamie Delgado (2016–).[480]
+
+ Murray is a founding member of the Malaria No More UK Leadership Council and helped launch the charity in 2009 with David Beckham. Footage from the launch at Wembley Stadium can be seen on YouTube and the charity's website.[481] Murray also made 'Nets Needed', a short public service announcement for the charity, to raise awareness and funds for the fight against malaria.[482] He has also taken part in several charity tennis events, including the Rally for Relief events held prior to the start of the 2011 Australian Open.[483]
+
+ In June 2013, Murray teamed up with former British No. 1 Tim Henman for a charity doubles match at the Queen's Club in London against Murray's coach, eight-time Grand Slam champion Ivan Lendl, and No. 6 Tomáš Berdych. The event, named Rally Against Cancer, was organised to raise money for the Royal Marsden Cancer Charity after Murray's best friend and fellow British player Ross Hutchins was diagnosed with Hodgkin's lymphoma.[484][485] It took place on Sunday 16 June, following the final day of competitive play at the AEGON Championships. Subsequently, following his victory at the tournament, Murray donated his entire prize money to the Royal Marsden Cancer Charity.[486]
+
+ In June 2014, following the death of Elena Baltacha from liver cancer, Murray featured in an event known as 'Rally for Bally', playing at Queen's Club alongside Victoria Azarenka, Martina Hingis, Heather Watson and his brother Jamie. The event raised money for the Royal Marsden Cancer Charity and the Elena Baltacha Academy of Tennis, and children from Baltacha's academy took to the court to play alongside Murray.[487][488] As a result of his various charitable exploits, Murray was awarded the Arthur Ashe Humanitarian of the Year award for 2014.[489]
+
+ Murray identifies himself as Scottish and British.[490] His national identity has often been commented on by the media.[491] While making a cameo appearance on the comedy show Outnumbered, Murray was asked whether he was British or Scottish, to which he responded "Depends whether I win or not."[492] Much of the discussion about Murray's national identity began prior to Wimbledon 2006, when he was quoted as saying he would "support whoever England is playing" at the 2006 World Cup. English ex-tennis player Tim Henman confirmed that the remarks had been made in jest and were only in response to Murray being teased by journalist Des Kelly and Henman about Scotland's failure to qualify.[493]
+
+ Murray initially refused to endorse either side in the 2014 referendum on Scottish independence, citing the abuse he had received after his World Cup comments about England in 2006.[494] Just before the referendum, Murray tweeted a message that was considered by the media to be supportive of independence.[a][495][496][497] He received online abuse for expressing his opinion, including messages described as "vile" by Police Scotland; one referred to the Dunblane massacre.[497] A few days after the vote, in which a 55% majority opposed Scottish independence, Murray said that he did not regret stating his view, but that it had been out of character and that he would concentrate on his tennis career in the future.[497]
+
+ After defeating Nikolay Davydenko at Wimbledon 2012, Murray pointed upwards with both hands and wagged them back and forth while looking to the sky. Murray declined to reveal the reason, and he has continued to celebrate his victories with this gesture ever since.[498] Murray marked his first Wimbledon title in 2013 with the same victory salute.[499] In his book Seventy-Seven: My Path to Wimbledon Glory, released in November 2013, Murray explained: "The real reason is that around that time I had a few friends and family who had various issues affecting them . . . I knew that they would be watching and I wanted to let them know I was thinking of them."[500]
+
+ After winning the Brisbane International in January 2013, he dedicated the victory to his friend Ross Hutchins who had been diagnosed with cancer in December 2012. Hutchins confirmed that Murray's victory salute after this win was a sign to him.[501]
+
+ In 2006, there was controversy after a match against Kenneth Carlsen. Having been given a warning for racket abuse, Murray went on to state in the post-match interview that he and Carlsen had "played like women" during the first set.[502] Murray was booed for the remark, but later said that the comment had been intended as a jocular response to something Svetlana Kuznetsova had said at the Hopman Cup.[503] A few months later, Murray was fined for swearing at the umpire during a Davis Cup doubles rubber against the Serbia and Montenegro team, and refused to shake hands with the umpire at the end of the match.[504]
+
+ In 2007, in the wake of the investigation surrounding Nikolay Davydenko,[506] Murray suggested that tennis had a match-fixing problem, stating that everyone knows it goes on.[505] Both Davydenko and Rafael Nadal questioned his comments, but Murray responded that his words had been taken out of context.[507]
+
+ In a June 2015 column for the French sports newspaper L'Équipe, Murray criticised what he described as a double standard applied by many in their attitudes towards Amélie Mauresmo in her role as his coach, highlighting how many observers attributed his poor performances during the early part of her tenure to her appointment, a link Murray denied, and pointing out that his previous coaches had not been blamed by the media for other spells of poor form. He also lamented the lack of female coaches working in elite tennis, and concluded: "Have I become a feminist? Well, if being a feminist is about fighting so that a woman is treated like a man then yes, I suppose I have".[44] Murray has corrected others a number of times on the subject of women's tennis. After BBC host John Inverdale indirectly suggested that Murray was the first person to win more than one Olympic gold medal in tennis, Murray interjected: "I think Venus and Serena have won about four each."[508] Murray has also argued that male and female tennis players should receive equal amounts of prize money.[509]
+
+ Murray has not given his personal opinion on Britain's decision to leave the European Union.[510] However, following his win at Wimbledon in 2016, he expressed his surprise at the outcome of the referendum and added that "it's important that everyone comes together to make the best of it."[511]
+
+ Current through the 2020 Australian Open.
+
en/2280.html.txt ADDED
@@ -0,0 +1,168 @@
+
+
+ Ancient Greece (Greek: Ἑλλάς, romanized: Hellás) was a civilization belonging to a period of Greek history from the Greek Dark Ages of the 12th–9th centuries BC to the end of antiquity (c. AD 600). Immediately following this period was the beginning of the Early Middle Ages and the Byzantine era.[1] Roughly three centuries after the Late Bronze Age collapse of Mycenaean Greece, Greek urban poleis began to form in the 8th century BC, ushering in the Archaic period and the colonization of the Mediterranean Basin. This was followed by the period of Classical Greece, an era that began with the Greco-Persian Wars, lasting from the 5th to 4th centuries BC. Due to the conquests by Alexander the Great of Macedon, Hellenistic civilization flourished from Central Asia to the western end of the Mediterranean Sea. The Hellenistic period came to an end with the conquests and annexations of the eastern Mediterranean world by the Roman Republic, which established the Roman province of Macedonia in Roman Greece, and later the province of Achaea during the Roman Empire.
+
+ Classical Greek culture, especially philosophy, had a powerful influence on ancient Rome, which carried a version of it to many parts of the Mediterranean Basin and Europe. For this reason, Classical Greece is generally considered to be the seminal culture which provided the foundation of modern Western culture and is considered the cradle of Western civilization.[2][3][4]
+
+ Classical antiquity in the Mediterranean region is commonly considered to have begun in the 8th century BC[5] (around the time of the earliest recorded poetry of Homer) and ended in the 6th century AD.
+
+ Classical antiquity in Greece was preceded by the Greek Dark Ages (c. 1200 – c. 800 BC), archaeologically characterised by the protogeometric and geometric styles of designs on pottery. Following the Dark Ages was the Archaic Period, beginning around the 8th century BC. The Archaic Period saw early developments in Greek culture and society which formed the basis for the Classical Period.[6] After the Archaic Period, the Classical Period in Greece is conventionally considered to have lasted from the Persian invasion of Greece in 480 BC until the death of Alexander the Great in 323 BC.[7] The period is characterized by a style which was considered by later observers to be exemplary, i.e., "classical", as shown in the Parthenon, for instance. Politically, the Classical Period was dominated by Athens and the Delian League during the 5th century, but displaced by Spartan hegemony during the early 4th century BC, before power shifted to Thebes and the Boeotian League and finally to the League of Corinth led by Macedon. This period saw the Greco-Persian Wars and the Rise of Macedon.
+
+ Following the Classical period was the Hellenistic period (323–146 BC), during which Greek culture and power expanded into the Near and Middle East. This period begins with the death of Alexander and ends with the Roman conquest. Roman Greece is usually considered to be the period between Roman victory over the Corinthians at the Battle of Corinth in 146 BC and the establishment of Byzantium by Constantine as the capital of the Roman Empire in AD 330. Finally, Late Antiquity refers to the period of Christianization during the later 4th to early 6th centuries AD, sometimes taken to be complete with the closure of the Academy of Athens by Justinian I in 529.[8]
+
+ The historical period of ancient Greece is unique in world history as the first period attested directly in proper historiography, while earlier ancient history or proto-history is known by much more circumstantial evidence, such as annals or king lists, and pragmatic epigraphy.
+
+ Herodotus is widely known as the "father of history": his Histories are eponymous of the entire field. Written between the 450s and 420s BC, Herodotus' work reaches about a century into the past, discussing 6th century historical figures such as Darius I of Persia, Cambyses II and Psamtik III, and alluding to some 8th century ones such as Candaules.
+
+ Herodotus was succeeded by authors such as Thucydides, Xenophon, Demosthenes, Plato and Aristotle. Most of these authors were either Athenian or pro-Athenian, which is why far more is known about the history and politics of Athens than those of many other cities. Their scope is further limited by a focus on political, military and diplomatic history, ignoring economic and social history.[9]
21
+
22
+ In the 8th century BC, Greece began to emerge from the Dark Ages which followed the fall of the Mycenaean civilization. Literacy had been lost and Mycenaean script forgotten, but the Greeks adopted the Phoenician alphabet, modifying it to create the Greek alphabet. Objects with Phoenician writing on them may have been available in Greece from the 9th century BC, but the earliest evidence of Greek writing comes from graffiti on Greek pottery from the mid-8th century.[10] Greece was divided into many small self-governing communities, a pattern largely dictated by Greek geography: every island, valley and plain is cut off from its neighbors by the sea or mountain ranges.[11]
23
+
24
+ The Lelantine War (c. 710 – c. 650 BC) is the earliest documented war of the ancient Greek period. It was fought between the important poleis (city-states) of Chalcis and Eretria over the fertile Lelantine plain of Euboea. Both cities seem to have suffered a decline as a result of the long war, though Chalcis was the nominal victor.
25
+
26
+ A mercantile class arose in the first half of the 7th century BC, shown by the introduction of coinage in about 680 BC.[12] This seems to have introduced tension to many city-states. The aristocratic regimes which generally governed the poleis were threatened by the new-found wealth of merchants, who in turn desired political power. From 650 BC onwards, the aristocracies had to fight not to be overthrown and replaced by populist tyrants.[a]
27
+
28
+ A growing population and a shortage of land also seem to have created internal strife between the poor and the rich in many city-states. In Sparta, the Messenian Wars resulted in the conquest of Messenia and enserfment of the Messenians, beginning in the latter half of the 8th century BC, an act without precedent in ancient Greece. This practice allowed a social revolution to occur.[15] The subjugated population, thenceforth known as helots, farmed and labored for Sparta, whilst every Spartan male citizen became a soldier of the Spartan Army in a permanently militarized state. Even the elite were obliged to live and train as soldiers; this commonality between rich and poor citizens served to defuse the social conflict. These reforms, attributed to Lycurgus of Sparta, were probably complete by 650 BC.
29
+
30
+ Athens suffered a land and agrarian crisis in the late 7th century BC, again resulting in civil strife. The Archon (chief magistrate) Draco made severe reforms to the law code in 621 BC (hence "draconian"), but these failed to quell the conflict. Eventually the moderate reforms of Solon (594 BC), improving the lot of the poor but firmly entrenching the aristocracy in power, gave Athens some stability.
31
+
32
+ By the 6th century BC several cities had emerged as dominant in Greek affairs: Athens, Sparta, Corinth, and Thebes. Each of them had brought the surrounding rural areas and smaller towns under their control, and Athens and Corinth had become major maritime and mercantile powers as well.
33
+
34
+ Rapidly increasing population in the 8th and 7th centuries BC had resulted in emigration of many Greeks to form colonies in Magna Graecia (Southern Italy and Sicily), Asia Minor and further afield. The emigration effectively ceased in the 6th century BC by which time the Greek world had, culturally and linguistically, become much larger than the area of present-day Greece. Greek colonies were not politically controlled by their founding cities, although they often retained religious and commercial links with them.
35
+
36
+ The emigration process also determined a long series of conflicts between the Greek cities of Sicily, especially Syracuse, and the Carthaginians. These conflicts lasted from 600 BC to 265 BC, when the Roman Republic entered into an alliance with the Mamertines to fend off the hostilities of the new tyrant of Syracuse, Hiero II, and then of the Carthaginians. In this way Rome became the new dominant power against the fading strength of the Sicilian Greek cities and the Carthaginian supremacy in the region. One year later the First Punic War erupted.
37
+
38
+ In this period, there was huge economic development in Greece, and also in its overseas colonies which experienced a growth in commerce and manufacturing. There was a great improvement in the living standards of the population. Some studies estimate that the average size of the Greek household, in the period from 800 BC to 300 BC, increased five times, which indicates[citation needed] a large increase in the average income of the population.
39
+
40
+ In the second half of the 6th century BC, Athens fell under the tyranny of Peisistratos and then of his sons Hippias and Hipparchos. However, in 510 BC, at the instigation of the Athenian aristocrat Cleisthenes, the Spartan king Cleomenes I helped the Athenians overthrow the tyranny. Afterwards, Sparta and Athens promptly turned on each other, at which point Cleomenes I installed Isagoras as a pro-Spartan archon. Eager to prevent Athens from becoming a Spartan puppet, Cleisthenes responded by proposing to his fellow citizens that Athens undergo a revolution: that all citizens share in political power, regardless of status: that Athens become a "democracy". So enthusiastically did the Athenians take to this idea that, having overthrown Isagoras and implemented Cleisthenes's reforms, they were easily able to repel a Spartan-led three-pronged invasion aimed at restoring Isagoras.[16] The advent of the democracy cured many of the ills of Athens and led to a 'golden age' for the Athenians.
41
+
42
+ In 499 BC, the Ionian city states under Persian rule rebelled against the Persian-supported tyrants that ruled them.[17] Supported by troops sent from Athens and Eretria, they advanced as far as Sardis and burnt the city down, before being driven back by a Persian counterattack.[18] The revolt continued until 494, when the rebelling Ionians were defeated.[19] Darius did not forget that the Athenians had assisted the Ionian revolt, however, and in 490 he assembled an armada to conquer Athens.[20] Despite being heavily outnumbered, the Athenians—supported by their Plataean allies—defeated the Persian forces at the Battle of Marathon, and the Persian fleet withdrew.[21]
43
+
44
+ Ten years later, a second invasion was launched by Darius' son Xerxes.[22] The city-states of northern and central Greece submitted to the Persian forces without resistance, but a coalition of 31 Greek city states, including Athens and Sparta, determined to resist the Persian invaders.[23] At the same time, Greek Sicily was invaded by a Carthaginian force.[24] In 480 BC, the first major battle of the invasion was fought at Thermopylae, where a small force of Greeks, led by three hundred Spartans, held a crucial pass into the heart of Greece for several days; at the same time Gelon, tyrant of Syracuse, defeated the Carthaginian invasion at the Battle of Himera.[25]
45
+
46
+ The Persians were defeated by a primarily Athenian naval force at the Battle of Salamis, and in 479 defeated on land at the Battle of Plataea.[26] The alliance against Persia continued, initially led by the Spartan Pausanias but from 477 by Athens,[27] and by 460 Persia had been driven out of the Aegean.[28] During this period of campaigning, the Delian League gradually transformed from a defensive alliance of Greek states into an Athenian empire, as Athens' growing naval power enabled it to compel other league states to comply with its policies.[29] Athens ended its campaigns against Persia in 450 BC, after a disastrous defeat in Egypt in 454 BC, and the death of Cimon in action against the Persians on Cyprus in 450.[30]
47
+
48
+ While Athenian activity against the Persian empire was ending, however, conflict between Sparta and Athens was increasing. Sparta was suspicious of the increasing Athenian power funded by the Delian League, and tensions rose when Sparta offered aid to reluctant members of the League to rebel against Athenian domination. These tensions were exacerbated in 462, when Athens sent a force to aid Sparta in overcoming a helot revolt, but their aid was rejected by the Spartans.[31] In the 450s, Athens took control of Boeotia, and won victories over Aegina and Corinth.[32] However, Athens failed to win a decisive victory, and in 447 lost Boeotia again.[33] Athens and Sparta signed the Thirty Years' Peace in the winter of 446/5, ending the conflict.[34]
49
+
50
+ Despite the peace of 446/5, Athenian relations with Sparta declined again in the 430s, and in 431 war broke out once again.[35] The first phase of the war is traditionally seen as a series of annual invasions of Attica by Sparta, which made little progress, while Athens was successful against the Corinthian empire in the north-west of Greece, and in defending its own empire, despite suffering from plague and Spartan invasion.[36] The turning point of this phase of the war is usually seen as the Athenian victories at Pylos and Sphakteria.[37] Sparta sued for peace, but the Athenians rejected the proposal.[38] The Athenian failure to regain control of Boeotia at Delium, and Brasidas' successes in the north of Greece in 424, improved Sparta's position after Sphakteria.[39] After the deaths of Cleon and Brasidas, the strongest objectors to peace on the Athenian and Spartan sides respectively, a peace treaty was agreed in 421.[40]
51
+
52
+ The peace did not last, however. In 418 an alliance between Athens and Argos was defeated by Sparta at Mantinea.[41] In 415 Athens launched a naval expedition against Sicily;[42] the expedition ended in disaster with almost the entire army killed.[43] Soon after the Athenian defeat in Syracuse, Athens' Ionian allies began to rebel against the Delian League, while at the same time Persia began to once again involve itself in Greek affairs on the Spartan side.[44] Initially the Athenian position continued to be relatively strong, winning important battles such as those at Cyzicus in 410 and Arginusae in 406.[45] However, in 405 the Spartans defeated Athens in the Battle of Aegospotami, and began to blockade Athens' harbour;[46] with no grain supply and in danger of starvation, Athens sued for peace, agreeing to surrender its fleet and join the Spartan-led Peloponnesian League.[47]
53
+
54
+ Greece thus entered the 4th century BC under a Spartan hegemony, but it was clear from the start that this was weak. A demographic crisis meant Sparta was overstretched, and by 395 BC Athens, Argos, Thebes, and Corinth felt able to challenge Spartan dominance, resulting in the Corinthian War (395–387 BC). Another war of stalemates, it ended with the status quo restored, after the threat of Persian intervention on behalf of the Spartans.
55
+
56
+ The Spartan hegemony lasted another 16 years, until, when attempting to impose their will on the Thebans, the Spartans were defeated at Leuctra in 371 BC. The Theban general Epaminondas then led Theban troops into the Peloponnese, whereupon other city-states defected from the Spartan cause. The Thebans were thus able to march into Messenia and free the population.
57
+
58
+ Deprived of land and its serfs, Sparta declined to a second-rank power. The Theban hegemony thus established was short-lived; at the Battle of Mantinea in 362 BC, Thebes lost its key leader, Epaminondas, and much of its manpower, even though they were victorious in battle. In fact such were the losses to all the great city-states at Mantinea that none could establish dominance in the aftermath.
59
+
60
+ The weakened state of the heartland of Greece coincided with the Rise of Macedon, led by Philip II. In twenty years, Philip had unified his kingdom, expanded it north and west at the expense of Illyrian tribes, and then conquered Thessaly and Thrace. His success stemmed from his innovative reforms to the Macedonian army. Philip intervened repeatedly in the affairs of the southern city-states, culminating in his invasion of 338 BC.
61
+
62
+ Decisively defeating an allied army of Thebes and Athens at the Battle of Chaeronea (338 BC), he became de facto hegemon of all of Greece, except Sparta. He compelled the majority of the city-states to join the League of Corinth, allying them to him, and preventing them from warring with each other. Philip then entered into war against the Achaemenid Empire but was assassinated by Pausanias of Orestis early on in the conflict.
63
+
64
+ Alexander the Great, son and successor of Philip, continued the war. Alexander defeated Darius III of Persia and completely destroyed the Achaemenid Empire, annexing it to Macedon and earning himself the epithet 'the Great'. When Alexander died in 323 BC, Greek power and influence were at their zenith. However, there had been a fundamental shift away from the fierce independence and classical culture of the poleis—and instead towards the developing Hellenistic culture.
65
+
66
+ The Hellenistic period lasted from 323 BC, which marked the end of the wars of Alexander the Great, to the annexation of Greece by the Roman Republic in 146 BC. Although the establishment of Roman rule did not break the continuity of Hellenistic society and culture, which remained essentially unchanged until the advent of Christianity, it did mark the end of Greek political independence.
67
+
68
+ After the death of Alexander, his empire was, after quite some conflict, divided among his generals, resulting in the Ptolemaic Kingdom (Egypt and adjoining North Africa), the Seleucid Empire (the Levant, Mesopotamia and Persia) and the Antigonid dynasty (Macedonia). In the intervening period, the poleis of Greece were able to wrest back some of their freedom, although still nominally subject to the Macedonian Kingdom.
69
+
70
+ During the Hellenistic period, the importance of "Greece proper" (that is, the territory of modern Greece) within the Greek-speaking world declined sharply. The great centers of Hellenistic culture were Alexandria and Antioch, capitals of the Ptolemaic Kingdom and the Seleucid Empire, respectively.
71
+
72
+ The conquests of Alexander had numerous consequences for the Greek city-states. They greatly widened the horizons of the Greeks and led to a steady emigration, particularly of the young and ambitious, to the new Greek empires in the east.[48] Many Greeks migrated to Alexandria, Antioch and the many other new Hellenistic cities founded in Alexander's wake, as far away as what are now Afghanistan and Pakistan, where the Greco-Bactrian Kingdom and the Indo-Greek Kingdom survived until the end of the first century BC.
73
+
74
+ The city-states within Greece formed themselves into two leagues: the Achaean League (including Thebes, Corinth and Argos) and the Aetolian League (including Sparta and Athens). For much of the period until the Roman conquest, these leagues were usually at war with each other, and/or allied to different sides in the conflicts between the Diadochi (the successor states to Alexander's empire).
75
+
76
+ The Antigonid Kingdom became involved in a war with the Roman Republic in the late 3rd century. Although the First Macedonian War was inconclusive, the Romans, in typical fashion, continued to make war on Macedon until it was completely absorbed into the Roman Republic (by 149 BC). In the east the unwieldy Seleucid Empire gradually disintegrated, although a rump survived until 64 BC, whilst the Ptolemaic Kingdom continued in Egypt until 30 BC, when it too was conquered by the Romans. The Aetolian League grew wary of Roman involvement in Greece, and sided with the Seleucids in the Roman–Seleucid War; when the Romans were victorious, the league was effectively absorbed into the Republic. Although the Achaean League outlasted both the Aetolian League and Macedon, it was also soon defeated and absorbed by the Romans in 146 BC, bringing an end to the independence of all of Greece.
77
+
78
+ The Greek peninsula came under Roman rule during the 146 BC conquest of Greece after the Battle of Corinth. Macedonia became a Roman province while southern Greece came under the surveillance of Macedonia's prefect; however, some Greek poleis managed to maintain a partial independence and avoid taxation. The Aegean islands were added to this territory in 133 BC. Athens and other Greek cities revolted in 88 BC, and the revolt was crushed by the Roman general Sulla. The Roman civil wars devastated the land even further, until Augustus organized the peninsula as the province of Achaea in 27 BC.
79
+
80
+ Greece was a key eastern province of the Roman Empire, as Roman culture had long been, in fact, Greco-Roman. The Greek language served as a lingua franca in the East and in Italy, and many Greek intellectuals, such as Galen, carried out most of their work in Rome.
81
+
82
+ The territory of Greece is mountainous, and as a result, ancient Greece consisted of many smaller regions each with its own dialect, cultural peculiarities, and identity. Regionalism and regional conflicts were a prominent feature of ancient Greece. Cities tended to be located in valleys between mountains, or on coastal plains, and dominated a certain area around them.
83
+
84
+ In the south lay the Peloponnese, itself consisting of the regions of Laconia (southeast), Messenia (southwest), Elis (west), Achaia (north), Korinthia (northeast), Argolis (east), and Arcadia (center). These names survive to the present day as regional units of modern Greece, though with somewhat different boundaries. Mainland Greece to the north, nowadays known as Central Greece, consisted of Aetolia and Acarnania in the west, Locris, Doris, and Phocis in the center, while in the east lay Boeotia, Attica, and Megaris. Northeast lay Thessaly, while Epirus lay to the northwest. Epirus stretched from the Ambracian Gulf in the south to the Ceraunian mountains and the Aoos river in the north, and consisted of Chaonia (north), Molossia (center), and Thesprotia (south). In the northeast corner was Macedonia,[49] originally consisting of Lower Macedonia and its regions, such as Elimeia, Pieria, and Orestis. Around the time of Alexander I of Macedon, the Argead kings of Macedon started to expand into Upper Macedonia, lands inhabited by independent Macedonian tribes like the Lyncestae and the Elmiotae, and to the west, beyond the Axius river, into Eordaia, Bottiaea, Mygdonia, and Almopia, regions settled by Thracian tribes.[50] To the north of Macedonia lay various non-Greek peoples such as the Paeonians due north, the Thracians to the northeast, and the Illyrians, with whom the Macedonians were frequently in conflict, to the northwest. Chalcidice was settled early on by southern Greek colonists and was considered part of the Greek world, while from the late 2nd millennium BC substantial Greek settlement also occurred on the eastern shores of the Aegean, in Anatolia.
85
+
86
+ During the Archaic period, the population of Greece grew beyond the capacity of its limited arable land (according to one estimate, the population of ancient Greece increased by a factor larger than ten between 800 BC and 400 BC, from about 800,000 to an estimated total of 10 to 13 million).[51]
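+
+ Taking those estimates at face value, the implied growth factor is 10,000,000 / 800,000 = 12.5 at the low end and 13,000,000 / 800,000 ≈ 16 at the high end, which over four centuries works out to an average growth rate of roughly 0.6–0.7% per year.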
87
+
88
+ From about 750 BC the Greeks began 250 years of expansion, settling colonies in all directions. To the east, the Aegean coast of Asia Minor was colonized first, followed by Cyprus and the coasts of Thrace, the Sea of Marmara and south coast of the Black Sea.
89
+
90
+ Eventually Greek colonization reached as far northeast as present-day Ukraine and Russia (Taganrog). To the west the coasts of Illyria, Sicily and Southern Italy were settled, followed by Southern France, Corsica, and even northeastern Spain. Greek colonies were also founded in Egypt and Libya.
91
+
92
+ Modern Syracuse, Naples, Marseille and Istanbul had their beginnings as the Greek colonies Syracusae (Συράκουσαι), Neapolis (Νεάπολις), Massalia (Μασσαλία) and Byzantion (Βυζάντιον). These colonies played an important role in the spread of Greek influence throughout Europe and also aided in the establishment of long-distance trading networks between the Greek city-states, boosting the economy of ancient Greece.
93
+
94
+ Ancient Greece consisted of several hundred relatively independent city-states (poleis). This was a situation unlike that in most other contemporary societies, which were either tribal or kingdoms ruling over relatively large territories. Undoubtedly the geography of Greece—divided and sub-divided by hills, mountains, and rivers—contributed to the fragmentary nature of ancient Greece. On the one hand, the ancient Greeks had no doubt that they were "one people"; they had the same religion, same basic culture, and same language. Furthermore, the Greeks were very aware of their tribal origins; Herodotus was able to extensively categorise the city-states by tribe. Yet, although these higher-level relationships existed, they seem to have rarely had a major role in Greek politics. The independence of the poleis was fiercely defended; unification was something rarely contemplated by the ancient Greeks. Even when, during the second Persian invasion of Greece, a group of city-states allied themselves to defend Greece, the vast majority of poleis remained neutral, and after the Persian defeat, the allies quickly returned to infighting.[53]
95
+
96
+ Thus, the major peculiarities of the ancient Greek political system were its fragmentary nature (and that this does not particularly seem to have tribal origin), and the particular focus on urban centers within otherwise tiny states. The peculiarities of the Greek system are further evidenced by the colonies that they set up throughout the Mediterranean Sea, which, though they might count a certain Greek polis as their 'mother' (and remain sympathetic to her), were completely independent of the founding city.
97
+
98
+ Inevitably smaller poleis might be dominated by larger neighbors, but conquest or direct rule by another city-state appears to have been quite rare. Instead the poleis grouped themselves into leagues, membership of which was in a constant state of flux. Later in the Classical period, the leagues would become fewer and larger, be dominated by one city (particularly Athens, Sparta and Thebes); and often poleis would be compelled to join under threat of war (or as part of a peace treaty). Even after Philip II of Macedon "conquered" the heartlands of ancient Greece, he did not attempt to annex the territory, or unify it into a new province, but simply compelled most of the poleis to join his own Corinthian League.
99
+
100
+ Initially many Greek city-states seem to have been petty kingdoms; there was often a city official carrying some residual, ceremonial functions of the king (basileus), e.g., the archon basileus in Athens.[54] However, by the Archaic period and the first historical consciousness, most had already become aristocratic oligarchies. It is unclear exactly how this change occurred. For instance, in Athens, the kingship had been reduced to a hereditary, lifelong chief magistracy (archon) by c. 1050 BC; by 753 BC this had become a decennial, elected archonship; and finally by 683 BC an annually elected archonship. Through each stage more power would have been transferred to the aristocracy as a whole, and away from a single individual.
101
+
102
+ Inevitably, the domination of politics and concomitant aggregation of wealth by small groups of families was apt to cause social unrest in many poleis. In many cities a tyrant (not in the modern sense of a repressive autocrat) would at some point seize control and govern according to their own will; often a populist agenda would help sustain them in power. In a system wracked with class conflict, government by a 'strongman' was often the best solution.
103
+
104
+ Athens fell under a tyranny in the second half of the 6th century. When this tyranny was ended, the Athenians founded the world's first democracy as a radical solution to prevent the aristocracy regaining power. A citizens' assembly (the Ecclesia), for the discussion of city policy, had existed since the reforms of Draco in 621 BC; all citizens were permitted to attend after the reforms of Solon (early 6th century), but the poorest citizens could not address the assembly or run for office. With the establishment of the democracy, the assembly became the de jure mechanism of government; all citizens had equal privileges in the assembly. However, non-citizens, such as metics (foreigners living in Athens) or slaves, had no political rights at all.
105
+
106
+ After the rise of the democracy in Athens, other city-states founded democracies. However, many retained more traditional forms of government. As so often in other matters, Sparta was a notable exception to the rest of Greece, ruled through the whole period by not one, but two hereditary monarchs. This was a form of diarchy. The Kings of Sparta belonged to the Agiads and the Eurypontids, descendants respectively of Eurysthenes and Procles. Both dynasties' founders were believed to be twin sons of Aristodemus, a Heraclid ruler. However, the powers of these kings were held in check by both a council of elders (the Gerousia) and magistrates specifically appointed to watch over the kings (the Ephors).
107
+
108
+ Only free, land-owning, native-born men could be citizens entitled to the full protection of the law in a city-state. In most city-states, unlike the situation in Rome, social prominence did not allow special rights. Sometimes families controlled public religious functions, but this ordinarily did not give any extra power in the government. In Athens, the population was divided into four social classes based on wealth. People could change classes if they made more money. In Sparta, all male citizens were called homoioi, meaning "peers". However, Spartan kings, who served as the city-state's dual military and religious leaders, came from two families.[citation needed]
109
+
110
+ Slaves had no power or status. They had the right to have a family and own property, subject to their master's goodwill and permission, but they had no political rights. By 600 BC chattel slavery had spread in Greece. By the 5th century BC slaves made up one-third of the total population in some city-states. Between forty and eighty per cent of the population of Classical Athens were slaves.[55] Slaves outside of Sparta almost never revolted because they were made up of too many nationalities and were too scattered to organize. However, unlike later Western culture, the Ancient Greeks did not think in terms of race.[56]
111
+
112
+ Most families owned slaves as household servants and laborers, and even poor families might have owned a few slaves. Owners were not allowed to beat or kill their slaves. Owners often promised to free slaves in the future to encourage slaves to work hard. Unlike in Rome, freedmen did not become citizens. Instead, they were mixed into the population of metics, which included people from foreign countries or other city-states who were officially allowed to live in the state.
113
+
114
+ City-states legally owned slaves. These public slaves had a larger measure of independence than slaves owned by families, living on their own and performing specialized tasks. In Athens, public slaves were trained to look out for counterfeit coinage, while temple slaves acted as servants of the temple's deity and Scythian slaves were employed in Athens as a police force corralling citizens to political functions.
115
+
116
+ Sparta had a special type of slaves called helots. Helots were Messenians enslaved during the Messenian Wars by the state and assigned to families where they were forced to stay. Helots raised food and did household chores so that women could concentrate on raising strong children while men could devote their time to training as hoplites. Their masters treated them harshly, and helots revolted against their masters several times before winning their freedom in 370/69 BC.[57]
117
+
118
+ For most of Greek history, education was private, except in Sparta. During the Hellenistic period, some city-states established public schools. Only wealthy families could afford a teacher. Boys learned how to read, write and quote literature. They also learned to sing and play one musical instrument and were trained as athletes for military service. They studied not for a job but to become an effective citizen. Girls also learned to read, write and do simple arithmetic so they could manage the household. They almost never received education after childhood.[citation needed]
119
+
120
+ Boys went to school at the age of seven, or went to the barracks, if they lived in Sparta. The three types of teachers were: the grammatistes for arithmetic, the kitharistes for music and dancing, and the paedotribae for sports.
121
+
122
+ Boys from wealthy families attending the private school lessons were taken care of by a paidagogos, a household slave selected for this task who accompanied the boy during the day. Classes were held in teachers' private houses and included reading, writing, mathematics, singing, and playing the lyre and flute. When the boy became 12 years old the schooling started to include sports such as wrestling, running, and throwing the discus and javelin. In Athens some older youths attended an academy for the finer disciplines such as culture, sciences, music, and the arts. Schooling ended at age 18, followed by military training in the army, usually for one or two years.[58]
123
+
124
+ Only a small number of boys continued their education after childhood, as in the Spartan agoge. A crucial part of a wealthy teenager's education was a mentorship with an elder, which in a few places and times may have included pederasty.[citation needed] The teenager learned by watching his mentor talking about politics in the agora, helping him perform his public duties, exercising with him in the gymnasium and attending symposia with him. The richest students continued their education by studying with famous teachers. Some of Athens' greatest such schools included the Lyceum (the so-called Peripatetic school founded by Aristotle of Stageira) and the Platonic Academy (founded by Plato of Athens). The education system of the wealthy ancient Greeks is also called Paideia.[citation needed]
125
+
126
+ At its economic height, in the 5th and 4th centuries BC, ancient Greece was the most advanced economy in the world. According to some economic historians, it was one of the most advanced pre-industrial economies. This is demonstrated by the average daily wage of the Greek worker, which was, in terms of wheat, about 12 kg. This was more than 3 times the average daily wage of an Egyptian worker during the Roman period, about 3.75 kg.[59]
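+
+ Taking those figures at face value, the ratio is 12 kg / 3.75 kg = 3.2, so the average Greek wage bought a little over three times as much wheat as its Egyptian counterpart.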
127
+
128
+ At least in the Archaic Period, the fragmentary nature of ancient Greece, with many competing city-states, increased the frequency of conflict but conversely limited the scale of warfare. Unable to maintain professional armies, the city-states relied on their own citizens to fight. This inevitably reduced the potential duration of campaigns, as citizens would need to return to their own professions (especially in the case of, for example, farmers). Campaigns would therefore often be restricted to summer. When battles occurred, they were usually set-piece engagements intended to be decisive. Casualties were slight compared to later battles, rarely amounting to more than 5% of the losing side, but the slain often included the most prominent citizens and generals who led from the front.
129
+
130
+ The scale and scope of warfare in ancient Greece changed dramatically as a result of the Greco-Persian Wars. To fight the enormous armies of the Achaemenid Empire was effectively beyond the capabilities of a single city-state. The eventual triumph of the Greeks was achieved by alliances of city-states (the exact composition changing over time), allowing the pooling of resources and division of labor. Although alliances between city-states occurred before this time, nothing on this scale had been seen before. The rise of Athens and Sparta as pre-eminent powers during this conflict led directly to the Peloponnesian War, which saw further development of the nature of warfare, strategy and tactics. Fought between leagues of cities dominated by Athens and Sparta, the war drew on increased manpower and financial resources that enlarged its scale and allowed the diversification of warfare. Set-piece battles during the Peloponnesian war proved indecisive and instead there was increased reliance on attritionary strategies, naval battles, blockades and sieges. These changes greatly increased the number of casualties and the disruption of Greek society.
131
+ Athens owned one of the largest war fleets in ancient Greece. It had over 200 triremes, each powered by 170 oarsmen seated in 3 rows on each side of the ship. The city could afford such a large fleet—over 34,000 oarsmen in all (200 ships × 170 rowers)—because it owned rich silver mines that were worked by slaves.
132
+
133
+ According to Josiah Ober, Greek city-states faced approximately a one-in-three chance of destruction during the archaic and classical period.[60]
134
+
135
+ Ancient Greek philosophy focused on the role of reason and inquiry. In many ways, it had an important influence on modern philosophy, as well as modern science. Clear unbroken lines of influence lead from ancient Greek and Hellenistic philosophers, to medieval Muslim philosophers and Islamic scientists, to the European Renaissance and Enlightenment, to the secular sciences of the modern day.
136
+
137
+ Neither reason nor inquiry began with the Greeks. Defining the difference between the Greek quest for knowledge and the quests of earlier civilizations, such as the ancient Egyptians and Babylonians, has long been a topic of study by theorists of civilization.
138
+
139
+ Plato and Socrates are among the best-known philosophers of ancient Greece. Their thought, preserved in writings such as Plato's Republic, has also provided valuable information about ancient Greek society.
140
+
141
+ The earliest Greek literature was poetry, and was composed for performance rather than private consumption.[61] The earliest Greek poet known is Homer, although he was certainly part of an existing tradition of oral poetry.[62] Homer's poetry, though it was developed around the same time that the Greeks developed writing, would have been composed orally; the first poet to certainly compose their work in writing was Archilochus, a lyric poet from the mid-seventh century BC.[63] Tragedy developed around the end of the archaic period, taking elements from across the pre-existing genres of late archaic poetry.[64] Towards the beginning of the classical period, comedy began to develop—the earliest date associated with the genre is 486 BC, when a competition for comedy became an official event at the City Dionysia in Athens, though the first preserved ancient comedy is Aristophanes' Acharnians, produced in 425.[65]
142
+
143
+ Like poetry, Greek prose had its origins in the archaic period, and the earliest writers of Greek philosophy, history, and medical literature all date to the sixth century BC.[66] Prose first emerged as the writing style adopted by the presocratic philosophers Anaximander and Anaximenes—though Thales of Miletus, considered the first Greek philosopher, apparently wrote nothing.[67] Prose as a genre reached maturity in the classical era,[68] and the major Greek prose genres—philosophy, history, rhetoric, and dialogue—developed in this period.[69]
144
+
145
+ The Hellenistic period saw the literary centre of the Greek world move from Athens, where it had been in the classical period, to Alexandria. At the same time, other Hellenistic kings such as the Antigonids and the Attalids were patrons of scholarship and literature, turning Pella and Pergamon respectively into cultural centres.[70] It was thanks to this cultural patronage by Hellenistic kings, and especially to the Museum at Alexandria, that so much ancient Greek literature has survived.[71] The Library of Alexandria, part of the Museum, had the previously-unenvisaged aim of collecting copies of all known authors in Greek. Almost all of the surviving non-technical Hellenistic literature is poetry,[72] and Hellenistic poetry tended to be highly intellectual,[73] blending different genres and traditions, and avoiding linear narratives.[74] The Hellenistic period also saw a shift in the ways literature was consumed—while in the archaic and classical periods literature had typically been experienced in public performance, in the Hellenistic period it was more commonly read privately.[75] At the same time, Hellenistic poets began to write for private, rather than public, consumption.[76]
146
+
147
+ With Octavian's victory at Actium in 31 BC, Rome began to become a major centre of Greek literature, as important Greek authors such as Strabo and Dionysius of Halicarnassus came to Rome.[77] The period of greatest innovation in Greek literature under Rome was the "long second century" from approximately AD 80 to around AD 230.[78] This innovation was especially marked in prose, with the development of the novel and a revival of prominence for display oratory both dating to this period.[79]
148
+
149
+ Music was present almost universally in Greek society, from marriages and funerals to religious ceremonies, theatre, folk music and the ballad-like reciting of epic poetry. There are significant fragments of actual Greek musical notation as well as many literary references to ancient Greek music. Greek art depicts musical instruments and dance. The word music derives from the name of the Muses, the daughters of Zeus who were patron goddesses of the arts.
150
+
151
+ Ancient Greek mathematics contributed many important developments to the field of mathematics, including the basic rules of geometry, the idea of formal mathematical proof, and discoveries in number theory, mathematical analysis, and applied mathematics, and came close to establishing integral calculus. The discoveries of several Greek mathematicians, including Pythagoras, Euclid, and Archimedes, are still used in mathematical teaching today.
152
+
153
+ The Greeks developed astronomy, which they treated as a branch of mathematics, to a highly sophisticated level. The first geometrical, three-dimensional models to explain the apparent motion of the planets were developed in the 4th century BC by Eudoxus of Cnidus and Callippus of Cyzicus. Their younger contemporary Heraclides Ponticus proposed that the Earth rotates around its axis. In the 3rd century BC Aristarchus of Samos was the first to suggest a heliocentric system. Archimedes in his treatise The Sand Reckoner revives Aristarchus' hypothesis that "the fixed stars and the Sun remain unmoved, while the Earth revolves about the Sun on the circumference of a circle". Otherwise, only fragmentary descriptions of Aristarchus' idea survive.[80] Eratosthenes, using the angles of shadows created at widely separated regions, estimated the circumference of the Earth with great accuracy.[81] In the 2nd century BC Hipparchus of Nicaea made a number of contributions, including the first measurement of precession and the compilation of the first star catalog in which he proposed the modern system of apparent magnitudes.
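+
+ In the traditional account, Eratosthenes' method reduces to a single proportion: at the summer solstice the noon Sun stood about 7.2° from the vertical at Alexandria while directly overhead at Syene, and since 7.2° is 1/50 of a full circle, the Earth's circumference is about 50 × 5,000 stadia (the assumed Alexandria–Syene distance) = 250,000 stadia.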
154
+
155
+ The Antikythera mechanism, a device for calculating the movements of planets, dates from about 80 BC, and was the first ancestor of the astronomical computer. It was discovered in an ancient shipwreck off the Greek island of Antikythera, between Kythera and Crete. The device became famous for its use of a differential gear, previously believed to have been invented in the 16th century, and the miniaturization and complexity of its parts, comparable to a clock made in the 18th century. The original mechanism is displayed in the Bronze collection of the National Archaeological Museum of Athens, accompanied by a replica.
156
+
157
+ The ancient Greeks also made important discoveries in the medical field. Hippocrates was a physician of the Classical period, and is considered one of the most outstanding figures in the history of medicine. He is referred to as the "father of medicine"[82][83] in recognition of his lasting contributions to the field as the founder of the Hippocratic school of medicine. This intellectual school revolutionized medicine in ancient Greece, establishing it as a discipline distinct from other fields that it had traditionally been associated with (notably theurgy and philosophy), thus making medicine a profession.[84][85]
158
+
159
+ The art of ancient Greece has exercised an enormous influence on the culture of many countries from ancient times to the present day, particularly in the areas of sculpture and architecture. In the West, the art of the Roman Empire was largely derived from Greek models. In the East, Alexander the Great's conquests initiated several centuries of exchange between Greek, Central Asian and Indian cultures, resulting in Greco-Buddhist art, with ramifications as far as Japan. Following the Renaissance in Europe, the humanist aesthetic and the high technical standards of Greek art inspired generations of European artists. Well into the 19th century, the classical tradition derived from Greece dominated the art of the western world.
160
+
161
+ Religion was a central part of ancient Greek life.[86] Though the Greeks of different cities and tribes worshipped similar gods, religious practices were not uniform and the gods were thought of differently in different places.[87] The Greeks were polytheistic, worshipping many gods, but as early as the sixth century BC a pantheon of twelve Olympians began to develop.[87] Greek religion was influenced by the practices of the Greeks' near eastern neighbours at least as early as the archaic period, and by the Hellenistic period this influence was seen in both directions.[88]
162
+
163
+ The most important religious act in ancient Greece was animal sacrifice, most commonly of sheep and goats.[89] Sacrifice was accompanied by public prayer,[90] and prayer and hymns were themselves a major part of ancient Greek religious life.[91]
164
+
165
+ The civilization of ancient Greece has been immensely influential on language, politics, educational systems, philosophy, science, and the arts. It became the Leitkultur of the Roman Empire to the point of marginalizing native Italic traditions. As Horace put it, "Graecia capta ferum victorem cepit" ("Captive Greece took captive her savage conqueror").
166
+
167
+ Via the Roman Empire, Greek culture came to be foundational to Western culture in general.
168
+ The Byzantine Empire inherited Classical Greek culture directly, without Latin intermediation, and the preservation of classical Greek learning in medieval Byzantine tradition further exerted strong influence on the Slavs and later on the Islamic Golden Age and the Western European Renaissance. A modern revival of Classical Greek learning took place in the Neoclassicism movement in 18th- and 19th-century Europe and the Americas.
en/2281.html.txt ADDED
@@ -0,0 +1,168 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ History of the world · Ancient maritime history Protohistory · Axial Age · Iron Age Historiography · Ancient literature Ancient warfare · Cradle of civilization
4
+
5
+ Ancient Greece (Greek: Ἑλλάς, romanized: Hellás) was a civilization belonging to a period of Greek history from the Greek Dark Ages of the 12th–9th centuries BC to the end of antiquity (c. AD 600). Immediately following this period was the beginning of the Early Middle Ages and the Byzantine time.[1] Roughly three centuries after the Late Bronze Age collapse of Mycenaean Greece, Greek urban poleis began to form in the 8th century BC, ushering in the Archaic period and colonization of the Mediterranean Basin. This was followed by the period of Classical Greece, an era that began with the Greco-Persian Wars, lasting from the 5th to 4th centuries BC. Due to the conquests by Alexander the Great of Macedon, Hellenistic civilization flourished from Central Asia to the western end of the Mediterranean Sea. The Hellenistic period came to an end with the conquests and annexations of the eastern Mediterranean world by the Roman Republic, which established the Roman province of Macedonia in Roman Greece, and later the province of Achaea during the Roman Empire.
6
+
7
+ Classical Greek culture, especially philosophy, had a powerful influence on ancient Rome, which carried a version of it to many parts of the Mediterranean Basin and Europe. For this reason, Classical Greece is generally considered to be the seminal culture which provided the foundation of modern Western culture and is considered the cradle of Western civilization.[2][3][4]
8
+
9
+ Classical antiquity in the Mediterranean region is commonly considered to have begun in the 8th century BC[5] (around the time of the earliest recorded poetry of Homer) and ended in the 6th century AD.
10
+
11
+ Classical antiquity in Greece was preceded by the Greek Dark Ages (c. 1200 – c. 800 BC), archaeologically characterised by the protogeometric and geometric styles of designs on pottery. Following the Dark Ages was the Archaic Period, beginning around the 8th century BC. The Archaic Period saw early developments in Greek culture and society which formed the basis for the Classical Period.[6] After the Archaic Period, the Classical Period in Greece is conventionally considered to have lasted from the Persian invasion of Greece in 480 until the death of Alexander the Great in 323.[7] The period is characterized by a style which was considered by later observers to be exemplary, i.e., "classical", as shown in the Parthenon, for instance. Politically, the Classical Period was dominated by Athens and the Delian League during the 5th century, but displaced by Spartan hegemony during the early 4th century BC, before power shifted to Thebes and the Boeotian League and finally to the League of Corinth led by Macedon. This period saw the Greco-Persian Wars and the Rise of Macedon.
12
+
13
+ Following the Classical period was the Hellenistic period (323–146 BC), during which Greek culture and power expanded into the Near and Middle East. This period begins with the death of Alexander and ends with the Roman conquest. Roman Greece is usually considered to be the period between Roman victory over the Corinthians at the Battle of Corinth in 146 BC and the establishment of Byzantium by Constantine as the capital of the Roman Empire in AD 330. Finally, Late Antiquity refers to the period of Christianization during the later 4th to early 6th centuries AD, sometimes taken to be complete with the closure of the Academy of Athens by Justinian I in 529.[8]
14
+
15
+ The historical period of ancient Greece is unique in world history as the first period attested directly in proper historiography, while earlier ancient history or proto-history is known by much more circumstantial evidence, such as annals or king lists, and pragmatic epigraphy.
16
+
17
+ Herodotus is widely known as the "father of history": his Histories are eponymous of the entire field. Written between the 450s and 420s BC, Herodotus' work reaches about a century into the past, discussing 6th century historical figures such as Darius I of Persia, Cambyses II and Psamtik III, and alluding to some 8th century ones such as Candaules.
18
+
19
+ Herodotus was succeeded by authors such as Thucydides, Xenophon, Demosthenes, Plato and Aristotle. Most of these authors were either Athenian or pro-Athenian, which is why far more is known about the history and politics of Athens than those of many other cities.
20
+ Their scope is further limited by a focus on political, military and diplomatic history, ignoring economic and social history.[9]
21
+
22
+ In the 8th century BC, Greece began to emerge from the Dark Ages which followed the fall of the Mycenaean civilization. Literacy had been lost and Mycenaean script forgotten, but the Greeks adopted the Phoenician alphabet, modifying it to create the Greek alphabet. Objects with Phoenician writing on them may have been available in Greece from the 9th century BC, but the earliest evidence of Greek writing comes from graffiti on Greek pottery from the mid-8th century.[10] Greece was divided into many small self-governing communities, a pattern largely dictated by Greek geography: every island, valley and plain is cut off from its neighbors by the sea or mountain ranges.[11]
23
+
24
+ The Lelantine War (c. 710 – c. 650 BC) is the earliest documented war of the ancient Greek period. It was fought between the important poleis (city-states) of Chalcis and Eretria over the fertile Lelantine plain of Euboea. Both cities seem to have suffered a decline as result of the long war, though Chalcis was the nominal victor.
25
+
26
+ A mercantile class arose in the first half of the 7th century BC, shown by the introduction of coinage in about 680 BC.[12] This seems to have introduced tension to many city-states. The aristocratic regimes which generally governed the poleis were threatened by the new-found wealth of merchants, who in turn desired political power. From 650 BC onwards, the aristocracies had to fight not to be overthrown and replaced by populist tyrants.[a]
27
+
28
+ A growing population and a shortage of land also seem to have created internal strife between the poor and the rich in many city-states. In Sparta, the Messenian Wars resulted in the conquest of Messenia and enserfment of the Messenians, beginning in the latter half of the 8th century BC, an act without precedent in ancient Greece. This practice allowed a social revolution to occur.[15] The subjugated population, thenceforth known as helots, farmed and labored for Sparta, whilst every Spartan male citizen became a soldier of the Spartan Army in a permanently militarized state. Even the elite were obliged to live and train as soldiers; this commonality between rich and poor citizens served to defuse the social conflict. These reforms, attributed to Lycurgus of Sparta, were probably complete by 650 BC.
29
+
30
+ Athens suffered a land and agrarian crisis in the late 7th century BC, again resulting in civil strife. The Archon (chief magistrate) Draco made severe reforms to the law code in 621 BC (hence "draconian"), but these failed to quell the conflict. Eventually the moderate reforms of Solon (594 BC), improving the lot of the poor but firmly entrenching the aristocracy in power, gave Athens some stability.
31
+
32
+ By the 6th century BC several cities had emerged as dominant in Greek affairs: Athens, Sparta, Corinth, and Thebes. Each of them had brought the surrounding rural areas and smaller towns under their control, and Athens and Corinth had become major maritime and mercantile powers as well.
33
+
34
+ Rapidly increasing population in the 8th and 7th centuries BC had resulted in emigration of many Greeks to form colonies in Magna Graecia (Southern Italy and Sicily), Asia Minor and further afield. The emigration effectively ceased in the 6th century BC by which time the Greek world had, culturally and linguistically, become much larger than the area of present-day Greece. Greek colonies were not politically controlled by their founding cities, although they often retained religious and commercial links with them.
35
+
36
+ The emigration process also determined a long series of conflicts between the Greek cities of Sicily, especially Syracuse, and the Carthaginians. These conflicts lasted from 600 BC to 265 BC when the Roman Republic entered into an alliance with the Mamertines to fend off the hostilities by the new tyrant of Syracuse, Hiero II and then the Carthaginians. This way Rome became the new dominant power against the fading strength of the Sicilian Greek cities and the Carthaginian supremacy in the region. One year later the First Punic War erupted.
37
+
38
+ In this period, there was huge economic development in Greece, and also in its overseas colonies which experienced a growth in commerce and manufacturing. There was a great improvement in the living standards of the population. Some studies estimate that the average size of the Greek household, in the period from 800 BC to 300 BC, increased five times, which indicates[citation needed] a large increase in the average income of the population.
39
+
40
+ In the second half of the 6th century BC, Athens fell under the tyranny of Peisistratos and then of his sons Hippias and Hipparchos. However, in 510 BC, at the instigation of the Athenian aristocrat Cleisthenes, the Spartan king Cleomenes I helped the Athenians overthrow the tyranny. Afterwards, Sparta and Athens promptly turned on each other, at which point Cleomenes I installed Isagoras as a pro-Spartan archon. Eager to prevent Athens from becoming a Spartan puppet, Cleisthenes responded by proposing to his fellow citizens that Athens undergo a revolution: that all citizens share in political power, regardless of status: that Athens become a "democracy". So enthusiastically did the Athenians take to this idea that, having overthrown Isagoras and implemented Cleisthenes's reforms, they were easily able to repel a Spartan-led three-pronged invasion aimed at restoring Isagoras.[16] The advent of the democracy cured many of the ills of Athens and led to a 'golden age' for the Athenians.
41
+
42
+ In 499 BC, the Ionian city states under Persian rule rebelled against the Persian-supported tyrants that ruled them.[17] Supported by troops sent from Athens and Eretria, they advanced as far as Sardis and burnt the city down, before being driven back by a Persian counterattack.[18] The revolt continued until 494, when the rebelling Ionians were defeated.[19] Darius did not forget that the Athenians had assisted the Ionian revolt, however, and in 490 he assembled an armada to conquer Athens.[20] Despite being heavily outnumbered, the Athenians—supported by their Plataean allies—defeated the Persian forces at the Battle of Marathon, and the Persian fleet withdrew.[21]
43
+
44
+ Ten years later, a second invasion was launched by Darius' son Xerxes.[22] The city-states of northern and central Greece submitted to the Persian forces without resistance, but a coalition of 31 Greek city states, including Athens and Sparta, determined to resist the Persian invaders.[23] At the same time, Greek Sicily was invaded by a Carthaginian force.[24] In 480 BC, the first major battle of the invasion was fought at Thermopylae, where a small force of Greeks, led by three hundred Spartans, held a crucial pass into the heart of Greece for several days; at the same time Gelon, tyrant of Syracuse, defeated the Carthaginian invasion at the Battle of Himera.[25]
45
+
46
+ The Persians were defeated by a primarily Athenian naval force at the Battle of Salamis, and in 479 defeated on land at the Battle of Plataea.[26] The alliance against Persia continued, initially led by the Spartan Pausanias but from 477 by Athens,[27] and by 460 Persia had been driven out of the Aegean.[28] During this period of campaigning, the Delian league gradually transformed from a defensive alliance of Greek states into an Athenian empire, as Athens' growing naval power enabled it to compel other league states to comply with its policies.[29] Athens ended its campaigns against Persia in 450 BC, after a disastrous defeat in Egypt in 454 BC, and the death of Cimon in action against the Persians on Cyprus in 450.[30]
47
+
48
While Athenian activity against the Persian empire was ending, however, conflict between Sparta and Athens was increasing. Sparta was suspicious of the growing Athenian power funded by the Delian League, and tensions rose when Sparta offered aid to reluctant members of the League to rebel against Athenian domination. These tensions were exacerbated in 462 BC, when Athens sent a force to aid Sparta in overcoming a helot revolt, only for its aid to be rejected by the Spartans.[31] In the 450s, Athens took control of Boeotia, and won victories over Aegina and Corinth.[32] However, Athens failed to win a decisive victory, and in 447 BC lost Boeotia again.[33] Athens and Sparta signed the Thirty Years' Peace in the winter of 446/5 BC, ending the conflict.[34]

Despite the peace of 446/5, Athenian relations with Sparta declined again in the 430s, and in 431 BC war broke out once again.[35] The first phase of the war is traditionally seen as a series of annual invasions of Attica by Sparta, which made little progress, while Athens was successful against the Corinthian empire in the north-west of Greece and in defending its own empire, despite suffering from plague and Spartan invasion.[36] The turning point of this phase of the war is usually seen as the Athenian victories at Pylos and Sphakteria.[37] Sparta sued for peace, but the Athenians rejected the proposal.[38] The Athenian failure to regain control of Boeotia at Delium, and Brasidas' successes in the north of Greece in 424 BC, improved Sparta's position after Sphakteria.[39] After the deaths of Cleon and Brasidas, the strongest objectors to peace on the Athenian and Spartan sides respectively, a peace treaty was agreed in 421 BC.[40]

The peace did not last, however. In 418 BC an alliance between Athens and Argos was defeated by Sparta at Mantinea.[41] In 415 BC Athens launched a naval expedition against Sicily;[42] the expedition ended in disaster, with almost the entire army killed.[43] Soon after the Athenian defeat in Syracuse, Athens' Ionian allies began to rebel against the Delian League, while at the same time Persia began to once again involve itself in Greek affairs on the Spartan side.[44] Initially the Athenian position continued to be relatively strong, with important victories at Cyzicus in 410 and Arginusae in 406.[45] However, in 405 BC the Spartans defeated Athens in the Battle of Aegospotami and began to blockade Athens' harbour;[46] with no grain supply and in danger of starvation, Athens sued for peace, agreeing to surrender its fleet and join the Spartan-led Peloponnesian League.[47]

Greece thus entered the 4th century BC under a Spartan hegemony, but it was clear from the start that this hegemony was weak. A demographic crisis meant that Sparta was overstretched, and by 395 BC Athens, Argos, Thebes, and Corinth felt able to challenge Spartan dominance, resulting in the Corinthian War (395–387 BC). Another war of stalemates, it ended with the status quo restored, after the threat of Persian intervention on behalf of the Spartans.

The Spartan hegemony lasted another 16 years, until, when attempting to impose their will on the Thebans, the Spartans were defeated at Leuctra in 371 BC. The Theban general Epaminondas then led Theban troops into the Peloponnese, whereupon other city-states defected from the Spartan cause. The Thebans were thus able to march into Messenia and free the population.

Deprived of land and its serfs, Sparta declined to a second-rank power. The Theban hegemony thus established was short-lived: at the Battle of Mantinea in 362 BC, Thebes lost its key leader, Epaminondas, and much of its manpower, even though it was victorious in battle. In fact, such were the losses to all the great city-states at Mantinea that none could establish dominance in the aftermath.

The weakened state of the heartland of Greece coincided with the Rise of Macedon, led by Philip II. In twenty years, Philip had unified his kingdom, expanded it north and west at the expense of Illyrian tribes, and then conquered Thessaly and Thrace. His success stemmed from his innovative reforms to the Macedonian army. Philip intervened repeatedly in the affairs of the southern city-states, culminating in his invasion of 338 BC.

Decisively defeating an allied army of Thebes and Athens at the Battle of Chaeronea (338 BC), he became de facto hegemon of all of Greece except Sparta. He compelled the majority of the city-states to join the League of Corinth, allying them to him and preventing them from warring with each other. Philip then entered into war against the Achaemenid Empire, but was assassinated by Pausanias of Orestis early in the conflict.

Alexander the Great, son and successor of Philip, continued the war. Alexander defeated Darius III of Persia and completely destroyed the Achaemenid Empire, annexing it to Macedon and earning himself the epithet 'the Great'. When Alexander died in 323 BC, Greek power and influence were at their zenith. However, there had been a fundamental shift away from the fierce independence and classical culture of the poleis—and instead towards the developing Hellenistic culture.

The Hellenistic period lasted from 323 BC, which marked the end of the wars of Alexander the Great, to the annexation of Greece by the Roman Republic in 146 BC. Although the establishment of Roman rule did not break the continuity of Hellenistic society and culture, which remained essentially unchanged until the advent of Christianity, it did mark the end of Greek political independence.

After the death of Alexander, his empire was, after considerable conflict, divided among his generals, resulting in the Ptolemaic Kingdom (Egypt and adjoining North Africa), the Seleucid Empire (the Levant, Mesopotamia and Persia) and the Antigonid dynasty (Macedonia). In the intervening period, the poleis of Greece were able to wrest back some of their freedom, although they remained nominally subject to the Macedonian Kingdom.

During the Hellenistic period, the importance of "Greece proper" (that is, the territory of modern Greece) within the Greek-speaking world declined sharply. The great centers of Hellenistic culture were Alexandria and Antioch, capitals of the Ptolemaic Kingdom and the Seleucid Empire, respectively.

The conquests of Alexander had numerous consequences for the Greek city-states. They greatly widened the horizons of the Greeks and led to a steady emigration, particularly of the young and ambitious, to the new Greek empires in the east.[48] Many Greeks migrated to Alexandria, Antioch and the many other new Hellenistic cities founded in Alexander's wake, as far away as what are now Afghanistan and Pakistan, where the Greco-Bactrian Kingdom and the Indo-Greek Kingdom survived until the end of the first century BC.

The city-states within Greece formed themselves into two leagues: the Achaean League (including Thebes, Corinth and Argos) and the Aetolian League (including Sparta and Athens). For much of the period until the Roman conquest, these leagues were usually at war with each other, and/or allied to different sides in the conflicts between the Diadochi (the successor states to Alexander's empire).

The Antigonid Kingdom became involved in a war with the Roman Republic in the late 3rd century BC. Although the First Macedonian War was inconclusive, the Romans, in typical fashion, continued to make war on Macedon until it was completely absorbed into the Roman Republic (by 149 BC). In the east, the unwieldy Seleucid Empire gradually disintegrated, although a rump survived until 64 BC, whilst the Ptolemaic Kingdom continued in Egypt until 30 BC, when it too was conquered by the Romans. The Aetolian League grew wary of Roman involvement in Greece and sided with the Seleucids in the Roman–Seleucid War; when the Romans were victorious, the league was effectively absorbed into the Republic. Although the Achaean League outlasted both the Aetolian League and Macedon, it was also soon defeated and absorbed by the Romans in 146 BC, bringing an end to the independence of all of Greece.

The Greek peninsula came under Roman rule during the 146 BC conquest of Greece after the Battle of Corinth. Macedonia became a Roman province, while southern Greece came under the surveillance of Macedonia's prefect; however, some Greek poleis managed to maintain a partial independence and avoid taxation. The Aegean islands were added to this territory in 133 BC. Athens and other Greek cities revolted in 88 BC, and the revolt was crushed by the Roman general Sulla. The Roman civil wars devastated the land even further, until Augustus organized the peninsula as the province of Achaea in 27 BC.

Greece was a key eastern province of the Roman Empire, as Roman culture had long been, in fact, Greco-Roman. The Greek language served as a lingua franca in the East and in Italy, and many Greek intellectuals, such as Galen, did most of their work in Rome.

The territory of Greece is mountainous, and as a result, ancient Greece consisted of many smaller regions, each with its own dialect, cultural peculiarities, and identity. Regionalism and regional conflicts were a prominent feature of ancient Greece. Cities tended to be located in valleys between mountains, or on coastal plains, and dominated a certain area around them.

In the south lay the Peloponnese, itself consisting of the regions of Laconia (southeast), Messenia (southwest), Elis (west), Achaia (north), Korinthia (northeast), Argolis (east), and Arcadia (center). These names survive to the present day as regional units of modern Greece, though with somewhat different boundaries. Mainland Greece to the north, nowadays known as Central Greece, consisted of Aetolia and Acarnania in the west, Locris, Doris, and Phocis in the center, while in the east lay Boeotia, Attica, and Megaris. To the northeast lay Thessaly, while Epirus lay to the northwest. Epirus stretched from the Ambracian Gulf in the south to the Ceraunian mountains and the Aoos river in the north, and consisted of Chaonia (north), Molossia (center), and Thesprotia (south). In the northeast corner was Macedonia,[49] originally consisting of Lower Macedonia and its regions, such as Elimeia, Pieria, and Orestis. Around the time of Alexander I of Macedon, the Argead kings of Macedon started to expand into Upper Macedonia, lands inhabited by independent Macedonian tribes like the Lyncestae and the Elmiotae, and to the west, beyond the Axius river, into Eordaia, Bottiaea, Mygdonia, and Almopia, regions settled by Thracian tribes.[50] To the north of Macedonia lay various non-Greek peoples such as the Paeonians due north, the Thracians to the northeast, and the Illyrians, with whom the Macedonians were frequently in conflict, to the northwest. Chalcidice was settled early on by southern Greek colonists and was considered part of the Greek world, while from the late 2nd millennium BC substantial Greek settlement also occurred on the eastern shores of the Aegean, in Anatolia.

During the Archaic period, the population of Greece grew beyond the capacity of its limited arable land: according to one estimate, the population of ancient Greece increased by a factor of more than ten between 800 BC and 400 BC, from about 800,000 to an estimated total of 10 to 13 million.[51]

From about 750 BC the Greeks began 250 years of expansion, settling colonies in all directions. To the east, the Aegean coast of Asia Minor was colonized first, followed by Cyprus and the coasts of Thrace, the Sea of Marmara and the south coast of the Black Sea.

Eventually Greek colonization reached as far northeast as present-day Ukraine and Russia (Taganrog). To the west, the coasts of Illyria, Sicily and Southern Italy were settled, followed by Southern France, Corsica, and even northeastern Spain. Greek colonies were also founded in Egypt and Libya.

Modern Syracuse, Naples, Marseille and Istanbul had their beginnings as the Greek colonies Syracusae (Συράκουσαι), Neapolis (Νεάπολις), Massalia (Μασσαλία) and Byzantion (Βυζάντιον). These colonies played an important role in the spread of Greek influence throughout Europe and also aided in the establishment of long-distance trading networks between the Greek city-states, boosting the economy of ancient Greece.

Ancient Greece consisted of several hundred relatively independent city-states (poleis). This was a situation unlike that in most other contemporary societies, which were either tribal or kingdoms ruling over relatively large territories. Undoubtedly the geography of Greece—divided and sub-divided by hills, mountains, and rivers—contributed to the fragmentary nature of ancient Greece. On the one hand, the ancient Greeks had no doubt that they were "one people"; they had the same religion, same basic culture, and same language. Furthermore, the Greeks were very aware of their tribal origins; Herodotus was able to extensively categorise the city-states by tribe. Yet, although these higher-level relationships existed, they seem to have rarely had a major role in Greek politics. The independence of the poleis was fiercely defended; unification was something rarely contemplated by the ancient Greeks. Even when, during the second Persian invasion of Greece, a group of city-states allied themselves to defend Greece, the vast majority of poleis remained neutral, and after the Persian defeat, the allies quickly returned to infighting.[53]

Thus, the major peculiarities of the ancient Greek political system were its fragmentary nature (which does not seem to have had a tribal origin) and its particular focus on urban centers within otherwise tiny states. The peculiarities of the Greek system are further evidenced by the colonies that the Greeks set up throughout the Mediterranean Sea, which, though they might count a certain Greek polis as their 'mother' (and remain sympathetic to her), were completely independent of the founding city.

Inevitably, smaller poleis might be dominated by larger neighbors, but conquest or direct rule by another city-state appears to have been quite rare. Instead the poleis grouped themselves into leagues, membership of which was in a constant state of flux. Later in the Classical period, the leagues would become fewer and larger and be dominated by one city (particularly Athens, Sparta and Thebes); often poleis would be compelled to join under threat of war (or as part of a peace treaty). Even after Philip II of Macedon "conquered" the heartlands of ancient Greece, he did not attempt to annex the territory or unify it into a new province, but simply compelled most of the poleis to join his own Corinthian League.

Initially many Greek city-states seem to have been petty kingdoms; there was often a city official carrying out some residual, ceremonial functions of the king (basileus), e.g., the archon basileus in Athens.[54] However, by the Archaic period and the first historical consciousness, most had already become aristocratic oligarchies. It is unclear exactly how this change occurred. For instance, in Athens, the kingship had been reduced to a hereditary, lifelong chief magistracy (archon) by c. 1050 BC; by 753 BC this had become a decennial, elected archonship; and finally, by 683 BC, an annually elected archonship. Through each stage more power would have been transferred to the aristocracy as a whole, and away from a single individual.

Inevitably, the domination of politics and the concomitant aggregation of wealth by small groups of families was apt to cause social unrest in many poleis. In many cities a tyrant (not in the modern sense of a repressive autocrat) would at some point seize control and govern according to his own will; often a populist agenda would help sustain him in power. In a system wracked with class conflict, government by a 'strongman' was often the best solution.

Athens fell under a tyranny in the second half of the 6th century BC. When this tyranny was ended, the Athenians founded the world's first democracy as a radical solution to prevent the aristocracy regaining power. A citizens' assembly (the Ecclesia), for the discussion of city policy, had existed since the reforms of Draco in 621 BC; all citizens were permitted to attend after the reforms of Solon (early 6th century), but the poorest citizens could not address the assembly or run for office. With the establishment of the democracy, the assembly became the de jure mechanism of government; all citizens had equal privileges in the assembly. However, non-citizens, such as metics (foreigners living in Athens) or slaves, had no political rights at all.

After the rise of the democracy in Athens, other city-states founded democracies. However, many retained more traditional forms of government. As so often in other matters, Sparta was a notable exception to the rest of Greece, ruled through the whole period not by one but by two hereditary monarchs, a form of diarchy. The Kings of Sparta belonged to the Agiads and the Eurypontids, descendants respectively of Eurysthenes and Procles. Both dynasties' founders were believed to be twin sons of Aristodemus, a Heraclid ruler. However, the powers of these kings were held in check by both a council of elders (the Gerousia) and magistrates specifically appointed to watch over the kings (the Ephors).

Only free, land-owning, native-born men could be citizens entitled to the full protection of the law in a city-state. In most city-states, unlike the situation in Rome, social prominence did not confer special rights. Sometimes families controlled public religious functions, but this ordinarily did not give any extra power in the government. In Athens, the population was divided into four social classes based on wealth. People could change classes if they made more money. In Sparta, all male citizens were called homoioi, meaning "peers". However, Spartan kings, who served as the city-state's dual military and religious leaders, came from two families.[citation needed]

Slaves had no power or status. They had the right to have a family and own property, subject to their master's goodwill and permission, but they had no political rights. By 600 BC chattel slavery had spread in Greece. By the 5th century BC slaves made up one-third of the total population in some city-states. Between forty and eighty per cent of the population of Classical Athens were slaves.[55] Slaves outside of Sparta almost never revolted because they comprised too many nationalities and were too scattered to organize. However, unlike later Western cultures, the ancient Greeks did not think in terms of race.[56]

Most families owned slaves as household servants and laborers, and even poor families might have owned a few slaves. Owners were not allowed to beat or kill their slaves. Owners often promised to free slaves in the future to encourage them to work hard. Unlike in Rome, freedmen did not become citizens. Instead, they were mixed into the population of metics, which included people from foreign countries or other city-states who were officially allowed to live in the state.

City-states also legally owned slaves. These public slaves had a larger measure of independence than slaves owned by families, living on their own and performing specialized tasks. In Athens, public slaves were trained to look out for counterfeit coinage, while temple slaves acted as servants of the temple's deity, and Scythian slaves were employed in Athens as a police force, corralling citizens to political functions.

Sparta had a special class of slaves called helots. Helots were Messenians enslaved by the state during the Messenian Wars and assigned to families, with whom they were forced to stay. Helots raised food and did household chores so that women could concentrate on raising strong children while men could devote their time to training as hoplites. Their masters treated them harshly, and helots revolted against their masters several times before winning their freedom in 370/69 BC.[57]

For most of Greek history, education was private, except in Sparta, though during the Hellenistic period some city-states established public schools. Only wealthy families could afford a teacher. Boys learned how to read, write and quote literature. They also learned to sing and play one musical instrument, and were trained as athletes for military service. They studied not for a job but to become effective citizens. Girls also learned to read, write and do simple arithmetic so they could manage the household. They almost never received education after childhood.[citation needed]

Boys went to school at the age of seven, or to the barracks if they lived in Sparta. The three types of teachers were the grammatistes for arithmetic, the kitharistes for music and dancing, and the paedotribae for sports.

Boys from wealthy families attending private school lessons were taken care of by a paidagogos, a household slave selected for this task who accompanied the boy during the day. Classes were held in teachers' private houses and included reading, writing, mathematics, singing, and playing the lyre and flute. When a boy turned 12, his schooling began to include sports such as wrestling, running, and throwing the discus and javelin. In Athens some older youths attended an academy for the finer disciplines such as culture, sciences, music, and the arts. Schooling ended at age 18, followed by military training in the army, usually for one or two years.[58]

Only a small number of boys continued their education after childhood, as in the Spartan agoge. A crucial part of a wealthy teenager's education was a mentorship with an elder, which in a few places and times may have included pederasty.[citation needed] The teenager learned by watching his mentor talking about politics in the agora, helping him perform his public duties, exercising with him in the gymnasium and attending symposia with him. The richest students continued their education by studying with famous teachers. Some of Athens' greatest such schools included the Lyceum (the so-called Peripatetic school founded by Aristotle of Stageira) and the Platonic Academy (founded by Plato of Athens). The education system of the wealthy ancient Greeks is also called paideia.[citation needed]

At its economic height, in the 5th and 4th centuries BC, ancient Greece was the most advanced economy in the world. According to some economic historians, it was one of the most advanced pre-industrial economies. This is demonstrated by the average daily wage of the Greek worker, which was, in terms of wheat, about 12 kg. This was more than 3 times the average daily wage of an Egyptian worker during the Roman period, about 3.75 kg.[59]

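The wage comparison is easy to sanity-check. A minimal sketch in Python, using only the two figures quoted above (the variable names are illustrative):

    # Average daily wages, expressed in kilograms of wheat (figures as quoted above).
    greek_wage_kg = 12.0      # Classical Greek worker
    egyptian_wage_kg = 3.75   # Egyptian worker during the Roman period

    ratio = greek_wage_kg / egyptian_wage_kg
    print(f"Greek daily wage is {ratio:.1f}x the Egyptian daily wage")  # prints 3.2x

The ratio works out to 3.2, consistent with the "more than 3 times" claim.
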
At least in the Archaic period, the fragmentary nature of ancient Greece, with many competing city-states, increased the frequency of conflict but conversely limited the scale of warfare. Unable to maintain professional armies, the city-states relied on their own citizens to fight. This inevitably reduced the potential duration of campaigns, as citizens would need to return to their own professions (especially in the case of farmers). Campaigns would therefore often be restricted to summer. When battles occurred, they were usually set-piece engagements intended to be decisive. Casualties were slight compared to later battles, rarely amounting to more than 5% of the losing side, but the slain often included the most prominent citizens and generals, who led from the front.

The scale and scope of warfare in ancient Greece changed dramatically as a result of the Greco-Persian Wars. To fight the enormous armies of the Achaemenid Empire was effectively beyond the capabilities of a single city-state. The eventual triumph of the Greeks was achieved by alliances of city-states (the exact composition changing over time), allowing the pooling of resources and division of labor. Although alliances between city-states occurred before this time, nothing on this scale had been seen before. The rise of Athens and Sparta as pre-eminent powers during this conflict led directly to the Peloponnesian War, which saw further development of the nature of warfare, strategy and tactics. Fought between leagues of cities dominated by Athens and Sparta, the war drew on increased manpower and financial resources, which enlarged its scale and allowed the diversification of warfare. Set-piece battles during the Peloponnesian War proved indecisive; instead there was increased reliance on strategies of attrition, naval battles, blockades and sieges. These changes greatly increased the number of casualties and the disruption of Greek society.

Athens had one of the largest war fleets in ancient Greece: over 200 triremes, each powered by 170 oarsmen seated in three rows on each side of the ship. The city could afford such a large fleet—it had over 34,000 oarsmen—because it owned rich silver mines that were worked by slaves.

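The fleet figures above are internally consistent, as a quick check shows (a sketch using only the numbers given in the text):

    # Athenian fleet arithmetic, using the figures given above.
    triremes = 200
    oarsmen_per_trireme = 170   # three banks of oarsmen on each side

    total_oarsmen = triremes * oarsmen_per_trireme
    print(total_oarsmen)        # 34000, matching the "over 34,000 oarsmen" figure
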
According to Josiah Ober, Greek city-states faced approximately a one-in-three chance of destruction during the Archaic and Classical periods.[60]

Ancient Greek philosophy focused on the role of reason and inquiry. In many ways, it had an important influence on modern philosophy, as well as modern science. Clear, unbroken lines of influence lead from ancient Greek and Hellenistic philosophers, through medieval Muslim philosophers and Islamic scientists, to the European Renaissance and Enlightenment and the secular sciences of the modern day.

Neither reason nor inquiry began with the Greeks. Defining the difference between the Greek quest for knowledge and the quests of the elder civilizations, such as the ancient Egyptians and Babylonians, has long been a topic of study by theorists of civilization.

Some of the best-known philosophers of ancient Greece were Plato and Socrates. Writings such as Plato's Republic have provided valuable insight into ancient Greek society.

The earliest Greek literature was poetry, and was composed for performance rather than private consumption.[61] The earliest known Greek poet is Homer, although he was certainly part of an existing tradition of oral poetry.[62] Homer's poetry, though it was developed around the same time that the Greeks developed writing, would have been composed orally; the first poet known with certainty to have composed his work in writing was Archilochus, a lyric poet from the mid-seventh century BC.[63] Tragedy developed around the end of the archaic period, taking elements from across the pre-existing genres of late archaic poetry.[64] Towards the beginning of the classical period, comedy began to develop—the earliest date associated with the genre is 486 BC, when a competition for comedy became an official event at the City Dionysia in Athens, though the first preserved ancient comedy is Aristophanes' Acharnians, produced in 425 BC.[65]

Like poetry, Greek prose had its origins in the archaic period, and the earliest writers of Greek philosophy, history, and medical literature all date to the sixth century BC.[66] Prose first emerged as the writing style adopted by the presocratic philosophers Anaximander and Anaximenes—though Thales of Miletus, considered the first Greek philosopher, apparently wrote nothing.[67] Prose as a genre reached maturity in the classical era,[68] and the major Greek prose genres—philosophy, history, rhetoric, and dialogue—developed in this period.[69]

The Hellenistic period saw the literary centre of the Greek world move from Athens, where it had been in the classical period, to Alexandria. At the same time, other Hellenistic kings such as the Antigonids and the Attalids were patrons of scholarship and literature, turning Pella and Pergamon respectively into cultural centres.[70] It was thanks to this cultural patronage by Hellenistic kings, and especially to the Museum at Alexandria, that so much ancient Greek literature has survived.[71] The Library of Alexandria, part of the Museum, had the previously unenvisaged aim of collecting together copies of all known authors in Greek. Almost all of the surviving non-technical Hellenistic literature is poetry,[72] and Hellenistic poetry tended to be highly intellectual,[73] blending different genres and traditions, and avoiding linear narratives.[74] The Hellenistic period also saw a shift in the way literature was consumed—while in the archaic and classical periods literature had typically been experienced in public performance, in the Hellenistic period it was more commonly read privately.[75] At the same time, Hellenistic poets began to write for private, rather than public, consumption.[76]

With Octavian's victory at Actium in 31 BC, Rome began to become a major centre of Greek literature, as important Greek authors such as Strabo and Dionysius of Halicarnassus came to Rome.[77] The period of greatest innovation in Greek literature under Rome was the "long second century" from approximately AD 80 to around AD 230.[78] This innovation was especially marked in prose, with the development of the novel and a revival of prominence for display oratory both dating to this period.[79]

Music was present almost universally in Greek society, from marriages and funerals to religious ceremonies, theatre, folk music and the ballad-like reciting of epic poetry. There are significant fragments of actual Greek musical notation, as well as many literary references to ancient Greek music. Greek art depicts musical instruments and dance. The word music derives from the name of the Muses, the daughters of Zeus who were patron goddesses of the arts.

Ancient Greek mathematics contributed many important developments to the field, including the basic rules of geometry, the idea of formal mathematical proof, and discoveries in number theory, mathematical analysis, and applied mathematics; the Greeks also came close to establishing integral calculus. The discoveries of several Greek mathematicians, including Pythagoras, Euclid, and Archimedes, are still used in mathematical teaching today.

The Greeks developed astronomy, which they treated as a branch of mathematics, to a highly sophisticated level. The first geometrical, three-dimensional models to explain the apparent motion of the planets were developed in the 4th century BC by Eudoxus of Cnidus and Callippus of Cyzicus. Their younger contemporary Heraclides Ponticus proposed that the Earth rotates around its axis. In the 3rd century BC Aristarchus of Samos was the first to suggest a heliocentric system. Archimedes, in his treatise The Sand Reckoner, revived Aristarchus' hypothesis that "the fixed stars and the Sun remain unmoved, while the Earth revolves about the Sun on the circumference of a circle". Otherwise, only fragmentary descriptions of Aristarchus' idea survive.[80] Eratosthenes, using the angles of shadows cast at widely separated locations, estimated the circumference of the Earth with great accuracy.[81] In the 2nd century BC Hipparchus of Nicaea made a number of contributions, including the first measurement of precession and the compilation of the first star catalogue, in which he proposed the modern system of apparent magnitudes.

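Eratosthenes' method reduces to a single proportion: if two places on roughly the same meridian lie a distance d apart, and the noon shadow angles measured there on the same date differ by Δθ, the Earth's circumference C follows directly. A worked sketch with the traditionally reported values (a difference of about 7.2° between Alexandria and Syene, taken to be 5,000 stades apart; these figures come from the traditional account rather than from the text above):

    C = \frac{360^\circ}{\Delta\theta} \cdot d \approx \frac{360^\circ}{7.2^\circ} \times 5{,}000\ \text{stades} = 50 \times 5{,}000\ \text{stades} = 250{,}000\ \text{stades}

How accurate this is depends on the length assumed for the stade; with the commonly cited 157.5 m stade it comes to roughly 39,400 km, close to the modern value of about 40,000 km.
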
The Antikythera mechanism, a device for calculating the movements of planets, dates from about 80 BC and was the first ancestor of the astronomical computer. It was discovered in an ancient shipwreck off the Greek island of Antikythera, between Kythera and Crete. The device became famous for its use of a differential gear, previously believed to have been invented in the 16th century, and for the miniaturization and complexity of its parts, comparable to a clock made in the 18th century. The original mechanism is displayed in the Bronze collection of the National Archaeological Museum of Athens, accompanied by a replica.

The ancient Greeks also made important discoveries in the medical field. Hippocrates, a physician of the Classical period, is considered one of the most outstanding figures in the history of medicine. He is referred to as the "father of medicine"[82][83] in recognition of his lasting contributions to the field as the founder of the Hippocratic school of medicine. This intellectual school revolutionized medicine in ancient Greece, establishing it as a discipline distinct from other fields with which it had traditionally been associated (notably theurgy and philosophy), thus making medicine a profession.[84][85]

The art of ancient Greece has exercised an enormous influence on the culture of many countries from ancient times to the present day, particularly in the areas of sculpture and architecture. In the West, the art of the Roman Empire was largely derived from Greek models. In the East, Alexander the Great's conquests initiated several centuries of exchange between Greek, Central Asian and Indian cultures, resulting in Greco-Buddhist art, with ramifications as far as Japan. Following the Renaissance in Europe, the humanist aesthetic and the high technical standards of Greek art inspired generations of European artists. Well into the 19th century, the classical tradition derived from Greece dominated the art of the western world.

Religion was a central part of ancient Greek life.[86] Though the Greeks of different cities and tribes worshipped similar gods, religious practices were not uniform and the gods were thought of differently in different places.[87] The Greeks were polytheistic, worshipping many gods, but as early as the sixth century BC a pantheon of twelve Olympians began to develop.[87] Greek religion was influenced by the practices of the Greeks' Near Eastern neighbours at least as early as the archaic period, and by the Hellenistic period this influence ran in both directions.[88]

The most important religious act in ancient Greece was animal sacrifice, most commonly of sheep and goats.[89] Sacrifice was accompanied by public prayer,[90] and prayer and hymns were themselves a major part of ancient Greek religious life.[91]

The civilization of ancient Greece has been immensely influential on language, politics, educational systems, philosophy, science, and the arts. It became the Leitkultur of the Roman Empire to the point of marginalizing native Italic traditions. As Horace put it, "Graecia capta ferum victorem cepit et artes intulit agresti Latio" ("Captive Greece took captive her savage conqueror and brought the arts to rustic Latium").

Via the Roman Empire, Greek culture came to be foundational to Western culture in general.
The Byzantine Empire inherited Classical Greek culture directly, without Latin intermediation, and the preservation of classical Greek learning in the medieval Byzantine tradition further exerted a strong influence on the Slavs and later on the Islamic Golden Age and the Western European Renaissance. A modern revival of Classical Greek learning took place in the Neoclassicism movement in 18th- and 19th-century Europe and the Americas.

en/2282.html.txt ADDED
@@ -0,0 +1,168 @@

Ancient Greece (Greek: Ἑλλάς, romanized: Hellás) was a civilization belonging to a period of Greek history from the Greek Dark Ages of the 12th–9th centuries BC to the end of antiquity (c. AD 600). Immediately following this period was the beginning of the Early Middle Ages and the Byzantine era.[1] Roughly three centuries after the Late Bronze Age collapse of Mycenaean Greece, Greek urban poleis began to form in the 8th century BC, ushering in the Archaic period and the colonization of the Mediterranean Basin. This was followed by the period of Classical Greece, an era that began with the Greco-Persian Wars, lasting from the 5th to 4th centuries BC. Due to the conquests by Alexander the Great of Macedon, Hellenistic civilization flourished from Central Asia to the western end of the Mediterranean Sea. The Hellenistic period came to an end with the conquests and annexations of the eastern Mediterranean world by the Roman Republic, which established the Roman province of Macedonia in Roman Greece, and later the province of Achaea during the Roman Empire.

Classical Greek culture, especially philosophy, had a powerful influence on ancient Rome, which carried a version of it to many parts of the Mediterranean Basin and Europe. For this reason, Classical Greece is generally considered to be the seminal culture which provided the foundation of modern Western culture, and is considered the cradle of Western civilization.[2][3][4]

Classical antiquity in the Mediterranean region is commonly considered to have begun in the 8th century BC[5] (around the time of the earliest recorded poetry of Homer) and ended in the 6th century AD.

Classical antiquity in Greece was preceded by the Greek Dark Ages (c. 1200 – c. 800 BC), archaeologically characterised by the protogeometric and geometric styles of designs on pottery. Following the Dark Ages was the Archaic Period, beginning around the 8th century BC. The Archaic Period saw early developments in Greek culture and society which formed the basis for the Classical Period.[6] After the Archaic Period, the Classical Period in Greece is conventionally considered to have lasted from the Persian invasion of Greece in 480 BC until the death of Alexander the Great in 323 BC.[7] The period is characterized by a style which was considered by later observers to be exemplary, i.e., "classical", as shown in the Parthenon, for instance. Politically, the Classical Period was dominated first by Athens and the Delian League during the 5th century BC, then by Spartan hegemony during the early 4th century BC, before power shifted to Thebes and the Boeotian League and finally to the League of Corinth led by Macedon. This period saw the Greco-Persian Wars and the Rise of Macedon.

Following the Classical period was the Hellenistic period (323–146 BC), during which Greek culture and power expanded into the Near and Middle East. This period begins with the death of Alexander and ends with the Roman conquest. Roman Greece is usually considered to be the period between the Roman victory over the Corinthians at the Battle of Corinth in 146 BC and the establishment of Byzantium by Constantine as the capital of the Roman Empire in AD 330. Finally, Late Antiquity refers to the period of Christianization during the later 4th to early 6th centuries AD, sometimes taken to be complete with the closure of the Academy of Athens by Justinian I in 529.[8]

The historical period of ancient Greece is unique in world history as the first period attested directly in proper historiography, while earlier ancient history or proto-history is known from much more circumstantial evidence, such as annals, king lists, and pragmatic epigraphy.

Herodotus is widely known as the "father of history": his Histories gave their name to the entire field. Written between the 450s and 420s BC, Herodotus' work reaches about a century into the past, discussing 6th-century historical figures such as Darius I of Persia, Cambyses II and Psamtik III, and alluding to some 8th-century ones such as Candaules.

Herodotus was succeeded by authors such as Thucydides, Xenophon, Demosthenes, Plato and Aristotle. Most of these authors were either Athenian or pro-Athenian, which is why far more is known about the history and politics of Athens than about those of many other cities. Their scope is further limited by a focus on political, military and diplomatic history, ignoring economic and social history.[9]

In the 8th century BC, Greece began to emerge from the Dark Ages which followed the fall of the Mycenaean civilization. Literacy had been lost and the Mycenaean script forgotten, but the Greeks adopted the Phoenician alphabet, modifying it to create the Greek alphabet. Objects with Phoenician writing on them may have been available in Greece from the 9th century BC, but the earliest evidence of Greek writing comes from graffiti on Greek pottery from the mid-8th century.[10] Greece was divided into many small self-governing communities, a pattern largely dictated by Greek geography: every island, valley and plain is cut off from its neighbors by the sea or mountain ranges.[11]

The Lelantine War (c. 710 – c. 650 BC) is the earliest documented war of the ancient Greek period. It was fought between the important poleis (city-states) of Chalcis and Eretria over the fertile Lelantine plain of Euboea. Both cities seem to have suffered a decline as a result of the long war, though Chalcis was the nominal victor.

A mercantile class arose in the first half of the 7th century BC, shown by the introduction of coinage in about 680 BC.[12] This seems to have introduced tension to many city-states. The aristocratic regimes which generally governed the poleis were threatened by the new-found wealth of merchants, who in turn desired political power. From 650 BC onwards, the aristocracies had to fight not to be overthrown and replaced by populist tyrants.[a]

A growing population and a shortage of land also seem to have created internal strife between the poor and the rich in many city-states. In Sparta, the Messenian Wars resulted in the conquest of Messenia and the enserfment of the Messenians, beginning in the latter half of the 8th century BC, an act without precedent in ancient Greece. This practice allowed a social revolution to occur.[15] The subjugated population, thenceforth known as helots, farmed and labored for Sparta, whilst every Spartan male citizen became a soldier of the Spartan Army in a permanently militarized state. Even the elite were obliged to live and train as soldiers; this commonality between rich and poor citizens served to defuse the social conflict. These reforms, attributed to Lycurgus of Sparta, were probably complete by 650 BC.

Athens suffered a land and agrarian crisis in the late 7th century BC, again resulting in civil strife. The Archon (chief magistrate) Draco made severe reforms to the law code in 621 BC (hence "draconian"), but these failed to quell the conflict. Eventually the moderate reforms of Solon (594 BC), improving the lot of the poor but firmly entrenching the aristocracy in power, gave Athens some stability.

By the 6th century BC several cities had emerged as dominant in Greek affairs: Athens, Sparta, Corinth, and Thebes. Each of them had brought the surrounding rural areas and smaller towns under their control, and Athens and Corinth had become major maritime and mercantile powers as well.

Rapidly increasing population in the 8th and 7th centuries BC had resulted in the emigration of many Greeks to form colonies in Magna Graecia (Southern Italy and Sicily), Asia Minor and further afield. The emigration effectively ceased in the 6th century BC, by which time the Greek world had, culturally and linguistically, become much larger than the area of present-day Greece. Greek colonies were not politically controlled by their founding cities, although they often retained religious and commercial links with them.

The emigration process also led to a long series of conflicts between the Greek cities of Sicily, especially Syracuse, and the Carthaginians. These conflicts lasted from 600 BC to 265 BC, when the Roman Republic entered into an alliance with the Mamertines to fend off the hostilities of the new tyrant of Syracuse, Hiero II, and then of the Carthaginians. In this way Rome became the new dominant power against the fading strength of the Sicilian Greek cities and the Carthaginian supremacy in the region. One year later the First Punic War erupted.

In this period, there was huge economic development in Greece, and also in its overseas colonies, which experienced a growth in commerce and manufacturing. There was a great improvement in the living standards of the population. Some studies estimate that the average size of the Greek household, in the period from 800 BC to 300 BC, increased five times, which indicates[citation needed] a large increase in the average income of the population.

In the second half of the 6th century BC, Athens fell under the tyranny of Peisistratos and then of his sons Hippias and Hipparchos. However, in 510 BC, at the instigation of the Athenian aristocrat Cleisthenes, the Spartan king Cleomenes I helped the Athenians overthrow the tyranny. Afterwards, Sparta and Athens promptly turned on each other, at which point Cleomenes I installed Isagoras as a pro-Spartan archon. Eager to prevent Athens from becoming a Spartan puppet, Cleisthenes responded by proposing to his fellow citizens that Athens undergo a revolution: that all citizens share in political power, regardless of status: that Athens become a "democracy". So enthusiastically did the Athenians take to this idea that, having overthrown Isagoras and implemented Cleisthenes's reforms, they were easily able to repel a Spartan-led three-pronged invasion aimed at restoring Isagoras.[16] The advent of the democracy cured many of the ills of Athens and led to a 'golden age' for the Athenians.

+ In 499 BC, the Ionian city states under Persian rule rebelled against the Persian-supported tyrants that ruled them.[17] Supported by troops sent from Athens and Eretria, they advanced as far as Sardis and burnt the city down, before being driven back by a Persian counterattack.[18] The revolt continued until 494, when the rebelling Ionians were defeated.[19] Darius did not forget that the Athenians had assisted the Ionian revolt, however, and in 490 he assembled an armada to conquer Athens.[20] Despite being heavily outnumbered, the Athenians—supported by their Plataean allies—defeated the Persian forces at the Battle of Marathon, and the Persian fleet withdrew.[21]
43
+
44
+ Ten years later, a second invasion was launched by Darius' son Xerxes.[22] The city-states of northern and central Greece submitted to the Persian forces without resistance, but a coalition of 31 Greek city states, including Athens and Sparta, determined to resist the Persian invaders.[23] At the same time, Greek Sicily was invaded by a Carthaginian force.[24] In 480 BC, the first major battle of the invasion was fought at Thermopylae, where a small force of Greeks, led by three hundred Spartans, held a crucial pass into the heart of Greece for several days; at the same time Gelon, tyrant of Syracuse, defeated the Carthaginian invasion at the Battle of Himera.[25]
45
+
46
+ The Persians were defeated by a primarily Athenian naval force at the Battle of Salamis, and in 479 defeated on land at the Battle of Plataea.[26] The alliance against Persia continued, initially led by the Spartan Pausanias but from 477 by Athens,[27] and by 460 Persia had been driven out of the Aegean.[28] During this period of campaigning, the Delian league gradually transformed from a defensive alliance of Greek states into an Athenian empire, as Athens' growing naval power enabled it to compel other league states to comply with its policies.[29] Athens ended its campaigns against Persia in 450 BC, after a disastrous defeat in Egypt in 454 BC, and the death of Cimon in action against the Persians on Cyprus in 450.[30]
47
+
48
+ While Athenian activity against the Persian empire was ending, however, conflict between Sparta and Athens was increasing. Sparta was suspicious of the increasing Athenian power funded by the Delian League, and tensions rose when Sparta offered aid to reluctant members of the League to rebel against Athenian domination. These tensions were exacerbated in 462, when Athens sent a force to aid Sparta in overcoming a helot revolt, but their aid was rejected by the Spartans.[31] In the 450s, Athens took control of Boeotia, and won victories over Aegina and Corinth.[32] However, Athens failed to win a decisive victory, and in 447 lost Boeotia again.[33] Athens and Sparta signed the Thirty Years' Peace in the winter of 446/5, ending the conflict.[34]
49
+
50
+ Despite the peace of 446/5, Athenian relations with Sparta declined again in the 430s, and in 431 war broke out once again.[35] The first phase of the war is traditionally seen as a series of annual invasions of Attica by Sparta, which made little progress, while Athens were successful against the Corinthian empire in the north-west of Greece, and in defending their own empire, despite suffering from plague and Spartan invasion.[36] The turning point of this phase of the war usually seen as the Athenian victories at Pylos and Sphakteria.[37] Sparta sued for peace, but the Athenians rejected the proposal.[38] The Athenian failure to regain control at Boeotia at Delium and Brasidas' successes in the north of Greece in 424, improved Sparta's position after Sphakteria.[39] After the deaths of Cleon and Brasidas, the strongest objectors to peace on the Athenian and Spartan sides respectively, a peace treaty was agreed in 421.[40]
51
+
52
+ The peace did not last, however. In 418 an alliance between Athens and Argos was defeated by Sparta at Mantinea.[41] In 415 Athens launched a naval expedition against Sicily;[42] the expedition ended in disaster with almost the entire army killed.[43] Soon after the Athenian defeat in Syracuse, Athens' Ionian allies began to rebel against the Delian league, while at the same time Persia began to once again involve itself in Greek affairs on the Spartan side.[44] Initially the Athenian position continued to be relatively strong, winning important battles such as those at Cyzicus in 410 and Arginusae in 406.[45] However, in 405 the Spartans defeated Athens in the Battle of Aegospotami, and began to blockade Athens' harbour;[46] with no grain supply and in danger of starvation, Athens sued for peace, agreeing to surrender their fleet and join the Spartan-led Peloponnesian League.[47]
53
+
54
+ Greece thus entered the 4th century BC under a Spartan hegemony, but it was clear from the start that this was weak. A demographic crisis meant Sparta was overstretched, and by 395 BC Athens, Argos, Thebes, and Corinth felt able to challenge Spartan dominance, resulting in the Corinthian War (395–387 BC). Another war of stalemates, it ended with the status quo restored, after the threat of Persian intervention on behalf of the Spartans.
55
+
56
+ The Spartan hegemony lasted another 16 years, until, when attempting to impose their will on the Thebans, the Spartans were defeated at Leuctra in 371 BC. The Theban general Epaminondas then led Theban troops into the Peloponnese, whereupon other city-states defected from the Spartan cause. The Thebans were thus able to march into Messenia and free the population.
57
+
58
+ Deprived of land and its serfs, Sparta declined to a second-rank power. The Theban hegemony thus established was short-lived; at the Battle of Mantinea in 362 BC, Thebes lost its key leader, Epaminondas, and much of its manpower, even though they were victorious in battle. In fact such were the losses to all the great city-states at Mantinea that none could establish dominance in the aftermath.
59
+
60
+ The weakened state of the heartland of Greece coincided with the Rise of Macedon, led by Philip II. In twenty years, Philip had unified his kingdom, expanded it north and west at the expense of Illyrian tribes, and then conquered Thessaly and Thrace. His success stemmed from his innovative reforms to the Macedonian army. Phillip intervened repeatedly in the affairs of the southern city-states, culminating in his invasion of 338 BC.
61
+
62
+ Decisively defeating an allied army of Thebes and Athens at the Battle of Chaeronea (338 BC), he became de facto hegemon of all of Greece, except Sparta. He compelled the majority of the city-states to join the League of Corinth, allying them to him, and preventing them from warring with each other. Philip then entered into war against the Achaemenid Empire but was assassinated by Pausanias of Orestis early on in the conflict.
63
+
64
+ Alexander the Great, son and successor of Philip, continued the war. Alexander defeated Darius III of Persia and completely destroyed the Achaemenid Empire, annexing it to Macedon and earning himself the epithet 'the Great'. When Alexander died in 323 BC, Greek power and influence was at its zenith. However, there had been a fundamental shift away from the fierce independence and classical culture of the poleis—and instead towards the developing Hellenistic culture.
65
+
66
+ The Hellenistic period lasted from 323 BC, which marked the end of the wars of Alexander the Great, to the annexation of Greece by the Roman Republic in 146 BC. Although the establishment of Roman rule did not break the continuity of Hellenistic society and culture, which remained essentially unchanged until the advent of Christianity, it did mark the end of Greek political independence.
67
+
68
+ After the death of Alexander, his empire was, after quite some conflict, divided among his generals, resulting in the Ptolemaic Kingdom (Egypt and adjoining North Africa), the Seleucid Empire (the Levant, Mesopotamia and Persia) and the Antigonid dynasty (Macedonia). In the intervening period, the poleis of Greece were able to wrest back some of their freedom, although still nominally subject to the Macedonian Kingdom.
69
+
70
+ During the Hellenistic period, the importance of "Greece proper" (that is, the territory of modern Greece) within the Greek-speaking world declined sharply. The great centers of Hellenistic culture were Alexandria and Antioch, capitals of the Ptolemaic Kingdom and the Seleucid Empire, respectively.
71
+
72
+ The conquests of Alexander had numerous consequences for the Greek city-states. It greatly widened the horizons of the Greeks and led to a steady emigration, particularly of the young and ambitious, to the new Greek empires in the east.[48] Many Greeks migrated to Alexandria, Antioch and the many other new Hellenistic cities founded in Alexander's wake, as far away as what are now Afghanistan and Pakistan, where the Greco-Bactrian Kingdom and the Indo-Greek Kingdom survived until the end of the first century BC.
73
+
74
+ The city-states within Greece formed themselves into two leagues; the Achaean League (including Thebes, Corinth and Argos) and the Aetolian League (including Sparta and Athens). For much of the period until the Roman conquest, these leagues were usually at war with each other, and/or allied to different sides in the conflicts between the Diadochi (the successor states to Alexander's empire).
75
+
76
+ The Antigonid Kingdom became involved in a war with the Roman Republic in the late 3rd century. Although the First Macedonian War was inconclusive, the Romans, in typical fashion, continued to make war on Macedon until it was completely absorbed into the Roman Republic (by 149 BC). In the east the unwieldy Seleucid Empire gradually disintegrated, although a rump survived until 64 BC, whilst the Ptolemaic Kingdom continued in Egypt until 30 BC, when it too was conquered by the Romans. The Aetolian league grew wary of Roman involvement in Greece, and sided with the Seleucids in the Roman–Seleucid War; when the Romans were victorious, the league was effectively absorbed into the Republic. Although the Achaean league outlasted both the Aetolian league and Macedon, it was also soon defeated and absorbed by the Romans in 146 BC, bringing an end to the independence of all of Greece.
77
+
78
+ The Greek peninsula came under Roman rule during the 146 BC conquest of Greece after the Battle of Corinth. Macedonia became a Roman province while southern Greece came under the surveillance of Macedonia's prefect; however, some Greek poleis managed to maintain a partial independence and avoid taxation. The Aegean islands were added to this territory in 133 BC. Athens and other Greek cities revolted in 88 BC, and the peninsula was crushed by the Roman general Sulla. The Roman civil wars devastated the land even further, until Augustus organized the peninsula as the province of Achaea in 27 BC.
79
+
80
+ Greece was a key eastern province of the Roman Empire, as the Roman culture had long been in fact Greco-Roman. The Greek language served as a lingua franca in the East and in Italy, and many Greek intellectuals such as Galen would perform most of their work in Rome.
81
+
82
+ The territory of Greece is mountainous, and as a result, ancient Greece consisted of many smaller regions each with its own dialect, cultural peculiarities, and identity. Regionalism and regional conflicts were a prominent feature of ancient Greece. Cities tended to be located in valleys between mountains, or on coastal plains, and dominated a certain area around them.
83
+
84
+ In the south lay the Peloponnese, itself consisting of the regions of Laconia (southeast), Messenia (southwest), Elis (west), Achaia (north), Korinthia (northeast), Argolis (east), and Arcadia (center). These names survive to the present day as regional units of modern Greece, though with somewhat different boundaries. Mainland Greece to the north, nowadays known as Central Greece, consisted of Aetolia and Acarnania in the west, Locris, Doris, and Phocis in the center, while in the east lay Boeotia, Attica, and Megaris. Northeast lay Thessaly, while Epirus lay to the northwest. Epirus stretched from the Ambracian Gulf in the south to the Ceraunian mountains and the Aoos river in the north, and consisted of Chaonia (north), Molossia (center), and Thesprotia (south). In the northeast corner was Macedonia,[49] originally consisting Lower Macedonia and its regions, such as Elimeia, Pieria, and Orestis. Around the time of Alexander I of Macedon, the Argead kings of Macedon started to expand into Upper Macedonia, lands inhabited by independent Macedonian tribes like the Lyncestae and the Elmiotae and to the West, beyond the Axius river, into Eordaia, Bottiaea, Mygdonia, and Almopia, regions settled by Thracian tribes.[50] To the north of Macedonia lay various non-Greek peoples such as the Paeonians due north, the Thracians to the northeast, and the Illyrians, with whom the Macedonians were frequently in conflict, to the northwest. Chalcidice was settled early on by southern Greek colonists and was considered part of the Greek world, while from the late 2nd millennium BC substantial Greek settlement also occurred on the eastern shores of the Aegean, in Anatolia.
+
+ During the Archaic period, the population of Greece grew beyond the capacity of its limited arable land (according to one estimate, the population of ancient Greece increased by a factor larger than ten during the period from 800 BC to 400 BC, increasing from a population of 800,000 to a total estimated population of 10 to 13 million).[51]
+
+ From about 750 BC the Greeks began 250 years of expansion, settling colonies in all directions. To the east, the Aegean coast of Asia Minor was colonized first, followed by Cyprus and the coasts of Thrace, the Sea of Marmara, and the south coast of the Black Sea.
+
+ Eventually Greek colonization reached as far northeast as present-day Ukraine and Russia (Taganrog). To the west the coasts of Illyria, Sicily and Southern Italy were settled, followed by Southern France, Corsica, and even northeastern Spain. Greek colonies were also founded in Egypt and Libya.
+
+ Modern Syracuse, Naples, Marseille and Istanbul had their beginnings as the Greek colonies Syracusae (Συράκουσαι), Neapolis (Νεάπολις), Massalia (Μασσαλία) and Byzantion (Βυζάντιον). These colonies played an important role in the spread of Greek influence throughout Europe and also aided in the establishment of long-distance trading networks between the Greek city-states, boosting the economy of ancient Greece.
+
+ Ancient Greece consisted of several hundred relatively independent city-states (poleis). This was a situation unlike that in most other contemporary societies, which were either tribal or kingdoms ruling over relatively large territories. Undoubtedly the geography of Greece—divided and sub-divided by hills, mountains, and rivers—contributed to the fragmentary nature of ancient Greece. On the one hand, the ancient Greeks had no doubt that they were "one people"; they had the same religion, same basic culture, and same language. Furthermore, the Greeks were very aware of their tribal origins; Herodotus was able to extensively categorise the city-states by tribe. Yet, although these higher-level relationships existed, they seem to have rarely had a major role in Greek politics. The independence of the poleis was fiercely defended; unification was something rarely contemplated by the ancient Greeks. Even when, during the second Persian invasion of Greece, a group of city-states allied themselves to defend Greece, the vast majority of poleis remained neutral, and after the Persian defeat, the allies quickly returned to infighting.[53]
+
+ Thus, the major peculiarities of the ancient Greek political system were its fragmentary nature (which does not seem to have had a tribal origin) and the particular focus on urban centers within otherwise tiny states. The peculiarities of the Greek system are further evidenced by the colonies that they set up throughout the Mediterranean Sea, which, though they might count a certain Greek polis as their 'mother' (and remain sympathetic to her), were completely independent of the founding city.
+
+ Inevitably smaller poleis might be dominated by larger neighbors, but conquest or direct rule by another city-state appears to have been quite rare. Instead the poleis grouped themselves into leagues, membership of which was in a constant state of flux. Later in the Classical period, the leagues became fewer and larger and came to be dominated by one city (particularly Athens, Sparta, and Thebes); poleis were often compelled to join under threat of war (or as part of a peace treaty). Even after Philip II of Macedon "conquered" the heartlands of ancient Greece, he did not attempt to annex the territory, or unify it into a new province, but simply compelled most of the poleis to join his own Corinthian League.
+
+ Initially many Greek city-states seem to have been petty kingdoms; there was often a city official carrying some residual, ceremonial functions of the king (basileus), e.g., the archon basileus in Athens.[54] However, by the Archaic period and the first historical consciousness, most had already become aristocratic oligarchies. It is unclear exactly how this change occurred. For instance, in Athens, the kingship had been reduced to a hereditary, lifelong chief magistracy (archon) by c. 1050 BC; by 753 BC this had become a decennial, elected archonship; and finally by 683 BC an annually elected archonship. Through each stage more power would have been transferred to the aristocracy as a whole, and away from a single individual.
+
+ Inevitably, the domination of politics and concomitant aggregation of wealth by small groups of families was apt to cause social unrest in many poleis. In many cities a tyrant (not in the modern sense of a repressive autocrat) would at some point seize control and govern according to his own will; often a populist agenda would help sustain him in power. In a system wracked with class conflict, government by a 'strongman' was often the best solution.
+
+ Athens fell under a tyranny in the second half of the 6th century. When this tyranny was ended, the Athenians founded the world's first democracy as a radical solution to prevent the aristocracy regaining power. A citizens' assembly (the Ecclesia), for the discussion of city policy, had existed since the reforms of Draco in 621 BC; all citizens were permitted to attend after the reforms of Solon (early 6th century), but the poorest citizens could not address the assembly or run for office. With the establishment of the democracy, the assembly became the de jure mechanism of government; all citizens had equal privileges in the assembly. However, non-citizens, such as metics (foreigners living in Athens) or slaves, had no political rights at all.
+
+ After the rise of the democracy in Athens, other city-states founded democracies. However, many retained more traditional forms of government. As so often in other matters, Sparta was a notable exception to the rest of Greece, ruled through the whole period by not one, but two hereditary monarchs. This was a form of diarchy. The Kings of Sparta belonged to the Agiads and the Eurypontids, descendants respectively of Eurysthenes and Procles. Both dynasties' founders were believed to be twin sons of Aristodemus, a Heraclid ruler. However, the powers of these kings were held in check by both a council of elders (the Gerousia) and magistrates specifically appointed to watch over the kings (the Ephors).
+
+ Only free, land-owning, native-born men could be citizens entitled to the full protection of the law in a city-state. In most city-states, unlike the situation in Rome, social prominence did not confer special rights. Sometimes families controlled public religious functions, but this ordinarily did not give any extra power in the government. In Athens, the population was divided into four social classes based on wealth. People could change classes if they made more money. In Sparta, all male citizens were called homoioi, meaning "peers". However, Spartan kings, who served as the city-state's dual military and religious leaders, came from two families.[citation needed]
+
+ Slaves had no power or status. They had the right to have a family and own property, subject to their master's goodwill and permission, but they had no political rights. By 600 BC chattel slavery had spread in Greece. By the 5th century BC slaves made up one-third of the total population in some city-states. Between forty and eighty per cent of the population of Classical Athens were slaves.[55] Slaves outside of Sparta almost never revolted because they were made up of too many nationalities and were too scattered to organize. However, unlike later Western culture, the Ancient Greeks did not think in terms of race.[56]
+
+ Most families owned slaves as household servants and laborers, and even poor families might have owned a few slaves. Owners were not allowed to beat or kill their slaves. Owners often promised to free slaves in the future to encourage slaves to work hard. Unlike in Rome, freedmen did not become citizens. Instead, they were mixed into the population of metics, which included people from foreign countries or other city-states who were officially allowed to live in the state.
+
+ City-states legally owned slaves. These public slaves had a larger measure of independence than slaves owned by families, living on their own and performing specialized tasks. In Athens, public slaves were trained to look out for counterfeit coinage, while temple slaves acted as servants of the temple's deity and Scythian slaves were employed in Athens as a police force corralling citizens to political functions.
+
+ Sparta had a special type of slaves called helots. Helots were Messenians enslaved during the Messenian Wars by the state and assigned to families where they were forced to stay. Helots raised food and did household chores so that women could concentrate on raising strong children while men could devote their time to training as hoplites. Their masters treated them harshly, and helots revolted against their masters several times before finally winning their freedom in 370/369 BC.[57]
+
+ For most of Greek history, education was private, except in Sparta. During the Hellenistic period, some city-states established public schools. Only wealthy families could afford a teacher. Boys learned how to read, write and quote literature. They also learned to sing and play one musical instrument and were trained as athletes for military service. They studied not for a job but to become an effective citizen. Girls also learned to read, write and do simple arithmetic so they could manage the household. They almost never received education after childhood.[citation needed]
+
+ Boys went to school at the age of seven, or went to the barracks, if they lived in Sparta. The three types of teachers were the grammatistes for letters and arithmetic, the kitharistes for music and dancing, and the paedotribae for sports.
+
+ Boys from wealthy families attending the private school lessons were taken care of by a paidagogos, a household slave selected for this task who accompanied the boy during the day. Classes were held in teachers' private houses and included reading, writing, mathematics, singing, and playing the lyre and flute. When the boy turned 12 the schooling started to include sports such as wrestling, running, and throwing the discus and javelin. In Athens some older youths attended an academy for the finer disciplines such as culture, sciences, music, and the arts. Schooling ended at age 18, followed by military training in the army, usually for one or two years.[58]
+
+ Only a small number of boys continued their education after childhood, as in the Spartan agoge. A crucial part of a wealthy teenager's education was a mentorship with an elder, which in a few places and times may have included pederasty.[citation needed] The teenager learned by watching his mentor talking about politics in the agora, helping him perform his public duties, exercising with him in the gymnasium and attending symposia with him. The richest students continued their education by studying with famous teachers. Some of Athens' greatest such schools included the Lyceum (the so-called Peripatetic school founded by Aristotle of Stageira) and the Platonic Academy (founded by Plato of Athens). The education system of the wealthy ancient Greeks is also called Paideia.[citation needed]
+
+ At its economic height, in the 5th and 4th centuries BC, ancient Greece was the most advanced economy in the world. According to some economic historians, it was one of the most advanced pre-industrial economies. This is demonstrated by the average daily wage of the Greek worker, which, in terms of wheat, was about 12 kg: more than three times the roughly 3.75 kg earned by an Egyptian worker during the Roman period.[59]
+
+ At least in the Archaic Period, the fragmentary nature of ancient Greece, with many competing city-states, increased the frequency of conflict but conversely limited the scale of warfare. Unable to maintain professional armies, the city-states relied on their own citizens to fight. This inevitably reduced the potential duration of campaigns, as citizens would need to return to their own professions (especially in the case of, for example, farmers). Campaigns would therefore often be restricted to summer. When battles occurred, they were usually set-piece affairs intended to be decisive. Casualties were slight compared to later battles, rarely amounting to more than 5% of the losing side, but the slain often included the most prominent citizens and generals who led from the front.
+
+ The scale and scope of warfare in ancient Greece changed dramatically as a result of the Greco-Persian Wars. To fight the enormous armies of the Achaemenid Empire was effectively beyond the capabilities of a single city-state. The eventual triumph of the Greeks was achieved by alliances of city-states (the exact composition changing over time), allowing the pooling of resources and division of labor. Although alliances between city-states occurred before this time, nothing on this scale had been seen before. The rise of Athens and Sparta as pre-eminent powers during this conflict led directly to the Peloponnesian War, which saw further development of the nature of warfare, strategy and tactics. Fought between leagues of cities dominated by Athens and Sparta, the war drew on greater manpower and financial resources than earlier conflicts, which increased its scale and allowed the diversification of warfare. Set-piece battles during the Peloponnesian war proved indecisive and instead there was increased reliance on strategies of attrition, naval battles, blockades and sieges. These changes greatly increased the number of casualties and the disruption of Greek society.
+ Athens owned one of the largest war fleets in ancient Greece. It had over 200 triremes, each powered by 170 oarsmen seated in three rows on each side of the ship. The city could afford such a large fleet—it had over 34,000 oarsmen—because it owned rich silver mines that were worked by slaves.
+
+ According to Josiah Ober, Greek city-states faced approximately a one-in-three chance of destruction during the archaic and classical periods.[60]
+
+ Ancient Greek philosophy focused on the role of reason and inquiry. In many ways, it had an important influence on modern philosophy, as well as modern science. Clear unbroken lines of influence lead from ancient Greek and Hellenistic philosophers, to medieval Muslim philosophers and Islamic scientists, to the European Renaissance and Enlightenment, to the secular sciences of the modern day.
+
+ Neither reason nor inquiry began with the Greeks. Defining the difference between the Greek quest for knowledge and the quests of the elder civilizations, such as the ancient Egyptians and Babylonians, has long been a topic of study by theorists of civilization.
+
+ Plato and Socrates were among the best-known philosophers of ancient Greece. Writings such as Plato's Republic remain an important source of information about ancient Greek society.
+
+ The earliest Greek literature was poetry, and was composed for performance rather than private consumption.[61] The earliest Greek poet known is Homer, although he was certainly part of an existing tradition of oral poetry.[62] Homer's poetry, though it was developed around the same time that the Greeks developed writing, would have been composed orally; the first poet certainly known to have composed his work in writing was Archilochus, a lyric poet from the mid-seventh century BC.[63] Tragedy developed around the end of the archaic period, taking elements from across the pre-existing genres of late archaic poetry.[64] Towards the beginning of the classical period, comedy began to develop—the earliest date associated with the genre is 486 BC, when a competition for comedy became an official event at the City Dionysia in Athens, though the first preserved ancient comedy is Aristophanes' Acharnians, produced in 425.[65]
+
+ Like poetry, Greek prose had its origins in the archaic period, and the earliest writers of Greek philosophy, history, and medical literature all date to the sixth century BC.[66] Prose first emerged as the writing style adopted by the presocratic philosophers Anaximander and Anaximenes—though Thales of Miletus, considered the first Greek philosopher, apparently wrote nothing.[67] Prose as a genre reached maturity in the classical era,[68] and the major Greek prose genres—philosophy, history, rhetoric, and dialogue—developed in this period.[69]
+
+ The Hellenistic period saw the literary centre of the Greek world move from Athens, where it had been in the classical period, to Alexandria. At the same time, other Hellenistic kings such as the Antigonids and the Attalids were patrons of scholarship and literature, turning Pella and Pergamon respectively into cultural centres.[70] This cultural patronage by Hellenistic kings, and especially the Museum at Alexandria, ensured that so much ancient Greek literature has survived.[71] The Library of Alexandria, part of the Museum, had the previously unenvisaged aim of collecting together copies of all known authors in Greek. Almost all of the surviving non-technical Hellenistic literature is poetry,[72] and Hellenistic poetry tended to be highly intellectual,[73] blending different genres and traditions, and avoiding linear narratives.[74] The Hellenistic period also saw a shift in the ways literature was consumed—while in the archaic and classical periods literature had typically been experienced in public performance, in the Hellenistic period it was more commonly read privately.[75] At the same time, Hellenistic poets began to write for private, rather than public, consumption.[76]
+
+ With Octavian's victory at Actium in 31 BC, Rome began to become a major centre of Greek literature, as important Greek authors such as Strabo and Dionysius of Halicarnassus came to Rome.[77] The period of greatest innovation in Greek literature under Rome was the "long second century" from approximately AD 80 to around AD 230.[78] This innovation was especially marked in prose, with the development of the novel and a revival of prominence for display oratory both dating to this period.[79]
+
+ Music was present almost universally in Greek society, from marriages and funerals to religious ceremonies, theatre, folk music and the ballad-like reciting of epic poetry. There are significant fragments of actual Greek musical notation as well as many literary references to ancient Greek music. Greek art depicts musical instruments and dance. The word music derives from the name of the Muses, the daughters of Zeus who were patron goddesses of the arts.
+
+ Ancient Greek mathematics contributed many important developments, including the basic rules of geometry, the idea of formal mathematical proof, and discoveries in number theory, mathematical analysis, and applied mathematics; Greek mathematicians also came close to establishing integral calculus. The discoveries of several Greek mathematicians, including Pythagoras, Euclid, and Archimedes, are still used in mathematical teaching today.
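+
+ A sense of how close this work came to integral calculus is given by Archimedes' method of exhaustion, which traps π between the semi-perimeters of inscribed and circumscribed polygons. The following minimal sketch (in Python, purely as a modern illustration; the hexagon starting values and the side-doubling recurrence are the classical ones) reproduces his 96-sided-polygon bounds:
+
+ from math import sqrt
+
+ # Semi-perimeters of the circumscribed (a) and inscribed (b) hexagons
+ # of a unit circle: a = 2*sqrt(3), b = 3.
+ a, b = 2 * sqrt(3), 3.0
+ for _ in range(4):           # double the number of sides: 6 -> 96
+     a = 2 * a * b / (a + b)  # circumscribed 2n-gon (harmonic mean)
+     b = sqrt(a * b)          # inscribed 2n-gon (geometric mean)
+ print(b, "< pi <", a)        # roughly 3.14103 < pi < 3.14271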
+
+ The Greeks developed astronomy, which they treated as a branch of mathematics, to a highly sophisticated level. The first geometrical, three-dimensional models to explain the apparent motion of the planets were developed in the 4th century BC by Eudoxus of Cnidus and Callippus of Cyzicus. Their younger contemporary Heraclides Ponticus proposed that the Earth rotates around its axis. In the 3rd century BC Aristarchus of Samos was the first to suggest a heliocentric system. Archimedes, in his treatise The Sand Reckoner, revived Aristarchus' hypothesis that "the fixed stars and the Sun remain unmoved, while the Earth revolves about the Sun on the circumference of a circle". Otherwise, only fragmentary descriptions of Aristarchus' idea survive.[80] Eratosthenes, using the angles of shadows cast at widely separated locations, estimated the circumference of the Earth with great accuracy.[81] In the 2nd century BC Hipparchus of Nicaea made a number of contributions, including the first measurement of precession and the compilation of the first star catalog, in which he proposed the modern system of apparent magnitudes.
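+
+ Eratosthenes' method reduces to a single proportion: if the Sun's rays are taken to be parallel, the shadow angle measured at one city equals the arc of the Earth's circumference separating it from a city where the Sun is directly overhead. A minimal sketch in Python, using the figures traditionally reported for his measurement (a 7.2° shadow angle at Alexandria and 5,000 stadia from Alexandria to Syene):
+
+ # Figures traditionally reported for Eratosthenes' measurement.
+ shadow_angle_deg = 7.2   # 1/50 of a full circle
+ distance_stadia = 5000   # Alexandria to Syene
+ circumference = distance_stadia * 360 / shadow_angle_deg
+ print(circumference)     # 250000.0 stadia, the figure in the ancient accounts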
+
+ The Antikythera mechanism, a device for calculating the movements of planets, dates from about 80 BC, and was the first ancestor of the astronomical computer. It was discovered in an ancient shipwreck off the Greek island of Antikythera, between Kythera and Crete. The device became famous for its use of a differential gear, previously believed to have been invented in the 16th century, and the miniaturization and complexity of its parts, comparable to a clock made in the 18th century. The original mechanism is displayed in the Bronze collection of the National Archaeological Museum of Athens, accompanied by a replica.
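+
+ The astronomical cycles such a device must encode in its gearwork are ratios of periods. A small check in Python (the period lengths are modern values, used here purely for illustration) of the 19-year Metonic cycle, which one of the mechanism's dials is known to display, shows why 19 solar years and 235 lunar months can be geared together:
+
+ # Metonic cycle: 19 solar years very nearly equal 235 synodic (lunar) months,
+ # so a fixed gear ratio can drive a lunar-calendar dial from a solar one.
+ year_days = 365.2422     # tropical year, modern value
+ month_days = 29.53059    # synodic month, modern value
+ print(19 * year_days)    # ~6939.60 days
+ print(235 * month_days)  # ~6939.69 days: the two agree to about two hours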
+
+ The ancient Greeks also made important discoveries in the medical field. Hippocrates was a physician of the Classical period, and is considered one of the most outstanding figures in the history of medicine. He is referred to as the "father of medicine"[82][83] in recognition of his lasting contributions to the field as the founder of the Hippocratic school of medicine. This intellectual school revolutionized medicine in ancient Greece, establishing it as a discipline distinct from other fields that it had traditionally been associated with (notably theurgy and philosophy), thus making medicine a profession.[84][85]
+
+ The art of ancient Greece has exercised an enormous influence on the culture of many countries from ancient times to the present day, particularly in the areas of sculpture and architecture. In the West, the art of the Roman Empire was largely derived from Greek models. In the East, Alexander the Great's conquests initiated several centuries of exchange between Greek, Central Asian and Indian cultures, resulting in Greco-Buddhist art, with ramifications as far as Japan. Following the Renaissance in Europe, the humanist aesthetic and the high technical standards of Greek art inspired generations of European artists. Well into the 19th century, the classical tradition derived from Greece dominated the art of the western world.
+
+ Religion was a central part of ancient Greek life.[86] Though the Greeks of different cities and tribes worshipped similar gods, religious practices were not uniform and the gods were thought of differently in different places.[87] The Greeks were polytheistic, worshipping many gods, but as early as the sixth century BC a pantheon of twelve Olympians began to develop.[87] Greek religion was influenced by the practices of the Greeks' near eastern neighbours at least as early as the archaic period, and by the Hellenistic period this influence was seen in both directions.[88]
+
+ The most important religious act in ancient Greece was animal sacrifice, most commonly of sheep and goats.[89] Sacrifice was accompanied by public prayer,[90] and prayer and hymns were themselves a major part of ancient Greek religious life.[91]
+
+ The civilization of ancient Greece has been immensely influential on language, politics, educational systems, philosophy, science, and the arts. It became the Leitkultur of the Roman Empire to the point of marginalizing native Italic traditions. As Horace put it, "Graecia capta ferum victorem cepit et artes intulit agresti Latio" ("Captive Greece took captive her savage conqueror and brought the arts into rustic Latium").
+
+ Via the Roman Empire, Greek culture came to be foundational to Western culture in general.
+ The Byzantine Empire inherited Classical Greek culture directly, without Latin intermediation, and the preservation of classical Greek learning in medieval Byzantine tradition further exerted strong influence on the Slavs and later on the Islamic Golden Age and the Western European Renaissance. A modern revival of Classical Greek learning took place in the Neoclassicism movement in 18th- and 19th-century Europe and the Americas.
en/2283.html.txt ADDED
@@ -0,0 +1,168 @@
+
+
+ Ancient Greece (Greek: Ἑλλάς, romanized: Hellás) was a civilization belonging to a period of Greek history from the Greek Dark Ages of the 12th–9th centuries BC to the end of antiquity (c. AD 600). Immediately following this period was the beginning of the Early Middle Ages and the Byzantine era.[1] Roughly three centuries after the Late Bronze Age collapse of Mycenaean Greece, Greek urban poleis began to form in the 8th century BC, ushering in the Archaic period and colonization of the Mediterranean Basin. This was followed by the period of Classical Greece, an era that began with the Greco-Persian Wars, lasting from the 5th to 4th centuries BC. Due to the conquests by Alexander the Great of Macedon, Hellenistic civilization flourished from Central Asia to the western end of the Mediterranean Sea. The Hellenistic period came to an end with the conquests and annexations of the eastern Mediterranean world by the Roman Republic, which established the Roman province of Macedonia in Roman Greece, and later the province of Achaea during the Roman Empire.
+
+ Classical Greek culture, especially philosophy, had a powerful influence on ancient Rome, which carried a version of it to many parts of the Mediterranean Basin and Europe. For this reason, Classical Greece is generally considered to be the seminal culture which provided the foundation of modern Western culture and is considered the cradle of Western civilization.[2][3][4]
+
+ Classical antiquity in the Mediterranean region is commonly considered to have begun in the 8th century BC[5] (around the time of the earliest recorded poetry of Homer) and ended in the 6th century AD.
+
+ Classical antiquity in Greece was preceded by the Greek Dark Ages (c. 1200 – c. 800 BC), archaeologically characterised by the protogeometric and geometric styles of designs on pottery. Following the Dark Ages was the Archaic Period, beginning around the 8th century BC. The Archaic Period saw early developments in Greek culture and society which formed the basis for the Classical Period.[6] After the Archaic Period, the Classical Period in Greece is conventionally considered to have lasted from the Persian invasion of Greece in 480 until the death of Alexander the Great in 323.[7] The period is characterized by a style which was considered by later observers to be exemplary, i.e., "classical", as shown in the Parthenon, for instance. Politically, the Classical Period was dominated by Athens and the Delian League during the 5th century, but displaced by Spartan hegemony during the early 4th century BC, before power shifted to Thebes and the Boeotian League and finally to the League of Corinth led by Macedon. This period saw the Greco-Persian Wars and the Rise of Macedon.
+
+ Following the Classical period was the Hellenistic period (323–146 BC), during which Greek culture and power expanded into the Near and Middle East. This period begins with the death of Alexander and ends with the Roman conquest. Roman Greece is usually considered to be the period between Roman victory over the Corinthians at the Battle of Corinth in 146 BC and the establishment of Byzantium by Constantine as the capital of the Roman Empire in AD 330. Finally, Late Antiquity refers to the period of Christianization during the later 4th to early 6th centuries AD, sometimes taken to be complete with the closure of the Academy of Athens by Justinian I in 529.[8]
+
+ The historical period of ancient Greece is unique in world history as the first period attested directly in proper historiography, while earlier ancient history or proto-history is known from much more circumstantial evidence, such as annals, king lists, and pragmatic epigraphy.
+
+ Herodotus is widely known as the "father of history": his Histories gave their name to the entire field. Written between the 450s and 420s BC, Herodotus' work reaches about a century into the past, discussing 6th-century historical figures such as Darius I of Persia, Cambyses II and Psamtik III, and alluding to some 8th-century ones such as Candaules.
+
+ Herodotus was succeeded by authors such as Thucydides, Xenophon, Demosthenes, Plato and Aristotle. Most of these authors were either Athenian or pro-Athenian, which is why far more is known about the history and politics of Athens than those of many other cities.
+ Their scope is further limited by a focus on political, military and diplomatic history, ignoring economic and social history.[9]
+
+ In the 8th century BC, Greece began to emerge from the Dark Ages which followed the fall of the Mycenaean civilization. Literacy had been lost and Mycenaean script forgotten, but the Greeks adopted the Phoenician alphabet, modifying it to create the Greek alphabet. Objects with Phoenician writing on them may have been available in Greece from the 9th century BC, but the earliest evidence of Greek writing comes from graffiti on Greek pottery from the mid-8th century.[10] Greece was divided into many small self-governing communities, a pattern largely dictated by Greek geography: every island, valley and plain is cut off from its neighbors by the sea or mountain ranges.[11]
+
+ The Lelantine War (c. 710 – c. 650 BC) is the earliest documented war of the ancient Greek period. It was fought between the important poleis (city-states) of Chalcis and Eretria over the fertile Lelantine plain of Euboea. Both cities seem to have suffered a decline as a result of the long war, though Chalcis was the nominal victor.
+
+ A mercantile class arose in the first half of the 7th century BC, shown by the introduction of coinage in about 680 BC.[12] This seems to have introduced tension to many city-states. The aristocratic regimes which generally governed the poleis were threatened by the new-found wealth of merchants, who in turn desired political power. From 650 BC onwards, the aristocracies had to fight not to be overthrown and replaced by populist tyrants.[a]
+
+ A growing population and a shortage of land also seem to have created internal strife between the poor and the rich in many city-states. In Sparta, the Messenian Wars resulted in the conquest of Messenia and enserfment of the Messenians, beginning in the latter half of the 8th century BC, an act without precedent in ancient Greece. This practice allowed a social revolution to occur.[15] The subjugated population, thenceforth known as helots, farmed and labored for Sparta, whilst every Spartan male citizen became a soldier of the Spartan Army in a permanently militarized state. Even the elite were obliged to live and train as soldiers; this commonality between rich and poor citizens served to defuse the social conflict. These reforms, attributed to Lycurgus of Sparta, were probably complete by 650 BC.
+
+ Athens suffered a land and agrarian crisis in the late 7th century BC, again resulting in civil strife. The Archon (chief magistrate) Draco made severe reforms to the law code in 621 BC (hence "draconian"), but these failed to quell the conflict. Eventually the moderate reforms of Solon (594 BC), improving the lot of the poor but firmly entrenching the aristocracy in power, gave Athens some stability.
+
+ By the 6th century BC several cities had emerged as dominant in Greek affairs: Athens, Sparta, Corinth, and Thebes. Each of them had brought the surrounding rural areas and smaller towns under their control, and Athens and Corinth had become major maritime and mercantile powers as well.
+
+ Rapidly increasing population in the 8th and 7th centuries BC had resulted in emigration of many Greeks to form colonies in Magna Graecia (Southern Italy and Sicily), Asia Minor and further afield. The emigration effectively ceased in the 6th century BC by which time the Greek world had, culturally and linguistically, become much larger than the area of present-day Greece. Greek colonies were not politically controlled by their founding cities, although they often retained religious and commercial links with them.
+
+ The emigration process also determined a long series of conflicts between the Greek cities of Sicily, especially Syracuse, and the Carthaginians. These conflicts lasted from 600 BC to 265 BC, when the Roman Republic entered into an alliance with the Mamertines to fend off the hostilities of the new tyrant of Syracuse, Hiero II, and then of the Carthaginians. In this way Rome became the new dominant power against the fading strength of the Sicilian Greek cities and the Carthaginian supremacy in the region. One year later the First Punic War erupted.
+
+ In this period, there was huge economic development in Greece, and also in its overseas colonies, which experienced a growth in commerce and manufacturing. There was a great improvement in the living standards of the population. Some studies estimate that the average size of the Greek household increased fivefold between 800 BC and 300 BC, which indicates[citation needed] a large increase in the average income of the population.
+
+ In the second half of the 6th century BC, Athens fell under the tyranny of Peisistratos and then of his sons Hippias and Hipparchos. However, in 510 BC, at the instigation of the Athenian aristocrat Cleisthenes, the Spartan king Cleomenes I helped the Athenians overthrow the tyranny. Afterwards, Sparta and Athens promptly turned on each other, at which point Cleomenes I installed Isagoras as a pro-Spartan archon. Eager to prevent Athens from becoming a Spartan puppet, Cleisthenes responded by proposing to his fellow citizens that Athens undergo a revolution: that all citizens share in political power, regardless of status: that Athens become a "democracy". So enthusiastically did the Athenians take to this idea that, having overthrown Isagoras and implemented Cleisthenes's reforms, they were easily able to repel a Spartan-led three-pronged invasion aimed at restoring Isagoras.[16] The advent of the democracy cured many of the ills of Athens and led to a 'golden age' for the Athenians.
+
+ In 499 BC, the Ionian city states under Persian rule rebelled against the Persian-supported tyrants that ruled them.[17] Supported by troops sent from Athens and Eretria, they advanced as far as Sardis and burnt the city down, before being driven back by a Persian counterattack.[18] The revolt continued until 494, when the rebelling Ionians were defeated.[19] Darius did not forget that the Athenians had assisted the Ionian revolt, however, and in 490 he assembled an armada to conquer Athens.[20] Despite being heavily outnumbered, the Athenians—supported by their Plataean allies—defeated the Persian forces at the Battle of Marathon, and the Persian fleet withdrew.[21]
+
+ Ten years later, a second invasion was launched by Darius' son Xerxes.[22] The city-states of northern and central Greece submitted to the Persian forces without resistance, but a coalition of 31 Greek city states, including Athens and Sparta, determined to resist the Persian invaders.[23] At the same time, Greek Sicily was invaded by a Carthaginian force.[24] In 480 BC, the first major battle of the invasion was fought at Thermopylae, where a small force of Greeks, led by three hundred Spartans, held a crucial pass into the heart of Greece for several days; at the same time Gelon, tyrant of Syracuse, defeated the Carthaginian invasion at the Battle of Himera.[25]
+
+ The Persians were defeated by a primarily Athenian naval force at the Battle of Salamis, and in 479 defeated on land at the Battle of Plataea.[26] The alliance against Persia continued, initially led by the Spartan Pausanias but from 477 by Athens,[27] and by 460 Persia had been driven out of the Aegean.[28] During this period of campaigning, the Delian league gradually transformed from a defensive alliance of Greek states into an Athenian empire, as Athens' growing naval power enabled it to compel other league states to comply with its policies.[29] Athens ended its campaigns against Persia in 450 BC, after a disastrous defeat in Egypt in 454 BC, and the death of Cimon in action against the Persians on Cyprus in 450.[30]
+
+ While Athenian activity against the Persian empire was ending, however, conflict between Sparta and Athens was increasing. Sparta was suspicious of the increasing Athenian power funded by the Delian League, and tensions rose when Sparta offered aid to reluctant members of the League to rebel against Athenian domination. These tensions were exacerbated in 462, when Athens sent a force to aid Sparta in overcoming a helot revolt, but their aid was rejected by the Spartans.[31] In the 450s, Athens took control of Boeotia, and won victories over Aegina and Corinth.[32] However, Athens failed to win a decisive victory, and in 447 lost Boeotia again.[33] Athens and Sparta signed the Thirty Years' Peace in the winter of 446/5, ending the conflict.[34]
+
+ Despite the peace of 446/5, Athenian relations with Sparta declined again in the 430s, and in 431 war broke out once again.[35] The first phase of the war is traditionally seen as a series of annual invasions of Attica by Sparta, which made little progress, while Athens was successful against the Corinthian empire in the north-west of Greece, and in defending its own empire, despite suffering from plague and Spartan invasion.[36] The turning point of this phase of the war is usually seen as the Athenian victories at Pylos and Sphakteria.[37] Sparta sued for peace, but the Athenians rejected the proposal.[38] The Athenian failure to regain control of Boeotia at Delium and Brasidas' successes in the north of Greece in 424 improved Sparta's position after Sphakteria.[39] After the deaths of Cleon and Brasidas, the strongest objectors to peace on the Athenian and Spartan sides respectively, a peace treaty was agreed in 421.[40]
+
+ The peace did not last, however. In 418 an alliance between Athens and Argos was defeated by Sparta at Mantinea.[41] In 415 Athens launched a naval expedition against Sicily;[42] the expedition ended in disaster with almost the entire army killed.[43] Soon after the Athenian defeat in Syracuse, Athens' Ionian allies began to rebel against the Delian league, while at the same time Persia began to once again involve itself in Greek affairs on the Spartan side.[44] Initially the Athenian position continued to be relatively strong, winning important battles such as those at Cyzicus in 410 and Arginusae in 406.[45] However, in 405 the Spartans defeated Athens in the Battle of Aegospotami, and began to blockade Athens' harbour;[46] with no grain supply and in danger of starvation, Athens sued for peace, agreeing to surrender their fleet and join the Spartan-led Peloponnesian League.[47]
+
+ Greece thus entered the 4th century BC under a Spartan hegemony, but it was clear from the start that this was weak. A demographic crisis meant Sparta was overstretched, and by 395 BC Athens, Argos, Thebes, and Corinth felt able to challenge Spartan dominance, resulting in the Corinthian War (395–387 BC). Another war of stalemates, it ended with the status quo restored, after the threat of Persian intervention on behalf of the Spartans.
+
+ The Spartan hegemony lasted another 16 years, until, when attempting to impose their will on the Thebans, the Spartans were defeated at Leuctra in 371 BC. The Theban general Epaminondas then led Theban troops into the Peloponnese, whereupon other city-states defected from the Spartan cause. The Thebans were thus able to march into Messenia and free the population.
+
+ Deprived of land and its serfs, Sparta declined to a second-rank power. The Theban hegemony thus established was short-lived; at the Battle of Mantinea in 362 BC, Thebes lost its key leader, Epaminondas, and much of its manpower, even though they were victorious in battle. In fact such were the losses to all the great city-states at Mantinea that none could establish dominance in the aftermath.
+
+ The weakened state of the heartland of Greece coincided with the Rise of Macedon, led by Philip II. In twenty years, Philip had unified his kingdom, expanded it north and west at the expense of Illyrian tribes, and then conquered Thessaly and Thrace. His success stemmed from his innovative reforms to the Macedonian army. Philip intervened repeatedly in the affairs of the southern city-states, culminating in his invasion of 338 BC.
+
+ Decisively defeating an allied army of Thebes and Athens at the Battle of Chaeronea (338 BC), he became de facto hegemon of all of Greece, except Sparta. He compelled the majority of the city-states to join the League of Corinth, allying them to him, and preventing them from warring with each other. Philip then entered into war against the Achaemenid Empire but was assassinated by Pausanias of Orestis early on in the conflict.
+
+ Alexander the Great, son and successor of Philip, continued the war. Alexander defeated Darius III of Persia and completely destroyed the Achaemenid Empire, annexing it to Macedon and earning himself the epithet 'the Great'. When Alexander died in 323 BC, Greek power and influence was at its zenith. However, there had been a fundamental shift away from the fierce independence and classical culture of the poleis—and instead towards the developing Hellenistic culture.
+
+ The Hellenistic period lasted from 323 BC, which marked the end of the wars of Alexander the Great, to the annexation of Greece by the Roman Republic in 146 BC. Although the establishment of Roman rule did not break the continuity of Hellenistic society and culture, which remained essentially unchanged until the advent of Christianity, it did mark the end of Greek political independence.
+
+ After the death of Alexander, his empire was, after quite some conflict, divided among his generals, resulting in the Ptolemaic Kingdom (Egypt and adjoining North Africa), the Seleucid Empire (the Levant, Mesopotamia and Persia) and the Antigonid dynasty (Macedonia). In the intervening period, the poleis of Greece were able to wrest back some of their freedom, although still nominally subject to the Macedonian Kingdom.
+
+ During the Hellenistic period, the importance of "Greece proper" (that is, the territory of modern Greece) within the Greek-speaking world declined sharply. The great centers of Hellenistic culture were Alexandria and Antioch, capitals of the Ptolemaic Kingdom and the Seleucid Empire, respectively.
+
+ The conquests of Alexander had numerous consequences for the Greek city-states. It greatly widened the horizons of the Greeks and led to a steady emigration, particularly of the young and ambitious, to the new Greek empires in the east.[48] Many Greeks migrated to Alexandria, Antioch and the many other new Hellenistic cities founded in Alexander's wake, as far away as what are now Afghanistan and Pakistan, where the Greco-Bactrian Kingdom and the Indo-Greek Kingdom survived until the end of the first century BC.
+
+ The city-states within Greece formed themselves into two leagues: the Achaean League (including Thebes, Corinth and Argos) and the Aetolian League (including Sparta and Athens). For much of the period until the Roman conquest, these leagues were usually at war with each other, and/or allied to different sides in the conflicts between the Diadochi (the successor states to Alexander's empire).
+
+ The Antigonid Kingdom became involved in a war with the Roman Republic in the late 3rd century. Although the First Macedonian War was inconclusive, the Romans, in typical fashion, continued to make war on Macedon until it was completely absorbed into the Roman Republic (by 149 BC). In the east the unwieldy Seleucid Empire gradually disintegrated, although a rump survived until 64 BC, whilst the Ptolemaic Kingdom continued in Egypt until 30 BC, when it too was conquered by the Romans. The Aetolian league grew wary of Roman involvement in Greece, and sided with the Seleucids in the Roman–Seleucid War; when the Romans were victorious, the league was effectively absorbed into the Republic. Although the Achaean league outlasted both the Aetolian league and Macedon, it was also soon defeated and absorbed by the Romans in 146 BC, bringing an end to the independence of all of Greece.
+
+ The Greek peninsula came under Roman rule during the 146 BC conquest of Greece after the Battle of Corinth. Macedonia became a Roman province while southern Greece came under the surveillance of Macedonia's prefect; however, some Greek poleis managed to maintain a partial independence and avoid taxation. The Aegean islands were added to this territory in 133 BC. Athens and other Greek cities revolted in 88 BC, and the revolt was crushed by the Roman general Sulla. The Roman civil wars devastated the land even further, until Augustus organized the peninsula as the province of Achaea in 27 BC.
+
+ Greece was a key eastern province of the Roman Empire, as Roman culture had long been, in fact, Greco-Roman. The Greek language served as a lingua franca in the East and in Italy, and many Greek intellectuals, such as Galen, did most of their work in Rome.
+
+ The territory of Greece is mountainous, and as a result, ancient Greece consisted of many smaller regions each with its own dialect, cultural peculiarities, and identity. Regionalism and regional conflicts were a prominent feature of ancient Greece. Cities tended to be located in valleys between mountains, or on coastal plains, and dominated a certain area around them.
+
+ In the south lay the Peloponnese, itself consisting of the regions of Laconia (southeast), Messenia (southwest), Elis (west), Achaia (north), Korinthia (northeast), Argolis (east), and Arcadia (center). These names survive to the present day as regional units of modern Greece, though with somewhat different boundaries. Mainland Greece to the north, nowadays known as Central Greece, consisted of Aetolia and Acarnania in the west, Locris, Doris, and Phocis in the center, while in the east lay Boeotia, Attica, and Megaris. Northeast lay Thessaly, while Epirus lay to the northwest. Epirus stretched from the Ambracian Gulf in the south to the Ceraunian mountains and the Aoos river in the north, and consisted of Chaonia (north), Molossia (center), and Thesprotia (south). In the northeast corner was Macedonia,[49] originally consisting of Lower Macedonia and its regions, such as Elimeia, Pieria, and Orestis. Around the time of Alexander I of Macedon, the Argead kings of Macedon started to expand into Upper Macedonia, lands inhabited by independent Macedonian tribes like the Lyncestae and the Elmiotae, and to the west, beyond the Axius river, into Eordaia, Bottiaea, Mygdonia, and Almopia, regions settled by Thracian tribes.[50] To the north of Macedonia lay various non-Greek peoples such as the Paeonians due north, the Thracians to the northeast, and the Illyrians, with whom the Macedonians were frequently in conflict, to the northwest. Chalcidice was settled early on by southern Greek colonists and was considered part of the Greek world, while from the late 2nd millennium BC substantial Greek settlement also occurred on the eastern shores of the Aegean, in Anatolia.
+
+ During the Archaic period, the population of Greece grew beyond the capacity of its limited arable land (according to one estimate, the population of ancient Greece increased by a factor larger than ten during the period from 800 BC to 400 BC, increasing from a population of 800,000 to a total estimated population of 10 to 13 million).[51]
+
+ From about 750 BC the Greeks began 250 years of expansion, settling colonies in all directions. To the east, the Aegean coast of Asia Minor was colonized first, followed by Cyprus and the coasts of Thrace, the Sea of Marmara, and the south coast of the Black Sea.
+
+ Eventually Greek colonization reached as far northeast as present-day Ukraine and Russia (Taganrog). To the west the coasts of Illyria, Sicily and Southern Italy were settled, followed by Southern France, Corsica, and even northeastern Spain. Greek colonies were also founded in Egypt and Libya.
+
+ Modern Syracuse, Naples, Marseille and Istanbul had their beginnings as the Greek colonies Syracusae (Συράκουσαι), Neapolis (Νεάπολις), Massalia (Μασσαλία) and Byzantion (Βυζάντιον). These colonies played an important role in the spread of Greek influence throughout Europe and also aided in the establishment of long-distance trading networks between the Greek city-states, boosting the economy of ancient Greece.
+
+ Ancient Greece consisted of several hundred relatively independent city-states (poleis). This was a situation unlike that in most other contemporary societies, which were either tribal or kingdoms ruling over relatively large territories. Undoubtedly the geography of Greece—divided and sub-divided by hills, mountains, and rivers—contributed to the fragmentary nature of ancient Greece. On the one hand, the ancient Greeks had no doubt that they were "one people"; they had the same religion, same basic culture, and same language. Furthermore, the Greeks were very aware of their tribal origins; Herodotus was able to extensively categorise the city-states by tribe. Yet, although these higher-level relationships existed, they seem to have rarely had a major role in Greek politics. The independence of the poleis was fiercely defended; unification was something rarely contemplated by the ancient Greeks. Even when, during the second Persian invasion of Greece, a group of city-states allied themselves to defend Greece, the vast majority of poleis remained neutral, and after the Persian defeat, the allies quickly returned to infighting.[53]
+
+ Thus, the major peculiarities of the ancient Greek political system were its fragmentary nature (which does not seem to have had a tribal origin) and the particular focus on urban centers within otherwise tiny states. The peculiarities of the Greek system are further evidenced by the colonies that they set up throughout the Mediterranean Sea, which, though they might count a certain Greek polis as their 'mother' (and remain sympathetic to her), were completely independent of the founding city.
+
+ Inevitably smaller poleis might be dominated by larger neighbors, but conquest or direct rule by another city-state appears to have been quite rare. Instead the poleis grouped themselves into leagues, membership of which was in a constant state of flux. Later in the Classical period, the leagues became fewer and larger and came to be dominated by one city (particularly Athens, Sparta, and Thebes); poleis were often compelled to join under threat of war (or as part of a peace treaty). Even after Philip II of Macedon "conquered" the heartlands of ancient Greece, he did not attempt to annex the territory, or unify it into a new province, but simply compelled most of the poleis to join his own Corinthian League.
+
+ Initially many Greek city-states seem to have been petty kingdoms; there was often a city official carrying some residual, ceremonial functions of the king (basileus), e.g., the archon basileus in Athens.[54] However, by the Archaic period and the first historical consciousness, most had already become aristocratic oligarchies. It is unclear exactly how this change occurred. For instance, in Athens, the kingship had been reduced to a hereditary, lifelong chief magistracy (archon) by c. 1050 BC; by 753 BC this had become a decennial, elected archonship; and finally by 683 BC an annually elected archonship. Through each stage more power would have been transferred to the aristocracy as a whole, and away from a single individual.
+
+ Inevitably, the domination of politics and concomitant aggregation of wealth by small groups of families was apt to cause social unrest in many poleis. In many cities a tyrant (not in the modern sense of a repressive autocrat) would at some point seize control and govern according to his own will; often a populist agenda would help sustain him in power. In a system wracked with class conflict, government by a 'strongman' was often the best solution.
+
+ Athens fell under a tyranny in the second half of the 6th century. When this tyranny was ended, the Athenians founded the world's first democracy as a radical solution to prevent the aristocracy regaining power. A citizens' assembly (the Ecclesia), for the discussion of city policy, had existed since the reforms of Draco in 621 BC; all citizens were permitted to attend after the reforms of Solon (early 6th century), but the poorest citizens could not address the assembly or run for office. With the establishment of the democracy, the assembly became the de jure mechanism of government; all citizens had equal privileges in the assembly. However, non-citizens, such as metics (foreigners living in Athens) or slaves, had no political rights at all.
+
+ After the rise of the democracy in Athens, other city-states founded democracies. However, many retained more traditional forms of government. As so often in other matters, Sparta was a notable exception to the rest of Greece, ruled through the whole period by not one, but two hereditary monarchs. This was a form of diarchy. The Kings of Sparta belonged to the Agiads and the Eurypontids, descendants respectively of Eurysthenes and Procles. Both dynasties' founders were believed to be twin sons of Aristodemus, a Heraclid ruler. However, the powers of these kings were held in check by both a council of elders (the Gerousia) and magistrates specifically appointed to watch over the kings (the Ephors).
+
+ Only free, land-owning, native-born men could be citizens entitled to the full protection of the law in a city-state. In most city-states, unlike the situation in Rome, social prominence did not confer special rights. Sometimes families controlled public religious functions, but this ordinarily did not give any extra power in the government. In Athens, the population was divided into four social classes based on wealth. People could change classes if they made more money. In Sparta, all male citizens were called homoioi, meaning "peers". However, Spartan kings, who served as the city-state's dual military and religious leaders, came from two families.[citation needed]
+
+ Slaves had no power or status. They had the right to have a family and own property, subject to their master's goodwill and permission, but they had no political rights. By 600 BC chattel slavery had spread in Greece. By the 5th century BC slaves made up one-third of the total population in some city-states. Between forty and eighty per cent of the population of Classical Athens were slaves.[55] Slaves outside of Sparta almost never revolted because they were made up of too many nationalities and were too scattered to organize. However, unlike later Western culture, the Ancient Greeks did not think in terms of race.[56]
+
+ Most families owned slaves as household servants and laborers, and even poor families might have owned a few slaves. Owners were not allowed to beat or kill their slaves. Owners often promised to free slaves in the future to encourage slaves to work hard. Unlike in Rome, freedmen did not become citizens. Instead, they were mixed into the population of metics, which included people from foreign countries or other city-states who were officially allowed to live in the state.
+
+ City-states legally owned slaves. These public slaves had a larger measure of independence than slaves owned by families, living on their own and performing specialized tasks. In Athens, public slaves were trained to look out for counterfeit coinage, while temple slaves acted as servants of the temple's deity and Scythian slaves were employed in Athens as a police force corralling citizens to political functions.
115
+
116
+ Sparta had a special type of slaves called helots. Helots were Messenians enslaved by the state during the Messenian Wars and assigned to families, with whom they were forced to stay. Helots raised food and did household chores so that women could concentrate on raising strong children while men could devote their time to training as hoplites. Their masters treated them harshly, and helots revolted against their masters several times before winning their freedom in 370/69 BC.[57]
117
+
118
+ For most of Greek history, education was private, except in Sparta. During the Hellenistic period, some city-states established public schools. Only wealthy families could afford a teacher. Boys learned how to read, write and quote literature. They also learned to sing and play one musical instrument and were trained as athletes for military service. They studied not for a job but to become an effective citizen. Girls also learned to read, write and do simple arithmetic so they could manage the household. They almost never received education after childhood.[citation needed]
119
+
120
+ Boys went to school at the age of seven, or went to the barracks if they lived in Sparta. The three types of teachers were the grammatistes for arithmetic, the kitharistes for music and dancing, and the paedotribae for sports.
121
+
122
+ Boys from wealthy families attending the private school lessons were taken care of by a paidagogos, a household slave selected for this task who accompanied the boy during the day. Classes were held in teachers' private houses and included reading, writing, mathematics, singing, and playing the lyre and flute. When the boy turned 12, the schooling began to include sports such as wrestling, running, and throwing the discus and javelin. In Athens some older youths attended an academy for the finer disciplines such as culture, sciences, music, and the arts. The schooling ended at age 18 and was usually followed by one or two years of military training in the army.[58]
123
+
124
+ Only a small number of boys continued their education after childhood, as in the Spartan agoge. A crucial part of a wealthy teenager's education was a mentorship with an elder, which in a few places and times may have included pederasty.[citation needed] The teenager learned by watching his mentor talk about politics in the agora, helping him perform his public duties, exercising with him in the gymnasium and attending symposia with him. The richest students continued their education by studying with famous teachers. Some of Athens' greatest such schools included the Lyceum (the so-called Peripatetic school founded by Aristotle of Stageira) and the Platonic Academy (founded by Plato of Athens). The education system of the wealthy ancient Greeks is also called Paideia.[citation needed]
125
+
126
+ At its economic height, in the 5th and 4th centuries BC, ancient Greece was the most advanced economy in the world. According to some economic historians, it was one of the most advanced pre-industrial economies. This is demonstrated by the average daily wage of the Greek worker, which, in terms of wheat, was about 12 kg. This was more than three times the average daily wage of an Egyptian worker during the Roman period, about 3.75 kg (a ratio of roughly 12 ÷ 3.75 ≈ 3.2).[59]
127
+
128
+ At least in the Archaic Period, the fragmentary nature of ancient Greece, with many competing city-states, increased the frequency of conflict but conversely limited the scale of warfare. Unable to maintain professional armies, the city-states relied on their own citizens to fight. This inevitably reduced the potential duration of campaigns, as citizens would need to return to their own professions (especially in the case of farmers, for example). Campaigns would therefore often be restricted to summer. When battles occurred, they were usually set-piece engagements intended to be decisive. Casualties were slight compared to later battles, rarely amounting to more than 5% of the losing side, but the slain often included the most prominent citizens and generals who led from the front.
129
+
130
+ The scale and scope of warfare in ancient Greece changed dramatically as a result of the Greco-Persian Wars. To fight the enormous armies of the Achaemenid Empire was effectively beyond the capabilities of a single city-state. The eventual triumph of the Greeks was achieved by alliances of city-states (the exact composition changing over time), allowing the pooling of resources and division of labor. Although alliances between city-states occurred before this time, nothing on this scale had been seen before. The rise of Athens and Sparta as pre-eminent powers during this conflict led directly to the Peloponnesian War, which saw further development of the nature of warfare, strategy and tactics. Fought between leagues of cities dominated by Athens and Sparta, the war drew on increased manpower and financial resources, which enlarged its scale and allowed the diversification of warfare. Set-piece battles during the Peloponnesian War proved indecisive, and instead there was increased reliance on strategies of attrition, naval battles, blockades and sieges. These changes greatly increased the number of casualties and the disruption of Greek society.
131
+ Athens owned one of the largest war fleets in ancient Greece. It had over 200 triremes, each powered by 170 oarsmen seated in three rows on each side of the ship. The city could afford such a large fleet, requiring over 34,000 oarsmen in total, because it owned productive silver mines that were worked by slaves.
132
+
133
+ According to Josiah Ober, Greek city-states faced approximately a one-in-three chance of destruction during the archaic and classical period.[60]
134
+
135
+ Ancient Greek philosophy focused on the role of reason and inquiry. In many ways, it had an important influence on modern philosophy, as well as modern science. Clear unbroken lines of influence lead from ancient Greek and Hellenistic philosophers, to medieval Muslim philosophers and Islamic scientists, to the European Renaissance and Enlightenment, to the secular sciences of the modern day.
136
+
137
+ Neither reason nor inquiry began with the Greeks. Defining the difference between the Greek quest for knowledge and the quests of the elder civilizations, such as the ancient Egyptians and Babylonians, has long been a topic of study by theorists of civilization.
138
+
139
+ Some of the best-known philosophers of ancient Greece were Plato and Socrates. Writings such as Plato's Republic have provided valuable information about ancient Greek society.
140
+
141
+ The earliest Greek literature was poetry, and was composed for performance rather than private consumption.[61] The earliest Greek poet known is Homer, although he was certainly part of an existing tradition of oral poetry.[62] Homer's poetry, though it was developed around the same time that the Greeks developed writing, would have been composed orally; the first poet to certainly compose their work in writing was Archilochus, a lyric poet from the mid-seventh century BC.[63] Tragedy developed around the end of the archaic period, taking elements from across the pre-existing genres of late archaic poetry.[64] Towards the beginning of the classical period, comedy began to develop: the earliest date associated with the genre is 486 BC, when a competition for comedy became an official event at the City Dionysia in Athens, though the first preserved ancient comedy is Aristophanes' Acharnians, produced in 425 BC.[65]
142
+
143
+ Like poetry, Greek prose had its origins in the archaic period, and the earliest writers of Greek philosophy, history, and medical literature all date to the sixth century BC.[66] Prose first emerged as the writing style adopted by the presocratic philosophers Anaximander and Anaximenes—though Thales of Miletus, considered the first Greek philosopher, apparently wrote nothing.[67] Prose as a genre reached maturity in the classical era,[68] and the major Greek prose genres—philosophy, history, rhetoric, and dialogue—developed in this period.[69]
144
+
145
+ The Hellenistic period saw the literary centre of the Greek world move from Athens, where it had been in the classical period, to Alexandria. At the same time, other Hellenistic kings such as the Antigonids and the Attalids were patrons of scholarship and literature, turning Pella and Pergamon respectively into cultural centres.[70] It was thanks to this cultural patronage by Hellenistic kings, and especially to the Museum at Alexandria, that so much ancient Greek literature has survived.[71] The Library of Alexandria, part of the Museum, had the previously unenvisaged aim of collecting together copies of all known authors in Greek. Almost all of the surviving non-technical Hellenistic literature is poetry,[72] and Hellenistic poetry tended to be highly intellectual,[73] blending different genres and traditions, and avoiding linear narratives.[74] The Hellenistic period also saw a shift in the ways literature was consumed: while in the archaic and classical periods literature had typically been experienced in public performance, in the Hellenistic period it was more commonly read privately.[75] At the same time, Hellenistic poets began to write for private, rather than public, consumption.[76]
146
+
147
+ With Octavian's victory at Actium in 31 BC, Rome began to become a major centre of Greek literature, as important Greek authors such as Strabo and Dionysius of Halicarnassus came to Rome.[77] The period of greatest innovation in Greek literature under Rome was the "long second century" from approximately AD 80 to around AD 230.[78] This innovation was especially marked in prose, with the development of the novel and a revival of prominence for display oratory both dating to this period.[79]
148
+
149
+ Music was present almost universally in Greek society, from marriages and funerals to religious ceremonies, theatre, folk music and the ballad-like reciting of epic poetry. There are significant fragments of actual Greek musical notation as well as many literary references to ancient Greek music. Greek art depicts musical instruments and dance. The word music derives from the name of the Muses, the daughters of Zeus who were patron goddesses of the arts.
150
+
151
+ Ancient Greek mathematics contributed many important developments to the field of mathematics, including the basic rules of geometry, the idea of formal mathematical proof, and discoveries in number theory, mathematical analysis and applied mathematics, and came close to establishing integral calculus. The discoveries of several Greek mathematicians, including Pythagoras, Euclid, and Archimedes, are still used in mathematical teaching today.
152
+
153
+ The Greeks developed astronomy, which they treated as a branch of mathematics, to a highly sophisticated level. The first geometrical, three-dimensional models to explain the apparent motion of the planets were developed in the 4th century BC by Eudoxus of Cnidus and Callippus of Cyzicus. Their younger contemporary Heraclides Ponticus proposed that the Earth rotates around its axis. In the 3rd century BC Aristarchus of Samos was the first to suggest a heliocentric system. Archimedes, in his treatise The Sand Reckoner, revives Aristarchus' hypothesis that "the fixed stars and the Sun remain unmoved, while the Earth revolves about the Sun on the circumference of a circle". Otherwise, only fragmentary descriptions of Aristarchus' idea survive.[80] Eratosthenes, using the angles of shadows created at widely separated regions, estimated the circumference of the Earth with great accuracy.[81] In the 2nd century BC Hipparchus of Nicaea made a number of contributions, including the first measurement of precession and the compilation of the first star catalog, in which he proposed the modern system of apparent magnitudes.
154
+
155
+ The Antikythera mechanism, a device for calculating the movements of planets, dates from about 80 BC, and was the first ancestor of the astronomical computer. It was discovered in an ancient shipwreck off the Greek island of Antikythera, between Kythera and Crete. The device became famous for its use of a differential gear, previously believed to have been invented in the 16th century, and the miniaturization and complexity of its parts, comparable to a clock made in the 18th century. The original mechanism is displayed in the Bronze collection of the National Archaeological Museum of Athens, accompanied by a replica.
156
+
157
+ The ancient Greeks also made important discoveries in the medical field. Hippocrates was a physician of the Classical period, and is considered one of the most outstanding figures in the history of medicine. He is referred to as the "father of medicine"[82][83] in recognition of his lasting contributions to the field as the founder of the Hippocratic school of medicine. This intellectual school revolutionized medicine in ancient Greece, establishing it as a discipline distinct from other fields that it had traditionally been associated with (notably theurgy and philosophy), thus making medicine a profession.[84][85]
158
+
159
+ The art of ancient Greece has exercised an enormous influence on the culture of many countries from ancient times to the present day, particularly in the areas of sculpture and architecture. In the West, the art of the Roman Empire was largely derived from Greek models. In the East, Alexander the Great's conquests initiated several centuries of exchange between Greek, Central Asian and Indian cultures, resulting in Greco-Buddhist art, with ramifications as far as Japan. Following the Renaissance in Europe, the humanist aesthetic and the high technical standards of Greek art inspired generations of European artists. Well into the 19th century, the classical tradition derived from Greece dominated the art of the western world.
160
+
161
+ Religion was a central part of ancient Greek life.[86] Though the Greeks of different cities and tribes worshipped similar gods, religious practices were not uniform and the gods were thought of differently in different places.[87] The Greeks were polytheistic, worshipping many gods, but as early as the sixth century BC a pantheon of twelve Olympians began to develop.[87] Greek religion was influenced by the practices of the Greeks' near eastern neighbours at least as early as the archaic period, and by the Hellenistic period this influence was seen in both directions.[88]
162
+
163
+ The most important religious act in ancient Greece was animal sacrifice, most commonly of sheep and goats.[89] Sacrifice was accompanied by public prayer,[90] and prayer and hymns were themselves a major part of ancient Greek religious life.[91]
164
+
165
+ The civilization of ancient Greece has been immensely influential on language, politics, educational systems, philosophy, science, and the arts. It became the Leitkultur of the Roman Empire to the point of marginalizing native Italic traditions. As Horace put it,
166
+
167
+ Via the Roman Empire, Greek culture came to be foundational to Western culture in general.
168
+ The Byzantine Empire inherited Classical Greek culture directly, without Latin intermediation, and the preservation of classical Greek learning in medieval Byzantine tradition further exerted strong influence on the Slavs and later on the Islamic Golden Age and the Western European Renaissance. A modern revival of Classical Greek learning took place in the Neoclassicism movement in 18th- and 19th-century Europe and the Americas.
en/2284.html.txt ADDED
The diff for this file is too large to render. See raw diff
 
en/2285.html.txt ADDED
The diff for this file is too large to render. See raw diff
 
en/2286.html.txt ADDED
@@ -0,0 +1,136 @@
1
+
2
+
3
+
4
+
5
+ Ancient Greek includes the forms of the Greek language used in ancient Greece and the ancient world from around the 9th century BC to the 6th century AD. It is often roughly divided into the Archaic period (9th to 6th centuries BC), Classical period (5th and 4th centuries BC), and Hellenistic period (Koine Greek, 3rd century BC to 4th century AD).
6
+
7
+ It is preceded by Mycenaean Greek and succeeded by Medieval Greek. Koine is regarded as a separate historical stage although its earliest form closely resembles Attic Greek and its latest form approaches Medieval Greek. There were several regional dialects of ancient Greek, of which Attic Greek developed into Koine.
8
+
9
+ Ancient Greek was the language of Homer and of fifth-century Athenian historians, playwrights, and philosophers, as well as being the original language of the New Testament of the Christian Bible, the best-selling book in world history. Ancient Greek has contributed many words to English vocabulary and has been a standard subject of study in educational institutions of the Western world since the Renaissance. This article primarily contains information about the Epic and Classical periods of the language.
10
+
11
+ Ancient Greek was a pluricentric language, divided into many dialects. The main dialect groups are Attic and Ionic, Aeolic, Arcadocypriot, and Doric, many of them with several subdivisions. Some dialects are found in standardized literary forms used in literature, while others are attested only in inscriptions.
12
+
13
+ There are also several historical forms. Homeric Greek is a literary form of Archaic Greek (derived primarily from Ionic and Aeolic) used in the epic poems, the Iliad and the Odyssey, and in later poems by other authors. Homeric Greek had significant differences in grammar and pronunciation from Classical Attic and other Classical-era dialects.
14
+
15
+ The origins, early form and development of the Hellenic language family are not well understood because of a lack of contemporaneous evidence. Several theories exist about what Hellenic dialect groups may have existed between the divergence of early Greek-like speech from the common Proto-Indo-European language and the Classical period. They have the same general outline, but differ in some of the detail. The only attested dialect from this period[a] is Mycenaean Greek, but its relationship to the historical dialects and the historical circumstances of the times imply that the overall groups already existed in some form.
16
+
17
+ Scholars assume that the major ancient Greek dialect groups developed no later than 1120 BC, at the time of the Dorian invasions, and that their first appearances in precise alphabetic writing date to the 8th century BC. The invasion would not be "Dorian" unless the invaders had some cultural relationship to the historical Dorians. The invasion is known to have displaced populations to the later Attic-Ionic regions, who regarded themselves as descendants of the population displaced by or contending with the Dorians.
18
+
19
+ The Greeks of this period believed there were three major divisions of all Greek people – Dorians, Aeolians, and Ionians (including Athenians), each with their own defining and distinctive dialects. Allowing for their oversight of Arcadian, an obscure mountain dialect, and Cypriot, far from the center of Greek scholarship, this division of people and language is quite similar to the results of modern archaeological-linguistic investigation.
20
+
21
+ One standard formulation for the dialects is:[2]
22
+
23
+ Western group:
24
+
25
+
26
+
27
+ Central group:
28
+
29
+
30
+
31
+ Eastern group:
32
+
33
+
34
+
35
+ Western group:
36
+
37
+
38
+
39
+ Eastern group:
40
+
41
+
42
+
43
+ West vs. non-West Greek is the strongest-marked and earliest division, with non-West in subsets of Ionic-Attic (or Attic-Ionic) and Aeolic vs. Arcadocypriot, or Aeolic and Arcado-Cypriot vs. Ionic-Attic. Often non-West is called 'East Greek'.
44
+
45
+ Arcadocypriot apparently descended more closely from the Mycenaean Greek of the Bronze Age.
46
+
47
+ Boeotian had come under a strong Northwest Greek influence, and can in some respects be considered a transitional dialect. Thessalian likewise had come under Northwest Greek influence, though to a lesser degree.
48
+
49
+ Pamphylian Greek, spoken in a small area on the southwestern coast of Anatolia and little preserved in inscriptions, may be either a fifth major dialect group or Mycenaean Greek overlaid by Doric, with a non-Greek native influence.
50
+
51
+ Most of the dialect sub-groups listed above had further subdivisions, generally equivalent to a city-state and its surrounding territory, or to an island. Doric notably had several intermediate divisions as well, into Island Doric (including Cretan Doric), Southern Peloponnesus Doric (including Laconian, the dialect of Sparta), and Northern Peloponnesus Doric (including Corinthian).
52
+
53
+ The Lesbian dialect was Aeolic Greek.
54
+
55
+ All the groups were represented by colonies beyond Greece proper as well, and these colonies generally developed local characteristics, often under the influence of settlers or neighbors speaking different Greek dialects.
56
+
57
+ The dialects outside the Ionic group are known mainly from inscriptions, notable exceptions being:
58
+
59
+ After the conquests of Alexander the Great in the late 4th century BC, a new international dialect known as Koine or Common Greek developed, largely based on Attic Greek, but with influence from other dialects. This dialect slowly replaced most of the older dialects, although the Doric dialect has survived in the Tsakonian language, which is spoken in the region of modern Sparta. Doric has also passed down its aorist terminations into most verbs of Demotic Greek. By about the 6th century AD, the Koine had slowly metamorphosed into Medieval Greek.
60
+
61
+ Ancient Macedonian was an Indo-European language. Because no sample texts survive, it is impossible to ascertain whether it was a Greek dialect or even related to the Greek language at all; its exact relationship remains unclear. Macedonian could also be related to the Thracian and Phrygian languages to some extent. The Macedonian dialect (or language) appears to have been replaced by Attic Greek during the Hellenistic period. Late 20th century epigraphic discoveries in the Greek region of Macedonia, such as the Pella curse tablet, suggest that ancient Macedonian was a variety of north-western ancient Greek, or at least was eventually replaced by a Greek dialect.[4]
62
+
63
+ Ancient Greek differs from Proto-Indo-European (PIE) and other Indo-European languages in certain ways. In phonotactics, ancient Greek words could end only in a vowel or /n s r/; final stops were lost, as in γάλα "milk", compared with γάλακτος "of milk" (genitive). Ancient Greek of the classical period also differed in both the inventory and distribution of original PIE phonemes due to numerous sound changes,[5] notably the following:
64
+
65
+ The pronunciation of ancient Greek was very different from that of Modern Greek. Ancient Greek had long and short vowels; many diphthongs; double and single consonants; voiced, voiceless, and aspirated stops; and a pitch accent. In Modern Greek, all vowels and consonants are short. Many vowels and diphthongs once pronounced distinctly are pronounced as /i/ (iotacism). Some of the stops and glides in diphthongs have become fricatives, and the pitch accent has changed to a stress accent. Many of the changes took place in the Koine Greek period. The writing system of Modern Greek, however, does not reflect all pronunciation changes.
66
+
67
+ The examples below represent Attic Greek in the 5th century BC. Ancient pronunciation cannot be reconstructed with certainty, but Greek from the period is well documented, and there is little disagreement among linguists as to the general nature of the sounds that the letters represent.
68
+
69
+ [ŋ] occurred as an allophone of /n/ that was used before velars and as an allophone of /ɡ/ before nasals. /r/ was probably voiceless when word-initial (written ῥ). /s/ was assimilated to [z] before voiced consonants.
70
+
71
+ /oː/ raised to [uː], probably by the 4th century BC.
72
+
73
+ Greek, like all of the older Indo-European languages, is highly inflected. It is highly archaic in its preservation of Proto-Indo-European forms. In ancient Greek, nouns (including proper nouns) have five cases (nominative, genitive, dative, accusative, and vocative), three genders (masculine, feminine, and neuter), and three numbers (singular, dual, and plural). Verbs have four moods (indicative, imperative, subjunctive, and optative) and three voices (active, middle, and passive), as well as three persons (first, second, and third) and various other forms. Verbs are conjugated through seven combinations of tense and aspect (generally simply called "tenses"): the present, future, and imperfect are imperfective in aspect; the aorist is perfective in aspect; and the present perfect, pluperfect and future perfect share the perfect aspect. Most tenses display all four moods and three voices, although there is no future subjunctive or imperative. Also, there is no imperfect subjunctive, optative or imperative. The infinitives and participles correspond to the finite combinations of tense, aspect, and voice.
74
+
75
+ The indicative of past tenses adds (conceptually, at least) a prefix /e-/, called the augment. This was probably originally a separate word, meaning something like "then", added because tenses in PIE had primarily aspectual meaning. The augment is added to the indicative of the aorist, imperfect, and pluperfect, but not to any of the other forms of the aorist (no other forms of the imperfect and pluperfect exist).
76
+
77
+ The two kinds of augment in Greek are syllabic and quantitative. The syllabic augment is added to stems beginning with consonants, and simply prefixes e (stems beginning with r, however, add er). The quantitative augment is added to stems beginning with vowels, and involves lengthening the vowel:
78
+
79
+ Some verbs augment irregularly; the most common variation is e → ei. The irregularity can be explained diachronically by the loss of s between vowels.
80
+ In verbs with a preposition as a prefix, the augment is placed not at the start of the word, but between the preposition and the original verb. For example, προσ(-)βάλλω (I attack) goes to προσέβαλον in the aorist. However, compound verbs consisting of a prefix that is not a preposition retain the augment at the start of the word: αὐτο(-)μολῶ goes to ηὐτομόλησα in the aorist.
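+ Because the augment is a largely mechanical rule, a toy example may help. The following is a minimal Python sketch, assuming romanized stems with long vowels written as macrons; the function name and the simplified vowel table are illustrative assumptions, and diphthongs, breathings, and irregularities such as the e → ei verbs noted above are ignored.
+
+   # Toy sketch of the ancient Greek past-indicative augment (romanized).
+   # Assumption: long vowels are written with macrons (ē, ī, ō, ū).
+   LENGTHEN = {"a": "ē", "e": "ē", "i": "ī", "o": "ō", "u": "ū"}
+
+   def augment(stem):
+       """Apply the syllabic or quantitative augment to a verb stem."""
+       if stem[0] in LENGTHEN:
+           # Quantitative augment: an initial vowel is lengthened.
+           return LENGTHEN[stem[0]] + stem[1:]
+       if stem[0] == "r":
+           # Stems beginning with r prefix er-, doubling the r (riptō -> erriptō).
+           return "er" + stem
+       # Syllabic augment: consonant-initial stems simply prefix e-.
+       return "e" + stem
+
+   print(augment("ballon"))  # eballon, cf. the imperfect of ballō
+   print(augment("agon"))    # ēgon, cf. the imperfect of agō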
81
+
82
+ Following Homer's practice, the augment is sometimes not made in poetry, especially epic poetry.
83
+
84
+ The augment sometimes substitutes for reduplication; see below.
85
+
86
+ Almost all forms of the perfect, pluperfect, and future perfect reduplicate the initial syllable of the verb stem. (Note that a few irregular forms of perfect do not reduplicate, whereas a handful of irregular aorists reduplicate.) The three types of reduplication are:
87
+
88
+ Irregular reduplication can be understood diachronically. For example, lambanō (root lab) has the perfect stem eilēpha (not *lelēpha) because it was originally slambanō, with perfect seslēpha, becoming eilēpha through compensatory lengthening.
89
+
90
+ Reduplication is also visible in the present tense stems of certain verbs. These stems add a syllable consisting of the root's initial consonant followed by i. A nasal stop appears after the reduplication in some verbs.[6]
91
+
92
+ The earliest extant examples of ancient Greek writing (circa 1450 BC) are in the syllabic script Linear B. Beginning in the 8th century BC, however, the Greek alphabet became standard, albeit with some variation among dialects. Early texts are written in boustrophedon style, but left-to-right became standard during the classical period. Modern editions of ancient Greek texts are usually written with accents and breathing marks, interword spacing, modern punctuation, and sometimes mixed case, but these were all introduced later.
93
+
94
+ The beginning of Homer's Iliad exemplifies the Archaic period of ancient Greek (see Homeric Greek for more details):
95
+
96
+ Μῆνιν ἄειδε, θεά, Πηληϊάδεω Ἀχιλῆος
97
+ οὐλομένην, ἣ μυρί' Ἀχαιοῖς ἄλγε' ἔθηκε,
98
+ πολλὰς δ' ἰφθίμους ψυχὰς Ἄϊδι προΐαψεν
99
+ ἡρώων, αὐτοὺς δὲ ἑλώρια τεῦχε κύνεσσιν
100
+ οἰωνοῖσί τε πᾶσι· Διὸς δ' ἐτελείετο βουλή·
101
+ ἐξ οὗ δὴ τὰ πρῶτα διαστήτην ἐρίσαντε
102
+ Ἀτρεΐδης τε ἄναξ ἀνδρῶν καὶ δῖος Ἀχιλλεύς.
103
+
104
+ The beginning of Apology by Plato exemplifies Attic Greek from the Classical period of ancient Greek:
105
+
106
+ Using the IPA:
107
+
108
+ Transliterated into the Latin alphabet using a modern version of the Erasmian scheme:
109
+
110
+ Translated into English:
111
+
112
+ The study of ancient Greek in European countries, in addition to Latin, occupied an important place in the syllabus from the Renaissance until the beginning of the 20th century. Ancient Greek is still taught as a compulsory or optional subject, especially at traditional or elite schools throughout Europe, such as public schools and grammar schools in the United Kingdom. It is compulsory in the liceo classico in Italy, in the gymnasium in the Netherlands, in some classes in Austria, in the klasična gimnazija (a grammar school oriented toward classical languages) in Croatia, and in Classical Studies in ASO in Belgium, and it is optional in the humanities-oriented gymnasium in Germany (usually as a third language after Latin and English, from the age of 14 to 18). In 2006/07, 15,000 pupils studied ancient Greek in Germany according to the Federal Statistical Office of Germany, and 280,000 pupils studied it in Italy.[7] It is a compulsory subject alongside Latin in the humanities branch of the Spanish bachillerato. Ancient Greek is also taught at most major universities worldwide, often combined with Latin as part of the study of classics. It will also be taught in state primary schools in the UK, to boost children's language skills,[8][9] and will be offered as a foreign language to pupils in all primary schools from 2014 as part of a major drive to boost education standards, together with Latin, Mandarin, French, German, Spanish, and Italian.[10][needs update]
113
+
114
+ In Christian education, especially at the post-graduate level, the study of ancient Greek is commonplace if not compulsory. As a lingua franca of the Roman world at the time of Jesus, the Bible's accounts of his life and the rest of the New Testament were written in Greek; since these books form a vital part of Christian theology, studying the language they are written in is commonplace for those studying to become pastors or priests.
115
+
116
+ Ancient Greek is also taught as a compulsory subject in all gymnasiums and lyceums in Greece.[11][12] Starting in 2001, an annual international competition "Exploring the Ancient Greek Language and Culture" (Greek: Διαγωνισμός στην Αρχαία Ελληνική Γλώσσα και Γραμματεία) was run for upper secondary students through the Greek Ministry of National Education and Religious Affairs, with Greek language and cultural organisations as co-organisers.[13] It appears to have ceased in 2010, having failed to gain the recognition and acceptance of teachers.[14]
117
+
118
+ Modern authors rarely write in ancient Greek, though Jan Křesadlo wrote some poetry and prose in the language, and Harry Potter and the Philosopher's Stone,[15] some volumes of Asterix,[16] and The Adventures of Alix have been translated into ancient Greek. Ὀνόματα Kεχιασμένα (Onomata Kechiasmena) is the first magazine of crosswords and puzzles in ancient Greek.[17] Its first issue appeared in April 2015 as an annex to Hebdomada Aenigmatum. Alfred Rahlfs included a preface, a short history of the Septuagint text, and other front matter translated into ancient Greek in his 1935 edition of the Septuagint; Robert Hanhart also included the introductory remarks to the 2006 revised Rahlfs–Hanhart edition in the language.[18] Akropolis World News reports a weekly summary of the most important news in ancient Greek.[19]
119
+
120
+ Ancient Greek is also used by organizations and individuals, mainly Greek, who wish to denote their respect, admiration or preference for the use of this language. This use is sometimes considered graphical, nationalistic or humorous. In any case, the fact that modern Greeks can still wholly or partly understand texts written in non-archaic forms of ancient Greek shows the affinity of the modern Greek language to its ancestor.[19]
121
+
122
+ An isolated community near Trabzon, Turkey, an area where Pontic Greek is spoken, has been found to speak a variety of Modern Greek, Ophitic, that has parallels, both structurally and in its vocabulary, to ancient Greek not present in other varieties (linguistic conservatism).[20] As few as 5,000 people speak the dialect, and linguists believe that it is the closest living language to ancient Greek.[21]
123
+
124
+ Ancient Greek is often used in the coinage of modern technical terms in the European languages: see English words of Greek origin. Latinized forms of ancient Greek roots are used in many of the scientific names of species and in scientific terminology.
125
+
126
+ Proto-Greek
127
+
128
+ Mycenaean
129
+
130
+ Ancient
131
+
132
+ Koine
133
+
134
+ Medieval
135
+
136
+ Modern
en/2287.html.txt ADDED
@@ -0,0 +1,136 @@
1
+
2
+
3
+
4
+
5
+ Ancient Greek includes the forms of the Greek language used in ancient Greece and the ancient world from around the 9th century BC to the 6th century AD. It is often roughly divided into the Archaic period (9th to 6th centuries BC), Classical period (5th and 4th centuries BC), and Hellenistic period (Koine Greek, 3rd century BC to 4th century AD).
6
+
7
+ It is preceded by Mycenaean Greek and succeeded by Medieval Greek. Koine is regarded as a separate historical stage although its earliest form closely resembles Attic Greek and its latest form approaches Medieval Greek. There were several regional dialects of ancient Greek, of which Attic Greek developed into Koine.
8
+
9
+ Ancient Greek was the language of Homer and of fifth-century Athenian historians, playwrights, and philosophers, as well as being the original language of the New Testament of the Christian Bible, the best-selling book in world history. Ancient Greek has contributed many words to English vocabulary and has been a standard subject of study in educational institutions of the Western world since the Renaissance. This article primarily contains information about the Epic and Classical periods of the language.
10
+
11
+ Ancient Greek was a pluricentric language, divided into many dialects. The main dialect groups are Attic and Ionic, Aeolic, Arcadocypriot, and Doric, many of them with several subdivisions. Some dialects are found in standardized literary forms used in literature, while others are attested only in inscriptions.
12
+
13
+ There are also several historical forms. Homeric Greek is a literary form of Archaic Greek (derived primarily from Ionic and Aeolic) used in the epic poems, the Iliad and the Odyssey, and in later poems by other authors. Homeric Greek had significant differences in grammar and pronunciation from Classical Attic and other Classical-era dialects.
14
+
15
+ The origins, early form and development of the Hellenic language family are not well understood because of a lack of contemporaneous evidence. Several theories exist about what Hellenic dialect groups may have existed between the divergence of early Greek-like speech from the common Proto-Indo-European language and the Classical period. They have the same general outline, but differ in some of the detail. The only attested dialect from this period[a] is Mycenaean Greek, but its relationship to the historical dialects and the historical circumstances of the times imply that the overall groups already existed in some form.
16
+
17
+ Scholars assume that the major ancient Greek dialect groups developed no later than 1120 BC, at the time of the Dorian invasions, and that their first appearances in precise alphabetic writing date to the 8th century BC. The invasion would not be "Dorian" unless the invaders had some cultural relationship to the historical Dorians. The invasion is known to have displaced populations to the later Attic-Ionic regions, who regarded themselves as descendants of the population displaced by or contending with the Dorians.
18
+
19
+ The Greeks of this period believed there were three major divisions of all Greek people – Dorians, Aeolians, and Ionians (including Athenians), each with their own defining and distinctive dialects. Allowing for their oversight of Arcadian, an obscure mountain dialect, and Cypriot, far from the center of Greek scholarship, this division of people and language is quite similar to the results of modern archaeological-linguistic investigation.
20
+
21
+ One standard formulation for the dialects is:[2]
22
+
23
+ Western group:
24
+
25
+
26
+
27
+ Central group:
28
+
29
+
30
+
31
+ Eastern group:
32
+
33
+
34
+
35
+ Western group:
36
+
37
+
38
+
39
+ Eastern group:
40
+
41
+
42
+
43
+ West vs. non-West Greek is the strongest-marked and earliest division, with non-West in subsets of Ionic-Attic (or Attic-Ionic) and Aeolic vs. Arcadocypriot, or Aeolic and Arcado-Cypriot vs. Ionic-Attic. Often non-West is called 'East Greek'.
44
+
45
+ Arcadocypriot apparently descended more closely from the Mycenaean Greek of the Bronze Age.
46
+
47
+ Boeotian had come under a strong Northwest Greek influence, and can in some respects be considered a transitional dialect. Thessalian likewise had come under Northwest Greek influence, though to a lesser degree.
48
+
49
+ Pamphylian Greek, spoken in a small area on the southwestern coast of Anatolia and little preserved in inscriptions, may be either a fifth major dialect group or Mycenaean Greek overlaid by Doric, with a non-Greek native influence.
50
+
51
+ Most of the dialect sub-groups listed above had further subdivisions, generally equivalent to a city-state and its surrounding territory, or to an island. Doric notably had several intermediate divisions as well, into Island Doric (including Cretan Doric), Southern Peloponnesus Doric (including Laconian, the dialect of Sparta), and Northern Peloponnesus Doric (including Corinthian).
52
+
53
+ The Lesbian dialect was Aeolic Greek.
54
+
55
+ All the groups were represented by colonies beyond Greece proper as well, and these colonies generally developed local characteristics, often under the influence of settlers or neighbors speaking different Greek dialects.
56
+
57
+ The dialects outside the Ionic group are known mainly from inscriptions, notable exceptions being:
58
+
59
+ After the conquests of Alexander the Great in the late 4th century BC, a new international dialect known as Koine or Common Greek developed, largely based on Attic Greek, but with influence from other dialects. This dialect slowly replaced most of the older dialects, although the Doric dialect has survived in the Tsakonian language, which is spoken in the region of modern Sparta. Doric has also passed down its aorist terminations into most verbs of Demotic Greek. By about the 6th century AD, the Koine had slowly metamorphosed into Medieval Greek.
60
+
61
+ Ancient Macedonian was an Indo-European language. Because no sample texts survive, it is impossible to ascertain whether it was a Greek dialect or even related to the Greek language at all; its exact relationship remains unclear. Macedonian could also be related to the Thracian and Phrygian languages to some extent. The Macedonian dialect (or language) appears to have been replaced by Attic Greek during the Hellenistic period. Late 20th century epigraphic discoveries in the Greek region of Macedonia, such as the Pella curse tablet, suggest that ancient Macedonian was a variety of north-western ancient Greek, or at least was eventually replaced by a Greek dialect.[4]
62
+
63
+ Ancient Greek differs from Proto-Indo-European (PIE) and other Indo-European languages in certain ways. In phonotactics, ancient Greek words could end only in a vowel or /n s r/; final stops were lost, as in γάλα "milk", compared with γάλακτος "of milk" (genitive). Ancient Greek of the classical period also differed in both the inventory and distribution of original PIE phonemes due to numerous sound changes,[5] notably the following:
64
+
65
+ The pronunciation of ancient Greek was very different from that of Modern Greek. Ancient Greek had long and short vowels; many diphthongs; double and single consonants; voiced, voiceless, and aspirated stops; and a pitch accent. In Modern Greek, all vowels and consonants are short. Many vowels and diphthongs once pronounced distinctly are pronounced as /i/ (iotacism). Some of the stops and glides in diphthongs have become fricatives, and the pitch accent has changed to a stress accent. Many of the changes took place in the Koine Greek period. The writing system of Modern Greek, however, does not reflect all pronunciation changes.
66
+
67
+ The examples below represent Attic Greek in the 5th century BC. Ancient pronunciation cannot be reconstructed with certainty, but Greek from the period is well documented, and there is little disagreement among linguists as to the general nature of the sounds that the letters represent.
68
+
69
+ [ŋ] occurred as an allophone of /n/ that was used before velars and as an allophone of /ɡ/ before nasals. /r/ was probably voiceless when word-initial (written ῥ). /s/ was assimilated to [z] before voiced consonants.
70
+
71
+ /oː/ raised to [uː], probably by the 4th century BC.
72
+
73
+ Greek, like all of the older Indo-European languages, is highly inflected. It is highly archaic in its preservation of Proto-Indo-European forms. In ancient Greek, nouns (including proper nouns) have five cases (nominative, genitive, dative, accusative, and vocative), three genders (masculine, feminine, and neuter), and three numbers (singular, dual, and plural). Verbs have four moods (indicative, imperative, subjunctive, and optative) and three voices (active, middle, and passive), as well as three persons (first, second, and third) and various other forms. Verbs are conjugated through seven combinations of tense and aspect (generally simply called "tenses"): the present, future, and imperfect are imperfective in aspect; the aorist is perfective in aspect; and the present perfect, pluperfect and future perfect share the perfect aspect. Most tenses display all four moods and three voices, although there is no future subjunctive or imperative. Also, there is no imperfect subjunctive, optative or imperative. The infinitives and participles correspond to the finite combinations of tense, aspect, and voice.
74
+
75
+ The indicative of past tenses adds (conceptually, at least) a prefix /e-/, called the augment. This was probably originally a separate word, meaning something like "then", added because tenses in PIE had primarily aspectual meaning. The augment is added to the indicative of the aorist, imperfect, and pluperfect, but not to any of the other forms of the aorist (no other forms of the imperfect and pluperfect exist).
76
+
77
+ The two kinds of augment in Greek are syllabic and quantitative. The syllabic augment is added to stems beginning with consonants, and simply prefixes e (stems beginning with r, however, add er). The quantitative augment is added to stems beginning with vowels, and involves lengthening the vowel:
78
+
79
+ Some verbs augment irregularly; the most common variation is e → ei. The irregularity can be explained diachronically by the loss of s between vowels.
80
+ In verbs with a preposition as a prefix, the augment is placed not at the start of the word, but between the preposition and the original verb. For example, προσ(-)βάλλω (I attack) goes to προσέβαλον in the aorist. However, compound verbs consisting of a prefix that is not a preposition retain the augment at the start of the word: αὐτο(-)μολῶ goes to ηὐτομόλησα in the aorist.
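+ Because the augment is a largely mechanical rule, a toy example may help. The following is a minimal Python sketch, assuming romanized stems with long vowels written as macrons; the function name and the simplified vowel table are illustrative assumptions, and diphthongs, breathings, and irregularities such as the e → ei verbs noted above are ignored.
+
+   # Toy sketch of the ancient Greek past-indicative augment (romanized).
+   # Assumption: long vowels are written with macrons (ē, ī, ō, ū).
+   LENGTHEN = {"a": "ē", "e": "ē", "i": "ī", "o": "ō", "u": "ū"}
+
+   def augment(stem):
+       """Apply the syllabic or quantitative augment to a verb stem."""
+       if stem[0] in LENGTHEN:
+           # Quantitative augment: an initial vowel is lengthened.
+           return LENGTHEN[stem[0]] + stem[1:]
+       if stem[0] == "r":
+           # Stems beginning with r prefix er-, doubling the r (riptō -> erriptō).
+           return "er" + stem
+       # Syllabic augment: consonant-initial stems simply prefix e-.
+       return "e" + stem
+
+   print(augment("ballon"))  # eballon, cf. the imperfect of ballō
+   print(augment("agon"))    # ēgon, cf. the imperfect of agō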
81
+
82
+ Following Homer's practice, the augment is sometimes not made in poetry, especially epic poetry.
83
+
84
+ The augment sometimes substitutes for reduplication; see below.
85
+
86
+ Almost all forms of the perfect, pluperfect, and future perfect reduplicate the initial syllable of the verb stem. (Note that a few irregular forms of perfect do not reduplicate, whereas a handful of irregular aorists reduplicate.) The three types of reduplication are:
87
+
88
+ Irregular reduplication can be understood diachronically. For example, lambanō (root lab) has the perfect stem eilēpha (not *lelēpha) because it was originally slambanō, with perfect seslēpha, becoming eilēpha through compensatory lengthening.
89
+
90
+ Reduplication is also visible in the present tense stems of certain verbs. These stems add a syllable consisting of the root's initial consonant followed by i. A nasal stop appears after the reduplication in some verbs.[6]
91
+
92
+ The earliest extant examples of ancient Greek writing (circa 1450 BC) are in the syllabic script Linear B. Beginning in the 8th century BC, however, the Greek alphabet became standard, albeit with some variation among dialects. Early texts are written in boustrophedon style, but left-to-right became standard during the classical period. Modern editions of ancient Greek texts are usually written with accents and breathing marks, interword spacing, modern punctuation, and sometimes mixed case, but these were all introduced later.
93
+
94
+ The beginning of Homer's Iliad exemplifies the Archaic period of ancient Greek (see Homeric Greek for more details):
95
+
96
+ Μῆνιν ἄειδε, θεά, Πηληϊάδεω Ἀχιλῆος
97
+ οὐλομένην, ἣ μυρί' Ἀχαιοῖς ἄλγε' ἔθηκε,
98
+ πολλὰς δ' ἰφθίμους ψυχὰς Ἄϊδι προΐαψεν
99
+ ἡρώων, αὐτοὺς δὲ ἑλώρια τεῦχε κύνεσσιν
100
+ οἰωνοῖσί τε πᾶσι· Διὸς δ' ἐτελείετο βουλή·
101
+ ἐξ οὗ δὴ τὰ πρῶτα διαστήτην ἐρίσαντε
102
+ Ἀτρεΐδης τε ἄναξ ἀνδρῶν καὶ δῖος Ἀχιλλεύς.
103
+
104
+ The beginning of Apology by Plato exemplifies Attic Greek from the Classical period of ancient Greek:
105
+
106
+ Using the IPA:
107
+
108
+ Transliterated into the Latin alphabet using a modern version of the Erasmian scheme:
109
+
110
+ Translated into English:
111
+
112
+ The study of ancient Greek in European countries, in addition to Latin, occupied an important place in the syllabus from the Renaissance until the beginning of the 20th century. Ancient Greek is still taught as a compulsory or optional subject, especially at traditional or elite schools throughout Europe, such as public schools and grammar schools in the United Kingdom. It is compulsory in the liceo classico in Italy, in the gymnasium in the Netherlands, in some classes in Austria, in the klasična gimnazija (a grammar school oriented toward classical languages) in Croatia, and in Classical Studies in ASO in Belgium, and it is optional in the humanities-oriented gymnasium in Germany (usually as a third language after Latin and English, from the age of 14 to 18). In 2006/07, 15,000 pupils studied ancient Greek in Germany according to the Federal Statistical Office of Germany, and 280,000 pupils studied it in Italy.[7] It is a compulsory subject alongside Latin in the humanities branch of the Spanish bachillerato. Ancient Greek is also taught at most major universities worldwide, often combined with Latin as part of the study of classics. It will also be taught in state primary schools in the UK, to boost children's language skills,[8][9] and will be offered as a foreign language to pupils in all primary schools from 2014 as part of a major drive to boost education standards, together with Latin, Mandarin, French, German, Spanish, and Italian.[10][needs update]
113
+
114
+ In Christian education, especially at the post-graduate level, the study of ancient Greek is commonplace if not compulsory. As a lingua franca of the Roman world at the time of Jesus, the Bible's accounts of his life and the rest of the New Testament were written in Greek; since these books form a vital part of Christian theology, studying the language they are written in is commonplace for those studying to become pastors or priests.
115
+
116
+ Ancient Greek is also taught as a compulsory subject in all gymnasiums and lyceums in Greece.[11][12] Starting in 2001, an annual international competition "Exploring the Ancient Greek Language and Culture" (Greek: Διαγωνισμός στην Αρχαία Ελληνική Γλώσσα και Γραμματεία) was run for upper secondary students through the Greek Ministry of National Education and Religious Affairs, with Greek language and cultural organisations as co-organisers.[13] It appears to have ceased in 2010, having failed to gain the recognition and acceptance of teachers.[14]
117
+
118
+ Modern authors rarely write in ancient Greek, though Jan Křesadlo wrote some poetry and prose in the language, and Harry Potter and the Philosopher's Stone,[15] some volumes of Asterix,[16] and The Adventures of Alix have been translated into ancient Greek. Ὀνόματα Kεχιασμένα (Onomata Kechiasmena) is the first magazine of crosswords and puzzles in ancient Greek.[17] Its first issue appeared in April 2015 as an annex to Hebdomada Aenigmatum. Alfred Rahlfs included a preface, a short history of the Septuagint text, and other front matter translated into ancient Greek in his 1935 edition of the Septuagint; Robert Hanhart also included the introductory remarks to the 2006 revised Rahlfs–Hanhart edition in the language.[18] Akropolis World News reports a weekly summary of the most important news in ancient Greek.[19]
119
+
120
+ Ancient Greek is also used by organizations and individuals, mainly Greek, who wish to denote their respect, admiration or preference for the use of this language. This use is sometimes considered graphical, nationalistic or humorous. In any case, the fact that modern Greeks can still wholly or partly understand texts written in non-archaic forms of ancient Greek shows the affinity of the modern Greek language to its ancestor.[19]
121
+
122
+ An isolated community near Trabzon, Turkey, an area where Pontic Greek is spoken, has been found to speak a variety of Modern Greek, Ophitic, that has parallels, both structurally and in its vocabulary, to ancient Greek not present in other varieties (linguistic conservatism).[20] As few as 5,000 people speak the dialect, and linguists believe that it is the closest living language to ancient Greek.[21]
123
+
124
+ Ancient Greek is often used in the coinage of modern technical terms in the European languages: see English words of Greek origin. Latinized forms of ancient Greek roots are used in many of the scientific names of species and in scientific terminology.
125
+
126
+ Proto-Greek
127
+
128
+ Mycenaean
129
+
130
+ Ancient
131
+
132
+ Koine
133
+
134
+ Medieval
135
+
136
+ Modern
en/2288.html.txt ADDED
The diff for this file is too large to render. See raw diff
 
en/2289.html.txt ADDED
@@ -0,0 +1,237 @@
1
+ Green Lantern is the name of several superheroes appearing in American comic books published by DC Comics. They fight evil with the aid of rings that grant them a variety of extraordinary powers, all of which are drawn from the bearer's imagination and emotions.[citation needed] The characters are typically depicted as members of the Green Lantern Corps, an interstellar law enforcement agency.
2
+
3
+ The first Green Lantern character, Alan Scott, was created in 1940 by Martin Nodell during the Golden Age of Comic Books and usually fought common criminals in Capitol City (and later, Gotham City) with the aid of his magic ring. For the Silver Age of Comic Books, John Broome and Gil Kane reinvented the character as Hal Jordan in 1959 and shifted the focus of Green Lantern stories from fantasy to science fiction. Other notable Green Lanterns include Guy Gardner, John Stewart, and Kyle Rayner.
4
+
5
+ The Green Lanterns are among DC Comics' longer lasting sets of characters. They have been adapted to television, video games, and motion pictures.
6
+
7
+ Martin Nodell (initially using the name Mart Dellon) created the first Green Lantern. He first appeared in the Golden Age of Comic Books in All-American Comics #16 (July 1940), published by All-American Publications, one of three companies that would eventually merge to form DC Comics.[1]
8
+
9
+ This Green Lantern's real name was Alan Scott, a railroad engineer who, after a railway crash, came into possession of a magic lantern which spoke to him and said it would bring power. From this, he crafted a magic ring which gave him a wide variety of powers. The limitations of the ring were that it had to be "charged" every 24 hours by touching it to the lantern for a time, and that it could not directly affect objects made of wood. Alan Scott fought mostly ordinary human villains, but he did have a few paranormal ones such as the immortal Vandal Savage and the zombie Solomon Grundy. Most stories took place in New York.
10
+
11
+ As a popular character in the 1940s, the Green Lantern featured in anthology books such as All-American Comics and Comic Cavalcade, as well as in his own book, Green Lantern. He also appeared in All Star Comics as a member of the superhero team known as the Justice Society of America.
12
+
13
+ After World War II the popularity of superheroes in general declined. The Green Lantern comic book was cancelled with issue #38 (May–June 1949), and All Star Comics #57 (1951) was the character's last Golden Age appearance. When superheroes came back in fashion in later decades, the character Alan Scott was revived, but he was forever marginalized by the new Hal Jordan character who had been created to supplant him (see below). Initially, he made guest appearances in other superheroes' books, but eventually got regular roles in books featuring the Justice Society. He never got another solo series. Between 1995 and 2003, DC Comics changed Alan Scott's superhero codename to "Sentinel" in order to distinguish him from the newer and more popular science fiction Green Lanterns.
14
+
15
+ In 2011, the Alan Scott character was revamped. His costume was redesigned and the source of his powers was changed to that of the mystical power of nature (referred to in the stories as "the Green").
16
+
17
+ In 1959, Julius Schwartz reinvented the Green Lantern character as a science fiction hero named Hal Jordan. Hal Jordan's powers were more or less the same as Alan Scott's, but otherwise this character was completely different from the Green Lantern character of the 1940s. He had a new name, a redesigned costume, and a rewritten origin story. Hal Jordan received his ring from a dying alien and was commissioned as an officer of the Green Lantern Corps, an interstellar law enforcement agency overseen by the Guardians of the Universe.[2]
18
+
19
+ Hal Jordan was introduced in Showcase #22 (September–October 1959). Gil Kane and Sid Greene were the art team most notable on the title in its early years, along with writer John Broome. His initial physical appearance, according to Kane, was patterned after his one-time neighbor, actor Paul Newman.[3]
20
+
21
+ With issue #76 (April 1970), the series made a radical stylistic departure. Editor Schwartz, in one of the company's earliest efforts to provide more than fantasy, worked with the writer-artist team of Denny O'Neil and Neal Adams to spark new interest in the comic book series and address a perceived need for social relevance. They added the character Green Arrow (the cover, though not the official title, was changed to Green Lantern Co-Starring Green Arrow) and had the pair travel through America encountering "real world" issues, to which they reacted in different ways: Green Lantern as fundamentally a lawman, Green Arrow as a liberal iconoclast. Additionally during this run, the groundbreaking "Snowbirds Don't Fly" story was published (issues #85 and #86), in which Green Arrow's teen sidekick Speedy (the later grown-up hero Red Arrow) developed a heroin addiction that he was forcibly made to quit. The stories were critically acclaimed, with publications such as The New York Times, The Wall Street Journal, and Newsweek citing them as an example of how comic books were "growing up".[4] However, the O'Neil/Adams run was not a commercial success, and the series was cancelled after only 14 issues, though three additional unpublished installments were finally published as backups in The Flash #217-219.[5]
22
+
23
+ The title saw a number of revivals and cancellations, and was renamed Green Lantern Corps at one point as its popularity rose and waned. For a time there were two regular titles, each featuring a Green Lantern, while a third Green Lantern served as a member of the Justice League. A new character, Kyle Rayner, was created to take over the feature while Hal Jordan first became the villain Parallax, then died and came back as the Spectre.
24
+
25
+ In the wake of The New Frontier, writer Geoff Johns returned Hal Jordan as Green Lantern in Green Lantern: Rebirth (2004–05). Johns began to lay groundwork for "Blackest Night" (released July 13, 2010[6]), viewing it as the third part of the trilogy started by Rebirth. Expanding on the Green Lantern mythology in the second part, "Sinestro Corps War" (2007), Johns, with artist Ethan van Sciver, found wide critical acclaim and commercial success with the series, which promised the introduction of a spectrum of colored "lanterns".
26
+
27
+ The series and its creators have received several awards over the years, including the 1961 Alley Award for Best Adventure Hero/Heroine with Own Book[7] and the Academy of Comic Book Arts Shazam Award for Best Continuing Feature in 1970, for Best Individual Story ("No Evil Shall Escape My Sight", Green Lantern vol. 2, #76, by Dennis O'Neil and Neal Adams),[8] and in 1971 for Best Individual Story ("Snowbirds Don't Fly", Green Lantern vol. 2, #85 by O'Neil and Adams).[9]
28
+
29
+ Writer O'Neil received the Shazam Award for Best Writer (Dramatic Division) in 1970 for his work on Green Lantern, Batman, Superman, and other titles, while artist Adams received the Shazam for Best Artist (Dramatic Division) in 1970 for his work on Green Lantern and Batman.[8] Inker Dick Giordano received the Shazam Award for Best Inker (Dramatic Division) for his work on Green Lantern and other titles.[8]
30
+
31
+ In Judd Winick's first regular writing assignment on Green Lantern, he wrote a storyline in which Terry Berg, an assistant of Kyle Rayner's, came out as gay in Green Lantern #137 (June 2001). In Green Lantern #154 (November 2001), the story entitled "Hate Crime" gained media recognition when Terry was brutally beaten in a homophobic attack. Winick was interviewed about the storyline on Phil Donahue's MSNBC show on August 15, 2002,[10] and received two GLAAD Media Awards for his Green Lantern work.[11]
32
+
33
+ In May 2011, Green Lantern placed 7th on IGN's Top 100 Comic Book Heroes of All Time.[12]
34
+
35
+ Alan Scott's Green Lantern history originally began thousands of years ago when a mystical "green flame" meteor fell to Earth in ancient China. The voice of the flame prophesied that it would act three times: once to bring death (a lamp-maker named Luke Fairclough crafted the green metal of the meteor into a lamp; in fear and as punishment for what they thought sacrilege, the local villagers killed him, only to be destroyed by a sudden burst of the green flame), once to bring life (in modern times, the lamp came into the hands of a patient in a mental institution who fashioned the lamp into a modern lantern; the green flame restored him to sanity and gave him a new life), and once to bring power. By 1940, the lantern passed into the possession of Alan Scott, a young engineer. Following a railroad-bridge collapse of which he was the only survivor, the flame instructed Scott how to fashion a ring from its metal to give him fantastic powers as the superhero Green Lantern. He adopted a colorful costume and became a crime-fighter. Alan was a founding member of the Justice Society of America.
36
+
37
+ After the Crisis on Infinite Earths (although the original origin story remained in continuity), a later Tales of the Green Lantern Corps story brought Scott even closer to the Corps' ranks: it revealed that Alan Scott had been predated as Earth's Green Lantern by a Green Lantern named Yalan Gur, a resident of China. Not only had the Corps' now-familiar green, black and white uniform motif not yet been adopted, but Yalan Gur altered the basic red uniform to more closely resemble the style of clothing worn by his countrymen. Power ultimately corrupted this early Green Lantern, as he attempted to rule over mankind, which forced the Guardians to cause his ring to manifest a weakness to wood, the material from which most Earth weapons of the time were fashioned. This allowed the Chinese peasants to ultimately defeat their corrupted "champion". His ring and lantern were burned, and it was during this process that the "intelligence" inhabiting the ring and the lantern and linking them to the Guardians was damaged. Over time, when it had occasion to manifest itself, this "intelligence" became known as the mystical 'Starheart' of fable.
38
+
39
+ Centuries later, it was explained, when Scott found the mystical lantern, it had no memory of its true origins, save a vague recollection of the uniform of its last master. This was the origin of Scott's distinctive costume. Due to its damaged link to them, the Guardians presumed the ring and lantern to be lost in whatever cataclysm overcame their last owner of record, thus Scott was never noticed by the Guardians and went on to carve a history of his own apart from that of the Corps, sporting a ring with an artificially induced weakness against anything made of wood. Honoring this separate history, the Guardians never moved to force Scott to relinquish the ring, formally join the Corps, or adopt its colors. Some sort of link between Scott and the Corps, however, was hinted at in a Silver Age crossover story which depicts Scott and Hal Jordan charging their rings at the same Power Battery while both reciting the "Brightest Day" oath. During the Rann-Thanagar War, it was revealed that Scott is an honorary member of the Corps.
40
+
41
+ On June 1, 2012, DC Comics announced that it would be introducing an alternate version of Alan Scott as a gay man in the title "Earth 2". The New 52 issue was released on June 6, 2012.[13] In its story, Alan Scott and his partner Sam were both passengers aboard a train, but the latter was killed when the train was wrecked in the railroad-bridge collapse that Scott alone survived; a magical green flame found Alan amongst the rubble. The flame told him that he was to become an avatar of its great power and that he must channel this power through an item of importance to his heart; Alan chose the engagement ring he was to give his boyfriend, becoming Green Lantern. This alternate version is not a member of the Green Lantern Corps, which does not exist on Earth 2, but rather adopts the name Green Lantern for himself, as his mystical powers derive from the Green (the elemental force which connects plant life on Earth).
42
+
43
+ The character of Harold "Hal" Jordan was a second-generation test pilot, having followed in the footsteps of his father. He was given the power ring and battery (lantern) by a dying alien named Abin Sur, whose spaceship crashed on Earth. Abin Sur used his ring to seek out an individual who was "utterly honest and born without fear" to take his place as a member of the corps. At one point, when Hal Jordan was incapacitated, it was revealed that there were two individuals matching the specified criteria on Earth, the other being Guy Gardner, and the ring chose Jordan solely because of his proximity to Abin Sur. Gardner then became listed as Hal's "backup", even though he had a strong friendship with Barry Allen (The Flash). Gardner would fill in if Jordan was unavailable or otherwise incapacitated. Later, when Gardner was put into a coma, it turned out that by then there was a third human suitable for the task, John Stewart, who was designated as the Earth Sector's "backup" Lantern. Jordan, as Green Lantern, became a founding member of the Justice League of America and as of the mid-2000s is, along with John Stewart, one of the two active-duty Lanterns in Earth's sector of space.
44
+
45
+ Jordan also automatically became a member of the Green Lantern Corps, a galactic "police" force which bears some similarities to the "Lensmen" from the science fiction series written by E.E. Smith, although both creators Julius Schwartz and John Broome denied ever reading Smith's stories.[14] Nevertheless, the early 1980s miniseries "Green Lantern Corps" honors the similarity with two characters in the corps: Eddore of Tront and Arisia.
46
+
47
+ Following the rebirth of Superman and the destruction of Green Lantern's hometown of Coast City in the early 1990s, Hal Jordan seemingly went insane and destroyed the Green Lantern Corps and the Central Power Battery. Now calling himself Parallax, Hal Jordan would devastate the DC Universe on and off for the next several years. However, after Earth's sun was threatened by a Sun-Eater, Jordan sacrificed his life, expending the last of his vast power to reignite the dying star. Jordan subsequently returned from beyond the grave as the Spectre, the divine Spirit of God's Vengeance, whom Jordan attempted to transform into a Spirit of Redemption, which ended in failure.
48
+
49
+ In Green Lantern: Rebirth, it is revealed that Jordan was under the influence of a creature known as Parallax when he turned renegade. Parallax was a creature of pure fear that had been imprisoned in the Central Power Battery by the Guardians of the Universe in the distant past. Imprisonment had rendered the creature dormant and it was eventually forgotten, becoming known merely as the "yellow impurity" in the power rings. Sinestro was able to wake Parallax and encourage it to seek out Hal Jordan as a host. Although Parallax had been trying to corrupt Jordan (via his ring) for some time, it was not until after the destruction of Coast City that it was able to succeed. It took advantage of Jordan's weakened emotional state to lure him to Oa and cause him to attack anyone who stood in his way. After killing several Green Lanterns, Jordan finally entered the Central Power Battery and absorbed all the power, unwittingly freeing the Parallax entity and allowing it to graft onto his soul.
50
+
51
+ The Spectre bonded with Jordan in the hopes of freeing the former Green Lantern's soul from Parallax's taint, but was not strong enough to do so. In Green Lantern: Rebirth, Parallax began to assert control of the Parallax-Spectre-Jordan composite. Thanks to a supreme effort of will, Jordan was able to free himself from Parallax, rejoin his soul to his body and reclaim his power ring. The newly revived (and rejuvenated) Jordan awoke just in time to save Kyle Rayner and Green Arrow from Sinestro. After the Korugarian's defeat, Jordan was able to successfully lead his fellow Green Lanterns in battle against Parallax and, with help from the Guardians Sayd and Ganthet, imprisoned it within the personal power batteries of Earth's Lanterns, rendering the Green Lanterns' rings free of the yellow impurity, provided their wearers had the willpower to overcome it.
52
+ Hal Jordan is once again a member of both the Justice League and the Green Lantern Corps, and along with John Stewart is one of the two Corps members assigned to Sector 2814, personally defeating Sinestro in the Sinestro Corps War. Jordan is designated as Green Lantern 2814.1.
53
+
54
+ Post-Sinestro Corps War, DC Comics revisited the origin of Hal Jordan as a precursor to the "Blackest Night" storyline, the next chapter of the Geoff Johns era on Green Lantern.
55
+ Hal Jordan is the Green Lantern portrayed by Ryan Reynolds in the 2011 Green Lantern film.
56
+
57
+ In the late 1960s, Guy Gardner appeared as the second choice to replace Abin Sur as Green Lantern of sector 2814. Gardner was a candidate to receive Abin Sur's ring, but Jordan was closer. This placed him as the "backup" Green Lantern for Jordan. But early in his career as a Green Lantern, tragedy struck Gardner as a power battery blew up in his face, putting him in a coma for years. During the Crisis on Infinite Earths, the Guardians split into factions, one of which appointed a newly revived Gardner as their champion. As a result of his years in a coma, Guy was emotionally unstable, although he still mostly managed to fight valiantly. He has gone through many changes, including wielding Sinestro's yellow Guardian power ring, then gaining and losing Vuldarian powers, and readmission to the Corps during Green Lantern: Rebirth. He later became part of the Green Lantern Honor Guard, and oversees the training of new Green Lanterns. Gardner is designated as Green Lantern 2814.2 within the Corps.
58
+
59
+ Guy Gardner helped lead the defense of Oa during the events of Blackest Night.
60
+
61
+ In recognition of his outstanding acts of valour, the Guardians appointed Guy to a unique role and the highest rank in the Green Lantern Corps, Sentinel, answering directly to the Guardians themselves.
62
+
63
+ In the early 1970s, John Stewart, an architect from Detroit, was selected by the Guardians to replace a comatose Guy Gardner as the backup Green Lantern for Jordan. When Jordan resigned from the Corps for an extended period of time, Stewart served as the regular Lantern, coming into his own as he battled numerous Green Lantern villains and played a key role during the Crisis on Infinite Earths. During that time the Guardians of the Universe assigned Katma Tui to train Stewart, and the two developed romantic feelings for each other. They married, but Katma was soon murdered by longtime Green Lantern villain Star Sapphire. Stewart was crushed by this, and his life began to unravel. He reached his lowest point when he failed to save the planet Xanshi from destruction during the Cosmic Odyssey.
64
+
65
+ John Stewart redeemed himself during the Mosaic crisis, when an insane Guardian abducted cities from all over the universe and placed them together on Oa. When the Guardian was defeated, the cities remained, as the other Guardians claimed to not have enough energy in the Central Power Battery to send them home. While they gathered the resources, John Stewart was assigned to oversee the jammed-together communities. Using his intellect and unconventional thinking, he formed the warring communities into a cohesive society. He was aided by Rose Hardin, a farmer from West Virginia who was trapped on Oa after her town was abducted. Stewart once again found love with Rose, and the two of them came to feel more comfortable on their new world than they did back on Earth.
66
+
67
+ Stewart eventually deduced that the Guardians had the energy to send the cities home whenever they wanted, and that they let them remain on Oa as an experiment in cosmic integration, and also as a test for John Stewart. Stewart passed the test, and discovered that he was a figure in Oan prophecy. That was why the Guardians directly chose him instead of allowing a Power Ring to do it, as is standard procedure. John Stewart rose to a new level of awareness and became the first mortal Guardian of the Universe. He was also rewarded with the resurrection of Katma Tui, which caused him to break up with Rose.
68
+
69
+ Stewart's new powers and resurrected wife were taken from him when Hal Jordan went mad and became Parallax, absorbing the power from the Central Power Battery. During this time, the Green Lantern Corps was disbanded, and Stewart went on to lead the Darkstars, a new organization of universal peacekeepers led by the Controllers, offshoots of the Guardians of the Universe. During a battle, Stewart was badly injured and left paralyzed from the waist down. Hal Jordan eventually restored his ability to walk before sacrificing himself to save Earth's sun. Soon after, John Stewart found himself hunted by a serial killer from Xanshi called Fatality, who sought out any remnants of the Green Lantern Corps so that she might kill them in the name of avenging her doomed planet. Stewart fended off Fatality by blasting from his body the residual energy left in him by Hal Jordan's healing of his crippling condition; doing so, however, left him unable to walk again.
70
+
71
+ Stewart later visited Fatality while she was in custody, and she revealed to him that his back was fine, and he had the ability to walk if he wanted to. Stewart had imposed a psychological block upon himself due to feeling guilty over his sister's death. Stewart overcame this condition and was given a power ring by Kyle Rayner. Rayner departed Earth and Stewart became the Green Lantern of Earth once again, and also a member of the Justice League of America.
72
+
73
+ When the Green Lantern Corps reformed, Stewart began serving with Jordan as one of his sector's two designated regular-duty Lanterns, designated as Green Lantern 2814.3. Since then, he has played key roles in all major Green Lantern events, such as the Sinestro Corps War and Blackest Night.
74
+
75
+ In the New 52 continuity, John Stewart was a U.S. Marine as well as an architect and the son of a social activist. He started a romantic relationship with his longtime enemy Fatality, who by that point had become a Star Sapphire and apparently forgiven him for failing to save her world. In the events leading up to the "Uprising", Fatality was captured by shape-shifting Durlans, and a Durlan operative replicated her and took her place. Stewart was at first hesitant about the relationship but eventually came to love Fatality; unbeknownst to him, by that point he had been with the impostor. In the final battle of the "Uprising", the impostor revealed itself as Verrat Din, an eons-old Durlan, and destroyed Fatality's Star Sapphire ring, having no use for it after gaining the power of a Daxamite. Though Stewart defeated the powerful threat, he was shaken by having been misled for so long, and by having been intimate with a Durlan shape-shifter.
76
+
77
+ Stewart immediately set out to find the real Fatality, and when he did, he was astonished to discover that she had reverted to hating him. Fatality revealed that she was forcibly inducted into the Star Sapphires and brainwashed into being one of them. When her ring was destroyed, the spell was broken. Every moment she was with Stewart, she was trapped within herself. She revealed that she never loved John Stewart and departed, leaving Stewart emotionally crushed.
78
+
79
+ John Stewart is notable for being the Green Lantern showcased on the Justice League and Justice League Unlimited cartoon shows, as well as being the primary Green Lantern of the DC Animated Universe.
80
+
81
+ Kyle Rayner was a struggling freelance artist when he was approached by the last Guardian of the Universe, Ganthet, to become a new Green Lantern with the very last power ring. Ganthet's reasons for choosing Rayner remained a secret for quite some time. Despite not being cut from the same cloth of bravery and fearlessness as Hal Jordan (or perhaps because of that), Rayner proved to be popular with readers and his fellow characters. Having continually proven himself on his own and with the JLA, he became known amongst the Oans as The Torch Bearer. He briefly operated as Ion after using the power of the entire Green Lantern Corps. He was responsible for the rebirth of the Guardians and the re-ignition of the Central Power Battery, essentially restoring all that Jordan had destroyed as Parallax.
82
+
83
+ Kyle Rayner was chosen to wield the last ring because he knew fear, and Parallax had been released from the Central Power Battery. Ganthet knew this and chose Kyle because his experience dealing with fear enabled him to resist Parallax. Because Parallax is a manifestation of fear, and yellow, none of the other Green Lanterns, including Hal, could harm Parallax, and they all therefore came under his control. Kyle taught them to feel and overcome fear so they could defeat Parallax and incarcerate him in the Central Power Battery once again.
84
+
85
+ Kyle became Ion, who is later revealed to be the manifestation of willpower in the same way Parallax is fear. During the Sinestro Corps War between the Green Lantern Corps and the Sinestro Corps, Ion is imprisoned while Parallax possesses Kyle. In Green Lantern (vol. 4) #24, Parallax consumes Hal Jordan. Hal Jordan enters Kyle's prison, and with his help, Kyle finally escapes Parallax.
86
+
87
+ Afterward, Ganthet and Sayd trap Parallax in the Lanterns of the four Green Lanterns of Earth. Ganthet asks Kyle to give up his right to be Ion and become a Green Lantern again. Kyle accepts, and Ganthet gives Kyle a power ring. Kyle is outfitted with a new costume including a mask that looks like the one from his first uniform. Kyle is now a member of the Green Lantern Corps Honor Guard, and has been partnered with Guy Gardner.
88
+
89
+ Kyle now shows up mostly as part of the ensemble cast of Green Lantern Corps. Corps rookie Sodam Yat took over the mantle of Ion. Sodam has made an appearance in the Legion of Super Heroes Final Crisis tie-in Legion of Three Worlds as the last surviving Green Lantern/Guardian of the Universe.
90
+
91
+ Kyle is designated as Green Lantern 2814.4 within the Corps.[citation needed]
92
+
93
+ Kyle Rayner died in Green Lantern Corps #42 (Jan. 2010) after sacrificing himself to save Oa from an attack by the Black Lantern Corps. In the following issue, Kyle is brought back to life by the power of a Star Sapphire who connects Soranik Natu's heart to his.
94
+
95
+ Simon Baz is a Lebanese American Muslim from the Detroit suburb of Dearborn, Michigan. He first appeared in The New 52! FCBD #1 before making his first full appearance in Green Lantern #0 during the "Rise of the Third Army" storyline written by Geoff Johns. He was caught by the police street racing in a stolen van with an armed bomb in the back. While he was being questioned by authorities, Sinestro's Green Lantern ring chose Simon as its next ring bearer, recruiting him into the Green Lantern Corps. The squirrel-like Lantern B'dg follows, becoming Baz's mentor and friend. The Justice League eventually tracks Baz down and questions him as to how he came into possession of a Green Lantern ring. Batman tries to disarm him by removing Simon's ring, but the ring's self-defense mechanisms prevent this.[15]
96
+ Following the events of "Wrath of the First Lantern",[16][17][18][19] Simon Baz was offered the opportunity to join Amanda Waller and Steve Trevor's "Justice League of America" under the pretense that his criminal charges would be dropped and his innocence publicly declared, after FBI Agent Franklin Fed vouched for him.[20] During the events of the "Trinity War" storyline, after the body of Cyborg (Victor Stone) was mangled by the Crime Syndicate member "the Grid", Baz's ring was the only thing preventing Victor's death.[21] During the battle against Relic, when Lantern Guy Gardner and the Red Lantern Corps became the protectors of space sector 2814, Simon was appointed Green Lantern ambassador on Earth by Hal Jordan. Additionally, per Hal's request, Simon became the protector of Hal Jordan's family.[citation needed] In Green Lantern #20, after the fierce battle against the First Lantern, it was revealed that Simon Baz would go on to train the first female Green Lantern of Earth, Jessica Cruz.[19]
97
+
98
+ First mentioned in Green Lantern #20 as the first female Green Lantern of Earth, Jessica Cruz is a young Latin American woman who was forced to become the unwilling host of the evil Ring of Volthoom after "Power Ring" died in his alternate-Earth universe. Though she is not technically "Power Ring", as she is not a member of the Crime Syndicate and has no association with the organization, she is dubbed "Power Ring" for namesake purposes while the ring uses her as a host. She is helped by the Justice League and Simon Baz, who help her understand her cursed powers. In the Darkseid War, she becomes trapped inside the Ring of Volthoom as Volthoom himself takes over her body. She then battles the ring's previous wearers with the help of Cyborg, and forces her body in front of the Black Racer (who at the time was controlling the Flash), killing Volthoom. After the battle, whilst the League mourns her motionless body, a Green Lantern ring appears and Jessica is made the sixth Green Lantern of Earth, to everyone's surprise.
99
+
100
+ In Green Lanterns: Rebirth #1, she meets up with Simon Baz to battle a Manhunter. This turns out to be a training exercise controlled by Hal Jordan, who needs the two of them to protect Earth whilst he goes on a mission to find the rest of the Corps. He then fuses both their lanterns into one, which can only be used when they are together. Hal also grants them membership in the Justice League to help with their training.
101
+
102
+ The daughter of Alan Scott, Jennifer-Lynn Hayden would discover she shared her father's mystical connection to the Starheart, which gave her the abilities of a Green Lantern. Choosing to follow in her father's footsteps, she became the superheroine Jade. She would later fight a manifestation of the Starheart and lose those abilities. When Jade was fighting an Okaaran monster, she was saved by an orange lantern named Cade and fell in love with him.
103
+
104
+ After Jade was stripped of her powers, Kyle Rayner gave her a copy of Hal Jordan's power ring. When Rayner left Earth to restart the Green Lantern Corps, Jade donned the classic Green Lantern uniform and served as the planet's Green Lantern until losing the ring during a battle with the villain Fatality. Later, when the ring was returned to her, she changed her Green Lantern uniform to a modified version of Rayner's. Jade continued to function as a Green Lantern until Rayner, as Ion, used his power to restore her connection to the Starheart. During Infinite Crisis, she died while trying to stop Alexander Luthor, Jr. from destroying the universe to create a new multiverse. Upon her death, Jade returned her Starheart power to Rayner. In the Blackest Night event, her remains were reanimated as one of the Black Lantern Corps after receiving a black power ring. She was resurrected by the Life Entity along with eleven other Black Lantern Corps members.
105
+
106
+ Following the New 52 and DC Rebirth, she was removed from continuity. This created a major hole in Kyle Rayner's backstory as well, given how long they were together. She was later returned to continuity, along with her father Alan Scott and the rest of the JSA, during Doomsday Clock.
107
+
108
+ Sinestro was born on the planet Korugar and became the Green Lantern of space sector 1417. He was a friend of Abin Sur and mentor to Hal Jordan. His desire for order was an asset in the Corps, and initially led him to be considered one of the greatest Green Lanterns. As the years passed, he became more and more fixated upon not simply protecting his sector, but on preserving order in the society of his home planet no matter what the cost. Eventually, he concluded that the best way to accomplish this was to conquer Korugar and rule the planet as a dictator. Exposed by Hal Jordan and punished, he later wielded a yellow ring of fear from Qward. Later, in league with Parallax, he would establish the Sinestro Corps, which began the War of Light. Following Blackest Night and War of the Green Lanterns, Sinestro would once again receive a Green Lantern ring and temporarily headline the monthly Green Lantern following The New 52. In Scott Snyder's Justice League, it was revealed that Sinestro was searching for the entity Umbrax, one of the seven hidden forces of the universe, which represents the unseen emotions of the Ultraviolet Lantern Corps. Sinestro finally discovers this force and creates an army of Ultraviolet Lanterns, including John Stewart (who is later freed).
109
+
110
+ Premiering in Green Lantern: New Guardians Annual #1, Caul is a deep-undercover Green Lantern operative who works in the Tenebrian Dominion. He unwillingly helps Carol Ferris and the New Guardians attempt to petition Lady Styx to send aid against the Third Army. For betraying them, the New Guardians leave Caul behind, and he is forced to become part of a reality program called "The Hunted", stripped of his powers and with his discharged power ring embedded into his chest. Caul stars as part of an ensemble cast of spacebound DC characters, including the Blue Beetle and a new Captain K'rot, in the "Hunted" main feature of Threshold. Caul received his Green Lantern ring after he shot and killed its previous bearer, unsure himself why he was then chosen. Caul is able to save Sh'diki Borough on the planet Tolerance after it had been bottled by Brainiac. Caul is later informed that The Hunted has been canceled and is offered the lead role on a new show, Team Cauldron, with the rest of his friends and Hunted competitors. Caul agrees to the role, having his power ring re-embedded into his chest. He is granted a meeting with Lady Styx to finalize his new role. However, as soon as Caul materializes at her base, he is killed by multiple gunshots, as planned by Colonel T'omas T'morra. In a glimmernet commercial, it is shown that T'morra replaces Caul in the proposed new show. However, Caul is shown to be alive later, with Captain K'rot in tow, when the planet Telos manifests during the 2015 "Convergence" storyline, investigating it alongside Superman, Supergirl, Guy Gardner, and the Red Lanterns.
111
+
112
+ Sojourner "Jo" Mullein is a rookie Green Lantern who must investigate the first murder committed in the City Enduring in 500 years. She headlines as Green Lantern in Far Sector, published by DC's Young Animal imprint.
113
+
114
+ Charlie Vicker was an actor who portrayed Green Lantern in a TV show on Earth. Charlie enjoyed his fame and happily threw himself into the life of a playboy television star. After one particularly grueling night of partying, Charlie was too hung over to show up on set, so his brother Roger had to go on as his stand-in. Unfortunately for Roger, a group of space criminals, led by former Earth criminal Al Magone, mistook the television Green Lantern for the real thing and attacked during a live broadcast. The criminals had previously been imprisoned by the Green Lanterns on a special timeless criminal planet, and had banded together to launch simultaneous attacks on Green Lanterns across the galaxy. By the time the real Green Lantern, Hal Jordan, arrived on the scene, the defenseless stand-in was dead and the criminal responsible was gone. Charlie was overcome with grief and blamed himself for his brother's death. He demanded that Hal Jordan bring him along in the hunt for the murderer, so that Charlie could avenge his brother.
115
+
116
+ Eventually the two, along with the rest of the Green Lantern Corps, tracked the criminals down and brought their terror to an end. During the battle, Green Lantern gave Vicker a power ring from one of the fallen Green Lanterns and appointed him a temporary Green Lantern. Vicker proved himself well enough that the Guardians of the Universe granted him his own power ring. He was assigned to Sector 3319, where the strange alien inhabitants left Vicker feeling uncomfortable and alone. Just when he was considering resigning from the Green Lantern Corps, Vicker saved an alien child from death. The child's mother was extremely grateful to Vicker, making him realize that the aliens' physical differences hid how similar they were to mankind.
117
+
118
+ Vicker would later use his skills as an actor to teach the natives of his sector the great plays of Earth. When an invasion force threatened his sector following the first destruction of the Central Power Battery, the now depowered Vicker raised and trained a resistance group that eventually repelled the invaders and ensured his adopted people's freedom. Vicker later joined John Stewart's Darkstars. He was killed during the battle with Grayven, third son of Darkseid.
119
+
120
+ Young Justice (vol. 3) #1 (March 2019) introduced Keli Quintela as Teen Lantern. An unofficial Green Lantern, Quintela is an eleven-year-old from La Paz, Bolivia, who received a Green Lantern power gauntlet similar to Krona's from a dying Green Lantern, and who then modified and hacked it to act like a Green Lantern power ring.
121
+
122
+ The ring is powered by willpower.
123
+ Each Green Lantern wears a ring that grants them a wide variety of powers. The full extent of the ring's abilities has never been rigorously defined in the stories, but two consistent traits are that it grants the power of flight and that all its effects are accompanied by a green light.
124
+
125
+ Early Green Lantern stories showed the characters performing all sorts of feats with the ring, from shrinking objects to turning people invisible. Later stories de-emphasized these abilities in favor of constructs.
126
+
127
+ The signature power of all Green Lanterns is the ability to conjure "constructs": solid green objects that the Green Lantern can control telekinetically. These can be anything, such as a disembodied fist to beat a foe, a shield to block an attack, a sword to cut a rope, or chains to bind a prisoner. Whatever their shape or size, these constructs are always pure green in color, unless a Lantern is skillful enough to know how to change the EM spectrum the construct emits. Hal Jordan has shown the ability to have a construct emit kryptonite radiation under Batman's guidance.
128
+
129
+ The rings of the Green Lantern Corps allow their bearers to travel very quickly across interstellar distances, fast enough that they can efficiently patrol the universe. They allow the wearer to survive in virtually any environment, and also remove the need to eat, sleep and pass waste. The rings can translate practically any language in the universe. They possess powerful sensors that can identify and analyze objects. Lanterns are granted full access to all Guardian knowledge by their rings through the Book of Oa.
130
+
131
+ A noteworthy power the rings do not have is the ability to automatically heal injuries, though they can provide shielding. In Hal Jordan's origin story, Abin Sur passed on his ring to Hal because he was unable to treat his own fatal injuries. If the Green Lantern happens to be a skilled physician, then the ring can be invaluable as it can conjure any conceivable medical tool, but it cannot do much for a Lantern who lacks medical expertise. When Hal Jordan breaks his arm, the best he can do is conjure up a cast. This is further extended into an ability to replace large sections of one's injured body with constructs, but this too requires detailed biological knowledge of one's body and concentration enough to prolong the construct.
132
+
133
+ Alan Scott's ring is unable to directly affect anything made of wood. Alan can conjure a green shield to block bullets, but a wooden club will pass through it effortlessly. The rings of Hal Jordan and his colleagues originally shared a similar weakness to anything colored yellow, though due to the removal of the yellow impurity from the Central Battery on Oa, more recent stories have removed this weakness.
134
+
135
+ The effectiveness of the ring is tied to the wearer's willpower. A Green Lantern with strong willpower will beat a weaker-willed Lantern in a duel. Anything which weakens the Green Lantern's mind, such as a telepathic attack, may render his ring useless.
136
+
137
+ Green Lantern is famous for the oath he recites when he charges his ring. Originally, the oath was:
138
+
139
+ ... and I shall shed my light over dark evil.
140
+ For the dark things cannot stand the light,
141
+ The light of the Green Lantern!
142
+
143
+ This oath is also used by Lanterns Tomar-Re of sector 2813 and Chief Administrator Salaak.[22]
144
+ In the mid-1940s, this was revised into the form that became famous during the Hal Jordan era:
145
+
146
+ In brightest day, in blackest night,
147
+ No evil shall escape my sight!
148
+ Let those who worship evil's might
149
+ Beware my power, Green Lantern's light!
150
+
151
+ The oath in this form is credited to Alfred Bester,[23] who wrote many Green Lantern stories in the 1940s. This version of the oath was first spoken by Alan Scott in Green Lantern #9 from the fall of 1943. Scott would revert to reciting his original oath after he was reintroduced during the Silver Age.
152
+
153
+ Many Green Lanterns have a unique personal oath, but some oaths are shared by several Lanterns. They are usually four lines long with a rhyme scheme of "AAAA" or "AABB".
154
+
155
+ The Pre-Crisis version of Hal Jordan was inspired to create his oath after a series of adventures in which he developed new ways to detect evasive criminals: in the first adventure, he used his ring as radar to find robbers who had blinded him with a magnesium flash; in the second, he tracked criminals in a dark cave by using his ring to make them glow with phosphorescence; finally, Jordan tracked safecrackers by the faint shockwaves from the explosives they had used.
156
+
157
+ Medphyll, the Green Lantern of the planet J586 (seen in Swamp Thing #61, "All Flesh is Grass"), a planet where a sentient plant species lives, has the following oath:
158
+
159
+ In forest dark or glade beferned,
160
+ No blade of grass shall go unturned!
161
+ Let those who have the daylight spurned
162
+ Tread not where this green lamp has burned!
163
+
164
+ Other notable oaths include that of Jack T. Chance,
165
+
166
+ You who are wicked, evil and mean,
167
+ I'm the nastiest creep you've ever seen!
168
+ Come one, come all, put up a fight,
169
+ I'll pound your butts with Green Lantern's light!
170
+ Yowza!
171
+
172
+ and that of Rot Lop Fan, a Green Lantern whose species lacks sight, and thus has no concepts of brightness, darkness, day, night, color, or lanterns.
173
+
174
+ In loudest din or hush profound,
175
+ My ears catch evil's slightest sound!
176
+ Let those who toll out evil's knell
177
+ Beware my power, the F-Sharp Bell!
178
+
179
+ In Green Lantern (vol. 4) #27, the Alpha Lanterns use the oath:
180
+
181
+ In days of peace, in nights of war,
182
+ Obey the Laws forever more!
183
+ Misconduct must be answered for,
184
+ Swear us the chosen: The Alpha Corps!
185
+
186
+ In Legion of 3 Worlds, Sodam Yat in the 31st century – the last of the Green Lanterns and the last of the Guardians – recited a new oath:
187
+
188
+ In brightest day, through Blackest Night,
189
+ No other Corps shall spread its light!
190
+ Let those who try to stop what's right
191
+ Burn like my power, Green Lantern's light!
192
+
193
+ In Batman: The Dawnbreaker #1, the Dawnbreaker (an amalgamation of Batman and Green Lantern from the Dark Multiverse's Earth-32) creates and recites his own oath after killing the Guardians of the Universe and the Green Lantern Corps himself:
194
+
195
+ With darkness black, I choke the light!
196
+ No brightest day escapes my sight!
197
+ I turn the dawn to midnight!
198
+ Beware my power--Dawnbreaker's might!
199
+
200
+ In The Green Lantern #11, written by Grant Morrison, several distinct oaths were used by the Green Lanterns of the Multiverse. Morrison's creation 'Magic Lantern',[24] first seen in his run on Animal Man, used this oath:
201
+
202
+ When it's groovy, when it's grim,
203
+ We hum the Living Guru's hymn.
204
+ When other Lanterns lose their sh...,
205
+ We keep the Magic Lantern lit!
206
+
207
+ [25]
208
+
209
+ (Since it was an all-ages book, the last word in the third line was obscured by another oath balloon from another Lantern.)
210
+
211
+ In the video game Infinite Crisis, Hal Jordan of Earth-13 (the Arcane universe) has his own variation:
212
+
213
+ In forests deep where darkness dwells,
214
+ In dungeons dank beneath ancient fells,
215
+ Let those who seek to rule the night
216
+ Beware my power, the Emerald Light!
217
+
218
+ In the animated TV series Duck Dodgers, Duck Dodgers temporarily becomes a Green Lantern after accidentally picking up Hal Jordan's laundry. In the first part of the episode, he forgets the real oath and makes up his own version:
219
+
220
+ In blackest day or brightest night
221
+ Watermelon, cantaloupe, yadda yadda,
222
+ Erm ... superstitious and cowardly lot,
223
+ With liberty and justice for all!
224
+
225
+ In 2011, soon after the release of the Green Lantern movie, a trailer for The Muppets featured Kermit reciting a parody of the oath:[26]
226
+
227
+ In brightest day, in darkest night,
228
+ No evil shall escape my sight!
229
+ Let those who laugh at my lack of height
230
+ Beware my banjo ... Green Froggy's light!
231
+
232
+ The TV show Mad included a movie parody called "RiOa", a fusion of Green Lantern and Rio. Blu from Rio is turned into a Green Lantern and recruits Big Bird, the Road Runner, Mordecai from Regular Show, Mumble from Happy Feet, and one of the Angry Birds, turning them into Green Lanterns.
233
+
234
+ In brightest day, in blackest night,
235
+ Despite our shape, our size, our height,
236
+ We're birds who walk, which isn't right,
237
+ But starting now, we will take flight!
en/229.html.txt ADDED
@@ -0,0 +1,215 @@
1
+
2
+
3
+ Andy Warhol (/ˈwɔːrhɒl/;[1] born Andrew Warhola; August 6, 1928 – February 22, 1987) was an American artist, film director, and producer who was a leading figure in the visual art movement known as pop art. His works explore the relationship between artistic expression, advertising, and celebrity culture that flourished by the 1960s, and span a variety of media, including painting, silkscreening, photography, film, and sculpture. Some of his best known works include the silkscreen paintings Campbell's Soup Cans (1962) and Marilyn Diptych (1962), the experimental film Chelsea Girls (1966), and the multimedia events known as the Exploding Plastic Inevitable (1966–67).
4
+
5
+ Born and raised in Pittsburgh, Warhol initially pursued a successful career as a commercial illustrator. After exhibiting his work in several galleries in the late 1950s, he began to receive recognition as an influential and controversial artist. His New York studio, The Factory, became a well-known gathering place that brought together distinguished intellectuals, drag queens, playwrights, Bohemian street people, Hollywood celebrities, and wealthy patrons.[2][3][4] He promoted a collection of personalities known as Warhol superstars, and is credited with inspiring the widely used expression "15 minutes of fame". In the late 1960s he managed and produced the experimental rock band The Velvet Underground and founded Interview magazine. He authored numerous books, including The Philosophy of Andy Warhol and Popism: The Warhol Sixties. He lived openly as a gay man before the gay liberation movement. After gallbladder surgery, Warhol died of cardiac arrhythmia in February 1987 at the age of 58.
6
+
7
+ Warhol has been the subject of numerous retrospective exhibitions, books, and feature and documentary films. The Andy Warhol Museum in his native city of Pittsburgh, which holds an extensive permanent collection of art and archives, is the largest museum in the United States dedicated to a single artist. Many of his creations are very collectible and highly valuable. The highest price ever paid for a Warhol painting is US$105 million for a 1963 canvas titled Silver Car Crash (Double Disaster); his works include some of the most expensive paintings ever sold.[5] A 2009 article in The Economist described Warhol as the "bellwether of the art market".[6]
8
+
9
+ Warhol was born on August 6, 1928, in Pittsburgh, Pennsylvania.[7] He was the fourth child of Ondrej Warhola (Americanized as Andrew Warhola, Sr., 1889–1942)[8][9] and Julia (née Zavacká, 1892–1972),[10] whose first child was born in their homeland of Austria-Hungary and died before their move to the U.S.
10
+
11
+ His parents were working-class Lemko[11][12] emigrants from Mikó, Austria-Hungary (now called Miková, located in today's northeastern Slovakia). Warhol's father emigrated to the United States in 1914, and his mother joined him in 1921, after the death of Warhol's grandparents. Warhol's father worked in a coal mine. The family lived at 55 Beelen Street and later at 3252 Dawson Street in the Oakland neighborhood of Pittsburgh.[13] The family was Ruthenian Catholic and attended St. John Chrysostom Byzantine Catholic Church. Andy Warhol had two elder brothers—Pavol (Paul), the eldest, was born before the family emigrated; Ján was born in Pittsburgh. Pavol's son, James Warhola, became a successful children's book illustrator.
12
+
13
+ In third grade, Warhol had Sydenham's chorea (also known as St. Vitus' Dance), the nervous system disease that causes involuntary movements of the extremities and is believed to be a complication of scarlet fever, which causes blotchiness of the skin pigmentation.[14] At times when he was confined to bed, he drew, listened to the radio, and collected pictures of movie stars around his bed. Warhol later described this period as very important in the development of his personality, skill set, and preferences. When Warhol was 13, his father died in an accident.[15]
14
+
15
+ As a teenager, Warhol graduated from Schenley High School in 1945. Also as a teen, Warhol won a Scholastic Art and Writing Award.[16] After graduating from high school, his intentions were to study art education at the University of Pittsburgh in the hope of becoming an art teacher, but his plans changed and he enrolled in the Carnegie Institute of Technology, now Carnegie Mellon University in Pittsburgh, where he studied commercial art. During his time there, Warhol joined the campus Modern Dance Club and Beaux Arts Society.[17] He also served as art director of the student art magazine, Cano, illustrating a cover in 1948[18] and a full-page interior illustration in 1949.[19] These are believed to be his first two published artworks.[19] Warhol earned a Bachelor of Fine Arts in pictorial design in 1949.[20] Later that year, he moved to New York City and began a career in magazine illustration and advertising.
16
+
17
+ Warhol's early career was dedicated to commercial and advertising art, where his first commission had been to draw shoes for Glamour magazine in the late 1940s.[21] In the 1950s, Warhol worked as a designer for shoe manufacturer Israel Miller.[21][22] American photographer John Coplans recalled that
18
+
19
+ nobody drew shoes the way Andy did. He somehow gave each shoe a temperament of its own, a sort of sly, Toulouse-Lautrec kind of sophistication, but the shape and the style came through accurately and the buckle was always in the right place. The kids in the apartment [which Andy shared in New York – note by Coplans] noticed that the vamps on Andy's shoe drawings kept getting longer and longer but [Israel] Miller didn't mind. Miller loved them.
20
+
21
+ Warhol's "whimsical" ink drawings of shoe advertisements figured in some of his earliest showings at the Bodley Gallery in New York.
22
+
23
+ Warhol was an early adopter of the silk screen printmaking process as a technique for making paintings. A young Warhol was taught silk screen printmaking techniques by Max Arthur Cohn at his graphic arts business in Manhattan.[23] While working in the shoe industry, Warhol developed his "blotted line" technique, applying ink to paper and then blotting the ink while still wet, which was akin to a printmaking process on the most rudimentary scale. His use of tracing paper and ink allowed him to repeat the basic image and also to create endless variations on the theme, a method that prefigures his 1960s silk-screen canvas.[21] In his book Popism: The Warhol Sixties, Warhol writes: "When you do something exactly wrong, you always turn up something."[24]
24
+
25
+ Warhol habitually used the expedient of tracing photographs projected with an epidiascope.[25] Using prints by Edward Wallowitch, his 'first boyfriend',[26] the photographs would undergo a subtle transformation during Warhol's often cursory tracing of contours and hatching of shadows. Warhol used Wallowitch's photograph Young Man Smoking a Cigarette (c. 1956)[27] for a 1958 book-cover design he submitted to Simon and Schuster for the Walter Ross pulp novel The Immortal, and later used others for his dollar bill series[28][29] and for Big Campbell's Soup Can with Can Opener (Vegetable) of 1962, which initiated Warhol's most sustained motif, the soup can.
26
+
27
+ With the rapid expansion of the record industry, RCA Records hired Warhol, along with another freelance artist, Sid Maurer, to design album covers and promotional materials.[30]
28
+
29
+ He began exhibiting his work during the 1950s. He held exhibitions at the Hugo Gallery[31] and the Bodley Gallery[32] in New York City; in California, his first West Coast gallery exhibition[33][34] was on July 9, 1962, in the Ferus Gallery of Los Angeles with Campbell's Soup Cans. The exhibition marked his West Coast debut of pop art.[35]
30
+ Andy Warhol's first New York solo pop art exhibition was hosted at Eleanor Ward's Stable Gallery November 6–24, 1962. The exhibit included the works Marilyn Diptych, 100 Soup Cans, 100 Coke Bottles, and 100 Dollar Bills. At the Stable Gallery exhibit, the artist met for the first time poet John Giorno who would star in Warhol's first film, Sleep, in 1963.[36]
31
+
32
+ It was during the 1960s that Warhol began to make paintings of iconic American objects such as dollar bills, mushroom clouds, electric chairs, Campbell's Soup Cans, Coca-Cola bottles, celebrities such as Marilyn Monroe, Elvis Presley, Marlon Brando, Troy Donahue, Muhammad Ali, and Elizabeth Taylor, as well as newspaper headlines or photographs of police dogs attacking African-American protesters during the Birmingham campaign in the civil rights movement. During these years, he founded his studio, "The Factory" and gathered about him a wide range of artists, writers, musicians, and underground celebrities. His work became popular and controversial. Warhol had this to say about Coca-Cola:
33
+
34
+ What's great about this country is that America started the tradition where the richest consumers buy essentially the same things as the poorest. You can be watching TV and see Coca-Cola, and you know that the President drinks Coca-Cola, Liz Taylor drinks Coca-Cola, and just think, you can drink Coca-Cola, too. A Coke is a Coke and no amount of money can get you a better Coke than the one the bum on the corner is drinking. All the Cokes are the same and all the Cokes are good. Liz Taylor knows it, the President knows it, the bum knows it, and you know it.[37]
35
+
36
+ New York City's Museum of Modern Art hosted a Symposium on pop art in December 1962 during which artists such as Warhol were attacked for "capitulating" to consumerism. Critics were scandalized by Warhol's open embrace of market culture. This symposium set the tone for Warhol's reception.
37
+
38
+ A pivotal event was the 1964 exhibit The American Supermarket, a show held in Paul Bianchini's Upper East Side gallery. The show was presented as a typical U.S. small supermarket environment, except that everything in it (the produce, canned goods, meat, posters on the wall, and so on) was created by six prominent pop artists of the time, among them the controversial (and like-minded) Billy Apple, Mary Inman, and Robert Watts. Warhol's painting of a can of Campbell's soup cost $1,500 while each autographed can sold for $6. The exhibit was one of the first mass events that directly confronted the general public with both pop art and the perennial question of what art is.[38]
39
+
40
+ As an advertisement illustrator in the 1950s, Warhol used assistants to increase his productivity. Collaboration would remain a defining (and controversial) aspect of his working methods throughout his career; this was particularly true in the 1960s. One of the most important collaborators during this period was Gerard Malanga. Malanga assisted the artist with the production of silkscreens, films, sculpture, and other works at "The Factory", Warhol's aluminum foil-and-silver-paint-lined studio on 47th Street (later moved to Broadway). Other members of Warhol's Factory crowd included Freddie Herko, Ondine, Ronald Tavel, Mary Woronov, Billy Name, and Brigid Berlin (from whom he apparently got the idea to tape-record his phone conversations).[39]
41
+
42
+ During the 1960s, Warhol also groomed a retinue of bohemian and counterculture eccentrics upon whom he bestowed the designation "Superstars", including Nico, Joe Dallesandro, Edie Sedgwick, Viva, Ultra Violet, Holly Woodlawn, Jackie Curtis, and Candy Darling. These people all participated in the Factory films, and some—like Berlin—remained friends with Warhol until his death. Important figures in the New York underground art/cinema world, such as writer John Giorno and film-maker Jack Smith, also appear in Warhol films (many premiering at the New Andy Warhol Garrick Theatre and 55th Street Playhouse) of the 1960s, revealing Warhol's connections to a diverse range of artistic scenes during this time. Less well known was his support and collaboration with several teenagers during this era, who would achieve prominence later in life including writer David Dalton,[40] photographer Stephen Shore[41] and artist Bibbe Hansen (mother of pop musician Beck).[42]
43
+
44
+ On June 3, 1968, radical feminist writer Valerie Solanas shot Warhol and Mario Amaya, an art critic and curator, at Warhol's studio.[43] Before the shooting, Solanas had been a marginal figure in the Factory scene. In 1967 she had authored the S.C.U.M. Manifesto,[44] a separatist feminist tract that advocated the elimination of men, and she appeared in the 1968 Warhol film I, a Man. Earlier on the day of the attack, Solanas had been turned away from the Factory after asking for the return of a script she had given to Warhol. The script had apparently been misplaced.[45]
45
+
46
+ Amaya received only minor injuries and was released from the hospital later the same day. Warhol was seriously wounded by the attack and barely survived: surgeons opened his chest and massaged his heart to help stimulate its movement again. He suffered physical effects for the rest of his life, including being required to wear a surgical corset.[14] The shooting had a profound effect on Warhol's life and art.[46][47]
47
+
48
+ Solanas was arrested the day after the assault, after turning herself in to police. By way of explanation, she said that Warhol "had too much control over my life." She was subsequently diagnosed with paranoid schizophrenia and eventually sentenced to three years under the control of the Department of Corrections. After the shooting, the Factory scene heavily increased its security, and for many the "Factory 60s" ended.[47]
49
+
50
+ Warhol had this to say about the attack: "Before I was shot, I always thought that I was more half-there than all-there—I always suspected that I was watching TV instead of living life. People sometimes say that the way things happen in movies is unreal, but actually it's the way things happen in life that's unreal. The movies make emotions look so strong and real, whereas when things really do happen to you, it's like watching television—you don't feel anything. Right when I was being shot and ever since, I knew that I was watching television. The channels switch, but it's all television."[48]
51
+
52
+ Compared to the success and scandal of Warhol's work in the 1960s, the 1970s were a much quieter decade, as he became more entrepreneurial. According to Bob Colacello, Warhol devoted much of his time to rounding up new, rich patrons for portrait commissions—including Shah of Iran Mohammad Reza Pahlavi, his wife Empress Farah Pahlavi, his sister Princess Ashraf Pahlavi, Mick Jagger, Liza Minnelli, John Lennon, Diana Ross, and Brigitte Bardot.[49][50] Warhol's famous portrait of Chinese Communist leader Mao Zedong was created in 1973. He also founded, with Gerard Malanga, Interview magazine, and published The Philosophy of Andy Warhol (1975). An idea expressed in the book: "Making money is art, and working is art and good business is the best art."[51]
53
+
54
+ Warhol socialized at various nightspots in New York City, including Max's Kansas City and, later in the 1970s, Studio 54.[52] He was generally regarded as quiet, shy, and a meticulous observer. Art critic Robert Hughes called him "the white mole of Union Square."[53]
55
+
56
+ In 1979, along with his longtime friend Stuart Pivar, Warhol founded the New York Academy of Art.[54][55]
57
+
58
+ Warhol had a re-emergence of critical and financial success in the 1980s, partially due to his affiliation and friendships with a number of prolific younger artists, who were dominating the "bull market" of 1980s New York art: Jean-Michel Basquiat, Julian Schnabel, David Salle and other so-called Neo-Expressionists, as well as members of the Transavantgarde movement in Europe, including Francesco Clemente and Enzo Cucchi. Before the 1984 Sarajevo Winter Olympics, he teamed with 15 other artists, including David Hockney and Cy Twombly, and contributed a Speed Skater print to the Art and Sport collection. The Speed Skater was used for the official Sarajevo Winter Olympics poster.[56]
59
+
60
+ Around this time, graffiti artist Fab Five Freddy paid homage to Warhol by painting an entire train with Campbell's soup cans. This was instrumental in Freddy becoming involved in the underground NYC art scene and an affiliate of Basquiat.[57]
61
+
62
+ By this period, Warhol was being criticized for becoming merely a "business artist".[58] In 1979, reviewers disliked his exhibits of portraits of 1970s personalities and celebrities, calling them superficial, facile and commercial, with no depth or indication of the significance of the subjects. They also criticized his 1980 exhibit of 10 portraits at the Jewish Museum in Manhattan, entitled Jewish Geniuses, which Warhol, who was uninterested in Judaism and Jews, had summed up in his diary with the words "They're going to sell."[58] In hindsight, however, some critics have come to view Warhol's superficiality and commerciality as "the most brilliant mirror of our times," contending that "Warhol had captured something irresistible about the zeitgeist of American culture in the 1970s."[58]
63
+
64
+ Warhol also had an appreciation for intense Hollywood glamour. He once said: "I love Los Angeles. I love Hollywood. They're so beautiful. Everything's plastic, but I love plastic. I want to be plastic."[59]
65
+
66
+ In 1984 Vanity Fair commissioned Warhol to produce a portrait of Prince, in order to accompany an article that celebrated the success of Purple Rain and its accompanying movie.[60] Referencing the many celebrity portraits produced by Warhol across his career, Orange Prince (1984) was created using a similar composition to the Marilyn "Flavors" series from 1962, among some of Warhol's very first celebrity portraits.[61] Prince is depicted in a pop color palette commonly used by Warhol, in bright orange with highlights of bright green and blue. The facial features and hair are screen-printed in black over the orange background.[62][63][64]
67
+
68
+ In the Andy Warhol Diaries, Warhol recorded how excited he was to see Prince and Billy Idol together at a party in the mid 1980s, and he compared them to the Hollywood movie stars of the 1950s and 1960s who also inspired his portraits: "... seeing these two glamour boys, its like boys are the new Hollywood glamour girls, like Jean Harlow and Marilyn Monroe".[65]
69
+
70
+ Warhol died in Manhattan at 6:32 a.m. on February 22, 1987, at age 58. According to news reports, he had been making a good recovery from gallbladder surgery at New York Hospital before dying in his sleep from a sudden post-operative irregular heartbeat.[66] Prior to his diagnosis and operation, Warhol delayed having his recurring gallbladder problems checked, as he was afraid to enter hospitals and see doctors.[54] His family sued the hospital for inadequate care, saying that the arrhythmia was caused by improper care and water intoxication.[67] The malpractice case was quickly settled out of court; Warhol's family received an undisclosed sum of money.[68]
71
+
72
+ Doctors had expected Warhol to survive the surgery, though a re-evaluation of the case about thirty years after his death showed many indications that the operation was in fact riskier than originally thought.[69] It was widely reported at the time that Warhol died of a "routine" surgery, though considering factors such as his age, a family history of gallbladder problems, his previous gunshot wound, and his medical state in the weeks leading up to the procedure, the potential risk of death following the surgery appears to have been significant.[69]
73
+
74
+ Warhol's brothers took his body back to Pittsburgh, where an open-coffin wake was held at the Thomas P. Kunsak Funeral Home. The solid bronze casket had gold-plated rails and white upholstery. Warhol was dressed in a black cashmere suit, a paisley tie, a platinum wig, and sunglasses. He was laid out holding a small prayer book and a red rose. The funeral liturgy was held at the Holy Ghost Byzantine Catholic Church on Pittsburgh's North Side. The eulogy was given by Monsignor Peter Tay. Yoko Ono and John Richardson were speakers. The coffin was covered with white roses and asparagus ferns. After the liturgy, the coffin was driven to St. John the Baptist Byzantine Catholic Cemetery in Bethel Park, a south suburb of Pittsburgh.[70]
75
+
76
+ At the grave, the priest said a brief prayer and sprinkled holy water on the casket. Before the coffin was lowered, Paige Powell dropped a copy of Interview magazine, an Interview T-shirt, and a bottle of the Estee Lauder perfume "Beautiful" into the grave. Warhol was buried next to his mother and father. A memorial service was held in Manhattan for Warhol on April 1, 1987, at St. Patrick's Cathedral, New York.
77
+
78
+ By the beginning of the 1960s, pop art was an experimental form that several artists were independently adopting; some of these pioneers, such as Roy Lichtenstein, would later become synonymous with the movement. Warhol, who would become famous as the "Pope of Pop", turned to this new style, in which popular subjects could be part of the artist's palette. His early paintings show images taken from cartoons and advertisements, hand-painted with paint drips; those drips emulated the style of successful abstract expressionists such as Willem de Kooning. His Marilyn Monroe paintings would become some of his most popular pop art works. Warhol's first pop art paintings were displayed in April 1961, serving as the backdrop for New York department store Bonwit Teller's window display. This was the same stage his pop art contemporaries Jasper Johns, James Rosenquist and Robert Rauschenberg had also once graced.[71]
79
+
80
+ It was the gallerist Muriel Latow who came up with the ideas for both the soup cans and Warhol's dollar paintings. On November 23, 1961, Warhol wrote Latow a check for $50, which, according to the 2009 Warhol biography Pop, The Genius of Warhol, was payment for coming up with the idea of the soup cans as subject matter.[72] For his first major exhibition, Warhol painted his famous cans of Campbell's soup, which he claimed to have had for lunch for most of his life. A 1964 Large Campbell's Soup Can was sold in a 2007 Sotheby's auction to a South American collector for £5.1 million ($7.4 million).[73]
81
+
82
+ He loved celebrities, so he painted them as well. From these beginnings he developed his later style and subjects. Instead of working on a signature subject matter, as he started out to do, he worked more and more on a signature style, slowly eliminating the handmade from the artistic process. Warhol frequently used silk-screening; his later drawings were traced from slide projections. At the height of his fame as a painter, Warhol had several assistants who produced his silk-screen multiples, following his directions to make different versions and variations.[74]
83
+
84
+ In 1979, Warhol was commissioned by BMW to paint a Group 4 racing version of the then "elite supercar" BMW M1 for the fourth installment in the BMW Art Car Project. It was reported at the time that, unlike the three artists before him, Warhol opted to paint directly onto the automobile himself instead of letting technicians transfer his scale-model design to the car.[75] Warhol reportedly spent only 23 minutes painting the entire car.[76]
85
+
86
+ Warhol produced both comic and serious works; his subject could be a soup can or an electric chair. Warhol used the same techniques—silkscreens, reproduced serially, and often painted with bright colors—whether he painted celebrities, everyday objects, or images of suicide, car crashes, and disasters, as in the 1962–63 Death and Disaster series. The Death and Disaster paintings included Red Car Crash, Purple Jumping Man, and Orange Disaster. One of these paintings, the diptych Silver Car Crash, became his highest-priced work when it sold at Sotheby's Contemporary Art Auction on Wednesday, November 13, 2013, for $105.4 million.[77]
87
+
88
+ Some of Warhol's work, as well as his own personality, has been described as Keatonesque. Warhol was known for playing dumb to the media; he sometimes refused to explain his work and suggested that all one needs to know about it is "already there 'on the surface'."[78]
89
+
90
+ His Rorschach inkblots are intended as pop comments on art and what art could be. His cow wallpaper (literally, wallpaper with a cow motif) and his oxidation paintings (canvases prepared with copper paint that was then oxidized with urine) are also noteworthy in this context. Equally noteworthy is the way these works—and their means of production—mirrored the atmosphere at Andy's New York "Factory". Biographer Bob Colacello provides some details on Andy's "piss paintings":
91
+
92
+ Victor ... was Andy's ghost pisser on the Oxidations. He would come to the Factory to urinate on canvases that had already been primed with copper-based paint by Andy or Ronnie Cutrone, a second ghost pisser much appreciated by Andy, who said that the vitamin B that Ronnie took made a prettier color when the acid in the urine turned the copper green. Did Andy ever use his own urine? My diary shows that when he first began the series, in December 1977, he did, and there were many others: boys who'd come to lunch and drink too much wine, and find it funny or even flattering to be asked to help Andy 'paint'. Andy always had a little extra bounce in his walk as he led them to his studio.[79]
93
+
94
+ Warhol's first portrait of Basquiat (1982) is a black photo-silkscreen over an oxidized copper "piss painting".
95
+
96
+ After many years of silkscreen, oxidation, photography, etc., Warhol returned to painting with a brush in hand in a series of more than 50 large collaborative works done with Jean-Michel Basquiat between 1984 and 1986.[80][81] Despite negative criticism when these were first shown, Warhol called some of them "masterpieces," and they were influential for his later work.[82]
97
+
98
+ Andy Warhol was commissioned in 1984 by collector and gallerist Alexander Iolas to produce work based on Leonardo da Vinci's The Last Supper for an exhibition at the old refectory of the Palazzo delle Stelline in Milan, opposite Santa Maria delle Grazie, where Leonardo's mural can be seen.[83] Warhol exceeded the demands of the commission and produced nearly 100 variations on the theme, mostly silkscreens and paintings, among them a collaborative sculpture with Basquiat, the Ten Punching Bags (Last Supper).[84]
99
+ The Milan exhibition, which opened in January 1987 with a set of 22 silk-screens, was the last exhibition for both the artist and the gallerist.[85] The series of The Last Supper was seen by some as "arguably his greatest,"[86] but by others as "wishy-washy, religiose" and "spiritless."[87] It is the largest series of religious-themed works by any U.S. artist.[86]
100
+
101
+ Artist Maurizio Cattelan has said that it is difficult to separate daily encounters from the art of Andy Warhol: "That's probably the greatest thing about Warhol: the way he penetrated and summarized our world, to the point that distinguishing between him and our everyday life is basically impossible, and in any case useless." Warhol was an inspiration for Cattelan's magazine and photography compilations, such as Permanent Food, Charley, and Toilet Paper.[88]
102
+
103
+ In the period just before his death, Warhol was working on Cars, a series of paintings for Mercedes-Benz.[89]
104
+
105
+ A self-portrait by Andy Warhol (1963–64) fetched $38.4 million at Christie's May Post-War and Contemporary evening sale in New York.[90]
106
+
107
+ On May 9, 2012, his classic painting Double Elvis (Ferus Type) sold at auction at Sotheby's in New York for US$33 million. With commission, the sale price totaled US$37,042,500, short of the $50 million that Sotheby's had predicted the painting might bring. The piece (silkscreen ink and spray paint on canvas) shows Elvis Presley in a gunslinger pose. It was first exhibited in 1963 at the Ferus Gallery in Los Angeles. Warhol made 22 versions of the Double Elvis, nine of which are held in museums.[91][92]
108
+
109
+ In November 2013, his Silver Car Crash (Double Disaster) diptych sold at Sotheby's Contemporary Art Auction for $105.4 million, a new record for the pop artist (pre-auction estimates were at $80 million).[77] Created in 1963, this work had rarely been seen in public in the previous years.[93] In November 2014, Triple Elvis sold for $81.9m (£51.9m) at an auction in New York.[94]
110
+
111
+ Warhol worked across a wide range of media—painting, photography, drawing, and sculpture. In addition, he was a highly prolific filmmaker. Between 1963 and 1968, he made more than 60 films,[95] plus some 500 short black-and-white "screen test" portraits of Factory visitors.[96] One of his most famous films, Sleep, monitors poet John Giorno sleeping for six hours. The 35-minute film Blow Job is one continuous shot of the face of DeVeren Bookwalter supposedly receiving oral sex from filmmaker Willard Maas, although the camera never tilts down to see this. Another, Empire (1964), consists of eight hours of footage of the Empire State Building in New York City at dusk. The film Eat consists of a man eating a mushroom for 45 minutes. Warhol attended the 1962 premiere of the static composition by La Monte Young called Trio for Strings and subsequently created his famous series of static films, including Kiss, Eat, and Sleep (for which Young initially was commissioned to provide music). Uwe Husslein cites filmmaker Jonas Mekas, who accompanied Warhol to the Trio premiere, and who claims Warhol's static films were directly inspired by the performance.[97]
112
+
113
+ Batman Dracula is a 1964 film that was produced and directed by Warhol, without the permission of DC Comics. It was screened only at his art exhibits. Warhol, a fan of the Batman series, made the movie as an "homage" to the series; it is considered the first appearance of a blatantly campy Batman. The film was long thought to have been lost, until scenes from the picture were shown at some length in the 2006 documentary Jack Smith and the Destruction of Atlantis.
114
+
115
+ Warhol's 1965 film Vinyl is an adaptation of Anthony Burgess' popular dystopian novel A Clockwork Orange. Other films of his record improvised encounters between Factory regulars such as Brigid Berlin, Viva, Edie Sedgwick, Candy Darling, Holly Woodlawn, Ondine, Nico, and Jackie Curtis. Legendary underground artist Jack Smith appears in the film Camp.
116
+
117
+ His most popular and critically successful film was Chelsea Girls (1966). The film was highly innovative in that it consisted of two 16 mm-films being projected simultaneously, with two different stories being shown in tandem. From the projection booth, the sound would be raised for one film to elucidate that "story" while it was lowered for the other. The multiplication of images evoked Warhol's seminal silk-screen works of the early 1960s.
118
+
119
+ Warhol was a fan of filmmaker Radley Metzger's film work[98] and commented that Metzger's film, The Lickerish Quartet, was "an outrageously kinky masterpiece".[99][100][101] Blue Movie—a film in which Warhol superstar Viva makes love in bed with Louis Waldon, another Warhol superstar—was Warhol's last film as director.[102][103] The film, a seminal film in the Golden Age of Porn, was, at the time, controversial for its frank approach to a sexual encounter.[104][105] Blue Movie was publicly screened in New York City in 2005, for the first time in more than 30 years.[106]
120
+
121
+ In the wake of the 1968 shooting, a reclusive Warhol relinquished his personal involvement in filmmaking. His acolyte and assistant director, Paul Morrissey, took over the film-making chores for the Factory collective, steering Warhol-branded cinema towards more mainstream, narrative-based, B-movie exploitation fare with Flesh, Trash, and Heat. All of these films, including the later Andy Warhol's Dracula and Andy Warhol's Frankenstein, were far more mainstream than anything Warhol as a director had attempted. These latter "Warhol" films starred Joe Dallesandro—more of a Morrissey star than a true Warhol superstar.
122
+
123
+ In the early 1970s, most of the films directed by Warhol were pulled out of circulation by Warhol and the people around him who ran his business. After Warhol's death, the films were slowly restored by the Whitney Museum and are occasionally projected at museums and film festivals. Few of the Warhol-directed films are available on video or DVD.
124
+
125
+ In the mid-1960s, Warhol adopted the band the Velvet Underground, making them a crucial element of the Exploding Plastic Inevitable multimedia performance art show. Warhol, with Paul Morrissey, acted as the band's manager, introducing them to Nico (who would perform with the band at Warhol's request). While managing the Velvet Underground, Warhol had them dress in all black to perform in front of movies that he was also presenting.[107] In 1966 he "produced" their first album The Velvet Underground & Nico, as well as providing its album art. His actual participation in the album's production amounted to simply paying for the studio time. After the band's first album, Warhol and band leader Lou Reed started to disagree more about the direction the band should take, and their artistic friendship ended.[citation needed] In 1989, after Warhol's death, Reed and John Cale reunited for the first time since 1972 to write, perform, record and release the concept album Songs for Drella, a tribute to Warhol. In October 2019, an audio tape of publicly unknown music by Reed, based on Warhol's 1975 book The Philosophy of Andy Warhol: From A to B and Back Again, was reported to have been discovered in an archive at the Andy Warhol Museum in Pittsburgh.[108]
126
+
127
+ Warhol designed many album covers for various artists, starting with the photographic cover of John Wallowitch's debut album, This Is John Wallowitch!!! (1964). He designed the cover art for The Rolling Stones' albums Sticky Fingers (1971) and Love You Live (1977), and the John Cale albums The Academy in Peril (1972) and Honi Soit (1981). One of Warhol's last works was a portrait of Aretha Franklin for the cover of her 1986 gold album Aretha, which was done in the style of the Reigning Queens series he had completed the year before.[109]
128
+
129
+ Warhol strongly influenced the new wave/punk rock band Devo, as well as David Bowie. Bowie recorded a song called "Andy Warhol" for his 1971 album Hunky Dory. Lou Reed wrote the song "Andy's Chest" about Valerie Solanas, the woman who shot Warhol in 1968. He recorded it with the Velvet Underground, and this version was released on the VU album in 1985. Bowie would later play Warhol in the 1996 movie Basquiat. Bowie recalled how meeting Warhol in real life helped him in the role, and recounted his early meetings with him:
130
+
131
+ I met him a couple of times, but we seldom shared more than platitudes. The first time we saw each other an awkward silence fell till he remarked my bright yellow shoes and started talking enthusiastically. He wanted to be very superficial. And seemingly emotionless, indifferent, just like a dead fish. Lou Reed described him most profoundly when he once told me they should bring a doll of Andy on the market: a doll that you wind up and doesn't do anything. But I managed to observe him well, and that was a helping hand for the film [Basquiat...] We borrowed his clothes from the museum in Pittsburgh, and they were intact, unwashed. Even the pockets weren't emptied: they contained pancake, white, deadly pale fond de teint which Andy always smeared on his face, a check torn in pieces, someone's address, lots of homeopathic pills and a wig. Andy always wore those silver wigs, but he never admitted it were wigs. One of his hairdressers has told me lately that he had his wigs regularly cut, like it were real hair. When the wig was trimmed, he put on another next month as if his hair had grown.[110]
132
+
133
+ The band Triumph also wrote a song about Andy Warhol, "Stranger In A Strange Land", from their 1984 album Thunder Seven.
134
+
135
+ Beginning in the early 1950s, Warhol produced several unbound portfolios of his work.
136
+
137
+ The first of several bound self-published books by Warhol was 25 Cats Name Sam and One Blue Pussy, printed in 1954 by Seymour Berlin on Arches brand watermarked paper using his blotted line technique for the lithographs. The original edition was limited to 190 numbered, hand colored copies, using Dr. Martin's ink washes. Most of these were given by Warhol as gifts to clients and friends. Copy No. 4, inscribed "Jerry" on the front cover and given to Geraldine Stutz, was used for a facsimile printing in 1987,[111] and the original was auctioned in May 2006 for US$35,000 by Doyle New York.[112]
138
+
139
+ Warhol self-published several other books as well.
140
+
141
+ Warhol's book A La Recherche du Shoe Perdu (1955) marked his "transition from commercial to gallery artist".[113] (The title is a play on words by Warhol on the title of French author Marcel Proust's À la recherche du temps perdu.)[113]
142
+
143
+ After gaining fame, Warhol "wrote" several books that were commercially published.
144
+
145
+ Warhol created the fashion magazine Interview, which is still published today. The loopy title script on the cover is thought to be either his own handwriting or that of his mother, Julia Warhola, who would often do text work for his early commercial pieces.[115]
146
+
147
+ Although Andy Warhol is best known for his paintings and films, he authored works in many different media.
148
+
149
+ He founded the gossip magazine Interview, a stage for celebrities he "endorsed" and a business staffed by his friends. He collaborated with others on all of his books (some of which were written with Pat Hackett). He adopted the young painter Jean-Michel Basquiat and the band the Velvet Underground, presenting them to the public as his latest interest and collaborating with them. One might even say that he produced people (as in the Warholian "Superstar" and the Warholian portrait). He endorsed products, appeared in commercials, and made frequent celebrity guest appearances on television shows and in films (he appeared in everything from The Love Boat[129] to Saturday Night Live[130] and the Richard Pryor movie Dynamite Chicken[131]).
150
+
151
+ In this respect Warhol was a fan of "Art Business" and "Business Art"—he, in fact, wrote about his interest in thinking about art as business in The Philosophy of Andy Warhol from A to B and Back Again.[132]
152
+
153
+ Warhol was homosexual.[133][134] In 1980, he told an interviewer that he was still a virgin. Biographer Bob Colacello, who was present at the interview, felt it was probably true and that what little sex he had was probably "a mixture of voyeurism and masturbation—to use [Andy's] word abstract".[135] Warhol's assertion of virginity would seem to be contradicted by his hospital treatment in 1960 for condylomata, a sexually transmitted disease.[136] It has also been contradicted by his lovers, including Warhol muse BillyBoy, who has said they had sex to orgasm: "When he wasn't being Andy Warhol and when you were just alone with him he was an incredibly generous and very kind person. What seduced me was the Andy Warhol who I saw alone. In fact when I was with him in public he kind of got on my nerves....I'd say: 'You're just obnoxious, I can't bear you.'"[137] Billy Name also denied that Warhol was only a voyeur, saying: "He was the essence of sexuality. It permeated everything. Andy exuded it, along with his great artistic creativity....It brought a joy to the whole art world in New York."[138] "But his personality was so vulnerable that it became a defense to put up the blank front."[139] Warhol's lovers included John Giorno,[140] Billy Name,[141] Charles Lisanby,[142] and Jon Gould. His boyfriend of 12 years was Jed Johnson, whom he met in 1968, and who later achieved fame as an interior designer.[143]
154
+
155
+ The fact that Warhol's homosexuality influenced his work and shaped his relationship to the art world is a major subject of scholarship on the artist and is an issue that Warhol himself addressed in interviews, in conversation with his contemporaries, and in his publications (e.g., Popism: The Warhol 1960s). Throughout his career, Warhol produced erotic photography and drawings of male nudes. Many of his most famous works (portraits of Liza Minnelli, Judy Garland, and Elizabeth Taylor, and films such as Blow Job, My Hustler and Lonesome Cowboys) draw from gay underground culture or openly explore the complexity of sexuality and desire. As has been addressed by a range of scholars, many of his films premiered in gay porn theaters, including the New Andy Warhol Garrick Theatre and 55th Street Playhouse, in the late 1960s.[144]
156
+
157
+ The first works that Warhol submitted to a fine art gallery, homoerotic drawings of male nudes, were rejected for being too openly gay.[26] In Popism, furthermore, the artist recalls a conversation with the film maker Emile de Antonio about the difficulty Warhol had being accepted socially by the then-more-famous (but closeted) gay artists Jasper Johns and Robert Rauschenberg. De Antonio explained that Warhol was "too swish and that upsets them." In response to this, Warhol writes, "There was nothing I could say to that. It was all too true. So I decided I just wasn't going to care, because those were all the things that I didn't want to change anyway, that I didn't think I 'should' want to change ... Other people could change their attitudes but not me".[26][145] In exploring Warhol's biography, many turn to this period—the late 1950s and early 1960s—as a key moment in the development of his persona. Some have suggested that his frequent refusal to comment on his work, to speak about himself (confining himself in interviews to responses like "Um, no" and "Um, yes", and often allowing others to speak for him)—and even the evolution of his pop style—can be traced to the years when Warhol was first dismissed by the inner circles of the New York art world.[146]
158
+
159
+ Warhol was a practicing Ruthenian Catholic. He regularly volunteered at homeless shelters in New York City, particularly during the busier times of the year, and described himself as a religious person.[148] Many of Warhol's later works depicted religious subjects, including two series, Details of Renaissance Paintings (1984) and The Last Supper (1986). In addition, a body of religious-themed works was found posthumously in his estate.[148]
160
+
161
+ During his life, Warhol regularly attended Liturgy, and the priest at Warhol's church, Saint Vincent Ferrer, said that the artist went there almost daily,[148] although he was not observed taking Communion or going to Confession and sat or knelt in the pews at the back.[135] The priest thought he was afraid of being recognized; Warhol said he was self-conscious about being seen in a Roman Rite church crossing himself "in the Orthodox way" (right to left instead of the reverse).[135]
162
+
163
+ His art is noticeably influenced by the Eastern Christian tradition which was so evident in his places of worship.[148]
164
+
165
+ Warhol's brother has described the artist as "really religious, but he didn't want people to know about that because [it was] private". Despite the private nature of his faith, in Warhol's eulogy John Richardson depicted it as devout: "To my certain knowledge, he was responsible for at least one conversion. He took considerable pride in financing his nephew's studies for the priesthood".[148]
166
+
167
+ Warhol was an avid collector. His friends referred to his numerous collections, which filled not only his four-story townhouse, but also a nearby storage unit, as "Andy's Stuff." The true extent of his collections was not discovered until after his death, when The Andy Warhol Museum in Pittsburgh took in 641 boxes of his "Stuff."
168
+
169
+ Warhol's collections included a Coca-Cola memorabilia sign and 19th-century paintings,[149] along with airplane menus, unpaid invoices, pizza dough, pornographic pulp novels, newspapers, stamps, supermarket flyers, and cookie jars, among other eccentricities. They also included significant works of art, such as George Bellows's Miss Bentham.[150] One of his main collections was his wigs. Warhol owned more than 40 and felt very protective of his hairpieces, which were sewn by a New York wig-maker from hair imported from Italy. In 1985, a girl snatched Warhol's wig off his head; in his diary entry for that day, he wrote: "I don't know what held me back from pushing her over the balcony."
170
+
171
+ In 1960, he had bought a drawing of a light bulb by Jasper Johns.[151]
172
+
173
+ Another item found in Warhol's boxes at the museum in Pittsburgh was a mummified human foot from Ancient Egypt. The curator of anthropology at Carnegie Museum of Natural History felt that Warhol most likely found it at a flea market.[152]
174
+
175
+ I. Miller Shoes, April 17, 1955, illustration in The New York Times
176
+
177
+ Exploding Plastic Inevitable (show) - the Velvet Underground & Nico, 1966, poster
178
+
179
+ The Souper Dress, 1967, screen-printed paper dress based on Warhol's Campbell's Soup Cans
180
+
181
+ Portrait of Mao Zedong, 1972, synthetic polymer paint and silkscreen ink on canvas
182
+
183
+ Photo of Warhol and Farah Pahlavi, 1977, with works of Warhol on the walls of the Tehran museum
184
+
185
+ BMW Group 4 M1, 1979, painted car
186
+
187
+ Among Warhol's early collectors and influential supporters were Emily and Burton Tremaine. Among the more than 15 artworks they purchased,[153] Marilyn Diptych (now at Tate Modern, London)[154] and A Boy for Meg (now at the National Gallery of Art in Washington, DC)[155] were acquired directly out of Warhol's studio in 1962. One Christmas, Warhol left a small Head of Marilyn Monroe by the Tremaines' door at their New York apartment in gratitude for their support and encouragement.[156]
188
+
189
+ Warhol's will dictated that his entire estate—with the exception of a few modest legacies to family members—would go to create a foundation dedicated to the "advancement of the visual arts". Warhol had so many possessions that it took Sotheby's nine days to auction his estate after his death; the auction grossed more than US$20 million.
190
+
191
+ In 1987, in accordance with Warhol's will, the Andy Warhol Foundation for the Visual Arts was established. The foundation serves as the estate of Andy Warhol, but also has a mission "to foster innovative artistic expression and the creative process" and is "focused primarily on supporting work of a challenging and often experimental nature."[157]
192
+
193
+ The Artists Rights Society is the U.S. copyright representative for the Andy Warhol Foundation for the Visual Arts for all Warhol works with the exception of Warhol film stills.[158] The U.S. copyright representative for Warhol film stills is the Warhol Museum in Pittsburgh.[159] Additionally, the Andy Warhol Foundation for the Visual Arts has agreements in place for its image archive. All digital images of Warhol are exclusively managed by Corbis, while all transparency images of Warhol are managed by Art Resource.[160]
194
+
195
+ The Andy Warhol Foundation released its 20th Anniversary Annual Report as a three-volume set in 2007: Vol. I, 1987–2007; Vol. II, Grants & Exhibitions; and Vol. III, Legacy Program.[161] The Foundation remains one of the largest grant-giving organizations for the visual arts in the U.S.[162]
196
+
197
+ Many of Warhol's works and possessions are on display at The Andy Warhol Museum in Pittsburgh. The foundation donated more than 3,000 works of art to the museum.[163]
198
+
199
+ Warhol appeared as himself in the film Cocaine Cowboys (1979)[164] and in the film Tootsie (1982).
200
+
201
+ After his death, Warhol was portrayed by Crispin Glover in Oliver Stone's film The Doors (1991), by David Bowie in Julian Schnabel's film Basquiat (1996), and by Jared Harris in Mary Harron's film I Shot Andy Warhol (1996). Warhol appears as a character in Michael Daugherty's opera Jackie O (1997). Actor Mark Bringleson makes a brief cameo as Warhol in Austin Powers: International Man of Mystery (1997). Many films by avant-garde cineast Jonas Mekas have captured moments of Warhol's life. Sean Gregory Sullivan depicted Warhol in the film 54 (1998). Guy Pearce portrayed Warhol in the film Factory Girl (2007) about Edie Sedgwick's life.[165] Actor Greg Travis portrays Warhol in a brief scene from the film Watchmen (2009).
202
+
203
+ In the movie Highway to Hell, a group of Andy Warhols are part of the Good Intentions Paving Company, where good-intentioned souls are ground into pavement.[166] In the film Men in Black 3 (2012), Andy Warhol turns out to really be undercover MIB Agent W (played by Bill Hader). Warhol is throwing a party at The Factory in 1969, where he is looked up by MIB Agents K and J (J from the future). Agent W is desperate to end his undercover job ("I'm so out of ideas I'm painting soup cans and bananas, for Christ sakes!", "You gotta fake my death, okay? I can't listen to sitar music anymore." and "I can't tell the girls from the boys.").
204
+
205
+ Andy Warhol (portrayed by Tom Meeten) is one of the main characters of the 2012 British television show Noel Fielding's Luxury Comedy. The character is portrayed as having robot-like mannerisms. In the 2017 feature The Billionaire Boys Club, Cary Elwes portrays Warhol in a film based on the true story of Ron Levin (portrayed by Kevin Spacey), a friend of Warhol's who was murdered in 1986.[167] In September 2016, it was announced that Jared Leto would portray the title character in Warhol, an upcoming American biographical drama film produced by Michael De Luca and written by Terence Winter, based on the book Warhol: The Biography by Victor Bockris.[168]
206
+
207
+ Warhol appeared as a recurring character in the TV series Vinyl, played by John Cameron Mitchell.[175] Warhol was portrayed by Evan Peters in the American Horror Story: Cult episode "Valerie Solanas Died for Your Sins: Scumbag". The episode depicts the attempted assassination of Warhol by Valerie Solanas (Lena Dunham).
208
+
209
+ In early 1969, Andy Warhol was commissioned by Braniff International to appear in two television commercials to promote the luxury airline's new "When You Got It, Flaunt It" campaign. The campaign was created by Braniff's new advertising agency Lois Holland Calloway, led by famed adman George Lois, creator of a celebrated series of Esquire magazine covers. The first series of commercials paired unlikely people who shared the fact that they both flew Braniff Airways. Warhol was paired with boxing legend Sonny Liston. The odd commercial worked, as did the others, which featured unlikely fellow travelers such as painter Salvador Dali and baseball legend Whitey Ford.
210
+
211
+ Two additional commercials for Braniff were created featuring famous persons entering a Braniff jet and being greeted by a Braniff hostess while espousing their liking for flying Braniff. Warhol was also featured in the first of these commercials, which were likewise produced by Lois and released in the summer of 1969. Lois has incorrectly stated that he was commissioned by Braniff in 1967 for representation during that year; at that time, however, Madison Avenue advertising doyenne Mary Wells Lawrence, who was married to Braniff's charismatic chairman and president, Harding Lawrence, was representing the Dallas-based carrier. Lois succeeded the Wells Rich Greene agency on December 1, 1968. The rights to Warhol's films for Braniff and his signed contracts are owned by a private trust and are administered by the Braniff Airways Foundation in Dallas, Texas.[176]
212
+
213
+ A biography of Andy Warhol written by art critic Blake Gopnik was published in 2020 under the title Warhol.[177][178][179]
214
+
215
+ In 2002, the U.S. Postal Service issued an 18-cent stamp commemorating Warhol. Designed by Richard Sheaff of Scottsdale, Arizona, the stamp was unveiled at a ceremony at The Andy Warhol Museum and features Warhol's painting "Self-Portrait, 1964".[180][181] In March 2011, a chrome statue of Andy Warhol and his Polaroid camera was revealed at Union Square in New York City.[182]
en/2290.html.txt ADDED
@@ -0,0 +1,99 @@
1
+
2
+
3
+ Hail is a form of solid precipitation. It is distinct from ice pellets (American English "sleet"), though the two are often confused.[1] It consists of balls or irregular lumps of ice, each of which is called a hailstone. Ice pellets generally fall in cold weather, while hail growth is greatly inhibited when surface temperatures are cold.[2]
4
+
5
+ Unlike other forms of water ice such as graupel, which is made of rime, and ice pellets, which are smaller and translucent, hailstones usually measure between 5 mm (0.2 in) and 15 cm (6 in) in diameter. The METAR reporting code for hail 5 mm (0.20 in) or greater is GR, while smaller hailstones and graupel are coded GS.
6
+
7
+ Hail is possible within most thunderstorms, as they are produced by cumulonimbus clouds,[3] and within 2 nmi (3.7 km) of the parent storm. Hail formation requires environments of strong, upward motion of air within the parent thunderstorm (similar to tornadoes) and lowered heights of the freezing level. In the mid-latitudes, hail forms near the interiors of continents, while in the tropics, it tends to be confined to high elevations.
8
+
9
+ There are methods available to detect hail-producing thunderstorms using weather satellites and weather radar imagery. Hailstones generally fall at higher speeds as they grow in size, though complicating factors such as melting, friction with air, wind, and interaction with rain and other hailstones can slow their descent through Earth's atmosphere. Severe weather warnings are issued for hail when the stones reach a damaging size, as it can cause serious damage to human-made structures and, most commonly, farmers' crops.
10
+
11
+ Any thunderstorm which produces hail that reaches the ground is known as a hailstorm.[4] Hail has a diameter of 5 millimetres (0.20 in) or more.[3] Hailstones can grow to 15 centimetres (6 in) and weigh more than 0.5 kilograms (1.1 lb).[5]
12
+
13
+ Unlike ice pellets, hailstones are layered and can be irregular and clumped together. Hail is composed of transparent ice, or of alternating layers of transparent and translucent ice at least 1 millimetre (0.039 in) thick, which are deposited upon the hailstone as it travels through the cloud, suspended aloft by air with strong upward motion until its weight overcomes the updraft and it falls to the ground. Although hail diameter varies, in the United States the average observation of damaging hail is between 2.5 cm (1 in) and golf ball-sized (1.75 in).[6]
14
+
15
+ Stones larger than 2 cm (0.80 in) are usually considered large enough to cause damage. The Meteorological Service of Canada issues severe thunderstorm warnings when hail of that size or larger is expected.[7] The US National Weather Service uses a threshold of 2.5 cm (1 in) or greater in diameter, effective January 2010, an increase over the previous threshold of ¾-inch hail.[8] Other countries have different thresholds according to local sensitivity to hail; for instance, grape-growing areas could be adversely impacted by smaller hailstones. Hailstones can be very large or very small, depending on how strong the updraft is: weaker hailstorms produce smaller hailstones than stronger hailstorms (such as supercells).
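+
+ These warning thresholds are simple diameter cutoffs, so they can be captured in a few lines of code. A minimal sketch, using only the two agency figures quoted above; the function and its name are illustrative, not taken from any agency's software:
+
+ ```python
+ # Severe-hail warning thresholds quoted above (illustrative helper only).
+ MSC_THRESHOLD_CM = 2.0   # Meteorological Service of Canada
+ NWS_THRESHOLD_CM = 2.5   # US National Weather Service, effective January 2010
+
+ def meets_severe_threshold(diameter_cm: float, agency: str = "NWS") -> bool:
+     """Return True if hail of this diameter meets the agency's warning threshold."""
+     thresholds = {"MSC": MSC_THRESHOLD_CM, "NWS": NWS_THRESHOLD_CM}
+     return diameter_cm >= thresholds[agency]
+
+ print(meets_severe_threshold(2.2, "MSC"))  # True: 2.2 cm >= 2.0 cm
+ print(meets_severe_threshold(2.2, "NWS"))  # False: below the 1 in (2.5 cm) threshold
+ ```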
16
+
17
+ Hail forms in strong thunderstorm clouds, particularly those with intense updrafts, high liquid water content, great vertical extent, large water droplets, and where a good portion of the cloud layer is below freezing, 0 °C (32 °F).[3] These types of strong updrafts can also indicate the presence of a tornado.[9] The growth rate of hailstones is affected by factors such as higher elevation, lower freezing levels, and wind shear.[10]
18
+
19
+ Like other precipitation in cumulonimbus clouds, hail begins as water droplets. As the droplets rise and the temperature drops below freezing, they become supercooled and will freeze on contact with condensation nuclei. A cross-section through a large hailstone shows an onion-like structure: the hailstone is made of thick translucent layers alternating with layers that are thin, white and opaque. An earlier theory suggested that hailstones were subjected to multiple descents and ascents, falling into a zone of humidity and refreezing as they were uplifted, and that this up-and-down motion was responsible for the successive layers of the hailstone. New research, based on theory as well as field study, has shown this is not necessarily true.
20
+
21
+ The storm's updraft, with upwardly directed wind speeds as high as 110 miles per hour (180 km/h),[11] blows the forming hailstones up through the cloud. As the hailstone ascends, it passes into areas of the cloud where the concentration of humidity and supercooled water droplets varies, and its growth rate changes accordingly. The accretion rate of these water droplets is another factor in the hailstone's growth. When the hailstone moves into an area with a high concentration of water droplets, it captures those droplets and acquires a translucent layer. Should the hailstone move into an area where mostly water vapor is available, it acquires a layer of opaque white ice.[12]
22
+
23
+ Furthermore, the hailstone's speed depends on its position in the cloud's updraft and its mass. This determines the varying thicknesses of the layers of the hailstone. The accretion rate of supercooled water droplets onto the hailstone depends on the relative velocities between these water droplets and the hailstone itself. This means that, generally, the larger hailstones will form some distance from the stronger updraft, where they can spend more time growing.[12] As the hailstone grows it releases latent heat, which keeps its exterior in a liquid phase. Because it undergoes 'wet growth', the outer layer is sticky (i.e. more adhesive), so a single hailstone may grow by collision with other smaller hailstones, forming a larger entity with an irregular shape.[14]
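+
+ The dependence of accretion on droplet concentration and relative velocity can be made concrete with the continuous-collection approximation from cloud physics, in which a spherical stone of diameter D sweeping through cloud with liquid water content W at relative speed v gains mass at roughly dM/dt = (pi * D^2 / 4) * v * W * E, where E is a collection efficiency. A minimal sketch; the sample numbers and the efficiency value are illustrative assumptions, not figures from this article:
+
+ ```python
+ import math
+
+ def growth_rate(diameter_m: float, rel_speed_ms: float,
+                 lwc_kg_m3: float, efficiency: float = 0.7) -> float:
+     """Continuous-collection mass growth rate dM/dt (kg/s) for a spherical stone."""
+     swept_area = math.pi * diameter_m ** 2 / 4.0  # cross-section swept through cloud
+     return swept_area * rel_speed_ms * lwc_kg_m3 * efficiency
+
+ # Illustrative numbers: a 1 cm stone moving at 10 m/s relative to the droplets
+ # in cloud holding 3 g/m^3 of supercooled liquid water.
+ rate = growth_rate(0.01, 10.0, 3e-3)
+ print(f"dM/dt ~ {rate * 1e6:.2f} mg/s")  # ~1.65 mg/s under these assumptions
+ ```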
24
+
25
+ Hail can also undergo 'dry growth', in which the latent heat released through freezing is not enough to keep the outer layer in a liquid state. Hail forming in this manner appears opaque due to small air bubbles that become trapped in the stone during rapid freezing. These bubbles coalesce and escape during the 'wet growth' mode, leaving the hailstone clearer. The mode of growth for a hailstone can change throughout its development, and this can result in distinct layers in a hailstone's cross-section.[15]
26
+
27
+ The hailstone will keep rising in the thunderstorm until its mass can no longer be supported by the updraft. This may take at least 30 minutes, based on the force of the updrafts in the hail-producing thunderstorm, the top of which is usually greater than 10 km high. The stone then falls toward the ground while continuing to grow, based on the same processes, until it leaves the cloud. It will later begin to melt as it passes into air above freezing temperature.[16]
28
+
29
+ Thus, a unique trajectory in the thunderstorm is sufficient to explain the layer-like structure of the hailstone. The only case in which multiple trajectories can be discussed is in a multicellular thunderstorm, where the hailstone may be ejected from the top of the "mother" cell and captured in the updraft of a more intense "daughter" cell. This, however, is an exceptional case.[12]
30
+
31
+ Hail is most common within continental interiors of the mid-latitudes, as hail formation is considerably more likely when the freezing level is below the altitude of 11,000 feet (3,400 m).[17] Movement of dry air into strong thunderstorms over continents can increase the frequency of hail by promoting evaporational cooling, which lowers the freezing level of thunderstorm clouds, giving hail a larger volume to grow in. Accordingly, hail is less common in the tropics, despite a much higher frequency of thunderstorms than in the mid-latitudes, because the atmosphere over the tropics tends to be warmer up to a much greater altitude. Hail in the tropics occurs mainly at higher elevations.[18]
32
+
33
+ Hail growth becomes vanishingly small when air temperatures fall below −30 °C (−22 °F) as supercooled water droplets become rare at these temperatures.[17] Around thunderstorms, hail is most likely within the cloud at elevations above 20,000 feet (6,100 m). Between 10,000 feet (3,000 m) and 20,000 feet (6,100 m), 60 percent of hail is still within the thunderstorm, though 40 percent now lies within the clear air under the anvil. Below 10,000 feet (3,000 m), hail is equally distributed in and around a thunderstorm to a distance of 2 nautical miles (3.7 km).[19]
34
+
35
+ Hail occurs most frequently within continental interiors at mid-latitudes and is less common in the tropics, despite a much higher frequency of thunderstorms there than in the mid-latitudes.[20] Hail is also much more common along mountain ranges because mountains force horizontal winds upwards (known as orographic lifting), thereby intensifying the updrafts within thunderstorms and making hail more likely.[21] The higher elevations also result in there being less time available for hail to melt before reaching the ground. One of the more common regions for large hail is across mountainous northern India, which reported one of the highest hail-related death tolls on record in 1888.[22] China also experiences significant hailstorms.[23] Central Europe and southern Australia also experience many hailstorms. Regions where hailstorms frequently occur are southern and western Germany, northern and eastern France, and southern and eastern Benelux. In southeastern Europe, Croatia and Serbia experience frequent occurrences of hail.[24]
36
+
37
+ In North America, hail is most common in the area where Colorado, Nebraska, and Wyoming meet, known as "Hail Alley".[25] Hail in this region occurs between the months of March and October during the afternoon and evening hours, with the bulk of the occurrences from May through September. Cheyenne, Wyoming is North America's most hail-prone city with an average of nine to ten hailstorms per season.[26] To the north of this area and also just downwind of the Rocky Mountains is the Hailstorm Alley region of Alberta, which also experiences an increased incidence of significant hail events.
38
+
39
+ Weather radar is a very useful tool for detecting the presence of hail-producing thunderstorms. However, radar data must be complemented by knowledge of current atmospheric conditions, which allows one to determine whether the atmosphere is conducive to hail development.
40
+
41
+ Modern radar scans many angles around the site. Reflectivity values at multiple angles above ground level in a storm are proportional to the precipitation rate at those levels. Summing reflectivities into the Vertically Integrated Liquid, or VIL, gives the liquid water content in the cloud. Research shows that hail development in the upper levels of the storm is related to the evolution of VIL. VIL divided by the vertical extent of the storm, called VIL density, has a relationship with hail size, although this varies with atmospheric conditions and is therefore not highly accurate.[27] Traditionally, hail size and probability can be estimated from radar data by computer using algorithms based on this research. Some algorithms include the height of the freezing level to estimate the melting of the hailstone and what would be left on the ground.
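+
+ The VIL summation itself is straightforward to compute from a vertical profile of reflectivity. A minimal sketch using the commonly cited Greene-Clark form of the integral, in which each layer contributes 3.44e-6 * Zbar^(4/7) * dh (Z in linear units of mm^6/m^3, layer depth dh in metres); the profile values below are invented for illustration:
+
+ ```python
+ def vil(dbz_profile, heights_m):
+     """Vertically Integrated Liquid (kg/m^2) from a column of reflectivities."""
+     # Convert dBZ to linear reflectivity Z (mm^6/m^3), then apply the
+     # Greene-Clark relation layer by layer: 3.44e-6 * Zbar**(4/7) * dh.
+     z_linear = [10 ** (dbz / 10.0) for dbz in dbz_profile]
+     total = 0.0
+     for i in range(len(z_linear) - 1):
+         z_bar = (z_linear[i] + z_linear[i + 1]) / 2.0
+         dh = heights_m[i + 1] - heights_m[i]
+         total += 3.44e-6 * z_bar ** (4.0 / 7.0) * dh
+     return total
+
+ # Invented storm column: reflectivity (dBZ) sampled at 1 km height steps.
+ dbz = [45, 55, 60, 58, 50, 35]
+ hts = [1000, 2000, 3000, 4000, 5000, 6000]
+ v = vil(dbz, hts)
+ print(f"VIL ~ {v:.1f} kg/m^2, VIL density ~ {1000 * v / hts[-1]:.1f} g/m^3")
+ ```
+
+ The last line divides the column total by the echo-top height, matching the definition of VIL density given above (VIL divided by the storm's vertical extent).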
42
+
43
+ Certain patterns of reflectivity are important clues for the meteorologist as well. The three-body scatter spike is an example. This is the result of energy from the radar hitting hail and being deflected to the ground, where it is deflected back up to the hail and then to the radar. The energy takes more time to travel from the hail to the ground and back than to travel directly from the hail to the radar, so the echo appears farther from the radar than the actual location of the hail on the same radial path, forming a cone of weaker reflectivities.
44
+
45
+ More recently, the polarization properties of weather radar returns have been analyzed to differentiate between hail and heavy rain.[28][29] The use of differential reflectivity (Z_dr), in combination with horizontal reflectivity (Z_h), has led to a variety of hail classification algorithms.[30] Visible satellite imagery is beginning to be used to detect hail, but false alarm rates remain high using this method.[31]
75
+
76
+ The size of hailstones is best determined by measuring their diameter with a ruler. In the absence of a ruler, hailstone size is often visually estimated by comparing it to that of known objects, such as coins.[32] Using objects such as hen's eggs, peas, and marbles for comparing hailstone sizes is imprecise, due to their varied dimensions. The UK organisation TORRO also has scales for both hailstones and hailstorms.[33]
77
+
78
+ When hail is observed at an airport, METAR code is used within a surface weather observation to indicate the size of the hailstones. Within METAR code, GR is used to indicate larger hail, of a diameter of at least 0.25 inches (6.4 mm); GR is derived from the French word grêle. Smaller-sized hail, as well as snow pellets, use the coding of GS, which is short for the French word grésil.[34]
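+
+ The GR/GS distinction is a simple size cutoff and can be expressed as a lookup. A minimal sketch using the 6.4 mm (0.25 in) boundary given here; the function is illustrative, not part of any real METAR encoder:
+
+ ```python
+ def metar_hail_code(diameter_mm: float) -> str:
+     """Map hail diameter to the METAR precipitation code described above."""
+     # GR (from French 'grele'): hail at least 6.4 mm (0.25 in) in diameter;
+     # GS (from French 'gresil'): smaller hail and snow pellets.
+     return "GR" if diameter_mm >= 6.4 else "GS"
+
+ print(metar_hail_code(25.0))  # 'GR', roughly quarter-sized hail
+ print(metar_hail_code(4.0))   # 'GS'
+ ```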
79
+
80
+ Terminal velocity of hail, or the speed at which hail is falling when it strikes the ground, varies. It is estimated that a hailstone of 1 centimetre (0.39 in) in diameter falls at a rate of 9 metres per second (20 mph), while stones 8 centimetres (3.1 in) in diameter fall at a rate of 48 metres per second (110 mph). Hailstone velocity depends on the size of the stone, friction with the air it is falling through, the motion of the wind it is falling through, collisions with raindrops or other hailstones, and melting as the stones fall through a warmer atmosphere. As hailstones are not perfect spheres, it is difficult to calculate their speed accurately.[35]
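+
+ The two figures quoted above imply a roughly power-law dependence of fall speed on diameter: fitting v = a * d^b through (1 cm, 9 m/s) and (8 cm, 48 m/s) gives b = ln(48/9)/ln(8), about 0.8. A minimal sketch of that interpolation; it is a curve fit to the two quoted estimates, not a physical drag model, and it ignores the melting, wind, and collision effects just mentioned:
+
+ ```python
+ import math
+
+ # Fit v = a * d**b through the two estimates quoted above:
+ # (d = 1 cm, v = 9 m/s) and (d = 8 cm, v = 48 m/s).
+ b = math.log(48 / 9) / math.log(8 / 1)   # ~0.805
+ a = 9.0                                  # fall speed at d = 1 cm
+
+ def fall_speed(d_cm: float) -> float:
+     """Interpolated terminal velocity (m/s) for a hailstone of diameter d_cm."""
+     return a * d_cm ** b
+
+ for d in (1, 2, 4, 8):
+     print(f"{d} cm -> {fall_speed(d):.0f} m/s")  # 9, 16, 27, 48 m/s
+ ```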
81
+
82
+ Megacryometeors, large rocks of ice that are not associated with thunderstorms, are not officially recognized by the World Meteorological Organization as "hail", which denotes aggregations of ice associated with thunderstorms; records of extreme characteristics of megacryometeors are therefore not given as hail records.
83
+
84
+ Hail can cause serious damage, notably to automobiles, aircraft, skylights, glass-roofed structures, livestock, and most commonly, crops.[26] Hail damage to roofs often goes unnoticed until further structural damage is seen, such as leaks or cracks. It is hardest to recognize hail damage on shingled roofs and flat roofs, but all roofs have their own hail damage detection problems.[42] Metal roofs are fairly resistant to hail damage, but may accumulate cosmetic damage in the form of dents and damaged coatings.[43]
85
+
86
+ Hail is one of the most significant thunderstorm hazards to aircraft.[44] When hailstones exceed 0.5 inches (13 mm) in diameter, planes can be seriously damaged within seconds.[45] The hailstones accumulating on the ground can also be hazardous to landing aircraft. Hail is also a common nuisance to drivers of automobiles, severely denting the vehicle and cracking or even shattering windshields and windows. Wheat, corn, soybeans, and tobacco are the most sensitive crops to hail damage.[22] Hail is one of Canada's most expensive hazards.[46]
87
+
88
+ Rarely, massive hailstones have been known to cause concussions or fatal head trauma. Hailstorms have been the cause of costly and deadly events throughout history. One of the earliest known incidents occurred around the 9th century in Roopkund, Uttarakhand, India, where 200 to 600 nomads seem to have died of injuries from hail the size of cricket balls.[47]
89
+
90
+ Narrow zones where hail accumulates on the ground in association with thunderstorm activity are known as hail streaks or hail swaths,[48] which can be detected by satellite after the storms pass by.[49] Hailstorms normally last from a few minutes up to 15 minutes.[26] Accumulating hail storms can blanket the ground with over 2 inches (5.1 cm) of hail, cause thousands to lose power, and bring down many trees. Flash flooding and mudslides within areas of steep terrain can be a concern with accumulating hail.[50]
91
+
92
+ Depths of up to 18 in (0.46 m) have been reported. A landscape covered in accumulated hail generally resembles one covered in accumulated snow and any significant accumulation of hail has the same restrictive effects as snow accumulation, albeit over a smaller area, on transport and infrastructure.[51] Accumulated hail can also cause flooding by blocking drains, and hail can be carried in the floodwater, turning into a snow-like slush which is deposited at lower elevations.
93
+
94
+ On rare occasions, a thunderstorm can become stationary or nearly so while prolifically producing hail, and significant depths of accumulation do occur; this tends to happen in mountainous areas, such as the July 29, 2010 case[52] of a foot of hail accumulation in Boulder County, Colorado. On June 5, 2015, hail up to four feet deep fell on one city block in Denver, Colorado. The hailstones, described as between the size of bumble bees and ping pong balls, were accompanied by rain and high winds. The hail fell in only that one area, leaving the surroundings untouched. It fell for one and a half hours between 10:00 p.m. and 11:30 p.m. A meteorologist for the National Weather Service in Boulder said, "It's a very interesting phenomenon. We saw the storm stall. It produced copious amounts of hail in one small area. It's a meteorological thing." Tractors used to clear the area filled more than 30 dump-truck loads of hail.[53]
95
+
96
+ Research focused on four individual days that accumulated more than 5.9 inches (15 cm) of hail in 30 minutes on the Colorado front range has shown that these events share similar patterns in observed synoptic weather, radar, and lightning characteristics,[54] suggesting the possibility of predicting these events prior to their occurrence. A fundamental problem in continuing research in this area is that, unlike hail diameter, hail depth is not commonly reported. The lack of data leaves researchers and forecasters in the dark when trying to verify operational methods. A cooperative effort between the University of Colorado and the National Weather Service is in progress. The joint project's goal is to enlist the help of the general public to develop a database of hail accumulation depths.[55]
97
+
98
+ During the Middle Ages, people in Europe used to ring church bells and fire cannons to try to prevent hail and the subsequent damage to crops. Updated versions of this approach are available as modern hail cannons. After World War II, cloud seeding was done to eliminate the hail threat,[11] particularly across the Soviet Union, where it was claimed that a 70–98% reduction in crop damage from hail storms was achieved by deploying silver iodide in clouds using rockets and artillery shells.[56][57] Hail suppression programs have been undertaken by 15 countries between 1965 and 2005.[11][22]
99
+
en/2294.html.txt ADDED
@@ -0,0 +1,147 @@
Strike action, also called labor strike, labour strike, or simply strike, is a work stoppage caused by the mass refusal of employees to work. A strike usually takes place in response to employee grievances. Strikes became common during the Industrial Revolution, when mass labor became important in factories and mines. In most countries, strike actions were quickly made illegal,[citation needed] as factory owners had far more power than workers. Most Western countries partially legalized striking in the late 19th or early 20th centuries.

Strikes are sometimes used to pressure governments to change policies. Occasionally, strikes destabilize the rule of a particular political party or ruler; in such cases, strikes are often part of a broader social movement taking the form of a campaign of civil resistance. Notable examples are the 1980 Gdańsk Shipyard strike and the 1981 Warning Strike, led by Lech Wałęsa. These strikes were significant in the long campaign of civil resistance for political change in Poland, and were an important mobilizing effort that contributed to the fall of the Iron Curtain and the end of communist party rule in eastern Europe.[1]

The use of the English word "strike" was first seen in 1768, when sailors, in support of demonstrations in London, "struck" or removed the topgallant sails of merchant ships at port, thus crippling the ships.[2][3][4] Official publications have typically used the more neutral words "work stoppage" or "industrial dispute".

The first historically certain account of strike action dates to the end of the 20th Dynasty, under Pharaoh Ramses III in ancient Egypt, on 14 November 1152 BC. The artisans of the Royal Necropolis at Deir el-Medina walked off their jobs because they had not been paid.[5][6] The Egyptian authorities raised the wages.

An early predecessor of the general strike may have been the secessio plebis in ancient Rome. In The Outline of History, H. G. Wells characterized this event as "the general strike of the plebeians; the plebeians seem to have invented the strike, which now makes its first appearance in history."[7] Their first strike occurred because they "saw with indignation their friends, who had often served the state bravely in the legions, thrown into chains and reduced to slavery at the demand of patrician creditors".[7]

Strike action only became a feature of the political landscape with the onset of the Industrial Revolution. For the first time in history, large numbers of people were members of the industrial working class; they lived in cities and exchanged their labor for payment. By the 1830s, when the Chartist movement was at its peak in Britain, a true and widespread 'workers' consciousness' was awakening. In 1838, a Statistical Society of London committee "used the first written questionnaire... The committee prepared and printed a list of questions 'designed to elicit the complete and impartial history of strikes.'"[8]

In 1842 the demands for fairer wages and conditions across many different industries finally exploded into the first modern general strike. After the second Chartist Petition was presented to Parliament in April 1842 and rejected, the strike began in the coal mines of Staffordshire, England, and soon spread through Britain, affecting factories and mills in Lancashire and coal mines from Dundee to South Wales and Cornwall.[9] Far from being a spontaneous uprising of mutinous masses, the strike was politically motivated and driven by an agenda to win concessions. Probably as much as half of the industrial workforce of the day was on strike at its peak – over 500,000 men.[citation needed] The local leadership marshalled a growing working-class tradition to politically organize their followers to mount an articulate challenge to the capitalist political establishment. Friedrich Engels, an observer in London at the time, wrote:

by its numbers, this class has become the most powerful in England, and woe betide the wealthy Englishmen when it becomes conscious of this fact ... The English proletarian is only just becoming aware of his power, and the fruits of this awareness were the disturbances of last summer.[10]

As the 19th century progressed, strikes became a fixture of industrial relations across the industrialized world, as workers organized themselves to collectively bargain for better wages and standards with their employers. Karl Marx condemned Pierre-Joseph Proudhon's theory criminalizing strike action in his work The Poverty of Philosophy.[11]

In 1937 there were 4,740 strikes in the United States.[12] This was the greatest strike wave in American labor history. The number of major strikes and lockouts in the U.S. fell by 97%, from 381 in 1970 to 187 in 1980 to only 11 in 2010. Companies countered the threat of a strike by threatening to close or move a plant.[13][14]

The International Covenant on Economic, Social and Cultural Rights, adopted in 1966, guarantees the right to strike in Article 8, and the European Social Charter, adopted in 1961, also guarantees the right to strike in Article 6.

The Farah Strike of 1972–1974, labeled the "strike of the century," was organized and led predominantly by Mexican American women in El Paso, Texas.[15]

Strikes are rare, in part because many workers are not covered by a collective bargaining agreement.[16] Strikes that do occur are generally fairly short in duration.[16] Labor economist John Kennan notes:

In Britain in 1926 (the year of the general strike) about 9 workdays per worker were lost due to strikes. In 1979, the loss due to strikes was a little more than one day per worker. These are the extreme cases. In the 79 years following 1926, the number of workdays lost in Britain was less than 2 hours per year per worker. In the U.S., idleness due to strikes never exceeded one half of one percent of total working days in any year during the period 1948-2005; the average loss was 0.1% per year. Similarly, in Canada over the period 1980-2005, the annual number of work days lost due to strikes never exceeded one day per worker; on average over this period lost worktime due to strikes was about one-third of a day per worker. Although the data are not readily available for a broad sample of developed countries, the pattern described above seems quite general: days lost due to strikes amount to only a fraction of a day per worker per annum, on average, exceeding one day only in a few exceptional years.[16]

Since the 1990s, strike actions have generally further declined, a phenomenon that might be attributable to lower information costs (and thus more readily available access to information on economic rents) made possible by computerization.[16] In the United States, the number of workers involved in major work stoppages (including strikes and, less commonly, lockouts) that involved at least a thousand workers for at least one full shift generally declined from 1973 to 2017 (coinciding with a general decrease in overall union membership), before substantially increasing in 2018 and 2019.[17]

Most strikes are undertaken by labor unions during collective bargaining as a last resort. The object of collective bargaining is for the employer and the union to come to an agreement over wages, benefits, and working conditions. A collective bargaining agreement may include a clause (a contractual "no-strike clause") which prohibits the union from striking during the term of the agreement.[18] Under U.S. labor law, a strike in violation of a no-strike clause is not a protected concerted activity.[18] The scope of a no-strike clause varies; generally, the U.S. courts and National Labor Relations Board have determined that a collective bargaining agreement's no-strike clause has the same scope as the agreement's arbitration clauses, such that "the union cannot strike over an arbitrable issue."[18] The U.S. Supreme Court held in Jacksonville Bulk Terminals Inc. v. International Longshoremen's Association (1982), a case involving the International Longshoremen's Association refusing to work with goods for export to the Soviet Union in protest against its invasion of Afghanistan, that a no-strike clause does not bar unions from refusing to work as a political protest (since that is not an "arbitrable" issue), although such activity may lead to damages for a secondary boycott.[18] Whether a no-strike clause applies to sympathy strikes depends on the context.[18] Some in the labor movement consider no-strike clauses to be an unnecessary detriment to unions in the collective bargaining process.[19]

Occasionally, workers decide to strike without the sanction of a labor union, either because the union refuses to endorse such a tactic, or because the workers concerned are non-unionized. Such strikes are often described as unofficial. Strikes without formal union authorization are also known as wildcat strikes.

In many countries, wildcat strikes do not enjoy the same legal protections as recognized union strikes, and may result in penalties for the union members who participate or their union. The same often applies in the case of strikes conducted without an official ballot of the union membership, as is required in some countries such as the United Kingdom.

A strike may consist of workers refusing to attend work or picketing outside the workplace to prevent or dissuade people from working in their place or conducting business with their employer. Less frequently, workers may occupy the workplace but refuse either to do their jobs or to leave. This is known as a sit-down strike. A similar tactic is the work-in, where employees occupy the workplace but still continue work, often without pay; this attempts to show that they are still useful, or that worker self-management can be successful. For instance, this occurred with factory occupations in the Biennio Rosso strikes – the "two red years" of Italy from 1919 to 1920.[citation needed]

Another unconventional tactic is work-to-rule (also known as an Italian strike, in Italian: Sciopero bianco), in which workers perform their tasks exactly as they are required to but no better. For example, workers might follow all safety regulations in such a way that it impedes their productivity, or they might refuse to work overtime. Such strikes may in some cases be a form of "partial strike" or "slowdown".

During the development boom of the 1970s in Australia, the Green ban was developed by certain unions described by some as more socially conscious. This is a form of strike action taken by a trade union or other organized labor group for environmentalist or conservationist purposes. It developed from the black ban, strike action taken against a particular job or employer in order to protect the economic interests of the strikers.

United States labor law also draws a distinction, in the case of private sector employers covered by the National Labor Relations Act, between "economic" and "unfair labor practice" strikes. An employer may not fire, but may permanently replace, workers who engage in a strike over economic issues. On the other hand, employers who commit unfair labor practices (ULPs) may not replace employees who strike over them, and must fire any strikebreakers they have hired as replacements in order to reinstate the striking workers.

Strikes may be specific to a particular workplace, employer, or unit within a workplace, or they may encompass an entire industry, or every worker within a city or country. Strikes that involve all workers, or a number of large and important groups of workers, in a particular community or region are known as general strikes. Under some circumstances, strikes may take place in order to put pressure on the State or other authorities, or may be a response to unsafe conditions in the workplace.

A sympathy strike is, in a way, a small-scale version of a general strike, in which one group of workers refuses to cross a picket line established by another as a means of supporting the striking workers. Sympathy strikes, once the norm in the construction industry in the United States, have been made much more difficult to conduct due to decisions of the National Labor Relations Board permitting employers to establish separate or "reserved" gates for particular trades, making it an unlawful secondary boycott for a union to establish a picket line at any gate other than the one reserved for the employer it is picketing. Sympathy strikes may be undertaken by a union as an organization or by individual union members choosing not to cross a picket line.

A jurisdictional strike in United States labor law refers to a concerted refusal to work undertaken by a union to assert its members' right to particular job assignments and to protest the assignment of disputed work to members of another union or to unorganized workers.

In a student strike, students (sometimes supported by faculty) refuse to attend school. In some cases, the strike is intended to draw media attention to the institution so that the grievances causing the students to "strike" can be aired before the public; this usually damages the institution's (or government's) public image. In other cases, especially in government-supported institutions, a student strike can cause a budgetary imbalance and have actual economic repercussions for the institution.

A hunger strike is a deliberate refusal to eat. Hunger strikes are often used in prisons as a form of political protest. Like student strikes, a hunger strike aims to worsen the public image of the target.

A "sickout", or (especially among uniformed police officers) "blue flu", is a type of strike action in which the strikers call in sick. This is used in cases where laws prohibit certain employees from declaring a strike. Police, firefighters, air traffic controllers, and teachers in some U.S. states are among the groups commonly barred from striking, usually by state and federal laws meant to ensure the safety or security of the general public.

Newspaper writers may withhold their names from their stories as a way to protest actions of their employer.[20]

Activists may form "flying squad" groups for strikes or other actions to disrupt the workplace or another aspect of capitalism: supporting other strikers or unemployed workers, participating in protests against globalization, or opposing abusive landlords.[21]

On 30 January 2015, the Supreme Court of Canada ruled that there is a constitutional right to strike.[22] In this 5–2 majority decision, Justice Rosalie Abella ruled that "[a]long with their right to associate, speak through a bargaining representative of their choice, and bargain collectively with their employer through that representative, the right of employees to strike is vital to protecting the meaningful process of collective bargaining..." [paragraph 24]. This decision adopted the dissent by Chief Justice Brian Dickson in a 1987 Supreme Court ruling on a reference case brought by the province of Alberta. The exact scope of this right to strike remains unclear.[23] Prior to this Supreme Court decision, the federal and provincial governments had the ability to introduce "back to work legislation", a special law that blocks a strike (or a lockout) from happening or continuing. Canadian governments could also impose binding arbitration or a new contract on the disputing parties. Back to work legislation was first used in 1950, during a railway strike, and as of 2012 had been used 33 times by the federal government for those parts of the economy that are regulated federally (grain handling, rail and air travel, and the postal service), and in more cases provincially. In addition, certain parts of the economy can be proclaimed "essential services", in which case all strikes are illegal.[24]

Examples include the government of Canada's passage of back to work legislation during the 2011 Canada Post lockout and the 2012 CP Rail strike, effectively ending those disputes. In 2016, the government's use of back to work legislation during the 2011 Canada Post lockout was ruled unconstitutional, with the judge specifically referencing the Supreme Court of Canada's 2015 decision Saskatchewan Federation of Labour v Saskatchewan.[25]

In some Marxist–Leninist states, such as the former USSR or the People's Republic of China, striking was illegal and viewed as counter-revolutionary. Since the government in such systems claimed to represent the working class, it was argued that unions and strikes were not necessary.[citation needed] In 1976, China signed the International Covenant on Economic, Social and Cultural Rights, which guaranteed the right to unions and striking, but Chinese officials declared that they had no interest in allowing these liberties.[26] (In June 2008, however, the municipal government in Shenzhen in southern China introduced draft labor regulations, which labor rights advocacy groups say would, if implemented, virtually restore Chinese workers' right to strike.[27]) Trade unions in the Soviet Union served in part as a means to educate workers about the country's economic system. Vladimir Lenin referred to trade unions as "Schools of Communism". They were essentially state propaganda and control organs for regulating the workforce, which also provided workers with social activities.[citation needed]

In France, the right to strike is recognized and guaranteed by the Constitution.

A "minimum service" during strikes in public transport was a promise of Nicolas Sarkozy during his campaign for the 2007 French presidential election. A law "on social dialogue and continuity of public service in regular terrestrial transports of passengers" was adopted on 12 August 2007 and took effect on 1 January 2008.

This law, among other measures, forces certain categories of public transport workers (such as train and bus drivers) to declare to their employer, 48 hours in advance, whether they intend to go on strike. Should they strike without having declared their intention beforehand, they leave themselves open to sanctions.

The unions opposed, and still oppose, this law. They argue that these 48 hours are used not only to pressure the workers but also to keep files on the more militant workers, whose careers can then more easily be undermined by the employers. Most importantly, they argue that this law prevents the more hesitant workers from deciding to join the strike the day before it begins, once they have been convinced to do so by their colleagues and, more particularly, the union militants, who maximize their efforts in building the strike (by handing out leaflets, organizing meetings, and discussing the demands with their colleagues) in the last few days preceding it. This law also makes it more difficult for a strike to spread rapidly to other workers, as they are required to wait at least 48 hours before joining it.

This law also makes it easier for employers to organize production, as they can use their human resources more effectively, knowing beforehand who is going to be at work, thus somewhat undermining the effects of the strike.

However, this law has not had much effect, as strikes in public transport still occur in France and, at times, the workers refuse to comply with its rules. The public transport industry – whether publicly or privately owned – remains very militant in France and keen on taking strike action when its interests are threatened by employers or the government.

The public transport workers in France, in particular the "Cheminots" (employees of the national French railway company), are often seen as the most radical "vanguard" of the French working class. In the eyes of many, this law has not changed this fact.

Legislation was enacted in the aftermath of the 1919 police strikes forbidding British police both from taking industrial action and from discussing the possibility with colleagues.[28]

In January 1951, during the Labour Attlee ministry, Attorney-General Hartley Shawcross left his name to a Parliamentary principle in a defense of his conduct regarding an illegal strike: that the Attorney-General "is not to be put, and is not put, under pressure by his colleagues in the matter" of whether or not to institute criminal proceedings.[29][30]

The Industrial Relations Act 1971 was repealed by the Trade Union and Labour Relations Act 1974, sections of which were themselves repealed by the Employment Act 1982.

The Code of Practice on Industrial Action Ballots and Notices, and sections 22 and 25 of the Employment Relations Act 2004, which concern industrial action notices, came into force on 1 October 2005.

The Police Federation, created at the time to deal with employment grievances and to provide representation to police officers, attempted to put pressure on the Blair ministry and repeatedly threatened strike action.[28]

Prison officers have gained and lost the right to strike over the years; most recently, despite it being illegal, they walked out on 15 November 2016[31] and again on 14 September 2018.[32]

The Railway Labor Act bans strikes by United States airline and railroad employees except in narrowly defined circumstances. The National Labor Relations Act generally permits strikes, but provides a mechanism to enjoin workers from striking in industries in which a strike would create a national emergency. The federal government most recently invoked these statutory provisions to obtain an injunction requiring the International Longshore and Warehouse Union to return to work in 2002, after it had been locked out by the employer group, the Pacific Maritime Association.

Some jurisdictions prohibit all strikes by public employees, under laws such as the "Taylor Law" in New York. Other jurisdictions impose strike bans only on certain categories of workers, particularly those regarded as critical to society: police, teachers, and firefighters are among the groups commonly barred from striking in these jurisdictions. Some states, such as New Jersey, Michigan, Iowa, or Florida, do not allow teachers in public schools to strike. Workers have sometimes circumvented these restrictions by falsely claiming inability to work due to illness – this is sometimes called a "sickout" or "blue flu", the latter receiving its name from the uniforms worn by police officers, who are traditionally prohibited from striking. The term "red flu" has sometimes been used to describe this action when undertaken by firefighters.

Often, specific regulations on strike actions exist for employees in prisons. The Code of Federal Regulations declares "encouraging others to refuse to work, or to participate in a work stoppage" by prisoners to be a "High Severity Level Prohibited Act" and authorizes solitary confinement for periods of up to a year for each violation.[33] The California Code of Regulations states that "[p]articipation in a strike or work stoppage", "[r]efusal to perform work or participate in a program as ordered or assigned", and "[r]ecurring failure to meet work or program expectations within the inmate's abilities when lesser disciplinary methods failed to correct the misconduct" by prisoners constitute "serious misconduct" under §3315(a)(3)(L), leading to gang affiliation under CCR §3000.[34]

Postal workers involved in the 1978 wildcat strikes in Jersey City, Kearny, New Jersey, San Francisco, and Washington, D.C. were fired under the presidency of Jimmy Carter, and President Ronald Reagan fired the air traffic controllers of the PATCO union after the air traffic controllers' strike of 1981.

The West Virginia teachers' strike in 2018 inspired teachers in other states, including Oklahoma, Colorado, and Arizona, to take similar action.[35]

A strikebreaker (sometimes derogatorily called a scab, blackleg, or knobstick) is a person who works despite an ongoing strike. Strikebreakers are usually individuals who were not employed by the company before the trade union dispute, but were hired after or during the strike to keep the organization running. "Strikebreakers" may also refer to workers (union members or not) who cross picket lines to work.

Irwin, Jones, and McGovern (2008) believe that the term "scab" is part of a larger metaphor involving strikes. They argue that the picket line is symbolic of a wound, and those who break its borders to return to work are the scabs who bond that wound. Others have argued that the word is not part of a larger metaphor but rather an old-fashioned English insult whose meaning narrowed over time.

"Blackleg" is an older word, found in the late-nineteenth/early-twentieth century folk song from Northumberland, "Blackleg Miner". The term does not necessarily owe its origins to this song, whose tune is of unknown origin; the song is, however, notable for lyrics that encourage violent acts against strikebreakers.

The concept of union strikebreaking or union scabbing refers to any circumstance in which union workers themselves cross picket lines to work.

Unionized workers are sometimes required to cross picket lines established by other unions because their organizations have signed contracts that include no-strike clauses. A no-strike clause typically requires that members of the union not conduct any strike action for the duration of the contract; such actions are called sympathy or secondary strikes. Members who honor the picket line in spite of the contract frequently face discipline, since their action may be viewed as a violation of the contract's provisions. Therefore, any union conducting a strike action typically seeks to include a provision of amnesty for all who honored the picket line in the agreement that settles the strike.

No-strike clauses may also prevent unionized workers from engaging in solidarity actions for other workers even when no picket line is crossed. For example, striking workers in manufacturing or mining produce a product which must be transported. In a situation where the factory or mine owners have replaced the strikers, unionized transport workers may feel inclined to refuse to haul any product that is produced by strikebreakers, yet their own contract obligates them to do so.

Historically, the practice of union strikebreaking has been a contentious issue in the union movement, and a point of contention between adherents of different union philosophies. For example, supporters of industrial unions, which have sought to organize entire workplaces without regard to individual skills, have criticized craft unions for organizing workplaces into separate unions according to skill, a circumstance that makes union strikebreaking more common. Union strikebreaking is not, however, unique to craft unions.

Most strikes called by unions are somewhat predictable; they typically occur after the contract has expired. However, not all strikes are called by union organizations – some strikes have been called in an effort to pressure employers to recognize unions. Other strikes may be spontaneous actions by working people. Spontaneous strikes are sometimes called "wildcat strikes"; they were the key fighting point in May 1968 in France, and most commonly they are responses to serious (often life-threatening) safety hazards in the workplace rather than disputes over wages, hours, and the like.

Whatever the cause of the strike, employers are generally motivated to take measures to prevent strikes, mitigate their impact, or undermine them when they do occur.

Companies that produce products for sale will frequently increase inventories prior to a strike. Salaried employees may be called upon to take the place of strikers, which may entail advance training. If the company has multiple locations, personnel may be redeployed to meet the needs of reduced staff.

Companies may also take out strike insurance to help offset the losses which a strike would cause.

One of the weapons traditionally wielded by already-established unions is strike action. Some companies may decline entirely to negotiate with the union and respond to the strike by hiring replacement workers. This may create a crisis for strikers: do they stick to their original plan and rely upon their solidarity, or is there a chance that the strike may be lost? How long will the strike last? Will strikers' jobs still be there if the strike fails? Are other strikers defecting from the strike? Companies that hire strikebreakers typically play upon these fears when they attempt to convince union members to abandon the strike and cross the union's picket line.

Unions faced with a strikebreaking situation may try to inhibit the use of strikebreakers by a variety of methods – establishing picket lines where the strikebreakers enter the workplace; discouraging strikebreakers from taking, or from keeping, strikebreaking jobs; raising the cost of hiring strikebreakers for the company; or employing public relations tactics. Companies may respond by increasing security forces and seeking court injunctions.

Examining conditions in the late 1990s, John Logan observed that union-busting agencies helped to "transform economic strikes into a virtually suicidal tactic for US unions". Logan further observed that "as strike rates in the United States have plummeted to historic low levels, the demand for strike management firms has also declined."[36]

In the US, as established in the National Labor Relations Act, private sector employees have a legally protected right to strike to gain better wages, benefits, or working conditions, and they cannot be fired for doing so. Striking for economic reasons (such as protesting workplace conditions or supporting a union's bargaining demands) allows an employer to hire permanent replacements; a replacement worker can continue in the job, and the striking worker must then wait for a vacancy. But if the strike is due to unfair labor practices, the replaced strikers can demand immediate reinstatement when the strike ends. If a collective bargaining agreement is in effect and contains a "no-strike clause", a strike during the life of the contract could result in the firing of all striking employees, which could result in the dissolution of that union. Although this is legal, it could be viewed as union busting.

Some companies negotiate with the union during a strike; other companies may see a strike as an opportunity to eliminate the union. This is sometimes accomplished by the importation of replacement workers, strikebreakers, or "scabs". Historically, strikebreaking has often coincided with union busting. It was also called "black legging" in the early twentieth century, during the Russian socialist movement.[37]

One method of inhibiting or ending a strike is firing union members who are striking, which can result in the elimination of the union. Although this has happened, it is rare, because laws regarding firing and the "right to strike" differ widely in the US depending on whether union members are in the public or private sector, and also vary from country to country. In the UK, "It is important to understand that there is no right to strike in UK law."[38] Employees who strike risk dismissal, unless it is an official strike (one called or endorsed by their union), in which case they are protected from unlawful dismissal and cannot be fired for at least 12 weeks. UK laws regarding work stoppages and strikes are defined within the Employment Relations Act 1999 and the Trade Union and Labour Relations (Consolidation) Act 1992.

A significant case of mass dismissals in the UK in 2005 involved the sacking of over 600 Gate Gourmet employees at Heathrow Airport.[39] The sacking prompted a walkout by British Airways ground staff, leading to cancelled flights and thousands of delayed passengers. The walkout was illegal under UK law, and the T&GWU quickly brought it to an end. A subsequent court case ruled that demonstrations on a grass verge approaching the Gate Gourmet premises were not illegal, but limited their number and made the T&G responsible for their action.[40]

In 1962, US President John F. Kennedy issued Executive Order #10988,[41] which permitted federal employees to form trade unions but prohibited strikes (codified in 1966 at 5 U.S.C. 7311 – Loyalty and Striking). In 1981, after the public sector union PATCO (Professional Air Traffic Controllers Organization) went on strike illegally, President Ronald Reagan fired all of the controllers. His action resulted in the dissolution of the union. PATCO later re-formed as the National Air Traffic Controllers Association.

Another counter to a strike is a lockout, the form of work stoppage in which an employer refuses to allow employees to work. Two of the three employers involved in the Caravan park grocery workers strike of 2003–2004 locked out their employees in response to a strike against the third member of the employer bargaining group. Lockouts are, with certain exceptions, lawful under United States labor law.

Historically, some employers have attempted to break union strikes by force. One of the most famous examples occurred during the Homestead Strike of 1892. Industrialist Henry Clay Frick sent private security agents from the Pinkerton National Detective Agency to break the Amalgamated Association of Iron and Steel Workers strike at a steel mill in Homestead, Pennsylvania. Two strikers were killed and twelve wounded, along with two Pinkertons killed and eleven wounded. In the aftermath, Frick was shot in the neck and then stabbed by Alexander Berkman; Frick survived the attack, and Berkman was sentenced to 22 years in prison.
en/2295.html.txt ADDED
@@ -0,0 +1,147 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+ Strike action, also called labor strike, labour strike, or simply strike, is a work stoppage, caused by the mass refusal of employees to work. A strike usually takes place in response to employee grievances. Strikes became common during the Industrial Revolution, when mass labor became important in factories and mines. In most countries, strike actions were quickly made illegal,[citation needed] as factory owners had far more power than workers. Most Western countries partially legalized striking in the late 19th or early 20th centuries.
4
+
5
+ Strikes are sometimes used to pressure governments to change policies. Occasionally, strikes destabilize the rule of a particular political party or ruler; in such cases, strikes are often part of a broader social movement taking the form of a campaign of civil resistance. Notable examples are the 1980 Gdańsk Shipyard, and the 1981 Warning Strike, led by Lech Wałęsa. These strikes were significant in the long campaign of civil resistance for political change in Poland, and were an important mobilizing effort that contributed to the fall of the Iron Curtain and the end of communist party rule in eastern Europe.[1]
6
+
7
+ The use of the English word "strike" was first seen in 1768, when sailors, in support of demonstrations in London, "struck" or removed the topgallant sails of merchant ships at port, thus crippling the ships.[2][3][4] Official publications have typically used the more neutral words "work stoppage" or "industrial dispute".
8
+
9
+ The first historically certain account of strike action was towards the end of the 20th dynasty, under Pharaoh Ramses III in ancient Egypt on 14 November in 1152 BC. The artisans of the Royal Necropolis at Deir el-Medina walked off their jobs because they had not been paid.[5][6] The Egyptian authorities raised the wages.
10
+
11
+ An early predecessor of the general strike may have been the secessio plebis in ancient Rome. In The Outline of History, H. G. Wells characterized this event as "the general strike of the plebeians; the plebeians seem to have invented the strike, which now makes its first appearance in history."[7] Their first strike occurred because they "saw with indignation their friends, who had often served the state bravely in the legions, thrown into chains and reduced to slavery at the demand of patrician creditors".[7]
12
+
13
+ The strike action only became a feature of the political landscape with the onset of the Industrial Revolution. For the first time in history, large numbers of people were members of the industrial working class; they lived in cities and exchanged their labor for payment. By the 1830s, when the Chartist movement was at its peak in Britain, a true and widespread 'workers consciousness' was awakening. In 1838, a Statistical Society of London committee "used the first written questionnaire... The committee prepared and printed a list of questions 'designed to elicit the complete and impartial history of strikes.'" [8]
14
+
15
+ In 1842 the demands for fairer wages and conditions across many different industries finally exploded into the first modern general strike. After the second Chartist Petition was presented to Parliament in April 1842 and rejected, the strike began in the coal mines of Staffordshire, England, and soon spread through Britain affecting factories, mills in Lancashire and coal mines from Dundee to South Wales and Cornwall.[9] Instead of being a spontaneous uprising of the mutinous masses, the strike was politically motivated and was driven by an agenda to win concessions. Probably as much as half of the then industrial work force were on strike at its peak – over 500,000 men.[citation needed] The local leadership marshalled a growing working class tradition to politically organize their followers to mount an articulate challenge to the capitalist, political establishment. Friedrich Engels, an observer in London at the time, wrote:
16
+
17
+ by its numbers, this class has become the most powerful in England, and woe betide the wealthy Englishmen when it becomes conscious of this fact ... The English proletarian is only just becoming aware of his power, and the fruits of this awareness were the disturbances of last summer.[10]
18
+
19
+ As the 19th century progressed, strikes became a fixture of industrial relations across the industrialized world, as workers organized themselves to collectively bargain for better wages and standards with their employers. Karl Marx has condemned the theory of Pierre-Joseph Proudhon criminalizing strike action in his work The Poverty of Philosophy.[11]
20
+
21
+ In 1937 there were 4,740 strikes in the United States.[12] This was the greatest strike wave in American labor history. The number of major strikes and lockouts in the U.S. fell by 97% from 381 in 1970 to 187 in 1980 to only 11 in 2010. Companies countered the threat of a strike by threatening to close or move a plant.[13][14]
22
+
23
+ The International Covenant on Economic, Social and Cultural Rights adopted in 1967 ensure the right to strike in Article 8 and European Social Charter adopted in 1961 also ensure the right to strike in Article 6.
24
+
25
+ The Farah Strike, 1972–1974, labeled the "strike of the century," and it was organized and led by Mexican American women predominantly in El Paso, Texas.[15]
26
+
27
+ Strikes are rare, in part because many workers are not covered by a collective bargaining agreement.[16] Strikes that do occur are generally fairly short in duration.[16] Labor economist John Kennan notes:
28
+
29
+ In Britain in 1926 (the year of the general strike) about 9 workdays per worker were lost due to strikes. In 1979, the loss due to strikes was a little more than one day per worker. These are the extreme cases. In the 79 years following 1926, the number of workdays lost in Britain was less than 2 hours per year per worker. In the U.S., idleness due to strikes never exceeded one half of one percent of total working days in any year during the period 1948-2005; the average loss was 0.1% per year. Similarly, in Canada over the period 1980-2005, the annual number of work days lost due to strikes never exceeded one day per worker; on average over this period lost worktime due to strikes was about one-third of a day per worker. Although the data are not readily available for a broad sample of developed countries, the pattern described above seems quite general: days lost due to strikes amount to only a fraction of a day per worker per annum, on average, exceeding one day only in a few exceptional years.[16]
30
+
31
+ Since the 1990s, strike actions have generally further declined, a phenomenon that might be attributable to lower information costs (and thus more readily available access to information on economic rents) made possible by computerization.[16] In the United States, the number of workers involved in major work stoppages (including strikes and, less commonly, lockouts) that involved at least a thousand works for at least one full shift generally declined from 1973 to 2017 (coinciding with a general decrease in overall union membership), before substantially increasing in 2018 and 2019.[17]
32
+
33
+ Most strikes are undertaken by labor unions during collective bargaining as a last resort. The object of collective bargaining is for the employer and the union to come to an agreement over wages, benefits, and working conditions. A collective bargaining agreement may include a clause (a contractual "no-strike clause") which prohibits the union from striking during the term of the agreement.[18] Under U.S. labor law, a strike in violation of a no-strike clause is not a protected concerted activity.[18] The scope of a no-strike clause varies; generally, the U.S. courts and National Labor Relations Board have determined that a collective bargaining agreement's no-strike clause has the same scope as the agreement's arbitration clauses, such that "the union cannot strike over an arbitrable issue."[18] The U.S. Supreme Court held in Jacksonville Bulk Terminals Inc. v. International Longshoremen's Association (1982), a case involving the International Longshoremen's Association refusing to work with goods for export to the Soviet Union in protest against its invasion of Afghanistan, that a no-strike clause does not bar unions from refusing to work as a political protest (since that is not an "arbitrable" issue), although such activity may lead to damages for a secondary boycott.[18] Whether a no-strike clause applies to sympathy strikes depends on the context.[18] Some in the labor movement consider no-strike clauses to be an unnecessary detriment to unions in the collective bargaining process.[19]
34
+
35
+ Occasionally, workers decide to strike without the sanction of a labor union, either because the union refuses to endorse such a tactic, or because the workers concerned are non-unionized. Such strikes are often described as unofficial. Strikes without formal union authorization are also known as wildcat strikes.
36
+
37
+ In many countries, wildcat strikes do not enjoy the same legal protections as recognized union strikes, and may result in penalties for the union members who participate or their union. The same often applies in the case of strikes conducted without an official ballot of the union membership, as is required in some countries such as the United Kingdom.
38
+
39
+ A strike may consist of workers refusing to attend work or picketing outside the workplace to prevent or dissuade people from working in their place or conducting business with their employer. Less frequently workers may occupy the workplace, but refuse either to do their jobs or to leave. This is known as a sit-down strike. A similar tactic is the work-in, where employees occupy the workplace but still continue work, often without pay, which attempts to show they are still useful, or that worker self-management can be successful. For instance, this occurred with factory occupations in the Biennio Rosso strikes – the "two red years" of Italy from 1919 to 1920.[citation needed]
40
+
41
+ Another unconventional tactic is work-to-rule (also known as an Italian strike, in Italian: Sciopero bianco), in which workers perform their tasks exactly as they are required to but no better. For example, workers might follow all safety regulations in such a way that it impedes their productivity or they might refuse to work overtime. Such strikes may in some cases be a form of "partial strike" or "slowdown".
42
+
43
+ During the development boom of the 1970s in Australia, the Green ban was developed by certain unions described by some as more socially conscious. This is a form of strike action taken by a trade union or other organized labor group for environmentalist or conservationist purposes. This developed from the black ban, strike action taken against a particular job or employer in order to protect the economic interests of the strikers.
44
+
45
+ United States labor law also draws a distinction, in the case of private sector employers covered by the National Labor Relations Act, between "economic" and "unfair labor practice" strikes. An employer may not fire, but may permanently replace, workers who engage in a strike over economic issues. On the other hand, employers who commit unfair labor practices (ULPs) may not replace employees who strike over them, and must fire any strikebreakers they have hired as replacements in order to reinstate the striking workers.
46
+
47
+ Strikes may be specific to a particular workplace, employer, or unit within a workplace, or they may encompass an entire industry, or every worker within a city or country. Strikes that involve all workers, or a number of large and important groups of workers, in a particular community or region are known as general strikes. Under some circumstances, strikes may take place in order to put pressure on the State or other authorities or may be a response to unsafe conditions in the workplace.
48
+
49
+ A sympathy strike is, in a way, a small scale version of a general strike in which one group of workers refuses to cross a picket line established by another as a means of supporting the striking workers. Sympathy strikes, once the norm in the construction industry in the United States, have been made much more difficult to conduct due to decisions of the National Labor Relations Board permitting employers to establish separate or "reserved" gates for particular trades, making it an unlawful secondary boycott for a union to establish a picket line at any gate other than the one reserved for the employer it is picketing. Sympathy strikes may be undertaken by a union as an orgition or by individual union members choosing not to cross a picket line.
50
+
51
+ A jurisdictional strike in United States labor law refers to a concerted refusal to work undertaken by a union to assert its members’ right to particular job assignments and to protest the assignment of disputed work to members of another union or to unorganized workers.
52
+
53
+ A student strike has the students (sometimes supported by faculty) not attending schools. In some cases, the strike is intended to draw media attention to the institution so that the grievances that are causing the students to "strike" can be aired before the public; this usually damages the institution's (or government's) public image. In other cases, especially in government-supported institutions, the student strike can cause a budgetary imbalance and have actual economic repercussions for the institution.
54
+
55
+ A hunger strike is a deliberate refusal to eat. Hunger strikes are often used in prisons as a form of political protest. Like student strikes, a hunger strike aims to worsen the public image of the target.
56
+
57
+ A "sickout", or (especially by uniformed police officers) "blue flu", is a type of strike action in which the strikers call in sick. This is used in cases where laws prohibit certain employees from declaring a strike. Police, firefighters, air traffic controllers, and teachers in some U.S. states are among the groups commonly barred from striking usually by state and federal laws meant to ensure the safety or security of the general public.
58
+
59
+ Newspaper writers may withhold their names from their stories as a way to protest actions of their employer.[20]
60
+
61
+ Activists may form "flying squad" groups for strikes or other actions to disrupt the workplace or another aspect of capitalism: supporting other strikers or unemployed workers, participating in protests against globalization, or opposing abusive landlords.[21]
62
+
63
+ On 30 January 2015, the Supreme Court of Canada ruled that there is a constitutional right to strike.[22] In this 5–2 majority decision, Justice Rosalie Abella ruled that "[a]long with their right to associate, speak through a bargaining representative of their choice, and bargain collectively with their employer through that representative, the right of employees to strike is vital to protecting the meaningful process of collective bargaining..." [paragraph 24]. This decision adopted the dissent by Chief Justice Brian Dickson in a 1987 Supreme Court ruling on a reference case brought by the province of Alberta. The exact scope of this right to strike remains unclear.[23] Prior to this Supreme Court decision, the federal and provincial governments had the ability to introduce "back to work legislation", a special law that blocks the strike action (or a lockout) from happening or continuing. Canadian governments could also have imposed binding arbitration or a new contract on the disputing parties. Back to work legislation was first used in 1950 during a railway strike, and as of 2012 had been used 33 times by the federal government for those parts of the economy that are regulated federally (grain handling, rail and air travel, and the postal service), and in more cases provincially. In addition, certain parts of the economy can be proclaimed "essential services" in which case all strikes are illegal.[24]
64
+
65
+ Examples include when the government of Canada passed back to work legislation during the 2011 Canada Post lockout and the 2012 CP Rail strike, thus effectively ending the strikes. In 2016, the government's use of back to work legislation during the 2011 Canada Post lockout was ruled unconstitutional, with the judge specifically referencing the Supreme Court of Canada's 2015 decision Saskatchewan Federation of Labour v Saskatchewan.[25]
66
+
67
+ In some Marxist–Leninist states, such as the former USSR or the People's Republic of China, striking was illegal and viewed as counter-revolutionary. Since the government in such systems claims to represent the working class, it has been argued that unions and strikes were not necessary.[citation needed] In 1976, China signed the International Covenant on Economic, Social and Cultural Rights, which guaranteed the right to unions and striking, but Chinese officials declared that they had no interest in allowing these liberties.[26] (In June 2008, however, the municipal government in Shenzhen in southern China introduced draft labor regulations, which labor rights advocacy groups say would, if implemented, virtually restore Chinese workers' right to strike.[27]) Trade unions in the Soviet Union served in part as a means to educate workers about the country's economic system. Vladimir Lenin referred to trade unions as "Schools of Communism". They were essentially state propaganda and control organs to regulate the workforce, also providing them with social activities.[citation needed]
68
+
69
+ In France, the right to strike is recognized and guaranteed by the Constitution.
70
+
71
+ A "minimum service" during strikes in public transport was a promise of Nicolas Sarkozy during his campaign for the French presidential election. A law "on social dialogue and continuity of public service in regular terrestrial transports of passengers" was adopted on 12 August 2007, and it took effect on 1 January 2008.
72
+
73
+ This law, among other measures, forces certain categories of public transport workers (such as train and bus drivers) to declare to their employer 48 hours in advance if they intend to go on strike. Should they go on strike without having declared their intention to do so beforehand, they leave themselves open to sanctions.
74
+
75
+ The unions did and still oppose this law and argue these 48 hours are used not only to pressure the workers but also to keep files on the more militant workers, who will more easily be undermined in their careers by the employers. Most importantly, they argue this law prevents the more hesitant workers from making the decision to join the strike the day before, once they've been convinced to do so by their colleagues and more particularly the union militants, who maximize their efforts in building the strike (by handing out leaflets, organizing meetings, discussing the demands with their colleagues) in the last few days preceding the strike. This law makes it also more difficult for the strike to spread rapidly to other workers, as they are required to wait at least 48 hours before joining the strike.
76
+
77
+ This law also makes it easier for the employers to organize the production as it may use its human resources more effectively, knowing beforehand who is going to be at work and not, thus undermining, albeit not that much, the effects of the strike.
78
+
79
+ However, this law has not had much effect as strikes in public transports still occur in France and at times, the workers refuse to comply by the rules of this law. The public transport industry – public or privately owned – remains very militant in France and keen on taking strike action when their interests are threatened by the employers or the government.
80
+
81
+ The public transport workers in France, in particular the "Cheminots" (employees of the national French railway company) are often seen as the most radical "vanguard" of the French working class. This law has not, in the eyes of many, changed this fact.
82
+
83
+ Legislation was enacted in the aftermath of the 1919 police strikes, forbidding British police from both taking industrial action, and discussing the possibility with colleagues.[28]
84
+
85
+ In January 1951 during the Labour Attlee ministry, Attorney-General Hartley Shawcross left his name to a Parliamentary principle in a defense of his conduct regarding an illegal strike: that the Attorney-General "is not to be put, and is not put, under pressure by his colleagues in the matter" of whether or not to establish criminal proceedings.[29][30]
86
+
87
+ The Industrial Relations Act 1971 was repealed through the Trade Union and Labour Relations Act 1974, sections of which were repealed by the Employment Act 1982.
88
+
89
+ The Code of Practice on Industrial Action Ballots and Notices, and sections 22 and 25 of the Employment Relations Act 2004, which concern industrial action notices, commenced on 1 October 2005.
90
+
91
+ The Police Federation, which was created at the time to deal with employment grievances and to provide representation to police officers, later attempted to put pressure on the Blair ministry and repeatedly threatened strike action.[28]
92
+
93
+ Prison officers have gained and lost the right to strike over the years; most recently, despite it being illegal, they walked out on 15 November 2016[31] and again on 14 September 2018.[32]
94
+
95
+ The Railway Labor Act bans strikes by United States airline and railroad employees except in narrowly defined circumstances. The National Labor Relations Act generally permits strikes, but provides a mechanism to enjoin workers from striking in industries in which a strike would create a national emergency. The federal government most recently invoked these statutory provisions to obtain an injunction requiring the International Longshore and Warehouse Union to return to work in 2002 after it had been locked out by the employer group, the Pacific Maritime Association.
96
+
97
+ Some jurisdictions prohibit all strikes by public employees, under laws such as the "Taylor Law" in New York. Other jurisdictions impose strike bans only on certain categories of workers, particularly those regarded as critical to society: police, teachers and firefighters are among the groups commonly barred from striking in these jurisdictions. Some states, such as New Jersey, Michigan, Iowa or Florida, do not allow teachers in public schools to strike. Workers have sometimes circumvented these restrictions by falsely claiming inability to work due to illness – this is sometimes called a "sickout" or "blue flu", the latter receiving its name from the uniforms worn by police officers, who are traditionally prohibited from striking. The term "red flu" has sometimes been used to describe this action when undertaken by firefighters.
98
+
99
+ Often, specific regulations on strike actions exist for employees in prisons. The Code of Federal Regulations declares "encouraging others to refuse to work, or to participate in a work stoppage" by prisoners to be a "High Severity Level Prohibited Act" and authorizes solitary confinement for periods of up to a year for each violation.[33] The California Code of Regulations states that "[p]articipation in a strike or work stoppage", "[r]efusal to perform work or participate in a program as ordered or assigned", and "[r]ecurring failure to meet work or program expectations within the inmate's abilities when lesser disciplinary methods failed to correct the misconduct" by prisoners constitute "serious misconduct" under §3315(a)(3)(L), leading to gang affiliation under CCR §3000.[34]
100
+
101
+ Postal workers involved in the 1978 wildcat strikes in Jersey City, Kearny, New Jersey, San Francisco, and Washington, D.C. were fired under the presidency of Jimmy Carter, and President Ronald Reagan fired striking air traffic controllers after the air traffic controllers' strike of 1981, leading to the dissolution of their union, PATCO.
102
+
103
+ The West Virginia teachers' strike in 2018 inspired teachers in other states, including Oklahoma, Colorado, and Arizona, to take similar action.[35]
104
+
105
+ A strikebreaker (sometimes derogatorily called a scab, blackleg, or knobstick) is a person who works despite an ongoing strike. Strikebreakers are usually individuals who are not employed by the company prior to the trade union dispute, but rather hired after or during the strike to keep the organization running. "Strikebreakers" may also refer to workers (union members or not) who cross picket lines to work.
106
+
107
+ Irwin, Jones, and McGovern (2008) believe that the term "scab" is part of a larger metaphor involving strikes. They argue that the picket line is symbolic of a wound, and those who break its boundary to return to work are the scabs that close that wound. Others have argued that the word is not part of a larger metaphor but, rather, was an old-fashioned English insult whose meaning narrowed over time.
108
+
109
+ "Blackleg" is an older word and is found in the late-nineteenth/early-twentieth century folk song from Northumberland, "Blackleg Miner". The term does not necessarily owe its origins to this tune of unknown origin. The song is, however, notable for its lyrics that encourage violent acts against strikebreakers.
110
+
111
+ The concept of union strikebreaking or union scabbing refers to any circumstance in which union workers themselves cross picket lines to work.
112
+
113
+ Unionized workers are sometimes required to cross picket lines established by other unions because their organizations have signed contracts that include no-strike clauses. The no-strike clause typically requires that members of the union not conduct any strike action for the duration of the contract; such actions are called sympathy or secondary strikes. Members who honor the picket line in spite of the contract frequently face discipline, as their action may be viewed as a violation of the contract's provisions. Therefore, any union conducting a strike action typically seeks to include, in the agreement that settles the strike, an amnesty provision for all who honored the picket line.
114
+
115
+ No-strike clauses may also prevent unionized workers from engaging in solidarity actions for other workers even when no picket line is crossed. For example, striking workers in manufacturing or mining produce a product which must be transported. In a situation where the factory or mine owners have replaced the strikers, unionized transport workers may feel inclined to refuse to haul any product that is produced by strikebreakers, yet their own contract obligates them to do so.
116
+
117
+ Historically the practice of union strikebreaking has been a contentious issue in the union movement, and a point of contention between adherents of different union philosophies. For example, supporters of industrial unions, which have sought to organize entire workplaces without regard to individual skills, have criticized craft unions for organizing workplaces into separate unions according to skill, a circumstance that makes union strikebreaking more common. Union strikebreaking is not, however, unique to craft unions.
118
+
119
+ Most strikes called by unions are somewhat predictable; they typically occur after the contract has expired. However, not all strikes are called by union organizations – some strikes have been called in an effort to pressure employers to recognize unions. Other strikes may be spontaneous actions by working people. Spontaneous strikes are sometimes called "wildcat strikes"; they were a key element of the events of May 1968 in France. Most commonly, they are responses to serious (often life-threatening) safety hazards in the workplace rather than to wage or hour disputes.
120
+
121
+ Whatever the cause of the strike, employers are generally motivated to take measures to prevent strikes, to mitigate their impact, or to undermine them when they do occur.
122
+
123
+ Companies which produce products for sale will frequently increase inventories prior to a strike. Salaried employees may be called upon to take the place of strikers, which may entail advance training. If the company has multiple locations, personnel may be redeployed to meet the needs of reduced staff.
124
+
125
+ Companies may also take out strike insurance, to help offset the losses which a strike would cause.
126
+
127
+ One of the weapons traditionally wielded by already-established unions is strike action. Some companies may decline entirely to negotiate with the union, and respond to the strike by hiring replacement workers. This may create a crisis situation for strikers – do they stick to their original plan and rely upon their solidarity, or is there a chance that the strike may be lost? How long will the strike last? Will strikers' jobs still be there if the strike fails? Are other strikers defecting from the strike? Companies that hire strikebreakers typically play upon these fears when they attempt to convince union members to abandon the strike and cross the union's picket line.
128
+
129
+ Unions faced with a strikebreaking situation may try to inhibit the use of strikebreakers by a variety of methods – establishing picket lines where the strikebreakers enter the workplace; discouraging strike breakers from taking, or from keeping, strikebreaking jobs; raising the cost of hiring strikebreakers for the company; or employing public relations tactics. Companies may respond by increasing security forces and seeking court injunctions.
130
+
131
+ Examining conditions in the late 1990s, John Logan observed that union busting agencies helped to "transform economic strikes into a virtually suicidal tactic for US unions". Logan further observed, "as strike rates in the United States have plummeted to historic low levels, the demand for strike management firms has also declined."[36]
132
+
133
+ In the US, as established in the National Labor Relations Act, there is a legally protected right for private sector employees to strike to gain better wages, benefits, or working conditions, and they cannot be fired for doing so. Striking for economic reasons (like protesting workplace conditions or supporting a union's bargaining demands) allows an employer to hire permanent replacements; the replacement worker can continue in the job, and the striking worker must then wait for a vacancy. But if the strike is due to unfair labor practices, the strikers replaced can demand immediate reinstatement when the strike ends. If a collective bargaining agreement is in effect and contains a "no-strike clause", a strike during the life of the contract could result in the firing of all striking employees and the dissolution of the union. Although this is legal, it could be viewed as union busting.
134
+
135
+ Some companies negotiate with the union during a strike; other companies may see a strike as an opportunity to eliminate the union. This is sometimes accomplished by the importation of replacement workers, strikebreakers or "scabs". Historically, strike breaking has often coincided with union busting. It was also called 'black legging' in the early twentieth century, during the Russian socialist movement.[37]
136
+
137
+ One method of inhibiting or ending a strike is to fire union members who are striking, which can result in the elimination of the union. Although this has happened, it is rare: in the US, laws on dismissal and the "right to strike" differ widely depending on whether union members are in the public or private sector, and laws also vary from country to country. In the UK, "It is important to understand that there is no right to strike in UK law."[38] Employees who strike risk dismissal, unless it is an official strike (one called or endorsed by their union), in which case they are protected from unlawful dismissal and cannot be fired for at least 12 weeks. UK laws regarding work stoppages and strikes are defined within the Employment Relations Act 1999 and the Trade Union and Labour Relations (Consolidation) Act 1992.
138
+
139
+ A significant case of mass dismissals in the UK in 2005 involved the sacking of over 600 Gate Gourmet employees at Heathrow Airport.[39] The sacking prompted a walkout by British Airways ground staff, leading to cancelled flights and thousands of delayed passengers. The walkout was illegal under UK law, and the T&GWU quickly brought it to an end. A subsequent court case ruled that demonstrations on a grass verge approaching the Gate Gourmet premises were not illegal, but limited the number of demonstrators and made the T&G responsible for their actions.[40]
140
+
141
+ In 1962, US President John F. Kennedy issued Executive Order #10988,[41] which permitted federal employees to form trade unions but prohibited strikes (codified in 1966 at 5 U.S.C. 7311 – Loyalty and Striking). In 1981, after the public sector union PATCO (Professional Air Traffic Controllers Organization) went on strike illegally, President Ronald Reagan fired all of the controllers. His action resulted in the dissolution of the union. PATCO reformed to become the National Air Traffic Controllers Association.
144
+
145
+ Another counter to a strike is a lockout, the form of work stoppage in which an employer refuses to allow employees to work. Two of the three employers involved in the southern California grocery workers strike of 2003–2004 locked out their employees in response to a strike against the third member of the employer bargaining group. Lockouts are, with certain exceptions, lawful under United States labor law.
146
+
147
+ Historically, some employers have attempted to break union strikes by force. One of the most famous examples occurred during the Homestead Strike of 1892. Industrialist Henry Clay Frick sent private security agents from the Pinkerton National Detective Agency to break the Amalgamated Association of Iron and Steel Workers strike at a Homestead, Pennsylvania steel mill. Two strikers were killed and twelve wounded, along with two Pinkertons killed and eleven wounded. In the aftermath, Frick was shot in the neck and then stabbed by Alexander Berkman; Frick survived the attack, while Berkman was sentenced to 22 years in prison.
en/2296.html.txt ADDED
@@ -0,0 +1,207 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+
3
+
4
+
5
+ Influenza, commonly known as "the flu", is an infectious disease caused by an influenza virus.[1] Symptoms can be mild to severe.[5] The most common symptoms include high fever, runny nose, sore throat, muscle and joint pain, headache, coughing, and feeling tired.[1] These symptoms typically begin two days after exposure to the virus, and most last less than a week.[1] The cough, however, may last for more than two weeks.[1] In children, there may be diarrhea and vomiting, but these are not common in adults.[6] Diarrhea and vomiting occur more commonly in gastroenteritis, which is an unrelated disease sometimes inaccurately referred to as "stomach flu" or the "24-hour flu".[6] Complications of influenza may include viral pneumonia, secondary bacterial pneumonia, sinus infections, and worsening of previous health problems such as asthma or heart failure.[2][5]
6
+
7
+ Three of the four types of influenza viruses affect humans: Type A, Type B, and Type C.[2][7] Type D has not been known to infect humans, but is believed to have the potential to do so.[7][8] Usually, the virus is spread through the air from coughs or sneezes.[1] This is believed to occur mostly over relatively short distances.[9] It can also be spread by touching surfaces contaminated by the virus and then touching the eyes, nose, or mouth.[5][9][10] A person may be infectious to others both before and during the time they are showing symptoms.[5] The infection may be confirmed by testing the throat, sputum, or nose for the virus.[2] A number of rapid tests are available; however, people may still have the infection even if the results are negative.[2] A type of polymerase chain reaction that detects the virus's RNA is more accurate.[2]
8
+
9
+ Frequent hand washing reduces the risk of viral spread, as does wearing a surgical mask.[3] Yearly vaccinations against influenza are recommended by the World Health Organization (WHO) for those at high risk,[1] and by the Centers for Disease Control and Prevention (CDC) for those six months of age and older.[11] The vaccine is usually effective against three or four types of influenza.[1] It is usually well tolerated.[1] A vaccine made for one year may not be useful in the following year, since the virus evolves rapidly.[1] Antiviral medications such as the neuraminidase inhibitor oseltamivir, among others, have been used to treat influenza.[1] The benefit of antiviral medications in those who are otherwise healthy does not appear to be greater than their risks.[12] No benefit has been found in those with other health problems.[12][13]
10
+
11
+ Influenza spreads around the world in yearly outbreaks, resulting in about three to five million cases of severe illness and about 290,000 to 650,000 deaths.[1][4] About 20% of unvaccinated children and 10% of unvaccinated adults are infected each year.[14] In the northern and southern parts of the world, outbreaks occur mainly in the winter, while around the equator, outbreaks may occur at any time of the year.[1] Death occurs mostly in high risk groups—the young, the old, and those with other health problems.[1] Larger outbreaks known as pandemics are less frequent.[2] In the 20th century, three influenza pandemics occurred: Spanish influenza in 1918 (17–100 million deaths), Asian influenza in 1957 (two million deaths), and Hong Kong influenza in 1968 (one million deaths).[15][16][17] The World Health Organization declared an outbreak of a new type of influenza A/H1N1 to be a pandemic in June 2009.[18] Influenza may also affect other animals, including pigs, horses, and birds.[19]
12
+
13
+
14
+
15
+
16
+
17
+ Approximately 33% of people with influenza are asymptomatic.[23][24]
18
+
19
+ Symptoms of influenza can start quite suddenly one to two days after infection. Usually the first symptoms are chills and body aches, with fever also common early in the infection, with body temperatures ranging from 38 to 39 °C (approximately 100 to 103 °F).[25] Many people are so ill that they are confined to bed for several days, with aches and pains throughout their bodies, which are worse in their backs and legs.[26]
20
+
21
+ It can be difficult to distinguish between the common cold and influenza in the early stages of these infections.[31] Influenza symptoms combine features of the common cold and pneumonia with body ache, headache, and fatigue. Diarrhea is not usually a symptom of influenza in adults,[20] although it has been seen in some human cases of the H5N1 "bird flu"[32] and can be a symptom in children.[28] The symptoms most reliably seen in influenza are shown in the adjacent table.[20]
22
+
23
+ The specific combination of fever and cough has been found to be the best predictor; diagnostic accuracy increases with a body temperature above 38 °C (100.4 °F).[33] Two decision analysis studies[34][35] suggest that during local outbreaks of influenza, the prevalence of influenza among patients with flu-like symptoms will be over 70%.[35] Even in the absence of a local outbreak, diagnosis may be justified in the elderly during the influenza season as long as the prevalence is over 15%.[35]
24
+
25
+ The United States Centers for Disease Control and Prevention (CDC) maintains an up-to-date summary of available laboratory tests.[36] According to the CDC, rapid diagnostic tests have a sensitivity of 50–75% and specificity of 90–95% when compared with viral culture.[37]
26
+
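+ To make the relationship between test accuracy and prevalence concrete, here is a minimal sketch in Python. It is an illustration, not part of the source: the sensitivity and specificity are midpoints of the CDC ranges above, the two prevalence values (outbreak versus off-season) are assumptions drawn from the preceding paragraphs, and the function name predictive_values is hypothetical.
+
+ # Positive and negative predictive values via Bayes' theorem.
+ def predictive_values(sensitivity, specificity, prevalence):
+     true_pos = sensitivity * prevalence
+     false_pos = (1 - specificity) * (1 - prevalence)
+     true_neg = specificity * (1 - prevalence)
+     false_neg = (1 - sensitivity) * prevalence
+     ppv = true_pos / (true_pos + false_pos)   # P(flu | positive test)
+     npv = true_neg / (true_neg + false_neg)   # P(no flu | negative test)
+     return ppv, npv
+
+ # Midpoints of the CDC ranges: sensitivity 50-75%, specificity 90-95%.
+ for prevalence in (0.70, 0.15):   # assumed outbreak vs. off-season prevalence
+     ppv, npv = predictive_values(0.625, 0.925, prevalence)
+     print(f"prevalence {prevalence:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")
+ # prevalence 70%: PPV ~95%, NPV ~51%
+ # prevalence 15%: PPV ~60%, NPV ~93%
+
+ The low negative predictive value during an outbreak illustrates the point above: a negative rapid test does not rule out infection when influenza is circulating widely.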
27
+ Occasionally, influenza can cause severe illness including primary viral pneumonia or secondary bacterial pneumonia.[38][39] The obvious symptom is trouble breathing. In addition, if a child (or presumably an adult) seems to be getting better and then relapses with a high fever, that is a danger sign since this relapse can be bacterial pneumonia.[40]
28
+
29
+ Sometimes, influenza may have abnormal presentations, like confusion in the elderly and a sepsis-like syndrome in the young.[41]
30
+
31
+ In virus classification, influenza viruses are RNA viruses that make up four of the seven genera of the family Orthomyxoviridae: Influenza virus A, Influenza virus B, Influenza virus C, and Influenza virus D.[43]
32
+
33
+ These viruses are only distantly related to the human parainfluenza viruses, which are RNA viruses belonging to the paramyxovirus family that are a common cause of respiratory infections in children such as croup,[44] but can also cause a disease similar to influenza in adults.[45]
34
+
35
+ The fourth genus of influenza viruses – Influenza D – was identified in 2016.[46][47][48][49][50][51][52] The type species for this genus is Influenza D virus, which was first isolated in 2011.[8]
36
+
37
+ This genus has one species, influenza A virus. Wild aquatic birds are the natural hosts for a large variety of influenza A.[53] Occasionally, viruses are transmitted to other species and may then cause devastating outbreaks in domestic poultry or give rise to human influenza pandemics.[53] The influenza A virus can be subdivided into different serotypes based on the antibody response to these viruses.[54] The serotypes that have been confirmed in humans include H1N1 (which caused the 1918 "Spanish flu" and the 2009 pandemic), H2N2, H3N2, H5N1, H7N7, H1N2, H9N2, H7N2, H7N3, H10N7, and H7N9.
38
+
39
+ This genus has one species, influenza B virus. Influenza B almost exclusively infects humans[54] and is less common than influenza A. The only other animals known to be susceptible to influenza B infection are seals[60] and ferrets.[61] This type of influenza mutates at a rate 2–3 times slower than type A[62] and consequently is less genetically diverse, with only one influenza B serotype.[54] As a result of this lack of antigenic diversity, a degree of immunity to influenza B is usually acquired at an early age. However, influenza B mutates enough that lasting immunity is not possible.[63] This reduced rate of antigenic change, combined with its limited host range (inhibiting cross species antigenic shift), ensures that pandemics of influenza B do not occur.[64]
40
+
41
+ This genus has one species, influenza C virus, which infects humans, dogs and pigs, sometimes causing both severe illness and local epidemics.[65][66] However, influenza C is less common than the other types and usually only causes mild disease in children.[67][68]
42
+
43
+ This genus has only one species, influenza D virus, which infects pigs and cattle. The virus has the potential to infect humans, although no such cases have been observed.[8]
44
+
45
+ Influenzaviruses A, B, C, and D are very similar in overall structure.[8][69][70] The virus particle (also called the virion) is 80–120 nanometres in diameter; the smallest virions adopt an elliptical shape.[71] The length of each particle varies considerably, because influenza is pleomorphic, and can exceed many tens of micrometres, producing filamentous virions.[72] Despite these varied shapes, the viral particles of all influenza viruses are similar in composition.[73] They are made of a viral envelope containing the glycoproteins hemagglutinin and neuraminidase wrapped around a central core. The central core contains the viral RNA genome and other viral proteins that package and protect this RNA. The RNA is usually single-stranded, but in special cases it is double-stranded.[74] Unusually for a virus, its genome is not a single piece of nucleic acid; instead, it contains seven or eight pieces of segmented negative-sense RNA, each piece of RNA containing either one or two genes, which code for a gene product (protein).[73] For example, the influenza A genome contains 11 genes on eight pieces of RNA, encoding for 11 proteins: hemagglutinin (HA), neuraminidase (NA), nucleoprotein (NP), M1 (matrix 1 protein), M2, NS1 (non-structural protein 1), NS2 (also called NEP, nuclear export protein), PA, PB1 (polymerase basic 1), PB1-F2 and PB2.[75]
46
+
47
+ Hemagglutinin (HA) and neuraminidase (NA) are the two large glycoproteins on the outside of the viral particles. HA is a lectin that mediates binding of the virus to target cells and entry of the viral genome into the target cell, while NA is involved in the release of progeny virus from infected cells, by cleaving sugars that bind the mature viral particles.[76] Thus, these proteins are targets for antiviral medications.[77] Furthermore, they are antigens to which antibodies can be raised. Influenza A viruses are classified into subtypes based on antibody responses to HA and NA. These different types of HA and NA form the basis of the H and N distinctions in, for example, H5N1.[78] There are 18 known H subtypes and 11 known N subtypes, but only H1, H2, and H3, and N1 and N2 are commonly found in humans.[79][80]
48
+
49
+ Viruses can replicate only in living cells.[81] Influenza infection and replication is a multi-step process: First, the virus has to bind to and enter the cell, then deliver its genome to a site where it can produce new copies of viral proteins and RNA, assemble these components into new viral particles, and, last, exit the host cell.[73]
50
+
51
+ Influenza viruses bind through hemagglutinin onto sialic acid sugars on the surfaces of epithelial cells, typically in the nose, throat, and lungs of mammals, and intestines of birds (Stage 1 in infection figure).[82] After the hemagglutinin is cleaved by a protease, the cell imports the virus by endocytosis.[83]
52
+
53
+ The intracellular details are still being elucidated. It is known that virions converge to the microtubule organizing center, interact with acidic endosomes and finally enter the target endosomes for genome release.[84]
54
+
55
+ Once inside the cell, the acidic conditions in the endosome cause two events to happen: First, part of the hemagglutinin protein fuses the viral envelope with the vacuole's membrane, then the M2 ion channel allows protons to move through the viral envelope and acidify the core of the virus, which causes the core to disassemble and release the viral RNA and core proteins.[73] The viral RNA (vRNA) molecules, accessory proteins and RNA-dependent RNA polymerase are then released into the cytoplasm (Stage 2).[85] The M2 ion channel is blocked by amantadine drugs, preventing infection.[86]
56
+
57
+ These core proteins and vRNA form a complex that is transported into the cell nucleus, where the RNA-dependent RNA polymerase begins transcribing complementary positive-sense vRNA (Steps 3a and b).[87] The vRNA either is exported into the cytoplasm and translated (step 4) or remains in the nucleus. Newly synthesized viral proteins are either secreted through the Golgi apparatus onto the cell surface (in the case of neuraminidase and hemagglutinin, step 5b) or transported back into the nucleus to bind vRNA and form new viral genome particles (step 5a). Other viral proteins have multiple actions in the host cell, including degrading cellular mRNA and using the released nucleotides for vRNA synthesis and also inhibiting translation of host-cell mRNAs.[88]
58
+
59
+ Negative-sense vRNAs that form the genomes of future viruses, RNA-dependent RNA polymerase, and other viral proteins are assembled into a virion. Hemagglutinin and neuraminidase molecules cluster into a bulge in the cell membrane. The vRNA and viral core proteins leave the nucleus and enter this membrane protrusion (step 6). The mature virus buds off from the cell in a sphere of host phospholipid membrane, acquiring hemagglutinin and neuraminidase with this membrane coat (step 7).[89] As before, the viruses adhere to the cell through hemagglutinin; the mature viruses detach once their neuraminidase has cleaved sialic acid residues from the host cell.[82] After the release of new influenza viruses, the host cell dies.
60
+
61
+ Because of the absence of RNA proofreading enzymes, the RNA-dependent RNA polymerase that copies the viral genome makes an error roughly every 10 thousand nucleotides, which is the approximate length of the influenza vRNA. Hence, the majority of newly manufactured influenza viruses are mutants; this causes antigenic drift, which is a slow change in the antigens on the viral surface over time.[90] The separation of the genome into eight separate segments of vRNA allows mixing or reassortment of vRNAs if more than one type of influenza virus infects a single cell. The resulting rapid change in viral genetics produces antigenic shifts, which are sudden changes from one antigen to another. These sudden large changes allow the virus to infect new host species and quickly overcome protective immunity.[78] This is important in the emergence of pandemics, as discussed below in the section on epidemiology. Also, when two or more viruses infect a cell, genetic variation may be generated by homologous recombination.[91][92] Homologous recombination can arise during viral genome replication by the RNA polymerase switching from one template to another, a process known as copy choice.[92]
62
+
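+ A short worked example (Python) of the arithmetic behind this claim, under the stated error rate and an assumed total genome length of roughly 13.5 kb for influenza A: a Poisson model then gives the fraction of genome copies carrying at least one mutation.
+
+ import math
+
+ error_rate = 1e-4        # copying errors per nucleotide (from the text)
+ genome_length = 13_500   # assumed total influenza A genome length (~13.5 kb)
+
+ expected_mutations = error_rate * genome_length       # ~1.35 per genome copy
+ p_at_least_one = 1 - math.exp(-expected_mutations)    # Poisson P(k >= 1)
+ print(f"expected mutations per copy: {expected_mutations:.2f}")
+ print(f"copies with at least one mutation: {p_at_least_one:.0%}")  # ~74%
+
+ With more than one mutation expected per copy on average, most newly produced genomes differ from their template, which is why the text describes the majority of new virions as mutants.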
63
+ When an infected person sneezes or coughs, more than half a million virus particles can be spread to those close by.[93] In otherwise healthy adults, influenza virus shedding (the time during which a person might be infectious to another person) increases sharply one-half to one day after infection, peaks on day 2, and persists for an average total duration of 5 days – but can persist as long as 9 days.[23] In those who develop symptoms from experimental infection (only 67% of healthy experimentally infected individuals), symptoms and viral shedding show a similar pattern, but with viral shedding preceding illness by one day.[23] Children are much more infectious than adults and shed virus from just before they develop symptoms until two weeks after infection.[94] In immunocompromised people, viral shedding can continue for longer than two weeks.[95]
64
+
65
+ Influenza can be spread in three main ways:[96][97] by direct transmission (when an infected person sneezes mucus directly into the eyes, nose or mouth of another person); the airborne route (when someone inhales the aerosols produced by an infected person coughing, sneezing or spitting) and through hand-to-eye, hand-to-nose, or hand-to-mouth transmission, either from contaminated surfaces or from direct personal contact such as a handshake. The relative importance of these three modes of transmission is unclear, and they may all contribute to the spread of the virus.[9] In the airborne route, the droplets that are small enough for people to inhale are 0.5 to 5 μm in diameter and inhaling just one droplet might be enough to cause an infection.[96] Although a single sneeze releases up to 40,000 droplets,[98] most of these droplets are quite large and will quickly settle out of the air.[96] How long influenza survives in airborne droplets seems to be influenced by the levels of humidity and UV radiation, with low humidity and a lack of sunlight in winter aiding its survival;[96] ideal conditions can allow it to live for an hour in the atmosphere.[99]
66
+
67
+ As the influenza virus can persist outside of the body, it can also be transmitted by contaminated surfaces such as banknotes,[100] doorknobs, light switches and other household items.[26] The length of time the virus will persist on a surface varies, with the virus surviving for one to two days on hard, non-porous surfaces such as plastic or metal, for about fifteen minutes on dry paper tissues, and only five minutes on skin.[101] However, if the virus is present in mucus, this can protect it for longer periods (up to 17 days on banknotes).[96][100] Avian influenza viruses can survive indefinitely when frozen.[102] They are inactivated by heating to 56 °C (133 °F) for a minimum of 60 minutes, as well as by acids (at pH <2).[102]
68
+
69
+ The mechanisms by which influenza infection causes symptoms in humans have been studied intensively. One of the mechanisms is believed to be the inhibition of adrenocorticotropic hormone (ACTH) resulting in lowered cortisol levels.[103]
70
+ Knowing which genes are carried by a particular strain can help predict how well it will infect humans and how severe this infection will be (that is, predict the strain's pathophysiology).[66][104]
71
+
72
+ For instance, part of the process that allows influenza viruses to invade cells is the cleavage of the viral hemagglutinin protein by any one of several human proteases.[83] In mild and avirulent viruses, the structure of the hemagglutinin means that it can only be cleaved by proteases found in the throat and lungs, so these viruses cannot infect other tissues. However, in highly virulent strains, such as H5N1, the hemagglutinin can be cleaved by a wide variety of proteases, allowing the virus to spread throughout the body.[104]
73
+
74
+ The viral hemagglutinin protein is responsible for determining both which species a strain can infect and where in the human respiratory tract a strain of influenza will bind.[105] Strains that are easily transmitted between people have hemagglutinin proteins that bind to receptors in the upper part of the respiratory tract, such as in the nose, throat and mouth. In contrast, the highly lethal H5N1 strain binds to receptors that are mostly found deep in the lungs.[106] This difference in the site of infection may be part of the reason why the H5N1 strain causes severe viral pneumonia in the lungs, but is not easily transmitted by people coughing and sneezing.[107][108]
75
+
76
+ Common symptoms of the flu such as fever, headaches, and fatigue are the result of the huge amounts of proinflammatory cytokines and chemokines (such as interferon or tumor necrosis factor) produced from influenza-infected cells.[31][109] In contrast to the rhinovirus that causes the common cold, influenza does cause tissue damage, so symptoms are not entirely due to the inflammatory response.[110] This massive immune response might produce a life-threatening cytokine storm. This effect has been proposed to be the cause of the unusual lethality of both the H5N1 avian influenza,[111] and the 1918 pandemic strain.[112][113] However, another possibility is that these large amounts of cytokines are just a result of the massive levels of viral replication produced by these strains, and the immune response does not itself contribute to the disease.[114] Influenza appears to trigger programmed cell death (apoptosis).[115]
77
+
78
+ The influenza vaccine is recommended by the World Health Organization (WHO) for high-risk groups, such as pregnant women, children aged less than five years, the elderly, health care workers, and people who have chronic illnesses such as HIV/AIDS, asthma, diabetes, or heart disease, or who are immunocompromised, among others.[116][117] The United States Centers for Disease Control and Prevention (CDC) recommends the influenza vaccine for those aged six months or older who do not have contraindications.[118][11] In healthy adults it is modestly effective in decreasing the amount of influenza-like symptoms in a population.[119] In healthy children over the age of two years, the vaccine reduces the chances of getting influenza by around two-thirds, while it has not been well studied in children under two years.[120] In those with chronic obstructive pulmonary disease, vaccination reduces exacerbations,[121] but it is not clear whether it reduces asthma exacerbations.[122] Evidence supports a lower rate of influenza-like illness in many immunocompromised groups, such as those with HIV/AIDS or cancer and organ-transplant recipients.[123] In those at high risk, immunization may reduce the risk of heart disease.[124] Whether immunizing health care workers affects patient outcomes is controversial, with some reviews finding insufficient evidence[125][126] and others finding tentative evidence.[127][128]
79
+
80
+ Due to the high mutation rate of the virus, a particular influenza vaccine usually confers protection for no more than a few years. Each year, the World Health Organization predicts which strains of the virus are most likely to be circulating in the next year (see Historical annual reformulations of the influenza vaccine), allowing pharmaceutical companies to develop vaccines that will provide the best immunity against these strains.[129] The vaccine is reformulated each season for a few specific flu strains but does not include all the strains active in the world during that season. It takes about six months for the manufacturers to formulate and produce the millions of doses required to deal with the seasonal epidemics; occasionally, a new or overlooked strain becomes prominent during that time.[130] It is also possible to get infected just before vaccination and get sick with the strain that the vaccine is supposed to prevent, as the vaccine takes about two weeks to become effective.[131]
81
+ Vaccines can cause the immune system to react as if the body were actually being infected, and general infection symptoms (many cold and flu symptoms are just general infection symptoms) can appear, though these symptoms are usually not as severe or long-lasting as influenza. The most dangerous adverse effect is a severe allergic reaction to either the virus material itself or residues from the hen eggs used to grow the influenza; however, these reactions are extremely rare.[132]
82
+
83
+ A 2018 Cochrane review of children in good general health found that the live immunization seemed to lower the risk of getting influenza for the season from 18% to 4%. The inactivated vaccine seemed to lower the risk of getting flu for the season from 30% to 11%. Not enough data was available to draw definite conclusions about serious complications such as pneumonia or hospitalization.[120]
84
+
85
+ For healthy adults, a 2018 Cochrane review showed that vaccines reduced the incidence of lab-confirmed influenza from 2.3% to 0.9%, which constitutes a reduction of risk of approximately 60%. However, for influenza-like illness, defined as the same symptoms of cough, fever, headache, runny nose, and bodily aches and pains, the vaccine reduced the risk from 21.5% to 18.1%. This constitutes a much more modest reduction of risk of approximately 16%. The difference is most probably explained by the fact that over 200 viruses cause the same or similar symptoms as the flu virus.[119] Another review looked at the effect of short- and long-term exercise before vaccination; however, no benefits or harms were recorded.[133]
86
+
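+ A minimal sketch (Python) showing how the two relative risk reductions quoted above follow from the absolute rates in the review; the helper name relative_risk_reduction is hypothetical.
+
+ def relative_risk_reduction(control_rate, vaccinated_rate):
+     """RRR = (risk without vaccine - risk with vaccine) / risk without vaccine."""
+     return (control_rate - vaccinated_rate) / control_rate
+
+ # Lab-confirmed influenza: 2.3% unvaccinated vs. 0.9% vaccinated
+ print(f"{relative_risk_reduction(0.023, 0.009):.0%}")  # ~61%, i.e. roughly 60%
+ # Influenza-like illness: 21.5% vs. 18.1%
+ print(f"{relative_risk_reduction(0.215, 0.181):.0%}")  # ~16%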
87
+ The cost-effectiveness of seasonal influenza vaccination has been widely evaluated for different groups and in different settings.[134] It has generally been found to be a cost-effective intervention, especially in children[135] and the elderly;[136] however, the results of economic evaluations of influenza vaccination have often been found to depend on key assumptions.[137][138]
88
+
89
+ [Figure: the main ways that influenza spreads]
90
+
91
+ When vaccines and antiviral medications are limited, non-pharmaceutical interventions are essential to reduce transmission and spread. The lack of controlled studies and rigorous evidence of the effectiveness of some measures has hampered planning decisions and recommendations. Nevertheless, strategies endorsed by experts for all phases of flu outbreaks include hand and respiratory hygiene, self-isolation by symptomatic individuals and the use of face masks by them and their caregivers, surface disinfection, rapid testing and diagnosis, and contact tracing. In some cases, other forms of social distancing including school closures and travel restrictions are recommended.[139]
92
+
93
+ Reasonably effective ways to reduce the transmission of influenza include good personal health and hygiene habits such as not touching the eyes, nose or mouth;[140] frequent hand washing (with soap and water, or with alcohol-based hand rubs);[141] covering coughs and sneezes with a tissue or sleeve; avoiding close contact with sick people; and staying home when sick. Avoiding spitting is also recommended.[139] Although face masks might help prevent transmission when caring for the sick,[142][143] there is mixed evidence on beneficial effects in the community.[139][144] Smoking raises the risk of contracting influenza, as well as producing more severe disease symptoms.[145][146]
94
+
95
+ Since influenza spreads through both aerosols and contact with contaminated surfaces, surface sanitizing may help prevent some infections.[147] Alcohol is an effective sanitizer against influenza viruses, while quaternary ammonium compounds can be used with alcohol so that the sanitizing effect lasts for longer.[148] In hospitals, quaternary ammonium compounds and bleach are used to sanitize rooms or equipment that have been occupied by people with influenza symptoms.[148] At home, this can be done effectively with a diluted chlorine bleach.[149]
96
+
97
+ Social distancing strategies used during past pandemics, such as quarantines, travel restrictions, and the closing of schools, churches and theaters, have been employed to slow the spread of influenza viruses. Researchers have estimated that such interventions during the 1918 Spanish flu pandemic in the US reduced the peak death rate by up to 50%, and the overall mortality by about 10–30%, in areas where multiple interventions were implemented. The more moderate effect on total deaths was attributed to the measures being employed too late, or lifted too early, most after six weeks or less.[150][151]
98
+
99
+ For typical flu outbreaks, routine cancellation of large gatherings or mandatory travel restrictions have received little agreement, particularly as they may be disruptive and unpopular. School closures have been found by most empirical studies to reduce community spread, but some findings have been contradictory. Recommendations for these community restrictions are usually on a case-by-case basis.[139]
100
+
101
+ There are a number of rapid tests for the flu. One is the rapid molecular assay, in which an upper respiratory tract specimen (mucus) is collected using a nasal swab or a nasopharyngeal swab.[152] Testing should be done within 3–4 days of symptom onset, as upper respiratory viral shedding declines sharply after that.[41]
102
+
103
+ People with the flu are advised to get plenty of rest, drink plenty of liquids, avoid using alcohol and tobacco and, if necessary, take medications such as acetaminophen (paracetamol) to relieve the fever and muscle aches associated with the flu.[153][154] In contrast, there is not enough evidence to support corticosteroids as additional therapy for influenza.[155] It is advised to avoid close contact with others to prevent spread of infection.[153][154] Children and teenagers with flu symptoms (particularly fever) should avoid taking aspirin during an influenza infection (especially influenza type B), because doing so can lead to Reye's syndrome, a rare but potentially fatal disease of the liver.[156] Since influenza is caused by a virus, antibiotics have no effect on the infection, unless prescribed for secondary infections such as bacterial pneumonia. Antiviral medication may be effective if given early (within 48 hours of first symptoms), but some strains of influenza can show resistance to the standard antiviral medications, and there is concern about the quality of the research.[157] High-risk individuals such as young children, pregnant women, the elderly, and those with compromised immune systems should visit the doctor for antiviral medications. Those with the emergency warning signs should visit the emergency room at once.[42]
104
+
105
+ The two classes of antiviral medications used against influenza are neuraminidase inhibitors (oseltamivir, zanamivir, laninamivir and peramivir) and M2 protein inhibitors (adamantane derivatives).[158][159][160] In Russia, umifenovir is sold for treatment of influenza[161] and in the first quarter of 2020 had a 16 percent share in the antiviral market.[162]
106
+
107
+ Overall the benefits of neuraminidase inhibitors in those who are otherwise healthy do not appear to be greater than the risks.[12] There does not appear to be any benefit in those with other health problems.[12] In those believed to have the flu, they decreased the length of time symptoms were present by slightly less than a day but did not appear to affect the risk of complications such as needing hospitalization or pneumonia.[13] Increasingly prevalent resistance to neuraminidase inhibitors has led researchers to seek alternative antiviral medications with different mechanisms of action.[163]
108
+
109
+ The antiviral medications amantadine and rimantadine inhibit a viral ion channel (M2 protein), thus inhibiting replication of the influenza A virus.[86] These medications are sometimes effective against influenza A if given early in the infection but are ineffective against influenza B viruses, which lack the M2 drug target.[164] Measured resistance to amantadine and rimantadine in American isolates of H3N2 has increased to 91% in 2005.[165] This high level of resistance may be due to the easy availability of amantadines as part of over-the-counter cold remedies in countries such as China and Russia,[166] and their use to prevent outbreaks of influenza in farmed poultry.[167][168] The CDC recommended against using M2 inhibitors during the 2005–06 influenza season due to high levels of drug resistance.[169]
110
+
111
+ Influenza's effects are much more severe and last longer than those of the common cold. Most people will recover completely in about one to two weeks, but others will develop life-threatening complications (such as pneumonia). Thus, influenza can be deadly, especially for the weak, young and old, those with compromised immune systems, or the chronically ill.[78] People with a weak immune system, such as people with advanced HIV infection or transplant recipients (whose immune systems are medically suppressed to prevent transplant organ rejection), suffer from particularly severe disease.[170] Pregnant women and young children are also at a high risk for complications.[171]
112
+
113
+ The flu can worsen chronic health problems. People with emphysema, chronic bronchitis or asthma may experience shortness of breath while they have the flu, and influenza may cause worsening of coronary heart disease or congestive heart failure.[172] Smoking is another risk factor associated with more serious disease and increased mortality from influenza.[145]
114
+
115
+ Even healthy people can be affected, and serious problems from influenza can happen at any age. People over 65 years old, pregnant women, very young children and people of any age with chronic medical conditions are more likely to get complications from influenza, such as pneumonia, bronchitis, sinus, and ear infections.[173]
116
+
117
+ In some cases, an autoimmune response to an influenza infection may contribute to the development of Guillain–Barré syndrome.[174] However, as many other infections can increase the risk of this disease, influenza may only be an important cause during epidemics.[174][175] This syndrome has been believed to also be a rare side effect of influenza vaccines. One review gives an incidence of about one case per million vaccinations.[176] Getting infected by influenza itself increases both the risk of death (up to 1 in 10,000) and increases the risk of developing GBS to a much higher level than the highest level of suspected vaccine involvement (approx. 10 times higher by recent estimates).[177][174]
118
+
119
+ According to the Centers for Disease Control and Prevention (CDC), "Children of any age with neurologic conditions are more likely than other children to become very sick if they get the flu. Flu complications may vary and for some children, can include pneumonia and even death."[178]
120
+
121
+ Neurological conditions can include:
122
+
123
+ These conditions can impair coughing, swallowing, clearing the airways, and, in the worst cases, breathing, and can therefore worsen flu symptoms.[178]
124
+
125
+ Influenza reaches peak prevalence in winter, and because the Northern and Southern Hemispheres have winter at different times of the year, there are actually two different flu seasons each year. This is why the World Health Organization (assisted by the National Influenza Centers) makes recommendations for two different vaccine formulations every year; one for the Northern, and one for the Southern Hemisphere.[129]
126
+
127
+ A long-standing puzzle has been why outbreaks of the flu occur seasonally rather than uniformly throughout the year. One possible explanation is that, because people are indoors more often during the winter, they are in close contact more often, and this promotes transmission from person to person. Increased travel due to the Northern Hemisphere winter holiday season may also play a role.[179] Another factor is that cold temperatures lead to drier air, which may dehydrate mucus particles. Dry particles are lighter and can thus remain airborne for a longer period. The virus also survives longer on surfaces at colder temperatures and aerosol transmission of the virus is highest in cold environments (less than 5 °C) with low relative humidity.[180] The lower air humidity in winter seems to be the main cause of seasonal influenza transmission in temperate regions.[181][182]
128
+
129
+ However, seasonal changes in infection rates also occur in tropical regions, and in some countries these peaks of infection are seen mainly during the rainy season.[183] Seasonal changes in contact rates from school terms, which are a major factor in other childhood diseases such as measles and pertussis, may also play a role in the flu. A combination of these small seasonal effects may be amplified by dynamical resonance with the endogenous disease cycles.[184] H5N1 exhibits seasonality in both humans and birds.[185][186]
130
+
131
+ An alternative hypothesis to explain seasonality in influenza infections is an effect of vitamin D levels on immunity to the virus.[187] This idea was first proposed by Robert Edgar Hope-Simpson in 1981.[188] He proposed that the cause of influenza epidemics during winter may be connected to seasonal fluctuations of vitamin D, which is produced in the skin under the influence of solar (or artificial) UV radiation. This could explain why influenza occurs mostly in winter and during the tropical rainy season, when people stay indoors, away from the sun, and their vitamin D levels fall.
132
+
133
+ Every year about 290,000 to 650,000 people die due to influenza globally, with an average of 389,000.[190] In the developed world most of those who die are over the age of 65.[1] In the developing world the effects are less clear; however, it appears that children are affected to a greater degree.[1]
134
+
135
+ Although the number of cases of influenza can vary widely between years, approximately 36,000 deaths and more than 200,000 hospitalizations are directly associated with influenza a year in the United States.[191][192] One method of calculating influenza mortality produced an estimate of 41,400 average deaths per year in the United States between 1979 and 2001.[193] Different methods in 2010 by the Centers for Disease Control and Prevention (CDC) reported a range from a low of about 3,300 deaths to a high of 49,000 per year.[194]
136
+
137
+ As influenza is caused by a variety of species and strains of viruses, in any given year some strains can die out while others create epidemics, while yet another strain can cause a pandemic. Typically, in a year's normal two flu seasons (one per hemisphere), there are between three and five million cases of severe illness,[4][1][195] which by some definitions is a yearly influenza epidemic.[1]
138
+
139
+ Roughly three times per century, a pandemic occurs, which infects a large proportion of the world's population and can kill tens of millions of people (see pandemics section). In 2006, a study estimated that if a strain with similar virulence to the 1918 influenza had emerged that year, it could have killed between 50 and 80 million people.[196]
140
+
141
+ New influenza viruses are constantly evolving by mutation or by reassortment.[54] Mutations can cause small changes in the hemagglutinin and neuraminidase antigens on the surface of the virus. This is called antigenic drift, which slowly creates an increasing variety of strains until one evolves that can infect people who are immune to the pre-existing strains. This new variant then replaces the older strains as it rapidly sweeps through the human population, often causing an epidemic.[197] However, since the strains produced by drift will still be reasonably similar to the older strains, some people will still be immune to them. In contrast, when influenza viruses reassort, they acquire completely new antigens—for example by reassortment between avian strains and human strains; this is called antigenic shift. If a human influenza virus is produced that has entirely new antigens, everybody will be susceptible, and the novel influenza will spread uncontrollably, causing a pandemic.[198] In contrast to this model of pandemics based on antigenic drift and shift, an alternative approach has been proposed where the periodic pandemics are produced by interactions of a fixed set of viral strains with a human population with a constantly changing set of immunities to different viral strains.[199]
142
+
143
+ From a public health point of view, flu epidemics spread rapidly and are very difficult to control. Most influenza virus strains are not very infectious, and each infected individual will only go on to infect one or two other individuals (the basic reproduction number for influenza is generally around 1.4). However, the generation time for influenza is extremely short: the time from a person becoming infected to when they infect the next person is only two days. The short generation time means that influenza epidemics generally peak at around two months and burn out after three months; the decision to intervene in an influenza epidemic therefore has to be taken early, and it is often made on the basis of incomplete data. Another problem is that individuals become infectious before they become symptomatic, which means that putting people in quarantine after they become ill is not an effective public health intervention.[200] For the average person, viral shedding tends to peak on day two, whereas symptoms peak on day three.[23]
144
+
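+ A minimal sketch (Python) of the early dynamics described above, assuming cases simply multiply by R0 once per generation. It deliberately ignores the depletion of susceptible people, which is what eventually caps a real epidemic at its roughly two-month peak.
+
+ R0 = 1.4               # basic reproduction number (from the text)
+ generation_days = 2    # time between successive infections (from the text)
+
+ cases = 1.0
+ for day in range(0, 61, 10):
+     print(f"day {day:2d}: ~{cases:,.0f} cases in the current generation")
+     cases *= R0 ** (10 / generation_days)   # five generations per 10 days
+ # Even with a modest R0, two months of unchecked spread yields tens of
+ # thousands of cases, which is why interventions must be decided early.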
145
+ The word influenza comes from the Italian word for "influence" and refers to the supposed cause of the disease: initially, illness was ascribed to unfavorable astrological influences. The word was introduced into English in the mid-eighteenth century during a pan-European epidemic.[201]
146
+ Archaic terms for influenza include epidemic catarrh, la grippe (from the French, first used by Molyneaux in 1694; also used in German),[202] sweating sickness, and Spanish fever (particularly for the 1918 flu pandemic strain).[203]
147
+
148
+ An overall lack of data before 1500 precludes a meaningful search for influenza outbreaks in the more distant past.[205] Possibly the first influenza pandemic occurred around 6000 BC in China.[205] The symptoms of human influenza were clearly described by Hippocrates roughly 2,400 years ago.[206][207] Although the virus seems to have caused epidemics throughout human history, historical data on influenza are difficult to interpret, because the symptoms can be similar to those of other respiratory diseases.[208][202] The disease may have spread from Europe to the Americas as early as the European colonization of the Americas, since almost the entire indigenous population of the Antilles was killed by an epidemic resembling influenza that broke out in 1493, after the arrival of Christopher Columbus.[209][210]
149
+
150
+ The first convincing record of an influenza pandemic was a minor pandemic chronicled in 1510, which began in East Asia before spreading to North Africa and then Europe. During this pandemic, influenza killed about 1% of its victims.[211][212] The first pandemic of influenza to be reliably recorded as spreading worldwide was the 1557 influenza pandemic,[213][214][215][216] in which a recurring wave likely killed Queen Mary I of England and the Archbishop of Canterbury within 12 hours of each other.[217][218] One of the best-chronicled pandemics of influenza in the 16th century occurred in 1580, beginning in East Asia and spreading to Europe through Africa, Russia, and the Spanish and Ottoman Empires. In Rome, over 8,000 people were killed, and several Spanish cities saw large-scale deaths; the fatalities included the Queen of Spain, Anna of Austria. Pandemics continued sporadically throughout the 17th and 18th centuries, with the pandemic of 1830–1833 being particularly widespread; it infected approximately a quarter of the people exposed.[202]
151
+
152
+ The most famous and lethal outbreak was the 1918 flu pandemic (Spanish flu; type A influenza, H1N1 subtype), which lasted into 1920. It is not known exactly how many it killed, but estimates range from 17 million to 100 million people.[15][204][219][220] This pandemic has been described as "the greatest medical holocaust in history" and may have killed as many people as the Black Death.[202] This huge death toll was caused by an extremely high infection rate of up to 50% and the extreme severity of the symptoms, suspected to be caused by cytokine storms.[220] Symptoms in 1918 were so unusual that initially influenza was misdiagnosed as dengue, cholera, or typhoid. One observer wrote, "One of the most striking of the complications was hemorrhage from mucous membranes, especially from the nose, stomach, and intestine. Bleeding from the ears and petechial hemorrhages in the skin also occurred."[219] The majority of deaths were from bacterial pneumonia, a secondary infection caused by influenza, but the virus also killed people directly, causing massive hemorrhages and edema in the lung.[221]
+
+ The 1918 flu pandemic was truly global, spreading even to the Arctic and remote Pacific islands. The unusually severe disease killed between two and twenty percent of those infected, as opposed to the more usual flu epidemic mortality rate of 0.1%.[204][219] Another unusual feature of this pandemic was that it mostly killed young adults, with 99% of pandemic influenza deaths occurring in people under 65, and more than half in young adults 20 to 40 years old.[222] This is unusual since influenza is normally most deadly to the very young (under age 2) and the very old (over age 70). The total mortality of the 1918–1919 pandemic is not known, but it is estimated that 2.5% to 5% of the world's population was killed. As many as 25 million may have been killed in the first 25 weeks; in contrast, HIV/AIDS has killed 25 million in its first 25 years.[219]
+
+ Later flu pandemics were not so devastating. They included the 1957 Asian flu (type A, H2N2 strain) and the 1968 Hong Kong flu (type A, H3N2 strain), but even these smaller outbreaks killed millions of people. In later pandemics antibiotics were available to control secondary infections and this may have helped reduce mortality compared to the Spanish flu of 1918.[204]
+
+ From 1892, the cause of influenza was incorrectly assumed to be bacterial, after R. F. J. Pfeiffer discovered Haemophilus influenzae and suggested it as the origin of the disease.[246] The first influenza virus to be isolated was from poultry, when in 1901 the agent causing a disease called "fowl plague" was passed through Chamberland filters, which have pores too small for bacteria to pass through.[247] However, the conceptual distinction between viruses and bacteria as different entities was not fully understood for some time, complicating the preventative measures taken during the 1918 influenza pandemic.[246] The etiological cause of influenza, the virus family Orthomyxoviridae, was first discovered in pigs by Richard Shope in 1931.[248] This discovery was shortly followed by the isolation of the virus from humans by a group headed by Patrick Laidlaw at the Medical Research Council of the United Kingdom in 1933.[249] However, it was not until Wendell Stanley first crystallized tobacco mosaic virus in 1935 that the non-cellular nature of viruses was appreciated.
+
+ The first significant step towards preventing influenza was the development in 1944 of a killed-virus vaccine for influenza by Thomas Francis, Jr. This built on work by Australian Frank Macfarlane Burnet, who showed that the virus lost virulence when it was cultured in fertilized hen's eggs.[251] Application of this observation by Francis allowed his group of researchers at the University of Michigan to develop the first influenza vaccine, with support from the U.S. Army.[252] The Army was deeply involved in this research due to its experience of influenza in World War I, when thousands of troops were killed by the virus in a matter of months.[219] In comparison to vaccines, the development of anti-influenza drugs has been slower, with amantadine being licensed in 1966 and, almost thirty years later, the next class of drugs (the neuraminidase inhibitors) being developed.[253]
+
+ Influenza produces direct costs due to lost productivity and associated medical treatment, as well as indirect costs of preventive measures. In the United States, seasonal influenza is estimated to result in a total average annual economic cost of over $11 billion, with direct medical costs estimated to be over $3 billion annually.[254] It has been estimated that a future pandemic could cause hundreds of billions of dollars in direct and indirect costs.[255] However, the economic impacts of past pandemics have not been intensively studied, and some authors have suggested that the Spanish influenza actually had a positive long-term effect on per-capita income growth, despite a large reduction in the working population and severe short-term depressive effects.[256] Other studies have attempted to predict the costs of a pandemic as serious as the 1918 Spanish flu on the U.S. economy, where 30% of all workers became ill, and 2.5% were killed. A 30% sickness rate and a three-week length of illness would decrease the gross domestic product by 5%. Additional costs would come from medical treatment of 18 million to 45 million people, and total economic costs would be approximately $700 billion.[257]
+
+ Preventive costs are also high. Governments worldwide have spent billions of U.S. dollars preparing and planning for a potential H5N1 avian influenza pandemic, with costs associated with purchasing drugs and vaccines as well as developing disaster drills and strategies for improved border controls.[258] On 1 November 2005, United States President George W. Bush unveiled the National Strategy to Safeguard Against the Danger of Pandemic Influenza[255] backed by a request to Congress for $7.1 billion to begin implementing the plan.[259] Internationally, on 18 January 2006, donor nations pledged US$2 billion to combat bird flu at the two-day International Pledging Conference on Avian and Human Influenza held in China.[260][261]
+
+ In an assessment of the 2009 H1N1 pandemic on selected countries in the Southern Hemisphere, data suggest that all countries experienced some time-limited and/or geographically isolated socioeconomic effects and a temporary decrease in tourism most likely due to fear of 2009 H1N1 disease. It is still too early to determine whether the H1N1 pandemic has had any long-term economic effects.[262][needs update]
+
+ Research on influenza includes studies on molecular virology, how the virus produces disease (pathogenesis), host immune responses, viral genomics, and how the virus spreads (epidemiology). These studies help in developing influenza countermeasures; for example, a better understanding of the body's immune system response helps vaccine development, and a detailed picture of how influenza invades cells aids the development of antiviral drugs. One important basic research program is the Influenza Genome Sequencing Project, which was initiated in 2004 to create a library of influenza sequences and help clarify which factors make one strain more lethal than another, which genes most affect immunogenicity, and how the virus evolves over time.[263]
+
+ The sequencing of the influenza genome and recombinant DNA technology may accelerate the generation of new vaccine strains by allowing scientists to substitute new antigens into a previously developed vaccine strain.[264] Growing viruses in cell culture also promises higher yields, less cost, better quality and surge capacity.[265] Research on a universal influenza A vaccine, targeted against the external domain of the transmembrane viral M2 protein (M2e), is being done at the University of Ghent by Walter Fiers, Xavier Saelens and their team[266][267][268] and has now successfully concluded Phase I clinical trials. There has been some research success towards a "universal flu vaccine" that produces antibodies against proteins on the viral coat which mutate less rapidly, and thus a single shot could potentially provide longer-lasting protection.[269][270][271]
+
+ A number of biologics, therapeutic vaccines, and immunobiologics are also being investigated for the treatment of influenza virus infection. Therapeutic biologics are designed to activate the immune response to the virus or its antigens. Typically, biologics do not target metabolic pathways as antiviral drugs do; instead they stimulate immune cells such as lymphocytes, macrophages, and antigen-presenting cells, in an effort to drive an immune response towards a cytotoxic effect against the virus. Animal models of influenza, such as murine influenza, are convenient for testing the effects of prophylactic and therapeutic biologics. For example, lymphocyte T-cell immunomodulator inhibits viral growth in the murine model of influenza.[272]
+
+ Influenza infects many animal species, and transfer of viral strains between species can occur. Birds are thought to be the main animal reservoirs of influenza viruses.[273] Most influenza strains are believed to have originated after humans began their intensive domestication of animals about 10,000 years ago.[274] Sixteen forms of hemagglutinin and nine forms of neuraminidase have been identified. All known subtypes (HxNy) are found in birds, but many subtypes are endemic in humans, dogs, horses, and pigs; populations of camels, ferrets, cats, seals, mink, and whales also show evidence of prior infection or exposure to influenza.[63] Variants of flu virus are sometimes named according to the species the strain is endemic in or adapted to. The main variants named using this convention are: bird flu, human flu, swine flu, horse flu and dog flu. (Cat flu generally refers to feline viral rhinotracheitis or feline calicivirus and not infection from an influenza virus.) In pigs, horses and dogs, influenza symptoms are similar to those in humans, with cough, fever and loss of appetite.[63] The frequency of influenza in animals is not as well studied as that of human infection, but an outbreak of influenza in harbor seals caused approximately 500 seal deaths off the New England coast in 1979–1980.[275] In contrast, outbreaks in pigs are common and do not cause severe mortality.[63] Vaccines have also been developed to protect poultry from avian influenza. These vaccines can be effective against multiple strains and are used either as part of a preventive strategy, or combined with culling in attempts to eradicate outbreaks.[276]
+
+ Flu symptoms in birds are variable and can be unspecific.[277] The symptoms following infection with low-pathogenicity avian influenza may be as mild as ruffled feathers, a small reduction in egg production, or weight loss combined with minor respiratory disease.[278] Since these mild symptoms can make diagnosis in the field difficult, tracking the spread of avian influenza requires laboratory testing of samples from infected birds. Some strains such as Asian H9N2 are highly virulent to poultry and may cause more extreme symptoms and significant mortality.[279] In its most highly pathogenic form, influenza in chickens and turkeys produces a sudden appearance of severe symptoms and almost 100% mortality within two days.[280] As the virus spreads rapidly in the crowded conditions seen in the intensive farming of chickens and turkeys, these outbreaks can cause large economic losses to poultry farmers.[citation needed]
+
+ An avian-adapted, highly pathogenic strain of H5N1 (called HPAI A(H5N1), for "highly pathogenic avian influenza virus of type A of subtype H5N1") causes H5N1 flu, commonly known as "avian influenza" or simply "bird flu", and is endemic in many bird populations, especially in Southeast Asia. This Asian lineage strain of HPAI A(H5N1) is spreading globally. It is epizootic (an epidemic in non-humans) and panzootic (a disease affecting animals of many species, especially over a wide area), killing tens of millions of birds and spurring the culling of hundreds of millions of other birds in an attempt to control its spread. Most references in the media to "bird flu" and most references to H5N1 are about this specific strain.[281][282]
+
+ HPAI A(H5N1) is an avian disease, and there is no evidence suggesting efficient human-to-human transmission of HPAI A(H5N1). In almost all cases, those infected have had extensive physical contact with infected birds.[283] H5N1 may mutate or reassort into a strain capable of efficient human-to-human transmission; the exact changes required for this to happen are not well understood.[284] Due to the high lethality and virulence of H5N1, its endemic presence, and its large and increasing biological host reservoir, the H5N1 virus was considered the world's major pandemic threat in the 2006–07 flu season, and billions of dollars have been raised and spent researching H5N1 and preparing for a potential influenza pandemic.[258]
+
+ In March 2013, the Chinese government reported three cases of H7N9 influenza infection in humans, two of whom had died and a third of whom was critically ill. Although the strain was not thought to spread efficiently between humans,[285][286] by mid-April at least 82 people had become ill from H7N9, of whom 17 had died. These cases included three small family clusters in Shanghai and one cluster involving a neighboring girl and boy in Beijing, raising at least the possibility of human-to-human transmission. The WHO points out that in one cluster two of the cases were not laboratory-confirmed, and notes, as a matter of baseline information, that some viruses are able to cause limited human-to-human transmission under conditions of close contact but are not transmissible enough to cause large community outbreaks.[287][288][289]
+
+ In pigs, swine influenza produces fever, lethargy, sneezing, coughing, difficulty breathing, and decreased appetite.[290] In some cases the infection can cause abortion. Although mortality is usually low, the virus can produce weight loss and poor growth, causing economic loss to farmers.[290] Infected pigs can lose up to 12 pounds (5.4 kg) of body weight over a three- to four-week period.[290] Direct transmission of an influenza virus from pigs to humans is occasionally possible (this is called zoonotic swine flu). In all, 50 human cases are known to have occurred since the virus was identified in the mid-20th century, resulting in six deaths.[291]
+
+ In 2009, a swine-origin H1N1 virus strain commonly referred to as "swine flu" caused the 2009 flu pandemic, but there is no evidence that it is endemic to pigs (i.e. actually a swine flu) or of transmission from pigs to people; instead, the virus spreads from person to person.[292][293] This strain is a reassortment of several strains of H1N1 that are usually found separately, in humans, birds, and pigs.[294]
+
en/2297.html.txt ADDED
@@ -0,0 +1,207 @@
+
+ Influenza, commonly known as "the flu", is an infectious disease caused by an influenza virus.[1] Symptoms can be mild to severe.[5] The most common symptoms include high fever, runny nose, sore throat, muscle and joint pain, headache, coughing, and feeling tired.[1] These symptoms typically begin two days after exposure to the virus, and most last less than a week.[1] The cough, however, may last for more than two weeks.[1] In children, there may be diarrhea and vomiting, but these are not common in adults.[6] Diarrhea and vomiting occur more commonly in gastroenteritis, which is an unrelated disease sometimes inaccurately referred to as "stomach flu" or the "24-hour flu".[6] Complications of influenza may include viral pneumonia, secondary bacterial pneumonia, sinus infections, and worsening of previous health problems such as asthma or heart failure.[2][5]
+
+ Three of the four types of influenza viruses affect humans: Type A, Type B, and Type C.[2][7] Type D has not been known to infect humans, but is believed to have the potential to do so.[7][8] Usually, the virus is spread through the air from coughs or sneezes.[1] This is believed to occur mostly over relatively short distances.[9] It can also be spread by touching surfaces contaminated by the virus and then touching the eyes, nose, or mouth.[5][9][10] A person may be infectious to others both before and during the time they are showing symptoms.[5] The infection may be confirmed by testing the throat, sputum, or nose for the virus.[2] A number of rapid tests are available; however, people may still have the infection even if the results are negative.[2] A type of polymerase chain reaction that detects the virus's RNA is more accurate.[2]
+
+ Frequent hand washing reduces the risk of viral spread, as does wearing a surgical mask.[3] Yearly vaccinations against influenza are recommended by the World Health Organization (WHO) for those at high risk,[1] and by the Centers for Disease Control and Prevention (CDC) for those six months of age and older.[11] The vaccine is usually effective against three or four types of influenza.[1] It is usually well tolerated.[1] A vaccine made for one year may not be useful in the following year, since the virus evolves rapidly.[1] Antiviral medications such as the neuraminidase inhibitor oseltamivir, among others, have been used to treat influenza.[1] The benefits of antiviral medications in those who are otherwise healthy do not appear to be greater than their risks.[12] No benefit has been found in those with other health problems.[12][13]
+
+ Influenza spreads around the world in yearly outbreaks, resulting in about three to five million cases of severe illness and about 290,000 to 650,000 deaths.[1][4] About 20% of unvaccinated children and 10% of unvaccinated adults are infected each year.[14] In the northern and southern parts of the world, outbreaks occur mainly in the winter, while around the equator, outbreaks may occur at any time of the year.[1] Death occurs mostly in high risk groups—the young, the old, and those with other health problems.[1] Larger outbreaks known as pandemics are less frequent.[2] In the 20th century, three influenza pandemics occurred: Spanish influenza in 1918 (17–100 million deaths), Asian influenza in 1957 (two million deaths), and Hong Kong influenza in 1968 (one million deaths).[15][16][17] The World Health Organization declared an outbreak of a new type of influenza A/H1N1 to be a pandemic in June 2009.[18] Influenza may also affect other animals, including pigs, horses, and birds.[19]
+
+ Approximately 33% of people with influenza are asymptomatic.[23][24]
+
+ Symptoms of influenza can start quite suddenly one to two days after infection. Usually the first symptoms are chills and body aches, with fever also common early in the infection, with body temperatures ranging from 38 to 39 °C (100.4 to 102.2 °F).[25] Many people are so ill that they are confined to bed for several days, with aches and pains throughout their bodies, which are worse in their backs and legs.[26]
+
+ It can be difficult to distinguish between the common cold and influenza in the early stages of these infections.[31] Influenza presents as a mixture of symptoms of the common cold and pneumonia, together with body ache, headache, and fatigue. Diarrhea is not usually a symptom of influenza in adults,[20] although it has been seen in some human cases of the H5N1 "bird flu"[32] and can be a symptom in children.[28] The symptoms most reliably seen in influenza are shown in the adjacent table.[20]
+
+ The specific combination of fever and cough has been found to be the best predictor; diagnostic accuracy increases with a body temperature above 38 °C (100.4 °F).[33] Two decision analysis studies[34][35] suggest that during local outbreaks of influenza, the prevalence will be over 70%.[35] Even in the absence of a local outbreak, diagnosis may be justified in the elderly during the influenza season as long as the prevalence is over 15%.[35]
+
+ The United States Centers for Disease Control and Prevention (CDC) maintains an up-to-date summary of available laboratory tests.[36] According to the CDC, rapid diagnostic tests have a sensitivity of 50–75% and specificity of 90–95% when compared with viral culture.[37]
+
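+ What those sensitivity and specificity figures mean in practice follows from Bayes' rule. The sketch below (an illustration, not from the cited sources; the mid-range test performance of 62% sensitivity and 92% specificity is an assumption) computes positive and negative predictive values at the two prevalence levels discussed above, a local outbreak (70%) and off-season testing of the elderly (15%):
+
+ def predictive_values(sensitivity, specificity, prevalence):
+     """Return (positive predictive value, negative predictive value)."""
+     true_pos = sensitivity * prevalence
+     false_pos = (1 - specificity) * (1 - prevalence)
+     true_neg = specificity * (1 - prevalence)
+     false_neg = (1 - sensitivity) * prevalence
+     return (true_pos / (true_pos + false_pos),
+             true_neg / (true_neg + false_neg))
+
+ for prevalence in (0.70, 0.15):
+     ppv, npv = predictive_values(0.62, 0.92, prevalence)
+     print(f"prevalence {prevalence:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")
+
+ At 70% prevalence the negative predictive value comes out near 51%, little better than a coin flip, which is why a negative rapid test cannot rule out influenza during an outbreak.
+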
+ Occasionally, influenza can cause severe illness, including primary viral pneumonia or secondary bacterial pneumonia.[38][39] The most obvious warning symptom is trouble breathing. In addition, if a child (or presumably an adult) seems to be getting better and then relapses with a high fever, that is a danger sign, since the relapse can be bacterial pneumonia.[40]
+
+ Sometimes, influenza may have abnormal presentations, like confusion in the elderly and a sepsis-like syndrome in the young.[41]
+
+ In virus classification, influenza viruses are RNA viruses that make up four of the seven genera of the family Orthomyxoviridae:[43] the genera containing the influenza A, B, C, and D viruses, each described below.
+
+ These viruses are only distantly related to the human parainfluenza viruses, which are RNA viruses belonging to the paramyxovirus family that are a common cause of respiratory infections in children such as croup,[44] but can also cause a disease similar to influenza in adults.[45]
+
+ The fourth family of influenza viruses – Influenza D – was identified in 2016.[46][47][48][49][50][51][52] The type species for this family is Influenza D virus, which was first isolated in 2011.[8]
+
+ The genus Alphainfluenzavirus has one species, influenza A virus. Wild aquatic birds are the natural hosts for a large variety of influenza A.[53] Occasionally, viruses are transmitted to other species and may then cause devastating outbreaks in domestic poultry or give rise to human influenza pandemics.[53] The influenza A virus can be subdivided into different serotypes based on the antibody response to these viruses.[54] The serotypes that have been confirmed in humans include H1N1, H2N2, H3N2, H5N1, and H7N9, all of which are discussed below.
+
+ The genus Betainfluenzavirus has one species, influenza B virus. Influenza B almost exclusively infects humans[54] and is less common than influenza A. The only other animals known to be susceptible to influenza B infection are seals[60] and ferrets.[61] This type of influenza mutates at a rate 2–3 times slower than type A[62] and consequently is less genetically diverse, with only one influenza B serotype.[54] As a result of this lack of antigenic diversity, a degree of immunity to influenza B is usually acquired at an early age. However, influenza B mutates enough that lasting immunity is not possible.[63] This reduced rate of antigenic change, combined with its limited host range (inhibiting cross-species antigenic shift), ensures that pandemics of influenza B do not occur.[64]
+
+ The genus Gammainfluenzavirus has one species, influenza C virus, which infects humans, dogs and pigs, sometimes causing both severe illness and local epidemics.[65][66] However, influenza C is less common than the other types and usually only causes mild disease in children.[67][68]
+
+ The genus Deltainfluenzavirus has only one species, influenza D virus, which infects pigs and cattle. The virus has the potential to infect humans, although no such cases have been observed.[8]
+
+ Influenza viruses A, B, C, and D are very similar in overall structure.[8][69][70] The virus particle (also called the virion) is 80–120 nanometers in diameter, with the smallest virions adopting an elliptical shape.[71] The length of each particle varies considerably, because influenza is pleomorphic, and can exceed many tens of micrometers, producing filamentous virions.[72] Despite these varied shapes, the viral particles of all influenza viruses are similar in composition.[73] They are made of a viral envelope containing the glycoproteins hemagglutinin and neuraminidase wrapped around a central core. The central core contains the viral RNA genome and other viral proteins that package and protect this RNA. The RNA is usually single-stranded, but in special cases it is double-stranded.[74] Unusually for a virus, its genome is not a single piece of nucleic acid; instead, it contains seven or eight pieces of segmented negative-sense RNA, each piece of RNA containing either one or two genes, which code for a gene product (protein).[73] For example, the influenza A genome contains 11 genes on eight pieces of RNA, encoding 11 proteins: hemagglutinin (HA), neuraminidase (NA), nucleoprotein (NP), M1 (matrix 1 protein), M2, NS1 (non-structural protein 1), NS2 (also called NEP, nuclear export protein), PA, PB1 (polymerase basic 1), PB1-F2 and PB2.[75]
+
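+ The segmented genome is easy to picture as a small table. The sketch below lays out the standard segment-to-protein assignment for influenza A; the mapping of proteins to numbered segments is textbook influenza biology rather than something stated in the paragraph above, so treat it as an assumption here:
+
+ # The eight vRNA segments of influenza A and the eleven proteins they encode.
+ INFLUENZA_A_SEGMENTS = {
+     1: ["PB2"],            # polymerase basic 2
+     2: ["PB1", "PB1-F2"],  # polymerase basic 1 plus an accessory reading frame
+     3: ["PA"],             # polymerase acidic
+     4: ["HA"],             # hemagglutinin
+     5: ["NP"],             # nucleoprotein
+     6: ["NA"],             # neuraminidase
+     7: ["M1", "M2"],       # matrix protein and ion channel
+     8: ["NS1", "NS2"],     # non-structural protein and nuclear export protein
+ }
+
+ proteins = [p for segment in INFLUENZA_A_SEGMENTS.values() for p in segment]
+ assert len(INFLUENZA_A_SEGMENTS) == 8 and len(proteins) == 11
+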
+ Hemagglutinin (HA) and neuraminidase (NA) are the two large glycoproteins on the outside of the viral particles. HA is a lectin that mediates binding of the virus to target cells and entry of the viral genome into the target cell, while NA is involved in the release of progeny virus from infected cells, by cleaving sugars that bind the mature viral particles.[76] Thus, these proteins are targets for antiviral medications.[77] Furthermore, they are antigens to which antibodies can be raised. Influenza A viruses are classified into subtypes based on antibody responses to HA and NA. These different types of HA and NA form the basis of the H and N distinctions in, for example, H5N1.[78] There are 18 H and 11 N subtypes known, but only H1, H2 and H3, and N1 and N2 are commonly found in humans.[79][80]
+
+ Viruses can replicate only in living cells.[81] Influenza infection and replication is a multi-step process: First, the virus has to bind to and enter the cell, then deliver its genome to a site where it can produce new copies of viral proteins and RNA, assemble these components into new viral particles, and, last, exit the host cell.[73]
+
+ Influenza viruses bind through hemagglutinin onto sialic acid sugars on the surfaces of epithelial cells, typically in the nose, throat, and lungs of mammals, and intestines of birds (Stage 1 in infection figure).[82] After the hemagglutinin is cleaved by a protease, the cell imports the virus by endocytosis.[83]
+
+ The intracellular details are still being elucidated. It is known that virions converge to the microtubule organizing center, interact with acidic endosomes and finally enter the target endosomes for genome release.[84]
+
+ Once inside the cell, the acidic conditions in the endosome cause two events to happen: First, part of the hemagglutinin protein fuses the viral envelope with the vacuole's membrane, then the M2 ion channel allows protons to move through the viral envelope and acidify the core of the virus, which causes the core to disassemble and release the viral RNA and core proteins.[73] The viral RNA (vRNA) molecules, accessory proteins and RNA-dependent RNA polymerase are then released into the cytoplasm (Stage 2).[85] The M2 ion channel is blocked by amantadine drugs, preventing infection.[86]
+
+ These core proteins and vRNA form a complex that is transported into the cell nucleus, where the RNA-dependent RNA polymerase begins transcribing complementary positive-sense vRNA (Steps 3a and b).[87] The vRNA either is exported into the cytoplasm and translated (step 4) or remains in the nucleus. Newly synthesized viral proteins are either secreted through the Golgi apparatus onto the cell surface (in the case of neuraminidase and hemagglutinin, step 5b) or transported back into the nucleus to bind vRNA and form new viral genome particles (step 5a). Other viral proteins have multiple actions in the host cell, including degrading cellular mRNA and using the released nucleotides for vRNA synthesis and also inhibiting translation of host-cell mRNAs.[88]
+
+ Negative-sense vRNAs that form the genomes of future viruses, RNA-dependent RNA polymerase, and other viral proteins are assembled into a virion. Hemagglutinin and neuraminidase molecules cluster into a bulge in the cell membrane. The vRNA and viral core proteins leave the nucleus and enter this membrane protrusion (step 6). The mature virus buds off from the cell in a sphere of host phospholipid membrane, acquiring hemagglutinin and neuraminidase with this membrane coat (step 7).[89] As before, the viruses adhere to the cell through hemagglutinin; the mature viruses detach once their neuraminidase has cleaved sialic acid residues from the host cell.[82] After the release of new influenza viruses, the host cell dies.
+
+ Because of the absence of RNA proofreading enzymes, the RNA-dependent RNA polymerase that copies the viral genome makes an error roughly every 10 thousand nucleotides, which is the approximate length of the influenza vRNA. Hence, the majority of newly manufactured influenza viruses are mutants; this causes antigenic drift, which is a slow change in the antigens on the viral surface over time.[90] The separation of the genome into eight separate segments of vRNA allows mixing or reassortment of vRNAs if more than one type of influenza virus infects a single cell. The resulting rapid change in viral genetics produces antigenic shifts, which are sudden changes from one antigen to another. These sudden large changes allow the virus to infect new host species and quickly overcome protective immunity.[78] This is important in the emergence of pandemics, as discussed below in the section on epidemiology. Also, when two or more viruses infect a cell, genetic variation may be generated by homologous recombination.[91][92] Homologous recombination can arise during viral genome replication by the RNA polymerase switching from one template to another, a process known as copy choice.[92]
+
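+ Taking the figures above at face value, a Poisson error model (a back-of-the-envelope sketch, not a calculation from the cited sources) shows why the majority of progeny genomes carry at least one mutation:
+
+ import math
+
+ error_rate = 1 / 10_000    # roughly one copying error per 10,000 nucleotides
+ genome_length = 10_000     # approximate total vRNA length given above
+
+ expected_errors = error_rate * genome_length   # about one error per genome copy
+ p_mutant = 1 - math.exp(-expected_errors)      # Poisson P(at least one error)
+ print(f"fraction of progeny carrying a mutation: {p_mutant:.0%}")  # ~63%
+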
+ When an infected person sneezes or coughs more than half a million virus particles can be spread to those close by.[93] In otherwise healthy adults, influenza virus shedding (the time during which a person might be infectious to another person) increases sharply one-half to one day after infection, peaks on day 2 and persists for an average total duration of 5 days—but can persist as long as 9 days.[23] In those who develop symptoms from experimental infection (only 67% of healthy experimentally infected individuals), symptoms and viral shedding show a similar pattern, but with viral shedding preceding illness by one day.[23] Children are much more infectious than adults and shed virus from just before they develop symptoms until two weeks after infection.[94] In immunocompromised people, viral shedding can continue for longer than two weeks.[95]
+
+ Influenza can be spread in three main ways:[96][97] by direct transmission (when an infected person sneezes mucus directly into the eyes, nose or mouth of another person); the airborne route (when someone inhales the aerosols produced by an infected person coughing, sneezing or spitting) and through hand-to-eye, hand-to-nose, or hand-to-mouth transmission, either from contaminated surfaces or from direct personal contact such as a handshake. The relative importance of these three modes of transmission is unclear, and they may all contribute to the spread of the virus.[9] In the airborne route, the droplets that are small enough for people to inhale are 0.5 to 5 μm in diameter and inhaling just one droplet might be enough to cause an infection.[96] Although a single sneeze releases up to 40,000 droplets,[98] most of these droplets are quite large and will quickly settle out of the air.[96] How long influenza survives in airborne droplets seems to be influenced by the levels of humidity and UV radiation, with low humidity and a lack of sunlight in winter aiding its survival;[96] ideal conditions can allow it to live for an hour in the atmosphere.[99]
+
+ As the influenza virus can persist outside of the body, it can also be transmitted by contaminated surfaces such as banknotes,[100] doorknobs, light switches and other household items.[26] The length of time the virus will persist on a surface varies, with the virus surviving for one to two days on hard, non-porous surfaces such as plastic or metal, for about fifteen minutes on dry paper tissues, and only five minutes on skin.[101] However, if the virus is present in mucus, this can protect it for longer periods (up to 17 days on banknotes).[96][100] Avian influenza viruses can survive indefinitely when frozen.[102] They are inactivated by heating to 56 °C (133 °F) for a minimum of 60 minutes, as well as by acids (at pH <2).[102]
+
+ The mechanisms by which influenza infection causes symptoms in humans have been studied intensively. One of the mechanisms is believed to be the inhibition of adrenocorticotropic hormone (ACTH) resulting in lowered cortisol levels.[103]
+ Knowing which genes are carried by a particular strain can help predict how well it will infect humans and how severe this infection will be (that is, predict the strain's pathophysiology).[66][104]
+
+ For instance, part of the process that allows influenza viruses to invade cells is the cleavage of the viral hemagglutinin protein by any one of several human proteases.[83] In mild and avirulent viruses, the structure of the hemagglutinin means that it can only be cleaved by proteases found in the throat and lungs, so these viruses cannot infect other tissues. However, in highly virulent strains, such as H5N1, the hemagglutinin can be cleaved by a wide variety of proteases, allowing the virus to spread throughout the body.[104]
+
+ The viral hemagglutinin protein is responsible for determining both which species a strain can infect and where in the human respiratory tract a strain of influenza will bind.[105] Strains that are easily transmitted between people have hemagglutinin proteins that bind to receptors in the upper part of the respiratory tract, such as in the nose, throat and mouth. In contrast, the highly lethal H5N1 strain binds to receptors that are mostly found deep in the lungs.[106] This difference in the site of infection may be part of the reason why the H5N1 strain causes severe viral pneumonia in the lungs, but is not easily transmitted by people coughing and sneezing.[107][108]
+
+ Common symptoms of the flu such as fever, headaches, and fatigue are the result of the huge amounts of proinflammatory cytokines and chemokines (such as interferon or tumor necrosis factor) produced from influenza-infected cells.[31][109] In contrast to the rhinovirus that causes the common cold, influenza does cause tissue damage, so symptoms are not entirely due to the inflammatory response.[110] This massive immune response might produce a life-threatening cytokine storm. This effect has been proposed to be the cause of the unusual lethality of both the H5N1 avian influenza,[111] and the 1918 pandemic strain.[112][113] However, another possibility is that these large amounts of cytokines are just a result of the massive levels of viral replication produced by these strains, and the immune response does not itself contribute to the disease.[114] Influenza appears to trigger programmed cell death (apoptosis).[115]
+
+ The influenza vaccine is recommended by the World Health Organization (WHO) for high-risk groups, such as pregnant women, children aged less than five years, the elderly, health care workers, and people who have chronic illnesses such as HIV/AIDS, asthma, diabetes, or heart disease, or who are immunocompromised, among others.[116][117] The United States Centers for Disease Control and Prevention (CDC) recommends the influenza vaccine for those aged six months or older who do not have contraindications.[118][11] In healthy adults it is modestly effective in decreasing the amount of influenza-like symptoms in a population.[119] In healthy children over the age of two years, the vaccine reduces the chances of getting influenza by around two-thirds, while it has not been well studied in children under two years.[120] In those with chronic obstructive pulmonary disease, vaccination reduces exacerbations,[121] but it is not clear whether it reduces asthma exacerbations.[122] Evidence supports a lower rate of influenza-like illness in many immunocompromised groups, such as those with HIV/AIDS or cancer and organ-transplant recipients.[123] In those at high risk, immunization may reduce the risk of heart disease.[124] Whether immunizing health care workers affects patient outcomes is controversial, with some reviews finding insufficient evidence[125][126] and others finding tentative evidence.[127][128]
+
+ Due to the high mutation rate of the virus, a particular influenza vaccine usually confers protection for no more than a few years. Each year, the World Health Organization predicts which strains of the virus are most likely to be circulating in the next year (see Historical annual reformulations of the influenza vaccine), allowing pharmaceutical companies to develop vaccines that will provide the best immunity against these strains.[129] The vaccine is reformulated each season for a few specific flu strains but does not include all the strains active in the world during that season. It takes about six months for the manufacturers to formulate and produce the millions of doses required to deal with the seasonal epidemics; occasionally, a new or overlooked strain becomes prominent during that time.[130] It is also possible to get infected just before vaccination and get sick with the strain that the vaccine is supposed to prevent, as the vaccine takes about two weeks to become effective.[131]
+ Vaccines can cause the immune system to react as if the body were actually being infected, and general infection symptoms (many cold and flu symptoms are just general infection symptoms) can appear, though these symptoms are usually not as severe or long-lasting as influenza. The most dangerous adverse effect is a severe allergic reaction to either the virus material itself or to residues from the hen eggs used to grow the influenza virus; however, these reactions are extremely rare.[132]
+
+ A 2018 Cochrane review of children in good general health found that the live immunization seemed to lower the risk of getting influenza for the season from 18% to 4%. The inactivated vaccine seemed to lower the risk of getting flu for the season from 30% to 11%. Not enough data was available to draw definite conclusions about serious complications such as pneumonia or hospitalization.[120]
+
+ For healthy adults, a 2018 Cochrane review showed that vaccines reduced the incidence of lab-confirmed influenza from 2.3% to 0.9%, which constitutes a reduction of risk of approximately 60%. However, for influenza-like illness, defined as the same combination of cough, fever, headache, runny nose, and bodily aches and pains without laboratory confirmation, the vaccine reduced the risk from 21.5% to 18.1%, a much more modest reduction of approximately 16%. The difference is most probably explained by the fact that over 200 viruses cause the same or similar symptoms as the flu virus.[119] Another review looked at the effect of short- and long-term exercise before vaccination, but no benefits or harms were recorded.[133]
+
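+ The two headline percentages follow directly from the absolute risks quoted above; a quick check, using the review's figures as given:
+
+ def relative_risk_reduction(control_risk, vaccinated_risk):
+     """Reduction in risk, expressed as a fraction of the unvaccinated risk."""
+     return (control_risk - vaccinated_risk) / control_risk
+
+ print(f"lab-confirmed influenza: {relative_risk_reduction(0.023, 0.009):.0%}")  # ~61%
+ print(f"influenza-like illness:  {relative_risk_reduction(0.215, 0.181):.0%}")  # ~16%
+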
+ The cost-effectiveness of seasonal influenza vaccination has been widely evaluated for different groups and in different settings.[134] It has generally been found to be a cost-effective intervention, especially in children[135] and the elderly;[136] however, the results of economic evaluations of influenza vaccination have often been found to depend on key assumptions.[137][138]
+
+ [Figure: the main ways that influenza spreads.]
+
+ When vaccines and antiviral medications are limited, non-pharmaceutical interventions are essential to reduce transmission and spread. The lack of controlled studies and rigorous evidence of the effectiveness of some measures has hampered planning decisions and recommendations. Nevertheless, strategies endorsed by experts for all phases of flu outbreaks include hand and respiratory hygiene, self-isolation by symptomatic individuals and the use of face masks by them and their caregivers, surface disinfection, rapid testing and diagnosis, and contact tracing. In some cases, other forms of social distancing including school closures and travel restrictions are recommended.[139]
+
+ Reasonably effective ways to reduce the transmission of influenza include good personal health and hygiene habits such as: not touching the eyes, nose or mouth;[140] frequent hand washing (with soap and water, or with alcohol-based hand rubs);[141] covering coughs and sneezes with a tissue or sleeve; avoiding close contact with sick people; and staying home when sick. Avoiding spitting is also recommended.[139] Although face masks might help prevent transmission when caring for the sick,[142][143] there is mixed evidence on beneficial effects in the community.[139][144] Smoking raises the risk of contracting influenza, as well as producing more severe disease symptoms.[145][146]
+
+ Since influenza spreads through both aerosols and contact with contaminated surfaces, surface sanitizing may help prevent some infections.[147] Alcohol is an effective sanitizer against influenza viruses, while quaternary ammonium compounds can be used with alcohol so that the sanitizing effect lasts for longer.[148] In hospitals, quaternary ammonium compounds and bleach are used to sanitize rooms or equipment that have been occupied by people with influenza symptoms.[148] At home, this can be done effectively with a diluted chlorine bleach.[149]
+
+ Social distancing strategies used during past pandemics, such as quarantines, travel restrictions, and the closing of schools, churches, and theaters, have been employed to slow the spread of influenza viruses. Researchers have estimated that such interventions during the 1918 Spanish flu pandemic in the US reduced the peak death rate by up to 50%, and the overall mortality by about 10–30%, in areas where multiple interventions were implemented. The more moderate effect on total deaths was attributed to the measures being employed too late or lifted too early; most were kept in place for six weeks or less.[150][151]
+
+ For typical flu outbreaks, routine cancellation of large gatherings or mandatory travel restrictions have received little agreement, particularly as they may be disruptive and unpopular. School closures have been found by most empirical studies to reduce community spread, but some findings have been contradictory. Recommendations for these community restrictions are usually on a case-by-case basis.[139]
+
+ There are a number of rapid tests for the flu. One is the rapid molecular assay, in which an upper respiratory tract specimen (mucus) is taken using a nasal swab or a nasopharyngeal swab.[152] It should be done within 3–4 days of symptom onset, as upper respiratory viral shedding declines steeply after that.[41]
+
+ People with the flu are advised to get plenty of rest, drink plenty of liquids, avoid using alcohol and tobacco and, if necessary, take medications such as acetaminophen (paracetamol) to relieve the fever and muscle aches associated with the flu.[153][154] In contrast, there is not enough evidence to support corticosteroids as additional therapy for influenza.[155] It is advised to avoid close contact with others to prevent spread of infection.[153][154] Children and teenagers with flu symptoms (particularly fever) should avoid taking aspirin during an influenza infection (especially influenza type B), because doing so can lead to Reye's syndrome, a rare but potentially fatal disease of the liver.[156] Since influenza is caused by a virus, antibiotics have no effect on the infection, although they may be prescribed for secondary infections such as bacterial pneumonia. Antiviral medication may be effective if given early (within 48 hours of first symptoms), but some strains of influenza can show resistance to the standard antiviral medications, and there is concern about the quality of the research.[157] High-risk individuals such as young children, pregnant women, the elderly, and those with compromised immune systems should visit the doctor for antiviral medications. Those with the emergency warning signs should visit the emergency room at once.[42]
+
+ The two classes of antiviral medications used against influenza are neuraminidase inhibitors (oseltamivir, zanamivir, laninamivir and peramivir) and M2 protein inhibitors (adamantane derivatives).[158][159][160] In Russia, umifenovir is sold for treatment of influenza[161] and in the first quarter of 2020 had a 16 percent share in the antiviral market.[162]
+
+ Overall the benefits of neuraminidase inhibitors in those who are otherwise healthy do not appear to be greater than the risks.[12] There does not appear to be any benefit in those with other health problems.[12] In those believed to have the flu, they decreased the length of time symptoms were present by slightly less than a day but did not appear to affect the risk of complications such as needing hospitalization or pneumonia.[13] Increasingly prevalent resistance to neuraminidase inhibitors has led researchers to seek alternative antiviral medications with different mechanisms of action.[163]
+
+ The antiviral medications amantadine and rimantadine inhibit a viral ion channel (the M2 protein), thus inhibiting replication of the influenza A virus.[86] These medications are sometimes effective against influenza A if given early in the infection, but are ineffective against influenza B viruses, which lack the M2 drug target.[164] Measured resistance to amantadine and rimantadine in American isolates of H3N2 had increased to 91% by 2005.[165] This high level of resistance may be due to the easy availability of amantadines as part of over-the-counter cold remedies in countries such as China and Russia,[166] and their use to prevent outbreaks of influenza in farmed poultry.[167][168] The CDC recommended against using M2 inhibitors during the 2005–06 influenza season due to high levels of drug resistance.[169]
+
+ Influenza's effects are much more severe and last longer than those of the common cold. Most people will recover completely in about one to two weeks, but others will develop life-threatening complications (such as pneumonia). Thus, influenza can be deadly, especially for the weak, young and old, those with compromised immune systems, or the chronically ill.[78] People with a weak immune system, such as people with advanced HIV infection or transplant recipients (whose immune systems are medically suppressed to prevent transplant organ rejection), suffer from particularly severe disease.[170] Pregnant women and young children are also at a high risk for complications.[171]
+
+ The flu can worsen chronic health problems. People with emphysema, chronic bronchitis or asthma may experience shortness of breath while they have the flu, and influenza may cause worsening of coronary heart disease or congestive heart failure.[172] Smoking is another risk factor associated with more serious disease and increased mortality from influenza.[145]
+
+ Even healthy people can be affected, and serious problems from influenza can happen at any age. People over 65 years old, pregnant women, very young children, and people of any age with chronic medical conditions are more likely to develop complications from influenza, such as pneumonia, bronchitis, and sinus and ear infections.[173]
+
+ In some cases, an autoimmune response to an influenza infection may contribute to the development of Guillain–Barré syndrome.[174] However, as many other infections can increase the risk of this disease, influenza may only be an important cause during epidemics.[174][175] This syndrome has also been believed to be a rare side effect of influenza vaccines; one review gives an incidence of about one case per million vaccinations.[176] Getting infected by influenza itself increases both the risk of death (up to 1 in 10,000) and the risk of developing Guillain–Barré syndrome, the latter to a much higher level than the highest level of suspected vaccine involvement (approximately 10 times higher by recent estimates).[177][174]
+
+ According to the Centers for Disease Control and Prevention (CDC), "Children of any age with neurologic conditions are more likely than other children to become very sick if they get the flu. Flu complications may vary and for some children, can include pneumonia and even death."[178]
+
+ Neurological conditions can include disorders of the brain and spinal cord, cerebral palsy, epilepsy (seizure disorders), stroke, intellectual disability, moderate to severe developmental delay, muscular dystrophy, and spinal cord injury.
+
+ These conditions can impair coughing, swallowing, clearing the airways, and in the worst cases, breathing. Therefore, they worsen the flu symptoms.[178]
+
+ Influenza reaches peak prevalence in winter, and because the Northern and Southern Hemispheres have winter at different times of the year, there are actually two different flu seasons each year. This is why the World Health Organization (assisted by the National Influenza Centers) makes recommendations for two different vaccine formulations every year; one for the Northern, and one for the Southern Hemisphere.[129]
+
+ A long-standing puzzle has been why outbreaks of the flu occur seasonally rather than uniformly throughout the year. One possible explanation is that, because people are indoors more often during the winter, they are in close contact more often, and this promotes transmission from person to person. Increased travel due to the Northern Hemisphere winter holiday season may also play a role.[179] Another factor is that cold temperatures lead to drier air, which may dehydrate mucus particles. Dry particles are lighter and can thus remain airborne for a longer period. The virus also survives longer on surfaces at colder temperatures and aerosol transmission of the virus is highest in cold environments (less than 5 °C) with low relative humidity.[180] The lower air humidity in winter seems to be the main cause of seasonal influenza transmission in temperate regions.[181][182]
+
+ However, seasonal changes in infection rates also occur in tropical regions, and in some countries these peaks of infection are seen mainly during the rainy season.[183] Seasonal changes in contact rates from school terms, which are a major factor in other childhood diseases such as measles and pertussis, may also play a role in the flu. A combination of these small seasonal effects may be amplified by dynamical resonance with the endogenous disease cycles.[184] H5N1 exhibits seasonality in both humans and birds.[185][186]
+
+ An alternative hypothesis to explain seasonality in influenza infections is an effect of vitamin D levels on immunity to the virus.[187] This idea was first proposed by Robert Edgar Hope-Simpson in 1981.[188] He proposed that the cause of influenza epidemics during winter may be connected to seasonal fluctuations of vitamin D, which is produced in the skin under the influence of solar (or artificial) UV radiation. This could explain why influenza occurs mostly in winter and during the tropical rainy season, when people stay indoors, away from the sun, and their vitamin D levels fall.
+
+ Every year about 290,000 to 650,000 people die due to influenza globally, with an average of 389,000.[190] In the developed world most of those who die are over the age of 65.[1] In the developing world the effects are less clear; however, it appears that children are affected to a greater degree.[1]
+
+ Although the number of cases of influenza can vary widely between years, approximately 36,000 deaths and more than 200,000 hospitalizations are directly associated with influenza a year in the United States.[191][192] One method of calculating influenza mortality produced an estimate of 41,400 average deaths per year in the United States between 1979 and 2001.[193] Different methods in 2010 by the Centers for Disease Control and Prevention (CDC) reported a range from a low of about 3,300 deaths to a high of 49,000 per year.[194]
+
+ As influenza is caused by a variety of species and strains of viruses, in any given year some strains can die out while others create epidemics, and yet another strain can cause a pandemic. Typically, in a year's normal two flu seasons (one per hemisphere), there are between three and five million cases of severe illness,[4][1][195] which by some definitions is a yearly influenza epidemic.[1]
+
+ Roughly three times per century, a pandemic occurs, which infects a large proportion of the world's population and can kill tens of millions of people (see pandemics section). In 2006, a study estimated that if a strain with similar virulence to the 1918 influenza had emerged that year, it could have killed between 50 and 80 million people.[196]
+
+ New influenza viruses are constantly evolving by mutation or by reassortment.[54] Mutations can cause small changes in the hemagglutinin and neuraminidase antigens on the surface of the virus. This is called antigenic drift, which slowly creates an increasing variety of strains until one evolves that can infect people who are immune to the pre-existing strains. This new variant then replaces the older strains as it rapidly sweeps through the human population, often causing an epidemic.[197] However, since the strains produced by drift will still be reasonably similar to the older strains, some people will still be immune to them. In contrast, when influenza viruses reassort, they acquire completely new antigens—for example by reassortment between avian strains and human strains; this is called antigenic shift. If a human influenza virus is produced that has entirely new antigens, everybody will be susceptible, and the novel influenza will spread uncontrollably, causing a pandemic.[198] As an alternative to this model of pandemics based on antigenic drift and shift, it has been proposed that periodic pandemics are produced by the interaction of a fixed set of viral strains with a human population whose immunity to those strains is constantly changing.[199]
+
+ From a public health point of view, flu epidemics spread rapidly and are very difficult to control. Most influenza virus strains are not very infectious, and each infected individual will only go on to infect one or two other individuals (the basic reproduction number for influenza is generally around 1.4). However, the generation time for influenza is extremely short: the time from a person becoming infected to when they infect the next person is only two days. The short generation time means that influenza epidemics generally peak at around two months and burn out after three months; the decision to intervene in an influenza epidemic therefore has to be taken early, often on the basis of incomplete data. Another problem is that individuals become infectious before they become symptomatic, which means that putting people in quarantine after they become ill is not an effective public health intervention.[200] For the average person, viral shedding tends to peak on day two, whereas symptoms peak on day three.[23]
+
+ The word influenza comes from the Italian word for "influence", referring to the supposed cause of the disease: illness was initially ascribed to unfavorable astrological influences. The word was introduced into English in the mid-eighteenth century during a pan-European epidemic.[201]
+ Archaic terms for influenza include epidemic catarrh, la grippe (from the French, first used by Molyneaux in 1694; also used in German),[202] sweating sickness, and Spanish fever (particularly for the 1918 flu pandemic strain).[203]
+
+ An overall lack of data before 1500 precludes a meaningful search for influenza outbreaks in the more distant past,[205] although possibly the first influenza pandemic occurred around 6000 BC in China.[205] The symptoms of human influenza were clearly described by Hippocrates roughly 2,400 years ago.[206][207] Although the virus seems to have caused epidemics throughout human history, historical data on influenza are difficult to interpret, because the symptoms can be similar to those of other respiratory diseases.[208][202] The disease may have spread from Europe to the Americas as early as the European colonization of the Americas, since almost the entire indigenous population of the Antilles was killed by an epidemic resembling influenza that broke out in 1493, after the arrival of Christopher Columbus.[209][210]
+
+ The first convincing record of an influenza pandemic was a minor pandemic chronicled in 1510, which began in East Asia before spreading to North Africa and then Europe. During this pandemic, influenza killed about 1% of its victims.[211][212] The first pandemic of influenza to be reliably recorded as spreading worldwide was the 1557 influenza pandemic,[213][214][215][216] in which a recurring wave likely killed Queen Mary I of England and the Archbishop of Canterbury within 12 hours of each other.[217][218] One of the best-chronicled pandemics of influenza in the 16th century occurred in 1580, beginning in East Asia and spreading to Europe through Africa, Russia, and the Spanish and Ottoman Empires. In Rome, over 8,000 people were killed, and several Spanish cities suffered heavy mortality; among the dead was the Queen of Spain, Anna of Austria. Pandemics continued sporadically through the 17th, 18th, and 19th centuries, with the pandemic of 1830–1833 being particularly widespread; it infected approximately a quarter of the people exposed.[202]
151
+
152
+ The most famous and lethal outbreak was the 1918 flu pandemic (Spanish flu) (type A influenza, H1N1 subtype), which lasted into 1920. It is not known exactly how many it killed, but estimates range from 17 million to 100 million people.[15][204][219][220] This pandemic has been described as "the greatest medical holocaust in history" and may have killed as many people as the Black Death.[202] This huge death toll was caused by an extremely high infection rate of up to 50% and the extreme severity of the symptoms, suspected to be caused by cytokine storms.[220] Symptoms in 1918 were so unusual that initially influenza was misdiagnosed as dengue, cholera, or typhoid. One observer wrote, "One of the most striking of the complications was hemorrhage from mucous membranes, especially from the nose, stomach, and intestine. Bleeding from the ears and petechial hemorrhages in the skin also occurred."[219] The majority of deaths were from bacterial pneumonia, a secondary infection caused by influenza, but the virus also killed people directly, causing massive hemorrhages and edema in the lung.[221]
153
+
154
+ The 1918 flu pandemic was truly global, spreading even to the Arctic and remote Pacific islands. The unusually severe disease killed between two and twenty percent of those infected, as opposed to the more usual flu epidemic mortality rate of 0.1%.[204][219] Another unusual feature of this pandemic was that it mostly killed young adults, with 99% of pandemic influenza deaths occurring in people under 65, and more than half in young adults 20 to 40 years old.[222] This is unusual since influenza is normally most deadly to the very young (under age 2) and the very old (over age 70). The total mortality of the 1918–1919 pandemic is not known, but it is estimated that 2.5% to 5% of the world's population was killed. As many as 25 million may have been killed in the first 25 weeks; in contrast, HIV/AIDS has killed 25 million in its first 25 years.[219]
155
+
156
+ Later flu pandemics were not so devastating. They included the 1957 Asian flu (type A, H2N2 strain) and the 1968 Hong Kong flu (type A, H3N2 strain), but even these smaller outbreaks killed millions of people. In later pandemics, antibiotics were available to control secondary infections, and this may have helped reduce mortality compared to the Spanish flu of 1918.[204]
157
+
158
+
159
+
160
+ From 1892, the cause of influenza was incorrectly assumed to be bacterial, after R. F. J. Pfeiffer discovered Haemophilus influenzae and suggested it as the origin of the disease.[246] The first influenza virus to be isolated was from poultry, when in 1901 the agent causing a disease called "fowl plague" was passed through Chamberland filters, which have pores too small for bacteria to pass through.[247] However, the conceptual difference between viruses and bacteria as distinct entities was not fully understood for some time, complicating preventative measures taken during the 1918 influenza pandemic.[246] The etiological cause of influenza, the virus family Orthomyxoviridae, was first discovered in pigs by Richard Shope in 1931.[248] This discovery was shortly followed by the isolation of the virus from humans by a group headed by Patrick Laidlaw at the Medical Research Council of the United Kingdom in 1933.[249] However, it was not until Wendell Stanley first crystallized tobacco mosaic virus in 1935 that the non-cellular nature of viruses was appreciated.
161
+
162
+ The first significant step towards preventing influenza was the development in 1944 of a killed-virus vaccine for influenza by Thomas Francis, Jr. This built on work by Australian Frank Macfarlane Burnet, who showed that the virus lost virulence when it was cultured in fertilized hen's eggs.[251] Application of this observation by Francis allowed his group of researchers at the University of Michigan to develop the first influenza vaccine, with support from the U.S. Army.[252] The Army was deeply involved in this research due to its experience of influenza in World War I, when thousands of troops were killed by the virus in a matter of months.[219] In comparison to vaccines, the development of anti-influenza drugs has been slower, with amantadine being licensed in 1966 and, almost thirty years later, the next class of drugs (the neuraminidase inhibitors) being developed.[253]
163
+
164
+ Influenza produces direct costs due to lost productivity and associated medical treatment, as well as indirect costs of preventive measures. In the United States, seasonal influenza is estimated to result in a total average annual economic cost of over $11 billion, with direct medical costs estimated to be over $3 billion annually.[254] It has been estimated that a future pandemic could cause hundreds of billions of dollars in direct and indirect costs.[255] However, the economic impacts of past pandemics have not been intensively studied, and some authors have suggested that the Spanish influenza actually had a positive long-term effect on per-capita income growth, despite a large reduction in the working population and severe short-term depressive effects.[256] Other studies have attempted to predict the costs of a pandemic as serious as the 1918 Spanish flu on the U.S. economy, where 30% of all workers became ill, and 2.5% were killed. A 30% sickness rate and a three-week length of illness would decrease the gross domestic product by 5%. Additional costs would come from medical treatment of 18 million to 45 million people, and total economic costs would be approximately $700 billion.[257]
165
+
166
+ Preventive costs are also high. Governments worldwide have spent billions of U.S. dollars preparing and planning for a potential H5N1 avian influenza pandemic, with costs associated with purchasing drugs and vaccines as well as developing disaster drills and strategies for improved border controls.[258] On 1 November 2005, United States President George W. Bush unveiled the National Strategy to Safeguard Against the Danger of Pandemic Influenza[255] backed by a request to Congress for $7.1 billion to begin implementing the plan.[259] Internationally, on 18 January 2006, donor nations pledged US$2 billion to combat bird flu at the two-day International Pledging Conference on Avian and Human Influenza held in China.[260][261]
167
+
168
+ In an assessment of the 2009 H1N1 pandemic on selected countries in the Southern Hemisphere, data suggest that all countries experienced some time-limited and/or geographically isolated socioeconomic effects and a temporary decrease in tourism most likely due to fear of 2009 H1N1 disease. It is still too early to determine whether the H1N1 pandemic has had any long-term economic effects.[262][needs update]
169
+
170
+ Research on influenza includes studies on molecular virology, how the virus produces disease (pathogenesis), host immune responses, viral genomics, and how the virus spreads (epidemiology). These studies help in developing influenza countermeasures; for example, a better understanding of the body's immune system response helps vaccine development, and a detailed picture of how influenza invades cells aids the development of antiviral drugs. One important basic research program is the Influenza Genome Sequencing Project, which was initiated in 2004 to create a library of influenza sequences and help clarify which factors make one strain more lethal than another, which genes most affect immunogenicity, and how the virus evolves over time.[263]
171
+
172
+ The sequencing of the influenza genome and recombinant DNA technology may accelerate the generation of new vaccine strains by allowing scientists to substitute new antigens into a previously developed vaccine strain.[264] Growing viruses in cell culture also promises higher yields, less cost, better quality and surge capacity.[265] Research on a universal influenza A vaccine, targeted against the external domain of the transmembrane viral M2 protein (M2e), is being done at the University of Ghent by Walter Fiers, Xavier Saelens and their team[266][267][268] and has now successfully concluded Phase I clinical trials. There has been some research success towards a "universal flu vaccine" that produces antibodies against proteins on the viral coat which mutate less rapidly, and thus a single shot could potentially provide longer-lasting protection.[269][270][271]
173
+
174
+ A number of biologics, therapeutic vaccines and immunobiologics are also being investigated for the treatment of influenza virus infection. Therapeutic biologics are designed to activate the immune response to the virus or its antigens. Typically, biologics do not target metabolic pathways in the way antiviral drugs do; instead, they stimulate immune cells such as lymphocytes, macrophages, and/or antigen-presenting cells, in an effort to drive an immune response towards a cytotoxic effect against the virus. Influenza models, such as murine influenza, are convenient models to test the effects of prophylactic and therapeutic biologics. For example, lymphocyte T-cell immunomodulator inhibits viral growth in the murine model of influenza.[272]
175
+
176
+ Influenza infects many animal species, and transfer of viral strains between species can occur. Birds are thought to be the main animal reservoirs of influenza viruses.[273] Most influenza strains are believed to have originated after humans began their intensive domestication of animals about 10,000 years ago.[274] Sixteen forms of hemagglutinin and nine forms of neuraminidase have been identified. All known subtypes (HxNy) are found in birds, but many subtypes are endemic in humans, dogs, horses, and pigs; populations of camels, ferrets, cats, seals, mink, and whales also show evidence of prior infection or exposure to influenza.[63] Variants of flu virus are sometimes named according to the species the strain is endemic in or adapted to. The main variants named using this convention are: bird flu, human flu, swine flu, horse flu and dog flu. (Cat flu generally refers to feline viral rhinotracheitis or feline calicivirus and not infection from an influenza virus.) In pigs, horses and dogs, influenza symptoms are similar to those in humans, with cough, fever and loss of appetite.[63] The frequency of animal disease is not as well studied as human infection, but an outbreak of influenza in harbor seals caused approximately 500 seal deaths off the New England coast in 1979–1980.[275] However, outbreaks in pigs are common and do not cause severe mortality.[63] Vaccines have also been developed to protect poultry from avian influenza. These vaccines can be effective against multiple strains and are used either as part of a preventive strategy, or combined with culling in attempts to eradicate outbreaks.[276]
177
+
178
+ Flu symptoms in birds are variable and can be unspecific.[277] The symptoms following infection with low-pathogenicity avian influenza may be as mild as ruffled feathers, a small reduction in egg production, or weight loss combined with minor respiratory disease.[278] Since these mild symptoms can make diagnosis in the field difficult, tracking the spread of avian influenza requires laboratory testing of samples from infected birds. Some strains such as Asian H9N2 are highly virulent to poultry and may cause more extreme symptoms and significant mortality.[279] In its most highly pathogenic form, influenza in chickens and turkeys produces a sudden appearance of severe symptoms and almost 100% mortality within two days.[280] As the virus spreads rapidly in the crowded conditions seen in the intensive farming of chickens and turkeys, these outbreaks can cause large economic losses to poultry farmers.[citation needed]
179
+
180
+ An avian-adapted, highly pathogenic strain of H5N1 (called HPAI A(H5N1), for "highly pathogenic avian influenza virus of type A of subtype H5N1") causes H5N1 flu, commonly known as "avian influenza" or simply "bird flu", and is endemic in many bird populations, especially in Southeast Asia. This Asian lineage strain of HPAI A(H5N1) is spreading globally. It is epizootic (an epidemic in non-humans) and panzootic (a disease affecting animals of many species, especially over a wide area), killing tens of millions of birds and spurring the culling of hundreds of millions of other birds in an attempt to control its spread. Most references in the media to "bird flu" and most references to H5N1 are about this specific strain.[281][282]
181
+
182
+ HPAI A(H5N1) is an avian disease, and there is no evidence suggesting efficient human-to-human transmission of HPAI A(H5N1). In almost all cases, those infected have had extensive physical contact with infected birds.[283] H5N1 may mutate or reassort into a strain capable of efficient human-to-human transmission; the exact changes that are required for this to happen are not well understood.[284] Due to the high lethality and virulence of H5N1, its endemic presence, and its large and increasing biological host reservoir, the H5N1 virus was considered the world's major pandemic threat in the 2006–07 flu season, and billions of dollars have been raised and spent researching H5N1 and preparing for a potential influenza pandemic.[258]
183
+
184
+ In March 2013, the Chinese government reported three cases of H7N9 influenza infections in humans, two of whom had died and the third of whom became critically ill. Although the strain of the virus is not thought to spread efficiently between humans,[285][286] by mid-April at least 82 persons had become ill from H7N9, of whom 17 had died. These cases include three small family clusters in Shanghai and one cluster involving a neighboring girl and boy in Beijing, raising at least the possibility of human-to-human transmission. The WHO points out that one cluster did not have two of the cases laboratory-confirmed, and further notes, as a matter of baseline information, that some viruses are able to cause limited human-to-human transmission under conditions of close contact but are not transmissible enough to cause large community outbreaks.[287][288][289]
185
+
186
+ In pigs, swine influenza produces fever, lethargy, sneezing, coughing, difficulty breathing and decreased appetite.[290] In some cases the infection can cause abortion. Although mortality is usually low, the virus can produce weight loss and poor growth, causing economic loss to farmers.[290] Infected pigs can lose up to 12 pounds of body weight over a three- to four-week period.[290] Direct transmission of an influenza virus from pigs to humans is occasionally possible (this is called zoonotic swine flu). In all, 50 human cases are known to have occurred since the virus was identified in the mid-20th century, and these have resulted in six deaths.[291]
187
+
188
+ In 2009, a swine-origin H1N1 virus strain commonly referred to as "swine flu" caused the 2009 flu pandemic, but there is no evidence that it is endemic to pigs (i.e. actually a swine flu) or of transmission from pigs to people; instead, the virus spreads from person to person.[292][293] This strain is a reassortment of several strains of H1N1 that are usually found separately, in humans, birds, and pigs.[294]
189
+
en/2298.html.txt ADDED
@@ -0,0 +1,91 @@
1
+ Grey or gray (American English alternative; see spelling differences) is an intermediate color between black and white. It is a neutral color or achromatic color, meaning literally that it is a color "without color," because it can be composed of black and white.[2] It is the color of a cloud-covered sky, of ash and of lead.[3]
2
+
3
+ The first recorded use of grey as a color name in the English language was in AD 700.[4] Grey is the dominant spelling in European and Commonwealth English, although gray remained in common usage in the UK until the second half of the 20th century.[5] Gray has been the preferred American spelling since approximately 1825,[6] although grey is an accepted variant.[7][8]
4
+
5
+ In Europe and North America, surveys show that grey is the color most commonly associated with neutrality, conformity, boredom, uncertainty, old age, indifference, and modesty. Only one percent of respondents chose it as their favorite color.[9]
6
+
7
+ Grey comes from the Middle English grai or grei, from the Anglo-Saxon graeg, and is related to the Dutch grauw and grijs and German grau.[10] The first recorded use of grey as a color name in the English language was in AD 700.[4]
8
+
9
+ In antiquity and the Middle Ages, grey was the color of undyed wool, and thus was the color most commonly worn by peasants and the poor. It was also the color worn by Cistercian monks and friars of the Franciscan and Capuchin orders as a symbol of their vows of humility and poverty. Franciscan friars in England and Scotland were commonly known as the grey friars, and that name is now attached to many places in Great Britain.
10
+
11
+ During the Renaissance and the Baroque, grey began to play an important role in fashion and art. Black became the most popular color of the nobility, particularly in Italy, France, and Spain, and grey and white were harmonious with it.
12
+
13
+ Grey was also frequently used for the underpainting of oil paintings, a technique called grisaille. The painting would first be composed in grey and white, and then the colors, made with thin transparent glazes, would be added on top. The grisaille beneath would provide the shading, visible through the layers of color. Sometimes the grisaille was simply left uncovered, giving the appearance of carved stone.
14
+
15
+ Grey was a particularly good background color for gold and for skin tones. It became the most common background for the portraits of Rembrandt Van Rijn and for many of the paintings of El Greco, who used it to highlight the faces and costumes of the central figures. The palette of Rembrandt was composed almost entirely of somber colors. He composed his warm greys out of black pigments made from charcoal or burnt animal bones, mixed with lead white or a white made of lime, which he warmed with a little red lake color from cochineal or madder. In one painting, the portrait of Margaretha de Geer (1661), one part of a grey wall in the background is painted with a layer of dark brown over a layer of orange, red, and yellow earths, mixed with ivory black and some lead white. Over this he put an additional layer of glaze made of a mixture of blue smalt, red ochre, and yellow lake. Using these ingredients and many others, he made greys which had, according to art historian Philip Ball, "an incredible subtlety of pigmentation."[11] The warm, dark and rich greys and browns served to emphasize the golden light on the faces in the paintings.
16
+
17
+ Grey became a highly fashionable color in the 18th century, both for women's dresses and for men's waistcoats and coats. It looked particularly luminous coloring the silk and satin fabrics worn by the nobility and wealthy.
18
+
19
+ Women's fashion in the 19th century was dominated by Paris, while men's fashion was set by London. The grey business suit appeared in the mid-19th century in London (light grey in summer, dark grey in winter), replacing the more colorful palette of men's clothing early in the century.
20
+
21
+ The clothing of women working in the factories and workshops of Paris in the 19th century was usually grey. This gave them the name of grisettes. "Gris" or grey also meant drunk, and the name "grisette" was also given to the lower class of Parisian prostitutes.
22
+
23
+ Grey also became a common color for military uniforms; in an age of rifles with longer range, soldiers in grey were less visible as targets than those in blue or red. Grey was the color of the uniforms of the Confederate Army during the American Civil War, and of the Prussian Army for active service wear from 1910 onwards.
24
+
25
+ Several artists of the mid-19th century used tones of grey to create memorable paintings; Jean-Baptiste-Camille Corot used tones of green-grey and blue grey to give harmony to his landscapes, and James McNeill Whistler created a special grey for the background of the portrait of his mother, and for his own self-portrait.
26
+
27
+ Whistler's arrangement of tones of grey had an effect on the world of music, notably on the French composer Claude Debussy. In 1894, Debussy wrote to violinist Eugène Ysaÿe describing his Nocturnes as "an experiment in the combinations that can be obtained from one color – what a study in grey would be in painting."[12]
28
+
29
+ In the late 1930s, grey became a symbol of industrialization and war. It was the dominant color of Pablo Picasso's celebrated painting about the horrors of the Spanish Civil War, Guernica.[13]
30
+
31
+ After the war, the grey business suit became a metaphor for uniformity of thought, popularized in such books as The Man in the Gray Flannel Suit (1955), which became a successful film in 1956.[14]
32
+
33
+ The whiteness or darkness of clouds is a function of their depth. Small, fluffy white clouds in summer look white because the sunlight is being scattered by the tiny water droplets they contain, and that white light comes to the viewer's eye. However, as clouds become larger and thicker, the white light cannot penetrate through the cloud, and is reflected off the top. Clouds look darkest grey during thunderstorms, when they can be as much as 20,000 to 30,000 feet high.
34
+
35
+ Stratiform clouds are a layer of clouds that covers the entire sky and that can be from a few hundred to a few thousand feet thick. The thicker the clouds, the darker they appear from below, because little of the sunlight is able to pass through. From above, in an airplane, the same clouds look perfectly white, but from the ground the sky looks gloomy and grey.[15]
36
+
37
+ The color of a person's hair is created by the pigment melanin, found in the core of each hair. Melanin is also responsible for the color of the skin and of the eyes. There are only two types of pigment: dark (eumelanin) and light (phaeomelanin). Combined in various proportions, these pigments create all natural hair colors.
38
+
39
+ Melanin itself is the product of a specialized cell, the melanocyte, which is found in each hair follicle, from which the hair grows. As hair grows, the melanocyte injects melanin into the hair cells, which contain keratin, the protein that makes up our hair, skin, and nails. As long as the melanocytes continue injecting melanin into the hair cells, the hair retains its original color. At a certain age, however, which varies from person to person, the amount of melanin injected is reduced and eventually stops. The hair, without pigment, turns grey and eventually white. The reason for this decline in melanin production is uncertain. In the February 2005 issue of Science, a team of Harvard scientists suggested that the cause was the failure of the melanocyte stem cells to maintain the production of the essential pigments, due to age or genetic factors, after a certain period of time. For some people, the breakdown comes in their twenties; for others, many years later.[16] According to the website of the magazine Scientific American, "Generally speaking, among Caucasians 50 percent are 50 percent grey by age 50."[17] Adult male gorillas also develop silver hair, but only on their backs; see Physical characteristics of gorillas.
40
+
41
+ Christine Lagarde, head of the International Monetary Fund
42
+
43
+ Actor Donald Sutherland
44
+
45
+ Over the centuries, artists have traditionally created grey by mixing black and white in various proportions. They added a little red to make a warmer grey, or a little blue for a cooler grey. Artists could also make a grey by mixing two complementary colors, such as orange and blue.
46
+
47
+ Today the grey on televisions, computer displays, and telephones is usually created using the RGB color model. Red, green, and blue light combined at full intensity on the black screen makes white; by lowering the intensity, it is possible to create shades of grey.
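+
+ As a concrete illustration of the paragraph above (the helper function is hypothetical, but the RGB arithmetic is standard): any color whose red, green, and blue components are equal is a shade of grey, from black at 0 to white at 255.
+
+ def grey(intensity):
+     """Return the RGB triple and hex code for a grey of the given intensity (0-255)."""
+     return (intensity,) * 3, "#" + f"{intensity:02x}" * 3
+
+ for level in (0, 64, 128, 192, 255):
+     print(level, *grey(level))
+ # 0   (0, 0, 0)        #000000  (black)
+ # 128 (128, 128, 128)  #808080  (mid grey)
+ # 255 (255, 255, 255)  #ffffff  (white)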
48
+
49
+ In printing, grey is usually obtained with the CMYK color model, using cyan, magenta, yellow, and black. Grey is produced either by using black and white, or by combining equal amounts of cyan, magenta, and yellow. Most greys have a cool or warm cast to them, as the human eye can detect even a minute amount of color saturation. Yellow, orange, and red create a "warm grey". Green, blue, and violet create a "cool grey".[18] When no color is added, the color is "neutral grey", "achromatic grey", or simply "grey". Images consisting wholly of black, white and greys are called monochrome, black-and-white, or greyscale.
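+
+ A sketch of the warm/cool distinction translated into RGB terms (an assumption made for illustration; the paragraph describes it in terms of printing inks): nudging a neutral grey toward red warms it, and toward blue cools it.
+
+ def tinted_grey(level, warmth):
+     """warmth > 0 shifts the grey toward red (warm), warmth < 0 toward blue (cool)."""
+     clamp = lambda v: min(255, max(0, v))
+     return (clamp(level + warmth), level, clamp(level - warmth))
+
+ print(tinted_grey(128, 0))    # (128, 128, 128): neutral grey
+ print(tinted_grey(128, 12))   # (140, 128, 116): warm grey
+ print(tinted_grey(128, -12))  # (116, 128, 140): cool grey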
50
+
51
+ There are several tones of grey available for use with HTML and Cascading Style Sheets (CSS) as named colors, while 254 true greys are available by specifying a hex triplet for the RGB value. All of the named colors are spelled gray; using the spelling grey can cause errors. This spelling was inherited from the X11 color list. Internet Explorer's Trident browser engine does not recognize grey and renders it green. Another anomaly is that gray is in fact much darker than the X11 color named darkgray; this is because of a conflict between the original HTML gray and the X11 gray, which is closer to HTML's silver. The three slategray colors are not themselves on the greyscale, but are slightly saturated toward cyan (green + blue). Since there is an even number (256, including black and white) of unsaturated tones of grey, two grey tones straddle the midpoint in the 8-bit greyscale. The color name gray has been assigned the lighter of the two shades (128, also known as #808080), due to rounding up.
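+
+ The darkgray anomaly described above is easy to verify from the hex values of these named colors (the values below are the ones commonly listed for X11/CSS; worth checking against a current CSS reference, since browser behavior has changed over the years):
+
+ named_greys = {
+     "dimgray":   0x696969,
+     "gray":      0x808080,  # the original HTML "gray"
+     "darkgray":  0xA9A9A9,  # X11 "darkgray" -- lighter than "gray"
+     "silver":    0xC0C0C0,  # HTML "silver", closer to X11's idea of grey
+     "lightgray": 0xD3D3D3,
+ }
+
+ # Sorting by brightness makes the anomaly obvious: "darkgray" lands
+ # above "gray" instead of below it.
+ for name, value in sorted(named_greys.items(), key=lambda kv: kv[1]):
+     print(f"{name:9s} #{value:06x}")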
52
+
53
+ Until the 19th century, artists traditionally created grey by simply combining black and white. Rembrandt Van Rijn, for instance, usually used lead white and either carbon black or ivory black, along with touches of either blues or reds to cool or warm the grey.
54
+
55
+ In the early 19th century, a new grey, Payne's grey, appeared on the market. Payne's grey is a dark blue-gray, a mixture of ultramarine and black or of ultramarine and sienna. It is named after William Payne, a British artist who painted watercolors in the late 18th century. The first recorded use of Payne's grey as a color name in English was in 1835.[19]
56
+
57
+ Grey is a very common color for animals, birds, and fish, ranging in size from whales to mice. It provides a natural camouflage and allows them to blend with their surroundings.
58
+
59
+ The substance that composes the brain is sometimes referred to as grey matter, or "the little grey cells", so the color grey is associated with things intellectual. However, the living human brain is actually pink in color; it only turns grey when dead.
60
+
61
+ Grey goo is a hypothetical end-of-the-world scenario, also known as ecophagy: out-of-control self-replicating nanobots consume all living matter on Earth while building more of themselves.[20]
62
+
63
+ In sound engineering, grey noise is random noise subjected to a psychoacoustic equal loudness curve, such as an inverted A-weighting curve, over a given range of frequencies, giving the listener the perception that it is equally loud at all frequencies.
64
+
65
+ In the Christian religion, grey is the color of ashes, and so a biblical symbol of mourning and repentance, described as sackcloth and ashes. It can be used during Lent or on special days of fasting and prayer. As the color of humility and modesty, grey is worn by friars of the Capuchin and Franciscan orders, as well as by monks of the Cistercian order.[21] Grey cassocks are worn by clergy of the Brazilian Catholic Apostolic Church.
66
+
67
+ Buddhist monks and priests in Japan and Korea will often wear a sleeved grey, brown, or black outer robe.
68
+
69
+ Taoist priests in China also often wear grey.
70
+
71
+ Grey is rarely used as a color by political parties, largely because of its common association with conformity, boredom and indecision. An example of a political party using grey as a color is the German Grey Panthers.
72
+
73
+ The term "grey power" or "the grey vote" is sometimes used to describe the influence of older voters as a voting bloc. In the United States, older people are more likely to vote, and usually vote to protect certain social benefits, such as Social Security.[22][23]
74
+
75
+ Greys is a term sometimes used pejoratively by environmentalists in the green movement to describe those who oppose environmental measures and supposedly prefer the grey of concrete and cement.
76
+
77
+ During the American Civil War, the soldiers of the Confederate Army wore grey uniforms. At the beginning of the war, the armies of the North and of the South had very similar uniforms; some Confederate units wore blue, and some Union units wore grey. There naturally was confusion, and sometimes soldiers fired by mistake at soldiers of their own army. On June 6, 1861, the Confederate government issued regulations standardizing the army uniform and establishing cadet grey as the uniform color. This was (and still is) the color of the uniform of cadets at the United States Military Academy at West Point, and of cadets at the Virginia Military Institute, which produced many officers for the Confederacy.
78
+
79
+ The new uniforms were designed by Nicola Marschall, a German-American artist, who also designed the original Confederate flag. He closely followed the design of contemporary French and Austrian military uniforms.[24] Grey was not chosen for its camouflage value, which was not appreciated for several more decades, but because the South did not have a major dye industry and grey dyes were inexpensive and easy to manufacture. While some units had uniforms colored with good-quality dyes, which were a solid bluish-grey, others had uniforms colored with vegetable dyes made from sumac or logwood, which quickly faded in sunshine to the yellowish color of butternut squash.
80
+
81
+ The German Army wore grey uniforms from 1907 until 1945, during both the First World War and Second World War. The color chosen was a grey-green called field grey (German: feldgrau). It was chosen because it was less visible at a distance than the previous German uniforms, which were Prussian blue. It was one of the first uniform colors to be chosen for its camouflage value, important in the new age of smokeless powder and more accurate rifles and machine guns. It gave the Germans a distinct advantage at the beginning of the First World War, when the French soldiers were dressed in blue jackets and red trousers. The Finnish Army also began using grey uniforms on the German model.
82
+
83
+ Some of the more recent uniforms of the German Army and East German Army were field grey, as were some uniforms of the Swedish army. The formal dress (M/83) of the Finnish Army is grey. The Army of Chile wears field grey today.
84
+
85
+ During the 19th century, women's fashions were largely dictated by Paris, while London set fashions for men. The intent of a business suit was above all to show seriousness, and to show one's position in business and society. Over the course of the century, bright colors disappeared from men's fashion, and were largely replaced by a black or dark charcoal grey frock coat in winter, and lighter greys in summer. In the early 20th century, the frock coat was gradually replaced by the lounge suit, a less formal version of evening dress, which was also usually black or charcoal grey. In the 1930s the English suit style was called the drape suit, with wide shoulders and a nipped waist, usually dark or light grey. After World War II, the style changed to a slimmer fit called the continental cut, but the color remained grey.[25]
86
+
87
+ In America and Europe, grey is one of the least popular colors; in a European survey, only one percent of men said it was their favorite color, and thirteen percent called it their least favorite color; the response from women was almost the same. According to color historian Eva Heller, "grey is too weak to be considered masculine, but too menacing to be considered a feminine color. It is neither warm nor cold, neither material nor spiritual. With grey, nothing seems to be decided."[27] It also denotes undefinedness, as in a grey area.
88
+
89
+ Grey is the color most commonly associated in many cultures with the elderly and old age, because of the association with grey hair; it symbolizes the wisdom and dignity that come with experience and age. The New York Times is sometimes called The Grey Lady because of its long history and esteemed position in American journalism.[28]
90
+
91
+ Grey is the color most often associated in Europe and America with modesty.[29]
en/2299.html.txt ADDED
@@ -0,0 +1,91 @@
en/23.html.txt ADDED
@@ -0,0 +1,143 @@
1
+
2
+
8
+ Hand-pumped:
9
+ Bandoneon, concertina, flutina, garmon, trikitixa, Indian harmonium
10
+
11
+ Foot-pumped:
12
+ Harmonium, reed organ
13
+
14
+ Mouth-blown:
15
+ Claviola, melodica, harmonica, Laotian khene, Chinese shēng, Japanese shō
16
+
18
+
19
+ Accordions (from 19th-century German Akkordeon, from Akkord—"musical chord, concord of sounds")[1] are a family of box-shaped musical instruments of the bellows-driven free-reed aerophone type, colloquially referred to as a squeezebox. A person who plays the accordion is called an accordionist. The concertina and bandoneón are related. The harmonium and American reed organ are in the same family, but are typically larger than an accordion and sit on a surface or the floor.
20
+
21
+ The accordion is played by compressing or expanding the bellows while pressing buttons or keys, causing pallets to open, which allow air to flow across strips of brass or steel, called reeds. These vibrate to produce sound inside the body. Valves on opposing reeds of each note are used to make the instrument's reeds sound louder without air leaking from each reed block.[notes 1] The performer normally plays the melody on buttons or keys on the right-hand manual, and the accompaniment, consisting of bass and pre-set chord buttons, on the left-hand manual.
22
+
23
+ The accordion is widespread across the world because of waves of immigration from Europe to the Americas and other regions. In some countries (for example Brazil,[2][3] Colombia, the Dominican Republic, Mexico and Panama) it is used in popular music (for example Gaucho, Forró and Sertanejo in Brazil, Vallenato in Colombia, and norteño in Mexico), whereas in other regions (such as Europe, North America and other countries in South America) it tends to be used for dance-pop and folk music.
24
+
25
+ In Europe and North America, some popular music acts also make use of the instrument. Additionally, the accordion is used in Cajun, zydeco, and jazz music, and in both solo and orchestral performances of classical music. The piano accordion is the official city instrument of San Francisco, California.[4] Many conservatories in Europe have classical accordion departments. The oldest name for this group of instruments is harmonika, from the Greek harmonikos, meaning "harmonic, musical". Today, native versions of the name accordion are more common. These names refer to the type of accordion patented by Cyrill Demian, which concerned "automatically coupled chords on the bass side".[5]
26
+
27
+ Accordions have many configurations and types. What may be easy to do with one type of accordion could be technically challenging or impossible with another, and proficiency with one layout may not translate to another.
28
+
29
+ The most obvious difference between accordions is their right-hand manuals. Piano accordions use a piano-style musical keyboard, while button accordions use a buttonboard. Button accordions are furthermore differentiated by their usage of a chromatic or diatonic buttonboard for the right-hand manual.[6]
30
+
31
+ Accordions may be either bisonoric, producing different pitches depending on the direction of bellows movement, or unisonoric, producing the same pitch in both directions. Piano accordions are unisonoric. Chromatic button accordions also tend to be unisonoric, while diatonic button accordions tend to be bisonoric,[7] though notable exceptions exist.[8]
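+
+ The bisonoric/unisonoric distinction can be stated compactly in code (the note names below are invented for illustration and do not describe any real button layout):
+
+ bisonoric_button  = {"push": "C4", "pull": "D4"}  # pitch depends on bellows direction
+ unisonoric_button = {"push": "C4", "pull": "C4"}  # same pitch either way
+
+ def sounded_pitch(button, bellows_direction):
+     """Return the pitch a button sounds for a given bellows direction ("push"/"pull")."""
+     return button[bellows_direction]
+
+ print(sounded_pitch(bisonoric_button, "push"), sounded_pitch(bisonoric_button, "pull"))  # C4 D4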
32
+
33
+ Accordion size is not standardized, and may vary significantly from model to model. Accordions vary not only in their dimensions and weight, but also in number of buttons or keys present in the right- and left-hand manuals. For example, piano accordions may have as few as 12 bass buttons, or up to 120 (and even beyond this in rare cases). Accordions also vary by their available registers and by their specific tuning and voicing.
34
+
35
+ Despite these differences, all accordions share a number of common components.
36
+
37
+ The bellows is the most recognizable part of the instrument, and the primary means of articulation. The production of sound in an accordion is in direct proportion to the motion of the bellows by the player. In a sense, the role of the bellows can be compared to the role of moving a violin's bow on bowed strings. For a more direct analogy, the bellows can be compared to the role of breathing for a singer. The bellows is located between the right- and left-hand manuals, and is made from pleated layers of cloth and cardboard, with added leather and metal.[9] It is used to create pressure and vacuum, driving air across the internal reeds and producing sound by their vibrations; greater applied pressure increases the volume.
38
+
39
+ The keyboard touch is not expressive and does not affect dynamics: all expression is effected through the bellows. Bellows effects include:
40
+
41
+ The accordion's body consists of two wooden boxes joined together by the bellows. These boxes house reed chambers for the right- and left-hand manuals. Each side has grilles in order to facilitate the transmission of air in and out of the instrument, and to allow the sound to project better. The grille for the right-hand manual is usually larger and is often shaped for decorative purposes. The right-hand manual is normally used for playing the melody and the left-hand manual for playing the accompaniment; however, skilled players can reverse these roles and play melodies with the left hand.[notes 2]
42
+
43
+ The size and weight of an accordion vary with its type, layout and playing range, from small models with only one or two rows of basses and a single octave on the right-hand manual, through the standard 120-bass accordion, to large and heavy 160-bass free-bass converter models.
44
+
45
+ The accordion is an aerophone. The manual mechanism of the instrument either enables the air flow, or disables it:[notes 3]
46
+
47
+ The term accordion covers a wide range of instruments, with varying components. All instruments have reed ranks of some format, apart from reedless digital accordions. Not all have switches to change registers or ranks, as some have only one treble register and one bass register. The most typical accordion is the piano accordion, which is used for many musical genres. Another type of accordion is the button accordion, which is used in musical traditions including Cajun, Conjunto and Tejano music, Swiss and Austro-German Alpine music, and Argentinian tango music. The Helikon-style accordion has multiple flared horns projecting out of the left side to strengthen the bass tone. The word "Helikon" refers to a deep-pitched tuba.
48
+
49
+ Different systems exist for the right-hand manual of an accordion, which is normally used for playing the melody (though it can also play chords). Some use a button layout arranged in one way or another, while others use a piano-style keyboard. Each system has different benefits claimed by those who prefer it.[11] They are also used to define one accordion or another as a different "type":
50
+
51
+ A button key accordion made by the company Marrazza in Italy. It was brought by Italian immigrants to Australia as a reminder of their homeland.
52
+
53
+ A Weltmeister piano accordion by VEB Klingenthaler Harmonikawerke
54
+
55
+ Different systems are also in use for the left-hand manual, which is normally used for playing the accompaniment. These almost always use distinct bass buttons and often have buttons with concavities or studs to help the player navigate the layout despite not being able to see the buttons while playing. There are three general categories:
56
+
57
+ Inside the accordion are the reeds that generate the instrument's tones. These are organized in different sounding banks, which can be further combined into registers producing differing timbres. Each register stop produces a separate timbre, many of which also differ in octaves or in how different octaves are combined. See the accordion reed ranks and switches article for further explanation and audio samples. All but the smallest accordions are equipped with treble switches that control which combination of reed banks operates, organized from high to low registers; the larger and more expensive accordions often also have bass switches to give options for the reed bank on the bass side.
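+ As a rough sketch of how reed banks combine into registers, consider the model below. The bank names and octave offsets are common conventions rather than a fixed standard, and the register list is only a small illustrative sample:
+
+     # Reed banks by customary pitch level: L = 16' (an octave below written
+     # pitch), M = 8' (written pitch), H = 4' (an octave above).
+     BANKS = {"L": -12, "M": 0, "H": +12}   # offsets in semitones
+
+     REGISTERS = {
+         "bassoon":  ["L"],
+         "clarinet": ["M"],
+         "piccolo":  ["H"],
+         "master":   ["L", "M", "H"],       # all banks, like an organ "tutti"
+     }
+
+     def sounding_notes(midi_note, register):
+         """MIDI notes heard when one key is pressed under a given register."""
+         return [midi_note + BANKS[bank] for bank in REGISTERS[register]]
+
+     print(sounding_notes(60, "master"))    # middle C sounds as [48, 60, 72]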
58
+
59
+ In describing or pricing an accordion, the first factor is size, expressed in the number of keys on either side. For a piano type, this could, for example, be 37/96, meaning 37 treble keys (three octaves plus one note) on the treble side and 96 bass keys. A second aspect of size is the width of the white keys: even accordions with the same number of keys can have keyboards of different lengths, ranging from 14 inches (36 cm) for a child's accordion to 19 inches (48 cm) for an adult-sized instrument. After size, the price and weight of an accordion depend largely on the number of reed ranks on either side, whether mounted in a cassotto or not, and to a lesser degree on the number of combinations available through register switches.
60
+
61
+ Price is also affected by the use of costly woods, luxury decorations, and features such as a palm switch, grille mute, and so on. Some accordion makers sell the same model in a range of versions, from a less-expensive base model to a more costly luxury model. Typically, the register switches are described as Reeds: 5 + 3, meaning five reeds on the treble side and three on the bass, and Registers: 13 + M, 7, meaning 13 register buttons on the treble side plus a special "master" that activates all ranks, like the "tutti" or "full organ" switch on an organ, and seven register switches on the bass side. Another factor affecting the price is the presence of electronics, such as condenser microphones, volume and tone controls, or MIDI sensors and connections.
62
+
63
+ The larger piano and chromatic button accordions are usually heavier than other, smaller squeezeboxes, and are equipped with two shoulder straps to make it easier to balance the weight and increase bellows control while sitting, and to avoid dropping the instrument while standing. Other accordions, such as the diatonic button accordion, have only a single shoulder strap and a right-hand thumb strap. All accordions have a (usually adjustable) leather strap on the left-hand manual to keep the player's hand in position while drawing the bellows. There are also straps above and below the bellows to keep it securely closed when the instrument is not being played.
64
+
65
+ In the 2010s, a range of electronic and digital accordions came onto the market. They have an electronic sound module which creates the accordion sound, and most use MIDI systems to encode the keypresses and transmit them to the sound module. A digital accordion can have hundreds of sounds, which can include different types of accordions and even non-accordion sounds, such as pipe organ, piano, or guitar. Sensors, such as magnetic reed switches, are used on the buttons and keys; sensors are also used on the bellows to transmit their pushing and pulling to the sound module. Digital accordions may have features not found in acoustic instruments, such as a piano-style sustain pedal, a modulation control for changing keys, and a portamento effect.
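+ As an illustration of the MIDI encoding mentioned above, each key press is typically transmitted as a standard three-byte note-on message. The sketch below builds one from the published MIDI byte layout; the fixed velocity and the use of CC 11 for bellows pressure are assumptions for the example, not a description of any particular product:
+
+     def note_on(note, velocity, channel=0):
+         """Build a 3-byte MIDI note-on message: status byte, note, velocity."""
+         assert 0 <= note <= 127 and 0 <= velocity <= 127 and 0 <= channel <= 15
+         return bytes([0x90 | channel, note, velocity])
+
+     # Middle C (MIDI note 60). An accordion key has no velocity of its own,
+     # so a digital accordion might send a fixed value here and convey bellows
+     # pressure separately, for example as MIDI CC 11 (expression).
+     print(note_on(60, 100).hex())  # 903c64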
66
+
67
+ As an electronic instrument, these types of accordions are plugged into a PA system or keyboard amplifier to produce sound. Some digital accordions have a small internal speaker and amplifier, so they can be used without a PA system or keyboard amplifier, at least for practicing and small venues like coffeehouses. One benefit of electronic accordions is that they can be practiced with headphones, making them inaudible to other people nearby. On a digital accordion, the volume of the right-hand keyboard and the left-hand buttons can be independently adjusted.
68
+
69
+ Acoustic-digital hybrid accordions also exist. They are acoustic accordions (with reeds, bellows, and so on), but they also contain sensors, electronics, and MIDI connections, which provides a wider range of sound options. An acoustic-digital hybrid may be manufactured in this form, or it may be an acoustic accordion which has had aftermarket electronics sensors and connections added. Several companies sell aftermarket electronics kits, but they are typically installed by professional accordion technicians, due to the complex and delicate nature of the internal parts of an accordion.
70
+
71
+ Various hybrid accordions have been created between instruments of different buttonboards and actions. Many remain curiosities – only a few have remained in use:
72
+
73
+ The accordion's basic form is believed to have been invented in Berlin, in 1822, by Christian Friedrich Ludwig Buschmann,[notes 4][13] although one instrument has been recently discovered that appears to have been built earlier.[notes 5][14][15]
74
+
75
+ The earliest history of the accordion in Russia is poorly documented. Nevertheless, according to Russian researchers, the earliest known simple accordions were made in Tula, Russia, by Ivan Sizov and Timofey Vorontsov around 1830, after they received an early accordion from Germany.[16] By the late 1840s, the instrument was already very widespread;[17] together the factories of the two masters were producing 10,000 instruments a year. By 1866, over 50,000 instruments were being produced yearly by Tula and neighbouring villages, and by 1874 the yearly production was over 700,000.[18] By the 1860s, Novgorod, Vyatka and Saratov governorates also had significant accordion production. By the 1880s, the list included Oryol, Ryazan, Moscow, Tver, Vologda, Kostroma, Nizhny Novgorod and Simbirsk, and many of these places created their own varieties of the instrument.[19]
76
+
77
+ The accordion is one of several European inventions of the early 19th century that use free reeds driven by a bellows. An instrument called accordion was first patented in 1829 by Cyrill Demian, of Armenian origin, in Vienna.[notes 6] Demian's instrument bore little resemblance to modern instruments. It only had a left hand buttonboard, with the right hand simply operating the bellows. One key feature for which Demian sought the patent was the sounding of an entire chord by depressing one key. His instrument also could sound two different chords with the same key, one for each bellows direction (a bisonoric action). At that time in Vienna, mouth harmonicas with Kanzellen (chambers) had already been available for many years, along with bigger instruments driven by hand bellows. The diatonic key arrangement was also already in use on mouth-blown instruments. Demian's patent thus covered an accompanying instrument: an accordion played with the left hand, opposite to the way that contemporary chromatic hand harmonicas were played, small and light enough for travelers to take with them and used to accompany singing. The patent also described instruments with both bass and treble sections, although Demian preferred the bass-only instrument owing to its cost and weight advantages.[notes 7]
78
+
79
+ The accordion was introduced from Germany into Britain in about the year 1828.[20] The instrument was noted in The Times in 1831 as one new to British audiences[21] and was not favourably reviewed, but nevertheless it soon became popular.[22] It had also become popular with New Yorkers by the mid-1840s.[23]
80
+
81
+ After Demian's invention, other accordions appeared, some featuring only a right-hand keyboard for playing melodies. It took English inventor Charles Wheatstone to bring both chords and keyboard together in one squeezebox. His 1844 patent for what he called a concertina also featured the ability to easily tune the reeds from the outside with a simple tool.
82
+
83
+ The musician Adolph Müller described a great variety of instruments in his 1833 book Schule für Accordion. At the time, Vienna and London had a close musical relationship, with musicians often performing in both cities in the same year, so it is possible that Wheatstone was aware of this type of instrument and used it to put his key-arrangement ideas into practice.
84
+
85
+ Jeune's flutina resembles Wheatstone's concertina in internal construction and tone colour, but it appears to complement Demian's accordion functionally. The flutina is a one-sided bisonoric melody-only instrument whose keys are operated with the right hand while the bellows is operated with the left. When the two instruments are combined, the result is quite similar to diatonic button accordions still manufactured today.
86
+
87
+ Further innovations followed and continue to the present. Various buttonboard and keyboard systems have been developed, as well as voicings (the combination of multiple tones at different octaves), with mechanisms to switch between different voices during performance, and different methods of internal construction to improve tone, stability and durability. Modern accordions may incorporate electronics such as condenser microphones and tone and volume controls, so that the accordion can be plugged into a PA system or keyboard amplifier for live shows. Some 2010s-era accordions may incorporate MIDI sensors and circuitry, enabling the accordion to be plugged into a synth module and produce accordion sounds or other synthesized instrument sounds, such as piano or organ.
88
+
89
+ The accordion has traditionally been used to perform folk or ethnic music, popular music, and transcriptions from the operatic and light-classical music repertoire.[24] Today the instrument is sometimes heard in contemporary pop styles, such as rock and pop-rock,[25] and occasionally even in serious classical music concerts, as well as advertisements.
90
+
91
+ The accordion's popularity spread rapidly: it has mostly been associated with the common people, and was propagated by Europeans who emigrated around the world. The accordion in both button and piano forms became a favorite of folk musicians[26] and has been integrated into traditional music styles all over the world: see the list of music styles that incorporate the accordion.
92
+
93
+ Early jazz accordionists include Charles Melrose, who recorded Wailing Blues/Barrel House Stomp (1930, Voc. 1503) with the Cellar Boys; Buster Moten, who played second piano and accordion in the Bennie Moten orchestra; and Jack Cornell, who made recordings with Irving Mills. Later jazz accordionists from the United States include Steve Bach, Milton DeLugg, Orlando DiGirolamo, Dominic Frontiere, Guy Klucevsek, Yuri Lemeshev, Frank Marocco, John Serry Sr., Lee Tomboulian, and Art Van Damme. French jazz accordionists include Richard Galliano, Bernard Lubat, and Vincent Peirani. Norwegian jazz accordionists include Asmund Bjørken, Stian Carstensen, Gabriel Fliflet, Frode Haltli, and Eivin One Pedersen.
94
+
95
+ While the accordion's left-hand preset chord buttons are limited to triads and seventh chords (the dominant seventh and the diminished seventh), jazz accordionists expand the range of chord possibilities by using more than one chord button simultaneously, or by combining a chord button with a bass note other than the typical root of the chord. An example of the former technique is used to play a minor seventh chord: to play an A minor seventh chord (with an added ninth), the A minor and E minor preset buttons are pressed simultaneously, along with an A bass note. An example of the latter technique is used to play the half-diminished chord: to play an E half-diminished seventh chord, a G minor preset button is pressed along with an E bass note.
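+ The chord arithmetic above can be sketched in a few lines of Python; this is only an illustration of the pitch-class unions involved, not accordion software:
+
+     NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
+
+     def preset(root, quality):
+         """Pitch classes sounded by a major or minor preset chord button."""
+         third = 3 if quality == "minor" else 4
+         return {root % 12, (root + third) % 12, (root + 7) % 12}
+
+     def combine(bass, *buttons):
+         """Union of a bass note with one or more preset chord buttons."""
+         tones = {bass % 12}
+         for root, quality in buttons:
+             tones |= preset(root, quality)
+         return [NOTE_NAMES[t] for t in sorted(tones)]
+
+     A, E, G = 9, 4, 7
+     print(combine(A, (A, "minor"), (E, "minor")))  # C E G A B: an A minor ninth
+     print(combine(E, (G, "minor")))                # D E G A# (Bb): E half-diminished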
96
+
97
+ The accordion appeared in popular music from the 1900s to the 1960s. This half-century is often called the "golden age of the accordion".[27] Six players – Pietro Frosini, the brothers Count Guido Deiro and Pietro Deiro, the Slovenian brothers Vilko Ovsenik and Slavko Avsenik, and Charles Magnante – were major influences at this time.[28]
98
+
99
+ Most vaudeville theaters closed during the Great Depression, but accordionists during the 1930s–1950s taught and performed for radio. Included among this group was the concert virtuoso John Serry, Sr.[29][30][31] During the 1950s through the 1980s the accordion received significant exposure on television with performances by Myron Floren on The Lawrence Welk Show.[32] In the late 1950s and early 1960s, the accordion declined in popularity due to the rise of rock and roll.[33] The first accordionist to appear and perform at the Newport Jazz Festival was Angelo DiPippo. He can be seen playing his accordion in the motion picture The Godfather. He also composed and performed with his accordion on part of the soundtrack of Woody Allen's movie To Rome With Love. He was featured twice on The Tonight Show with Johnny Carson.
100
+
101
+ Richard Galliano is an internationally known accordionist whose repertoire covers jazz, tango nuevo, latin, and classical. Some popular bands use the instrument to create distinctive sounds. A notable example is Grammy Award-winning parodist "Weird Al" Yankovic, who plays the accordion on many of his musical tracks, particularly his polkas. Yankovic was trained in the accordion as a child.[34]
102
+
103
+ The accordion has also been used in the rock genre, most notably by John Linnell of They Might Be Giants, appearing more prominently in the band's earlier works.[35] The instrument is still frequently used during live performances, and continues to make appearances in their studio albums. The accordion is also used in the music of the Dropkick Murphys and Gogol Bordello.
104
+
105
+ Accordionists in heavy metal music appear most often in the folk metal subgenre and are otherwise generally rare. Even there, full-time accordionists are uncommon; the instrument is more often brought in for studio work, since flexible keyboardists are usually more practical for live performances. The Finnish symphonic folk-metal band Turisas formerly had a full-time accordionist, employing classical and polka sensibilities alongside a violinist. One of their accordionists, Netta Skog, is now a member of Ensiferum, another folk-metal band. Another Finnish metal band, Korpiklaani, draws on a type of Finnish polka called humppa, and also has a full-time accordionist. Sarah Kiener, the former hurdy-gurdy player for the Swiss melodic-death-folk metal band Eluveitie, played a Helvetic accordion known as a zugerörgeli.
106
+
107
+ Although best known as a folk instrument, the accordion has grown in popularity among classical composers. The earliest surviving concert piece is Thême varié très brillant pour accordéon methode Reisner, written in 1836 by Louise Reisner of Paris. Other composers, including the Russian Pyotr Ilyich Tchaikovsky, the Italian Umberto Giordano, and the American Charles Ives, wrote works for the diatonic button accordion.
108
+
109
+ The first composer to write specifically for the chromatic accordion was Paul Hindemith.[36] In 1922, the Austrian Alban Berg included an accordion in Wozzeck, Op. 7. In 1937 the first accordion concerto was composed in Russia. Other notable composers have written for the accordion during the first half of the 20th century.[37] Included among this group was the Italian-American John Serry Sr., whose Concerto for Free Bass Accordion was completed in 1964.[38][39] In addition, the American accordionist Robert Davine composed his Divertimento for Flute, Clarinet, Bassoon and Accordion as a work for chamber orchestra.[40] American composer William P. Perry featured the accordion in his orchestral suite Six Title Themes in Search of a Movie (2008). The experimental composer Howard Skempton began his musical career as an accordionist, and has written numerous solo works for it. In his work Drang (1999), British composer John Palmer pushed the expressive possibilities of the accordion/bayan. Luciano Berio wrote Sequenza XIII (1995) for accordionist Teodoro Anzellotti.[41] Accordionists like Mogens Ellegaard, Joseph Macerollo, Friedrich Lips, Hugo Noth, Stefan Hussong, Italo Salizzato, Teodoro Anzellotti, Mie Miki, and Geir Draugsvoll, encouraged composers to write new music for the accordion (solo and chamber music) and also started playing baroque music on the free bass accordion.
110
+
111
+ French composer Henri Dutilleux used an accordion in both his late song cycles Correspondances (2003) and Le Temps l'Horloge (2009). Russian-born composer Sofia Gubaidulina has composed solos, concertos, and chamber works for accordion. Astor Piazzolla's concert tangos are performed widely. Piazzolla performed on the bandoneon, but his works are performed on either bandoneon or accordion.
112
+
113
+ The earliest mention of the novel accordion instrument in Australian music occurs in the 1830s.[42]
114
+ The accordion initially competed against cheaper and more convenient reed instruments such as the mouth organ, concertina and melodeon.
115
+ Frank Fracchia was an Australian accordion composer[43] and copies of his works "My dear, can you come out tonight"[44] and "Dancing with you"[45] are preserved in Australian libraries.
116
+ Other Australian composers who arranged music for accordion include Reginald Stoneham.[46]
117
+ The popularity of the accordion peaked in the late 1930s[47] and continued until the 1950s.[48]
118
+ The accordion was particularly favoured by buskers.[49][50]
119
+
120
+ The accordion is a traditional instrument in Bosnia and Herzegovina, where it is considered a national instrument. It is the dominant instrument used in sevdalinka, a traditional genre of folk music from Bosnia and Herzegovina.
121
+
122
+ The accordion was brought to Brazil by settlers and immigrants from Europe, especially from Italy and Germany, who mainly settled in the south (Rio Grande do Sul, Santa Catarina and Paraná). The first instrument brought was a "concertina" (a 120-button chromatic accordion).[51] The instrument was popular in the 1950s, when it was common to find several accordions in the same house. There are many different configurations and tunings, adapted from the cultures that came from Europe.
123
+
124
+ The accordion is the official symbol instrument of the state of Rio Grande do Sul, where it was voted in unanimously by the state's chamber of deputies.[52]
125
+ During the accordion boom there were around 65 factories in Brazil, most of them (52) in the south, in Rio Grande do Sul state, with only 7 outside the south. One of the most famous and genuinely Brazilian brands was Acordeões Todeschini from Bento Gonçalves-RS, which closed in 1973. The Todeschini accordion is highly appreciated today, and very few people remain who can maintain one.[53][54] The most notable players of button accordions are Renato Borghetti, Adelar Bertussi, Albino Manique and Edson Dutra.[55]
126
+
127
+ Compared with its status in many other countries, the instrument is very popular in Brazilian mainstream pop music. In some parts of the country, such as the northeast, it is the most popular melodic instrument. As opposed to most European folk accordions, a very dry tuning is usually used in Brazil. Outside the south, the accordion (predominantly the piano accordion) is used as the principal instrument in almost all styles of forró (in particular in the subgenres of xote and baião), with Luiz Gonzaga (the "King of the Baião") and Dominguinhos among the notable musicians in this style from the northeast. In this musical style the typical combination is a trio of accordion, triangle and zabumba (a type of drum).
128
+
129
+ This style has gained popularity recently, in particular among the student population of the southeast of the country (in the Forró Universitário genre, with important exponents today being Falamansa and trios such as Trio Dona Zefa, Trio Virgulino and Trio Alvorada). Moreover, the accordion is the principal instrument in Junina music (the music of the São João Festival), of which Mario Zan was a very important exponent. It is an important instrument in sertanejo (and caipira) music, which originated in the midwest and southeast of Brazil and subsequently gained popularity throughout the country.
130
+
131
+ The accordion is also a traditional instrument in Colombia, commonly associated with the vallenato and cumbia genres. It has been used by tropipop musicians such as Carlos Vives, Andrés Cabas, Fonseca and Bacilos, as well as by rock musicians such as Juanes and pop musicians such as Shakira. Vallenato emerged in the early twentieth century in the city of Valledupar and has come to symbolize the folk music of Colombia.
132
+
133
+ Every year in April, Colombia holds one of the most important musical festivals in the country: the Vallenato Legend Festival. The festival holds contests for best accordion player. Once every decade, the "King of Kings" accordion competition takes place, where winners of the previous festivals compete for the highest possible award for a vallenato accordion player: the Pilonera Mayor prize.[56] This is the world's largest competitive accordion festival.
134
+
135
+ Norteño, a genre related to the polka, relies heavily on the accordion. Ramón Ayala, known in Mexico as the "King of the Accordion", is a norteño musician. Cumbia, which also features the accordion, is popular as well, with musicians such as Celso Piña creating a more contemporary style. The U.S.-born Mexican musician Julieta Venegas incorporates the sound of the instrument into rock, pop and folk; she was influenced by her fellow Chicanos Los Lobos, who also use the accordion in their music.[57]
136
+
137
+ According to Barbara Demick in Nothing to Envy, the accordion is known as "the people's instrument" and all North Korean teachers were expected to learn the accordion.[58]
138
+
139
+ The most expensive accordions are typically fully handmade, particularly the reeds; completely handmade reeds are generally considered to have a better tonal quality than even the best machine-made ones. Some accordions have been modified by individuals striving to bring a purer sound out of low-end instruments, such as those improved by Yutaka Usui,[59] a Japanese craftsman.
140
+
141
+ The manufacture of an accordion is only a partly automated process. In a sense, all accordions are handmade, since some hand assembly of the small parts is always required. The general process involves making the individual parts, assembling the subsections, assembling the entire instrument, and final decorating and packaging.[60]
142
+
143
+ Well-known centres of production are the Italian cities of Stradella and Castelfidardo, with many small and medium-sized manufacturers, especially in the latter. Castelfidardo honours the memory of Paolo Soprani, who was one of the first large-scale producers. Maugein Freres has built accordions in the French town of Tulle since 1919, and the company is now the last manufacturer in France to carry out the complete production process. German companies such as Hohner and Weltmeister made large numbers of accordions, but production diminished by the end of the 20th century. Hohner still manufactures its top-end models in Germany, and Weltmeister instruments are still handmade by HARMONA Akkordeon GmbH in Klingenthal.
en/230.html.txt ADDED
@@ -0,0 +1,215 @@
1
+
2
+
3
+ Andy Warhol (/ˈwɔːrhɒl/;[1] born Andrew Warhola; August 6, 1928 – February 22, 1987) was an American artist, film director, and producer who was a leading figure in the visual art movement known as pop art. His works explore the relationship between artistic expression, advertising, and celebrity culture that flourished by the 1960s, and span a variety of media, including painting, silkscreening, photography, film, and sculpture. Some of his best known works include the silkscreen paintings Campbell's Soup Cans (1962) and Marilyn Diptych (1962), the experimental film Chelsea Girls (1966), and the multimedia events known as the Exploding Plastic Inevitable (1966–67).
4
+
5
+ Born and raised in Pittsburgh, Warhol initially pursued a successful career as a commercial illustrator. After exhibiting his work in several galleries in the late 1950s, he began to receive recognition as an influential and controversial artist. His New York studio, The Factory, became a well-known gathering place that brought together distinguished intellectuals, drag queens, playwrights, Bohemian street people, Hollywood celebrities, and wealthy patrons.[2][3][4] He promoted a collection of personalities known as Warhol superstars, and is credited with inspiring the widely used expression "15 minutes of fame". In the late 1960s he managed and produced the experimental rock band The Velvet Underground and founded Interview magazine. He authored numerous books, including The Philosophy of Andy Warhol and Popism: The Warhol Sixties. He lived openly as a gay man before the gay liberation movement. After gallbladder surgery, Warhol died of cardiac arrhythmia in February 1987 at the age of 58.
6
+
7
+ Warhol has been the subject of numerous retrospective exhibitions, books, and feature and documentary films. The Andy Warhol Museum in his native city of Pittsburgh, which holds an extensive permanent collection of art and archives, is the largest museum in the United States dedicated to a single artist. Many of his creations are very collectible and highly valuable. The highest price ever paid for a Warhol painting is US$105 million for a 1963 canvas titled Silver Car Crash (Double Disaster); his works include some of the most expensive paintings ever sold.[5] A 2009 article in The Economist described Warhol as the "bellwether of the art market".[6]
8
+
9
+ Warhol was born on August 6, 1928, in Pittsburgh, Pennsylvania.[7] He was the fourth child of Ondrej Warhola (Americanized as Andrew Warhola, Sr., 1889–1942)[8][9] and Julia (née Zavacká, 1892–1972),[10] whose first child was born in their homeland of Austria-Hungary and died before their move to the U.S.
10
+
11
+ His parents were working-class Lemko[11][12] emigrants from Mikó, Austria-Hungary (now called Miková, located in today's northeastern Slovakia). Warhol's father emigrated to the United States in 1914, and his mother joined him in 1921, after the death of Warhol's grandparents. Warhol's father worked in a coal mine. The family lived at 55 Beelen Street and later at 3252 Dawson Street in the Oakland neighborhood of Pittsburgh.[13] The family was Ruthenian Catholic and attended St. John Chrysostom Byzantine Catholic Church. Andy Warhol had two elder brothers—Pavol (Paul), the eldest, was born before the family emigrated; Ján was born in Pittsburgh. Pavol's son, James Warhola, became a successful children's book illustrator.
12
+
13
+ In third grade, Warhol had Sydenham's chorea (also known as St. Vitus' Dance), a nervous system disease that causes involuntary movements of the extremities, which is believed to be a complication of scarlet fever and left him with blotchy skin pigmentation.[14] At times when he was confined to bed, he drew, listened to the radio and collected pictures of movie stars around his bed. Warhol later described this period as very important in the development of his personality, skill-set and preferences. When Warhol was 13, his father died in an accident.[15]
14
+
15
+ As a teenager, Warhol graduated from Schenley High School in 1945. Also as a teen, Warhol won a Scholastic Art and Writing Award.[16] After graduating from high school, his intentions were to study art education at the University of Pittsburgh in the hope of becoming an art teacher, but his plans changed and he enrolled in the Carnegie Institute of Technology, now Carnegie Mellon University in Pittsburgh, where he studied commercial art. During his time there, Warhol joined the campus Modern Dance Club and Beaux Arts Society.[17] He also served as art director of the student art magazine, Cano, illustrating a cover in 1948[18] and a full-page interior illustration in 1949.[19] These are believed to be his first two published artworks.[19] Warhol earned a Bachelor of Fine Arts in pictorial design in 1949.[20] Later that year, he moved to New York City and began a career in magazine illustration and advertising.
16
+
17
+ Warhol's early career was dedicated to commercial and advertising art, where his first commission had been to draw shoes for Glamour magazine in the late 1940s.[21] In the 1950s, Warhol worked as a designer for shoe manufacturer Israel Miller.[21][22] American photographer John Coplans recalled that
18
+
19
+ nobody drew shoes the way Andy did. He somehow gave each shoe a temperament of its own, a sort of sly, Toulouse-Lautrec kind of sophistication, but the shape and the style came through accurately and the buckle was always in the right place. The kids in the apartment [which Andy shared in New York – note by Coplans] noticed that the vamps on Andy's shoe drawings kept getting longer and longer but [Israel] Miller didn't mind. Miller loved them.
20
+
21
+ Warhol's "whimsical" ink drawings of shoe advertisements figured in some of his earliest showings at the Bodley Gallery in New York.
22
+
23
+ Warhol was an early adopter of the silk screen printmaking process as a technique for making paintings. A young Warhol was taught silk screen printmaking techniques by Max Arthur Cohn at his graphic arts business in Manhattan.[23] While working in the shoe industry, Warhol developed his "blotted line" technique, applying ink to paper and then blotting the ink while still wet, which was akin to a printmaking process on the most rudimentary scale. His use of tracing paper and ink allowed him to repeat the basic image and also to create endless variations on the theme, a method that prefigures his 1960s silk-screen canvases.[21] In his book Popism: The Warhol Sixties, Warhol writes: "When you do something exactly wrong, you always turn up something."[24]
24
+
25
+ Warhol habitually used the expedient of tracing photographs projected with an epidiascope.[25] Using prints by Edward Wallowitch, his "first boyfriend",[26] the photographs would undergo a subtle transformation during Warhol's often cursory tracing of contours and hatching of shadows. Warhol used Wallowitch's photograph Young Man Smoking a Cigarette (c. 1956)[27] for a 1958 book-cover design he submitted to Simon and Schuster for the Walter Ross pulp novel The Immortal, and he later used others for his dollar bill series[28][29] and for Big Campbell's Soup Can with Can Opener (Vegetable) (1962), which initiated Warhol's most sustained motif, the soup can.
26
+
27
+ With the rapid expansion of the record industry, RCA Records hired Warhol, along with another freelance artist, Sid Maurer, to design album covers and promotional materials.[30]
28
+
29
+ He began exhibiting his work during the 1950s. He held exhibitions at the Hugo Gallery[31] and the Bodley Gallery[32] in New York City; in California, his first West Coast gallery exhibition[33][34] was on July 9, 1962, in the Ferus Gallery of Los Angeles with Campbell's Soup Cans. The exhibition marked his West Coast debut of pop art.[35]
30
+ Andy Warhol's first New York solo pop art exhibition was hosted at Eleanor Ward's Stable Gallery November 6–24, 1962. The exhibit included the works Marilyn Diptych, 100 Soup Cans, 100 Coke Bottles, and 100 Dollar Bills. At the Stable Gallery exhibit, the artist met for the first time poet John Giorno who would star in Warhol's first film, Sleep, in 1963.[36]
31
+
32
+ It was during the 1960s that Warhol began to make paintings of iconic American objects such as dollar bills, mushroom clouds, electric chairs, Campbell's Soup Cans, Coca-Cola bottles, celebrities such as Marilyn Monroe, Elvis Presley, Marlon Brando, Troy Donahue, Muhammad Ali, and Elizabeth Taylor, as well as newspaper headlines or photographs of police dogs attacking African-American protesters during the Birmingham campaign in the civil rights movement. During these years, he founded his studio, "The Factory" and gathered about him a wide range of artists, writers, musicians, and underground celebrities. His work became popular and controversial. Warhol had this to say about Coca-Cola:
33
+
34
+ What's great about this country is that America started the tradition where the richest consumers buy essentially the same things as the poorest. You can be watching TV and see Coca-Cola, and you know that the President drinks Coca-Cola, Liz Taylor drinks Coca-Cola, and just think, you can drink Coca-Cola, too. A Coke is a Coke and no amount of money can get you a better Coke than the one the bum on the corner is drinking. All the Cokes are the same and all the Cokes are good. Liz Taylor knows it, the President knows it, the bum knows it, and you know it.[37]
35
+
36
+ New York City's Museum of Modern Art hosted a Symposium on pop art in December 1962 during which artists such as Warhol were attacked for "capitulating" to consumerism. Critics were scandalized by Warhol's open embrace of market culture. This symposium set the tone for Warhol's reception.
37
+
38
+ A pivotal event was the 1964 exhibit The American Supermarket, a show held in Paul Bianchini's Upper East Side gallery. The show was presented as a typical U.S. small supermarket environment, except that everything in it—from the produce, canned goods, meat, posters on the wall, etc.—was created by six prominent pop artists of the time, among them the controversial (and like-minded) Billy Apple, Mary Inman, and Robert Watts. Warhol's painting of a can of Campbell's soup cost $1,500 while each autographed can sold for $6. The exhibit was one of the first mass events that directly confronted the general public with both pop art and the perennial question of what art is.[38]
39
+
40
+ As an advertisement illustrator in the 1950s, Warhol used assistants to increase his productivity. Collaboration would remain a defining (and controversial) aspect of his working methods throughout his career; this was particularly true in the 1960s. One of the most important collaborators during this period was Gerard Malanga. Malanga assisted the artist with the production of silkscreens, films, sculpture, and other works at "The Factory", Warhol's aluminum foil-and-silver-paint-lined studio on 47th Street (later moved to Broadway). Other members of Warhol's Factory crowd included Freddie Herko, Ondine, Ronald Tavel, Mary Woronov, Billy Name, and Brigid Berlin (from whom he apparently got the idea to tape-record his phone conversations).[39]
41
+
42
+ During the 1960s, Warhol also groomed a retinue of bohemian and counterculture eccentrics upon whom he bestowed the designation "Superstars", including Nico, Joe Dallesandro, Edie Sedgwick, Viva, Ultra Violet, Holly Woodlawn, Jackie Curtis, and Candy Darling. These people all participated in the Factory films, and some—like Berlin—remained friends with Warhol until his death. Important figures in the New York underground art/cinema world, such as writer John Giorno and film-maker Jack Smith, also appear in Warhol films (many premiering at the New Andy Warhol Garrick Theatre and 55th Street Playhouse) of the 1960s, revealing Warhol's connections to a diverse range of artistic scenes during this time. Less well known was his support and collaboration with several teenagers during this era, who would achieve prominence later in life including writer David Dalton,[40] photographer Stephen Shore[41] and artist Bibbe Hansen (mother of pop musician Beck).[42]
43
+
44
+ On June 3, 1968, radical feminist writer Valerie Solanas shot Warhol and Mario Amaya, art critic and curator, at Warhol's studio.[43] Before the shooting, Solanas had been a marginal figure in the Factory scene. She authored in 1967 the S.C.U.M. Manifesto,[44] a separatist feminist tract that advocated the elimination of men; and appeared in the 1968 Warhol film I, a Man. Earlier on the day of the attack, Solanas had been turned away from the Factory after asking for the return of a script she had given to Warhol. The script had apparently been misplaced.[45]
45
+
46
+ Amaya received only minor injuries and was released from the hospital later the same day. Warhol was seriously wounded by the attack and barely survived: surgeons opened his chest and massaged his heart to help stimulate its movement again. He suffered physical effects for the rest of his life, including being required to wear a surgical corset.[14] The shooting had a profound effect on Warhol's life and art.[46][47]
47
+
48
+ Solanas was arrested the day after the assault, after turning herself in to police. By way of explanation, she said that Warhol "had too much control over my life." She was subsequently diagnosed with paranoid schizophrenia and eventually sentenced to three years under the control of the Department of Corrections. After the shooting the Factory scene heavily increased its security, and for many the "Factory 60s" ended.[47]
49
+
50
+ Warhol had this to say about the attack: "Before I was shot, I always thought that I was more half-there than all-there—I always suspected that I was watching TV instead of living life. People sometimes say that the way things happen in movies is unreal, but actually it's the way things happen in life that's unreal. The movies make emotions look so strong and real, whereas when things really do happen to you, it's like watching television—you don't feel anything. Right when I was being shot and ever since, I knew that I was watching television. The channels switch, but it's all television."[48]
51
+
52
+ Compared to the success and scandal of Warhol's work in the 1960s, the 1970s were a much quieter decade, as he became more entrepreneurial. According to Bob Colacello, Warhol devoted much of his time to rounding up new, rich patrons for portrait commissions—including Shah of Iran Mohammad Reza Pahlavi, his wife Empress Farah Pahlavi, his sister Princess Ashraf Pahlavi, Mick Jagger, Liza Minnelli, John Lennon, Diana Ross, and Brigitte Bardot.[49][50] Warhol's famous portrait of Chinese Communist leader Mao Zedong was created in 1973. He also founded, with Gerard Malanga, Interview magazine, and published The Philosophy of Andy Warhol (1975). An idea expressed in the book: "Making money is art, and working is art and good business is the best art."[51]
53
+
54
+ Warhol socialized at various nightspots in New York City, including Max's Kansas City and, later in the 1970s, Studio 54.[52] He was generally regarded as quiet, shy, and a meticulous observer. Art critic Robert Hughes called him "the white mole of Union Square."[53]
55
+
56
+ In 1979, along with his longtime friend Stuart Pivar, Warhol founded the New York Academy of Art.[54][55]
57
+
58
+ Warhol had a re-emergence of critical and financial success in the 1980s, partially due to his affiliation and friendships with a number of prolific younger artists, who were dominating the "bull market" of 1980s New York art: Jean-Michel Basquiat, Julian Schnabel, David Salle and other so-called Neo-Expressionists, as well as members of the Transavantgarde movement in Europe, including Francesco Clemente and Enzo Cucchi. Before the 1984 Sarajevo Winter Olympics, he teamed with 15 other artists, including David Hockney and Cy Twombly, and contributed a Speed Skater print to the Art and Sport collection. The Speed Skater was used for the official Sarajevo Winter Olympics poster.[56]
59
+
60
+ By this time, graffiti artist Fab Five Freddy paid homage to Warhol when he painted an entire train with Campbell's soup cans. This was instrumental in Freddy becoming involved in the underground NYC art scene and becoming an affiliate of Basquiat.[57]
61
+
62
+ By this period, Warhol was being criticized for becoming merely a "business artist".[58] In 1979, reviewers disliked his exhibits of portraits of 1970s personalities and celebrities, calling them superficial, facile and commercial, with no depth or indication of the significance of the subjects. They also criticized his 1980 exhibit of 10 portraits at the Jewish Museum in Manhattan, entitled Jewish Geniuses, which Warhol—who was uninterested in Judaism and Jews—had described in his diary as "They're going to sell."[58] In hindsight, however, some critics have come to view Warhol's superficiality and commerciality as "the most brilliant mirror of our times," contending that "Warhol had captured something irresistible about the zeitgeist of American culture in the 1970s."[58]
63
+
64
+ Warhol also had an appreciation for intense Hollywood glamour. He once said: "I love Los Angeles. I love Hollywood. They're so beautiful. Everything's plastic, but I love plastic. I want to be plastic."[59]
65
+
66
+ In 1984 Vanity Fair commissioned Warhol to produce a portrait of Prince, in order to accompany an article that celebrated the success of Purple Rain and its accompanying movie.[60] Referencing the many celebrity portraits produced by Warhol across his career, Orange Prince (1984) was created using a similar composition to the Marilyn "Flavors" series from 1962, which was among Warhol's very first celebrity portraits.[61] Prince is depicted in a pop color palette commonly used by Warhol, in bright orange with highlights of bright green and blue. The facial features and hair are screen-printed in black over the orange background.[62][63][64]
67
+
68
+ In the Andy Warhol Diaries, Warhol recorded how excited he was to see Prince and Billy Idol together at a party in the mid 1980s, and he compared them to the Hollywood movie stars of the 1950s and 1960s who also inspired his portraits: "... seeing these two glamour boys, its like boys are the new Hollywood glamour girls, like Jean Harlow and Marilyn Monroe".[65]
69
+
70
+ Warhol died in Manhattan at 6:32 a.m. on February 22, 1987, at age 58. According to news reports, he had been making a good recovery from gallbladder surgery at New York Hospital before dying in his sleep from a sudden post-operative irregular heartbeat.[66] Prior to his diagnosis and operation, Warhol delayed having his recurring gallbladder problems checked, as he was afraid to enter hospitals and see doctors.[54] His family sued the hospital for inadequate care, saying that the arrhythmia was caused by improper care and water intoxication.[67] The malpractice case was quickly settled out of court; Warhol's family received an undisclosed sum of money.[68]
71
+
72
+ Shortly before Warhol's death, doctors expected Warhol to survive the surgery, though a re-evaluation of the case about thirty years after his death showed many indications that Warhol's surgery was in fact riskier than originally thought.[69] It was widely reported at the time that Warhol died of a "routine" surgery, though when considering factors such as his age, a family history of gallbladder problems, his previous gunshot wound, and his medical state in the weeks leading up to the procedure, the potential risk of death following the surgery appeared to have been significant.[69]
73
+
74
+ Warhol's brothers took his body back to Pittsburgh, where an open-coffin wake was held at the Thomas P. Kunsak Funeral Home. The solid bronze casket had gold-plated rails and white upholstery. Warhol was dressed in a black cashmere suit, a paisley tie, a platinum wig, and sunglasses. He was laid out holding a small prayer book and a red rose. The funeral liturgy was held at the Holy Ghost Byzantine Catholic Church on Pittsburgh's North Side. The eulogy was given by Monsignor Peter Tay. Yoko Ono and John Richardson were speakers. The coffin was covered with white roses and asparagus ferns. After the liturgy, the coffin was driven to St. John the Baptist Byzantine Catholic Cemetery in Bethel Park, a south suburb of Pittsburgh.[70]
75
+
76
+ At the grave, the priest said a brief prayer and sprinkled holy water on the casket. Before the coffin was lowered, Paige Powell dropped a copy of Interview magazine, an Interview T-shirt, and a bottle of the Estee Lauder perfume "Beautiful" into the grave. Warhol was buried next to his mother and father. A memorial service was held in Manhattan for Warhol on April 1, 1987, at St. Patrick's Cathedral, New York.
77
+
78
+ By the beginning of the 1960s, pop art was an experimental form that several artists were independently adopting; some of these pioneers, such as Roy Lichtenstein, would later become synonymous with the movement. Warhol, who would become famous as the "Pope of Pop", turned to this new style, where popular subjects could be part of the artist's palette. His early paintings show images taken from cartoons and advertisements, hand-painted with paint drips that emulated the style of successful abstract expressionists (such as Willem de Kooning); his popular paintings of Marilyn Monroe came soon after. Warhol's first pop art paintings were displayed in April 1961, serving as the backdrop for New York department store Bonwit Teller's window display. This was the same stage his pop art contemporaries Jasper Johns, James Rosenquist and Robert Rauschenberg had also once graced.[71]
79
+
80
+ It was the gallerist Muriel Latow who came up with the ideas for both the soup cans and Warhol's dollar paintings. On November 23, 1961, Warhol wrote Latow a check for $50 which, according to the 2009 Warhol biography, Pop, The Genius of Warhol, was payment for coming up with the idea of the soup cans as subject matter.[72] For his first major exhibition, Warhol painted his famous cans of Campbell's soup, which he claimed to have had for lunch for most of his life. A 1964 Large Campbell's Soup Can was sold in a 2007 Sotheby's auction to a South American collector for £5.1 million ($7.4 million).[73]
81
+
82
+ He loved celebrities, so he painted them as well. From these beginnings he developed his later style and subjects. Instead of working on a signature subject matter, as he started out to do, he worked more and more on a signature style, slowly eliminating the handmade from the artistic process. Warhol frequently used silk-screening; his later drawings were traced from slide projections. At the height of his fame as a painter, Warhol had several assistants who produced his silk-screen multiples, following his directions to make different versions and variations.[74]
83
+
84
+ In 1979, Warhol was commissioned by BMW to paint a Group-4 race version of the then "elite supercar" BMW M1 for the fourth installment in the BMW Art Car Project. It was reported at the time that, unlike the three artists before him, Warhol opted to paint directly onto the automobile himself instead of letting technicians transfer his scale-model design to the car.[75] It was indicated that Warhol spent only a total of 23 minutes to paint the entire car.[76]
85
+
86
+ Warhol produced both comic and serious works; his subject could be a soup can or an electric chair. Warhol used the same techniques—silkscreens, reproduced serially, and often painted with bright colors—whether he painted celebrities, everyday objects, or images of suicide, car crashes, and disasters, as in the 1962–63 Death and Disaster series. The Death and Disaster paintings included Red Car Crash, Purple Jumping Man, and Orange Disaster. One of these paintings, the diptych Silver Car Crash, became his highest-priced work when it sold at Sotheby's Contemporary Art Auction on Wednesday, November 13, 2013, for $105.4 million.[77]
87
+
88
+ Some of Warhol's work, as well as his own personality, has been described as being Keatonesque. Warhol has been described as playing dumb to the media. He sometimes refused to explain his work. He has suggested that all one needs to know about his work is "already there 'on the surface'."[78]
89
+
90
+ His Rorschach inkblots are intended as pop comments on art and what art could be. His cow wallpaper (literally, wallpaper with a cow motif) and his oxidation paintings (canvases prepared with copper paint that was then oxidized with urine) are also noteworthy in this context. Equally noteworthy is the way these works—and their means of production—mirrored the atmosphere at Andy's New York "Factory". Biographer Bob Colacello provides some details on Andy's "piss paintings":
91
+
92
+ Victor ... was Andy's ghost pisser on the Oxidations. He would come to the Factory to urinate on canvases that had already been primed with copper-based paint by Andy or Ronnie Cutrone, a second ghost pisser much appreciated by Andy, who said that the vitamin B that Ronnie took made a prettier color when the acid in the urine turned the copper green. Did Andy ever use his own urine? My diary shows that when he first began the series, in December 1977, he did, and there were many others: boys who'd come to lunch and drink too much wine, and find it funny or even flattering to be asked to help Andy 'paint'. Andy always had a little extra bounce in his walk as he led them to his studio.[79]
93
+
94
+ Warhol's first portrait of Basquiat (1982) is a black photo-silkscreen over an oxidized copper "piss painting".
95
+
96
+ After many years of silkscreen, oxidation, photography, etc., Warhol returned to painting with a brush in hand in a series of more than 50 large collaborative works done with Jean-Michel Basquiat between 1984 and 1986.[80][81] Despite negative criticism when these were first shown, Warhol called some of them "masterpieces," and they were influential for his later work.[82]
97
+
98
+ Andy Warhol was commissioned in 1984 by collector and gallerist Alexander Iolas to produce work based on Leonardo da Vinci's The Last Supper for an exhibition at the old refectory of the Palazzo delle Stelline in Milan, opposite from the Santa Maria delle Grazie where Leonardo da Vinci's mural can be seen.[83] Warhol exceeded the demands of the commission and produced nearly 100 variations on the theme, mostly silkscreens and paintings, and among them a collaborative sculpture with Basquiat, the Ten Punching Bags (Last Supper).[84]
99
+ The Milan exhibition, which opened in January 1987 with a set of 22 silk-screens, was the last exhibition for both the artist and the gallerist.[85] The Last Supper series was seen by some as "arguably his greatest,"[86] but by others as "wishy-washy, religiose" and "spiritless."[87] It is the largest series of religious-themed works by any U.S. artist.[86]
100
+
101
+ Artist Maurizio Cattelan has described how difficult it is to separate daily encounters from the art of Andy Warhol: "That's probably the greatest thing about Warhol: the way he penetrated and summarized our world, to the point that distinguishing between him and our everyday life is basically impossible, and in any case useless." Warhol was an inspiration for Cattelan's magazine and photography compilations, such as Permanent Food, Charley, and Toilet Paper.[88]
102
+
103
+ In the period just before his death, Warhol was working on Cars, a series of paintings for Mercedes-Benz.[89]
104
+
105
+ A self-portrait by Andy Warhol (1963–64), which sold in New York at the May Post-War and Contemporary evening sale at Christie's, fetched $38.4 million.[90]
106
+
107
+ On May 9, 2012, his classic painting Double Elvis (Ferus Type) sold at auction at Sotheby's in New York for US$33 million. With commission, the sale price totaled US$37,042,500, short of the $50 million that Sotheby's had predicted the painting might bring. The piece (silkscreen ink and spray paint on canvas) shows Elvis Presley in a gunslinger pose. It was first exhibited in 1963 at the Ferus Gallery in Los Angeles. Warhol made 22 versions of the Double Elvis, nine of which are held in museums.[91][92]
108
+
109
+ In November 2013, his Silver Car Crash (Double Disaster) diptych sold at Sotheby's Contemporary Art Auction for $105.4 million, a new record for the pop artist (pre-auction estimates were at $80 million).[77] Created in 1963, this work had rarely been seen in public in the previous years.[93] In November 2014, Triple Elvis sold for $81.9m (£51.9m) at an auction in New York.[94]
110
+
111
Warhol worked across a wide range of media—painting, photography, drawing, and sculpture. In addition, he was a highly prolific filmmaker. Between 1963 and 1968, he made more than 60 films,[95] plus some 500 short black-and-white "screen test" portraits of Factory visitors.[96] One of his most famous films, Sleep, monitors poet John Giorno sleeping for six hours. The 35-minute film Blow Job is one continuous shot of the face of DeVeren Bookwalter supposedly receiving oral sex from filmmaker Willard Maas, although the camera never tilts down to see this. Another, Empire (1964), consists of eight hours of footage of the Empire State Building in New York City at dusk. The film Eat consists of a man eating a mushroom for 45 minutes. Warhol attended the 1962 premiere of the static composition by La Monte Young called Trio for Strings and subsequently created his famous series of static films including Kiss, Eat, and Sleep (for which Young initially was commissioned to provide music). Uwe Husslein cites filmmaker Jonas Mekas, who accompanied Warhol to the Trio premiere, and who claims Warhol's static films were directly inspired by the performance.[97]

Batman Dracula is a 1964 film that was produced and directed by Warhol without the permission of DC Comics. It was screened only at his art exhibits. Warhol, a fan of the Batman series, made the movie as an "homage" to it, and the film is considered the first appearance of a blatantly campy Batman. The film was long thought to have been lost, until scenes from the picture were shown at some length in the 2006 documentary Jack Smith and the Destruction of Atlantis.

Warhol's 1965 film Vinyl is an adaptation of Anthony Burgess' popular dystopian novel A Clockwork Orange. Others record improvised encounters between Factory regulars such as Brigid Berlin, Viva, Edie Sedgwick, Candy Darling, Holly Woodlawn, Ondine, Nico, and Jackie Curtis. Legendary underground artist Jack Smith appears in the film Camp.

His most popular and critically successful film was Chelsea Girls (1966). The film was highly innovative in that it consisted of two 16 mm films being projected simultaneously, with two different stories being shown in tandem. From the projection booth, the sound would be raised for one film to elucidate that "story" while it was lowered for the other. The multiplication of images evoked Warhol's seminal silk-screen works of the early 1960s.

Warhol was a fan of filmmaker Radley Metzger's film work[98] and commented that Metzger's film The Lickerish Quartet was "an outrageously kinky masterpiece".[99][100][101] Blue Movie—a film in which Warhol superstar Viva makes love in bed with Louis Waldon, another Warhol superstar—was Warhol's last film as director.[102][103] The film, a seminal work of the Golden Age of Porn, was controversial at the time for its frank approach to a sexual encounter.[104][105] Blue Movie was publicly screened in New York City in 2005, for the first time in more than 30 years.[106]

In the wake of the 1968 shooting, a reclusive Warhol relinquished his personal involvement in filmmaking. His acolyte and assistant director, Paul Morrissey, took over the film-making chores for the Factory collective, steering Warhol-branded cinema towards more mainstream, narrative-based, B-movie exploitation fare with Flesh, Trash, and Heat. All of these films, including the later Andy Warhol's Dracula and Andy Warhol's Frankenstein, were far more mainstream than anything Warhol as a director had attempted. These latter "Warhol" films starred Joe Dallesandro—more of a Morrissey star than a true Warhol superstar.

In the early 1970s, most of the films directed by Warhol were pulled out of circulation by Warhol and the people around him who ran his business. After Warhol's death, the films were slowly restored by the Whitney Museum and are occasionally projected at museums and film festivals. Few of the Warhol-directed films are available on video or DVD.

In the mid-1960s, Warhol adopted the band the Velvet Underground, making them a crucial element of the Exploding Plastic Inevitable multimedia performance art show. Warhol, with Paul Morrissey, acted as the band's manager, introducing them to Nico (who would perform with the band at Warhol's request). While managing the Velvet Underground, Warhol had them dress in all black to perform in front of movies that he was also presenting.[107] In 1966 he "produced" their first album, The Velvet Underground & Nico, as well as providing its album art. His actual participation in the album's production amounted to simply paying for the studio time. After the band's first album, Warhol and band leader Lou Reed started to disagree more about the direction the band should take, and their artistic friendship ended.[citation needed] In 1989, after Warhol's death, Reed and John Cale reunited for the first time since 1972 to write, perform, record and release the concept album Songs for Drella, a tribute to Warhol. In October 2019, an audio tape of publicly unknown music by Reed, based on Warhol's 1975 book The Philosophy of Andy Warhol: From A to B and Back Again, was reported to have been discovered in an archive at the Andy Warhol Museum in Pittsburgh.[108]

Warhol designed many album covers for various artists starting with the photographic cover of John Wallowitch's debut album, This Is John Wallowitch!!! (1964). He designed the cover art for The Rolling Stones' albums Sticky Fingers (1971) and Love You Live (1977), and the John Cale albums The Academy in Peril (1972) and Honi Soit in 1981. One of Warhol's last works was a portrait of Aretha Franklin for the cover of her 1986 gold album Aretha, which was done in the style of the Reigning Queens series he had completed the year before.[109]

Warhol strongly influenced the new wave/punk rock band Devo, as well as David Bowie. Bowie recorded a song called "Andy Warhol" for his 1971 album Hunky Dory. In 1968, Lou Reed wrote the song "Andy's Chest" about Valerie Solanas, the woman who shot Warhol. He recorded it with the Velvet Underground, and this version was released on the VU album in 1985. Bowie would later play Warhol in the 1996 movie Basquiat. Bowie recalled how meeting Warhol in real life helped him in the role, and recounted his early meetings with him:

I met him a couple of times, but we seldom shared more than platitudes. The first time we saw each other an awkward silence fell till he remarked my bright yellow shoes and started talking enthusiastically. He wanted to be very superficial. And seemingly emotionless, indifferent, just like a dead fish. Lou Reed described him most profoundly when he once told me they should bring a doll of Andy on the market: a doll that you wind up and doesn't do anything. But I managed to observe him well, and that was a helping hand for the film [Basquiat...] We borrowed his clothes from the museum in Pittsburgh, and they were intact, unwashed. Even the pockets weren't emptied: they contained pancake, white, deadly pale fond de teint which Andy always smeared on his face, a check torn in pieces, someone's address, lots of homeopathic pills and a wig. Andy always wore those silver wigs, but he never admitted it were wigs. One of his hairdressers has told me lately that he had his wigs regularly cut, like it were real hair. When the wig was trimmed, he put on another next month as if his hair had grown.[110]

The band Triumph also wrote a song about Andy Warhol, "Stranger in a Strange Land", from their 1984 album Thunder Seven.

Beginning in the early 1950s, Warhol produced several unbound portfolios of his work.

The first of several bound self-published books by Warhol was 25 Cats Name Sam and One Blue Pussy, printed in 1954 by Seymour Berlin on Arches brand watermarked paper using his blotted line technique for the lithographs. The original edition was limited to 190 numbered, hand-colored copies, using Dr. Martin's ink washes. Most of these were given by Warhol as gifts to clients and friends. Copy No. 4, inscribed "Jerry" on the front cover and given to Geraldine Stutz, was used for a facsimile printing in 1987,[111] and the original was auctioned in May 2006 for US$35,000 by Doyle New York.[112]

Other self-published books by Warhol include:

Warhol's book A La Recherche du Shoe Perdu (1955) marked his "transition from commercial to gallery artist".[113] (The title is a play on words by Warhol on the title of French author Marcel Proust's À la recherche du temps perdu.)[113]

After gaining fame, Warhol "wrote" several books that were commercially published:

Warhol created the fashion magazine Interview that is still published today. The loopy title script on the cover is thought to be either his own handwriting or that of his mother, Julia Warhola, who would often do text work for his early commercial pieces.[115]

Although Andy Warhol is best known for his paintings and films, he authored works in many different media.

He founded the gossip magazine Interview, a stage for celebrities he "endorsed" and a business staffed by his friends. He collaborated with others on all of his books (some of which were written with Pat Hackett). He adopted the young painter Jean-Michel Basquiat and the band the Velvet Underground, presenting them to the public as his latest interest and collaborating with them. One might even say that he produced people (as in the Warholian "Superstar" and the Warholian portrait). He endorsed products, appeared in commercials, and made frequent celebrity guest appearances on television shows and in films (he appeared in everything from Love Boat[129] to Saturday Night Live[130] and the Richard Pryor movie Dynamite Chicken[131]).

In this respect Warhol was a fan of "Art Business" and "Business Art"—he, in fact, wrote about his interest in thinking about art as business in The Philosophy of Andy Warhol from A to B and Back Again.[132]

Warhol was homosexual.[133][134] In 1980, he told an interviewer that he was still a virgin. Biographer Bob Colacello, who was present at the interview, felt it was probably true and that what little sex he had was probably "a mixture of voyeurism and masturbation—to use [Andy's] word abstract".[135] Warhol's assertion of virginity would seem to be contradicted by his hospital treatment in 1960 for condylomata, a sexually transmitted disease.[136] It has also been contradicted by his lovers, including Warhol muse BillyBoy, who has said they had sex to orgasm: "When he wasn't being Andy Warhol and when you were just alone with him he was an incredibly generous and very kind person. What seduced me was the Andy Warhol who I saw alone. In fact when I was with him in public he kind of got on my nerves....I'd say: 'You're just obnoxious, I can't bear you.'"[137] Billy Name also denied that Warhol was only a voyeur, saying: "He was the essence of sexuality. It permeated everything. Andy exuded it, along with his great artistic creativity....It brought a joy to the whole art world in New York."[138] "But his personality was so vulnerable that it became a defense to put up the blank front."[139] Warhol's lovers included John Giorno,[140] Billy Name,[141] Charles Lisanby,[142] and Jon Gould. His boyfriend of 12 years was Jed Johnson, whom he met in 1968, and who later achieved fame as an interior designer.[143]

The fact that Warhol's homosexuality influenced his work and shaped his relationship to the art world is a major subject of scholarship on the artist and is an issue that Warhol himself addressed in interviews, in conversation with his contemporaries, and in his publications (e.g., Popism: The Warhol 1960s). Throughout his career, Warhol produced erotic photography and drawings of male nudes. Many of his most famous works (portraits of Liza Minnelli, Judy Garland, and Elizabeth Taylor, and films such as Blow Job, My Hustler and Lonesome Cowboys) draw from gay underground culture or openly explore the complexity of sexuality and desire. As has been addressed by a range of scholars, many of his films premiered in gay porn theaters, including the New Andy Warhol Garrick Theatre and 55th Street Playhouse, in the late 1960s.[144]

The first works that Warhol submitted to a fine art gallery, homoerotic drawings of male nudes, were rejected for being too openly gay.[26] In Popism, furthermore, the artist recalls a conversation with the filmmaker Emile de Antonio about the difficulty Warhol had being accepted socially by the then-more-famous (but closeted) gay artists Jasper Johns and Robert Rauschenberg. De Antonio explained that Warhol was "too swish and that upsets them." In response to this, Warhol writes, "There was nothing I could say to that. It was all too true. So I decided I just wasn't going to care, because those were all the things that I didn't want to change anyway, that I didn't think I 'should' want to change ... Other people could change their attitudes but not me".[26][145] In exploring Warhol's biography, many turn to this period—the late 1950s and early 1960s—as a key moment in the development of his persona. Some have suggested that his frequent refusal to comment on his work, to speak about himself (confining himself in interviews to responses like "Um, no" and "Um, yes", and often allowing others to speak for him)—and even the evolution of his pop style—can be traced to the years when Warhol was first dismissed by the inner circles of the New York art world.[146]

Warhol was a practicing Ruthenian Catholic. He regularly volunteered at homeless shelters in New York City, particularly during the busier times of the year, and described himself as a religious person.[148] Many of Warhol's later works depicted religious subjects, including two series, Details of Renaissance Paintings (1984) and The Last Supper (1986). In addition, a body of religious-themed works was found posthumously in his estate.[148]

During his life, Warhol regularly attended Liturgy, and the priest at Warhol's church, Saint Vincent Ferrer, said that the artist went there almost daily,[148] although he was not observed taking Communion or going to Confession and sat or knelt in the pews at the back.[135] The priest thought he was afraid of being recognized; Warhol said he was self-conscious about being seen in a Roman Rite church crossing himself "in the Orthodox way" (right to left instead of the reverse).[135]

His art is noticeably influenced by the Eastern Christian tradition which was so evident in his places of worship.[148]

Warhol's brother has described the artist as "really religious, but he didn't want people to know about that because [it was] private". Despite the private nature of his faith, in Warhol's eulogy John Richardson depicted it as devout: "To my certain knowledge, he was responsible for at least one conversion. He took considerable pride in financing his nephew's studies for the priesthood".[148]

Warhol was an avid collector. His friends referred to his numerous collections, which filled not only his four-story townhouse, but also a nearby storage unit, as "Andy's Stuff." The true extent of his collections was not discovered until after his death, when The Andy Warhol Museum in Pittsburgh took in 641 boxes of his "Stuff."

Warhol's collections included a Coca-Cola memorabilia sign and 19th-century paintings,[149] along with airplane menus, unpaid invoices, pizza dough, pornographic pulp novels, newspapers, stamps, supermarket flyers, and cookie jars, among other eccentricities. They also included significant works of art, such as George Bellows's Miss Bentham.[150] One of his main collections was his wigs. Warhol owned more than 40 and felt very protective of his hairpieces, which were sewn by a New York wig-maker from hair imported from Italy. In 1985, a girl snatched Warhol's wig off his head. In his diary entry for that day, he wrote: "I don't know what held me back from pushing her over the balcony."

In 1960, he had bought a drawing of a light bulb by Jasper Johns.[151]

Another item found in Warhol's boxes at the museum in Pittsburgh was a mummified human foot from Ancient Egypt. The curator of anthropology at Carnegie Museum of Natural History felt that Warhol most likely found it at a flea market.[152]

I. Miller Shoes, April 17, 1955, illustration in The New York Times

Exploding Plastic Inevitable (show) – the Velvet Underground & Nico, 1966, poster

The Souper Dress, 1967, screen-printed paper dress based on Warhol's Campbell's Soup Cans

Portrait of Mao Zedong, 1972, synthetic polymer paint and silkscreen ink on canvas

Photo of Warhol and Farah Pahlavi, 1977, with works of Warhol on the walls of the Tehran museum

BMW Group 4 M1, 1979, painted car

Among Warhol's early collectors and influential supporters were Emily and Burton Tremaine. Of the more than 15 artworks they purchased,[153] Marilyn Diptych (now at Tate Modern, London)[154] and A Boy for Meg (now at the National Gallery of Art in Washington, DC)[155] were purchased directly out of Warhol's studio in 1962. One Christmas, Warhol left a small Head of Marilyn Monroe by the Tremaines' door at their New York apartment in gratitude for their support and encouragement.[156]

Warhol's will dictated that his entire estate—with the exception of a few modest legacies to family members—would go to create a foundation dedicated to the "advancement of the visual arts". Warhol had so many possessions that it took Sotheby's nine days to auction his estate after his death; the auction grossed more than US$20 million.

In 1987, in accordance with Warhol's will, the Andy Warhol Foundation for the Visual Arts began. The foundation serves as the estate of Andy Warhol, but also has a mission "to foster innovative artistic expression and the creative process" and is "focused primarily on supporting work of a challenging and often experimental nature."[157]

The Artists Rights Society is the U.S. copyright representative for the Andy Warhol Foundation for the Visual Arts for all Warhol works with the exception of Warhol film stills.[158] The U.S. copyright representative for Warhol film stills is the Warhol Museum in Pittsburgh.[159] Additionally, the Andy Warhol Foundation for the Visual Arts has agreements in place for its image archive. All digital images of Warhol are exclusively managed by Corbis, while all transparency images of Warhol are managed by Art Resource.[160]

The Andy Warhol Foundation released its 20th Anniversary Annual Report as a three-volume set in 2007: Vol. I, 1987–2007; Vol. II, Grants & Exhibitions; and Vol. III, Legacy Program.[161] The Foundation remains one of the largest grant-giving organizations for the visual arts in the U.S.[162]

Many of Warhol's works and possessions are on display at The Andy Warhol Museum in Pittsburgh. The foundation donated more than 3,000 works of art to the museum.[163]

Warhol appeared as himself in the film Cocaine Cowboys (1979)[164] and in the film Tootsie (1982).

After his death, Warhol was portrayed by Crispin Glover in Oliver Stone's film The Doors (1991), by David Bowie in Julian Schnabel's film Basquiat (1996), and by Jared Harris in Mary Harron's film I Shot Andy Warhol (1996). Warhol appears as a character in Michael Daugherty's opera Jackie O (1997). Actor Mark Bringleson makes a brief cameo as Warhol in Austin Powers: International Man of Mystery (1997). Many films by avant-garde cineast Jonas Mekas have captured moments of Warhol's life. Sean Gregory Sullivan depicted Warhol in the film 54 (1998). Guy Pearce portrayed Warhol in the film Factory Girl (2007) about Edie Sedgwick's life.[165] Actor Greg Travis portrays Warhol in a brief scene from the film Watchmen (2009).

In the movie Highway to Hell, a group of Andy Warhols are part of the Good Intentions Paving Company, where good-intentioned souls are ground into pavement.[166] In the film Men in Black 3 (2012), Andy Warhol turns out to be undercover MIB Agent W (played by Bill Hader). Warhol is throwing a party at The Factory in 1969, where he is looked up by MIB Agents K and J (J from the future). Agent W is desperate to end his undercover job ("I'm so out of ideas I'm painting soup cans and bananas, for Christ sakes!", "You gotta fake my death, okay? I can't listen to sitar music anymore." and "I can't tell the girls from the boys.").

Andy Warhol (portrayed by Tom Meeten) is one of the main characters of the 2012 British television show Noel Fielding's Luxury Comedy. The character is portrayed as having robot-like mannerisms. In the 2017 feature The Billionaire Boys Club, Cary Elwes portrays Warhol in a film based on the true story of Ron Levin (portrayed by Kevin Spacey), a friend of Warhol's who was murdered in 1986.[167] In September 2016, it was announced that Jared Leto would portray the title character in Warhol, an upcoming American biographical drama film produced by Michael De Luca and written by Terence Winter, based on the book Warhol: The Biography by Victor Bockris.[168]

Warhol appeared as a recurring character in TV series Vinyl, played by John Cameron Mitchell.[175] Warhol was portrayed by Evan Peters in the American Horror Story: Cult episode "Valerie Solanas Died for Your Sins: Scumbag". The episode depicts the attempted assassination of Warhol by Valerie Solanas (Lena Dunham).

In early 1969, Andy Warhol was commissioned by Braniff International to appear in two television commercials to promote the luxury airline's new "When You Got It – Flaunt It" campaign. The campaign was created by Braniff's new advertising agency, Lois Holland Calloway, led by famed advertiser George Lois, creator of a noted series of Esquire magazine covers. The first series of commercials paired unlikely people who shared the fact that they both flew Braniff Airways. Warhol was paired with boxing legend Sonny Liston. The odd commercial worked, as did the others that featured unlikely fellow travelers such as painter Salvador Dali and baseball legend Whitey Ford.

Two additional commercials for Braniff featured famous persons entering a Braniff jet and being greeted by a Braniff hostess while professing their enthusiasm for flying Braniff. Warhol was also featured in the first of these commercials, which were likewise produced by Lois and released in the summer of 1969. Lois has incorrectly stated that he was commissioned by Braniff in 1967; at that time the Dallas-based carrier was represented by Madison Avenue advertising doyenne Mary Wells Lawrence, who was married to Braniff's charismatic chairman and president, Harding Lawrence. Lois's agency succeeded Wells Rich Greene on December 1, 1968. The rights to Warhol's films for Braniff and his signed contracts are owned by a private trust and administered by the Braniff Airways Foundation in Dallas, Texas.[176]

A biography of Andy Warhol written by art critic Blake Gopnik was published in 2020 under the title Warhol.[177][178][179]

In 2002, the U.S. Postal Service issued a 37-cent stamp commemorating Warhol. Designed by Richard Sheaff of Scottsdale, Arizona, the stamp was unveiled at a ceremony at The Andy Warhol Museum and features Warhol's painting "Self-Portrait, 1964".[180][181] In March 2011, a chrome statue of Andy Warhol and his Polaroid camera was revealed at Union Square in New York City.[182]

en/2300.html.txt ADDED
@@ -0,0 +1,191 @@

in the Kingdom of Denmark (red and beige)

Greenland (Greenlandic: Kalaallit Nunaat, pronounced [kalaːɬit nunaːt]; Danish: Grønland, pronounced [ˈkʁɶnˌlænˀ]) is the world's largest island,[d] located between the Arctic and Atlantic oceans, east of the Canadian Arctic Archipelago. It is an autonomous territory[10] within the Kingdom of Denmark. Though physiographically a part of the continent of North America, Greenland has been politically and culturally associated with Europe (specifically Norway and Denmark, the colonial powers, as well as the nearby island of Iceland) for more than a millennium.[11] The majority of its residents are Inuit, whose ancestors migrated from Alaska through Northern Canada, gradually settling across the island by the 13th century.[12]

Nowadays, the population is largely concentrated on the southwest coast, while the rest of the island is sparsely populated. Greenland is divided into five municipalities – Sermersooq, Kujalleq, Qeqertalik, Qeqqata, and Avannaata. It has two unincorporated areas – the Northeast Greenland National Park and the Thule Air Base. The latter, while under Danish control, is administered by the United States Air Force.[13] Three-quarters of Greenland is covered by the only permanent ice sheet outside Antarctica. With a population of 56,081 (2020),[6] it is the least densely populated territory in the world.[14] About a third of the population lives in Nuuk, the capital and largest city; the second largest city in terms of population is Sisimiut, 320 kilometres (200 mi) north of Nuuk. The Arctic Umiaq Line ferry acts as a lifeline for western Greenland, connecting the various cities and settlements.

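The density claim is easy to sanity-check. The following sketch is illustrative only; it reuses the population above and the total area of 2,166,086 km² quoted later in the Geography section, and the calculation itself is not part of the source text:

    # Rough density check using figures quoted elsewhere in this article.
    population = 56_081        # 2020 population
    area_km2 = 2_166_086       # total area including offshore minor islands
    print(f"{population / area_km2:.3f} inhabitants per km^2")  # ~0.026

At roughly 0.026 inhabitants per square kilometre, the figure is consistent with the "least densely populated territory" claim above.
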
Greenland has been inhabited at intervals over at least the last 4,500 years by Arctic peoples whose forebears migrated there from what is now Canada.[15][16] Norsemen settled the uninhabited southern part of Greenland beginning in the 10th century, having previously settled Iceland. These Norsemen would later set sail from Greenland and Iceland, with Leif Erikson becoming the first known European to reach North America nearly 500 years before Columbus reached the Caribbean islands. Inuit peoples arrived in the 13th century. Though under the continuous influence of Norway and Norwegians, Greenland was not formally under the Norwegian crown until 1261. The Norse colonies disappeared in the late 15th century when Norway was hit by the Black Death and entered a severe decline. Soon after their demise, beginning in 1499, the Portuguese briefly explored and claimed the island, naming it Terra do Lavrador (later applied to Labrador in Canada).[17]

In the early 17th century, Danish explorers reached Greenland again. To strengthen trading and power, Denmark–Norway affirmed sovereignty over the island. Because of Norway's weak status, it lost sovereignty over Greenland in 1814 when the union was dissolved. Greenland became Danish in 1814, and was fully integrated in the Danish state in 1953 under the Constitution of Denmark. In 1973, Greenland joined the European Economic Community with Denmark. However, in a referendum in 1982, a majority of the population voted for Greenland to withdraw from the EEC, which was effected in 1985. Greenland contains the world's largest and most northerly national park, Northeast Greenland National Park (Kalaallit Nunaanni nuna eqqissisimatitaq). Established in 1974, and expanded to its present size in 1988, it protects 972,001 square kilometres (375,292 sq mi) of the interior and northeastern coast of Greenland and is bigger than all but twenty-nine countries in the world.

In 1979, Denmark granted home rule to Greenland; in 2008, Greenlanders voted in favor of the Self-Government Act, which transferred more power from the Danish government to the local Greenlandic government. Under the new structure, in effect since 21 June 2009,[18] Greenland can gradually assume responsibility for policing, judicial system, company law, accounting, and auditing; mineral resource activities; aviation; law of legal capacity, family law and succession law; aliens and border controls; the working environment; and financial regulation and supervision, while the Danish government retains control of foreign affairs and defence. It also retains control of monetary policy, providing an initial annual subsidy of DKK 3.4 billion, which is planned to diminish gradually over time. Greenland expects to grow its economy based on increased income from the extraction of natural resources. The capital, Nuuk, held the 2016 Arctic Winter Games. At 70%, Greenland has one of the highest shares of renewable energy in the world, mostly coming from hydropower.[19][additional citation(s) needed]

The early Norse settlers named the island Greenland. In the Icelandic sagas, the Norwegian-born Icelander Erik the Red was said to be exiled from Iceland for manslaughter. Along with his extended family and his thralls (i.e. slaves or serfs), he set out in ships to explore an icy land known to lie to the northwest. After finding a habitable area and settling there, he named it Grœnland (translated as "Greenland"), supposedly in the hope that the pleasant name would attract settlers.[20][21][22] The Saga of Erik the Red states: "In the summer, Erik left to settle in the country he had found, which he called Greenland, as he said people would be attracted there if it had a favorable name."[23]

The name of the country in the indigenous Greenlandic language is Kalaallit Nunaat ("land of the Kalaallit").[24] The Kalaallit are the indigenous Greenlandic Inuit people who inhabit the country's western region.

In prehistoric times, Greenland was home to several successive Paleo-Eskimo cultures known today primarily through archaeological finds. The earliest entry of the Paleo-Eskimo into Greenland is thought to have occurred about 2500 BC. From around 2500 BC to 800 BC, southern and western Greenland were inhabited by the Saqqaq culture. Most finds of Saqqaq-period archaeological remains have been around Disko Bay, including the site of Saqqaq, after which the culture is named.[25][26]

From 2400 BC to 1300 BC, the Independence I culture existed in northern Greenland. It was a part of the Arctic small tool tradition.[27][28][29] Towns, including Deltaterrasserne, started to appear.

Around 800 BC, the Saqqaq culture disappeared and the Early Dorset culture emerged in western Greenland and the Independence II culture in northern Greenland.[30] The Dorset culture was the first culture to extend throughout the Greenlandic coastal areas, both on the west and east coasts. It lasted until the total onset of the Thule culture in 1500 AD. The Dorset culture population lived primarily from hunting of whales and caribou.[31][32][33][34]

From 986, Greenland's west coast was settled by Icelanders and Norwegians, through a contingent of 14 boats led by Erik the Red. They formed three settlements – known as the Eastern Settlement, the Western Settlement and the Middle Settlement – on fjords near the southwesternmost tip of the island.[11][35] They shared the island with the late Dorset culture inhabitants who occupied the northern and western parts, and later with the Thule culture that entered from the north. Norse Greenlanders submitted to Norwegian rule in 1261 under the Kingdom of Norway (872–1397). Later the Kingdom of Norway entered into a personal union with Denmark in 1380, and from 1397 was a part of the Kalmar Union.[36]

The Norse settlements, such as Brattahlíð, thrived for centuries but disappeared sometime in the 15th century, perhaps at the onset of the Little Ice Age.[37] Apart from some runic inscriptions, no contemporary records or historiography survives from the Norse settlements. Medieval Norwegian sagas and historical works mention Greenland's economy as well as the bishops of Gardar and the collection of tithes. A chapter in the Konungs skuggsjá (The King's Mirror) describes Norse Greenland's exports and imports as well as grain cultivation.

Icelandic saga accounts of life in Greenland were composed in the 13th century and later, and do not constitute primary sources for the history of early Norse Greenland.[22] Modern understanding therefore mostly depends on the physical data from archeological sites. Interpretation of ice core and clam shell data suggests that between 800 and 1300, the regions around the fjords of southern Greenland experienced a relatively mild climate several degrees Celsius higher than usual in the North Atlantic,[38] with trees and herbaceous plants growing, and livestock being farmed. Barley was grown as a crop up to the 70th parallel.[39] What is verifiable is that the ice cores indicate Greenland has had dramatic temperature shifts many times over the past 100,000 years.[40] Similarly the Icelandic Book of Settlements records famines during the winters, in which "the old and helpless were killed and thrown over cliffs".[38]

These Icelandic settlements vanished during the 14th and early 15th centuries.[41] The demise of the Western Settlement coincides with a decrease in summer and winter temperatures. A study of North Atlantic seasonal temperature variability during the Little Ice Age showed a significant decrease in maximum summer temperatures beginning in the late 13th century to early 14th century – as much as 6 to 8 °C (11 to 14 °F) lower than modern summer temperatures.[42] The study also found that the lowest winter temperatures of the last 2000 years occurred in the late 14th century and early 15th century. The Eastern Settlement was likely abandoned in the early to mid-15th century, during this cold period.

Theories drawn from archeological excavations at Herjolfsnes in the 1920s suggest that the condition of human bones from this period indicates that the Norse population was malnourished, possibly due to soil erosion resulting from the Norsemen's destruction of natural vegetation in the course of farming, turf-cutting, and wood-cutting. Malnutrition may also have resulted from widespread deaths due to pandemic plague;[43] the decline in temperatures during the Little Ice Age; and armed conflicts with the Skrælings (Norse word for Inuit, meaning "wretches"[37]). In 1379, the Inuit attacked the Eastern Settlement, killed 18 men and captured two boys and a woman.[37] Recent archeological studies somewhat challenge the general assumption that the Norse colonisation had a dramatic negative environmental effect on the vegetation. Data support traces of a possible Norse soil amendment strategy.[44] More recent evidence suggests that the Norse, who never numbered more than about 2,500, gradually abandoned the Greenland settlements over the 1400s as walrus ivory,[45] the most valuable export from Greenland, decreased in price due to competition with other sources of higher-quality ivory, and that there was actually little evidence of starvation or difficulties.[46]

Other theories about the disappearance of the Norse settlement have been proposed.

The Thule people are the ancestors of the current Greenlandic population. No genes from the Paleo-Eskimos have been found in the present population of Greenland.[48] The Thule culture migrated eastward from what is now known as Alaska around 1000, reaching Greenland around 1300. The Thule culture was the first to introduce to Greenland such technological innovations as dog sleds and toggling harpoons.

In 1500, King Manuel I of Portugal sent Gaspar Corte-Real to Greenland in search of a Northwest Passage to Asia which, according to the Treaty of Tordesillas, was part of Portugal's sphere of influence. In 1501, Corte-Real returned with his brother, Miguel Corte-Real. Finding the sea frozen, they headed south and arrived in Labrador and Newfoundland. Upon the brothers' return to Portugal, the cartographic information supplied by Corte-Real was incorporated into a new map of the world which was presented to Ercole I d'Este, Duke of Ferrara, by Alberto Cantino in 1502. The Cantino planisphere, made in Lisbon, accurately depicts the southern coastline of Greenland.[49]

In 1605–1607, King Christian IV of Denmark sent a series of expeditions to Greenland and Arctic waterways to locate the lost eastern Norse settlement and assert Danish sovereignty over Greenland. The expeditions were mostly unsuccessful, partly due to leaders who lacked experience with the difficult arctic ice and weather conditions, and partly because the expedition leaders were given instructions to search for the Eastern Settlement on the east coast of Greenland just north of Cape Farewell, which is almost inaccessible due to southward drifting ice. The pilot on all three trips was English explorer James Hall.

After the Norse settlements died off, Greenland came under the de facto control of various Inuit groups, but the Danish government never forgot or relinquished the claims to Greenland that it had inherited from the Norse. When it re-established contact with Greenland in the early 17th century, Denmark asserted its sovereignty over the island. In 1721, a joint mercantile and clerical expedition led by Danish-Norwegian missionary Hans Egede was sent to Greenland, not knowing whether a Norse civilization remained there. This expedition is part of the Dano-Norwegian colonization of the Americas. After 15 years in Greenland, Hans Egede left his son Paul Egede in charge of the mission there and returned to Denmark, where he established a Greenland Seminary. This new colony was centred at Godthåb ("Good Hope") on the southwest coast. Gradually, Greenland was opened up to Danish merchants, and closed to those from other countries.

When the union between the crowns of Denmark and Norway was dissolved in 1814, the Treaty of Kiel severed Norway's former colonies and left them under the control of the Danish monarch. Norway occupied then-uninhabited eastern Greenland as Erik the Red's Land in July 1931, claiming that it constituted terra nullius. Norway and Denmark agreed to submit the matter in 1933 to the Permanent Court of International Justice, which decided against Norway.[50]

Greenland's connection to Denmark was severed on 9 April 1940, early in World War II, after Denmark was occupied by Nazi Germany. On 8 April 1941, the United States occupied Greenland to defend it against a possible invasion by Germany.[51] The United States occupation of Greenland continued until 1945. Greenland was able to buy goods from the United States and Canada by selling cryolite from the mine at Ivittuut. The major air bases were Bluie West-1 at Narsarsuaq and Bluie West-8 at Søndre Strømfjord (Kangerlussuaq), both of which are still used as Greenland's major international airports. Bluie was the military code name for Greenland.

During this war, the system of government changed: Governor Eske Brun ruled the island under a law of 1925 that allowed governors to take control under extreme circumstances; Governor Aksel Svane was transferred to the United States to lead the commission to supply Greenland. The Danish Sirius Patrol guarded the northeastern shores of Greenland in 1942 using dogsleds. They detected several German weather stations and alerted American troops, who destroyed the facilities. After the collapse of the Third Reich, Albert Speer briefly considered escaping in a small aeroplane to hide out in Greenland, but changed his mind and decided to surrender to the United States Armed Forces.[52]

Greenland had been a protected and very isolated society until 1940. The Danish government had maintained a strict monopoly on Greenlandic trade, allowing only small-scale troaking with Scottish whalers. In wartime, Greenland developed a sense of self-reliance through self-government and independent communication with the outside world. Despite this change, in 1946 a commission including the highest Greenlandic council, the Landsrådene, recommended patience and no radical reform of the system. Two years later, the first step towards a change of government was initiated when a grand commission was established. A final report (G-50) was presented in 1950: Greenland was to be a modern welfare state with Denmark as sponsor and example. In 1953, Greenland was made an equal part of the Danish Kingdom. Home rule was granted in 1979.

Following World War II, the United States developed a geopolitical interest in Greenland, and in 1946 the United States offered to buy the island from Denmark for $100,000,000. Denmark refused to sell it.[53][54] This repeated an earlier interest by Secretary of State William H. Seward, who in 1867 worked with former senator Robert J. Walker to explore the possibility of buying Greenland and perhaps Iceland. Opposition in Congress ended this project.[55] In the 21st century, the United States, according to WikiLeaks, remains interested in investing in the resource base of Greenland and in tapping hydrocarbons off the Greenlandic coast.[56][57] In August 2019, the American president Donald Trump again proposed to buy the territory, prompting premier Kim Kielsen to issue the statement: "Greenland is not for sale and cannot be sold, but Greenland is open for trade and cooperation with other countries – including the United States."[58]

In 1950, Denmark agreed to allow the US to reestablish Thule Air Base in Greenland; it was greatly expanded between 1951 and 1953 as part of a unified NATO Cold War defense strategy. The local population of three nearby villages was moved more than 100 kilometres (62 mi) away in the winter. The United States tried to construct a subterranean network of secret nuclear missile launch sites in the Greenlandic ice cap, named Project Iceworm. It managed this project from Camp Century from 1960 to 1966 before abandoning it as unworkable.[59] The Danish government did not become aware of the program's mission until 1997, when they discovered it while looking for records related to the crash of a nuclear-equipped B-52 bomber at Thule in 1968.[60]

With the 1953 Danish constitution, Greenland's colonial status ended as the island was incorporated into the Danish realm as an amt (county). Danish citizenship was extended to Greenlanders. Danish policies toward Greenland consisted of a strategy of cultural assimilation – or de-Greenlandification. During this period, the Danish government promoted the exclusive use of the Danish language in official matters, and required Greenlanders to go to Denmark for their post-secondary education. Many Greenlandic children grew up in boarding schools in southern Denmark, and a number lost their cultural ties to Greenland. While the policies "succeeded" in the sense of shifting Greenlanders from being primarily subsistence hunters into being urbanized wage earners, the Greenlandic elite began to reassert a Greenlandic cultural identity. A movement developed in favour of independence, reaching its peak in the 1970s.[61] As a consequence of political complications in relation to Denmark's entry into the European Common Market in 1972, Denmark began to seek a different status for Greenland, resulting in the Home Rule Act of 1979.

This gave Greenland limited autonomy with its own legislature taking control of some internal policies, while the Parliament of Denmark maintained full control of external policies, security, and natural resources. The law came into effect on 1 May 1979. The Queen of Denmark, Margrethe II, remains Greenland's head of state. In 1985, Greenland left the European Economic Community (EEC) upon achieving self-rule, as it did not agree with the EEC's commercial fishing regulations and an EEC ban on seal skin products.[62] Greenland voters approved a referendum on greater autonomy on 25 November 2008.[63][64] According to one study, the 2008 vote created what "can be seen as a system between home rule and full independence."[65]

On 21 June 2009, Greenland gained self-rule with provisions for assuming responsibility for self-government of judicial affairs, policing, and natural resources. Also, Greenlanders were recognized as a separate people under international law.[66] Denmark maintains control of foreign affairs and defence matters. Denmark upholds the annual block grant of 3.2 billion Danish kroner, but as Greenland begins to collect revenues of its natural resources, the grant will gradually be diminished. This is generally considered to be a step toward eventual full independence from Denmark.[67] Greenlandic was declared the sole official language of Greenland at the historic ceremony.[2][4][68][69][70]

Greenland is the world's largest non-continental island[71] and the third largest area in North America after Canada and the United States.[72] It is between latitudes 59° and 83°N, and longitudes 11° and 74°W. Greenland is bordered by the Arctic Ocean to the north, the Greenland Sea to the east, the North Atlantic Ocean to the southeast, the Davis Strait to the southwest, Baffin Bay to the west, the Nares Strait and Lincoln Sea to the northwest. The nearest countries are Canada, to the west and southwest across Nares Strait and Baffin Bay; and Iceland, southeast of Greenland in the Atlantic Ocean. Greenland also contains the world's largest national park, and it is the largest dependent territory by area in the world, as well as the fourth largest country subdivision in the world, after Sakha Republic in Russia, Australia's state of Western Australia, and Russia's Krasnoyarsk Krai, and the largest in North America.

The average daily temperature of Nuuk varies over the seasons from −5.1 to 9.9 °C (23 to 50 °F).[73] The total area of Greenland is 2,166,086 km2 (836,330 sq mi) (including other offshore minor islands), of which the Greenland ice sheet covers 1,755,637 km2 (677,855 sq mi) (81%) and has a volume of approximately 2,850,000 km3 (680,000 cu mi).[74] The highest point on Greenland is Gunnbjørn Fjeld at 3,700 m (12,139 ft) of the Watkins Range (East Greenland mountain range). The majority of Greenland, however, is less than 1,500 m (4,921 ft) in elevation.

The weight of the ice sheet has depressed the central land area to form a basin lying more than 300 m (984 ft) below sea level,[75][76] while elevations rise suddenly and steeply near the coast.[77]

The ice flows generally to the coast from the centre of the island. A survey led by French scientist Paul-Emile Victor in 1951 concluded that, under the ice sheet, Greenland is composed of three large islands.[78] This is disputed, but if it is so, they would be separated by narrow straits, reaching the sea at Ilulissat Icefjord, at Greenland's Grand Canyon and south of Nordostrundingen.

All towns and settlements of Greenland are situated along the ice-free coast, with the population being concentrated along the west coast. The northeastern part of Greenland is not part of any municipality, but it is the site of the world's largest national park, Northeast Greenland National Park.[79]

At least four scientific expedition stations and camps have been established on the ice sheet in the ice-covered central part of Greenland (indicated as pale blue in the adjacent map): Eismitte, North Ice, North GRIP Camp and The Raven Skiway. A year-round station, Summit Camp, was established on the ice sheet in 1989. The radio station Jørgen Brønlund Fjord was, until 1950, the northernmost permanent outpost in the world.

The extreme north of Greenland, Peary Land, is not covered by an ice sheet, because the air there is too dry to produce snow, which is essential in the production and maintenance of an ice sheet. If the Greenland ice sheet were to melt away completely, the world's sea level would rise by more than 7 m (23 ft).[80]

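That headline figure can be reproduced with a short back-of-the-envelope calculation. The sketch below is illustrative only: the ice-sheet volume comes from the figures quoted above, while the ice density, seawater density and ocean area are assumed textbook round values rather than figures from this article:

    # Convert the Greenland ice sheet's volume into a global sea-level equivalent.
    ICE_VOLUME_KM3 = 2_850_000   # ice-sheet volume quoted earlier in this article
    ICE_DENSITY = 917.0          # kg/m^3, typical glacier ice (assumed value)
    SEAWATER_DENSITY = 1025.0    # kg/m^3 (assumed value)
    OCEAN_AREA_KM2 = 3.61e8      # global ocean surface area (assumed value)

    water_km3 = ICE_VOLUME_KM3 * ICE_DENSITY / SEAWATER_DENSITY
    rise_m = water_km3 / OCEAN_AREA_KM2 * 1000.0  # km -> m
    print(f"~{rise_m:.1f} m")    # ~7.1 m, consistent with "more than 7 m"
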
In 2003, a small island, 35 by 15 metres (115 by 49 feet) in length and width, was discovered by arctic explorer Dennis Schmitt and his team at the coordinates of 83-42. Whether this island is permanent has not yet been confirmed. If it is, it is the northernmost known permanent land on Earth.

In 2007, the existence of a new island was announced. Named "Uunartoq Qeqertaq" (English: Warming Island), this island has always been present off the coast of Greenland, but was covered by a glacier. This glacier was discovered in 2002 to be shrinking rapidly, and by 2007 had completely melted away, leaving the exposed island.[81] The island was named Place of the Year by the Oxford Atlas of the World in 2007.[82] Ben Keene, the atlas's editor, commented: "In the last two or three decades, global warming has reduced the size of glaciers throughout the Arctic and earlier this year, news sources confirmed what climate scientists already knew: water, not rock, lay beneath this ice bridge on the east coast of Greenland. More islets are likely to appear as the sheet of frozen water covering the world's largest island continues to melt".[83] Some controversy surrounds the history of the island, specifically over whether the island might have been revealed during a brief warm period in Greenland during the mid-20th century.[84]

Between 1989 and 1993, US and European climate researchers drilled into the summit of Greenland's ice sheet, obtaining a pair of 3 km (1.9 mi) long ice cores. Analysis of the layering and chemical composition of the cores has provided a revolutionary new record of climate change in the Northern Hemisphere going back about 100,000 years and illustrated that the world's weather and temperature have often shifted rapidly from one seemingly stable state to another, with worldwide consequences.[85] The glaciers of Greenland are also contributing to a rise in the global sea level faster than was previously believed.[86] Between 1991 and 2004, monitoring of the weather at one location (Swiss Camp) showed that the average winter temperature had risen almost 6 °C (11 °F).[87] Other research has shown that higher snowfalls from the North Atlantic oscillation caused the interior of the ice cap to thicken by an average of 6 cm (2.4 in) per year between 1994 and 2005.[88]

The 1,310-metre (4,300 ft) Qaqugdluit mountain land on the south side of Nuussuaq peninsula, 50 kilometres (31 miles) west of the Greenland inland ice at 70°7′50″N 51°44′30″W / 70.13056°N 51.74167°W / 70.13056; -51.74167, is an example of the many mountainous areas of west Greenland. Up to 1979 (Stage 0) it showed postglacial glacier stages dating back about 7,000–10,000 years.[89][90] In 1979 the glacier tongues retreated – according to the extent and height of the glacier-nourishing area – from 140 to 660 metres (460 to 2,170 feet) above sea level. The climatic glacier snowline (ELA) was at about 800 metres (2,600 feet). The snowline of the oldest (VII) of the three Holocene glacier stages (V–VII) was about 230 metres (750 feet) deeper, i.e. at about 570 metres (1,870 feet).[91] The four youngest glacier stages (IV–I) can be classified as belonging to the global glacier advances in the years 1811 to 1850 and 1880 to 1900 ("Little Ice Age"), 1910 to 1930, 1948 and 1953.[90] Their snowlines rose step by step up to the level of 1979. The current snowline (Stage 0) is nearly unchanged. During the oldest postglacial Stage VII, an ice-stream network of valley glaciers joined each other and completely covered the land, its nourishing areas consisting of high-lying plateau glaciers and local ice caps. However, due to the rise of the snowline by about 230 metres (750 feet) – corresponding to a warming of about 1.5 °C (2.7 °F) since 1979 – there is now only plateau glaciation with small glacier tongues that hardly reach the main valley bottoms.[91]

Ninety-six polar scientists of the IMBIE research community, from 50 scientific bodies and led by Professor Andrew Shepherd of the University of Leeds, produced the most complete study of Greenland ice loss to date, covering the period 1992–2018. Its findings show that Greenland lost 3.8 trillion tonnes of ice over that period, enough to raise global sea levels by 10.6 mm (about 1 cm). The rate of ice loss has increased from an average of 33 billion tonnes a year in the 1990s to 254 billion tonnes a year in the last decade.[92]

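The mass-to-sea-level conversion can be checked against the commonly used rule of thumb (an assumption of this sketch, not a figure from the study) that roughly 361.8 gigatonnes of melt water raise global mean sea level by one millimetre:

    # Sanity check of the IMBIE figures using the ~361.8 Gt per mm conversion.
    GT_PER_MM = 361.8          # gigatonnes of water per millimetre of sea level (assumed)

    mass_lost_gt = 3_800       # 3.8 trillion tonnes lost over 1992-2018
    print(f"{mass_lost_gt / GT_PER_MM:.1f} mm")  # ~10.5 mm, matching the ~10.6 mm reported

    # The growth in the annual loss rate, from 33 to 254 Gt/yr, is roughly:
    print(f"{254 / 33:.1f}x")  # ~7.7x faster than in the 1990s
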
There are approximately 700 known species of insects in Greenland, which is low compared with other countries (over one million species have been described worldwide). The sea is rich in fish and invertebrates, especially in the milder West Greenland Current; a large part of the Greenland fauna is associated with marine-based food chains, including large colonies of seabirds. The few native land mammals in Greenland include the polar bear, reindeer (introduced by Europeans), arctic fox, arctic hare, musk ox, collared lemming, ermine, and arctic wolf. The last four are found naturally only in East Greenland, having immigrated from Ellesmere Island. There are dozens of species of seals and whales along the coast. Land fauna consists predominantly of animals which have spread from North America or, in the case of many birds and insects, from Europe. There are no native or free-living reptiles or amphibians on the island.[93]

Phytogeographically, Greenland belongs to the Arctic province of the Circumboreal Region within the Boreal Kingdom. The island is sparsely vegetated; plant life consists mainly of grassland and small shrubs, which are regularly grazed by livestock. The most common tree native to Greenland is the European white birch (Betula pubescens), along with gray-leaf willow (Salix glauca), rowan (Sorbus aucuparia), common juniper (Juniperus communis) and other smaller trees, mainly willows.

Greenland's flora consists of about 500 species of "higher" plants, i.e. flowering plants, ferns, horsetails and lycopodiophyta. Of the other groups, the lichens are the most diverse, with about 950 species; there are 600–700 species of fungi; mosses and bryophytes are also found. Most of Greenland's higher plants have circumpolar or circumboreal distributions; only a dozen species of saxifrage and hawkweed are endemic. A few plant species were introduced by the Norsemen, such as cow vetch.

The terrestrial vertebrates of Greenland include the Greenland dog, which was introduced by the Inuit, as well as European-introduced species such as Greenlandic sheep, goats, cattle, reindeer, horse, chicken and sheepdog, all descendants of animals imported by Europeans.[citation needed] Marine mammals include the hooded seal (Cystophora cristata) as well as the grey seal (Halichoerus grypus).[94] Whales frequently pass very close to Greenland's shores in the late summer and early autumn. Whale species include the beluga whale, blue whale, Greenland whale, fin whale, humpback whale, minke whale, narwhal, pilot whale and sperm whale.[95]

As of 2009, 269 species of fish from over 80 different families are known from the waters surrounding Greenland. Almost all are marine species with only a few in freshwater, notably Atlantic salmon and charr.[96] The fishing industry is the primary industry of Greenland's economy, accounting for the majority of the country's total exports.[97]

Birds, particularly seabirds, are an important part of Greenland's animal life; breeding populations of auks, puffins, skuas, and kittiwakes are found on steep mountainsides.[citation needed] Greenland's ducks and geese include common eider, long-tailed duck, king eider, white-fronted goose, pink-footed goose and barnacle goose. Breeding migratory birds include the snow bunting, Lapland bunting, ringed plover, red-throated loon and red-necked phalarope. Non-migratory land birds include the arctic redpoll, ptarmigan, short-eared owl, snowy owl, gyrfalcon and white-tailed eagle.[93]

The Kingdom of Denmark is a constitutional monarchy, in which Queen Margrethe II is the head of state. The monarch officially retains executive power and presides over the Council of State (privy council).[98][99] However, following the introduction of a parliamentary system of government, the duties of the monarch have since become strictly representative and ceremonial,[100] such as the formal appointment and dismissal of the prime minister and other ministers in the executive government. The monarch is not answerable for his or her actions, and the monarch's person is sacrosanct.[101]

The party system is dominated by the social-democratic Forward Party, and the democratic socialist Inuit Community Party, both of which broadly argue for greater independence from Denmark. While the 2009 election saw the unionist Democrat Party (two MPs) decline greatly, the 2013 election consolidated the power of the two main parties at the expense of the smaller groups, and saw the eco-socialist Inuit Party elected to the Parliament for the first time. The dominance of the Forward and Inuit Community parties began to wane after the snap 2014 and 2018 elections.

The non-binding 2008 referendum on self-governance favoured increased self-governance by 21,355 votes to 6,663.

In 1985, Greenland left the European Economic Community (EEC), unlike Denmark, which remains a member. The EEC later became the European Union (EU, renamed and expanded in scope in 1992). Greenland retains some ties with the EU via Denmark. However, EU law largely does not apply to Greenland except in the area of trade.

Greenland's head of state is Queen Margrethe II of Denmark. The Queen's government in Denmark appoints a high commissioner (Rigsombudsmand) to represent it on the island. The commissioner is Mikaela Engell.

Greenlanders elect two representatives to the Folketing, Denmark's parliament, out of a total of 179. The representatives are Aleqa Hammond of the Siumut Party and Aaja Chemnitz Larsen of the Inuit Community Party.[102]

+ Greenland also has its own Parliament, which has 31 members. The government is the Naalakkersuisut, whose members are appointed by the premier. The head of government is the premier, usually the leader of the majority party in Parliament. The premier is Kim Kielsen of the Siumut party.
110
+
111
+ Several American and Danish military bases are located in Greenland, including Thule Air Base, which is home to the United States Space Force's 21st Space Wing's global network of sensors providing missile warning, space surveillance and space control to North American Aerospace Defense Command (NORAD).[103]
112
+
113
+ In 1995, a political scandal erupted in Denmark after a report revealed the government had given tacit permission for nuclear weapons to be located in Greenland, in contravention of Denmark's 1957 nuclear-free zone policy.[104][60] The United States built a secret nuclear-powered base, called Camp Century, in the Greenland ice sheet.[105] On 21 January 1968, a B-52G with four nuclear bombs aboard as part of Operation Chrome Dome crashed on the ice of North Star Bay while attempting an emergency landing at Thule Air Base.[106] The resulting fire caused extensive radioactive contamination.[107] One of the H-bombs remains lost.[108][109]
114
+
115
+ Formerly consisting of three counties comprising a total of 18 municipalities, Greenland abolished these in 2009 and has since been divided into large territories known as "municipalities" (Greenlandic: kommuneqarfiit, Danish: kommuner): Sermersooq ("Much Ice") around the capital Nuuk and also including all East Coast communities; Kujalleq ("South") around Cape Farewell; Qeqqata ("Centre") north of the capital along the Davis Strait; Qeqertalik ("The one with islands") surrounding Disko Bay; and Avannaata ("Northern") in the northwest, the latter two having come into being when the Qaasuitsup municipality, one of the original four, was partitioned in 2018. The northeast of the island forms the unincorporated Northeast Greenland National Park. Thule Air Base is also unincorporated, an enclave within Avannaata municipality administered by the United States Air Force. During its construction, there were as many as 12,000 American residents, but in recent years the number has been below 1,000.
116
+
117
+ The Greenlandic economy is highly dependent on fishing. Fishing accounts for more than 90% of Greenland's exports.[110] The shrimp and fish industry is by far the largest income earner.[5]
118
+
119
+ Greenland is abundant in minerals.[110] Mining of ruby deposits began in 2007. Other mineral prospects are improving as prices increase. These include iron, uranium, aluminium, nickel, platinum, tungsten, titanium, and copper. Despite the resumption[when?] of several hydrocarbon and mineral exploration activities, it will take several years before hydrocarbon production can materialize. The state oil company Nunaoil was created to help develop the hydrocarbon industry in Greenland. The state company Nunamineral has been launched on the Copenhagen Stock Exchange to raise capital to expand the production of gold, which began in 2007.
120
+
121
+ Electricity has traditionally been generated by oil or diesel power plants, even though there is a large surplus of potential hydropower. There is a programme to build hydroelectric power plants; the first, and still the largest, is the Buksefjord hydroelectric power plant.
122
+
123
+ There are also plans to build a large aluminium smelter, using hydropower to create an exportable product. It is expected that much of the labour needed will be imported.[111]
124
+
125
+ The European Union has urged Greenland to restrict development of rare-earth projects by the People's Republic of China, as China accounts for 95% of the world's current supply. In early 2013, the Greenland government said that it had no plans to impose such restrictions.[112]
126
+
127
+ The public sector, including publicly owned enterprises and the municipalities, plays a dominant role in Greenland's economy. About half of government revenues come from grants from the Danish government, an important supplement to the gross domestic product (GDP). Gross domestic product per capita is comparable to that of the average European economy.
128
+
129
+ Greenland suffered an economic contraction in the early 1990s, but the economy has improved since 1993. The Greenland Home Rule Government (GHRG) has pursued a tight fiscal policy since the late 1980s, which has helped create surpluses in the public budget and low inflation. Since 1990, Greenland has registered a foreign-trade deficit following the closure of the last remaining lead and zinc mine that year. In 2017, new ruby deposits were discovered in Greenland, promising to bring new industry and a new export to the country[113] (see Gemstone industry in Greenland).
130
+
131
+ There is air transport both within Greenland and between the island and other nations. There is also scheduled boat traffic, but the long distances lead to long travel times and low frequency. There are virtually no roads between cities, because the coast has many fjords that would require ferry service to connect a road network. The only exception is a 5 km (3 mi) gravel road between Kangilinnguit and the now abandoned cryolite mining town of Ivittuut.[114] In addition, the lack of agriculture, forestry and similar countryside activities has meant that very few country roads have been built.
132
+
133
+ Kangerlussuaq Airport (SFJ)[115] is the largest airport and the main aviation hub for international passenger transport. It serves both international and domestic airline-operated flights.[116] SFJ is far from the larger population centres, 317 km (197 mi) from the capital Nuuk, to which connecting passenger services are available.[117] Greenland has no passenger railways.
134
+
135
+ Nuuk Airport (GOH)[118] is the second-largest airport, located just 6.0 km (3.7 mi) from the centre of the capital. GOH serves general aviation traffic and has daily or otherwise regular domestic flights within Greenland. GOH also serves international flights to Iceland, as well as business and private aircraft.
136
+
137
+ Ilulissat Airport (JAV)[119] is a domestic airport that also serves international flights to Iceland. There are a total of 13 registered civil airports and 47 helipads in Greenland; most of them are unpaved and located in rural areas. The second-longest runway is at Narsarsuaq, a domestic airport with limited international service in south Greenland.
138
+
139
+ All civil aviation matters are handled by the Danish Transport Authority. Most airports, including Nuuk Airport, have short runways and can only be served by relatively small aircraft on fairly short flights. Kangerlussuaq Airport, around 100 kilometres (62 miles) inland from the west coast, is the major airport of Greenland and the hub for domestic flights. Intercontinental flights connect mainly to Copenhagen. Travel between international destinations (except Iceland) and any city in Greenland requires a plane change.
140
+
141
+ Air Iceland operates flights from Reykjavík to a number of airports in Greenland, and the company promotes the service as a day-trip option from Iceland for tourists.[120]
142
+
143
+ There are no direct flights to the United States or Canada, although there have been routes between Kangerlussuaq and Baltimore[121] and between Nuuk and Iqaluit,[122] which were cancelled because of too few passengers and financial losses.[123] An alternative between Greenland and the United States or Canada is Air Iceland/Icelandair with a plane change in Iceland.[124]
144
+
145
+ Sea passenger transport is served by several coastal ferries. Arctic Umiaq Line makes a single round trip per week, taking 80 hours each direction.[125]
146
+
147
+ Cargo freight by sea is handled by the shipping company Royal Arctic Line from, to and across Greenland. It provides trade and transport opportunities between Greenland, Europe and North America.
148
+
149
+ Greenland has a population of 56,081 (January 2020 estimate),[6] of whom 88% are Greenlandic Inuit (including those of mixed Danish and Inuit ancestry). The remaining 12% are of European descent, mainly Greenland Danes. A 2015 genome-wide genetic study of Greenlanders found that modern-day Inuit in Greenland are direct descendants of the first Inuit pioneers of the Thule culture, with roughly 25% admixture from the European colonizers of the 16th century onwards. Despite previous speculation, no evidence of descent from Viking settlers has been found.[126]
150
+
151
+ Several thousand Greenlandic Inuit reside in Denmark proper. The majority of the population is Lutheran. Nearly all Greenlanders live along the fjords in the south-west of the main island, which has a relatively mild climate.[127] As of 2020, 18,326 people lived in Nuuk, the capital city. Greenland's warmest areas, such as the vegetated region around Narsarsuaq, are sparsely populated, whereas the majority of the population lives north of 64°N in colder coastal climates.
152
+
153
+
154
+
155
+ Both Greenlandic (an Eskimo–Aleut language) and Danish have been used in public affairs since the establishment of home rule in 1979, and the majority of the population can speak both languages. Greenlandic became the sole official language in June 2009.[129] In practice, Danish is still widely used in the administration and in higher education, as well as remaining the first or only language for some Danish immigrants in Nuuk and other larger towns. Debate about the roles of Greenlandic and Danish in the country's future is ongoing. The orthography of Greenlandic was established in 1851[130] and revised in 1973. The country has a 100% literacy rate.[5]
156
+
157
+ A majority of the population speaks Greenlandic, most of them bilingually. It is spoken by about 50,000 people, making it the most widely spoken language of the Eskimo–Aleut family, with more speakers than all the other languages of the family combined.
158
+
159
+ Kalaallisut is the Greenlandic dialect of West Greenland, which has long been the most populous area of the island. This has led to its de facto status as the official "Greenlandic" language, although the northern dialect Inuktun remains spoken by 1,000 or so people around Qaanaaq, and the eastern dialect Tunumiisut by around 3,000.[131] Each of these dialects is almost unintelligible to speakers of the others, and some linguists consider them separate languages.[citation needed] A UNESCO report has labelled the other dialects as endangered, and measures are now being considered to protect the East Greenlandic dialects.[132]
160
+
161
+ About 12% of the population speaks Danish as a first or sole language, particularly Danish immigrants in Greenland, many of whom fill positions such as administrators, professionals, academics, or skilled tradesmen. While Greenlandic is dominant in all smaller settlements, part of the population of Inuit or mixed ancestry speaks Danish as a first language, especially in larger towns such as Nuuk and in the higher social strata, and most of the Inuit population speaks Danish as a second language. While one strategy aims at promoting Greenlandic in public life and education, developing its vocabulary and suitability for all complex contexts, this approach has opponents.[133]
162
+
163
+ English is another important language for Greenland, taught in schools from the first school year.[134]
164
+
165
+ Education is organised in a similar way to Denmark's. Primary school is mandatory and lasts ten years. Secondary schooling offers either vocational training or preparation for university. There is one university, the University of Greenland (Greenlandic: Ilisimatusarfik), in Nuuk. Many Greenlanders attend universities in Denmark or elsewhere.
166
+
167
+ Religion in Greenland (2010):[135][136]
168
+
169
+ The nomadic Inuit people were traditionally shamanistic, with a well-developed mythology primarily concerned with appeasing a vengeful and fingerless sea goddess who controlled the success of the seal and whale hunts.
170
+
171
+ The first Norse colonists worshipped the Norse gods, but Erik the Red's son Leif was converted to Christianity by King Olaf Tryggvason on a trip to Norway in 999 and sent missionaries back to Greenland. These swiftly established sixteen parishes, some monasteries, and a bishopric at Garðar.
172
+
173
+ Rediscovering these colonists and spreading ideas of the Protestant Reformation among them was one of the primary reasons for the Danish recolonization in the 18th century. Under the patronage of the Royal Mission College in Copenhagen, Norwegian and Danish Lutherans and German Moravian missionaries searched for the missing Norse settlements, but no Norse were found, and instead they began preaching to the Inuit. The principal figures in the Christianization of Greenland were Hans and Poul Egede and Matthias Stach. The New Testament was translated piecemeal from the time of the very first settlement on Kangeq Island, but the first translation of the whole Bible was not completed until 1900. An improved translation using the modern orthography was completed in 2000.[137]
174
+
175
+ Today, the major religion is Protestant Christianity, represented mainly by the Church of Denmark, which is Lutheran in orientation. While there are no official census data on religion in Greenland, the Bishop of Greenland Sofie Petersen[138] estimates that 85% of the Greenlandic population are members of her congregation.[139] The Church of Denmark is the established church through the Constitution of Denmark.[140]
176
+
177
+ The Roman Catholic minority is pastorally served by the Roman Catholic Diocese of Copenhagen. There are still Christian missionaries on the island, but mainly from charismatic movements proselytizing fellow Christians.[141]
178
+
179
+ The rate of suicide in Greenland is very high; as of 2010, Greenland had the highest suicide rate in the world.[142][143] Another significant social issue is a high rate of alcoholism.[144] Alcohol consumption in Greenland peaked in the 1980s, when it was twice as high as in Denmark, and by 2010 had fallen slightly below the average level of consumption in Denmark (which at the time was the 12th-highest in the world, but has since fallen). At the same time, however, alcohol prices are far higher, meaning that consumption has a large social impact.[145][146] The prevalence of HIV/AIDS used to be high in Greenland and peaked in the 1990s, when the fatality rate was also relatively high. Through a number of initiatives, the prevalence (along with the fatality rate, thanks to efficient treatment) has fallen and is now low, c. 0.13%,[147][148] below that of most other countries. In recent decades, unemployment rates have generally been somewhat above those in Denmark;[149] in 2017, the rate was 6.8% in Greenland,[150] compared with 5.6% in Denmark.[151]
180
+
181
+ Today Greenlandic culture is a blend of traditional Inuit (Kalaallit) and Scandinavian culture. Inuit, or Kalaallit, culture has a strong artistic tradition, dating back thousands of years. The Kalaallit are known for a form of carved figures called tupilak, or "spirit objects". Traditional art-making practices thrive in the Ammassalik area.[152] Sperm whale ivory remains a valued medium for carving.[153]
182
+
183
+ Greenland also has a successful, albeit small, music culture. Some popular Greenlandic bands and artists include Sume (classic rock), Chilly Friday (rock), Nanook (rock), Siissisoq (rock), Nuuk Posse (hip hop) and Rasmus Lyberth (folk), who performed in Greenlandic in the Danish national final for the 1979 Eurovision Song Contest. The singer-songwriter Simon Lynge is the first musical artist from Greenland to have an album released across the United Kingdom and to perform at the UK's Glastonbury Festival. The music culture of Greenland also includes traditional Inuit music, largely revolving around singing and drums.
184
+
185
+ Sport is an important part of Greenlandic culture, as the population is generally quite active.[154] Popular sports include association football, track and field, handball and skiing. Handball is often referred to as the national sport,[155] and Greenland's men's national team was ranked among the top 20 in the world in 2001.
186
+
187
+ Greenland has excellent conditions for skiing, fishing, snowboarding, ice climbing and rock climbing, although mountain climbing and hiking are preferred by the general public. Although the environment is generally ill-suited for golf, there is a golf course in Nuuk.
188
+
189
+ The national dish of Greenland is suaasat. Meat from marine mammals, game, birds, and fish plays a large role in the Greenlandic diet. Due to the glacial landscape, most ingredients come from the ocean.[156] Spices are seldom used besides salt and pepper.[157] Greenlandic coffee is a "flaming" dessert coffee (set alight before serving) made with coffee, whiskey, Kahlúa, Grand Marnier, and whipped cream. It is stronger than the familiar Irish dessert coffee.[158]
190
+
191
+ Coordinates: 72°00′N 40°00′W
en/2301.html.txt ADDED
@@ -0,0 +1,191 @@
1
+
2
+
3
+ Location of Greenland in the Kingdom of Denmark (red and beige)
4
+
5
+ Greenland (Greenlandic: Kalaallit Nunaat, pronounced [kalaːɬit nunaːt]; Danish: Grønland, pronounced [ˈkʁɶnˌlænˀ]) is the world's largest island,[d] located between the Arctic and Atlantic oceans, east of the Canadian Arctic Archipelago. It is an autonomous territory[10] within the Kingdom of Denmark. Though physiographically a part of the continent of North America, Greenland has been politically and culturally associated with Europe (specifically Norway and Denmark, the colonial powers, as well as the nearby island of Iceland) for more than a millennium.[11] The majority of its residents are Inuit, whose ancestors migrated from Alaska through Northern Canada, gradually settling across the island by the 13th century.[12]
6
+
7
+ Nowadays, the population is largely concentrated on the southwest coast, while the rest of the island is sparsely populated. Greenland is divided into five municipalities – Sermersooq, Kujalleq, Qeqertalik, Qeqqata, and Avannaata. It has two unincorporated areas – the Northeast Greenland National Park and the Thule Air Base. The latter, while under Danish control, is administered by the United States Air Force.[13] Three-quarters of Greenland is covered by the only permanent ice sheet outside Antarctica. With a population of 56,081 (2020),[6] it is the least densely populated territory in the world.[14] About a third of the population lives in Nuuk, the capital and largest city; the second largest city in terms of population is Sisimiut, 320 kilometres (200 mi) north of Nuuk. The Arctic Umiaq Line ferry acts as a lifeline for western Greenland, connecting the various cities and settlements.
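The "least densely populated" claim can be checked with simple arithmetic. A minimal sketch, using the population and area figures quoted in this article (the script and its variable names are illustrative, not from any source):

```python
# Population density of Greenland from the figures quoted in this article.
population = 56_081      # January 2020 estimate
area_km2 = 2_166_086     # total area, including offshore minor islands

density = population / area_km2
print(f"{density:.3f} inhabitants per km^2")  # ~0.026/km^2
```

At roughly 0.026 inhabitants per square kilometre, the figure is far below that of any other country or major territory.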
8
+
9
+ Greenland has been inhabited at intervals over at least the last 4,500 years by Arctic peoples whose forebears migrated there from what is now Canada.[15][16] Norsemen settled the uninhabited southern part of Greenland beginning in the 10th century, having previously settled Iceland. These Norsemen would later set sail from Greenland and Iceland, with Leif Erikson becoming the first known European to reach North America nearly 500 years before Columbus reached the Caribbean islands. Inuit peoples arrived in the 13th century. Though under continuous influence of Norway and Norwegians, Greenland was not formally under the Norwegian crown until 1261. The Norse colonies disappeared in the late 15th century, after Norway was hit by the Black Death and entered a severe decline. Soon after their demise, beginning in 1499, the Portuguese briefly explored and claimed the island, naming it Terra do Lavrador (later applied to Labrador in Canada).[17]
10
+
11
+ In the early 17th century, Danish explorers reached Greenland again. To strengthen trading and power, Denmark–Norway affirmed sovereignty over the island. Because of Norway's weak status, it lost sovereignty over Greenland in 1814 when the union was dissolved. Greenland became Danish in 1814, and was fully integrated into the Danish state in 1953 under the Constitution of Denmark. In 1973, Greenland joined the European Economic Community with Denmark. However, in a referendum in 1982, a majority of the population voted for Greenland to withdraw from the EEC, a withdrawal effected in 1985. Greenland contains the world's largest and most northerly national park, Northeast Greenland National Park (Kalaallit Nunaanni nuna eqqissisimatitaq). Established in 1974, and expanded to its present size in 1988, it protects 972,001 square kilometres (375,292 sq mi) of the interior and northeastern coast of Greenland and is bigger than all but twenty-nine countries in the world.
12
+
13
+ In 1979, Denmark granted home rule to Greenland; in 2008, Greenlanders voted in favor of the Self-Government Act, which transferred more power from the Danish government to the local Greenlandic government. Under the new structure, in effect since 21 June 2009,[18] Greenland can gradually assume responsibility for policing, judicial system, company law, accounting, and auditing; mineral resource activities; aviation; law of legal capacity, family law and succession law; aliens and border controls; the working environment; and financial regulation and supervision, while the Danish government retains control of foreign affairs and defence. It also retains control of monetary policy, providing an initial annual subsidy of DKK 3.4 billion, which is planned to diminish gradually over time. Greenland expects to grow its economy based on increased income from the extraction of natural resources. The capital, Nuuk, held the 2016 Arctic Winter Games. At 70%, Greenland has one of the highest shares of renewable energy in the world, mostly coming from hydropower.[19][additional citation(s) needed]
14
+
15
+ The early Norse settlers named the island Greenland. In the Icelandic sagas, the Norwegian-born Icelander Erik the Red was said to have been exiled from Iceland for manslaughter. Along with his extended family and his thralls (i.e. slaves or serfs), he set out in ships to explore an icy land known to lie to the northwest. After finding a habitable area and settling there, he named it Grœnland (translated as "Greenland"), supposedly in the hope that the pleasant name would attract settlers.[20][21][22] The Saga of Erik the Red states: "In the summer, Erik left to settle in the country he had found, which he called Greenland, as he said people would be attracted there if it had a favorable name."[23]
16
+
17
+ The name of the country in the indigenous Greenlandic language is Kalaallit Nunaat ("land of the Kalaallit").[24] The Kalaallit are the indigenous Greenlandic Inuit people who inhabit the country's western region.
18
+
19
+ In prehistoric times, Greenland was home to several successive Paleo-Eskimo cultures known today primarily through archaeological finds. The earliest entry of the Paleo-Eskimo into Greenland is thought to have occurred about 2500 BC. From around 2500 BC to 800 BC, southern and western Greenland were inhabited by the Saqqaq culture. Most finds of Saqqaq-period archaeological remains have been around Disko Bay, including the site of Saqqaq, after which the culture is named.[25][26]
20
+
21
+ From 2400 BC to 1300 BC, the Independence I culture existed in northern Greenland. It was a part of the Arctic small tool tradition.[27][28][29] Settlements, including Deltaterrasserne, started to appear.
22
+
23
+ Around 800 BC, the Saqqaq culture disappeared and the Early Dorset culture emerged in western Greenland and the Independence II culture in northern Greenland.[30] The Dorset culture was the first culture to extend throughout the Greenlandic coastal areas, both on the west and east coasts. It lasted until the Thule culture fully took hold around 1500 AD. The Dorset culture population lived primarily from hunting whales and caribou.[31][32][33][34]
24
+
25
+ From 986, Greenland's west coast was settled by Icelanders and Norwegians, through a contingent of 14 boats led by Erik the Red. They formed three settlements – known as the Eastern Settlement, the Western Settlement and the Middle Settlement – on fjords near the southwesternmost tip of the island.[11][35] They shared the island with the late Dorset culture inhabitants who occupied the northern and western parts, and later with the Thule culture that entered from the north. Norse Greenlanders submitted to Norwegian rule in 1261 under the Kingdom of Norway (872–1397). Later the Kingdom of Norway entered into a personal union with Denmark in 1380, and from 1397 was a part of the Kalmar Union.[36]
26
+
27
+ The Norse settlements, such as Brattahlíð, thrived for centuries but disappeared sometime in the 15th century, perhaps at the onset of the Little Ice Age.[37] Apart from some runic inscriptions, no contemporary records or historiography survives from the Norse settlements. Medieval Norwegian sagas and historical works mention Greenland's economy as well as the bishops of Gardar and the collection of tithes. A chapter in the Konungs skuggsjá (The King's Mirror) describes Norse Greenland's exports and imports as well as grain cultivation.
28
+
29
+ Icelandic saga accounts of life in Greenland were composed in the 13th century and later, and do not constitute primary sources for the history of early Norse Greenland.[22] Modern understanding therefore mostly depends on the physical data from archeological sites. Interpretation of ice core and clam shell data suggests that between 800 and 1300, the regions around the fjords of southern Greenland experienced a relatively mild climate, with temperatures several degrees Celsius higher than usual in the North Atlantic,[38] with trees and herbaceous plants growing, and livestock being farmed. Barley was grown as a crop up to the 70th parallel.[39] What is verifiable is that the ice cores indicate Greenland has had dramatic temperature shifts many times over the past 100,000 years.[40] Similarly, the Icelandic Book of Settlements records famines during the winters, in which "the old and helpless were killed and thrown over cliffs".[38]
30
+
31
+ These Icelandic settlements vanished during the 14th and early 15th centuries.[41] The demise of the Western Settlement coincides with a decrease in summer and winter temperatures. A study of North Atlantic seasonal temperature variability during the Little Ice Age showed a significant decrease in maximum summer temperatures beginning in the late 13th century to early 14th century – as much as 6 to 8 °C (11 to 14 °F) lower than modern summer temperatures.[42] The study also found that the lowest winter temperatures of the last 2000 years occurred in the late 14th century and early 15th century. The Eastern Settlement was likely abandoned in the early to mid-15th century, during this cold period.
32
+
33
+ Theories drawn from archeological excavations at Herjolfsnes in the 1920s suggest that the condition of human bones from this period indicates that the Norse population was malnourished, possibly because of soil erosion resulting from the Norsemen's destruction of natural vegetation in the course of farming, turf-cutting, and wood-cutting. Malnutrition may also have resulted from widespread deaths due to pandemic plague;[43] the decline in temperatures during the Little Ice Age; and armed conflicts with the Skrælings (the Norse word for the Inuit, meaning "wretches"[37]). In 1379, the Inuit attacked the Eastern Settlement, killed 18 men and captured two boys and a woman.[37] Recent archeological studies somewhat challenge the general assumption that the Norse colonisation had a dramatic negative environmental effect on the vegetation; data support traces of a possible Norse soil-amendment strategy.[44] More recent evidence suggests that the Norse, who never numbered more than about 2,500, gradually abandoned the Greenland settlements over the 1400s as walrus ivory,[45] the most valuable export from Greenland, decreased in price due to competition from other sources of higher-quality ivory, and that there is actually little evidence of starvation or other difficulties.[46]
34
+
35
+ Other theories about the disappearance of the Norse settlement have been proposed.
36
+
37
+ The Thule people are the ancestors of the current Greenlandic population. No genes from the Paleo-Eskimos have been found in the present population of Greenland.[48] The Thule culture migrated eastward from what is now known as Alaska around 1000 AD, reaching Greenland around 1300. The Thule culture was the first to introduce to Greenland such technological innovations as dog sleds and toggling harpoons.
38
+
39
+ In 1500, King Manuel I of Portugal sent Gaspar Corte-Real to Greenland in search of a Northwest Passage to Asia which, according to the Treaty of Tordesillas, was part of Portugal's sphere of influence. In 1501, Corte-Real returned with his brother, Miguel Corte-Real. Finding the sea frozen, they headed south and arrived in Labrador and Newfoundland. Upon the brothers' return to Portugal, the cartographic information supplied by Corte-Real was incorporated into a new map of the world which was presented to Ercole I d'Este, Duke of Ferrara, by Alberto Cantino in 1502. The Cantino planisphere, made in Lisbon, accurately depicts the southern coastline of Greenland.[49]
40
+
41
+ In 1605–1607, King Christian IV of Denmark sent a series of expeditions to Greenland and Arctic waterways to locate the lost eastern Norse settlement and assert Danish sovereignty over Greenland. The expeditions were mostly unsuccessful, partly due to leaders who lacked experience with the difficult arctic ice and weather conditions, and partly because the expedition leaders were given instructions to search for the Eastern Settlement on the east coast of Greenland just north of Cape Farewell, which is almost inaccessible due to southward drifting ice. The pilot on all three trips was English explorer James Hall.
42
+
43
+ After the Norse settlements died off, Greenland came under the de facto control of various Inuit groups, but the Danish government never forgot or relinquished the claims to Greenland that it had inherited from the Norse. When it re-established contact with Greenland in the early 17th century, Denmark asserted its sovereignty over the island. In 1721, a joint mercantile and clerical expedition led by Danish-Norwegian missionary Hans Egede was sent to Greenland, not knowing whether a Norse civilization remained there. This expedition is part of the Dano-Norwegian colonization of the Americas. After 15 years in Greenland, Hans Egede left his son Paul Egede in charge of the mission there and returned to Denmark, where he established a Greenland Seminary. This new colony was centred at Godthåb ("Good Hope") on the southwest coast. Gradually, Greenland was opened up to Danish merchants, and closed to those from other countries.
44
+
45
+ When the union between the crowns of Denmark and Norway was dissolved in 1814, the Treaty of Kiel severed Norway's former colonies and left them under the control of the Danish monarch. Norway occupied then-uninhabited eastern Greenland as Erik the Red's Land in July 1931, claiming that it constituted terra nullius. Norway and Denmark agreed to submit the matter in 1933 to the Permanent Court of International Justice, which decided against Norway.[50]
46
+
47
+ Greenland's connection to Denmark was severed on 9 April 1940, early in World War II, after Denmark was occupied by Nazi Germany. On 8 April 1941, the United States occupied Greenland to defend it against a possible invasion by Germany.[51] The United States occupation of Greenland continued until 1945. Greenland was able to buy goods from the United States and Canada by selling cryolite from the mine at Ivittuut. The major air bases were Bluie West-1 at Narsarsuaq and Bluie West-8 at Søndre Strømfjord (Kangerlussuaq), both of which are still used as Greenland's major international airports. Bluie was the military code name for Greenland.
48
+
49
+ During this war, the system of government changed: Governor Eske Brun ruled the island under a law of 1925 that allowed governors to take control under extreme circumstances; Governor Aksel Svane was transferred to the United States to lead the commission to supply Greenland. The Danish Sirius Patrol guarded the northeastern shores of Greenland in 1942 using dogsleds. They detected several German weather stations and alerted American troops, who destroyed the facilities. After the collapse of the Third Reich, Albert Speer briefly considered escaping in a small aeroplane to hide out in Greenland, but changed his mind and decided to surrender to the United States Armed Forces.[52]
50
+
51
+ Greenland had been a protected and very isolated society until 1940. The Danish government had maintained a strict monopoly on Greenlandic trade, allowing only small-scale trade with Scottish whalers. In wartime Greenland developed a sense of self-reliance through self-government and independent communication with the outside world. Despite this change, in 1946 a commission including the highest Greenlandic council, the Landsrådene, recommended patience and no radical reform of the system. Two years later, the first step towards a change of government was initiated when a grand commission was established. A final report (G-50) was presented in 1950: Greenland was to be a modern welfare state with Denmark as sponsor and example. In 1953, Greenland was made an equal part of the Danish Kingdom. Home rule was granted in 1979.
52
+
53
+ Following World War II, the United States developed a geopolitical interest in Greenland, and in 1946 it offered to buy the island from Denmark for $100,000,000. Denmark refused to sell.[53][54] This echoed an earlier interest by Secretary of State William H. Seward, who in 1867 worked with former senator Robert J. Walker to explore the possibility of buying Greenland and perhaps Iceland; opposition in Congress ended this project.[55] In the 21st century, the United States, according to WikiLeaks, remains interested in investing in the resource base of Greenland and in tapping hydrocarbons off the Greenlandic coast.[56][57] In August 2019, the American president Donald Trump again proposed to buy the territory, prompting premier Kim Kielsen to issue the statement, "Greenland is not for sale and cannot be sold, but Greenland is open for trade and cooperation with other countries – including the United States."[58]
54
+
55
+ In 1950, Denmark agreed to allow the US to reestablish Thule Air Base in Greenland; it was greatly expanded between 1951 and 1953 as part of a unified NATO Cold War defense strategy. The local population of three nearby villages was moved more than 100 kilometres (62 mi) away in the winter. The United States tried to construct a subterranean network of secret nuclear missile launch sites in the Greenlandic ice cap, named Project Iceworm. It managed this project from Camp Century from 1960 to 1966 before abandoning it as unworkable.[59] The Danish government did not become aware of the program's mission until 1997, when they discovered it while looking for records related to the crash of a nuclear-equipped B-52 bomber at Thule in 1968.[60]
56
+
57
+ With the 1953 Danish constitution, Greenland's colonial status ended as the island was incorporated into the Danish realm as an amt (county). Danish citizenship was extended to Greenlanders. Danish policies toward Greenland consisted of a strategy of cultural assimilation – or de-Greenlandification. During this period, the Danish government promoted the exclusive use of the Danish language in official matters, and required Greenlanders to go to Denmark for their post-secondary education. Many Greenlandic children grew up in boarding schools in southern Denmark, and a number lost their cultural ties to Greenland. While the policies "succeeded" in the sense of shifting Greenlanders from being primarily subsistence hunters into being urbanized wage earners, the Greenlandic elite began to reassert a Greenlandic cultural identity. A movement developed in favour of independence, reaching its peak in the 1970s.[61] As a consequence of political complications in relation to Denmark's entry into the European Common Market in 1972, Denmark began to seek a different status for Greenland, resulting in the Home Rule Act of 1979.
58
+
59
+ This gave Greenland limited autonomy with its own legislature taking control of some internal policies, while the Parliament of Denmark maintained full control of external policies, security, and natural resources. The law came into effect on 1 May 1979. The Queen of Denmark, Margrethe II, remains Greenland's head of state. In 1985, Greenland left the European Economic Community (EEC) upon achieving self-rule, as it did not agree with the EEC's commercial fishing regulations and an EEC ban on seal skin products.[62] Greenland voters approved a referendum on greater autonomy on 25 November 2008.[63][64] According to one study, the 2008 vote created what "can be seen as a system between home rule and full independence."[65]
60
+
61
+ On 21 June 2009, Greenland gained self-rule with provisions for assuming responsibility for self-government of judicial affairs, policing, and natural resources. Also, Greenlanders were recognized as a separate people under international law.[66] Denmark maintains control of foreign affairs and defence matters. Denmark upholds the annual block grant of 3.2 billion Danish kroner, but as Greenland begins to collect revenues of its natural resources, the grant will gradually be diminished. This is generally considered to be a step toward eventual full independence from Denmark.[67] Greenlandic was declared the sole official language of Greenland at the historic ceremony.[2][4][68][69][70]
62
+
63
+ Greenland is the world's largest non-continental island[71] and the third largest area in North America after Canada and the United States.[72] It is between latitudes 59° and 83°N, and longitudes 11° and 74°W. Greenland is bordered by the Arctic Ocean to the north, the Greenland Sea to the east, the North Atlantic Ocean to the southeast, the Davis Strait to the southwest, Baffin Bay to the west, the Nares Strait and Lincoln Sea to the northwest. The nearest countries are Canada, to the west and southwest across Nares Strait and Baffin Bay; and Iceland, southeast of Greenland in the Atlantic Ocean. Greenland also contains the world's largest national park, and it is the largest dependent territory by area in the world, as well as the fourth largest country subdivision in the world, after Sakha Republic in Russia, Australia's state of Western Australia, and Russia's Krasnoyarsk Krai, and the largest in North America.
64
+
65
+ The average daily temperature of Nuuk varies over the seasons from −5.1 to 9.9 °C (23 to 50 °F).[73] The total area of Greenland is 2,166,086 km2 (836,330 sq mi) (including other offshore minor islands), of which the Greenland ice sheet covers 1,755,637 km2 (677,855 sq mi) (81%) and has a volume of approximately 2,850,000 km3 (680,000 cu mi).[74] The highest point on Greenland is Gunnbjørn Fjeld, at 3,700 m (12,139 ft), in the Watkins Range (East Greenland mountain range). The majority of Greenland, however, is less than 1,500 m (4,921 ft) in elevation.
66
+
67
+ The weight of the ice sheet has depressed the central land area to form a basin lying more than 300 m (984 ft) below sea level,[75][76] while elevations rise suddenly and steeply near the coast.[77]
68
+
69
+ The ice flows generally to the coast from the centre of the island. A survey led by French scientist Paul-Emile Victor in 1951 concluded that, under the ice sheet, Greenland is composed of three large islands.[78] This is disputed, but if it is so, they would be separated by narrow straits, reaching the sea at Ilulissat Icefjord, at Greenland's Grand Canyon and south of Nordostrundingen.
70
+
71
+ All towns and settlements of Greenland are situated along the ice-free coast, with the population being concentrated along the west coast. The northeastern part of Greenland is not part of any municipality, but it is the site of the world's largest national park, Northeast Greenland National Park.[79]
72
+
73
+ At least four scientific expedition stations and camps have been established on the ice sheet in the ice-covered central part of Greenland (indicated as pale blue in the adjacent map): Eismitte, North Ice, North GRIP Camp and The Raven Skiway. A year-round station, Summit Camp, was established on the ice sheet in 1989. The radio station Jørgen Brønlund Fjord was, until 1950, the northernmost permanent outpost in the world.
74
+
75
+ The extreme north of Greenland, Peary Land, is not covered by an ice sheet, because the air there is too dry to produce snow, which is essential in the production and maintenance of an ice sheet. If the Greenland ice sheet were to melt away completely, the world's sea level would rise by more than 7 m (23 ft).[80]
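The "more than 7 m" figure can be sanity-checked against the ice-sheet volume quoted earlier. A minimal sketch, assuming textbook values for glacier-ice density and global ocean area (neither is given in the article) and that meltwater spreads evenly over a fixed ocean surface:

```python
# Sea-level equivalent of the full Greenland ice sheet.
ICE_VOLUME_KM3 = 2.85e6    # ~2,850,000 km^3, quoted earlier in the article
ICE_DENSITY = 917.0        # kg/m^3, typical glacier ice (assumed)
WATER_DENSITY = 1000.0     # kg/m^3, fresh meltwater (assumed)
OCEAN_AREA_KM2 = 3.618e8   # approximate global ocean surface area (assumed)

# Convert ice volume to liquid-water volume, then spread it over the ocean.
water_equivalent_km3 = ICE_VOLUME_KM3 * ICE_DENSITY / WATER_DENSITY
rise_m = water_equivalent_km3 / OCEAN_AREA_KM2 * 1000.0  # km -> m
print(f"{rise_m:.1f} m")   # ~7.2 m, consistent with "more than 7 m (23 ft)"
```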
76
+
77
+ In 2003, a small island, 35 by 15 metres (115 by 49 feet), was discovered by arctic explorer Dennis Schmitt and his team at the coordinates of 83-42. Whether this island is permanent has not yet been confirmed. If it is, it would be the northernmost known permanent land on Earth.
78
+
79
+ In 2007, the existence of a new island was announced. Named "Uunartoq Qeqertaq" (English: Warming Island), this island has always been present off the coast of Greenland, but was covered by a glacier. This glacier was discovered in 2002 to be shrinking rapidly, and by 2007 had completely melted away, leaving the exposed island.[81] The island was named Place of the Year by the Oxford Atlas of the World in 2007.[82] Ben Keene, the atlas's editor, commented: "In the last two or three decades, global warming has reduced the size of glaciers throughout the Arctic and earlier this year, news sources confirmed what climate scientists already knew: water, not rock, lay beneath this ice bridge on the east coast of Greenland. More islets are likely to appear as the sheet of frozen water covering the world's largest island continues to melt".[83] Some controversy surrounds the history of the island, specifically over whether the island might have been revealed during a brief warm period in Greenland during the mid-20th century.[84]
80
+
81
+ Between 1989 and 1993, US and European climate researchers drilled into the summit of Greenland's ice sheet, obtaining a pair of 3 km (1.9 mi) long ice cores. Analysis of the layering and chemical composition of the cores has provided a revolutionary new record of climate change in the Northern Hemisphere going back about 100,000 years and illustrated that the world's weather and temperature have often shifted rapidly from one seemingly stable state to another, with worldwide consequences.[85] The glaciers of Greenland are also contributing to a rise in the global sea level faster than was previously believed.[86] Between 1991 and 2004, monitoring of the weather at one location (Swiss Camp) showed that the average winter temperature had risen almost 6 °C (11 °F).[87] Other research has shown that higher snowfalls from the North Atlantic oscillation caused the interior of the ice cap to thicken by an average of 6 cm (2.4 in) per year between 1994 and 2005.[88]
82
+
83
+ The 1,310-metre (4,300 ft) Qaqugdluit mountain land on the south side of the Nuussuaq peninsula, 50 kilometres (31 miles) west of the Greenland inland ice at 70°7′50″N 51°44′30″W, is an example of the many mountainous areas of west Greenland. Up to 1979 (Stage 0) it showed postglacial glacier stages dating back about 7,000–10,000 years.[89][90] In 1979 the glacier tongues ended – depending on the extent and height of the glacier-nourishing area – between 140 and 660 metres (460 and 2,170 feet) above sea level. The climatic glacier snowline (ELA) was at about 800 metres (2,600 feet). The snowline of the oldest (VII) of the three Holocene glacier stages (V–VII) was about 230 metres (750 feet) lower, i.e. at about 570 metres (1,870 feet).[91] The four youngest glacier stages (IV–I) can be classified as belonging to the global glacier advances in the years 1811 to 1850 and 1880 to 1900 ("Little Ice Age"), 1910 to 1930, 1948 and 1953.[90] Their snowlines rose step by step up to the level of 1979. The current snowline (Stage 0) is nearly unchanged. During the oldest postglacial Stage VII, an ice-stream network of valley glaciers joined each other and completely covered the land, nourished by high-lying plateau glaciers and local ice caps. However, owing to the rise of the snowline by about 230 metres (750 feet) – corresponding to a warming of about 1.5 °C (2.7 °F) since 1979 – there is now only plateau glaciation, with small glacier tongues that hardly reach the main valley bottoms.[91] 96 polar scientists of the IMBIE research community from 50 scientific bodies, led by Professor Andrew Shepherd of the University of Leeds, produced the most complete study of the 1992–2018 period. Its findings show that Greenland lost 3.8 trillion tonnes of ice over that period, enough to raise sea levels by almost 11 mm. The rate of ice loss has increased from an average of 33 billion tonnes a year in the 1990s to 254 billion tonnes a year in the last decade.[92]
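The mass-loss figures above convert to sea-level rise with the same kind of arithmetic: one tonne of meltwater occupies about one cubic metre, spread over the ocean surface. A minimal sketch, again assuming a fixed textbook ocean area (not given in the article):

```python
# Sea-level equivalent of the IMBIE mass-loss figures quoted above.
OCEAN_AREA_M2 = 3.618e14   # ~3.618e8 km^2 of ocean surface (assumed)

def sea_level_mm(ice_mass_tonnes: float) -> float:
    """Sea-level rise (mm) if this ice mass melts into the ocean."""
    water_volume_m3 = ice_mass_tonnes   # 1 t of water ~ 1 m^3
    return water_volume_m3 / OCEAN_AREA_M2 * 1000.0  # m -> mm

print(f"{sea_level_mm(3.8e12):.1f} mm total 1992-2018")  # ~10.5 mm, "almost 11 mm"
print(f"{sea_level_mm(33e9):.2f} mm/yr in the 1990s")    # ~0.09 mm per year
print(f"{sea_level_mm(254e9):.2f} mm/yr recently")       # ~0.70 mm per year
```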
84
+
85
+ There are approximately 700 known species of insects in Greenland, which is low compared with other countries (over one million species have been described worldwide). The sea is rich in fish and invertebrates, especially in the milder West Greenland Current; a large part of the Greenland fauna is associated with marine-based food chains, including large colonies of seabirds. The few native land mammals in Greenland include the polar bear, reindeer (introduced by Europeans), arctic fox, arctic hare, musk ox, collared lemming, ermine, and arctic wolf. The last four are found naturally only in East Greenland, having immigrated from Ellesmere Island. There are dozens of species of seals and whales along the coast. Land fauna consists predominantly of animals which have spread from North America or, in the case of many birds and insects, from Europe. There are no native or free-living reptiles or amphibians on the island.[93]
86
+
87
+ Phytogeographically, Greenland belongs to the Arctic province of the Circumboreal Region within the Boreal Kingdom. The island is sparsely vegetated; plant life consists mainly of grassland and small shrubs, which are regularly grazed by livestock. The most common tree native to Greenland is the European white birch (Betula pubescens), which occurs along with gray-leaf willow (Salix glauca), rowan (Sorbus aucuparia), common juniper (Juniperus communis) and other smaller trees, mainly willows.
88
+
89
+ Greenland's flora consists of about 500 species of "higher" plants, i.e. flowering plants, ferns, horsetails and lycophytes. Of the other groups, the lichens are the most diverse, with about 950 species; there are 600–700 species of fungi, and mosses and other bryophytes are also found. Most of Greenland's higher plants have circumpolar or circumboreal distributions; only a dozen species of saxifrage and hawkweed are endemic. A few plant species, such as cow vetch, were introduced by the Norsemen.
90
+
91
+ The terrestrial vertebrates of Greenland include the Greenland dog, which was introduced by the Inuit, as well as species descended from animals imported by Europeans, such as Greenlandic sheep, goats, cattle, reindeer, horses, chickens and sheepdogs.[citation needed] Marine mammals include the hooded seal (Cystophora cristata) as well as the grey seal (Halichoerus grypus).[94] Whales frequently pass very close to Greenland's shores in the late summer and early autumn. Whale species include the beluga whale, blue whale, Greenland whale, fin whale, humpback whale, minke whale, narwhal, pilot whale and sperm whale.[95]
92
+
93
+ As of 2009, 269 species of fish from over 80 different families are known from the waters surrounding Greenland. Almost all are marine species, with only a few found in fresh water, notably Atlantic salmon and charr.[96] Fishing is the primary industry of Greenland's economy, accounting for the majority of the country's total exports.[97]
94
+
95
+ Birds, particularly seabirds, are an important part of Greenland's animal life; breeding populations of auks, puffins, skuas, and kittiwakes are found on steep mountainsides.[citation needed] Greenland's ducks and geese include common eider, long-tailed duck, king eider, white-fronted goose, pink-footed goose and barnacle goose. Breeding migratory birds include the snow bunting, Lapland bunting, ringed plover, red-throated loon and red-necked phalarope. Non-migratory land birds include the arctic redpoll, ptarmigan, short-eared owl, snowy owl, gyrfalcon and white-tailed eagle.[93]
96
+
97
+ The Kingdom of Denmark is a constitutional monarchy, in which Queen Margrethe II is the head of state. The monarch officially retains executive power and presides over the Council of State (privy council).[98][99] However, following the introduction of a parliamentary system of government, the duties of the monarch have since become strictly representative and ceremonial,[100] such as the formal appointment and dismissal of the prime minister and other ministers in the executive government. The monarch is not answerable for his or her actions, and the monarch's person is sacrosanct.[101]
98
+
99
+ The party system is dominated by the social-democratic Forward Party, and the democratic socialist Inuit Community Party, both of which broadly argue for greater independence from Denmark. While the 2009 election saw the unionist Democrat Party (two MPs) decline greatly, the 2013 election consolidated the power of the two main parties at the expense of the smaller groups, and saw the eco-socialist Inuit Party elected to the Parliament for the first time. The dominance of the Forward and Inuit Community parties began to wane after the snap 2014 and 2018 elections.
100
+
101
+ The non-binding 2008 referendum on self-governance favoured increased self-governance by 21,355 votes to 6,663.
102
+
103
+ In 1985, Greenland left the European Economic Community (EEC), unlike Denmark, which remains a member. The EEC later became the European Union (EU), renamed and expanded in scope in 1992. Greenland retains some ties with the EU via Denmark; however, EU law largely does not apply to Greenland except in the area of trade.
104
+
105
+ Greenland's head of state is Queen Margrethe II of Denmark. The Queen's government in Denmark appoints a high commissioner (Rigsombudsmand) to represent it on the island. The commissioner is Mikaela Engell.
106
+
107
+ Greenlanders elect two representatives to the Folketing, Denmark's parliament, out of a total of 179. The representatives are Aleqa Hammond of the Siumut Party and Aaja Chemnitz Larsen of the Inuit Community Party.[102]
108
+
109
+ Greenland also has its own Parliament, which has 31 members. The government is the Naalakkersuisut, whose members are appointed by the premier. The head of government is the premier, usually the leader of the majority party in Parliament. The premier is Kim Kielsen of the Siumut party.
110
+
111
+ Several American and Danish military bases are located in Greenland, including Thule Air Base, which is home to the United States Space Force's 21st Space Wing's global network of sensors providing missile warning, space surveillance and space control to North American Aerospace Defense Command (NORAD).[103]
112
+
113
+ In 1995, a political scandal erupted in Denmark after a report revealed the government had given tacit permission for nuclear weapons to be located in Greenland, in contravention of Denmark's 1957 nuclear-free zone policy.[104][60] The United States built a secret nuclear-powered base, called Camp Century, in the Greenland ice sheet.[105] On 21 January 1968, a B-52G with four nuclear bombs aboard as part of Operation Chrome Dome crashed on the ice of North Star Bay while attempting an emergency landing at Thule Air Base.[106] The resulting fire caused extensive radioactive contamination.[107] One of the H-bombs remains lost.[108][109]
114
+
115
+ Formerly consisting of three counties comprising a total of 18 municipalities, Greenland abolished these in 2009 and has since been divided into large territories known as "municipalities" (Greenlandic: kommuneqarfiit, Danish: kommuner): Sermersooq ("Much Ice") around the capital Nuuk and also including all East Coast communities; Kujalleq ("South") around Cape Farewell; Qeqqata ("Centre") north of the capital along the Davis Strait; Qeqertalik ("The one with islands") surrounding Disko Bay; and Avannaata ("Northern") in the northwest, the latter two having come into being when the Qaasuitsup municipality, one of the original four, was partitioned in 2018. The northeast of the island forms the unincorporated Northeast Greenland National Park. Thule Air Base is also unincorporated, an enclave within Avannaata municipality administered by the United States Air Force. During its construction, there were as many as 12,000 American residents, but in recent years the number has been below 1,000.
116
+
117
+ The Greenlandic economy is highly dependent on fishing. Fishing accounts for more than 90% of Greenland's exports.[110] The shrimp and fish industry is by far the largest income earner.[5]
118
+
119
+ Greenland is abundant in minerals.[110] Mining of ruby deposits began in 2007. Other mineral prospects are improving as prices increase. These include iron, uranium, aluminium, nickel, platinum, tungsten, titanium, and copper. Despite the resumption[when?] of several hydrocarbon and mineral exploration activities, it will take several years before hydrocarbon production can materialize. The state oil company Nunaoil was created to help develop the hydrocarbon industry in Greenland. The state company Nunamineral has been launched on the Copenhagen Stock Exchange to raise capital to expand the production of gold, which began in 2007.
120
+
121
+ Electricity has traditionally been generated by oil or diesel power plants, even though there is a large surplus of potential hydropower. There is a programme to build hydroelectric power plants; the first, and still the largest, is the Buksefjord hydroelectric power plant.
122
+
123
+ There are also plans to build a large aluminium smelter, using hydropower to create an exportable product. It is expected that much of the labour needed will be imported.[111]
124
+
125
+ The European Union has urged Greenland to restrict development of rare-earth projects by the People's Republic of China, as China accounts for 95% of the world's current supply. In early 2013, the Greenland government said that it had no plans to impose such restrictions.[112]
126
+
127
+ The public sector, including publicly owned enterprises and the municipalities, plays a dominant role in Greenland's economy. About half of government revenues come from grants from the Danish government, an important supplement to the gross domestic product (GDP). Gross domestic product per capita is comparable to that of the average European economy.
128
+
129
+ Greenland suffered an economic contraction in the early 1990s, but the economy has improved since 1993. The Greenland Home Rule Government (GHRG) has pursued a tight fiscal policy since the late 1980s, which has helped create surpluses in the public budget and low inflation. Since 1990, Greenland has registered a foreign-trade deficit following the closure of the last remaining lead and zinc mine that year. In 2017, new ruby deposits were discovered in Greenland, promising to bring new industry and a new export to the country[113] (see Gemstone industry in Greenland).
130
+
131
There is air transport both within Greenland and between the island and other nations. There is also scheduled boat traffic, but the long distances lead to long travel times and low frequency. There are virtually no roads between cities, because the coast has many fjords that would require ferry service to connect a road network. The only exception is a 5 km (3 mi) gravel road between Kangilinnguit and the now abandoned cryolite mining town of Ivittuut.[114] In addition, the lack of agriculture, forestry, and similar countryside activities has meant that very few country roads have been built.

Kangerlussuaq Airport (SFJ)[115] is the largest airport and the main aviation hub for international passenger transport. It serves both international and domestic airline-operated flights.[116] SFJ is far from the larger metropolitan areas, 317 km (197 mi) from the capital Nuuk, but connecting passenger services are available.[117] Greenland has no passenger railways.

Nuuk Airport (GOH),[118] the second-largest airport, is located just 6.0 km (3.7 mi) from the centre of the capital. GOH handles general aviation traffic and daily or regularly scheduled domestic flights within Greenland; it also serves international flights to Iceland as well as business and private aircraft.

Ilulissat Airport (JAV)[119] is a domestic airport that also serves international flights to Iceland. There are a total of 13 registered civil airports and 47 helipads in Greenland; most of them are unpaved and located in rural areas. The second-longest runway is at Narsarsuaq, a domestic airport with limited international service in south Greenland.

All civil aviation matters are handled by the Danish Transport Authority. Most airports, including Nuuk Airport, have short runways and can only be served by fairly small aircraft on fairly short flights. Kangerlussuaq Airport, around 100 kilometres (62 miles) inland from the west coast, is the major airport of Greenland and the hub for domestic flights. Intercontinental flights connect mainly to Copenhagen. Travel between international destinations (except Iceland) and any city in Greenland requires a plane change.

Air Iceland operates flights from Reykjavík to a number of airports in Greenland, and the company promotes the service as a day-trip option from Iceland for tourists.[120]

There are no direct flights to the United States or Canada, although there have been flights between Kangerlussuaq and Baltimore[121] and between Nuuk and Iqaluit,[122] which were cancelled because of too few passengers and financial losses.[123] An alternative route between Greenland and the United States or Canada is Air Iceland/Icelandair, with a plane change in Iceland.[124]

Sea passenger transport is served by several coastal ferries. Arctic Umiaq Line makes a single round trip per week, taking 80 hours in each direction.[125]

Cargo freight by sea to, from, and across Greenland is handled by the shipping company Royal Arctic Line, which provides trade and transport links between Greenland, Europe, and North America.

Greenland has a population of 56,081 (January 2020 estimate),[6] of whom 88% are Greenlandic Inuit (including those of mixed Danish and Inuit ancestry). The remaining 12% are of European descent, mainly Greenland Danes. A broad 2015 genetic study of Greenlanders found that modern-day Inuit in Greenland are direct descendants of the first Inuit pioneers of the Thule culture, with roughly 25% admixture from the European colonizers of the 16th century. Despite earlier speculation, no evidence of descent from Viking settlers has been found.[126]

Several thousand Greenlandic Inuit reside in Denmark proper. The majority of the population is Lutheran. Nearly all Greenlanders live along the fjords in the south-west of the main island, which has a relatively mild climate.[127] As of 2020, 18,326 people lived in Nuuk, the capital city. Greenland's warmest areas, such as the vegetated region around Narsarsuaq, are sparsely populated, whereas the majority of the population lives north of 64°N in colder coastal climates.

Both Greenlandic (an Eskimo–Aleut language) and Danish have been used in public affairs since the establishment of home rule in 1979, and the majority of the population can speak both languages. Greenlandic became the sole official language in June 2009.[129] In practice, Danish is still widely used in the administration and in higher education, and it remains the first or only language for some Danish immigrants in Nuuk and other larger towns. Debate about the roles of Greenlandic and Danish in the country's future is ongoing. The orthography of Greenlandic was established in 1851[130] and revised in 1973. The country has a 100% literacy rate.[5]

A majority of the population speaks Greenlandic, most of them bilingually. It is spoken by about 50,000 people, more than all the other languages of the Eskimo–Aleut family combined, making it the most widely spoken language of that family.

Kalaallisut is the Greenlandic dialect of West Greenland, which has long been the most populous area of the island. This has led to its de facto status as the official "Greenlandic" language, although the northern dialect Inuktun is still spoken by 1,000 or so people around Qaanaaq, and the eastern dialect Tunumiisut by around 3,000.[131] Each of these dialects is almost unintelligible to speakers of the others, and some linguists consider them separate languages.[citation needed] A UNESCO report has labelled the other dialects as endangered, and measures are now being considered to protect the East Greenlandic dialects.[132]

About 12% of the population speaks Danish as a first or sole language, particularly Danish immigrants in Greenland, many of whom fill positions such as administrators, professionals, academics, or skilled tradesmen. While Greenlandic is dominant in all smaller settlements, part of the population of Inuit or mixed ancestry, especially in towns, speaks Danish, and most of the Inuit population speaks Danish as a second language. In larger towns, especially Nuuk, and in the higher social strata, Danish speakers remain a large group. One strategy aims at promoting Greenlandic in public life and education by developing its vocabulary and suitability for all complex contexts, but this approach has its opponents.[133]

English is another important language for Greenland, taught in schools from the first school year.[134]

Education is organised in a similar way to Denmark's: primary school is mandatory for ten years, followed by secondary school offering either vocational training or preparation for university. There is one university, the University of Greenland (Greenlandic: Ilisimatusarfik), in Nuuk. Many Greenlanders attend universities in Denmark or elsewhere.

Religion in Greenland (2010):[135][136]

The nomadic Inuit people were traditionally shamanistic, with a well-developed mythology primarily concerned with appeasing a vengeful and fingerless sea goddess who controlled the success of the seal and whale hunts.

The first Norse colonists worshipped the Norse gods, but Erik the Red's son Leif was converted to Christianity by King Olaf Trygvesson on a trip to Norway in 999 and sent missionaries back to Greenland. These swiftly established sixteen parishes, some monasteries, and a bishopric at Garðar.

Rediscovering these colonists and spreading ideas of the Protestant Reformation among them was one of the primary reasons for the Danish recolonization in the 18th century. Under the patronage of the Royal Mission College in Copenhagen, Norwegian and Danish Lutherans and German Moravian missionaries searched for the missing Norse settlements, but no Norse were found, and instead they began preaching to the Inuit. The principal figures in the Christianization of Greenland were Hans and Poul Egede and Matthias Stach. The New Testament was translated piecemeal from the time of the very first settlement on Kangeq Island, but the first translation of the whole Bible was not completed until 1900. An improved translation using the modern orthography was completed in 2000.[137]

Today, the major religion is Protestant Christianity, represented mainly by the Church of Denmark, which is Lutheran in orientation. While there are no official census data on religion in Greenland, the Bishop of Greenland Sofie Petersen[138] estimates that 85% of the Greenlandic population are members of her congregation.[139] The Church of Denmark is the established church through the Constitution of Denmark.[140]

The Roman Catholic minority is pastorally served by the Roman Catholic Diocese of Copenhagen. There are still Christian missionaries on the island, but mainly from charismatic movements proselytizing fellow Christians.[141]

The rate of suicide in Greenland is very high: according to a 2010 census, Greenland has the highest suicide rate in the world.[142][143] Another significant social issue is a high rate of alcoholism.[144] Alcohol consumption in Greenland peaked in the 1980s, when it was twice as high as in Denmark, and by 2010 had fallen slightly below the average level of consumption in Denmark (which at the time was the 12th highest in the world, but has since fallen). However, alcohol prices are far higher in Greenland, meaning that consumption has a large social impact.[145][146] The prevalence of HIV/AIDS used to be high in Greenland and peaked in the 1990s, when the fatality rate was also relatively high. Through a number of initiatives, the prevalence (along with the fatality rate, thanks to efficient treatment) has fallen and is now low, at about 0.13%,[147][148] below that of most other countries. In recent decades, unemployment rates have generally been somewhat above those in Denmark;[149] in 2017, the rate was 6.8% in Greenland,[150] compared with 5.6% in Denmark.[151]

Today Greenlandic culture is a blend of traditional Inuit (Kalaallit) and Scandinavian culture. Inuit, or Kalaallit, culture has a strong artistic tradition dating back thousands of years. The Kalaallit are known for carved figures called tupilak, or "spirit objects". Traditional art-making practices thrive in the Ammassalik area.[152] Sperm whale ivory remains a valued medium for carving.[153]

Greenland also has a successful, albeit small, music culture. Some popular Greenlandic bands and artists include Sume (classic rock), Chilly Friday (rock), Nanook (rock), Siissisoq (rock), Nuuk Posse (hip hop) and Rasmus Lyberth (folk), who performed in Greenlandic at the Danish national final for the 1979 Eurovision Song Contest. The singer-songwriter Simon Lynge is the first musical artist from Greenland to have an album released across the United Kingdom and to perform at the UK's Glastonbury Festival. The music culture of Greenland also includes traditional Inuit music, largely revolving around singing and drums.

Sport is an important part of Greenlandic culture, as the population is generally quite active.[154] Popular sports include association football, track and field, handball and skiing. Handball is often referred to as the national sport,[155] and Greenland's men's national team was ranked among the top 20 in the world in 2001.

Greenland has excellent conditions for skiing, fishing, snowboarding, ice climbing and rock climbing, although mountain climbing and hiking are preferred by the general public. Although the environment is generally ill-suited for golf, there is a golf course in Nuuk.

The national dish of Greenland is suaasat. Meat from marine mammals, game, birds, and fish plays a large role in the Greenlandic diet. Due to the glacial landscape, most ingredients come from the ocean.[156] Spices are seldom used besides salt and pepper.[157] Greenlandic coffee is a "flaming" dessert coffee (set alight before serving) made with coffee, whiskey, Kahlúa, Grand Marnier, and whipped cream. It is stronger than the familiar Irish dessert coffee.[158]

Coordinates: 72°00′N 40°00′W / 72.000°N 40.000°W / 72.000; -40.000
en/2302.html.txt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
Colon usually refers to

Colon may also refer to: