Datasets: de-francophones committed
Commit 2626e30
1 Parent(s): 232d0c7
453f987f93f8823c53a12a080d128ae583751cc82c6c393b08a33b4d21a4f6ed
- en/4914.html.txt +334 -0
- en/4915.html.txt +71 -0
- en/4916.html.txt +71 -0
- en/4917.html.txt +81 -0
- en/4918.html.txt +101 -0
- en/4919.html.txt +1 -0
- en/492.html.txt +183 -0
- en/4920.html.txt +170 -0
- en/4921.html.txt +140 -0
- en/4922.html.txt +140 -0
- en/4923.html.txt +140 -0
- en/4924.html.txt +140 -0
- en/4925.html.txt +145 -0
- en/4926.html.txt +90 -0
- en/4927.html.txt +135 -0
- en/4928.html.txt +135 -0
- en/4929.html.txt +135 -0
- en/493.html.txt +183 -0
- en/4930.html.txt +147 -0
- en/4931.html.txt +67 -0
- en/4932.html.txt +76 -0
- en/4933.html.txt +118 -0
- en/4934.html.txt +157 -0
- en/4935.html.txt +157 -0
- en/4936.html.txt +170 -0
- en/4937.html.txt +170 -0
- en/4938.html.txt +264 -0
- en/4939.html.txt +195 -0
- en/494.html.txt +174 -0
- en/4940.html.txt +45 -0
- en/4941.html.txt +218 -0
- en/4942.html.txt +218 -0
- en/4943.html.txt +287 -0
- en/4944.html.txt +287 -0
- en/4945.html.txt +49 -0
- en/4946.html.txt +125 -0
- en/4947.html.txt +125 -0
- en/4948.html.txt +240 -0
- en/4949.html.txt +108 -0
- en/495.html.txt +258 -0
- en/4950.html.txt +108 -0
- en/4951.html.txt +145 -0
- en/4952.html.txt +75 -0
- en/4953.html.txt +75 -0
- en/4954.html.txt +75 -0
- en/4955.html.txt +0 -0
- en/4956.html.txt +53 -0
- en/4957.html.txt +144 -0
- en/4958.html.txt +139 -0
- en/4959.html.txt +143 -0
en/4914.html.txt
ADDED
@@ -0,0 +1,334 @@
Rafael "Rafa" Nadal Parera (Catalan: [rəf(ə)ˈɛl nəˈðal pəˈɾeɾə], Spanish: [rafaˈel naˈðal paˈɾeɾa];[2] born 3 June 1986) is a Spanish professional tennis player currently ranked world No. 2 in men's singles tennis by the Association of Tennis Professionals (ATP).[3]
Nadal has won 19 Grand Slam singles titles, the second-most in history for a male player, as well as a record 35 ATP Tour Masters 1000 titles, 21 ATP Tour 500 titles and the 2008 Olympic gold medal in singles. In addition, Nadal has held the world No. 1 ranking for a total of 209 weeks, including being the year-end No. 1 five times.[3] In majors, Nadal has won a record twelve French Open titles, four US Open titles, two Wimbledon titles and one Australian Open title, and won at least one Grand Slam every year for a record ten consecutive years (2005–2014). Nadal has won 85 career titles overall, including the most outdoor titles in the Open Era (83) and a record 59 titles on clay. With 81 consecutive wins on clay, Nadal holds the record for the longest single-surface win streak in the Open Era.
Nadal has been involved in five Davis Cup titles with Spain, and currently has a 29-win streak and 29–1 record in singles matches at the event. In 2010, at the age of 24, he became the seventh male player and the youngest of five in the Open Era to achieve the singles Career Grand Slam. Nadal is the second male player after Andre Agassi to complete the singles Career Golden Slam, as well as the second male player after Mats Wilander to have won at least two Grand Slams on all three surfaces (grass, hard court and clay). He has received the tour Sportsmanship Award three times and has been named the ATP Player of the Year five times and the ITF World Champion four times. In 2011, Nadal was named the Laureus World Sportsman of the Year.[4]
Rafael Nadal was born in Manacor, a town on the island of Mallorca in the Balearic Islands, Spain, to parents Ana María Parera Femenías and Sebastián Nadal Homar. His father is a businessman, owner of an insurance company, glass and window company Vidres Mallorca, and the restaurant, Sa Punta. Rafael has a younger sister, María Isabel. His uncle, Miguel Ángel Nadal, is a retired professional footballer, who played for RCD Mallorca, FC Barcelona, and the Spanish national team.[5] He idolized Barcelona striker Ronaldo as a child, and via his uncle got access to the Barcelona dressing room to have a photo with the Brazilian.[6] Recognizing in Rafael a natural talent, another uncle, Toni Nadal, a tennis coach, introduced him to the game when he was three years old.[7]
At age 8, Nadal won an under-12 regional tennis championship at a time when he was also a promising football player.[8] This made Toni Nadal intensify training, and it was at that time that his uncle encouraged Nadal to play left-handed for a natural advantage on the tennis court, after studying Nadal's then two-handed forehand stroke.[8]
At age 12, Nadal won the Spanish and European tennis titles in his age group, while also playing football.[8] Nadal's father made him choose between football and tennis so that his school work would not deteriorate entirely. Nadal said: "I chose tennis. Football had to stop straight away."[8]
When he was 14, the Spanish tennis federation requested that Nadal leave Mallorca and move to Barcelona to continue his tennis training. His family turned down this request, partly because they feared his education would suffer,[8] but also because Toni said that "I don't want to believe that you have to go to America, or other places to be a good athlete. You can do it from your home."[7] The decision to stay home meant less financial support from the federation; instead, Nadal's father covered the costs. In May 2001, he defeated former Grand Slam tournament champion Pat Cash in a clay-court exhibition match.[5]
Nadal turned professional at age 15,[9] and participated in two events on the ITF junior circuit. On 29 April 2002, at 15 years and 10 months, the world No. 762 Nadal won his first ATP match, defeating Ramón Delgado,[10] and became the ninth player in the Open Era to do so before the age of 16.[11]
In 2001, Nadal finished the year with a Challenger series record of 1–1 in singles with no titles or finals appearances. He did not participate in any doubles Challengers events. At ITF Futures, Nadal's record was 7–5 in singles and 1–2 in doubles, with no titles or finals appearances.[12]
In 2002, aged 16, Nadal reached the semifinals of the Boys' Singles tournament at Wimbledon, in his first ITF junior event.[13] In the same year, he helped Spain defeat the US in the final of the Junior Davis Cup in his second, and final, appearance on the ITF junior circuit.[13][14] Nadal's Challenger level record in 2002 was 4–2 in singles with no titles. He did not participate in any doubles Challengers events. Nadal finished the year with a Futures record of 40–9 in singles and 10–9 in doubles. He won 6 singles tournaments at this level, including 5 on clay and 1 on hard courts. He did not reach any doubles finals.[15][16]
Nadal also entered the clay-court Mallorca Open, part of the ATP International Series, at the end of April as a wildcard, where he participated in both singles and doubles. In singles, Nadal won his first ever ATP match, defeating Ramón Delgado in the Round of 32. He then was defeated in the Round of 16 by Olivier Rochus.[15] In doubles, Nadal and his partner, Bartolomé Salvá Vidal, were defeated in the first round by David Adams and Simon Aspelin.[17]
In 2003, Nadal won two Challenger titles and finished the year ranked No. 49. He won the ATP Newcomer of the Year Award. In his Wimbledon debut in 2003, he became the youngest man to reach the third round since Boris Becker in 1984.[18] After Wimbledon Nadal participated at Bastad, where he lost to Nicolas Lapentti in the quarterfinals, and at Stuttgart, where he lost to Fernando Gonzalez in the Round of 32. Finally, at Umag, he lost to Carlos Moya in the semifinals.
After playing two more Challenger level events, the last Challenger tournaments of his career, Nadal finished his 2003 campaign with three first round losses in ATP events.[12] Nadal also competed in seven doubles tournaments in 2003, and won his first ATP title (doubles or singles) at the clay-court Croatia Open in Umag, where he partnered with Álex López Morón to defeat Todd Perry and Thomas Shimada in straight sets in the final.[19]
2004 started with a doubles championship alongside Tommy Robredo at the Maharashtra Open.[20] In singles, Nadal reached the third round of the 2004 Australian Open where he lost in three sets against Australian Lleyton Hewitt. Later that year, the 34th-ranked 17-year-old played the first of many matches against Roger Federer, then ranked No. 1, at the Miami Open, and won in straight sets,[21][22] before losing to Fernando González in the fourth round. He was one of the six players who defeated Federer that year (along with Tim Henman, Albert Costa, Gustavo Kuerten, Dominik Hrbatý, and Tomáš Berdych). He missed most of the clay court season, including the French Open, because of a stress fracture in his left ankle.[5] In August, Nadal won his first ATP singles title at the Prokom Open by defeating José Acasuso in the final in two sets.[23]
Nadal, at 18 years and six months, became the youngest player to register a singles victory in a Davis Cup final for a winning nation.[24] By beating No. 2 Andy Roddick, he helped Spain clinch the 2004 title over the United States in a 3–2 win. He finished the year ranked No. 51.
At the 2005 Australian Open, Nadal lost in the fourth round to eventual runner-up Lleyton Hewitt. Two months later, he reached the final of the 2005 Miami Masters, and despite being two points from a straight-sets victory, he was defeated in five sets by No. 1 Roger Federer. Both performances were considered breakthroughs for Nadal.[25][26]
He then dominated the spring clay-court season. He won 24 consecutive singles matches, breaking Andre Agassi's Open Era record of consecutive match wins for a male teenager.[27] Nadal won the Torneo Conde de Godó in Barcelona and beat 2004 French Open runner-up Guillermo Coria in the finals of the 2005 Monte Carlo Masters and the 2005 Rome Masters. These victories raised his ranking to No. 5[28] and made him one of the favorites at his career-first French Open. On his 19th birthday, Nadal defeated Federer in the 2005 French Open semifinals, being one of only four players to defeat the top-seeded player that year (along with Marat Safin, Richard Gasquet, and David Nalbandian). Two days later, he defeated Mariano Puerta in the final, becoming the second male player, after Mats Wilander in 1982, to win the French Open on his first attempt. He was the first teenager to win a Grand Slam singles title since Pete Sampras won the 1990 US Open at age 19.[5] Winning improved his ranking to No. 3.[28]
Three days after his victory in Paris, Nadal's 24-match winning streak was snapped in the first round of the grass court Gerry Weber Open in Halle, Germany, where he lost to Alexander Waske.[29] He then lost in the second round of 2005 Wimbledon to Gilles Müller of Luxembourg. Immediately after Wimbledon, Nadal won 16 consecutive matches and three consecutive tournaments, bringing his ranking to No. 2 on 25 July 2005. Nadal started his North American summer hard-court season by defeating Agassi in the final of the 2005 Canada Masters, but lost in the first round of the 2005 Cincinnati Masters. Nadal was seeded second at the 2005 US Open, but was upset in the third round by No. 49 James Blake in four sets.
In September, he defeated Coria in the final of the China Open in Beijing and won both of his Davis Cup matches against Italy. In October, he won his fourth ATP Masters Series title of the year, defeating Ivan Ljubičić in the final of the 2005 Madrid Masters. He then suffered a foot injury that prevented his competing in the year-ending Tennis Masters Cup.[30]
Both Nadal and Federer won eleven singles titles and four ATP Masters Series titles in 2005. Nadal broke Mats Wilander's previous teenage record of nine in 1983.[31] Nine of Nadal's titles were on clay, and the remainder were on hard courts. Nadal won 79 matches, second only to Federer's 81. Nadal won the Golden Bagel Award for 2005, with eleven 6–0 sets during the year.[32] Also, he earned the highest year-end ranking ever by a Spaniard and the ATP Most Improved Player of the Year award.
Nadal missed the Australian Open because of a foot injury.[33] In February, he lost in the semifinals of the first tournament he played, the Open 13 tournament in Marseille, France. Two weeks later, he handed Roger Federer his first loss of the year in the final of the Dubai Duty Free Men's Open (in 2006, Rafael Nadal and Andy Murray were the only two men who defeated Federer). To complete the spring hard-court season, Nadal was upset in the semifinals of the Pacific Life Open in Indian Wells, California, by James Blake, and was upset in the second round of the 2006 Miami Masters.
On European clay, Nadal won all four tournaments he entered and 24 consecutive matches. He defeated Federer in the final of the Masters Series Monte Carlo in four sets. The following week, he defeated Tommy Robredo in the final of the Open Sabadell Atlántico tournament in Barcelona. After a one-week break, Nadal won the Masters Series Internazionali BNL d'Italia in Rome, defeating Federer in a fifth-set tiebreaker in the final, after saving two match points and equaling Björn Borg's tally of 16 ATP titles won as a teenager. Nadal broke Argentinian Guillermo Vilas's 29-year male record of 53 consecutive clay-court match victories by winning his first round match at the French Open. Vilas presented Nadal with a trophy, but commented later that Nadal's feat was less impressive than his own because Nadal's winning streak covered two years and was accomplished by adding easy tournaments to his schedule.[34] Nadal went on to play Federer in the final of the French Open. The first two sets of the match were hardly competitive, as the rivals traded 6–1 sets. Nadal won the third set easily and served for the match in the fourth set before Federer broke him and forced a tiebreaker. Nadal won the tiebreaker and became the first to defeat Federer in a Grand Slam tournament final.[35]
Nadal injured his shoulder during a quarterfinal match against Lleyton Hewitt at the Artois Championships, played on grass at the Queen's Club in London.[36] Nadal was unable to complete the match, which ended his 26-match winning streak. Nadal was seeded second at Wimbledon, and was two points from defeat against American qualifier Robert Kendrick in the second round before coming back to win in five sets. In the third round, Nadal defeated No. 20 Andre Agassi in straight sets in Agassi's last career match at Wimbledon. Nadal also won his next three matches in straight sets, setting up his first Wimbledon final, against Federer, who had won the tournament the three previous years. Nadal was the first Spanish man since Manuel Santana in 1966 to reach the Wimbledon final, but Federer won the match in four sets to claim his fourth consecutive Wimbledon title.
During the lead up to the US Open, Nadal played the two Masters Series tournaments in North America. He was upset in the third round of the Rogers Cup in Toronto and the quarterfinals of the Western & Southern Financial Group Masters in Cincinnati. Nadal was seeded second at the US Open, but lost in the quarterfinals to No. 54 Mikhail Youzhny of Russia in four sets.
Nadal played only three tournaments the remainder of the year. Joachim Johansson, ranked No. 690, upset Nadal in the second round of the Stockholm Open. The following week, Nadal lost to Tomáš Berdych in the quarterfinals of the year's last Masters Series tournament, the Mutua Madrileña Masters in Madrid. During the round-robin stage of the year-ending Tennis Masters Cup, Nadal lost to James Blake but defeated Nikolay Davydenko and Robredo. Because of those two victories, Nadal qualified for the semifinals, where he lost to Federer. This was Nadal's third loss in nine career matches with Federer.
Nadal went on to become the first player since Andre Agassi in 1994–95 to finish the year ranked No. 2 in consecutive years.
Nadal started the year by playing in six hard-court tournaments. He lost in the semifinals and first round of his first two tournaments and then lost in the quarterfinals of the Australian Open to eventual runner-up Fernando González. After another quarterfinal loss at the Dubai Tennis Championships, he won the 2007 Indian Wells Masters, before Novak Djokovic defeated him in the quarterfinals of the 2007 Miami Masters.
He had comparatively more success after returning to Europe to play five clay-court tournaments. He won the titles at the Masters Series Monte Carlo, the Open Sabadell Atlántico in Barcelona, and the Masters Series Internazionali BNL d'Italia in Rome, before losing to Roger Federer in the final of the Masters Series Hamburg. This defeat ended his 81-match winning streak on clay, which is the male Open Era record for consecutive wins on a single surface. He then rebounded to win the French Open for the third straight year, defeating Federer once again in the final. Between the tournaments in Barcelona and Rome, Nadal defeated Federer in the "Battle of Surfaces" exhibition match in Mallorca, Spain, with the tennis court being half grass and half clay.[37]
Nadal played the Artois Championships at the Queen's Club in London for the second consecutive year. As in 2006, Nadal was upset in the quarterfinals. Nadal then won consecutive five-set matches during the third and fourth rounds of Wimbledon before being beaten by Federer in the five-set final. This was Federer's first five-set match at Wimbledon since 2001.[38] In July, Nadal won the clay court Mercedes Cup in Stuttgart, which proved to be his last title of the year. He played three important tournaments during the North American summer hard court season. He was a semifinalist at the Masters Series Rogers Cup in Montreal before losing his first match at the Western & Southern Financial Group Masters in Cincinnati. He was the second-seeded player at the US Open, but was defeated in the fourth round by David Ferrer.
After a month-long break from tournament tennis, Nadal played the Mutua Madrileña Masters in Madrid and the BNP Paribas Masters in Paris. David Nalbandian upset him in the quarterfinals and final of those tournaments. To end the year, Nadal won two of his three-round robin matches to advance to the semifinals of the Tennis Masters Cup in Shanghai, where Federer defeated him in straight sets.
During the second half of the year, Nadal battled a knee injury suffered during the Wimbledon final. In addition, there were rumors at the end of the year that the foot injury he suffered during 2005 had caused long-term damage, rumors given credence by coach Toni Nadal's claim that the problem was "serious". Nadal and his spokesman strongly denied this, however, with Nadal himself calling the story "totally false".[39]
Nadal began the year in India, where he was comprehensively beaten by Mikhail Youzhny in the final of the Chennai Open. Nadal then reached the semifinals of the Australian Open for the first time, where he was defeated by Jo-Wilfried Tsonga. Nadal also reached the final of the Miami Masters for the second time.
During the spring clay-court season, Nadal won four singles titles and defeated Roger Federer in three finals. He beat Federer at the Masters Series Monte Carlo for the third straight year, capturing his Open Era record fourth consecutive title there.[40] Nadal then won his fourth consecutive title at the Open Sabadell Atlántico tournament in Barcelona. A few weeks later, Nadal won his first title at the Masters Series Hamburg, defeating Federer in a three-set final. He then won the French Open, becoming the fifth man in the Open Era to win a Grand Slam singles title without losing a set.[41] He defeated Federer in the final for the third straight year, but this was the most lopsided of all their matches, as Nadal only lost four games and gave Federer his first bagel since 1999.[40] This was Nadal's fourth consecutive French title, tying Björn Borg's all-time record. Nadal became the fourth male player in the Open Era to win the same Grand Slam singles tournament in four consecutive years (the others being Borg, Pete Sampras, and Federer).
Nadal then played Federer in the final of Wimbledon for the third consecutive year, in the most anticipated match of their rivalry.[42][43] Nadal entered the final on a 23-match winning streak, including his first career grass-court title at the Artois Championships staged at the Queen's Club in London prior to Wimbledon. Federer had won his record fifth grass-court title at the Gerry Weber Open in Halle, and then reached the Wimbledon final without losing a set. Unlike their previous two Wimbledon finals, though, Federer was not the prohibitive favorite, and many analysts picked Nadal to win.[43][44] They played the longest (in terms of time on court, not in terms of numbers of games) final in Wimbledon history, and because of rain delays, Nadal won the fifth set 9–7 in near-darkness. (The 2019 final later broke the record of longest Wimbledon final.) The match was widely lauded as the greatest Wimbledon final ever, with some tennis critics even calling it the greatest match in tennis history.[45][46][47][48][49]
By winning his first Wimbledon title, Nadal became the third man in the Open Era to win both the French Open and Wimbledon in the same year, after Rod Laver in 1969 and Borg in 1978–1980 (Federer accomplished the feat the following year), as well as the second Spaniard to win Wimbledon. He also ended Federer's record streak of five consecutive Wimbledon titles and 65 straight wins on grass courts. This was also the first time that Nadal won two Grand Slam tournaments back-to-back.
After Wimbledon, Nadal extended his winning streak to a career-best 32 matches. He won his second Rogers Cup title in Toronto, and then made it into the semifinals of the Western & Southern Financial Group Masters in Cincinnati. As a result, Nadal clinched the US Open Series and, combined with Federer's early-round losses in both of those tournaments, finally earned the world No. 1 ranking on 18 August, officially ending Federer's record four-and-a-half-year reign at the top.
At the 2008 Beijing Olympics, Nadal defeated Fernando González of Chile in the final to win his first Olympic gold medal.[50]
At the US Open, Nadal was the top-seeded player for the first time at a Grand Slam tournament. He did not lose a set during his first three matches, defeating qualifiers in the first and second rounds and Viktor Troicki in the third round. In the semifinals, he lost to Andy Murray. Later in the year in Madrid, Nadal helped Spain defeat the United States in the Davis Cup semifinals.
At the Mutua Madrileña Masters in Madrid, Nadal lost in the semifinals to Gilles Simon. However, his performance at the event guaranteed that he would become the first Spaniard during the open era to finish the year ranked No. 1.[51] Two weeks later at the BNP Paribas Masters in Paris, Nadal reached the quarterfinals, where he withdrew because of a knee injury.[52] The following week, Nadal announced his withdrawal from the year-ending Tennis Masters Cup in Shanghai, citing tendinitis of the knee. On 10 November, Nadal withdrew from Spain's Davis Cup final against Argentina, as his knee injury had not healed completely.[53]
Nadal's first official ATP tour event for the year was the 250 series Qatar Open in Doha, where he lost in the quarterfinals to Gaël Monfils. Nadal also entered and won the tournament's doubles event with partner Marc López, defeating the No. 1-ranked doubles team of Daniel Nestor and Nenad Zimonjić in the final. At the 2009 Australian Open, Nadal won his first five matches without dropping a set, before defeating compatriot Fernando Verdasco in the semifinals in the second longest match in Australian Open history at 5 hours and 14 minutes.[54] This win set up a championship match with Roger Federer, their first meeting ever in a hard-court Grand Slam tournament. Nadal defeated Federer in five sets to earn his first hard-court Grand Slam singles title,[55] making him the first Spaniard to win the Australian Open.[56]
At the ABN AMRO World Tennis Tournament in Rotterdam, Nadal lost in the final to second-seeded Andy Murray in three sets.[57] A knee problem, unrelated to his right-knee tendonitis, was serious enough to cause him to withdraw from the Dubai Championships a week later.[58] In March, Nadal helped Spain defeat Serbia in a Davis Cup World Group first-round tie on clay in Benidorm, Spain, beating Janko Tipsarević and Novak Djokovic.[59][60] At the 2009 Indian Wells Masters, Nadal won his thirteenth Masters 1000 series tournament, defeating Murray in the final. At the next ATP tour event, the 2009 Miami Masters, Nadal advanced to the quarterfinals, where he lost to Argentine Juan Martín del Potro.[61]
Nadal began his European clay court season at the Monte Carlo Masters, where he defeated Novak Djokovic to win a record fifth consecutive singles title there.[62] He then won back to back titles in Barcelona and Rome Masters, defeating Ferrer and Djokovic respectively.[63][64] He then surprisingly lost the final of the Madrid Open to Roger Federer. This was the first time that Nadal had lost to Federer since the semifinals of the 2007 Tennis Masters Cup.
By beating Lleyton Hewitt in the third round of the French Open, Nadal set a record of 31 consecutive wins at the French Open, surpassing the previous record of 28 held by Björn Borg. This run came to an end on 31 May 2009, when Nadal lost to eventual runner-up Robin Söderling in the fourth round. This was Nadal's first and, until 2015, only loss at the French Open. After his surprise defeat in France, Nadal withdrew from the AEGON Championships. It was confirmed that he was suffering from tendinitis in both of his knees.[65] On 19 June, Nadal withdrew from the 2009 Wimbledon Championships, citing his recurring knee injury.[66] Roger Federer went on to win the title, and Nadal consequently dropped back to No. 2 on 6 July 2009.
On 4 August, Toni Nadal confirmed that Nadal would return to play at the Rogers Cup in Montreal.[67] There, he lost in the quarterfinals to Juan Martín del Potro.[68] With this loss, he relinquished the No. 2 spot to Andy Murray on 17 August 2009, ranking outside the top two for the first time since 25 July 2005.
At the US Open Nadal fell in the semifinals, losing to eventual champion Juan Martín del Potro.[69] At the World Tour Finals, Nadal lost all three of his matches against Robin Söderling, Nikolay Davydenko, and Novak Djokovic respectively without winning a set. In December, Nadal participated in the second Davis Cup final of his career. He defeated Tomáš Berdych in his first singles rubber to give the Spanish Davis Cup Team their first point in the tie. After the Spanish Davis Cup team had secured its fourth Davis Cup victory, Nadal defeated Jan Hájek in the first Davis Cup dead rubber of his career.
Nadal finished the year as No. 2 for the fourth time in five years. Nadal won the Golden Bagel Award for the third time in 2009, with nine 6–0 sets during the year.
Nadal has called 2010 his best year as a professional tennis player. In the 2010 season, he became the only male player in tennis history to win Grand Slam tournaments on three different surfaces (clay, grass and hard court) in the same calendar year.
Nadal began the year by participating in the Capitala World Tennis Championship in Abu Dhabi. In the final, Nadal defeated Robin Söderling in straight sets.[70] Nadal then participated in the Qatar ExxonMobil Open ATP 250 event in Doha, where he lost in the final to Nikolay Davydenko.[71] At the Australian Open, Nadal reached the quarterfinals, where he had to pull out at 3–0 down in the third set against Andy Murray.[72] After examining Nadal's knees, doctors told him that he should take two weeks of rest and then two weeks of rehabilitation.
Nadal reached the semifinals in singles at the BNP Paribas Open in Indian Wells, where he was defeated by Ivan Ljubičić in three sets.[73] After Indian Wells, Nadal reached the semifinals of the Sony Ericsson Open, where he lost to eventual champion Andy Roddick in three sets.[74] Nadal won the Monte-Carlo Rolex Masters, beating Fernando Verdasco in the final. With this win, Nadal became the first player in the open era to win a tournament title for six straight years.[75] Nadal next chose to skip the Barcelona tournament, and his next tournament was the Rome Masters. He defeated David Ferrer in the final for his fifth title at Rome. Nadal then won the 2010 Mutua Madrileña Madrid Open, defeating Roger Federer in straight sets. The win gave him his 18th Masters title, breaking the all-time record. Nadal moved back to No. 2 the following day.
Entering the French Open, many were expecting another Nadal-Federer final. However, Robin Söderling defeated Federer in the quarterfinals.[76] Nadal advanced to the final and defeated Söderling in straight sets. The victory marked the second time that Nadal had won the French Open without dropping a set.
In June, Nadal entered the AEGON Championships, which he had won in 2008. He was defeated by compatriot Feliciano López in the quarterfinals. At the Wimbledon Championships, he won his first two matches in straight sets. In the third round he needed five sets to defeat Philipp Petzschner. During the match Nadal was warned twice for allegedly receiving coaching from his coach and uncle, Toni Nadal, resulting in a $2,000 fine by Wimbledon officials.[77][78] He then defeated Andy Murray in the semifinals and Tomáš Berdych in the final to win his second Wimbledon title and his eighth career major title[79] just past the age of 24.[80]
In his first tournament since Wimbledon, Nadal advanced to the semifinals of the Rogers Cup, where he was defeated by Andy Murray.[81] Nadal also competed in the doubles with Djokovic in a high-profile partnership between the world Nos. 1 and 2.[82] The pair lost in the first round to Milos Raonic and Vasek Pospisil. The next week, Nadal was the top seed at the Cincinnati Masters, losing in the quarterfinals to Marcos Baghdatis.
At the 2010 US Open, Nadal reached his first final without dropping a set. In the final, he defeated Novak Djokovic in four sets, completing the Career Grand Slam for Nadal; he also became the second male after Andre Agassi to complete a Career Golden Slam.[83] Nadal's US Open victory meant that he also became the first man to win majors on clay, grass, and hard courts in the same year, and the first to win the French Open, Wimbledon, and the US Open in the same year since Rod Laver in 1969.[84] Nadal's victory also clinched the year-end No. 1 ranking for 2010.[85]
Nadal began his Asian tour at the 2010 PTT Thailand Open in Bangkok where he lost to compatriot Guillermo García-López in the semifinals. Nadal was able to regroup, winning the 2010 Rakuten Japan Open Tennis Championships in Tokyo by defeating Gaël Monfils for his seventh title of the season. Nadal next played in the Shanghai Rolex Masters, where he lost to No. 12 Jürgen Melzer in the third round. On 5 November, Nadal announced that he was pulling out of the Paris Masters owing to tendinitis in his left shoulder.[86] On 21 November 2010, in London, Nadal won the Stefan Edberg Sportsmanship Award for the first time.[87]
At the 2010 ATP World Tour Finals in London, Nadal won all of his round-robin matches. In the semifinal, he defeated Murray in three sets, before losing to Roger Federer in the final.[88]
Nadal started 2011 by participating in the Mubadala World Tennis Championship in Abu Dhabi, where he defeated Roger Federer in the final. At the Qatar ExxonMobil Open, he fell in straight sets to Nikolay Davydenko in the semifinals.[89] He and countryman López won the doubles title by defeating Daniele Bracciali and Andreas Seppi.[90]
At the Australian Open, Nadal suffered a hamstring injury early in his quarterfinal match against David Ferrer and ultimately lost in straight sets, ending his effort to win four major tournaments in a row.[91]
In March, Nadal helped Spain defeat Belgium in a 2011 Davis Cup World Group first-round tie in the Spiroudome in Charleroi, Belgium. Nadal defeated Ruben Bemelmans and Olivier Rochus.[92][93] At both the 2011 BNP Paribas Open and the 2011 Sony Ericsson Open, Nadal reached the final and lost to Novak Djokovic in three sets.[94][95] This was the first time Nadal reached the finals of Indian Wells and Miami in the same year.
Nadal began his clay-court season by winning the 2011 Monte-Carlo Rolex Masters with the loss of just one set. In the final, he avenged his defeat by David Ferrer in the quarterfinals of the Australian Open.[96] Just a week later, Nadal won his sixth Barcelona Open crown, again defeating Ferrer in straight sets. He then lost to Novak Djokovic in the Rome Masters and Madrid Open finals.[97] However, Nadal retained his No. 1 ranking during the clay-court season and won his sixth French Open title by defeating Roger Federer.[98]
At Wimbledon, Nadal reached the final after three four-set matches. This set up a final against No. 2 Novak Djokovic, who had beaten Nadal in all four of their matches in 2011. After dropping the third set, Djokovic defeated Nadal in the fourth. Djokovic's success at the tournament also meant that the Serb overtook Nadal as world No. 1. After resting for a month from a foot injury sustained during Wimbledon, he contested the 2011 Rogers Cup, where he was beaten by Croatian Ivan Dodig in the quarterfinals. He next played in the 2011 Cincinnati Masters, where he lost to Mardy Fish, again in the quarterfinals.
At the 2011 US Open, Nadal made headlines when, after defeating David Nalbandian in the fourth round, he collapsed in his post-match press conference because of severe cramps.[99] He again lost in four sets to Novak Djokovic in the final. After the US Open, Nadal made the final of the Japan Open Tennis Championships. Nadal, who was the 2010 champion, was defeated by Andy Murray. At the Shanghai Masters, he was upset in the third round by No. 23 ranked Florian Mayer. At the 2011 ATP World Tour Finals, Nadal was defeated by Roger Federer and Jo-Wilfried Tsonga in the round-robin stage, and was subsequently eliminated from the tournament. In the Davis Cup final in December, he helped Spain win the title with victories over Juan Mónaco and Juan Martín del Potro.[100]
Nadal began his ATP World Tour season at the Qatar Open. In the semifinal he lost to Gaël Monfils in two sets.[101] In the Australian Open Nadal won his first four matches without dropping a set. He then won in his quarterfinal and semifinal matches against Tomáš Berdych and Roger Federer respectively. In the final, on 29 January, he was beaten by Novak Djokovic in a five-set match that lasted 5 hours and 53 minutes, the longest Grand Slam final of all time.[102]
Nadal made it to the semifinals in Indian Wells, where he was beaten in straight sets by eventual champion Roger Federer. He also made the semifinals in Miami, but withdrew because of knee problems.
As the clay court season started, Nadal was seeded 2nd at the 2012 Monte-Carlo Rolex Masters. In the final he topped No. 1 Novak Djokovic to win his 8th consecutive Monte Carlo trophy. This ended a streak of seven straight final losses to Djokovic. A day after the Monte Carlo final, Nadal traveled to Barcelona where he received a bye in the first round. His tremendous record on clay continued as he beat compatriot David Ferrer in a three-set final to clinch his seventh title in eight years at the Barcelona Open. At the Mutua Madrileña Madrid Open Nadal surprisingly lost to Fernando Verdasco, whom he held a 13–0 record against. He heavily criticized the new blue-colored clay and threatened not to attend in the future if the surface was not changed back to red clay. Several other players such as Novak Djokovic voiced similar criticism.[103] In the last tournament before the French Open, Nadal defeated Djokovic in a tight straight set final. This was his second victory over Novak Djokovic in 2012 and his third title of the season, as well as his 6th Rome title overall.
At the 2012 French Open, Nadal dropped only 30 games against his first five opponents. In the semifinals he dismantled Ferrer to set up another final against Novak Djokovic. This marked the first time two opposing players faced each other in four consecutive Grand Slam finals. Nadal won the first two sets before Djokovic claimed the third. Play was suspended in the fourth set due to rain. When the match resumed the following day, Nadal won when Djokovic double faulted on match point, sealing a record 7th French Open title for Nadal.[104] By winning his seventh title[105] at Stade Roland Garros, Nadal surpassed Borg's overall titles record[106] to become the most successful male player in French Open history.[107] Nadal lost a total of only three sets in the 2012 clay court season.
As a warm-up ahead of Wimbledon Nadal played in Halle, losing to Philipp Kohlschreiber in the quarterfinals.[108] At Wimbledon, Nadal was upset in the second round by Lukáš Rosol in a close five-set match. This was the first time since the Wimbledon 2005 championships that Nadal had failed to progress past the 2nd round of a Grand Slam tournament.[109]
In July 2012, Nadal withdrew from the 2012 Olympics owing to tendinitis in his knee, which subsequently led to him pulling out of both the Rogers Cup and the Cincinnati Masters. He later withdrew from the rest of the 2012 season, as he felt he still was not healthy enough to compete.[110][111] Nadal ended 2012 ranked No. 4 in the world, the first time in eight years that he had not finished the year ranked No. 1 or No. 2.
Two weeks prior to the Australian Open, Nadal officially withdrew from the tournament citing a stomach virus.[112] Nadal's withdrawal saw him drop out of the ATP's Top Four for the first time since 2005.[113] Playing in his first tournaments in South America since 2005, Nadal made his comeback at the VTR Open in Chile,[114] where he was upset by Argentine No. 73 Horacio Zeballos in the final. At the Brasil Open, Nadal reached the final, where he defeated David Nalbandian.[115] In the title match of the Abierto Mexicano Telcel in Acapulco, Nadal defeated David Ferrer, losing just two games in the match.
Nadal then returned to the American hard courts, playing the Indian Wells Masters as the fifth seed. He lost only one set, and defeated No. 2 Roger Federer and No. 6 Tomáš Berdych before beating Juan Martín del Potro in the final. After withdrawing from Miami, Nadal attempted to defend his title at the Monte-Carlo Rolex Masters, but was beaten by Djokovic in straight sets. He then won his eighth title at the Barcelona Open. Nadal went on to win the Mutua Madrid Open, beating Stanislas Wawrinka in the final.
In May, he defeated Roger Federer for his 7th championship at the 2013 Rome Masters. These victories raised his ranking to No. 4.
Nadal won the 2013 French Open after beating Novak Djokovic in the semifinal and David Ferrer in the final, breaking the record for the most match wins in the tournament in the process with his 59th match victory.[116] His match with Djokovic is widely considered one of the greatest clay court matches ever played, as Nadal came back from down a break in the fifth set to take out a hard-fought 4-hour, 37-minute victory. Nadal then lost his first-round match at the 2013 Wimbledon Championships in straight sets to unseeded Belgian Steve Darcis (ranked No. 135), the first time he had ever lost in the first round of a Grand Slam.
In August, Nadal won a close semifinal match in Montreal, denying Djokovic his fourth Rogers Cup title.[117] Nadal proceeded to win the title after beating Milos Raonic in the final in straight sets.[118] He won his 26th ATP Masters 1000 in Cincinnati on Sunday 18 August after beating John Isner in the final.[119] Nadal concluded a brilliant North American hard court season with his 4th hard court title of the year, defeating Djokovic at the 2013 US Open final in four sets, bringing his Grand Slam count to 13 and giving him a male tennis record paycheck of $3.6 million.[120][121]
Later in September, Nadal helped Spain secure their Davis Cup World Group Playoff spot for 2014, with a victory against Sergiy Stakhovsky and a doubles win with Marc Lopez. In October, he reached the final of the China Open, guaranteeing he would be back to the No. 1 ranking.[122] In the final, he was beaten by Djokovic in straight sets.[123] At the 2013 Shanghai Rolex Masters, he reached the semifinals but was defeated by Del Potro. In November, Nadal played his final event of the season in London at the 2013 ATP World Tour Finals where he secured the year-end No. 1 spot. He beat David Ferrer, Stanislas Wawrinka and Tomáš Berdych in the round robin stage to set up a semifinal victory over Roger Federer. Nadal met Djokovic in the final, losing in straight sets.
Rafael Nadal began his 2014 season at the Qatar Open in Doha, defeating Lukáš Rosol in the first round[124] and going on to win the title by defeating Gaël Monfils in the final.[125]
At the Australian Open, he defeated Roger Federer to reach his third Australian Open final. This marked Nadal's 11th consecutive victory in a Major semifinal, second only to Borg's all-time record of 14. In the final, he faced Stanislas Wawrinka, against whom he entered the match with a 12–0 record. However, Nadal suffered a back injury during the warm-up, which progressively worsened as the match wore on.[126] Nadal lost the first two sets, and although he won the third set, he ultimately lost the match in four sets. The first tournament he played after that was the inaugural Rio Open which he won after defeating Alexandr Dolgopolov in the final. However, at the Indian Wells Masters, Dolgopolov would avenge his loss, defeating Nadal in three sets in the third round. He reached the final of the Miami Masters, falling to Novak Djokovic in straight sets.
Nadal began his clay court season with a quarterfinal loss to David Ferrer in the Monte-Carlo Masters. He was stunned by Nicolas Almagro in the quarterfinals of the Barcelona Open. Nadal then won his 27th Masters title at the Madrid Open after Kei Nishikori retired in the third set of the final.[127] On 8 June 2014, Nadal defeated Novak Djokovic in the Men's Singles French Open final to win his 9th French Open title, his fifth in a row at the tournament. With that victory, Nadal equaled Pete Sampras' total of 14 Grand Slam titles.[128] Nadal then lost in the second round of the Halle Open to Dustin Brown the following week.[129]
Nadal entered the Wimbledon Championships in a bid to win the tournament for the third time. In the fourth round he was upset by Australian teenager Nick Kyrgios in four sets.[130] Nadal withdrew from the American swing owing to a wrist injury.[131] He made his return at the 2014 China Open but was defeated in the quarterfinals by Martin Klizan in three sets.[132] At the 2014 Shanghai Rolex Masters, he was suffering from appendicitis. He lost his opening match to Feliciano Lopez in straight sets.[133] Later, he was upset by Borna Ćorić at the quarterfinals of the 2014 Swiss Indoors. After the loss, he announced that he would skip the rest of the season to undergo surgery for his appendix.[134]
Nadal began the year as the defending Champion at the Qatar Open, but suffered a shocking three set defeat to Michael Berrer in the first round.[135] He won the doubles title with Juan Mónaco. At the Australian Open, Nadal lost in straight sets to Tomáš Berdych in the quarterfinal, thus ending a 17-match winning streak against the seventh-seeded Czech.[136]
In February, Nadal lost in the semifinals to Fabio Fognini at the Rio Open,[137] before going on to win his 46th career clay-court title against Juan Mónaco at the Argentina Open.[138] Nadal then participated at the Indian Wells and Miami Open but suffered early defeats to Milos Raonic and Fernando Verdasco, in the quarterfinals and third round respectively.[139][140] Nadal then began his spring clay season at the Monte Carlo Masters and reached the semifinals where he lost to Novak Djokovic in straight sets.[141] After losing to Fognini again at the Barcelona Open quarterfinals,[142] Nadal entered the Madrid Open as the two-time defending champion but lost in the final to Andy Murray in straight sets, resulting in his dropping out of the top five for the first time since 2005.[143][144] He then lost in the quarterfinals of the Rome Masters to Stan Wawrinka in straight sets.[145]
Nadal lost to eventual runner-up Djokovic in the quarterfinals of the French Open, ending his winning streak of 39 consecutive victories in Paris since his defeat by Robin Söderling in 2009.[146] Nadal went on to win the 2015 Mercedes Cup against Serbian Viktor Troicki, his first grass court title since he won at Wimbledon in 2010.[147] He was unable to continue his good form on grass as he lost in the first round of the Aegon Championships to Alexandr Dolgopolov in three sets.[148] Nadal's struggles continued when he lost in the second round of Wimbledon to Dustin Brown.[149]
In the third round of the 2015 US Open, Nadal once again lost to Fognini, despite having won the first two sets.[150] This early exit ended Nadal's record 10-year streak of winning at least one major.
Nadal started the year by winning the Mubadala World Tennis Championship, defeating Milos Raonic in straight sets. He then entered the tournament in Doha, Qatar, where he reached the final, losing to Djokovic in straight sets. This was their 47th match, after which Djokovic led their head-to-head rivalry with 24 matches won. At the Australian Open, Nadal was defeated in five sets by compatriot Fernando Verdasco in the first round, marking his first opening-round exit at the Australian Open.[151]
In April he won his 28th Masters 1000 in Monte Carlo.[152]
He went on to win his 17th ATP 500 in Barcelona, winning the trophy for the ninth time in his career.[153]
He continued the clay court season in Madrid, falling to Murray in the semifinal.[154]
The following week, Nadal played in Rome Masters where he reached the quarterfinal. Nadal was again defeated by Djokovic in straight sets, although he had a break advantage in both sets and served to win the second.[155]
Following Federer's withdrawal due to injury, Nadal was named the fourth seed at the French Open.[156] On 26 May, he became the eighth male player in tennis history to record 200 Grand Slam match wins, as he defeated Facundo Bagnis in straight sets in the second round of the Slam.[157] Following the victory, however, Nadal had to withdraw from competition owing to a left wrist injury initially suffered during the Madrid Open,[158] handing Marcel Granollers a walk-over into the fourth round.[159] On 9 June, Nadal announced that the same wrist injury that forced him to withdraw from the French Open needed more time to heal, and that he would not play at the 2016 Wimbledon Championships.[160] At the Rio 2016 Olympics, Nadal achieved 800 career wins with his quarterfinal victory over the Brazilian Thomaz Bellucci. Partnering Marc López, he won the gold medal in men's doubles event for Spain by defeating Romania's Florin Mergea and Horia Tecau in the finals.[161] This made Nadal the second man in the open era to have won gold medals in both singles and doubles. Nadal also advanced to the bronze medal match in the men's singles but was defeated by Kei Nishikori.
At the US Open, Nadal was seeded fourth and advanced to the fourth round, where he was defeated by 24th seed Lucas Pouille in five sets. The defeat meant that 2016 was the first year since 2004 in which Nadal had failed to reach a Grand Slam quarterfinal.[162] He played the Shanghai Masters and was upset in the second round by Viktor Troicki. He subsequently ended his 2016 season to let his wrist recover.
Nadal opened his season by playing at the Brisbane International for the first time, where he reached the quarterfinals before losing to Milos Raonic in three sets;[163] in the second round of the tournament, he had defeated Mischa Zverev for the loss of just two games.[164] Nadal began the Australian Open with straight-set wins over Florian Mayer and Marcos Baghdatis, before more difficult wins over Alexander Zverev and Gael Monfils, which set up his first quarterfinal berth at a Grand Slam since the 2015 French Open. Nadal defeated Raonic and Grigor Dimitrov in the quarterfinal and semifinal, respectively (the latter lasting for five sets over five hours), to set up a final against Roger Federer, his first Grand Slam final since he won the 2014 French Open. Nadal went on to lose to Federer in five sets; this was the first time that Nadal had lost to Federer in a Grand Slam since the final of the 2007 Wimbledon Championships.
Nadal made it to the final of Acapulco without dropping a set, but was defeated by big-serving Sam Querrey. In a rematch of the Australian Open final Nadal took on Roger Federer in the fourth round at Indian Wells but again lost to his old rival, this time in straight sets; it was their earliest meeting in a tournament in over a decade. In the Miami Masters, Nadal reached the final to again play Federer, and was once again defeated in straight sets.[165] Nadal then won his 29th Masters 1000 title in Monte Carlo; it was his tenth victory in the principality, the most wins by any player at a single tournament in the Open era.[166]
Nadal won his 18th ATP 500 title in Barcelona without dropping a set, also marking his tenth victory in Barcelona.[167] Nadal next played in the Madrid Open, where he defeated Dominic Thiem to tie Novak Djokovic's all-time Masters record of 30 titles.[168]
Nadal went on to beat Stan Wawrinka in straight sets and win a record tenth French Open title. This marked his first Grand Slam title since 2014, ending his three-year drought in Grand Slams.[169] Nadal won every set that he played in the tournament, dropping a total of only 35 games over his seven matches, which is the second-fewest by any male (second only to Björn Borg's 32 dropped games at the 1978 French Open) on the way to a title at a Grand Slam tournament in the Open era with all matches being best-of-five-sets.[citation needed] The achievement, called "La Décima" ("the tenth" in Spanish), made Nadal the first male or female in the Open era to win ten titles from a single Grand Slam tournament, following similar achievements in Monte Carlo and Barcelona. Nadal also climbed to second on the all-time Grand Slam titles list, with 15 grand slam championships, putting him one ahead of Pete Sampras.[170]
Nadal lost in the round of 16 at Wimbledon, 13–15 in the fifth set, to Gilles Müller.[171] He returned to competition in Montreal. He won his first match against Coric in straight sets but fell in the Round of 16 to Canadian teenager Denis Shapovalov. By 21 August, he retook the ATP No. 1 ranking from Andy Murray. Nadal earned his third US Open title against first-time Grand Slam finalist Kevin Anderson, winning the final in straight sets. This marked the first time that Nadal had captured two Grand Slam tournaments in a year since 2013, and the second time since 2010. Nadal extended his winning streak by winning the China Open, winning the final against Nick Kyrgios in straight sets.[172] On 11 September 2017, Nadal and Garbiñe Muguruza made Spain the first country since the United States 14 years ago to simultaneously top both the ATP and the WTA rankings, with Muguruza making her debut in the No. 1 spot.[173]
After defeating Hyeon Chung in the second round of the Paris Masters, Nadal secured the year-end No. 1 ranking. He became year-end No. 1 for the fourth time in his career, tying him for fourth all-time with Novak Djokovic, Ivan Lendl and John McEnroe, behind Pete Sampras (6), and Roger Federer and Jimmy Connors (5). By securing the year-end No. 1 ranking, Nadal became the first player aged over 30 to finish as year-end No. 1 and the first to finish in the top spot four years after he had last done so; he also broke a number of other historical records, all of which he broke again in 2019.[174]
Nadal began his 2018 season at the Kooyong Classic, where he lost to Richard Gasquet in the first round. He then played at the Tie Break Tens exhibition tournament in Melbourne, losing in the final to Tomáš Berdych. At the Australian Open, Nadal recorded straight-sets wins in the first three rounds, before notching a tougher four-set win against Diego Schwartzman in the fourth round. He faced Marin Čilić in the quarterfinal, but retired in the fifth set due to a hip injury.[175]
On 16 February, Nadal dropped to the No. 2 ranking after 26 weeks at the top when his rival Roger Federer overtook him in points. Nadal withdrew from the Mexican Open, Indian Wells Masters, and Miami Open due to an injury. Despite his absence in Miami, he regained the No. 1 ranking on 2 April due to Federer's second-round loss. After recovering from injury, Nadal helped secure the Spanish Davis Cup team a victory over Germany in the quarterfinal of the World Group. He beat Philipp Kohlschreiber and Alexander Zverev in straight sets.[176]
At the Monte Carlo Masters, Nadal successfully defended his title and won a record-breaking 31st Masters title, thus becoming the player with the most Masters 1000 titles in tennis history. It also marked his 11th title in Monte Carlo, as well as the 76th title in his career. Because he defended the points won the previous year, he kept his No. 1 ranking and began his 171st week as the world No. 1.[177]
Nadal won in Monte Carlo without dropping a set, beating Kei Nishikori in the final. Nadal went on to win his 11th title in Barcelona, defeating Stefanos Tsitsipas in straight sets, becoming the first player in the open era to win 400 matches on both clay and hard.[178][179] The win marked his 20th ATP 500 series title, which put him back atop the list of most ATP 500 titles, tied with Roger Federer. It also marked his 14th consecutive season with at least one ATP 500 title.
Fresh from achieving "La Undécima" at Monte Carlo and Barcelona, Nadal had another title to defend in Madrid. He reached the quarterfinals by defeating Gaël Monfils and Diego Schwartzman in straight sets, extending his streak to 50 consecutive sets won on clay dating back to the 2017 French Open; the win over Schwartzman broke John McEnroe's record of 49 straight sets won on a single surface, set on carpet in 1984.[180] In a surprise result, Nadal then lost in straight sets to Dominic Thiem in the quarterfinals, ending his 21-match and record 50-set winning streaks on clay and relinquishing the world No. 1 ranking to Federer in the process.
At the Rome Masters, Nadal captured his 8th title in the Italian capital and his 78th career title overall, defeating Alexander Zverev in three sets and overtaking John McEnroe for fourth place on the list of most titles won in the Open Era.[181] It was Nadal's 32nd Masters title, the most of any player in the Open Era. With his victory in Rome, Nadal also regained the No. 1 spot from Federer.
Then at the French Open, Nadal won his 11th title in Paris and 17th Grand Slam title overall, tying Margaret Court's record of 11 singles titles at a single Grand Slam event (Court won 11 Australian Opens, though seven came when it was the Australian Championships, an amateur event). En route to the title, Nadal dropped only one set, beating Dominic Thiem in the final in three sets.[182] Nadal became just the fourth man in the Open Era to win three or more major titles after turning 30.
Going into Wimbledon, Nadal was ranked world number one but was seeded second due to Wimbledon's seeding formula. He reached the quarterfinals without dropping a set, then defeated fifth seed Juan Martín del Potro in five sets. In the semifinals he faced long-time rival Novak Djokovic, who was aiming to reach his first major final since the 2016 US Open. The match lasted 5 hours and 17 minutes, spread over two days, making it the second-longest Wimbledon semifinal in history, behind only the match between Kevin Anderson and John Isner held earlier the same day. Djokovic defeated Nadal in five sets, taking the fifth set 10–8.[183] This was Nadal's first defeat in the semifinals of a major since the 2009 US Open, and his first ever defeat in a Wimbledon semifinal. It was nonetheless his best result at Wimbledon since 2011, and the performance, combined with Roger Federer's unsuccessful title defense, ensured that Nadal retained the world number one ranking after the grass season.
He then won the Rogers Cup, a record-extending 33rd Masters 1000 title and his first on hard court since 2013.[184] He withdrew from the Cincinnati Masters to prepare for the US Open, where he was the top seed during his title defense. He first faced David Ferrer, who retired due to injury during the second set of what proved to be his last Grand Slam match. In his semifinal against Juan Martín del Potro, Nadal himself retired due to knee pain after losing the second set 6–2. On 31 October, he announced his withdrawal from the Paris Masters due to an abdominal injury, and as a result Novak Djokovic replaced him as world No. 1.[185]
Nadal was due to start his season at the 2019 Brisbane International, but withdrew shortly before his first match due to an injury. He was seeded second at the 2019 Australian Open, and recorded straight-sets wins against James Duckworth, Matthew Ebden, Alex de Minaur, Tomáš Berdych, first-time quarterfinalist Frances Tiafoe and first-time semifinalist Stefanos Tsitsipas to reach his fifth Australian Open final. This was the first time that Nadal had advanced to an Australian Open final without losing a set; he had also lost only two service games during this run, both in his first-round match against Duckworth. Nadal lost the final in straight sets to Novak Djokovic, winning only eight games for the match and marking Nadal's first straight-sets loss in a Grand Slam final. Nadal next played at the 2019 Mexico Open, where he reached the second round, losing to Nick Kyrgios in three sets despite having three match points in the third set.[186] Nadal withdrew from both Indian Wells and Miami due to a right hip injury.[187]
Nadal began the clay season at the 2019 Monte Carlo Masters, reaching the semifinal, where he was defeated by eventual champion Fabio Fognini in straight sets.[188] He then competed in Barcelona (where he had won a record eleven titles), defeating Leonardo Mayer, David Ferrer and Jan-Lennard Struff, but lost to the eventual champion Dominic Thiem in straight sets. In Madrid, he had a bye in the first round and defeated Felix Auger-Aliassime, Frances Tiafoe and Stan Wawrinka, leading to his third clay-court semifinal of the year. He faced Stefanos Tsitsipas in the semifinal, where he lost in three sets.[189] He won his first tournament of the year in Rome, with a three-set win over Djokovic in the final.[190] At the 2019 French Open, Nadal defeated Yannick Hanfmann, Yannick Maden, David Goffin, Juan Ignacio Londero, Kei Nishikori and Roger Federer (their first meeting at the tournament since 2011), dropping only one set along the way, to set up his twelfth French Open final. In a rematch of the previous year's final against Thiem, Nadal prevailed in four sets to claim his record-extending twelfth French Open title.[191] In doing so, he broke Margaret Court's all-time record of eleven singles titles won at a single Grand Slam event.[192]
Nadal next played at the 2019 Wimbledon Championships and, like the previous year, reached the semifinals, where he faced Federer at Wimbledon for the first time since the 2008 final, a match regarded by some as the greatest in the history of tennis. Nadal lost the semifinal in four sets.[193] At the Rogers Cup, Nadal was the defending champion and top seed. By defeating Fabio Fognini in the quarterfinals, he took over the record for the most Masters 1000 match wins of any active player, surpassing Roger Federer's previous record of 378 victories.[194] In the semifinals he received a walkover over Gaël Monfils, and in the final he yielded just three games to Daniil Medvedev, winning in straight sets. This was the first time he had successfully defended a title on a surface other than clay.[195] For the second year in a row, Nadal then withdrew from the Cincinnati Masters to focus on his US Open preparations.[196] At the 2019 US Open, Nadal lost only one set (against Marin Čilić) en route to the final, which he won against Daniil Medvedev in five sets. In doing so, Nadal claimed his fourth US Open title and 19th Grand Slam title (placing him just one behind Roger Federer), won his first five-set Grand Slam final since the 2009 Australian Open, and completed his second-best season in terms of Grand Slam singles results.[197] At the Paris Masters, Nadal reached the semifinals but pulled out due to an abdominal injury.[198]
At the 2019 ATP Finals, Nadal played in the Andre Agassi group and defeated Tsitsipas and Medvedev in the round-robin stage, but it was not enough to progress to the semifinals.[199] Despite his elimination, Nadal secured the year-end No. 1 ranking when Djokovic was also eliminated in the round-robin stage. This was Nadal's fifth time finishing as the year-end No. 1 player, drawing level with Jimmy Connors, Federer and Djokovic behind Pete Sampras (six), and in doing so he broke a number of the records he had set in 2017.[200]
At the 2019 Davis Cup Finals, Nadal helped Spain win its sixth Davis Cup title. Nadal won all eight of his matches in singles and doubles, extending his winning streak in Davis Cup singles matches to 29 (29–1 record overall) without dropping a set or losing a game on serve;[201][202][203] he also won the tournament's most valuable player award.[203]
Nadal began his 2020 season at the inaugural ATP Cup, helping Spain reach the final, where they lost to Serbia, with Nadal losing to Djokovic in straight sets.[204] Nadal then played at the 2020 Australian Open and won his first three matches in straight sets against Hugo Dellien, Federico Delbonis and Pablo Carreño Busta. In the fourth round, he defeated Nick Kyrgios in four sets before losing to Dominic Thiem in four sets in the quarterfinals.[205] Afterwards, Nadal won his third Mexican Open title, defeating Taylor Fritz in straight sets in the final.[206]
Roger Federer and Nadal have been playing each other since 2004, and their rivalry is a significant part of both men's careers.[45][207][208] They held the top two rankings on the ATP Tour from July 2005 to 14 August 2009,[209] and again from 11 September 2017 to 15 October 2018. They are the only pair of men to have ever finished four consecutive calendar years at the top.[210][211] Nadal ascended to No. 2 in July 2005 and held this spot for a record 160 consecutive weeks before surpassing Federer in August 2008.[212]
They have played 40 times. Nadal leads 24–16 overall and 10–4 in Grand Slam tournaments. Nadal has a winning record on clay (14–2) and outdoor hard courts (8–6), while Federer leads the indoor hard courts 5–1 and grass 3–1.[213]
As tournament seedings are based on rankings, 24 of their matches have been in tournament finals, including an all-time record nine Grand Slam finals.[214] From 2006 to 2008, they played in every French Open and Wimbledon final, and they also met in the title match of the 2009 Australian Open, the 2011 French Open and the 2017 Australian Open.[214] Nadal won six of the nine, losing the 2006 and 2007 Wimbledon finals and the 2017 Australian Open final. Four of these matches went to five sets (the 2007 and 2008 Wimbledon and 2009 and 2017 Australian Open finals), and the 2008 Wimbledon final has been lauded as the greatest match ever by many long-time tennis analysts.[46][215][216][217] Nadal is the only player to have beaten Federer in a Grand Slam final on all three surfaces (grass, hard, and clay).
Novak Djokovic and Nadal have met 55 times, more than any other pair in the Open Era; Nadal leads 9–6 at the Grand Slams but trails 26–29 overall.[117][218][219] Nadal leads on clay 17–7, Djokovic leads on hard courts 20–7, and they are tied on grass 2–2.[117][219] In 2009, this rivalry was listed as the third greatest of the previous 10 years by ATPworldtour.com.[220] Djokovic is one of only two players with at least ten match wins against Nadal (the other being Federer) and the only player to defeat Nadal seven consecutive times, a streak that included two consecutive wins on clay. The two previously shared the record for the longest best-of-three-sets match (4 hours and 3 minutes, in the 2009 Mutua Madrid Open semifinals) until the London 2012 Olympic semifinal between Roger Federer and Juan Martín del Potro, which lasted 4 hours and 26 minutes.[221][222] They have also played in a record 13 Masters Series finals.
In the 2011 Wimbledon final, Djokovic won in four sets for his first Grand Slam final victory over Nadal.[223] Djokovic also defeated Nadal in the 2011 US Open Final. In 2012, Djokovic defeated Nadal in the Australian Open final for a third consecutive Grand Slam final win over Nadal. This is the longest Grand Slam tournament final in Open era history at 5 hours, 53 minutes.[224] Nadal won their last three 2012 meetings in the final of the Monte Carlo Masters, Rome Masters and French Open in April, May, and June 2012, respectively.[225] In 2013, Djokovic defeated Nadal in straight sets in the final at Monte Carlo, ending Nadal's record eight consecutive titles there, but Nadal got revenge at the French Open in an epic five-setter 9–7 in the fifth. In August 2013, Nadal won in Montreal, denying Djokovic his fourth Rogers Cup title.[117] Nadal also defeated Djokovic in the 2013 US Open Final.
Nadal defeated Djokovic in the 2014 French Open final. Djokovic then won their next seven meetings, including a straight-sets win in the quarterfinals of the 2015 French Open that ended Nadal's 39-match win streak at Stade Roland Garros and his bid for a sixth consecutive title, making Djokovic only the second player, after Robin Söderling, to defeat Nadal at the event. Nadal comfortably defeated Djokovic in the 2017 Madrid Open semifinals (6–2, 6–4), his first victory over the Serb since the 2014 French Open, and beat him again on clay in the 2018 Rome semifinals. They then met in the 2018 Wimbledon semifinals, where Djokovic emerged victorious after a battle lasting over five hours, spread over two days, that went to 10–8 in the fifth set. In the 2019 Australian Open final, Djokovic won easily in straight sets, handing Nadal his first straight-sets loss in a Grand Slam final. In the 2019 Rome Masters final, however, Nadal defeated the Serbian in three sets, a match that also featured the first 6–0 set won by either player in the rivalry.
Nadal and Andy Murray have met on 24 occasions since 2007, with Nadal leading 17–7. Nadal leads 7–2 on clay, 3–0 on grass, and 7–5 on hard courts (4–4 outdoors, 3–1 to Nadal indoors), but trails 1–3 in finals. The pair once met regularly at Grand Slam level: nine of their 24 meetings have come in Grand Slams, with Nadal leading 7–2 (3–0 at Wimbledon, 2–0 at the French Open, 1–1 at the Australian Open, and 1–1 at the US Open).[226] Seven of these nine meetings have been in quarterfinals or semifinals, making the rivalry an important part of both men's careers. Nadal defeated Murray in three consecutive Grand Slam semifinals in 2011, from the French Open to the US Open. They have never met in a Grand Slam final, but Murray leads 3–1 in ATP finals, with Nadal winning at Indian Wells in 2009[227] and Murray winning in Rotterdam the same year,[228] in Tokyo in 2011,[229] and in Madrid in 2015.
Nadal and Stan Wawrinka have met 20 times, with Nadal leading 17–3 (85.0%). Although this rivalry carries less weight than Nadal's rivalries with the other members of the Big Four, the pair have met in several prestigious tournaments. Nadal won the first 12 encounters, all in straight sets, including two finals, one of them the Masters 1000 final at Madrid in 2013. Since Wawrinka's breakthrough season in 2013, however, the head-to-head has been much closer (3–4 from 2014 onward).[230] Wawrinka scored his first win against Nadal in their most important encounter, the 2014 Australian Open final, which he won in four sets to deny Nadal a double career Grand Slam. It was also the only match between the pair not decided in straight sets. Nadal won their second Grand Slam final, at the 2017 French Open.[231]
Nadal and his compatriot David Ferrer met a total of 32 times, with Nadal leading 26–6 (81.3%) at the time of Ferrer's retirement. The pair met in several prestigious tournaments and important matches. Ferrer won their first meeting, in Stuttgart in 2004, in three sets, but Nadal won the next four until Ferrer defeated him in the fourth round of the 2007 US Open. The pair contested their first tournament final in Barcelona in 2008, where Nadal won in three sets, and met in the Barcelona final again a year later, with Nadal taking the title in straight sets. In 2010, they met in their first Masters 1000 final, in Rome, where Nadal won in straight sets. Ferrer, however, got a measure of revenge in the 2011 Australian Open quarterfinal, beating Nadal in straight sets for the first time at a Grand Slam.
Their biggest meeting came in the 2013 French Open final. Ferrer was in his first major final, whereas Nadal was aiming for his 8th title at Roland Garros and 12th Grand Slam title overall. It was a straightforward victory for Nadal, 6–3, 6–2, 6–3. Between that final and 2015, Ferrer and Nadal played six more matches, with Nadal winning four of them.
In 2018, Ferrer announced that the US Open would be his last Grand Slam tournament and that he would retire the following year during the clay-court season in Spain. Nadal and Ferrer had their first meeting since 2015 in the first round of that US Open; Ferrer's final Grand Slam match ended in injury, as he was forced to retire in the second set against Nadal. The pair met once more at Barcelona in 2019, Ferrer's second-to-last tournament. Although Nadal won 6–3, 6–3 in straight sets, it was a close match to the end, with the resilient Ferrer fighting for every point. It proved to be their final meeting before Ferrer's retirement at the 2019 Madrid Open.
Nadal and Juan Martín del Potro have met 17 times, with Nadal leading 11–6 (64.7%). Outside the Big Four, no active player has more wins against Nadal than Del Potro. The two have met in many prestigious tournaments, including at three of the four Grand Slams. Nadal won their first four meetings between 2007 and 2009, but Del Potro won the next three, including a straight-sets victory in the semifinals of the 2009 US Open (a tournament Del Potro went on to win after defeating Roger Federer in the final). Their next major meeting came in the 2011 Davis Cup final, where Nadal beat Del Potro in four sets to claim the Davis Cup for Spain, their fourth since 2004. In 2013, Nadal also denied Del Potro his first Masters 1000 title with a three-set victory at the Indian Wells Masters. Del Potro, however, claimed one of his most important victories against Nadal in the semifinals of the 2016 Summer Olympics, beating him in three close sets that ended with a tiebreak, and went on to claim the silver medal.
After long injury spells for both players, the pair met at the 2017 US Open, their first Grand Slam meeting since the round of 16 at Wimbledon in 2011. Del Potro, who faced Nadal after a four-set victory over Federer, had reached a Grand Slam semifinal for the first time since 2013, but the Spaniard got the better of him, winning in four sets. The pair then met at three of the four Grand Slam events in 2018, including a memorable quarterfinal at Wimbledon that lasted close to five hours, with Nadal coming out on top 7–5, 6–7, 4–6, 6–4, 6–4. They met again in the semifinals of the 2018 US Open, where Nadal was forced to retire, sending Del Potro to his first Grand Slam final since his 2009 US Open triumph; he lost that final in straight sets to Novak Djokovic.
Nadal and Tomáš Berdych have met a total of 24 times, with Nadal leading 20–4 (83.3%). Although the rivalry is heavily lopsided in Nadal's favor, the two have produced some memorable matches at prestigious tournaments. The pair have met at two of the four Grand Slams, three times at the Australian Open and twice at Wimbledon, including the 2010 final. Nadal and Berdych first met in the final of the ATP tournament in Båstad, which Nadal won in three sets for only his 8th tour title. They met a few more times in 2005–06, all at Masters 1000 tournaments, with Berdych winning three of those four matches, in Canada, Madrid, and Cincinnati. Their first Grand Slam meeting came in the 2007 Wimbledon quarterfinals, where Nadal defeated Berdych in straight sets. Their next significant meeting was in the opening rubber of the 2009 Davis Cup final, where Nadal again beat Berdych in straight sets; Spain went on to win the Davis Cup that year.
Their next meeting in a final came at a Grand Slam, the 2010 Wimbledon final. Nadal had reached his 4th Wimbledon final in pursuit of a second title there, while Berdych had reached his first Grand Slam final after defeating Roger Federer in four sets in the quarterfinals and Novak Djokovic in straight sets in the semifinals. The Spaniard proved too good for the Czech, and Nadal won in straight sets to take his 8th Grand Slam title. Their next Grand Slam meeting came two years later in the 2012 Australian Open quarterfinals, where Nadal won in four tight sets before losing the final to Djokovic in five sets. After multiple meetings from 2012 to 2014, all won by Nadal, the pair met again in the 2015 Australian Open quarterfinals. There, after 18 straight losses over nine years, Berdych claimed a win over Nadal, his only Grand Slam victory against the Spaniard, taking the match in straight sets including a 6–0 second set. The two met later in 2015 in Madrid, where Nadal won in straight sets.
After a four-year gap, Nadal and Berdych met most recently in the round of 16 at the 2019 Australian Open. Both players had ended their 2018 seasons prematurely with injuries, but both had been playing well in early 2019, with Berdych reaching the final in Doha. As in many of their meetings, however, Nadal dominated the Czech and beat him in straight sets.
Nadal stands alone in the Open Era as the player with the most clay court titles (59), and holds an all-time record of 12 French Opens, 11 Monte-Carlo Masters and 11 Barcelona titles. He also stands alone with the longest single surface win streak in matches (clay courts, 81) and in sets (clay courts, 50) in the history of the Open Era. Due to these achievements, many have called Nadal "The King of Clay",[a] and he is widely regarded as the greatest clay-court player in history.[b] Nadal's records and evolution into an all-court champion have established him as one of the greatest players in tennis history,[c] with some former tennis players and analysts considering him to be the greatest tennis player of all time.[d]
The former tennis player Andre Agassi picked Nadal as the greatest of all time because of the way the Spaniard "had to deal with Federer, Djokovic, Murray in the golden age of tennis".[261] Nadal leads the Grand Slam head-to-head record against the other members of the Big Three, and he has the most Grand Slam titles won after defeating a member of the Big Three en route (13).
Nadal is, along with Mats Wilander, one of only two male players to win at least two Grand Slams on each of the three surfaces (hard, grass and clay). He is also, along with Andre Agassi, one of only two male players to win the Olympic gold medal in singles as well as all four Grand Slams during his career, a feat known as the Career Golden Slam. In 2010, Nadal became the only male player to win Grand Slams on three different surfaces (clay, grass and hard courts) in the same calendar year. Nadal holds the record for most consecutive years winning at least one Grand Slam (2005–2014), as well as the record for most outdoor titles (83).
Nadal's playing style and personality can be summarised by Jimmy Connors: "He's built out of a mold that I think I came from also, that you walk out there, you give everything you have from the very first point to the end no matter what the score. And you're willing to lay it all out on the line and you're not afraid to let the people see that."
Former ATP world No. 1 and Nadal's coach Carlos Moyá remembers the first time he played Nadal, in Germany, when Moyá was 22 and Rafa was just 12. He shared the account in the book "Facing Nadal" by Scoop Malinowski: "I met him for the first time in Stuttgart. He was playing an under 12s and I was playing the Masters event. We actually played that day and he was twelve and I was twenty-two. I think he was a very great player under twelve, he was very shy off court. But then we saw something different on court. But he was very hungry to play and compete and that's something you could see right away."[266]
Nadal generally plays an aggressive, behind-the-baseline game founded on heavy topspin groundstrokes, consistency, speedy footwork and tenacious court coverage, thus making him an aggressive counterpuncher.[267] Known for his athleticism and speed around the court, Nadal is an excellent defender[268] who hits well on the run, constructing winning plays from seemingly defensive positions. He also plays very fine dropshots, which work especially well because his heavy topspin often forces opponents to the back of the court.[269]
Nadal employs a semi-western grip forehand, often with a "lasso-whip" follow-through, where his left arm hits through the ball and finishes above his left shoulder – as opposed to a more traditional finish across the body or around his opposite shoulder.[270][271] Nadal's forehand groundstroke form allows him to hit shots with heavy topspin – more so than many of his contemporaries.[272]
San Francisco tennis researcher John Yandell used a high-speed video camera and special software to count the average number of revolutions of a tennis ball hit full force by Nadal. Yandell concluded:
The first guys we did were Sampras and Agassi. They were hitting forehands that in general were spinning about 1,800 to 1,900 revolutions per minute. Federer is hitting with an amazing amount of spin, too, right? 2,700 revolutions per minute. Well, we measured one forehand Nadal hit at 4,900. His average was 3,200.[273]
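To put those figures in perspective, the short sketch below (purely illustrative, and not part of Yandell's study) converts the quoted spin rates into revolutions per second and expresses each as a multiple of an assumed 1,850 rpm baseline, taken as the midpoint of the 1,800–1,900 rpm range reported for Sampras and Agassi.

```python
# Illustrative arithmetic based on the forehand spin rates quoted above.
# The 1,850 rpm baseline is an assumed midpoint of the 1,800-1,900 rpm range
# reported for Sampras and Agassi; everything else is a straight conversion.

spin_rpm = {
    "Sampras/Agassi (assumed baseline)": 1850,
    "Federer (average)": 2700,
    "Nadal (average)": 3200,
    "Nadal (peak measured)": 4900,
}

baseline = spin_rpm["Sampras/Agassi (assumed baseline)"]
for player, rpm in spin_rpm.items():
    revs_per_sec = rpm / 60      # e.g. 3,200 rpm is roughly 53 revolutions per second
    ratio = rpm / baseline       # multiple of the 1990s baseline
    print(f"{player}: {rpm} rpm = {revs_per_sec:.0f} rev/s ({ratio:.2f}x baseline)")
```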
While Nadal's shots tend to land short of the baseline, the characteristically high bounces his forehands achieve tend to mitigate the advantage an opponent would normally gain from capitalizing on a short ball.[274] Although his forehand is based on heavy topspin, he can hit the ball deep and flat with a more orthodox follow through for clean winners.
Nadal's serve was initially considered a weak point in his game, although his improvements since 2005 in both first-serve points won and break points saved have allowed him to consistently compete for and win major titles on faster surfaces. Nadal relies on the consistency of his serve to gain a strategic advantage in points rather than going for service winners.[275] Before the 2010 US Open, however, he altered his service motion, arriving in the trophy pose earlier and pulling the racket lower during it, and modified his service grip to a more continental one. These two changes increased his average serve speed by around 10 mph during the 2010 US Open, peaking at 135 mph (217 km/h), and allowed him to win more free points on serve.[276] After the 2010 US Open, Nadal's serve speed dropped back to previous levels and was again cited as an area needing improvement.[277][278][279] From 2019 onwards, several analysts praised Nadal's improvement on serve and noted that his serve speed had increased markedly.[280][281][282][283]
After adding Carlos Moyá to his coaching team in December 2016, Nadal adopted a more offensive style of play. Under Moyá's direction, Nadal improved his serve[284][285] and also incorporated serve-and-volley as a surprise tactic in some of his matches.[286]
Nadal is a clay court specialist in the sense that he has been extremely successful on that surface. He has won 12 times at the French Open, 11 times at Monte Carlo and Barcelona, and nine at Rome. However, Nadal has shed that label owing to his success on other surfaces, including holding simultaneous Grand Slam tournament titles on grass, hard courts, and clay on two separate occasions, winning ten Masters series titles on hard court, and winning the Olympic gold medal on hardcourt.[267][287]
Despite praise for Nadal's talent and skill, in the past, some had questioned his longevity in the sport, citing his build and playing style as conducive to injury.[288] Nadal himself has admitted to the physical toll hard courts place on ATP Tour players, calling for a reevaluated tour schedule featuring fewer hard court tournaments.[289] This "longevity" narrative has proved to be inaccurate and pundits today admire his resilience.[290]
Nadal has had several coaches throughout his career. Toni Nadal coached him from 1990 to 2017.[291] He is currently coached by Francisco Roig (since 2005)[292] and Carlos Moyá (since 2016).[293]
Nadal has been sponsored by Kia Motors since 2006. He has appeared in advertising campaigns for Kia as a global ambassador for the company. In May 2008, Kia released a claymation viral ad featuring Nadal in a tennis match with an alien.[294] In May 2015, Nadal extended his partnership with Kia for another five years.[295]
Nike serves as Nadal's clothing and shoe sponsor. Nadal's signature on-court attire entailed a variety of sleeveless shirts paired with 3/4 length capri pants.[296] For the 2009 season, Nadal adopted more-traditional on-court apparel. Nike encouraged Nadal to update his look in order to reflect his new status as the sport's top player at that time[297] and associate Nadal with a style that, while less distinctive than his "pirate" look, would be more widely emulated by consumers.[298][299] At warmup tournaments in Abu Dhabi and Doha, Nadal played matches in a polo shirt specifically designed for him by Nike,[300] paired with shorts cut above the knee. Nadal's new, more conventional style carried over to the 2009 Australian Open, where he was outfitted with Nike's Bold Crew Men's Tee[301] and Nadal Long Check Shorts.[302][303][304] Nadal wears Nike's Air CourtBallistec 2.3 tennis shoes,[305] bearing various customizations throughout the season, including his nickname "Rafa" on the right shoe and a stylized bull logo on the left.
He became the face of Lanvin's L'Homme Sport cologne in April 2009.[306] Nadal uses a Babolat AeroPro Drive racquet with a 4¼-inch (L2) grip. As of the 2010 season, Nadal's racquets are painted to resemble the new Babolat AeroPro Drive with Cortex GT in order to market a current model which Babolat sells.[307][308] Nadal uses no replacement grip, instead wrapping two overgrips around the handle. He used Duralast 15L strings until the 2010 season, when he switched to Babolat's new black-colored RPM Blast string. Nadal's racquets are always strung at 55 lb (25 kg), regardless of the surface or conditions he is playing in.[citation needed]
As of January 2010, Nadal is the international ambassador for Quely, a company from his native Mallorca that manufactures biscuits, baked goods and chocolate-coated products; he has eaten its products since he was a young child.[309]
In 2010, luxury watchmaker Richard Mille announced that he had developed an ultra-light wristwatch in collaboration with Nadal called the Richard Mille RM027 Tourbillon watch.[310] The watch is made of titanium and lithium and is valued at US$525,000; Nadal was involved in the design and testing of the watch on the tennis court.[310] During the 2010 French Open, Men's Fitness reported that Nadal wore the Richard Mille watch on the court as part of a sponsorship deal with the Swiss watchmaker.[311]
Nadal replaced Cristiano Ronaldo as the face of Emporio Armani Underwear and Armani Jeans for the spring/summer 2011 collection.[312] This was the first time the label had chosen a tennis player for the role; footballers had dominated the campaign previously, with David Beckham fronting the ads from 2008 before Ronaldo.[313] Armani said he selected Nadal as his latest male underwear model because "...he is ideal as he represents a healthy and positive model for youngsters".[312]
In June 2012, Nadal joined the group of sports endorsers of the PokerStars online poker cardroom.[314] Nadal won a charity poker tournament against retired Brazilian football player Ronaldo in 2014.[315]
In April 2017, the centre court of the Barcelona Open was named Pista Rafa Nadal.[316]
In February 2010, Nadal was featured in the music video for Shakira's "Gypsy",[317] a single from her album She Wolf. Explaining why she chose Nadal for the video, Shakira told the Latin American Herald Tribune: "I thought that maybe I needed someone I could in some way identify with. And Rafael Nadal is a person who has been totally committed to his career since he was very young. Since he was 17, I believe."[318][319]
Nadal was the subject of Scoop Malinowski's 2016 book "Facing Nadal."
128036 Rafaelnadal is a main-belt asteroid discovered in 2003 at the Observatorio Astronómico de Mallorca and named after Nadal.[320] The decision to name the asteroid after Nadal was made by the International Astronomical Union in response to a request by the observatory. The asteroid is four kilometers in diameter and travels through space at a speed of 20 km per second.[321]
Nadal owns and trains at the Rafa Nadal Sports Centre (40,000 square meters) in his hometown of Manacor, Mallorca. The centre houses the Rafa Nadal Tennis Academy, where the American International School of Mallorca is located.[322] Also located in the centre is a sports residence, a Rafael Nadal museum, a health clinic, a fitness centre with spa and a café. The facility has 26 tennis courts among its sporting areas.[322]
Nadal took part in Thailand's "A Million Trees for the King" project, planting a tree in honour of King Bhumibol Adulyadej on a visit to Hua Hin during his Thailand Open 2010. "For me it's an honour to be part of this project", said Nadal. "It's a very good project. I want to congratulate the Thai people and congratulate the King for this unbelievable day. I wish all the best for this idea. It's very, very nice."[323]
The Fundación Rafa Nadal was created in November 2007 and officially presented in February 2008 at the Manacor Tennis Club in Mallorca, Spain. The foundation focuses on social work and development aid, particularly for children and young people.[324] On his decision to start a foundation, Nadal said: "This can be the beginning of my future, when I retire and have more time, [...] I am doing very well and I owe society, [...] A month-and-a-half ago I was in Chennai, in India. The truth is we live great here....I can contribute something with my image..." Nadal was inspired by the Red Cross benefit match against malaria with Real Madrid goalkeeper Iker Casillas, recalling, "We raised an amount of money that we would never have imagined. I have to thank Iker, my project partner, who went all out for it, [...] That is why the time has come to set up my own foundation and determine the destination of the money."
Nadal's mother, Ana María Parera, chairs the charitable organization and father Sebastian is vice-chairman. Coach and uncle Toni Nadal and his agent, former tennis player Carlos Costa, are also involved. Roger Federer has given Nadal advice on getting involved in philanthropy. Despite the fact that poverty in India struck him particularly hard, Nadal wants to start by helping "people close by, in the Balearic Islands, in Spain, and then, if possible, abroad".[325]
On 16 October 2010, Nadal traveled to India for the first time to visit his tennis academy for underprivileged children at the Anantapur Sports Village in Anantapur, Andhra Pradesh. His foundation has also worked on the Anantapur Educational Center project, in collaboration with the Vicente Ferrer Foundation.[326][327]
Nadal opened his tennis academy to Majorca flood victims in October 2018.[328] At the time he was recovering at home in Majorca after being forced out of the US Open by injury, and the day after the flood he personally helped the victims alongside some friends.[329][330]
Later, Nadal donated €1 million towards rebuilding Sant Llorenç des Cardassar, the town on the island most affected by the floods.[331][332] Nadal also organized other charitable activities to help repair the damage, such as the Olazábal & Nadal charity golf tournament[333][334] and a charity tennis match in which he was due to play but which had to be called off because he needed surgery on an ankle injury.[335]
Nadal supports or has supported other charities, such as City Harvest, the Elton John AIDS Foundation, the Laureus Sport for Good Foundation and the Small Steps Project.[336]
Nadal is an avid fan of the association football club Real Madrid. On 8 July 2010, it was reported that he had become a shareholder of RCD Mallorca, the club of his native Mallorca, in an attempt to help it out of debt.[337] Nadal reportedly owns 10 percent of the club and was offered the role of vice president, which he declined.[338] His uncle Miguel Ángel Nadal became assistant coach under Michael Laudrup. Nadal remains a passionate Real Madrid supporter; ESPN.com writer Graham Hunter wrote, "He's as Merengue as [Real Madrid icons] Raúl, Iker Casillas and Alfredo Di Stéfano."
Shortly after acquiring his interest in Mallorca, Nadal called out UEFA for apparent hypocrisy in ejecting the club from the 2010–11 UEFA Europa League for excessive debts, saying through a club spokesperson, "Well, if those are the criteria upon which UEFA is operating, then European competition will only comprise two or three clubs because all the rest are in debt, too."[339]
He is a fervent supporter of the Spanish national team, and he was one of six people not affiliated with the team or the national federation allowed to enter the team's locker room following Spain's victory in the 2010 FIFA World Cup Final.[339]
Nadal lived with his parents and younger sister María Isabel in a five-story apartment building in their hometown of Manacor, Mallorca. In June 2009, Spanish newspaper La Vanguardia, and then The New York Times, reported that his parents, Ana María and Sebastián, had separated. This news came after weeks of speculation in Internet posts and message boards over Nadal's personal issues as the cause of his setback.[340]
Nadal is an agnostic atheist.[341] As a young boy, he would run home from school to watch Goku in his favorite Japanese anime, Dragon Ball. CNN released an article about Nadal's childhood inspiration, and called him "the Dragon Ball of tennis" owing to his unorthodox style "from another planet".[342]
In addition to tennis and football, Nadal enjoys playing golf and poker.[343] In April 2014 he played the world's No. 1 female poker player, Vanessa Selbst, in a poker game in Monaco.[344] Nadal's autobiography, Rafa (Hyperion, 2012, ISBN 1-4013-1092-3), written with assistance from John Carlin, was published in August 2011.[345] Nadal has been in a relationship with María Francisca (Mery) Perelló Pascual[346] (often mistakenly referred to as Xisca in the press)[347][348] since 2005, and their engagement was reported in January 2019.[349] The couple married in October 2019.[350]
Current through the 2020 Mexican Open.
* Nadal withdrew before the third round of the 2016 French Open due to a wrist injury, which does not officially count as a loss.
* Nadal received a walkover in the second round of the 2019 US Open, which does not count as a win.
Finals: 2 (2 runners-up)
en/4915.html.txt
ADDED
@@ -0,0 +1,71 @@
Ra (/rɑː/;[1] Ancient Egyptian: rꜥ or rˤ; also transliterated rˤw /ˈɾiːʕuw/; cuneiform: 𒊑𒀀 ri-a or 𒊑𒅀ri-ia)[2] or Re (/reɪ/; Coptic: ⲣⲏ, Rē) is the ancient Egyptian deity of the sun. By the Fifth Dynasty in the 25th and 24th centuries BC, he had become one of the most important gods in ancient Egyptian religion, identified primarily with the noon sun. Ra was believed to rule in all parts of the created world: the sky, the Earth, and the underworld.[3] He was the god of the sun, order, kings, and the sky.
Ra was portrayed as a falcon and shared characteristics with the sky god Horus. At times the two deities were merged as Ra-Horakhty, "Ra, who is Horus of the Two Horizons". In the New Kingdom, when the god Amun rose to prominence he was fused with Ra into Amun-Ra.
The cult of the Mnevis bull, an embodiment of Ra, had its center in Heliopolis and there was a formal burial ground for the sacrificed bulls north of the city.
All forms of life were believed to have been created by Ra. In some accounts, humans were created from Ra's tears and sweat, hence the Egyptians call themselves the "Cattle of Ra". In the myth of the Celestial Cow, it is recounted how mankind plotted against Ra and how he sent his eye as the goddess Sekhmet to punish them.
The sun is the giver of life, controlling the ripening of crops which were worked by man. Because of the life-giving qualities of the sun, the Egyptians worshiped the sun as a god. As the creator of the universe and the giver of life, the sun, or Ra, represented life, warmth and growth. Since the people regarded Ra as a principal god, creator of the universe and the source of life, he had a strong influence on them, which led to him being one of the most worshiped of all the Egyptian gods and even considered King of the Gods. At an early period in Egyptian history his influence spread throughout the whole country, bringing multiple representations in form and in name. The most common form combinations are with Atum (his human form), Khepri (the scarab beetle) and Horus (the falcon). The form in which he usually appears is that of a man with a falcon head, owing to his combination with Horus, another sky god. On top of his head sits a solar disc with a cobra, which in many myths represents the eye of Ra.

At the beginning of time, when there was nothing but chaos, the sun god existed alone in the watery mass of Nun which filled the universe.[4] "I am Atum when he was alone in Nun, I am Ra when he dawned, when he began to rule that which he had made."[4] This passage describes how Atum created everything in human form out of the chaos and how Ra then began to rule over the earth, where humans and divine beings coexisted. He created Shu, the god of air, and Tefnut, the goddess of moisture.[5] The siblings symbolized two universal principles of humans: life and right (justice). Ra was believed to have created all forms of life by calling them into existence, uttering their secret names. In some accounts humans were created from Ra's tears and sweat.[4] According to one myth, the first portion of Earth came into being when the sun god summoned it out of the watery mass of Nun.

In the myth of the Celestial Cow (the sky was thought of as a huge cow, the goddess Meht-urt) it is recounted how mankind plotted against[6] Ra and how he sent his eye as the goddess Sekhmet to punish them. Extensions of Ra's power were often shown as the eye of Ra, the female counterparts of the sun god. Ra had three daughters, Bastet, Sekhmet, and Hathor, who were all considered the eye of Ra and would seek out his vengeance. Sekhmet, the Eye of Ra, was created by the fire in Ra's eye; she was violent and was sent to slaughter the people who betrayed Ra, but when calm she became the kinder, more forgiving goddess Hathor. Sekhmet was the powerful warrior and protector, while Bastet, who was depicted as a cat, was shown as gentle and nurturing.
Ra was thought to travel on the Atet, two solar barques called the Mandjet (the Boat of Millions of Years) or morning boat and the Mesektet or evening boat.[7] These boats took him on his journey through the sky and the Duat, the literal underworld of Egypt. While Ra was on the Mesektet, he was in his ram-headed form.[7] When Ra traveled in his sun boat, he was accompanied by various other deities including Sia (perception) and Hu (command), as well as Heka (magic power). Sometimes, members of the Ennead helped him on his journey, including Set, who overcame the serpent Apophis, and Mehen, who defended against the monsters of the underworld. When Ra was in the underworld, he would visit all of his various forms.[7]
Apophis, the god of chaos, was an enormous serpent who attempted to stop the sun boat's journey every night by consuming it or by stopping it in its tracks with a hypnotic stare. During the evening, the Egyptians believed that Ra set as Atum or in the form of a ram. The night boat would carry him through the underworld and back towards the east in preparation for his rebirth. These myths of Ra represented the sun rising as the rebirth of the sun by the sky goddess Nut; thus attributing the concept of rebirth and renewal to Ra and strengthening his role as a creator god as well.[8]
When Ra was in the underworld, he merged with Osiris, the god of the dead.[7]
Ra was represented in a variety of forms. The most usual form was a man with the head of a falcon and a solar disk on top and a coiled serpent around the disk.[7] Other common forms are a man with the head of a beetle (in his form as Khepri), or a man with the head of a ram. Ra was also pictured as a full-bodied ram, beetle, phoenix, heron, serpent, bull, cat, or lion, among others.[9]
He was most commonly featured with a ram's head in the Underworld.[7] In this form, Ra is described as the "ram of the west" or the "ram in charge of his harem".[7]
In some literature, Ra is described as an aging king with golden flesh, silver bones, and hair of lapis lazuli.[7]
The chief cultic center of Ra was Iunu "the Place of Pillars", later known to the Ptolemaic Kingdom as Heliopolis (Koinē Greek: Ἡλιούπολις, lit. "Sun City")[3] and today located in the suburbs of Cairo. He was identified with the local sun god Atum. As Atum or Atum-Ra, he was reckoned the first being and the originator of the Ennead ("The Nine"), consisting of Shu and Tefnut, Geb and Nut, Osiris, Set, Isis and Nephthys. The holiday of "The Receiving of Ra" was celebrated on May 26 in the Gregorian calendar.[10]
Ra's local cult began to grow from roughly the Second Dynasty, establishing him as a sun deity. By the Fourth Dynasty, pharaohs were seen as Ra's manifestations on earth, referred to as "Sons of Ra". His worship increased massively in the Fifth Dynasty, when Ra became a state deity and pharaohs had specially aligned pyramids, obelisks, and sun temples built in his honor. The rulers of the Fifth Dynasty told their followers that they were sons of Ra himself and the wife of the high priest of Heliopolis.[7] These pharaohs spent much of Egypt's money on sun temples.[7] The first Pyramid Texts began to arise, giving Ra more and more significance in the journey of the pharaoh through the Duat (underworld).[7]
During the Middle Kingdom, Ra was increasingly affiliated and combined with other chief deities, especially Amun and Osiris.
At the time of the New Kingdom of Egypt, the worship of Ra had become more complicated and grander. The walls of tombs were dedicated to extremely detailed texts that depicted Ra's journey through the underworld. Ra was said to carry the prayers and blessings of the living with the souls of the dead on the sun boat. The idea that Ra aged with the sun became more popular during the rise of the New Kingdom.
Many acts of worship included hymns, prayers, and spells to help Ra and the sun boat overcome Apep (Apophis).
The rise of Christianity in the Roman Empire put an end to the worship of Ra.[11]
As with most widely worshiped Egyptian deities, Ra's identity was often combined with other gods, forming an interconnection between deities.
en/4916.html.txt
ADDED
@@ -0,0 +1,71 @@
|
en/4917.html.txt
ADDED
@@ -0,0 +1,81 @@
1 |
+
|
2 |
+
|
3 |
+
A grape is a fruit, botanically a berry, of the deciduous woody vines of the flowering plant genus Vitis.
|
4 |
+
|
5 |
+
Grapes can be eaten fresh as table grapes or they can be used for making wine, jam, grape juice, jelly, grape seed extract, raisins, vinegar, and grape seed oil. Grapes are a non-climacteric type of fruit, generally occurring in clusters.
|
6 |
+
|
7 |
+
The cultivation of the domesticated grape began 6,000–8,000 years ago in the Near East.[1] Yeast, one of the earliest domesticated microorganisms, occurs naturally on the skins of grapes, leading to the discovery of alcoholic drinks such as wine. The earliest archeological evidence for a dominant position of wine-making in human culture dates from 8,000 years ago in Georgia.[2][3][4]
|
8 |
+
|
9 |
+
The oldest known winery was found in Armenia, dating to around 4000 BC.[5] By the 9th century AD the city of Shiraz was known to produce some of the finest wines in the Middle East. Thus it has been proposed that Syrah red wine is named after Shiraz, a city in Persia where the grape was used to make Shirazi wine.[6]
|
10 |
+
|
11 |
+
Ancient Egyptian hieroglyphics record the cultivation of purple grapes, and history attests to the ancient Greeks, Phoenicians, and Romans growing purple grapes both for eating and wine production.[7] The growing of grapes would later spread to other regions in Europe, as well as North Africa, and eventually to North America.
|
12 |
+
|
13 |
+
In North America, native grapes belonging to various species of the genus Vitis proliferate in the wild across the continent, and were a part of the diet of many Native Americans, but were considered by early European colonists to be unsuitable for wine. In the 19th century, Ephraim Bull of Concord, Massachusetts, cultivated seeds from wild Vitis labrusca vines to create the Concord grape which would become an important agricultural crop in the United States.[8]
|
14 |
+
|
15 |
+
Grapes are a type of fruit that grow in clusters of 15 to 300, and can be crimson, black, dark blue, yellow, green, orange, and pink. "White" grapes are actually green in color, and are evolutionarily derived from the purple grape. Mutations in two regulatory genes of white grapes turn off production of anthocyanins, which are responsible for the color of purple grapes.[9] Anthocyanins and other pigment chemicals of the larger family of polyphenols in purple grapes are responsible for the varying shades of purple in red wines.[10][11] Grapes are typically an ellipsoid shape resembling a prolate spheroid.
|
16 |
+
|
17 |
+
Raw grapes are 81% water, 18% carbohydrates, 1% protein, and have negligible fat (table). A 100 gram reference amount of raw grapes supplies 69 calories and a moderate amount of vitamin K (14% of the Daily Value), with no other micronutrients in significant content.
|
18 |
+
|
19 |
+
Most grapes come from cultivars of Vitis vinifera, the European grapevine native to the Mediterranean and Central Asia. Minor amounts of fruit and wine come from American and Asian species such as:
|
20 |
+
|
21 |
+
According to the Food and Agriculture Organization (FAO), 75,866 square kilometers of the world are dedicated to grapes. Approximately 71% of world grape production is used for wine, 27% as fresh fruit, and 2% as dried fruit. A portion of grape production goes to producing grape juice to be reconstituted for fruits canned "with no added sugar" and "100% natural". The area dedicated to vineyards is increasing by about 2% per year.
|
22 |
+
|
23 |
+
There are no reliable statistics that break down grape production by variety. It is believed that the most widely planted variety is Sultana, also known as Thompson Seedless, with at least 3,600 km2 (880,000 acres) dedicated to it. The second most common variety is Airén. Other popular varieties include Cabernet Sauvignon, Sauvignon blanc, Cabernet Franc, Merlot, Grenache, Tempranillo, Riesling, and Chardonnay.[13]
|
24 |
+
|
25 |
+
Commercially cultivated grapes can usually be classified as either table or wine grapes, based on their intended method of consumption: eaten raw (table grapes) or used to make wine (wine grapes). While almost all of them belong to the same species, Vitis vinifera, table and wine grapes have significant differences, brought about through selective breeding. Table grape cultivars tend to have large, seedless fruit (see below) with relatively thin skin. Wine grapes are smaller, usually seeded, and have relatively thick skins (a desirable characteristic in winemaking, since much of the aroma in wine comes from the skin). Wine grapes also tend to be very sweet: they are harvested at the time when their juice is approximately 24% sugar by weight. By comparison, commercially produced "100% grape juice", made from table grapes, is usually around 15% sugar by weight.[15]
|
26 |
+
|
27 |
+
Seedless cultivars now make up the overwhelming majority of table grape plantings. Because grapevines are vegetatively propagated by cuttings, the lack of seeds does not present a problem for reproduction. It is an issue for breeders, who must either use a seeded variety as the female parent or rescue embryos early in development using tissue culture techniques.
|
28 |
+
|
29 |
+
There are several sources of the seedlessness trait, and essentially all commercial cultivars get it from one of three sources: Thompson Seedless, Russian Seedless, and Black Monukka, all being cultivars of Vitis vinifera. There are currently more than a dozen varieties of seedless grapes. Several, such as Einset Seedless, Benjamin Gunnels's Prime seedless grapes, Reliance, and Venus, have been specifically cultivated for hardiness and quality in the relatively cold climates of the northeastern United States and southern Ontario.[16]
|
30 |
+
|
31 |
+
An offset to the improved eating quality of seedlessness is the loss of potential health benefits provided by the enriched phytochemical content of grape seeds (see Health claims, below).[17][18]
|
32 |
+
|
33 |
+
In most of Europe and North America, dried grapes are referred to as "raisins" or the local equivalent. In the UK, three different varieties are recognized, forcing the EU to use the term "dried vine fruit" in official documents.
|
34 |
+
|
35 |
+
A raisin is any dried grape. While raisin is a French loanword, the word in French refers to the fresh fruit; grappe (from which the English grape is derived) refers to the bunch (as in une grappe de raisins).
|
36 |
+
|
37 |
+
A currant is a dried Zante Black Corinth grape, the name being a corruption of the French raisin de Corinthe (Corinth grape). Currant has also come to refer to the blackcurrant and redcurrant, two berries unrelated to grapes.
|
38 |
+
|
39 |
+
A sultana was originally a raisin made from Sultana grapes of Turkish origin (known as Thompson Seedless in the United States), but the word is now applied to raisins made from either white grapes or red grapes that are bleached to resemble the traditional sultana.
|
40 |
+
|
41 |
+
Grape juice is obtained from crushing and blending grapes into a liquid. The juice is often sold in stores or fermented and made into wine, brandy, or vinegar. Grape juice that has been pasteurized, removing any naturally occurring yeast, will not ferment if kept sterile, and thus contains no alcohol. In the wine industry, grape juice that contains 7–23% of pulp, skins, stems and seeds is often referred to as "must". In North America, the most common grape juice is purple and made from Concord grapes, while white grape juice is commonly made from Niagara grapes, both of which are varieties of native American grapes, a different species from European wine grapes. In California, Sultana (known there as Thompson Seedless) grapes are sometimes diverted from the raisin or table market to produce white juice.[19]
|
42 |
+
|
43 |
+
Winemaking from red and white grape flesh and skins produces substantial quantities of organic residues, collectively called pomace (also "marc"), which includes crushed skins, seeds, stems, and leaves generally used as compost.[20] Grape pomace – some 10-30% of the total mass of grapes crushed – contains various phytochemicals, such as unfermented sugars, alcohol, polyphenols, tannins, anthocyanins, and numerous other compounds, some of which are harvested and extracted for commercial applications (a process sometimes called "valorization" of the pomace).[20][21]
|
44 |
+
|
45 |
+
Anthocyanins tend to be the main polyphenolics in purple grapes, whereas flavan-3-ols (i.e. catechins) are the more abundant class of polyphenols in white varieties.[22] Total phenolic content is higher in purple varieties due almost entirely to anthocyanin density in purple grape skin compared to absence of anthocyanins in white grape skin.[22] Phenolic content of grape skin varies with cultivar, soil composition, climate, geographic origin, and cultivation practices or exposure to diseases, such as fungal infections.
|
46 |
+
|
47 |
+
Muscadine grapes contain a relatively high phenolic content among dark grapes.[23][24] In muscadine skins, ellagic acid, myricetin, quercetin, kaempferol, and trans-resveratrol are major phenolics.[25]
|
48 |
+
|
49 |
+
The flavonols syringetin, syringetin 3-O-galactoside, laricitrin and laricitrin 3-O-galactoside are also found in purple grape but absent in white grape.[26]
|
50 |
+
|
51 |
+
Muscadine grape seeds contain about twice the total polyphenol content of skins.[24] Grape seed oil from crushed seeds is used in cosmeceuticals and skincare products. Grape seed oil contains tocopherols (vitamin E) and high contents of phytosterols and polyunsaturated fatty acids such as linoleic acid, oleic acid, and alpha-linolenic acid.[27][28][29]
|
52 |
+
|
53 |
+
Resveratrol, a stilbene compound, is found in widely varying amounts among grape varieties, primarily in their skins and seeds.[30] The skins of muscadine grapes contain about one hundred times the concentration of stilbenes found in the pulp. Fresh grape skin contains about 50 to 100 micrograms of resveratrol per gram.[31]
|
54 |
+
|
55 |
+
Comparing diets among Western countries, researchers have discovered that although French people tend to eat higher levels of animal fat, the incidence of heart disease remains low in France. This phenomenon has been termed the French paradox, and is thought to occur from protective benefits of regularly consuming red wine, among other dietary practices. Alcohol consumption in moderation may be cardioprotective by its minor anticoagulant effect and vasodilation.[32]
|
56 |
+
|
57 |
+
Although health authorities generally do not recommend taking up wine consumption,[33] some research indicates that moderate consumption, such as one glass of red wine a day for women and two for men, may confer health benefits.[34][35][36] Alcohol itself may have protective effects on the cardiovascular system.[37]
|
58 |
+
|
59 |
+
The consumption of grapes and raisins presents a potential health threat to dogs. Their toxicity to dogs can cause the animal to develop acute kidney failure (the sudden development of kidney failure) with anuria (a lack of urine production) and may be fatal.[38]
|
60 |
+
|
61 |
+
Christians have traditionally used wine during worship services as a means of remembering the blood of Jesus Christ which was shed for the remission of sins. Christians who oppose the partaking of alcoholic beverages sometimes use grape juice or water as the "cup" or "wine" in the Lord's Supper.[39]
|
62 |
+
|
63 |
+
The Catholic Church continues to use wine in the celebration of the Eucharist because it is part of the tradition passed down through the ages starting with Jesus Christ at the Last Supper, where Catholics believe the consecrated bread and wine literally become the body and blood of Jesus Christ, a dogma known as transubstantiation.[40] Wine is used (not grape juice) both due to its strong Scriptural roots, and also to follow the tradition set by the early Christian Church.[41] The Code of Canon Law of the Catholic Church (1983), Canon 924 says that the wine used must be natural, made from grapes of the vine, and not corrupt.[42] In some circumstances, a priest may obtain special permission to use grape juice for the consecration; however, this is extremely rare and typically requires sufficient impetus to warrant such a dispensation, such as personal health of the priest.
|
64 |
+
|
65 |
+
Although alcohol is permitted in Judaism, grape juice is sometimes used as an alternative for kiddush on Shabbat and Jewish holidays, and has the same blessing as wine. Many authorities maintain that grape juice must be capable of turning into wine naturally in order to be used for kiddush. Common practice, however, is to use any kosher grape juice for kiddush.
|
66 |
+
|
67 |
+
Flower buds
|
68 |
+
|
69 |
+
Flowers
|
70 |
+
|
71 |
+
Immature fruit
|
72 |
+
|
73 |
+
Grapes in Iran
|
74 |
+
|
75 |
+
Wine grapes
|
76 |
+
|
77 |
+
Vineyard in the Troodos Mountains
|
78 |
+
|
79 |
+
Seedless grapes
|
80 |
+
|
81 |
+
Grapes in La Union, Philippines
|
en/4918.html.txt
ADDED
@@ -0,0 +1,101 @@
1 |
+
|
2 |
+
|
3 |
+
Ramadan (/ˌræməˈdɑːn/, also US: /ˌrɑːm-, ˈræmədɑːn, ˈrɑːm-/,[6][7][8] UK: /ˈræmədæn/;[9]) or Ramazan (Arabic: رَمَضَان, romanized: Ramaḍān [ra.ma.dˤaːn];[note 1] also spelled Ramzan, Ramadhan, or Ramathan) is the ninth month of the Islamic calendar,[10] observed by Muslims worldwide as a month of fasting (sawm), prayer, reflection and community.[11] A commemoration of Muhammad's first revelation,[12] the annual observance of Ramadan is regarded as one of the Five Pillars of Islam[13] and lasts twenty-nine to thirty days, from one sighting of the crescent moon to the next.[14][15]
|
4 |
+
|
5 |
+
Fasting from sunrise to sunset is fard (obligatory) for all adult Muslims who are not acutely or chronically ill, travelling, elderly, breastfeeding, diabetic, or menstruating.[16] The predawn meal is referred to as suhur, and the nightly feast that breaks the fast is called iftar.[17][18] Although fatwas have been issued declaring that Muslims who live in regions with a midnight sun or polar night should follow the timetable of Mecca,[19] it is common practice to follow the timetable of the closest country in which night can be distinguished from day.[20][21][22]
|
6 |
+
|
7 |
+
The spiritual rewards (thawab) of fasting are believed to be multiplied during Ramadan.[23]
|
8 |
+
Accordingly, Muslims refrain not only from food and drink, but also tobacco products, sexual relations, and sinful behavior,[24][25] devoting themselves instead to salat (prayer) and recitation of the Quran.[26][27]
|
9 |
+
|
10 |
+
The word Ramadan derives from the Arabic root R-M-Ḍ (ر-م-ض) "scorching heat".[28] Ramadan is one of the names of God in Islam, and as such it is reported in many hadiths that it is prohibited to say only "Ramadan" in reference to the calendar month and that it is necessary to say "month of Ramadan", as reported in Sunni,[29][30][31][32][33][34][35] Shia[36][37][38][39][40][41] and Zaydi[42] sources.
|
11 |
+
|
12 |
+
In the Persian language, the Arabic letter ض (Ḍād) is pronounced as /z/. Some Muslim countries with historical Persian influence, such as Azerbaijan, Iran, India, Pakistan and Turkey, use the word Ramazan or Ramzan. The word Romjan is used in Bangladesh.[43]
|
13 |
+
|
14 |
+
The month of Ramadan is that in which was revealed the Quran; a guidance for mankind, and clear proofs of the guidance, and the criterion (of right and wrong). And whosoever of you is present, let him fast the month, and whosoever of you is sick or on a journey, a number of other days. Allah desires for you ease; He desires not hardship for you; and that you should complete the period, and that you should magnify Allah for having guided you, and that perhaps you may be thankful.[Quran 2:185]
|
15 |
+
|
16 |
+
Muslims hold that all scripture was revealed during Ramadan, the scrolls of Abraham, Torah, Psalms, Gospel, and Quran having been handed down on the first, sixth, twelfth, thirteenth (in some sources, eighteenth)[44] and twenty-fourth Ramadans,[year needed] respectively.[45][self-published source] Muhammad is said to have received his first Quranic revelation on Laylat al-Qadr, one of five odd-numbered nights that fall during the last ten days of Ramadan.[46]
|
17 |
+
|
18 |
+
Although Muslims were first commanded to fast in the second year of Hijra (624 CE),[45] they believe that the practice of fasting is not in fact an innovation of monotheism[47] but rather has always been necessary for believers to attain taqwa (the fear of God).[48][Quran 2:183] They point to the fact that the pre-Islamic pagans of Mecca fasted on the tenth day of Muharram to expiate sin and avoid drought.[49][self-published source] Philip Jenkins argues that the observance of Ramadan fasting grew out of "the strict Lenten discipline of the Syrian Churches," a postulation corroborated by other scholars, including theologian Paul-Gordon Chandler,[50][51] but disputed by some Muslim academics.[52]
|
19 |
+
|
20 |
+
The first and last dates of Ramadan are determined by the lunar Islamic calendar.[3]
|
21 |
+
|
22 |
+
Because Hilāl, the crescent moon, typically occurs approximately one day after the new moon, Muslims can usually estimate the beginning of Ramadan;[53] however, many[who?] prefer to confirm the opening of Ramadan by direct visual observation of the crescent.[54]
|
23 |
+
|
24 |
+
Laylat al-Qadr is considered the holiest night of the year.[55][56] It is generally believed to have occurred on an odd-numbered night during the last ten days of Ramadan; the Dawoodi Bohra believe that Laylat al-Qadr was the twenty-third night of Ramadan.[57][58]
|
25 |
+
|
26 |
+
The holiday of Eid al-Fitr (Arabic:عيد الفطر), which marks the end of Ramadan and the beginning of Shawwal, the next lunar month, is declared after a crescent new moon has been sighted or after completion of thirty days of fasting if no sighting of the moon is possible. Eid celebrates the return to a more natural disposition (fitra) of eating, drinking, and marital intimacy.[59]
|
27 |
+
|
28 |
+
The common practice is to fast from dawn to sunset. The pre-dawn meal before the fast is called the suhur, while the meal at sunset that breaks the fast is called iftar.[60]
|
29 |
+
|
30 |
+
Muslims devote more time to prayer and acts of charity, striving to improve their self-discipline, motivated by hadith:[61][62] "When Ramadan arrives, the gates of Paradise are opened and the gates of hell are locked up and devils are put in chains."[63]
|
31 |
+
|
32 |
+
Ramadan is a time of spiritual reflection, self-improvement, and heightened devotion and worship. Muslims are expected to put more effort into following the teachings of Islam. The fast (sawm) begins at dawn and ends at sunset. In addition to abstaining from eating and drinking during this time, Muslims abstain from sexual relations[3] and sinful speech and behaviour during Ramadan fasting or month. The act of fasting is said to redirect the heart away from worldly activities, its purpose being to cleanse the soul by freeing it from harmful impurities. Muslims believe that Ramadan teaches them to practice self-discipline, self-control,[64] sacrifice, and empathy for those who are less fortunate, thus encouraging actions of generosity and compulsory charity (zakat).[65]
|
33 |
+
|
34 |
+
Fasting is also believed to foster compassion for the poor who do not have enough to eat: by experiencing hunger themselves, Muslims are reminded of how the poor feel and are encouraged to provide them with food.[66]
|
35 |
+
|
36 |
+
Exemptions to fasting include travel, menstruation, severe illness, pregnancy, and breastfeeding. However, many Muslims with medical conditions[vague][who?] insist on fasting to satisfy their spiritual needs, although it is not recommended by hadith.[60] Those unable to fast are obligated to make up the missed days later.[67]
|
37 |
+
|
38 |
+
Each day, before dawn, Muslims observe a pre-fast meal called the suhoor. After stopping a short time before dawn, Muslims begin the first prayer of the day, Fajr.[68][69]
|
39 |
+
|
40 |
+
At sunset, families break the fast with the iftar, traditionally opening the meal by eating dates to commemorate Muhammad's practice of breaking the fast with three dates.[70][71] They then adjourn for Maghrib, the fourth of the five required daily prayers, after which the main meal is served.[72]
|
41 |
+
|
42 |
+
Social gatherings, many times in buffet style, are frequent at iftar. Traditional dishes are often highlighted, including traditional desserts, particularly those made only during Ramadan.[example needed] Water is usually the beverage of choice, but juice and milk are also often available, as are soft drinks and caffeinated beverages.[73]
|
43 |
+
|
44 |
+
In the Middle East, iftar consists of water, juices, dates, salads and appetizers; one or more main dishes; and rich desserts, with dessert considered the most important aspect of the meal.[74] Typical main dishes include lamb stewed with wheat berries, lamb kebabs with grilled vegetables, and roasted chicken served with chickpea-studded rice pilaf. Desserts may include luqaimat, baklava or kunafeh.[75]
|
45 |
+
|
46 |
+
Over time, the practice of iftar has evolved into banquets that may accommodate hundreds or even thousands of diners.[76] The Sheikh Zayed Grand Mosque in Abu Dhabi, the largest mosque in the UAE, feeds up to thirty thousand people every night.[77] Some twelve thousand people attend iftar at the Imam Reza shrine in Mashhad.[78]
|
47 |
+
|
48 |
+
Zakāt, often translated as "the poor-rate", is the fixed percentage of income a believer is required to give to the poor; the practice is obligatory as one of the pillars of Islam. Muslims believe that good deeds are rewarded more handsomely during Ramadan than at any other time of the year; consequently, many[who?] donate a larger portion—or even all—of their yearly zakāt during this month.[citation needed]
|
49 |
+
|
50 |
+
Tarawih (Arabic: تراويح) are extra nightly prayers performed during the month of Ramadan. Contrary to popular belief, they are not compulsory.[79]
|
51 |
+
|
52 |
+
Muslims are encouraged to read the entire Quran, which comprises thirty juz' (sections), over the thirty days of Ramadan. Some Muslims incorporate a recitation of one juz' into each of the thirty tarawih sessions observed during the month.[80]
|
53 |
+
|
54 |
+
In some Islamic countries, lights are strung up in public squares and across city streets,[81][82][83] a tradition believed to have originated during the Fatimid Caliphate, where the rule of Caliph al-Mu'izz li-Din Allah was acclaimed by people holding lanterns.[84]
|
55 |
+
|
56 |
+
On the island of Java, many believers bathe in holy springs to prepare for fasting, a ritual known as Padusan.[85] The city of Semarang marks the beginning of Ramadan with the Dugderan carnival, which involves parading the Warak ngendog, a horse-dragon hybrid creature allegedly inspired by the Buraq.[86] In the Chinese-influenced capital city of Jakarta, firecrackers are widely used to celebrate Ramadan, although they are officially illegal.[87] Towards the end of Ramadan, most employees receive a one-month bonus known as Tunjangan Hari Raya.[88] Certain kinds of food are especially popular during Ramadan, such as large beef or buffalo in Aceh and snails in Central Java.[89] The iftar meal is announced every evening by striking the bedug, a giant drum, in the mosque.[90]
|
57 |
+
|
58 |
+
Common greetings during Ramadan include Ramadan mubarak and Ramadan kareem.[91]
|
59 |
+
|
60 |
+
During Ramadan in the Middle East, a mesaharati beats a drum across a neighbourhood to wake people up to eat the suhoor meal. Similarly in Southeast Asia, the kentongan slit drum is used for the same purpose.
|
61 |
+
|
62 |
+
Striking the bedug in Indonesia
|
63 |
+
|
64 |
+
Crescent is colourfully decorated and illuminated during Ramadan in Jordan
|
65 |
+
|
66 |
+
Ramadan in the Old City of Jerusalem
|
67 |
+
|
68 |
+
Fanous Ramadan decorations in Cairo, Egypt
|
69 |
+
|
70 |
+
According to a 2012 Pew Research Centre study, there was widespread Ramadan observance, with a median of 93 percent across the thirty-nine countries and territories studied.[92] Regions with high percentages of fasting among Muslims include Southeast Asia, South Asia, Middle East and North Africa, Horn of Africa and most of Sub-Saharan Africa.[92] Percentages are lower in Central Asia and Southeast Europe.[92]
|
71 |
+
|
72 |
+
In some Muslim countries, eating in public during daylight hours in Ramadan is a crime.[93][94][95] The sale of alcohol becomes prohibited during Ramadan in Egypt.[96] The penalty for publicly eating, drinking or smoking during Ramadan can result in fines and/or incarceration in the countries of Kuwait,[97][98] Saudi Arabia,[99][100][101] Algeria[102] and Malaysia.[103] In the United Arab Emirates, the punishment is community service.[104]
|
73 |
+
|
74 |
+
In some countries, the observance of Ramadan has been restricted. In the USSR, the practice of Ramadan was suppressed by officials.[105][106] In Albania, Ramadan festivities were banned during the communist period.[107] However, many Albanians continued to fast secretly during this period.[108] China is widely reported to have banned Ramadan fasting since 2012 in Xinjiang.[109][110] Those caught fasting by the government could be sent to a "re-education camp".[111]
|
75 |
+
|
76 |
+
Some countries impose modified work schedules. In the UAE, employees may work no more than six hours per day and thirty-six hours per week. Qatar, Oman, Bahrain and Kuwait have similar laws.[112]
|
77 |
+
|
78 |
+
There are various health effects of fasting in Ramadan. Ramadan fasting is considered safe for healthy individuals; it may pose risks for individuals with certain pre-existing conditions. Most Islamic scholars hold that fasting is not required for those who are ill. Additionally, the elderly and pre-pubertal children are exempt from fasting.[113] Pregnant or lactating women are exempt from fasting during Ramadan according to some authorities,[114] while according to other authorities they are exempt only if they fear fasting may harm them or their babies.[113][115][116]
|
79 |
+
|
80 |
+
There are some health benefits of Ramadan fasting, including increased insulin sensitivity and reduced insulin resistance.[117] Significant improvements have also been shown in the 10-year coronary heart disease risk score and in other cardiovascular risk factors such as lipid profile, systolic blood pressure, weight, BMI and waist circumference in subjects with a previous history of cardiovascular disease.[118] The fasting period is usually associated with modest weight loss, but the weight can return afterwards.[119]
|
81 |
+
|
82 |
+
Ramadan fasting, as a time-restricted eating habit that inverts the normal day-night routine for those observing it, can have deleterious effects on sleep patterns and on general health.
|
83 |
+
Fasting in Ramadan has been shown to alter the sleep patterns[120] and the associated hormone production.
|
84 |
+
|
85 |
+
In Islam, pregnant women and those who are breastfeeding are exempt from fasting.[113] Fasting can be hazardous for pregnant women as it is associated with risks of inducing labour and causing gestational diabetes, although it does not appear to affect the child's weight. It is permissible not to fast if it threatens the woman's or the child's life; however, in many instances a pregnancy appears normal before complications develop.[121][122][123][124][125] If a mother fasts during pregnancy, the resulting child may have significantly lower intelligence, lower cognitive capability and be at increased risk for several chronic diseases, e.g. Type 2 diabetes.[126] Many Islamic scholars argue it is obligatory on a pregnant woman not to fast if a doctor recommends against it.[127]
|
86 |
+
|
87 |
+
In many cultures, Ramadan is associated with heavy food and water intake during suhur and iftar times, which may do more harm than good.[citation needed] Ramadan fasting is safe for healthy people provided that overall food and water intake is adequate, but those with medical conditions should seek medical advice if they encounter health problems before or during fasting.[128]
|
88 |
+
|
89 |
+
The education departments of Berlin and the United Kingdom have tried to discourage students from fasting during Ramadan, as they claim that not eating or drinking can lead to concentration problems and bad grades.[129][130]
|
90 |
+
|
91 |
+
A review of the literature by an Iranian group suggested fasting during Ramadan might produce renal injury in patients with moderate (GFR <60 ml/min) or severe kidney disease but was not injurious to renal transplant patients with good function or most stone-forming patients.[131]
|
92 |
+
|
93 |
+
The correlation of Ramadan with crime rates is mixed: some statistics show that crime rates drop during Ramadan, while others show that it increases. Decreases in crime rates have been reported by the police in some cities in Turkey (Istanbul[132] and Konya[133]) and the Eastern province of Saudi Arabia.[134] A 2005 study found that there was a decrease in assault, robbery and alcohol-related crimes during Ramadan in Saudi Arabia, but only the decrease in alcohol-related crimes was statistically significant.[135] Increases in crime rates during Ramadan have been reported in Turkey,[136] Jakarta,[137][138][139] parts of Algeria,[140] Yemen[141] and Egypt.[142]
|
94 |
+
|
95 |
+
Various mechanisms have been proposed for the effect of Ramadan on crime:
|
96 |
+
|
97 |
+
The length of the dawn to sunset time varies in different parts of the world according to summer or winter solstices of the Sun. Most Muslims fast for eleven to sixteen hours during Ramadan. However, in polar regions, the period between dawn and sunset may exceed twenty-two hours in summer. For example, in 2014, Muslims in Reykjavik, Iceland, and Trondheim, Norway, fasted almost twenty-two hours, while Muslims in Sydney, Australia, fasted for only about eleven hours. In areas characterized by continuous night or day, some Muslims follow the fasting schedule observed in the nearest city that experiences sunrise and sunset, while others follow Mecca time.[20][21][22]
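The variation in fasting hours follows from the astronomy of day length at different latitudes and dates. As a rough, illustrative sketch only (not a religious timetable), the following Python snippet estimates sunrise-to-sunset duration using the standard solar declination and hour-angle approximations; actual fasts run from dawn (fajr) to sunset and are taken from published timetables, so real durations are somewhat longer than these estimates.

from math import acos, cos, tan, radians, degrees

def approx_day_length_hours(latitude_deg, day_of_year):
    """Approximate sunrise-to-sunset duration in hours for a given
    latitude and day of the year, using a simple solar declination
    and hour-angle model (no refraction correction)."""
    # Approximate solar declination in degrees for this day of the year.
    declination = -23.44 * cos(radians(360.0 / 365.0 * (day_of_year + 10)))
    # Cosine of the hour angle at sunrise/sunset.
    x = -tan(radians(latitude_deg)) * tan(radians(declination))
    if x <= -1.0:
        return 24.0  # polar day: the sun never sets
    if x >= 1.0:
        return 0.0   # polar night: the sun never rises
    hour_angle = degrees(acos(x))
    return 2.0 * hour_angle / 15.0  # 15 degrees of hour angle per hour

# Around the June solstice (day of year ~172), as during Ramadan 2014:
print(round(approx_day_length_hours(64.1, 172), 1))   # Reykjavik: roughly 20-21 hours
print(round(approx_day_length_hours(-33.9, 172), 1))  # Sydney: roughly 10 hours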
|
98 |
+
|
99 |
+
Muslim astronauts in space schedule religious practices around the time zone of their last location on Earth. For example, this means an astronaut from Malaysia launching from the Kennedy Space Center in Florida would center their fast according to sunrise and sunset in Eastern Standard Time. This includes times for daily prayers, as well as sunset and sunrise for Ramadan.[144][145]
|
100 |
+
|
101 |
+
Muslims continue to work during Ramadan;[146][147] however, in some Islamic countries, such as Oman and Lebanon, working hours are shortened.[148][149] It is often recommended that working Muslims inform their employers if they are fasting, given the potential for the observance to impact performance at work.[150] The extent to which Ramadan observers are protected by religious accommodation varies by country. Policies putting them at a disadvantage compared to other employees have been met with discrimination claims in the United Kingdom and the United States.[151][152][153] An Arab News article reported that Saudi Arabian businesses were unhappy with shorter working hours during Ramadan, some reporting a decline in productivity of 35 to 50%.[154] The Saudi businesses proposed awarding salary bonuses in order to incentivize longer hours.[155] Despite the reduction in productivity, merchants can enjoy higher profit margins in Ramadan due to increase in demand.[156]
|
en/4919.html.txt
ADDED
@@ -0,0 +1 @@
1 |
+
Ramesses may refer to:
|
en/492.html.txt
ADDED
@@ -0,0 +1,183 @@
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
A fixed-wing aircraft is a flying machine, such as an airplane (or aeroplane; see spelling differences), which is capable of flight using wings that generate lift caused by the aircraft's forward airspeed and the shape of the wings. Fixed-wing aircraft are distinct from rotary-wing aircraft (in which the wings form a rotor mounted on a spinning shaft or "mast"), and ornithopters (in which the wings flap in a manner similar to that of a bird). The wings of a fixed-wing aircraft are not necessarily rigid; kites, hang gliders, variable-sweep wing aircraft and airplanes that use wing morphing are all examples of fixed-wing aircraft.
|
6 |
+
|
7 |
+
Gliding fixed-wing aircraft, including free-flying gliders of various kinds and tethered kites, can use moving air to gain altitude. Powered fixed-wing aircraft (airplanes) that gain forward thrust from an engine include powered paragliders, powered hang gliders and some ground effect vehicles. Most fixed-wing aircraft are flown by a pilot on board the craft, but some are specifically designed to be unmanned and controlled either remotely or autonomously (using onboard computers).
|
8 |
+
|
9 |
+
Kites were used approximately 2,800 years ago in China, where materials ideal for kite building were readily available. Some authors hold that leaf kites were being flown much earlier in what is now Sulawesi, based on their interpretation of cave paintings on Muna Island off Sulawesi.[1] By at least 549 AD paper kites were being flown, as it was recorded in that year a paper kite was used as a message for a rescue mission.[2] Ancient and medieval Chinese sources list other uses of kites for measuring distances, testing the wind, lifting men, signaling, and communication for military operations.[2]
|
10 |
+
|
11 |
+
Stories of kites were brought to Europe by Marco Polo towards the end of the 13th century, and kites were brought back by sailors from Japan and Malaysia in the 16th and 17th centuries.[3] Although they were initially regarded as mere curiosities, by the 18th and 19th centuries kites were being used as vehicles for scientific research.[3]
|
12 |
+
|
13 |
+
Around 400 BC in Greece, Archytas was reputed to have designed and built the first artificial, self-propelled flying device, a bird-shaped model propelled by a jet of what was probably steam, said to have flown some 200 m (660 ft).[4][5] This machine may have been suspended for its flight.[6][7]
|
14 |
+
|
15 |
+
One of the earliest purported attempts with gliders was by the 11th-century monk Eilmer of Malmesbury, which ended in failure. A 17th-century account states that the 9th-century poet Abbas Ibn Firnas made a similar attempt, though no earlier sources record this event.[8]
|
16 |
+
|
17 |
+
In 1799, Sir George Cayley set forth the concept of the modern airplane as a fixed-wing flying machine with separate systems for lift, propulsion, and control.[9][10] Cayley was building and flying models of fixed-wing aircraft as early as 1803, and he built a successful passenger-carrying glider in 1853.[11] In 1856, Frenchman Jean-Marie Le Bris made the first powered flight, by having his glider "L'Albatros artificiel" pulled by a horse on a beach.[citation needed] In 1884, the American John J. Montgomery made controlled flights in a glider as a part of a series of gliders built between 1883–1886.[12] Other aviators who made similar flights at that time were Otto Lilienthal, Percy Pilcher, and protégés of Octave Chanute.
|
18 |
+
|
19 |
+
In the 1890s, Lawrence Hargrave conducted research on wing structures and developed a box kite that lifted the weight of a man. His box kite designs were widely adopted. Although he also developed a type of rotary aircraft engine, he did not create and fly a powered fixed-wing aircraft.[13]
|
20 |
+
|
21 |
+
Sir Hiram Maxim built a craft that weighed 3.5 tons, with a 110-foot (34-meter) wingspan that was powered by two 360-horsepower (270-kW) steam engines driving two propellers. In 1894, his machine was tested with overhead rails to prevent it from rising. The test showed that it had enough lift to take off. The craft was uncontrollable, which Maxim, it is presumed, realized, because he subsequently abandoned work on it.[14]
|
22 |
+
|
23 |
+
The Wright brothers' flights in 1903 with their Flyer I are recognized by the Fédération Aéronautique Internationale (FAI), the standard setting and record-keeping body for aeronautics, as "the first sustained and controlled heavier-than-air powered flight".[15] By 1905, the Wright Flyer III was capable of fully controllable, stable flight for substantial periods.
|
24 |
+
|
25 |
+
In 1906, Brazilian inventor Alberto Santos Dumont designed, built and piloted an aircraft that set the first world record recognized by the Aéro-Club de France by flying the 14 bis 220 metres (720 ft) in less than 22 seconds.[16] The flight was certified by the FAI.[17]
|
26 |
+
|
27 |
+
The Bleriot VIII design of 1908 was an early aircraft design that had the modern monoplane tractor configuration. It had movable tail surfaces controlling both yaw and pitch, a form of roll control supplied either by wing warping or by ailerons and controlled by its pilot with a joystick and rudder bar. It was an important predecessor of his later Bleriot XI Channel-crossing aircraft of the summer of 1909.[18]
|
28 |
+
|
29 |
+
World War I served as a testbed for the use of the aircraft as a weapon. Aircraft demonstrated their potential as mobile observation platforms, then proved themselves to be machines of war capable of causing casualties to the enemy. The earliest known aerial victory with a synchronized machine gun-armed fighter aircraft occurred in 1915, by German Luftstreitkräfte Leutnant Kurt Wintgens. Fighter aces appeared; the greatest (by number of air victories) was Manfred von Richthofen.
|
30 |
+
|
31 |
+
Following WWI, aircraft technology continued to develop. Alcock and Brown crossed the Atlantic non-stop for the first time in 1919. The first commercial flights took place between the United States and Canada in 1919.
|
32 |
+
|
33 |
+
The so-called Golden Age of Aviation occurred between the two World Wars. Earlier breakthroughs were refined and extended: Hugo Junkers' pioneering of all-metal airframes in 1915 led to giant multi-engined aircraft with wingspans of 60 meters or more by the early 1930s, and the mostly air-cooled radial engine was adopted as a practical aircraft powerplant alongside powerful V-12 liquid-cooled aviation engines. Long-distance flight attempts also grew ever more ambitious, beginning with a Vickers Vimy crossing in 1919, followed only months later by the U.S. Navy's NC-4 transatlantic flight, and culminating in May 1927 with Charles Lindbergh's solo trans-Atlantic flight in the Spirit of St. Louis, which spurred ever-longer flight attempts and pioneered the way for long-distance flights of the future to become commonplace.
|
34 |
+
|
35 |
+
Airplanes had a presence in all the major battles of World War II. They were an essential component of the military strategies of the period, such as the German Blitzkrieg or the American and Japanese aircraft carrier campaigns of the Pacific.
|
36 |
+
|
37 |
+
Military gliders were developed and used in several campaigns, but they did not become widely used due to the high casualty rate often encountered. The Focke-Achgelis Fa 330 Bachstelze (Wagtail) rotor kite of 1942 was notable for its use by German submarines.
|
38 |
+
|
39 |
+
Before and during the war, both British and German designers were developing jet engines to power airplanes. The first jet aircraft to fly, in 1939, was the German Heinkel He 178. In 1943, the first operational jet fighter, the Messerschmitt Me 262, went into service with the German Luftwaffe and later in the war the British Gloster Meteor entered service but never saw action – top airspeeds of aircraft for that era went as high as 1,130 km/h (702 mph), with the early July 1944 unofficial record flight of the German Me 163B V18 rocket fighter prototype.[19]
|
40 |
+
|
41 |
+
In October 1947, the Bell X-1 was the first aircraft to exceed the speed of sound.[20]
|
42 |
+
|
43 |
+
In 1948–49, aircraft transported supplies during the Berlin Blockade. New aircraft types, such as the B-52, were produced during the Cold War.
|
44 |
+
|
45 |
+
The first jet airliner, the de Havilland Comet, was introduced in 1952, followed by the Soviet Tupolev Tu-104 in 1956. The Boeing 707, the first widely successful commercial jet, was in commercial service for more than 50 years, from 1958 to 2010. The Boeing 747 was the world's biggest passenger aircraft from 1970 until it was surpassed by the Airbus A380 in 2005.
|
46 |
+
|
47 |
+
An airplane (also known as an aeroplane or simply a plane) is a powered fixed-wing aircraft that is propelled forward by thrust from a jet engine or propeller. Planes come in a variety of sizes, shapes, and wing configurations. The broad spectrum of uses for planes includes recreation, transportation of goods and people, military, and research.
|
48 |
+
|
49 |
+
A seaplane is a fixed-wing aircraft capable of taking off and landing (alighting) on water. Seaplanes that can also operate from dry land are a subclass called amphibian aircraft. These aircraft were sometimes called hydroplanes.[21] Seaplanes and amphibians are usually divided into two categories based on their technological characteristics: floatplanes and flying boats.
|
50 |
+
|
51 |
+
Many forms of glider (see below) may be modified by adding a small power plant. These include:
|
52 |
+
|
53 |
+
A ground effect vehicle (GEV) is a craft that attains level flight near the surface of the earth, making use of the ground effect – an aerodynamic interaction between the wings and the earth's surface. Some GEVs are able to fly higher out of ground effect (OGE) when required – these are classed as powered fixed-wing aircraft.[22]
|
54 |
+
|
55 |
+
A glider is a heavier-than-air craft that is supported in flight by the dynamic reaction of the air against its lifting surfaces, and whose free flight does not depend on an engine. A sailplane is a fixed-wing glider designed for soaring – the ability to gain height in updrafts of air and to fly for long periods.
|
56 |
+
|
57 |
+
Gliders are mainly used for recreation, but have also been used for other purposes such as aerodynamics research, warfare and recovering spacecraft.
|
58 |
+
|
59 |
+
A motor glider does have an engine for extending its performance and some have engines powerful enough to take off, but the engine is not used in normal flight.
|
60 |
+
|
61 |
+
As is the case with planes, there are a wide variety of glider types differing in the construction of their wings, aerodynamic efficiency, location of the pilot and controls. Perhaps the most familiar type is the toy paper plane.
|
62 |
+
|
63 |
+
Large gliders are most commonly launched by a tow-plane or by a winch. Military gliders have been used in war to deliver assault troops, and specialized gliders have been used in atmospheric and aerodynamic research. Rocket-powered aircraft and spaceplanes have also made unpowered landings.
|
64 |
+
|
65 |
+
Gliders and sailplanes that are used for the sport of gliding have high aerodynamic efficiency. The highest lift-to-drag ratio is 70:1, though 50:1 is more common. After launch, further energy is obtained through the skillful exploitation of rising air in the atmosphere. Flights of thousands of kilometers at average speeds over 200 km/h have been achieved.
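The lift-to-drag ratio translates directly into still-air gliding range: the horizontal distance covered is roughly the ratio multiplied by the height lost. A minimal Python illustration with round example numbers (not data for any particular type):

def still_air_glide_km(lift_to_drag, altitude_m):
    """Still-air glide distance in kilometres: a glider travels
    lift_to_drag metres forward for every metre of height lost."""
    return lift_to_drag * altitude_m / 1000.0

# Gliding down from 1,000 m with no rising or sinking air:
print(still_air_glide_km(50, 1000))  # 50:1 sailplane -> about 50 km
print(still_air_glide_km(70, 1000))  # 70:1 sailplane -> about 70 km

Cross-country flights far exceed these still-air figures because pilots repeatedly regain height in rising air before gliding on.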
|
66 |
+
|
67 |
+
The most numerous unpowered aircraft are paper airplanes, a handmade type of glider. Like hang gliders and paragliders, they are foot-launched and are in general slower, smaller, and less expensive than sailplanes. Hang gliders most often have flexible wings given shape by a frame, though some have rigid wings. Paragliders and paper airplanes have no frames in their wings.
|
68 |
+
|
69 |
+
Gliders and sailplanes can share a number of features in common with powered aircraft, including many of the same types of fuselage and wing structures. For example, the Horten H.IV was a tailless flying wing glider, and the delta wing-shaped Space Shuttle orbiter flew much like a conventional glider in the lower atmosphere. Many gliders also use similar controls and instruments as powered craft.
|
70 |
+
|
71 |
+
The main application today of glider aircraft is sport and recreation.
|
72 |
+
|
73 |
+
Gliders were developed from the 1920s for recreational purposes. As pilots began to understand how to use rising air, sailplane gliders were developed with a high lift-to-drag ratio. These allowed longer glides to the next source of "lift", and so increase their chances of flying long distances. This gave rise to the popular sport of gliding.
|
74 |
+
|
75 |
+
Early gliders were mainly built of wood and metal but the majority of sailplanes now use composite materials incorporating glass, carbon or aramid fibers. To minimize drag, these types have a streamlined fuselage and long narrow wings having a high aspect ratio. Both single-seat and two-seat gliders are available.
|
76 |
+
|
77 |
+
Initially training was done by short "hops" in primary gliders which are very basic aircraft with no cockpit and minimal instruments.[23] Since shortly after World War II training has always been done in two-seat dual control gliders, but high performance two-seaters are also used to share the workload and the enjoyment of long flights. Originally skids were used for landing, but the majority now land on wheels, often retractable. Some gliders, known as motor gliders, are designed for unpowered flight, but can deploy piston, rotary, jet or electric engines.[24] Gliders are classified by the FAI for competitions into glider competition classes mainly on the basis of span and flaps.
|
78 |
+
|
79 |
+
A class of ultralight sailplanes, including some known as microlift gliders and some known as "airchairs", has been defined by the FAI based on a maximum weight. They are light enough to be transported easily, and can be flown without licensing in some countries. Ultralight gliders have performance similar to hang gliders, but offer some additional crash safety as the pilot can be strapped in an upright seat within a deformable structure. Landing is usually on one or two wheels which distinguishes these craft from hang gliders. Several commercial ultralight gliders have come and gone, but most current development is done by individual designers and home builders.
|
80 |
+
|
81 |
+
Military gliders were used during World War II for carrying troops (glider infantry) and heavy equipment to combat zones. The gliders were towed into the air and most of the way to their target by military transport planes, e.g. C-47 Dakota, or by bombers that had been relegated to secondary activities, e.g. Short Stirling. Once released from the tow near the target, they landed as close to the target as possible. The advantages over paratroopers were that heavy equipment could be landed and that the troops were quickly assembled rather than being dispersed over a drop zone. The gliders were treated as disposable, leading to construction from common and inexpensive materials such as wood, though a few were retrieved and re-used. By the time of the Korean War, transport aircraft had also become larger and more efficient so that even light tanks could be dropped by parachute, causing gliders to fall out of favor.
|
82 |
+
|
83 |
+
Even after the development of powered aircraft, gliders continued to be used for aviation research. The NASA Paresev Rogallo flexible wing was originally developed to investigate alternative methods of recovering spacecraft. Although this application was abandoned, publicity inspired hobbyists to adapt the flexible-wing airfoil for modern hang gliders.
|
84 |
+
|
85 |
+
Initial research into many types of fixed-wing craft, including flying wings and lifting bodies was also carried out using unpowered prototypes.
|
86 |
+
|
87 |
+
A hang glider is a glider aircraft in which the pilot is ensconced in a harness suspended from the airframe, and exercises control by shifting body weight in opposition to a control frame. Most modern hang gliders are made of an aluminum alloy or composite-framed fabric wing. Pilots have the ability to soar for hours, gain thousands of meters of altitude in thermal updrafts, perform aerobatics, and glide cross-country for hundreds of kilometers.
|
88 |
+
|
89 |
+
A paraglider is a lightweight, free-flying, foot-launched glider aircraft with no rigid primary structure.[25] The pilot sits in a harness suspended below a hollow fabric wing whose shape is formed by its suspension lines, the pressure of air entering vents in the front of the wing and the aerodynamic forces of the air flowing over the outside. Paragliding is most often a recreational activity.
|
90 |
+
|
91 |
+
A paper plane is a toy aircraft (usually a glider) made out of paper or paperboard.
|
92 |
+
|
93 |
+
Model glider aircraft are models of aircraft using lightweight materials such as polystyrene and balsa wood. Designs range from simple glider aircraft to accurate scale models, some of which can be very large.
|
94 |
+
|
95 |
+
Glide bombs are bombs with aerodynamic surfaces to allow a gliding flightpath rather than a ballistic one. This enables the carrying aircraft to attack a heavily defended target from a distance.
|
96 |
+
|
97 |
+
A kite is an aircraft tethered to a fixed point so that the wind blows over its wings.[26] Lift is generated when air flows over the kite's wing, producing low pressure above the wing and high pressure below it, and deflecting the airflow downwards. This deflection also generates horizontal drag in the direction of the wind. The resultant force vector from the lift and drag force components is opposed by the tension of the one or more rope lines or tethers attached to the wing.
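To make the force balance concrete, the following Python sketch computes the steady-state tether tension implied by given lift and drag forces, under the simplifying assumption that the kite's weight is negligible; the numbers are purely illustrative.

from math import atan2, degrees, hypot

def tether_tension(lift_n, drag_n):
    """Steady-flight tether tension for a kite, neglecting its weight.
    Lift acts vertically, drag acts horizontally downwind; the tether
    supplies an equal and opposite resultant. Returns the tension in
    newtons and the line angle above the horizontal in degrees."""
    tension = hypot(lift_n, drag_n)
    line_angle = degrees(atan2(lift_n, drag_n))
    return tension, line_angle

# A kite generating 40 N of lift and 15 N of drag:
tension, angle = tether_tension(40.0, 15.0)
print(round(tension, 1), round(angle, 1))  # about 42.7 N at about 69.4 degrees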
|
98 |
+
|
99 |
+
Kites are mostly flown for recreational purposes, but have many other uses. Early pioneers such as the Wright Brothers and J.W. Dunne sometimes flew an aircraft as a kite in order to develop it and confirm its flight characteristics, before adding an engine and flight controls, and flying it as an airplane.
|
100 |
+
|
101 |
+
Kites have been used for signaling, for delivery of munitions, and for observation, by lifting an observer above the field of battle, and by using kite aerial photography.
|
102 |
+
|
103 |
+
Kites have been used for scientific purposes, such as Benjamin Franklin's famous experiment proving that lightning is electricity. Kites were the precursors to the traditional aircraft, and were instrumental in the development of early flying craft. Alexander Graham Bell experimented with very large man-lifting kites, as did the Wright brothers and Lawrence Hargrave. Kites had a historical role in lifting scientific instruments to measure atmospheric conditions for weather forecasting.
|
104 |
+
|
105 |
+
Kites can be used to carry radio antennas. This method was used for the reception station of the first transatlantic transmission by Marconi. Captive balloons may be more convenient for such experiments, because kite-carried antennas require a lot of wind, which may not always be available with heavy equipment and a ground conductor.
|
106 |
+
|
107 |
+
Kites can be used to carry light effects such as lightsticks or battery powered lights.
|
108 |
+
|
109 |
+
Kites can be used to pull people and vehicles downwind. Efficient foil-type kites such as power kites can also be used to sail upwind under the same principles as used by other sailing craft, provided that lateral forces on the ground or in the water are redirected as with the keels, center boards, wheels and ice blades of traditional sailing craft. In the last two decades, several kite sailing sports have become popular, such as kite buggying, kite landboarding, kite boating and kite surfing. Snow kiting has also become popular.
|
110 |
+
|
111 |
+
Kite sailing opens several possibilities not available in traditional sailing:
|
112 |
+
|
113 |
+
Conceptual research and development projects are being undertaken by over a hundred participants to investigate the use of kites in harnessing high altitude wind currents for electricity generation.[27]
|
114 |
+
|
115 |
+
Kite festivals are a popular form of entertainment throughout the world. They include local events, traditional festivals and major international festivals.
|
116 |
+
|
117 |
+
The structural parts of a fixed-wing aircraft are called the airframe. The parts present can vary according to the aircraft's type and purpose. Early types were usually made of wood with fabric wing surfaces. When engines became available for powered flight around a hundred years ago, their mounts were made of metal. Then, as speeds increased, more and more parts became metal until, by the end of WWII, all-metal aircraft were common. In modern times, increasing use of composite materials has been made.
|
118 |
+
|
119 |
+
Typical structural parts include:
|
120 |
+
|
121 |
+
The wings of a fixed-wing aircraft are static planes extending either side of the aircraft. When the aircraft travels forwards,
|
122 |
+
air flows over the wings which are shaped to create lift.
|
123 |
+
|
124 |
+
Kites and some lightweight gliders and airplanes have flexible wing surfaces which are stretched across a frame and made rigid by the lift forces exerted by the airflow over them. Larger aircraft have rigid wing surfaces which provide additional strength.
|
125 |
+
|
126 |
+
Whether flexible or rigid, most wings have a strong frame to give them their shape and to transfer lift from the wing surface to the rest of the aircraft. The main structural elements are one or more spars running from root to tip, and many ribs running from the leading (front) to the trailing (rear) edge.
|
127 |
+
|
128 |
+
Early airplane engines had little power, so light weight was very important. Early aerofoil sections were also very thin and could not have a strong frame installed within them. So until the 1930s most wings were too lightly built to be strong enough on their own, and external bracing struts and wires were added. When the available engine power increased during the 1920s and 1930s, wings could be made heavy and strong enough that bracing was no longer needed. This type of unbraced wing is called a cantilever wing.
|
129 |
+
|
130 |
+
The number and shape of the wings vary widely on different types. A given wing plane may be full-span or divided by a central fuselage into port (left) and starboard (right) wings. Occasionally even more wings have been used, with the three-winged triplane achieving some fame in WWI. The four-winged quadruplane and other multiplane designs have had little success.
|
131 |
+
|
132 |
+
A monoplane (from the prefix mono-, meaning one) has a single wing plane, a biplane has two stacked one above the other, and a tandem wing has two placed one behind the other. When the available engine power increased during the 1920s and 1930s and bracing was no longer needed, the unbraced or cantilever monoplane became the most common form of powered type.
|
133 |
+
|
134 |
+
The wing planform is the shape when seen from above. To be aerodynamically efficient, a wing should be straight with a long span from side to side but have a short chord (high aspect ratio). But to be structurally efficient, and hence lightweight, a wing must have a short span but still enough area to provide lift (low aspect ratio).
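For reference (a standard definition, not stated in the original text), the aspect ratio of a wing is

\[ \mathrm{AR} = \frac{b^{2}}{S} \]

where \(b\) is the wingspan and \(S\) the wing area; for a rectangular wing of chord \(c\) this reduces to \(\mathrm{AR} = b/c\). A sailplane wing with \(b = 15\ \mathrm{m}\) and \(S = 12\ \mathrm{m^2}\) has \(\mathrm{AR} \approx 19\) (aerodynamically efficient), while a delta-winged fighter with \(b = 10\ \mathrm{m}\) and \(S = 38\ \mathrm{m^2}\) has \(\mathrm{AR} \approx 2.6\) (structurally compact).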
|
135 |
+
|
136 |
+
At transonic speeds, near the speed of sound, it helps to sweep the wing backward or forwards to reduce drag from supersonic shock waves as they begin to form. The swept wing is just a straight wing swept backward or forwards.
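A short note on why sweep helps (simple sweep theory, added here for clarity and not part of the original text): only the component of the airflow perpendicular to the leading edge governs the onset of shock waves, so a wing swept by an angle \(\Lambda\) behaves roughly as if it were flying at

\[ M_{\text{eff}} = M \cos\Lambda \]

For example, at \(M = 0.85\) a sweep of \(\Lambda = 30^\circ\) gives \(M_{\text{eff}} \approx 0.74\), delaying the transonic drag rise.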
|
137 |
+
|
138 |
+
The delta wing is a triangle shape which may be used for a number of reasons. As a flexible Rogallo wing it allows a stable shape under aerodynamic forces, and so is often used for kites and other ultralight craft. As a supersonic wing, it combines high strength with low drag and so is often used for fast jets.
|
139 |
+
|
140 |
+
A variable geometry wing can be changed in flight to a different shape. The variable-sweep wing transforms between an efficient straight configuration for takeoff and landing, to a low-drag swept configuration for high-speed flight. Other forms of variable planform have been flown, but none have gone beyond the research stage.
|
141 |
+
|
142 |
+
A fuselage is a long, thin body, usually with tapered or rounded ends to make its shape aerodynamically smooth. The fuselage may contain the flight crew, passengers, cargo or payload, fuel and engines. The pilots of manned aircraft operate them from a cockpit located at the front or top of the fuselage and equipped with controls and usually windows and instruments. A plane may have more than one fuselage, or it may be fitted with booms with the tail located between the booms to allow the extreme rear of the fuselage to be useful for a variety of purposes.
|
143 |
+
|
144 |
+
A flying wing is a tailless aircraft which has no definite fuselage, with most of the crew, payload and equipment being housed inside the main wing structure.[28]:224
|
145 |
+
|
146 |
+
The flying wing configuration was studied extensively in the 1930s and 1940s, notably by Jack Northrop and Cheston L. Eshelman in the United States, and Alexander Lippisch and the Horten brothers in Germany.
|
147 |
+
After the war, a number of experimental designs were based on the flying wing concept. Some general interest continued until the early 1950s, but designs did not necessarily offer a great advantage in range and presented a number of technical problems, leading to the adoption of "conventional" solutions like the Convair B-36 and the B-52 Stratofortress. Due to the practical need for a deep wing, the flying wing concept is most practical for designs in the slow-to-medium speed range, and there has been continual interest in using it as a tactical airlifter design.
|
148 |
+
|
149 |
+
Interest in flying wings was renewed in the 1980s due to their potentially low radar reflection cross-sections. Stealth technology relies on shapes which only reflect radar waves in certain directions, thus making the aircraft hard to detect unless the radar receiver is at a specific position relative to the aircraft – a position that changes continuously as the aircraft moves. This approach eventually led to the Northrop B-2 Spirit stealth bomber. In this case the aerodynamic advantages of the flying wing are not the primary needs. However, modern computer-controlled fly-by-wire systems allowed for many of the aerodynamic drawbacks of the flying wing to be minimized, making for an efficient and stable long-range bomber.
|
150 |
+
|
151 |
+
Blended wing body aircraft have a flattened and airfoil shaped body, which produces most of the lift to keep itself aloft, and distinct and separate wing structures, though the wings are smoothly blended in with the body.
|
152 |
+
|
153 |
+
Thus blended wing bodied aircraft incorporate design features from both a futuristic fuselage and flying wing design. The purported advantages of the blended wing body approach are efficient high-lift wings and a wide airfoil-shaped body. This enables the entire craft to contribute to lift generation with the result of potentially increased fuel economy.
|
154 |
+
|
155 |
+
A lifting body is a configuration in which the body itself produces lift. In contrast to a flying wing, which is a wing with minimal or no conventional fuselage, a lifting body can be thought of as a fuselage with little or no conventional wing. Whereas a flying wing seeks to maximize cruise efficiency at subsonic speeds by eliminating non-lifting surfaces, lifting bodies generally minimize the drag and structure of a wing for subsonic, supersonic, and hypersonic flight, or, spacecraft re-entry. All of these flight regimes pose challenges for proper flight stability.
|
156 |
+
|
157 |
+
Lifting bodies were a major area of research in the 1960s and 1970s as a means to build a small and lightweight manned spacecraft. The US built a number of famous lifting body rocket planes to test the concept, as well as several rocket-launched re-entry vehicles that were tested over the Pacific. Interest waned as the US Air Force lost interest in the manned mission, and major development ended during the Space Shuttle design process when it became clear that the highly shaped fuselages made it difficult to fit fuel tankage.
|
158 |
+
|
159 |
+
The classic aerofoil section wing is unstable in flight and difficult to control. Flexible-wing types often rely on an anchor line or the weight of a pilot hanging beneath to maintain the correct attitude. Some free-flying types use an adapted aerofoil that is stable, or other ingenious mechanisms including, most recently, electronic artificial stability.
|
160 |
+
|
161 |
+
But in order to achieve trim, stability and control, most fixed-wing types have an empennage comprising a fin and rudder which act horizontally and a tailplane and elevator which act vertically. This is so common that it is known as the conventional layout. Sometimes there may be two or more fins, spaced out along the tailplane.
|
162 |
+
|
163 |
+
Some types have a horizontal "canard" foreplane ahead of the main wing, instead of behind it.[28]:86[29][30] This foreplane may contribute to the trim, stability or control of the aircraft, or to several of these.
|
164 |
+
|
165 |
+
Kites are controlled by wires running down to the ground. Typically each wire acts as a tether to the part of the kite it is attached to.
|
166 |
+
|
167 |
+
Gliders and airplanes have more complex control systems, especially if they are piloted.
|
168 |
+
|
169 |
+
The main controls allow the pilot to direct the aircraft in the air. Typically these are:
|
170 |
+
|
171 |
+
Other common controls include:
|
172 |
+
|
173 |
+
A craft may have two pilots' seats with dual controls, allowing two pilots to take turns. This is often used for training or for longer flights.
|
174 |
+
|
175 |
+
The control system may allow full or partial automation of flight, such as an autopilot, a wing leveler, or a flight management system. An unmanned aircraft has no pilot but is controlled remotely or via means such as gyroscopes or other forms of autonomous control.
|
176 |
+
|
177 |
+
On manned fixed-wing aircraft, instruments provide the pilots with information about flight, engine, navigation, communication, and other aircraft systems that may be installed.
|
178 |
+
|
179 |
+
The six basic instruments, sometimes referred to as the "six pack", are as follows:[31]
|
180 |
+
|
181 |
+
Other cockpit instruments might include:
|
182 |
+
|
183 |
+
|
en/4920.html.txt
ADDED
@@ -0,0 +1,170 @@
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
Raffaello Sanzio da Urbino[2] (Italian: [raffaˈɛllo ˈsantsjo da urˈbiːno]; March 28 or April 6, 1483 – April 6, 1520),[3][a] known as Raphael (/ˈræfeɪəl/, US: /ˈræfiəl, ˈreɪf-, ˌrɑːfaɪˈɛl, ˌrɑːfiˈɛl/), was an Italian painter and architect of the High Renaissance. His work is admired for its clarity of form, ease of composition, and visual achievement of the Neoplatonic ideal of human grandeur.[5] Together with Michelangelo and Leonardo da Vinci, he forms the traditional trinity of great masters of that period.[6]
|
6 |
+
|
7 |
+
Raphael was enormously productive, running an unusually large workshop and, despite his early death at 37, leaving a large body of work. Many of his works are found in the Vatican Palace, where the frescoed Raphael Rooms were the central, and the largest, work of his career. The best known work is The School of Athens in the Vatican Stanza della Segnatura. After his early years in Rome, much of his work was executed by his workshop from his drawings, with considerable loss of quality. He was extremely influential in his lifetime, though outside Rome his work was mostly known from his collaborative printmaking.
|
8 |
+
|
9 |
+
After his death, the influence of his great rival Michelangelo was more widespread until the 18th and 19th centuries, when Raphael's more serene and harmonious qualities were again regarded as the highest models. His career falls naturally into three phases and three styles, first described by Giorgio Vasari: his early years in Umbria, then a period of about four years (1504–1508) absorbing the artistic traditions of Florence, followed by his last hectic and triumphant twelve years in Rome, working for two Popes and their close associates.[7]
|
10 |
+
|
11 |
+
Raphael was born in the small but artistically significant central Italian city of Urbino in the Marche region,[8] where his father Giovanni Santi was court painter to the Duke. The reputation of the court had been established by Federico da Montefeltro, a highly successful condottiere who had been created Duke of Urbino by Pope Sixtus IV – Urbino formed part of the Papal States – and who died the year before Raphael was born. The emphasis of Federico's court was more literary than artistic, but Giovanni Santi was a poet of sorts as well as a painter, and had written a rhymed chronicle of the life of Federico, and both wrote the texts and produced the decor for masque-like court entertainments. His poem to Federico shows him as keen to demonstrate awareness of the most advanced North Italian painters, and Early Netherlandish artists as well. In the very small court of Urbino he was probably more integrated into the central circle of the ruling family than most court painters.[9]
|
12 |
+
|
13 |
+
Federico was succeeded by his son Guidobaldo da Montefeltro, who married Elisabetta Gonzaga, daughter of the ruler of Mantua, the most brilliant of the smaller Italian courts for both music and the visual arts. Under them, the court continued as a centre for literary culture. Growing up in the circle of this small court gave Raphael the excellent manners and social skills stressed by Vasari.[10] Court life in Urbino at just after this period was to become set as the model of the virtues of the Italian humanist court through Baldassare Castiglione's depiction of it in his classic work The Book of the Courtier, published in 1528. Castiglione moved to Urbino in 1504, when Raphael was no longer based there but frequently visited, and they became good friends. Raphael became close to other regular visitors to the court: Pietro Bibbiena and Pietro Bembo, both later cardinals, were already becoming well known as writers, and would later be in Rome during Raphael's period there. Raphael mixed easily in the highest circles throughout his life, one of the factors that tended to give a misleading impression of effortlessness to his career. He did not receive a full humanistic education however; it is unclear how easily he read Latin.[11]
|
14 |
+
|
15 |
+
Raphael's mother Màgia died in 1491 when he was eight, followed on August 1, 1494 by his father, who had already remarried. Raphael was thus orphaned at eleven; his formal guardian became his only paternal uncle Bartolomeo, a priest, who subsequently engaged in litigation with his stepmother. He probably continued to live with his stepmother when not staying as an apprentice with a master. He had already shown talent, according to Vasari, who says that Raphael had been "a great help to his father".[12] A self-portrait drawing from his teenage years shows his precocity.[13] His father's workshop continued and, probably together with his stepmother, Raphael evidently played a part in managing it from a very early age. In Urbino, he came into contact with the works of Paolo Uccello, previously the court painter (d. 1475), and Luca Signorelli, who until 1498 was based in nearby Città di Castello.[14]
|
16 |
+
|
17 |
+
According to Vasari, his father placed him in the workshop of the Umbrian master Pietro Perugino as an apprentice "despite the tears of his mother".[b] The evidence of an apprenticeship comes only from Vasari and another source,[16] and has been disputed; eight was very early for an apprenticeship to begin. An alternative theory is that he received at least some training from Timoteo Viti, who acted as court painter in Urbino from 1495.[17] Most modern historians agree that Raphael at least worked as an assistant to Perugino from around 1500; the influence of Perugino on Raphael's early work is very clear: "probably no other pupil of genius has ever absorbed so much of his master's teaching as Raphael did", according to Wölfflin.[18] Vasari wrote that it was impossible to distinguish between their hands at this period, but many modern art historians claim to do better and detect his hand in specific areas of works by Perugino or his workshop. Apart from stylistic closeness, their techniques are very similar as well, for example having paint applied thickly, using an oil varnish medium, in shadows and darker garments, but very thinly on flesh areas. An excess of resin in the varnish often causes cracking of areas of paint in the works of both masters.[19] The Perugino workshop was active in both Perugia and Florence, perhaps maintaining two permanent branches.[20] Raphael is described as a "master", that is to say fully trained, in December 1500.[21]
|
18 |
+
|
19 |
+
His first documented work was the Baronci altarpiece for the church of Saint Nicholas of Tolentino in Città di Castello, a town halfway between Perugia and Urbino.[22] Evangelista da Pian di Meleto, who had worked for his father, was also named in the commission. It was commissioned in 1500 and finished in 1501; now only some cut sections and a preparatory drawing remain.[23] In the following years he painted works for other churches there, including the Mond Crucifixion (about 1503) and the Brera Wedding of the Virgin (1504), and for Perugia, such as the Oddi Altarpiece. He very probably also visited Florence in this period. These are large works, some in fresco, where Raphael confidently marshals his compositions in the somewhat static style of Perugino. He also painted many small and exquisite cabinet paintings in these years, probably mostly for the connoisseurs in the Urbino court, like the Three Graces and St. Michael, and he began to paint Madonnas and portraits.[24] In 1502 he went to Siena at the invitation of another pupil of Perugino, Pinturicchio, "being a friend of Raphael and knowing him to be a draughtsman of the highest quality" to help with the cartoons, and very likely the designs, for a fresco series in the Piccolomini Library in Siena Cathedral.[25] He was evidently already much in demand even at this early stage in his career.[26]
|
20 |
+
|
21 |
+
The Mond Crucifixion, 1502–3, very much in the style of Perugino (National Gallery)
|
22 |
+
|
23 |
+
The Coronation of the Virgin 1502–3 (Pinacoteca Vaticana)
|
24 |
+
|
25 |
+
The Wedding of the Virgin, Raphael's most sophisticated altarpiece of this period (Pinacoteca di Brera)
|
26 |
+
|
27 |
+
Saint George and the Dragon, a small work (29 x 21 cm) for the court of Urbino (Louvre)
|
28 |
+
|
29 |
+
Raphael led a "nomadic" life, working in various centres in Northern Italy, but spent a good deal of time in Florence, perhaps from about 1504. Although there is traditional reference to a "Florentine period" of about 1504–8, he was possibly never a continuous resident there.[27] He may have needed to visit the city to secure materials in any case. There is a letter of recommendation of Raphael, dated October 1504, from the mother of the next Duke of Urbino to the Gonfaloniere of Florence: "The bearer of this will be found to be Raphael, painter of Urbino, who, being greatly gifted in his profession has determined to spend some time in Florence to study. And because his father was most worthy and I was very attached to him, and the son is a sensible and well-mannered young man, on both accounts, I bear him great love..."[28]
|
30 |
+
|
31 |
+
As earlier with Perugino and others, Raphael was able to assimilate the influence of Florentine art, whilst keeping his own developing style. Frescos in Perugia of about 1505 show a new monumental quality in the figures which may represent the influence of Fra Bartolomeo, who Vasari says was a friend of Raphael. But the most striking influence in the work of these years is Leonardo da Vinci, who returned to the city from 1500 to 1506. Raphael's figures begin to take more dynamic and complex positions, and though as yet his painted subjects are still mostly tranquil, he made drawn studies of fighting nude men, one of the obsessions of the period in Florence. Another drawing is a portrait of a young woman that uses the three-quarter length pyramidal composition of the just-completed Mona Lisa, but still looks completely Raphaelesque. Another of Leonardo's compositional inventions, the pyramidal Holy Family, was repeated in a series of works that remain among his most famous easel paintings. There is a drawing by Raphael in the Royal Collection of Leonardo's lost Leda and the Swan, from which he adapted the contrapposto pose of his own Saint Catherine of Alexandria.[29] He also perfects his own version of Leonardo's sfumato modelling, to give subtlety to his painting of flesh, and develops the interplay of glances between his groups, which are much less enigmatic than those of Leonardo. But he keeps the soft clear light of Perugino in his paintings.[30]
|
32 |
+
|
33 |
+
Leonardo was more than thirty years older than Raphael, but Michelangelo, who was in Rome for this period, was just eight years his senior. Michelangelo already disliked Leonardo, and in Rome came to dislike Raphael even more, attributing conspiracies against him to the younger man.[31] Raphael would have been aware of his works in Florence, but in his most original work of these years, he strikes out in a different direction. His Deposition of Christ draws on classical sarcophagi to spread the figures across the front of the picture space in a complex and not wholly successful arrangement. Wölfflin detects in the kneeling figure on the right the influence of the Madonna in Michelangelo's Doni Tondo, but the rest of the composition is far removed from his style, or that of Leonardo. Though highly regarded at the time, and much later forcibly removed from Perugia by the Borghese, it stands rather alone in Raphael's work. His classicism would later take a less literal direction.[32]
|
34 |
+
|
35 |
+
The Ansidei Madonna, c. 1505, beginning to move on from Perugino
|
36 |
+
|
37 |
+
The Madonna of the Meadow, c. 1506, using Leonardo's pyramidal composition for subjects of the Holy Family.[33]
|
38 |
+
|
39 |
+
Saint Catherine of Alexandria, 1507, possibly echoes the pose of Leonardo's Leda
|
40 |
+
|
41 |
+
Deposition of Christ, 1507, drawing from Roman sarcophagi
|
42 |
+
|
43 |
+
In 1508, Raphael moved to Rome, where he resided for the rest of his life. He was invited by the new pope, Julius II, perhaps at the suggestion of his architect Donato Bramante, then engaged on St. Peter's Basilica, who came from just outside Urbino and was distantly related to Raphael.[34] Unlike Michelangelo, who had been kept lingering in Rome for several months after his first summons,[35] Raphael was immediately commissioned by Julius to fresco what was intended to become the Pope's private library at the Vatican Palace.[36] This was a much larger and more important commission than any he had received before; he had only painted one altarpiece in Florence itself. Several other artists and their teams of assistants were already at work on different rooms, many painting over recently completed paintings commissioned by Julius's loathed predecessor, Alexander VI, whose contributions, and arms, Julius was determined to efface from the palace.[37] Michelangelo, meanwhile, had been commissioned to paint the Sistine Chapel ceiling.
|
44 |
+
|
45 |
+
This first of the famous "Stanze" or "Raphael Rooms" to be painted, now known as the Stanza della Segnatura after its use in Vasari's time, was to make a stunning impact on Roman art, and remains generally regarded as his greatest masterpiece, containing The School of Athens, The Parnassus and the Disputa. Raphael was then given further rooms to paint, displacing other artists including Perugino and Signorelli. He completed a sequence of three rooms, each with paintings on each wall and often the ceilings too, increasingly leaving the work of painting from his detailed drawings to the large and skilled workshop team he had acquired, who added a fourth room, probably only including some elements designed by Raphael, after his early death in 1520. The death of Julius in 1513 did not interrupt the work at all, as he was succeeded by Raphael's last Pope, the Medici Pope Leo X, with whom Raphael formed an even closer relationship, and who continued to commission him.[38] Raphael's friend Cardinal Bibbiena was also one of Leo's old tutors, and a close friend and advisor.
|
46 |
+
|
47 |
+
Raphael was clearly influenced by Michelangelo's Sistine Chapel ceiling in the course of painting the room. Vasari said Bramante let him in secretly. The first section was completed in 1511 and the reaction of other artists to the daunting force of Michelangelo was the dominating question in Italian art for the following few decades. Raphael, who had already shown his gift for absorbing influences into his own personal style, rose to the challenge perhaps better than any other artist. One of the first and clearest instances was the portrait in The School of Athens of Michelangelo himself, as Heraclitus, which seems to draw clearly from the Sybils and ignudi of the Sistine ceiling. Other figures in that and later paintings in the room show the same influences, but as still cohesive with a development of Raphael's own style.[39] Michelangelo accused Raphael of plagiarism and years after Raphael's death, complained in a letter that "everything he knew about art he got from me", although other quotations show more generous reactions.[40]
|
48 |
+
|
49 |
+
These very large and complex compositions have been regarded ever since as among the supreme works of the grand manner of the High Renaissance, and the "classic art" of the post-antique West. They give a highly idealised depiction of the forms represented, and the compositions, though very carefully conceived in drawings, achieve "sprezzatura", a term invented by his friend Castiglione, who defined it as "a certain nonchalance which conceals all artistry and makes whatever one says or does seem uncontrived and effortless ...".[41] According to Michael Levey, "Raphael gives his [figures] a superhuman clarity and grace in a universe of Euclidian certainties".[42] The painting is nearly all of the highest quality in the first two rooms, but the later compositions in the Stanze, especially those involving dramatic action, are not entirely as successful either in conception or their execution by the workshop.
|
50 |
+
|
51 |
+
Stanza della Segnatura
|
52 |
+
|
53 |
+
The Mass at Bolsena, 1514, Stanza di Eliodoro
|
54 |
+
|
55 |
+
Deliverance of Saint Peter, 1514, Stanza di Eliodoro
|
56 |
+
|
57 |
+
The Fire in the Borgo, 1514, Stanza dell'incendio del Borgo, painted by the workshop to Raphael's design
|
58 |
+
|
59 |
+
After Bramante's death in 1514, Raphael was named architect of the new St Peter's. Most of his work there was altered or demolished after his death and the acceptance of Michelangelo's design, but a few drawings have survived. It appears his designs would have made the church a good deal gloomier than the final design, with massive piers all the way down the nave, "like an alley" according to a critical posthumous analysis by Antonio da Sangallo the Younger. It would perhaps have resembled the temple in the background of The Expulsion of Heliodorus from the Temple.[43]
|
60 |
+
|
61 |
+
He designed several other buildings, and for a short time was the most important architect in Rome, working for a small circle around the Papacy. Julius had made changes to the street plan of Rome, creating several new thoroughfares, and he wanted them filled with splendid palaces.[44]
|
62 |
+
|
63 |
+
An important building, the Palazzo Branconio dell'Aquila for Leo's Papal Chamberlain Giovanni Battista Branconio, was completely destroyed to make way for Bernini's piazza for St. Peter's, but drawings of the façade and courtyard remain. The façade was an unusually richly decorated one for the period, including both painted panels on the top story (of three), and much sculpture on the middle one.[45]
|
64 |
+
|
65 |
+
The main designs for the Villa Farnesina were not by Raphael, but he did design, and decorate with mosaics, the Chigi Chapel for the same patron, Agostino Chigi, the Papal Treasurer. Another building, for Pope Leo's doctor, the Palazzo Jacopo da Brescia, was moved in the 1930s but survives; this was designed to complement a palace on the same street by Bramante, where Raphael himself lived for a time.[46]
|
66 |
+
|
67 |
+
The Villa Madama, a lavish hillside retreat for Cardinal Giulio de' Medici, later Pope Clement VII, was never finished, and his full plans have to be reconstructed speculatively. He produced a design from which the final construction plans were completed by Antonio da Sangallo the Younger. Even incomplete, it was the most sophisticated villa design yet seen in Italy, and greatly influenced the later development of the genre; it appears to be the only modern building in Rome of which Palladio made a measured drawing.[47]
|
68 |
+
|
69 |
+
Only some floor-plans remain for a large palace planned for himself on the new via Giulia in the rione of Regola, for which he was accumulating the land in his last years. It was on an irregular island block near the river Tiber. It seems all façades were to have a giant order of pilasters rising at least two storeys to the full height of the piano nobile, "a grandiloquent feature unprecedented in private palace design".[48]
|
70 |
+
|
71 |
+
Raphael asked Marco Fabio Calvo to translate Vitruvius's De architectura into Italian; this he received around the end of August 1514. It is preserved at the Library in Munich with handwritten margin notes by Raphael.[49]
|
72 |
+
|
73 |
+
In about 1510, Raphael was asked by Bramante to judge contemporary copies of Laocoön and His Sons.[50] In 1515, he was given powers as Prefect over all antiquities unearthed within, or a mile outside the city.[51] Anyone excavating antiquities was required to inform Raphael within three days, and stonemasons were not allowed to destroy inscriptions without permission.[52] Raphael wrote a letter to Pope Leo suggesting ways of halting the destruction of ancient monuments, and proposed a visual survey of the city to record all antiquities in an organised fashion. The pope intended to continue to re-use ancient masonry in the building of St Peter's, also wanting to ensure that all ancient inscriptions were recorded, and sculpture preserved, before allowing the stones to be reused.[51]
|
74 |
+
|
75 |
+
According to Marino Sanuto the Younger's diary, in 1519 Raphael offered to transport an obelisk from the Mausoleum of Augustus to St. Peter's Square for 90,000 ducats.[53] According to Marcantonio Michiel, Raphael's "youthful death saddened men of letters because he was not able to furnish the description and the painting of ancient Rome that he was making, which was very beautiful".[54] Raphael intended to make an archaeological map of ancient Rome but this was never executed.[55] Four archaeological drawings by the artist are preserved.[56]
|
76 |
+
|
77 |
+
The Vatican projects took most of his time, although he painted several portraits, including those of his two main patrons, the popes Julius II and his successor Leo X, the former considered one of his finest. Other portraits were of his own friends, like Castiglione, or the immediate Papal circle. Other rulers pressed for work, and King Francis I of France was sent two paintings as diplomatic gifts from the Pope.[57] For Agostino Chigi, the hugely rich banker and papal treasurer, he painted the Triumph of Galatea and designed further decorative frescoes for his Villa Farnesina, a chapel in the church of Santa Maria della Pace and mosaics in the funerary chapel in Santa Maria del Popolo. He also designed some of the decoration for the Villa Madama, the work in both villas being executed by his workshop.
|
78 |
+
|
79 |
+
One of his most important papal commissions was the Raphael Cartoons (now in the Victoria and Albert Museum), a series of 10 cartoons, of which seven survive, for tapestries with scenes of the lives of Saint Paul and Saint Peter, for the Sistine Chapel. The cartoons were sent to Brussels to be woven in the workshop of Pier van Aelst. It is possible that Raphael saw the finished series before his death—they were probably completed in 1520.[58] He also designed and painted the Loggie at the Vatican, a long thin gallery then open to a courtyard on one side, decorated with Roman-style grottesche.[59] He produced a number of significant altarpieces, including The Ecstasy of St. Cecilia and the Sistine Madonna. His last work, on which he was working up to his death, was a large Transfiguration, which together with Il Spasimo shows the direction his art was taking in his final years—more proto-Baroque than Mannerist.[60]
|
80 |
+
|
81 |
+
Triumph of Galatea, 1512, his only major mythology, for Chigi's villa (Villa Farnesina)
|
82 |
+
|
83 |
+
Il Spasimo 1517, brings a new degree of expressiveness to his art. (Museo del Prado)
|
84 |
+
|
85 |
+
Transfiguration, 1520, unfinished at his death. (Pinacoteca Vaticana)
|
86 |
+
|
87 |
+
The Holy Family, 1518 (Louvre)
|
88 |
+
|
89 |
+
Raphael painted several of his works on wood support (Madonna of the Pinks) but he also used canvas (Sistine Madonna) and he was known to employ drying oils such as linseed or walnut oils. His palette was rich and he used almost all of the then available pigments such as ultramarine, lead-tin-yellow, carmine, vermilion, madder lake, verdigris and ochres. In several of his paintings (Ansidei Madonna) he even employed the rare brazilwood lake, metallic powdered gold and even less known metallic powdered bismuth.[61][62]
|
90 |
+
|
91 |
+
Vasari says that Raphael eventually had a workshop of fifty pupils and assistants, many of whom later became significant artists in their own right. This was arguably the largest workshop team assembled under any single old master painter, and much higher than the norm. They included established masters from other parts of Italy, probably working with their own teams as sub-contractors, as well as pupils and journeymen. We have very little evidence of the internal working arrangements of the workshop, apart from the works of art themselves, which are often very difficult to assign to a particular hand.[63]
|
92 |
+
|
93 |
+
The most important figures were Giulio Romano, a young pupil from Rome (only about twenty-one at Raphael's death), and Gianfrancesco Penni, already a Florentine master. They were left many of Raphael's drawings and other possessions, and to some extent continued the workshop after Raphael's death. Penni did not achieve a personal reputation equal to Giulio's, as after Raphael's death he became Giulio's less-than-equal collaborator in turn for much of his subsequent career. Perino del Vaga, already a master, and Polidoro da Caravaggio, who was supposedly promoted from a labourer carrying building materials on the site, also became notable painters in their own right. Polidoro's partner, Maturino da Firenze, has, like Penni, been overshadowed in subsequent reputation by his partner. Giovanni da Udine had a more independent status, and was responsible for the decorative stucco work and grotesques surrounding the main frescoes.[64] Most of the artists were later scattered, and some killed, by the violent Sack of Rome in 1527.[65] This did however contribute to the diffusion of versions of Raphael's style around Italy and beyond.
|
94 |
+
|
95 |
+
Vasari emphasises that Raphael ran a very harmonious and efficient workshop, and had extraordinary skill in smoothing over troubles and arguments with both patrons and his assistants—a contrast with the stormy pattern of Michelangelo's relationships with both.[66] However though both Penni and Giulio were sufficiently skilled that distinguishing between their hands and that of Raphael himself is still sometimes difficult,[67] there is no doubt that many of Raphael's later wall-paintings, and probably some of his easel paintings, are more notable for their design than their execution. Many of his portraits, if in good condition, show his brilliance in the detailed handling of paint right up to the end of his life.[68]
|
96 |
+
|
97 |
+
Other pupils or assistants include Raffaellino del Colle, Andrea Sabbatini, Bartolommeo Ramenghi, Pellegrino Aretusi, Vincenzo Tamagni, Battista Dossi, Tommaso Vincidor, Timoteo Viti (the Urbino painter), and the sculptor and architect Lorenzetto (Giulio's brother-in-law).[69] The printmakers and architects in Raphael's circle are discussed below. It has been claimed the Flemish Bernard van Orley worked for Raphael for a time, and Luca Penni, brother of Gianfrancesco and later a member of the First School of Fontainebleau, may have been a member of the team.[70]
|
98 |
+
|
99 |
+
Portrait of Elisabetta Gonzaga, c. 1504
|
100 |
+
|
101 |
+
Portrait of Pope Julius II, c. 1512
|
102 |
+
|
103 |
+
Portrait of Bindo Altoviti, c. 1514
|
104 |
+
|
105 |
+
Portrait of Baldassare Castiglione, c. 1515
|
106 |
+
|
107 |
+
Raphael was one of the finest draftsmen in the history of Western art, and used drawings extensively to plan his compositions. According to a near-contemporary, when beginning to plan a composition, he would lay out a large number of stock drawings of his on the floor, and begin to draw "rapidly", borrowing figures from here and there.[72] Over forty sketches survive for the Disputa in the Stanze, and there may well have been many more originally; over four hundred sheets survive altogether.[73] He used different drawings to refine his poses and compositions, apparently to a greater extent than most other painters, to judge by the number of variants that survive: "... This is how Raphael himself, who was so rich in inventiveness, used to work, always coming up with four or six ways to show a narrative, each one different from the rest, and all of them full of grace and well done." wrote another writer after his death.[74] For John Shearman, Raphael's art marks "a shift of resources away from production to research and development".[75]
|
108 |
+
|
109 |
+
When a final composition was achieved, scaled-up full-size cartoons were often made, which were then pricked with a pin and "pounced" with a bag of soot to leave dotted lines on the surface as a guide. He also made unusually extensive use, on both paper and plaster, of a "blind stylus", scratching lines which leave only an indentation, but no mark. These can be seen on the wall in The School of Athens, and in the originals of many drawings.[76] The "Raphael Cartoons", as tapestry designs, were fully coloured in a glue distemper medium, as they were sent to Brussels to be followed by the weavers.
|
110 |
+
|
111 |
+
In later works painted by the workshop, the drawings are often painfully more attractive than the paintings.[77] Most Raphael drawings are rather precise—even initial sketches with naked outline figures are carefully drawn, and later working drawings often have a high degree of finish, with shading and sometimes highlights in white. They lack the freedom and energy of some of Leonardo's and Michelangelo's sketches, but are nearly always aesthetically very satisfying. He was one of the last artists to use metalpoint (literally a sharp pointed piece of silver or another metal) extensively, although he also made superb use of the freer medium of red or black chalk.[78] In his final years he was one of the first artists to use female models for preparatory drawings—male pupils ("garzoni") were normally used for studies of both sexes.[79]
|
112 |
+
|
113 |
+
Study for soldiers in the Resurrection of Christ, c. 1500.
|
114 |
+
|
115 |
+
Red chalk study for the Villa Farnesina Three Graces
|
116 |
+
|
117 |
+
Sheet with study for the Alba Madonna and other sketches
|
118 |
+
|
119 |
+
Developing the composition for a Madonna and Child
|
120 |
+
|
121 |
+
Raphael made no prints himself, but entered into a collaboration with Marcantonio Raimondi to produce engravings to Raphael's designs, which created many of the most famous Italian prints of the century, and was important in the rise of the reproductive print. His interest was unusual in such a major artist; from his contemporaries it was only shared by Titian, who had worked much less successfully with Raimondi.[80] A total of about fifty prints were made; some were copies of Raphael's paintings, but other designs were apparently created by Raphael purely to be turned into prints. Raphael made preparatory drawings, many of which survive, for Raimondi to translate into engraving.[81]
|
122 |
+
|
123 |
+
The most famous original prints to result from the collaboration were Lucretia, the Judgement of Paris and The Massacre of the Innocents (of which two virtually identical versions were engraved). Among prints of the paintings The Parnassus (with considerable differences)[82] and Galatea were also especially well known. Outside Italy, reproductive prints by Raimondi and others were the main way that Raphael's art was experienced until the twentieth century. Baviero Carocci, called "Il Baviera" by Vasari, an assistant who Raphael evidently trusted with his money,[83] ended up in control of most of the copper plates after Raphael's death, and had a successful career in the new occupation of a publisher of prints.[84]
|
124 |
+
|
125 |
+
Drawing for a Sibyl in the Chigi Chapel.
|
126 |
+
|
127 |
+
The Massacre of the Innocents, engraving by ? Marcantonio Raimondi from a design by Raphael.[c] The version "without fir tree".
|
128 |
+
|
129 |
+
Judgement of Paris, still influencing Manet, who used the seated group in his most famous work.
|
130 |
+
|
131 |
+
Galatea, engraving after the fresco in the Villa Farnesina
|
132 |
+
|
133 |
+
From 1517 until his death, Raphael lived in the Palazzo Caprini, lying at the corner between piazza Scossacavalli and via Alessandrina in the Borgo, in rather grand style in a palace designed by Bramante.[85] He never married, but in 1514 became engaged to Maria Bibbiena, Cardinal Medici Bibbiena's niece; he seems to have been talked into this by his friend the cardinal, and his lack of enthusiasm seems to be shown by the marriage not having taken place before she died in 1520.[86] He is said to have had many affairs, but a permanent fixture in his life in Rome was "La Fornarina", Margherita Luti, the daughter of a baker (fornaro) named Francesco Luti from Siena who lived at Via del Governo Vecchio.[87] He was made a "Groom of the Chamber" of the Pope, which gave him status at court and an additional income, and also a knight of the Papal Order of the Golden Spur. Vasari claims that he had toyed with the ambition of becoming a cardinal, perhaps after some encouragement from Leo, which also may account for his delaying his marriage.[86]
|
134 |
+
|
135 |
+
Raphael died on Good Friday (April 6, 1520), which was possibly his 37th birthday.[d] Vasari says that Raphael had also been born on a Good Friday, which in 1483 fell on March 28,[e] and that the artist died from exhaustion brought on by unceasing romantic interests while he was working on the Loggia.[89] Several other possibilities have been raised by later historians.[f]
|
136 |
+
In his acute illness, which lasted fifteen days, Raphael was composed enough to confess his sins, receive the last rites, and put his affairs in order. He dictated his will, in which he left sufficient funds for his mistress's care, entrusted to his loyal servant Baviera, and left most of his studio contents to Giulio Romano and Penni. At his request, Raphael was buried in the Pantheon.[90]
|
137 |
+
|
138 |
+
Raphael's funeral was extremely grand, attended by large crowds. According to a journal by Paris de Grassis,[g] four cardinals dressed in purple carried his body, the hand of which was kissed by the Pope.[91] The inscription in Raphael's marble sarcophagus, an elegiac distich written by Pietro Bembo, reads: "Here lies that famous Raphael by whom Nature feared to be conquered while he lived, and when he was dying, feared herself to die."[h]
|
139 |
+
|
140 |
+
Probable self-portrait drawing by Raphael in his teens
|
141 |
+
|
142 |
+
Self-portrait, Raphael in the background, from The School of Athens
|
143 |
+
|
144 |
+
Portrait of a Young Man, 1514, lost during the Second World War; a possible self-portrait by Raphael
|
145 |
+
|
146 |
+
Possible Self-portrait with a friend, c. 1518
|
147 |
+
|
148 |
+
Raphael was highly admired by his contemporaries, although his influence on artistic style in his own century was less than that of Michelangelo. Mannerism, beginning at the time of his death, and later the Baroque, took art "in a direction totally opposed" to Raphael's qualities;[92] "with Raphael's death, classic art – the High Renaissance – subsided", as Walter Friedländer put it.[93] He was soon seen as the ideal model by those disliking the excesses of Mannerism:
|
149 |
+
|
150 |
+
the opinion ...was generally held in the middle of the sixteenth century that Raphael was the ideal balanced painter, universal in his talent, satisfying all the absolute standards, and obeying all the rules which were supposed to govern the arts, whereas Michelangelo was the eccentric genius, more brilliant than any other artists in his particular field, the drawing of the male nude, but unbalanced and lacking in certain qualities, such as grace and restraint, essential to the great artist. Those, like Dolce and Aretino, who held this view were usually the survivors of Renaissance Humanism, unable to follow Michelangelo as he moved on into Mannerism.[94]
|
151 |
+
|
152 |
+
Vasari himself, despite his hero remaining Michelangelo, came to see his influence as harmful in some ways, and added passages to the second edition of the Lives expressing similar views.[95]
|
153 |
+
|
154 |
+
Raphael's compositions were always admired and studied, and became the cornerstone of the training of the Academies of art. His period of greatest influence was from the late 17th to late 19th centuries, when his perfect decorum and balance were greatly admired. He was seen as the best model for the history painting, regarded as the highest in the hierarchy of genres. Sir Joshua Reynolds in his Discourses praised his "simple, grave, and majestic dignity" and said he "stands in general foremost of the first [i.e., best] painters", especially for his frescoes (in which he included the "Raphael Cartoons"), whereas "Michael Angelo claims the next attention. He did not possess so many excellences as Raffaelle, but those he had were of the highest kind..." Echoing the sixteenth-century views above, Reynolds goes on to say of Raphael:
|
155 |
+
|
156 |
+
The excellency of this extraordinary man lay in the propriety, beauty, and majesty of his characters, his judicious contrivance of his composition, correctness of drawing, purity of taste, and the skilful accommodation of other men's conceptions to his own purpose. Nobody excelled him in that judgment, with which he united to his own observations on nature the energy of Michael Angelo, and the beauty and simplicity of the antique. To the question, therefore, which ought to hold the first rank, Raffaelle or Michael Angelo, it must be answered, that if it is to be given to him who possessed a greater combination of the higher qualities of the art than any other man, there is no doubt but Raffaelle is the first. But if, according to Longinus, the sublime, being the highest excellence that human composition can attain to, abundantly compensates the absence of every other beauty, and atones for all other deficiencies, then Michael Angelo demands the preference.[96]
|
157 |
+
|
158 |
+
Reynolds was less enthusiastic about Raphael's panel paintings, but the slight sentimentality of these made them enormously popular in the 19th century: "We have been familiar with them from childhood onwards, through a far greater mass of reproductions than any other artist in the world has ever had..." wrote Wölfflin, who was born in 1862, of Raphael's Madonnas.[97]
|
159 |
+
|
160 |
+
In Germany, Raphael had an immense influence on religious art of the Nazarene movement and Düsseldorf school of painting in the 19th century. In contrast, in England the Pre-Raphaelite Brotherhood explicitly reacted against his influence (and that of his admirers such as Joshua Reynolds), seeking to return to styles that pre-dated what they saw as his baneful influence. According to a critic whose ideas greatly influenced them, John Ruskin:
|
161 |
+
|
162 |
+
The doom of the arts of Europe went forth from that chamber [the Stanza della Segnatura], and it was brought about in great part by the very excellencies of the man who had thus marked the commencement of decline. The perfection of execution and the beauty of feature which were attained in his works, and in those of his great contemporaries, rendered finish of execution and beauty of form the chief objects of all artists; and thenceforward execution was looked for rather than thought, and beauty rather than veracity.
|
163 |
+
|
164 |
+
And as I told you, these are the two secondary causes of the decline of art; the first being the loss of moral purpose. Pray note them clearly. In mediæval art, thought is the first thing, execution the second; in modern art execution is the first thing, and thought the second. And again, in mediæval art, truth is first, beauty second; in modern art, beauty is first, truth second. The mediæval principles led up to Raphael, and the modern principles lead down from him.[98]
|
165 |
+
|
166 |
+
By 1900, Raphael's popularity was surpassed by Michelangelo and Leonardo, perhaps as a reaction against the etiolated Raphaelism of 19th-century academic artists such as Bouguereau.[99] Although art historian Bernard Berenson in 1952 termed Raphael the "most famous and most loved" master of the High Renaissance,[100] art historians Leopold and Helen Ettlinger say that Raphael's lesser popularity in the 20th century is made obvious by "the contents of art library shelves ... In contrast to volume upon volume that reproduce yet again detailed photographs of the Sistine Ceiling or Leonardo's drawings, the literature on Raphael, particularly in English, is limited to only a few books".[99] They conclude, nonetheless, that "of all the great Renaissance masters, Raphael's influence is the most continuous."[101]
|
167 |
+
|
168 |
+
Footnotes
|
169 |
+
|
170 |
+
Citations
|
en/4921.html.txt
ADDED
@@ -0,0 +1,140 @@
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
Rapping (or rhyming, spitting,[1] emceeing,[2] MCing[2][3]) is a musical form of vocal delivery that incorporates "rhyme, rhythmic speech, and street vernacular",[4] which is performed or chanted in a variety of ways, usually over a backing beat or musical accompaniment.[4] The components of rap include "content" (what is being said), "flow" (rhythm, rhyme), and "delivery" (cadence, tone).[5] Rap differs from spoken-word poetry in that it is usually performed in time to musical accompaniment.[6] Rap being a primary ingredient of hip hop music, it is commonly associated with that genre in particular; however, the origins of rap precede hip-hop culture. The earliest precursor to modern rap is the West African griot tradition, in which "oral historians", or "praise-singers", would disseminate oral traditions and genealogies, or use their rhetorical techniques for gossip or to "praise or critique individuals."[7] Griot traditions connect to rap along a lineage of black verbal reverence,[definition needed] through James Brown interacting with the crowd and the band between songs, to Muhammad Ali's verbal taunts and the poems of The Last Poets.[vague] Therefore, rap lyrics and music are part of the "Black rhetorical continuum", and aim to reuse elements of past traditions while expanding upon them through "creative use of language and rhetorical styles and strategies".[8] The person credited with originating the style of "delivering rhymes over extensive music", that would become known as rap, was Anthony "DJ Hollywood" Holloway from Harlem, New York.[9]
|
6 |
+
|
7 |
+
Rap is usually delivered over a beat, typically provided by a DJ, turntablist, beatboxer, or performed a cappella without accompaniment. Stylistically, rap occupies a gray area between speech, prose, poetry, and singing. The word, which predates the musical form, originally meant "to lightly strike",[10] and is now used to describe quick speech or repartee.[11] The word had been used in British English since the 16th century. It was part of the African American dialect of English in the 1960s meaning "to converse", and very soon after that in its present usage as a term denoting the musical style.[12] Today, the term rap is so closely associated with hip-hop music that many writers use the terms interchangeably.
|
8 |
+
|
9 |
+
The English verb rap has various meanings; these include "to strike, especially with a quick, smart, or light blow",[13] as well as "to utter sharply or vigorously: to rap out a command".[13] The Shorter Oxford English Dictionary gives a date of 1541 for the first recorded use of the word with the meaning "to utter (esp. an oath) sharply, vigorously, or suddenly".[14] Wentworth and Flexner's Dictionary of American Slang gives the meaning "to speak to, recognize, or acknowledge acquaintance with someone", dated 1932,[15] and a later meaning of "to converse, esp. in an open and frank manner".[16] It is these meanings from which the musical form of rapping derives, and this definition may be from a shortening of repartee.[17] A rapper refers to a performer who "raps". By the late 1960s, when Hubert G. Brown changed his name to H. Rap Brown, rap was a slang term referring to an oration or speech, such as was common among the "hip" crowd in the protest movements, but it did not come to be associated with a musical style for another decade.[citation needed]
|
10 |
+
|
11 |
+
Rap was used to describe talking on records as early as 1971, on Isaac Hayes' album Black Moses with track names such as "Ike's Rap", "Ike's Rap II", "Ike's Rap III", and so on.[18] Hayes' "husky-voiced sexy spoken 'raps' became key components in his signature sound".[18]
|
12 |
+
Del the Funky Homosapien similarly states that rap was used to refer to talking in a stylistic manner in the early 1970s: "I was born in '72 ... back then what rapping meant, basically, was you trying to convey something—you're trying to convince somebody. That's what rapping is, it's in the way you talk."[19]
|
13 |
+
|
14 |
+
Rapping can be traced back to its African roots. Centuries before hip-hop music existed, the griots of West Africa were delivering stories rhythmically, over drums and sparse instrumentation. Such connections have been acknowledged by many modern artists, modern day "griots", spoken word artists, mainstream news sources, and academics.[20][21][22][23]
|
15 |
+
|
16 |
+
Blues music, rooted in the work songs and spirituals of slavery and influenced greatly by West African musical traditions, was first played by black Americans[clarification needed], and later by some white Americans, in the Mississippi Delta region of the United States around the time of the Emancipation Proclamation. Grammy-winning blues musician/historian Elijah Wald and others have argued that the blues were being rapped as early as the 1920s.[24][25] Wald went so far as to call hip hop "the living blues."[24] A notable recorded example of rapping in blues music was the 1950 song "Gotta Let You Go" by Joe Hill Louis.[26]
|
17 |
+
|
18 |
+
Jazz, which developed from the blues and other African-American and European musical traditions and originated around the beginning of the 20th century, has also influenced hip hop and has been cited as a precursor of hip hop. Not just jazz music and lyrics but also jazz poetry. According to John Sobol, the jazz musician and poet who wrote Digitopia Blues, rap "bears a striking resemblance to the evolution of jazz both stylistically and formally".[27] Boxer Muhammad Ali anticipated elements of rap, often using rhyme schemes and spoken word poetry, both for when he was trash talking in boxing and as political poetry for his activism outside of boxing, paving the way for The Last Poets in 1968, Gil Scott-Heron in 1970, and the emergence of rap music in the 1970s.[28]
|
19 |
+
|
20 |
+
Precursors also exist in non-African/African-American traditions, especially in vaudeville and musical theater. A comparable tradition is the patter song exemplified by Gilbert and Sullivan, which has origins in earlier Italian opera. "Rock Island" from Meredith Willson's The Music Man is wholly spoken by an ensemble of travelling salesmen, as are most of the numbers for British actor Rex Harrison in the 1964 Lerner and Loewe musical My Fair Lady. Glenn Miller's "The Lady's in Love with You" and "The Little Man Who Wasn't There" (both 1939) each contain distinctly rap-like sequences set to a driving beat, as does the 1937 song "Doin' the Jive". In musical theater, the term "vamp" is identical to its meaning in jazz, gospel, and funk, and it fulfills the same function. Semi-spoken music has long been especially popular in British entertainment, and such examples as David Croft's theme to the 1970s sitcom Are You Being Served? have elements indistinguishable from modern rap.
In classical music, semi-spoken delivery was stylized by composer Arnold Schoenberg as Sprechstimme, and famously used in Ernst Toch's 1924 Geographical Fugue for spoken chorus and the final scene of Darius Milhaud's 1915 ballet Les Choéphores.[29] In the French chanson field, irrigated by a strong poetry tradition, such singer-songwriters as Léo Ferré or Serge Gainsbourg made their own use of spoken word over rock or symphonic music from the very beginning of the 1970s. Although these probably did not have a direct influence on rap's development in the African-American cultural sphere, they paved the way for acceptance of spoken word music in the media market, as well as providing a broader backdrop, in a range of cultural contexts distinct from that of the African American experience, upon which rapping could later be grafted.
With the decline of disco in the early 1980s, rap became a new form of expression. Rap arose from musical experimentation with rhyming, rhythmic speech, and was a departure from disco. Sherley Anne Williams refers to the development of rap as "anti-Disco" in style and means of reproduction. Early rap productions after disco sought a more simplified manner of producing the tracks they were to sing over. Williams explains how rap composers and DJs rejected the heavily orchestrated and ritzy multi-tracks of disco in favor of "break beats", which were created by compiling different records from numerous genres and did not require the equipment of professional recording studios. Because professional studios were not necessary, rap production was open to youth who, as Williams explains, felt "locked out" by the capital needed to produce disco records.[30]
More directly related to the African-American community were items like schoolyard chants and taunts, clapping games,[31] jump-rope rhymes, some with unwritten folk histories going back hundreds of years across many nationalities. Sometimes these items contain racially offensive lyrics.[32] A related area that is not strictly folklore is rhythmical cheering and cheerleading for military and sports.
In his narration between the tracks on George Russell's 1958 jazz album New York, N.Y., the singer Jon Hendricks recorded something close to modern rap, since it all rhymed and was delivered in a hip, rhythm-conscious manner. Art forms such as spoken word jazz poetry and comedy records had an influence on the first rappers.[33] Coke La Rock, often credited as hip-hop's first MC[34] cites the Last Poets among his influences, as well as comedians such as Wild Man Steve and Richard Pryor.[33] Comedian Rudy Ray Moore released under the counter albums in the 1960s and 1970s such as This Pussy Belongs To Me (1970), which contained "raunchy, sexually explicit rhymes that often had to do with pimps, prostitutes, players, and hustlers",[35] and which later led to him being called "The Godfather of Rap".[36]
Gil Scott-Heron, a jazz poet/musician, has been cited as an influence on rappers such as Chuck D and KRS-One.[37] Scott-Heron himself was influenced by Melvin Van Peebles,[38][39] whose first album was 1968's Brer Soul. Van Peebles describes his vocal style as "the old Southern style", which was influenced by singers he had heard growing up in South Chicago.[40] Van Peebles also said that he was influenced by older forms of African-American music: "... people like Blind Lemon Jefferson and the field hollers. I was also influenced by spoken word song styles from Germany that I encountered when I lived in France."[41]
During the mid-20th century, the musical culture of the Caribbean was constantly influenced by the concurrent changes in American music. As early as 1956,[42] deejays were toasting (an African tradition of "rapped out" tales of heroism) over dubbed Jamaican beats. It was called "rap", expanding the word's earlier meaning in the African-American community—"to discuss or debate informally."[43]
The early rapping of hip-hop developed out of DJ and Master of Ceremonies' announcements made over the microphone at parties, and later into more complex raps.[44] Grandmaster Caz states: "The microphone was just used for making announcements, like when the next party was gonna be, or people's moms would come to the party looking for them, and you have to announce it on the mic. Different DJs started embellishing what they were saying. I would make an announcement this way, and somebody would hear that and they add a little bit to it. I'd hear it again and take it a little step further 'til it turned from lines to sentences to paragraphs to verses to rhymes."[44]
One of the first rappers at the beginning of the hip hop period, at the end of the 1970s, was also hip hop's first DJ, DJ Kool Herc. Herc, a Jamaican immigrant, started delivering simple raps at his parties, which some claim were inspired by the Jamaican tradition of toasting.[45] However, Kool Herc himself denies this link (in the 1984 book Hip Hop), saying, "Jamaican toasting? Naw, naw. No connection there. I couldn't play reggae in the Bronx. People wouldn't accept it. The inspiration for rap is James Brown and the album Hustler's Convention".[46] Herc also suggests he was too young while in Jamaica to get into sound system parties: "I couldn't get in. Couldn't get in. I was ten, eleven years old,"[47] and that while in Jamaica, he was listening to James Brown: "I was listening to American music in Jamaica and my favorite artist was James Brown. That's who inspired me. A lot of the records I played were by James Brown."[45]
However, in terms of what we identify in the 2010s as "rap" the source came from Manhattan. Pete DJ Jones said the first person he heard rap was DJ Hollywood, a Harlem (not Bronx) native[48] who was the house DJ at the Apollo Theater. Kurtis Blow also says the first person he heard rhyme was DJ Hollywood.[49] In a 2014 interview, Hollywood said: "I used to like the way Frankie Crocker would ride a track, but he wasn't syncopated to the track though. I liked [WWRL DJ] Hank Spann too, but he wasn't on the one. Guys back then weren't concerned with being musical. I wanted to flow with the record". And in 1975, he ushered in what became known as the Hip Hop style by rhyming syncopated to the beat of an existing record uninterruptedly for nearly a minute. He adapted the lyrics of Isaac Hayes' "Good Love 6-9969" and rhymed it to the breakdown part of "Love is the Message".[50] His partner Kevin Smith, better known as Lovebug Starski, took this new style and introduced it to the Bronx Hip Hop set that until then was composed of DJing and b-boying (breaking), with traditional "shout out" style rapping.
The style that Hollywood created, and that his partner introduced to the Hip Hop set, quickly became the standard. What exactly did Hollywood do? He created "flow". Before then, MCs modeled their rhymes on radio DJs, which usually amounted to short patter that was disconnected thematically; each line stood on its own. But because Hollywood used song lyrics, his rhymes had an inherent flow and theme. This was the game changer. By the end of the 1970s, artists such as Kurtis Blow and The Sugarhill Gang were just starting to receive radio airplay and make an impact far outside of New York City, on a national scale. Blondie's 1981 single, "Rapture", was one of the first songs featuring rap to top the U.S. Billboard Hot 100 chart.
Old school rap (1979–84)[51] was "easily identified by its relatively simple raps"[52] according to AllMusic, "the emphasis was not on lyrical technique, but simply on good times",[52] one notable exception being Melle Mel, who paved the way for future rappers through his socio-political content and creative wordplay.[52]
Golden age hip hop (the mid-1980s to early '90s)[53] was the time period where hip-hop lyricism went through its most drastic transformation – writer William Jelani Cobb says "in these golden years, a critical mass of mic prodigies were literally creating themselves and their art form at the same time"[54] and Allmusic writes, "rhymers like PE's Chuck D, Big Daddy Kane, KRS-One, and Rakim basically invented the complex wordplay and lyrical kung-fu of later hip-hop".[55] The golden age is considered to have ended around 1993–94, marking the end of rap lyricism's most innovative period.[53][55]
"Flow" is defined as "the rhythms and rhymes"[56][57][58] of a hip-hop song's lyrics and how they interact – the book How to Rap breaks flow down into rhyme, rhyme schemes, and rhythm (also known as cadence).[59] 'Flow' is also sometimes used to refer to elements of the delivery (pitch, timbre, volume) as well,[60] though often a distinction is made between the flow and the delivery.[57][56]
Staying on the beat is central to rap's flow[61] – many MCs note the importance of staying on-beat in How to Rap including Sean Price, Mighty Casey, Zion I, Vinnie Paz, Fredro Starr, Del The Funky Homosapien, Tech N9ne, People Under The Stairs, Twista, B-Real, Mr Lif, 2Mex, and Cage.[61]
MCs stay on beat by stressing syllables in time to the four beats of the musical backdrop.[62][63] Poetry scholar Derek Attridge describes how this works in his book Poetic Rhythm – "rap lyrics are written to be performed to an accompaniment that emphasizes the metrical structure of the verse".[62] He says rap lyrics are made up of, "lines with four stressed beats, separated by other syllables that may vary in number and may include other stressed syllables. The strong beat of the accompaniment coincides with the stressed beats of the verse, and the rapper organizes the rhythms of the intervening syllables to provide variety and surprise".[62]
The same technique is also noted in the book How to Rap, where diagrams are used to show how the lyrics line up with the beat – "stressing a syllable on each of the four beats gives the lyrics the same underlying rhythmic pulse as the music and keeps them in rhythm ... other syllables in the song may still be stressed, but the ones that fall in time with the four beats of a bar are the only ones that need to be emphasized in order to keep the lyrics in time with the music".[64]
In rap terminology, 16-bars is the amount of time that rappers are generally given to perform a guest verse on another artist's song; one bar is typically equal to four beats of music.[65]
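As a back-of-the-envelope illustration of that arithmetic, the short Python sketch below converts a verse length in bars into seconds for a given tempo; the function name and the 90 BPM example are hypothetical choices for illustration, with one bar assumed to equal four beats, as stated above.

def verse_length_seconds(bars, bpm, beats_per_bar=4):
    # Each beat lasts 60/bpm seconds; a bar is beats_per_bar beats.
    return bars * beats_per_bar * 60.0 / bpm

# A 16-bar guest verse at a hypothetical tempo of 90 BPM:
print(verse_length_seconds(16, 90))  # about 42.7 seconds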
Old school flows were relatively basic and used only a few syllables per bar, simple rhythmic patterns, and basic rhyming techniques and rhyme schemes.[60][66]
Melle Mel is cited as an MC who epitomizes the old school flow – Kool Moe Dee says, "from 1970 to 1978 we rhymed one way [then] Melle Mel, in 1978, gave us the new cadence we would use from 1978 to 1986".[67] He continues: "He's the first emcee to explode in a new rhyme cadence, and change the way every emcee rhymed forever. Rakim, The Notorious B.I.G., and Eminem have flipped the flow, but Melle Mel's downbeat on the two, four, kick to snare cadence is still the rhyme foundation all emcees are building on".[68]
Artists and critics often credit Rakim with creating the overall shift from the more simplistic old school flows to more complex flows near the beginning of hip hop's new school[69] – Kool Moe Dee says, "any emcee that came after 1986 had to study Rakim just to know what to be able to do.[70] Rakim, in 1986, gave us flow and that was the rhyme style from 1986 to 1994.[67] from that point on, anybody emceeing was forced to focus on their flow".[71] Kool Moe Dee explains that before Rakim, the term 'flow' wasn't widely used – "Rakim is basically the inventor of flow. We were not even using the word flow until Rakim came along. It was called rhyming, it was called cadence, but it wasn't called flow. Rakim created flow!"[72] He adds that while Rakim upgraded and popularized the focus on flow, "he didn't invent the word".[70]
Kool Moe Dee states that Biggie introduced a newer flow which "dominated from 1994 to 2002",[67] and also says that Method Man was "one of the emcees from the early to mid-'90s that ushered in the era of flow ... Rakim invented it, Big Daddy Kane, KRS-One, and Kool G Rap expanded it, but Biggie and Method Man made flow the single most important aspect of an emcee's game".[73] He also cites Craig Mack as an artist who contributed to developing flow in the '90s.[74]
Music scholar Adam Krims says, "the flow of MCs is one of the profoundest changes that separates out new-sounding from older-sounding music ... it is widely recognized and remarked that rhythmic styles of many commercially successful MCs since roughly the beginning of the 1990s have progressively become faster and more 'complex'".[60] He cites "members of the Wu-Tang Clan, Nas, AZ, Big Pun, and Ras Kass, just to name a few"[75] as artists who exemplify this progression.
Kool Moe Dee adds, "in 2002 Eminem created the song that got the first Oscar in Hip-Hop history [Lose Yourself] ... and I would have to say that his flow is the most dominant right now (2003)".[67]
There are many different styles of flow, with different terminology used by different people – stic.man of Dead Prez uses the following terms –
Alternatively, music scholar Adam Krims uses the following terms –
MCs use many different rhyming techniques, including complex rhyme schemes, as Adam Krims points out – "the complexity ... involves multiple rhymes in the same rhyme complex (i.e. section with consistently rhyming words), internal rhymes, [and] offbeat rhymes".[75] There is also widespread use of multisyllabic rhymes, by artists such as Kool G Rap,[82] Big Daddy Kane, Rakim, Big L, Nas and Eminem.
It has been noted that rap's use of rhyme is some of the most advanced in all forms of poetry – music scholar Adam Bradley notes, "rap rhymes so much and with such variety that it is now the largest and richest contemporary archive of rhymed words. It has done more than any other art form in recent history to expand rhyme's formal range and expressive possibilities".[83]
In the book How to Rap, Masta Ace explains how Rakim and Big Daddy Kane caused a shift in the way MCs rhymed: "Up until Rakim, everybody who you heard rhyme, the last word in the sentence was the rhyming [word], the connection word. Then Rakim showed us that you could put rhymes within a rhyme ... now here comes Big Daddy Kane — instead of going three words, he's going multiple".[84] How to Rap explains that "rhyme is often thought to be the most important factor in rap writing ... rhyme is what gives rap lyrics their musicality".[2]
Many of the rhythmic techniques used in rapping come from percussive techniques and many rappers compare themselves to percussionists.[85] How to Rap 2 identifies all the rhythmic techniques used in rapping such as triplets, flams, 16th notes, 32nd notes, syncopation, extensive use of rests, and rhythmic techniques unique to rapping such as West Coast "lazy tails", coined by Shock G.[86] Rapping has also been done in various time signatures, such as 3/4 time.[87]
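For a rough sense of the note values listed above, here is a minimal Python sketch (hypothetical, for illustration only) that prints how long a quarter-note beat, a 16th note, a 32nd note, and one note of an eighth-note triplet last at a given tempo.

def subdivision_lengths(bpm):
    beat = 60.0 / bpm  # duration of one quarter-note beat in seconds
    return {
        "beat (quarter note)": beat,
        "16th note": beat / 4,         # four 16th notes per beat
        "32nd note": beat / 8,         # eight 32nd notes per beat
        "8th-note triplet": beat / 3,  # three triplet eighths per beat
    }

for name, seconds in subdivision_lengths(90).items():
    print(name, round(seconds, 3), "s")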
Since the 2000s, rapping has evolved into a style of rap that spills over the boundaries of the beat, closely resembling spoken English.[88] Rappers like MF Doom and Eminem have exhibited this style, and since then, rapping has been difficult to notate.[89] The American hip-hop group Crime Mob exhibited a new rap flow in songs such as "Knuck If You Buck", heavily dependent on triplets. Rappers including Drake, Kanye West, Rick Ross, Young Jeezy and more have included this influence in their music. In 2014, an American hip-hop collective from Atlanta, Migos, popularized this flow, and is commonly referred to as the "Migos Flow" (a term that is contentious within the hip-hop community).[90]
The standard form of rap notation is the flow diagram, where rappers line-up their lyrics underneath "beat numbers".[91] Different rappers have slightly different forms of flow diagram that they use: Del the Funky Homosapien says, "I'm just writing out the rhythm of the flow, basically. Even if it's just slashes to represent the beats, that's enough to give me a visual path.",[92] Vinnie Paz states, "I've created my own sort of writing technique, like little marks and asterisks to show like a pause or emphasis on words in certain places.",[91] and Aesop Rock says, "I have a system of maybe 10 little symbols that I use on paper that tell me to do something when I'm recording."[91]
Hip-hop scholars also make use of the same flow diagrams: the books How to Rap and How to Rap 2 use the diagrams to explain rap's triplets, flams, rests, rhyme schemes, runs of rhyme, and breaking rhyme patterns, among other techniques.[87] Similar systems are used by PhD musicologists Adam Krims in his book Rap Music and the Poetics of Identity[93] and Kyle Adams in his academic work on flow.[94]
Because rap revolves around a strong 4/4 beat,[95] with certain syllables said in time to the beat, all the notational systems have a similar structure: they all have the same 4 beat numbers at the top of the diagram, so that syllables can be written in-line with the beat numbers.[95] This allows devices such as rests, "lazy tails", flams, and other rhythmic techniques to be shown, as well as illustrating where different rhyming words fall in relation to the music.[87]
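As an illustration of that layout, the following minimal Python sketch prints one bar of a hypothetical flow diagram: the four beat numbers run across the top, and the syllable falling on each numbered beat (shown in capitals) is the stressed one. The lyric is invented purely for demonstration and is not from any published diagram.

# One 4/4 bar: each entry pairs a beat number with the syllables delivered on that beat,
# the first (capitalized) syllable being the one stressed in time with the beat.
bar = [
    ("1", ["RAP", "to the"]),
    ("2", ["BEAT", "and the"]),
    ("3", ["RHYME", "in"]),
    ("4", ["TIME"]),
]

width = 14  # column width so each group of syllables lines up under its beat number
print("".join(beat.ljust(width) for beat, _ in bar))
print("".join(" ".join(syllables).ljust(width) for _, syllables in bar))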
To successfully deliver a rap, a rapper must also develop vocal presence, enunciation, and breath control. Vocal presence is the distinctiveness of a rapper's voice on record. Enunciation is essential to a flowing rap; some rappers choose also to exaggerate it for comic and artistic effect. Breath control, taking in air without interrupting one's delivery, is an important skill for a rapper to master, and a must for any MC. An MC with poor breath control cannot deliver difficult verses without making unintentional pauses.
Raps are sometimes delivered with melody. West Coast rapper Egyptian Lover was the first notable MC to deliver "sing-raps".[96] Popular rappers such as 50 Cent and Ja Rule add a slight melody to their otherwise purely percussive raps, whereas some rappers such as Cee-Lo Green are able to harmonize their raps with the beat. The Midwestern group Bone Thugs-n-Harmony was one of the first groups to achieve nationwide recognition for using the fast-paced, melodic and harmonic raps that are also practiced by Do or Die, another Midwestern group. Another rapper who harmonized his rhymes was Nate Dogg, a member of the group 213. Rakim experimented not only with following the beat, but also with complementing the song's melody with his own voice, making his flow sound like that of an instrument (a saxophone in particular).[97]
The ability to rap quickly and clearly is sometimes regarded as an important sign of skill. In certain hip-hop subgenres such as chopped and screwed, slow-paced rapping is often considered optimal. The current record for fastest rapper is held by Spanish rapper Domingo Edjang Moreno, known by his alias Chojin, who rapped 921 syllables in one minute on December 23, 2008.[98]
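For context, that record works out to 921 / 60 ≈ 15.4 syllables per second, sustained for a full minute.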
In the late 1970s, the term emcee, MC or M.C., derived from "master of ceremonies",[99] became an alternative title for a rapper, and for their role within hip-hop music and culture. An MC uses rhyming verses, pre-written or ad lib ('freestyled'), to introduce the DJ with whom they work, to keep the crowd entertained or to glorify themselves. As hip hop progressed, the title MC acquired backronyms such as 'mike chanter'[100] 'microphone controller', 'microphone checker', 'music commentator', and one who 'moves the crowd'. Some use this word interchangeably with the term rapper, while for others the term denotes a superior level of skill and connection to the wider culture.
MC is often used as a term of distinction, referring to an artist with good performance skills.[101] As Kool G Rap notes, "masters of ceremony, where the word 'M.C.' comes from, means just keeping the party alive" [sic].[102][103] Many people in hip hop, including DJ Premier and KRS-One, feel that James Brown was the first MC. James Brown had the lyrics, moves, and soul that greatly influenced a lot of rappers in hip hop, and arguably even started the first MC rhyme.[104][105]
For some rappers the term carried a distinction; MC Hammer, for example, acquired the nickname "MC" for being a "Master of Ceremonies", which he used when he began performing at various clubs while on the road with the Oakland A's and eventually in the military (United States Navy).[106] It was within the lyrics of a rap song called "This Wall" that Hammer first identified himself as M.C. Hammer, and he later marketed the name on his debut album Feel My Power.[107]
Uncertainty over the acronym's expansion may be considered evidence for its ubiquity: the full term "Master of Ceremonies" is very rarely used in the hip-hop scene. This confusion prompted the hip-hop group A Tribe Called Quest to include this statement in the liner notes to their 1993 album Midnight Marauders:
The use of the term MC when referring to a rhyming wordsmith originates from the dance halls of Jamaica. At each event, there would be a master of ceremonies who would introduce the different musical acts and would say a toast in style of a rhyme, directed at the audience and to the performers. He would also make announcements such as the schedule of other events or advertisements from local sponsors. The term MC continued to be used by the children of women who moved to New York City to work as maids in the 1970s. These MCs eventually created a new style of music called hip-hop based on the rhyming they used to do in Jamaica and the breakbeats used in records. MC has also recently been accepted to refer to all who engineer music.[citation needed]
"Party rhymes", meant to pump up the crowd at a party, were nearly the exclusive focus of old school hip hop, and they remain a staple of hip-hop music to this day. In addition to party raps, rappers also tend to make references to love and sex. Love raps were first popularized by Spoonie Gee of the Treacherous Three, and later, in the golden age of hip hop, Big Daddy Kane, Heavy D, and LL Cool J would continue this tradition.
Hip-hop artists such as KRS-One, Hopsin, Public Enemy, Lupe Fiasco, Mos Def, Talib Kweli, Jay-Z, Nas, The Notorious B.I.G. (Biggie), and dead prez are known for their sociopolitical subject matter. Their West Coast counterparts include Emcee Lynx, The Coup, Paris, and Michael Franti. Tupac Shakur was also known for rapping about social issues such as police brutality, teenage pregnancy, and racism.
Other rappers take a less critical approach to urbanity, sometimes even embracing such aspects as crime. Schoolly D was the first notable MC to rap about crime.[96] Early on KRS-One was accused of celebrating crime and a hedonistic lifestyle, but after the death of his DJ, Scott La Rock, KRS-One went on to speak out against violence in hip hop and has spent the majority of his career condemning violence and writing on issues of race and class. Ice-T was one of the first rappers to call himself a "playa" and discuss guns on record, but his theme tune to the 1988 film Colors contained warnings against joining gangs. Gangsta rap, made popular largely because of N.W.A, brought rapping about crime and the gangster lifestyle into the musical mainstream.
Materialism has also been a popular topic in hip-hop since at least the early 1990s, with rappers boasting about their own wealth and possessions, and name-dropping specific brands: liquor brands Cristal and Rémy Martin, car manufacturers Bentley and Mercedes-Benz and clothing brands Gucci and Versace have all been popular subjects for rappers.
Various politicians, journalists, and religious leaders have accused rappers of fostering a culture of violence and hedonism among hip-hop listeners through their lyrics.[108][109][110] However, there are also rappers whose messages may not be in conflict with these views, for example Christian hip hop. Others have praised the "political critique, innuendo and sarcasm" of hip-hop music.[111]
In contrast to the more hedonistic approach of gangsta rappers, some rappers have a spiritual or religious focus. Christian rap is currently the most commercially successful form of religious rap. With Christian rappers like Lecrae, Thi'sl and Hostyle Gospel winning national awards and making regular appearances on television, Christian hip hop seems to have found its place in the hip-hop family.[112][113] Aside from Christianity, the Five Percent Nation, an Islamic esotericist religious/spiritual group, has been represented more than any other religious group in popular hip hop. Artists such as Rakim, the members of the Wu-Tang Clan, Brand Nubian, X-Clan and Busta Rhymes have had success in spreading the theology of the Five Percenters.
Rappers use the literary techniques of double entendres, alliteration, and forms of wordplay that are found in classical poetry. Similes and metaphors are used extensively in rap lyrics; rappers such as Fabolous and Lloyd Banks have written entire songs in which every line contains similes, whereas MCs like Rakim, GZA, and Jay-Z are known for the metaphorical content of their raps. Rappers such as Lupe Fiasco are known for the complexity of their songs that contain metaphors within extended metaphors.
Many hip-hop listeners believe that a rapper's lyrics are enhanced by a complex vocabulary. Kool Moe Dee claims that he appealed to older audiences by using a complex vocabulary in his raps.[69] Rap is famous, however, for having its own vocabulary—from international hip-hop slang to regional slang. Some artists, like the Wu-Tang Clan, develop an entire lexicon among their clique. African-American English has always had a significant effect on hip-hop slang and vice versa. Certain regions have introduced their unique regional slang to hip-hop culture, such as the Bay Area (Mac Dre, E-40), Houston (Chamillionaire, Paul Wall), Atlanta (Ludacris, Lil Jon, T.I.), and Kentucky (Nappy Roots). The Nation of Gods and Earths, aka The Five Percenters, has influenced mainstream hip-hop slang with the introduction of phrases such as "word is bond" that have since lost much of their original spiritual meaning. Preference toward one or the other has much to do with the individual; GZA, for example, prides himself on being very visual and metaphorical but also succinct, whereas underground rapper MF DOOM is known for heaping similes upon similes. In still another variation, 2Pac was known for saying exactly what he meant, literally and clearly.
Rap music's rise into popular culture in the 1990s can be credited to the album Niggaz4life by Niggaz With Attitude, the first rap group ever to take the top spot of the Billboard 200, in 1991, in the United States.[114] With this victory came the beginning of an era of popular culture guided by the musical influences of hip-hop and rap itself, moving away from the influences of rock music.[114] As rap continued to develop and further disseminate, it went on to influence clothing brands, movies, sports, and dancing through popular culture. As rap has developed to become more of a presence in popular culture, it has focused itself on a particular demographic: adolescents and young adults.[115] As such, it has had a significant impact on the modern vernacular of this portion of the population, which has diffused throughout society.
The effects of rap music on modern vernacular can be explored through the study of semiotics. Semiotics is the study of signs and symbols, or the study of language as a system.[116] French literary theorist Roland Barthes furthers this study with his own theory of myth.[117] He maintains that the first order of signification is language and that the second is "myth", arguing that a word has both its literal meaning and its mythical meaning, which is heavily dependent on socio-cultural context.[117] To illustrate, Barthes uses the example of a rat: it has a literal meaning (a physical, objective description) and it has a greater socio-cultural understanding.[117] This contextual meaning is subjective and is dynamic within society.
Through Barthes' semiotic theory of language and myth, it can be shown that rap music has culturally influenced the language of its listeners, as rappers shape the connotative meaning of words that already exist. As more people listen to rap, the words that are used in the lyrics become culturally bound to the song, and then are disseminated through the conversations that people have using these words.
Most often, the terms that rappers use are pre-established words that have been prescribed new meaning through their music, and that are eventually disseminated through social spheres.[118] This newly contextualized word is called a neosemanticism. Neosemanticisms are forgotten words that are often brought forward from subcultures that attract the attention of members of the reigning culture of their time, and are then brought forward by the influential voices in society – in this case, these figures are rappers.[118] To illustrate, the acronym YOLO was popularized by the rapper, actor and R&B singer Drake in 2012 when he featured it in his own song, The Motto.[119] That year the term YOLO was so popular that it was printed on t-shirts, became a trending hashtag on Twitter, and was even considered as the inspiration for several tattoos.[119] However, although the rapper may have popularized the acronym, the motto itself was in no way first established by Drake. Similar messages can be seen in many well-known sayings, or as early as 1896, in the English translation of La Comédie Humaine by Honoré de Balzac, where one of his free-spirited characters tells another, "You Only Live Once!".[120] Another example of a neosemanticism is the word "broccoli". Rapper E-40 initially used the word "broccoli" to refer to marijuana, on his hit track "Broccoli" in 1993.[121] In contemporary society, artists D.R.A.M. and Lil Yachty are often credited for this slang because of their hit song, also titled "Broccoli".[121]
With the rise in technology and mass media, the dissemination of subcultural terms has only become easier. Dick Hebdige, author of Subculture: The Meaning of Style, notes that subcultures often use music to vocalize the struggles of their experiences.[122] As rap is also the culmination of a prevalent sub-culture in African-American social spheres, their own personal cultures are often disseminated through rap lyrics.[115]
It is here that lyrics can be categorized as either historically influenced or (more commonly) considered as slang.[115] Vernon Andrews, the professor of the course American Studies 111: Hip-Hop Culture, suggests that many words, such as "hood", "homie", and "dope", are historically influenced.[115] Most importantly, this also brings forward the anarchistic culture of rap music. Common themes from rap are anti-establishment and instead, promote black excellence and diversity.[115] It is here that rap can be seen to reclaim words, namely, "nigga", a historical term used to subjugate and oppress Black people in America.[115] This word has been reclaimed by Black Americans and is heavily used in rap music. Niggaz With Attitude embodies this notion by using it as the first word of their influential rap group name.[115]
There are two kinds of freestyle rap: one is scripted (recitation) but has no particular overriding subject matter; the second, typically referred to as "freestyling" or "spitting", is the improvisation of rapped lyrics. When freestyling, some rappers inadvertently reuse old lines, or even "cheat" by preparing segments or entire verses in advance. Therefore, freestyles with proven spontaneity are valued above generic, always usable lines.[123] Rappers will often reference places or objects in their immediate setting, or specific (usually demeaning) characteristics of opponents, to prove their authenticity and originality.
Battle rapping, which can be freestyled, is the competition between two or more rappers in front of an audience. The tradition of insulting one's friends or acquaintances in rhyme goes back to the dozens, and was portrayed famously by Muhammad Ali in his boxing matches. The winner of a battle is decided by the crowd and/or preselected judges. According to Kool Moe Dee, a successful battle rap focuses on an opponent's weaknesses, rather than one's own strengths. Television shows such as MTV's DFX and BET's 106 and Park host weekly freestyle battles live on the air. Battle rapping gained widespread public recognition outside of the African-American community with rapper Eminem's movie 8 Mile.
The strongest battle rappers will generally perform their rap fully freestyled. This is the most effective form in a battle as the rapper can comment on the other person, whether it be what they look like, or how they talk, or what they wear. It also allows the rapper to reverse a line used to "diss" him or her if they are the second rapper to battle. This is known as a "flip". Jin The Emcee was considered "World Champion" battle rapper in the mid-2000s.[citation needed]
Throughout hip hop's history, new musical styles and genres have developed that contain rapping. Entire genres, such as rap rock and its derivatives rapcore and rap metal (rock/metal/punk with rapped vocals), or hip house have resulted from the fusion of rap and other styles. Many popular music genres with a focus on percussion have contained rapping at some point; be it disco (DJ Hollywood), jazz (Gang Starr), new wave (Blondie), funk (Fatback Band), contemporary R&B (Mary J. Blige), reggaeton (Daddy Yankee), or even Japanese dance music (Soul'd Out). UK garage music has begun to focus increasingly on rappers in a new subgenre called grime, which emerged in London in the early 2000s and was pioneered and popularized by the MC Dizzee Rascal. The music's increased popularity has seen more UK rappers going to America and touring there, with Sway DaSafo, for example, possibly signing with Akon's label Konvict. Hyphy is the latest of these spin-offs. It is typified by slowed-down atonal vocals with instrumentals that borrow heavily from the hip-hop scene and lyrics centered on illegal street racing and car culture. Another Oakland, California group, Beltaine's Fire, has recently gained attention for their Celtic fusion sound which blends hip-hop beats with Celtic melodies. Unlike the majority of hip-hop artists, all their music is performed live without samples, synths, or drum machines, drawing comparisons to The Roots and Rage Against the Machine.
Bhangra, a widely popular style of music from Punjab, India has been mixed numerous times with reggae and hip-hop music. The most popular song in this genre in the United States was "Mundian to Bach Ke" or "Beware the Boys" by Panjabi MC and Jay-Z. Although "Mundian To Bach Ke" had been released previously, the mixing with Jay-Z popularized the genre further.
Although the majority of rappers are male, there have been a number of female rap stars, including Lauryn Hill, MC Lyte, Lil' Kim, Missy Elliott, Queen Latifah, Da Brat, Eve, Trina, Nicki Minaj, Khia, M.I.A., CL from 2NE1, Foxy Brown, Iggy Azalea, and Lisa Lopes from TLC. There is also deaf rap artist Signmark.
en/4922.html.txt
ADDED
Rapping (or rhyming, spitting,[1] emceeing,[2] MCing[2][3]) is a musical form of vocal delivery that incorporates "rhyme, rhythmic speech, and street vernacular",[4] which is performed or chanted in a variety of ways, usually over a backing beat or musical accompaniment.[4] The components of rap include "content" (what is being said), "flow" (rhythm, rhyme), and "delivery" (cadence, tone).[5] Rap differs from spoken-word poetry in that it is usually performed in time to musical accompaniment.[6] Rap being a primary ingredient of hip hop music, it is commonly associated with that genre in particular; however, the origins of rap precede hip-hop culture. The earliest precursor to modern rap is the West African griot tradition, in which "oral historians", or "praise-singers", would disseminate oral traditions and genealogies, or use their rhetorical techniques for gossip or to "praise or critique individuals."[7] Griot traditions connect to rap along a lineage of black verbal reverence,[definition needed] through James Brown interacting with the crowd and the band between songs, to Muhammad Ali's verbal taunts and the poems of The Last Poets.[vague] Therefore, rap lyrics and music are part of the "Black rhetorical continuum", and aim to reuse elements of past traditions while expanding upon them through "creative use of language and rhetorical styles and strategies".[8] The person credited with originating the style of "delivering rhymes over extensive music", that would become known as rap, was Anthony "DJ Hollywood" Holloway from Harlem, New York.[9]
Rap is usually delivered over a beat, typically provided by a DJ, turntablist, beatboxer, or performed a cappella without accompaniment. Stylistically, rap occupies a gray area between speech, prose, poetry, and singing. The word, which predates the musical form, originally meant "to lightly strike",[10] and is now used to describe quick speech or repartee.[11] The word had been used in British English since the 16th century. It was part of the African American dialect of English in the 1960s meaning "to converse", and very soon after that in its present usage as a term denoting the musical style.[12] Today, the term rap is so closely associated with hip-hop music that many writers use the terms interchangeably.
|
8 |
+
|
9 |
+
The English verb rap has various meanings, these include "to strike, especially with a quick, smart, or light blow",[13] as well "to utter sharply or vigorously: to rap out a command".[13] The Shorter Oxford English Dictionary gives a date of 1541 for the first recorded use of the word with the meaning "to utter (esp. an oath) sharply, vigorously, or suddenly".[14] Wentworth and Flexner's Dictionary of American Slang gives the meaning "to speak to, recognize, or acknowledge acquaintance with someone", dated 1932,[15] and a later meaning of "to converse, esp. in an open and frank manner".[16] It is these meanings from which the musical form of rapping derives, and this definition may be from a shortening of repartee.[17] A rapper refers to a performer who "raps". By the late 1960s, when Hubert G. Brown changed his name to H. Rap Brown, rap was a slang term referring to an oration or speech, such as was common among the "hip" crowd in the protest movements, but it did not come to be associated with a musical style for another decade.[citation needed]
|
10 |
+
|
11 |
+
Rap was used to describe talking on records as early as 1971, on Isaac Hayes' album Black Moses with track names such as "Ike's Rap", "Ike's Rap II", "Ike's Rap III", and so on.[18] Hayes' "husky-voiced sexy spoken 'raps' became key components in his signature sound".[18]
|
12 |
+
Del the Funky Homosapien similarly states that rap was used to refer to talking in a stylistic manner in the early 1970s: "I was born in '72 ... back then what rapping meant, basically, was you trying to convey something—you're trying to convince somebody. That's what rapping is, it's in the way you talk."[19]
|
13 |
+
|
14 |
+
Rapping can be traced back to its African roots. Centuries before hip-hop music existed, the griots of West Africa were delivering stories rhythmically, over drums and sparse instrumentation. Such connections have been acknowledged by many modern artists, modern day "griots", spoken word artists, mainstream news sources, and academics.[20][21][22][23]
|
15 |
+
|
16 |
+
Blues music, rooted in the work songs and spirituals of slavery and influenced greatly by West African musical traditions, was first played by black Americans[clarification needed], and later by some white Americans, in the Mississippi Delta region of the United States around the time of the Emancipation Proclamation. Grammy-winning blues musician/historian Elijah Wald and others have argued that the blues were being rapped as early as the 1920s.[24][25] Wald went so far as to call hip hop "the living blues."[24] A notable recorded example of rapping in blues music was the 1950 song "Gotta Let You Go" by Joe Hill Louis.[26]
|
17 |
+
|
18 |
+
Jazz, which developed from the blues and other African-American and European musical traditions and originated around the beginning of the 20th century, has also influenced hip hop and has been cited as a precursor of hip hop. Not just jazz music and lyrics but also jazz poetry. According to John Sobol, the jazz musician and poet who wrote Digitopia Blues, rap "bears a striking resemblance to the evolution of jazz both stylistically and formally".[27] Boxer Muhammad Ali anticipated elements of rap, often using rhyme schemes and spoken word poetry, both for when he was trash talking in boxing and as political poetry for his activism outside of boxing, paving the way for The Last Poets in 1968, Gil Scott-Heron in 1970, and the emergence of rap music in the 1970s.[28]
|
19 |
+
|
20 |
+
Precursors also exist in non-African/African-American traditions, especially in vaudeville and musical theater. A comparable tradition is the patter song exemplified by Gilbert and Sullivan but that has origins in earlier Italian opera. "Rock Island" from Meridith Wilson's The Music Man is wholly spoken by an ensemble of travelling salesmen, as are most of the numbers for British actor Rex Harrison in the 1964 Lerner and Loewe musical My Fair Lady. Glenn Miller's "The Lady's in Love with You" and "The Little Man Who Wasn't There" (both 1939), each contain distinctly rap-like sequences set to a driving beat as does the 1937 song "Doin' the Jive". In musical theater, the term "vamp" is identical to its meaning in jazz, gospel, and funk, and it fulfills the same function. Semi-spoken music has long been especially popular in British entertainment, and such examples as David Croft's theme to the 1970s sitcom Are You Being Served? have elements indistinguishable from modern rap.
|
21 |
+
|
22 |
+
In classical music, semi-spoken music was popular stylized by composer Arnold Schoenberg as Sprechstimme, and famously used in Ernst Toch's 1924 Geographical Fugue for spoken chorus and the final scene in Darius Milhaud's 1915 ballet Les Choéphores.[29] In the French chanson field, irrigated by a strong poetry tradition, such singer-songwriters as Léo Ferré or Serge Gainsbourg made their own use of spoken word over rock or symphonic music from the very beginning of the 1970s. Although these probably did not have a direct influence on rap's development in the African-American cultural sphere, they paved the way for acceptance of spoken word music in the media market, as well as providing a broader backdrop, in a range of cultural contexts distinct from that of the African American experience, upon which rapping could later be grafted.
|
23 |
+
|
24 |
+
With the decline of disco in the early 1980s rap became a new form of expression. Rap arose from musical experimentation with rhyming, rhythmic speech. Rap was a departure from disco. Sherley Anne Williams refers to the development of rap as "anti-Disco" in style and means of reproduction. The early productions of Rap after Disco sought a more simplified manner of producing the tracks they were to sing over. Williams explains how Rap composers and DJ's opposed the heavily orchestrated and ritzy multi-tracks of Disco for "break beats" which were created from compiling different records from numerous genres and did not require the equipment from professional recording studios. Professional studios were not necessary therefore opening the production of rap to the youth who as Williams explains felt "locked out" because of the capital needed to produce Disco records.[30]
|
25 |
+
|
26 |
+
More directly related to the African-American community were items like schoolyard chants and taunts, clapping games,[31] jump-rope rhymes, some with unwritten folk histories going back hundreds of years across many nationalities. Sometimes these items contain racially offensive lyrics.[32] A related area that is not strictly folklore is rhythmical cheering and cheerleading for military and sports.
|
27 |
+
|
28 |
+
In his narration between the tracks on George Russell's 1958 jazz album New York, N.Y., the singer Jon Hendricks recorded something close to modern rap, since it all rhymed and was delivered in a hip, rhythm-conscious manner. Art forms such as spoken word jazz poetry and comedy records had an influence on the first rappers.[33] Coke La Rock, often credited as hip-hop's first MC[34] cites the Last Poets among his influences, as well as comedians such as Wild Man Steve and Richard Pryor.[33] Comedian Rudy Ray Moore released under the counter albums in the 1960s and 1970s such as This Pussy Belongs To Me (1970), which contained "raunchy, sexually explicit rhymes that often had to do with pimps, prostitutes, players, and hustlers",[35] and which later led to him being called "The Godfather of Rap".[36]
|
29 |
+
|
30 |
+
Gil Scott-Heron, a jazz poet/musician, has been cited as an influence on rappers such as Chuck D and KRS-One.[37] Scott-Heron himself was influenced by Melvin Van Peebles,[38][39] whose first album was 1968's Brer Soul. Van Peebles describes his vocal style as "the old Southern style", which was influenced by singers he had heard growing up in South Chicago.[40] Van Peebles also said that he was influenced by older forms of African-American music: "... people like Blind Lemon Jefferson and the field hollers. I was also influenced by spoken word song styles from Germany that I encountered when I lived in France."[41]
|
31 |
+
|
32 |
+
During the mid-20th century, the musical culture of the Caribbean was constantly influenced by the concurrent changes in American music. As early as 1956,[42] deejays were toasting (an African tradition of "rapped out" tales of heroism) over dubbed Jamaican beats. It was called "rap", expanding the word's earlier meaning in the African-American community—"to discuss or debate informally."[43]
|
33 |
+
|
34 |
+
The early rapping of hip-hop developed out of DJ and Master of Ceremonies' announcements made over the microphone at parties, and later into more complex raps.[44] Grandmaster Caz states: "The microphone was just used for making announcements, like when the next party was gonna be, or people's moms would come to the party looking for them, and you have to announce it on the mic. Different DJs started embellishing what they were saying. I would make an announcement this way, and somebody would hear that and they add a little bit to it. I'd hear it again and take it a little step further 'til it turned from lines to sentences to paragraphs to verses to rhymes."[44]
|
35 |
+
|
36 |
+
One of the first rappers at the beginning of the hip hop period, at the end of the 1970s, was also hip hop's first DJ, DJ Kool Herc. Herc, a Jamaican immigrant, started delivering simple raps at his parties, which some claim were inspired by the Jamaican tradition of toasting.[45] However, Kool Herc himself denies this link (in the 1984 book Hip Hop), saying, "Jamaican toasting? Naw, naw. No connection there. I couldn't play reggae in the Bronx. People wouldn't accept it. The inspiration for rap is James Brown and the album Hustler's Convention".[46] Herc also suggests he was too young while in Jamaica to get into sound system parties: "I couldn't get in. Couldn't get in. I was ten, eleven years old,"[47] and that while in Jamaica, he was listening to James Brown: "I was listening to American music in Jamaica and my favorite artist was James Brown. That's who inspired me. A lot of the records I played were by James Brown."[45]
|
37 |
+
|
38 |
+
However, in terms of what we identify in the 2010s as "rap" the source came from Manhattan. Pete DJ Jones said the first person he heard rap was DJ Hollywood, a Harlem (not Bronx) native[48] who was the house DJ at the Apollo Theater. Kurtis Blow also says the first person he heard rhyme was DJ Hollywood.[49] In a 2014 interview, Hollywood said: "I used to like the way Frankie Crocker would ride a track, but he wasn't syncopated to the track though. I liked [WWRL DJ] Hank Spann too, but he wasn't on the one. Guys back then weren't concerned with being musical. I wanted to flow with the record". And in 1975, he ushered in what became known as the Hip Hop style by rhyming syncopated to the beat of an existing record uninterruptedly for nearly a minute. He adapted the lyrics of Isaac Hayes "Good Love 6-9969" and rhymed it to the breakdown part of "Love is the Message".[50] His partner Kevin Smith, better known as Lovebug Starski, took this new style and introduced it to the Bronx Hip Hop set that until then was composed of DJing and B-boying (or beatboxing), with traditional "shout out" style rapping.
|
39 |
+
|
40 |
+
The style that Hollywood created and his partner introduced to the Hip Hop set quickly became the standard. What actually did Hollywood do? He created "flow." Before then all MCs rhymed based on radio DJs. This usually consisted of short patters that were disconnected thematically; they were separate unto themselves. But by Hollywood using song lyrics, he had an inherent flow and theme to his rhyme. This was the game changer. By the end of the 1970s, artists such as Kurtis Blow and The Sugarhill Gang were just starting to receive radio airplay and make an impact far outside of New York City, on a national scale. Blondie's 1981 single, "Rapture", was one of the first songs featuring rap to top the U.S. Billboard Hot 100 chart.
|
41 |
+
|
42 |
+
Old school rap (1979–84)[51] was "easily identified by its relatively simple raps"[52] according to AllMusic, "the emphasis was not on lyrical technique, but simply on good times",[52] one notable exception being Melle Mel, who set the way for future rappers through his socio-political content and creative wordplay.[52]
|
43 |
+
|
44 |
+
Golden age hip hop (the mid-1980s to early '90s)[53] was the time period where hip-hop lyricism went through its most drastic transformation – writer William Jelani Cobb says "in these golden years, a critical mass of mic prodigies were literally creating themselves and their art form at the same time"[54] and Allmusic writes, "rhymers like PE's Chuck D, Big Daddy Kane, KRS-One, and Rakim basically invented the complex wordplay and lyrical kung-fu of later hip-hop".[55] The golden age is considered to have ended around 1993–94, marking the end of rap lyricism's most innovative period.[53][55]
|
45 |
+
|
46 |
+
"Flow" is defined as "the rhythms and rhymes"[56][57][58] of a hip-hop song's lyrics and how they interact – the book How to Rap breaks flow down into rhyme, rhyme schemes, and rhythm (also known as cadence).[59] 'Flow' is also sometimes used to refer to elements of the delivery (pitch, timbre, volume) as well,[60] though often a distinction is made between the flow and the delivery.[57][56]
|
47 |
+
|
48 |
+
Staying on the beat is central to rap's flow[61] – many MCs note the importance of staying on-beat in How to Rap including Sean Price, Mighty Casey, Zion I, Vinnie Paz, Fredro Starr, Del The Funky Homosapien, Tech N9ne, People Under The Stairs, Twista, B-Real, Mr Lif, 2Mex, and Cage.[61]
|
49 |
+
|
50 |
+
MCs stay on beat by stressing syllables in time to the four beats of the musical backdrop.[62][63] Poetry scholar Derek Attridge describes how this works in his book Poetic Rhythm – "rap lyrics are written to be performed to an accompaniment that emphasizes the metrical structure of the verse".[62] He says rap lyrics are made up of, "lines with four stressed beats, separated by other syllables that may vary in number and may include other stressed syllables. The strong beat of the accompaniment coincides with the stressed beats of the verse, and the rapper organizes the rhythms of the intervening syllables to provide variety and surprise".[62]
|
51 |
+
|
52 |
+
The same technique is also noted in the book How to Rap, where diagrams are used to show how the lyrics line up with the beat – "stressing a syllable on each of the four beats gives the lyrics the same underlying rhythmic pulse as the music and keeps them in rhythm ... other syllables in the song may still be stressed, but the ones that fall in time with the four beats of a bar are the only ones that need to be emphasized in order to keep the lyrics in time with the music".[64]
|
53 |
+
|
54 |
+
In rap terminology, 16-bars is the amount of time that rappers are generally given to perform a guest verse on another artist's song; one bar is typically equal to four beats of music.[65]
|
55 |
+
|
56 |
+
Old school flows were relatively basic and used only few syllables per bar, simple rhythmic patterns, and basic rhyming techniques and rhyme schemes.[60][66]
|
57 |
+
Melle Mel is cited as an MC who epitomizes the old school flow – Kool Moe Dee says, "from 1970 to 1978 we rhymed one way [then] Melle Mel, in 1978, gave us the new cadence we would use from 1978 to 1986".[67] He's the first emcee to explode in a new rhyme cadence, and change the way every emcee rhymed forever. Rakim, The Notorious B.I.G., and Eminem have flipped the flow, but Melle Mel's downbeat on the two, four, kick to snare cadence is still the rhyme foundation all emcees are building on".[68]
|
58 |
+
|
59 |
+
Artists and critics often credit Rakim with creating the overall shift from the more simplistic old school flows to more complex flows near the beginning of hip hop's new school[69] – Kool Moe Dee says, "any emcee that came after 1986 had to study Rakim just to know what to be able to do.[70] Rakim, in 1986, gave us flow and that was the rhyme style from 1986 to 1994.[67] from that point on, anybody emceeing was forced to focus on their flow".[71] Kool Moe Dee explains that before Rakim, the term 'flow' wasn't widely used – "Rakim is basically the inventor of flow. We were not even using the word flow until Rakim came along. It was called rhyming, it was called cadence, but it wasn't called flow. Rakim created flow!"[72] He adds that while Rakim upgraded and popularized the focus on flow, "he didn't invent the word".[70]
|
60 |
+
|
61 |
+
Kool Moe Dee states that Biggie introduced a newer flow which "dominated from 1994 to 2002",[67] and also says that Method Man was "one of the emcees from the early to mid-'90s that ushered in the era of flow ... Rakim invented it, Big Daddy Kane, KRS-One, and Kool G Rap expanded it, but Biggie and Method Man made flow the single most important aspect of an emcee's game".[73] He also cites Craig Mack as an artist who contributed to developing flow in the '90s.[74]
|
62 |
+
|
63 |
+
Music scholar Adam Krims says, "the flow of MCs is one of the profoundest changes that separates out new-sounding from older-sounding music ... it is widely recognized and remarked that rhythmic styles of many commercially successful MCs since roughly the beginning of the 1990s have progressively become faster and more 'complex'".[60] He cites "members of the Wu-Tang Clan, Nas, AZ, Big Pun, and Ras Kass, just to name a few"[75] as artists who exemplify this progression.
|
64 |
+
|
65 |
+
Kool Moe Dee adds, "in 2002 Eminem created the song that got the first Oscar in Hip-Hop history [Lose Yourself] ... and I would have to say that his flow is the most dominant right now (2003)".[67]
|
66 |
+
|
67 |
+
There are many different styles of flow, with different terminology used by different people – stic.man of Dead Prez uses the following terms –
|
68 |
+
|
69 |
+
Alternatively, music scholar Adam Krims uses the following terms –
|
70 |
+
|
71 |
+
MCs use many different rhyming techniques, including complex rhyme schemes, as Adam Krims points out – "the complexity ... involves multiple rhymes in the same rhyme complex (i.e. section with consistently rhyming words), internal rhymes, [and] offbeat rhymes".[75] There is also widespread use of multisyllabic rhymes, by artists such as Kool G Rap,[82] Big Daddy Kane, Rakim, Big L, Nas and Eminem.
|
72 |
+
|
73 |
+
It has been noted that rap's use of rhyme is some of the most advanced in all forms of poetry – music scholar Adam Bradley notes, "rap rhymes so much and with such variety that it is now the largest and richest contemporary archive of rhymed words. It has done more than any other art form in recent history to expand rhyme's formal range and expressive possibilities".[83]
|
74 |
+
|
75 |
+
In the book How to Rap, Masta Ace explains how Rakim and Big Daddy Kane caused a shift in the way MCs rhymed: "Up until Rakim, everybody who you heard rhyme, the last word in the sentence was the rhyming [word], the connection word. Then Rakim showed us that you could put rhymes within a rhyme ... now here comes Big Daddy Kane — instead of going three words, he's going multiple".[84] How to Rap explains that "rhyme is often thought to be the most important factor in rap writing ... rhyme is what gives rap lyrics their musicality".[2]
|
76 |
+
|
77 |
+
Many of the rhythmic techniques used in rapping come from percussive techniques and many rappers compare themselves to percussionists.[85] How to Rap 2 identifies all the rhythmic techniques used in rapping such as triplets, flams, 16th notes, 32nd notes, syncopation, extensive use of rests, and rhythmic techniques unique to rapping such as West Coast "lazy tails", coined by Shock G.[86] Rapping has also been done in various time signatures, such as 3/4 time.[87]
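To make those subdivisions concrete, here is a minimal Python sketch (not drawn from any of the sources cited above) that counts how many evenly spaced syllable slots each common subdivision yields in one 4/4 bar; the numbers follow directly from the note-value names.

# Illustrative sketch: syllable "slots" per 4/4 bar for common subdivisions.
BEATS_PER_BAR = 4  # rap's usual 4/4 meter

subdivisions = {
    "quarter notes (one syllable per beat)": 1,
    "eighth notes": 2,
    "eighth-note triplets": 3,
    "16th notes": 4,
    "32nd notes": 8,
}

for name, slots_per_beat in subdivisions.items():
    print(f"{name:38s} -> {BEATS_PER_BAR * slots_per_beat:2d} slots per bar")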
|
78 |
+
|
79 |
+
Since the 2000s, a style of rapping has evolved that spills over the boundaries of the beat, closely resembling spoken English.[88] Rappers like MF Doom and Eminem have exhibited this style, and since then, rapping has been difficult to notate.[89] The American hip-hop group Crime Mob exhibited a new rap flow in songs such as "Knuck If You Buck", heavily dependent on triplets. Rappers including Drake, Kanye West, Rick Ross, Young Jeezy and more have incorporated this influence into their music. In 2014, Migos, an American hip-hop collective from Atlanta, popularized this flow, which is commonly referred to as the "Migos Flow" (a term that is contentious within the hip-hop community).[90]
|
80 |
+
|
81 |
+
The standard form of rap notation is the flow diagram, where rappers line-up their lyrics underneath "beat numbers".[91] Different rappers have slightly different forms of flow diagram that they use: Del the Funky Homosapien says, "I'm just writing out the rhythm of the flow, basically. Even if it's just slashes to represent the beats, that's enough to give me a visual path.",[92] Vinnie Paz states, "I've created my own sort of writing technique, like little marks and asterisks to show like a pause or emphasis on words in certain places.",[91] and Aesop Rock says, "I have a system of maybe 10 little symbols that I use on paper that tell me to do something when I'm recording."[91]
|
82 |
+
|
83 |
+
Hip-hop scholars also make use of the same flow diagrams: the books How to Rap and How to Rap 2 use the diagrams to explain rap's triplets, flams, rests, rhyme schemes, runs of rhyme, and breaking rhyme patterns, among other techniques.[87] Similar systems are used by PhD musicologists Adam Krims in his book Rap Music and the Poetics of Identity[93] and Kyle Adams in his academic work on flow.[94]
|
84 |
+
|
85 |
+
Because rap revolves around a strong 4/4 beat,[95] with certain syllables said in time to the beat, all the notational systems have a similar structure: they all have the same 4 beat numbers at the top of the diagram, so that syllables can be written in-line with the beat numbers.[95] This allows devices such as rests, "lazy tails", flams, and other rhythmic techniques to be shown, as well as illustrating where different rhyming words fall in relation to the music.[87]
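A minimal Python sketch of such a diagram is shown below; the lyric fragment and its placement are invented purely to illustrate the layout described (beat numbers across the top, syllables written under the beats they fall on) and are not taken from any of the books cited.

# Minimal flow-diagram sketch: four beat numbers on top, syllables beneath.
# The example bar is invented for illustration only.
COLUMN_WIDTH = 16

def print_flow_diagram(bar):
    """bar maps a beat number (1-4) to the syllables spoken on and after that beat."""
    print("".join(f"{beat:<{COLUMN_WIDTH}}" for beat in (1, 2, 3, 4)))
    print("".join(f"{bar.get(beat, ''):<{COLUMN_WIDTH}}" for beat in (1, 2, 3, 4)))

# Stressed syllables (in caps) land on the four beats; the rest fall between them.
print_flow_diagram({1: "KEEP-ing it", 2: "ON the beat", 3: "EV-ery sin-gle", 4: "BAR"})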
|
86 |
+
|
87 |
+
To successfully deliver a rap, a rapper must also develop vocal presence, enunciation, and breath control. Vocal presence is the distinctiveness of a rapper's voice on record. Enunciation is essential to a flowing rap; some rappers choose also to exaggerate it for comic and artistic effect. Breath control, taking in air without interrupting one's delivery, is an important skill for a rapper to master, and a must for any MC. An MC with poor breath control cannot deliver difficult verses without making unintentional pauses.
|
88 |
+
|
89 |
+
Raps are sometimes delivered with melody. West Coast rapper Egyptian Lover was the first notable MC to deliver "sing-raps".[96] Popular rappers such as 50 Cent and Ja Rule add a slight melody to their otherwise purely percussive raps, whereas some rappers such as Cee-Lo Green are able to harmonize their raps with the beat. The Midwestern group Bone Thugs-n-Harmony was one of the first groups to achieve nationwide recognition for using the fast-paced, melodic and harmonic raps that are also practiced by Do or Die, another Midwestern group. Another rapper who harmonized his rhymes was Nate Dogg, a member of the group 213. Rakim experimented not only with following the beat, but also with complementing the song's melody with his own voice, making his flow sound like that of an instrument (a saxophone in particular).[97]
|
90 |
+
|
91 |
+
The ability to rap quickly and clearly is sometimes regarded as an important sign of skill. In certain hip-hop subgenres such as chopped and screwed, slow-paced rapping is often considered optimal. The current record for fastest rapper is held by Spanish rapper Domingo Edjang Moreno, known by his alias Chojin, who rapped 921 syllables in one minute on December 23, 2008.[98]
|
92 |
+
|
93 |
+
In the late 1970s, the term emcee, MC or M.C., derived from "master of ceremonies",[99] became an alternative title for a rapper, and for their role within hip-hop music and culture. An MC uses rhyming verses, pre-written or ad lib ('freestyled'), to introduce the DJ with whom they work, to keep the crowd entertained or to glorify themselves. As hip hop progressed, the title MC acquired backronyms such as 'mike chanter'[100] 'microphone controller', 'microphone checker', 'music commentator', and one who 'moves the crowd'. Some use this word interchangeably with the term rapper, while for others the term denotes a superior level of skill and connection to the wider culture.
|
94 |
+
|
95 |
+
MC can often be used as a term of distinction; referring to an artist with good performance skills.[101] As Kool G Rap notes, "masters of ceremony, where the word 'M.C.' comes from, means just keeping the party alive" [sic].[102][103] Many people in hip hop including DJ Premier and KRS-One feel that James Brown was the first MC. James Brown had the lyrics, moves, and soul that greatly influenced a lot of rappers in hip hop, and arguably even started the first MC rhyme.[104][105]
|
96 |
+
|
97 |
+
For some rappers, there was a distinction to the term, such as for MC Hammer who acquired the nickname "MC" for being a "Master of Ceremonies" which he used when he began performing at various clubs while on the road with the Oakland As and eventually in the military (United States Navy).[106] It was within the lyrics of a rap song called "This Wall" that Hammer first identified himself as M.C. Hammer and later marketed it on his debut album Feel My Power.[107]
|
98 |
+
|
99 |
+
Uncertainty over the acronym's expansion may be considered evidence for its ubiquity: the full term "Master of Ceremonies" is very rarely used in the hip-hop scene. This confusion prompted the hip-hop group A Tribe Called Quest to include this statement in the liner notes to their 1993 album Midnight Marauders:
|
100 |
+
|
101 |
+
The use of the term MC when referring to a rhyming wordsmith originates from the dance halls of Jamaica. At each event, there would be a master of ceremonies who would introduce the different musical acts and would say a toast in style of a rhyme, directed at the audience and to the performers. He would also make announcements such as the schedule of other events or advertisements from local sponsors. The term MC continued to be used by the children of women who moved to New York City to work as maids in the 1970s. These MCs eventually created a new style of music called hip-hop based on the rhyming they used to do in Jamaica and the breakbeats used in records. MC has also recently been accepted to refer to all who engineer music.[citation needed]
|
102 |
+
|
103 |
+
"Party rhymes", meant to pump up the crowd at a party, were nearly the exclusive focus of old school hip hop, and they remain a staple of hip-hop music to this day. In addition to party raps, rappers also tend to make references to love and sex. Love raps were first popularized by Spoonie Gee of the Treacherous Three, and later, in the golden age of hip hop, Big Daddy Kane, Heavy D, and LL Cool J would continue this tradition.
|
104 |
+
Hip-hop artists such as KRS-One, Hopsin, Public Enemy, Lupe Fiasco, Mos Def, Talib Kweli, Jay-Z, Nas, The Notorious B.I.G. (Biggie), and dead prez are known for their sociopolitical subject matter. Their West Coast counterparts include Emcee Lynx, The Coup, Paris, and Michael Franti. Tupac Shakur was also known for rapping about social issues such as police brutality, teenage pregnancy, and racism.
|
105 |
+
|
106 |
+
Other rappers take a less critical approach to urbanity, sometimes even embracing such aspects as crime. Schoolly D was the first notable MC to rap about crime.[96] Early on KRS-One was accused of celebrating crime and a hedonistic lifestyle, but after the death of his DJ, Scott La Rock, KRS-One went on to speak out against violence in hip hop and has spent the majority of his career condemning violence and writing on issues of race and class. Ice-T was one of the first rappers to call himself a "playa" and discuss guns on record, but his theme tune to the 1988 film Colors contained warnings against joining gangs. Gangsta rap, made popular largely because of N.W.A, brought rapping about crime and the gangster lifestyle into the musical mainstream.
|
107 |
+
|
108 |
+
Materialism has also been a popular topic in hip-hop since at least the early 1990s, with rappers boasting about their own wealth and possessions, and name-dropping specific brands: liquor brands Cristal and Rémy Martin, car manufacturers Bentley and Mercedes-Benz and clothing brands Gucci and Versace have all been popular subjects for rappers.
|
109 |
+
|
110 |
+
Various politicians, journalists, and religious leaders have accused rappers of fostering a culture of violence and hedonism among hip-hop listeners through their lyrics.[108][109][110] However, there are also rappers whose messages may not be in conflict with these views, for example Christian hip hop. Others have praised the "political critique, innuendo and sarcasm" of hip-hop music.[111]
|
111 |
+
|
112 |
+
In contrast to the more hedonistic approach of gangsta rappers, some rappers have a spiritual or religious focus. Christian rap is currently the most commercially successful form of religious rap. With Christian rappers like Lecrae, Thi'sl and Hostyle Gospel winning national awards and making regular appearances on television, Christian hip hop seems to have found its way into the hip-hop family.[112][113] Aside from Christianity, the Five Percent Nation, an Islamic esotericist religious/spiritual group, has been represented more than any other religious group in popular hip hop. Artists such as Rakim, the members of the Wu-Tang Clan, Brand Nubian, X-Clan and Busta Rhymes have had success in spreading the theology of the Five Percenters.
|
113 |
+
|
114 |
+
Rappers use the literary techniques of double entendres, alliteration, and forms of wordplay that are found in classical poetry. Similes and metaphors are used extensively in rap lyrics; rappers such as Fabolous and Lloyd Banks have written entire songs in which every line contains similes, whereas MCs like Rakim, GZA, and Jay-Z are known for the metaphorical content of their raps. Rappers such as Lupe Fiasco are known for the complexity of their songs that contain metaphors within extended metaphors.
|
115 |
+
|
116 |
+
Many hip-hop listeners believe that a rapper's lyrics are enhanced by a complex vocabulary. Kool Moe Dee claims that he appealed to older audiences by using a complex vocabulary in his raps.[69] Rap is famous, however, for having its own vocabulary—from international hip-hop slang to regional slang. Some artists, like the Wu-Tang Clan, develop an entire lexicon among their clique. African-American English has always had a significant effect on hip-hop slang and vice versa. Certain regions have introduced their unique regional slang to hip-hop culture, such as the Bay Area (Mac Dre, E-40), Houston (Chamillionaire, Paul Wall), Atlanta (Ludacris, Lil Jon, T.I.), and Kentucky (Nappy Roots). The Nation of Gods and Earths, aka The Five Percenters, has influenced mainstream hip-hop slang with the introduction of phrases such as "word is bond" that have since lost much of their original spiritual meaning. Preference toward one or the other has much to do with the individual; GZA, for example, prides himself on being very visual and metaphorical but also succinct, whereas underground rapper MF DOOM is known for heaping similes upon similes. In still another variation, 2Pac was known for saying exactly what he meant, literally and clearly.
|
117 |
+
|
118 |
+
Rap music's development into popular culture in the 1990s can be credited to the album Niggaz4life by Niggaz With Attitude, the first rap group ever to take the top spot of the Billboard Top 200, in 1991, in the United States.[114] With this victory came the beginning of an era of popular culture guided by the musical influences of hip-hop and rap itself, moving away from the influences of rock music.[114] As rap continued to develop and further disseminate, it went on to influence clothing brands, movies, sports, and dancing through popular culture. As rap has developed to become more of a presence in popular culture, it has focused itself on a particular demographic: adolescents and young adults.[115] As such, it has had a significant impact on the modern vernacular of this portion of the population, which has diffused throughout society.
|
119 |
+
|
120 |
+
The effects of rap music on modern vernacular can be explored through the study of semiotics. Semiotics is the study of signs and symbols, or the study of language as a system.[116] French literary theorist Roland Barthes furthers this study with his own theory of myth.[117] He maintains that the first order of signification is language and that the second is "myth", arguing that a word has both its literal meaning and its mythical meaning, which is heavily dependent on socio-cultural context.[117] To illustrate, Barthes uses the example of a rat: it has a literal meaning (a physical, objective description) and it has a greater socio-cultural understanding.[117] This contextual meaning is subjective and is dynamic within society.
|
121 |
+
|
122 |
+
Through Barthes' semiotic theory of language and myth, it can be shown that rap music has culturally influenced the language of its listeners, as it attaches new connotative meanings to words that already exist. As more people listen to rap, the words that are used in the lyrics become culturally bound to the song, and are then disseminated through the conversations that people have using these words.
|
123 |
+
|
124 |
+
Most often, the terms that rappers use are pre-established words that have been prescribed new meaning through their music, and that are eventually disseminated through social spheres.[118] This newly contextualized word is called a neosemanticism. Neosemanticisms are forgotten words that are often brought forward from subcultures that attract the attention of members of the reigning culture of their time, and are then pushed into wider use by influential voices in society – in this case, rappers.[118] To illustrate, the acronym YOLO was popularized by rapper, actor and R&B singer Drake in 2012 when he featured it in his own song, The Motto.[119] That year the term YOLO was so popular that it was printed on t-shirts, became a trending hashtag on Twitter, and was even considered as the inspiration for several tattoos.[119] However, although the rapper may have come up with the acronym, the motto itself was in no way first established by Drake. Similar messages can be seen in many well-known sayings, and as early as 1896 in the English translation of La Comédie Humaine by Honoré de Balzac, where one of his free-spirited characters tells another, "You Only Live Once!".[120] Another example of a neosemanticism is the word "broccoli". Rapper E-40 initially used the word "broccoli" to refer to marijuana on his hit track Broccoli in 1993.[121] In contemporary society, artists D.R.A.M. and Lil Yachty are often credited for this slang thanks to their hit song, also titled Broccoli.[121]
|
125 |
+
|
126 |
+
With the rise in technology and mass media, the dissemination of subcultural terms has only become easier. Dick Hebdige, author of Subculture: The Meaning of Style, argues that subcultures often use music to vocalize the struggles of their experiences.[122] As rap is also the culmination of a prevalent subculture in African-American social spheres, the personal cultures of its artists are often disseminated through rap lyrics.[115]
|
127 |
+
|
128 |
+
Lyrics can be categorized as either historically influenced or (more commonly) slang.[115] Vernon Andrews, the professor of the course American Studies 111: Hip-Hop Culture, suggests that many words, such as "hood", "homie", and "dope", are historically influenced.[115] Most importantly, this also brings forward the anarchistic culture of rap music. Common themes in rap are anti-establishment and instead promote black excellence and diversity.[115] Rap can also be seen to reclaim words, namely "nigga", a historical term used to subjugate and oppress Black people in America.[115] This word has been reclaimed by Black Americans and is heavily used in rap music. Niggaz With Attitude embodies this notion by using it as the first word of their influential rap group name.[115]
|
129 |
+
|
130 |
+
There are two kinds of freestyle rap: the first is scripted (recitation) but has no particular overriding subject matter; the second, typically referred to as "freestyling" or "spitting", is the improvisation of rapped lyrics. When freestyling, some rappers inadvertently reuse old lines, or even "cheat" by preparing segments or entire verses in advance. Therefore, freestyles with proven spontaneity are valued above generic, always usable lines.[123] Rappers will often reference places or objects in their immediate setting, or specific (usually demeaning) characteristics of opponents, to prove their authenticity and originality.
|
131 |
+
|
132 |
+
Battle rapping, which can be freestyled, is the competition between two or more rappers in front of an audience. The tradition of insulting one's friends or acquaintances in rhyme goes back to the dozens, and was portrayed famously by Muhammad Ali in his boxing matches. The winner of a battle is decided by the crowd and/or preselected judges. According to Kool Moe Dee, a successful battle rap focuses on an opponent's weaknesses, rather than one's own strengths. Television shows such as MTV's DFX and BET's 106 and Park host weekly freestyle battles live on the air. Battle rapping gained widespread public recognition outside of the African-American community with rapper Eminem's movie 8 Mile.
|
133 |
+
|
134 |
+
The strongest battle rappers will generally perform their rap fully freestyled. This is the most effective form in a battle as the rapper can comment on the other person, whether it be what they look like, or how they talk, or what they wear. It also allows the rapper to reverse a line used to "diss" him or her if they are the second rapper to battle. This is known as a "flip". Jin The Emcee was considered "World Champion" battle rapper in the mid-2000s.[citation needed]
|
135 |
+
|
136 |
+
Throughout hip hop's history, new musical styles and genres have developed that contain rapping. Entire genres, such as rap rock and its derivatives rapcore and rap metal (rock/metal/punk with rapped vocals), or hip house, have resulted from the fusion of rap and other styles. Many popular music genres with a focus on percussion have contained rapping at some point, be it disco (DJ Hollywood), jazz (Gang Starr), new wave (Blondie), funk (Fatback Band), contemporary R&B (Mary J. Blige), reggaeton (Daddy Yankee), or even Japanese dance music (Soul'd Out). UK garage music has begun to focus increasingly on rappers in a new subgenre called grime, which emerged in London in the early 2000s and was pioneered and popularized by the MC Dizzee Rascal. The music's increased popularity has seen more UK rappers travel to and tour in America, with artists such as Sway DaSafo possibly signing with Akon's label Konvict. Hyphy is the latest of these spin-offs. It is typified by slowed-down atonal vocals with instrumentals that borrow heavily from the hip-hop scene and lyrics centered on illegal street racing and car culture. Another Oakland, California group, Beltaine's Fire, has recently gained attention for their Celtic fusion sound which blends hip-hop beats with Celtic melodies. Unlike the majority of hip-hop artists, all their music is performed live without samples, synths, or drum machines, drawing comparisons to The Roots and Rage Against the Machine.
|
137 |
+
|
138 |
+
Bhangra, a widely popular style of music from Punjab, India, has been mixed numerous times with reggae and hip-hop music. The most popular song in this genre in the United States was "Mundian to Bach Ke" or "Beware the Boys" by Panjabi MC and Jay-Z. Although "Mundian To Bach Ke" had been released previously, the mixing with Jay-Z popularized the genre further.
|
139 |
+
|
140 |
+
Although the majority of rappers are male, there have been a number of female rap stars, including Lauryn Hill, MC Lyte, Lil' Kim, Missy Elliott, Queen Latifah, Da Brat, Eve, Trina, Nicki Minaj, Khia, M.I.A., CL from 2NE1, Foxy Brown, Iggy Azalea, and Lisa Lopes from TLC. There is also deaf rap artist Signmark.
|
en/4923.html.txt
ADDED
@@ -0,0 +1,140 @@
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
Rapping (or rhyming, spitting,[1] emceeing,[2] MCing[2][3]) is a musical form of vocal delivery that incorporates "rhyme, rhythmic speech, and street vernacular",[4] which is performed or chanted in a variety of ways, usually over a backing beat or musical accompaniment.[4] The components of rap include "content" (what is being said), "flow" (rhythm, rhyme), and "delivery" (cadence, tone).[5] Rap differs from spoken-word poetry in that it is usually performed in time to musical accompaniment.[6] Rap being a primary ingredient of hip hop music, it is commonly associated with that genre in particular; however, the origins of rap precede hip-hop culture. The earliest precursor to modern rap is the West African griot tradition, in which "oral historians", or "praise-singers", would disseminate oral traditions and genealogies, or use their rhetorical techniques for gossip or to "praise or critique individuals."[7] Griot traditions connect to rap along a lineage of black verbal reverence,[definition needed] through James Brown interacting with the crowd and the band between songs, to Muhammad Ali's verbal taunts and the poems of The Last Poets.[vague] Therefore, rap lyrics and music are part of the "Black rhetorical continuum", and aim to reuse elements of past traditions while expanding upon them through "creative use of language and rhetorical styles and strategies".[8] The person credited with originating the style of "delivering rhymes over extensive music", that would become known as rap, was Anthony "DJ Hollywood" Holloway from Harlem, New York.[9]
|
6 |
+
|
7 |
+
Rap is usually delivered over a beat, typically provided by a DJ, turntablist, beatboxer, or performed a cappella without accompaniment. Stylistically, rap occupies a gray area between speech, prose, poetry, and singing. The word, which predates the musical form, originally meant "to lightly strike",[10] and is now used to describe quick speech or repartee.[11] The word had been used in British English since the 16th century. It was part of the African American dialect of English in the 1960s meaning "to converse", and very soon after that in its present usage as a term denoting the musical style.[12] Today, the term rap is so closely associated with hip-hop music that many writers use the terms interchangeably.
|
8 |
+
|
9 |
+
The English verb rap has various meanings; these include "to strike, especially with a quick, smart, or light blow",[13] as well as "to utter sharply or vigorously: to rap out a command".[13] The Shorter Oxford English Dictionary gives a date of 1541 for the first recorded use of the word with the meaning "to utter (esp. an oath) sharply, vigorously, or suddenly".[14] Wentworth and Flexner's Dictionary of American Slang gives the meaning "to speak to, recognize, or acknowledge acquaintance with someone", dated 1932,[15] and a later meaning of "to converse, esp. in an open and frank manner".[16] It is these meanings from which the musical form of rapping derives, and this definition may be from a shortening of repartee.[17] A rapper refers to a performer who "raps". By the late 1960s, when Hubert G. Brown changed his name to H. Rap Brown, rap was a slang term referring to an oration or speech, such as was common among the "hip" crowd in the protest movements, but it did not come to be associated with a musical style for another decade.[citation needed]
|
10 |
+
|
11 |
+
Rap was used to describe talking on records as early as 1971, on Isaac Hayes' album Black Moses with track names such as "Ike's Rap", "Ike's Rap II", "Ike's Rap III", and so on.[18] Hayes' "husky-voiced sexy spoken 'raps' became key components in his signature sound".[18]
|
12 |
+
Del the Funky Homosapien similarly states that rap was used to refer to talking in a stylistic manner in the early 1970s: "I was born in '72 ... back then what rapping meant, basically, was you trying to convey something—you're trying to convince somebody. That's what rapping is, it's in the way you talk."[19]
|
13 |
+
|
14 |
+
Rapping can be traced back to its African roots. Centuries before hip-hop music existed, the griots of West Africa were delivering stories rhythmically, over drums and sparse instrumentation. Such connections have been acknowledged by many modern artists, modern day "griots", spoken word artists, mainstream news sources, and academics.[20][21][22][23]
|
15 |
+
|
16 |
+
Blues music, rooted in the work songs and spirituals of slavery and influenced greatly by West African musical traditions, was first played by black Americans[clarification needed], and later by some white Americans, in the Mississippi Delta region of the United States around the time of the Emancipation Proclamation. Grammy-winning blues musician/historian Elijah Wald and others have argued that the blues were being rapped as early as the 1920s.[24][25] Wald went so far as to call hip hop "the living blues."[24] A notable recorded example of rapping in blues music was the 1950 song "Gotta Let You Go" by Joe Hill Louis.[26]
|
17 |
+
|
18 |
+
Jazz, which developed from the blues and other African-American and European musical traditions and originated around the beginning of the 20th century, has also influenced hip hop and has been cited as a precursor of hip hop. Not just jazz music and lyrics but also jazz poetry. According to John Sobol, the jazz musician and poet who wrote Digitopia Blues, rap "bears a striking resemblance to the evolution of jazz both stylistically and formally".[27] Boxer Muhammad Ali anticipated elements of rap, often using rhyme schemes and spoken word poetry, both for when he was trash talking in boxing and as political poetry for his activism outside of boxing, paving the way for The Last Poets in 1968, Gil Scott-Heron in 1970, and the emergence of rap music in the 1970s.[28]
|
19 |
+
|
20 |
+
Precursors also exist in non-African/African-American traditions, especially in vaudeville and musical theater. A comparable tradition is the patter song, exemplified by Gilbert and Sullivan but with origins in earlier Italian opera. "Rock Island" from Meredith Willson's The Music Man is wholly spoken by an ensemble of travelling salesmen, as are most of the numbers for British actor Rex Harrison in the 1964 Lerner and Loewe musical My Fair Lady. Glenn Miller's "The Lady's in Love with You" and "The Little Man Who Wasn't There" (both 1939) each contain distinctly rap-like sequences set to a driving beat, as does the 1937 song "Doin' the Jive". In musical theater, the term "vamp" is identical to its meaning in jazz, gospel, and funk, and it fulfills the same function. Semi-spoken music has long been especially popular in British entertainment, and such examples as David Croft's theme to the 1970s sitcom Are You Being Served? have elements indistinguishable from modern rap.
|
21 |
+
|
22 |
+
In classical music, semi-spoken music was stylized by composer Arnold Schoenberg as Sprechstimme, and famously used in Ernst Toch's 1924 Geographical Fugue for spoken chorus and the final scene in Darius Milhaud's 1915 ballet Les Choéphores.[29] In the French chanson field, irrigated by a strong poetry tradition, such singer-songwriters as Léo Ferré or Serge Gainsbourg made their own use of spoken word over rock or symphonic music from the very beginning of the 1970s. Although these probably did not have a direct influence on rap's development in the African-American cultural sphere, they paved the way for acceptance of spoken word music in the media market, as well as providing a broader backdrop, in a range of cultural contexts distinct from that of the African American experience, upon which rapping could later be grafted.
|
23 |
+
|
24 |
+
With the decline of disco in the early 1980s, rap became a new form of expression. Rap arose from musical experimentation with rhyming, rhythmic speech, and was a departure from disco. Sherley Anne Williams refers to the development of rap as "anti-Disco" in style and means of reproduction. The early producers of rap after disco sought a more simplified manner of producing the tracks they were to sing over. Williams explains how rap composers and DJs rejected the heavily orchestrated and ritzy multi-tracks of disco in favor of "break beats", which were created by compiling different records from numerous genres and did not require the equipment of professional recording studios. Because professional studios were not necessary, the production of rap was opened up to the youth who, as Williams explains, felt "locked out" because of the capital needed to produce disco records.[30]
|
25 |
+
|
26 |
+
More directly related to the African-American community were items like schoolyard chants and taunts, clapping games,[31] jump-rope rhymes, some with unwritten folk histories going back hundreds of years across many nationalities. Sometimes these items contain racially offensive lyrics.[32] A related area that is not strictly folklore is rhythmical cheering and cheerleading for military and sports.
|
27 |
+
|
28 |
+
In his narration between the tracks on George Russell's 1958 jazz album New York, N.Y., the singer Jon Hendricks recorded something close to modern rap, since it all rhymed and was delivered in a hip, rhythm-conscious manner. Art forms such as spoken word jazz poetry and comedy records had an influence on the first rappers.[33] Coke La Rock, often credited as hip-hop's first MC[34] cites the Last Poets among his influences, as well as comedians such as Wild Man Steve and Richard Pryor.[33] Comedian Rudy Ray Moore released under the counter albums in the 1960s and 1970s such as This Pussy Belongs To Me (1970), which contained "raunchy, sexually explicit rhymes that often had to do with pimps, prostitutes, players, and hustlers",[35] and which later led to him being called "The Godfather of Rap".[36]
|
29 |
+
|
30 |
+
Gil Scott-Heron, a jazz poet/musician, has been cited as an influence on rappers such as Chuck D and KRS-One.[37] Scott-Heron himself was influenced by Melvin Van Peebles,[38][39] whose first album was 1968's Brer Soul. Van Peebles describes his vocal style as "the old Southern style", which was influenced by singers he had heard growing up in South Chicago.[40] Van Peebles also said that he was influenced by older forms of African-American music: "... people like Blind Lemon Jefferson and the field hollers. I was also influenced by spoken word song styles from Germany that I encountered when I lived in France."[41]
|
31 |
+
|
32 |
+
During the mid-20th century, the musical culture of the Caribbean was constantly influenced by the concurrent changes in American music. As early as 1956,[42] deejays were toasting (an African tradition of "rapped out" tales of heroism) over dubbed Jamaican beats. It was called "rap", expanding the word's earlier meaning in the African-American community—"to discuss or debate informally."[43]
|
33 |
+
|
34 |
+
The early rapping of hip-hop developed out of DJ and Master of Ceremonies' announcements made over the microphone at parties, and later into more complex raps.[44] Grandmaster Caz states: "The microphone was just used for making announcements, like when the next party was gonna be, or people's moms would come to the party looking for them, and you have to announce it on the mic. Different DJs started embellishing what they were saying. I would make an announcement this way, and somebody would hear that and they add a little bit to it. I'd hear it again and take it a little step further 'til it turned from lines to sentences to paragraphs to verses to rhymes."[44]
|
35 |
+
|
36 |
+
One of the first rappers at the beginning of the hip hop period, at the end of the 1970s, was also hip hop's first DJ, DJ Kool Herc. Herc, a Jamaican immigrant, started delivering simple raps at his parties, which some claim were inspired by the Jamaican tradition of toasting.[45] However, Kool Herc himself denies this link (in the 1984 book Hip Hop), saying, "Jamaican toasting? Naw, naw. No connection there. I couldn't play reggae in the Bronx. People wouldn't accept it. The inspiration for rap is James Brown and the album Hustler's Convention".[46] Herc also suggests he was too young while in Jamaica to get into sound system parties: "I couldn't get in. Couldn't get in. I was ten, eleven years old,"[47] and that while in Jamaica, he was listening to James Brown: "I was listening to American music in Jamaica and my favorite artist was James Brown. That's who inspired me. A lot of the records I played were by James Brown."[45]
|
37 |
+
|
38 |
+
However, in terms of what we identify in the 2010s as "rap" the source came from Manhattan. Pete DJ Jones said the first person he heard rap was DJ Hollywood, a Harlem (not Bronx) native[48] who was the house DJ at the Apollo Theater. Kurtis Blow also says the first person he heard rhyme was DJ Hollywood.[49] In a 2014 interview, Hollywood said: "I used to like the way Frankie Crocker would ride a track, but he wasn't syncopated to the track though. I liked [WWRL DJ] Hank Spann too, but he wasn't on the one. Guys back then weren't concerned with being musical. I wanted to flow with the record". And in 1975, he ushered in what became known as the Hip Hop style by rhyming syncopated to the beat of an existing record uninterruptedly for nearly a minute. He adapted the lyrics of Isaac Hayes "Good Love 6-9969" and rhymed it to the breakdown part of "Love is the Message".[50] His partner Kevin Smith, better known as Lovebug Starski, took this new style and introduced it to the Bronx Hip Hop set that until then was composed of DJing and B-boying (or beatboxing), with traditional "shout out" style rapping.
|
39 |
+
|
40 |
+
The style that Hollywood created, and that his partner introduced to the Hip Hop set, quickly became the standard. What exactly did Hollywood do? He created "flow." Before then, all MCs rhymed in the manner of radio DJs, which usually consisted of short patter that was disconnected thematically; each phrase was separate unto itself. But because Hollywood used song lyrics, his rhymes had an inherent flow and theme. This was the game changer. By the end of the 1970s, artists such as Kurtis Blow and The Sugarhill Gang were just starting to receive radio airplay and make an impact far outside of New York City, on a national scale. Blondie's 1981 single, "Rapture", was one of the first songs featuring rap to top the U.S. Billboard Hot 100 chart.
|
41 |
+
|
42 |
+
Old school rap (1979–84)[51] was "easily identified by its relatively simple raps"[52] according to AllMusic, "the emphasis was not on lyrical technique, but simply on good times",[52] one notable exception being Melle Mel, who set the way for future rappers through his socio-political content and creative wordplay.[52]
|
43 |
+
|
44 |
+
Golden age hip hop (the mid-1980s to early '90s)[53] was the time period where hip-hop lyricism went through its most drastic transformation – writer William Jelani Cobb says "in these golden years, a critical mass of mic prodigies were literally creating themselves and their art form at the same time"[54] and Allmusic writes, "rhymers like PE's Chuck D, Big Daddy Kane, KRS-One, and Rakim basically invented the complex wordplay and lyrical kung-fu of later hip-hop".[55] The golden age is considered to have ended around 1993–94, marking the end of rap lyricism's most innovative period.[53][55]
|
45 |
+
|
46 |
+
"Flow" is defined as "the rhythms and rhymes"[56][57][58] of a hip-hop song's lyrics and how they interact – the book How to Rap breaks flow down into rhyme, rhyme schemes, and rhythm (also known as cadence).[59] 'Flow' is also sometimes used to refer to elements of the delivery (pitch, timbre, volume) as well,[60] though often a distinction is made between the flow and the delivery.[57][56]
|
47 |
+
|
48 |
+
Staying on the beat is central to rap's flow[61] – many MCs note the importance of staying on-beat in How to Rap including Sean Price, Mighty Casey, Zion I, Vinnie Paz, Fredro Starr, Del The Funky Homosapien, Tech N9ne, People Under The Stairs, Twista, B-Real, Mr Lif, 2Mex, and Cage.[61]
|
49 |
+
|
50 |
+
MCs stay on beat by stressing syllables in time to the four beats of the musical backdrop.[62][63] Poetry scholar Derek Attridge describes how this works in his book Poetic Rhythm – "rap lyrics are written to be performed to an accompaniment that emphasizes the metrical structure of the verse".[62] He says rap lyrics are made up of, "lines with four stressed beats, separated by other syllables that may vary in number and may include other stressed syllables. The strong beat of the accompaniment coincides with the stressed beats of the verse, and the rapper organizes the rhythms of the intervening syllables to provide variety and surprise".[62]
|
51 |
+
|
52 |
+
The same technique is also noted in the book How to Rap, where diagrams are used to show how the lyrics line up with the beat – "stressing a syllable on each of the four beats gives the lyrics the same underlying rhythmic pulse as the music and keeps them in rhythm ... other syllables in the song may still be stressed, but the ones that fall in time with the four beats of a bar are the only ones that need to be emphasized in order to keep the lyrics in time with the music".[64]
|
53 |
+
|
54 |
+
In rap terminology, 16-bars is the amount of time that rappers are generally given to perform a guest verse on another artist's song; one bar is typically equal to four beats of music.[65]
|
55 |
+
|
56 |
+
Old school flows were relatively basic and used only few syllables per bar, simple rhythmic patterns, and basic rhyming techniques and rhyme schemes.[60][66]
|
57 |
+
Melle Mel is cited as an MC who epitomizes the old school flow – Kool Moe Dee says, "from 1970 to 1978 we rhymed one way [then] Melle Mel, in 1978, gave us the new cadence we would use from 1978 to 1986".[67] "He's the first emcee to explode in a new rhyme cadence, and change the way every emcee rhymed forever. Rakim, The Notorious B.I.G., and Eminem have flipped the flow, but Melle Mel's downbeat on the two, four, kick to snare cadence is still the rhyme foundation all emcees are building on".[68]
|
58 |
+
|
59 |
+
Artists and critics often credit Rakim with creating the overall shift from the more simplistic old school flows to more complex flows near the beginning of hip hop's new school[69] – Kool Moe Dee says, "any emcee that came after 1986 had to study Rakim just to know what to be able to do.[70] Rakim, in 1986, gave us flow and that was the rhyme style from 1986 to 1994.[67] from that point on, anybody emceeing was forced to focus on their flow".[71] Kool Moe Dee explains that before Rakim, the term 'flow' wasn't widely used – "Rakim is basically the inventor of flow. We were not even using the word flow until Rakim came along. It was called rhyming, it was called cadence, but it wasn't called flow. Rakim created flow!"[72] He adds that while Rakim upgraded and popularized the focus on flow, "he didn't invent the word".[70]
|
60 |
+
|
61 |
+
Kool Moe Dee states that Biggie introduced a newer flow which "dominated from 1994 to 2002",[67] and also says that Method Man was "one of the emcees from the early to mid-'90s that ushered in the era of flow ... Rakim invented it, Big Daddy Kane, KRS-One, and Kool G Rap expanded it, but Biggie and Method Man made flow the single most important aspect of an emcee's game".[73] He also cites Craig Mack as an artist who contributed to developing flow in the '90s.[74]
|
62 |
+
|
63 |
+
Music scholar Adam Krims says, "the flow of MCs is one of the profoundest changes that separates out new-sounding from older-sounding music ... it is widely recognized and remarked that rhythmic styles of many commercially successful MCs since roughly the beginning of the 1990s have progressively become faster and more 'complex'".[60] He cites "members of the Wu-Tang Clan, Nas, AZ, Big Pun, and Ras Kass, just to name a few"[75] as artists who exemplify this progression.
|
64 |
+
|
65 |
+
Kool Moe Dee adds, "in 2002 Eminem created the song that got the first Oscar in Hip-Hop history [Lose Yourself] ... and I would have to say that his flow is the most dominant right now (2003)".[67]
|
66 |
+
|
67 |
+
There are many different styles of flow, with different terminology used by different people – stic.man of Dead Prez uses the following terms –
|
68 |
+
|
69 |
+
Alternatively, music scholar Adam Krims uses the following terms –
|
70 |
+
|
71 |
+
MCs use many different rhyming techniques, including complex rhyme schemes, as Adam Krims points out – "the complexity ... involves multiple rhymes in the same rhyme complex (i.e. section with consistently rhyming words), internal rhymes, [and] offbeat rhymes".[75] There is also widespread use of multisyllabic rhymes, by artists such as Kool G Rap,[82] Big Daddy Kane, Rakim, Big L, Nas and Eminem.
|
72 |
+
|
73 |
+
It has been noted that rap's use of rhyme is some of the most advanced in all forms of poetry – music scholar Adam Bradley notes, "rap rhymes so much and with such variety that it is now the largest and richest contemporary archive of rhymed words. It has done more than any other art form in recent history to expand rhyme's formal range and expressive possibilities".[83]
|
74 |
+
|
75 |
+
In the book How to Rap, Masta Ace explains how Rakim and Big Daddy Kane caused a shift in the way MCs rhymed: "Up until Rakim, everybody who you heard rhyme, the last word in the sentence was the rhyming [word], the connection word. Then Rakim showed us that you could put rhymes within a rhyme ... now here comes Big Daddy Kane — instead of going three words, he's going multiple".[84] How to Rap explains that "rhyme is often thought to be the most important factor in rap writing ... rhyme is what gives rap lyrics their musicality".[2]
|
76 |
+
|
77 |
+
Many of the rhythmic techniques used in rapping come from percussive techniques and many rappers compare themselves to percussionists.[85] How to Rap 2 identifies all the rhythmic techniques used in rapping such as triplets, flams, 16th notes, 32nd notes, syncopation, extensive use of rests, and rhythmic techniques unique to rapping such as West Coast "lazy tails", coined by Shock G.[86] Rapping has also been done in various time signatures, such as 3/4 time.[87]
|
78 |
+
|
79 |
+
Since the 2000s, a style of rapping has evolved that spills over the boundaries of the beat, closely resembling spoken English.[88] Rappers like MF Doom and Eminem have exhibited this style, and since then, rapping has been difficult to notate.[89] The American hip-hop group Crime Mob exhibited a new rap flow in songs such as "Knuck If You Buck", heavily dependent on triplets. Rappers including Drake, Kanye West, Rick Ross, Young Jeezy and more have incorporated this influence into their music. In 2014, Migos, an American hip-hop collective from Atlanta, popularized this flow, which is commonly referred to as the "Migos Flow" (a term that is contentious within the hip-hop community).[90]
|
80 |
+
|
81 |
+
The standard form of rap notation is the flow diagram, where rappers line-up their lyrics underneath "beat numbers".[91] Different rappers have slightly different forms of flow diagram that they use: Del the Funky Homosapien says, "I'm just writing out the rhythm of the flow, basically. Even if it's just slashes to represent the beats, that's enough to give me a visual path.",[92] Vinnie Paz states, "I've created my own sort of writing technique, like little marks and asterisks to show like a pause or emphasis on words in certain places.",[91] and Aesop Rock says, "I have a system of maybe 10 little symbols that I use on paper that tell me to do something when I'm recording."[91]
|
82 |
+
|
83 |
+
Hip-hop scholars also make use of the same flow diagrams: the books How to Rap and How to Rap 2 use the diagrams to explain rap's triplets, flams, rests, rhyme schemes, runs of rhyme, and breaking rhyme patterns, among other techniques.[87] Similar systems are used by PhD musicologists Adam Krims in his book Rap Music and the Poetics of Identity[93] and Kyle Adams in his academic work on flow.[94]
|
84 |
+
|
85 |
+
Because rap revolves around a strong 4/4 beat,[95] with certain syllables said in time to the beat, all the notational systems have a similar structure: they all have the same 4 beat numbers at the top of the diagram, so that syllables can be written in-line with the beat numbers.[95] This allows devices such as rests, "lazy tails", flams, and other rhythmic techniques to be shown, as well as illustrating where different rhyming words fall in relation to the music.[87]
|
86 |
+
|
87 |
+
To successfully deliver a rap, a rapper must also develop vocal presence, enunciation, and breath control. Vocal presence is the distinctiveness of a rapper's voice on record. Enunciation is essential to a flowing rap; some rappers choose also to exaggerate it for comic and artistic effect. Breath control, taking in air without interrupting one's delivery, is an important skill for a rapper to master, and a must for any MC. An MC with poor breath control cannot deliver difficult verses without making unintentional pauses.
|
88 |
+
|
89 |
+
Raps are sometimes delivered with melody. West Coast rapper Egyptian Lover was the first notable MC to deliver "sing-raps".[96] Popular rappers such as 50 Cent and Ja Rule add a slight melody to their otherwise purely percussive raps, whereas some rappers such as Cee-Lo Green are able to harmonize their raps with the beat. The Midwestern group Bone Thugs-n-Harmony was one of the first groups to achieve nationwide recognition for using the fast-paced, melodic and harmonic raps that are also practiced by Do or Die, another Midwestern group. Another rapper who harmonized his rhymes was Nate Dogg, a member of the group 213. Rakim experimented not only with following the beat, but also with complementing the song's melody with his own voice, making his flow sound like that of an instrument (a saxophone in particular).[97]
|
90 |
+
|
91 |
+
The ability to rap quickly and clearly is sometimes regarded as an important sign of skill. In certain hip-hop subgenres such as chopped and screwed, slow-paced rapping is often considered optimal. The current record for fastest rapper is held by Spanish rapper Domingo Edjang Moreno, known by his alias Chojin, who rapped 921 syllables in one minute on December 23, 2008.[98]
|
92 |
+
|
93 |
+
In the late 1970s, the term emcee, MC or M.C., derived from "master of ceremonies",[99] became an alternative title for a rapper, and for their role within hip-hop music and culture. An MC uses rhyming verses, pre-written or ad lib ('freestyled'), to introduce the DJ with whom they work, to keep the crowd entertained or to glorify themselves. As hip hop progressed, the title MC acquired backronyms such as 'mike chanter'[100] 'microphone controller', 'microphone checker', 'music commentator', and one who 'moves the crowd'. Some use this word interchangeably with the term rapper, while for others the term denotes a superior level of skill and connection to the wider culture.
|
94 |
+
|
95 |
+
MC can often be used as a term of distinction; referring to an artist with good performance skills.[101] As Kool G Rap notes, "masters of ceremony, where the word 'M.C.' comes from, means just keeping the party alive" [sic].[102][103] Many people in hip hop including DJ Premier and KRS-One feel that James Brown was the first MC. James Brown had the lyrics, moves, and soul that greatly influenced a lot of rappers in hip hop, and arguably even started the first MC rhyme.[104][105]
|
96 |
+
|
97 |
+
For some rappers, there was a distinction to the term, such as for MC Hammer who acquired the nickname "MC" for being a "Master of Ceremonies" which he used when he began performing at various clubs while on the road with the Oakland As and eventually in the military (United States Navy).[106] It was within the lyrics of a rap song called "This Wall" that Hammer first identified himself as M.C. Hammer and later marketed it on his debut album Feel My Power.[107]
|
98 |
+
|
99 |
+
Uncertainty over the acronym's expansion may be considered evidence for its ubiquity: the full term "Master of Ceremonies" is very rarely used in the hip-hop scene. This confusion prompted the hip-hop group A Tribe Called Quest to include this statement in the liner notes to their 1993 album Midnight Marauders:
|
100 |
+
|
101 |
+
The use of the term MC when referring to a rhyming wordsmith originates from the dance halls of Jamaica. At each event, there would be a master of ceremonies who would introduce the different musical acts and would say a toast in style of a rhyme, directed at the audience and to the performers. He would also make announcements such as the schedule of other events or advertisements from local sponsors. The term MC continued to be used by the children of women who moved to New York City to work as maids in the 1970s. These MCs eventually created a new style of music called hip-hop based on the rhyming they used to do in Jamaica and the breakbeats used in records. MC has also recently been accepted to refer to all who engineer music.[citation needed]
|
102 |
+
|
103 |
+
"Party rhymes", meant to pump up the crowd at a party, were nearly the exclusive focus of old school hip hop, and they remain a staple of hip-hop music to this day. In addition to party raps, rappers also tend to make references to love and sex. Love raps were first popularized by Spoonie Gee of the Treacherous Three, and later, in the golden age of hip hop, Big Daddy Kane, Heavy D, and LL Cool J would continue this tradition.
|
104 |
+
Hip-hop artists such as KRS-One, Hopsin, Public Enemy, Lupe Fiasco, Mos Def, Talib Kweli, Jay-Z, Nas, The Notorious B.I.G. (Biggie), and dead prez are known for their sociopolitical subject matter. Their West Coast counterparts include Emcee Lynx, The Coup, Paris, and Michael Franti. Tupac Shakur was also known for rapping about social issues such as police brutality, teenage pregnancy, and racism.
|
105 |
+
|
106 |
+
Other rappers take a less critical approach to urbanity, sometimes even embracing such aspects as crime. Schoolly D was the first notable MC to rap about crime.[96] Early on KRS-One was accused of celebrating crime and a hedonistic lifestyle, but after the death of his DJ, Scott La Rock, KRS-One went on to speak out against violence in hip hop and has spent the majority of his career condemning violence and writing on issues of race and class. Ice-T was one of the first rappers to call himself a "playa" and discuss guns on record, but his theme tune to the 1988 film Colors contained warnings against joining gangs. Gangsta rap, made popular largely because of N.W.A, brought rapping about crime and the gangster lifestyle into the musical mainstream.
|
107 |
+
|
108 |
+
Materialism has also been a popular topic in hip-hop since at least the early 1990s, with rappers boasting about their own wealth and possessions, and name-dropping specific brands: liquor brands Cristal and Rémy Martin, car manufacturers Bentley and Mercedes-Benz and clothing brands Gucci and Versace have all been popular subjects for rappers.
|
109 |
+
|
110 |
+
Various politicians, journalists, and religious leaders have accused rappers of fostering a culture of violence and hedonism among hip-hop listeners through their lyrics.[108][109][110] However, there are also rappers whose messages may not be in conflict with these views, for example Christian hip hop. Others have praised the "political critique, innuendo and sarcasm" of hip-hop music.[111]
|
111 |
+
|
112 |
+
In contrast to the more hedonistic approach of gangsta rappers, some rappers have a spiritual or religious focus. Christian rap is currently the most commercially successful form of religious rap. With Christian rappers like Lecrae, Thi'sl and Hostyle Gospel winning national awards and making regular appearances on television, Christian hip hop seems to have found its way into the hip-hop family.[112][113] Aside from Christianity, the Five Percent Nation, an Islamic esotericist religious/spiritual group, has been represented more than any other religious group in popular hip hop. Artists such as Rakim, the members of the Wu-Tang Clan, Brand Nubian, X-Clan and Busta Rhymes have had success in spreading the theology of the Five Percenters.
|
113 |
+
|
114 |
+
Rappers use the literary techniques of double entendres, alliteration, and forms of wordplay that are found in classical poetry. Similes and metaphors are used extensively in rap lyrics; rappers such as Fabolous and Lloyd Banks have written entire songs in which every line contains similes, whereas MCs like Rakim, GZA, and Jay-Z are known for the metaphorical content of their raps. Rappers such as Lupe Fiasco are known for the complexity of their songs that contain metaphors within extended metaphors.
|
115 |
+
|
116 |
+
Many hip-hop listeners believe that a rapper's lyrics are enhanced by a complex vocabulary. Kool Moe Dee claims that he appealed to older audiences by using a complex vocabulary in his raps.[69] Rap is famous, however, for having its own vocabulary—from international hip-hop slang to regional slang. Some artists, like the Wu-Tang Clan, develop an entire lexicon among their clique. African-American English has always had a significant effect on hip-hop slang and vice versa. Certain regions have introduced their unique regional slang to hip-hop culture, such as the Bay Area (Mac Dre, E-40), Houston (Chamillionaire, Paul Wall), Atlanta (Ludacris, Lil Jon, T.I.), and Kentucky (Nappy Roots). The Nation of Gods and Earths, aka The Five Percenters, has influenced mainstream hip-hop slang with the introduction of phrases such as "word is bond" that have since lost much of their original spiritual meaning. Preference toward one or the other has much to do with the individual; GZA, for example, prides himself on being very visual and metaphorical but also succinct, whereas underground rapper MF DOOM is known for heaping similes upon similes. In still another variation, 2Pac was known for saying exactly what he meant, literally and clearly.
Rap music's movement into popular culture in the 1990s can be credited to the album Niggaz4life by Niggaz With Attitude, which in 1991 made them the first rap group ever to take the top spot of the Billboard Top 200 in the United States.[114] With this victory came the beginning of an era of popular culture guided by the musical influences of hip-hop and rap itself, moving away from the influences of rock music.[114] As rap continued to develop and disseminate further, it went on to influence clothing brands, movies, sports, and dancing through popular culture. As rap has become more of a presence in popular culture, it has focused on a particular demographic: adolescents and young adults.[115] As such, it has had a significant impact on the modern vernacular of this portion of the population, which has diffused throughout society.
The effects of rap music on modern vernacular can be explored through the study of semiotics. Semiotics is the study of signs and symbols, or the study of language as a system.[116] French literary theorist Roland Barthes furthers this study with his own theory of myth.[117] He maintains that the first order of signification is language and that the second is "myth", arguing that a word has both its literal meaning and its mythical meaning, which is heavily dependent on socio-cultural context.[117] To illustrate, Barthes uses the example of a rat: it has a literal meaning (a physical, objective description) and a greater socio-cultural meaning.[117] This contextual meaning is subjective and is dynamic within society.
Through Barthes' semiotic theory of language and myth, it can be shown that rap music has culturally influenced the language of its listeners, as rappers attach new connotative meanings to words that already exist. As more people listen to rap, the words used in the lyrics become culturally bound to the song and are then disseminated through the conversations that people have using these words.
Most often, the terms that rappers use are pre-established words that have been given new meaning through their music and that are eventually disseminated through social spheres.[118] Such a newly contextualized word is called a neosemanticism. Neosemanticisms are often forgotten words brought forward from subcultures that attract the attention of members of the reigning culture of their time; they are then spread by influential voices in society – in this case, rappers.[118] To illustrate, the acronym YOLO was popularized by rapper, actor and R&B singer Drake in 2012 when he featured it in his song "The Motto".[119] That year the term YOLO was so popular that it was printed on t-shirts, became a trending hashtag on Twitter, and was even cited as the inspiration for several tattoos.[119] However, although the rapper may have come up with the acronym, the motto itself was in no way first established by Drake. Similar messages can be seen in many well-known sayings, and as early as 1896 in the English translation of Honoré de Balzac's La Comédie humaine, where one of his free-spirited characters tells another, "You Only Live Once!".[120] Another example of a neosemanticism is the word "broccoli". Rapper E-40 initially used the word "broccoli" to refer to marijuana on his hit track "Broccoli" in 1993.[121] In contemporary society, artists D.R.A.M. and Lil Yachty are often credited for this slang because of their hit song, also titled "Broccoli".[121]
With the rise of technology and mass media, the dissemination of subcultural terms has only become easier. Dick Hebdige, author of Subculture: The Meaning of Style, notes that subcultures often use music to vocalize the struggles of their experiences.[122] As rap is itself the culmination of a prevalent subculture within African-American social spheres, elements of that culture are often disseminated through rap lyrics.[115]
Lyrics can be categorized as either historically influenced or (more commonly) as slang.[115] Vernon Andrews, professor of the course American Studies 111: Hip-Hop Culture, suggests that many words, such as "hood", "homie", and "dope", are historically influenced.[115] Most importantly, this also brings forward the anarchistic culture of rap music. Common themes in rap are anti-establishment sentiment and, in its place, the promotion of Black excellence and diversity.[115] Rap can also be seen to reclaim words, notably "nigga", a historical term used to subjugate and oppress Black people in America.[115] This word has been reclaimed by Black Americans and is heavily used in rap music. Niggaz With Attitude embodies this notion by using it as the first word of their influential rap group name.[115]
There are two kinds of freestyle rap: one is scripted (recitation) but has no particular overriding subject matter; the second, typically referred to as "freestyling" or "spitting", is the improvisation of rapped lyrics. When freestyling, some rappers inadvertently reuse old lines, or even "cheat" by preparing segments or entire verses in advance. Therefore, freestyles with proven spontaneity are valued above generic, always-usable lines.[123] Rappers will often reference places or objects in their immediate setting, or specific (usually demeaning) characteristics of opponents, to prove their authenticity and originality.
Battle rapping, which can be freestyled, is the competition between two or more rappers in front of an audience. The tradition of insulting one's friends or acquaintances in rhyme goes back to the dozens, and was portrayed famously by Muhammad Ali in his boxing matches. The winner of a battle is decided by the crowd and/or preselected judges. According to Kool Moe Dee, a successful battle rap focuses on an opponent's weaknesses, rather than one's own strengths. Television shows such as MTV's DFX and BET's 106 and Park host weekly freestyle battles live on the air. Battle rapping gained widespread public recognition outside of the African-American community with rapper Eminem's movie 8 Mile.
The strongest battle rappers will generally perform their rap fully freestyled. This is the most effective form in a battle, as the rapper can comment on the other person, whether it be what they look like, how they talk, or what they wear. It also allows the rapper to reverse a line used to "diss" him or her if they are the second rapper to battle. This is known as a "flip". Jin The Emcee was considered a "World Champion" battle rapper in the mid-2000s.[citation needed]
Throughout hip hop's history, new musical styles and genres that contain rapping have developed. Entire genres, such as rap rock and its derivatives rapcore and rap metal (rock/metal/punk with rapped vocals), or hip house, have resulted from the fusion of rap and other styles. Many popular music genres with a focus on percussion have contained rapping at some point, be it disco (DJ Hollywood), jazz (Gang Starr), new wave (Blondie), funk (Fatback Band), contemporary R&B (Mary J. Blige), reggaeton (Daddy Yankee), or even Japanese dance music (Soul'd Out). UK garage music has begun to focus increasingly on rappers in a subgenre called grime, which emerged in London in the early 2000s and was pioneered and popularized by the MC Dizzee Rascal. The music's increased popularity has led more UK rappers to travel to and tour in America, with artists such as Sway DaSafo possibly signing with Akon's label Konvict. Hyphy is the latest of these spin-offs. It is typified by slowed-down atonal vocals with instrumentals that borrow heavily from the hip-hop scene and lyrics centered on illegal street racing and car culture. Another Oakland, California group, Beltaine's Fire, has recently gained attention for their Celtic fusion sound, which blends hip-hop beats with Celtic melodies. Unlike the majority of hip-hop artists, all their music is performed live without samples, synths, or drum machines, drawing comparisons to The Roots and Rage Against the Machine.
Bhangra, a widely popular style of music from Punjab, India, has been mixed numerous times with reggae and hip-hop music. The most popular song in this genre in the United States was "Mundian to Bach Ke" ("Beware of the Boys") by Panjabi MC and Jay-Z. Although "Mundian to Bach Ke" had been released previously, the remix with Jay-Z popularized the genre further.
Although the majority of rappers are male, there have been a number of female rap stars, including Lauryn Hill, MC Lyte, Lil' Kim, Missy Elliott, Queen Latifah, Da Brat, Eve, Trina, Nicki Minaj, Khia, M.I.A., CL from 2NE1, Foxy Brown, Iggy Azalea, and Lisa Lopes from TLC. There is also deaf rap artist Signmark.
en/4924.html.txt
ADDED
@@ -0,0 +1,140 @@
en/4925.html.txt
ADDED
@@ -0,0 +1,145 @@
Sexual intercourse (or coitus or copulation) is sexual activity typically involving the insertion and thrusting of the penis into the vagina for sexual pleasure, reproduction, or both.[1] This is also known as vaginal intercourse or vaginal sex.[2][3] Other forms of penetrative sexual intercourse include anal sex (penetration of the anus by the penis), oral sex (penetration of the mouth by the penis or oral penetration of the female genitalia), fingering (sexual penetration by the fingers), and penetration by use of a dildo (especially a strap-on dildo).[4][5] These activities involve physical intimacy between two or more individuals and are usually used among humans solely for physical or emotional pleasure and can contribute to human bonding.[4][6]
There are different views on what constitutes sexual intercourse or other sexual activity, which can impact on views on sexual health.[7] Although sexual intercourse, particularly the variant coitus, generally denotes penile–vaginal penetration and the possibility of creating offspring,[1] it also commonly denotes penetrative oral sex and penile–anal sex, especially the latter.[8] It usually encompasses sexual penetration, while non-penetrative sex has been labeled "outercourse",[9] but non-penetrative sex may also be considered sexual intercourse.[4][10] Sex, often a shorthand for sexual intercourse, can mean any form of sexual activity.[7] Because people can be at risk of contracting sexually transmitted infections during these activities, safer sex practices are recommended by health professionals to reduce transmission risk.[11][12]
Various jurisdictions place restrictions on certain sexual acts, such as incest, sexual activity with minors, prostitution, rape, zoophilia, sodomy, premarital and extramarital sex. Religious beliefs also play a role in personal decisions about sexual intercourse or other sexual activity, such as decisions about virginity,[13][14] or legal and public policy matters. Religious views on sexuality vary significantly between different religions and sects of the same religion, though there are common themes, such as prohibition of adultery.
Reproductive sexual intercourse between non-human animals is more often called copulation, and sperm may be introduced into the female's reproductive tract in non-vaginal ways among the animals, such as by cloacal copulation. For most non-human mammals, mating and copulation occur at the point of estrus (the most fertile period of time in the female's reproductive cycle), which increases the chances of successful impregnation.[15][16] However, bonobos, dolphins and chimpanzees are known to engage in sexual intercourse regardless of whether the female is in estrus, and to engage in sex acts with same-sex partners.[17] Like humans engaging in sexual activity primarily for pleasure, this behavior in these animals is also presumed to be for pleasure, and a contributing factor to strengthening their social bonds.[18]
Sexual intercourse may be called coitus, copulation, coition, or intercourse. Coitus is derived from the Latin word coitio or coire, meaning "a coming together or joining together" or "to go together", and is known under different ancient Latin names for a variety of sexual activities, but usually denotes penile–vaginal penetration.[19] This is often called vaginal intercourse or vaginal sex.[2][20] Vaginal sex, and less often vaginal intercourse, may also denote any vaginal sexual activity, particularly if penetrative, including sexual activity between lesbian couples.[21][22] Copulation, by contrast, more often denotes the mating process, especially for non-human animals; it can mean a variety of sexual activities between opposite-sex or same-sex pairings,[23] but generally means the sexually reproductive act of transferring sperm from a male to a female or sexual procreation between a man and a woman.[23][24][25]
Although sex and "having sex" also most commonly denote penile–vaginal intercourse,[26] sex can be significantly broad in its meaning and may cover any penetrative or non-penetrative sexual activity between two or more people.[7] The World Health Organization (WHO) states that non-English languages and cultures use different words for sexual activity, "with slightly different meanings".[7] Various vulgarisms, slang, and euphemisms are used for sexual intercourse or other sexual activity, such as fuck, shag, and the phrase "sleep together".[27][28][29] The laws of some countries use the euphemism "carnal knowledge." Penetration of the vagina by the erect penis is additionally known as intromission, or by the Latin name immissio penis (Latin for "insertion of the penis").[30] The age of first sexual intercourse is called sexarche.[31][32]
Vaginal, anal and oral sex are recognized as sexual intercourse more often than other sexual behaviors.[33] Sexual activity that does not involve penile-vaginal sex or other sexual penetration might be used to retain virginity (sometimes called "technical virginity") or labeled "outercourse".[34] One reason virginity loss is often based on penile–vaginal intercourse is that heterosexual couples may engage in anal or oral sex as a way of being sexually active while maintaining that they are virgins since they have not engaged in the reproductive act of coitus.[35] Some gay men consider frotting or oral sex as a way of maintaining their virginities, with penile-anal penetration used as sexual intercourse and for virginity loss, while other gay men may consider frotting or oral sex as their main forms of sexual activity.[13][36][37] Lesbians may categorize oral sex or fingering as sexual intercourse and subsequently an act of virginity loss,[13][38] or tribadism as a primary form of sexual activity.[39][40]
Researchers commonly use sexual intercourse to denote penile–vaginal intercourse while using specific words, such as anal sex or oral sex, for other sexual behaviors.[41] Scholars Richard M. Lerner and Laurence Steinberg state that researchers also "rarely disclose" how they conceptualize sex "or even whether they resolved potential discrepancies" in conceptualizations of sex.[38] Lerner and Steinberg attribute researchers' focus on penile–vaginal sex to "the larger culture's preoccupation with this form of sexual activity," and have expressed concern that the "widespread, unquestioned equation of penile–vaginal intercourse with sex reflects a failure to examine systematically 'whether the respondent's understanding of the question [about sexual activity] matches what the researcher had in mind'".[38] This focus can also relegate other forms of mutual sexual activity to foreplay or contribute to them not being regarded as "real sex", and limits the meaning of rape.[42][43] It may also be that conceptually conflating sexual activity with vaginal intercourse and sexual function hinders and limits information about sexual behavior that non-heterosexual people may be engaging in, or information about heterosexuals who may be engaging in non–vaginal sexual activity.[42]
Studies regarding the meaning of sexual intercourse sometimes conflict. While most consider penile–vaginal intercourse to be sex, whether anal or oral intercourse are considered sex is more debatable, with oral sex ranking lowest.[44][45] The Centers for Disease Control and Prevention (CDC) stated that "although there are only limited national data about how often adolescents engage in oral sex, some data suggest that many adolescents who engage in oral sex do not consider it to be 'sex'; therefore they may use oral sex as an option to experience sex while still, in their minds, remaining abstinent".[46] Upton et al. stated, "It is possible that individuals who engage in oral sex, but do not consider it as 'sex', may not associate the acts with the potential health risks they can bring."[44] In other cases, condom use is a factor, with some men stating that sexual activity involving the protection of a condom is not "real sex" or "the real thing".[47][48] This view is common among men in Africa,[47][48] where sexual activity involving the protection of a condom is often associated with emasculation because condoms prevent direct penile–to–skin genital contact.[47]
Sexual intercourse or other sexual activity can encompass various sexually stimulating factors (physiological stimulation or psychological stimulation), including different sex positions (such as the missionary position, the most common human sex position) or the use of sex toys.[49][50] Foreplay may precede some sexual activities, often leading to sexual arousal of the partners and resulting in the erection of the penis or natural lubrication of the vagina.[51] It is also common for people to be as sexually satisfied by being kissed, touched erotically, or held as they are by sexual intercourse.[52]
Non-primate females copulate only when in estrus,[53] but sexual intercourse is possible at any time of the menstrual cycle for women.[54][55] Sex pheromones facilitate copulatory reflexes in various organisms, but, in humans, the detection of pheromones is impaired and they have only residual effects.[56] Non-primate females put themselves in the crucial lordosis position and remain motionless, but these motor copulatory reflexes are no longer functional in women.[53]
During coitus, the partners orient their hips to allow the penis to move back and forth in the vagina to cause friction, typically without fully removing the penis. In this way, they stimulate themselves and each other, often continuing until orgasm in either or both partners is achieved.[10][58]
For human females, stimulation of the clitoris plays a significant role in sexual activity; 70–80% of women require direct clitoral stimulation to achieve orgasm,[59][60][61] though indirect clitoral stimulation (for example, via vaginal intercourse) may also be sufficient (see orgasm in females).[62][63] Because of this, some couples may engage in the woman on top position or the coital alignment technique, a technique combining the "riding high" variation of the missionary position with pressure-counterpressure movements performed by each partner in rhythm with sexual penetration, to maximize clitoral stimulation.[57][64]
Anal sex involves stimulation of the anus, anal cavity, sphincter valve or rectum; it most commonly means the insertion of a man's penis into another person's rectum, but may also mean the use of sex toys or fingers to penetrate the anus, or oral sex on the anus (anilingus), or pegging.[65]
Oral sex consists of all the sexual activities that involve the use of the mouth and throat to stimulate genitalia or anus. It is sometimes performed to the exclusion of all other forms of sexual activity, and may include the ingestion or absorption of semen (during fellatio) or vaginal fluids (during cunnilingus).[49][66]
Fingering (or digital penetration or digital intercourse) involves the manual manipulation of the clitoris, rest of the vulva, vagina or anus for the purpose of sexual arousal and sexual stimulation; it may constitute the entire sexual encounter or it may be part of mutual masturbation, foreplay or other sexual activities.[22][67][68]
Natural human reproduction involves penile–vaginal penetration,[70] during which semen, containing male gametes known as sperm cells or spermatozoa, is expelled via ejaculation through the penis into the vagina. The sperm passes through the vaginal vault, cervix and into the uterus, and then into the fallopian tubes. Millions of sperm are present in each ejaculation to increase the chances of fertilization (see sperm competition), but only one reaching an egg or ovum is sufficient to achieve fertilization. When a fertile ovum from the female is present in the fallopian tubes, the male gamete joins with the ovum, resulting in fertilization and the formation of a new embryo. When a fertilized ovum reaches the uterus, it becomes implanted in the lining of the uterus (the endometrium) and a pregnancy begins.[70][71]
Pregnancy rates for sexual intercourse are highest during the menstrual cycle time from some 5 days before until 1 to 2 days after ovulation.[72] For optimal pregnancy chance, there are recommendations of sexual intercourse every 1 or 2 days,[73] or every 2 or 3 days.[74] Studies have shown no significant difference between different sex positions and pregnancy rate, as long as it results in ejaculation into the vagina.[75]
When a sperm donor has sexual intercourse with a woman who is not his partner and for the sole purpose of impregnating the woman, this may be known as natural insemination, as opposed to artificial insemination. Artificial insemination is a form of assisted reproductive technology, which are methods used to achieve pregnancy by artificial or partially artificial means.[76] For artificial insemination, sperm donors may donate their sperm through a sperm bank, and the insemination is performed with the express intention of attempting to impregnate the female; to this extent, its purpose is the medical equivalent of sexual intercourse.[77][78] Reproductive methods also extend to gay and lesbian couples. For gay male pairings, there is the option of surrogate pregnancy; for lesbian couples, there is donor insemination in addition to choosing surrogate pregnancy.[79][80]
There are a variety of safe sex methods that are practiced by heterosexual and same-sex couples, including non-penetrative sex acts,[12][81] and heterosexual couples may use oral or anal sex (or both) as a means of birth control.[82][83] However, pregnancy can still occur with anal sex or other forms of sexual activity if the penis is near the vagina (such as during intercrural sex or other genital-genital rubbing) and its sperm is deposited near the vagina's entrance and travels along the vagina's lubricating fluids; the risk of pregnancy can also occur without the penis being near the vagina because sperm may be transported to the vaginal opening by the vagina coming in contact with fingers or other non-genital body parts that have come in contact with semen.[84][85]
Safe sex is a relevant harm reduction philosophy[86] and condoms are used as a form of safe sex and contraception. Condoms are widely recommended for the prevention of sexually transmitted infections (STIs).[86] According to reports by the National Institutes of Health (NIH) and World Health Organization (WHO), correct and consistent use of latex condoms reduces the risk of HIV/AIDS transmission by approximately 85–99% relative to risk when unprotected.[87][88] Condoms are rarely used for oral sex and there is significantly less research on behaviors with regard to condom use for anal and oral sex.[89] The most effective way to avoid sexually transmitted infections is to abstain from sexual intercourse, especially vaginal, anal, and oral sexual intercourse.[86]
Decisions and options concerning birth control can be affected by cultural reasons, such as religion, gender roles or folklore.[90] In the predominantly Catholic countries Ireland, Italy and the Philippines, fertility awareness and the rhythm method are emphasized while disapproval is expressed with regard to other contraceptive methods.[11] Worldwide, sterilization is a more common birth control method,[11] and use of the intrauterine device (IUD) is the most common and effective way of reversible contraception.[11][91] Conception and contraception are additionally a life-and-death situation in developing countries, where one in three women give birth before age 20; however, 90% of unsafe abortions in these countries could be prevented by effective contraception use.[11]
The National Survey of Sexual Health and Behavior (NSSHB) indicated in 2010 that "1 of 4 acts of vaginal intercourse are condom-protected in the U.S. (1 in 3 among singles)," that "condom use is higher among black and Hispanic Americans than among white Americans and those from other racial groups," and that "adults using a condom for intercourse were just as likely to rate the sexual extent positively in terms of arousal, pleasure and orgasm than when having intercourse without one".[92]
Penile–vaginal penetration is the most common form of sexual intercourse.[2][20] Studies indicate that most heterosexual couples engage in vaginal intercourse nearly every sexual encounter.[20] The National Survey of Sexual Health and Behavior (NSSHB) reported in 2010 that vaginal intercourse is "the most prevalent sexual behavior among men and women of all ages and ethnicities".[20] Clint E. Bruess et al. stated that it "is the most frequently studied behavior" and is "often the focus of sexuality education programming for youth."[93] Weiten et al. said that it "is the most widely endorsed and practiced sexual act in our society."[40]
Regarding oral or anal intercourse, the CDC stated in 2009, "Studies indicate that oral sex is commonly practiced by sexually active male-female and same-gender couples of various ages, including adolescents."[46] Oral sex is significantly more common than anal sex.[40][45] The 2010 NSSHB study reported that vaginal intercourse was practiced more than insertive anal intercourse among men, but that 13% to 15% of men aged 25 to 49 practiced insertive anal intercourse. Receptive anal intercourse was infrequent among men, with approximately 7% of men aged 14 to 94 years old having said that they were a receptive partner during anal intercourse. It said that women engage in anal intercourse less commonly than men, but that the practice is not uncommon among women; it was estimated that 10% to 14% of women aged 18 to 39 years old practiced anal sex in the past 90 days, and that most of the women said they practiced it once a month or a few times a year.[20]
The prevalence of sexual intercourse has been compared cross-culturally. In 2003, Michael Bozon of the French Institut national d'études démographiques conducted a cross-cultural study titled "At what age do women and men have their first sexual intercourse?" In the first group of the contemporary cultures he studied, which included sub-Saharan Africa (listing Mali, Senegal and Ethiopia), the data indicated that the age of men at sexual initiation in these societies is later than that of women, but that their sexual initiation is often extra-marital; the study considered the Indian subcontinent to also fall into this group, though data was only available from Nepal.[94][95]
In the second group, the data indicated families encouraged daughters to delay marriage, and to abstain from sexual activity before that time. However, sons are encouraged to gain experience with older women or prostitutes before marriage. The age of men at sexual initiation in these societies is lower than that of women; this group includes south European and Latin cultures (Portugal, Greece and Romania are noted) and those from Latin America (Brazil, Chile, and the Dominican Republic). The study considered many Asian societies to also fall into this group, although matching data was only available from Thailand.[94][95]
In the third group, age of men and women at sexual initiation was more closely matched; there were two sub-groups, however. In non-Latin, Catholic countries (Poland and Lithuania are mentioned), age at sexual initiation was higher, suggesting later marriage and reciprocal valuing of male and female virginity. The same pattern of late marriage and reciprocal valuing of virginity was reflected in Singapore and Sri Lanka. The study considered China and Vietnam to also fall into this group, though data were not available.[94][95] In northern and eastern European countries, age at sexual initiation was lower, with both men and women involved in sexual intercourse before any union formation; the study listed Switzerland, Germany and the Czech Republic as members of this group.[94][95]
Concerning United States data, tabulations by the National Center for Health Statistics report that the age of first sexual intercourse was 17.1 years for both males and females in 2010.[96] The CDC stated that 45.5 percent of girls and 45.7 percent of boys had engaged in sexual activity by 19 in 2002; in 2011, reporting their research from 2006–2010, they stated that 43% of American unmarried teenage girls and 42% of American unmarried teenage boys have ever engaged in sexual intercourse.[97] The CDC also reports that American girls will most likely lose their virginity to a boy who is 1 to 3 years older than they are.[97] Between 1988 and 2002, the percentage of people in the U.S. who had sexual intercourse between the ages of 15 to 19 fell from 60 to 46 percent for never-married males, and from 51 to 46 percent for never-married females.[98]
In humans, sexual intercourse and sexual activity in general have been reported as having health benefits as varied as increased immunity by increasing the body's production of antibodies and subsequent lower blood pressure,[99][100] and decreased risk of prostate cancer.[99] Sexual intimacy and orgasms increase levels of the hormone oxytocin (also known as "the love hormone"), which can help people bond and build trust.[100][101] Oxytocin is believed to have a more significant impact on women than on men, which may be why women associate sexual attraction or sexual activity with romance and love more than men do.[6] A long-term study of 3,500 people between ages 18 and 102 by clinical neuropsychologist David Weeks indicated that, based on impartial ratings of the subjects' photographs, sex on a regular basis helps people look significantly chronologically younger.[102]
Sexually transmitted infections (STIs) are bacteria, viruses or parasites that are spread by sexual contact, especially vaginal, anal, or oral intercourse, or unprotected sex.[103][104] Oral sex is less risky than vaginal or anal intercourse.[105] Many times, STIs initially do not cause symptoms, increasing the risk of unknowingly passing the infection on to a sex partner or others.[106][107]
There are 19 million new cases of sexually transmitted infections every year in the U.S.,[108] and, in 2005, the World Health Organization (WHO) estimated that 448 million people aged 15–49 were infected per year with curable STIs (such as syphilis, gonorrhea and chlamydia).[109] Some STIs can cause a genital ulcer; even if they do not, they increase the risk of both acquiring and passing on HIV up to ten-fold.[109] Hepatitis B can also be transmitted through sexual contact.[110] Globally, there are about 257 million chronic carriers of hepatitis B.[111] HIV is one of the world's leading infectious killers; in 2010, approximately 30 million people were estimated to have died because of it since the beginning of the epidemic. Of the 2.7 million new HIV infections estimated to occur worldwide in 2010, 1.9 million (70%) were in Africa. The World Health Organization also stated that the "estimated 1.2 million Africans who died of HIV-related illnesses in 2010 comprised 69% of the global total of 1.8 million deaths attributable to the epidemic."[112] HIV is diagnosed by blood tests, and while no cure has been found, it can be controlled with antiretroviral drugs, and patients can enjoy healthy and productive lives.[113]
In cases where infection is suspected, early medical intervention is highly beneficial in all cases. The CDC stated "the risk of HIV transmission from an infected partner through oral sex is much less than the risk of HIV transmission from anal or vaginal sex," but that "measuring the exact risk of HIV transmission as a result of oral sex is very difficult" and that this is "because most sexually active individuals practice oral sex in addition to other forms of sex, such as vaginal and/or anal sex, when transmission occurs, it is difficult to determine whether it occurred as a result of oral sex or other more risky sexual activities". They added that "several co-factors may increase the risk of HIV transmission through oral sex"; this includes ulcers, bleeding gums, genital sores, and the presence of other STIs.[46]
In 2005, the World Health Organization estimated that 123 million women become pregnant worldwide each year, and around 87 million of those pregnancies or 70.7% are unintentional. Approximately 46 million pregnancies per year reportedly end in induced abortion.[114] Approximately 6 million U.S. women become pregnant per year. Out of known pregnancies, two-thirds result in live births and roughly 25% in abortions; the remainder end in miscarriage. However, many more women become pregnant and miscarry without even realizing it, instead mistaking the miscarriage for an unusually heavy menstruation.[115] The U.S. teenage pregnancy rate fell by 27 percent between 1990 and 2000, from 116.3 pregnancies per 1,000 girls aged 15–19 to 84.5. This data includes live births, abortions, and fetal losses. Almost 1 million American teenage women, 10% of all women aged 15–19 and 19% of those who report having had intercourse, become pregnant each year.[116]
Sexual activity can increase the expression of a gene transcription factor called ΔFosB (delta FosB) in the brain's reward center;[117][118][119] consequently excessively frequent engagement in sexual activity on a regular (daily) basis can lead to the overexpression of ΔFosB, inducing an addiction to sexual activity.[117][118][119] Sexual addiction or hypersexuality is often considered an impulse control disorder or a behavioral addiction. It has been linked to atypical levels of dopamine, a neurotransmitter. This behavior is characterized by a fixation on sexual intercourse and disinhibition. It was proposed that this 'addictive behavior' be classified in DSM-5 as an impulsive–compulsive behavioral disorder. Addiction to sexual intercourse is thought to be genetically linked. Those having an addiction to sexual intercourse have a higher response to visual sexual cues in the brain. Those seeking treatment will typically see a physician for pharmacological management and therapy.[120] One form of hypersexuality is Kleine–Levin syndrome. It is manifested by hypersomnia and hypersexuality and remains relatively rare.[121]
Sexual activity can directly cause death, particularly due to coronary circulation complications, which is sometimes called coital death, coital sudden death or coital coronary.[10][122][123] However, coital deaths are significantly rare.[122] People, especially those who get little or no physical exercise, have a slightly increased risk of triggering a heart attack or sudden cardiac death when they engage in sexual intercourse or any vigorous physical exercise that is engaged in on a sporadic basis.[123] Regular exercise reduces, but does not eliminate, the increased risk.[123]
Sexual intercourse, when involving a male participant, often ends when the male has ejaculated, and thus the partner might not have time to reach orgasm.[124] In addition, premature ejaculation (PE) is common, and women often require a substantially longer duration of stimulation with a sexual partner than men do before reaching an orgasm.[51][125][126] Scholars, such as Weiten et al., state that "many couples are locked into the idea that orgasms should be achieved only through intercourse [penile-vaginal sex]," that "the word foreplay suggests that any other form of sexual stimulation is merely preparation for the 'main event'" and that "because women reach orgasm through intercourse less consistently than men," they are likelier than men to fake an orgasm to satisfy their sexual partners.[51]
In 1991, scholars from the Kinsey Institute stated, "The truth is that the time between penetration and ejaculation varies not only from man to man, but from one time to the next for the same man." They added that the appropriate length for sexual intercourse is the length of time it takes for both partners to be mutually satisfied, emphasizing that Kinsey "found that 75 percent of men ejaculated within two minutes of penetration. But he didn't ask if the men or their partners considered two minutes mutually satisfying" and "more recent research reports slightly longer times for intercourse".[127] A 2008 survey of Canadian and American sex therapists stated that the average time for heterosexual intercourse (coitus) was 7 minutes and that 1 to 2 minutes was too short, 3 to 7 minutes was adequate and 7 to 13 minutes desirable, while 10 to 30 minutes was too long.[20][128]
Anorgasmia is regular difficulty reaching orgasm after ample sexual stimulation, causing personal distress.[129] This is significantly more common in women than in men,[130][131] which has been attributed to a lack of sex education with regard to women's bodies, especially in sex-negative cultures; for example, many women are not taught that clitoral stimulation is usually key to their orgasm.[131] The physical structure of coitus favors penile stimulation over clitoral stimulation; the location of the clitoris then usually necessitates manual or oral stimulation in order for the woman to achieve orgasm.[51] Approximately 25% of women report difficulties with orgasm,[20] 10% of women have never had an orgasm,[132] and an estimated 40–50% have either complained about sexual dissatisfaction or experienced difficulty becoming sexually aroused at some point in their lives.[133]
Vaginismus is involuntary tensing of the pelvic floor musculature, making coitus, or any form of penetration of the vagina, distressing, painful and sometimes impossible for women. It is a conditioned reflex of the pubococcygeus muscle, which is sometimes referred to as the PC muscle. Vaginismus can be hard to overcome because if a woman expects to experience pain during sexual intercourse, this can cause a muscle spasm, which results in painful sexual intercourse.[131][134] Treatment of vaginismus often includes both psychological and behavioral techniques, including the use of vaginal dilators.[135] Additionally, the use of Botox as a medical treatment for vaginismus has been tested and administered.[136] Painful or uncomfortable sexual intercourse may also be categorized as dyspareunia.[135]
Approximately 40% of males reportedly suffer from some form of erectile dysfunction (ED) or impotence, at least occasionally.[137] Premature ejaculation has been reported to be more common than erectile dysfunction, although some estimates suggest otherwise.[125][126][137] Due to various meanings of the disorder, estimates for the prevalence of premature ejaculation vary significantly more than for erectile dysfunction.[125][126] For example, the Mayo Clinic states, "Estimates vary, but as many as 1 out of 3 men may be affected by [premature ejaculation] at some time."[138] Further, "Masters and Johnson speculated that premature ejaculation is the most common sexual dysfunction, even though more men seek therapy for erectile difficulties" and that this is because "although an estimated 15 percent to 20 percent of men experience difficulty controlling rapid ejaculation, most do not consider it a problem requiring help, and many women have difficulty expressing their sexual needs".[127] The American Urological Association (AUA) estimates that premature ejaculation could affect 21 percent of men in the United States.[139]
For those whose impotence is caused by medical conditions, prescription drugs such as Viagra, Cialis, and Levitra are available. However, doctors caution against the unnecessary use of these drugs because they are accompanied by serious risks such as increased chance of heart attack.[140] The selective serotonin reuptake inhibitor (SSRI) and antidepressant drug dapoxetine has been used to treat premature ejaculation.[141] In clinical trials, those with PE who took dapoxetine experienced sexual intercourse three to four times longer before orgasm than without the drug.[142] Another ejaculation-related disorder is delayed ejaculation, which can be caused as an unwanted side effect of antidepressant medications such as fluvoxamine; however, all SSRIs have ejaculation-delaying effects, and fluvoxamine has the least ejaculation-delaying effects.[143]
Sexual intercourse remains possible after major medical treatment of the reproductive organs and structures. This is especially true for women. Even after extensive gynecological surgical procedures (such as hysterectomy, oophorectomy, salpingectomy, dilation and curettage, hymenotomy, Bartholin gland surgery, abscess removal, vestibulectomy, labia minora reduction, cervical conization, surgical and radiological cancer treatments and chemotherapy), coitus can continue. Reconstructive surgery remains an option for women who have experienced benign and malignant conditions.[144]
Obstacles that those with disabilities face with regard to engaging in sexual intercourse include pain, depression, fatigue, negative body image, stiffness, functional impairment, anxiety, reduced libido, hormonal imbalance, and drug treatment or side effects. Sexual functioning has been regularly identified as a neglected area of the quality of life in patients with rheumatoid arthritis.[145] For those who must take opioids for pain control, sexual intercourse can become more difficult.[146] Having a stroke can also substantially impair the ability to engage in sexual intercourse.[147] Although disability-related pain, including as a result of cancer, and mobility impairment can hamper sexual intercourse, in many cases, the most significant impediments to sexual intercourse for individuals with a disability are psychological.[148] In particular, people who have a disability can find sexual intercourse daunting due to issues involving their self-concept as a sexual being, or a partner's discomfort or perceived discomfort.[148] Temporary difficulties can arise with alcohol and sex, as alcohol can initially increase interest through disinhibition but decrease capacity with greater intake; however, disinhibition can vary depending on the culture.[149][150]
People with mental disabilities also face challenges in participating in sexual intercourse. Women with intellectual disabilities (ID) are often presented with situations that prevent sexual intercourse. This can include the lack of a knowledgeable healthcare provider trained and experienced in counseling those with ID on sexual intercourse. Those with ID may have hesitations regarding the discussion of the topic of sex, a lack of sexual knowledge and limited opportunities for sex education. In addition, there are other barriers such as a higher prevalence of sexual abuse and assault. These crimes often remain underreported. There remains a lack of "dialogue around this population's human right to consensual sexual expression, undertreatment of menstrual disorders, and legal and systemic barriers". Women with ID may lack sexual health care and sex education. They may not recognize sexual abuse. Consensual sexual intercourse is not always an option for some. Those with ID may have limited knowledge of and access to contraception, and to screening for sexually transmitted infections and cervical cancer.[151]
Sexual intercourse may be for reproductive, relational, or recreational purposes.[152] It often plays a strong role in human bonding.[6] In many societies, it is normal for couples to have sexual intercourse while using some method of birth control, sharing pleasure and strengthening their emotional bond through sexual activity even though they are deliberately avoiding pregnancy.[6]
In humans and bonobos, the female undergoes relatively concealed ovulation so that male and female partners commonly do not know whether she is fertile at any given moment. One possible reason for this distinct biological feature may be formation of strong emotional bonds between sexual partners important for social interactions and, in the case of humans, long-term partnership rather than immediate sexual reproduction.[54]
Sexual dissatisfaction due to the lack of sexual intercourse is associated with increased risk of divorce and relationship dissolution, especially for men.[153][154][155] Some research, however, indicates that general dissatisfaction with marriage for men results if their wives flirted with, erotically kissed or became romantically or sexually involved with another man (infidelity),[153][154] and that this is especially the case for men with a lower emotional and composite marital satisfaction.[155] Other studies report that the lack of sexual intercourse does not significantly result in divorce, though it is commonly one of the various contributors to it.[156][157] According to the 2010 National Survey of Sexual Health and Behavior (NSSHB), men whose most recent sexual encounter was with a relationship partner reported greater arousal, greater pleasure, fewer problems with erectile function and orgasm, and less pain during the event than men whose last sexual encounter was with a non-relationship partner.[158]
For women, there is often a complaint about the lack of their spouses' sexual spontaneity. Decreased sexual activity among these women may be the result of their perceived failure to maintain ideal physical attractiveness or because their sexual partners' health issues have hindered sexual intercourse.[159] Some women express that their most satisfying sexual experiences entail being connected to someone, rather than solely basing satisfaction on orgasm.[124][160] With regard to divorce, women are more likely to divorce their spouses for a one-night stand or various infidelities if they are in less cooperative or high-conflict marriages.[155]
Research additionally indicates that non-married couples who are cohabiting engage in sexual intercourse more often than married couples, and are more likely to participate in sexual activity outside of their sexual relationships; this may be due to the "honeymoon" effect (the newness or novelty of sexual intercourse with the partner), since sexual intercourse is usually practiced less the longer a couple is married, with couples engaging in sexual intercourse or other sexual activity once or twice a week, or approximately six to seven times a month.[161] Sexuality in older age also affects the frequency of sexual intercourse, as older people generally engage in sexual intercourse less frequently than younger people do.[161]
Adolescents commonly use sexual intercourse for relational and recreational purposes, which may negatively or positively impact their lives. For example, while teenage pregnancy may be welcomed in some cultures, it is also commonly disparaged, and research suggests that the earlier onset of puberty for children puts pressure on children and teenagers to act like adults before they are emotionally or cognitively ready.[162] Some studies have concluded that engaging in sexual intercourse leaves adolescents, especially girls, with higher levels of stress and depression, and that girls may be likelier to engage in sexual risk (such as sexual intercourse without the use of a condom),[163][164] but it may be that further research is needed in these areas.[164] In some countries, such as the United States, sex education and abstinence-only sex education curricula are available to educate adolescents about sexual activity; these programs are controversial, as debate exists as to whether teaching children and adolescents about sexual intercourse or other sexual activity should only be left up to parents or other caregivers.[165]
Some studies from the 1970s through 1990s suggested an association between self-esteem and sexual intercourse among adolescents,[166] while other studies, from the 1980s and 1990s, reported that the research generally indicates little or no relationship between self-esteem and sexual activity among adolescents.[167] By the 1990s, the evidence mostly supported the latter,[167] and further research has supported little or no relationship between self-esteem and sexual activity among adolescents.[168][169] Scholar Lisa Arai stated, "The idea that early sexual activity and pregnancy is linked to low self-esteem became fashionable in the latter half of the 20th century, particularly in the US," adding that, "Yet, in a systematic review of the relationship between self-esteem and teenagers' sexual behaviours, attitudes and intentions (which analyzed findings from 38 publications) 62% of behavioral findings and 72% of the attitudinal findings exhibited no statistically significant associations (Goodson et al, 2006)."[169] Studies that do find a link suggest that non-virgin boys have higher self-esteem than virgin boys and that girls who have low self-esteem and poor self-image are more prone to risk-taking behaviors, such as unprotected sex and multiple sexual partners.[166][168][169]
Psychiatrist Lynn Ponton wrote, "All adolescents have sex lives, whether they are sexually active with others, with themselves, or seemingly not at all", and that viewing adolescent sexuality as a potentially positive experience, rather than as something inherently dangerous, may help young people develop healthier patterns and make more positive choices regarding sexual activity.[162] Researchers state that long-term romantic relationships allow adolescents to gain the skills necessary for high-quality relationships later in life.[170] Overall, positive romantic relationships among adolescents can result in long-term benefits. High-quality romantic relationships are associated with higher commitment in early adulthood,[171] and are positively associated with social competence.[172][173]
While sexual intercourse, as coitus, is the natural mode of reproduction for the human species, humans have intricate moral and ethical guidelines which regulate the practice of sexual intercourse and vary according to religious and governmental laws. Some governments and religions also have strict designations of "appropriate" and "inappropriate" sexual behavior, which include restrictions on the types of sex acts which are permissible. A historically prohibited or regulated sex act is anal sex.[174][175]
Sexual intercourse with a person against their will, or without their consent, is rape, but may also be called sexual assault; it is considered a serious crime in most countries.[176][177] More than 90% of rape victims are female, 99% of rapists male, and only about 5% of rapists are strangers to the victims.[177]
Most countries have age of consent laws which set the minimum legal age at which a person may engage in sexual intercourse with an older person, usually 16 to 18 years of age, though it ranges from 12 to 20. In some societies, an age of consent is set by non-statutory custom or tradition.[178] Sex with a person under the age of consent, regardless of their stated consent, is often considered sexual assault or statutory rape, depending on differences in the ages of the participants. Some countries treat any sex with a person of diminished or insufficient mental capacity to give consent, regardless of age, as rape.[179]
Robert Francoeur et al. stated that "prior to the 1970s, rape definitions of sex often included only penile-vaginal sexual intercourse."[180] Authors Pamela J. Kalbfleisch and Michael J. Cody stated that this made it so that if "sex means penile-vaginal intercourse, then rape means forced penile-vaginal intercourse, and other sexual behaviors – such as fondling a person's genitals without her or his consent, forced oral sex, and same-sex coercion – are not considered rape"; they stated that "although some other forms of forced sexual contact are included within the legal category of sodomy (e.g., anal penetration and oral-genital contact), many unwanted sexual contacts have no legal grounding as rape in some states".[43] Ken Plummer argued that the legal meaning "of rape in most countries is unlawful sexual intercourse which means the penis must penetrate the vagina" and that "other forms of sexual violence towards women such as forced oral sex or anal intercourse, or the insertion of other objects into the vagina, constitute the 'less serious' crime of sexual assault".[181]
Over time, the meaning of rape broadened in some parts of the world to include many types of sexual penetration, including anal intercourse, fellatio, cunnilingus, and penetration of the genitals or rectum by an inanimate object.[180] Until 2012, the Federal Bureau of Investigation (FBI) still considered rape a crime solely committed by men against women. In 2012, they changed the meaning from "The carnal knowledge of a female forcibly and against her will" to "The penetration, no matter how slight, of the vagina or anus with any body part or object, or oral penetration by a sex organ of another person, without the consent of the victim." The meaning does not change federal or state criminal codes or impact charging and prosecution on the federal, state or local level, but instead assures that rape will be more accurately reported nationwide.[182][183] In some instances, penetration is not required for the act to be categorized as rape.[184]
In most societies around the world, the concept of incest exists and is criminalized. James Roffee, a senior lecturer in criminology at Monash University,[185] addressed potential harm associated with familial sexual activity, such as children of such unions being born with genetic deficiencies. However, the law is more concerned with protecting the rights of people who are potentially subjected to such abuse. This is why familial sexual relationships are criminalized, even if all parties consent. There are laws prohibiting all kinds of sexual activity between relatives, not necessarily penetrative sex. These laws refer to grandparents, parents, children, siblings, aunts and uncles. There are differences between states in terms of the severity of punishments and what they consider to be a relative, including biological parents, step-parents, adoptive parents and half-siblings.[186]
Another sexual matter concerning consent is zoophilia, which is a paraphilia involving sexual activity between human and non-human animals, or a fixation on such practice.[187][188][189] Human sexual activity with non-human animals is not outlawed in some jurisdictions, but it is illegal in others under animal abuse laws or laws dealing with crimes against nature.[190]
Sexual intercourse has traditionally been considered an essential part of a marriage, with many religious customs requiring consummation of the marriage and citing marriage as the most appropriate union for sexual reproduction (procreation).[191] In such cases, a failure for any reason to consummate the marriage would be considered a ground for annulment (which does not require a divorce process). Sexual relations between marriage partners have been a "marital right" in various societies and religions, both historically and in modern times, especially with regard to a husband's rights to his wife.[192][193][194] Until the late 20th century, there was usually a marital exemption in rape laws which precluded a husband from being prosecuted under the rape law for forced sex with his wife.[195] Author Oshisanya, 'lai Oshitokunbo stated, "As the legal status of women has changed, the concept of a married man's or woman's marital right to sexual intercourse has become less widely held."[196]
Adultery (engaging in sexual intercourse with someone other than one's spouse) has been, and remains, a criminal offense in some jurisdictions.[197][198] Sexual intercourse between unmarried partners and cohabitation of an unmarried couple are also illegal in some jurisdictions.[199][200] Conversely, in other countries, marriage is not required, socially or legally, in order to have sexual intercourse or to procreate (for example, the majority of births are outside of marriage in countries such as Iceland, Norway, Sweden, Denmark, Bulgaria, Estonia, Slovenia, France, Belgium).[201]
With regard to divorce laws, the refusal to engage in sexual intercourse with one's spouse may give rise to grounds for divorce, which may be listed under "grounds of abandonment".[202] Concerning no-fault divorce jurisdictions, author James G. Dwyer stated that no-fault divorce laws "have made it much easier for a woman to exit a marital relationship, and wives have obtained greater control over their bodies while in a marriage" because of legislative and judicial changes regarding the concept of a marital exemption when a man rapes his wife.[192]
There are various legal positions regarding the meaning and legality of sexual intercourse between persons of the same sex or gender. For example, in the 2003 New Hampshire Supreme Court case Blanchflower v. Blanchflower, it was held that female same-sex sexual relations, and same-sex sexual practices in general, did not constitute sexual intercourse, based on a 1961 entry in Webster's Third New International Dictionary that categorizes sexual intercourse as coitus; and thereby an accused wife in a divorce case was found not guilty of adultery.[203][204] Some countries consider same-sex sexual behavior an offense punishable by imprisonment or execution; this is the case, for example, in some Islamic countries, including Iran.[205][206]
Opposition to same-sex marriage is largely based on the belief that sexual intercourse and sexual orientation should be of a heterosexual nature.[207][208][209] The recognition of such marriages is a civil rights, political, social, moral and religious issue in many nations, and conflicts arise over whether same-sex couples should be allowed to enter into marriage, be required to use a different status (such as a civil union, which either grants the same rights as marriage or more limited rights), or not have any such rights. A related issue is whether the word marriage should be applied.[208][209]
There are wide differences in religious views with regard to sexual intercourse in or outside of marriage.
In some cases, the sexual intercourse between two people is seen as counter to religious law or doctrine. In many religious communities, including the Catholic Church and Mahayana Buddhists, religious leaders are expected to refrain from sexual intercourse in order to devote their full attention, energy, and loyalty to their religious duties.[238]
In zoology, copulation often means the process in which a male introduces sperm into the female's body, especially directly into her reproductive tract.[15][24] Spiders have separate male and female sexes. Before mating and copulation, the male spider spins a small web and ejaculates on to it. He then stores the sperm in reservoirs on his large pedipalps, from which he transfers sperm to the female's genitals. The females can store sperm indefinitely.[239]
Many animals that live in water use external fertilization, whereas internal fertilization may have developed from a need to maintain gametes in a liquid medium in the Late Ordovician epoch. Internal fertilization in many vertebrates (such as reptiles, some fish, and most birds) occurs via cloacal copulation (see also hemipenis), while mammals copulate vaginally, and many basal vertebrates reproduce sexually with external fertilization.[240][241]
For primitive insects, the male deposits spermatozoa on the substrate, sometimes stored within a special structure; courtship involves inducing the female to take up the sperm package into her genital opening, but there is no actual copulation.[242][243] In groups that have reproduction similar to spiders, such as dragonflies, males extrude sperm into secondary copulatory structures removed from their genital opening, which are then used to inseminate the female. In dragonflies, it is a set of modified sternites on the second abdominal segment.[244] In advanced groups of insects, the male uses its aedeagus, a structure formed from the terminal segments of the abdomen, to deposit sperm directly (though sometimes in a capsule called a spermatophore) into the female's reproductive tract.[245]
Bonobos, chimpanzees and dolphins are species known to engage in heterosexual behaviors even when the female is not in estrus, which is a point in her reproductive cycle suitable for successful impregnation. These species are also known to engage in same-sex sexual behaviors.[17] In these animals, the use of sexual intercourse has evolved beyond reproduction to apparently serve additional social functions (such as bonding).[18]
en/4926.html.txt
ADDED
@@ -0,0 +1,90 @@
The spleen is an organ found in virtually all vertebrates. Similar in structure to a large lymph node, it acts primarily as a blood filter. The word spleen comes from Ancient Greek σπλήν (splḗn).[1]
The spleen plays important roles in regard to red blood cells (erythrocytes) and the immune system.[2] It removes old red blood cells and holds a reserve of blood, which can be valuable in case of hemorrhagic shock, and also recycles iron. As a part of the mononuclear phagocyte system, it metabolizes hemoglobin removed from senescent red blood cells (erythrocytes). The globin portion of hemoglobin is degraded to its constitutive amino acids, and the heme portion is metabolized to bilirubin, which is removed in the liver.[3]
The spleen synthesizes antibodies in its white pulp and removes antibody-coated bacteria and antibody-coated blood cells by way of blood and lymph node circulation. Its red pulp also holds a reserve of monocytes; these monocytes, upon moving to injured tissue (such as the heart after myocardial infarction), turn into dendritic cells and macrophages while promoting tissue healing.[4][5][6] The spleen is a center of activity of the mononuclear phagocyte system and is analogous to a large lymph node, as its absence causes a predisposition to certain infections.[7]
In humans the spleen is purple in color and is in the left upper quadrant of the abdomen.[3][8]
The spleen is underneath the left part of the diaphragm, and has a smooth, convex surface that faces the diaphragm. It is underneath the ninth, tenth, and eleventh ribs. The other side of the spleen is divided by a ridge into two regions: an anterior gastric portion, and a posterior renal portion. The gastric surface is directed forward, upward, and toward the middle, is broad and concave, and is in contact with the posterior wall of the stomach. Below this it is in contact with the tail of the pancreas. The renal surface is directed medialward and downward. It is somewhat flattened, considerably narrower than the gastric surface, and is in relation with the upper part of the anterior surface of the left kidney and occasionally with the left adrenal gland.
The spleen, in healthy adult humans, is approximately 7 centimetres (2.8 in) to 14 centimetres (5.5 in) in length.
An easy way to remember the anatomy of the spleen is the 1×3×5×7×9×10×11 rule. The spleen is 1 by 3 by 5 inches (3 by 8 by 13 cm), weighs approximately 7 oz (200 g), and lies between the 9th and 11th ribs on the left-hand side and along the axis of the 10th rib. The weight varies between 1 oz (28 g) and 8 oz (230 g) (standard reference range),[10] correlating mainly to height, body weight and degree of acute congestion but not to sex or age.[11]
Spleen seen on abdominal ultrasonography
Maximum length of spleen on abdominal ultrasonography
Back of lumbar region, showing surface markings for kidneys, ureters, and spleen
Side of thorax, showing surface markings for bones, lungs (purple), pleura (blue), and spleen (green)
Near the middle of the spleen is a long fissure, the hilum, which is the point of attachment for the gastrosplenic ligament and the point of insertion for the splenic artery and splenic vein. There are other openings present for lymphatic vessels and nerves.
Like the thymus, the spleen possesses only efferent lymphatic vessels. The spleen is part of the lymphatic system. Both the short gastric arteries and the splenic artery supply it with blood.[12]
The germinal centers are supplied by arterioles called penicilliary radicles.[13]
The spleen is innervated by the splenic plexus, which connects a branch of the celiac ganglia to the vagus nerve.
The underlying central nervous processes coordinating the spleen's function seem to be embedded in the hypothalamic-pituitary-adrenal axis and the brainstem, especially the subfornical organ.[14]
The spleen is unique in respect to its development within the gut. While most of the gut organs are endodermally derived (with the exception of the neural-crest derived adrenal gland), the spleen is derived from mesenchymal tissue.[15] Specifically, the spleen forms within, and from, the dorsal mesentery. However, it still shares the same blood supply—the celiac trunk—as the foregut organs.
Other functions of the spleen are less prominent, especially in the healthy adult.
Enlargement of the spleen is known as splenomegaly. It may be caused by sickle cell anemia, sarcoidosis, malaria, bacterial endocarditis, leukemia, pernicious anemia, Gaucher's disease, leishmaniasis, Hodgkin's disease, Banti's disease, hereditary spherocytosis, cysts, glandular fever (mononucleosis or 'Mono' caused by the Epstein–Barr virus), and tumours. Primary tumors of the spleen include hemangiomas and hemangiosarcomas. Marked splenomegaly may result in the spleen occupying a large portion of the left side of the abdomen.
The spleen is the largest collection of lymphoid tissue in the body. It is normally palpable in preterm infants, in 30% of normal, full-term neonates, and in 5% to 10% of infants and toddlers. A spleen easily palpable below the costal margin in any child over the age of 3–4 years should be considered abnormal until proven otherwise.
Splenomegaly can result from antigenic stimulation (e.g., infection), obstruction of blood flow (e.g., portal vein obstruction), underlying functional abnormality (e.g., hemolytic anemia), or infiltration (e.g., leukemia or storage disease, such as Gaucher's disease). The most common cause of acute splenomegaly in children is viral infection, which is transient and usually moderate. Basic work-up for acute splenomegaly includes a complete blood count with differential, platelet count, and reticulocyte and atypical lymphocyte counts to exclude hemolytic anemia and leukemia. Assessment of IgM antibodies to viral capsid antigen (a rising titer) is indicated to confirm Epstein–Barr virus or cytomegalovirus. Other infections should be excluded if these tests are negative.
Trauma, such as a road traffic collision, can cause rupture of the spleen, which is a situation requiring immediate medical attention.
Asplenia refers to a non-functioning spleen, which may be congenital, or caused by traumatic injury, surgical resection (splenectomy) or a disease such as sickle cell anaemia. Hyposplenia refers to a partially functioning spleen. These conditions may cause[5] a modest increase in circulating white blood cells and platelets, a diminished response to some vaccines, and an increased susceptibility to infection. In particular, there is an increased risk of sepsis from polysaccharide encapsulated bacteria. Encapsulated bacteria inhibit binding of complement or prevent complement assembled on the capsule from interacting with macrophage receptors. Phagocytosis needs natural antibodies, which are immunoglobulins that facilitate phagocytosis either directly or by complement deposition on the capsule. They are produced by IgM memory B cells (a subtype of B cells) in the marginal zone of the spleen.[19][20]
|
46 |
+
|
47 |
+
A splenectomy (removal of the spleen) results in a greatly diminished frequency of memory B cells.[21] A 28-year follow-up of 740 World War II veterans whose spleens were removed on the battlefield showed a significant increase in the usual death rate from pneumonia (6 rather than the expected 1.3) and an increase in the death rate from ischemic heart disease (41 rather than the expected 30), but not from other conditions.[22]
|
48 |
+
|
49 |
+
An accessory spleen is a small splenic nodule extra to the spleen usually formed in early embryogenesis. Accessory spleens are found in approximately 10 percent of the population[23] and are typically around 1 centimeter in diameter. Splenosis is a condition where displaced pieces of splenic tissue (often following trauma or splenectomy) autotransplant in the abdominal cavity as accessory spleens.[24]
|
50 |
+
|
51 |
+
Polysplenia is a congenital disease manifested by multiple small accessory spleens,[25] rather than a single, full-sized, normal spleen. Polysplenia sometimes occurs alone, but it is often accompanied by other developmental abnormalities such as intestinal malrotation or biliary atresia, or cardiac abnormalities, such as dextrocardia. These accessory spleens are non-functional.
|
52 |
+
|
53 |
+
Splenic infarction is a condition in which the blood supply to the spleen is compromised,[26] leading to partial or complete infarction (tissue death due to oxygen shortage) in the organ.[27]
|
54 |
+
|
55 |
+
Splenic infarction occurs when the splenic artery or one of its branches is occluded, for example by a blood clot. Although it can occur asymptomatically, the typical symptom is severe pain in the left upper quadrant of the abdomen, sometimes radiating to the left shoulder. Fever and chills develop in some cases.[28] It must be differentiated from other causes of acute abdomen.
|
56 |
+
|
57 |
+
The spleen may be affected by hyaloserositis, in which it is coated with fibrous hyaline.[29][30]
|
58 |
+
|
59 |
+
The word spleen comes from the Ancient Greek σπλήν (splḗn), and is the idiomatic equivalent of the heart in English, i.e., to be good-spleened (εὔσπλαγχνος, eúsplankhnos) means to be good-hearted or compassionate.[31]
|
60 |
+
|
61 |
+
In English, the word spleen was in common use during the 18th century. Authors such as Richard Blackmore and George Cheyne employed it to characterise hypochondriacal and hysterical affections.[32][33] William Shakespeare, in Julius Caesar, uses the spleen to describe Cassius's irritable nature.
|
62 |
+
|
63 |
+
Must I observe you? must I stand and crouch
|
64 |
+
Under your testy humour? By the gods
|
65 |
+
You shall digest the venom of your spleen,
|
66 |
+
Though it do split you; for, from this day forth,
|
67 |
+
I'll use you for my mirth, yea, for my laughter,
|
68 |
+
When you are waspish.[34]
|
69 |
+
|
70 |
+
In French, "splénétique" refers to a state of pensive sadness or melancholy. It has been popularized by the poet Charles Baudelaire (1821–1867) but was already used before in particular to the Romantic literature (19th century). The French word for the organ is "rate".
|
71 |
+
|
72 |
+
The connection between spleen (the organ) and melancholy (the temperament) comes from the humoral medicine of the ancient Greeks. One of the humours (body fluid) was the black bile, secreted by the spleen organ and associated with melancholy. In eighteenth- and nineteenth-century England, women in bad humor were said to be afflicted by the spleen, or the vapours of the spleen. In modern English, "to vent one's spleen" means to vent one's anger, e.g., by shouting, and can apply to both males and females. Similarly, the English term "splenetic" describes a person in a foul mood.
|
73 |
+
|
74 |
+
In contrast, the Talmud (tractate Berachoth 61b) refers to the spleen as the organ of laughter, while possibly suggesting a link with the humoral view of the organ. Sanhedrin 21b and Avodah Zarah 44a (and Rashi ibid.) additionally describe how, in the ancient world, some runners destroyed their spleens with drugs to increase their speed and hollowed out the flesh from the soles of their feet so as not to feel thorns and thistles as they ran.
|
75 |
+
|
76 |
+
The spleen also plays an important role in traditional Chinese medicine, where it is the Yin part of the Earth element, paired with its Yang counterpart, the Stomach.
|
77 |
+
|
78 |
+
In cartilaginous and ray-finned fish, the spleen consists primarily of red pulp and is normally somewhat elongated, as it lies inside the serosal lining of the intestine. In many amphibians, especially frogs, it has a more rounded form and there is often a greater quantity of white pulp.[35] A study published in 2009 using mice found that the red pulp of the spleen forms a reservoir that contains over half of the body's monocytes.[4]
|
79 |
+
|
80 |
+
In reptiles, birds, and mammals, white pulp is always relatively plentiful, and in birds and mammals the spleen is typically rounded, but it adjusts its shape somewhat to the arrangement of the surrounding organs. In most vertebrates, the spleen continues to produce red blood cells throughout life; only in mammals is this function lost in middle-aged adults. Many mammals have tiny spleen-like structures known as haemal nodes throughout the body that are presumed to have the same function as the spleen.[35] The spleens of aquatic mammals differ in some ways from those of fully land-dwelling mammals; in general they are bluish in colour. In cetaceans and manatees they tend to be quite small, but in deep-diving pinnipeds, they can be massive, due to their function of storing red blood cells.
|
81 |
+
|
82 |
+
The only vertebrates lacking a spleen are the lampreys and hagfishes (the Cyclostomata). Even in these animals, there is a diffuse layer of haematopoietic tissue within the gut wall, which has a similar structure to red pulp and is presumed homologous with the spleen of higher vertebrates.[35]
|
83 |
+
|
84 |
+
In mice the spleen stores half the body's monocytes so that upon injury, they can migrate to the injured tissue and transform into dendritic cells and macrophages to assist wound healing.[4]
|
85 |
+
|
86 |
+
Transverse section of the spleen, showing the trabecular tissue and the splenic vein and its tributaries
|
87 |
+
|
88 |
+
Spleen
|
89 |
+
|
90 |
+
Laparoscopic view of human spleen
|
en/4927.html.txt
ADDED
@@ -0,0 +1,135 @@
|
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
The raccoon (/rəˈkuːn/ or US: /ræˈkuːn/ (listen), Procyon lotor) is a medium-sized mammal native to North America. The raccoon is the largest of the procyonid family, having a body length of 40 to 70 cm (16 to 28 in) and a body weight of 5 to 26 kg (11 to 57 lb). Its grayish coat mostly consists of dense underfur which insulates it against cold weather. Three of the raccoon's most distinctive features are its extremely dexterous front paws, its facial mask, and its ringed tail, which are themes in the mythologies of the indigenous peoples of the Americas. Raccoons are noted for their intelligence, with studies showing that they are able to remember the solution to tasks for at least three years. They are usually nocturnal and omnivorous, eating about 40% invertebrates, 33% plants, and 27% vertebrates.
|
6 |
+
|
7 |
+
The original habitats of the raccoon are deciduous and mixed forests, but due to their adaptability they have extended their range to mountainous areas, coastal marshes, and urban areas, where some homeowners consider them to be pests. As a result of escapes and deliberate introductions in the mid-20th century, raccoons are now also distributed across much of mainland Europe, the Caucasus, and Japan.
|
8 |
+
|
9 |
+
Though previously thought to be generally solitary, there is now evidence that raccoons engage in sex-specific social behavior. Related females often share a common area, while unrelated males live together in groups of up to four raccoons to maintain their positions against foreign males during the mating season, and other potential invaders. Home range sizes vary anywhere from 3 hectares (7.4 acres) for females in cities to 5,000 hectares (12,000 acres) for males in prairies. After a gestation period of about 65 days, two to five young, known as "kits", are born in spring. The kits are subsequently raised by their mother until dispersal in late fall. Although captive raccoons have been known to live over 20 years, their life expectancy in the wild is only 1.8 to 3.1 years. In many areas, hunting and vehicular injury are the two most common causes of death.
|
10 |
+
|
11 |
+
Names for the species include the common raccoon,[4] North American raccoon,[5] and northern raccoon.[6] The word "raccoon" was adopted into English from the native Powhatan term, as used in the Colony of Virginia. It was recorded on John Smith's list of Powhatan words as aroughcun, and on that of William Strachey as arathkone.[7] It has also been identified as a reflex of a Proto-Algonquian root ahrah-koon-em, meaning "[the] one who rubs, scrubs and scratches with its hands".[8] The word is sometimes spelled as racoon.[9]
|
12 |
+
|
13 |
+
Spanish colonists adopted the Spanish word mapache from the Nahuatl mapachtli of the Aztecs, meaning "[the] one who takes everything in its hands".[10] In many languages, the raccoon is named for its characteristic dousing behavior in conjunction with that language's term for bear, for example Waschbär ('wash-bear') in German, Huan Xiong (浣熊 'wash-bear') in Chinese, dvivón róchetz (דביבון רוחץ 'washing-bear[DIM]') in Hebrew, orsetto lavatore ('little washer bear') in Italian, and araiguma (洗熊 (あらいぐま) 'washing-bear') in Japanese. Alternatively, only the washing behavior might be referred to, as in Russian poloskun (полоскун, 'rinser').
|
14 |
+
|
15 |
+
The colloquial abbreviation coon is used in words like coonskin for fur clothing and in phrases like old coon as a self-designation of trappers.[11][12] In the 1830s, the United States Whig Party used the raccoon as an emblem, causing them to be pejoratively known as "coons" by their political opponents, who saw them as too sympathetic to African-Americans. Soon after that the term became an ethnic slur,[13] especially in use between 1880 and 1920 (see coon song), and the term is still considered offensive.[14]
|
16 |
+
|
17 |
+
In the first decades after its discovery by the members of the expedition of Christopher Columbus, who were the first Europeans to leave a written record about the species, taxonomists thought the raccoon was related to many different species, including dogs, cats, badgers and particularly bears.[15] Carl Linnaeus, the father of modern taxonomy, placed the raccoon in the genus Ursus, first as Ursus cauda elongata ("long-tailed bear") in the second edition of his Systema Naturae (1740), then as Ursus Lotor ("washer bear") in the tenth edition (1758–59).[16][17] In 1780, Gottlieb Conrad Christian Storr placed the raccoon in its own genus Procyon, which can be translated as either "before the dog" or "doglike".[18][19] It is also possible that Storr had its nocturnal lifestyle in mind and chose the star Procyon as eponym for the species.[20][21]
|
18 |
+
|
19 |
+
Based on fossil evidence from Russia and Bulgaria, the first known members of the family Procyonidae lived in Europe in the late Oligocene about 25 million years ago.[22] Similar tooth and skull structures suggest procyonids and weasels share a common ancestor, but molecular analysis indicates a closer relationship between raccoons and bears.[23] After the then-existing species crossed the Bering Strait at least six million years later in the early Miocene, the center of its distribution was probably in Central America.[24] Coatis (Nasua and Nasuella) and raccoons (Procyon) have been considered to share common descent from a species in the genus Paranasua present between 5.2 and 6.0 million years ago.[25] This assumption, based on morphological comparisons of fossils, conflicts with a 2006 genetic analysis which indicates raccoons are more closely related to ringtails.[26] Unlike other procyonids, such as the crab-eating raccoon (Procyon cancrivorus), the ancestors of the common raccoon left tropical and subtropical areas and migrated farther north about 2.5 million years ago, in a migration that has been confirmed by the discovery of fossils in the Great Plains dating back to the middle of the Pliocene.[27][25] Its most recent ancestor was likely Procyon rexroadensis, a large Blancan raccoon from the Rexroad Formation characterized by its narrow back teeth and large lower jaw.[28]
|
20 |
+
|
21 |
+
As of 2005, Mammal Species of the World recognizes 22 subspecies of raccoons.[29] Four of these subspecies living only on small Central American and Caribbean islands were often regarded as distinct species after their discovery. These are the Bahamian raccoon and Guadeloupe raccoon, which are very similar to each other; the Tres Marias raccoon, which is larger than average and has an angular skull; and the extinct Barbados raccoon. Studies of their morphological and genetic traits in 1999, 2003 and 2005 led all these island raccoons to be listed as subspecies of the common raccoon in Mammal Species of the World's third edition. A fifth island raccoon population, the Cozumel raccoon, which weighs only 3 to 4 kg (6.6 to 8.8 lb) and has notably small teeth, is still regarded as a separate species.[30][31][32][33]
|
22 |
+
|
23 |
+
The four smallest raccoon subspecies, with a typical weight of 1.8 to 2.7 kg (4.0 to 6.0 lb), live along the southern coast of Florida and on the adjacent islands; an example is the Ten Thousand Islands raccoon (Procyon lotor marinus).[34] Most of the other 15 subspecies differ only slightly from each other in coat color, size and other physical characteristics.[35][36] The two most widespread subspecies are the eastern raccoon (Procyon lotor lotor) and the Upper Mississippi Valley raccoon (Procyon lotor hirtus). Both share a comparatively dark coat with long hairs, but the Upper Mississippi Valley raccoon is larger than the eastern raccoon. The eastern raccoon occurs in all U.S. states and Canadian provinces to the north of South Carolina and Tennessee. The adjacent range of the Upper Mississippi Valley raccoon covers all U.S. states and Canadian provinces to the north of Louisiana, Texas and New Mexico.[37]
|
24 |
+
|
25 |
+
The taxonomic identity of feral raccoons inhabiting Central Europe, Caucasia and Japan is unknown, as the founding populations consisted of uncategorized specimens from zoos and fur farms.[38]
|
26 |
+
|
27 |
+
brachyurus (Wiegmann, 1837)
|
28 |
+
fusca (Burmeister, 1850)
|
29 |
+
gularis (C. E. H. Smith, 1848)
|
30 |
+
melanus (J. E. Gray, 1864)
|
31 |
+
obscurus (Wiegmann, 1837)
|
32 |
+
rufescens (de Beaux, 1910)
|
33 |
+
vulgaris (Tiedemann, 1808)
|
34 |
+
|
35 |
+
dickeyi (Nelson and Goldman, 1931)
|
36 |
+
mexicana (Baird, 1858)
|
37 |
+
shufeldti (Nelson and Goldman, 1931)
|
38 |
+
|
39 |
+
minor (Miller, 1911)
|
40 |
+
varius (Nelson and Goldman, 1930)
|
41 |
+
|
42 |
+
Head to hindquarters, raccoons measure between 40 and 70 cm (16 and 28 in), not including the bushy tail which can
|
43 |
+
measure between 20 and 40 cm (8 and 16 in), but is usually not much longer than 25 cm (10 in).[62][63][64] The shoulder height is between 23 and 30 cm (9 and 12 in).[65] The body weight of an adult raccoon varies considerably with habitat, making the raccoon one of the most variably sized mammals. It can range from 5 to 26 kilograms (10 to 60 lb), but is usually between 5 and 12 kilograms (10 and 30 lb). The smallest specimens live in southern Florida, while those near the northern limits of the raccoon's range tend to be the largest (see Bergmann's rule).[66] Males are usually 15 to 20% heavier than females.[67] At the beginning of winter, a raccoon can weigh twice as much as in spring because of fat storage.[68][69][70] The largest recorded wild raccoon weighed 28.4 kg (62.6 lb) and measured 140 cm (55 in) in total length, by far the largest size recorded for a procyonid.[71][72]
|
44 |
+
|
45 |
+
The most characteristic physical feature of the raccoon is the area of black fur around the eyes, which contrasts sharply with the surrounding white face coloring. This is reminiscent of a "bandit's mask" and has thus enhanced the animal's reputation for mischief.[73][74] The slightly rounded ears are also bordered by white fur. Raccoons are assumed to recognize the facial expression and posture of other members of their species more quickly because of the conspicuous facial coloration and the alternating light and dark rings on the tail.[75][76][77] The dark mask may also reduce glare and thus enhance night vision.[76][77] On other parts of the body, the long and stiff guard hairs, which shed moisture, are usually colored in shades of gray and, to a lesser extent, brown.[78] Raccoons with a very dark coat are more common in the German population because individuals with such coloring were among those initially released to the wild.[79] The dense underfur, which accounts for almost 90% of the coat, insulates against cold weather and is composed of 2 to 3 cm (0.8 to 1.2 in) long hairs.[78]
|
46 |
+
|
47 |
+
The raccoon, whose method of locomotion is usually considered to be plantigrade, can stand on its hind legs to examine objects with its front paws.[80][81] As raccoons have short legs compared to their compact torso, they are usually not able either to run quickly or jump great distances.[82][83] Their top speed over short distances is 16 to 24 km/h (10 to 15 mph).[84][85] Raccoons can swim with an average speed of about 5 km/h (3 mph) and can stay in the water for several hours.[86][83] For climbing down a tree headfirst—an unusual ability for a mammal of its size—a raccoon rotates its hind feet so they are pointing backwards.[87][83] Raccoons have a dual cooling system to regulate their temperature; that is, they are able to both sweat and pant for heat dissipation.[88][89]
|
48 |
+
|
49 |
+
Raccoon skulls have a short and wide facial region and a voluminous braincase. The facial length of the skull is less than the cranial, and their nasal bones are short and quite broad. The auditory bullae are inflated in form, and the sagittal crest is weakly developed.[90] The dentition, 40 teeth with the dental formula 3.1.4.2 / 3.1.4.2 (incisors, canines, premolars and molars on each side of the upper and lower jaw), is adapted to their omnivorous diet: the carnassials are not as sharp and pointed as those of a full-time carnivore, but the molars are not as wide as those of a herbivore.[91] The penis bone of males is about 10 cm (4 in) long and strongly bent at the front end,[92][93] and its shape can be used to distinguish juvenile males from mature males.[94][95][96] Seven of the thirteen identified vocal calls are used in communication between the mother and her kits, one of these being the birdlike twittering of newborns.[97][98][89]
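A quick illustrative check of that tooth count (the arithmetic below simply restates the formula rather than adding new source material): each side of each jaw carries 3 incisors, 1 canine, 4 premolars and 2 molars, so

\[
2 \times (3+1+4+2)_{\text{upper}} + 2 \times (3+1+4+2)_{\text{lower}} = 20 + 20 = 40 \text{ teeth.}
\]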
|
50 |
+
|
51 |
+
The most important sense for the raccoon is its sense of touch.[99][100][101] The "hyper sensitive"[100] front paws are protected by a thin horny layer that becomes pliable when wet.[102][103] The five digits of the paws have no webbing between them, which is unusual for a carnivoran.[104] Almost two-thirds of the area responsible for sensory perception in the raccoon's cerebral cortex is specialized for the interpretation of tactile impulses, more than in any other studied animal.[105] They are able to identify objects before touching them with vibrissae located above their sharp, nonretractable claws.[80][101] The raccoon's paws lack an opposable thumb; thus, it does not have the agility of the hands of primates.[101][103] There is no observed negative effect on tactile perception when a raccoon stands in water below 10 °C (50 °F) for hours.[106]
|
52 |
+
|
53 |
+
Raccoons are thought to be color blind or at least poorly able to distinguish color, though their eyes are well-adapted for sensing green light.[107][108][109] Although their accommodation of 11 dioptre is comparable to that of humans and they see well in twilight because of the tapetum lucidum behind the retina, visual perception is of subordinate importance to raccoons because of their poor long-distance vision.[110][111][112] In addition to being useful for orientation in the dark, their sense of smell is important for intraspecific communication. Glandular secretions (usually from their anal glands), urine and feces are used for marking.[113][114][115] With their broad auditory range, they can perceive tones up to 50–85 kHz as well as quiet noises, like those produced by earthworms underground.[116][117]
|
54 |
+
|
55 |
+
Zoologist Clinton Hart Merriam described raccoons as "clever beasts", noting that "in certain directions their cunning surpasses that of the fox". The animal's intelligence gave rise to the epithet "sly coon".[118] Only a few studies have been undertaken to determine the mental abilities of raccoons, most of them based on the animal's sense of touch. In a study by the ethologist H. B. Davis in 1908, raccoons were able to open 11 of 13 complex locks in fewer than 10 tries and had no problems repeating the action when the locks were rearranged or turned upside down. Davis concluded that they understood the abstract principles of the locking mechanisms and their learning speed was equivalent to that of rhesus macaques.[119]
|
56 |
+
|
57 |
+
Studies in 1963, 1973, 1975 and 1992 that concentrated on raccoon memory showed that they can remember the solutions to tasks for at least three years.[120] In a study by B. Pohl in 1992, raccoons were able to instantly differentiate between identical and different symbols three years after the short initial learning phase.[120] Stanislas Dehaene reports in his book The Number Sense that raccoons can distinguish boxes containing two or four grapes from those containing three.[121] In research by Suzana Herculano-Houzel and other neuroscientists, raccoons have been found to be comparable to primates in density of neurons in the cerebral cortex, which they have proposed to be a neuroanatomical indicator of intelligence.[122][123]
|
58 |
+
|
59 |
+
Studies in the 1990s by the ethologists Stanley D. Gehrt and Ulf Hohmann suggest that raccoons engage in sex-specific social behaviors and are not typically solitary, as was previously thought.[124][125] Related females often live in a so-called "fission-fusion society"; that is, they share a common area and occasionally meet at feeding or resting grounds.[126][127] Unrelated males often form loose male social groups to maintain their position against foreign males during the mating season—or against other potential invaders.[128] Such a group does not usually consist of more than four individuals.[129][130] Since some males show aggressive behavior towards unrelated kits, mothers will isolate themselves from other raccoons until their kits are big enough to defend themselves.[131]
|
60 |
+
|
61 |
+
With respect to these three different modes of life prevalent among raccoons, Hohmann called their social structure a "three-class society".[132] Samuel I. Zeveloff, professor of zoology at Weber State University and author of the book Raccoons: A Natural History, is more cautious in his interpretation and concludes that at least the females are solitary most of the time and that, according to Erik K. Fritzell's study in North Dakota in 1978, males in areas with low population densities are solitary as well.[133]
|
62 |
+
|
63 |
+
The shape and size of a raccoon's home range vary depending on age, sex, and habitat, with adults claiming areas more than twice as large as juveniles.[134] While home ranges in the prairie habitat of North Dakota lie between 7 and 50 km2 (3 and 20 sq mi) for males and between 2 and 16 km2 (1 and 6 sq mi) for females, the average size in a marsh at Lake Erie was 0.5 km2 (0.19 sq mi).[135] Irrespective of whether the home ranges of adjacent groups overlap, they are most likely not actively defended outside the mating season if food supplies are sufficient.[136] Odor marks on prominent spots are assumed to establish home ranges and identify individuals.[115] Urine and feces left at shared raccoon latrines may provide additional information about feeding grounds, since raccoons were observed to meet there later for collective eating, sleeping and playing.[137]
|
64 |
+
|
65 |
+
Concerning the general behavior patterns of raccoons, Gehrt points out that "typically you'll find 10 to 15 percent that will do the opposite"[138] of what is expected.
|
66 |
+
|
67 |
+
Though usually nocturnal, the raccoon is sometimes active in daylight to take advantage of available food sources.[139][140] Its diet consists of about 40% invertebrates, 33% plant material and 27% vertebrates.[141] Since its diet consists of such a variety of different foods, Zeveloff argues the raccoon "may well be one of the world's most omnivorous animals".[142] While its diet in spring and early summer consists mostly of insects, worms, and other animals already available early in the year, it prefers fruits and nuts, such as acorns and walnuts, which emerge in late summer and autumn, and represent a rich calorie source for building up fat needed for winter.[143][144] Contrary to popular belief, raccoons only occasionally eat active or large prey, such as birds and mammals.
|
68 |
+
They prefer prey that is easier to catch, specifically fish, amphibians and bird eggs.[145] Raccoons are virulent predators of eggs and hatchlings in both bird and reptile nests, to such a degree that, for threatened prey species, raccoons may need to be removed from the area or nests may need to be relocated to mitigate the effect of their predation (e.g., in the case of some globally threatened turtles).[146][147][148][149][150] When food is plentiful, raccoons can develop strong individual preferences for specific foods.[69] In the northern parts of their range, raccoons go into a winter rest, reducing their activity drastically as long as a permanent snow cover makes searching for food impossible.[151]
|
69 |
+
|
70 |
+
One aspect of raccoon behavior is so well known that it gives the animal part of its scientific name, Procyon lotor; "lotor" is neo-Latin for "washer". In the wild, raccoons often dabble for underwater food near the shore-line. They then often pick up the food item with their front paws to examine it and rub the item, sometimes to remove unwanted parts. This gives the appearance of the raccoon "washing" the food. The tactile sensitivity of raccoons' paws is increased if this rubbing action is performed underwater, since the water softens the hard layer covering the paws.[100][152] However, the behavior observed in captive raccoons in which they carry their food to water to "wash" or douse it before eating has not been observed in the wild.[153][154] Naturalist Georges-Louis Leclerc, Comte de Buffon, believed that raccoons do not have adequate saliva production to moisten food thereby necessitating dousing, but this hypothesis is now considered to be incorrect.[152][153][155][156] Captive raccoons douse their food more frequently when a watering hole with a layout similar to a stream is not farther away than 3 m (10 ft).[156] The widely accepted theory is that dousing in captive raccoons is a fixed action pattern from the dabbling behavior performed when foraging at shores for aquatic foods.[152][156][157][158] This is supported by the observation that aquatic foods are doused more frequently. Cleaning dirty food does not seem to be a reason for "washing".[156] Experts have cast doubt on the veracity of observations of wild raccoons dousing food.[159][160][161][needs update?]
|
71 |
+
|
72 |
+
Raccoons usually mate in a period triggered by increasing daylight between late January and mid-March.[162][163][164] However, there are large regional differences which are not completely explicable by solar conditions. For example, while raccoons in southern states typically mate later than average, the mating season in Manitoba also peaks later than usual in March and extends until June.[164] During the mating season, males restlessly roam their home ranges in search of females in an attempt to court them during the three- to four-day period when conception is possible. These encounters will often occur at central meeting places.[165][166][167] Copulation, including foreplay, can last over an hour and is repeated over several nights.[168] The weaker members of a male social group also are assumed to get the opportunity to mate, since the stronger ones cannot mate with all available females.[169] In a study in southern Texas during the mating seasons from 1990 to 1992, about one third of all females mated with more than one male.[170] If a female does not become pregnant or if she loses her kits early, she will sometimes become fertile again 80 to 140 days later.[171][172][173]
|
73 |
+
|
74 |
+
After usually 63 to 65 days of gestation (although anywhere from 54 to 70 days is possible), a litter of typically two to five young is born.[174][175] The average litter size varies widely with habitat, ranging from 2.5 in Alabama to 4.8 in North Dakota.[176][177] Larger litters are more common in areas with a high mortality rate, due, for example, to hunting or severe winters.[178][177] While male yearlings usually reach their sexual maturity only after the main mating season, female yearlings can compensate for high mortality rates and may be responsible for about 50% of all young born in a year.[179][180][181] Males have no part in raising young.[129][182][183] The kits (also called "cubs") are blind and deaf at birth, but their mask is already visible against their light fur.[184][185] The birth weight of the about 10 cm (4 in)-long kits is between 60 and 75 g (2.1 and 2.6 oz).[185] Their ear canals open after around 18 to 23 days, a few days before their eyes open for the first time.[186] Once the kits weigh about 1 kg (2 lb), they begin to explore outside the den, consuming solid food for the first time after six to nine weeks.[187][188] After this point, their mother suckles them with decreasing frequency; they are usually weaned by 16 weeks.[189] In the fall, after their mother has shown them dens and feeding grounds, the juvenile group splits up.[190] [191] While many females will stay close to the home range of their mother, males can sometimes move more than 20 km (12 mi) away.[192][193] This is considered an instinctive behavior, preventing inbreeding.[194][195] However, mother and offspring may share a den during the first winter in cold areas.[191]
|
75 |
+
|
76 |
+
Captive raccoons have been known to live for more than 20 years.[73] However, the species' life expectancy in the wild is only 1.8 to 3.1 years, depending on the local conditions such as traffic volume, hunting, and weather severity.[196] It is not unusual for only half of the young born in one year to survive a full year.[179][197] After this point, the annual mortality rate drops to between 10% and 30%.[179] Young raccoons are vulnerable to losing their mother and to starvation, particularly in long and cold winters.[198] The most frequent natural cause of death in the North American raccoon population is distemper, which can reach epidemic proportions and kill most of a local raccoon population.[199] In areas with heavy vehicular traffic and extensive hunting, these factors can account for up to 90% of all deaths of adult raccoons.[200] The most important natural predators of the raccoon are bobcats, coyotes, and great horned owls, the latter mainly preying on young raccoons but capable of killing adults in some cases.[201][202][203][204][205][206] In Florida, they have been reported to fall victim to larger carnivores like American black bears and cougars, and these species may also be a threat on occasion in other areas.[207][208][209] Where still present, gray wolves may still occasionally take raccoons as a supplemental prey item.[210][211] Also in the southeast, they are among the favored prey for adult American alligators.[212][213] On occasion, both bald and golden eagles will prey on raccoons.[214][215] In the tropics, raccoons are known to fall prey to smaller eagles such as ornate hawk-eagles and black hawk-eagles, although it is not clear whether adults or merely juvenile raccoons are taken by these.[216][217] In rare cases of overlap, they may fall victim to carnivores ranging from species averaging smaller than themselves, such as fishers, to those as large and formidable as jaguars in Mexico.[218][219] In their introduced range in the former Soviet Union, their main predators are wolves, lynxes and Eurasian eagle-owls.[220] However, predation is not a significant cause of death, especially because larger predators have been exterminated in many areas inhabited by raccoons.[221]
|
77 |
+
|
78 |
+
Although they have thrived in sparsely wooded areas in the last decades, raccoons depend on vertical structures to climb when they feel threatened.[222][223] Therefore, they avoid open terrain and areas with high concentrations of beech trees, as beech bark is too smooth to climb.[224] Tree hollows in old oaks or other trees and rock crevices are preferred by raccoons as sleeping, winter and litter dens. If such dens are unavailable or accessing them is inconvenient, raccoons use burrows dug by other mammals, dense undergrowth or tree crotches.[225][226] In a study in the Solling range of hills in Germany, more than 60% of all sleeping places were used only once, but those used at least ten times accounted for about 70% of all uses.[227] Since amphibians, crustaceans, and other animals around the shore of lakes and rivers are an important part of the raccoon's diet, lowland deciduous or mixed forests abundant with water and marshes sustain the highest population densities.[228][229] While population densities range from 0.5 to 3.2 animals per square kilometer (1.3 to 8.3 animals per square mile) in prairies and do not usually exceed 6 animals per square kilometer (15.5 animals per square mile) in upland hardwood forests, more than 20 raccoons per square kilometer (51.8 animals per square mile) can live in lowland forests and marshes.[228][230]
|
79 |
+
|
80 |
+
Raccoons are common throughout North America from Canada to Panama, where the subspecies Procyon lotor pumilus coexists with the crab-eating raccoon (Procyon cancrivorus).[231][232] The population on Hispaniola was exterminated as early as 1513 by Spanish colonists who hunted them for their meat.[233] Raccoons were also exterminated in Cuba and Jamaica, where the last sightings were reported in 1687.[234] When they were still considered separate species, the Bahamas raccoon, Guadeloupe raccoon and Tres Marias raccoon were classified as endangered by the IUCN in 1996.[235]
|
81 |
+
|
82 |
+
There is archeological evidence that in pre-Columbian times raccoons were numerous only along rivers and in the woodlands of the Southeastern United States.[236] As raccoons were not mentioned in earlier reports of pioneers exploring the central and north-central parts of the United States,[237] their initial spread may have begun a few decades before the 20th century. Since the 1950s, raccoons have expanded their range from Vancouver Island—formerly the northernmost limit of their range—far into the northern portions of the four south-central Canadian provinces.[238] New habitats which have recently been occupied by raccoons (aside from urban areas) include mountain ranges, such as the Western Rocky Mountains, prairies and coastal marshes.[239] After a population explosion starting in the 1940s, the estimated number of raccoons in North America in the late 1980s was 15 to 20 times higher than in the 1930s, when raccoons were comparatively rare.[240] Urbanization, the expansion of agriculture, deliberate introductions, and the extermination of natural predators of the raccoon have probably caused this increase in abundance and distribution.[241]
|
83 |
+
|
84 |
+
As a result of escapes and deliberate introductions in the mid-20th century, the raccoon is now distributed in several European and Asian countries. Sightings have occurred in all the countries bordering Germany, which hosts the largest population outside of North America.[242] Another stable population exists in northern France, where several pet raccoons were released by members of the U.S. Air Force near the Laon-Couvron Air Base in 1966.[243] Furthermore, raccoons have been known to be in the area around Madrid since the early 1970s. In 2013, the city authorized "the capture and death of any specimen".[244] It is also present in Italy, with one reproductive population in Lombardy.[245]
|
85 |
+
|
86 |
+
About 1,240 animals were released in nine regions of the former Soviet Union between 1936 and 1958 for the purpose of establishing a population to be hunted for their fur. Two of these introductions were successful—one in the south of Belarus between 1954 and 1958, and another in Azerbaijan between 1941 and 1957. With a seasonal harvest of between 1,000 and 1,500 animals, in 1974 the estimated size of the population distributed in the Caucasus region was around 20,000 animals and the density was four animals per square kilometer (10 animals per square mile).[246]
|
87 |
+
|
88 |
+
In Japan, up to 1,500 raccoons were imported as pets each year after the success of the anime series Rascal the Raccoon (1977). In 2004, the descendants of discarded or escaped animals lived in 42 of 47 prefectures.[247][248][249] The range of raccoons in the wild in Japan grew from 17 prefectures in 2000 to all 47 prefectures in 2008.[250] It is estimated that raccoons cause thirty million yen (~$275,000) of agricultural damage on Hokkaido alone.[251]
|
89 |
+
|
90 |
+
In Germany—where the raccoon is called the Waschbär (literally, "wash-bear" or "washing bear") due to its habit of "dousing" food in water—two pairs of pet raccoons were released into the German countryside at the Edersee reservoir in the north of Hesse in April 1934 by a forester upon request of their owner, a poultry farmer.[252] He released them two weeks before receiving permission from the Prussian hunting office to "enrich the fauna."[253] Several prior attempts to introduce raccoons in Germany were not successful.[254][255] A second population was established in eastern Germany in 1945 when 25 raccoons escaped from a fur farm at Wolfshagen (today district of Altlandsberg), east of Berlin, after an air strike. The two populations are parasitologically distinguishable: 70% of the raccoons of the Hessian population are infected with the roundworm Baylisascaris procyonis, but none of the Brandenburgian population has the parasite.[256] The estimated number of raccoons was 285 animals in the Hessian region in 1956, over 20,000 animals in the Hessian region in 1970 and between 200,000 and 400,000 animals in the whole of Germany in 2008.[199][254] By 2012 it was estimated that Germany now had more than a million raccoons.[257]
|
91 |
+
|
92 |
+
The raccoon was a protected species in Germany, but has been declared a game animal in 14 of the 16 states since 1954.[258] Hunters and environmentalists argue the raccoon spreads uncontrollably, threatens protected bird species and supersedes domestic carnivorans.[79] This view is opposed by the zoologist Frank-Uwe Michler, who finds no evidence a high population density of raccoons has negative effects on the biodiversity of an area.[79] Hohmann holds that extensive hunting cannot be justified by the absence of natural predators, because predation is not a significant cause of death in the North American raccoon population.[259]
|
93 |
+
|
94 |
+
The raccoon is extensively hunted in Germany, where it is seen as an invasive species and a pest.[260][261] In the 1990s, only about 400 raccoons were hunted yearly. This increased to 67,700 by the 2010/11 hunting season, and the tally broke the 100,000 barrier in 2013. During the 2015/16 hunting season, the tally was 128,100 animals, 60 percent of which were taken in the federal state of Hesse.[262]
|
95 |
+
|
96 |
+
Experiments in acclimatising raccoons into the USSR began in 1936, and were repeated a further 25 times until 1962. Overall, 1,222 individuals were released, 64 of which came from zoos and fur farms (38 of them having been imports from western Europe). The remainder originated from a population previously established in Transcaucasia. The range of Soviet raccoons was never single or continuous, as they were often introduced to different locations far from each other. All introductions into the Russian Far East failed; melanistic raccoons were released on Petrov Island near Vladivostok and some areas of southern Primorsky Krai, but died. In Middle Asia, raccoons were released in Kyrgyzstan's Jalal-Abad Province, though they were later recorded as "practically absent" there in January 1963. A large and stable raccoon population (yielding 1000–1500 catches a year) was established in Azerbaijan after an introduction to the area in 1937. Raccoons apparently survived an introduction near Terek, along the Sulak River into the Dagestani lowlands. Attempts to settle raccoons on the Kuban River's left tributary and Kabardino-Balkaria were unsuccessful. A successful acclimatization occurred in Belarus, where three introductions (consisting of 52, 37 and 38 individuals in 1954 and 1958) took place. By January 1, 1963, 700 individuals were recorded in the country.[263]
|
97 |
+
|
98 |
+
Due to its adaptability, the raccoon has been able to use urban areas as a habitat. The first sightings were recorded in a suburb of Cincinnati in the 1920s. Since the 1950s, raccoons have been present in metropolitan areas like Washington, DC, Chicago, and Toronto.[264] Since the 1960s, Kassel has hosted Europe's first and densest population in a large urban area, with about 50 to 150 animals per square kilometer (130 to 390 animals per square mile), a figure comparable to those of urban habitats in North America.[264][265] Home range sizes of urban raccoons are only 3 to 40 hectares (7.5 to 100 acres) for females and 8 to 80 hectares (20 to 200 acres) for males.[266] In small towns and suburbs, many raccoons sleep in a nearby forest after foraging in the settlement area.[264][267] Fruit and insects in gardens and leftovers in municipal waste are easily available food sources.[268] Furthermore, a large number of additional sleeping areas exist in these areas, such as hollows in old garden trees, cottages, garages, abandoned houses, and attics. The percentage of urban raccoons sleeping in abandoned or occupied houses varies from 15% in Washington, DC (1991) to 43% in Kassel (2003).[267][265]
|
99 |
+
|
100 |
+
Raccoons can carry rabies, a lethal disease caused by the neurotropic rabies virus carried in the saliva and transmitted by bites. Its spread began in Florida and Georgia in the 1950s and was facilitated by the introduction of infected individuals to Virginia and North Dakota in the late 1970s.[269] Of the 6,940 documented rabies cases reported in the United States in 2006, 2,615 (37.7%) were in raccoons.[270] The U.S. Department of Agriculture, as well as local authorities in several U.S. states and Canadian provinces, has developed oral vaccination programs to fight the spread of the disease in endangered populations.[271][272][273] Only one human fatality has been reported after transmission of the rabies virus strain commonly known as "raccoon rabies".[274] Among the main symptoms for rabies in raccoons are a generally sickly appearance, impaired mobility, abnormal vocalization, and aggressiveness.[275] There may be no visible signs at all, however, and most individuals do not show the aggressive behavior seen in infected canids; rabid raccoons will often retire to their dens instead.[79][256][275] Organizations like the U.S. Forest Service encourage people to stay away from animals with unusual behavior or appearance, and to notify the proper authorities, such as an animal control officer from the local health department.[276][277] Since healthy animals, especially nursing mothers, will occasionally forage during the day, daylight activity is not a reliable indicator of illness in raccoons.[139][140]
|
101 |
+
|
102 |
+
Unlike rabies and at least a dozen other pathogens carried by raccoons, distemper, an epizootic virus, does not affect humans.[278][279] This disease is the most frequent natural cause of death in the North American raccoon population and affects individuals of all age groups.[199] For example, 94 of 145 raccoons died during an outbreak in Clifton, Ohio, in 1968.[280] It may be accompanied by inflammation of the brain (encephalitis), causing the animal to display rabies-like symptoms.[269] In Germany, the first eight cases of distemper were reported in 2007.[199]
|
103 |
+
|
104 |
+
Some of the most important bacterial diseases which affect raccoons are leptospirosis, listeriosis, tetanus, and tularemia. Although internal parasites weaken their immune systems, well-fed individuals can carry a great many roundworms in their digestive tracts without showing symptoms.[281][279] The larvae of the roundworm Baylisascaris procyonis, which can be contained in the feces and seldom causes a severe illness in humans, can be ingested when cleaning raccoon latrines without wearing breathing protection.[282]
|
105 |
+
|
106 |
+
While not endemic, the worm Trichinella does infect raccoons,[283] and undercooked raccoon meat has caused trichinosis in humans.[284]
|
107 |
+
|
108 |
+
The trematode Metorchis conjunctus can also infect raccoons.[285]
|
109 |
+
|
110 |
+
The increasing number of raccoons in urban areas has resulted in diverse reactions in humans, ranging from outrage at their presence to deliberate feeding.[286] Some wildlife experts and most public authorities caution against feeding wild animals because they might become increasingly obtrusive and dependent on humans as a food source.[287] Other experts challenge such arguments and give advice on feeding raccoons and other wildlife in their books.[288][289] Raccoons without a fear of humans are a concern to those who attribute this trait to rabies, but scientists point out this behavior is much more likely to be a behavioral adjustment to living in habitats with regular contact to humans for many generations.[256][290] Raccoons usually do not prey on domestic cats and dogs, but isolated cases of killings have been reported.[291] Attacks on pets may also target their owners.[292]
|
111 |
+
|
112 |
+
While overturned waste containers and raided fruit trees are just a nuisance to homeowners, it can cost several thousand dollars to repair damage caused by the use of attic space as dens.[293] Relocating or killing raccoons without a permit is forbidden in many urban areas on grounds of animal welfare. These methods usually only solve problems with particularly wild or aggressive individuals, since adequate dens are either known to several raccoons or will quickly be rediscovered.[178][277][294] Loud noises, flashing lights and unpleasant odors have proven particularly effective in driving away a mother and her kits before they would normally leave the nesting place (when the kits are about eight weeks old).[277][295] Typically, though, only precautionary measures to restrict access to food waste and den sites are effective in the long term.[277][296][297]
|
113 |
+
|
114 |
+
Among all fruits and crops cultivated in agricultural areas, sweet corn in its milk stage is particularly popular among raccoons.[298][299] In a two-year study by Purdue University researchers, published in 2004, raccoons were responsible for 87% of the damage to corn plants.[300] Like other predators, raccoons searching for food can break into poultry houses to feed on chickens, ducks, their eggs, or food.[141][277][301]
|
115 |
+
|
116 |
+
Since raccoons in high mortality areas have a higher rate of reproduction, extensive hunting may not solve problems with raccoon populations. Older males also claim larger home ranges than younger ones, resulting in a lower population density.
|
117 |
+
|
118 |
+
In the mythology of the indigenous peoples of the Americas, the raccoon is the subject of folk tales.[302] Stories such as "How raccoons catch so many crayfish" from the Tuscarora centered on its skills at foraging.[303] In other tales, the raccoon played the role of the trickster which outsmarts other animals, like coyotes and wolves.[304] Among others, the Dakota Sioux believe the raccoon has natural spirit powers, since its mask resembled the facial paintings, two-fingered swashes of black and white, used during rituals to connect to spirit beings.[305] The Aztecs linked supernatural abilities especially to females, whose commitment to their young was associated with the role of wise women in their society.[306]
|
119 |
+
|
120 |
+
The raccoon also appears in Native American art across a wide geographic range. Petroglyphs with engraved raccoon tracks were found in Lewis Canyon, Texas;[307] at the Crow Hollow petroglyph site in Grayson County, Kentucky;[308] and in river drainages near Tularosa, New Mexico and San Francisco, California.[309] A true-to-detail figurine made of quartz, the Ohio Mound Builders' Stone Pipe, was found near the Scioto River. The meaning and significance of the Raccoon Priests Gorget, which features a stylized carving of a raccoon and was found at the Spiro Mounds, Oklahoma, remains unknown.[310][311]
|
121 |
+
|
122 |
+
In Western culture, several autobiographical novels about living with a raccoon have been written, mostly for children. The best-known is Sterling North's Rascal, which recounts how he raised a kit during World War I. In recent years, anthropomorphic raccoons played main roles in the animated television series The Raccoons, the computer-animated film Over the Hedge, the live action film Guardians of the Galaxy (and the comics that it was based upon) and the video game series Sly Cooper.
|
123 |
+
|
124 |
+
The fur of raccoons is used for clothing, especially for coats and coonskin caps. At present, it is the material used for the inaccurately named "sealskin" cap worn by the Royal Fusiliers of Great Britain.[312] Sporrans made of raccoon pelt and hide have sometimes been used as part of traditional Scottish highland men's apparel since the 18th century, especially in North America. Such sporrans may or may not be of the "full-mask" type.[313] Historically, Native American tribes not only used the fur for winter clothing, but also used the tails for ornament.[314] The famous Sioux leader Spotted Tail took his name from a raccoon skin hat with the tail attached he acquired from a fur trader. Since the late 18th century, various types of scent hounds, called "coonhounds", which are able to tree animals have been bred in the United States.[315] In the 19th century, when coonskins occasionally even served as means of payment, several thousand raccoons were killed each year in the United States.[316][317] This number rose quickly when automobile coats became popular after the turn of the 20th century. In the 1920s, wearing a raccoon coat was regarded as status symbol among college students.[318] Attempts to breed raccoons in fur farms in the 1920s and 1930s in North America and Europe turned out not to be profitable, and farming was abandoned after prices for long-haired pelts dropped in the 1940s.[319][320] Although raccoons had become rare in the 1930s, at least 388,000 were killed during the hunting season of 1934/35.[318][321]
|
125 |
+
|
126 |
+
After persistent population increases began in the 1940s, the seasonal coon hunting harvest reached about one million animals in 1946/47 and two million in 1962/63.[322] The broadcast of three television episodes about the frontiersman Davy Crockett and the film Davy Crockett, King of the Wild Frontier in 1954 and 1955 led to a high demand for coonskin caps in the United States, although it is unlikely either Crockett or the actor who played him, Fess Parker, actually wore a cap made from raccoon fur.[323] The seasonal hunt reached an all-time high with 5.2 million animals in 1976/77 and ranged between 3.2 and 4.7 million for most of the 1980s. In 1982, the average pelt price was $20.[324] As of 1987, the raccoon was identified as the most important wild furbearer in North America in terms of revenue.[325] In the first half of the 1990s, the seasonal hunt dropped to 0.9 from 1.9 million due to decreasing pelt prices.[326]
|
127 |
+
|
128 |
+
While primarily hunted for their fur, raccoons were also a source of food for Native Americans and early American settlers.[327][328] According to Ernest Thompson Seton, young specimens killed without a fight are palatable, whereas old raccoons caught after a lengthy battle are inedible.[329] Raccoon meat was extensively eaten during the early years of California, where it was sold in the San Francisco market for $1–3 apiece.[330] American slaves occasionally ate raccoon at Christmas, but it was not necessarily a dish of the poor or rural. The first edition of The Joy of Cooking, released in 1931, contained a recipe for preparing raccoon, and US President Calvin Coolidge's pet raccoon Rebecca was originally sent to be served at the White House Thanksgiving Dinner.[331][332][333] Although the idea of eating raccoons seems repulsive to most mainstream consumers since they see them as endearing, cute, and/or vermin, several thousand raccoons are still eaten each year in the United States, primarily in the Southern United States.[334][335][336][337]
|
129 |
+
|
130 |
+
Raccoons are sometimes kept as pets, which is discouraged by many experts because the raccoon is not a domesticated species. Raccoons may act unpredictably and aggressively and it is extremely difficult to teach them to obey commands.[338][339] In places where keeping raccoons as pets is not forbidden, such as in Wisconsin and other U.S. states, an exotic pet permit may be required.[340][341] One notable raccoon pet was Rebecca, kept by US president Calvin Coolidge.[342]
|
131 |
+
|
132 |
+
Their propensity for unruly behavior exceeds that of captive skunks, and they are even less trustworthy when allowed to roam freely. Because of their intelligence and nimble forelimbs, even inexperienced raccoons are easily capable of unscrewing jars, uncorking bottles and opening door latches, with more experienced specimens having been recorded to open door knobs.[118] Sexually mature raccoons often show aggressive natural behaviors such as biting during the mating season.[338][343] Neutering them at around five or six months of age decreases the chances of aggressive behavior developing.[344] Raccoons can become obese and suffer from other disorders due to poor diet and lack of exercise.[345] When fed with cat food over a long time period, raccoons can develop gout.[346] With respect to the research results regarding their social behavior, it is now required by law in Austria and Germany to keep at least two individuals to prevent loneliness.[347][348] Raccoons are usually kept in a pen (indoor or outdoor), also a legal requirement in Austria and Germany, rather than in the apartment where their natural curiosity may result in damage to property.[347][348][338][349][350]
|
133 |
+
|
134 |
+
When orphaned, it is possible for kits to be rehabilitated and reintroduced to the wild. However, it is uncertain whether they readapt well to life in the wild.[351] Feeding unweaned kits with cow's milk rather than a kitten replacement milk or a similar product can be dangerous to their health.[338][352]
|
135 |
+
|
en/4928.html.txt
ADDED
@@ -0,0 +1,135 @@
|
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
The raccoon (/rəˈkuːn/ or US: /ræˈkuːn/ (listen), Procyon lotor) is a medium-sized mammal native to North America. The raccoon is the largest of the procyonid family, having a body length of 40 to 70 cm (16 to 28 in) and a body weight of 5 to 26 kg (11 to 57 lb). Its grayish coat mostly consists of dense underfur which insulates it against cold weather. Three of the raccoon's most distinctive features are its extremely dexterous front paws, its facial mask, and its ringed tail, which are themes in the mythologies of the indigenous peoples of the Americas. Raccoons are noted for their intelligence, with studies showing that they are able to remember the solution to tasks for at least three years. They are usually nocturnal and omnivorous, eating about 40% invertebrates, 33% plants, and 27% vertebrates.
|
6 |
+
|
7 |
+
The original habitats of the raccoon are deciduous and mixed forests, but due to their adaptability they have extended their range to mountainous areas, coastal marshes, and urban areas, where some homeowners consider them to be pests. As a result of escapes and deliberate introductions in the mid-20th century, raccoons are now also distributed across much of mainland Europe, Caucasus, and Japan.
|
8 |
+
|
9 |
+
Though previously thought to be generally solitary, there is now evidence that raccoons engage in sex-specific social behavior. Related females often share a common area, while unrelated males live together in groups of up to four raccoons to maintain their positions against foreign males during the mating season, and other potential invaders. Home range sizes vary anywhere from 3 hectares (7.4 acres) for females in cities to 5,000 hectares (12,000 acres) for males in prairies. After a gestation period of about 65 days, two to five young, known as "kits", are born in spring. The kits are subsequently raised by their mother until dispersal in late fall. Although captive raccoons have been known to live over 20 years, their life expectancy in the wild is only 1.8 to 3.1 years. In many areas, hunting and vehicular injury are the two most common causes of death.
|
10 |
+
|
11 |
+
Names for the species include the common raccoon,[4] North American raccoon,[5] and northern raccoon.[6] The word "raccoon" was adopted into English from the native Powhatan term, as used in the Colony of Virginia. It was recorded on John Smith's list of Powhatan words as aroughcun, and on that of William Strachey as arathkone.[7] It has also been identified as a reflex of a Proto-Algonquian root ahrah-koon-em, meaning "[the] one who rubs, scrubs and scratches with its hands".[8] The word is sometimes spelled as racoon.[9]
|
12 |
+
|
13 |
+
Spanish colonists adopted the Spanish word mapache from the Nahuatl mapachtli of the Aztecs, meaning "[the] one who takes everything in its hands".[10] In many languages, the raccoon is named for its characteristic dousing behavior in conjunction with that language's term for bear, for example Waschbär ('wash-bear') in German, Huan Xiong (浣熊 'wash-bear') in Chinese, dvivón róchetz (דביבון רוחץ 'washing-bear[DIM]') in Hebrew, orsetto lavatore ('little washer bear') in Italian, and araiguma (洗熊 (あらいぐま) 'washing-bear') in Japanese. Alternatively, only the washing behavior might be referred to, as in Russian poloskun (полоскун, 'rinser').
|
14 |
+
|
15 |
+
The colloquial abbreviation coon is used in words like coonskin for fur clothing and in phrases like old coon as a self-designation of trappers.[11][12] In the 1830s, the United States Whig Party used the raccoon as an emblem, causing them to be pejoratively known as "coons" by their political opponents, who saw them as too sympathetic to African-Americans. Soon after that the term became an ethnic slur,[13] especially in use between 1880 and 1920 (see coon song), and the term is still considered offensive.[14]
|
16 |
+
|
17 |
+
In the first decades after its discovery by the members of the expedition of Christopher Columbus, who were the first Europeans to leave a written record about the species, taxonomists thought the raccoon was related to many different species, including dogs, cats, badgers and particularly bears.[15] Carl Linnaeus, the father of modern taxonomy, placed the raccoon in the genus Ursus, first as Ursus cauda elongata ("long-tailed bear") in the second edition of his Systema Naturae (1740), then as Ursus Lotor ("washer bear") in the tenth edition (1758–59).[16][17] In 1780, Gottlieb Conrad Christian Storr placed the raccoon in its own genus Procyon, which can be translated as either "before the dog" or "doglike".[18][19] It is also possible that Storr had its nocturnal lifestyle in mind and chose the star Procyon as eponym for the species.[20][21]
|
18 |
+
|
19 |
+
Based on fossil evidence from Russia and Bulgaria, the first known members of the family Procyonidae lived in Europe in the late Oligocene about 25 million years ago.[22] Similar tooth and skull structures suggest procyonids and weasels share a common ancestor, but molecular analysis indicates a closer relationship between raccoons and bears.[23] After the then-existing species crossed the Bering Strait at least six million years later in the early Miocene, the center of its distribution was probably in Central America.[24] Coatis (Nasua and Nasuella) and raccoons (Procyon) have been considered to share common descent from a species in the genus Paranasua present between 5.2 and 6.0 million years ago.[25] This assumption, based on morphological comparisons of fossils, conflicts with a 2006 genetic analysis which indicates raccoons are more closely related to ringtails.[26] Unlike other procyonids, such as the crab-eating raccoon (Procyon cancrivorus), the ancestors of the common raccoon left tropical and subtropical areas and migrated farther north about 2.5 million years ago, in a migration that has been confirmed by the discovery of fossils in the Great Plains dating back to the middle of the Pliocene.[27][25] Its most recent ancestor was likely Procyon rexroadensis, a large Blancan raccoon from the Rexroad Formation characterized by its narrow back teeth and large lower jaw.[28]
|
20 |
+
|
21 |
+
As of 2005, Mammal Species of the World recognizes 22 subspecies of raccoons.[29] Four of these subspecies living only on small Central American and Caribbean islands were often regarded as distinct species after their discovery. These are the Bahamian raccoon and Guadeloupe raccoon, which are very similar to each other; the Tres Marias raccoon, which is larger than average and has an angular skull; and the extinct Barbados raccoon. Studies of their morphological and genetic traits in 1999, 2003 and 2005 led all these island raccoons to be listed as subspecies of the common raccoon in Mammal Species of the World's third edition. A fifth island raccoon population, the Cozumel raccoon, which weighs only 3 to 4 kg (6.6 to 8.8 lb) and has notably small teeth, is still regarded as a separate species.[30][31][32][33]
|
22 |
+
|
23 |
+
The four smallest raccoon subspecies, with a typical weight of 1.8 to 2.7 kg (4.0 to 6.0 lb), live along the southern coast of Florida and on the adjacent islands; an example is the Ten Thousand Islands raccoon (Procyon lotor marinus).[34] Most of the other 15 subspecies differ only slightly from each other in coat color, size and other physical characteristics.[35][36] The two most widespread subspecies are the eastern raccoon (Procyon lotor lotor) and the Upper Mississippi Valley raccoon (Procyon lotor hirtus). Both share a comparatively dark coat with long hairs, but the Upper Mississippi Valley raccoon is larger than the eastern raccoon. The eastern raccoon occurs in all U.S. states and Canadian provinces to the north of South Carolina and Tennessee. The adjacent range of the Upper Mississippi Valley raccoon covers all U.S. states and Canadian provinces to the north of Louisiana, Texas and New Mexico.[37]
|
24 |
+
|
25 |
+
The taxonomic identity of feral raccoons inhabiting Central Europe, the Caucasus and Japan is unknown, as the founding populations consisted of uncategorized specimens from zoos and fur farms.[38]
|
26 |
+
|
27 |
+
brachyurus (Wiegmann, 1837)
|
28 |
+
fusca (Burmeister, 1850)
|
29 |
+
gularis (C. E. H. Smith, 1848)
|
30 |
+
melanus (J. E. Gray, 1864)
|
31 |
+
obscurus (Wiegmann, 1837)
|
32 |
+
rufescens (de Beaux, 1910)
|
33 |
+
vulgaris (Tiedemann, 1808)
|
34 |
+
|
35 |
+
dickeyi (Nelson and Goldman, 1931)
|
36 |
+
mexicana (Baird, 1858)
|
37 |
+
shufeldti (Nelson and Goldman, 1931)
|
38 |
+
|
39 |
+
minor (Miller, 1911)
|
40 |
+
varius (Nelson and Goldman, 1930)
|
41 |
+
|
42 |
+
Head to hindquarters, raccoons measure between 40 and 70 cm (16 and 28 in), not including the bushy tail which can measure between 20 and 40 cm (8 and 16 in), but is usually not much longer than 25 cm (10 in).[62][63][64] The shoulder height is between 23 and 30 cm (9 and 12 in).[65] The body weight of an adult raccoon varies considerably with habitat, making the raccoon one of the most variably sized mammals. It can range from 5 to 26 kilograms (10 to 60 lb), but is usually between 5 and 12 kilograms (10 and 30 lb). The smallest specimens live in southern Florida, while those near the northern limits of the raccoon's range tend to be the largest (see Bergmann's rule).[66] Males are usually 15 to 20% heavier than females.[67] At the beginning of winter, a raccoon can weigh twice as much as in spring because of fat storage.[68][69][70] The largest recorded wild raccoon weighed 28.4 kg (62.6 lb) and measured 140 cm (55 in) in total length, by far the largest size recorded for a procyonid.[71][72]
|
44 |
+
|
45 |
+
The most characteristic physical feature of the raccoon is the area of black fur around the eyes, which contrasts sharply with the surrounding white face coloring. This is reminiscent of a "bandit's mask" and has thus enhanced the animal's reputation for mischief.[73][74] The slightly rounded ears are also bordered by white fur. Raccoons are assumed to recognize the facial expression and posture of other members of their species more quickly because of the conspicuous facial coloration and the alternating light and dark rings on the tail.[75][76][77] The dark mask may also reduce glare and thus enhance night vision.[76][77] On other parts of the body, the long and stiff guard hairs, which shed moisture, are usually colored in shades of gray and, to a lesser extent, brown.[78] Raccoons with a very dark coat are more common in the German population because individuals with such coloring were among those initially released to the wild.[79] The dense underfur, which accounts for almost 90% of the coat, insulates against cold weather and is composed of 2 to 3 cm (0.8 to 1.2 in) long hairs.[78]
|
46 |
+
|
47 |
+
The raccoon, whose method of locomotion is usually considered to be plantigrade, can stand on its hind legs to examine objects with its front paws.[80][81] As raccoons have short legs compared to their compact torso, they are usually not able either to run quickly or jump great distances.[82][83] Their top speed over short distances is 16 to 24 km/h (10 to 15 mph).[84][85] Raccoons can swim with an average speed of about 5 km/h (3 mph) and can stay in the water for several hours.[86][83] For climbing down a tree headfirst—an unusual ability for a mammal of its size—a raccoon rotates its hind feet so they are pointing backwards.[87][83] Raccoons have a dual cooling system to regulate their temperature; that is, they are able to both sweat and pant for heat dissipation.[88][89]
|
48 |
+
|
49 |
+
Raccoon skulls have a short and wide facial region and a voluminous braincase. The facial length of the skull is less than the cranial, and their nasal bones are short and quite broad. The auditory bullae are inflated in form, and the sagittal crest is weakly developed.[90] The dentition—40 teeth with the dental formula 3.1.4.2 / 3.1.4.2 (upper/lower, per side)—is adapted to their omnivorous diet: the carnassials are not as sharp and pointed as those of a full-time carnivore, but the molars are not as wide as those of a herbivore.[91] The penis bone of males is about 10 cm (4 in) long and strongly bent at the front end,[92][93] and its shape can be used to distinguish juvenile males from mature males.[94][95][96] Seven of the thirteen identified vocal calls are used in communication between the mother and her kits, one of these being the birdlike twittering of newborns.[97][98][89]
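The total of 40 teeth follows directly from the dental formula, counting the upper and lower rows on both sides of the mouth; as a quick check of the arithmetic:

\[ 2 \times \big[ (3+1+4+2)_{\text{upper}} + (3+1+4+2)_{\text{lower}} \big] = 2 \times (10 + 10) = 40 \]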
|
50 |
+
|
51 |
+
The most important sense for the raccoon is its sense of touch.[99][100][101] The "hyper sensitive"[100] front paws are protected by a thin horny layer that becomes pliable when wet.[102][103] The five digits of the paws have no webbing between them, which is unusual for a carnivoran.[104] Almost two-thirds of the area responsible for sensory perception in the raccoon's cerebral cortex is specialized for the interpretation of tactile impulses, more than in any other studied animal.[105] They are able to identify objects before touching them with vibrissae located above their sharp, nonretractable claws.[80][101] The raccoon's paws lack an opposable thumb; thus, it does not have the agility of the hands of primates.[101][103] There is no observed negative effect on tactile perception when a raccoon stands in water below 10 °C (50 °F) for hours.[106]
|
52 |
+
|
53 |
+
Raccoons are thought to be color blind or at least poorly able to distinguish color, though their eyes are well-adapted for sensing green light.[107][108][109] Although their accommodation of 11 dioptre is comparable to that of humans and they see well in twilight because of the tapetum lucidum behind the retina, visual perception is of subordinate importance to raccoons because of their poor long-distance vision.[110][111][112] In addition to being useful for orientation in the dark, their sense of smell is important for intraspecific communication. Glandular secretions (usually from their anal glands), urine and feces are used for marking.[113][114][115] With their broad auditory range, they can perceive tones up to 50–85 kHz as well as quiet noises, like those produced by earthworms underground.[116][117]
|
54 |
+
|
55 |
+
Zoologist Clinton Hart Merriam described raccoons as "clever beasts", remarking that "in certain directions their cunning surpasses that of the fox". The animal's intelligence gave rise to the epithet "sly coon".[118] Only a few studies have been undertaken to determine the mental abilities of raccoons, most of them based on the animal's sense of touch. In a study by the ethologist H. B. Davis in 1908, raccoons were able to open 11 of 13 complex locks in fewer than 10 tries and had no problems repeating the action when the locks were rearranged or turned upside down. Davis concluded that they understood the abstract principles of the locking mechanisms and their learning speed was equivalent to that of rhesus macaques.[119]
|
56 |
+
|
57 |
+
Studies in 1963, 1973, 1975 and 1992 that concentrated on raccoon memory showed that they can remember the solutions to tasks for at least three years.[120] In a study by B. Pohl in 1992, raccoons were able to instantly differentiate between identical and different symbols three years after the short initial learning phase.[120] Stanislas Dehaene reports in his book The Number Sense that raccoons can distinguish boxes containing two or four grapes from those containing three.[121] In research by Suzana Herculano-Houzel and other neuroscientists, raccoons have been found to be comparable to primates in density of neurons in the cerebral cortex, which they have proposed to be a neuroanatomical indicator of intelligence.[122][123]
|
58 |
+
|
59 |
+
Studies in the 1990s by the ethologists Stanley D. Gehrt and Ulf Hohmann suggest that raccoons engage in sex-specific social behaviors and are not typically solitary, as was previously thought.[124][125] Related females often live in a so-called "fission-fusion society"; that is, they share a common area and occasionally meet at feeding or resting grounds.[126][127] Unrelated males often form loose male social groups to maintain their position against foreign males during the mating season—or against other potential invaders.[128] Such a group does not usually consist of more than four individuals.[129][130] Since some males show aggressive behavior towards unrelated kits, mothers will isolate themselves from other raccoons until their kits are big enough to defend themselves.[131]
|
60 |
+
|
61 |
+
With respect to these three different modes of life prevalent among raccoons, Hohmann called their social structure a "three-class society".[132] Samuel I. Zeveloff, professor of zoology at Weber State University and author of the book Raccoons: A Natural History, is more cautious in his interpretation and concludes at least the females are solitary most of the time and, according to Erik K. Fritzell's study in North Dakota in 1978, males in areas with low population densities are solitary as well.[133]
|
62 |
+
|
63 |
+
The shape and size of a raccoon's home range varies depending on age, sex, and habitat, with adults claiming areas more than twice as large as juveniles.[134] While the size of home ranges in the habitat of North Dakota's prairies lie between 7 and 50 km2 (3 and 20 sq mi) for males and between 2 and 16 km2 (1 and 6 sq mi) for females, the average size in a marsh at Lake Erie was 0.5 km2 (0.19 sq mi).[135] Irrespective of whether the home ranges of adjacent groups overlap, they are most likely not actively defended outside the mating season if food supplies are sufficient.[136] Odor marks on prominent spots are assumed to establish home ranges and identify individuals.[115] Urine and feces left at shared raccoon latrines may provide additional information about feeding grounds, since raccoons were observed to meet there later for collective eating, sleeping and playing.[137]
|
64 |
+
|
65 |
+
Concerning the general behavior patterns of raccoons, Gehrt points out that "typically you'll find 10 to 15 percent that will do the opposite"[138] of what is expected.
|
66 |
+
|
67 |
+
Though usually nocturnal, the raccoon is sometimes active in daylight to take advantage of available food sources.[139][140] Its diet consists of about 40% invertebrates, 33% plant material and 27% vertebrates.[141] Since its diet consists of such a variety of different foods, Zeveloff argues the raccoon "may well be one of the world's most omnivorous animals".[142] While its diet in spring and early summer consists mostly of insects, worms, and other animals already available early in the year, it prefers fruits and nuts, such as acorns and walnuts, which emerge in late summer and autumn, and represent a rich calorie source for building up fat needed for winter.[143][144] Contrary to popular belief, raccoons only occasionally eat active or large prey, such as birds and mammals.
|
68 |
+
They prefer prey that is easier to catch, specifically fish, amphibians and bird eggs.[145] Raccoons are voracious predators of eggs and hatchlings in both bird and reptile nests, to such a degree that, for threatened prey species, raccoons may need to be removed from the area or nests may need to be relocated to mitigate the effect of their predation (e.g. in the case of some globally threatened turtles).[146][147][148][149][150] When food is plentiful, raccoons can develop strong individual preferences for specific foods.[69] In the northern parts of their range, raccoons go into a winter rest, reducing their activity drastically as long as a permanent snow cover makes searching for food impossible.[151]
|
69 |
+
|
70 |
+
One aspect of raccoon behavior is so well known that it gives the animal part of its scientific name, Procyon lotor; "lotor" is neo-Latin for "washer". In the wild, raccoons often dabble for underwater food near the shore-line. They then often pick up the food item with their front paws to examine it and rub the item, sometimes to remove unwanted parts. This gives the appearance of the raccoon "washing" the food. The tactile sensitivity of raccoons' paws is increased if this rubbing action is performed underwater, since the water softens the hard layer covering the paws.[100][152] However, the behavior observed in captive raccoons in which they carry their food to water to "wash" or douse it before eating has not been observed in the wild.[153][154] Naturalist Georges-Louis Leclerc, Comte de Buffon, believed that raccoons do not have adequate saliva production to moisten food thereby necessitating dousing, but this hypothesis is now considered to be incorrect.[152][153][155][156] Captive raccoons douse their food more frequently when a watering hole with a layout similar to a stream is not farther away than 3 m (10 ft).[156] The widely accepted theory is that dousing in captive raccoons is a fixed action pattern from the dabbling behavior performed when foraging at shores for aquatic foods.[152][156][157][158] This is supported by the observation that aquatic foods are doused more frequently. Cleaning dirty food does not seem to be a reason for "washing".[156] Experts have cast doubt on the veracity of observations of wild raccoons dousing food.[159][160][161][needs update?]
|
71 |
+
|
72 |
+
Raccoons usually mate in a period triggered by increasing daylight between late January and mid-March.[162][163][164] However, there are large regional differences which are not completely explicable by solar conditions. For example, while raccoons in southern states typically mate later than average, the mating season in Manitoba also peaks later than usual in March and extends until June.[164] During the mating season, males restlessly roam their home ranges in search of females in an attempt to court them during the three- to four-day period when conception is possible. These encounters will often occur at central meeting places.[165][166][167] Copulation, including foreplay, can last over an hour and is repeated over several nights.[168] The weaker members of a male social group also are assumed to get the opportunity to mate, since the stronger ones cannot mate with all available females.[169] In a study in southern Texas during the mating seasons from 1990 to 1992, about one third of all females mated with more than one male.[170] If a female does not become pregnant or if she loses her kits early, she will sometimes become fertile again 80 to 140 days later.[171][172][173]
|
73 |
+
|
74 |
+
After usually 63 to 65 days of gestation (although anywhere from 54 to 70 days is possible), a litter of typically two to five young is born.[174][175] The average litter size varies widely with habitat, ranging from 2.5 in Alabama to 4.8 in North Dakota.[176][177] Larger litters are more common in areas with a high mortality rate, due, for example, to hunting or severe winters.[178][177] While male yearlings usually reach their sexual maturity only after the main mating season, female yearlings can compensate for high mortality rates and may be responsible for about 50% of all young born in a year.[179][180][181] Males have no part in raising young.[129][182][183] The kits (also called "cubs") are blind and deaf at birth, but their mask is already visible against their light fur.[184][185] The birth weight of the about 10 cm (4 in)-long kits is between 60 and 75 g (2.1 and 2.6 oz).[185] Their ear canals open after around 18 to 23 days, a few days before their eyes open for the first time.[186] Once the kits weigh about 1 kg (2 lb), they begin to explore outside the den, consuming solid food for the first time after six to nine weeks.[187][188] After this point, their mother suckles them with decreasing frequency; they are usually weaned by 16 weeks.[189] In the fall, after their mother has shown them dens and feeding grounds, the juvenile group splits up.[190] [191] While many females will stay close to the home range of their mother, males can sometimes move more than 20 km (12 mi) away.[192][193] This is considered an instinctive behavior, preventing inbreeding.[194][195] However, mother and offspring may share a den during the first winter in cold areas.[191]
|
75 |
+
|
76 |
+
Captive raccoons have been known to live for more than 20 years.[73] However, the species' life expectancy in the wild is only 1.8 to 3.1 years, depending on local conditions such as traffic volume, hunting, and weather severity.[196] It is not unusual for only half of the young born in one year to survive a full year.[179][197] After this point, the annual mortality rate drops to between 10% and 30%.[179] Young raccoons are vulnerable to losing their mother and to starvation, particularly in long and cold winters.[198] The most frequent natural cause of death in the North American raccoon population is distemper, which can reach epidemic proportions and kill most of a local raccoon population.[199] In areas with heavy vehicular traffic and extensive hunting, these factors can account for up to 90% of all deaths of adult raccoons.[200] The most important natural predators of the raccoon are bobcats, coyotes, and great horned owls, the latter mainly preying on young raccoons but capable of killing adults in some cases.[201][202][203][204][205][206] In Florida, they have been reported to fall victim to larger carnivores like American black bears and cougars, and these species may also be a threat on occasion in other areas.[207][208][209] Where still present, gray wolves may occasionally take raccoons as a supplemental prey item.[210][211] Also in the southeast, they are among the favored prey for adult American alligators.[212][213] On occasion, both bald and golden eagles will prey on raccoons.[214][215] In the tropics, raccoons are known to fall prey to smaller eagles such as ornate hawk-eagles and black hawk-eagles, although it is not clear whether adults or merely juvenile raccoons are taken by these.[216][217] In rare cases of overlap, they may fall victim to carnivores ranging from species averaging smaller than themselves, such as fishers, to those as large and formidable as jaguars in Mexico.[218][219] In their introduced range in the former Soviet Union, their main predators are wolves, lynxes and Eurasian eagle-owls.[220] However, predation is not a significant cause of death, especially because larger predators have been exterminated in many areas inhabited by raccoons.[221]
|
77 |
+
|
78 |
+
Although they have thrived in sparsely wooded areas in the last decades, raccoons depend on vertical structures to climb when they feel threatened.[222][223] Therefore, they avoid open terrain and areas with high concentrations of beech trees, as beech bark is too smooth to climb.[224] Tree hollows in old oaks or other trees and rock crevices are preferred by raccoons as sleeping, winter and litter dens. If such dens are unavailable or accessing them is inconvenient, raccoons use burrows dug by other mammals, dense undergrowth or tree crotches.[225][226] In a study in the Solling range of hills in Germany, more than 60% of all sleeping places were used only once, but those used at least ten times accounted for about 70% of all uses.[227] Since amphibians, crustaceans, and other animals around the shore of lakes and rivers are an important part of the raccoon's diet, lowland deciduous or mixed forests abundant with water and marshes sustain the highest population densities.[228][229] While population densities range from 0.5 to 3.2 animals per square kilometer (1.3 to 8.3 animals per square mile) in prairies and do not usually exceed 6 animals per square kilometer (15.5 animals per square mile) in upland hardwood forests, more than 20 raccoons per square kilometer (51.8 animals per square mile) can live in lowland forests and marshes.[228][230]
|
79 |
+
|
80 |
+
Raccoons are common throughout North America from Canada to Panama, where the subspecies Procyon lotor pumilus coexists with the crab-eating raccoon (Procyon cancrivorus).[231][232] The population on Hispaniola was exterminated as early as 1513 by Spanish colonists who hunted them for their meat.[233] Raccoons were also exterminated in Cuba and Jamaica, where the last sightings were reported in 1687.[234] When they were still considered separate species, the Bahamas raccoon, Guadeloupe raccoon and Tres Marias raccoon were classified as endangered by the IUCN in 1996.[235]
|
81 |
+
|
82 |
+
There is archeological evidence that in pre-Columbian times raccoons were numerous only along rivers and in the woodlands of the Southeastern United States.[236] As raccoons were not mentioned in earlier reports of pioneers exploring the central and north-central parts of the United States,[237] their initial spread may have begun a few decades before the 20th century. Since the 1950s, raccoons have expanded their range from Vancouver Island—formerly the northernmost limit of their range—far into the northern portions of the four south-central Canadian provinces.[238] New habitats which have recently been occupied by raccoons (aside from urban areas) include mountain ranges, such as the Western Rocky Mountains, prairies and coastal marshes.[239] After a population explosion starting in the 1940s, the estimated number of raccoons in North America in the late 1980s was 15 to 20 times higher than in the 1930s, when raccoons were comparatively rare.[240] Urbanization, the expansion of agriculture, deliberate introductions, and the extermination of natural predators of the raccoon have probably caused this increase in abundance and distribution.[241]
|
83 |
+
|
84 |
+
As a result of escapes and deliberate introductions in the mid-20th century, the raccoon is now distributed in several European and Asian countries. Sightings have occurred in all the countries bordering Germany, which hosts the largest population outside of North America.[242] Another stable population exists in northern France, where several pet raccoons were released by members of the U.S. Air Force near the Laon-Couvron Air Base in 1966.[243] Furthermore, raccoons have been known to be in the area around Madrid since the early 1970s. In 2013, the city authorized "the capture and death of any specimen".[244] It is also present in Italy, with one reproductive population in Lombardy.[245]
|
85 |
+
|
86 |
+
About 1,240 animals were released in nine regions of the former Soviet Union between 1936 and 1958 for the purpose of establishing a population to be hunted for their fur. Two of these introductions were successful—one in the south of Belarus between 1954 and 1958, and another in Azerbaijan between 1941 and 1957. With a seasonal harvest of between 1,000 and 1,500 animals, in 1974 the estimated size of the population distributed in the Caucasus region was around 20,000 animals and the density was four animals per square kilometer (10 animals per square mile).[246]
|
87 |
+
|
88 |
+
In Japan, up to 1,500 raccoons were imported as pets each year after the success of the anime series Rascal the Raccoon (1977). In 2004, the descendants of discarded or escaped animals lived in 42 of 47 prefectures.[247][248][249] The range of raccoons in the wild in Japan grew from 17 prefectures in 2000 to all 47 prefectures in 2008.[250] It is estimated that raccoons cause thirty million yen (~$275,000) of agricultural damage on Hokkaido alone.[251]
|
89 |
+
|
90 |
+
In Germany—where the raccoon is called the Waschbär (literally, "wash-bear" or "washing bear") due to its habit of "dousing" food in water—two pairs of pet raccoons were released into the German countryside at the Edersee reservoir in the north of Hesse in April 1934 by a forester upon request of their owner, a poultry farmer.[252] He released them two weeks before receiving permission from the Prussian hunting office to "enrich the fauna."[253] Several prior attempts to introduce raccoons in Germany were not successful.[254][255] A second population was established in eastern Germany in 1945 when 25 raccoons escaped from a fur farm at Wolfshagen (today district of Altlandsberg), east of Berlin, after an air strike. The two populations are parasitologically distinguishable: 70% of the raccoons of the Hessian population are infected with the roundworm Baylisascaris procyonis, but none of the Brandenburgian population has the parasite.[256] The estimated number of raccoons was 285 animals in the Hessian region in 1956, over 20,000 animals in the Hessian region in 1970 and between 200,000 and 400,000 animals in the whole of Germany in 2008.[199][254] By 2012 it was estimated that Germany now had more than a million raccoons.[257]
|
91 |
+
|
92 |
+
The raccoon was a protected species in Germany, but has been declared a game animal in 14 of the 16 states since 1954.[258] Hunters and environmentalists argue that the raccoon spreads uncontrollably, threatens protected bird species and displaces native carnivorans.[79] This view is opposed by the zoologist Frank-Uwe Michler, who finds no evidence that a high population density of raccoons has negative effects on the biodiversity of an area.[79] Hohmann holds that extensive hunting cannot be justified by the absence of natural predators, because predation is not a significant cause of death in the North American raccoon population.[259]
|
93 |
+
|
94 |
+
Raccoons are extensively hunted in Germany, as they are seen as an invasive species and as pests.[260][261] In the 1990s, only about 400 raccoons were hunted yearly. This increased to 67,700 by the 2010/11 hunting season, and the tally broke the 100,000 barrier in 2013. During the 2015/16 hunting season, the tally was 128,100 animals, 60 percent of which were taken in the federal state of Hesse.[262]
|
95 |
+
|
96 |
+
Experiments in acclimatising raccoons into the USSR began in 1936, and were repeated a further 25 times until 1962. Overall, 1,222 individuals were released, 64 of which came from zoos and fur farms (38 of them having been imports from western Europe). The remainder originated from a population previously established in Transcaucasia. The range of Soviet raccoons was never single or continuous, as they were often introduced to different locations far from each other. All introductions into the Russian Far East failed; melanistic raccoons were released on Petrov Island near Vladivostok and some areas of southern Primorsky Krai, but died. In Middle Asia, raccoons were released in Kyrgyzstan's Jalal-Abad Province, though they were later recorded as "practically absent" there in January 1963. A large and stable raccoon population (yielding 1000–1500 catches a year) was established in Azerbaijan after an introduction to the area in 1937. Raccoons apparently survived an introduction near Terek, along the Sulak River into the Dagestani lowlands. Attempts to settle raccoons on the Kuban River's left tributary and Kabardino-Balkaria were unsuccessful. A successful acclimatization occurred in Belarus, where three introductions (consisting of 52, 37 and 38 individuals in 1954 and 1958) took place. By January 1, 1963, 700 individuals were recorded in the country.[263]
|
97 |
+
|
98 |
+
Due to its adaptability, the raccoon has been able to use urban areas as a habitat. The first sightings were recorded in a suburb of Cincinnati in the 1920s. Since the 1950s, raccoons have been present in metropolitan areas like Washington, DC, Chicago, and Toronto.[264] Since the 1960s, Kassel has hosted Europe's first and densest population in a large urban area, with about 50 to 150 animals per square kilometer (130 to 390 animals per square mile), a figure comparable to those of urban habitats in North America.[264][265] Home range sizes of urban raccoons are only 3 to 40 hectares (7.5 to 100 acres) for females and 8 to 80 hectares (20 to 200 acres) for males.[266] In small towns and suburbs, many raccoons sleep in a nearby forest after foraging in the settlement area.[264][267] Fruit and insects in gardens and leftovers in municipal waste are easily available food sources.[268] Furthermore, a large number of additional sleeping areas exist in these areas, such as hollows in old garden trees, cottages, garages, abandoned houses, and attics. The percentage of urban raccoons sleeping in abandoned or occupied houses varies from 15% in Washington, DC (1991) to 43% in Kassel (2003).[267][265]
|
99 |
+
|
100 |
+
Raccoons can carry rabies, a lethal disease caused by the neurotropic rabies virus carried in the saliva and transmitted by bites. Its spread began in Florida and Georgia in the 1950s and was facilitated by the introduction of infected individuals to Virginia and North Dakota in the late 1970s.[269] Of the 6,940 documented rabies cases reported in the United States in 2006, 2,615 (37.7%) were in raccoons.[270] The U.S. Department of Agriculture, as well as local authorities in several U.S. states and Canadian provinces, has developed oral vaccination programs to fight the spread of the disease in endangered populations.[271][272][273] Only one human fatality has been reported after transmission of the rabies virus strain commonly known as "raccoon rabies".[274] Among the main symptoms for rabies in raccoons are a generally sickly appearance, impaired mobility, abnormal vocalization, and aggressiveness.[275] There may be no visible signs at all, however, and most individuals do not show the aggressive behavior seen in infected canids; rabid raccoons will often retire to their dens instead.[79][256][275] Organizations like the U.S. Forest Service encourage people to stay away from animals with unusual behavior or appearance, and to notify the proper authorities, such as an animal control officer from the local health department.[276][277] Since healthy animals, especially nursing mothers, will occasionally forage during the day, daylight activity is not a reliable indicator of illness in raccoons.[139][140]
|
101 |
+
|
102 |
+
Unlike rabies and at least a dozen other pathogens carried by raccoons, distemper, an epizootic virus, does not affect humans.[278][279] This disease is the most frequent natural cause of death in the North American raccoon population and affects individuals of all age groups.[199] For example, 94 of 145 raccoons died during an outbreak in Clifton, Ohio, in 1968.[280] It may occur along with a following inflammation of the brain (encephalitis), causing the animal to display rabies-like symptoms.[269] In Germany, the first eight cases of distemper were reported in 2007.[199]
|
103 |
+
|
104 |
+
Some of the most important bacterial diseases which affect raccoons are leptospirosis, listeriosis, tetanus, and tularemia. Although internal parasites weaken their immune systems, well-fed individuals can carry a great many roundworms in their digestive tracts without showing symptoms.[281][279] The larvae of the roundworm Baylisascaris procyonis, which can be contained in the feces and seldom causes a severe illness in humans, can be ingested when cleaning raccoon latrines without wearing breathing protection.[282]
|
105 |
+
|
106 |
+
While not endemic, the worm Trichinella does infect raccoons,[283] and undercooked raccoon meat has caused trichinosis in humans.[284]
|
107 |
+
|
108 |
+
Trematode Metorchis conjunctus can also infect raccoons.[285]
|
109 |
+
|
110 |
+
The increasing number of raccoons in urban areas has resulted in diverse reactions in humans, ranging from outrage at their presence to deliberate feeding.[286] Some wildlife experts and most public authorities caution against feeding wild animals because they might become increasingly obtrusive and dependent on humans as a food source.[287] Other experts challenge such arguments and give advice on feeding raccoons and other wildlife in their books.[288][289] Raccoons without a fear of humans are a concern to those who attribute this trait to rabies, but scientists point out this behavior is much more likely to be a behavioral adjustment to living in habitats with regular contact to humans for many generations.[256][290] Raccoons usually do not prey on domestic cats and dogs, but isolated cases of killings have been reported.[291] Attacks on pets may also target their owners.[292]
|
111 |
+
|
112 |
+
While overturned waste containers and raided fruit trees are just a nuisance to homeowners, it can cost several thousand dollars to repair damage caused by the use of attic space as dens.[293] Relocating or killing raccoons without a permit is forbidden in many urban areas on grounds of animal welfare. These methods usually only solve problems with particularly wild or aggressive individuals, since adequate dens are either known to several raccoons or will quickly be rediscovered.[178][277][294] Loud noises, flashing lights and unpleasant odors have proven particularly effective in driving away a mother and her kits before they would normally leave the nesting place (when the kits are about eight weeks old).[277][295] Typically, though, only precautionary measures to restrict access to food waste and den sites are effective in the long term.[277][296][297]
|
113 |
+
|
114 |
+
Among all fruits and crops cultivated in agricultural areas, sweet corn in its milk stage is particularly popular among raccoons.[298][299] In a two-year study by Purdue University researchers, published in 2004, raccoons were responsible for 87% of the damage to corn plants.[300] Like other predators, raccoons searching for food can break into poultry houses to feed on chickens, ducks, their eggs, or food.[141][277][301]
|
115 |
+
|
116 |
+
Since raccoons in high mortality areas have a higher rate of reproduction, extensive hunting may not solve problems with raccoon populations. Older males also claim larger home ranges than younger ones, resulting in a lower population density.
|
117 |
+
|
118 |
+
In the mythology of the indigenous peoples of the Americas, the raccoon is the subject of folk tales.[302] Stories such as "How raccoons catch so many crayfish" from the Tuscarora centered on its skills at foraging.[303] In other tales, the raccoon played the role of the trickster which outsmarts other animals, like coyotes and wolves.[304] Among others, the Dakota Sioux believe the raccoon has natural spirit powers, since its mask resembled the facial paintings, two-fingered swashes of black and white, used during rituals to connect to spirit beings.[305] The Aztecs linked supernatural abilities especially to females, whose commitment to their young was associated with the role of wise women in their society.[306]
|
119 |
+
|
120 |
+
The raccoon also appears in Native American art across a wide geographic range. Petroglyphs with engraved raccoon tracks were found in Lewis Canyon, Texas;[307] at the Crow Hollow petroglyph site in Grayson County, Kentucky;[308] and in river drainages near Tularosa, New Mexico and San Francisco, California.[309] A true-to-detail figurine made of quartz, the Ohio Mound Builders' Stone Pipe, was found near the Scioto River. The meaning and significance of the Raccoon Priests Gorget, which features a stylized carving of a raccoon and was found at the Spiro Mounds, Oklahoma, remains unknown.[310][311]
|
121 |
+
|
122 |
+
In Western culture, several autobiographical novels about living with a raccoon have been written, mostly for children. The best-known is Sterling North's Rascal, which recounts how he raised a kit during World War I. In recent years, anthropomorphic raccoons played main roles in the animated television series The Raccoons, the computer-animated film Over the Hedge, the live action film Guardians of the Galaxy (and the comics that it was based upon) and the video game series Sly Cooper.
|
123 |
+
|
124 |
+
The fur of raccoons is used for clothing, especially for coats and coonskin caps. At present, it is the material used for the inaccurately named "sealskin" cap worn by the Royal Fusiliers of Great Britain.[312] Sporrans made of raccoon pelt and hide have sometimes been used as part of traditional Scottish highland men's apparel since the 18th century, especially in North America. Such sporrans may or may not be of the "full-mask" type.[313] Historically, Native American tribes not only used the fur for winter clothing, but also used the tails for ornament.[314] The famous Sioux leader Spotted Tail took his name from a raccoon skin hat with the tail attached he acquired from a fur trader. Since the late 18th century, various types of scent hounds, called "coonhounds", which are able to tree animals have been bred in the United States.[315] In the 19th century, when coonskins occasionally even served as means of payment, several thousand raccoons were killed each year in the United States.[316][317] This number rose quickly when automobile coats became popular after the turn of the 20th century. In the 1920s, wearing a raccoon coat was regarded as status symbol among college students.[318] Attempts to breed raccoons in fur farms in the 1920s and 1930s in North America and Europe turned out not to be profitable, and farming was abandoned after prices for long-haired pelts dropped in the 1940s.[319][320] Although raccoons had become rare in the 1930s, at least 388,000 were killed during the hunting season of 1934/35.[318][321]
|
125 |
+
|
126 |
+
After persistent population increases began in the 1940s, the seasonal coon hunting harvest reached about one million animals in 1946/47 and two million in 1962/63.[322] The broadcast of three television episodes about the frontiersman Davy Crockett and the film Davy Crockett, King of the Wild Frontier in 1954 and 1955 led to a high demand for coonskin caps in the United States, although it is unlikely either Crockett or the actor who played him, Fess Parker, actually wore a cap made from raccoon fur.[323] The seasonal hunt reached an all-time high with 5.2 million animals in 1976/77 and ranged between 3.2 and 4.7 million for most of the 1980s. In 1982, the average pelt price was $20.[324] As of 1987, the raccoon was identified as the most important wild furbearer in North America in terms of revenue.[325] In the first half of the 1990s, the seasonal hunt dropped to 0.9 from 1.9 million due to decreasing pelt prices.[326]
|
127 |
+
|
128 |
+
While primarily hunted for their fur, raccoons were also a source of food for Native Americans and early American settlers.[327][328] According to Ernest Thompson Seton, young specimens killed without a fight are palatable, whereas old raccoons caught after a lengthy battle are inedible.[329] Raccoon meat was extensively eaten during the early years of California, where it was sold in the San Francisco market for $1–3 apiece.[330] American slaves occasionally ate raccoon at Christmas, but it was not necessarily a dish of the poor or rural. The first edition of The Joy of Cooking, released in 1931, contained a recipe for preparing raccoon, and US President Calvin Coolidge's pet raccoon Rebecca was originally sent to be served at the White House Thanksgiving Dinner.[331][332][333] Although the idea of eating raccoons seems repulsive to most mainstream consumers since they see them as endearing, cute, and/or vermin, several thousand raccoons are still eaten each year in the United States, primarily in the Southern United States.[334][335][336][337]
|
129 |
+
|
130 |
+
Raccoons are sometimes kept as pets, which is discouraged by many experts because the raccoon is not a domesticated species. Raccoons may act unpredictably and aggressively and it is extremely difficult to teach them to obey commands.[338][339] In places where keeping raccoons as pets is not forbidden, such as in Wisconsin and other U.S. states, an exotic pet permit may be required.[340][341] One notable raccoon pet was Rebecca, kept by US president Calvin Coolidge.[342]
|
131 |
+
|
132 |
+
Their propensity for unruly behavior exceeds that of captive skunks, and they are even less trustworthy when allowed to roam freely. Because of their intelligence and nimble forelimbs, even inexperienced raccoons are easily capable of unscrewing jars, uncorking bottles and opening door latches, with more experienced specimens having been recorded to open door knobs.[118] Sexually mature raccoons often show aggressive natural behaviors such as biting during the mating season.[338][343] Neutering them at around five or six months of age decreases the chances of aggressive behavior developing.[344] Raccoons can become obese and suffer from other disorders due to poor diet and lack of exercise.[345] When fed with cat food over a long time period, raccoons can develop gout.[346] With respect to the research results regarding their social behavior, it is now required by law in Austria and Germany to keep at least two individuals to prevent loneliness.[347][348] Raccoons are usually kept in a pen (indoor or outdoor), also a legal requirement in Austria and Germany, rather than in the apartment where their natural curiosity may result in damage to property.[347][348][338][349][350]
|
133 |
+
|
134 |
+
When orphaned, it is possible for kits to be rehabilitated and reintroduced to the wild. However, it is uncertain whether they readapt well to life in the wild.[351] Feeding unweaned kits with cow's milk rather than a kitten replacement milk or a similar product can be dangerous to their health.[338][352]
|
135 |
+
|
en/4929.html.txt
ADDED
@@ -0,0 +1,135 @@
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
The raccoon (/rəˈkuːn/ or US: /ræˈkuːn/ (listen), Procyon lotor) is a medium-sized mammal native to North America. The raccoon is the largest of the procyonid family, having a body length of 40 to 70 cm (16 to 28 in) and a body weight of 5 to 26 kg (11 to 57 lb). Its grayish coat mostly consists of dense underfur which insulates it against cold weather. Three of the raccoon's most distinctive features are its extremely dexterous front paws, its facial mask, and its ringed tail, which are themes in the mythologies of the indigenous peoples of the Americas. Raccoons are noted for their intelligence, with studies showing that they are able to remember the solution to tasks for at least three years. They are usually nocturnal and omnivorous, eating about 40% invertebrates, 33% plants, and 27% vertebrates.
|
6 |
+
|
7 |
+
The original habitats of the raccoon are deciduous and mixed forests, but due to their adaptability they have extended their range to mountainous areas, coastal marshes, and urban areas, where some homeowners consider them to be pests. As a result of escapes and deliberate introductions in the mid-20th century, raccoons are now also distributed across much of mainland Europe, Caucasus, and Japan.
|
8 |
+
|
9 |
+
Though previously thought to be generally solitary, there is now evidence that raccoons engage in sex-specific social behavior. Related females often share a common area, while unrelated males live together in groups of up to four raccoons to maintain their positions against foreign males during the mating season, and other potential invaders. Home range sizes vary anywhere from 3 hectares (7.4 acres) for females in cities to 5,000 hectares (12,000 acres) for males in prairies. After a gestation period of about 65 days, two to five young, known as "kits", are born in spring. The kits are subsequently raised by their mother until dispersal in late fall. Although captive raccoons have been known to live over 20 years, their life expectancy in the wild is only 1.8 to 3.1 years. In many areas, hunting and vehicular injury are the two most common causes of death.
|
10 |
+
|
11 |
+
Names for the species include the common raccoon,[4] North American raccoon,[5] and northern raccoon.[6] The word "raccoon" was adopted into English from the native Powhatan term, as used in the Colony of Virginia. It was recorded on John Smith's list of Powhatan words as aroughcun, and on that of William Strachey as arathkone.[7] It has also been identified as a reflex of a Proto-Algonquian root ahrah-koon-em, meaning "[the] one who rubs, scrubs and scratches with its hands".[8] The word is sometimes spelled as racoon.[9]
|
12 |
+
|
13 |
+
Spanish colonists adopted the Spanish word mapache from the Nahuatl mapachtli of the Aztecs, meaning "[the] one who takes everything in its hands".[10] In many languages, the raccoon is named for its characteristic dousing behavior in conjunction with that language's term for bear, for example Waschbär ('wash-bear') in German, Huan Xiong (浣熊 'wash-bear') in Chinese, dvivón róchetz (דביבון רוחץ 'washing-bear[DIM]') in Hebrew, orsetto lavatore ('little washer bear') in Italian, and araiguma (洗熊 (あらいぐま) 'washing-bear') in Japanese. Alternatively, only the washing behavior might be referred to, as in Russian poloskun (полоскун, 'rinser').
|
14 |
+
|
15 |
+
The colloquial abbreviation coon is used in words like coonskin for fur clothing and in phrases like old coon as a self-designation of trappers.[11][12] In the 1830s, the United States Whig Party used the raccoon as an emblem, causing them to be pejoratively known as "coons" by their political opponents, who saw them as too sympathetic to African-Americans. Soon after that the term became an ethnic slur,[13] especially in use between 1880 and 1920 (see coon song), and the term is still considered offensive.[14]
|
16 |
+
|
17 |
+
In the first decades after its discovery by the members of the expedition of Christopher Columbus, who were the first Europeans to leave a written record about the species, taxonomists thought the raccoon was related to many different species, including dogs, cats, badgers and particularly bears.[15] Carl Linnaeus, the father of modern taxonomy, placed the raccoon in the genus Ursus, first as Ursus cauda elongata ("long-tailed bear") in the second edition of his Systema Naturae (1740), then as Ursus Lotor ("washer bear") in the tenth edition (1758–59).[16][17] In 1780, Gottlieb Conrad Christian Storr placed the raccoon in its own genus Procyon, which can be translated as either "before the dog" or "doglike".[18][19] It is also possible that Storr had its nocturnal lifestyle in mind and chose the star Procyon as eponym for the species.[20][21]
|
18 |
+
|
19 |
+
Based on fossil evidence from Russia and Bulgaria, the first known members of the family Procyonidae lived in Europe in the late Oligocene about 25 million years ago.[22] Similar tooth and skull structures suggest procyonids and weasels share a common ancestor, but molecular analysis indicates a closer relationship between raccoons and bears.[23] After the then-existing species crossed the Bering Strait at least six million years later in the early Miocene, the center of its distribution was probably in Central America.[24] Coatis (Nasua and Nasuella) and raccoons (Procyon) have been considered to share common descent from a species in the genus Paranasua present between 5.2 and 6.0 million years ago.[25] This assumption, based on morphological comparisons of fossils, conflicts with a 2006 genetic analysis which indicates raccoons are more closely related to ringtails.[26] Unlike other procyonids, such as the crab-eating raccoon (Procyon cancrivorus), the ancestors of the common raccoon left tropical and subtropical areas and migrated farther north about 2.5 million years ago, in a migration that has been confirmed by the discovery of fossils in the Great Plains dating back to the middle of the Pliocene.[27][25] Its most recent ancestor was likely Procyon rexroadensis, a large Blancan raccoon from the Rexroad Formation characterized by its narrow back teeth and large lower jaw.[28]
|
20 |
+
|
21 |
+
As of 2005, Mammal Species of the World recognizes 22 subspecies of raccoons.[29] Four of these subspecies living only on small Central American and Caribbean islands were often regarded as distinct species after their discovery. These are the Bahamian raccoon and Guadeloupe raccoon, which are very similar to each other; the Tres Marias raccoon, which is larger than average and has an angular skull; and the extinct Barbados raccoon. Studies of their morphological and genetic traits in 1999, 2003 and 2005 led all these island raccoons to be listed as subspecies of the common raccoon in Mammal Species of the World's third edition. A fifth island raccoon population, the Cozumel raccoon, which weighs only 3 to 4 kg (6.6 to 8.8 lb) and has notably small teeth, is still regarded as a separate species.[30][31][32][33]
The four smallest raccoon subspecies, with a typical weight of 1.8 to 2.7 kg (4.0 to 6.0 lb), live along the southern coast of Florida and on the adjacent islands; an example is the Ten Thousand Islands raccoon (Procyon lotor marinus).[34] Most of the other 15 subspecies differ only slightly from each other in coat color, size and other physical characteristics.[35][36] The two most widespread subspecies are the eastern raccoon (Procyon lotor lotor) and the Upper Mississippi Valley raccoon (Procyon lotor hirtus). Both share a comparatively dark coat with long hairs, but the Upper Mississippi Valley raccoon is larger than the eastern raccoon. The eastern raccoon occurs in all U.S. states and Canadian provinces to the north of South Carolina and Tennessee. The adjacent range of the Upper Mississippi Valley raccoon covers all U.S. states and Canadian provinces to the north of Louisiana, Texas and New Mexico.[37]
The taxonomic identity of feral raccoons inhabiting Central Europe, Caucasia and Japan is unknown, as the founding populations consisted of uncategorized specimens from zoos and fur farms.[38]
brachyurus (Wiegmann, 1837)
fusca (Burmeister, 1850)
gularis (C. E. H. Smith, 1848)
melanus (J. E. Gray, 1864)
obscurus (Wiegmann, 1837)
rufescens (de Beaux, 1910)
vulgaris (Tiedemann, 1808)
dickeyi (Nelson and Goldman, 1931)
mexicana (Baird, 1858)
shufeldti (Nelson and Goldman, 1931)
minor (Miller, 1911)
varius (Nelson and Goldman, 1930)
Head to hindquarters, raccoons measure between 40 and 70 cm (16 and 28 in), not including the bushy tail which can measure between 20 and 40 cm (8 and 16 in), but is usually not much longer than 25 cm (10 in).[62][63][64] The shoulder height is between 23 and 30 cm (9 and 12 in).[65] The body weight of an adult raccoon varies considerably with habitat, making the raccoon one of the most variably sized mammals. It can range from 5 to 26 kilograms (10 to 60 lb), but is usually between 5 and 12 kilograms (10 and 30 lb). The smallest specimens live in southern Florida, while those near the northern limits of the raccoon's range tend to be the largest (see Bergmann's rule).[66] Males are usually 15 to 20% heavier than females.[67] At the beginning of winter, a raccoon can weigh twice as much as in spring because of fat storage.[68][69][70] The largest recorded wild raccoon weighed 28.4 kg (62.6 lb) and measured 140 cm (55 in) in total length, by far the largest size recorded for a procyonid.[71][72]
The most characteristic physical feature of the raccoon is the area of black fur around the eyes, which contrasts sharply with the surrounding white face coloring. This is reminiscent of a "bandit's mask" and has thus enhanced the animal's reputation for mischief.[73][74] The slightly rounded ears are also bordered by white fur. Raccoons are assumed to recognize the facial expression and posture of other members of their species more quickly because of the conspicuous facial coloration and the alternating light and dark rings on the tail.[75][76][77] The dark mask may also reduce glare and thus enhance night vision.[76][77] On other parts of the body, the long and stiff guard hairs, which shed moisture, are usually colored in shades of gray and, to a lesser extent, brown.[78] Raccoons with a very dark coat are more common in the German population because individuals with such coloring were among those initially released to the wild.[79] The dense underfur, which accounts for almost 90% of the coat, insulates against cold weather and is composed of 2 to 3 cm (0.8 to 1.2 in) long hairs.[78]
The raccoon, whose method of locomotion is usually considered to be plantigrade, can stand on its hind legs to examine objects with its front paws.[80][81] As raccoons have short legs compared to their compact torso, they are usually not able either to run quickly or jump great distances.[82][83] Their top speed over short distances is 16 to 24 km/h (10 to 15 mph).[84][85] Raccoons can swim with an average speed of about 5 km/h (3 mph) and can stay in the water for several hours.[86][83] For climbing down a tree headfirst—an unusual ability for a mammal of its size—a raccoon rotates its hind feet so they are pointing backwards.[87][83] Raccoons have a dual cooling system to regulate their temperature; that is, they are able to both sweat and pant for heat dissipation.[88][89]
Raccoon skulls have a short and wide facial region and a voluminous braincase. The facial length of the skull is less than the cranial, and their nasal bones are short and quite broad. The auditory bullae are inflated in form, and the sagittal crest is weakly developed.[90] The dentition—40 teeth with the dental formula 3.1.4.2 / 3.1.4.2 (upper / lower, per side)—is adapted to their omnivorous diet: the carnassials are not as sharp and pointed as those of a full-time carnivore, but the molars are not as wide as those of a herbivore.[91] The penis bone of males is about 10 cm (4 in) long and strongly bent at the front end,[92][93] and its shape can be used to distinguish juvenile males from mature males.[94][95][96] Seven of the thirteen identified vocal calls are used in communication between the mother and her kits, one of these being the birdlike twittering of newborns.[97][98][89]
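As a quick arithmetic check (added here for clarity, not part of the source text; it assumes the usual convention that a dental formula lists incisors, canines, premolars and molars per side of the upper and lower jaw), the formula gives the 40 teeth stated above:

2 \times (3+1+4+2)_{\text{upper}} + 2 \times (3+1+4+2)_{\text{lower}} = 20 + 20 = 40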
The most important sense for the raccoon is its sense of touch.[99][100][101] The "hyper sensitive"[100] front paws are protected by a thin horny layer that becomes pliable when wet.[102][103] The five digits of the paws have no webbing between them, which is unusual for a carnivoran.[104] Almost two-thirds of the area responsible for sensory perception in the raccoon's cerebral cortex is specialized for the interpretation of tactile impulses, more than in any other studied animal.[105] They are able to identify objects before touching them with vibrissae located above their sharp, nonretractable claws.[80][101] The raccoon's paws lack an opposable thumb; thus, it does not have the agility of the hands of primates.[101][103] There is no observed negative effect on tactile perception when a raccoon stands in water below 10 °C (50 °F) for hours.[106]
Raccoons are thought to be color blind or at least poorly able to distinguish color, though their eyes are well-adapted for sensing green light.[107][108][109] Although their accommodation of 11 dioptre is comparable to that of humans and they see well in twilight because of the tapetum lucidum behind the retina, visual perception is of subordinate importance to raccoons because of their poor long-distance vision.[110][111][112] In addition to being useful for orientation in the dark, their sense of smell is important for intraspecific communication. Glandular secretions (usually from their anal glands), urine and feces are used for marking.[113][114][115] With their broad auditory range, they can perceive tones up to 50–85 kHz as well as quiet noises, like those produced by earthworms underground.[116][117]
Zoologist Clinton Hart Merriam described raccoons as "clever beasts", noting that "in certain directions their cunning surpasses that of the fox". The animal's intelligence gave rise to the epithet "sly coon".[118] Only a few studies have been undertaken to determine the mental abilities of raccoons, most of them based on the animal's sense of touch. In a study by the ethologist H. B. Davis in 1908, raccoons were able to open 11 of 13 complex locks in fewer than 10 tries and had no problems repeating the action when the locks were rearranged or turned upside down. Davis concluded that they understood the abstract principles of the locking mechanisms and their learning speed was equivalent to that of rhesus macaques.[119]
Studies in 1963, 1973, 1975 and 1992 that concentrated on raccoon memory showed that they can remember the solutions to tasks for at least three years.[120] In a study by B. Pohl in 1992, raccoons were able to instantly differentiate between identical and different symbols three years after the short initial learning phase.[120] Stanislas Dehaene reports in his book The Number Sense that raccoons can distinguish boxes containing two or four grapes from those containing three.[121] In research by Suzana Herculano-Houzel and other neuroscientists, raccoons have been found to be comparable to primates in density of neurons in the cerebral cortex, which they have proposed to be a neuroanatomical indicator of intelligence.[122][123]
Studies in the 1990s by the ethologists Stanley D. Gehrt and Ulf Hohmann suggest that raccoons engage in sex-specific social behaviors and are not typically solitary, as was previously thought.[124][125] Related females often live in a so-called "fission-fusion society"; that is, they share a common area and occasionally meet at feeding or resting grounds.[126][127] Unrelated males often form loose male social groups to maintain their position against foreign males during the mating season—or against other potential invaders.[128] Such a group does not usually consist of more than four individuals.[129][130] Since some males show aggressive behavior towards unrelated kits, mothers will isolate themselves from other raccoons until their kits are big enough to defend themselves.[131]
With respect to these three different modes of life prevalent among raccoons, Hohmann called their social structure a "three-class society".[132] Samuel I. Zeveloff, professor of zoology at Weber State University and author of the book Raccoons: A Natural History, is more cautious in his interpretation and concludes that at least the females are solitary most of the time and, according to Erik K. Fritzell's study in North Dakota in 1978, males in areas with low population densities are solitary as well.[133]
The shape and size of a raccoon's home range varies depending on age, sex, and habitat, with adults claiming areas more than twice as large as juveniles.[134] While home range sizes in the prairie habitat of North Dakota lie between 7 and 50 km2 (3 and 20 sq mi) for males and between 2 and 16 km2 (1 and 6 sq mi) for females, the average size in a marsh at Lake Erie was 0.5 km2 (0.19 sq mi).[135] Irrespective of whether the home ranges of adjacent groups overlap, they are most likely not actively defended outside the mating season if food supplies are sufficient.[136] Odor marks on prominent spots are assumed to establish home ranges and identify individuals.[115] Urine and feces left at shared raccoon latrines may provide additional information about feeding grounds, since raccoons were observed to meet there later for collective eating, sleeping and playing.[137]
Concerning the general behavior patterns of raccoons, Gehrt points out that "typically you'll find 10 to 15 percent that will do the opposite"[138] of what is expected.
Though usually nocturnal, the raccoon is sometimes active in daylight to take advantage of available food sources.[139][140] Its diet consists of about 40% invertebrates, 33% plant material and 27% vertebrates.[141] Since its diet consists of such a variety of different foods, Zeveloff argues the raccoon "may well be one of the world's most omnivorous animals".[142] While its diet in spring and early summer consists mostly of insects, worms, and other animals already available early in the year, it prefers fruits and nuts, such as acorns and walnuts, which emerge in late summer and autumn, and represent a rich calorie source for building up fat needed for winter.[143][144] Contrary to popular belief, raccoons only occasionally eat active or large prey, such as birds and mammals.
They prefer prey that is easier to catch, specifically fish, amphibians and bird eggs.[145] Raccoons are virulent predators of eggs and hatchlings in both bird and reptile nests, to such a degree that, for threatened prey species, raccoons may need to be removed from the area or nests may need to be relocated to mitigate the effect of their predations (e.g. in the case of some globally threatened turtles).[146][147][148][149][150] When food is plentiful, raccoons can develop strong individual preferences for specific foods.[69] In the northern parts of their range, raccoons go into a winter rest, reducing their activity drastically as long as a permanent snow cover makes searching for food impossible.[151]
One aspect of raccoon behavior is so well known that it gives the animal part of its scientific name, Procyon lotor; "lotor" is neo-Latin for "washer". In the wild, raccoons often dabble for underwater food near the shore-line. They then often pick up the food item with their front paws to examine it and rub the item, sometimes to remove unwanted parts. This gives the appearance of the raccoon "washing" the food. The tactile sensitivity of raccoons' paws is increased if this rubbing action is performed underwater, since the water softens the hard layer covering the paws.[100][152] However, the behavior observed in captive raccoons in which they carry their food to water to "wash" or douse it before eating has not been observed in the wild.[153][154] Naturalist Georges-Louis Leclerc, Comte de Buffon, believed that raccoons do not have adequate saliva production to moisten food thereby necessitating dousing, but this hypothesis is now considered to be incorrect.[152][153][155][156] Captive raccoons douse their food more frequently when a watering hole with a layout similar to a stream is not farther away than 3 m (10 ft).[156] The widely accepted theory is that dousing in captive raccoons is a fixed action pattern from the dabbling behavior performed when foraging at shores for aquatic foods.[152][156][157][158] This is supported by the observation that aquatic foods are doused more frequently. Cleaning dirty food does not seem to be a reason for "washing".[156] Experts have cast doubt on the veracity of observations of wild raccoons dousing food.[159][160][161][needs update?]
Raccoons usually mate in a period triggered by increasing daylight between late January and mid-March.[162][163][164] However, there are large regional differences which are not completely explicable by solar conditions. For example, while raccoons in southern states typically mate later than average, the mating season in Manitoba also peaks later than usual in March and extends until June.[164] During the mating season, males restlessly roam their home ranges in search of females in an attempt to court them during the three- to four-day period when conception is possible. These encounters will often occur at central meeting places.[165][166][167] Copulation, including foreplay, can last over an hour and is repeated over several nights.[168] The weaker members of a male social group also are assumed to get the opportunity to mate, since the stronger ones cannot mate with all available females.[169] In a study in southern Texas during the mating seasons from 1990 to 1992, about one third of all females mated with more than one male.[170] If a female does not become pregnant or if she loses her kits early, she will sometimes become fertile again 80 to 140 days later.[171][172][173]
After usually 63 to 65 days of gestation (although anywhere from 54 to 70 days is possible), a litter of typically two to five young is born.[174][175] The average litter size varies widely with habitat, ranging from 2.5 in Alabama to 4.8 in North Dakota.[176][177] Larger litters are more common in areas with a high mortality rate, due, for example, to hunting or severe winters.[178][177] While male yearlings usually reach their sexual maturity only after the main mating season, female yearlings can compensate for high mortality rates and may be responsible for about 50% of all young born in a year.[179][180][181] Males have no part in raising young.[129][182][183] The kits (also called "cubs") are blind and deaf at birth, but their mask is already visible against their light fur.[184][185] The birth weight of the about 10 cm (4 in)-long kits is between 60 and 75 g (2.1 and 2.6 oz).[185] Their ear canals open after around 18 to 23 days, a few days before their eyes open for the first time.[186] Once the kits weigh about 1 kg (2 lb), they begin to explore outside the den, consuming solid food for the first time after six to nine weeks.[187][188] After this point, their mother suckles them with decreasing frequency; they are usually weaned by 16 weeks.[189] In the fall, after their mother has shown them dens and feeding grounds, the juvenile group splits up.[190] [191] While many females will stay close to the home range of their mother, males can sometimes move more than 20 km (12 mi) away.[192][193] This is considered an instinctive behavior, preventing inbreeding.[194][195] However, mother and offspring may share a den during the first winter in cold areas.[191]
Captive raccoons have been known to live for more than 20 years.[73] However, the species' life expectancy in the wild is only 1.8 to 3.1 years, depending on the local conditions such as traffic volume, hunting, and weather severity.[196] It is not unusual for only half of the young born in one year to survive a full year.[179][197] After this point, the annual mortality rate drops to between 10% and 30%.[179] Young raccoons are vulnerable to losing their mother and to starvation, particularly in long and cold winters.[198] The most frequent natural cause of death in the North American raccoon population is distemper, which can reach epidemic proportions and kill most of a local raccoon population.[199] In areas with heavy vehicular traffic and extensive hunting, these factors can account for up to 90% of all deaths of adult raccoons.[200] The most important natural predators of the raccoon are bobcats, coyotes, and great horned owls, the latter mainly preying on young raccoons but capable of killing adults in some cases.[201][202][203][204][205][206] In Florida, they have been reported to fall victim to larger carnivores like American black bears and cougars, and these species may also be a threat on occasion in other areas.[207][208][209] Where still present, gray wolves may still occasionally take raccoons as a supplemental prey item.[210][211] Also in the southeast, they are among the favored prey for adult American alligators.[212][213] On occasion, both bald and golden eagles will prey on raccoons.[214][215] In the tropics, raccoons are known to fall prey to smaller eagles such as ornate hawk-eagles and black hawk-eagles, although it is not clear whether adults or merely juvenile raccoons are taken by these.[216][217] In rare cases of overlap, they may fall victim to carnivores ranging from species averaging smaller than themselves, such as fishers, to those as large and formidable as jaguars in Mexico.[218][219] In their introduced range in the former Soviet Union, their main predators are wolves, lynxes and Eurasian eagle-owls.[220] However, predation is not a significant cause of death, especially because larger predators have been exterminated in many areas inhabited by raccoons.[221]
Although they have thrived in sparsely wooded areas in the last decades, raccoons depend on vertical structures to climb when they feel threatened.[222][223] Therefore, they avoid open terrain and areas with high concentrations of beech trees, as beech bark is too smooth to climb.[224] Tree hollows in old oaks or other trees and rock crevices are preferred by raccoons as sleeping, winter and litter dens. If such dens are unavailable or accessing them is inconvenient, raccoons use burrows dug by other mammals, dense undergrowth or tree crotches.[225][226] In a study in the Solling range of hills in Germany, more than 60% of all sleeping places were used only once, but those used at least ten times accounted for about 70% of all uses.[227] Since amphibians, crustaceans, and other animals around the shore of lakes and rivers are an important part of the raccoon's diet, lowland deciduous or mixed forests abundant with water and marshes sustain the highest population densities.[228][229] While population densities range from 0.5 to 3.2 animals per square kilometer (1.3 to 8.3 animals per square mile) in prairies and do not usually exceed 6 animals per square kilometer (15.5 animals per square mile) in upland hardwood forests, more than 20 raccoons per square kilometer (51.8 animals per square mile) can live in lowland forests and marshes.[228][230]
Raccoons are common throughout North America from Canada to Panama, where the subspecies Procyon lotor pumilus coexists with the crab-eating raccoon (Procyon cancrivorus).[231][232] The population on Hispaniola was exterminated as early as 1513 by Spanish colonists who hunted them for their meat.[233] Raccoons were also exterminated in Cuba and Jamaica, where the last sightings were reported in 1687.[234] When they were still considered separate species, the Bahamas raccoon, Guadeloupe raccoon and Tres Marias raccoon were classified as endangered by the IUCN in 1996.[235]
There is archeological evidence that in pre-Columbian times raccoons were numerous only along rivers and in the woodlands of the Southeastern United States.[236] As raccoons were not mentioned in earlier reports of pioneers exploring the central and north-central parts of the United States,[237] their initial spread may have begun a few decades before the 20th century. Since the 1950s, raccoons have expanded their range from Vancouver Island—formerly the northernmost limit of their range—far into the northern portions of the four south-central Canadian provinces.[238] New habitats which have recently been occupied by raccoons (aside from urban areas) include mountain ranges, such as the Western Rocky Mountains, prairies and coastal marshes.[239] After a population explosion starting in the 1940s, the estimated number of raccoons in North America in the late 1980s was 15 to 20 times higher than in the 1930s, when raccoons were comparatively rare.[240] Urbanization, the expansion of agriculture, deliberate introductions, and the extermination of natural predators of the raccoon have probably caused this increase in abundance and distribution.[241]
As a result of escapes and deliberate introductions in the mid-20th century, the raccoon is now distributed in several European and Asian countries. Sightings have occurred in all the countries bordering Germany, which hosts the largest population outside of North America.[242] Another stable population exists in northern France, where several pet raccoons were released by members of the U.S. Air Force near the Laon-Couvron Air Base in 1966.[243] Furthermore, raccoons have been known to be in the area around Madrid since the early 1970s. In 2013, the city authorized "the capture and death of any specimen".[244] It is also present in Italy, with one reproductive population in Lombardy.[245]
About 1,240 animals were released in nine regions of the former Soviet Union between 1936 and 1958 for the purpose of establishing a population to be hunted for their fur. Two of these introductions were successful—one in the south of Belarus between 1954 and 1958, and another in Azerbaijan between 1941 and 1957. With a seasonal harvest of between 1,000 and 1,500 animals, in 1974 the estimated size of the population distributed in the Caucasus region was around 20,000 animals and the density was four animals per square kilometer (10 animals per square mile).[246]
In Japan, up to 1,500 raccoons were imported as pets each year after the success of the anime series Rascal the Raccoon (1977). In 2004, the descendants of discarded or escaped animals lived in 42 of 47 prefectures.[247][248][249] The range of raccoons in the wild in Japan grew from 17 prefectures in 2000 to all 47 prefectures in 2008.[250] It is estimated that raccoons cause thirty million yen (~$275,000) of agricultural damage on Hokkaido alone.[251]
In Germany—where the raccoon is called the Waschbär (literally, "wash-bear" or "washing bear") due to its habit of "dousing" food in water—two pairs of pet raccoons were released into the German countryside at the Edersee reservoir in the north of Hesse in April 1934 by a forester upon request of their owner, a poultry farmer.[252] He released them two weeks before receiving permission from the Prussian hunting office to "enrich the fauna."[253] Several prior attempts to introduce raccoons in Germany were not successful.[254][255] A second population was established in eastern Germany in 1945 when 25 raccoons escaped from a fur farm at Wolfshagen (today district of Altlandsberg), east of Berlin, after an air strike. The two populations are parasitologically distinguishable: 70% of the raccoons of the Hessian population are infected with the roundworm Baylisascaris procyonis, but none of the Brandenburgian population has the parasite.[256] The estimated number of raccoons was 285 animals in the Hessian region in 1956, over 20,000 animals in the Hessian region in 1970 and between 200,000 and 400,000 animals in the whole of Germany in 2008.[199][254] By 2012 it was estimated that Germany now had more than a million raccoons.[257]
The raccoon was a protected species in Germany, but has been declared a game animal in 14 of the 16 states since 1954.[258] Hunters and environmentalists argue the raccoon spreads uncontrollably, threatens protected bird species and supersedes domestic carnivorans.[79] This view is opposed by the zoologist Frank-Uwe Michler, who finds no evidence a high population density of raccoons has negative effects on the biodiversity of an area.[79] Hohmann holds that extensive hunting cannot be justified by the absence of natural predators, because predation is not a significant cause of death in the North American raccoon population.[259]
The raccoon is extensively hunted in Germany, where it is seen as an invasive species and a pest.[260][261] In the 1990s, only about 400 raccoons were hunted yearly. This increased to 67,700 by the 2010/11 hunting season, and the tally broke the 100,000 barrier in 2013. During the 2015/16 hunting season, the tally was 128,100 animals, 60 percent of which were taken in the federal state of Hesse.[262]
Experiments in acclimatising raccoons into the USSR began in 1936, and were repeated a further 25 times until 1962. Overall, 1,222 individuals were released, 64 of which came from zoos and fur farms (38 of them having been imports from western Europe). The remainder originated from a population previously established in Transcaucasia. The range of Soviet raccoons was never single or continuous, as they were often introduced to different locations far from each other. All introductions into the Russian Far East failed; melanistic raccoons were released on Petrov Island near Vladivostok and some areas of southern Primorsky Krai, but died. In Middle Asia, raccoons were released in Kyrgyzstan's Jalal-Abad Province, though they were later recorded as "practically absent" there in January 1963. A large and stable raccoon population (yielding 1000–1500 catches a year) was established in Azerbaijan after an introduction to the area in 1937. Raccoons apparently survived an introduction near Terek, along the Sulak River into the Dagestani lowlands. Attempts to settle raccoons on the Kuban River's left tributary and Kabardino-Balkaria were unsuccessful. A successful acclimatization occurred in Belarus, where three introductions (consisting of 52, 37 and 38 individuals in 1954 and 1958) took place. By January 1, 1963, 700 individuals were recorded in the country.[263]
Due to its adaptability, the raccoon has been able to use urban areas as a habitat. The first sightings were recorded in a suburb of Cincinnati in the 1920s. Since the 1950s, raccoons have been present in metropolitan areas like Washington, DC, Chicago, and Toronto.[264] Since the 1960s, Kassel has hosted Europe's first and densest population in a large urban area, with about 50 to 150 animals per square kilometer (130 to 390 animals per square mile), a figure comparable to those of urban habitats in North America.[264][265] Home range sizes of urban raccoons are only 3 to 40 hectares (7.5 to 100 acres) for females and 8 to 80 hectares (20 to 200 acres) for males.[266] In small towns and suburbs, many raccoons sleep in a nearby forest after foraging in the settlement area.[264][267] Fruit and insects in gardens and leftovers in municipal waste are easily available food sources.[268] Furthermore, a large number of additional sleeping areas exist in these areas, such as hollows in old garden trees, cottages, garages, abandoned houses, and attics. The percentage of urban raccoons sleeping in abandoned or occupied houses varies from 15% in Washington, DC (1991) to 43% in Kassel (2003).[267][265]
Raccoons can carry rabies, a lethal disease caused by the neurotropic rabies virus carried in the saliva and transmitted by bites. Its spread began in Florida and Georgia in the 1950s and was facilitated by the introduction of infected individuals to Virginia and North Dakota in the late 1970s.[269] Of the 6,940 documented rabies cases reported in the United States in 2006, 2,615 (37.7%) were in raccoons.[270] The U.S. Department of Agriculture, as well as local authorities in several U.S. states and Canadian provinces, has developed oral vaccination programs to fight the spread of the disease in endangered populations.[271][272][273] Only one human fatality has been reported after transmission of the rabies virus strain commonly known as "raccoon rabies".[274] Among the main symptoms for rabies in raccoons are a generally sickly appearance, impaired mobility, abnormal vocalization, and aggressiveness.[275] There may be no visible signs at all, however, and most individuals do not show the aggressive behavior seen in infected canids; rabid raccoons will often retire to their dens instead.[79][256][275] Organizations like the U.S. Forest Service encourage people to stay away from animals with unusual behavior or appearance, and to notify the proper authorities, such as an animal control officer from the local health department.[276][277] Since healthy animals, especially nursing mothers, will occasionally forage during the day, daylight activity is not a reliable indicator of illness in raccoons.[139][140]
Unlike rabies and at least a dozen other pathogens carried by raccoons, distemper, an epizootic virus, does not affect humans.[278][279] This disease is the most frequent natural cause of death in the North American raccoon population and affects individuals of all age groups.[199] For example, 94 of 145 raccoons died during an outbreak in Clifton, Ohio, in 1968.[280] It may be accompanied by inflammation of the brain (encephalitis), causing the animal to display rabies-like symptoms.[269] In Germany, the first eight cases of distemper were reported in 2007.[199]
Some of the most important bacterial diseases which affect raccoons are leptospirosis, listeriosis, tetanus, and tularemia. Although internal parasites weaken their immune systems, well-fed individuals can carry a great many roundworms in their digestive tracts without showing symptoms.[281][279] The larvae of the roundworm Baylisascaris procyonis, which can be present in the feces and seldom cause a severe illness in humans, can be ingested when cleaning raccoon latrines without wearing breathing protection.[282]
While not endemic, the worm Trichinella does infect raccoons,[283] and undercooked raccoon meat has caused trichinosis in humans.[284]
The trematode Metorchis conjunctus can also infect raccoons.[285]
The increasing number of raccoons in urban areas has resulted in diverse reactions in humans, ranging from outrage at their presence to deliberate feeding.[286] Some wildlife experts and most public authorities caution against feeding wild animals because they might become increasingly obtrusive and dependent on humans as a food source.[287] Other experts challenge such arguments and give advice on feeding raccoons and other wildlife in their books.[288][289] Raccoons without a fear of humans are a concern to those who attribute this trait to rabies, but scientists point out this behavior is much more likely to be a behavioral adjustment to living in habitats with regular contact to humans for many generations.[256][290] Raccoons usually do not prey on domestic cats and dogs, but isolated cases of killings have been reported.[291] Attacks on pets may also target their owners.[292]
While overturned waste containers and raided fruit trees are just a nuisance to homeowners, it can cost several thousand dollars to repair damage caused by the use of attic space as dens.[293] Relocating or killing raccoons without a permit is forbidden in many urban areas on grounds of animal welfare. These methods usually only solve problems with particularly wild or aggressive individuals, since adequate dens are either known to several raccoons or will quickly be rediscovered.[178][277][294] Loud noises, flashing lights and unpleasant odors have proven particularly effective in driving away a mother and her kits before they would normally leave the nesting place (when the kits are about eight weeks old).[277][295] Typically, though, only precautionary measures to restrict access to food waste and den sites are effective in the long term.[277][296][297]
Among all fruits and crops cultivated in agricultural areas, sweet corn in its milk stage is particularly popular among raccoons.[298][299] In a two-year study by Purdue University researchers, published in 2004, raccoons were responsible for 87% of the damage to corn plants.[300] Like other predators, raccoons searching for food can break into poultry houses to feed on chickens, ducks, their eggs, or food.[141][277][301]
Since raccoons in high mortality areas have a higher rate of reproduction, extensive hunting may not solve problems with raccoon populations. Older males also claim larger home ranges than younger ones, resulting in a lower population density.
In the mythology of the indigenous peoples of the Americas, the raccoon is the subject of folk tales.[302] Stories such as "How raccoons catch so many crayfish" from the Tuscarora centered on its skills at foraging.[303] In other tales, the raccoon played the role of the trickster which outsmarts other animals, like coyotes and wolves.[304] Among others, the Dakota Sioux believe the raccoon has natural spirit powers, since its mask resembled the facial paintings, two-fingered swashes of black and white, used during rituals to connect to spirit beings.[305] The Aztecs linked supernatural abilities especially to females, whose commitment to their young was associated with the role of wise women in their society.[306]
The raccoon also appears in Native American art across a wide geographic range. Petroglyphs with engraved raccoon tracks were found in Lewis Canyon, Texas;[307] at the Crow Hollow petroglyph site in Grayson County, Kentucky;[308] and in river drainages near Tularosa, New Mexico and San Francisco, California.[309] A true-to-detail figurine made of quartz, the Ohio Mound Builders' Stone Pipe, was found near the Scioto River. The meaning and significance of the Raccoon Priests Gorget, which features a stylized carving of a raccoon and was found at the Spiro Mounds, Oklahoma, remains unknown.[310][311]
In Western culture, several autobiographical novels about living with a raccoon have been written, mostly for children. The best-known is Sterling North's Rascal, which recounts how he raised a kit during World War I. In recent years, anthropomorphic raccoons played main roles in the animated television series The Raccoons, the computer-animated film Over the Hedge, the live action film Guardians of the Galaxy (and the comics that it was based upon) and the video game series Sly Cooper.
The fur of raccoons is used for clothing, especially for coats and coonskin caps. At present, it is the material used for the inaccurately named "sealskin" cap worn by the Royal Fusiliers of Great Britain.[312] Sporrans made of raccoon pelt and hide have sometimes been used as part of traditional Scottish highland men's apparel since the 18th century, especially in North America. Such sporrans may or may not be of the "full-mask" type.[313] Historically, Native American tribes not only used the fur for winter clothing, but also used the tails for ornament.[314] The famous Sioux leader Spotted Tail took his name from a raccoon skin hat with the tail attached that he acquired from a fur trader. Since the late 18th century, various types of scent hounds, called "coonhounds", which are able to tree animals, have been bred in the United States.[315] In the 19th century, when coonskins occasionally even served as a means of payment, several thousand raccoons were killed each year in the United States.[316][317] This number rose quickly when automobile coats became popular after the turn of the 20th century. In the 1920s, wearing a raccoon coat was regarded as a status symbol among college students.[318] Attempts to breed raccoons in fur farms in the 1920s and 1930s in North America and Europe turned out not to be profitable, and farming was abandoned after prices for long-haired pelts dropped in the 1940s.[319][320] Although raccoons had become rare in the 1930s, at least 388,000 were killed during the hunting season of 1934/35.[318][321]
After persistent population increases began in the 1940s, the seasonal coon hunting harvest reached about one million animals in 1946/47 and two million in 1962/63.[322] The broadcast of three television episodes about the frontiersman Davy Crockett and the film Davy Crockett, King of the Wild Frontier in 1954 and 1955 led to a high demand for coonskin caps in the United States, although it is unlikely either Crockett or the actor who played him, Fess Parker, actually wore a cap made from raccoon fur.[323] The seasonal hunt reached an all-time high with 5.2 million animals in 1976/77 and ranged between 3.2 and 4.7 million for most of the 1980s. In 1982, the average pelt price was $20.[324] As of 1987, the raccoon was identified as the most important wild furbearer in North America in terms of revenue.[325] In the first half of the 1990s, the seasonal hunt dropped from 1.9 million to 0.9 million due to decreasing pelt prices.[326]
While primarily hunted for their fur, raccoons were also a source of food for Native Americans and early American settlers.[327][328] According to Ernest Thompson Seton, young specimens killed without a fight are palatable, whereas old raccoons caught after a lengthy battle are inedible.[329] Raccoon meat was extensively eaten during the early years of California, where it was sold in the San Francisco market for $1–3 apiece.[330] American slaves occasionally ate raccoon at Christmas, but it was not necessarily a dish of the poor or rural. The first edition of The Joy of Cooking, released in 1931, contained a recipe for preparing raccoon, and US President Calvin Coolidge's pet raccoon Rebecca was originally sent to be served at the White House Thanksgiving Dinner.[331][332][333] Although the idea of eating raccoons seems repulsive to most mainstream consumers since they see them as endearing, cute, and/or vermin, several thousand raccoons are still eaten each year in the United States, primarily in the Southern United States.[334][335][336][337]
Raccoons are sometimes kept as pets, which is discouraged by many experts because the raccoon is not a domesticated species. Raccoons may act unpredictably and aggressively and it is extremely difficult to teach them to obey commands.[338][339] In places where keeping raccoons as pets is not forbidden, such as in Wisconsin and other U.S. states, an exotic pet permit may be required.[340][341] One notable raccoon pet was Rebecca, kept by US president Calvin Coolidge.[342]
Their propensity for unruly behavior exceeds that of captive skunks, and they are even less trustworthy when allowed to roam freely. Because of their intelligence and nimble forelimbs, even inexperienced raccoons are easily capable of unscrewing jars, uncorking bottles and opening door latches, with more experienced specimens having been recorded to open door knobs.[118] Sexually mature raccoons often show aggressive natural behaviors such as biting during the mating season.[338][343] Neutering them at around five or six months of age decreases the chances of aggressive behavior developing.[344] Raccoons can become obese and suffer from other disorders due to poor diet and lack of exercise.[345] When fed with cat food over a long time period, raccoons can develop gout.[346] With respect to the research results regarding their social behavior, it is now required by law in Austria and Germany to keep at least two individuals to prevent loneliness.[347][348] Raccoons are usually kept in a pen (indoor or outdoor), also a legal requirement in Austria and Germany, rather than in the apartment where their natural curiosity may result in damage to property.[347][348][338][349][350]
When orphaned, it is possible for kits to be rehabilitated and reintroduced to the wild. However, it is uncertain whether they readapt well to life in the wild.[351] Feeding unweaned kits with cow's milk rather than a kitten replacement milk or a similar product can be dangerous to their health.[338][352]
en/493.html.txt
ADDED
@@ -0,0 +1,183 @@
A fixed-wing aircraft is a flying machine, such as an airplane (or aeroplane; see spelling differences), which is capable of flight using wings that generate lift caused by the aircraft's forward airspeed and the shape of the wings. Fixed-wing aircraft are distinct from rotary-wing aircraft (in which the wings form a rotor mounted on a spinning shaft or "mast"), and ornithopters (in which the wings flap in a manner similar to that of a bird). The wings of a fixed-wing aircraft are not necessarily rigid; kites, hang gliders, variable-sweep wing aircraft and airplanes that use wing morphing are all examples of fixed-wing aircraft.
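The way lift depends on forward airspeed and wing shape is commonly summarized by the standard lift equation (a general aerodynamic relation given here for illustration, not taken from this article), where \rho is the air density, v the airspeed, S the wing area and C_L a lift coefficient set by the wing's shape and angle of attack:

L = \tfrac{1}{2}\,\rho\,v^{2}\,S\,C_L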
Gliding fixed-wing aircraft, including free-flying gliders of various kinds and tethered kites, can use moving air to gain altitude. Powered fixed-wing aircraft (airplanes) that gain forward thrust from an engine include powered paragliders, powered hang gliders and some ground effect vehicles. Most fixed-wing aircraft are flown by a pilot on board the craft, but some are specifically designed to be unmanned and controlled either remotely or autonomously (using onboard computers).
Kites were used approximately 2,800 years ago in China, where materials ideal for kite building were readily available. Some authors hold that leaf kites were being flown much earlier in what is now Sulawesi, based on their interpretation of cave paintings on Muna Island off Sulawesi.[1] By at least 549 AD paper kites were being flown, as it was recorded in that year a paper kite was used as a message for a rescue mission.[2] Ancient and medieval Chinese sources list other uses of kites for measuring distances, testing the wind, lifting men, signaling, and communication for military operations.[2]
Stories of kites were brought to Europe by Marco Polo towards the end of the 13th century, and kites were brought back by sailors from Japan and Malaysia in the 16th and 17th centuries.[3] Although they were initially regarded as mere curiosities, by the 18th and 19th centuries kites were being used as vehicles for scientific research.[3]
Around 400 BC in Greece, Archytas was reputed to have designed and built the first artificial, self-propelled flying device, a bird-shaped model propelled by a jet of what was probably steam, said to have flown some 200 m (660 ft).[4][5] This machine may have been suspended for its flight.[6][7]
One of the earliest purported attempts with gliders was by the 11th-century monk Eilmer of Malmesbury, which ended in failure. A 17th-century account states that the 9th-century poet Abbas Ibn Firnas made a similar attempt, though no earlier sources record this event.[8]
In 1799, Sir George Cayley set forth the concept of the modern airplane as a fixed-wing flying machine with separate systems for lift, propulsion, and control.[9][10] Cayley was building and flying models of fixed-wing aircraft as early as 1803, and he built a successful passenger-carrying glider in 1853.[11] In 1856, Frenchman Jean-Marie Le Bris made a towed glider flight, reportedly rising higher than his point of departure, by having his glider "L'Albatros artificiel" pulled by a horse on a beach.[citation needed] In 1884, the American John J. Montgomery made controlled flights in a glider as a part of a series of gliders built between 1883–1886.[12] Other aviators who made similar flights at that time were Otto Lilienthal, Percy Pilcher, and protégés of Octave Chanute.
In the 1890s, Lawrence Hargrave conducted research on wing structures and developed a box kite that lifted the weight of a man. His box kite designs were widely adopted. Although he also developed a type of rotary aircraft engine, he did not create and fly a powered fixed-wing aircraft.[13]
Sir Hiram Maxim built a craft that weighed 3.5 tons, with a 110-foot (34-meter) wingspan that was powered by two 360-horsepower (270-kW) steam engines driving two propellers. In 1894, his machine was tested with overhead rails to prevent it from rising. The test showed that it had enough lift to take off. The craft was uncontrollable, which Maxim, it is presumed, realized, because he subsequently abandoned work on it.[14]
The Wright brothers' flights in 1903 with their Flyer I are recognized by the Fédération Aéronautique Internationale (FAI), the standard setting and record-keeping body for aeronautics, as "the first sustained and controlled heavier-than-air powered flight".[15] By 1905, the Wright Flyer III was capable of fully controllable, stable flight for substantial periods.
In 1906, Brazilian inventor Alberto Santos Dumont designed, built and piloted an aircraft that set the first world record recognized by the Aéro-Club de France by flying the 14 bis 220 metres (720 ft) in less than 22 seconds.[16] The flight was certified by the FAI.[17]
The Bleriot VIII design of 1908 was an early aircraft design that had the modern monoplane tractor configuration. It had movable tail surfaces controlling both yaw and pitch, a form of roll control supplied either by wing warping or by ailerons and controlled by its pilot with a joystick and rudder bar. It was an important predecessor of his later Bleriot XI Channel-crossing aircraft of the summer of 1909.[18]
World War I served as a testbed for the use of the aircraft as a weapon. Aircraft demonstrated their potential as mobile observation platforms, then proved themselves to be machines of war capable of causing casualties to the enemy. The earliest known aerial victory with a synchronized machine gun-armed fighter aircraft occurred in 1915, by German Luftstreitkräfte Leutnant Kurt Wintgens. Fighter aces appeared; the greatest (by number of air victories) was Manfred von Richthofen.
Following WWI, aircraft technology continued to develop. Alcock and Brown crossed the Atlantic non-stop for the first time in 1919. The first commercial flights took place between the United States and Canada in 1919.
The so-called Golden Age of Aviation occurred between the two World Wars, during which earlier breakthroughs were refined and extended: Hugo Junkers' pioneering of all-metal airframes in 1915 led to giant multi-engined aircraft with wingspans of 60 meters or more by the early 1930s, and the mostly air-cooled radial engine was adopted as a practical aircraft powerplant alongside powerful V-12 liquid-cooled aviation engines. Long-distance flight attempts also became ever more ambitious, from the transatlantic crossings of the U.S. Navy's NC-4 and a Vickers Vimy in 1919 to Charles Lindbergh's solo trans-Atlantic flight in the Spirit of St. Louis in May 1927, which spurred still longer flight attempts and pioneered the way for the long-distance flights of the future to become commonplace.
Airplanes had a presence in all the major battles of World War II. They were an essential component of the military strategies of the period, such as the German Blitzkrieg or the American and Japanese aircraft carrier campaigns of the Pacific.
Military gliders were developed and used in several campaigns, but they did not become widely used due to the high casualty rate often encountered. The Focke-Achgelis Fa 330 Bachstelze (Wagtail) rotor kite of 1942 was notable for its use by German submarines.
Before and during the war, both British and German designers were developing jet engines to power airplanes. The first jet aircraft to fly, in 1939, was the German Heinkel He 178. In 1943, the first operational jet fighter, the Messerschmitt Me 262, went into service with the German Luftwaffe and later in the war the British Gloster Meteor entered service but never saw action – top airspeeds of aircraft for that era went as high as 1,130 km/h (702 mph), with the early July 1944 unofficial record flight of the German Me 163B V18 rocket fighter prototype.[19]
In October 1947, the Bell X-1 was the first aircraft to exceed the speed of sound.[20]
In 1948–49, aircraft transported supplies during the Berlin Blockade. New aircraft types, such as the B-52, were produced during the Cold War.
The first jet airliner, the de Havilland Comet, was introduced in 1952, followed by the Soviet Tupolev Tu-104 in 1956. The Boeing 707, the first widely successful commercial jet, was in commercial service for more than 50 years, from 1958 to 2010. The Boeing 747 was the world's biggest passenger aircraft from 1970 until it was surpassed by the Airbus A380 in 2005.
An airplane (also known as an aeroplane or simply a plane) is a powered fixed-wing aircraft that is propelled forward by thrust from a jet engine or propeller. Planes come in a variety of sizes, shapes, and wing configurations. The broad spectrum of uses for planes includes recreation, transportation of goods and people, military, and research.
A seaplane is a fixed-wing aircraft capable of taking off and landing (alighting) on water. Seaplanes that can also operate from dry land are a subclass called amphibian aircraft. These aircraft were sometimes called hydroplanes.[21] Seaplanes and amphibians are usually divided into two categories based on their technological characteristics: floatplanes and flying boats.
Many forms of glider (see below) may be modified by adding a small power plant; examples include powered hang gliders and powered paragliders.
A ground effect vehicle (GEV) is a craft that attains level flight near the surface of the earth, making use of the ground effect – an aerodynamic interaction between the wings and the earth's surface. Some GEVs are able to fly higher out of ground effect (OGE) when required – these are classed as powered fixed-wing aircraft.[22]
A glider is a heavier-than-air craft that is supported in flight by the dynamic reaction of the air against its lifting surfaces, and whose free flight does not depend on an engine. A sailplane is a fixed-wing glider designed for soaring – the ability to gain height in updrafts of air and to fly for long periods.
Gliders are mainly used for recreation, but have also been used for other purposes such as aerodynamics research, warfare and recovering spacecraft.
A motor glider does have an engine for extending its performance and some have engines powerful enough to take off, but the engine is not used in normal flight.
As is the case with planes, there are a wide variety of glider types differing in the construction of their wings, aerodynamic efficiency, location of the pilot and controls. Perhaps the most familiar type is the toy paper plane.
Large gliders are most commonly launched by a tow-plane or by a winch. Military gliders have been used in war to deliver assault troops, and specialized gliders have been used in atmospheric and aerodynamic research. Rocket-powered aircraft and spaceplanes have also made unpowered landings.
Gliders and sailplanes that are used for the sport of gliding have high aerodynamic efficiency. The highest lift-to-drag ratio is 70:1, though 50:1 is more common. After launch, further energy is obtained through the skillful exploitation of rising air in the atmosphere. Flights of thousands of kilometers at average speeds over 200 km/h have been achieved.
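As a rough illustration (a sketch assuming still air and the 50:1 glide ratio quoted above, not a figure from the source), the horizontal distance d obtainable from a release height h follows directly from the glide ratio:

d \approx \frac{L}{D}\,h \qquad\Rightarrow\qquad d \approx 50 \times 1000\ \text{m} = 50\ \text{km}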
The most numerous unpowered aircraft are paper airplanes, a handmade type of glider. Like hang gliders and paragliders, they are foot-launched and are in general slower, smaller, and less expensive than sailplanes. Hang gliders most often have flexible wings given shape by a frame, though some have rigid wings. Paragliders and paper airplanes have no frames in their wings.
Gliders and sailplanes can share a number of features in common with powered aircraft, including many of the same types of fuselage and wing structures. For example, the Horten H.IV was a tailless flying wing glider, and the delta wing-shaped Space Shuttle orbiter flew much like a conventional glider in the lower atmosphere. Many gliders also use similar controls and instruments as powered craft.
The main application today of glider aircraft is sport and recreation.
Gliders were developed from the 1920s for recreational purposes. As pilots began to understand how to use rising air, sailplane gliders were developed with a high lift-to-drag ratio. These allowed longer glides to the next source of "lift", and so increased their chances of flying long distances. This gave rise to the popular sport of gliding.
Early gliders were mainly built of wood and metal but the majority of sailplanes now use composite materials incorporating glass, carbon or aramid fibers. To minimize drag, these types have a streamlined fuselage and long narrow wings having a high aspect ratio. Both single-seat and two-seat gliders are available.
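Aspect ratio here is the standard aerodynamic measure relating wingspan b to wing area S (a general definition added for reference, not specific to this article); for a given wing area, a higher aspect ratio reduces lift-induced drag:

AR = \frac{b^{2}}{S}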
Initially training was done by short "hops" in primary gliders which are very basic aircraft with no cockpit and minimal instruments.[23] Since shortly after World War II training has always been done in two-seat dual control gliders, but high performance two-seaters are also used to share the workload and the enjoyment of long flights. Originally skids were used for landing, but the majority now land on wheels, often retractable. Some gliders, known as motor gliders, are designed for unpowered flight, but can deploy piston, rotary, jet or electric engines.[24] Gliders are classified by the FAI for competitions into glider competition classes mainly on the basis of span and flaps.
|
78 |
+
|
79 |
+
A class of ultralight sailplanes, including some known as microlift gliders and some known as "airchairs", has been defined by the FAI based on a maximum weight. They are light enough to be transported easily, and can be flown without licensing in some countries. Ultralight gliders have performance similar to hang gliders, but offer some additional crash safety as the pilot can be strapped in an upright seat within a deformable structure. Landing is usually on one or two wheels which distinguishes these craft from hang gliders. Several commercial ultralight gliders have come and gone, but most current development is done by individual designers and home builders.
|
80 |
+
|
81 |
+
Military gliders were used during World War II for carrying troops (glider infantry) and heavy equipment to combat zones. The gliders were towed into the air and most of the way to their target by military transport planes, e.g. C-47 Dakota, or by bombers that had been relegated to secondary activities, e.g. Short Stirling. Once released from the tow near the target, they landed as close to the target as possible. The advantages over paratroopers were that heavy equipment could be landed and that the troops were quickly assembled rather than being dispersed over a drop zone. The gliders were treated as disposable, leading to construction from common and inexpensive materials such as wood, though a few were retrieved and re-used. By the time of the Korean War, transport aircraft had also become larger and more efficient so that even light tanks could be dropped by parachute, causing gliders to fall out of favor.
|
82 |
+
|
83 |
+
Even after the development of powered aircraft, gliders continued to be used for aviation research. The NASA Paresev Rogallo flexible wing was originally developed to investigate alternative methods of recovering spacecraft. Although this application was abandoned, publicity inspired hobbyists to adapt the flexible-wing airfoil for modern hang gliders.
|
84 |
+
|
85 |
+
Initial research into many types of fixed-wing craft, including flying wings and lifting bodies, was also carried out using unpowered prototypes.
|
86 |
+
|
87 |
+
A hang glider is a glider aircraft in which the pilot is ensconced in a harness suspended from the airframe, and exercises control by shifting body weight in opposition to a control frame. Most modern hang gliders are made of an aluminum alloy or composite-framed fabric wing. Pilots have the ability to soar for hours, gain thousands of meters of altitude in thermal updrafts, perform aerobatics, and glide cross-country for hundreds of kilometers.
|
88 |
+
|
89 |
+
A paraglider is a lightweight, free-flying, foot-launched glider aircraft with no rigid primary structure.[25] The pilot sits in a harness suspended below a hollow fabric wing whose shape is formed by its suspension lines, the pressure of air entering vents in the front of the wing and the aerodynamic forces of the air flowing over the outside. Paragliding is most often a recreational activity.
|
90 |
+
|
91 |
+
A paper plane is a toy aircraft (usually a glider) made out of paper or paperboard.
|
92 |
+
|
93 |
+
Model glider aircraft are models of aircraft using lightweight materials such as polystyrene and balsa wood. Designs range from simple glider aircraft to accurate scale models, some of which can be very large.
|
94 |
+
|
95 |
+
Glide bombs are bombs with aerodynamic surfaces to allow a gliding flightpath rather than a ballistic one. This enables the carrying aircraft to attack a heavily defended target from a distance.
|
96 |
+
|
97 |
+
A kite is an aircraft tethered to a fixed point so that the wind blows over its wings.[26] Lift is generated when air flows over the kite's wing, producing low pressure above the wing and high pressure below it, and deflecting the airflow downwards. This deflection also generates horizontal drag in the direction of the wind. The resultant force vector from the lift and drag force components is opposed by the tension of the one or more rope lines or tethers attached to the wing.
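The force balance just described can be sketched numerically. In this toy example the lift and drag components are invented values, and the tether tension is simply the vector needed to oppose their resultant:

```python
# Minimal sketch of the static force balance on a kite (invented example values).
# Lift acts upward, drag acts downwind; the tether tension opposes their resultant.
import math

lift_N = 40.0  # vertical lift component, newtons (example)
drag_N = 15.0  # horizontal drag component in the wind direction, newtons (example)

resultant_N = math.hypot(drag_N, lift_N)
line_angle_deg = math.degrees(math.atan2(lift_N, drag_N))  # tether angle above horizontal

print(f"Resultant aerodynamic force: {resultant_N:.1f} N")
print(f"Tether tension required: {resultant_N:.1f} N at about {line_angle_deg:.0f} degrees to the wind")
```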
|
98 |
+
|
99 |
+
Kites are mostly flown for recreational purposes, but have many other uses. Early pioneers such as the Wright Brothers and J.W. Dunne sometimes flew an aircraft as a kite in order to develop it and confirm its flight characteristics, before adding an engine and flight controls, and flying it as an airplane.
|
100 |
+
|
101 |
+
Kites have been used for signaling, for delivery of munitions, and for observation, by lifting an observer above the field of battle, and by using kite aerial photography.
|
102 |
+
|
103 |
+
Kites have been used for scientific purposes, such as Benjamin Franklin's famous experiment proving that lightning is electricity. Kites were the precursors to the traditional aircraft, and were instrumental in the development of early flying craft. Alexander Graham Bell experimented with very large man-lifting kites, as did the Wright brothers and Lawrence Hargrave. Kites had a historical role in lifting scientific instruments to measure atmospheric conditions for weather forecasting.
|
104 |
+
|
105 |
+
Kites can be used to carry radio antennas. This method was used for the reception station of the first transatlantic transmission by Marconi. Captive balloons may be more convenient for such experiments, because kite-carried antennas require a lot of wind, which may not always be possible with heavy equipment and a ground conductor.
|
106 |
+
|
107 |
+
Kites can be used to carry light effects such as lightsticks or battery powered lights.
|
108 |
+
|
109 |
+
Kites can be used to pull people and vehicles downwind. Efficient foil-type kites such as power kites can also be used to sail upwind under the same principles as used by other sailing craft, provided that lateral forces on the ground or in the water are redirected as with the keels, center boards, wheels and ice blades of traditional sailing craft. In the last two decades, several kite sailing sports have become popular, such as kite buggying, kite landboarding, kite boating and kite surfing. Snow kiting has also become popular.
|
110 |
+
|
111 |
+
Kite sailing opens several possibilities not available in traditional sailing:
|
112 |
+
|
113 |
+
Conceptual research and development projects are being undertaken by over a hundred participants to investigate the use of kites in harnessing high altitude wind currents for electricity generation.[27]
|
114 |
+
|
115 |
+
Kite festivals are a popular form of entertainment throughout the world. They include local events, traditional festivals and major international festivals.
|
116 |
+
|
117 |
+
The structural parts of a fixed-wing aircraft are called the airframe. The parts present can vary according to the aircraft's type and purpose. Early types were usually made of wood with fabric wing surfaces. When engines became available for powered flight around a hundred years ago, their mounts were made of metal. Then, as speeds increased, more and more parts became metal until by the end of WWII all-metal aircraft were common. In modern times, increasing use of composite materials has been made.
|
118 |
+
|
119 |
+
Typical structural parts include:
|
120 |
+
|
121 |
+
The wings of a fixed-wing aircraft are static planes extending either side of the aircraft. When the aircraft travels forwards, air flows over the wings, which are shaped to create lift.
|
123 |
+
|
124 |
+
Kites and some lightweight gliders and airplanes have flexible wing surfaces which are stretched across a frame and made rigid by the lift forces exerted by the airflow over them. Larger aircraft have rigid wing surfaces which provide additional strength.
|
125 |
+
|
126 |
+
Whether flexible or rigid, most wings have a strong frame to give them their shape and to transfer lift from the wing surface to the rest of the aircraft. The main structural elements are one or more spars running from root to tip, and many ribs running from the leading (front) to the trailing (rear) edge.
|
127 |
+
|
128 |
+
Early airplane engines had little power, so light weight was very important. Early aerofoil sections were also very thin and could not have a strong frame installed within them. So until the 1930s most wings were too lightweight to have enough strength, and external bracing struts and wires were added. When the available engine power increased during the 1920s and 1930s, wings could be made heavy and strong enough that bracing was no longer needed. This type of unbraced wing is called a cantilever wing.
|
129 |
+
|
130 |
+
The number and shape of the wings vary widely on different types. A given wing plane may be full-span or divided by a central fuselage into port (left) and starboard (right) wings. Occasionally even more wings have been used, with the three-winged triplane achieving some fame in WWI. The four-winged quadruplane and other multiplane designs have had little success.
|
131 |
+
|
132 |
+
A monoplane (from the prefix mono-, meaning "one") has a single wing plane, a biplane has two stacked one above the other, and a tandem wing has two placed one behind the other. When the available engine power increased during the 1920s and 1930s and bracing was no longer needed, the unbraced or cantilever monoplane became the most common form of powered type.
|
133 |
+
|
134 |
+
The wing planform is the shape when seen from above. To be aerodynamically efficient, a wing should be straight with a long span from side to side but have a short chord (high aspect ratio). But to be structurally efficient, and hence lightweight, a wing must have a short span but still enough area to provide lift (low aspect ratio).
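Aspect ratio is conventionally computed as span squared divided by wing area. The sketch below uses invented span and area figures purely to contrast a long, narrow wing with a short, broad one:

```python
# Aspect ratio = span^2 / wing area (dimensionless). Figures below are invented examples.

def aspect_ratio(span_m: float, area_m2: float) -> float:
    return span_m ** 2 / area_m2

examples = {
    "long, narrow sailplane-style wing": (18.0, 11.0),  # span in m, area in m^2
    "short, broad wing":                 (9.0, 16.0),
}
for name, (span, area) in examples.items():
    print(f"{name}: aspect ratio ~ {aspect_ratio(span, area):.1f}")
```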
|
135 |
+
|
136 |
+
At transonic speeds, near the speed of sound, it helps to sweep the wing backward or forwards to reduce drag from supersonic shock waves as they begin to form. The swept wing is just a straight wing swept backward or forwards.
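A rough way to see why sweep helps (simple sweep theory, offered here as a back-of-the-envelope approximation rather than anything stated in this article) is that only the component of the airflow normal to the leading edge "sees" the wing, so sweeping by a given angle lowers the effective Mach number to about the flight Mach number times the cosine of that angle:

```python
# Back-of-the-envelope sweep effect: effective Mach ~ flight Mach x cos(sweep angle).
import math

flight_mach = 0.85  # example cruise Mach number
for sweep_deg in (0, 25, 35):
    effective = flight_mach * math.cos(math.radians(sweep_deg))
    print(f"sweep {sweep_deg:2d} deg -> effective Mach ~ {effective:.2f}")
```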
|
137 |
+
|
138 |
+
The delta wing is a triangle shape which may be used for a number of reasons. As a flexible Rogallo wing it allows a stable shape under aerodynamic forces, and so is often used for kites and other ultralight craft. As a supersonic wing, it combines high strength with low drag and so is often used for fast jets.
|
139 |
+
|
140 |
+
A variable geometry wing can be changed in flight to a different shape. The variable-sweep wing transforms between an efficient straight configuration for takeoff and landing, to a low-drag swept configuration for high-speed flight. Other forms of variable planform have been flown, but none have gone beyond the research stage.
|
141 |
+
|
142 |
+
A fuselage is a long, thin body, usually with tapered or rounded ends to make its shape aerodynamically smooth. The fuselage may contain the flight crew, passengers, cargo or payload, fuel and engines. The pilots of manned aircraft operate them from a cockpit located at the front or top of the fuselage and equipped with controls and usually windows and instruments. A plane may have more than one fuselage, or it may be fitted with booms with the tail located between the booms to allow the extreme rear of the fuselage to be useful for a variety of purposes.
|
143 |
+
|
144 |
+
A flying wing is a tailless aircraft which has no definite fuselage, with most of the crew, payload and equipment being housed inside the main wing structure.[28]:224
|
145 |
+
|
146 |
+
The flying wing configuration was studied extensively in the 1930s and 1940s, notably by Jack Northrop and Cheston L. Eshelman in the United States, and Alexander Lippisch and the Horten brothers in Germany.
|
147 |
+
After the war, a number of experimental designs were based on the flying wing concept. Some general interest continued until the early 1950s, but designs did not necessarily offer a great advantage in range and presented a number of technical problems, leading to the adoption of "conventional" solutions like the Convair B-36 and the B-52 Stratofortress. Due to the practical need for a deep wing, the flying wing concept is most practical for designs in the slow-to-medium speed range, and there has been continual interest in using it as a tactical airlifter design.
|
148 |
+
|
149 |
+
Interest in flying wings was renewed in the 1980s due to their potentially low radar reflection cross-sections. Stealth technology relies on shapes which only reflect radar waves in certain directions, thus making the aircraft hard to detect unless the radar receiver is at a specific position relative to the aircraft – a position that changes continuously as the aircraft moves. This approach eventually led to the Northrop B-2 Spirit stealth bomber. In this case the aerodynamic advantages of the flying wing are not the primary needs. However, modern computer-controlled fly-by-wire systems allowed for many of the aerodynamic drawbacks of the flying wing to be minimized, making for an efficient and stable long-range bomber.
|
150 |
+
|
151 |
+
Blended wing body aircraft have a flattened, airfoil-shaped body, which produces most of the lift to keep the craft aloft, and distinct and separate wing structures, though the wings are smoothly blended in with the body.
|
152 |
+
|
153 |
+
Thus blended wing bodied aircraft incorporate design features from both a futuristic fuselage and flying wing design. The purported advantages of the blended wing body approach are efficient high-lift wings and a wide airfoil-shaped body. This enables the entire craft to contribute to lift generation with the result of potentially increased fuel economy.
|
154 |
+
|
155 |
+
A lifting body is a configuration in which the body itself produces lift. In contrast to a flying wing, which is a wing with minimal or no conventional fuselage, a lifting body can be thought of as a fuselage with little or no conventional wing. Whereas a flying wing seeks to maximize cruise efficiency at subsonic speeds by eliminating non-lifting surfaces, lifting bodies generally minimize the drag and structure of a wing for subsonic, supersonic, and hypersonic flight, or, spacecraft re-entry. All of these flight regimes pose challenges for proper flight stability.
|
156 |
+
|
157 |
+
Lifting bodies were a major area of research in the 1960s and 1970s as a means to build a small and lightweight manned spacecraft. The US built a number of famous lifting body rocket planes to test the concept, as well as several rocket-launched re-entry vehicles that were tested over the Pacific. Interest waned as the US Air Force lost interest in the manned mission, and major development ended during the Space Shuttle design process when it became clear that the highly shaped fuselages made it difficult to fit fuel tankage.
|
158 |
+
|
159 |
+
The classic aerofoil section wing is unstable in flight and difficult to control. Flexible-wing types often rely on an anchor line or the weight of a pilot hanging beneath to maintain the correct attitude. Some free-flying types use an adapted aerofoil that is stable, or other ingenious mechanisms including, most recently, electronic artificial stability.
|
160 |
+
|
161 |
+
But in order to achieve trim, stability and control, most fixed-wing types have an empennage comprising a fin and rudder which act horizontally and a tailplane and elevator which act vertically. This is so common that it is known as the conventional layout. Sometimes there may be two or more fins, spaced out along the tailplane.
|
162 |
+
|
163 |
+
Some types have a horizontal "canard" foreplane ahead of the main wing, instead of behind it.[28]:86[29][30] This foreplane may contribute to the trim, stability or control of the aircraft, or to several of these.
|
164 |
+
|
165 |
+
Kites are controlled by wires running down to the ground. Typically each wire acts as a tether to the part of the kite it is attached to.
|
166 |
+
|
167 |
+
Gliders and airplanes have more complex control systems, especially if they are piloted.
|
168 |
+
|
169 |
+
The main controls allow the pilot to direct the aircraft in the air. Typically these are:
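The list of controls has not survived in this extract. As a generic, hedged sketch (standard conventions for a conventional fixed-wing aircraft, not details taken from this article), the primary inputs map onto the three rotation axes roughly as follows:

```python
# Generic mapping of primary controls to control surfaces and rotation axes.
# This reflects common convention only; individual aircraft types vary,
# and unpowered gliders have no throttle.
primary_controls = {
    "stick/yoke moved sideways": ("ailerons", "roll"),
    "stick/yoke moved fore-aft": ("elevator", "pitch"),
    "rudder pedals":             ("rudder", "yaw"),
    "throttle (powered types)":  ("engine power", "speed and climb"),
}

for control_input, (surface, effect) in primary_controls.items():
    print(f"{control_input:27s} -> {surface} ({effect})")
```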
|
170 |
+
|
171 |
+
Other common controls include:
|
172 |
+
|
173 |
+
A craft may have two pilots' seats with dual controls, allowing two pilots to take turns. This is often used for training or for longer flights.
|
174 |
+
|
175 |
+
The control system may allow full or partial automation of flight, such as an autopilot, a wing leveler, or a flight management system. An unmanned aircraft has no pilot but is controlled remotely or via means such as gyroscopes or other forms of autonomous control.
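As a rough illustration of what a wing leveler does (a minimal toy sketch with invented gains and dynamics, not any particular autopilot), the loop below feeds the measured bank angle back to an aileron command until the simulated aircraft returns to wings-level:

```python
# Toy wing-leveler: proportional feedback from bank angle to aileron command.
# The gain, roll response and time step are invented purely for illustration.

bank_deg = 15.0     # initial bank angle, degrees
gain = 0.8          # aileron command per degree of bank
roll_per_cmd = 2.0  # degrees/second of roll per unit aileron command
dt = 0.5            # simulation time step, seconds

for step in range(8):
    aileron_cmd = -gain * bank_deg               # deflect opposite to the bank
    bank_deg += roll_per_cmd * aileron_cmd * dt  # simple first-order roll response
    print(f"t={step * dt:3.1f}s  bank={bank_deg:6.2f} deg  aileron cmd={aileron_cmd:5.2f}")
```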
|
176 |
+
|
177 |
+
On manned fixed-wing aircraft, instruments provide information to the pilots, including flight, engines, navigation, communications, and other aircraft systems that may be installed.
|
178 |
+
|
179 |
+
The six basic instruments, sometimes referred to as the "six pack", are as follows:[31]
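The list itself is missing from this extract. For orientation (the conventional set, stated from general knowledge rather than from this article), the "six pack" is usually given as the airspeed indicator, attitude indicator, altimeter, turn coordinator, heading indicator and vertical speed indicator, grouped below by the system that drives them:

```python
# The conventional "six pack" grouped by the system that drives each instrument.
# This is the customary set; panel layouts and fits vary between aircraft.
six_pack = {
    "pitot-static system": ["airspeed indicator", "altimeter", "vertical speed indicator"],
    "gyroscopic system":   ["attitude indicator", "heading indicator", "turn coordinator"],
}

for system, instruments in six_pack.items():
    print(f"{system}: {', '.join(instruments)}")
```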
|
180 |
+
|
181 |
+
Other cockpit instruments might include:
|
182 |
+
|
183 |
+
|
en/4930.html.txt
ADDED
@@ -0,0 +1,147 @@
1 |
+
|
2 |
+
|
3 |
+
Joseph Maurice Ravel (French: [ʒɔzɛf mɔʁis ʁavɛl];[n 1] 7 March 1875 – 28 December 1937) was a French composer, pianist and conductor. He is often associated with impressionism along with his elder contemporary Claude Debussy, although both composers rejected the term. In the 1920s and 1930s Ravel was internationally regarded as France's greatest living composer.
|
4 |
+
|
5 |
+
Born to a music-loving family, Ravel attended France's premier music college, the Paris Conservatoire; he was not well regarded by its conservative establishment, whose biased treatment of him caused a scandal. After leaving the conservatoire, Ravel found his own way as a composer, developing a style of great clarity and incorporating elements of modernism, baroque, neoclassicism and, in his later works, jazz. He liked to experiment with musical form, as in his best-known work, Boléro (1928), in which repetition takes the place of development. He made some orchestral arrangements of other composers' music, of which his 1922 version of Mussorgsky's Pictures at an Exhibition is the best known.
|
6 |
+
|
7 |
+
A slow and painstaking worker, Ravel composed fewer pieces than many of his contemporaries. Among his works to enter the repertoire are pieces for piano, chamber music, two piano concertos, ballet music, two operas and eight song cycles; he wrote no symphonies or church music. Many of his works exist in two versions: first, a piano score and later an orchestration. Some of his piano music, such as Gaspard de la nuit (1908), is exceptionally difficult to play, and his complex orchestral works such as Daphnis et Chloé (1912) require skilful balance in performance.
|
8 |
+
|
9 |
+
Ravel was among the first composers to recognise the potential of recording to bring their music to a wider public. From the 1920s, despite limited technique as a pianist or conductor, he took part in recordings of several of his works; others were made under his supervision.
|
10 |
+
|
11 |
+
Ravel was born in the Basque town of Ciboure, France, near Biarritz, 18 kilometres (11 mi) from the Spanish border. His father, Pierre-Joseph Ravel, was an educated and successful engineer, inventor and manufacturer, born in Versoix near the Franco-Swiss border.[4][n 2] His mother, Marie, née Delouart, was Basque but had grown up in Madrid. In 19th-century terms, Joseph had married beneath his status – Marie was illegitimate and barely literate – but the marriage was a happy one.[7] Some of Joseph's inventions were successful, including an early internal combustion engine and a notorious circus machine, the "Whirlwind of Death", an automotive loop-the-loop that was a major attraction until a fatal accident at Barnum and Bailey's Circus in 1903.[8]
|
12 |
+
|
13 |
+
Both Ravel's parents were Roman Catholics; Marie was also something of a free-thinker, a trait inherited by her elder son.[9] He was baptised in the Ciboure parish church six days after he was born. The family moved to Paris three months later, and there a younger son, Édouard, was born. (He was close to his father, whom he eventually followed into the engineering profession.)[10] Maurice was particularly devoted to their mother; her Basque-Spanish heritage was a strong influence on his life and music. Among his earliest memories were folk songs she sang to him.[10] The household was not rich, but the family was comfortable, and the two boys had happy childhoods.[11]
|
14 |
+
|
15 |
+
Ravel senior delighted in taking his sons to factories to see the latest mechanical devices, but he also had a keen interest in music and culture in general.[12] In later life, Ravel recalled, "Throughout my childhood I was sensitive to music. My father, much better educated in this art than most amateurs are, knew how to develop my taste and to stimulate my enthusiasm at an early age."[13] There is no record that Ravel received any formal general schooling in his early years; his biographer Roger Nichols suggests that the boy may have been chiefly educated by his father.[14]
|
16 |
+
|
17 |
+
When he was seven, Ravel started piano lessons with Henry Ghys, a friend of Emmanuel Chabrier; five years later, in 1887, he began studying harmony, counterpoint and composition with Charles-René, a pupil of Léo Delibes.[14] Without being anything of a child prodigy, he was a highly musical boy.[15] Charles-René found that Ravel's conception of music was natural to him "and not, as in the case of so many others, the result of effort".[16] Ravel's earliest known compositions date from this period: variations on a chorale by Schumann, variations on a theme by Grieg and a single movement of a piano sonata.[17] They survive only in fragmentary form.[18]
|
18 |
+
|
19 |
+
In 1888 Ravel met the young pianist Ricardo Viñes, who became not only a lifelong friend, but also one of the foremost interpreters of his works, and an important link between Ravel and Spanish music.[19] The two shared an appreciation of Wagner, Russian music, and the writings of Poe, Baudelaire and Mallarmé.[20] At the Exposition Universelle in Paris in 1889, Ravel was much struck by the new Russian works conducted by Nikolai Rimsky-Korsakov.[21] This music had a lasting effect on both Ravel and his older contemporary Claude Debussy, as did the exotic sound of the Javanese gamelan, also heard during the Exposition.[17]
|
20 |
+
|
21 |
+
Émile Decombes took over as Ravel's piano teacher in 1889; in the same year Ravel gave his earliest public performance.[22] Aged fourteen, he took part in a concert at the Salle Érard along with other pupils of Decombes, including Reynaldo Hahn and Alfred Cortot.[23]
|
22 |
+
|
23 |
+
With the encouragement of his parents, Ravel applied for entry to France's most important musical college, the Conservatoire de Paris. In November 1889, playing music by Chopin, he passed the examination for admission to the preparatory piano class run by Eugène Anthiome.[24] Ravel won the first prize in the Conservatoire's piano competition in 1891, but otherwise he did not stand out as a student.[25] Nevertheless, these years were a time of considerable advance in his development as a composer. The musicologist Arbie Orenstein writes that for Ravel the 1890s were a period "of immense growth ... from adolescence to maturity".[26]
|
24 |
+
|
25 |
+
In 1891 Ravel progressed to the classes of Charles-Wilfrid de Bériot, for piano, and Émile Pessard, for harmony.[22] He made solid, unspectacular progress, with particular encouragement from Bériot but, in the words of the musical scholar Barbara L. Kelly, he "was only teachable on his own terms".[27] His later teacher Gabriel Fauré understood this, but it was not generally acceptable to the conservative faculty of the Conservatoire of the 1890s.[27] Ravel was expelled in 1895, having won no more prizes.[n 3] His earliest works to survive in full are from these student days: Sérénade grotesque, for piano, and "Ballade de la Reine morte d'aimer",[n 4] a mélodie setting a poem by Roland de Marès (both 1893).[17]
|
26 |
+
|
27 |
+
Ravel was never so assiduous a student of the piano as his colleagues such as Viñes and Cortot were.[n 5] It was plain that as a pianist he would never match them, and his overriding ambition was to be a composer.[25] From this point he concentrated on composition. His works from the period include the songs "Un grand sommeil noir" and "D'Anne jouant de l'espinette" to words by Paul Verlaine and Clément Marot,[17][n 6] and the piano pieces Menuet antique and Habanera (for four hands), the latter eventually incorporated into the Rapsodie espagnole.[30] At around this time, Joseph Ravel introduced his son to Erik Satie, who was earning a living as a café pianist. Ravel was one of the first musicians – Debussy was another – who recognised Satie's originality and talent.[31] Satie's constant experiments in musical form were an inspiration to Ravel, who counted them "of inestimable value".[32]
|
28 |
+
|
29 |
+
In 1897 Ravel was readmitted to the Conservatoire, studying composition with Fauré, and taking private lessons in counterpoint with André Gedalge.[22] Both these teachers, particularly Fauré, regarded him highly and were key influences on his development as a composer.[17] As Ravel's course progressed, Fauré reported "a distinct gain in maturity ... engaging wealth of imagination".[33] Ravel's standing at the Conservatoire was nevertheless undermined by the hostility of the Director, Théodore Dubois, who deplored the young man's musically and politically progressive outlook.[34] Consequently, according to a fellow student, Michel-Dimitri Calvocoressi, he was "a marked man, against whom all weapons were good".[35] He wrote some substantial works while studying with Fauré, including the overture Shéhérazade and a violin sonata, but he won no prizes, and therefore was expelled again in 1900. As a former student he was allowed to attend Fauré's classes as a non-participating "auditeur" until finally abandoning the Conservatoire in 1903.[36]
|
30 |
+
|
31 |
+
In 1899 Ravel composed his first piece to become widely known, though it made little impact initially: Pavane pour une infante défunte ("Pavane for a dead princess").[37] It was originally a solo piano work, commissioned by the Princesse de Polignac.[38][n 7] In the same year he conducted the first performance of the Shéhérazade overture, which had a mixed reception, with boos mingling with applause from the audience, and unflattering reviews from the critics. One described the piece as "a jolting debut: a clumsy plagiarism of the Russian School" and called Ravel a "mediocrely gifted debutant ... who will perhaps become something if not someone in about ten years, if he works hard".[39][n 8] Another critic, Pierre Lalo, thought that Ravel showed talent, but was too indebted to Debussy and should instead emulate Beethoven.[41] Over the succeeding decades Lalo became Ravel's most implacable critic.[41]
|
32 |
+
|
33 |
+
From the start of his career, Ravel appeared calmly indifferent to blame or praise. Those who knew him well believed that this was no pose but wholly genuine.[42] The only opinion of his music that he truly valued was his own, perfectionist and severely self-critical.[43] At twenty years of age he was, in the words of the biographer Burnett James, "self-possessed, a little aloof, intellectually biased, given to mild banter".[44] He dressed like a dandy and was meticulous about his appearance and demeanour.[45] Orenstein comments that, short in stature,[n 9] light in frame and bony in features, Ravel had the "appearance of a well-dressed jockey", whose large head seemed suitably matched to his formidable intellect.[46] During the late 1890s and into the early years of the next century, Ravel was bearded in the fashion of the day; from his mid-thirties he was clean-shaven.[47]
|
34 |
+
|
35 |
+
Around 1900 Ravel and a number of innovative young artists, poets, critics and musicians joined together in an informal group; they came to be known as Les Apaches ("The Hooligans"), a name coined by Viñes to represent their status as "artistic outcasts".[48] They met regularly until the beginning of the First World War, and members stimulated one another with intellectual argument and performances of their works. The membership of the group was fluid, and at various times included Igor Stravinsky and Manuel de Falla as well as their French friends.[n 10]
|
36 |
+
|
37 |
+
Among the enthusiasms of the Apaches was the music of Debussy. Ravel, twelve years his junior, had known Debussy slightly since the 1890s, and their friendship, though never close, continued for more than ten years.[50] In 1902 André Messager conducted the premiere of Debussy's opera Pelléas et Mélisande at the Opéra-Comique. It divided musical opinion. Dubois unavailingly forbade Conservatoire students to attend, and the conductor's friend and former teacher Camille Saint-Saëns was prominent among those who detested the piece.[51] The Apaches were loud in their support.[52] The first run of the opera consisted of fourteen performances: Ravel attended all of them.[53]
|
38 |
+
|
39 |
+
Debussy was widely held to be an impressionist composer – a label he intensely disliked. Many music lovers began to apply the same term to Ravel, and the works of the two composers were frequently taken as part of a single genre.[54] Ravel thought that Debussy was indeed an impressionist but that he himself was not.[55][n 11] Orenstein comments that Debussy was more spontaneous and casual in his composing while Ravel was more attentive to form and craftsmanship.[57] Ravel wrote that Debussy's "genius was obviously one of great individuality, creating its own laws, constantly in evolution, expressing itself freely, yet always faithful to French tradition. For Debussy, the musician and the man, I have had profound admiration, but by nature I am different from Debussy ... I think I have always personally followed a direction opposed to that of [his] symbolism."[58] During the first years of the new century Ravel's new works included the piano piece Jeux d'eau[n 12] (1901), the String Quartet and the orchestral song cycle Shéhérazade (both 1903).[59] Commentators have noted some Debussian touches in some parts of these works. Nichols calls the quartet "at once homage to and exorcism of Debussy's influence".[60]
|
40 |
+
|
41 |
+
The two composers ceased to be on friendly terms in the middle of the first decade of the 1900s, for musical and possibly personal reasons. Their admirers began to form factions, with adherents of one composer denigrating the other. Disputes arose about the chronology of the composers' works and who influenced whom.[50] Prominent in the anti-Ravel camp was Lalo, who wrote, "Where M. Debussy is all sensitivity, M. Ravel is all insensitivity, borrowing without hesitation not only technique but the sensitivity of other people."[61] The public tension led to personal estrangement.[61] Ravel said, "It's probably better for us, after all, to be on frigid terms for illogical reasons."[62] Nichols suggests an additional reason for the rift. In 1904 Debussy left his wife and went to live with the singer Emma Bardac. Ravel, together with his close friend and confidante Misia Edwards and the opera star Lucienne Bréval, contributed to a modest regular income for the deserted Lilly Debussy, a fact that Nichols suggests may have rankled with her husband.[63]
|
42 |
+
|
43 |
+
|
44 |
+
|
45 |
+
During the first years of the new century Ravel made five attempts to win France's most prestigious prize for young composers, the Prix de Rome, past winners of which included Berlioz, Gounod, Bizet, Massenet and Debussy.[64] In 1900 Ravel was eliminated in the first round; in 1901 he won the second prize for the competition.[65] In 1902 and 1903 he won nothing: according to the musicologist Paul Landormy, the judges suspected Ravel of making fun of them by submitting cantatas so academic as to seem like parodies.[59][n 13] In 1905 Ravel, by now thirty, competed for the last time, inadvertently causing a furore. He was eliminated in the first round, which even critics unsympathetic to his music, including Lalo, denounced as unjustifiable.[67] The press's indignation grew when it emerged that the senior professor at the Conservatoire, Charles Lenepveu, was on the jury, and only his students were selected for the final round;[68] his insistence that this was pure coincidence was not well received.[69] L'affaire Ravel became a national scandal, leading to the early retirement of Dubois and his replacement by Fauré, appointed by the government to carry out a radical reorganisation of the Conservatoire.[70]
|
46 |
+
|
47 |
+
Among those taking a close interest in the controversy was Alfred Edwards, owner and editor of Le Matin, for which Lalo wrote. Edwards was married to Ravel's friend Misia;[n 14] the couple took Ravel on a seven-week Rhine cruise on their yacht in June and July 1905, the first time he had travelled abroad.[72]
|
48 |
+
|
49 |
+
By the latter part of the 1900s Ravel had established a pattern of writing works for piano and subsequently arranging them for full orchestra.[73] He was in general a slow and painstaking worker, and reworking his earlier piano compositions enabled him to increase the number of pieces published and performed.[74] There appears to have been no mercenary motive for this; Ravel was known for his indifference to financial matters.[75] The pieces that began as piano compositions and were then given orchestral dress were Pavane pour une infante défunte (orchestrated 1910), Une barque sur l'océan (1906, from the 1905 piano suite Miroirs), the Habanera section of Rapsodie espagnole (1907–08), Ma mère l'Oye (1908–10, orchestrated 1911), Valses nobles et sentimentales (1911, orchestrated 1912), Alborada del gracioso (from Miroirs, orchestrated 1918) and Le tombeau de Couperin (1914–17, orchestrated 1919).[17]
|
50 |
+
|
51 |
+
Ravel was not by inclination a teacher, but he gave lessons to a few young musicians he felt could benefit from them. Manuel Rosenthal was one, and records that Ravel was a very demanding teacher when he thought his pupil had talent. Like his own teacher, Fauré, he was concerned that his pupils should find their own individual voices and not be excessively influenced by established masters.[76] He warned Rosenthal that it was impossible to learn from studying Debussy's music: "Only Debussy could have written it and made it sound like only Debussy can sound."[77] When George Gershwin asked him for lessons in the 1920s, Ravel, after serious consideration, refused, on the grounds that they "would probably cause him to write bad Ravel and lose his great gift of melody and spontaneity".[78][n 15] The best known composer who studied with Ravel was probably Ralph Vaughan Williams, who was his pupil for three months in 1907–08. Vaughan Williams recalled that Ravel helped him escape from "the heavy contrapuntal Teutonic manner ... Complexe mais pas compliqué was his motto."[80]
|
52 |
+
|
53 |
+
Vaughan Williams's recollections throw some light on Ravel's private life, about which the latter's reserved and secretive personality has led to much speculation. Vaughan Williams, Rosenthal and Marguerite Long have all recorded that Ravel frequented brothels;[81] Long attributed this to his self-consciousness about his diminutive stature, and consequent lack of confidence with women.[75] By other accounts, none of them first-hand, Ravel was in love with Misia Edwards,[71] or wanted to marry the violinist Hélène Jourdan-Morhange.[82] Rosenthal records and discounts contemporary speculation that Ravel, a lifelong bachelor, may have been homosexual.[83] Such speculation recurred in a 2000 life of Ravel by Benjamin Ivry;[84] subsequent studies have concluded that Ravel's sexuality and personal life remain a mystery.[85]
|
54 |
+
|
55 |
+
Ravel's first concert outside France was in 1909. As the guest of the Vaughan Williamses, he visited London, where he played for the Société des Concerts Français, gaining favourable reviews and enhancing his growing international reputation.[86][n 16]
|
56 |
+
|
57 |
+
The Société Nationale de Musique, founded in 1871 to promote the music of rising French composers, had been dominated since the mid-1880s by a conservative faction led by Vincent d'Indy.[88] Ravel, together with several other former pupils of Fauré, set up a new, modernist organisation, the Société Musicale Indépendente, with Fauré as its president.[n 17] The new society's inaugural concert took place on 20 April 1910; the seven items on the programme included premieres of Fauré's song cycle La chanson d'Ève, Debussy's piano suite D'un cahier d'esquisses, Zoltán Kodály's Six pièces pour piano and the original piano duet version of Ravel's Ma mère l'Oye. The performers included Fauré, Florent Schmitt, Ernest Bloch, Pierre Monteux and, in the Debussy work, Ravel.[90] Kelly considers it a sign of Ravel's new influence that the society featured Satie's music in a concert in January 1911.[17]
|
58 |
+
|
59 |
+
The first of Ravel's two operas, the one-act comedy L'heure espagnole[n 18] was premiered in 1911. The work had been completed in 1907, but the manager of the Opéra-Comique, Albert Carré, repeatedly deferred its presentation. He was concerned that its plot – a bedroom farce – would be badly received by the ultra-respectable mothers and daughters who were an important part of the Opéra-Comique's audience.[91] The piece was only modestly successful at its first production, and it was not until the 1920s that it became popular.[92]
|
60 |
+
|
61 |
+
In 1912 Ravel had three ballets premiered. The first, to the orchestrated and expanded version of Ma mère l'Oye, opened at the Théâtre des Arts in January.[93] The reviews were excellent: the Mercure de France called the score "absolutely ravishing, a masterwork in miniature".[94] The music rapidly entered the concert repertoire; it was played at the Queen's Hall, London, within weeks of the Paris premiere, and was repeated at the Proms later in the same year. The Times praised "the enchantment of the work ... the effect of mirage, by which something quite real seems to float on nothing".[95] New York audiences heard the work in the same year.[96] Ravel's second ballet of 1912 was Adélaïde ou le langage des fleurs, danced to the score of Valses nobles et sentimentales, which opened at the Châtelet in April. Daphnis et Chloé opened at the same theatre in June. This was his largest-scale orchestral work, and took him immense trouble and several years to complete.[97]
|
62 |
+
|
63 |
+
Daphnis et Chloé was commissioned in or about 1909 by the impresario Sergei Diaghilev for his company, the Ballets Russes.[n 19] Ravel began work with Diaghilev's choreographer, Michel Fokine, and designer, Léon Bakst.[99] Fokine had a reputation for his modern approach to dance, with individual numbers replaced by continuous music. This appealed to Ravel, and after discussing the action in great detail with Fokine, Ravel began composing the music.[100] There were frequent disagreements between the collaborators, and the premiere was under-rehearsed because of the late completion of the work.[101] It had an unenthusiastic reception and was quickly withdrawn, although it was revived successfully a year later in Monte Carlo and London.[102] The effort to complete the ballet took its toll on Ravel's health;[n 20] neurasthenia obliged him to rest for several months after the premiere.[104]
|
64 |
+
|
65 |
+
Ravel composed little during 1913. He collaborated with Stravinsky on a performing version of Mussorgsky's unfinished opera Khovanshchina, and his own works were the Trois poèmes de Mallarmé for soprano and chamber ensemble, and two short piano pieces, À la manière de Borodine and À la manière de Chabrier.[22] In 1913, together with Debussy, Ravel was among the musicians present at the dress rehearsal of The Rite of Spring.[105] Stravinsky later said that Ravel was the only person who immediately understood the music.[106] Ravel predicted that the premiere of the Rite would be seen as an event of historic importance equal to that of Pelléas et Mélisande.[107][n 21]
|
66 |
+
|
67 |
+
When Germany invaded France in 1914 Ravel tried to join the French Air Force. He considered his small stature and light weight ideal for an aviator, but was rejected because of his age and a minor heart complaint.[109] While waiting to be enlisted, Ravel composed Trois Chansons, his only work for a cappella choir, setting his own texts in the tradition of French 16th-century chansons. He dedicated the three songs to people who might help him to enlist.[110] After several unsuccessful attempts to enlist, Ravel finally joined the Thirteenth Artillery Regiment as a lorry driver in March 1915, when he was forty.[111] Stravinsky expressed admiration for his friend's courage: "at his age and with his name he could have had an easier place, or done nothing".[112] Some of Ravel's duties put him in mortal danger, driving munitions at night under heavy German bombardment. At the same time his peace of mind was undermined by his mother's failing health. His own health also deteriorated; he suffered from insomnia and digestive problems, underwent a bowel operation following amoebic dysentery in September 1916, and had frostbite in his feet the following winter.[113]
|
68 |
+
|
69 |
+
During the war, the Ligue Nationale pour la Defense de la Musique Française was formed by Saint-Saëns, Dubois, d'Indy and others, campaigning for a ban on the performance of contemporary German music.[114] Ravel declined to join, telling the committee of the league in 1916, "It would be dangerous for French composers to ignore systematically the productions of their foreign colleagues, and thus form themselves into a sort of national coterie: our musical art, which is so rich at the present time, would soon degenerate, becoming isolated in banal formulas."[115] The league responded by banning Ravel's music from its concerts.[116]
|
70 |
+
|
71 |
+
Ravel's mother died in January 1917, and he fell into a "horrible despair", compounding the distress he felt at the suffering endured by the people of his country during the war.[117] He composed few works in the war years. The Piano Trio was almost complete when the conflict began, and the most substantial of his wartime works is Le tombeau de Couperin, composed between 1914 and 1917. The suite celebrates the tradition of François Couperin, the 18th-century French composer; each movement is dedicated to a friend of Ravel's who died in the war.[118]
|
72 |
+
|
73 |
+
After the war, those close to Ravel recognised that he had lost much of his physical and mental stamina. As the musicologist Stephen Zank puts it, "Ravel's emotional equilibrium, so hard won in the previous decade, had been seriously compromised."[119] His output, never large, became smaller.[119] Nonetheless, after the death of Debussy in 1918, he was generally seen, in France and abroad, as the leading French composer of the era.[120] Fauré wrote to him, "I am happier than you can imagine about the solid position which you occupy and which you have acquired so brilliantly and so rapidly. It is a source of joy and pride for your old professor."[120] Ravel was offered the Legion of Honour in 1920,[n 22] and although he declined the decoration, he was viewed by the new generation of composers typified by Satie's protégés Les Six as an establishment figure. Satie had turned against him, and commented, "Ravel refuses the Légion d'honneur, but all his music accepts it."[123][n 23] Despite this attack, Ravel continued to admire Satie's early music, and always acknowledged the older man's influence on his own development.[55] Ravel took a benign view of Les Six, promoting their music, and defending it against journalistic attacks. He regarded their reaction against his works as natural, and preferable to their copying his style.[127] Through the Société Musicale Indépendente, he was able to encourage them and composers from other countries. The Société presented concerts of recent works by American composers including Aaron Copland, Virgil Thomson and George Antheil and by Vaughan Williams and his English colleagues Arnold Bax and Cyril Scott.[128]
|
74 |
+
|
75 |
+
Orenstein and Zank both comment that, although Ravel's post-war output was small, averaging only one composition a year, it included some of his finest works.[129] In 1920 he completed La valse, in response to a commission from Diaghilev. He had worked on it intermittently for some years, planning a concert piece, "a sort of apotheosis of the Viennese waltz, mingled with, in my mind, the impression of a fantastic, fatal whirling".[130] It was rejected by Diaghilev, who said, "It's a masterpiece, but it's not a ballet. It's the portrait of a ballet."[131] Ravel heard Diaghilev's verdict without protest or argument, left, and had no further dealings with him.[132][n 24] Nichols comments that Ravel had the satisfaction of seeing the ballet staged twice by other managements before Diaghilev died.[135] A ballet danced to the orchestral version of Le tombeau de Couperin was given at the Théâtre des Champs-Elysées in November 1920, and the premiere of La valse followed in December.[136] The following year Daphnis et Chloé and L'heure espagnole were successfully revived at the Paris Opéra.[136]
|
76 |
+
|
77 |
+
In the post-war era there was a reaction against the large-scale music of composers such as Gustav Mahler and Richard Strauss.[137] Stravinsky, whose Rite of Spring was written for a huge orchestra, began to work on a much smaller scale. His 1923 ballet score Les noces is composed for voices and twenty-one instruments.[138] Ravel did not like the work (his opinion caused a cooling in Stravinsky's friendship with him)[139] but he was in sympathy with the fashion for "dépouillement" – the "stripping away" of pre-war extravagance to reveal the essentials.[127] Many of his works from the 1920s are noticeably sparer in texture than earlier pieces.[140] Other influences on him in this period were jazz and atonality. Jazz was popular in Parisian cafés, and French composers such as Darius Milhaud incorporated elements of it in their work.[141] Ravel commented that he preferred jazz to grand opera,[142] and its influence is heard in his later music.[143] Arnold Schönberg's abandonment of conventional tonality also had echoes in some of Ravel's music such as the Chansons madécasses[n 25] (1926), which Ravel doubted he could have written without the example of Pierrot Lunaire.[144] His other major works from the 1920s include the orchestral arrangement of Mussorgsky's piano suite Pictures at an Exhibition (1922), the opera L'enfant et les sortilèges[n 26] to a libretto by Colette (1926), Tzigane (1924) and the Violin Sonata (1927).[136]
|
78 |
+
|
79 |
+
Finding city life fatiguing, Ravel moved to the countryside.[145] In May 1921 he took up residence at Le Belvédère, a small house on the fringe of Montfort-l'Amaury, 50 kilometres (31 mi) west of Paris, in the Yvelines département. Looked after by a devoted housekeeper, Mme Revelot, he lived there for the rest of his life.[146] At Le Belvédère Ravel composed and gardened, when not performing in Paris or abroad. His touring schedule increased considerably in the 1920s, with concerts in Britain, Sweden, Denmark, the US, Canada, Spain, Austria and Italy.[136]
|
80 |
+
|
81 |
+
|
82 |
+
|
83 |
+
After two months of planning, Ravel made a four-month tour of North America in 1928, playing and conducting. His fee was a guaranteed minimum of $10,000 and a constant supply of Gauloises cigarettes.[148] He appeared with most of the leading orchestras in Canada and the US and visited twenty-five cities.[149] Audiences were enthusiastic and the critics were complimentary.[n 27] At an all-Ravel programme conducted by Serge Koussevitzky in New York, the entire audience stood up and applauded as the composer took his seat. Ravel was touched by this spontaneous gesture and observed, "You know, this doesn't happen to me in Paris."[147] Orenstein, commenting that this tour marked the zenith of Ravel's international reputation, lists its non-musical highlights as a visit to Poe's house in New York, and excursions to Niagara Falls and the Grand Canyon.[147] Ravel was unmoved by his new international celebrity. He commented that the critics' recent enthusiasm was of no more importance than their earlier judgment, when they called him "the most perfect example of insensitivity and lack of emotion".[151]
|
84 |
+
|
85 |
+
The last composition Ravel completed in the 1920s, Boléro, became his most famous. He was commissioned to provide a score for Ida Rubinstein's ballet company, and having been unable to secure the rights to orchestrate Albéniz's Iberia, he decided on "an experiment in a very special and limited direction ... a piece lasting seventeen minutes and consisting wholly of orchestral tissue without music".[152] Ravel continued that the work was "one long, very gradual crescendo. There are no contrasts, and there is practically no invention except the plan and the manner of the execution. The themes are altogether impersonal."[152] He was astonished, and not wholly pleased, that it became a mass success. When one elderly member of the audience at the Opéra shouted "Rubbish!" at the premiere, he remarked, "That old lady got the message!"[153] The work was popularised by the conductor Arturo Toscanini,[154] and has been recorded several hundred times.[n 28] Ravel commented to Arthur Honegger, one of Les Six, "I've written only one masterpiece – Boléro. Unfortunately there's no music in it."[156]
|
86 |
+
|
87 |
+
At the beginning of the 1930s Ravel was working on two piano concertos. He completed the Piano Concerto in D major for the Left Hand first. It was commissioned by the Austrian pianist Paul Wittgenstein, who had lost his right arm during the war. Ravel was stimulated by the technical challenges of the project: "In a work of this kind, it is essential to give the impression of a texture no thinner than that of a part written for both hands."[157] Ravel, not proficient enough to perform the work with only his left hand, demonstrated it with both hands.[n 29] Wittgenstein was initially disappointed by the piece, but after long study he became fascinated by it and ranked it as a great work.[159] In January 1932 he premiered it in Vienna to instant acclaim, and performed it in Paris with Ravel conducting the following year.[160] The critic Henry Prunières wrote, "From the opening measures, we are plunged into a world in which Ravel has but rarely introduced us."[151]
|
88 |
+
|
89 |
+
The Piano Concerto in G major was completed a year later. After the premiere in January 1932 there was high praise for the soloist, Marguerite Long, and for Ravel's score, though not for his conducting.[161] Long, the dedicatee, played the concerto in more than twenty European cities, with the composer conducting;[162] they planned to record it together, but at the sessions Ravel confined himself to supervising proceedings and Pedro de Freitas Branco conducted.[163]
|
90 |
+
|
91 |
+
|
92 |
+
|
93 |
+
In October 1932 Ravel suffered a blow to the head in a taxi accident. The injury was not thought serious at the time, but in a study for the British Medical Journal in 1988 the neurologist R. A. Henson concludes that it may have exacerbated an existing cerebral condition.[165] As early as 1927 close friends had been concerned at Ravel's growing absent-mindedness, and within a year of the accident he started to experience symptoms suggesting aphasia.[166] Before the accident he had begun work on music for a film, Don Quixote (1933), but he was unable to meet the production schedule, and Jacques Ibert wrote most of the score.[167] Ravel completed three songs for baritone and orchestra intended for the film; they were published as Don Quichotte à Dulcinée. The manuscript orchestral score is in Ravel's hand, but Lucien Garban and Manuel Rosenthal helped in transcription. Ravel composed no more after this.[165] The exact nature of his illness is unknown. Experts have ruled out the possibility of a tumour, and have variously suggested frontotemporal dementia, Alzheimer's disease and Creutzfeldt–Jakob disease.[168][n 30] Though no longer able to write music or perform, Ravel remained physically and socially active until his last months. Henson notes that Ravel preserved most or all his auditory imagery and could still hear music in his head.[165]
|
94 |
+
|
95 |
+
In 1937 Ravel began to suffer pain from his condition, and was examined by Clovis Vincent, a well-known Paris neurosurgeon. Vincent advised surgical treatment. He thought a tumour unlikely, and expected to find ventricular dilatation that surgery might prevent from progressing. Ravel's brother Edouard accepted this advice; as Henson comments, the patient was in no state to express a considered view. After the operation there seemed to be an improvement in his condition, but it was short-lived, and he soon lapsed into a coma. He died on 28 December, at the age of 62.[171]
|
96 |
+
|
97 |
+
On 30 December 1937 Ravel was interred next to his parents in a granite tomb at Levallois-Perret cemetery, in north-west Paris. He was an atheist and there was no religious ceremony.[172]
|
98 |
+
|
99 |
+
Marcel Marnat's catalogue of Ravel's complete works lists eighty-five works, including many incomplete or abandoned.[173] Though that total is small in comparison with the output of his major contemporaries,[n 31] it is nevertheless inflated by Ravel's frequent practice of writing works for piano and later rewriting them as independent pieces for orchestra.[74] The performable body of works numbers about sixty; slightly more than half are instrumental. Ravel's music includes pieces for piano, chamber music, two piano concerti, ballet music, opera and song cycles. He wrote no symphonies or church works.[173]
|
100 |
+
|
101 |
+
Ravel drew on many generations of French composers from Couperin and Rameau to Fauré and the more recent innovations of Satie and Debussy. Foreign influences include Mozart, Schubert, Liszt and Chopin.[175] He considered himself in many ways a classicist, often using traditional structures and forms, such as the ternary, to present his new melodic and rhythmic content and innovative harmonies.[176] The influence of jazz on his later music is heard within conventional classical structures in the Piano Concerto and the Violin Sonata.[177]
|
102 |
+
|
103 |
+
|
104 |
+
|
105 |
+
Ravel placed high importance on melody, telling Vaughan Williams that there is "an implied melodic outline in all vital music".[179] His themes are frequently modal instead of using the familiar major or minor scales.[180] As a result, there are few leading notes in his output.[181] Chords of the ninth and eleventh and unresolved appoggiaturas, such as those in the Valses nobles et sentimentales, are characteristic of Ravel's harmonic language.[182]
|
106 |
+
|
107 |
+
Dance forms appealed to Ravel, most famously the bolero and pavane, but also the minuet, forlane, rigaudon, waltz, czardas, habanera and passacaglia. National and regional consciousness was important to him, and although a planned concerto on Basque themes never materialised, his works include allusions to Hebraic, Greek, Hungarian and gypsy themes.[183] He wrote several short pieces paying tribute to composers he admired – Borodin, Chabrier, Fauré and Haydn, interpreting their characteristics in a Ravellian style.[184] Another important influence was literary rather than musical: Ravel said that he learnt from Poe that "true art is a perfect balance between pure intellect and emotion",[185] with the corollary that a piece of music should be a perfectly balanced entity with no irrelevant material allowed to intrude.[186]
|
108 |
+
|
109 |
+
Ravel completed two operas, and worked on three others. The unrealised three were Olympia, La cloche engloutie and Jeanne d'Arc. Olympia was to be based on Hoffmann's The Sandman; he made sketches for it in 1898–99, but did not progress far. La cloche engloutie, after Hauptmann's The Sunken Bell, occupied him intermittently from 1906 to 1912. Ravel destroyed the sketches for both these works, except for a "Symphonie horlogère" which he incorporated into the opening of L'heure espagnole.[187] The third unrealised project was an operatic version of Joseph Delteil's 1925 novel about Joan of Arc. It was to be a large-scale, full-length work for the Paris Opéra, but Ravel's final illness prevented him from writing it.[188]
Ravel's first completed opera was L'heure espagnole (premiered in 1911), described as a "comédie musicale".[189] It is among the works set in or illustrating Spain that Ravel wrote throughout his career. Nichols comments that the essential Spanish colouring gave Ravel a reason for virtuoso use of the modern orchestra, which the composer considered "perfectly designed for underlining and exaggerating comic effects".[190] Edward Burlingame Hill found Ravel's vocal writing particularly skilful in the work, "giving the singers something besides recitative without hampering the action", and "commenting orchestrally upon the dramatic situations and the sentiments of the actors without diverting attention from the stage".[191] Some find the characters artificial and the piece lacking in humanity.[189] The critic David Murray writes that the score "glows with the famous Ravel tendresse."[192]
The second opera, also in one act, is L'enfant et les sortilèges (1926), a "fantaisie lyrique" to a libretto by Colette. She and Ravel had planned the story as a ballet, but at the composer's suggestion Colette turned it into an opera libretto. It is more uncompromisingly modern in its musical style than L'heure espagnole, and the jazz elements and bitonality of much of the work upset many Parisian opera-goers. Ravel was once again accused of artificiality and lack of human emotion, but Nichols finds "profoundly serious feeling at the heart of this vivid and entertaining work".[193] The score presents an impression of simplicity, disguising intricate links between themes, with, in Murray's phrase, "extraordinary and bewitching sounds from the orchestra pit throughout".[194]
Although one-act operas are generally staged less often than full-length ones,[195] Ravel's are produced regularly in France and abroad.[196]
A substantial proportion of Ravel's output was vocal. His early works in that sphere include cantatas written for his unsuccessful attempts at the Prix de Rome. His other vocal music from that period shows Debussy's influence, in what Kelly describes as "a static, recitative-like vocal style", prominent piano parts and rhythmic flexibility.[17] By 1906 Ravel was taking even further than Debussy the natural, sometimes colloquial, setting of the French language in Histoires naturelles. The same technique is highlighted in Trois poèmes de Mallarmé (1913); Debussy set two of the three poems at the same time as Ravel, and the former's word-setting is noticeably more formal than the latter's, in which syllables are often elided. In the cycles Shéhérazade and Chansons madécasses, Ravel gives vent to his taste for the exotic, even the sensual, in both the vocal line and the accompaniment.[17][197]
Ravel's songs often draw on vernacular styles, using elements of many folk traditions in such works as Cinq mélodies populaires grecques, Deux mélodies hébraïques and Chants populaires.[198] Among the poets on whose lyrics he drew were Marot, Léon-Paul Fargue, Leconte de Lisle and Verlaine. For three songs dating from 1914–15, he wrote his own texts.[199]
Although Ravel wrote for mixed choirs and male solo voices, he is chiefly associated, in his songs, with the soprano and mezzo-soprano voices. Even when setting lyrics clearly narrated by a man, he often favoured a female voice,[200] and he seems to have preferred his best-known cycle, Shéhérazade, to be sung by a woman, although a tenor voice is a permitted alternative in the score.[201]
During his lifetime it was above all as a master of orchestration that Ravel was famous.[202] He minutely studied the ability of each orchestral instrument to determine its potential, putting its individual colour and timbre to maximum use.[203] The critic Alexis Roland-Manuel wrote, "In reality he is, with Stravinsky, the one man in the world who best knows the weight of a trombone-note, the harmonics of a 'cello or a pp tam-tam in the relationships of one orchestral group to another."[204]
For all Ravel's orchestral mastery, only four of his works were conceived as concert works for symphony orchestra: Rapsodie espagnole, La valse and the two concertos. All the other orchestral works were written either for the stage, as in Daphnis et Chloé, or as a reworking of piano pieces: Alborada del gracioso and Une barque sur l'océan (both from Miroirs), Valses nobles et sentimentales, Ma mère l'Oye, Tzigane (originally for violin and piano) and Le tombeau de Couperin.[205] In the orchestral versions, the instrumentation generally clarifies the harmonic language of the score and brings sharpness to classical dance rhythms.[206] Occasionally, as in the Alborada del gracioso, critics have found the later orchestral version less persuasive than the sharp-edged piano original.[207]
In some of his scores, including Daphnis et Chloé, Ravel frequently divides his upper strings, having them play in six to eight parts while the woodwind are required to play with extreme agility. His writing for the brass ranges from softly muted to triple-forte outbursts at climactic points.[208] In the 1930s he tended to simplify his orchestral textures. The lighter tone of the G major Piano Concerto follows the models of Mozart and Saint-Saëns, alongside use of jazz-like themes.[209] The critics Edward Sackville-West and Desmond Shawe-Taylor comment that in the slow movement, "one of the most beautiful tunes Ravel ever invented", the composer "can truly be said to join hands with Mozart".[210] The most popular of Ravel's orchestral works, Boléro (1928), was conceived several years before its completion; in 1924 he said that he was contemplating "a symphonic poem without a subject, where the whole interest will be in the rhythm".[211]
Ravel made orchestral versions of piano works by Schumann, Chabrier, Debussy and Mussorgsky's piano suite Pictures at an Exhibition. Orchestral versions of the last by Mikhail Tushmalov, Sir Henry Wood and Leo Funtek predated Ravel's 1922 version, and many more have been made since, but Ravel's remains the best known.[212] Kelly remarks on its "dazzling array of instrumental colour",[17] and a contemporary reviewer commented on how, in dealing with another composer's music, Ravel had produced an orchestral sound wholly unlike his own.[213]
Although Ravel wrote fewer than thirty works for the piano, they exemplify his range; Orenstein remarks that the composer keeps his personal touch "from the striking simplicity of Ma mère l'Oye to the transcendental virtuosity of Gaspard de la nuit".[214] Ravel's earliest major work for piano, Jeux d'eau (1901), is frequently cited as evidence that he evolved his style independently of Debussy, whose major works for piano all came later.[215] When writing for solo piano, Ravel rarely aimed at the intimate chamber effect characteristic of Debussy, but sought a Lisztian virtuosity.[216] The authors of The Record Guide consider that works such as Gaspard de la Nuit and Miroirs have a beauty and originality with a deeper inspiration "in the harmonic and melodic genius of Ravel himself".[216]
Most of Ravel's piano music is extremely difficult to play, and presents pianists with a balance of technical and artistic challenges.[217][n 32] Writing of the piano music the critic Andrew Clark commented in 2013, "A successful Ravel interpretation is a finely balanced thing. It involves subtle musicianship, a feeling for pianistic colour and the sort of lightly worn virtuosity that masks the advanced technical challenges he makes in Alborada del gracioso ... and the two outer movements of Gaspard de la nuit. Too much temperament, and the music loses its classical shape; too little, and it sounds pale."[219] This balance caused a breach between the composer and Viñes, who said that if he observed the nuances and speeds Ravel stipulated in Gaspard de la nuit, "Le gibet" would "bore the audience to death".[220] Some pianists continue to attract criticism for over-interpreting Ravel's piano writing.[221][n 33]
Ravel's regard for his predecessors is heard in several of his piano works; Menuet sur le nom de Haydn (1909), À la manière de Borodine (1912), À la manière de Chabrier (1913) and Le tombeau de Couperin all incorporate elements of the named composers interpreted in a characteristically Ravellian manner.[223] Clark comments that those piano works which Ravel later orchestrated are overshadowed by the revised versions: "Listen to Le tombeau de Couperin and the complete ballet music for Ma mère L'Oye in the classic recordings conducted by André Cluytens, and the piano versions never sound quite the same again."[219]
Apart from a one-movement sonata for violin and piano dating from 1899, unpublished in the composer's lifetime, Ravel wrote seven chamber works.[17] The earliest is the String Quartet (1902–03), dedicated to Fauré, and showing the influence of Debussy's quartet of ten years earlier. Like the Debussy, it differs from the more monumental quartets of the established French school of Franck and his followers, with more succinct melodies, fluently interchanged, in flexible tempos and varieties of instrumental colour.[224] The Introduction and Allegro for harp, flute, clarinet and string quartet (1905) was composed very quickly by Ravel's standards. It is an ethereal piece in the vein of the Pavane pour une infante défunte.[225] Ravel also worked at unusual speed on the Piano Trio (1914) to complete it before joining the French Army. It contains Basque, Baroque and far Eastern influences, and shows Ravel's growing technical skill, dealing with the difficulties of balancing the percussive piano with the sustained sound of the violin and cello, "blending the two disparate elements in a musical language that is unmistakably his own," in the words of the commentator Keith Anderson.[226]
Ravel's four chamber works composed after the First World War are the Sonata for Violin and Cello (1920–22), the "Berceuse sur le nom de Gabriel Fauré" for violin and piano (1922), the chamber original of Tzigane for violin and piano (1924) and finally the Violin Sonata (1923–27).[17] The two middle works are respectively an affectionate tribute to Ravel's teacher,[227] and a virtuoso display piece for the violinist Jelly d'Arányi.[228] The Violin and Cello Sonata is a departure from the rich textures and harmonies of the pre-war Piano Trio: the composer said that it marked a turning point in his career, with thinness of texture pushed to the extreme and harmonic charm renounced in favour of pure melody.[229] His last chamber work, the Violin Sonata (sometimes called the Second after the posthumous publication of his student sonata), is a frequently dissonant work. Ravel said that the violin and piano are "essentially incompatible" instruments, and that his Sonata reveals their incompatibility.[229] Sackville-West and Shawe-Taylor consider the post-war sonatas "rather laboured and unsatisfactory",[230] and neither work has matched the popularity of Ravel's pre-war chamber works.[231]
Ravel's interpretations of some of his piano works were captured on piano roll between 1914 and 1928, although some rolls supposedly played by him may have been made under his supervision by Robert Casadesus, a better pianist.[232] Transfers of the rolls have been released on compact disc.[232] In 1913 there was a gramophone recording of Jeux d'eau played by Mark Hambourg, and by the early 1920s there were discs featuring the Pavane pour une infante défunte and Ondine, and movements from the String Quartet, Le tombeau de Couperin and Ma mère l'Oye.[233] Ravel was among the first composers who recognised the potential of recording to bring their music to a wider public,[n 34] and throughout the 1920s there was a steady stream of recordings of his works, some of which featured the composer as pianist or conductor.[235] A 1932 recording of the G major Piano Concerto was advertised as "Conducted by the composer",[236] although he had in fact supervised the sessions while a more proficient conductor took the baton.[237] Recordings for which Ravel actually was the conductor included a Boléro in 1930, and a sound film of a 1933 performance of the D major concerto with Wittgenstein as soloist.[238]
Ravel declined not only the Légion d'honneur, but all state honours from France, refusing to let his name go forward for election to the Institut de France.[239] He accepted foreign awards, including honorary membership of the Royal Philharmonic Society in 1921,[240] the Belgian Ordre de Léopold in 1926, and an honorary doctorate from the University of Oxford in 1928.[241]
After Ravel's death, his brother and legatee, Edouard, turned the composer's house at Montfort-l'Amaury into a museum, leaving it substantially as Ravel had known it. As of 2018, the maison-musée de Maurice Ravel remains open for guided tours.[242]
In his later years, Edouard Ravel declared his intention to leave the bulk of the composer's estate to the city of Paris for the endowment of a Nobel Prize in music, but evidently changed his mind.[243] After his death in 1960, the estate passed through several hands. Despite the substantial royalties paid for performing Ravel's music, the news magazine Le Point reported in 2000 that it was unclear who the beneficiaries were.[244] The British newspaper The Guardian reported in 2001 that no money from royalties had been forthcoming for the maintenance of the Ravel museum at Montfort-l'Amaury, which was in a poor state of repair.[243]
en/4931.html.txt
ADDED
@@ -0,0 +1,67 @@
Ravenna (/rəˈvɛnə/ rə-VEN-ə, Italian: [raˈvenna], also locally [raˈvɛnna] (listen); Romagnol: Ravèna) is the capital city of the Province of Ravenna, in the Emilia-Romagna region of Northern Italy. It was the capital city of the Western Roman Empire from 402 until that empire collapsed in 476. It then served as the capital of the Ostrogothic Kingdom until it was re-conquered in 540 by the Byzantine Empire. Afterwards, the city formed the centre of the Byzantine Exarchate of Ravenna until the invasion of the Lombards in 751, after which it became the seat of the Kingdom of the Lombards.
Although it is an inland city, Ravenna is connected to the Adriatic Sea by the Candiano Canal. It is known for its well-preserved late Roman and Byzantine architecture, with eight buildings comprising the UNESCO World Heritage Site "Early Christian Monuments of Ravenna".[5]
The origin of the name Ravenna is unclear, although it is believed the name is Etruscan.[6] Some have speculated that "ravenna" is related to "Rasenna" (later "Rasna"), the term that the Etruscans used for themselves, but there is no agreement on this point.[citation needed]
The origins of Ravenna are uncertain.[7] The first settlement is variously attributed to the Thessalians, the Etruscans and the Umbrians, peoples who are thought to have later coexisted there. Its territory was afterwards settled also by the Senones, especially the southern countryside of the city (which was not part of the lagoon), the Ager Decimanus. Ravenna consisted of houses built on piles on a series of small islands in a marshy lagoon – a situation similar to Venice several centuries later. The Romans ignored it during their conquest of the Po River Delta, but later accepted it into the Roman Republic as a federated town in 89 BC.
In 49 BC, it was the location where Julius Caesar gathered his forces before crossing the Rubicon. Later, after his battle against Mark Antony in 31 BC, Emperor Augustus founded the military harbor of Classe.[8] This harbor, protected at first by its own walls, was an important station of the Roman Imperial Fleet. Nowadays the city is landlocked, but Ravenna remained an important seaport on the Adriatic until the early Middle Ages. During the Germanic campaigns, Thusnelda, widow of Arminius, and Marbod, King of the Marcomanni, were confined at Ravenna.
Ravenna greatly prospered under Roman rule. Emperor Trajan built a 70 km (43.50 mi) long aqueduct at the beginning of the 2nd century. During the Marcomannic Wars, Germanic settlers in Ravenna revolted and managed to seize possession of the city. For this reason, Marcus Aurelius decided not only against bringing more barbarians into Italy, but even banished those who had previously been brought there.[9] In AD 402, Emperor Honorius transferred the capital of the Western Roman Empire from Milan to Ravenna. At that time it was home to 50,000 people.[10] The transfer was made partly for defensive purposes: Ravenna was surrounded by swamps and marshes, and was perceived to be easily defensible (although in fact the city fell to opposing forces numerous times in its history); it is also likely that the move to Ravenna was due to the city's port and good sea-borne connections to the Eastern Roman Empire. However, in 409, King Alaric I of the Visigoths simply bypassed Ravenna, and went on to sack Rome in 410 and to take Galla Placidia, daughter of Emperor Theodosius I, hostage.
After many vicissitudes, Galla Placidia returned to Ravenna with her son, Emperor Valentinian III, due to the support of her nephew Theodosius II. Ravenna enjoyed a period of peace, during which time the Christian religion was favoured by the imperial court, and the city gained some of its most famous monuments, including the Orthodox Baptistery, the misnamed Mausoleum of Galla Placidia (she was not actually buried there), and San Giovanni Evangelista.
The late 5th century saw the dissolution of Roman authority in the west, and the last person to hold the title of emperor in the West was deposed in 476 by the general Odoacer. Odoacer ruled as King of Italy for 13 years, but in 489 the Eastern Emperor Zeno sent the Ostrogoth King Theodoric the Great to re-take the Italian peninsula. After losing the Battle of Verona, Odoacer retreated to Ravenna, where he withstood a siege of three years by Theodoric, until the taking of Rimini deprived Ravenna of supplies. Theodoric took Ravenna in 493, supposedly slew Odoacer with his own hands, and Ravenna became the capital of the Ostrogothic Kingdom of Italy. Theodoric, following his imperial predecessors, also built many splendid buildings in and around Ravenna, including his palace church Sant'Apollinare Nuovo, an Arian cathedral (now Santo Spirito) and Baptistery, and his own Mausoleum just outside the walls.
Both Odoacer and Theodoric and their followers were Arian Christians, but co-existed peacefully with the Latins, who were largely Catholic Orthodox. Ravenna's Orthodox bishops carried out notable building projects, of which the sole surviving one is the Capella Arcivescovile. Theodoric allowed Roman citizens within his kingdom to be subject to Roman law and the Roman judicial system. The Goths, meanwhile, lived under their own laws and customs. In 519, when a mob had burned down the synagogues of Ravenna, Theodoric ordered the town to rebuild them at its own expense.
Theodoric died in 526 and was succeeded by his young grandson Athalaric under the authority of his daughter Amalasunta, but by 535 both were dead and Theodoric's line was represented only by Amalasuntha's daughter Matasuntha. Various Ostrogothic military leaders took the Kingdom of Italy, but none were as successful as Theodoric had been. Meanwhile, the orthodox Christian Byzantine Emperor Justinian I, opposed both Ostrogoth rule and the Arian variety of Christianity. In 535 his general Belisarius invaded Italy and in 540 conquered Ravenna. After the conquest of Italy was completed in 554, Ravenna became the seat of Byzantine government in Italy.
From 540 to 600, Ravenna's bishops embarked upon a notable building program of churches in Ravenna and in and around the port city of Classe. Surviving monuments include the Basilica of San Vitale and the Basilica of Sant'Apollinare in Classe, as well as the partially surviving San Michele in Africisco.
Following the conquests of Belisarius for the Emperor Justinian I in the 6th century, Ravenna became the seat of the Byzantine governor of Italy, the Exarch, and was known as the Exarchate of Ravenna. It was at this time that the Ravenna Cosmography was written.
Under Byzantine rule, the archbishop of the Archdiocese of Ravenna was temporarily granted autocephaly from the Roman Church by the emperor, in 666, but this was soon revoked. Nevertheless, the archbishop of Ravenna held the second place in Italy after the pope, and played an important role in many theological controversies during this period.
The Lombards, under King Liutprand, occupied Ravenna in 712, but were forced to return it to the Byzantines.[11] However, in 751 the Lombard king, Aistulf, succeeded in conquering Ravenna, thus ending Byzantine rule in northern Italy.
King Pepin of the Franks attacked the Lombards under orders of Pope Stephen II. Ravenna then gradually came under the direct authority of the Popes, although this was contested by the archbishops at various times. Pope Adrian I authorized Charlemagne to take away anything from Ravenna that he liked, and an unknown quantity of Roman columns, mosaics, statues, and other portable items were taken north to enrich his capital of Aachen.
In 1198 Ravenna led a league of Romagna cities against the Emperor, and the Pope was able to subdue it. After the war of 1218 the Traversari family was able to impose its rule in the city, which lasted until 1240. After a short period under an Imperial vicar, Ravenna was returned to the Papal States in 1248 and again to the Traversari until, in 1275, the Da Polenta established their long-lasting seigniory. One of the most illustrious residents of Ravenna at this time was the exiled poet Dante. The last of the Da Polenta, Ostasio III, was ousted by the Republic of Venice in 1440, and the city was annexed to the Venetian territories in the Treaty of Cremona.
Ravenna was ruled by Venice until 1509, when the area was invaded in the course of the Italian Wars. In 1512, during the Holy League wars, Ravenna was sacked by the French following the Battle of Ravenna. Ravenna was also known during the Renaissance as the birthplace of the Monster of Ravenna.
After the Venetian withdrawal, Ravenna was again ruled by legates of the Pope as part of the Papal States. The city was damaged in a tremendous flood in May 1636. Over the next 300 years, a network of canals diverted nearby rivers and drained nearby swamps, thus reducing the possibility of flooding and creating a large belt of agricultural land around the city.
Apart from another short occupation by Venice (1527–1529), Ravenna was part of the Papal States until 1796, when it was annexed to the French puppet state of the Cisalpine Republic, (Italian Republic from 1802, and Kingdom of Italy from 1805). It was returned to the Papal States in 1814. Occupied by Piedmontese troops in 1859, Ravenna and the surrounding Romagna area became part of the new unified Kingdom of Italy in 1861.
During World War II, troops of the British 27th Lancers entered and occupied Ravenna on 5 December 1944. A total of 937 Commonwealth soldiers who died in the winter of 1944–45 are buried in Ravenna War Cemetery, including 438 Canadians (https://www.veterans.gc.ca/eng/remembrance/history/second-world-war/canada-Italy-1943-to-1945). The town itself suffered very little damage.
Eight early Christian monuments of Ravenna are inscribed on the World Heritage List: the Mausoleum of Galla Placidia, the Neonian (Orthodox) Baptistery, the Basilica of Sant'Apollinare Nuovo, the Arian Baptistery, the Archiepiscopal Chapel, the Mausoleum of Theodoric, the Basilica of San Vitale and the Basilica of Sant'Apollinare in Classe.
Other attractions include:
The city annually hosts the Ravenna Festival, one of Italy's prominent classical music gatherings. Opera performances are held at the Teatro Alighieri while concerts take place at the Palazzo Mauro de André as well as in the ancient Basilica of San Vitale and Basilica of Sant'Apollinare in Classe. Chicago Symphony Orchestra music director Riccardo Muti, a longtime resident of the city, regularly participates in the festival, which invites orchestras and other performers from around the world.
Michelangelo Antonioni filmed his 1964 movie Red Desert (Deserto Rosso) within the industrialised areas of the Pialassa valley within the city limits.
Ravenna has an important commercial and tourist port.
Ravenna railway station has direct Trenitalia service to Bologna, Ferrara, Lecce, Milan, Parma, Rimini, and Verona.
Ravenna Airport is located in Ravenna. The nearest commercial airports are those of Forlì, Rimini and Bologna.
Freeways crossing Ravenna include: A14-bis from the hub of Bologna; on the north–south axis of EU routes E45 (from Rome) and E55 (SS-309 "Romea" from Venice); and on the regional Ferrara-Rimini axis of SS-16 (partially called "Adriatica").
Ravenna is twinned with:[17]
The city's historical Italian football club is Ravenna F.C. It currently plays in the third tier of Italian football, commonly known as "Serie C".
A.P.D. Ribelle 1927, founded in 1927, is the football club of Castiglione di Ravenna, a frazione of Ravenna. It currently plays in Italy's Serie D after promotion from Eccellenza Emilia-Romagna Girone B in the 2013–14 season.
The president is Marcello Missiroli and the manager is Enrico Zaccaroni.
Its home ground is the 1,000-seat Stadio Massimo Sbrighi in Castiglione di Ravenna. The team's colors are white and blue.
The beaches of Ravenna hosted the FIFA Beach Soccer World Cup in September 2011.
en/4932.html.txt
ADDED
@@ -0,0 +1,76 @@
In classical geometry, a radius of a circle or sphere is any of the line segments from its center to its perimeter, and in more modern usage, it is also their length. The name comes from the Latin radius, meaning ray but also the spoke of a chariot wheel.[1] The plural of radius can be either radii (from the Latin plural) or the conventional English plural radiuses.[2] The typical abbreviation and mathematical variable name for radius is r. By extension, the diameter d is defined as twice the radius:[3]
If an object does not have a center, the term may refer to its circumradius, the radius of its circumscribed circle or circumscribed sphere. In either case, the radius may be more than half the diameter, which is usually defined as the maximum distance between any two points of the figure. The inradius of a geometric figure is usually the radius of the largest circle or sphere contained in it. The inner radius of a ring, tube or other hollow object is the radius of its cavity.
For a regular polygon, the radius is the same as its circumradius.[4] The inradius of a regular polygon is also called the apothem. In graph theory, the radius of a graph is the minimum over all vertices u of the maximum distance from u to any other vertex of the graph.[5]
The radius of the circle with perimeter (circumference) C is r = C / (2π).
For many geometric figures, the radius has a well-defined relationship with other measures of the figure.
The radius of a circle with area A is r = √(A / π).
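Both relations are easy to check numerically; the following is a minimal Python sketch added here as an illustration (it is not part of the original article, and the function names are arbitrary):
import math

def radius_from_circumference(C):
    # r = C / (2*pi), rearranged from C = 2*pi*r
    return C / (2 * math.pi)

def radius_from_area(A):
    # r = sqrt(A / pi), rearranged from A = pi*r**2
    return math.sqrt(A / math.pi)

print(radius_from_circumference(2 * math.pi * 5))  # 5.0
print(radius_from_area(math.pi * 25))              # 5.0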
The radius of the circle that passes through the three non-collinear points P1, P2, and P3 is given by r = |P1P3| / (2 sin θ),
where θ is the angle ∠P1P2P3. This formula uses the law of sines. If the three points are given by their coordinates (x1,y1), (x2,y2), and (x3,y3), the radius can be expressed as the product of the three pairwise distances divided by 2 |x1(y2 − y3) + x2(y3 − y1) + x3(y1 − y2)|, twice the absolute value of the shoelace expression, which equals four times the triangle's area.
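As a hedged illustration (not from the original article), the Python sketch below computes the circumradius from three points via the equivalent relation r = abc / (4K), where a, b, c are the pairwise distances and K is the triangle's area; the function name is arbitrary:
import math

def circumradius(p1, p2, p3):
    # Sides of the triangle formed by the three non-collinear points.
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # Shoelace expression: its absolute value is twice the triangle's area K.
    s2 = p1[0]*(p2[1]-p3[1]) + p2[0]*(p3[1]-p1[1]) + p3[0]*(p1[1]-p2[1])
    # r = abc / (4K) = abc / (2*|s2|)
    return a * b * c / (2 * abs(s2))

# Three points lying on a circle of radius 5 centred at the origin:
print(circumradius((5, 0), (0, 5), (-5, 0)))  # 5.0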
The radius r of a regular polygon with n sides of length s is given by r = Rn s, where
Rn = 1 / (2 sin(π/n)).
Values of Rn for small values of n are given in the table. If s = 1 then these values are also the radii of the corresponding regular polygons.
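These values follow directly from the formula above; the short Python sketch below is added here as an illustration (the printed numbers are derived from the formula, not copied from the original table):
import math

def R(n):
    # R_n = 1 / (2*sin(pi/n)): circumradius of a regular n-gon with unit side length.
    return 1 / (2 * math.sin(math.pi / n))

for n in range(3, 9):
    print(n, round(R(n), 4))
# 3 0.5774, 4 0.7071, 5 0.8507, 6 1.0, 7 1.1524, 8 1.3066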
The radius of a d-dimensional hypercube with side s is r = (s/2) √d, half the length of its main diagonal.
The polar coordinate system is a two-dimensional coordinate system in which each point on a plane is determined by a distance from a fixed point and an angle from a fixed direction.
The fixed point (analogous to the origin of a Cartesian system) is called the pole, and the ray from the pole in the fixed direction is the polar axis. The distance from the pole is called the radial coordinate or radius, and the angle is the angular coordinate, polar angle, or azimuth.[6]
In the cylindrical coordinate system, there is a chosen reference axis and a chosen reference plane perpendicular to that axis. The origin of the system is the point where all three coordinates can be given as zero. This is the intersection between the reference plane and the axis.
The axis is variously called the cylindrical or longitudinal axis, to differentiate it from the polar axis, which is the ray that lies in the reference plane, starting at the origin and pointing in the reference direction.
The distance from the axis may be called the radial distance or radius, while the angular coordinate is sometimes referred to as the angular position or as the azimuth. The radius and the azimuth are together called the polar coordinates, as they correspond to a two-dimensional polar coordinate system in the plane through the point, parallel to the reference plane. The third coordinate may be called the height or altitude (if the reference plane is considered horizontal), longitudinal position,[7] or axial position.[8]
In a spherical coordinate system, the radius describes the distance of a point from a fixed origin. Its position is further defined by the polar angle, measured between the radial direction and a fixed zenith direction, and the azimuth angle, the angle between the orthogonal projection of the radial direction onto a reference plane that passes through the origin and is orthogonal to the zenith, and a fixed reference direction in that plane.
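A minimal Python sketch, added here as an illustration with arbitrary function names, showing how the radial coordinate of each system is obtained from Cartesian coordinates:
import math

def polar_radius(x, y):
    # Radial coordinate in the plane; also the cylindrical radial distance from the axis.
    return math.hypot(x, y)

def spherical_radius(x, y, z):
    # Distance of the point from the origin in a spherical coordinate system.
    return math.sqrt(x*x + y*y + z*z)

x, y, z = 3.0, 4.0, 12.0
print(polar_radius(x, y))               # 5.0
print(spherical_radius(x, y, z))        # 13.0
print(math.degrees(math.atan2(y, x)))   # azimuth in the reference plane, about 53.13 degrees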
en/4933.html.txt
ADDED
@@ -0,0 +1,118 @@
Light or visible light is electromagnetic radiation within the portion of the electromagnetic spectrum that can be perceived by the human eye.[1] Visible light is usually defined as having wavelengths in the range of 400–700 nanometers (nm), or 4.00 × 10−7 to 7.00 × 10−7 m, between the infrared (with longer wavelengths) and the ultraviolet (with shorter wavelengths).[2][3] This wavelength range corresponds to a frequency range of roughly 430–750 terahertz (THz).
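The quoted frequency range follows from frequency = c / wavelength; a short Python check, added here as an illustration and not part of the original article:
c = 299_792_458  # speed of light in vacuum, m/s

for wavelength_nm in (400, 700):
    wavelength_m = wavelength_nm * 1e-9
    frequency_thz = c / wavelength_m / 1e12
    print(wavelength_nm, "nm ->", round(frequency_thz), "THz")
# 400 nm -> 749 THz and 700 nm -> 428 THz, roughly the 430-750 THz quoted above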
The main source of light on Earth is the Sun. Sunlight provides the energy that green plants use to create sugars mostly in the form of starches, which release energy into the living things that digest them. This process of photosynthesis provides virtually all the energy used by living things. Historically, another important source of light for humans has been fire, from ancient campfires to modern kerosene lamps. With the development of electric lights and power systems, electric lighting has effectively replaced firelight. Some species of animals generate their own light, a process called bioluminescence. For example, fireflies use light to locate mates, and vampire squids use it to hide themselves from prey.
The primary properties of visible light are intensity, propagation direction, frequency or wavelength spectrum, and polarization, while its speed in a vacuum, 299,792,458 meters per second, is one of the fundamental constants of nature. Visible light, as with all types of electromagnetic radiation (EMR), is experimentally found to always move at this speed in a vacuum.[4]
In physics, the term light sometimes refers to electromagnetic radiation of any wavelength, whether visible or not.[5][6] In this sense, gamma rays, X-rays, microwaves and radio waves are also light. Like all types of EM radiation, visible light propagates as waves. However, the energy imparted by the waves is absorbed at single locations the way particles are absorbed. The absorbed energy of the EM waves is called a photon, and represents the quanta of light. When a wave of light is transformed and absorbed as a photon, the energy of the wave instantly collapses to a single location, and this location is where the photon "arrives." This is what is called the wave function collapse. This dual wave-like and particle-like nature of light is known as the wave–particle duality. The study of light, known as optics, is an important research area in modern physics.
Generally, EM radiation (the designation "radiation" excludes static electric, magnetic, and near fields), or EMR, is classified by wavelength into radio waves, microwaves, infrared, the visible spectrum that we perceive as light, ultraviolet, X-rays, and gamma rays.
The behavior of EMR depends on its wavelength. Higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths. When EMR interacts with single atoms and molecules, its behavior depends on the amount of energy per quantum it carries.
EMR in the visible light region consists of quanta (called photons) that are at the lower end of the energies that are capable of causing electronic excitation within molecules, which leads to changes in the bonding or chemistry of the molecule. At the lower end of the visible light spectrum, EMR becomes invisible to humans (infrared) because its photons no longer have enough individual energy to cause a lasting molecular change (a change in conformation) in the visual molecule retinal in the human retina, which change triggers the sensation of vision.
There exist animals that are sensitive to various types of infrared, but not by means of quantum-absorption. Infrared sensing in snakes depends on a kind of natural thermal imaging, in which tiny packets of cellular water are raised in temperature by the infrared radiation. EMR in this range causes molecular vibration and heating effects, which is how these animals detect it.
Above the range of visible light, ultraviolet light becomes invisible to humans, mostly because it is absorbed by the cornea below 360 nm and the internal lens below 400 nm. Furthermore, the rods and cones located in the retina of the human eye cannot detect the very short (below 360 nm) ultraviolet wavelengths and are in fact damaged by ultraviolet. Many animals with eyes that do not require lenses (such as insects and shrimp) are able to detect ultraviolet, by quantum photon-absorption mechanisms, in much the same chemical way that humans detect visible light.
Various sources define visible light as narrowly as 420–680 nm[7][8] to as broadly as 380–800 nm.[9][10] Under ideal laboratory conditions, people can see infrared up to at least 1050 nm;[11] children and young adults may perceive ultraviolet wavelengths down to about 310–313 nm.[12][13][14]
Plant growth is also affected by the color spectrum of light, a process known as photomorphogenesis.
The speed of light in a vacuum is defined to be exactly 299,792,458 m/s (approx. 186,282 miles per second). The fixed value of the speed of light in SI units results from the fact that the meter is now defined in terms of the speed of light. All forms of electromagnetic radiation move at exactly this same speed in vacuum.
Different physicists have attempted to measure the speed of light throughout history. Galileo attempted to measure the speed of light in the seventeenth century. An early experiment to measure the speed of light was conducted by Ole Rømer, a Danish physicist, in 1676. Using a telescope, Rømer observed the motions of Jupiter and one of its moons, Io. Noting discrepancies in the apparent period of Io's orbit, he calculated that light takes about 22 minutes to traverse the diameter of Earth's orbit.[15] However, its size was not known at that time. If Rømer had known the diameter of the Earth's orbit, he would have calculated a speed of 227,000,000 m/s.
Another more accurate measurement of the speed of light was performed in Europe by Hippolyte Fizeau in 1849. Fizeau directed a beam of light at a mirror several kilometers away. A rotating cog wheel was placed in the path of the light beam as it traveled from the source, to the mirror and then returned to its origin. Fizeau found that at a certain rate of rotation, the beam would pass through one gap in the wheel on the way out and the next gap on the way back. Knowing the distance to the mirror, the number of teeth on the wheel, and the rate of rotation, Fizeau was able to calculate the speed of light as 313,000,000 m/s.
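Both historical estimates can be reproduced with simple arithmetic. In the Python sketch below (an added illustration), the orbital diameter uses the modern value of the astronomical unit, and the Fizeau parameters — distance to the mirror, number of teeth and rotation rate — are commonly cited figures assumed here rather than taken from the text; the returning beam is first blocked when the wheel turns half a tooth spacing, 1/(2N) of a revolution, during the round trip 2L/c, which gives c = 4LNf:
# Roemer: about 22 minutes for light to cross the diameter of Earth's orbit (~2 AU).
au = 1.496e11  # metres, modern value (not known in Roemer's time)
print(2 * au / (22 * 60))  # ~2.27e8 m/s, matching the 227,000,000 m/s quoted above

# Fizeau: toothed-wheel estimate, c = 4 * L * N * f at the first eclipse of the beam.
L = 8_633   # distance from wheel to mirror in metres (assumed, commonly cited figure)
N = 720     # number of teeth on the wheel (assumed)
f = 12.6    # rotation rate in revolutions per second (assumed)
print(4 * L * N * f)  # ~3.13e8 m/s, matching the 313,000,000 m/s quoted above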
Léon Foucault carried out an experiment which used rotating mirrors to obtain a value of 298,000,000 m/s in 1862. Albert A. Michelson conducted experiments on the speed of light from 1877 until his death in 1931. He refined Foucault's methods in 1926 using improved rotating mirrors to measure the time it took light to make a round trip from Mount Wilson to Mount San Antonio in California. The precise measurements yielded a speed of 299,796,000 m/s.[16]
The effective velocity of light in various transparent substances containing ordinary matter, is less than in vacuum. For example, the speed of light in water is about 3/4 of that in vacuum.
Two independent teams of physicists were said to bring light to a "complete standstill" by passing it through a Bose–Einstein condensate of the element rubidium, one team at Harvard University and the Rowland Institute for Science in Cambridge, Massachusetts, and the other at the Harvard–Smithsonian Center for Astrophysics, also in Cambridge.[17] However, the popular description of light being "stopped" in these experiments refers only to light being stored in the excited states of atoms, then re-emitted at an arbitrary later time, as stimulated by a second laser pulse. During the time it had "stopped" it had ceased to be light.
The study of light and the interaction of light and matter is termed optics. The observation and study of optical phenomena such as rainbows and the aurora borealis offer many clues as to the nature of light.
Refraction is the bending of light rays when passing through a surface between one transparent material and another. It is described by Snell's Law: n1 sin θ1 = n2 sin θ2,
where θ1 is the angle between the ray and the surface normal in the first medium, θ2 is the angle between the ray and the surface normal in the second medium, and n1 and n2 are the indices of refraction, n = 1 in a vacuum and n > 1 in a transparent substance.
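A hedged Python sketch of the calculation, added here as an illustration (the function name is arbitrary):
import math

def refraction_angle(theta1_deg, n1, n2):
    # Snell's law: n1*sin(theta1) = n2*sin(theta2), angles measured from the surface normal.
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None  # total internal reflection: no refracted ray
    return math.degrees(math.asin(s))

# Light entering water (n ~ 1.33) from air (n ~ 1.00) at 30 degrees to the normal:
print(refraction_angle(30, 1.00, 1.33))  # ~22.1 degrees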
When a beam of light crosses the boundary between a vacuum and another medium, or between two different media, the wavelength of the light changes, but the frequency remains constant. If the beam of light is not orthogonal (or rather normal) to the boundary, the change in wavelength results in a change in the direction of the beam. This change of direction is known as refraction.
The refractive quality of lenses is frequently used to manipulate light in order to change the apparent size of images. Magnifying glasses, spectacles, contact lenses, microscopes and refracting telescopes are all examples of this manipulation.
There are many sources of light. A body at a given temperature emits a characteristic spectrum of black-body radiation. A simple thermal source is sunlight: the radiation emitted by the photosphere of the Sun at around 6,000 kelvins (5,730 degrees Celsius; 10,340 degrees Fahrenheit) peaks in the visible region of the electromagnetic spectrum when plotted in wavelength units,[18] and roughly 44% of the sunlight energy that reaches the ground is visible.[19] Another example is incandescent light bulbs, which emit only around 10% of their energy as visible light and the remainder as infrared. A common thermal light source in history is the glowing solid particles in flames, but these also emit most of their radiation in the infrared, and only a fraction in the visible spectrum.
The peak of the black-body spectrum is in the deep infrared, at about 10 micrometre wavelength, for relatively cool objects like human beings. As the temperature increases, the peak shifts to shorter wavelengths, producing first a red glow, then a white one, and finally a blue-white color as the peak moves out of the visible part of the spectrum and into the ultraviolet. These colors can be seen when metal is heated to "red hot" or "white hot". Blue-white thermal emission is not often seen, except in stars (the commonly seen pure-blue color in a gas flame or a welder's torch is in fact due to molecular emission, notably by CH radicals emitting a wavelength band around 425 nm, and is not seen in stars or pure thermal radiation).
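The shift of the peak with temperature is quantified by Wien's displacement law, which is not stated explicitly in the text above; the short Python sketch below (an added illustration) reproduces the roughly 10 micrometre figure for body-temperature objects:
WIEN_B = 2.898e-3  # Wien's displacement constant, metre-kelvins

def peak_wavelength_um(T_kelvin):
    # lambda_peak = b / T (Wien's displacement law), returned in micrometres
    return WIEN_B / T_kelvin * 1e6

print(peak_wavelength_um(310))   # ~9.3 um: a human-temperature body, in the deep infrared
print(peak_wavelength_um(6000))  # ~0.48 um: a Sun-like surface, in the visible range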
Atoms emit and absorb light at characteristic energies. This produces "emission lines" in the spectrum of each atom. Emission can be spontaneous, as in light-emitting diodes, gas discharge lamps (such as neon lamps and neon signs, mercury-vapor lamps, etc.), and flames (light from the hot gas itself—so, for example, sodium in a gas flame emits characteristic yellow light). Emission can also be stimulated, as in a laser or a microwave maser.
Deceleration of a free charged particle, such as an electron, can produce visible radiation: cyclotron radiation, synchrotron radiation, and bremsstrahlung radiation are all examples of this. Particles moving through a medium faster than the speed of light in that medium can produce visible Cherenkov radiation. Certain chemicals produce visible radiation by chemoluminescence. In living things, this process is called bioluminescence. For example, fireflies produce light by this means, and boats moving through water can disturb plankton which produce a glowing wake.
Certain substances produce light when they are illuminated by more energetic radiation, a process known as fluorescence. Some substances emit light slowly after excitation by more energetic radiation. This is known as phosphorescence. Phosphorescent materials can also be excited by bombarding them with subatomic particles. Cathodoluminescence is one example. This mechanism is used in cathode ray tube television sets and computer monitors.
Certain other mechanisms can produce light:
When the concept of light is intended to include very-high-energy photons (gamma rays), additional generation mechanisms include:
Light is measured with two main alternative sets of units: radiometry consists of measurements of light power at all wavelengths, while photometry measures light with wavelength weighted with respect to a standardized model of human brightness perception. Photometry is useful, for example, to quantify Illumination (lighting) intended for human use. The SI units for both systems are summarized in the following tables.
The photometry units are different from most systems of physical units in that they take into account how the human eye responds to light. The cone cells in the human eye are of three types which respond differently across the visible spectrum, and the cumulative response peaks at a wavelength of around 555 nm. Therefore, two sources of light which produce the same intensity (W/m2) of visible light do not necessarily appear equally bright. The photometry units are designed to take this into account, and therefore are a better representation of how "bright" a light appears to be than raw intensity. They relate to raw power by a quantity called luminous efficacy, and are used for purposes like determining how to best achieve sufficient illumination for various tasks in indoor and outdoor settings. The illumination measured by a photocell sensor does not necessarily correspond to what is perceived by the human eye, and without filters which may be costly, photocells and charge-coupled devices (CCD) tend to respond to some infrared, ultraviolet or both.
Light exerts physical pressure on objects in its path, a phenomenon which can be deduced by Maxwell's equations, but can be more easily explained by the particle nature of light: photons strike and transfer their momentum. Light pressure is equal to the power of the light beam divided by c, the speed of light. Due to the magnitude of c, the effect of light pressure is negligible for everyday objects. For example, a one-milliwatt laser pointer exerts a force of about 3.3 piconewtons on the object being illuminated; thus, one could lift a U.S. penny with laser pointers, but doing so would require about 30 billion 1-mW laser pointers.[20] However, in nanometre-scale applications such as nanoelectromechanical systems (NEMS), the effect of light pressure is more significant, and exploiting light pressure to drive NEMS mechanisms and to flip nanometre-scale physical switches in integrated circuits is an active area of research.[21] At larger scales, light pressure can cause asteroids to spin faster,[22] acting on their irregular shapes as on the vanes of a windmill. The possibility of making solar sails that would accelerate spaceships in space is also under investigation.[23][24]
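A minimal Python sketch of the figure quoted above, added here as an illustration (the factor of two for a perfectly reflecting surface is a standard result not mentioned in the text):
c = 299_792_458  # speed of light in vacuum, m/s

def radiation_force(power_watts, reflective=False):
    # Force from an absorbed beam is P/c; a perfect mirror doubles the momentum transfer.
    return (2 if reflective else 1) * power_watts / c

print(radiation_force(1e-3))  # ~3.3e-12 N for a 1 mW beam, the piconewton figure quoted above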
Although the motion of the Crookes radiometer was originally attributed to light pressure, this interpretation is incorrect; the characteristic Crookes rotation is the result of a partial vacuum.[25] This should not be confused with the Nichols radiometer, in which the (slight) motion caused by torque (though not enough for full rotation against friction) is directly caused by light pressure.[26] As a consequence of light pressure, Einstein[27] in 1909 predicted the existence of "radiation friction" which would oppose the movement of matter. He wrote, "radiation will exert pressure on both sides of the plate. The forces of pressure exerted on the two sides are equal if the plate is at rest. However, if it is in motion, more radiation will be reflected on the surface that is ahead during the motion (front surface) than on the back surface. The backward-acting force of pressure exerted on the front surface is thus larger than the force of pressure acting on the back. Hence, as the resultant of the two forces, there remains a force that counteracts the motion of the plate and that increases with the velocity of the plate. We will call this resultant 'radiation friction' in brief."
Usually light momentum is aligned with its direction of motion. However, for example in evanescent waves momentum is transverse to direction of propagation.[28]
In the fifth century BC, Empedocles postulated that everything was composed of four elements; fire, air, earth and water. He believed that Aphrodite made the human eye out of the four elements and that she lit the fire in the eye which shone out from the eye making sight possible. If this were true, then one could see during the night just as well as during the day, so Empedocles postulated an interaction between rays from the eyes and rays from a source such as the sun.[29]
In about 300 BC, Euclid wrote Optica, in which he studied the properties of light. Euclid postulated that light travelled in straight lines and he described the laws of reflection and studied them mathematically. He questioned that sight is the result of a beam from the eye, for he asks how one sees the stars immediately, if one closes one's eyes, then opens them at night. If the beam from the eye travels infinitely fast this is not a problem.[30]
In 55 BC, Lucretius, a Roman who carried on the ideas of earlier Greek atomists, wrote that "The light & heat of the sun; these are composed of minute atoms which, when they are shoved off, lose no time in shooting right across the interspace of air in the direction imparted by the shove." (from On the nature of the Universe). Despite being similar to later particle theories, Lucretius's views were not generally accepted. Ptolemy (c. 2nd century) wrote about the refraction of light in his book Optics.[31]
In ancient India, the Hindu schools of Samkhya and Vaisheshika, from around the early centuries AD, developed theories on light. According to the Samkhya school, light is one of the five fundamental "subtle" elements (tanmatra) out of which emerge the gross elements. The atomicity of these elements is not specifically mentioned and it appears that they were actually taken to be continuous.[32] On the other hand, the Vaisheshika school gives an atomic theory of the physical world on the non-atomic ground of ether, space and time. (See Indian atomism.) The basic atoms are those of earth (prthivi), water (pani), fire (agni), and air (vayu). Light rays are taken to be a stream of high-velocity tejas (fire) atoms. The particles of light can exhibit different characteristics depending on the speed and the arrangements of the tejas atoms.[citation needed] The Vishnu Purana refers to sunlight as "the seven rays of the sun".[32]
The Indian Buddhists, such as Dignāga in the 5th century and Dharmakirti in the 7th century, developed a type of atomism that is a philosophy about reality being composed of atomic entities that are momentary flashes of light or energy. They viewed light as being an atomic entity equivalent to energy.[32]
René Descartes (1596–1650) held that light was a mechanical property of the luminous body, rejecting the "forms" of Ibn al-Haytham and Witelo as well as the "species" of Bacon, Grosseteste, and Kepler.[33] In 1637 he published a theory of the refraction of light that assumed, incorrectly, that light travelled faster in a denser medium than in a less dense medium. Descartes arrived at this conclusion by analogy with the behaviour of sound waves.[citation needed] Although Descartes was incorrect about the relative speeds, he was correct in assuming that light behaved like a wave and in concluding that refraction could be explained by the speed of light in different media.
Descartes is not the first to use the mechanical analogies but because he clearly asserts that light is only a mechanical property of the luminous body and the transmitting medium, Descartes' theory of light is regarded as the start of modern physical optics.[33]
Pierre Gassendi (1592–1655), an atomist, proposed a particle theory of light which was published posthumously in the 1660s. Isaac Newton studied Gassendi's work at an early age, and preferred his view to Descartes' theory of the plenum. He stated in his Hypothesis of Light of 1675 that light was composed of corpuscles (particles of matter) which were emitted in all directions from a source. One of Newton's arguments against the wave nature of light was that waves were known to bend around obstacles, while light travelled only in straight lines. He did, however, explain the phenomenon of the diffraction of light (which had been observed by Francesco Grimaldi) by allowing that a light particle could create a localised wave in the aether.
Newton's theory could be used to predict the reflection of light, but could only explain refraction by incorrectly assuming that light accelerated upon entering a denser medium because the gravitational pull was greater. Newton published the final version of his theory in his Opticks of 1704. His reputation helped the particle theory of light to hold sway during the 18th century. The particle theory of light led Laplace to argue that a body could be so massive that light could not escape from it. In other words, it would become what is now called a black hole. Laplace withdrew his suggestion later, after a wave theory of light became firmly established as the model for light (as has been explained, neither a particle or wave theory is fully correct). A translation of Newton's essay on light appears in The large scale structure of space-time, by Stephen Hawking and George F. R. Ellis.
The fact that light could be polarized was for the first time qualitatively explained by Newton using the particle theory. Étienne-Louis Malus in 1810 created a mathematical particle theory of polarization. Jean-Baptiste Biot in 1812 showed that this theory explained all known phenomena of light polarization. At that time the polarization was considered as the proof of the particle theory.
To explain the origin of colors, Robert Hooke (1635–1703) developed a "pulse theory" and compared the spreading of light to that of waves in water in his 1665 work Micrographia ("Observation IX"). In 1672 Hooke suggested that light's vibrations could be perpendicular to the direction of propagation. Christiaan Huygens (1629–1695) worked out a mathematical wave theory of light in 1678, and published it in his Treatise on light in 1690. He proposed that light was emitted in all directions as a series of waves in a medium called the Luminiferous ether. As waves are not affected by gravity, it was assumed that they slowed down upon entering a denser medium.[34]
The wave theory predicted that light waves could interfere with each other like sound waves (as noted around 1800 by Thomas Young). Young showed by means of a diffraction experiment that light behaved as waves. He also proposed that different colors were caused by different wavelengths of light, and explained color vision in terms of three-colored receptors in the eye. Another supporter of the wave theory was Leonhard Euler. He argued in Nova theoria lucis et colorum (1746) that diffraction could more easily be explained by a wave theory. In 1816 André-Marie Ampère gave Augustin-Jean Fresnel an idea that the polarization of light can be explained by the wave theory if light were a transverse wave.[35]
Later, Fresnel independently worked out his own wave theory of light, and presented it to the Académie des Sciences in 1817. Siméon Denis Poisson added to Fresnel's mathematical work to produce a convincing argument in favor of the wave theory, helping to overturn Newton's corpuscular theory. By the year 1821, Fresnel was able to show via mathematical methods that polarization could be explained by the wave theory of light if and only if light was entirely transverse, with no longitudinal vibration whatsoever.[citation needed]
The weakness of the wave theory was that light waves, like sound waves, would need a medium for transmission. The existence of the hypothetical substance luminiferous aether proposed by Huygens in 1678 was cast into strong doubt in the late nineteenth century by the Michelson–Morley experiment.
|
105 |
+
|
106 |
+
Newton's corpuscular theory implied that light would travel faster in a denser medium, while the wave theory of Huygens and others implied the opposite. At that time, the speed of light could not be measured accurately enough to decide which theory was correct. The first to make a sufficiently accurate measurement was Léon Foucault, in 1850.[36] His result supported the wave theory, and the classical particle theory was finally abandoned, only to partly re-emerge in the 20th century.
|
107 |
+
|
108 |
+
In 1845, Michael Faraday discovered that the plane of polarization of linearly polarized light is rotated when the light rays travel along the magnetic field direction in the presence of a transparent dielectric, an effect now known as Faraday rotation.[37] This was the first evidence that light was related to electromagnetism. In 1846 he speculated that light might be some form of disturbance propagating along magnetic field lines.[37] Faraday proposed in 1847 that light was a high-frequency electromagnetic vibration, which could propagate even in the absence of a medium such as the ether.[38]
|
109 |
+
|
110 |
+
Faraday's work inspired James Clerk Maxwell to study electromagnetic radiation and light. Maxwell discovered that self-propagating electromagnetic waves would travel through space at a constant speed, which happened to be equal to the previously measured speed of light. From this, Maxwell concluded that light was a form of electromagnetic radiation: he first stated this result in 1862 in On Physical Lines of Force. In 1873, he published A Treatise on Electricity and Magnetism, which contained a full mathematical description of the behavior of electric and magnetic fields, still known as Maxwell's equations. Soon after, Heinrich Hertz confirmed Maxwell's theory experimentally by generating and detecting radio waves in the laboratory, and demonstrating that these waves behaved exactly like visible light, exhibiting properties such as reflection, refraction, diffraction, and interference. Maxwell's theory and Hertz's experiments led directly to the development of modern radio, radar, television, electromagnetic imaging, and wireless communications.
|
111 |
+
|
112 |
+
In the quantum theory, photons are seen as wave packets of the waves described in the classical theory of Maxwell. The quantum theory was needed to explain effects, even with visible light, that Maxwell's classical theory could not (such as spectral lines).
|
113 |
+
|
114 |
+
In 1900 Max Planck, attempting to explain black-body radiation, suggested that although light was a wave, these waves could gain or lose energy only in finite amounts related to their frequency. Planck called these "lumps" of light energy "quanta" (from a Latin word for "how much"). In 1905, Albert Einstein used the idea of light quanta to explain the photoelectric effect, and suggested that these light quanta had a "real" existence. In 1923 Arthur Holly Compton showed that the wavelength shift seen when low-intensity X-rays scattered from electrons (so-called Compton scattering) could be explained by a particle theory of X-rays, but not a wave theory. In 1926 Gilbert N. Lewis named these light-quantum particles "photons".[39]
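For reference, the quantitative content of these proposals can be stated compactly; the relations below are standard textbook physics rather than anything specific to this article (h is Planck's constant, ν the frequency, λ the wavelength, K_max the maximum kinetic energy of an ejected photoelectron, and φ the work function of the illuminated surface):

E = h\nu = \frac{hc}{\lambda}, \qquad K_{\max} = h\nu - \phi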
|
115 |
+
|
116 |
+
Eventually the modern theory of quantum mechanics came to picture light as (in some sense) both a particle and a wave, and (in another sense), as a phenomenon which is neither a particle nor a wave (which actually are macroscopic phenomena, such as baseballs or ocean waves). Instead, modern physics sees light as something that can be described sometimes with mathematics appropriate to one type of macroscopic metaphor (particles), and sometimes another macroscopic metaphor (water waves), but is actually something that cannot be fully imagined. As in the case for radio waves and the X-rays involved in Compton scattering, physicists have noted that electromagnetic radiation tends to behave more like a classical wave at lower frequencies, but more like a classical particle at higher frequencies, but never completely loses all qualities of one or the other. Visible light, which occupies a middle ground in frequency, can easily be shown in experiments to be describable using either a wave or particle model, or sometimes both.
|
117 |
+
|
118 |
+
In February 2018, scientists reported, for the first time, the discovery of a new form of light, which may involve polaritons, that could be useful in the development of quantum computers.[40][41]
|
en/4934.html.txt
ADDED
@@ -0,0 +1,157 @@
1 |
+
An X-ray, or X-radiation, is a penetrating form of high-energy electromagnetic radiation. Most X-rays have a wavelength ranging from 10 picometres to 10 nanometres, corresponding to frequencies in the range 30 petahertz to 30 exahertz (3×10^16 Hz to 3×10^19 Hz) and energies in the range 124 eV to 124 keV. X-ray wavelengths are shorter than those of UV rays and typically longer than those of gamma rays. In many languages, X-radiation is referred to as Röntgen radiation, after the German scientist Wilhelm Röntgen, who discovered it on November 8, 1895.[1] He named it X-radiation to signify an unknown type of radiation.[2] Spellings of X-ray(s) in English include the variants x-ray(s), xray(s), and X ray(s).[3]
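As an illustrative cross-check of the ranges quoted above, the conversion between wavelength, frequency, and photon energy can be sketched in a few lines of Python; the constants are standard physical values and the snippet is an aid to the reader, not part of the original article.

# Illustrative check of the wavelength/frequency/energy ranges quoted above,
# using standard physical constants (not part of the original article).
h = 6.62607015e-34   # Planck constant, J*s
c = 299792458.0      # speed of light, m/s
eV = 1.602176634e-19 # joules per electronvolt

def photon(wavelength_m):
    """Return (frequency in Hz, energy in eV) for a photon of the given wavelength."""
    frequency = c / wavelength_m
    energy_eV = h * frequency / eV
    return frequency, energy_eV

for wl in (10e-9, 10e-12):           # the 10 nm (soft) and 10 pm (hard) limits
    f, E = photon(wl)
    print(f"{wl:.0e} m  ->  {f:.2e} Hz,  {E:.0f} eV")

Running it reproduces the quoted endpoints: 10 nm corresponds to about 3×10^16 Hz and 124 eV, while 10 pm corresponds to about 3×10^19 Hz and roughly 124 keV.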
|
2 |
+
|
3 |
+
Before their discovery in 1895, X-rays were just a type of unidentified radiation emanating from experimental discharge tubes. They were noticed by scientists investigating cathode rays produced by such tubes, which are energetic electron beams that were first observed in 1869. Many of the early Crookes tubes (invented around 1875) undoubtedly radiated X-rays, because early researchers noticed effects that were attributable to them, as detailed below. Crookes tubes created free electrons by ionization of the residual air in the tube by a high DC voltage of anywhere between a few kilovolts and 100 kV. This voltage accelerated the electrons coming from the cathode to a high enough velocity that they created X-rays when they struck the anode or the glass wall of the tube.[4]
|
4 |
+
|
5 |
+
The earliest experimenter thought to have (unknowingly) produced X-rays was actuary William Morgan. In 1785 he presented a paper to the Royal Society of London describing the effects of passing electrical currents through a partially evacuated glass tube, producing a glow created by X-rays.[5][6] This work was further explored by Humphry Davy and his assistant Michael Faraday.
|
6 |
+
|
7 |
+
When Stanford University physics professor Fernando Sanford created his "electric photography" he also unknowingly generated and detected X-rays. From 1886 to 1888 he had studied in the Hermann Helmholtz laboratory in Berlin, where he became familiar with the cathode rays generated in vacuum tubes when a voltage was applied across separate electrodes, as previously studied by Heinrich Hertz and Philipp Lenard. His letter of January 6, 1893 (describing his discovery as "electric photography") to The Physical Review was duly published and an article entitled Without Lens or Light, Photographs Taken With Plate and Object in Darkness appeared in the San Francisco Examiner.[7]
|
8 |
+
|
9 |
+
Starting in 1888, Philipp Lenard conducted experiments to see whether cathode rays could pass out of the Crookes tube into the air. He built a Crookes tube with a "window" in the end made of thin aluminum, facing the cathode so the cathode rays would strike it (later called a "Lenard tube"). He found that something came through, that would expose photographic plates and cause fluorescence. He measured the penetrating power of these rays through various materials. It has been suggested that at least some of these "Lenard rays" were actually X-rays.[8]
|
10 |
+
|
11 |
+
In 1889 Ukrainian-born Ivan Puluj, a lecturer in experimental physics at the Prague Polytechnic who since 1877 had been constructing various designs of gas-filled tubes to investigate their properties, published a paper on how sealed photographic plates became dark when exposed to the emanations from the tubes.[9]
|
12 |
+
|
13 |
+
Hermann von Helmholtz formulated mathematical equations for X-rays. He postulated a dispersion theory before Röntgen made his discovery and announcement. It was formed on the basis of the electromagnetic theory of light.[10] However, he did not work with actual X-rays.
|
14 |
+
|
15 |
+
In 1894 Nikola Tesla noticed damaged film in his lab that seemed to be associated with Crookes tube experiments and began investigating this radiant energy of "invisible" kinds.[11][12] After Röntgen identified the X-ray, Tesla began making X-ray images of his own using high voltages and tubes of his own design,[13] as well as Crookes tubes.
|
16 |
+
|
17 |
+
On November 8, 1895, German physics professor Wilhelm Röntgen stumbled on X-rays while experimenting with Lenard tubes and Crookes tubes and began studying them. He wrote an initial report "On a new kind of ray: A preliminary communication" and on December 28, 1895 submitted it to Würzburg's Physical-Medical Society journal.[14] This was the first paper written on X-rays. Röntgen referred to the radiation as "X", to indicate that it was an unknown type of radiation. The name stuck, although (over Röntgen's great objections) many of his colleagues suggested calling them Röntgen rays. They are still referred to as such in many languages, including German, Hungarian, Ukrainian, Danish, Polish, Bulgarian, Swedish, Finnish, Estonian, Russian, Latvian, Japanese, Dutch, Georgian, Hebrew and Norwegian. Röntgen received the first Nobel Prize in Physics for his discovery.[15]
|
18 |
+
|
19 |
+
There are conflicting accounts of his discovery because Röntgen had his lab notes burned after his death, but this is a likely reconstruction by his biographers:[16][17] Röntgen was investigating cathode rays from a Crookes tube which he had wrapped in black cardboard so that the visible light from the tube would not interfere, using a fluorescent screen painted with barium platinocyanide. He noticed a faint green glow from the screen, about 1 meter away. Röntgen realized some invisible rays coming from the tube were passing through the cardboard to make the screen glow. He found they could also pass through books and papers on his desk. Röntgen threw himself into investigating these unknown rays systematically. Two months after his initial discovery, he published his paper.[18]
|
20 |
+
|
21 |
+
Röntgen discovered their medical use when he made a picture of his wife's hand on a photographic plate formed due to X-rays. The photograph of his wife's hand was the first photograph of a human body part using X-rays. When she saw the picture, she said "I have seen my death."[21]
|
22 |
+
|
23 |
+
The discovery of X-rays stimulated a veritable sensation. Röntgen's biographer Otto Glasser estimated that, in 1896 alone, as many as 49 essays and 1044 articles about the new rays were published.[22] This was probably a conservative estimate, if one considers that nearly every paper around the world extensively reported about the new discovery, with a magazine such as Science dedicating as many as 23 articles to it in that year alone.[23] Sensationalist reactions to the new discovery included publications linking the new kind of rays to occult and paranormal theories, such as telepathy.[24][25]
|
24 |
+
|
25 |
+
Röntgen immediately noticed X-rays could have medical applications. Along with his 28 December Physical-Medical Society submission he sent a letter to physicians he knew around Europe (January 1, 1896).[26] News (and the creation of "shadowgrams") spread rapidly with Scottish electrical engineer Alan Archibald Campbell-Swinton being the first after Röntgen to create an X-ray (of a hand). Through February there were 46 experimenters taking up the technique in North America alone.[26]
|
26 |
+
|
27 |
+
The first use of X-rays under clinical conditions was by John Hall-Edwards in Birmingham, England on 11 January 1896, when he radiographed a needle stuck in the hand of an associate. On February 14, 1896 Hall-Edwards was also the first to use X-rays in a surgical operation.[27] In early 1896, several weeks after Röntgen's discovery, Ivan Romanovich Tarkhanov irradiated frogs and insects with X-rays, concluding that the rays "not only photograph, but also affect the living function".[28]
|
28 |
+
|
29 |
+
The first medical X-ray made in the United States was obtained using a discharge tube of Pului's design. In January 1896, on reading of Röntgen's discovery, Frank Austin of Dartmouth College tested all of the discharge tubes in the physics laboratory and found that only the Pului tube produced X-rays. This was a result of Pului's inclusion of an oblique "target" of mica, used for holding samples of fluorescent material, within the tube. On 3 February 1896 Gilman Frost, professor of medicine at the college, and his brother Edwin Frost, professor of physics, exposed the wrist of Eddie McCarthy, whom Gilman had treated some weeks earlier for a fracture, to the X-rays and collected the resulting image of the broken bone on gelatin photographic plates obtained from Howard Langill, a local photographer also interested in Röntgen's work.[29]
|
30 |
+
|
31 |
+
Many experimenters, including Röntgen himself in his original experiments, came up with methods to view X-ray images "live" using some form of luminescent screen.[26] Röntgen used a screen coated with barium platinocyanide. On February 5, 1896 live imaging devices were developed by both Italian scientist Enrico Salvioni (his "cryptoscope") and Professor McGie of Princeton University (his "Skiascope"), both using barium platinocyanide. American inventor Thomas Edison started research soon after Röntgen's discovery and investigated materials' ability to fluoresce when exposed to X-rays, finding that calcium tungstate was the most effective substance. In May 1896 he developed the first mass-produced live imaging device, his "Vitascope", later called the fluoroscope, which became the standard for medical X-ray examinations.[26] Edison dropped X-ray research around 1903, before the death of Clarence Madison Dally, one of his glassblowers. Dally had a habit of testing X-ray tubes on his own hands, developing a cancer in them so tenacious that both arms were amputated in a futile attempt to save his life; in 1904, he became the first known death attributed to X-ray exposure.[26] During the time the fluoroscope was being developed, Serbian American physicist Mihajlo Pupin, using a calcium tungstate screen developed by Edison, found that using a fluorescent screen decreased the exposure time it took to create an X-ray for medical imaging from an hour to a few minutes.[30][26]
|
32 |
+
|
33 |
+
In 1901, U.S. President William McKinley was shot twice in an assassination attempt. While one bullet only grazed his sternum, another had lodged somewhere deep inside his abdomen and could not be found. A worried McKinley aide sent word to inventor Thomas Edison to rush an X-ray machine to Buffalo to find the stray bullet. It arrived but was not used. While the shooting itself had not been lethal, gangrene had developed along the path of the bullet, and McKinley died of septic shock due to bacterial infection six days later.[31]
|
34 |
+
|
35 |
+
With the widespread experimentation with x‑rays after their discovery in 1895 by scientists, physicians, and inventors came many stories of burns, hair loss, and worse in technical journals of the time. In February 1896, Professor John Daniel and Dr. William Lofland Dudley of Vanderbilt University reported hair loss after Dr. Dudley was X-rayed. A child who had been shot in the head was brought to the Vanderbilt laboratory in 1896. Before trying to find the bullet an experiment was attempted, for which Dudley "with his characteristic devotion to science"[32][33][34] volunteered. Daniel reported that 21 days after taking a picture of Dudley's skull (with an exposure time of one hour), he noticed a bald spot 2 inches (5.1 cm) in diameter on the part of his head nearest the X-ray tube: "A plate holder with the plates towards the side of the skull was fastened and a coin placed between the skull and the head. The tube was fastened at the other side at a distance of one-half inch from the hair."[35]
|
36 |
+
|
37 |
+
In August 1896 Dr. H. D. Hawks, a graduate of Columbia College, suffered severe hand and chest burns from an X-ray demonstration. It was reported in Electrical Review and led to many other reports of problems associated with X-rays being sent in to the publication.[36] Many experimenters, including Elihu Thomson at Edison's lab, William J. Morton, and Nikola Tesla, also reported burns. Elihu Thomson deliberately exposed a finger to an X-ray tube over a period of time and suffered pain, swelling, and blistering.[37] Other effects were sometimes blamed for the damage, including ultraviolet rays and (according to Tesla) ozone.[38] Many physicians claimed there were no effects from X-ray exposure at all.[37]
|
38 |
+
On August 3, 1905 at San Francisco, California, Elizabeth Fleischman, American X-ray pioneer, died from complications as a result of her work with X-rays.[39][40][41]
|
39 |
+
|
40 |
+
The many applications of X-rays immediately generated enormous interest. Workshops began making specialized versions of Crookes tubes for generating X-rays and these first-generation cold cathode or Crookes X-ray tubes were used until about 1920.
|
41 |
+
|
42 |
+
A typical early 20th century medical x-ray system consisted of a Ruhmkorff coil connected to a cold cathode Crookes X-ray tube. A spark gap was typically connected to the high voltage side in parallel to the tube and used for diagnostic purposes.[42] The spark gap allowed detecting the polarity of the sparks, measuring voltage by the length of the sparks thus determining the "hardness" of the vacuum of the tube, and it provided a load in the event the X-ray tube was disconnected. To detect the hardness of the tube, the spark gap was initially opened to the widest setting. While the coil was operating, the operator reduced the gap until sparks began to appear. A tube in which the spark gap began to spark at around 2 1/2 inches was considered soft (low vacuum) and suitable for thin body parts such as hands and arms. A 5-inch spark indicated the tube was suitable for shoulders and knees. A 7-9 inch spark would indicate a higher vacuum suitable for imaging the abdomen of larger individuals. Since the spark gap was connected in parallel to the tube, the spark gap had to be opened until the sparking ceased in order to operate the tube for imaging. Exposure time for photographic plates was around half a minute for a hand to a couple of minutes for a thorax. The plates may have a small addition of fluorescent salt to reduce exposure times.[42]
|
43 |
+
|
44 |
+
Crookes tubes were unreliable. They had to contain a small quantity of gas (invariably air) as a current will not flow in such a tube if they are fully evacuated. However, as time passed, the X-rays caused the glass to absorb the gas, causing the tube to generate "harder" X-rays until it soon stopped operating. Larger and more frequently used tubes were provided with devices for restoring the air, known as "softeners". These often took the form of a small side tube which contained a small piece of mica, a mineral that traps relatively large quantities of air within its structure. A small electrical heater heated the mica, causing it to release a small amount of air, thus restoring the tube's efficiency. However, the mica had a limited life, and the restoration process was difficult to control.
|
45 |
+
|
46 |
+
In 1904, John Ambrose Fleming invented the thermionic diode, the first kind of vacuum tube. This used a hot cathode that caused an electric current to flow in a vacuum. This idea was quickly applied to X-ray tubes, and hence heated-cathode X-ray tubes, called "Coolidge tubes", completely replaced the troublesome cold cathode tubes by about 1920.
|
47 |
+
|
48 |
+
In about 1906, the physicist Charles Barkla discovered that X-rays could be scattered by gases, and that each element had a characteristic X-ray spectrum. He won the 1917 Nobel Prize in Physics for this discovery.
|
49 |
+
|
50 |
+
In 1912, Max von Laue, Paul Knipping, and Walter Friedrich first observed the diffraction of X-rays by crystals. This discovery, along with the early work of Paul Peter Ewald, William Henry Bragg, and William Lawrence Bragg, gave birth to the field of X-ray crystallography.
|
51 |
+
|
52 |
+
In 1913, Henry Moseley performed crystallography experiments with X-rays emanating from various metals and formulated Moseley's law which relates the frequency of the X-rays to the atomic number of the metal.
|
53 |
+
|
54 |
+
The Coolidge X-ray tube was invented the same year by William D. Coolidge. It made possible the continuous emission of X-rays. Modern X-ray tubes are based on this design, often employing rotating targets, which allow for significantly higher heat dissipation than static targets and thus higher X-ray output for use in high-powered applications such as rotational CT scanners.
|
55 |
+
|
56 |
+
The use of X-rays for medical purposes (which developed into the field of radiation therapy) was pioneered by Major John Hall-Edwards in Birmingham, England. In 1908, he had to have his left arm amputated because of the spread of X-ray dermatitis on the arm.[43]
|
57 |
+
|
58 |
+
In 1914 Marie Curie developed radiological cars to support soldiers injured in World War I. The cars would allow for rapid X-ray imaging of wounded soldiers so battlefield surgeons could quickly and more accurately operate.[44]
|
59 |
+
|
60 |
+
From the 1920s through to the 1950s, X-ray machines were developed to assist in the fitting of shoes and were sold to commercial shoe stores.[45][46][47] Concerns regarding the impact of frequent or poorly controlled use were expressed in the 1950s,[48][49] leading to the practice's eventual end that decade.[50]
|
61 |
+
|
62 |
+
The X-ray microscope was developed during the 1950s.
|
63 |
+
|
64 |
+
The Chandra X-ray Observatory, launched on July 23, 1999, has been allowing the exploration of the very violent processes in the universe which produce X-rays. Unlike visible light, which gives a relatively stable view of the universe, the X-ray universe is unstable. It features stars being torn apart by black holes, galactic collisions, and novae, and neutron stars that build up layers of plasma that then explode into space.
|
65 |
+
|
66 |
+
An X-ray laser device was proposed as part of the Reagan Administration's Strategic Defense Initiative in the 1980s, but the only test of the device (a sort of laser "blaster" or death ray, powered by a thermonuclear explosion) gave inconclusive results. For technical and political reasons, the overall project (including the X-ray laser) was de-funded (though was later revived by the second Bush Administration as National Missile Defense using different technologies).
|
67 |
+
|
68 |
+
Phase-contrast X-ray imaging refers to a variety of techniques that use phase information of a coherent X-ray beam to image soft tissues. It has become an important method for visualizing cellular and histological structures in a wide range of biological and medical studies. There are several technologies being used for X-ray phase-contrast imaging, all utilizing different principles to convert phase variations in the X-rays emerging from an object into intensity variations.[51][52] These include propagation-based phase contrast,[53] Talbot interferometry,[52] refraction-enhanced imaging,[54] and X-ray interferometry.[55] These methods provide higher contrast compared to normal absorption-contrast X-ray imaging, making it possible to see smaller details. A disadvantage is that these methods require more sophisticated equipment, such as synchrotron or microfocus X-ray sources, X-ray optics, and high resolution X-ray detectors.
|
69 |
+
|
70 |
+
X-rays with high photon energies (above 5–10 keV, below 0.2–0.1 nm wavelength) are called hard X-rays, while those with lower energy (and longer wavelength) are called soft X-rays.[56] Due to their penetrating ability, hard X-rays are widely used to image the inside of objects, e.g., in medical radiography and airport security. The term X-ray is metonymically used to refer to a radiographic image produced using this method, in addition to the method itself. Since the wavelengths of hard X-rays are similar to the size of atoms, they are also useful for determining crystal structures by X-ray crystallography. By contrast, soft X-rays are easily absorbed in air; the attenuation length of 600 eV (~2 nm) X-rays in water is less than 1 micrometer.[57]
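A minimal sketch of what the quoted attenuation length implies, assuming simple Beer–Lambert (exponential) attenuation; the ~1 micrometre figure is taken from the sentence above, while the sample depths are arbitrary illustrative values.

import math

# Sketch of exponential (Beer-Lambert) attenuation, I = I0 * exp(-x / L),
# using the ~1 micrometre attenuation length quoted above for 600 eV X-rays in water.
L = 1e-6          # attenuation length in metres (approximate value from the text)
I0 = 1.0          # incident intensity (arbitrary units)

for x_um in (0.5, 1.0, 5.0, 10.0):
    x = x_um * 1e-6
    I = I0 * math.exp(-x / L)
    print(f"{x_um:5.1f} um of water -> transmitted fraction {I:.4f}")
# After ~10 um of water essentially nothing is transmitted, which is why
# soft X-rays are unusable for imaging through bulk tissue or long air paths.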
|
71 |
+
|
72 |
+
There is no consensus for a definition distinguishing between X-rays and gamma rays. One common practice is to distinguish between the two types of radiation based on their source: X-rays are emitted by electrons, while gamma rays are emitted by the atomic nucleus.[58][59][60][61] This definition has several problems: other processes also can generate these high-energy photons, or sometimes the method of generation is not known. One common alternative is to distinguish X- and gamma radiation on the basis of wavelength (or, equivalently, frequency or photon energy), with radiation shorter than some arbitrary wavelength, such as 10−11 m (0.1 Å), defined as gamma radiation.[62]
|
73 |
+
This criterion assigns a photon to an unambiguous category, but is only possible if wavelength is known. (Some measurement techniques do not distinguish between detected wavelengths.) However, these two definitions often coincide since the electromagnetic radiation emitted by X-ray tubes generally has a longer wavelength and lower photon energy than the radiation emitted by radioactive nuclei.[58]
|
74 |
+
Occasionally, one term or the other is used in specific contexts due to historical precedent, based on measurement (detection) technique, or based on their intended use rather than their wavelength or source.
|
75 |
+
Thus, gamma-rays generated for medical and industrial uses, for example radiotherapy, in the ranges of 6–20 MeV, can in this context also be referred to as X-rays.[63]
|
76 |
+
|
77 |
+
X-ray photons carry enough energy to ionize atoms and disrupt molecular bonds. This makes it a type of ionizing radiation, and therefore harmful to living tissue. A very high radiation dose over a short period of time causes radiation sickness, while lower doses can give an increased risk of radiation-induced cancer. In medical imaging this increased cancer risk is generally greatly outweighed by the benefits of the examination. The ionizing capability of X-rays can be utilized in cancer treatment to kill malignant cells using radiation therapy. It is also used for material characterization using X-ray spectroscopy.
|
78 |
+
|
79 |
+
Hard X-rays can traverse relatively thick objects without being much absorbed or scattered. For this reason, X-rays are widely used to image the inside of visually opaque objects. The most often seen applications are in medical radiography and airport security scanners, but similar techniques are also important in industry (e.g. industrial radiography and industrial CT scanning) and research (e.g. small animal CT). The penetration depth varies with several orders of magnitude over the X-ray spectrum. This allows the photon energy to be adjusted for the application so as to give sufficient transmission through the object and at the same time provide good contrast in the image.
|
80 |
+
|
81 |
+
X-rays have much shorter wavelengths than visible light, which makes it possible to probe structures much smaller than can be seen using a normal microscope. This property is used in X-ray microscopy to acquire high resolution images, and also in X-ray crystallography to determine the positions of atoms in crystals.
|
82 |
+
|
83 |
+
X-rays interact with matter in three main ways, through photoabsorption, Compton scattering, and Rayleigh scattering. The strength of these interactions depends on the energy of the X-rays and the elemental composition of the material, but not much on chemical properties, since the X-ray photon energy is much higher than chemical binding energies. Photoabsorption or photoelectric absorption is the dominant interaction mechanism in the soft X-ray regime and for the lower hard X-ray energies. At higher energies, Compton scattering dominates.
|
84 |
+
|
85 |
+
The probability of photoelectric absorption per unit mass is approximately proportional to Z^3/E^3, where Z is the atomic number and E is the energy of the incident photon.[64] This rule is not valid close to inner shell electron binding energies, where there are abrupt changes in interaction probability, so-called absorption edges. However, the general trend of high absorption coefficients and thus short penetration depths for low photon energies and high atomic numbers is very strong. For soft tissue, photoabsorption dominates up to about 26 keV photon energy, where Compton scattering takes over. For higher atomic number substances this limit is higher. The high amount of calcium (Z = 20) in bones, together with their high density, is what makes them show up so clearly on medical radiographs.
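The following sketch only exercises the approximate Z^3/E^3 proportionality stated above to compare calcium with oxygen (a rough stand-in for soft tissue); it uses no tabulated cross-section data, so the printed numbers are relative ratios for intuition rather than physical attenuation coefficients.

# Rough illustration of the Z^3/E^3 scaling of photoelectric absorption stated above.
def photoabsorption_scale(Z, E_keV):
    # relative (unnormalized) photoabsorption strength per atom
    return Z**3 / E_keV**3

Z_calcium, Z_oxygen = 20, 8     # calcium (bone mineral) vs oxygen (proxy for soft tissue)
for E in (30.0, 60.0, 100.0):   # typical diagnostic photon energies in keV
    ratio = photoabsorption_scale(Z_calcium, E) / photoabsorption_scale(Z_oxygen, E)
    print(f"{E:5.1f} keV: per-atom photoabsorption ratio Ca/O ~ {ratio:.1f}")
# The ratio (~15.6) is independent of E because both scale as 1/E^3; what changes
# with energy is the competition with Compton scattering, as described above.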
|
86 |
+
|
87 |
+
A photoabsorbed photon transfers all its energy to the electron with which it interacts, thus ionizing the atom to which the electron was bound and producing a photoelectron that is likely to ionize more atoms in its path. An outer electron will fill the vacant electron position and produce either a characteristic X-ray or an Auger electron. These effects can be used for elemental detection through X-ray spectroscopy or Auger electron spectroscopy.
|
88 |
+
|
89 |
+
Compton scattering is the predominant interaction between X-rays and soft tissue in medical imaging.[65] Compton scattering is an inelastic scattering of the X-ray photon by an outer shell electron. Part of the energy of the photon is transferred to the scattering electron, thereby ionizing the atom and increasing the wavelength of the X-ray. The scattered photon can go in any direction, but a direction similar to the original direction is more likely, especially for high-energy X-rays. The probabilities for different scattering angles are described by the Klein–Nishina formula. The transferred energy can be obtained directly from the scattering angle from the conservation of energy and momentum.
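Since the energy given to the electron follows from conservation of energy and momentum, it can be computed directly from the scattering angle. The short sketch below uses the standard Compton relation (with the electron rest energy of 511 keV) and a 100 keV photon as an arbitrary example; it illustrates the kinematics only, not the Klein–Nishina angular distribution.

import math

# Energy transfer in Compton scattering as a function of scattering angle,
# from conservation of energy and momentum (standard kinematic result).
MEC2 = 511.0  # electron rest energy in keV

def compton(E_keV, theta_deg):
    """Return (scattered photon energy, energy given to the electron) in keV."""
    theta = math.radians(theta_deg)
    E_scattered = E_keV / (1.0 + (E_keV / MEC2) * (1.0 - math.cos(theta)))
    return E_scattered, E_keV - E_scattered

for angle in (0, 45, 90, 180):
    E_out, E_e = compton(100.0, angle)   # 100 keV incident photon
    print(f"theta = {angle:3d} deg: scattered {E_out:5.1f} keV, electron {E_e:4.1f} keV")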
|
90 |
+
|
91 |
+
Rayleigh scattering is the dominant elastic scattering mechanism in the X-ray regime.[66] Inelastic forward scattering gives rise to the refractive index, which for X-rays is only slightly below 1.[67]
|
92 |
+
|
93 |
+
Whenever charged particles (electrons or ions) of sufficient energy hit a material, X-rays are produced.
|
94 |
+
|
95 |
+
X-rays can be generated by an X-ray tube, a vacuum tube that uses a high voltage to accelerate the electrons released by a hot cathode to a high velocity. The high velocity electrons collide with a metal target, the anode, creating the X-rays.[70] In medical X-ray tubes the target is usually tungsten or a more crack-resistant alloy of rhenium (5%) and tungsten (95%), but sometimes molybdenum for more specialized applications, such as when softer X-rays are needed as in mammography. In crystallography, a copper target is most common, with cobalt often being used when fluorescence from iron content in the sample might otherwise present a problem.
|
96 |
+
|
97 |
+
The maximum energy of the produced X-ray photon is limited by the energy of the incident electron, which is equal to the voltage on the tube times the electron charge, so an 80 kV tube cannot create X-rays with an energy greater than 80 keV. When the electrons hit the target, X-rays are created by two different atomic processes: characteristic X-ray emission, which produces sharp spectral lines at energies determined by the target element, and bremsstrahlung, the continuous radiation emitted as the electrons are decelerated and deflected in the target.
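A small sketch of this limit (the Duane–Hunt relation between tube voltage, maximum photon energy, and shortest emitted wavelength); the specific voltages are illustrative, chosen to match the diagnostic range discussed in the next paragraph.

# The hard upper limit on photon energy from an X-ray tube equals the electron
# energy gained from the accelerating voltage (the Duane-Hunt limit).
HC_OVER_E = 1239.84  # hc/e in eV*nm, i.e. E[eV] = 1239.84 / wavelength[nm]

def duane_hunt(tube_kV):
    """Return (maximum photon energy in keV, minimum emitted wavelength in pm)."""
    E_max_keV = tube_kV                                        # 1 kV of acceleration -> 1 keV per electron
    lambda_min_pm = HC_OVER_E / (tube_kV * 1000.0) * 1000.0    # convert nm to pm
    return E_max_keV, lambda_min_pm

for kV in (20, 80, 150):   # roughly the diagnostic range
    E, lam = duane_hunt(kV)
    print(f"{kV:3d} kV tube: max photon energy {E:3d} keV, shortest wavelength {lam:5.1f} pm")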
|
98 |
+
|
99 |
+
So, the resulting output of a tube consists of a continuous bremsstrahlung spectrum falling off to zero at the tube voltage, plus several spikes at the characteristic lines. The voltages used in diagnostic X-ray tubes range from roughly 20 kV to 150 kV and thus the highest energies of the X-ray photons range from roughly 20 keV to 150 keV.[71]
|
100 |
+
|
101 |
+
Both of these X-ray production processes are inefficient, with only about one percent of the electrical energy used by the tube converted into X-rays, and thus most of the electric power consumed by the tube is released as waste heat. When producing a usable flux of X-rays, the X-ray tube must be designed to dissipate the excess heat.
|
102 |
+
|
103 |
+
A specialized source of X-rays which is becoming widely used in research is synchrotron radiation, which is generated by particle accelerators. Its unique features are X-ray outputs many orders of magnitude greater than those of X-ray tubes, wide X-ray spectra, excellent collimation, and linear polarization.[72]
|
104 |
+
|
105 |
+
Short nanosecond bursts of X-rays peaking at 15-keV in energy may be reliably produced by peeling pressure-sensitive adhesive tape from its backing in a moderate vacuum. This is likely to be the result of recombination of electrical charges produced by triboelectric charging. The intensity of X-ray triboluminescence is sufficient for it to be used as a source for X-ray imaging.[73]
|
106 |
+
|
107 |
+
X-rays can also be produced by fast protons or other positive ions. The proton-induced X-ray emission or particle-induced X-ray emission is widely used as an analytical procedure. For high energies, the production cross section is proportional to Z₁²Z₂⁻⁴, where Z₁ refers to the atomic number of the ion and Z₂ refers to that of the target atom.[74] An overview of these cross sections is given in the same reference.
|
108 |
+
|
109 |
+
X-rays are also produced in lightning accompanying terrestrial gamma-ray flashes. The underlying mechanism is the acceleration of electrons in lightning-related electric fields and the subsequent production of photons through bremsstrahlung.[75] This produces photons with energies from a few keV up to several tens of MeV.[76] In laboratory discharges with a gap size of approximately 1 meter length and a peak voltage of 1 MV, X-rays with a characteristic energy of 160 keV are observed.[77] A possible explanation is the encounter of two streamers and the production of high-energy run-away electrons;[78] however, microscopic simulations have shown that the duration of electric field enhancement between two streamers is too short to produce a significant number of run-away electrons.[79] Recently, it has been proposed that air perturbations in the vicinity of streamers can facilitate the production of run-away electrons and hence of X-rays from discharges.[80][81]
|
110 |
+
|
111 |
+
X-ray detectors vary in shape and function depending on their purpose. Imaging detectors such as those used for radiography were originally based on photographic plates and later photographic film, but are now mostly replaced by various digital detector types such as image plates and flat panel detectors. For radiation protection direct exposure hazard is often evaluated using ionization chambers, while dosimeters are used to measure the radiation dose a person has been exposed to. X-ray spectra can be measured either by energy dispersive or wavelength dispersive spectrometers. For x-ray diffraction applications, such as x-ray crystallography, hybrid photon counting detectors are widely used.[82]
|
112 |
+
|
113 |
+
Since Röntgen's discovery that X-rays can identify bone structures, X-rays have been used for medical imaging.[83] The first medical use was less than a month after his paper on the subject.[29] Up to 2010, five billion medical imaging examinations had been conducted worldwide.[84] Radiation exposure from medical imaging in 2006 made up about 50% of total ionizing radiation exposure in the United States.[85]
|
114 |
+
|
115 |
+
Projectional radiography is the practice of producing two-dimensional images using x-ray radiation. Bones contain much calcium, which due to its relatively high atomic number absorbs x-rays efficiently. This reduces the amount of X-rays reaching the detector in the shadow of the bones, making them clearly visible on the radiograph. The lungs and trapped gas also show up clearly because of lower absorption compared to tissue, while differences between tissue types are harder to see.
|
116 |
+
|
117 |
+
Projectional radiographs are useful in the detection of pathology of the skeletal system as well as for detecting some disease processes in soft tissue. Some notable examples are the very common chest X-ray, which can be used to identify lung diseases such as pneumonia, lung cancer, or pulmonary edema, and the abdominal x-ray, which can detect bowel (or intestinal) obstruction, free air (from visceral perforations) and free fluid (in ascites). X-rays may also be used to detect pathology such as gallstones (which are rarely radiopaque) or kidney stones which are often (but not always) visible. Traditional plain X-rays are less useful in the imaging of soft tissues such as the brain or muscle. One area where projectional radiographs are used extensively is in evaluating how an orthopedic implant, such as a knee, hip or shoulder replacement, is situated in the body with respect to the surrounding bone. This can be assessed in two dimensions from plain radiographs, or it can be assessed in three dimensions if a technique called '2D to 3D registration' is used. This technique purportedly negates projection errors associated with evaluating implant position from plain radiographs.[86][87]
|
118 |
+
|
119 |
+
Dental radiography is commonly used in the diagnoses of common oral problems, such as cavities.
|
120 |
+
|
121 |
+
In medical diagnostic applications, the low energy (soft) X-rays are unwanted, since they are totally absorbed by the body, increasing the radiation dose without contributing to the image. Hence, a thin metal sheet, often of aluminium, called an X-ray filter, is usually placed over the window of the X-ray tube, absorbing the low energy part in the spectrum. This is called hardening the beam since it shifts the center of the spectrum towards higher energy (or harder) x-rays.
|
122 |
+
|
123 |
+
To generate an image of the cardiovascular system, including the arteries and veins (angiography) an initial image is taken of the anatomical region of interest. A second image is then taken of the same region after an iodinated contrast agent has been injected into the blood vessels within this area. These two images are then digitally subtracted, leaving an image of only the iodinated contrast outlining the blood vessels. The radiologist or surgeon then compares the image obtained to normal anatomical images to determine whether there is any damage or blockage of the vessel.
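A minimal sketch of the subtraction step described above, using small synthetic NumPy arrays in place of real detector frames; the array values and the variable names are invented for illustration only.

import numpy as np

# Minimal sketch of digital subtraction angiography (DSA): subtract the pre-contrast
# "mask" image from the post-contrast image so that only the iodinated vessels remain.
rng = np.random.default_rng(0)

mask = 100 + 5 * rng.standard_normal((4, 4))        # anatomy plus noise, no contrast agent
vessel = np.zeros((4, 4)); vessel[1:3, 2] = 40      # extra attenuation where iodine flows
contrast_frame = mask + vessel + 5 * rng.standard_normal((4, 4))

dsa = contrast_frame - mask                          # static anatomy cancels out
print(np.round(dsa, 1))                              # mostly noise except along the vessel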
|
124 |
+
|
125 |
+
Computed tomography (CT scanning) is a medical imaging modality where tomographic images or slices of specific areas of the body are obtained from a large series of two-dimensional X-ray images taken in different directions.[88] These cross-sectional images can be combined into a three-dimensional image of the inside of the body and used for diagnostic and therapeutic purposes in various medical disciplines....
|
126 |
+
|
127 |
+
Fluoroscopy is an imaging technique commonly used by physicians or radiation therapists to obtain real-time moving images of the internal structures of a patient through the use of a fluoroscope. In its simplest form, a fluoroscope consists of an X-ray source and a fluorescent screen, between which a patient is placed. However, modern fluoroscopes couple the screen to an X-ray image intensifier and CCD video camera allowing the images to be recorded and played on a monitor. This method may use a contrast material. Examples include cardiac catheterization (to examine for coronary artery blockages) and barium swallow (to examine for esophageal disorders and swallowing disorders).
|
128 |
+
|
129 |
+
The use of X-rays as a treatment is known as radiation therapy and is largely used for the management (including palliation) of cancer; it requires higher radiation doses than those received for imaging alone. Lower-energy X-ray beams are used for treating skin cancers, while higher-energy beams are used for treating cancers within the body such as brain, lung, prostate, and breast cancers.[89][90]
|
130 |
+
|
131 |
+
Diagnostic X-rays (primarily from CT scans due to the large dose used) increase the risk of developmental problems and cancer in those exposed.[91][92][93] X-rays are classified as a carcinogen by both the World Health Organization's International Agency for Research on Cancer and the U.S. government.[84][94] It is estimated that 0.4% of current cancers in the United States are due to computed tomography (CT scans) performed in the past and that this may increase to as high as 1.5-2% with 2007 rates of CT usage.[95]
|
132 |
+
|
133 |
+
Experimental and epidemiological data currently do not support the proposition that there is a threshold dose of radiation below which there is no increased risk of cancer.[96] However, this is under increasing doubt.[97] It is estimated that the additional radiation from diagnostic X-rays will increase the average person's cumulative risk of getting cancer by age 75 by 0.6–3.0%.[98] The amount of absorbed radiation depends upon the type of X-ray test and the body part involved.[99] CT and fluoroscopy entail higher doses of radiation than do plain X-rays.
|
134 |
+
|
135 |
+
To place the increased risk in perspective, a plain chest X-ray exposes a person to roughly the same amount of radiation as 10 days of natural background radiation (the exact figure depends on location), while a dental X-ray is approximately equivalent to 1 day of environmental background radiation.[100] Each such X-ray would add less than 1 per 1,000,000 to the lifetime cancer risk. An abdominal or chest CT would be equivalent to 2–3 years of background radiation to the whole body, or 4–5 years to the abdomen or chest, increasing the lifetime cancer risk by between 1 per 1,000 and 1 per 10,000.[100] This is compared to the roughly 40% chance of a US citizen developing cancer during their lifetime.[101] For instance, the effective dose to the torso from a CT scan of the chest is about 5 mSv, and the absorbed dose is about 14 mGy.[102] A head CT scan (1.5 mSv, 64 mGy)[103] that is performed once with and once without contrast agent would be equivalent to 40 years of background radiation to the head. Accurate estimation of effective doses due to CT is difficult, with an estimation uncertainty range of about ±19% to ±32% for adult head scans, depending upon the method used.[104]
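The "days of background radiation" comparisons above amount to a simple division; the sketch below makes that arithmetic explicit, assuming roughly 3 mSv of natural background per year and typical per-examination effective doses. Both are illustrative round numbers that vary by location and equipment, not values taken from this article's references.

# Back-of-the-envelope conversion of examination doses into days of natural background.
BACKGROUND_MSV_PER_DAY = 3.0 / 365.0   # assuming ~3 mSv of natural background per year

exam_doses_msv = {
    "dental X-ray": 0.005,   # illustrative typical value
    "chest X-ray": 0.1,      # illustrative typical value
    "chest CT": 5.0,         # matches the ~5 mSv effective dose quoted above
}

for exam, dose in exam_doses_msv.items():
    days = dose / BACKGROUND_MSV_PER_DAY
    print(f"{exam:>12}: {dose:5.3f} mSv ~ {days:7.1f} days of background")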
|
136 |
+
|
137 |
+
The risk of radiation is greater to a fetus, so in pregnant patients, the benefits of the investigation (X-ray) should be balanced with the potential hazards to the fetus.[105][106] In the US, there are an estimated 62 million CT scans performed annually, including more than 4 million on children.[99] Avoiding unnecessary X-rays (especially CT scans) reduces radiation dose and any associated cancer risk.[107]
|
138 |
+
|
139 |
+
Medical X-rays are a significant source of man-made radiation exposure. In 1987, they accounted for 58% of exposure from man-made sources in the United States. Since man-made sources accounted for only 18% of the total radiation exposure, most of which came from natural sources (82%), medical X-rays only accounted for 10% of total American radiation exposure; medical procedures as a whole (including nuclear medicine) accounted for 14% of total radiation exposure. By 2006, however, medical procedures in the United States were contributing much more ionizing radiation than was the case in the early 1980s. In 2006, medical exposure constituted nearly half of the total radiation exposure of the U.S. population from all sources. The increase is traceable to the growth in the use of medical imaging procedures, in particular computed tomography (CT), and to the growth in the use of nuclear medicine.[85][108]
|
140 |
+
|
141 |
+
Dosage due to dental X-rays varies significantly depending on the procedure and the technology (film or digital). Depending on the procedure and the technology, a single dental X-ray of a human results in an exposure of 0.5 to 4 mrem. A full mouth series of X-rays may result in an exposure of up to 6 (digital) to 18 (film) mrem, for a yearly average of up to 40 mrem.[109][110][111][112][113][114][115]
|
142 |
+
|
143 |
+
Financial incentives have been shown to have a significant impact on X-ray use with doctors who are paid a separate fee for each X-ray providing more X-rays.[116]
|
144 |
+
|
145 |
+
Early photon tomography or EPT[117] (as of 2015) along with other techniques[118] are being researched as potential alternatives to X-rays for imaging applications.
|
146 |
+
|
147 |
+
Other notable uses of X-rays include:
|
148 |
+
|
149 |
+
While generally considered invisible to the human eye, in special circumstances X-rays can be visible. Brandes, in an experiment a short time after Röntgen's landmark 1895 paper, reported after dark adaptation and placing his eye close to an X-ray tube, seeing a faint "blue-gray" glow which seemed to originate within the eye itself.[124] Upon hearing this, Röntgen reviewed his record books and found he too had seen the effect. When placing an X-ray tube on the opposite side of a wooden door Röntgen had noted the same blue glow, seeming to emanate from the eye itself, but thought his observations to be spurious because he only saw the effect when he used one type of tube. Later he realized that the tube which had created the effect was the only one powerful enough to make the glow plainly visible and the experiment was thereafter readily repeatable. The knowledge that X-rays are actually faintly visible to the dark-adapted naked eye has largely been forgotten today; this is probably due to the desire not to repeat what would now be seen as a recklessly dangerous and potentially harmful experiment with ionizing radiation. It is not known what exact mechanism in the eye produces the visibility: it could be due to conventional detection (excitation of rhodopsin molecules in the retina), direct excitation of retinal nerve cells, or secondary detection via, for instance, X-ray induction of phosphorescence in the eyeball with conventional retinal detection of the secondarily produced visible light.
|
150 |
+
|
151 |
+
Though X-rays are otherwise invisible, it is possible to see the ionization of the air molecules if the intensity of the X-ray beam is high enough. The beamline from the wiggler at the ID11 at the European Synchrotron Radiation Facility is one example of such high intensity.[125]
|
152 |
+
|
153 |
+
The measure of X-rays' ionizing ability is called the exposure:
|
154 |
+
|
155 |
+
However, the effect of ionizing radiation on matter (especially living tissue) is more closely related to the amount of energy deposited into them rather than the charge generated. This measure of energy absorbed is called the absorbed dose:
|
156 |
+
|
157 |
+
The equivalent dose is the measure of the biological effect of radiation on human tissue. For X-rays it is equal to the absorbed dose.
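Stated compactly, using the standard radiation-weighting-factor convention (not spelled out in the article): the equivalent dose H is the absorbed dose D scaled by the weighting factor w_R of the radiation type, and w_R = 1 for photons, which is why the two are numerically equal for X-rays:

H = w_R \, D, \qquad w_R = 1 \ \text{for X-rays and gamma rays} \ \Rightarrow \ H\,[\mathrm{Sv}] = D\,[\mathrm{Gy}]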
|
en/4935.html.txt
ADDED
@@ -0,0 +1,157 @@
1 |
+
An X-ray, or X-radiation, is a penetrating form of high-energy electromagnetic radiation. Most X-rays have a wavelength ranging from 10 picometres to 10 nanometres, corresponding to frequencies in the range 30 petahertz to 30 exahertz (3×10^16 Hz to 3×10^19 Hz) and energies in the range 124 eV to 124 keV. X-ray wavelengths are shorter than those of UV rays and typically longer than those of gamma rays. In many languages, X-radiation is referred to as Röntgen radiation, after the German scientist Wilhelm Röntgen, who discovered it on November 8, 1895.[1] He named it X-radiation to signify an unknown type of radiation.[2] Spellings of X-ray(s) in English include the variants x-ray(s), xray(s), and X ray(s).[3]
|
2 |
+
|
3 |
+
Before their discovery in 1895, X-rays were just a type of unidentified radiation emanating from experimental discharge tubes. They were noticed by scientists investigating cathode rays produced by such tubes, which are energetic electron beams that were first observed in 1869. Many of the early Crookes tubes (invented around 1875) undoubtedly radiated X-rays, because early researchers noticed effects that were attributable to them, as detailed below. Crookes tubes created free electrons by ionization of the residual air in the tube by a high DC voltage of anywhere between a few kilovolts and 100 kV. This voltage accelerated the electrons coming from the cathode to a high enough velocity that they created X-rays when they struck the anode or the glass wall of the tube.[4]
|
4 |
+
|
5 |
+
The earliest experimenter thought to have (unknowingly) produced X-rays was actuary William Morgan. In 1785 he presented a paper to the Royal Society of London describing the effects of passing electrical currents through a partially evacuated glass tube, producing a glow created by X-rays.[5][6] This work was further explored by Humphry Davy and his assistant Michael Faraday.
|
6 |
+
|
7 |
+
When Stanford University physics professor Fernando Sanford created his "electric photography" he also unknowingly generated and detected X-rays. From 1886 to 1888 he had studied in the Hermann Helmholtz laboratory in Berlin, where he became familiar with the cathode rays generated in vacuum tubes when a voltage was applied across separate electrodes, as previously studied by Heinrich Hertz and Philipp Lenard. His letter of January 6, 1893 (describing his discovery as "electric photography") to The Physical Review was duly published and an article entitled Without Lens or Light, Photographs Taken With Plate and Object in Darkness appeared in the San Francisco Examiner.[7]
|
8 |
+
|
9 |
+
Starting in 1888, Philipp Lenard conducted experiments to see whether cathode rays could pass out of the Crookes tube into the air. He built a Crookes tube with a "window" in the end made of thin aluminum, facing the cathode so the cathode rays would strike it (later called a "Lenard tube"). He found that something came through, that would expose photographic plates and cause fluorescence. He measured the penetrating power of these rays through various materials. It has been suggested that at least some of these "Lenard rays" were actually X-rays.[8]
|
10 |
+
|
11 |
+
In 1889 Ukrainian-born Ivan Puluj, a lecturer in experimental physics at the Prague Polytechnic who since 1877 had been constructing various designs of gas-filled tubes to investigate their properties, published a paper on how sealed photographic plates became dark when exposed to the emanations from the tubes.[9]
|
12 |
+
|
13 |
+
Hermann von Helmholtz formulated mathematical equations for X-rays. He postulated a dispersion theory before Röntgen made his discovery and announcement. It was formed on the basis of the electromagnetic theory of light.[10] However, he did not work with actual X-rays.
|
14 |
+
|
15 |
+
In 1894 Nikola Tesla noticed damaged film in his lab that seemed to be associated with Crookes tube experiments and began investigating this radiant energy of "invisible" kinds.[11][12] After Röntgen identified the X-ray, Tesla began making X-ray images of his own using high voltages and tubes of his own design,[13] as well as Crookes tubes.
|
16 |
+
|
17 |
+
On November 8, 1895, German physics professor Wilhelm Röntgen stumbled on X-rays while experimenting with Lenard tubes and Crookes tubes and began studying them. He wrote an initial report "On a new kind of ray: A preliminary communication" and on December 28, 1895 submitted it to Würzburg's Physical-Medical Society journal.[14] This was the first paper written on X-rays. Röntgen referred to the radiation as "X", to indicate that it was an unknown type of radiation. The name stuck, although (over Röntgen's great objections) many of his colleagues suggested calling them Röntgen rays. They are still referred to as such in many languages, including German, Hungarian, Ukrainian, Danish, Polish, Bulgarian, Swedish, Finnish, Estonian, Russian, Latvian, Japanese, Dutch, Georgian, Hebrew and Norwegian. Röntgen received the first Nobel Prize in Physics for his discovery.[15]
|
18 |
+
|
19 |
+
There are conflicting accounts of his discovery because Röntgen had his lab notes burned after his death, but this is a likely reconstruction by his biographers:[16][17] Röntgen was investigating cathode rays from a Crookes tube which he had wrapped in black cardboard so that the visible light from the tube would not interfere, using a fluorescent screen painted with barium platinocyanide. He noticed a faint green glow from the screen, about 1 meter away. Röntgen realized some invisible rays coming from the tube were passing through the cardboard to make the screen glow. He found they could also pass through books and papers on his desk. Röntgen threw himself into investigating these unknown rays systematically. Two months after his initial discovery, he published his paper.[18]
|
20 |
+
|
21 |
+
Röntgen discovered their medical use when he made a picture of his wife's hand on a photographic plate formed due to X-rays. The photograph of his wife's hand was the first photograph of a human body part using X-rays. When she saw the picture, she said "I have seen my death."[21]
|
22 |
+
|
23 |
+
The discovery of X-rays stimulated a veritable sensation. Röntgen's biographer Otto Glasser estimated that, in 1896 alone, as many as 49 essays and 1044 articles about the new rays were published.[22] This was probably a conservative estimate, if one considers that nearly every paper around the world extensively reported about the new discovery, with a magazine such as Science dedicating as many as 23 articles to it in that year alone.[23] Sensationalist reactions to the new discovery included publications linking the new kind of rays to occult and paranormal theories, such as telepathy.[24][25]
|
24 |
+
|
25 |
+
Röntgen immediately noticed X-rays could have medical applications. Along with his 28 December Physical-Medical Society submission he sent a letter to physicians he knew around Europe (January 1, 1896).[26] News (and the creation of "shadowgrams") spread rapidly with Scottish electrical engineer Alan Archibald Campbell-Swinton being the first after Röntgen to create an X-ray (of a hand). Through February there were 46 experimenters taking up the technique in North America alone.[26]
|
26 |
+
|
27 |
+
The first use of X-rays under clinical conditions was by John Hall-Edwards in Birmingham, England on 11 January 1896, when he radiographed a needle stuck in the hand of an associate. On February 14, 1896 Hall-Edwards was also the first to use X-rays in a surgical operation.[27] In early 1896, several weeks after Röntgen's discovery, Ivan Romanovich Tarkhanov irradiated frogs and insects with X-rays, concluding that the rays "not only photograph, but also affect the living function".[28]
|
28 |
+
|
29 |
+
The first medical X-ray made in the United States was obtained using a discharge tube of Pului's design. In January 1896, on reading of Röntgen's discovery, Frank Austin of Dartmouth College tested all of the discharge tubes in the physics laboratory and found that only the Pului tube produced X-rays. This was a result of Pului's inclusion of an oblique "target" of mica, used for holding samples of fluorescent material, within the tube. On 3 February 1896 Gilman Frost, professor of medicine at the college, and his brother Edwin Frost, professor of physics, exposed the wrist of Eddie McCarthy, whom Gilman had treated some weeks earlier for a fracture, to the X-rays and collected the resulting image of the broken bone on gelatin photographic plates obtained from Howard Langill, a local photographer also interested in Röntgen's work.[29]
Many experimenters, including Röntgen himself in his original experiments, came up with methods to view X-ray images "live" using some form of luminescent screen.[26] Röntgen used a screen coated with barium platinocyanide. On February 5, 1896 live imaging devices were developed by both Italian scientist Enrico Salvioni (his "cryptoscope") and Professor McGie of Princeton University (his "Skiascope"), both using barium platinocyanide. American inventor Thomas Edison started research soon after Röntgen's discovery and investigated materials' ability to fluoresce when exposed to X-rays, finding that calcium tungstate was the most effective substance. In May 1896 he developed the first mass-produced live imaging device, his "Vitascope", later called the fluoroscope, which became the standard for medical X-ray examinations.[26] Edison dropped X-ray research around 1903, before the death of Clarence Madison Dally, one of his glassblowers. Dally had a habit of testing X-ray tubes on his own hands, developing a cancer in them so tenacious that both arms were amputated in a futile attempt to save his life; in 1904, he became the first known death attributed to X-ray exposure.[26] During the time the fluoroscope was being developed, Serbian American physicist Mihajlo Pupin, using a calcium tungstate screen developed by Edison, found that using a fluorescent screen decreased the exposure time it took to create an X-ray for medical imaging from an hour to a few minutes.[30][26]
In 1901, U.S. President William McKinley was shot twice in an assassination attempt. While one bullet only grazed his sternum, another had lodged somewhere deep inside his abdomen and could not be found. A worried McKinley aide sent word to inventor Thomas Edison to rush an X-ray machine to Buffalo to find the stray bullet. It arrived but was not used. While the shooting itself had not been lethal, gangrene had developed along the path of the bullet, and McKinley died of septic shock due to bacterial infection six days later.[31]
With the widespread experimentation with x‑rays after their discovery in 1895 by scientists, physicians, and inventors came many stories of burns, hair loss, and worse in technical journals of the time. In February 1896, Professor John Daniel and Dr. William Lofland Dudley of Vanderbilt University reported hair loss after Dr. Dudley was X-rayed. A child who had been shot in the head was brought to the Vanderbilt laboratory in 1896. Before trying to find the bullet an experiment was attempted, for which Dudley "with his characteristic devotion to science"[32][33][34] volunteered. Daniel reported that 21 days after taking a picture of Dudley's skull (with an exposure time of one hour), he noticed a bald spot 2 inches (5.1 cm) in diameter on the part of his head nearest the X-ray tube: "A plate holder with the plates towards the side of the skull was fastened and a coin placed between the skull and the head. The tube was fastened at the other side at a distance of one-half inch from the hair."[35]
In August 1896 Dr. H. D. Hawks, a graduate of Columbia College, suffered severe hand and chest burns from an X-ray demonstration. It was reported in Electrical Review and led to many other reports of problems associated with X-rays being sent in to the publication.[36] Many experimenters, including Elihu Thomson at Edison's lab, William J. Morton, and Nikola Tesla, also reported burns. Elihu Thomson deliberately exposed a finger to an X-ray tube over a period of time and suffered pain, swelling, and blistering.[37] Other effects were sometimes blamed for the damage, including ultraviolet rays and (according to Tesla) ozone.[38] Many physicians claimed there were no effects from X-ray exposure at all.[37]
On August 3, 1905 at San Francisco, California, Elizabeth Fleischman, American X-ray pioneer, died from complications as a result of her work with X-rays.[39][40][41]
The many applications of X-rays immediately generated enormous interest. Workshops began making specialized versions of Crookes tubes for generating X-rays and these first-generation cold cathode or Crookes X-ray tubes were used until about 1920.
A typical early 20th century medical x-ray system consisted of a Ruhmkorff coil connected to a cold cathode Crookes X-ray tube. A spark gap was typically connected to the high voltage side in parallel to the tube and used for diagnostic purposes.[42] The spark gap allowed detecting the polarity of the sparks, measuring voltage by the length of the sparks thus determining the "hardness" of the vacuum of the tube, and it provided a load in the event the X-ray tube was disconnected. To detect the hardness of the tube, the spark gap was initially opened to the widest setting. While the coil was operating, the operator reduced the gap until sparks began to appear. A tube in which the spark gap began to spark at around 2 1/2 inches was considered soft (low vacuum) and suitable for thin body parts such as hands and arms. A 5-inch spark indicated the tube was suitable for shoulders and knees. A 7-9 inch spark would indicate a higher vacuum suitable for imaging the abdomen of larger individuals. Since the spark gap was connected in parallel to the tube, the spark gap had to be opened until the sparking ceased in order to operate the tube for imaging. Exposure time for photographic plates was around half a minute for a hand to a couple of minutes for a thorax. The plates may have a small addition of fluorescent salt to reduce exposure times.[42]
Crookes tubes were unreliable. They had to contain a small quantity of gas (invariably air) as a current will not flow in such a tube if they are fully evacuated. However, as time passed, the X-rays caused the glass to absorb the gas, causing the tube to generate "harder" X-rays until it soon stopped operating. Larger and more frequently used tubes were provided with devices for restoring the air, known as "softeners". These often took the form of a small side tube which contained a small piece of mica, a mineral that traps relatively large quantities of air within its structure. A small electrical heater heated the mica, causing it to release a small amount of air, thus restoring the tube's efficiency. However, the mica had a limited life, and the restoration process was difficult to control.
In 1904, John Ambrose Fleming invented the thermionic diode, the first kind of vacuum tube. This used a hot cathode that caused an electric current to flow in a vacuum. This idea was quickly applied to X-ray tubes, and hence heated-cathode X-ray tubes, called "Coolidge tubes", completely replaced the troublesome cold cathode tubes by about 1920.
In about 1906, the physicist Charles Barkla discovered that X-rays could be scattered by gases, and that each element had a characteristic X-ray spectrum. He won the 1917 Nobel Prize in Physics for this discovery.
In 1912, Max von Laue, Paul Knipping, and Walter Friedrich first observed the diffraction of X-rays by crystals. This discovery, along with the early work of Paul Peter Ewald, William Henry Bragg, and William Lawrence Bragg, gave birth to the field of X-ray crystallography.
In 1913, Henry Moseley performed crystallography experiments with X-rays emanating from various metals and formulated Moseley's law which relates the frequency of the X-rays to the atomic number of the metal.
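As an illustration of Moseley's law, the following minimal Python sketch (not from the article or its sources) estimates Kα photon energies from the atomic number using the common first-order approximation E ≈ (3/4)·13.6 eV·(Z − 1)²; the function name and the screening constant of 1 are assumptions made here for illustration.

```python
# Minimal sketch of Moseley's law for K-alpha lines (assumed first-order form):
# E_K_alpha ≈ (3/4) * Ry * (Z - 1)^2, with Ry = 13.6 eV and a screening constant of 1.
RYDBERG_EV = 13.6

def k_alpha_energy_ev(z: int) -> float:
    """Approximate K-alpha photon energy (eV) for atomic number z."""
    return 0.75 * RYDBERG_EV * (z - 1) ** 2

for name, z in [("Cu", 29), ("Mo", 42), ("W", 74)]:
    print(f"{name} (Z={z}): K-alpha ≈ {k_alpha_energy_ev(z) / 1000:.1f} keV")
# Cu ≈ 8.0 keV and Mo ≈ 17.1 keV are close to the measured lines; heavy elements
# such as W deviate more because the simple screening assumption breaks down.
```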
The Coolidge X-ray tube was invented the same year by William D. Coolidge. It made possible the continuous emissions of X-rays. Modern X-ray tubes are based on this design, often employing the use of rotating targets which allow for significantly higher heat dissipation than static targets, further allowing higher quantity X-ray output for use in high powered applications such as rotational CT scanners.
The use of X-rays for medical purposes (which developed into the field of radiation therapy) was pioneered by Major John Hall-Edwards in Birmingham, England. In 1908, he had to have his left arm amputated because of the spread of X-ray dermatitis on his arm.[43]
In 1914 Marie Curie developed radiological cars to support soldiers injured in World War I. The cars would allow for rapid X-ray imaging of wounded soldiers so battlefield surgeons could quickly and more accurately operate.[44]
From the 1920s through to the 1950s, X-ray machines were developed to assist in the fitting of shoes and were sold to commercial shoe stores.[45][46][47] Concerns regarding the impact of frequent or poorly controlled use were expressed in the 1950s,[48][49] leading to the practice's eventual end that decade.[50]
The X-ray microscope was developed during the 1950s.
The Chandra X-ray Observatory, launched on July 23, 1999, has been allowing the exploration of the very violent processes in the universe which produce X-rays. Unlike visible light, which gives a relatively stable view of the universe, the X-ray universe is unstable. It features stars being torn apart by black holes, galactic collisions, and novae, and neutron stars that build up layers of plasma that then explode into space.
An X-ray laser device was proposed as part of the Reagan Administration's Strategic Defense Initiative in the 1980s, but the only test of the device (a sort of laser "blaster" or death ray, powered by a thermonuclear explosion) gave inconclusive results. For technical and political reasons, the overall project (including the X-ray laser) was de-funded (though was later revived by the second Bush Administration as National Missile Defense using different technologies).
Phase-contrast X-ray imaging refers to a variety of techniques that use phase information of a coherent X-ray beam to image soft tissues. It has become an important method for visualizing cellular and histological structures in a wide range of biological and medical studies. There are several technologies being used for X-ray phase-contrast imaging, all utilizing different principles to convert phase variations in the X-rays emerging from an object into intensity variations.[51][52] These include propagation-based phase contrast,[53] Talbot interferometry,[52] refraction-enhanced imaging,[54] and X-ray interferometry.[55] These methods provide higher contrast compared to normal absorption-contrast X-ray imaging, making it possible to see smaller details. A disadvantage is that these methods require more sophisticated equipment, such as synchrotron or microfocus X-ray sources, X-ray optics, and high resolution X-ray detectors.
X-rays with high photon energies (above 5–10 keV, below 0.2–0.1 nm wavelength) are called hard X-rays, while those with lower energy (and longer wavelength) are called soft X-rays.[56] Due to their penetrating ability, hard X-rays are widely used to image the inside of objects, e.g., in medical radiography and airport security. The term X-ray is metonymically used to refer to a radiographic image produced using this method, in addition to the method itself. Since the wavelengths of hard X-rays are similar to the size of atoms, they are also useful for determining crystal structures by X-ray crystallography. By contrast, soft X-rays are easily absorbed in air; the attenuation length of 600 eV (~2 nm) X-rays in water is less than 1 micrometer.[57]
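To make the quoted attenuation length concrete, here is a minimal Python sketch of exponential (Beer–Lambert) attenuation, I = I0·exp(−x/ℓ); the 1 µm attenuation length used below is an order-of-magnitude stand-in for the sub-micrometre value quoted above, and the function name is illustrative.

```python
# Minimal sketch of exponential (Beer-Lambert) attenuation: I = I0 * exp(-x / l),
# where l is the attenuation length (depth at which intensity falls to 1/e).
import math

def transmitted_fraction(thickness_m: float, attenuation_length_m: float) -> float:
    """Fraction of X-ray intensity remaining after passing through thickness_m."""
    return math.exp(-thickness_m / attenuation_length_m)

soft_xray_att_length = 1e-6   # ~1 micrometre: order of magnitude for soft X-rays in water
for um in (0.5, 1.0, 5.0):
    frac = transmitted_fraction(um * 1e-6, soft_xray_att_length)
    print(f"{um:4.1f} um of water -> {frac:.3%} transmitted")
```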
There is no consensus for a definition distinguishing between X-rays and gamma rays. One common practice is to distinguish between the two types of radiation based on their source: X-rays are emitted by electrons, while gamma rays are emitted by the atomic nucleus.[58][59][60][61] This definition has several problems: other processes also can generate these high-energy photons, or sometimes the method of generation is not known. One common alternative is to distinguish X- and gamma radiation on the basis of wavelength (or, equivalently, frequency or photon energy), with radiation shorter than some arbitrary wavelength, such as 10−11 m (0.1 Å), defined as gamma radiation.[62]
This criterion assigns a photon to an unambiguous category, but is only possible if wavelength is known. (Some measurement techniques do not distinguish between detected wavelengths.) However, these two definitions often coincide since the electromagnetic radiation emitted by X-ray tubes generally has a longer wavelength and lower photon energy than the radiation emitted by radioactive nuclei.[58]
Occasionally, one term or the other is used in specific contexts due to historical precedent, based on measurement (detection) technique, or based on their intended use rather than their wavelength or source.
Thus, gamma-rays generated for medical and industrial uses, for example radiotherapy, in the ranges of 6–20 MeV, can in this context also be referred to as X-rays.[63]
X-ray photons carry enough energy to ionize atoms and disrupt molecular bonds. This makes it a type of ionizing radiation, and therefore harmful to living tissue. A very high radiation dose over a short period of time causes radiation sickness, while lower doses can give an increased risk of radiation-induced cancer. In medical imaging this increased cancer risk is generally greatly outweighed by the benefits of the examination. The ionizing capability of X-rays can be utilized in cancer treatment to kill malignant cells using radiation therapy. It is also used for material characterization using X-ray spectroscopy.
Hard X-rays can traverse relatively thick objects without being much absorbed or scattered. For this reason, X-rays are widely used to image the inside of visually opaque objects. The most often seen applications are in medical radiography and airport security scanners, but similar techniques are also important in industry (e.g. industrial radiography and industrial CT scanning) and research (e.g. small animal CT). The penetration depth varies with several orders of magnitude over the X-ray spectrum. This allows the photon energy to be adjusted for the application so as to give sufficient transmission through the object and at the same time provide good contrast in the image.
X-rays have much shorter wavelengths than visible light, which makes it possible to probe structures much smaller than can be seen using a normal microscope. This property is used in X-ray microscopy to acquire high resolution images, and also in X-ray crystallography to determine the positions of atoms in crystals.
X-rays interact with matter in three main ways, through photoabsorption, Compton scattering, and Rayleigh scattering. The strength of these interactions depends on the energy of the X-rays and the elemental composition of the material, but not much on chemical properties, since the X-ray photon energy is much higher than chemical binding energies. Photoabsorption or photoelectric absorption is the dominant interaction mechanism in the soft X-ray regime and for the lower hard X-ray energies. At higher energies, Compton scattering dominates.
The probability of a photoelectric absorption per unit mass is approximately proportional to Z³/E³, where Z is the atomic number and E is the energy of the incident photon.[64] This rule is not valid close to inner shell electron binding energies where there are abrupt changes in interaction probability, so-called absorption edges. However, the general trend of high absorption coefficients and thus short penetration depths for low photon energies and high atomic numbers is very strong. For soft tissue, photoabsorption dominates up to about 26 keV photon energy where Compton scattering takes over. For higher atomic number substances this limit is higher. The high amount of calcium (Z = 20) in bones, together with their high density, is what makes them show up so clearly on medical radiographs.
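A small illustrative calculation of the Z³/E³ rule (a sketch only: it ignores absorption edges, density and the Compton contribution) shows why calcium-rich bone absorbs so much more strongly than the light elements of soft tissue:

```python
# Rough sketch of the Z^3 / E^3 scaling of photoelectric absorption per unit mass.
def relative_photoabsorption(z: int, energy_kev: float) -> float:
    """Relative (unnormalised) photoelectric absorption probability per unit mass."""
    return z ** 3 / energy_kev ** 3

for energy in (20.0, 60.0):
    ca = relative_photoabsorption(20, energy)   # calcium (bone mineral)
    ox = relative_photoabsorption(8, energy)    # oxygen (dominant in soft tissue/water)
    print(f"{energy:5.1f} keV: Ca/O photoabsorption ratio ≈ {ca / ox:.0f}")
# The ~15x ratio is energy-independent in this approximation; what changes with energy
# is how much photoabsorption matters relative to Compton scattering.
```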
A photoabsorbed photon transfers all its energy to the electron with which it interacts, thus ionizing the atom to which the electron was bound and producing a photoelectron that is likely to ionize more atoms in its path. An outer electron will fill the vacant electron position and produce either a characteristic X-ray or an Auger electron. These effects can be used for elemental detection through X-ray spectroscopy or Auger electron spectroscopy.
Compton scattering is the predominant interaction between X-rays and soft tissue in medical imaging.[65] Compton scattering is an inelastic scattering of the X-ray photon by an outer shell electron. Part of the energy of the photon is transferred to the scattering electron, thereby ionizing the atom and increasing the wavelength of the X-ray. The scattered photon can go in any direction, but a direction similar to the original direction is more likely, especially for high-energy X-rays. The probability for different scattering angles are described by the Klein–Nishina formula. The transferred energy can be directly obtained from the scattering angle from the conservation of energy and momentum.
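The relation between the scattering angle and the energy kept by the photon can be evaluated directly from the standard Compton formula E′ = E / (1 + (E/mec²)(1 − cos θ)); the following minimal Python sketch (the 100 keV example and the names are illustrative) prints the photon and recoil-electron energies for a few angles.

```python
# Minimal sketch of the Compton relation between scattering angle and photon energy.
# The energy given to the recoil electron is E - E'.
import math

ELECTRON_REST_ENERGY_KEV = 511.0

def scattered_photon_energy_kev(e_kev: float, theta_deg: float) -> float:
    """Photon energy after Compton scattering through angle theta (degrees)."""
    theta = math.radians(theta_deg)
    return e_kev / (1.0 + (e_kev / ELECTRON_REST_ENERGY_KEV) * (1.0 - math.cos(theta)))

e0 = 100.0  # keV, a typical hard X-ray
for angle in (0, 45, 90, 180):
    e1 = scattered_photon_energy_kev(e0, angle)
    print(f"theta = {angle:3d} deg: photon keeps {e1:5.1f} keV, electron gets {e0 - e1:5.1f} keV")
```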
Rayleigh scattering is the dominant elastic scattering mechanism in the X-ray regime.[66] Inelastic forward scattering gives rise to the refractive index, which for X-rays is only slightly below 1.[67]
Whenever charged particles (electrons or ions) of sufficient energy hit a material, X-rays are produced.
X-rays can be generated by an X-ray tube, a vacuum tube that uses a high voltage to accelerate the electrons released by a hot cathode to a high velocity. The high velocity electrons collide with a metal target, the anode, creating the X-rays.[70] In medical X-ray tubes the target is usually tungsten or a more crack-resistant alloy of rhenium (5%) and tungsten (95%), but sometimes molybdenum for more specialized applications, such as when softer X-rays are needed as in mammography. In crystallography, a copper target is most common, with cobalt often being used when fluorescence from iron content in the sample might otherwise present a problem.
The maximum energy of the produced X-ray photon is limited by the energy of the incident electron, which is equal to the voltage on the tube times the electron charge, so an 80 kV tube cannot create X-rays with an energy greater than 80 keV. When the electrons hit the target, X-rays are created by two different atomic processes: characteristic X-ray emission, in which an incoming electron knocks out an inner-shell electron and the vacancy is filled with emission of a photon at an energy characteristic of the target element; and bremsstrahlung, the continuous radiation emitted as electrons are decelerated and deflected by the atomic nuclei of the target.
So, the resulting output of a tube consists of a continuous bremsstrahlung spectrum falling off to zero at the tube voltage, plus several spikes at the characteristic lines. The voltages used in diagnostic X-ray tubes range from roughly 20 kV to 150 kV and thus the highest energies of the X-ray photons range from roughly 20 keV to 150 keV.[71]
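As a worked illustration of this high-energy cut-off (a sketch, not taken from the cited sources), the short Python snippet below converts a tube voltage into the maximum photon energy and the corresponding shortest emitted wavelength (the Duane–Hunt limit); the function names are the author's own.

```python
# Sketch of the spectrum's high-energy cut-off: photon energy cannot exceed e*V,
# so the shortest emitted wavelength is lambda_min = h*c / (e*V) (Duane-Hunt limit).
HC_EV_NM = 1239.84  # h*c expressed in eV*nm

def max_photon_energy_kev(tube_voltage_kv: float) -> float:
    """Highest possible photon energy (keV) for a given tube voltage (kV)."""
    return tube_voltage_kv  # numerically equal: an 80 kV tube gives at most 80 keV photons

def min_wavelength_nm(tube_voltage_kv: float) -> float:
    """Shortest emitted wavelength (nm) for a given tube voltage (kV)."""
    return HC_EV_NM / (tube_voltage_kv * 1000.0)

for kv in (20, 80, 150):
    print(f"{kv:3d} kV tube: E_max = {max_photon_energy_kev(kv):5.1f} keV, "
          f"lambda_min = {min_wavelength_nm(kv):.4f} nm")
```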
Both of these X-ray production processes are inefficient, with only about one percent of the electrical energy used by the tube converted into X-rays, and thus most of the electric power consumed by the tube is released as waste heat. When producing a usable flux of X-rays, the X-ray tube must be designed to dissipate the excess heat.
A specialized source of X-rays which is becoming widely used in research is synchrotron radiation, which is generated by particle accelerators. Its unique features are X-ray outputs many orders of magnitude greater than those of X-ray tubes, wide X-ray spectra, excellent collimation, and linear polarization.[72]
Short nanosecond bursts of X-rays peaking at 15-keV in energy may be reliably produced by peeling pressure-sensitive adhesive tape from its backing in a moderate vacuum. This is likely to be the result of recombination of electrical charges produced by triboelectric charging. The intensity of X-ray triboluminescence is sufficient for it to be used as a source for X-ray imaging.[73]
X-rays can also be produced by fast protons or other positive ions. The proton-induced X-ray emission or particle-induced X-ray emission is widely used as an analytical procedure. For high energies, the production cross section is proportional to Z₁²Z₂⁻⁴, where Z₁ refers to the atomic number of the ion and Z₂ refers to that of the target atom.[74] An overview of these cross sections is given in the same reference.
X-rays are also produced in lightning accompanying terrestrial gamma-ray flashes. The underlying mechanism is the acceleration of electrons in lightning-related electric fields and the subsequent production of photons through bremsstrahlung.[75] This produces photons with energies ranging from a few keV to several tens of MeV.[76] In laboratory discharges with a gap size of approximately 1 meter and a peak voltage of 1 MV, X-rays with a characteristic energy of 160 keV are observed.[77] A possible explanation is the encounter of two streamers and the production of high-energy run-away electrons;[78] however, microscopic simulations have shown that the duration of electric field enhancement between two streamers is too short to produce a significant number of run-away electrons.[79] Recently, it has been proposed that air perturbations in the vicinity of streamers can facilitate the production of run-away electrons and hence of X-rays from discharges.[80][81]
X-ray detectors vary in shape and function depending on their purpose. Imaging detectors such as those used for radiography were originally based on photographic plates and later photographic film, but are now mostly replaced by various digital detector types such as image plates and flat panel detectors. For radiation protection direct exposure hazard is often evaluated using ionization chambers, while dosimeters are used to measure the radiation dose a person has been exposed to. X-ray spectra can be measured either by energy dispersive or wavelength dispersive spectrometers. For x-ray diffraction applications, such as x-ray crystallography, hybrid photon counting detectors are widely used.[82]
Since Röntgen's discovery that X-rays can identify bone structures, X-rays have been used for medical imaging.[83] The first medical use was less than a month after his paper on the subject.[29] Up to 2010, five billion medical imaging examinations had been conducted worldwide.[84] Radiation exposure from medical imaging in 2006 made up about 50% of total ionizing radiation exposure in the United States.[85]
Projectional radiography is the practice of producing two-dimensional images using x-ray radiation. Bones contain much calcium, which due to its relatively high atomic number absorbs x-rays efficiently. This reduces the amount of X-rays reaching the detector in the shadow of the bones, making them clearly visible on the radiograph. The lungs and trapped gas also show up clearly because of lower absorption compared to tissue, while differences between tissue types are harder to see.
Projectional radiographs are useful in the detection of pathology of the skeletal system as well as for detecting some disease processes in soft tissue. Some notable examples are the very common chest X-ray, which can be used to identify lung diseases such as pneumonia, lung cancer, or pulmonary edema, and the abdominal x-ray, which can detect bowel (or intestinal) obstruction, free air (from visceral perforations) and free fluid (in ascites). X-rays may also be used to detect pathology such as gallstones (which are rarely radiopaque) or kidney stones which are often (but not always) visible. Traditional plain X-rays are less useful in the imaging of soft tissues such as the brain or muscle. One area where projectional radiographs are used extensively is in evaluating how an orthopedic implant, such as a knee, hip or shoulder replacement, is situated in the body with respect to the surrounding bone. This can be assessed in two dimensions from plain radiographs, or it can be assessed in three dimensions if a technique called '2D to 3D registration' is used. This technique purportedly negates projection errors associated with evaluating implant position from plain radiographs.[86][87]
Dental radiography is commonly used in the diagnosis of common oral problems, such as cavities.
In medical diagnostic applications, the low energy (soft) X-rays are unwanted, since they are totally absorbed by the body, increasing the radiation dose without contributing to the image. Hence, a thin metal sheet, often of aluminium, called an X-ray filter, is usually placed over the window of the X-ray tube, absorbing the low energy part in the spectrum. This is called hardening the beam since it shifts the center of the spectrum towards higher energy (or harder) x-rays.
To generate an image of the cardiovascular system, including the arteries and veins (angiography) an initial image is taken of the anatomical region of interest. A second image is then taken of the same region after an iodinated contrast agent has been injected into the blood vessels within this area. These two images are then digitally subtracted, leaving an image of only the iodinated contrast outlining the blood vessels. The radiologist or surgeon then compares the image obtained to normal anatomical images to determine whether there is any damage or blockage of the vessel.
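The subtraction step itself is simple image arithmetic. The following toy Python/NumPy sketch is illustrative only: the "anatomy" and "vessel" arrays are synthetic, and real digital subtraction angiography systems work on logarithmic intensities and correct for patient motion before subtracting.

```python
# Toy sketch of the digital subtraction step in angiography (DSA).
import numpy as np

rng = np.random.default_rng(0)
anatomy = rng.normal(100.0, 5.0, size=(256, 256))   # synthetic background anatomy
vessel = np.zeros_like(anatomy)
vessel[120:136, :] = 40.0                            # hypothetical contrast-filled vessel
pre = anatomy + rng.normal(0.0, 1.0, anatomy.shape)              # mask image, before contrast
post = anatomy + vessel + rng.normal(0.0, 1.0, anatomy.shape)    # image after contrast injection

dsa = post - pre   # the anatomy cancels; only the vessel (plus noise) remains
print("typical background residual:", float(np.abs(dsa[:100]).mean()))
print("vessel signal:              ", float(dsa[128, 128]))
```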
Computed tomography (CT scanning) is a medical imaging modality where tomographic images or slices of specific areas of the body are obtained from a large series of two-dimensional X-ray images taken in different directions.[88] These cross-sectional images can be combined into a three-dimensional image of the inside of the body and used for diagnostic and therapeutic purposes in various medical disciplines....
Fluoroscopy is an imaging technique commonly used by physicians or radiation therapists to obtain real-time moving images of the internal structures of a patient through the use of a fluoroscope. In its simplest form, a fluoroscope consists of an X-ray source and a fluorescent screen, between which a patient is placed. However, modern fluoroscopes couple the screen to an X-ray image intensifier and CCD video camera allowing the images to be recorded and played on a monitor. This method may use a contrast material. Examples include cardiac catheterization (to examine for coronary artery blockages) and barium swallow (to examine for esophageal disorders and swallowing disorders).
The use of X-rays as a treatment is known as radiation therapy and is largely used for the management (including palliation) of cancer; it requires higher radiation doses than those received for imaging alone. Lower-energy X-ray beams are used for treating skin cancers, while higher-energy beams are used for treating cancers within the body such as brain, lung, prostate, and breast cancers.[89][90]
Diagnostic X-rays (primarily from CT scans due to the large dose used) increase the risk of developmental problems and cancer in those exposed.[91][92][93] X-rays are classified as a carcinogen by both the World Health Organization's International Agency for Research on Cancer and the U.S. government.[84][94] It is estimated that 0.4% of current cancers in the United States are due to computed tomography (CT scans) performed in the past and that this may increase to as high as 1.5-2% with 2007 rates of CT usage.[95]
Experimental and epidemiological data currently do not support the proposition that there is a threshold dose of radiation below which there is no increased risk of cancer.[96] However, this is under increasing doubt.[97] It is estimated that the additional radiation from diagnostic X-rays will increase the average person's cumulative risk of getting cancer by age 75 by 0.6–3.0%.[98] The amount of absorbed radiation depends upon the type of X-ray test and the body part involved.[99] CT and fluoroscopy entail higher doses of radiation than do plain X-rays.
To place the increased risk in perspective, a plain chest X-ray will expose a person to the same amount from background radiation that people are exposed to (depending upon location) every day over 10 days, while exposure from a dental X-ray is approximately equivalent to 1 day of environmental background radiation.[100] Each such X-ray would add less than 1 per 1,000,000 to the lifetime cancer risk. An abdominal or chest CT would be the equivalent to 2–3 years of background radiation to the whole body, or 4–5 years to the abdomen or chest, increasing the lifetime cancer risk between 1 per 1,000 to 1 per 10,000.[100] This is compared to the roughly 40% chance of a US citizen developing cancer during their lifetime.[101] For instance, the effective dose to the torso from a CT scan of the chest is about 5 mSv, and the absorbed dose is about 14 mGy.[102] A head CT scan (1.5mSv, 64mGy)[103] that is performed once with and once without contrast agent, would be equivalent to 40 years of background radiation to the head. Accurate estimation of effective doses due to CT is difficult with the estimation uncertainty range of about ±19% to ±32% for adult head scans depending upon the method used.[104]
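The same kind of comparison can be made with a back-of-the-envelope calculation. In the Python sketch below, the background rate and the per-examination doses are typical textbook values assumed for illustration rather than figures taken from the cited sources (only the ~5 mSv chest CT dose is quoted in the text above).

```python
# Illustrative conversion of examination doses into "days of natural background".
BACKGROUND_MSV_PER_YEAR = 3.0   # assumed average natural background dose rate

exam_dose_msv = {
    "dental X-ray": 0.005,   # assumed typical value
    "chest X-ray": 0.1,      # assumed typical value
    "chest CT": 5.0,         # matches the ~5 mSv effective dose quoted in the text
}

for exam, dose in exam_dose_msv.items():
    days = dose / BACKGROUND_MSV_PER_YEAR * 365
    print(f"{exam:13s}: {dose:6.3f} mSv ≈ {days:7.1f} days of background")
```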
The risk of radiation is greater to a fetus, so in pregnant patients, the benefits of the investigation (X-ray) should be balanced with the potential hazards to the fetus.[105][106] In the US, there are an estimated 62 million CT scans performed annually, including more than 4 million on children.[99] Avoiding unnecessary X-rays (especially CT scans) reduces radiation dose and any associated cancer risk.[107]
Medical X-rays are a significant source of man-made radiation exposure. In 1987, they accounted for 58% of exposure from man-made sources in the United States. Since man-made sources accounted for only 18% of the total radiation exposure, most of which came from natural sources (82%), medical X-rays only accounted for 10% of total American radiation exposure; medical procedures as a whole (including nuclear medicine) accounted for 14% of total radiation exposure. By 2006, however, medical procedures in the United States were contributing much more ionizing radiation than was the case in the early 1980s. In 2006, medical exposure constituted nearly half of the total radiation exposure of the U.S. population from all sources. The increase is traceable to the growth in the use of medical imaging procedures, in particular computed tomography (CT), and to the growth in the use of nuclear medicine.[85][108]
Dosage due to dental X-rays varies significantly depending on the procedure and the technology (film or digital). Depending on the procedure and the technology, a single dental X-ray of a human results in an exposure of 0.5 to 4 mrem. A full mouth series of X-rays may result in an exposure of up to 6 (digital) to 18 (film) mrem, for a yearly average of up to 40 mrem.[109][110][111][112][113][114][115]
Financial incentives have been shown to have a significant impact on X-ray use with doctors who are paid a separate fee for each X-ray providing more X-rays.[116]
Early photon tomography or EPT[117] (as of 2015) along with other techniques[118] are being researched as potential alternatives to X-rays for imaging applications.
Other notable uses of X-rays include:
While generally considered invisible to the human eye, in special circumstances X-rays can be visible. Brandes, in an experiment a short time after Röntgen's landmark 1895 paper, reported after dark adaptation and placing his eye close to an X-ray tube, seeing a faint "blue-gray" glow which seemed to originate within the eye itself.[124] Upon hearing this, Röntgen reviewed his record books and found he too had seen the effect. When placing an X-ray tube on the opposite side of a wooden door Röntgen had noted the same blue glow, seeming to emanate from the eye itself, but thought his observations to be spurious because he only saw the effect when he used one type of tube. Later he realized that the tube which had created the effect was the only one powerful enough to make the glow plainly visible and the experiment was thereafter readily repeatable. The knowledge that X-rays are actually faintly visible to the dark-adapted naked eye has largely been forgotten today; this is probably due to the desire not to repeat what would now be seen as a recklessly dangerous and potentially harmful experiment with ionizing radiation. It is not known what exact mechanism in the eye produces the visibility: it could be due to conventional detection (excitation of rhodopsin molecules in the retina), direct excitation of retinal nerve cells, or secondary detection via, for instance, X-ray induction of phosphorescence in the eyeball with conventional retinal detection of the secondarily produced visible light.
Though X-rays are otherwise invisible, it is possible to see the ionization of the air molecules if the intensity of the X-ray beam is high enough. The beamline from the wiggler at the ID11 at the European Synchrotron Radiation Facility is one example of such high intensity.[125]
The measure of X-rays' ionizing ability is called the exposure:
However, the effect of ionizing radiation on matter (especially living tissue) is more closely related to the amount of energy deposited into them rather than the charge generated. This measure of energy absorbed is called the absorbed dose:
The equivalent dose is the measure of the biological effect of radiation on human tissue. For X-rays it is equal to the absorbed dose.
A tsunami (Japanese: 津波) (/(t)suːˈnɑːmi, (t)sʊˈ-/ (t)soo-NAH-mee, (t)suu-[1][2][3][4][5] pronounced [tsɯnami]) is a series of waves in a water body caused by the displacement of a large volume of water, generally in an ocean or a large lake. Earthquakes, volcanic eruptions and other underwater explosions (including detonations, landslides, glacier calvings, meteorite impacts and other disturbances) above or below water all have the potential to generate a tsunami.[6] Unlike normal ocean waves, which are generated by wind, or tides, which are generated by the gravitational pull of the Moon and the Sun, a tsunami is generated by the displacement of water.
Tsunami waves do not resemble normal undersea currents or sea waves because their wavelength is far longer.[7] Rather than appearing as a breaking wave, a tsunami may instead initially resemble a rapidly rising tide.[8] For this reason, it is often referred to as a tidal wave,[9] although this usage is not favoured by the scientific community because it might give the false impression of a causal relationship between tides and tsunamis.[10] Tsunamis generally consist of a series of waves, with periods ranging from minutes to hours, arriving in a so-called "wave train".[11] Wave heights of tens of metres can be generated by large events. Although the impact of tsunamis is limited to coastal areas, their destructive power can be enormous, and they can affect entire ocean basins. The 2004 Indian Ocean tsunami was among the deadliest natural disasters in human history, with at least 230,000 people killed or missing in 14 countries bordering the Indian Ocean.
The Ancient Greek historian Thucydides suggested in his 5th century BC History of the Peloponnesian War that tsunamis were related to submarine earthquakes,[12][13] but the understanding of tsunamis remained slim until the 20th century and much remains unknown. Major areas of current research include determining why some large earthquakes do not generate tsunamis while other smaller ones do; accurately forecasting the passage of tsunamis across the oceans; and forecasting how tsunami waves interact with shorelines.
The term "tsunami" is a borrowing from the Japanese tsunami 津波, meaning "harbour wave". For the plural, one can either follow ordinary English practice and add an s, or use an invariable plural as in the Japanese.[14] Some English speakers alter the word's initial /ts/ to an /s/ by dropping the "t", since English does not natively permit /ts/ at the beginning of words, though the original Japanese pronunciation is /ts/.
Tsunamis are sometimes referred to as tidal waves.[15] This once-popular term derives from the most common appearance of a tsunami, which is that of an extraordinarily high tidal bore. Tsunamis and tides both produce waves of water that move inland, but in the case of a tsunami, the inland movement of water may be much greater, giving the impression of an incredibly high and forceful tide. In recent years, the term "tidal wave" has fallen out of favour, especially in the scientific community, because the causes of tsunamis have nothing to do with those of tides, which are produced by the gravitational pull of the moon and sun rather than the displacement of water. Although the meanings of "tidal" include "resembling"[16] or "having the form or character of"[17] the tides, use of the term tidal wave is discouraged by geologists and oceanographers.
A 1969 episode of the TV crime show Hawaii Five-O entitled "Forty Feet High and It Kills!" used the terms "tsunami" and "tidal wave" interchangeably.[18]
The term seismic sea wave is also used to refer to the phenomenon, because the waves most often are generated by seismic activity such as earthquakes.[19] Prior to the rise of the use of the term tsunami in English, scientists generally encouraged the use of the term seismic sea wave rather than tidal wave. However, like tsunami, seismic sea wave is not a completely accurate term, as forces other than earthquakes – including underwater landslides, volcanic eruptions, underwater explosions, land or ice slumping into the ocean, meteorite impacts, and the weather when the atmospheric pressure changes very rapidly – can generate such waves by displacing water.[20][21]
While Japan may have the longest recorded history of tsunamis, the sheer destruction caused by the 2004 Indian Ocean earthquake and tsunami event marks it as the most devastating of its kind in modern times, killing around 230,000 people.[22] The Sumatran region is also accustomed to tsunamis, with earthquakes of varying magnitudes regularly occurring off the coast of the island.[23]
Tsunamis are an often underestimated hazard in the Mediterranean Sea and parts of Europe. Of historical and current (with regard to risk assumptions) importance are the 1755 Lisbon earthquake and tsunami (which was caused by the Azores–Gibraltar Transform Fault), the 1783 Calabrian earthquakes, each causing several tens of thousands of deaths and the 1908 Messina earthquake and tsunami. The tsunami claimed more than 123,000 lives in Sicily and Calabria and is among the most deadly natural disasters in modern Europe. The Storegga Slide in the Norwegian Sea and some examples of tsunamis affecting the British Isles refer to landslide and meteotsunamis predominantly and less to earthquake-induced waves.
As early as 426 BC the Greek historian Thucydides inquired in his book History of the Peloponnesian War about the causes of tsunami, and was the first to argue that ocean earthquakes must be the cause.[12][13]
The cause, in my opinion, of this phenomenon must be sought in the earthquake. At the point where its shock has been the most violent the sea is driven back, and suddenly recoiling with redoubled force, causes the inundation. Without an earthquake I do not see how such an accident could happen.[24]
The Roman historian Ammianus Marcellinus (Res Gestae 26.10.15–19) described the typical sequence of a tsunami, including an incipient earthquake, the sudden retreat of the sea and a following gigantic wave, after the 365 AD tsunami devastated Alexandria.[25][26]
The principal generation mechanism of a tsunami is the displacement of a substantial volume of water or perturbation of the sea.[27] This displacement of water is usually attributed to either earthquakes, landslides, volcanic eruptions, glacier calvings or more rarely by meteorites and nuclear tests.[28][29]
Tsunamis can be generated when the sea floor abruptly deforms and vertically displaces the overlying water. Tectonic earthquakes are a particular kind of earthquake that are associated with the Earth's crustal deformation; when these earthquakes occur beneath the sea, the water above the deformed area is displaced from its equilibrium position.[30] More specifically, a tsunami can be generated when thrust faults associated with convergent or destructive plate boundaries move abruptly, resulting in water displacement, owing to the vertical component of movement involved. Movement on normal (extensional) faults can also cause displacement of the seabed, but only the largest of such events (typically related to flexure in the outer trench swell) cause enough displacement to give rise to a significant tsunami, such as the 1977 Sumba and 1933 Sanriku events.[31][32]
The sequence of a subduction-zone tsunami can be illustrated in four stages: a drawing of the tectonic plate boundary before the earthquake; the over-riding plate bulging under strain, causing tectonic uplift; the plate slipping, causing subsidence and releasing energy into the water; and the released energy producing the tsunami waves.
Tsunamis have a small wave height offshore, and a very long wavelength (often hundreds of kilometres long, whereas normal ocean waves have a wavelength of only 30 or 40 metres),[33] which is why they generally pass unnoticed at sea, forming only a slight swell usually about 300 millimetres (12 in) above the normal sea surface. They grow in height when they reach shallower water, in a wave shoaling process described below. A tsunami can occur in any tidal state and even at low tide can still inundate coastal areas.
On April 1, 1946, the 8.6 Mw Aleutian Islands earthquake occurred with a maximum Mercalli intensity of VI (Strong). It generated a tsunami which inundated Hilo on the island of Hawaii with a 14-metre high (46 ft) surge, killing between 165 and 173 people. The area where the earthquake occurred is where the Pacific Ocean floor is subducting (or being pushed downwards) under Alaska.
Examples of tsunamis originating at locations away from convergent boundaries include Storegga about 8,000 years ago, Grand Banks in 1929, and Papua New Guinea in 1998 (Tappin, 2001). The Grand Banks and Papua New Guinea tsunamis came from earthquakes which destabilised sediments, causing them to flow into the ocean and generate a tsunami. They dissipated before travelling transoceanic distances.
The cause of the Storegga sediment failure is unknown. Possibilities include an overloading of the sediments, an earthquake or a release of gas hydrates (methane etc.).
The 1960 Valdivia earthquake (Mw 9.5), 1964 Alaska earthquake (Mw 9.2), 2004 Indian Ocean earthquake (Mw 9.2), and 2011 Tōhoku earthquake (Mw9.0) are recent examples of powerful megathrust earthquakes that generated tsunamis (known as teletsunamis) that can cross entire oceans. Smaller (Mw 4.2) earthquakes in Japan can trigger tsunamis (called local and regional tsunamis) that can devastate stretches of coastline, but can do so in only a few minutes at a time.
In the 1950s, it was discovered that tsunamis larger than had previously been believed possible can be caused by giant submarine landslides. These rapidly displace large water volumes, as energy transfers to the water at a rate faster than the water can absorb. Their existence was confirmed in 1958, when a giant landslide in Lituya Bay, Alaska, caused the highest wave ever recorded, which had a height of 524 metres (1,719 ft).[34] The wave did not travel far, as it struck land almost immediately. The wave struck three boats — each with two people aboard — anchored in the bay. One boat rode out the wave, but the wave sank the other two, killing both people aboard one of them.[35][36][37]
Another landslide-tsunami event occurred in 1963 when a massive landslide from Monte Toc entered the reservoir behind the Vajont Dam in Italy. The resulting wave surged over the 262-metre (860 ft)-high dam by 250 metres (820 ft) and destroyed several towns. Around 2,000 people died.[38][39] Scientists named these waves megatsunamis.
Some geologists claim that large landslides from volcanic islands, e.g. Cumbre Vieja on La Palma in the Canary Islands, may be able to generate megatsunamis that can cross oceans, but this is disputed by many others.
In general, landslides generate displacements mainly in the shallower parts of the coastline, and there is conjecture about the nature of large landslides that enter the water. This has been shown to subsequently affect water in enclosed bays and lakes, but a landslide large enough to cause a transoceanic tsunami has not occurred within recorded history. Susceptible locations are believed to be the Big Island of Hawaii, Fogo in the Cape Verde Islands, La Reunion in the Indian Ocean, and Cumbre Vieja on the island of La Palma in the Canary Islands, along with other volcanic ocean islands. This is because large masses of relatively unconsolidated volcanic material occur on the flanks and, in some cases, detachment planes are believed to be developing. However, there is growing controversy about how dangerous these slopes actually are.[40]
Some meteorological conditions, especially rapid changes in barometric pressure, as seen with the passing of a front, can displace bodies of water enough to cause trains of waves with wavelengths comparable to seismic tsunamis, but usually with lower energies. These are essentially dynamically equivalent to seismic tsunamis, the only differences being that meteotsunamis lack the transoceanic reach of significant seismic tsunamis and that the force that displaces the water is sustained over some length of time such that meteotsunamis cannot be modelled as having been caused instantaneously. In spite of their lower energies, on shorelines where they can be amplified by resonance, they are sometimes powerful enough to cause localised damage and potential for loss of life. They have been documented in many places, including the Great Lakes, the Aegean Sea, the English Channel, and the Balearic Islands, where they are common enough to have a local name, rissaga. In Sicily they are called marubbio and in Nagasaki Bay, they are called abiki. Some examples of destructive meteotsunamis include 31 March 1979 at Nagasaki and 15 June 2006 at Menorca, the latter causing damage in the tens of millions of euros.[41]
Meteotsunamis should not be confused with storm surges, which are local increases in sea level associated with the low barometric pressure of passing tropical cyclones, nor should they be confused with setup, the temporary local raising of sea level caused by strong on-shore winds. Storm surges and setup are also dangerous causes of coastal flooding in severe weather but their dynamics are completely unrelated to tsunami waves.[41] They are unable to propagate beyond their sources, as waves do.
There have been studies of the potential of inducing tsunami waves as a tectonic weapon, and at least one actual attempt to create them.
In World War II, the New Zealand Military Forces initiated Project Seal, which attempted to create small tsunamis with explosives in the area of today's Shakespear Regional Park; the attempt failed.[42]
There has been considerable speculation on the possibility of using nuclear weapons to cause tsunamis near an enemy coastline. Even during World War II consideration of the idea using conventional explosives was explored. Nuclear testing in the Pacific Proving Ground by the United States seemed to generate poor results. Operation Crossroads fired two 20 kilotonnes of TNT (84 TJ) bombs, one in the air and one underwater, above and below the shallow (50 m (160 ft)) waters of the Bikini Atoll lagoon. Fired about 6 km (3.7 mi) from the nearest island, the waves there were no higher than 3–4 m (9.8–13.1 ft) upon reaching the shoreline. Other underwater tests, mainly Hardtack I/Wahoo (deep water) and Hardtack I/Umbrella (shallow water) confirmed the results. Analysis of the effects of shallow and deep underwater explosions indicate that the energy of the explosions does not easily generate the kind of deep, all-ocean waveforms which are tsunamis; most of the energy creates steam, causes vertical fountains above the water, and creates compressional waveforms.[43] Tsunamis are hallmarked by permanent large vertical displacements of very large volumes of water which do not occur in explosions.
Tsunamis cause damage by two mechanisms: the smashing force of a wall of water travelling at high speed, and the destructive power of a large volume of water draining off the land and carrying a large amount of debris with it, even with waves that do not appear to be large.
While everyday wind waves have a wavelength (from crest to crest) of about 100 metres (330 ft) and a height of roughly 2 metres (6.6 ft), a tsunami in the deep ocean has a much larger wavelength of up to 200 kilometres (120 mi). Such a wave travels at well over 800 kilometres per hour (500 mph), but owing to the enormous wavelength the wave oscillation at any given point takes 20 or 30 minutes to complete a cycle and has an amplitude of only about 1 metre (3.3 ft).[44] This makes tsunamis difficult to detect over deep water, where ships are unable to feel their passage.
The velocity of a tsunami can be calculated by taking the square root of the water depth in metres multiplied by the acceleration due to gravity (approximated to 10 m/s²). For example, if the Pacific Ocean is considered to have a depth of 5000 metres, the velocity of a tsunami would be √(5000 × 10) = √50000 ≈ 224 metres per second (735 feet per second), which equates to a speed of about 806 kilometres per hour (roughly 500 miles per hour). This is the formula used for calculating the velocity of shallow-water waves. Even the deep ocean is shallow in this sense, because a tsunami wave is so long (horizontally from crest to crest) by comparison.
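The same calculation is easy to script. The minimal Python sketch below simply evaluates the shallow-water wave speed v = √(g·d) for a few depths, using g = 9.81 m/s² rather than the rounded value of 10 used above; the function name is illustrative.

```python
# Minimal sketch of the shallow-water wave speed used in the text: v = sqrt(g * d).
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed(depth_m: float) -> float:
    """Approximate tsunami propagation speed (m/s) for a given water depth (m)."""
    return math.sqrt(G * depth_m)

for depth in (5000, 4000, 50, 10):
    v = tsunami_speed(depth)
    print(f"depth {depth:5d} m -> {v:6.1f} m/s = {v * 3.6:7.1f} km/h")
# ~5000 m of open ocean gives roughly 220 m/s (~800 km/h); the wave slows sharply
# as the water shallows near the coast.
```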
The reason for the Japanese name "harbour wave" is that sometimes a village's fishermen would sail out, and encounter no unusual waves while out at sea fishing, and come back to land to find their village devastated by a huge wave.
As the tsunami approaches the coast and the waters become shallow, wave shoaling compresses the wave and its speed decreases below 80 kilometres per hour (50 mph). Its wavelength diminishes to less than 20 kilometres (12 mi) and its amplitude grows enormously – in accord with Green's law. Since the wave still has the same very long period, the tsunami may take minutes to reach full height. Except for the very largest tsunamis, the approaching wave does not break, but rather appears like a fast-moving tidal bore.[45] Open bays and coastlines adjacent to very deep water may shape the tsunami further into a step-like wave with a steep-breaking front.
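A brief sketch of the Green's-law amplitude scaling mentioned above: neglecting dissipation, reflection and changes in channel width, the amplitude varies as the inverse fourth root of the water depth. The depths and the 1 m offshore amplitude below are illustrative values only.

```python
# Minimal sketch of Green's law for wave shoaling: A ∝ d^(-1/4), i.e.
# A_shallow = A_deep * (d_deep / d_shallow) ** 0.25.

def shoaled_amplitude(a_deep_m: float, d_deep_m: float, d_shallow_m: float) -> float:
    """Amplitude after moving from depth d_deep_m to depth d_shallow_m (Green's law)."""
    return a_deep_m * (d_deep_m / d_shallow_m) ** 0.25

# A 1 m amplitude wave in 5000 m of water, arriving at 10 m depth:
print(f"{shoaled_amplitude(1.0, 5000.0, 10.0):.1f} m")   # ≈ 4.7 m, before run-up effects
```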
When the tsunami's wave peak reaches the shore, the resulting temporary rise in sea level is termed run up. Run up is measured in metres above a reference sea level.[45] A large tsunami may feature multiple waves arriving over a period of hours, with significant time between the wave crests. The first wave to reach the shore may not have the highest run-up.[46]
About 80% of tsunamis occur in the Pacific Ocean, but they are possible wherever there are large bodies of water, including lakes. They are caused by earthquakes, landslides, volcanic explosions, glacier calvings, and bolides.
All waves have a positive and negative peak; that is, a ridge and a trough. In the case of a propagating wave like a tsunami, either may be the first to arrive. If the first part to arrive at the shore is the ridge, a massive breaking wave or sudden flooding will be the first effect noticed on land. However, if the first part to arrive is a trough, a drawback will occur as the shoreline recedes dramatically, exposing normally submerged areas. The drawback can exceed hundreds of metres, and people unaware of the danger sometimes remain near the shore to satisfy their curiosity or to collect fish from the exposed seabed.
A typical wave period for a damaging tsunami is about twelve minutes. Thus, the sea recedes in the drawback phase, with areas well below sea level exposed after three minutes. For the next six minutes, the wave trough builds into a ridge which may flood the coast, and destruction ensues. During the next six minutes, the wave changes from a ridge to a trough, and the flood waters recede in a second drawback. Victims and debris may be swept into the ocean. The process repeats with succeeding waves.
As with earthquakes, several attempts have been made to set up scales of tsunami intensity or magnitude to allow comparison between different events.[47]
The first scales used routinely to measure the intensity of tsunamis were the Sieberg-Ambraseys scale (1962), used in the Mediterranean Sea and the Imamura-Iida intensity scale (1963), used in the Pacific Ocean. The latter scale was modified by Soloviev (1972), who calculated the tsunami intensity "I" according to the formula:
I = 1/2 + log₂ Hav

where Hav is the "tsunami height", averaged along the nearest coastline, with the tsunami height defined as the rise of the water level above the normal tidal level at the time of occurrence of the tsunami.[48] This scale, known as the Soloviev-Imamura tsunami intensity scale, is used in the global tsunami catalogues compiled by the NGDC/NOAA[49] and the Novosibirsk Tsunami Laboratory as the main parameter for the size of the tsunami.
This formula yields:
In 2013, following the intensively studied tsunamis in 2004 and 2011, a new 12-point scale was proposed, the Integrated Tsunami Intensity Scale (ITIS-2012), intended to match as closely as possible to the modified ESI2007 and EMS earthquake intensity scales.[50][51]
The first scale that genuinely calculated a magnitude for a tsunami, rather than an intensity at a particular location was the ML scale proposed by Murty & Loomis based on the potential energy.[47] Difficulties in calculating the potential energy of the tsunami mean that this scale is rarely used. Abe introduced the tsunami magnitude scale
Mt, calculated from

Mt = a log h + b log R + D
where h is the maximum tsunami-wave amplitude (in m) measured by a tide gauge at a distance R from the epicentre, a, b and D are constants used to make the Mt scale match as closely as possible with the moment magnitude scale.[52]
|
135 |
+
|
136 |
+
Several terms are used to describe the different characteristics of tsunami in terms of their height:[53][54][55][56]
|
137 |
+
|
138 |
+
Drawbacks can serve as a brief warning. People who observe drawback (many survivors report an accompanying sucking sound) can survive only if they immediately run for high ground or seek the upper floors of nearby buildings. In 2004, ten-year-old Tilly Smith of Surrey, England, was on Maikhao beach in Phuket, Thailand, with her parents and sister, and having recently learned about tsunamis in school, told her family that a tsunami might be imminent. Her parents warned others minutes before the wave arrived, saving dozens of lives. She credited her geography teacher, Andrew Kearney.
|
139 |
+
|
140 |
+
In the 2004 Indian Ocean tsunami drawback was not reported on the African coast or any other east-facing coasts that it reached. This was because the initial wave moved downwards on the eastern side of the megathrust and upwards on the western side. The western pulse hit coastal Africa and other western areas.
|
141 |
+
|
142 |
+
A tsunami cannot be precisely predicted, even if the magnitude and location of an earthquake are known. Geologists, oceanographers, and seismologists analyse each earthquake and, based on many factors, may or may not issue a tsunami warning. However, there are some warning signs of an impending tsunami, and automated systems can provide warnings immediately after an earthquake in time to save lives. One of the most successful systems uses bottom pressure sensors, attached to buoys, which constantly monitor the pressure of the overlying water column.
|
143 |
+
|
144 |
+
Regions with a high tsunami risk typically use tsunami warning systems to warn the population before the wave reaches land. On the west coast of the United States, which is prone to Pacific Ocean tsunamis, warning signs indicate evacuation routes. In Japan, the community is well-educated about earthquakes and tsunamis, and along the Japanese shorelines the tsunami warning signs are reminders of the natural hazards, together with a network of warning sirens, typically at the top of nearby cliffs and surrounding hills.[58]
|
145 |
+
|
146 |
+
The Pacific Tsunami Warning System is based in Honolulu, Hawaiʻi. It monitors Pacific Ocean seismic activity. A sufficiently large earthquake magnitude and other information triggers a tsunami warning. While the subduction zones around the Pacific are seismically active, not all earthquakes generate a tsunami. Computers assist in analysing the tsunami risk of every earthquake that occurs in the Pacific Ocean and the adjoining land masses.
|
147 |
+
|
148 |
+
Tsunami hazard sign at Bamfield, British Columbia
|
149 |
+
|
150 |
+
A tsunami warning sign in Kamakura, Japan
|
151 |
+
|
152 |
+
A Tsunami hazard sign (Spanish - English) in Iquique, Chile.
|
153 |
+
|
154 |
+
Tsunami Evacuation Route signage along U.S. Route 101, in Washington
|
155 |
+
|
156 |
+
As a direct result of the Indian Ocean tsunami, a re-appraisal of the tsunami threat for all coastal areas is being undertaken by national governments and the United Nations Disaster Mitigation Committee. A tsunami warning system is being installed in the Indian Ocean.
|
157 |
+
|
158 |
+
Computer models can predict tsunami arrival, usually within minutes of the arrival time. Bottom pressure sensors can relay information in real time. Based on these pressure readings and other seismic information and the seafloor's shape (bathymetry) and coastal topography, the models estimate the amplitude and surge height of the approaching tsunami. All Pacific Rim countries collaborate in the Tsunami Warning System and most regularly practise evacuation and other procedures. In Japan, such preparation is mandatory for government, local authorities, emergency services and the population.
|
159 |
+
|
160 |
+
Along the United States west coast, in addition to sirens, warnings are sent on television and radio via the National Weather Service, using the Emergency Alert System.
|
161 |
+
|
162 |
+
Some zoologists hypothesise that some animal species have an ability to sense subsonic Rayleigh waves from an earthquake or a tsunami. If correct, monitoring their behaviour could provide advance warning of earthquakes and tsunamis. However, the evidence is controversial and is not widely accepted. There are unsubstantiated claims about the Lisbon quake that some animals escaped to higher ground, while many other animals in the same areas drowned. The phenomenon was also noted by media sources in Sri Lanka in the 2004 Indian Ocean earthquake.[59][60] It is possible that certain animals (e.g., elephants) may have heard the sounds of the tsunami as it approached the coast. The elephants' reaction was to move away from the approaching noise. By contrast, some humans went to the shore to investigate and many drowned as a result.
|
163 |
+
|
164 |
+
In some tsunami-prone countries, earthquake engineering measures have been taken to reduce the damage caused onshore.
|
165 |
+
|
166 |
+
Japan, where tsunami science and response measures first began following a disaster in 1896, has produced ever-more elaborate countermeasures and response plans.[61] The country has built many tsunami walls of up to 12 metres (39 ft) high to protect populated coastal areas. Other localities have built floodgates of up to 15.5 metres (51 ft) high and channels to redirect the water from an incoming tsunami. However, their effectiveness has been questioned, as tsunamis often overtop the barriers.
|
167 |
+
|
168 |
+
The Fukushima Daiichi nuclear disaster was directly triggered by the 2011 Tōhoku earthquake and tsunami, when waves exceeded the height of the plant's sea wall.[62] Iwate Prefecture, which is an area at high risk from tsunami, had tsunami barrier walls (the Taro sea wall) totalling 25 kilometres (16 mi) in length at coastal towns. The 2011 tsunami toppled more than 50% of the walls and caused catastrophic damage.[63]
|
169 |
+
|
170 |
+
The Okushiri, Hokkaidō tsunami which struck Okushiri Island of Hokkaidō within two to five minutes of the earthquake on July 12, 1993, created waves as much as 30 metres (100 ft) tall—as high as a 10-storey building. The port town of Aonae was completely surrounded by a tsunami wall, but the waves washed right over the wall and destroyed all the wood-framed structures in the area. The wall may have succeeded in slowing down and moderating the height of the tsunami, but it did not prevent major destruction and loss of life.[64]
|
en/4937.html.txt
ADDED
@@ -0,0 +1,170 @@
1 |
+
|
2 |
+
|
3 |
+
A tsunami (Japanese: 津波) (/(t)suːˈnɑːmi, (t)sʊˈ-/ (t)soo-NAH-mee, (t)suu-[1][2][3][4][5] pronounced [tsɯnami]) is a series of waves in a water body caused by the displacement of a large volume of water, generally in an ocean or a large lake. Earthquakes, volcanic eruptions and other underwater explosions (including detonations, landslides, glacier calvings, meteorite impacts and other disturbances) above or below water all have the potential to generate a tsunami.[6] Unlike normal ocean waves, which are generated by wind, or tides, which are generated by the gravitational pull of the Moon and the Sun, a tsunami is generated by the displacement of water.
|
4 |
+
|
5 |
+
Tsunami waves do not resemble normal undersea currents or sea waves because their wavelength is far longer.[7] Rather than appearing as a breaking wave, a tsunami may instead initially resemble a rapidly rising tide.[8] For this reason, it is often referred to as a tidal wave,[9] although this usage is not favoured by the scientific community because it might give the false impression of a causal relationship between tides and tsunamis.[10] Tsunamis generally consist of a series of waves, with periods ranging from minutes to hours, arriving in a so-called "wave train".[11] Wave heights of tens of metres can be generated by large events. Although the impact of tsunamis is limited to coastal areas, their destructive power can be enormous, and they can affect entire ocean basins. The 2004 Indian Ocean tsunami was among the deadliest natural disasters in human history, with at least 230,000 people killed or missing in 14 countries bordering the Indian Ocean.
|
6 |
+
|
7 |
+
The Ancient Greek historian Thucydides suggested in his 5th century BC History of the Peloponnesian War that tsunamis were related to submarine earthquakes,[12][13] but the understanding of tsunamis remained slim until the 20th century and much remains unknown. Major areas of current research include determining why some large earthquakes do not generate tsunamis while other smaller ones do; accurately forecasting the passage of tsunamis across the oceans; and forecasting how tsunami waves interact with shorelines.
|
8 |
+
|
9 |
+
The term "tsunami" is a borrowing from the Japanese tsunami 津波, meaning "harbour wave". For the plural, one can either follow ordinary English practice and add an s, or use an invariable plural as in the Japanese.[14] Some English speakers alter the word's initial /ts/ to an /s/ by dropping the "t", since English does not natively permit /ts/ at the beginning of words, though the original Japanese pronunciation is /ts/.
|
10 |
+
|
11 |
+
Tsunamis are sometimes referred to as tidal waves.[15] This once-popular term derives from the most common appearance of a tsunami, which is that of an extraordinarily high tidal bore. Tsunamis and tides both produce waves of water that move inland, but in the case of a tsunami, the inland movement of water may be much greater, giving the impression of an incredibly high and forceful tide. In recent years, the term "tidal wave" has fallen out of favour, especially in the scientific community, because the causes of tsunamis have nothing to do with those of tides, which are produced by the gravitational pull of the moon and sun rather than the displacement of water. Although the meanings of "tidal" include "resembling"[16] or "having the form or character of"[17] the tides, use of the term tidal wave is discouraged by geologists and oceanographers.
|
12 |
+
|
13 |
+
A 1969 episode of the TV crime show Hawaii Five-O entitled "Forty Feet High and It Kills!" used the terms "tsunami" and "tidal wave" interchangeably.[18]
|
14 |
+
|
15 |
+
The term seismic sea wave is also used to refer to the phenomenon, because the waves most often are generated by seismic activity such as earthquakes.[19] Prior to the rise of the use of the term tsunami in English, scientists generally encouraged the use of the term seismic sea wave rather than tidal wave. However, like tsunami, seismic sea wave is not a completely accurate term, as forces other than earthquakes – including underwater landslides, volcanic eruptions, underwater explosions, land or ice slumping into the ocean, meteorite impacts, and the weather when the atmospheric pressure changes very rapidly – can generate such waves by displacing water.[20][21]
|
16 |
+
|
17 |
+
While Japan may have the longest recorded history of tsunamis, the sheer destruction caused by the 2004 Indian Ocean earthquake and tsunami event marks it as the most devastating of its kind in modern times, killing around 230,000 people.[22] The Sumatran region is also accustomed to tsunamis, with earthquakes of varying magnitudes regularly occurring off the coast of the island.[23]
|
18 |
+
|
19 |
+
Tsunamis are an often underestimated hazard in the Mediterranean Sea and parts of Europe. Of historical and current (with regard to risk assumptions) importance are the 1755 Lisbon earthquake and tsunami (which was caused by the Azores–Gibraltar Transform Fault), the 1783 Calabrian earthquakes, each causing several tens of thousands of deaths and the 1908 Messina earthquake and tsunami. The tsunami claimed more than 123,000 lives in Sicily and Calabria and is among the most deadly natural disasters in modern Europe. The Storegga Slide in the Norwegian Sea and some examples of tsunamis affecting the British Isles refer to landslide and meteotsunamis predominantly and less to earthquake-induced waves.
|
20 |
+
|
21 |
+
As early as 426 BC the Greek historian Thucydides inquired in his book History of the Peloponnesian War about the causes of tsunami, and was the first to argue that ocean earthquakes must be the cause.[12][13]
|
22 |
+
|
23 |
+
The cause, in my opinion, of this phenomenon must be sought in the earthquake. At the point where its shock has been the most violent the sea is driven back, and suddenly recoiling with redoubled force, causes the inundation. Without an earthquake I do not see how such an accident could happen.[24]
|
24 |
+
|
25 |
+
The Roman historian Ammianus Marcellinus (Res Gestae 26.10.15–19) described the typical sequence of a tsunami, including an incipient earthquake, the sudden retreat of the sea and a following gigantic wave, after the 365 AD tsunami devastated Alexandria.[25][26]
|
26 |
+
|
27 |
+
The principal generation mechanism of a tsunami is the displacement of a substantial volume of water or perturbation of the sea.[27] This displacement of water is usually attributed to either earthquakes, landslides, volcanic eruptions, glacier calvings or more rarely by meteorites and nuclear tests.[28][29]
|
28 |
+
|
29 |
+
Tsunamis can be generated when the sea floor abruptly deforms and vertically displaces the overlying water. Tectonic earthquakes are a particular kind of earthquake that are associated with the Earth's crustal deformation; when these earthquakes occur beneath the sea, the water above the deformed area is displaced from its equilibrium position.[30] More specifically, a tsunami can be generated when thrust faults associated with convergent or destructive plate boundaries move abruptly, resulting in water displacement, owing to the vertical component of movement involved. Movement on normal (extensional) faults can also cause displacement of the seabed, but only the largest of such events (typically related to flexure in the outer trench swell) cause enough displacement to give rise to a significant tsunami, such as the 1977 Sumba and 1933 Sanriku events.[31][32]
|
30 |
+
|
31 |
+
Drawing of tectonic plate boundary before earthquake
|
32 |
+
|
33 |
+
Over-riding plate bulges under strain, causing tectonic uplift.
|
34 |
+
|
35 |
+
Plate slips, causing subsidence and releasing energy into water.
|
36 |
+
|
37 |
+
The energy released produces tsunami waves.
|
38 |
+
|
39 |
+
Tsunamis have a small wave height offshore, and a very long wavelength (often hundreds of kilometres long, whereas normal ocean waves have a wavelength of only 30 or 40 metres),[33] which is why they generally pass unnoticed at sea, forming only a slight swell usually about 300 millimetres (12 in) above the normal sea surface. They grow in height when they reach shallower water, in a wave shoaling process described below. A tsunami can occur in any tidal state and even at low tide can still inundate coastal areas.
|
40 |
+
|
41 |
+
On April 1, 1946, the 8.6 Mw Aleutian Islands earthquake occurred with a maximum Mercalli intensity of VI (Strong). It generated a tsunami which inundated Hilo on the island of Hawaii with a 14-metre high (46 ft) surge. Between 165 and 173 people were killed. The area where the earthquake occurred is where the Pacific Ocean floor is subducting (or being pushed downwards) under Alaska.
|
42 |
+
|
43 |
+
Examples of tsunamis originating at locations away from convergent boundaries include Storegga about 8,000 years ago, Grand Banks in 1929, and Papua New Guinea in 1998 (Tappin, 2001). The Grand Banks and Papua New Guinea tsunamis came from earthquakes which destabilised sediments, causing them to flow into the ocean and generate a tsunami. They dissipated before travelling transoceanic distances.
|
44 |
+
|
45 |
+
The cause of the Storegga sediment failure is unknown. Possibilities include an overloading of the sediments, an earthquake or a release of gas hydrates (methane etc.).
|
46 |
+
|
47 |
+
The 1960 Valdivia earthquake (Mw 9.5), 1964 Alaska earthquake (Mw 9.2), 2004 Indian Ocean earthquake (Mw 9.2), and 2011 Tōhoku earthquake (Mw9.0) are recent examples of powerful megathrust earthquakes that generated tsunamis (known as teletsunamis) that can cross entire oceans. Smaller (Mw 4.2) earthquakes in Japan can trigger tsunamis (called local and regional tsunamis) that can devastate stretches of coastline, but can do so in only a few minutes at a time.
|
48 |
+
|
49 |
+
In the 1950s, it was discovered that tsunamis larger than had previously been believed possible can be caused by giant submarine landslides. These rapidly displace large water volumes, as energy transfers to the water at a rate faster than the water can absorb. Their existence was confirmed in 1958, when a giant landslide in Lituya Bay, Alaska, caused the highest wave ever recorded, which had a height of 524 metres (1,719 ft).[34] The wave did not travel far, as it struck land almost immediately. The wave struck three boats — each with two people aboard — anchored in the bay. One boat rode out the wave, but the wave sank the other two, killing both people aboard one of them.[35][36][37]
|
50 |
+
|
51 |
+
Another landslide-tsunami event occurred in 1963 when a massive landslide from Monte Toc entered the reservoir behind the Vajont Dam in Italy. The resulting wave surged over the 262-metre (860 ft)-high dam by 250 metres (820 ft) and destroyed several towns. Around 2,000 people died.[38][39] Scientists named these waves megatsunamis.
|
52 |
+
|
53 |
+
Some geologists claim that large landslides from volcanic islands, e.g. Cumbre Vieja on La Palma in the Canary Islands, may be able to generate megatsunamis that can cross oceans, but this is disputed by many others.
|
54 |
+
|
55 |
+
In general, landslides generate displacements mainly in the shallower parts of the coastline, and there is conjecture about the nature of large landslides that enter the water. This has been shown to subsequently affect water in enclosed bays and lakes, but a landslide large enough to cause a transoceanic tsunami has not occurred within recorded history. Susceptible locations are believed to be the Big Island of Hawaii, Fogo in the Cape Verde Islands, La Reunion in the Indian Ocean, and Cumbre Vieja on the island of La Palma in the Canary Islands, along with other volcanic ocean islands. This is because large masses of relatively unconsolidated volcanic material occur on the flanks, and in some cases detachment planes are believed to be developing. However, there is growing controversy about how dangerous these slopes actually are.[40]
|
56 |
+
|
57 |
+
Some meteorological conditions, especially rapid changes in barometric pressure, as seen with the passing of a front, can displace bodies of water enough to cause trains of waves with wavelengths comparable to seismic tsunamis, but usually with lower energies. These are essentially dynamically equivalent to seismic tsunamis, the only differences being that meteotsunamis lack the transoceanic reach of significant seismic tsunamis and that the force that displaces the water is sustained over some length of time such that meteotsunamis cannot be modelled as having been caused instantaneously. In spite of their lower energies, on shorelines where they can be amplified by resonance, they are sometimes powerful enough to cause localised damage and potential for loss of life. They have been documented in many places, including the Great Lakes, the Aegean Sea, the English Channel, and the Balearic Islands, where they are common enough to have a local name, rissaga. In Sicily they are called marubbio and in Nagasaki Bay, they are called abiki. Some examples of destructive meteotsunamis include 31 March 1979 at Nagasaki and 15 June 2006 at Menorca, the latter causing damage in the tens of millions of euros.[41]
|
58 |
+
|
59 |
+
Meteotsunamis should not be confused with storm surges, which are local increases in sea level associated with the low barometric pressure of passing tropical cyclones, nor should they be confused with setup, the temporary local raising of sea level caused by strong on-shore winds. Storm surges and setup are also dangerous causes of coastal flooding in severe weather but their dynamics are completely unrelated to tsunami waves.[41] They are unable to propagate beyond their sources, as waves do.
|
60 |
+
|
61 |
+
There have been studies of the potential for inducing tsunami waves as a tectonic weapon, and at least one actual attempt to create them.
|
62 |
+
|
63 |
+
In World War II, the New Zealand Military Forces initiated Project Seal, which attempted to create small tsunamis with explosives in the area of today's Shakespear Regional Park; the attempt failed.[42]
|
64 |
+
|
65 |
+
There has been considerable speculation on the possibility of using nuclear weapons to cause tsunamis near an enemy coastline. Even during World War II, the idea of using conventional explosives for this purpose was explored. Nuclear testing in the Pacific Proving Ground by the United States seemed to generate poor results. Operation Crossroads fired two 20 kilotonnes of TNT (84 TJ) bombs, one in the air and one underwater, above and below the shallow (50 m (160 ft)) waters of the Bikini Atoll lagoon. Fired about 6 km (3.7 mi) from the nearest island, the waves there were no higher than 3–4 m (9.8–13.1 ft) upon reaching the shoreline. Other underwater tests, mainly Hardtack I/Wahoo (deep water) and Hardtack I/Umbrella (shallow water), confirmed the results. Analysis of the effects of shallow and deep underwater explosions indicates that the energy of the explosions does not easily generate the kind of deep, all-ocean waveforms which are tsunamis; most of the energy creates steam, causes vertical fountains above the water, and creates compressional waveforms.[43] Tsunamis are hallmarked by permanent large vertical displacements of very large volumes of water which do not occur in explosions.
|
66 |
+
|
67 |
+
Tsunamis cause damage by two mechanisms: the smashing force of a wall of water travelling at high speed, and the destructive power of a large volume of water draining off the land and carrying a large amount of debris with it, even with waves that do not appear to be large.
|
68 |
+
|
69 |
+
While everyday wind waves have a wavelength (from crest to crest) of about 100 metres (330 ft) and a height of roughly 2 metres (6.6 ft), a tsunami in the deep ocean has a much larger wavelength of up to 200 kilometres (120 mi). Such a wave travels at well over 800 kilometres per hour (500 mph), but owing to the enormous wavelength the wave oscillation at any given point takes 20 or 30 minutes to complete a cycle and has an amplitude of only about 1 metre (3.3 ft).[44] This makes tsunamis difficult to detect over deep water, where ships are unable to feel their passage.
|
70 |
+
|
71 |
+
The velocity of a tsunami can be calculated by taking the square root of the water depth in metres multiplied by the acceleration due to gravity (approximated to 10 m/s2). For example, if the Pacific Ocean is considered to have a depth of 5000 metres, the velocity of a tsunami would be √(5000 × 10) = √50000 ≈ 224 metres per second (735 feet per second), which equates to a speed of ~806 kilometres per hour or about 500 miles per hour. This is the formula used for calculating the velocity of shallow-water waves. Even the deep ocean is shallow in this sense because a tsunami wave is so long (horizontally from crest to crest) by comparison.
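The same arithmetic can be written as a small Python sketch (shallow-water approximation v = sqrt(g × d), with g taken as 10 m/s2 as in the text; the 5000 m depth is the example value used above):

import math

def tsunami_speed(depth_metres, g=10.0):
    # Shallow-water wave speed: v = sqrt(g * d), in metres per second.
    return math.sqrt(g * depth_metres)

v = tsunami_speed(5000)  # about 224 m/s for a 5,000 m deep ocean
print(f"{v:.0f} m/s  =  {v * 3.6:.0f} km/h")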
|
72 |
+
|
73 |
+
The reason for the Japanese name "harbour wave" is that sometimes a village's fishermen would sail out, and encounter no unusual waves while out at sea fishing, and come back to land to find their village devastated by a huge wave.
|
74 |
+
|
75 |
+
As the tsunami approaches the coast and the waters become shallow, wave shoaling compresses the wave and its speed decreases below 80 kilometres per hour (50 mph). Its wavelength diminishes to less than 20 kilometres (12 mi) and its amplitude grows enormously – in accord with Green's law. Since the wave still has the same very long period, the tsunami may take minutes to reach full height. Except for the very largest tsunamis, the approaching wave does not break, but rather appears like a fast-moving tidal bore.[45] Open bays and coastlines adjacent to very deep water may shape the tsunami further into a step-like wave with a steep-breaking front.
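A minimal sketch of the shoaling effect, assuming the usual statement of Green's law (wave amplitude growing with the inverse fourth root of the water depth); the offshore amplitude and the depths used here are illustrative values only:

def shoaled_amplitude(a_deep_metres, depth_deep_metres, depth_shallow_metres):
    # Green's law: amplitude scales as depth**(-1/4) as the water shallows.
    return a_deep_metres * (depth_deep_metres / depth_shallow_metres) ** 0.25

# Illustrative case: a 1 m offshore wave over 5,000 m of water reaching 10 m deep water.
print(f"{shoaled_amplitude(1.0, 5000.0, 10.0):.1f} m")  # roughly 4.7 m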
|
76 |
+
|
77 |
+
When the tsunami's wave peak reaches the shore, the resulting temporary rise in sea level is termed run up. Run up is measured in metres above a reference sea level.[45] A large tsunami may feature multiple waves arriving over a period of hours, with significant time between the wave crests. The first wave to reach the shore may not have the highest run-up.[46]
|
78 |
+
|
79 |
+
About 80% of tsunamis occur in the Pacific Ocean, but they are possible wherever there are large bodies of water, including lakes. They are caused by earthquakes, landslides, volcanic explosions, glacier calvings, and bolides.
|
80 |
+
|
81 |
+
All waves have a positive and negative peak; that is, a ridge and a trough. In the case of a propagating wave like a tsunami, either may be the first to arrive. If the first part to arrive at the shore is the ridge, a massive breaking wave or sudden flooding will be the first effect noticed on land. However, if the first part to arrive is a trough, a drawback will occur as the shoreline recedes dramatically, exposing normally submerged areas. The drawback can exceed hundreds of metres, and people unaware of the danger sometimes remain near the shore to satisfy their curiosity or to collect fish from the exposed seabed.
|
82 |
+
|
83 |
+
A typical wave period for a damaging tsunami is about twelve minutes. Thus, the sea recedes in the drawback phase, with areas well below sea level exposed after three minutes. For the next six minutes, the wave trough builds into a ridge which may flood the coast, and destruction ensues. During the next six minutes, the wave changes from a ridge to a trough, and the flood waters recede in a second drawback. Victims and debris may be swept into the ocean. The process repeats with succeeding waves.
|
84 |
+
|
85 |
+
As with earthquakes, several attempts have been made to set up scales of tsunami intensity or magnitude to allow comparison between different events.[47]
|
86 |
+
|
87 |
+
The first scales used routinely to measure the intensity of tsunamis were the Sieberg-Ambraseys scale (1962), used in the Mediterranean Sea, and the Imamura-Iida intensity scale (1963), used in the Pacific Ocean. The latter scale was modified by Soloviev (1972), who calculated the tsunami intensity "I" according to the formula:
|
88 |
+
|
89 |
+
I = 1/2 + log2(Hav)

where Hav is the "tsunami height", averaged along the nearest coastline, with the tsunami height defined as the rise of the water level above the normal tidal level at the time of occurrence of the tsunami.[48] This scale, known as the Soloviev-Imamura tsunami intensity scale, is used in the global tsunami catalogues compiled by the NGDC/NOAA[49] and the Novosibirsk Tsunami Laboratory as the main parameter for the size of the tsunami.
|
109 |
+
|
110 |
+
This formula yields:
|
111 |
+
|
112 |
+
In 2013, following the intensively studied tsunamis in 2004 and 2011, a new 12-point scale was proposed, the Integrated Tsunami Intensity Scale (ITIS-2012), intended to match as closely as possible to the modified ESI2007 and EMS earthquake intensity scales.[50][51]
|
113 |
+
|
114 |
+
The first scale that genuinely calculated a magnitude for a tsunami, rather than an intensity at a particular location, was the ML scale proposed by Murty & Loomis based on the potential energy.[47] Difficulties in calculating the potential energy of the tsunami mean that this scale is rarely used. Abe introduced the tsunami magnitude scale
Mt, calculated from

Mt = a log h + b log R + D
where h is the maximum tsunami-wave amplitude (in m) measured by a tide gauge at a distance R from the epicentre, a, b and D are constants used to make the Mt scale match as closely as possible with the moment magnitude scale.[52]
|
135 |
+
|
136 |
+
Several terms are used to describe the different characteristics of tsunami in terms of their height:[53][54][55][56]
|
137 |
+
|
138 |
+
Drawbacks can serve as a brief warning. People who observe drawback (many survivors report an accompanying sucking sound) can survive only if they immediately run for high ground or seek the upper floors of nearby buildings. In 2004, ten-year-old Tilly Smith of Surrey, England, was on Maikhao beach in Phuket, Thailand, with her parents and sister, and having recently learned about tsunamis in school, told her family that a tsunami might be imminent. Her parents warned others minutes before the wave arrived, saving dozens of lives. She credited her geography teacher, Andrew Kearney.
|
139 |
+
|
140 |
+
In the 2004 Indian Ocean tsunami drawback was not reported on the African coast or any other east-facing coasts that it reached. This was because the initial wave moved downwards on the eastern side of the megathrust and upwards on the western side. The western pulse hit coastal Africa and other western areas.
|
141 |
+
|
142 |
+
A tsunami cannot be precisely predicted, even if the magnitude and location of an earthquake are known. Geologists, oceanographers, and seismologists analyse each earthquake and, based on many factors, may or may not issue a tsunami warning. However, there are some warning signs of an impending tsunami, and automated systems can provide warnings immediately after an earthquake in time to save lives. One of the most successful systems uses bottom pressure sensors, attached to buoys, which constantly monitor the pressure of the overlying water column.
|
143 |
+
|
144 |
+
Regions with a high tsunami risk typically use tsunami warning systems to warn the population before the wave reaches land. On the west coast of the United States, which is prone to Pacific Ocean tsunamis, warning signs indicate evacuation routes. In Japan, the community is well-educated about earthquakes and tsunamis, and along the Japanese shorelines the tsunami warning signs are reminders of the natural hazards, together with a network of warning sirens, typically at the top of nearby cliffs and surrounding hills.[58]
|
145 |
+
|
146 |
+
The Pacific Tsunami Warning System is based in Honolulu, Hawaiʻi. It monitors Pacific Ocean seismic activity. A sufficiently large earthquake magnitude and other information triggers a tsunami warning. While the subduction zones around the Pacific are seismically active, not all earthquakes generate a tsunami. Computers assist in analysing the tsunami risk of every earthquake that occurs in the Pacific Ocean and the adjoining land masses.
|
147 |
+
|
148 |
+
Tsunami hazard sign at Bamfield, British Columbia
|
149 |
+
|
150 |
+
A tsunami warning sign in Kamakura, Japan
|
151 |
+
|
152 |
+
A Tsunami hazard sign (Spanish - English) in Iquique, Chile.
|
153 |
+
|
154 |
+
Tsunami Evacuation Route signage along U.S. Route 101, in Washington
|
155 |
+
|
156 |
+
As a direct result of the Indian Ocean tsunami, a re-appraisal of the tsunami threat for all coastal areas is being undertaken by national governments and the United Nations Disaster Mitigation Committee. A tsunami warning system is being installed in the Indian Ocean.
|
157 |
+
|
158 |
+
Computer models can predict tsunami arrival, usually within minutes of the arrival time. Bottom pressure sensors can relay information in real time. Based on these pressure readings and other seismic information and the seafloor's shape (bathymetry) and coastal topography, the models estimate the amplitude and surge height of the approaching tsunami. All Pacific Rim countries collaborate in the Tsunami Warning System and most regularly practise evacuation and other procedures. In Japan, such preparation is mandatory for government, local authorities, emergency services and the population.
|
159 |
+
|
160 |
+
Along the United States west coast, in addition to sirens, warnings are sent on television and radio via the National Weather Service, using the Emergency Alert System.
|
161 |
+
|
162 |
+
Some zoologists hypothesise that some animal species have an ability to sense subsonic Rayleigh waves from an earthquake or a tsunami. If correct, monitoring their behaviour could provide advance warning of earthquakes and tsunamis. However, the evidence is controversial and is not widely accepted. There are unsubstantiated claims about the Lisbon quake that some animals escaped to higher ground, while many other animals in the same areas drowned. The phenomenon was also noted by media sources in Sri Lanka in the 2004 Indian Ocean earthquake.[59][60] It is possible that certain animals (e.g., elephants) may have heard the sounds of the tsunami as it approached the coast. The elephants' reaction was to move away from the approaching noise. By contrast, some humans went to the shore to investigate and many drowned as a result.
|
163 |
+
|
164 |
+
In some tsunami-prone countries, earthquake engineering measures have been taken to reduce the damage caused onshore.
|
165 |
+
|
166 |
+
Japan, where tsunami science and response measures first began following a disaster in 1896, has produced ever-more elaborate countermeasures and response plans.[61] The country has built many tsunami walls of up to 12 metres (39 ft) high to protect populated coastal areas. Other localities have built floodgates of up to 15.5 metres (51 ft) high and channels to redirect the water from an incoming tsunami. However, their effectiveness has been questioned, as tsunamis often overtop the barriers.
|
167 |
+
|
168 |
+
The Fukushima Daiichi nuclear disaster was directly triggered by the 2011 Tōhoku earthquake and tsunami, when waves exceeded the height of the plant's sea wall.[62] Iwate Prefecture, which is an area at high risk from tsunami, had tsunami barrier walls (the Taro sea wall) totalling 25 kilometres (16 mi) in length at coastal towns. The 2011 tsunami toppled more than 50% of the walls and caused catastrophic damage.[63]
|
169 |
+
|
170 |
+
The Okushiri, Hokkaidō tsunami which struck Okushiri Island of Hokkaidō within two to five minutes of the earthquake on July 12, 1993, created waves as much as 30 metres (100 ft) tall—as high as a 10-storey building. The port town of Aonae was completely surrounded by a tsunami wall, but the waves washed right over the wall and destroyed all the wood-framed structures in the area. The wall may have succeeded in slowing down and moderating the height of the tsunami, but it did not prevent major destruction and loss of life.[64]
|
en/4938.html.txt
ADDED
@@ -0,0 +1,264 @@
1 |
+
|
2 |
+
|
3 |
+
East Germany, officially the German Democratic Republic (GDR; German: Deutsche Demokratische Republik [ˈdɔʏtʃə demoˈkʁaːtɪʃə ʁepuˈbliːk], DDR), was a state that existed from 1949 to 1990, the period when the eastern portion of Germany was part of the Eastern Bloc during the Cold War. Commonly described as a communist state in English usage, it described itself as a socialist "workers' and peasants' state".[7] It consisted of territory that was administered and occupied by Soviet forces following the end of World War II—the Soviet occupation zone of the Potsdam Agreement, bounded on the east by the Oder–Neisse line. The Soviet zone surrounded West Berlin but did not include it and West Berlin remained outside the jurisdiction of the GDR.
|
4 |
+
|
5 |
+
The GDR was established in the Soviet zone while the Federal Republic of Germany, commonly referred to as West Germany, was established in the three western zones. A satellite state of the Soviet Union,[8] Soviet occupation authorities began transferring administrative responsibility to German communist leaders in 1948 and the GDR began to function as a state on 7 October 1949. However, Soviet forces remained in the country throughout the Cold War. Until 1989, the GDR was governed by the Socialist Unity Party of Germany (SED), although other parties nominally participated in its alliance organisation, the National Front of the German Democratic Republic.[9] The SED made the teaching of Marxism–Leninism and the Russian language compulsory in schools.[10]
|
6 |
+
|
7 |
+
The economy was centrally planned and increasingly state-owned.[11] Prices of housing, basic goods and services were heavily subsidised and set by central government planners rather than rising and falling through supply and demand. Although the GDR had to pay substantial war reparations to the Soviets, it became the most successful economy in the Eastern Bloc. Emigration to the West was a significant problem: many of the emigrants were well-educated young people, and their departure weakened the state economically. The government fortified its western borders and built the Berlin Wall in 1961. Many people attempting to flee[12][13] were killed by border guards or booby traps such as landmines.[14] Many others spent large amounts of time imprisoned for attempting to escape.[15][16]
|
8 |
+
|
9 |
+
In 1989, numerous social, economic and political forces in the GDR and abroad led to the fall of the Berlin Wall and the establishment of a government committed to liberalisation. The following year, free and fair elections were held[17] and international negotiations led to the signing of the Final Settlement treaty on the status and borders of Germany. The GDR dissolved itself and Germany was reunified on 3 October 1990, becoming a fully sovereign state in the reunified Federal Republic of Germany. Several of the GDR's leaders, notably its last communist leader Egon Krenz, were prosecuted after reunification for crimes committed during the Cold War.
|
10 |
+
|
11 |
+
Geographically, the GDR bordered the Baltic Sea to the north, Poland to the east, Czechoslovakia to the southeast and West Germany to the southwest and west. Internally, the GDR also bordered the Soviet sector of Allied-occupied Berlin, known as East Berlin, which was also administered as the state's de facto capital. It also bordered the three sectors occupied by the United States, United Kingdom and France known collectively as West Berlin. The three sectors occupied by the Western nations were sealed off from the GDR by the Berlin Wall from its construction in 1961 until it was brought down in 1989.
|
12 |
+
|
13 |
+
The official name was Deutsche Demokratische Republik (German Democratic Republic), usually abbreviated to DDR (GDR). Both terms were used in East Germany, with increasing usage of the abbreviated form, especially since East Germany considered West Germans and West Berliners to be foreigners following the promulgation of its second constitution in 1968. West Germans, the western media and statesmen initially avoided the official name and its abbreviation, instead using terms like Ostzone (Eastern Zone),[18] Sowjetische Besatzungszone (Soviet Occupation Zone; often abbreviated to SBZ) and sogenannte DDR[19] or "so-called GDR".[20]
|
14 |
+
|
15 |
+
The centre of political power in East Berlin was referred to as Pankow (the seat of command of the Soviet forces in East Germany was referred to as Karlshorst).[18] Over time, however, the abbreviation DDR was also increasingly used colloquially by West Germans and West German media.[note 1]
|
16 |
+
|
17 |
+
When used by West Germans, Westdeutschland (West Germany) was a term almost always in reference to the geographic region of Western Germany and not to the area within the boundaries of the Federal Republic of Germany. However, this use was not always consistent and West Berliners frequently used the term Westdeutschland to denote the Federal Republic.[21] Before World War II, Ostdeutschland (eastern Germany) was used to describe all the territories east of the Elbe (East Elbia), as reflected in the works of sociologist Max Weber and political theorist Carl Schmitt.[22][23][24][25][26]
|
18 |
+
|
19 |
+
Explaining the internal impact of the GDR regime from the perspective of German history in the long term, historian Gerhard A. Ritter (2002) has argued that the East German state was defined by two dominant forces – Soviet communism on the one hand, and German traditions filtered through the interwar experiences of German communists on the other. It always was constrained by the powerful example of the increasingly prosperous West, to which East Germans compared their nation. The changes wrought by the communists were most apparent in ending capitalism and transforming industry and agriculture, in the militarization of society, and in the political thrust of the educational system and the media. On the other hand, there was relatively little change made in the historically independent domains of the sciences, the engineering professions, the Protestant churches, and in many bourgeois lifestyles[citation needed]. Social policy, says Ritter, became a critical legitimization tool in the last decades and mixed socialist and traditional elements about equally.[27]
|
20 |
+
|
21 |
+
At the Yalta Conference during World War II, the Allies (the US, the UK, and the Soviet Union) agreed on dividing a defeated Nazi Germany into occupation zones,[28] and on dividing Berlin, the German capital, among the Allied powers as well. Initially this meant the construction of three zones of occupation, i.e., American, British, and Soviet. Later, a French zone was carved out of the US and British zones.
|
22 |
+
|
23 |
+
The ruling communist party, known as the Socialist Unity Party of Germany (SED), was formed in April 1946 from the merger between the Communist Party of Germany (KPD) and the Social Democratic Party of Germany (SPD).[29] The two former parties were notorious rivals when they were active before the Nazis consolidated all power and criminalised them, and official East German and Soviet histories portrayed this merger as a voluntary pooling of efforts by the socialist parties and symbolic of the new friendship of German socialists after defeating their common enemy; however, there is much evidence that the merger was more troubled than commonly portrayed, and that the Soviet occupation authorities applied great pressure on the SPD's eastern branch to merge with the KPD, and the communists, who held a majority, had virtually total control over policy.[30] The SED was the ruling party for the entire duration of the East German state. It had close ties with the Soviets, which maintained military forces in East Germany until its dissolution in 1991 (the Russian Federation continued to maintain forces in what had been East Germany until 1994), with the stated purpose of countering NATO bases in West Germany.
|
24 |
+
|
25 |
+
As West Germany was reorganized and gained independence from its occupiers, the GDR was established in East Germany in 1949. The creation of the two states solidified the 1945 division of Germany.[31] On 10 March 1952, (in what would become known as the "Stalin Note") Stalin put forth a proposal to reunify Germany with a policy of neutrality, with no conditions on economic policies and with guarantees for "the rights of man and basic freedoms, including freedom of speech, press, religious persuasion, political conviction, and assembly" and free activity of democratic parties and organizations.[32] This was turned down; reunification was not a priority for the leadership of West Germany, and the NATO powers declined the proposal, asserting that Germany should be able to join NATO and that such a negotiation with the Soviet Union would be seen as a capitulation. There have been several debates about whether a real chance for reunification had been missed in 1952.
|
26 |
+
|
27 |
+
In 1949, the Soviets turned control of East Germany over to the SED, headed by Wilhelm Pieck (1876–1960), who became president of the GDR and held the office until his death, while most executive authority was assumed by SED General Secretary Walter Ulbricht. Socialist leader Otto Grotewohl (1894–1964) became prime minister until his death.[33]
|
28 |
+
|
29 |
+
The government of East Germany denounced West German failures in accomplishing denazification and renounced ties to the Nazi past, imprisoning many former Nazis and preventing them from holding government positions. The SED set a primary goal of ridding East Germany of all traces of Nazism.[citation needed]
|
30 |
+
|
31 |
+
In the Yalta and Potsdam conferences, the Allies established their joint military occupation and administration of Germany via the Allied Control Council (ACC), a four-power (US, UK, USSR, France) military government effective until the restoration of German sovereignty. In eastern Germany, the Soviet Occupation Zone (SBZ – Sowjetische Besatzungszone) comprised the five states (Länder) of Mecklenburg-Vorpommern, Brandenburg, Saxony, Saxony-Anhalt, and Thuringia[citation needed]. Disagreements over the policies to be followed in the occupied zones quickly led to a breakdown in cooperation between the four powers, and the Soviets administered their zone without regard to the policies implemented in the other zones. The Soviets withdrew from the ACC in 1948; subsequently as the other three zones were increasingly unified and granted self-government, the Soviet administration instituted a separate socialist government in its zone[citation needed].
|
32 |
+
|
33 |
+
Yet, seven years after the Allies' Potsdam Agreement to a unified Germany, the USSR via the Stalin Note (10 March 1952) proposed German reunification and superpower disengagement from Central Europe, which the three Western Allies (the United States, France, the United Kingdom) rejected. Soviet leader Joseph Stalin, a Communist proponent of reunification, died in early March 1953. Similarly, Lavrenty Beria, the First Deputy Prime Minister of the USSR, pursued German reunification, but he was removed from power that same year before he could act on the matter. His successor, Nikita Khrushchev, rejected reunification as equivalent to returning East Germany for annexation to the West; hence reunification went unconsidered until 1989.[citation needed]
|
34 |
+
|
35 |
+
East Germany considered East Berlin to be its capital, and the Soviet Union and the rest of the Eastern Bloc diplomatically recognized East Berlin as the capital. However, the Western Allies disputed this recognition, considering the entire city of Berlin to be occupied territory governed by the Allied Control Council. According to Margarete Feinstein, East Berlin's status as the capital was largely unrecognized by the West and most Third World countries.[34] In practice, the ACC's authority was rendered moot by the Cold War, and East Berlin's status as occupied territory largely became a legal fiction, and the Soviet sector became fully integrated into the GDR.[citation needed]
|
36 |
+
|
37 |
+
The deepening Cold War conflict between the Western Powers and the Soviet Union over the unresolved status of West Berlin led to the Berlin Blockade (24 June 1948 – 12 May 1949). The Soviet army initiated the blockade by halting all Allied rail, road, and water traffic to and from West Berlin. The Allies countered the Soviets with the Berlin Airlift (1948–49) of food, fuel, and supplies to West Berlin.[35]
|
38 |
+
|
39 |
+
On 21 April 1946, the Communist Party of Germany (Kommunistische Partei Deutschlands – KPD) and the part of the Social Democratic Party of Germany (Sozialdemokratische Partei Deutschlands – SPD) in the Soviet zone merged to form the Socialist Unity Party of Germany (SED – Sozialistische Einheitspartei Deutschlands), which then won the elections of 1946. The SED's government nationalised infrastructure and industrial plants.
|
40 |
+
|
41 |
+
In 1948, the German Economic Commission (Deutsche Wirtschaftskomission—DWK) under its chairman Heinrich Rau assumed administrative authority in the Soviet occupation zone, thus becoming the predecessor of an East German government.[36][37]
|
42 |
+
|
43 |
+
On 7 October 1949, the SED established the Deutsche Demokratische Republik (German Democratic Republic – GDR), based on a socialist political constitution establishing its control of the Anti-Fascist National Front of the German Democratic Republic (NF, Nationale Front der Deutschen Demokratischen Republik), an omnibus alliance of every party and mass organisation in East Germany. The NF was established to stand for election to the Volkskammer (People's Chamber), the East German parliament. The first and only president of the German Democratic Republic was Wilhelm Pieck. However, after 1950, political power in East Germany was held by the First Secretary of the SED, Walter Ulbricht.[38]
|
44 |
+
|
45 |
+
On 16 June 1953, workers constructing the new Stalinallee boulevard in East Berlin, according to The Sixteen Principles of Urban Design, rioted against a 10% production quota increase. Initially a labour protest, it soon included the general populace, and on 17 June similar protests occurred throughout the GDR, with more than a million people striking in some 700 cities and towns. Fearing anti-communist counter-revolution, on 18 June 1953 the government of the GDR enlisted the Soviet Occupation Forces to aid the police in ending the riot; some fifty people were killed and 10,000 were jailed.[clarification needed][39][40] (See Uprising of 1953 in East Germany.)
|
46 |
+
|
47 |
+
The German war reparations owed to the Soviets impoverished the Soviet Zone of Occupation and severely weakened the East German economy. In the 1945–46 period, the Soviets confiscated and transported to the USSR approximately 33% of the industrial plant and by the early 1950s had extracted some US$10 billion in reparations in agricultural and industrial products.[41] The poverty of East Germany induced by reparations provoked the Republikflucht ("desertion from the republic") to West Germany, further weakening the GDR's economy. Western economic opportunities induced a brain drain. In response, the GDR closed the Inner German Border, and on the night of 12 August 1961, East German soldiers began erecting the Berlin Wall.[42]
|
48 |
+
|
49 |
+
In 1971, Soviet leader Leonid Brezhnev had Ulbricht removed; Erich Honecker replaced him. While the Ulbricht government had experimented with liberal reforms, the Honecker government reversed them. The new government introduced a new East German Constitution which defined the German Democratic Republic as a "republic of workers and peasants".[43]
|
50 |
+
|
51 |
+
Initially, East Germany claimed an exclusive mandate for all of Germany, a claim supported by most of the Communist bloc. It claimed that West Germany was an illegally-constituted NATO puppet state. However, from the 1960s onward, East Germany began recognizing itself as a separate country from West Germany, and shared the legacy of the united German state of 1871–1945. This was formalized in 1974, when the reunification clause was removed from the revised East German constitution. West Germany, in contrast, maintained that it was the only legitimate government of Germany. From 1949 to the early 1970s, West Germany maintained that East Germany was an illegally constituted state. It argued that the GDR was a Soviet puppet state, and frequently referred to it as the "Soviet occupation zone". This position was shared by West Germany's allies as well until 1973. East Germany was recognized primarily by Communist countries and the Arab bloc, along with some "scattered sympathizers".[44] According to the Hallstein Doctrine (1955), West Germany also did not establish (formal) diplomatic ties with any country – except the Soviets – that recognized East German sovereignty.
|
52 |
+
|
53 |
+
In the early 1970s, the Ostpolitik ("Eastern Policy") of "Change Through Rapprochement" of the pragmatic government of FRG Chancellor Willy Brandt, established normal diplomatic relations with the East Bloc states. This policy saw the Treaty of Moscow (August 1970), the Treaty of Warsaw (December 1970), the Four Power Agreement on Berlin (September 1971), the Transit Agreement (May 1972), and the Basic Treaty (December 1972), which relinquished any claims to an exclusive mandate over Germany as a whole and established normal relations between the two Germanys. Both countries were admitted into the United Nations on 18 September 1973. This also increased the number of countries recognizing East Germany to 55, including the US, UK and France, though these three still refused to recognize East Berlin as the capital, and insisted on a specific provision in the UN resolution accepting the two Germanys into the UN to that effect.[44] Following the Ostpolitik the West German view was that East Germany was a de facto government within a single German nation and a de jure state organisation of parts of Germany outside the Federal Republic. The Federal Republic continued to maintain that it could not within its own structures recognize the GDR de jure as a sovereign state under international law; but it fully acknowledged that, within the structures of international law, the GDR was an independent sovereign state. By distinction, West Germany then viewed itself as being within its own boundaries, not only the de facto and de jure government, but also the sole de jure legitimate representative of a dormant "Germany as whole".[45] The two Germanys relinquished any claim to represent the other internationally; which they acknowledged as necessarily implying a mutual recognition of each other as both capable of representing their own populations de jure in participating in international bodies and agreements, such as the United Nations and the Helsinki Final Act.
|
54 |
+
|
55 |
+
This assessment of the Basic Treaty was confirmed in a decision of the Federal Constitutional Court in 1973;[46]
|
56 |
+
|
57 |
+
the German Democratic Republic is in the international-law sense a State and as such a subject of international law. This finding is independent of recognition in international law of the German Democratic Republic by the Federal Republic of Germany. Such recognition has not only never been formally pronounced by the Federal Republic of Germany but on the contrary repeatedly explicitly rejected. If the conduct of the Federal Republic of Germany towards the German Democratic Republic is assessed in the light of its détente policy, in particular the conclusion of the Treaty as de facto recognition, then it can only be understood as de facto recognition of a special kind. The special feature of this Treaty is that while it is a bilateral Treaty between two States, to which the rules of international law apply and which like any other international treaty possesses validity, it is between two States that are parts of a still existing, albeit incapable of action as not being reorganized, comprehensive State of the Whole of Germany with a single body politic.[47]
Travel between the GDR and Poland, Czechoslovakia, and Hungary became visa-free from 1972.[48]
From the beginning, the newly formed GDR tried to establish its own separate identity.[49] Because of the imperial and military legacy of Prussia, the SED repudiated continuity between Prussia and the GDR. The SED destroyed a number of symbolic relics of the former Prussian aristocracy: the Junker manor houses were torn down, the Berliner Stadtschloß was razed, and the equestrian statue of Frederick the Great was removed from East Berlin. Instead the SED focused on the progressive heritage of German history, including Thomas Müntzer's role in the German Peasants' War and the role played by the heroes of the class struggle during Prussia's industrialization.
Especially after the Ninth Party Congress in 1976, East Germany upheld historical reformers such as Karl Freiherr vom Stein, Karl August von Hardenberg, Wilhelm von Humboldt, and Gerhard von Scharnhorst as examples and role models.[50]
In May 1989, following widespread public anger over the faking of results of local government elections, many citizens applied for exit visas or left the country contrary to GDR laws. The impetus for this exodus of East Germans was the removal of the electrified fence along Hungary's border with Austria on 2 May. Although formally the Hungarian frontier was still closed, many East Germans took the opportunity to enter the country via Czechoslovakia, and then make the illegal crossing from Hungary into Austria and West Germany beyond.[51] By July, 25,000 East Germans had crossed into Hungary,[52] most of whom did not attempt the risky crossing into Austria but remained instead in Hungary or claimed asylum in West German embassies in Prague or Budapest.
The opening of a border gate between Austria and Hungary at the Pan-European Picnic on 19 August 1989 set in motion a chain reaction, at the end of which the GDR no longer existed and the Eastern Bloc had disintegrated. It was the largest escape movement from East Germany since the Berlin Wall was built in 1961. The idea of opening the border at a ceremony came from Otto von Habsburg, who proposed it to Miklós Németh, the then Hungarian Prime Minister, who promoted the idea.[53] The patrons of the picnic, Habsburg and the Hungarian Minister of State Imre Pozsgay, who were not present at the event, saw the planned gathering as an opportunity to test Mikhail Gorbachev's reaction to an opening of the border on the Iron Curtain: in particular, whether Moscow would order the Soviet troops stationed in Hungary to intervene. The planned picnic was advertised extensively with posters and flyers among GDR holidaymakers in Hungary. The Austrian branch of the Paneuropean Union, then headed by Karl von Habsburg, distributed thousands of brochures inviting them to a picnic near the border at Sopron.[54][55] The local Sopron organizers knew nothing of possible GDR refugees and had envisaged a local party with Austrian and Hungarian participation.[56] But the mass exodus at the Pan-European Picnic, the subsequent hesitant behaviour of the Socialist Unity Party of East Germany and the non-intervention of the Soviet Union broke the dam, and the cohesion of the Eastern Bloc was broken. Erich Honecker's reaction, published in the Daily Mirror of 19 August 1989, came too late and revealed his loss of power: "Habsburg distributed leaflets far into Poland, on which the East German holidaymakers were invited to a picnic. When they came to the picnic, they were given gifts, food and Deutsche Mark, and then they were persuaded to come to the West." Tens of thousands of East Germans, alerted by the media, now made their way to Hungary, which was no longer prepared to keep its borders completely closed or to oblige its border troops to use force of arms. The GDR leadership in East Berlin did not dare to completely seal the borders of its own country.[54][55][57][58]
The next major turning point in the exodus came on 10 September, when the Hungarian Foreign Minister Gyula Horn announced that his country would no longer restrict movement from Hungary into Austria. Within two days 22,000 East Germans crossed into Austria, with tens of thousands following in the coming weeks.[51]
Many others demonstrated against the ruling party, especially in the city of Leipzig. The Leipzig demonstrations became a weekly occurrence, showing a turnout of 10,000 people at the first demonstration on 2 October and peaking at an estimated 300,000 by the end of the month.[59] The protests were surpassed in East Berlin, where half a million demonstrators turned out against the regime on 4 November.[59] Kurt Masur, the conductor of the Leipzig Gewandhaus Orchestra, led local negotiations with the government and held town meetings in the concert hall.[60] The demonstrations eventually led Erich Honecker to resign in October, and he was replaced by a slightly more moderate communist, Egon Krenz.[61]
The massive demonstration in East Berlin on 4 November coincided with Czechoslovakia formally opening its border into West Germany.[62] With the West more accessible than ever before, 30,000 East Germans made the crossing via Czechoslovakia in the first two days alone. To try to stem the outward flow of the population, the SED proposed a concessionary law loosening restrictions on travel. When this was rejected in the Volkskammer on 5 November, the Cabinet and the Politburo of the GDR resigned.[62] It left only one avenue open for Krenz and the SED, that of completely abolishing travel restrictions between East and West.
On 9 November 1989, a few sections of the Berlin Wall were opened, resulting in thousands of East Germans crossing freely into West Berlin and West Germany for the first time in nearly 30 years. Krenz resigned a month later, and the SED opened negotiations with the leaders of the incipient Democratic movement, Neues Forum, to schedule free elections and begin the process of democratization. As part of this, the SED eliminated the clause in the East German constitution guaranteeing the Communists leadership of the state. This was approved in the Volkskammer on 1 December 1989 by a vote of 420 to 0.[63]
East Germany held its last elections in March 1990. The winner was a coalition headed by the East German branch of West Germany's Christian Democratic Union, which advocated speedy reunification. Negotiations (2+4 Talks) were held involving the two German states and the former Allies which led to agreement on the conditions for German unification. By a two-thirds vote in the Volkskammer on 23 August 1990, the German Democratic Republic declared its accession to the Federal Republic of Germany. The five original East German states that had been abolished in the 1952 redistricting were restored.[61] On 3 October 1990, the five states officially joined the Federal Republic of Germany, while East and West Berlin united as a third city-state (in the same manner as Bremen and Hamburg). On 1 July a currency union preceded the political union: the "Ostmark" was abolished, and the Western German "Deutsche Mark" became common currency.
Although the Volkskammer's declaration of accession to the Federal Republic had initiated the process of reunification; the act of reunification itself (with its many specific terms, conditions and qualifications; some of which involved amendments to the West German Basic Law) was achieved constitutionally by the subsequent Unification Treaty of 31 August 1990; that is through a binding agreement between the former Democratic Republic and the Federal Republic now recognising each other as separate sovereign states in international law.[64] This treaty was then voted into effect prior to the agreed date for Unification by both the Volkskammer and the Bundestag by the constitutionally required two-thirds majorities; effecting on the one hand, the extinction of the GDR, and on the other, the agreed amendments to the Basic Law of the Federal Republic.
The great economic and socio-political inequalities between the former Germanies required government subsidy for the full integration of the German Democratic Republic into the Federal Republic of Germany. Because of the resulting deindustrialization in the former East Germany, the causes of the failure of this integration continue to be debated. Some western commentators claim that the depressed eastern economy is a natural aftereffect of a demonstrably inefficient command economy. But many East German critics contend that the shock-therapy style of privatization, the artificially high rate of exchange offered for the Ostmark, and the speed with which the entire process was implemented did not leave room for East German enterprises to adapt.[65]
There were four periods in East German political history.[66] These were: 1949–61, which saw the building of socialism; 1961–70, a period of stability and consolidation after the Berlin Wall had closed off escape; 1971–85, termed the Honecker Era, which saw closer ties with West Germany; and 1985–89, which saw the decline and extinction of East Germany.
The ruling political party in East Germany was the Sozialistische Einheitspartei Deutschlands (Socialist Unity Party of Germany, SED). It was created in 1946 through the Soviet-directed merger of the Communist Party of Germany (KPD) and the Social Democratic Party of Germany (SPD) in the Soviet controlled zone. However, the SED quickly transformed into a full-fledged Communist party as the more independent-minded Social Democrats were pushed out.[50]
The Potsdam Agreement committed the Soviets to supporting a democratic form of government in Germany, though the Soviets' understanding of democracy was radically different from that of the West. As in other Soviet-bloc countries, non-communist political parties were allowed. Nevertheless, every political party in the GDR was forced to join the National Front of Democratic Germany, a broad coalition of parties and mass political organisations, including:
The member parties were almost completely subservient to the SED, and had to accept its "leading role" as a condition of their existence. However, the parties did have representation in the Volkskammer and received some posts in the government.
The Volkskammer also included representatives from the mass organisations like the Free German Youth (Freie Deutsche Jugend or FDJ), or the Free German Trade Union Federation. There was also a Democratic Women's Federation of Germany, with seats in the Volkskammer.
Important non-parliamentary mass organisations in East German society included the German Gymnastics and Sports Association (Deutscher Turn- und Sportbund or DTSB), and People's Solidarity (Volkssolidarität), an organisation for the elderly. Another society of note was the Society for German-Soviet Friendship.
After the fall of Communism, the SED was renamed the "Party of Democratic Socialism" (PDS) which continued for a decade after reunification before merging with the West German WASG to form the Left Party (Die Linke). The Left Party continues to be a political force in many parts of Germany, albeit drastically less powerful than the SED.[68]
The East German population declined by three million people over its forty-one-year history, from 19 million in 1948 to 16 million in 1990; of the 1948 population, some 4 million had been deported from the lands east of the Oder-Neisse line, which made the home of millions of Germans part of Poland and the Soviet Union.[69] This was a stark contrast with Poland, whose population increased during that time from 24 million in 1950 (a little more than East Germany) to 38 million (more than twice East Germany's population). The decline was primarily a result of emigration—about one quarter of East Germans left the country before the Berlin Wall was completed in 1961,[70] and after that time East Germany had very low birth rates,[71] except for a recovery in the 1980s when the birth rate in East Germany was considerably higher than in West Germany.[72]
Until 1952, East Germany comprised the capital, East Berlin (though legally it was not fully part of the GDR's territory), and the five German states of Mecklenburg-Vorpommern (in 1947 renamed Mecklenburg), Brandenburg, Saxony-Anhalt, Thuringia, and Saxony, their post-war territorial demarcations approximating the pre-war German demarcations of the Middle German Länder (states) and Provinzen (provinces of Prussia). The western parts of two provinces, Pomerania and Lower Silesia, the remainder of which were annexed by Poland, remained in the GDR and were attached to Mecklenburg and Saxony, respectively.
The East German Administrative Reform of 1952 established 14 Bezirke (districts) and de facto disestablished the five Länder. The new Bezirke, named after their district centres, were as follows: (i) Rostock, (ii) Neubrandenburg, and (iii) Schwerin created from the Land (state) of Mecklenburg; (iv) Potsdam, (v) Frankfurt (Oder), and (vii) Cottbus from Brandenburg; (vi) Magdeburg and (viii) Halle from Saxony-Anhalt; (ix) Leipzig, (xi) Dresden, and (xii) Karl-Marx-Stadt (Chemnitz until 1953 and again from 1990) from Saxony; and (x) Erfurt, (xiii) Gera, and (xiv) Suhl from Thuringia.
East Berlin was made the country's 15th Bezirk in 1961 but retained special legal status until 1968, when the residents approved the new (draft) constitution. Despite the city as a whole being legally under the control of the Allied Control Council, and diplomatic objections of the Allied governments, the GDR administered the Bezirk of Berlin as part of its territory.
The government of East Germany had control over a large number of military and paramilitary organisations through various ministries. Chief among these was the Ministry of National Defence. Because of East Germany's proximity to the West during the Cold War (1945–92), its military forces were among the most advanced of the Warsaw Pact. Defining what was a military force and what was not is a matter of some dispute.
The Nationale Volksarmee (NVA) was the largest military organisation in East Germany. It was formed in 1956 from the Kasernierte Volkspolizei (Barracked People's Police), the military units of the regular police (Volkspolizei), when East Germany joined the Warsaw Pact. From its creation, it was controlled by the Ministry of National Defence. It was an all-volunteer force until an eighteen-month conscription period was introduced in 1962.[citation needed]
It was regarded by NATO officers as the best military in the Warsaw Pact.[78]
The NVA consisted of the following branches:
The border troops of the Eastern sector were originally organised as a police force, the Deutsche Grenzpolizei, similar to the Bundesgrenzschutz in West Germany. It was controlled by the Ministry of the Interior. Following the remilitarisation of East Germany in 1956, the Deutsche Grenzpolizei was transformed into a military force in 1961, modeled after the Soviet Border Troops, and transferred to the Ministry of National Defense, as part of the National People's Army. In 1973, it was separated from the NVA, but it remained under the same ministry. At its peak, it numbered approximately 47,000 men.
After the NVA was separated from the Volkspolizei in 1956, the Ministry of the Interior maintained its own public order barracked reserve, known as the Volkspolizei-Bereitschaften (VPB). These units were, like the Kasernierte Volkspolizei, equipped as motorised infantry, and they numbered between 12,000 and 15,000 men.
The Ministry of State Security (Stasi) included the Felix Dzerzhinsky Guards Regiment, which was mainly involved with facilities security and plain clothes events security. They were the only part of the feared Stasi that was visible to the public, and so were very unpopular within the population. The Stasi numbered around 90,000 men, the Guards Regiment around 11,000-12,000 men.
The Kampfgruppen der Arbeiterklasse (combat groups of the working class) numbered around 400,000 for much of their existence, and were organised around factories. The KdA was the political-military instrument of the SED; it was essentially a "party Army". All KdA directives and decisions were made by the ZK's Politbüro. They received their training from the Volkspolizei and the Ministry of the Interior. Membership was voluntary, but SED members were required to join as part of their membership obligation.
Every man was required to serve eighteen months of compulsory military service; for the medically unqualified and conscientious objectors, there were the Baueinheiten (construction units), established in 1964, two years after the introduction of conscription, in response to political pressure by the national Lutheran Protestant Church upon the GDR's government. In the 1970s, East German leaders acknowledged that former construction soldiers were at a disadvantage when they rejoined the civilian sphere.
The East German state promoted an "anti-imperialist" line that was reflected in all its media and all the schools.[79] This line followed Lenin's theory of imperialism as the highest and last stage of capitalism, and Dimitrov's theory of fascism as the dictatorship of the most reactionary elements of financial capitalism. Popular reaction to these measures was mixed, and Western media penetrated the country both through cross-border television and radio broadcasts from West Germany and from the U.S. propaganda network Radio Free Europe. Dissidents, particularly professionals, sometimes fled to West Germany, which was relatively easy before the construction of the Berlin Wall in 1961.[80][81]
After receiving wider international diplomatic recognition in 1972–73, the GDR began active cooperation with Third World socialist governments and national liberation movements. While the USSR was in control of the overall strategy and Cuban armed forces were involved in the actual combat (mostly in the People's Republic of Angola and socialist Ethiopia), the GDR provided experts for military hardware maintenance and personnel training, and oversaw creation of secret security agencies based on its own Stasi model.
Already in the 1960s contacts were established with Angola's MPLA, Mozambique's FRELIMO and the PAIGC in Guinea Bissau and Cape Verde. In the 1970s official cooperation was established with other self-proclaimed socialist governments and people's republics: People's Republic of the Congo, People's Democratic Republic of Yemen, Somali Democratic Republic, Libya, and the People's Republic of Benin.
The first military agreement was signed in 1973 with the People's Republic of the Congo. In 1979 friendship treaties were signed with Angola, Mozambique and Ethiopia.
It was estimated that altogether, 2000–4000 DDR military and security experts were dispatched to Africa. In addition, representatives from African and Arab countries and liberation movements underwent military training in the GDR.[82]
East Germany pursued an anti-Zionist policy; Jeffrey Herf argues that East Germany was waging an undeclared war on Israel.[83] According to Herf, "the Middle East was one of the crucial battlefields of the global Cold War between the Soviet Union and the West; it was also a region in which East Germany played a salient role in the Soviet bloc's antagonism toward Israel."[84] While East Germany saw itself as an "anti-fascist state", it regarded Israel as a "fascist state"[85] and East Germany strongly supported the Palestine Liberation Organization in its armed struggle against Israel. In 1974, the GDR government recognized the PLO as the "sole legitimate representative of the Palestinian people".[86] The PLO declared the Palestinian state on 15 November 1988 during the First Intifada and the GDR recognized the state prior to reunification.[87] After becoming a member of the UN, East Germany "made excellent use of the UN to wage political warfare against Israel [and was] an enthusiastic, high-profile, and vigorous member" of the anti-Israeli majority of the General Assembly.[83]
The East German economy began poorly because of the devastation caused by the Second World War: the loss of so many young soldiers, the disruption of business and transportation, the Allied bombing campaigns that decimated cities, and reparations owed to the USSR. The Red Army dismantled and transported to Russia the infrastructure and industrial plants of the Soviet Zone of Occupation. By the early 1950s, the reparations were paid in agricultural and industrial products; and Lower Silesia, with its coal mines, and Szczecin, an important natural port, were given to Poland by the decision of Stalin and in accordance with the Potsdam Agreement.[41]
The socialist centrally planned economy of the German Democratic Republic was like that of the USSR. In 1950, the GDR joined the COMECON trade bloc. In 1985, collective (state) enterprises earned 96.7% of the net national income. To ensure stable prices for goods and services, the state paid 80% of basic supply costs. The estimated 1984 per capita income was $9,800 ($22,600 in 2015 dollars). In 1976, the average annual growth of GDP was approximately five percent. This made the East German economy the richest in the Soviet Bloc until reunification in 1990.[88]
Notable East German exports were photographic cameras, under the Praktica brand; automobiles under the Trabant, Wartburg, and the IFA brands; hunting rifles, sextants, typewriters and wristwatches.
Until the 1960s, East Germans endured shortages of basic foodstuffs such as sugar and coffee. East Germans with friends or relatives in the West (or with any access to a hard currency) and the necessary Staatsbank foreign currency account could afford Western products and export-quality East German products via Intershop. Consumer goods also were available, by post, from the Danish Jauerfood, and Genex companies.
The government used money and prices as political devices, providing highly subsidised prices for a wide range of basic goods and services, in what was known as "the second pay packet".[89] At the production level, artificial prices made for a system of semi-barter and resource hoarding. For the consumer, it led to the substitution of GDR money with time, barter, and hard currencies. The socialist economy became steadily more dependent on financial infusions from hard-currency loans from West Germany. East Germans, meanwhile, came to see their soft currency as worthless relative to the Deutsche Mark (DM).[90]
Economic problems also persisted in eastern Germany after reunification. In his book The Shortest History of Germany, James Hawes quotes the Federal Agency for Civic Education (23 June 2009): "In 1991 alone, 153 billion Deutschmarks had to be transferred to eastern Germany to secure incomes, support businesses and improve infrastructure... by 1999 the total had amounted to 1.634 trillion Marks net... The sums were so large that public debt in Germany more than doubled."[91]
Many western commentators have maintained that loyalty to the SED was a primary criterion for getting a good job, and that professionalism was secondary to political criteria in personnel recruitment and development.[93]
Beginning in 1963 with a series of secret international agreements, East Germany recruited workers from Poland, Hungary, Cuba, Albania, Mozambique, Angola and North Vietnam. They numbered more than 100,000 by 1989. Many, such as future politician Zeca Schall (who emigrated from Angola in 1988 as a contract worker) stayed in Germany after the Wende.[94]
Religion became contested ground in the GDR, with the governing Communists promoting state atheism, although some people remained loyal to Christian communities.[95] In 1957 the State authorities established a State Secretariat for Church Affairs to handle the government's contact with churches and with religious groups;[citation needed] the SED remained officially atheist.[96]
In 1950, 85% of the GDR citizens were Protestants, while 10% were Catholics. In 1961, the renowned philosophical theologian Paul Tillich claimed that the Protestant population in East Germany had the most admirable Church in Protestantism, because the Communists there had not been able to win a spiritual victory over them.[97] By 1989, membership in the Christian churches dropped significantly. Protestants constituted 25% of the population, Catholics 5%. The share of people who considered themselves non-religious rose from 5% in 1950 to 70% in 1989.
When it first came to power, the Communist party asserted the compatibility of Christianity and Marxism and sought Christian participation in the building of socialism. At first the promotion of Marxist-Leninist atheism received little official attention. In the mid-1950s, as the Cold War heated up, atheism became a topic of major interest for the state, in both domestic and foreign contexts. University chairs and departments devoted to the study of scientific atheism were founded and much literature (scholarly and popular) on the subject was produced.[by whom?] This activity subsided in the late 1960s amid perceptions that it had started to become counterproductive. Official and scholarly attention to atheism renewed beginning in 1973, though this time with more emphasis on scholarship and on the training of cadres than on propaganda. Throughout, the attention paid to atheism in East Germany was never intended to jeopardise the cooperation that was desired from those East Germans who were religious.[98]
East Germany, historically, was majority Protestant (primarily Lutheran) from the early stages of the Protestant Reformation onwards. In 1948, freed from the influence of the Nazi-oriented German Christians, Lutheran, Reformed and United churches from most parts of Germany came together as the Evangelical Church in Germany (EKD) at the Conference of Eisenach (Kirchenversammlung von Eisenach).
In 1969 the regional Protestant churches in East Germany and East Berlin[note 2] broke away from the EKD and formed the Federation of Protestant Churches in the German Democratic Republic (German: Bund der Evangelischen Kirchen in der DDR, BEK), in 1970 also joined by the Moravian Herrnhuter Brüdergemeine. In June 1991, following the German reunification, the BEK churches again merged with the EKD ones.
Between 1956 and 1971 the leadership of the East German Lutheran churches gradually changed its relations with the state from hostility to cooperation.[99] From the founding of the GDR in 1949, the Socialist Unity Party sought to weaken the influence of the church on the rising generation. The church adopted an attitude of confrontation and distance toward the state. Around 1956 this began to develop into a more neutral stance accommodating conditional loyalty. The government was no longer regarded as illegitimate; instead, the church leaders started viewing the authorities as installed by God and, therefore, deserving of obedience by Christians. But on matters where the state demanded something which the churches felt was not in accordance with the will of God, the churches reserved their right to say no. There were both structural and intentional causes behind this development. Structural causes included the hardening of Cold War tensions in Europe in the mid-1950s, which made it clear that the East German state was not temporary. The loss of church members also made it clear to the leaders of the church that they had to come into some kind of dialogue with the state. The intentions behind the change of attitude varied from a traditional liberal Lutheran acceptance of secular power to a positive attitude toward socialist ideas.[100]
Manfred Stolpe became a lawyer for the Brandenburg Protestant Church in 1959 before taking up a position at church headquarters in Berlin. In 1969 he helped found the Bund der Evangelischen Kirchen in der DDR (BEK), where he negotiated with the government while at the same time working within the institutions of this Protestant body. He won the regional elections for the Brandenburg state assembly at the head of the SPD list in 1990. Stolpe remained in the Brandenburg government until he joined the federal government in 2002.
Apart from the Protestant state churches (German: Landeskirchen) united in the EKD/BEK and the Catholic Church there was a number of smaller Protestant bodies, including Protestant Free Churches (German: Evangelische Freikirchen) united in the Federation of the Free Protestant Churches in the German Democratic Republic and the Federation of the Free Protestant Churches in Germany, as well as the Free Lutheran Church, the Old Lutheran Church and Federation of the Reformed Churches in the German Democratic Republic. The Moravian Church also had its presence as the Herrnhuter Brüdergemeine. There were also other Protestants such as Methodists, Adventists, Mennonites and Quakers.
The smaller Catholic Church in eastern Germany had a fully functioning episcopal hierarchy that was in full accord with the Vatican. During the early postwar years, tensions were high. The Catholic Church as a whole (and particularly the bishops) resisted both the East German state and Marxist ideology. The state allowed the bishops to lodge protests, which they did on issues such as abortion.[100]
After 1945 the Church did fairly well in integrating Catholic exiles from lands to the east (which mostly became part of Poland) and in adjusting its institutional structures to meet the needs of a church within an officially atheist society. This meant an increasingly hierarchical church structure, whereas in the area of religious education, press, and youth organisations, a system of temporary staff was developed, one that took into account the special situation of Caritas, a Catholic charity organisation. By 1950, therefore, there existed a Catholic subsociety that was well adjusted to prevailing specific conditions and capable of maintaining Catholic identity.[101][page needed]
With a generational change in the episcopacy taking place in the early 1980s, the state hoped for better relations with the new bishops, but the new bishops instead began holding unauthorised mass meetings, promoting international ties in discussions with theologians abroad, and hosting ecumenical conferences. The new bishops became less politically oriented and more involved in pastoral care and attention to spiritual concerns. The government responded by limiting international contacts for bishops.[102][need quotation to verify]
List of apostolic administrators:
East Germany's culture was strongly influenced by communist thought and was marked by an attempt to define itself in opposition to the west, particularly West Germany and the United States. Critics of the East German state[who?] have claimed that the state's commitment to Communism was a hollow and cynical tool, Machiavellian in nature, but this assertion has been challenged by studies[which?] that have found that the East German leadership was genuinely committed to the advance of scientific knowledge, economic development, and social progress. However, Pence and Betts argue, the majority of East Germans over time increasingly regarded the state's ideals to be hollow, though there was also a substantial number of East Germans who regarded their culture as having a healthier, more authentic mentality than that of West Germany.[103]
GDR culture and politics were limited by the harsh censorship.[104]
The Puhdys and Karat were some of the most popular mainstream bands in East Germany. Like most mainstream acts, they appeared in popular youth magazines such as Neues Leben and Magazin. Other popular rock bands were Wir, City, Silly and Pankow. Most of these artists recorded on the state-owned AMIGA label.[citation needed]
Schlager, which was very popular in the west, also gained a foothold early on in East Germany, and numerous musicians, such as Gerd Christian, Uwe Jensen, and Hartmut Schulze-Gerlach gained national fame. From 1962 to 1976, an international schlager festival was held in Rostock, garnering participants from between 18 and 22 countries each year.[105] The city of Dresden held a similar international festival for schlager musicians from 1971 until shortly before reunification.[106] There was a national schlager contest hosted yearly in Magdeburg from 1966 to 1971 as well.[107]
Bands and singers from other Communist countries were popular, e.g. Czerwone Gitary from Poland known as the Rote Gitarren.[108][109] Czech Karel Gott, the Golden Voice from Prague, was beloved in both German states.[110] Hungarian band Omega performed in both German states, and Yugoslavian band Korni Grupa toured East Germany in the 1970s.[111][112]
West German television and radio could be received in many parts of the East. The Western influence led to the formation of more "underground" groups with a decisively western-oriented sound. A few of these bands – the so-called Die anderen Bands ("the other bands") – were Die Skeptiker, Die Art [de] and Feeling B. Additionally, hip hop culture reached the ears of the East German youth. With videos such as Beat Street and Wild Style, young East Germans were able to develop a hip hop culture of their own.[113] East Germans accepted hip hop as more than just a music form. The entire street culture surrounding rap entered the region and became an outlet for oppressed youth.[114]
The government of the GDR was invested in both promoting the tradition of German classical music, and in supporting composers to write new works in that tradition. Notable East German composers include Hanns Eisler, Paul Dessau, Ernst Hermann Meyer, Rudolf Wagner-Régeny, and Kurt Schwaen.
The birthplace of Johann Sebastian Bach (1685–1750), Eisenach, was rendered as a museum about him, featuring more than three hundred instruments, which, in 1980, received some 70,000 visitors. In Leipzig, the Bach archive contains his compositions and correspondence and recordings of his music.[115]
Governmental support of classical music maintained some fifty symphony orchestras, such as Gewandhausorchester and Thomanerchor in Leipzig; Sächsische Staatskapelle in Dresden; and Berliner Sinfonie Orchester and Staatsoper Unter den Linden in Berlin.[citation needed] Kurt Masur was their prominent conductor.[116]
East German theatre was originally dominated by Bertolt Brecht, who brought back many artists out of exile and reopened the Theater am Schiffbauerdamm with his Berliner Ensemble.[117] Alternatively, other influences tried to establish a "Working Class Theatre", played for the working class by the working class.[citation needed]
After Brecht's death, conflicts began to arise between his family (around Helene Weigel) and other artists about Brecht's legacy, including Slatan Dudow, Erwin Geschonneck, Erwin Strittmatter, Peter Hacks, Benno Besson, Peter Palitzsch and Ekkehard Schall.[118]
In the 1950s the Swiss director Benno Besson with the Deutsches Theater successfully toured Europe and Asia including Japan with The Dragon by Evgeny Schwarz. In the 1960s, he became the Intendant of the Volksbühne often working with Heiner Müller.[citation needed]
In the 1970s, a parallel theatre scene sprang up, creating theatre "outside of Berlin" in which artists played at provincial theatres. For example, Peter Sodann founded the Neues Theater in Halle/Saale, and Frank Castorf worked at the theatre in Anklam.[citation needed]
Theatre and cabaret had high status in the GDR, which allowed it to be very proactive. This often brought it into confrontation with the state. Benno Besson once said, "In contrast to artists in the west, they took us seriously, we had a bearing."[119][120]
The Friedrichstadt-Palast in Berlin is the last major building erected by the GDR, making it an exceptional architectural testimony to how Germany overcame its former division. Here, Berlin's great revue tradition lives on, today bringing viewers state-of-the-art shows.[121]
Important theatres include the Berliner Ensemble,[122] the Deutsches Theater,[123] the Maxim Gorki Theater,[124] and the Volksbühne.[125]
The prolific cinema of East Germany was headed by the DEFA,[126] Deutsche Film AG, which was subdivided in different local groups, for example Gruppe Berlin, Gruppe Babelsberg or Gruppe Johannisthal, where the local teams shot and produced films. The East German industry became known worldwide for its productions, especially children's movies (Das kalte Herz, film versions of the Brothers Grimm fairy tales and modern productions such as Das Schulgespenst).[citation needed]
Frank Beyer's Jakob der Lügner (Jacob the Liar), about the Holocaust, and Fünf Patronenhülsen (Five Cartridges), about resistance against fascism, became internationally famous.[127]
Films about daily life, such as Die Legende von Paul und Paula, by Heiner Carow, and Solo Sunny, directed by Konrad Wolf and Wolfgang Kohlhaase, were very popular.[citation needed]
The film industry was remarkable for its production of Ostern, or Western-like movies. Amerindians in these films often took the role of displaced people who fight for their rights, in contrast to the North American westerns of the time, where they were often either not mentioned at all or are portrayed as the villains. Yugoslavs were often cast as Native Americans because of the small number of Native Americans in Europe. Gojko Mitić was well known in these roles, often playing the righteous, kindhearted and charming chief (Die Söhne der großen Bärin directed by Josef Mach). He became an honorary Sioux chief when he visited the United States in the 1990s, and the television crew accompanying him showed the tribe one of his movies. American actor and singer Dean Reed, an expatriate who lived in East Germany, also starred in several films. These films were part of the phenomenon of Europe producing alternative films about the colonization of the Americas.[citation needed]
Cinemas in the GDR also showed foreign films. Czechoslovak and Polish productions were more common, but certain western movies were shown, though the numbers of these were limited because it cost foreign exchange to buy the licences. Further, films representing or glorifying what the state viewed as capitalist ideology were not bought. Comedies enjoyed great popularity, such as the Danish Olsen Gang or movies with the French comedian Louis de Funès.[citation needed]
Since the fall of the Berlin Wall, several films depicting life in the GDR have been critically acclaimed.[citation needed] Some of the most notable were Good Bye Lenin! by Wolfgang Becker,[128] Das Leben der Anderen (The Lives of Others) by Florian Henckel von Donnersmarck, which won the Academy Award for Best Foreign Language Film in 2006,[129] and Alles auf Zucker! (Go for Zucker) by Dani Levi. Each film is heavily infused with cultural nuances unique to life in the GDR.[130]
East Germany was very successful in the sports of cycling, weight-lifting, swimming, gymnastics, track and field, boxing, ice skating, and winter sports. The success is largely attributed to doping under the direction of Manfred Höppner, a sports doctor, described as the architect of East Germany's state-sponsored drug program.[131]
Anabolic steroids were the most detected doping substances in IOC-accredited laboratories for many years.[132][133] The development and implementation of a state-supported sports doping program helped East Germany, with its small population, to become a world leader in sport during the 1970s and 1980s, winning a large number of Olympic and world gold medals and records.[134][135] Another factor in this success was the GDR's system for promoting young athletes. Sports teachers at school were encouraged to look out for talent in children aged 6 to 10 years old. For older pupils it was possible to attend grammar schools with a focus on sports (for example sailing, football and swimming). This policy was also used for talented pupils with regard to music or mathematics.[citation needed]
Sports clubs were highly subsidized, especially those in sports in which it was possible to gain international fame. For example, the major leagues for ice hockey and basketball included just two teams each. Football was the most popular sport. Club football teams such as Dynamo Dresden, 1. FC Magdeburg, FC Carl Zeiss Jena, 1. FC Lokomotive Leipzig and BFC Dynamo had success in European competition. Many East German players such as Matthias Sammer and Ulf Kirsten became integral parts of the reunified national football team.
The East and the West also competed via sport; GDR athletes dominated several Olympic sports. Of special interest was the only football match between the Federal Republic of Germany and the German Democratic Republic, a first-round match during the 1974 FIFA World Cup, which the East won 1–0; but West Germany, the host, went on to win the World Cup.[136]
Television and radio in East Germany were state-run industries; the Rundfunk der DDR was the official radio broadcasting organisation from 1952 until unification. The organization was based in the Funkhaus Nalepastraße in East Berlin. Deutscher Fernsehfunk (DFF), from 1972 to 1990 known as Fernsehen der DDR or DDR-FS, was the state television broadcaster from 1952. Reception of Western broadcasts was widespread.[137]
Sexual culture differed between the two German states. East Germany had a cultural acceptance of birth control and premarital sex, in contrast to the West.[138] As a result, East Germans had sex earlier and more often.[139] Scholars such as Kristen Ghodsee have attributed the higher rate of orgasm among women in East Germany (80%) and their higher sexual satisfaction to economics: Ghodsee writes that East German women did not have to marry men for money, which she identifies as the cause.[140] Historian Dagmar Herzog, who has researched sexuality in Germany, recalls that "the women of the former GDR told me with gentle pity how much better they'd had it in the East."[141] The differences between the two sides were discussed in the documentary Do Communists Have Better Sex?[138]
By the mid-1980s, East Germany possessed a well-developed communications system. There were approximately 3.6 million telephones in use (21.8 for every 100 inhabitants) and 16,476 telex stations. Both of these networks were run by the Deutsche Post der DDR (East German Post Office). East Germany was assigned telephone country code +37; in 1991, several months after reunification, East German telephone exchanges were incorporated into country code +49.
An unusual feature of the telephone network was that, in most cases, direct distance dialing for long-distance calls was not possible. Although area codes were assigned to all major towns and cities, they were only used for switching international calls. Instead, each location had its own list of dialing codes with shorter codes for local calls and longer codes for long-distance calls. After unification, the existing network was largely replaced, and area codes and dialing became standardised.
In 1976 East Germany inaugurated the operation of a ground-based radio station at Fürstenwalde for the purpose of relaying and receiving communications from Soviet satellites and to serve as a participant in the international telecommunications organization established by the Soviet government, Intersputnik.
Margot Honecker, former Minister for Education of East Germany, said:
In this state, each person had a place. All children could attend school free of charge, they received vocational training or studied, and were guaranteed a job after training. Work was more than just a means to earn money. Men and women received equal pay for equal work and performance. Equality for women was not just on paper. Care for children and the elderly was the law. Medical care was free, cultural and leisure activities affordable. Social security was a matter of course. We knew no beggars or homelessness. There was a sense of solidarity. People felt responsible not only for themselves, but worked in various democratic bodies on the basis of common interests.[142]
In contrast, German historian Jürgen Kocka in 2010 summarized the consensus of most recent scholarship:
Conceptualizing the GDR as a dictatorship has become widely accepted, while the meaning of the concept dictatorship varies. Massive evidence has been collected that proves the repressive, undemocratic, illiberal, nonpluralistic character of the GDR regime and its ruling party.[143]
Many East Germans initially regarded the dissolution of the GDR positively.[144] But this reaction soon turned sour.[145] West Germans often acted as if they had "won" and East Germans had "lost" in unification, leading many East Germans (Ossis) to resent West Germans (Wessis).[146] In 2004, Ascher Barnstone wrote, "East Germans resent the wealth possessed by West Germans; West Germans see the East Germans as lazy opportunists who want something for nothing. East Germans find 'Wessis' arrogant and pushy, West Germans think the 'Ossis' are lazy good-for-nothings."[147]
Unification and subsequent federal policies led to serious economic hardships for many East Germans that had not existed before the Wende. Unemployment and homelessness, which had been minimal during the communist era, grew and quickly became widespread; this, as well as the closures of countless factories and other workplaces in the east, fostered a growing sense that East Germans were being ignored or neglected by the federal government.
In addition, many East German women found the west more appealing and left the region never to return, leaving behind an underclass of poorly educated and jobless men.[148]
These and other effects of unification led many East Germans to begin to think of themselves more strongly as "East" Germans rather than as simply "Germans". In many former GDR citizens this produced a longing for some aspects of the former East Germany, such as full employment and other perceived benefits of the GDR state, termed "Ostalgie" (a blend of Ost "east" and Nostalgie "nostalgia") and depicted in the Wolfgang Becker film Goodbye Lenin!.[149]
en/4939.html.txt
ADDED
@@ -0,0 +1,195 @@
A chemical reaction is a process that leads to the chemical transformation of one set of chemical substances to another.[1] Classically, chemical reactions encompass changes that only involve the positions of electrons in the forming and breaking of chemical bonds between atoms, with no change to the nuclei (no change to the elements present), and can often be described by a chemical equation. Nuclear chemistry is a sub-discipline of chemistry that involves the chemical reactions of unstable and radioactive elements where both electronic and nuclear changes can occur.
The substance (or substances) initially involved in a chemical reaction are called reactants or reagents. Chemical reactions are usually characterized by a chemical change, and they yield one or more products, which usually have properties different from the reactants. Reactions often consist of a sequence of individual sub-steps, the so-called elementary reactions, and the information on the precise course of action is part of the reaction mechanism. Chemical reactions are described with chemical equations, which symbolically present the starting materials, end products, and sometimes intermediate products and reaction conditions.
Chemical reactions happen at a characteristic reaction rate at a given temperature and chemical concentration. Typically, reaction rates increase with increasing temperature because there is more thermal energy available to reach the activation energy necessary for breaking bonds between atoms.
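One standard way to express this temperature dependence, given here as a general textbook relation rather than a property of any particular reaction discussed above, is the Arrhenius equation for the rate constant k:

k = A \exp\left(-\frac{E_a}{RT}\right)

where A is the pre-exponential factor, E_a the activation energy, R the gas constant and T the absolute temperature. Raising T makes the exponent smaller in magnitude, so the exponential factor, and with it the rate constant, increases.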
Reactions may proceed in the forward or reverse direction until they go to completion or reach equilibrium. Reactions that proceed in the forward direction to approach equilibrium are often described as spontaneous, requiring no input of free energy to go forward. Non-spontaneous reactions require input of free energy to go forward (examples include charging a battery by applying an external electrical power source, or photosynthesis driven by absorption of electromagnetic radiation in the form of sunlight).
Different chemical reactions are used in combinations during chemical synthesis in order to obtain a desired product. In biochemistry, a consecutive series of chemical reactions (where the product of one reaction is the reactant of the next reaction) form metabolic pathways. These reactions are often catalyzed by protein enzymes. Enzymes increase the rates of biochemical reactions, so that metabolic syntheses and decompositions impossible under ordinary conditions can occur at the temperatures and concentrations present within a cell.
The general concept of a chemical reaction has been extended to reactions between entities smaller than atoms, including nuclear reactions, radioactive decays, and reactions between elementary particles, as described by quantum field theory.
Chemical reactions such as combustion in fire, fermentation and the reduction of ores to metals were known since antiquity. Initial theories of transformation of materials were developed by Greek philosophers, such as the Four-Element Theory of Empedocles stating that any substance is composed of the four basic elements – fire, water, air and earth. In the Middle Ages, chemical transformations were studied by Alchemists. They attempted, in particular, to convert lead into gold, for which purpose they used reactions of lead and lead-copper alloys with sulfur.[2]
The production of chemical substances that do not normally occur in nature has long been tried, such as the synthesis of sulfuric and nitric acids attributed to the controversial alchemist Jābir ibn Hayyān. The process involved heating of sulfate and nitrate minerals such as copper sulfate, alum and saltpeter. In the 17th century, Johann Rudolph Glauber produced hydrochloric acid and sodium sulfate by reacting sulfuric acid and sodium chloride. With the development of the lead chamber process in 1746 and the Leblanc process, allowing large-scale production of sulfuric acid and sodium carbonate, respectively, chemical reactions became implemented into the industry. Further optimization of sulfuric acid technology resulted in the contact process in the 1880s,[3] and the Haber process was developed in 1909–1910 for ammonia synthesis.[4]
From the 16th century, researchers including Jan Baptist van Helmont, Robert Boyle, and Isaac Newton tried to establish theories of the experimentally observed chemical transformations. The phlogiston theory was proposed in 1667 by Johann Joachim Becher. It postulated the existence of a fire-like element called "phlogiston", which was contained within combustible bodies and released during combustion. This was proved false in 1785 by Antoine Lavoisier, who found the correct explanation of combustion as a reaction with oxygen from the air.[5]
Joseph Louis Gay-Lussac recognized in 1808 that gases always react in a certain relationship with each other. Based on this idea and the atomic theory of John Dalton, Joseph Proust had developed the law of definite proportions, which later resulted in the concepts of stoichiometry and chemical equations.[6]
Regarding organic chemistry, it was long believed that compounds obtained from living organisms were too complex to be obtained synthetically. According to the concept of vitalism, organic matter was endowed with a "vital force" and distinguished from inorganic materials. This separation was ended, however, by the synthesis of urea from inorganic precursors by Friedrich Wöhler in 1828. Other chemists who brought major contributions to organic chemistry include Alexander William Williamson with his synthesis of ethers and Christopher Kelk Ingold, who, among many discoveries, established the mechanisms of substitution reactions.
Chemical equations are used to graphically illustrate chemical reactions. They consist of chemical or structural formulas of the reactants on the left and those of the products on the right. They are separated by an arrow (→) which indicates the direction and type of the reaction; the arrow is read as the word "yields".[7] The tip of the arrow points in the direction in which the reaction proceeds. A double arrow (⇌) pointing in opposite directions is used for equilibrium reactions. Equations should be balanced according to stoichiometry: the number of atoms of each species should be the same on both sides of the equation. This is achieved by scaling the number of involved molecules (A, B, C and D in the schematic example below) by the appropriate integers a, b, c and d:[8]

a A + b B → c C + d D
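As a concrete worked example, using the familiar combustion of methane rather than a reaction taken from the sources above, the unbalanced equation CH4 + O2 → CO2 + H2O is balanced by choosing the coefficients 1, 2, 1 and 2:

CH4 + 2 O2 → CO2 + 2 H2O

Each side then contains one carbon atom, four hydrogen atoms and four oxygen atoms, satisfying the stoichiometric requirement described above.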
More elaborate reactions are represented by reaction schemes, which in addition to starting materials and products show important intermediates or transition states. Also, some relatively minor additions to the reaction can be indicated above the reaction arrow; examples of such additions are water, heat, illumination, a catalyst, etc. Similarly, some minor products can be placed below the arrow, often with a minus sign.
Retrosynthetic analysis can be applied to design a complex synthesis reaction. Here the analysis starts from the products, for example by splitting selected chemical bonds, to arrive at plausible initial reagents. A special arrow (⇒) is used in retro reactions.[9]
The elementary reaction is the smallest division into which a chemical reaction can be decomposed; it has no intermediate products.[10] Most experimentally observed reactions are built up from many elementary reactions that occur in parallel or sequentially. The actual sequence of the individual elementary reactions is known as the reaction mechanism. An elementary reaction involves a few molecules, usually one or two, because of the low probability for several molecules to meet at a certain time.[11]
The most important elementary reactions are unimolecular and bimolecular reactions. Only one molecule is involved in a unimolecular reaction; it is transformed by an isomerization or a dissociation into one or more other molecules. Such reactions require the addition of energy in the form of heat or light. A typical example of a unimolecular reaction is the cis–trans isomerization, in which the cis-form of a compound converts to the trans-form or vice versa.[12]
In a typical dissociation reaction, a bond in a molecule splits (ruptures) resulting in two molecular fragments. The splitting can be homolytic or heterolytic. In the first case, the bond is divided so that each product retains an electron and becomes a neutral radical. In the second case, both electrons of the chemical bond remain with one of the products, resulting in charged ions. Dissociation plays an important role in triggering chain reactions, such as hydrogen–oxygen or polymerization reactions.
For bimolecular reactions, two molecules collide and react with each other. Their merger is called chemical synthesis or an addition reaction.
Another possibility is that only a portion of one molecule is transferred to the other molecule. This type of reaction occurs, for example, in redox and acid-base reactions. In redox reactions, the transferred particle is an electron, whereas in acid-base reactions it is a proton. This type of reaction is also called metathesis.
for example
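the reaction of sodium chloride with silver nitrate, NaCl + AgNO3 → NaNO3 + AgCl, in which the two salts exchange their ions and silver chloride precipitates.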
Most chemical reactions are reversible; that is, they can and do run in both directions. The forward and reverse reactions are competing with each other and differ in reaction rates. These rates depend on the concentrations, which change over the course of the reaction: the reverse rate gradually increases and becomes equal to the rate of the forward reaction, establishing the so-called chemical equilibrium. The time to reach equilibrium depends on such parameters as temperature, pressure and the materials involved, and is determined by the minimum free energy. In equilibrium, the Gibbs free energy of reaction must be zero. The pressure dependence can be explained with Le Chatelier's principle. For example, an increase in pressure due to decreasing volume causes the reaction to shift to the side with the fewer moles of gas.[13]
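Quantitatively, the equilibrium position is linked to thermodynamics by ΔG° = −RT·ln K, where K is the equilibrium constant, R the gas constant and T the absolute temperature.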
The reaction yield stabilizes at equilibrium, but can be increased by removing the product from the reaction mixture or changed by increasing the temperature or pressure. A change in the concentrations of the reactants does not affect the equilibrium constant, but does affect the equilibrium position.
Chemical reactions are determined by the laws of thermodynamics. Reactions can proceed by themselves if they are exergonic, that is if they release energy. The associated free energy of the reaction is composed of two different thermodynamic quantities, enthalpy and entropy:[14]
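ΔG = ΔH − T·ΔS, where ΔG is the change in free energy, ΔH the change in enthalpy, ΔS the change in entropy and T the absolute temperature.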
Reactions can be exothermic, where ΔH is negative and energy is released. Typical examples of exothermic reactions are precipitation and crystallization, in which ordered solids are formed from disordered gaseous or liquid phases. In contrast, in endothermic reactions, heat is consumed from the environment. This can occur by increasing the entropy of the system, often through the formation of gaseous reaction products, which have high entropy. Since the entropy increases with temperature, many endothermic reactions preferably take place at high temperatures. On the contrary, many exothermic reactions such as crystallization occur at low temperatures. Changes in temperature can sometimes reverse the sign of the enthalpy of a reaction, as for the carbon monoxide reduction of molybdenum dioxide:
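MoO2(s) + 2 CO(g) → Mo(s) + 2 CO2(g)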
This reaction to form carbon dioxide and molybdenum is endothermic at low temperatures, becoming less so with increasing temperature.[15] ΔH° is zero at 1855 K, and the reaction becomes exothermic above that temperature.
Changes in temperature can also reverse the preferred direction of a reaction. For example, the water gas shift reaction
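CO(g) + H2O(g) ⇌ CO2(g) + H2(g)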
is favored by low temperatures, but its reverse is favored by high temperature. The shift in the preferred direction occurs at 1100 K.[15]
Reactions can also be characterized by the internal energy, which takes into account changes in entropy, volume and chemical potential. The latter depends, among other things, on the activities of the involved substances.[16]
The speed at which reactions take place is studied by reaction kinetics. The rate depends on various parameters, such as reactant concentrations, the surface area available for contact between the reactants, pressure, activation energy, temperature, and the presence or absence of a catalyst.
Several theories allow calculating the reaction rates at the molecular level. This field is referred to as reaction dynamics. The rate v of a first-order reaction, which could be disintegration of a substance A, is given by:
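v = −d[A]/dt = k·[A]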
Its integration yields:
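[A](t) = [A]0·exp(−k·t)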
Here k is the first-order rate constant, having dimension 1/time, [A](t) is the concentration at a time t and [A]0 is the initial concentration. The rate of a first-order reaction depends only on the concentration and the properties of the involved substance, and the reaction itself can be described with the characteristic half-life. More than one time constant is needed when describing reactions of higher order. The temperature dependence of the rate constant usually follows the Arrhenius equation:
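k = A·exp(−Ea/(kB·T)) (here A denotes the pre-exponential factor, not a reactant)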
where Ea is the activation energy and kB is the Boltzmann constant. One of the simplest models of reaction rate is the collision theory. More realistic models are tailored to a specific problem and include the transition state theory, the calculation of the potential energy surface, the Marcus theory and the Rice–Ramsperger–Kassel–Marcus (RRKM) theory.[17]
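A minimal numerical sketch of these relations, assuming an arbitrary pre-exponential factor and activation energy chosen purely for illustration:

```python
import math

kB = 1.380649e-23      # Boltzmann constant, J/K
A_FACTOR = 1.0e13      # assumed pre-exponential factor, 1/s (illustrative only)
EA = 1.3e-19           # assumed activation energy per molecule, J (illustrative only)

def rate_constant(temperature):
    """Arrhenius equation: k = A * exp(-Ea / (kB * T))."""
    return A_FACTOR * math.exp(-EA / (kB * temperature))

def concentration(c0, k, t):
    """Integrated first-order rate law: [A](t) = [A]0 * exp(-k * t)."""
    return c0 * math.exp(-k * t)

# Temperature dependence of the rate constant and the first-order half-life
for T in (250.0, 300.0, 350.0):
    k = rate_constant(T)
    print(f"T = {T:.0f} K: k = {k:.2e} 1/s, half-life = {math.log(2) / k:.2e} s")

# Decay of an initial 1 mol/L concentration at 300 K
k_300 = rate_constant(300.0)
for t in (0.0, 1.0, 2.0, 5.0):
    print(f"t = {t} s: [A] = {concentration(1.0, k_300, t):.3f} mol/L")
```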
In a synthesis reaction, two or more simple substances combine to form a more complex substance. These reactions are in the general form:
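A + B → AB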
Two or more reactants yielding one product is another way to identify a synthesis reaction. One example of a synthesis reaction is the combination of iron and sulfur to form iron(II) sulfide:
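Fe + S → FeS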
Another example is simple hydrogen gas combined with simple oxygen gas to produce a more complex substance, such as water.[18]
In a decomposition reaction, a more complex substance breaks down into its simpler parts. It is thus the opposite of a synthesis reaction, and can be written as[18][19]
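AB → A + B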
One example of a decomposition reaction is the electrolysis of water to make oxygen and hydrogen gas:
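2 H2O → 2 H2 + O2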
In a single replacement reaction, a single uncombined element replaces another in a compound; in other words, one element trades places with another element in a compound.[18] These reactions come in the general form of:
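A + BC → AC + B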
One example of a single displacement reaction is when magnesium replaces hydrogen in water to make magnesium hydroxide and hydrogen gas:
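Mg + 2 H2O → Mg(OH)2 + H2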
In a double replacement reaction, the anions and cations of two compounds switch places and form two entirely different compounds.[18] These reactions are in the general form:[19]
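AB + CD → AD + CB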
For example, when barium chloride (BaCl2) and magnesium sulfate (MgSO4) react, the SO42− anion switches places with the two Cl− anions, giving the compounds BaSO4 and MgCl2.
Another example of a double displacement reaction is the reaction of lead(II) nitrate with potassium iodide to form lead(II) iodide and potassium nitrate:
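Pb(NO3)2 + 2 KI → PbI2 + 2 KNO3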
In a combustion reaction, an element or compound reacts with oxygen, often producing energy in the form of heat or light. Combustion reactions always involve oxygen, but also frequently involve a hydrocarbon.
A combustion reaction can also result from carbon, magnesium or sulfur reacting with oxygen.
Redox reactions can be understood in terms of transfer of electrons from one involved species (reducing agent) to another (oxidizing agent). In this process, the former species is oxidized and the latter is reduced. Though sufficient for many purposes, these descriptions are not precisely correct. Oxidation is better defined as an increase in oxidation state, and reduction as a decrease in oxidation state. In practice, the transfer of electrons will always change the oxidation state, but there are many reactions that are classed as "redox" even though no electron transfer occurs (such as those involving covalent bonds).[21][22]
In the following redox reaction, hazardous sodium metal reacts with toxic chlorine gas to form the ionic compound sodium chloride, or common table salt:
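2 Na(s) + Cl2(g) → 2 NaCl(s)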
In the reaction, sodium metal goes from an oxidation state of 0 (as it is a pure element) to +1: in other words, the sodium lost one electron and is said to have been oxidized. On the other hand, the chlorine gas goes from an oxidation state of 0 (it is also a pure element) to −1: the chlorine gains one electron and is said to have been reduced. Because the chlorine is the one reduced, it is considered the electron acceptor, or in other words, induces oxidation in the sodium – thus the chlorine gas is considered the oxidizing agent. Conversely, the sodium is oxidized or is the electron donor, and thus induces reduction in the other species and is considered the reducing agent.
Which of the involved reactants would be the reducing or oxidizing agent can be predicted from the electronegativity of their elements. Elements with low electronegativity, such as most metals, easily donate electrons and oxidize – they are reducing agents. Conversely, many compounds and ions containing elements in high oxidation states, such as H2O2, MnO4−, CrO3, Cr2O72− or OsO4, can gain one or two extra electrons and are strong oxidizing agents.
The number of electrons donated or accepted in a redox reaction can be predicted from the electron configuration of the reactant element. Elements try to reach the low-energy noble gas configuration, and therefore alkali metals and halogens will donate and accept one electron respectively. Noble gases themselves are chemically inactive.[23]
An important class of redox reactions are the electrochemical reactions, where electrons from the power supply are used as the reducing agent. These reactions are particularly important for the production of chemical elements, such as chlorine[24] or aluminium. The reverse process in which electrons are released in redox reactions and can be used as electrical energy is possible and used in batteries.
In complexation reactions, several ligands react with a metal atom to form a coordination complex. This is achieved by providing lone pairs of the ligand into empty orbitals of the metal atom and forming dipolar bonds. The ligands are Lewis bases; they can be either ions or neutral molecules, such as carbon monoxide, ammonia or water. The number of ligands that react with a central metal atom can be found using the 18-electron rule, which states that the valence shells of a transition metal will collectively accommodate 18 electrons, whereas the symmetry of the resulting complex can be predicted with the crystal field theory and ligand field theory. Complexation reactions also include ligand exchange, in which one or more ligands are replaced by another, and redox processes which change the oxidation state of the central metal atom.[25]
In the Brønsted–Lowry acid–base theory, an acid-base reaction involves a transfer of protons (H+) from one species (the acid) to another (the base). When a proton is removed from an acid, the resulting species is termed that acid's conjugate base. When the proton is accepted by a base, the resulting species is termed that base's conjugate acid.[26] In other words, acids act as proton donors and bases act as proton acceptors according to the following equation:
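HA + B ⇌ A− + HB+ (acid + base ⇌ conjugate base + conjugate acid)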
The reverse reaction is possible, and thus the acid/base and conjugate base/acid pairs are always in equilibrium. The equilibrium is determined by the acid and base dissociation constants (Ka and Kb) of the involved substances. A special case of the acid-base reaction is the neutralization, where an acid and a base, taken in exactly equal amounts, form a neutral salt.
Acid-base reactions can have different definitions depending on the acid-base concept employed. Some of the most common are the Arrhenius, Brønsted–Lowry and Lewis definitions.
Precipitation is the formation of a solid in a solution or inside another solid during a chemical reaction. It usually takes place when the concentration of dissolved ions exceeds the solubility limit[27] and forms an insoluble salt. This process can be assisted by adding a precipitating agent or by removal of the solvent. Rapid precipitation results in an amorphous or microcrystalline residue, while a slow process can yield single crystals. The latter can also be obtained by recrystallization from microcrystalline salts.[28]
Reactions can take place between two solids. However, because of the relatively small diffusion rates in solids, the corresponding chemical reactions are very slow in comparison to liquid and gas phase reactions. They are accelerated by increasing the reaction temperature and finely dividing the reactant to increase the contacting surface area.[29]
Reactions can take place at the solid|gas interface, on surfaces at very low pressure such as ultra-high vacuum. Via scanning tunneling microscopy, it is possible to observe reactions at the solid|gas interface in real space, if the time scale of the reaction is in the correct range.[30][31] Reactions at the solid|gas interface are in some cases related to catalysis.
In photochemical reactions, atoms and molecules absorb energy (photons) of the illumination light and convert into an excited state. They can then release this energy by breaking chemical bonds, thereby producing radicals. Photochemical reactions include hydrogen–oxygen reactions, radical polymerization, chain reactions and rearrangement reactions.[32]
Many important processes involve photochemistry. The premier example is photosynthesis, in which most plants use solar energy to convert carbon dioxide and water into glucose, disposing of oxygen as a side-product. Humans rely on photochemistry for the formation of vitamin D, and vision is initiated by a photochemical reaction of rhodopsin.[12] In fireflies, an enzyme in the abdomen catalyzes a reaction that results in bioluminescence.[33] Many significant photochemical reactions, such as ozone formation, occur in the Earth's atmosphere and constitute atmospheric chemistry.
In catalysis, the reaction does not proceed directly, but through reaction with a third substance known as a catalyst. Although the catalyst takes part in the reaction, it is returned to its original state by the end of the reaction and so is not consumed. However, it can be inhibited, deactivated or destroyed by secondary processes. Catalysts can be used in a different phase (heterogeneous) or in the same phase (homogeneous) as the reactants. In heterogeneous catalysis, typical secondary processes include coking, where the catalyst becomes covered by polymeric side products. Additionally, heterogeneous catalysts can dissolve into the solution in a solid–liquid system or evaporate in a solid–gas system. Catalysts can only speed up the reaction – chemicals that slow down the reaction are called inhibitors.[34][35] Substances that increase the activity of catalysts are called promoters, and substances that deactivate catalysts are called catalytic poisons. With a catalyst, a reaction which is kinetically inhibited by a high activation energy can take place in circumvention of this activation energy.
Heterogeneous catalysts are usually solids, powdered in order to maximize their surface area. Of particular importance in heterogeneous catalysis are the platinum group metals and other transition metals, which are used in hydrogenations, catalytic reforming and in the synthesis of commodity chemicals such as nitric acid and ammonia. Acids are an example of a homogeneous catalyst: they increase the nucleophilicity of carbonyls, allowing a reaction with electrophiles that would not otherwise proceed. The advantage of homogeneous catalysts is the ease of mixing them with the reactants, but they may also be difficult to separate from the products. Therefore, heterogeneous catalysts are preferred in many industrial processes.[36]
In organic chemistry, in addition to oxidation, reduction or acid-base reactions, a number of other reactions can take place which involve covalent bonds between carbon atoms or carbon and heteroatoms (such as oxygen, nitrogen, halogens, etc.). Many specific reactions in organic chemistry are name reactions designated after their discoverers.
In a substitution reaction, a functional group in a particular chemical compound is replaced by another group.[37] These reactions can be distinguished by the type of substituting species into a nucleophilic, electrophilic or radical substitution.
In the first type, a nucleophile, an atom or molecule with an excess of electrons and thus a negative charge or partial charge, replaces another atom or part of the "substrate" molecule. The electron pair from the nucleophile attacks the substrate forming a new bond, while the leaving group departs with an electron pair. The nucleophile may be electrically neutral or negatively charged, whereas the substrate is typically neutral or positively charged. Examples of nucleophiles are hydroxide ion, alkoxides, amines and halides. This type of reaction is found mainly in aliphatic hydrocarbons, and rarely in aromatic hydrocarbons. The latter have high electron density and enter nucleophilic aromatic substitution only with very strong electron withdrawing groups. Nucleophilic substitution can take place by two different mechanisms, SN1 and SN2. In their names, S stands for substitution, N for nucleophilic, and the number represents the kinetic order of the reaction, unimolecular or bimolecular.[38]
The SN1 reaction proceeds in two steps. First, the leaving group is eliminated creating a carbocation. This is followed by a rapid reaction with the nucleophile.[39]
In the SN2 mechanism, the nucleophile forms a transition state with the attacked molecule, and only then the leaving group is cleaved. These two mechanisms differ in the stereochemistry of the products. SN1 leads to the non-stereospecific addition and does not result in a chiral center, but rather in a set of geometric isomers (cis/trans). In contrast, a reversal (Walden inversion) of the previously existing stereochemistry is observed in the SN2 mechanism.[40]
Electrophilic substitution is the counterpart of the nucleophilic substitution in that the attacking atom or molecule, an electrophile, has low electron density and thus a positive charge. Typical electrophiles are the carbon atom of carbonyl groups, carbocations or sulfur or nitronium cations. This reaction takes place almost exclusively in aromatic hydrocarbons, where it is called electrophilic aromatic substitution. The electrophile attack results in the so-called σ-complex, a transition state in which the aromatic system is abolished. Then, the leaving group, usually a proton, is split off and the aromaticity is restored. An alternative to aromatic substitution is electrophilic aliphatic substitution. It is similar to the nucleophilic aliphatic substitution and also has two major types, SE1 and SE2.[41]
In the third type of substitution reaction, radical substitution, the attacking particle is a radical.[37] This process usually takes the form of a chain reaction, for example in the reaction of alkanes with halogens. In the first step, light or heat disintegrates the halogen-containing molecules producing the radicals. Then the reaction proceeds as an avalanche until two radicals meet and recombine.[42]
The addition and its counterpart, the elimination, are reactions which change the number of substituents on the carbon atom, and form or cleave multiple bonds. Double and triple bonds can be produced by eliminating a suitable leaving group. Similar to the nucleophilic substitution, there are several possible reaction mechanisms which are named after the respective reaction order. In the E1 mechanism, the leaving group is ejected first, forming a carbocation. The next step, formation of the double bond, takes place with elimination of a proton (deprotonation). The leaving order is reversed in the E1cb mechanism, that is the proton is split off first. This mechanism requires participation of a base.[43] Because of the similar conditions, both reactions in the E1 or E1cb elimination always compete with the SN1 substitution.[44]
The E2 mechanism also requires a base, but there the attack of the base and the elimination of the leaving group proceed simultaneously and produce no ionic intermediate. In contrast to the E1 eliminations, different stereochemical configurations are possible for the reaction product in the E2 mechanism, because the attack of the base preferentially occurs in the anti-position with respect to the leaving group. Because of the similar conditions and reagents, the E2 elimination is always in competition with the SN2-substitution.[45]
The counterpart of elimination is the addition where double or triple bonds are converted into single bonds. Similar to the substitution reactions, there are several types of additions distinguished by the type of the attacking particle. For example, in the electrophilic addition of hydrogen bromide, an electrophile (proton) attacks the double bond forming a carbocation, which then reacts with the nucleophile (bromine). The carbocation can be formed on either side of the double bond depending on the groups attached to its ends, and the preferred configuration can be predicted with the Markovnikov's rule.[46] This rule states that "In the heterolytic addition of a polar molecule to an alkene or alkyne, the more electronegative (nucleophilic) atom (or part) of the polar molecule becomes attached to the carbon atom bearing the smaller number of hydrogen atoms."[47]
If the addition of a functional group takes place at the less substituted carbon atom of the double bond, then the electrophilic substitution with acids is not possible. In this case, one has to use the hydroboration–oxidation reaction, where in the first step, the boron atom acts as electrophile and adds to the less substituted carbon atom. At the second step, the nucleophilic hydroperoxide or halogen anion attacks the boron atom.[48]
While the addition to the electron-rich alkenes and alkynes is mainly electrophilic, the nucleophilic addition plays an important role for the carbon-heteroatom multiple bonds, and especially its most important representative, the carbonyl group. This process is often associated with an elimination, so that after the reaction the carbonyl group is present again. It is therefore called an addition-elimination reaction and may occur in carboxylic acid derivatives such as chlorides, esters or anhydrides. This reaction is often catalyzed by acids or bases, where the acids increase the electrophilicity of the carbonyl group by binding to the oxygen atom, whereas the bases enhance the nucleophilicity of the attacking nucleophile.[49]
Nucleophilic addition of a carbanion or another nucleophile to the double bond of an alpha, beta unsaturated carbonyl compound can proceed via the Michael reaction, which belongs to the larger class of conjugate additions. This is one of the most useful methods for the mild formation of C–C bonds.[50][51][52]
Some additions which cannot be executed with nucleophiles and electrophiles can be accomplished with free radicals. As with the free-radical substitution, the radical addition proceeds as a chain reaction, and such reactions are the basis of the free-radical polymerization.[53]
In a rearrangement reaction, the carbon skeleton of a molecule is rearranged to give a structural isomer of the original molecule. These include hydride shift reactions such as the Wagner-Meerwein rearrangement, where a hydrogen, alkyl or aryl group migrates from one carbon to a neighboring carbon. Most rearrangements are associated with the breaking and formation of new carbon-carbon bonds. Other examples are sigmatropic reactions such as the Cope rearrangement.[54]
Cyclic rearrangements include cycloadditions and, more generally, pericyclic reactions, wherein two or more double bond-containing molecules form a cyclic molecule. An important example of cycloaddition reaction is the Diels–Alder reaction (the so-called [4+2] cycloaddition) between a conjugated diene and a substituted alkene to form a substituted cyclohexene system.[55]
Whether a certain cycloaddition would proceed depends on the electronic orbitals of the participating species, as only orbitals with the same sign of wave function will overlap and interact constructively to form new bonds. Cycloaddition is usually assisted by light or heat. These perturbations result in different arrangements of electrons in the excited state of the involved molecules and therefore in different effects. For example, the [4+2] Diels-Alder reactions can be assisted by heat whereas the [2+2] cycloaddition is selectively induced by light.[56] Because of the orbital character, the potential for developing stereoisomeric products upon cycloaddition is limited, as described by the Woodward–Hoffmann rules.[57]
Biochemical reactions are mainly controlled by enzymes. These proteins can specifically catalyze a single reaction, so that reactions can be controlled very precisely. The reaction takes place in the active site, a small part of the enzyme which is usually found in a cleft or pocket lined by amino acid residues, and the rest of the enzyme is used mainly for stabilization. The catalytic action of enzymes relies on several mechanisms including the molecular shape ("induced fit"), bond strain, proximity and orientation of molecules relative to the enzyme, proton donation or withdrawal (acid/base catalysis), electrostatic interactions and many others.[58]
The biochemical reactions that occur in living organisms are collectively known as metabolism. Among the most important of its mechanisms is the anabolism, in which different DNA and enzyme-controlled processes result in the production of large molecules such as proteins and carbohydrates from smaller units.[59] Bioenergetics studies the sources of energy for such reactions. An important energy source is glucose, which can be produced by plants via photosynthesis or assimilated from food. All organisms use this energy to produce adenosine triphosphate (ATP), which can then be used to energize other reactions.
Chemical reactions are central to chemical engineering where they are used for the synthesis of new compounds from natural raw materials such as petroleum and mineral ores. It is essential to make the reaction as efficient as possible, maximizing the yield and minimizing the amount of reagents, energy inputs and waste. Catalysts are especially helpful for reducing the energy required for the reaction and increasing its reaction rate.[60][61]
Some specific reactions have their niche applications. For example, the thermite reaction is used to generate light and heat in pyrotechnics and welding. Although it is less controllable than the more conventional oxy-fuel welding, arc welding and flash welding, it requires much less equipment and is still used to mend rails, especially in remote areas.[62]
Mechanisms of monitoring chemical reactions depend strongly on the reaction rate. Relatively slow processes can be analyzed in situ for the concentrations and identities of the individual ingredients. Important tools of real time analysis are the measurement of pH and analysis of optical absorption (color) and emission spectra. A less accessible but rather efficient method is introduction of a radioactive isotope into the reaction and monitoring how it changes over time and where it moves to; this method is often used to analyze redistribution of substances in the human body. Faster reactions are usually studied with ultrafast laser spectroscopy, where utilization of femtosecond lasers allows short-lived transition states to be monitored at time scales down to a few femtoseconds.[63]
en/494.html.txt
ADDED
@@ -0,0 +1,174 @@
Ibn Sina (Persian: ابن سینا), also known as Abu Ali Sina (ابوعلی سینا), Pur Sina (پورسینا), and often known in the West as Avicenna (/ˌævɪˈsɛnə, ˌɑːvɪ-/; c. 980 – June 1037), was a Persian[7][8][9] polymath who is regarded as one of the most significant physicians, astronomers, thinkers and writers of the Islamic Golden Age,[10] and the father of early modern medicine.[11][12][13] Avicenna is also called "the most influential philosopher of the pre-modern era".[14] He was a Peripatetic philosopher influenced by Aristotelian philosophy. Of the 450 works he is believed to have written, around 240 have survived, including 150 on philosophy and 40 on medicine.[15]
His most famous works are The Book of Healing, a philosophical and scientific encyclopedia, and The Canon of Medicine, a medical encyclopedia[16][17][18] which became a standard medical text at many medieval universities[19] and remained in use as late as 1650.[20]
Besides philosophy and medicine, Avicenna's corpus includes writings on astronomy, alchemy, geography and geology, psychology, Islamic theology, logic, mathematics, physics and works of poetry.[21]
Avicenna is a Latin corruption of the Arabic patronym ibn Sīnā (ابن سينا),[22] meaning "Son of Sina". However, Avicenna was not the son but the great-great-grandson of a man named Sina.[23] His formal Arabic name was Abū ʿAlī al-Ḥusayn ibn ʿAbdillāh ibn al-Ḥasan ibn ʿAlī ibn Sīnā[24] (أبو علي الحسين بن عبد الله بن الحسن بن علي بن سينا).
Ibn Sina created an extensive corpus of works during what is commonly known as the Islamic Golden Age, in which the translations of Greco-Roman, Persian, and Indian texts were studied extensively. Greco-Roman (Mid- and Neo-Platonic, and Aristotelian) texts translated by the Kindi school were commented, redacted and developed substantially by Islamic intellectuals, who also built upon Persian and Indian mathematical systems, astronomy, algebra, trigonometry and medicine.[25] The Samanid dynasty in the eastern part of Persia, Greater Khorasan and Central Asia as well as the Buyid dynasty in the western part of Persia and Iraq provided a thriving atmosphere for scholarly and cultural development. Under the Samanids, Bukhara rivaled Baghdad as a cultural capital of the Islamic world.[26] There, the study of the Quran and the Hadith thrived. Philosophy, Fiqh and theology (kalaam) were further developed, most noticeably by Avicenna and his opponents. Al-Razi and Al-Farabi had provided methodology and knowledge in medicine and philosophy. Avicenna had access to the great libraries of Balkh, Khwarezm, Gorgan, Rey, Isfahan and Hamadan. Various texts (such as the 'Ahd with Bahmanyar) show that he debated philosophical points with the greatest scholars of the time. Aruzi Samarqandi describes how before Avicenna left Khwarezm he had met Al-Biruni (a famous scientist and astronomer), Abu Nasr Iraqi (a renowned mathematician), Abu Sahl Masihi (a respected philosopher) and Abu al-Khayr Khammar (a great physician).
Avicenna was born c. 980 in Afshana, a village near Bukhara (in present-day Uzbekistan), the capital of the Samanids, a Persian dynasty in Central Asia and Greater Khorasan. His mother, named Sitāra, was from Bukhara.[27] While, according to most scholars, most of Avicenna's family were Sunnis,[28] his father, Abdullāh, was a respected scholar from Balkh who might have converted to Ismailism.[28][29][30] Balkh was an important town of the Samanid Empire, in what is today Balkh Province, Afghanistan.[28] His father worked for the Samanid government in the village of Kharmasain, a Sunni regional power. After five years, his younger brother, Mahmoud, was born. Avicenna first began to learn the Quran and literature, and by the time he was ten years old he had essentially learned all of them.[31]
According to his autobiography, Avicenna had memorised the entire Quran by the age of 10.[32] He learned Indian arithmetic from an Indian greengrocer, Mahmoud Massahi,[33] and he began to learn more from a wandering scholar who gained a livelihood by curing the sick and teaching the young.[34] He also studied Fiqh (Islamic jurisprudence) under the Sunni Hanafi scholar Ismail al-Zahid.[35] Avicenna was also taught some philosophy, from books such as Porphyry's Introduction (Isagoge), Euclid's Elements and Ptolemy's Almagest, by an unpopular philosopher, Abu Abdullah Nateli, who claimed to be a philosopher.[36]
As a teenager, he was greatly troubled by the Metaphysics of Aristotle, which he could not understand until he read al-Farabi's commentary on the work.[30] For the next year and a half, he studied philosophy, in which he encountered greater obstacles. In such moments of baffled inquiry, he would leave his books, perform the requisite ablutions, then go to the mosque, and continue in prayer till light broke on his difficulties. Deep into the night, he would continue his studies, and even in his dreams problems would pursue him and work out their solution. Forty times, it is said, he read through the Metaphysics of Aristotle, till the words were imprinted on his memory; but their meaning was hopelessly obscure to him until he purchased a brief commentary by al-Farabi from a bookstall for three dirhams (a very low price at the time). So great was his joy at the discovery, made with the help of a work from which he had expected only mystery, that he hastened to return thanks to God, and bestowed alms upon the poor.[37]
He turned to medicine at 16, and not only learned medical theory, but also by gratuitous attendance of the sick had, according to his own account, discovered new methods of treatment.[37] The teenager achieved full status as a qualified physician at age 18,[32] and found that "Medicine is no hard and thorny science, like mathematics and metaphysics, so I soon made great progress; I became an excellent doctor and began to treat patients, using approved remedies." The youthful physician's fame spread quickly, and he treated many patients without asking for payment.
A number of theories have been proposed regarding Avicenna's madhab (school of thought within Islamic jurisprudence). Medieval historian Ẓahīr al-dīn al-Bayhaqī (d. 1169) considered Avicenna to be a follower of the Brethren of Purity.[38] On the other hand, Dimitri Gutas along with Aisha Khan and Jules J. Janssens demonstrated that Avicenna was a Sunni Hanafi.[28][38] Avicenna studied Hanafi law, many of his notable teachers were Hanafi jurists, and he served under the Hanafi court of Ali ibn Mamun.[39][28] Avicenna said at an early age that he remained "unconvinced" by Ismaili missionary attempts to convert him.[28] However, according to Seyyed Hossein Nasr, the 14th-century Shia faqih Nurullah Shushtari claimed he was a Twelver Shia.[40] Conversely, Sharaf Khorasani, citing a rejection of an invitation of the Sunni Governor Sultan Mahmoud Ghazanavi by Avicenna to his court, believes that Avicenna was an Ismaili.[41] Similar disagreements exist regarding the background of Avicenna's family: whereas most writers considered them Sunni, recent Shiite writers have contended that they were Shia.[28]
Avicenna's first appointment was that of physician to the emir, Nuh II, who owed him his recovery from a dangerous illness (997). Ibn Sina's chief reward for this service was access to the royal library of the Samanids, well-known patrons of scholarship and scholars. When the library was destroyed by fire not long after, the enemies of Ibn Sina accused him of burning it, in order for ever to conceal the sources of his knowledge. Meanwhile, he assisted his father in his financial labors, but still found time to write some of his earliest works.[37]
At 22 years old, Avicenna lost his father. The Samanid dynasty came to its end in December 1004. Avicenna seems to have declined the offers of Mahmud of Ghazni, and proceeded westwards to Urgench in modern Turkmenistan, where the vizier, regarded as a friend of scholars, gave him a small monthly stipend. The pay was small, however, so Ibn Sina wandered from place to place through the districts of Nishapur and Merv to the borders of Khorasan, seeking an opening for his talents. Qabus, the generous ruler of Tabaristan, himself a poet and a scholar, with whom Ibn Sina had expected to find asylum, was on about that date (1012) starved to death by his troops who had revolted. Avicenna himself was at this time stricken by a severe illness. Finally, at Gorgan, near the Caspian Sea, Avicenna met with a friend, who bought a dwelling near his own house in which Avicenna lectured on logic and astronomy. Several of his treatises were written for this patron; and the commencement of his Canon of Medicine also dates from his stay in Hyrcania.[37]
Avicenna subsequently settled at Rey, in the vicinity of modern Tehran, the home town of Rhazes; where Majd Addaula, a son of the last Buwayhid emir, was nominal ruler under the regency of his mother (Seyyedeh Khatun). About thirty of Ibn Sina's shorter works are said to have been composed in Rey. Constant feuds which raged between the regent and her second son, Shams al-Daula, however, compelled the scholar to quit the place. After a brief sojourn at Qazvin he passed southwards to Hamadãn where Shams al-Daula, another Buwayhid emir, had established himself. At first, Ibn Sina entered into the service of a high-born lady; but the emir, hearing of his arrival, called him in as medical attendant, and sent him back with presents to his dwelling. Ibn Sina was even raised to the office of vizier. The emir decreed that he should be banished from the country. Ibn Sina, however, remained hidden for forty days in sheikh Ahmed Fadhel's house, until a fresh attack of illness induced the emir to restore him to his post. Even during this perturbed time, Ibn Sina persevered with his studies and teaching. Every evening, extracts from his great works, the Canon and the Sanatio, were dictated and explained to his pupils. On the death of the emir, Ibn Sina ceased to be vizier and hid himself in the house of an apothecary, where, with intense assiduity, he continued the composition of his works.[37]
Meanwhile, he had written to Abu Ya'far, the prefect of the dynamic city of Isfahan, offering his services. The new emir of Hamadan, hearing of this correspondence and discovering where Ibn Sina was hiding, incarcerated him in a fortress. War meanwhile continued between the rulers of Isfahan and Hamadãn; in 1024 the former captured Hamadan and its towns, expelling the Tajik mercenaries. When the storm had passed, Ibn Sina returned with the emir to Hamadan, and carried on his literary labors. Later, however, accompanied by his brother, a favorite pupil, and two slaves, Ibn Sina escaped from the city in the dress of a Sufi ascetic. After a perilous journey, they reached Isfahan, receiving an honorable welcome from the prince.[37]
The remaining ten or twelve years of Ibn Sīnā's life were spent in the service of the Kakuyid ruler Muhammad ibn Rustam Dushmanziyar (also known as Ala al-Dawla), whom he accompanied as physician and general literary and scientific adviser, even in his numerous campaigns.[37]
During these years he began to study literary matters and philology, instigated, it is asserted, by criticisms on his style. A severe colic, which seized him on the march of the army against Hamadan, was checked by remedies so violent that Ibn Sina could scarcely stand. On a similar occasion the disease returned; with difficulty he reached Hamadan, where, finding the disease gaining ground, he refused to keep up the regimen imposed, and resigned himself to his fate.[37]
His friends advised him to slow down and take life moderately. He refused, however, stating that: "I prefer a short life with width to a narrow one with length".[42] On his deathbed remorse seized him; he bestowed his goods on the poor, restored unjust gains, freed his slaves, and read through the Quran every three days until his death.[37][43] He died in June 1037, in his fifty-sixth year, in the month of Ramadan, and was buried in Hamadan, Iran.[43]
Ibn Sīnā wrote extensively on early Islamic philosophy, especially the subjects logic, ethics, and metaphysics, including treatises named Logic and Metaphysics. Most of his works were written in Arabic – then the language of science in the Middle East – and some in Persian. Of linguistic significance even to this day are a few books that he wrote in nearly pure Persian language (particularly the Danishnamah-yi 'Ala', Philosophy for Ala' ad-Dawla'). Ibn Sīnā's commentaries on Aristotle often criticized the philosopher,[citation needed] encouraging a lively debate in the spirit of ijtihad.
Avicenna's Neoplatonic scheme of "emanations" became fundamental in the Kalam (school of theological discourse) in the 12th century.[44]
His Book of Healing became available in Europe in partial Latin translation some fifty years after its composition, under the title Sufficientia, and some authors have identified a "Latin Avicennism" as flourishing for some time, paralleling the more influential Latin Averroism, but suppressed by the Parisian decrees of 1210 and 1215.[45]
Avicenna's psychology and theory of knowledge influenced William of Auvergne, Bishop of Paris[46] and Albertus Magnus,[46] while his metaphysics influenced the thought of Thomas Aquinas.[46]
Early Islamic philosophy and Islamic metaphysics, imbued as it is with Islamic theology, distinguishes more clearly than Aristotelianism between essence and existence. Whereas existence is the domain of the contingent and the accidental, essence endures within a being beyond the accidental. The philosophy of Ibn Sīnā, particularly that part relating to metaphysics, owes much to al-Farabi. The search for a definitive Islamic philosophy separate from Occasionalism can be seen in what is left of his work.
Following al-Farabi's lead, Avicenna initiated a full-fledged inquiry into the question of being, in which he distinguished between essence (Mahiat) and existence (Wujud). He argued that the fact of existence cannot be inferred from or accounted for by the essence of existing things, and that form and matter by themselves cannot interact and originate the movement of the universe or the progressive actualization of existing things. Existence must, therefore, be due to an agent-cause that necessitates, imparts, gives, or adds existence to an essence. To do so, the cause must be an existing thing and coexist with its effect.[47]
Avicenna's consideration of the essence-attributes question may be elucidated in terms of his ontological analysis of the modalities of being; namely impossibility, contingency, and necessity. Avicenna argued that the impossible being is that which cannot exist, while the contingent in itself (mumkin bi-dhatihi) has the potentiality to be or not to be without entailing a contradiction. When actualized, the contingent becomes a 'necessary existent due to what is other than itself' (wajib al-wujud bi-ghayrihi). Thus, contingency-in-itself is potential beingness that could eventually be actualized by an external cause other than itself. The metaphysical structures of necessity and contingency are different. Necessary being due to itself (wajib al-wujud bi-dhatihi) is true in itself, while the contingent being is 'false in itself' and 'true due to something else other than itself'. The necessary is the source of its own being without borrowed existence. It is what always exists.[48][49]
The Necessary exists 'due-to-Its-Self', and has no quiddity/essence (mahiyya) other than existence (wujud). Furthermore, It is 'One' (wahid ahad)[50] since there cannot be more than one 'Necessary-Existent-due-to-Itself' without differentia (fasl) to distinguish them from each other. Yet, to require differentia entails that they exist 'due-to-themselves' as well as 'due to what is other than themselves'; and this is contradictory. However, if no differentia distinguishes them from each other, then there is no sense in which these 'Existents' are not one and the same.[51] Avicenna adds that the 'Necessary-Existent-due-to-Itself' has no genus (jins), nor a definition (hadd), nor a counterpart (nadd), nor an opposite (did), and is detached (bari) from matter (madda), quality (kayf), quantity (kam), place (ayn), situation (wad), and time (waqt).[52][53][54]
Avicenna's theology on metaphysical issues (ilāhiyyāt) has been criticized by some Islamic scholars, among them al-Ghazali, Ibn Taymiyya, and Ibn al-Qayyim.[55][page needed] While discussing the views of the theists among the Greek philosophers, namely Socrates, Plato, and Aristotle in Al-Munqidh min ad-Dalal ("Deliverance from Error"), al-Ghazali noted that the Greek philosophers "must be taxed with unbelief, as must their partisans among the Muslim philosophers, such as Ibn Sina and al-Farabi and their likes." He added that "None, however, of the Muslim philosophers engaged so much in transmitting Aristotle's lore as did the two men just mentioned. [...] The sum of what we regard as the authentic philosophy of Aristotle, as transmitted by al-Farabi and Ibn Sina, can be reduced to three parts: a part which must be branded as unbelief; a part which must be stigmatized as innovation; and a part which need not be repudiated at all."[56]
Avicenna made an argument for the existence of God which would be known as the "Proof of the Truthful" (Arabic: al-burhan al-siddiqin). Avicenna argued that there must be a "necessary existent" (Arabic: wajib al-wujud), an entity that cannot not exist[57] and through a series of arguments, he identified it with the Islamic conception of God.[58] Present-day historian of philosophy Peter Adamson called this argument one of the most influential medieval arguments for God's existence, and Avicenna's biggest contribution to the history of philosophy.[57]
Correspondence between Ibn Sina (with his student Ahmad ibn 'Ali al-Ma'sumi) and Al-Biruni has survived in which they debated Aristotelian natural philosophy and the Peripatetic school. Abu Rayhan began by asking Avicenna eighteen questions, ten of which were criticisms of Aristotle's On the Heavens.[59]
Avicenna was a devout Muslim and sought to reconcile rational philosophy with Islamic theology. His aim was to prove the existence of God and His creation of the world scientifically and through reason and logic.[60] Avicenna's views on Islamic theology (and philosophy) were enormously influential, forming part of the core of the curriculum at Islamic religious schools until the 19th century.[61] Avicenna wrote a number of short treatises dealing with Islamic theology. These included treatises on the prophets (whom he viewed as "inspired philosophers"), and also on various scientific and philosophical interpretations of the Quran, such as how Quranic cosmology corresponds to his own philosophical system. In general these treatises linked his philosophical writings to Islamic religious ideas; for example, the body's afterlife.
There are occasional brief hints and allusions in his longer works however that Avicenna considered philosophy as the only sensible way to distinguish real prophecy from illusion. He did not state this more clearly because of the political implications of such a theory, if prophecy could be questioned, and also because most of the time he was writing shorter works which concentrated on explaining his theories on philosophy and theology clearly, without digressing to consider epistemological matters which could only be properly considered by other philosophers.[62]
Later interpretations of Avicenna's philosophy split into three different schools; those (such as al-Tusi) who continued to apply his philosophy as a system to interpret later political events and scientific advances; those (such as al-Razi) who considered Avicenna's theological works in isolation from his wider philosophical concerns; and those (such as al-Ghazali) who selectively used parts of his philosophy to support their own attempts to gain greater spiritual insights through a variety of mystical means. It was the theological interpretation championed by those such as al-Razi which eventually came to predominate in the madrasahs.[63]
Avicenna memorized the Quran by the age of ten, and as an adult, he wrote five treatises commenting on suras from the Quran. One of these texts included the Proof of Prophecies, in which he comments on several Quranic verses and holds the Quran in high esteem. Avicenna argued that the Islamic prophets should be considered higher than philosophers.[64]
While he was imprisoned in the castle of Fardajan near Hamadhan, Avicenna wrote his famous "Floating Man" – literally falling man – a thought experiment to demonstrate human self-awareness and the substantiality and immateriality of the soul. Avicenna believed his "Floating Man" thought experiment demonstrated that the soul is a substance, and claimed humans cannot doubt their own consciousness, even in a situation that prevents all sensory data input. The thought experiment told its readers to imagine themselves created all at once while suspended in the air, isolated from all sensations, which includes no sensory contact with even their own bodies. He argued that, in this scenario, one would still have self-consciousness. Because it is conceivable that a person, suspended in air while cut off from sense experience, would still be capable of determining his own existence, the thought experiment points to the conclusions that the soul is a perfection, independent of the body, and an immaterial substance.[65] The conceivability of this "Floating Man" indicates that the soul is perceived intellectually, which entails the soul's separateness from the body. Avicenna referred to the living human intelligence, particularly the active intellect, which he believed to be the hypostasis by which God communicates truth to the human mind and imparts order and intelligibility to nature. Following is an English translation of the argument:
One of us (i.e. a human being) should be imagined as having been created in a single stroke; created perfect and complete but with his vision obscured so that he cannot perceive external entities; created falling through air or a void, in such a manner that he is not struck by the firmness of the air in any way that compels him to feel it, and with his limbs separated so that they do not come in contact with or touch each other. Then contemplate the following: can he be assured of the existence of himself? He does not have any doubt in that his self exists, without thereby asserting that he has any exterior limbs, nor any internal organs, neither heart nor brain, nor any one of the exterior things at all; but rather he can affirm the existence of himself, without thereby asserting there that this self has any extension in space. Even if it were possible for him in that state to imagine a hand or any other limb, he would not imagine it as being a part of his self, nor as a condition for the existence of that self; for as you know that which is asserted is different from that which is not asserted and that which is inferred is different from that which is not inferred. Therefore the self, the existence of which has been asserted, is a unique characteristic, in as much that it is not as such the same as the body or the limbs, which have not been ascertained. Thus that which is ascertained (i.e. the self), does have a way of being sure of the existence of the soul as something other than the body, even something non-bodily; this he knows, this he should understand intuitively, if it is that he is ignorant of it and needs to be beaten with a stick [to realize it].
However, Avicenna posited the brain as the place where reason interacts with sensation. Sensation prepares the soul to receive rational concepts from the universal Agent Intellect. The first knowledge of the flying person would be "I am," affirming his or her essence. That essence could not be the body, obviously, as the flying person has no sensation. Thus, the knowledge that "I am" is the core of a human being: the soul exists and is self-aware.[67] Avicenna thus concluded that the idea of the self is not logically dependent on any physical thing, and that the soul should not be seen in relative terms, but as a primary given, a substance. The body is unnecessary; in relation to it, the soul is its perfection.[68][69][70] In itself, the soul is an immaterial substance.[71]
Avicenna authored a five-volume medical encyclopedia: The Canon of Medicine (Al-Qanun fi't-Tibb). It was used as the standard medical textbook in the Islamic world and Europe up to the 18th century.[72][73] The Canon still plays an important role in Unani medicine.[74]
Avicenna considered whether events like rare diseases or disorders have natural causes.[75] He used the example of polydactyly to explain his perception that causal reasons exist for all medical events. This view of medical phenomena anticipated developments in the Enlightenment by seven centuries.[76]
Ibn Sīnā wrote on Earth sciences such as geology in The Book of Healing.[77] While discussing the formation of mountains, he explained:
Either they are the effects of upheavals of the crust of the earth, such as might occur during a violent earthquake, or they are the effect of water, which, cutting itself a new route, has denuded the valleys, the strata being of different kinds, some soft, some hard ... It would require a long period of time for all such changes to be accomplished, during which the mountains themselves might be somewhat diminished in size.[77]
In the Al-Burhan (On Demonstration) section of The Book of Healing, Avicenna discussed the philosophy of science and described an early scientific method of inquiry. He discusses Aristotle's Posterior Analytics and significantly diverged from it on several points. Avicenna discussed the issue of a proper methodology for scientific inquiry and the question of "How does one acquire the first principles of a science?" He asked how a scientist would arrive at "the initial axioms or hypotheses of a deductive science without inferring them from some more basic premises?" He explains that the ideal situation is when one grasps that a "relation holds between the terms, which would allow for absolute, universal certainty". Avicenna then adds two further methods for arriving at the first principles: the ancient Aristotelian method of induction (istiqra), and the method of examination and experimentation (tajriba). Avicenna criticized Aristotelian induction, arguing that "it does not lead to the absolute, universal, and certain premises that it purports to provide." In its place, he develops a "method of experimentation as a means for scientific inquiry."[78]
|
81 |
+
|
82 |
+
An early formal system of temporal logic was studied by Avicenna.[79] Although he did not develop a real theory of temporal propositions, he did study the relationship between temporalis and the implication.[80] Avicenna's work was further developed by Najm al-Dīn al-Qazwīnī al-Kātibī and became the dominant system of Islamic logic until modern times.[81][82] Avicennian logic also influenced several early European logicians such as Albertus Magnus[83] and William of Ockham.[84][85] Avicenna endorsed the law of noncontradiction proposed by Aristotle, that a fact could not be both true and false at the same time and in the same sense of the terminology used. He stated, "Anyone who denies the law of noncontradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned."[86]
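The law of noncontradiction that Avicenna defended can be stated formally as ¬(p ∧ ¬p). As a small illustration only (a modern sketch in Python, not anything drawn from Avicenna's or al-Kātibī's own formal systems), the check below confirms that the formula holds under every truth assignment:

    def law_of_noncontradiction(p: bool) -> bool:
        # "p and not p" can never be true, so its negation always holds.
        return not (p and not p)

    # Exhaustive check over the only two possible truth values of p.
    assert all(law_of_noncontradiction(p) for p in (True, False))
    print("not (p and not p) is true for every truth value of p")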
|
83 |
+
|
84 |
+
In mechanics, Ibn Sīnā, in The Book of Healing, developed a theory of motion, in which he made a distinction between the inclination (tendency to motion) and force of a projectile, and concluded that motion was a result of an inclination (mayl) transferred to the projectile by the thrower, and that projectile motion in a vacuum would not cease.[87] He viewed inclination as a permanent force whose effect is dissipated by external forces such as air resistance.[88]
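Stated in modern terms, the contrast Avicenna drew can be illustrated numerically: an imparted inclination that is eroded by air resistance dies away, while in a vacuum the motion persists. The following Python sketch is purely illustrative, with assumed parameter values; it is not a reconstruction of Avicenna's own, qualitative account.

    def final_speed(v0, drag, steps, dt=0.1):
        """Speed after `steps` time steps when resistance erodes the motion."""
        v = v0
        for _ in range(steps):
            v -= drag * v * dt  # external resistance dissipates the inclination
        return v

    with_air = final_speed(v0=10.0, drag=0.5, steps=100)   # resisted motion decays
    in_vacuum = final_speed(v0=10.0, drag=0.0, steps=100)  # unresisted motion persists
    print(f"with resistance: {with_air:.2f}, in a vacuum: {in_vacuum:.1f}")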
|
85 |
+
|
86 |
+
The theory of motion presented by Avicenna was probably influenced by the 6th-century Alexandrian scholar John Philoponus. Avicenna's is a less sophisticated variant of the theory of impetus developed by Buridan in the 14th century. It is unclear if Buridan was influenced by Avicenna, or by Philoponus directly.[89]
|
87 |
+
|
88 |
+
In optics, Ibn Sina was among those who argued that light had a speed, observing that "if the perception of light is due to the emission of some sort of particles by a luminous source, the speed of light must be finite."[90] He also offered an explanation of the rainbow phenomenon, though an incorrect one. Carl Benjamin Boyer described Avicenna's ("Ibn Sīnā") theory on the rainbow as follows:
|
89 |
+
|
90 |
+
Independent observation had demonstrated to him that the bow is not formed in the dark cloud but rather in the very thin mist lying between the cloud and the sun or observer. The cloud, he thought, serves simply as the background of this thin substance, much as a quicksilver lining is placed upon the rear surface of the glass in a mirror. Ibn Sīnā would change the place not only of the bow, but also of the color formation, holding the iridescence to be merely a subjective sensation in the eye.[91]
|
91 |
+
|
92 |
+
In 1253, a Latin text entitled Speculum Tripartitum stated the following regarding Avicenna's theory on heat:
|
93 |
+
|
94 |
+
Avicenna says in his book of heaven and earth, that heat is generated from motion in external things.[92]
|
95 |
+
|
96 |
+
Avicenna's legacy in classical psychology is primarily embodied in the Kitab al-nafs parts of his Kitab al-shifa (The Book of Healing) and Kitab al-najat (The Book of Deliverance). These were known in Latin under the title De Anima (treatises "on the soul"). Notably, Avicenna develops what is called the Flying Man argument in the Psychology of The Cure I.1.7 in defense of the argument that the soul is without quantitative extension, which has an affinity with Descartes's cogito argument (or what phenomenology designates as a form of an "epoche").[68][69]
|
97 |
+
|
98 |
+
Avicenna's psychology requires that connection between the body and soul be strong enough to ensure the soul's individuation, but weak enough to allow for its immortality. Avicenna grounds his psychology on physiology, which means his account of the soul is one that deals almost entirely with the natural science of the body and its abilities of perception. Thus, the philosopher's connection between the soul and body is explained almost entirely by his understanding of perception; in this way, bodily perception interrelates with the immaterial human intellect. In sense perception, the perceiver senses the form of the object; first, by perceiving features of the object by our external senses. This sensory information is supplied to the internal senses, which merge all the pieces into a whole, unified conscious experience. This process of perception and abstraction is the nexus of the soul and body, for the material body may only perceive material objects, while the immaterial soul may only receive the immaterial, universal forms. The way the soul and body interact in the final abstraction of the universal from the concrete particular is the key to their relationship and interaction, which takes place in the physical body.[93]
|
99 |
+
|
100 |
+
The soul completes the action of intellection by accepting forms that have been abstracted from matter. This process requires a concrete particular (material) to be abstracted into the universal intelligible (immaterial). The material and immaterial interact through the Active Intellect, which is a "divine light" containing the intelligible forms.[94] The Active Intellect reveals the universals concealed in material objects much like the sun makes color available to our eyes.
|
101 |
+
|
102 |
+
Avicenna wrote an attack on astrology titled Resāla fī ebṭāl aḥkām al-nojūm, in which he cited passages from the Quran to dispute the power of astrology to foretell the future.[95] He believed that each planet had some influence on the earth, but argued against astrologers being able to determine the exact effects.[96]
|
103 |
+
|
104 |
+
Avicenna's astronomical writings had some influence on later writers, although in general his work could be considered less developed than that of Alhazen or Al-Biruni. One important feature of his writing is that he treats mathematical astronomy as a discipline separate from astrology.[97] He criticized Aristotle's view of the stars receiving their light from the Sun, stating that the stars are self-luminous, and believed that the planets are also self-luminous.[98] He claimed to have observed Venus as a spot on the Sun. This is possible, as there was a transit on May 24, 1032, but Avicenna did not give the date of his observation, and modern scholars have questioned whether he could have observed the transit from his location at that time; he may have mistaken a sunspot for Venus. He used his transit observation to help establish that Venus was, at least sometimes, below the Sun in Ptolemaic cosmology,[97] i.e. the sphere of Venus comes before the sphere of the Sun when moving out from the Earth in the prevailing geocentric model.[99][100]
|
105 |
+
|
106 |
+
He also wrote the Summary of the Almagest, (based on Ptolemy's Almagest), with an appended treatise "to bring that which is stated in the Almagest and what is understood from Natural Science into conformity". For example, Avicenna considers the motion of the solar apogee, which Ptolemy had taken to be fixed.[97]
|
107 |
+
|
108 |
+
Ibn Sīnā used steam distillation to produce essential oils such as rose essence, which he used as aromatherapeutic treatments for heart conditions.[101][102]
|
109 |
+
|
110 |
+
Unlike al-Razi, Ibn Sīnā explicitly disputed the theory of the transmutation of substances commonly believed by alchemists:
|
111 |
+
|
112 |
+
Those of the chemical craft know well that no change can be effected in the different species of substances, though they can produce the appearance of such change.[103]
|
113 |
+
|
114 |
+
Four works on alchemy attributed to Avicenna were translated into Latin as:[104]
|
115 |
+
|
116 |
+
Liber Aboali Abincine de Anima in arte Alchemiae was the most influential, shaping later medieval chemists and alchemists such as Vincent of Beauvais. However, Anawati argues (following Ruska) that the de Anima is a forgery by a Spanish author. Similarly, the Declaratio is believed not to be by Avicenna. The third work (The Book of Minerals) is agreed to be Avicenna's writing, adapted from the Kitab al-Shifa (Book of the Remedy).[104]
|
117 |
+
Ibn Sina classified minerals into stones, fusible substances, sulfurs, and salts, building on the ideas of Aristotle and Jabir.[105] The epistola de Re recta is somewhat less sceptical of alchemy; Anawati argues that it is by Avicenna, but written earlier in his career when he had not yet firmly decided that transmutation was impossible.[104]
|
118 |
+
|
119 |
+
Almost half of Ibn Sīnā's works are versified.[106] His poems appear in both Arabic and Persian. As an example, Edward Granville Browne claims that the following Persian verses are incorrectly attributed to Omar Khayyám, and were originally written by Ibn Sīnā:[107]
|
120 |
+
|
121 |
+
از قعر گل سیاه تا اوج زحل
|
122 |
+
کردم همه مشکلات گیتی را حل
|
123 |
+
بیرون جستم زقید هر مکر و حیل
|
124 |
+
هر بند گشاده شد مگر بند اجل
|
125 |
+
|
126 |
+
From the depth of the black earth up to Saturn's apogee,
|
127 |
+
All the problems of the universe have been solved by me.
|
128 |
+
I have escaped from the coils of snares and deceits;
|
129 |
+
I have unraveled all knots except the knot of Death.[108]:91
|
130 |
+
|
131 |
+
Robert Wisnovsky, a scholar of Avicenna at McGill University, says that "Avicenna was the central figure in the long history of the rational sciences in Islam, particularly in the fields of metaphysics, logic and medicine", but that his influence was not confined to these "secular" fields of knowledge, as "these works, or portions of them, were read, taught, copied, commented upon, quoted, paraphrased and cited by thousands of post-Avicennian scholars — not only philosophers, logicians, physicians and specialists in the mathematical or exact sciences, but also by those who specialized in the disciplines of ʿilm al-kalām (rational theology, but understood to include natural philosophy, epistemology and philosophy of mind) and usūl al-fiqh (jurisprudence, but understood to include philosophy of law, dialectic, and philosophy of language)."[109]
|
132 |
+
|
133 |
+
Avicenna has been recognized by both East and West as one of the great figures in intellectual history, as early as the 13th century, when Dante Alighieri depicted him in Limbo in his Divine Comedy alongside virtuous non-Christian thinkers such as Virgil, Averroes, Homer, Horace, Ovid, Lucan, Socrates, Plato, and Saladin.
|
134 |
+
|
135 |
+
George Sarton, the author of The History of Science, described Ibn Sīnā as "one of the greatest thinkers and medical scholars in history"[110] and called him "the most famous scientist of Islam and one of the most famous of all races, places, and times." He was one of the Islamic world's leading writers in the field of medicine.
|
136 |
+
|
137 |
+
Along with Rhazes, Abulcasis, Ibn al-Nafis, and al-Ibadi, Ibn Sīnā is considered an important compiler of early Muslim medicine. He is remembered in the Western history of medicine as a major historical figure who made important contributions to medicine and the European Renaissance. His medical texts were unusual in that where controversy existed between Galen and Aristotle's views on medical matters (such as anatomy), he preferred to side with Aristotle, where necessary updating Aristotle's position to take into account post-Aristotelian advances in anatomical knowledge.[111] Aristotle's dominant intellectual influence among medieval European scholars meant that Avicenna's linking of Galen's medical writings with Aristotle's philosophical writings in the Canon of Medicine (along with its comprehensive and logical organisation of knowledge) significantly increased Avicenna's importance in medieval Europe in comparison to other Islamic writers on medicine. His influence following translation of the Canon was such that from the early fourteenth to the mid-sixteenth centuries he was ranked with Hippocrates and Galen as one of the acknowledged authorities, princeps medicorum ("prince of physicians").[112]
|
138 |
+
|
139 |
+
In present-day Iran, Afghanistan and Tajikistan, he is considered a national icon, and is often regarded as among the greatest Persians. A monument was erected outside the Bukhara museum. The Avicenna Mausoleum and Museum in Hamadan was built in 1952. Bu-Ali Sina University in Hamadan (Iran), the biotechnology Avicenna Research Institute in Tehran (Iran), the ibn Sīnā Tajik State Medical University in Dushanbe, Ibn Sina Academy of Medieval Medicine and Sciences at Aligarh, India, Avicenna School in Karachi and Avicenna Medical College in Lahore, Pakistan,[113] Ibne Sina Balkh Medical School in his native province of Balkh in Afghanistan, the Ibni Sina Faculty of Medicine of Ankara University in Ankara, Turkey, the main classroom building (the Avicenna Building) of the Sharif University of Technology, and Ibn Sina Integrated School in Marawi City (Philippines) are all named in his honour. His portrait hangs in the Hall of the Avicenna Faculty of Medicine in the University of Paris. A crater on the Moon and the mangrove genus Avicennia are also named after him.
|
141 |
+
|
142 |
+
In 1980, the Soviet Union, which then ruled his birthplace Bukhara, celebrated the thousandth anniversary of Avicenna's birth by circulating various commemorative stamps with artistic illustrations, and by erecting a bust of Avicenna based on anthropological research by Soviet scholars.[114]
|
143 |
+
Near his birthplace in Qishlak Afshona, some 25 km (16 mi) north of Bukhara, a training college for medical staff has been named for him. On the grounds is a museum dedicated to his life, times and work.[115]
|
144 |
+
|
145 |
+
The Avicenna Prize, established in 2003, is awarded every two years by UNESCO and rewards individuals and groups for their achievements in the field of ethics in science.[116]
|
146 |
+
The aim of the award is to promote ethical reflection on issues raised by advances in science and technology, and to raise global awareness of the importance of ethics in science.
|
147 |
+
|
148 |
+
The Avicenna Directories (2008–15; now the World Directory of Medical Schools) list universities and schools where doctors, public health practitioners, pharmacists and others, are educated. The original project team stated "Why Avicenna? Avicenna ... was ... noted for his synthesis of knowledge from both east and west. He has had a lasting influence on the development of medicine and health sciences. The use of Avicenna's name symbolises the worldwide partnership that is needed for the promotion of health services of high quality."[117]
|
149 |
+
|
150 |
+
In June 2009, Iran donated a "Persian Scholars Pavilion" to the United Nations Office in Vienna; it stands in the central Memorial Plaza of the Vienna International Center.[118] Adorned with Persian art forms and highlighting Iranian architectural features, the pavilion includes statues of four renowned Iranian scholars: Avicenna, Al-Biruni, Zakariya Razi (Rhazes) and Omar Khayyam.[119][120]
|
151 |
+
|
152 |
+
The 1982 Soviet film Youth of Genius (Russian: Юность гения, romanized: Yunost geniya) by Elyor Ishmukhamedov recounts Avicenna's younger years. The film is set in Bukhara at the turn of the millennium.[121]
|
153 |
+
|
154 |
+
In Louis L'Amour's 1985 historical novel The Walking Drum, Kerbouchard studies and discusses Avicenna's The Canon of Medicine.
|
155 |
+
|
156 |
+
In his book The Physician (1988) Noah Gordon tells the story of a young English medical apprentice who disguises himself as a Jew to travel from England to Persia and learn from Avicenna, the great master of his time. The novel was adapted into a feature film, The Physician, in 2013. Avicenna was played by Ben Kingsley.
|
157 |
+
|
158 |
+
The treatises of Ibn Sīnā influenced later Muslim thinkers in many areas including theology, philology, mathematics, astronomy, physics, and music. His works numbered almost 450 volumes on a wide range of subjects, of which around 240 have survived. In particular, 150 volumes of his surviving works concentrate on philosophy and 40 of them concentrate on medicine.[15]
|
159 |
+
His most famous works are The Book of Healing, and The Canon of Medicine.
|
160 |
+
|
161 |
+
Ibn Sīnā wrote at least one treatise on alchemy, but several others have been falsely attributed to him. His Logic, Metaphysics, Physics, and De Caelo are treatises giving a synoptic view of Aristotelian doctrine,[37] though Metaphysics demonstrates a significant departure from the brand of Neoplatonism known as Aristotelianism in Ibn Sīnā's world; Arabic philosophers have hinted at the idea that Ibn Sīnā was attempting to "re-Aristotelianise" Muslim philosophy in its entirety, unlike his predecessors, who accepted the conflation of Platonic, Aristotelian, Neo- and Middle-Platonic works transmitted into the Muslim world.
|
163 |
+
|
164 |
+
The Logic and Metaphysics have been extensively reprinted, the latter, e.g., at Venice in 1493, 1495, and 1546. Some of his shorter essays on medicine, logic, etc., take a poetical form (the poem on logic was published by Schmoelders in 1836).[122] Two encyclopedic treatises, dealing with philosophy, are often mentioned. The larger, Al-Shifa' (Sanatio), exists nearly complete in manuscript in the Bodleian Library and elsewhere; part of it on the De Anima appeared at Pavia (1490) as the Liber Sextus Naturalium, and the long account of Ibn Sina's philosophy given by Muhammad al-Shahrastani seems to be mainly an analysis, and in many places a reproduction, of the Al-Shifa'. A shorter form of the work is known as the An-najat (Liberatio). The Latin editions of part of these works have been modified by the corrections which the monastic editors confess that they applied. There is also a حكمت مشرقيه (hikmat-al-mashriqqiyya, in Latin Philosophia Orientalis), mentioned by Roger Bacon, the majority of which is lost in antiquity, which according to Averroes was pantheistic in tone.[37]
|
165 |
+
|
166 |
+
Avicenna's works further include:[123][124]
|
167 |
+
|
168 |
+
Avicenna's most important Persian work is the Danishnama-i 'Alai (دانشنامه علائی, "the Book of Knowledge for [Prince] 'Ala ad-Daulah"). Avicenna created new scientific vocabulary that had not previously existed in Persian. The Danishnama covers such topics as logic, metaphysics, music theory and other sciences of his time. It was translated into English by Parwiz Morewedge in 1977.[131] The book is also significant as an early scientific work written in Persian.
|
169 |
+
|
170 |
+
Andar Danesh-e Rag (اندر دانش رگ, "On the Science of the Pulse") contains nine chapters on the science of the pulse and is a condensed synopsis.
|
171 |
+
|
172 |
+
Persian poetry from Ibn Sina is recorded in various manuscripts and later anthologies such as Nozhat al-Majales.
|
173 |
+
|
174 |
+
يجب أن يتوهم الواحد منا كأنه خلق دفعةً وخلق كاملاً لكنه حجب بصره عن مشاهدة الخارجات وخلق يهوى في هواء أو خلاء هوياً لا يصدمه فيه قوام الهواء صدماً ما يحوج إلى أن يحس وفرق بين أعضائه فلم تتلاق ولم تتماس ثم يتأمل هل أنه يثبت وجود ذاته ولا يشكك في إثباته لذاته موجوداً ولا يثبت مع ذلك طرفاً من أعضائه ولا باطناً من أحشائه ولا قلباً ولا دماغاً ولا شيئاً من الأشياء من خارج بل كان يثبت ذاته ولا يثبت لها طولاً ولا عرضاً ولا عمقاً ولو أنه أمكنه في تلك الحالة أن يتخيل يداً أو عضواً آخر لم يتخيله جزء من ذاته ولا شرطاً في ذاته وأنت تعلم أن المثبت غير الذي لم يثبت والمقربه غير الذي لم يقربه فإذن للذات التي أثبت وجودها خاصية على أنها هو بعينه غير جسمه وأعضائه التي لم تثبت فإذن المثبت له سبيل إلى أن يثبته على وجود النفس شيئاً غير الجسم بل غير جسم وأنه عارف به مستشعر له وإن كان ذاهلاً عنه يحتاج إلى أن يقرع عصاه.
|
en/4940.html.txt
A recipe is a set of instructions that describes how to prepare or make something, especially a dish of prepared food.
|
2 |
+
|
3 |
+
The term recipe is also used in medicine or in information technology (e.g., user acceptance). A doctor will usually begin a prescription with recipe, Latin for take, usually abbreviated as Rx or the equivalent symbol (℞).
|
4 |
+
|
5 |
+
The earliest known written recipes date to 1730 BC and were recorded on cuneiform tablets found in Mesopotamia.[1]
|
6 |
+
|
7 |
+
Other early written recipes date from approximately 1600 BC and come from an Akkadian tablet from southern Babylonia.[2] There are also works in ancient Egyptian hieroglyphs depicting the preparation of food.[3]
|
8 |
+
|
9 |
+
Many ancient Greek recipes are known. Mithaecus's cookbook was an early one, but most of it has been lost; Athenaeus quotes one short recipe in his Deipnosophistae. Athenaeus mentions many other cookbooks, all of them lost.[4]
|
10 |
+
|
11 |
+
Roman recipes are known starting in the 2nd century BCE with Cato the Elder's De Agri Cultura. Many authors of this period described eastern Mediterranean cooking in Greek and in Latin.[4] Some Punic recipes are known in Greek and Latin translation.[4]
|
12 |
+
|
13 |
+
The large collection of recipes De re coquinaria, conventionally titled Apicius, appeared in the 4th or 5th century and is the only complete surviving cookbook from the classical world.[4] It lists the courses served in a meal as Gustatio (appetizer), Primae Mensae (main course) and Secundae Mensae (dessert).[5] Each recipe begins with the Latin command "Take...," "Recipe...."[6]
|
14 |
+
|
15 |
+
Arabic recipes are documented starting in the 10th century; see al-Warraq and al-Baghdadi.
|
16 |
+
|
17 |
+
The earliest recipe in Persian dates from the 14th century. Several recipes have survived from the time of Safavids, including Karnameh (1521) by Mohammad Ali Bavarchi, which includes the cooking instruction of more than 130 different dishes and pastries, and Madat-ol-Hayat (1597) by Nurollah Ashpaz.[7] Recipe books from the Qajar era are numerous, the most notable being Khorak-ha-ye Irani by prince Nader Mirza.[8]
|
18 |
+
|
19 |
+
King Richard II of England commissioned a recipe book called Forme of Cury in 1390,[9] and around the same time, another book was published entitled Curye on Inglish, "cury" meaning cooking.[10] Both books give an impression of how food for the noble classes was prepared and served in England at that time. The luxurious taste of the aristocracy in the Early Modern Period brought with it the start of what can be called the modern recipe book. By the 15th century, numerous manuscripts were appearing detailing the recipes of the day. Many of these manuscripts give very good information and record the re-discovery of many herbs and spices including coriander, parsley, basil and rosemary, many of which had been brought back from the Crusades.[11]
|
20 |
+
|
21 |
+
A page from the Nimmatnama-i-Nasiruddin-Shahi, book of delicacies and recipes. It documents the fine art of making kheer.
|
22 |
+
|
23 |
+
Medieval Indian Manuscript (circa 16th century) showing samosas being served.
|
24 |
+
|
25 |
+
With the advent of the printing press in the 16th and 17th centuries, numerous books were written on how to manage households and prepare food. In Holland[12] and England[13] competition grew between the noble families as to who could prepare the most lavish banquet. By the 1660s, cookery had progressed to an art form and good cooks were in demand. Many of them published their own books detailing their recipes in competition with their rivals.[14] Many of these books have been translated and are available online.[15]
|
26 |
+
|
27 |
+
By the 19th century, the Victorian preoccupation for domestic respectability brought about the emergence of cookery writing in its modern form. Although eclipsed in fame and regard by Isabella Beeton, the first modern cookery writer and compiler of recipes for the home was Eliza Acton. Her pioneering cookbook, Modern Cookery for Private Families published in 1845, was aimed at the domestic reader rather than the professional cook or chef. This was immensely influential, establishing the format for modern writing about cookery. It introduced the now-universal practice of listing the ingredients and suggested cooking times with each recipe. It included the first recipe for Brussels sprouts.[16] Contemporary chef Delia Smith called Acton "the best writer of recipes in the English language."[17] Modern Cookery long survived Acton, remaining in print until 1914 and available more recently in facsimile.
|
28 |
+
|
29 |
+
Acton's work was an important influence on Isabella Beeton,[18] who published Mrs Beeton's Book of Household Management in 24 monthly parts between 1857 and 1861. This was a guide to running a Victorian household, with advice on fashion, child care, animal husbandry, poisons, the management of servants, science, religion, and industrialism.[19][20] Of the 1,112 pages, over 900 contained recipes. Most were illustrated with coloured engravings. It is said that many of the recipes were plagiarised from earlier writers such as Acton, but the Beetons never claimed that the book's contents were original. It was intended as a reliable guide for the aspirant middle classes.
|
30 |
+
|
31 |
+
In 1896, the American cook Fannie Farmer (1857–1915) published her famous work The Boston Cooking School Cookbook, which contained some 1,849 recipes.[21]
|
32 |
+
|
33 |
+
Modern culinary recipes normally consist of several components
|
34 |
+
|
35 |
+
Earlier recipes often included much less information, serving more as a reminder of ingredients and proportions for someone who already knew how to prepare the dish.[22][23]
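The components themselves are not reproduced here, but the contrast with the older reminder-style recipe can be sketched in Python; the field names below (title, ingredients, steps, servings, time) are illustrative assumptions about what a modern recipe typically records, not a standard schema.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Ingredient:
        name: str
        quantity: float
        unit: str

    @dataclass
    class Recipe:
        # Typical components of a modern recipe (illustrative only).
        title: str
        ingredients: List[Ingredient] = field(default_factory=list)
        steps: List[str] = field(default_factory=list)
        servings: int = 0
        time_minutes: int = 0

    # An older recipe might have been little more than a reminder:
    reminder = "Sprouts: trim, boil briskly, serve with butter."

    # A modern recipe spells out quantities, method and timing:
    modern = Recipe(
        title="Brussels sprouts",
        ingredients=[Ingredient("Brussels sprouts", 500, "g"),
                     Ingredient("butter", 30, "g")],
        steps=["Trim the sprouts.", "Boil for 8-10 minutes.", "Toss with butter."],
        servings=4,
        time_minutes=20,
    )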
|
36 |
+
|
37 |
+
Recipe writers sometimes also list variations of a traditional dish, to give different tastes of the same recipe.
|
38 |
+
|
39 |
+
By the mid 20th century, there were thousands of cookery and recipe books available. The next revolution came with the introduction of the TV cooks. The first TV cook in England was Fanny Cradock with a show on the BBC. TV cookery programs brought recipes to a new audience. In the early days, recipes were available by post from the BBC; later with the introduction of CEEFAX text on screen, they became available on television.
|
40 |
+
|
41 |
+
The first Internet Usenet newsgroup dedicated to cooking was net.cooks created in 1982, later becoming rec.food.cooking.[24] It served as a forum to share recipes text files and cooking techniques.
|
42 |
+
|
43 |
+
In the early 21st century, there has been a renewed focus on cooking at home due to the late-2000s recession.[25] Television networks such as the Food Network and magazines are still a major source of recipe information, with international cooks and chefs such as Jamie Oliver, Gordon Ramsay, Nigella Lawson and Rachael Ray having prime-time shows and backing them up with Internet websites giving the details of all their recipes. These were joined by reality TV shows such as Top Chef or Iron Chef, and many Internet sites offering free recipes.[26]
|
44 |
+
|
45 |
+
Molecular gastronomy provides chefs with cooking techniques and ingredients, but this discipline also provides new theories and methods which aid recipe design. These methods are used by chefs, foodies, home cooks and even mixologists worldwide to improve or design recipes.
|
en/4941.html.txt
The rising average temperature of Earth's climate system, called global warming, is driving changes in rainfall patterns, extreme weather, arrival of seasons, and more. Collectively, global warming and its effects are known as climate change. While there have been prehistoric periods of global warming, observed changes since the mid-20th century have been unprecedented in rate and scale.[1]
|
6 |
+
|
7 |
+
The Intergovernmental Panel on Climate Change (IPCC) concluded that "human influence on climate has been the dominant cause of observed warming since the mid-20th century". These findings have been recognized by the national science academies of major nations and are not disputed by any scientific body of national or international standing.[4] The largest human influence has been the emission of greenhouse gases, with over 90% of the impact from carbon dioxide and methane.[5] Fossil fuel burning is the principal source of these gases, with agricultural emissions and deforestation also playing significant roles. Temperature rise is enhanced by self-reinforcing climate feedbacks, such as loss of snow cover, increased water vapour, and melting permafrost.
|
8 |
+
|
9 |
+
Land surfaces are heating faster than the ocean surface, leading to heat waves, wildfires, and the expansion of deserts.[6] Increasing atmospheric energy and rates of evaporation are causing more intense storms and weather extremes, damaging infrastructure and agriculture.[7] Surface temperature increases are greatest in the Arctic and have contributed to the retreat of glaciers, permafrost, and sea ice. Environmental impacts include the extinction or relocation of many species as their ecosystems change, most immediately in coral reefs, mountains, and the Arctic. Surface temperatures would stabilize and decline a little if emissions were cut off, but other impacts will continue for centuries, including rising sea levels from melting ice sheets, rising ocean temperatures, and ocean acidification from elevated levels of carbon dioxide.[8]
|
10 |
+
|
11 |
+
Habitat destruction. Many arctic animals rely on sea ice, which has been disappearing in a warming Arctic.
|
12 |
+
|
13 |
+
Pest propagation. Mild winters allow more pine beetles to survive to kill large swaths of forest.
|
14 |
+
|
15 |
+
Heat wave intensification. Events like the June 2019 European heat wave are becoming more common.
|
16 |
+
|
17 |
+
Extreme weather. Drought and high temperatures worsened the 2020 bushfires in Australia.
|
18 |
+
|
19 |
+
Farming. Droughts, rising temperatures, and extreme weather negatively impact agriculture.
|
20 |
+
|
21 |
+
Tidal flooding. Sea level rise increases flooding in low lying coastal regions. Shown: Venice, Italy.
|
22 |
+
|
23 |
+
Storm intensification. Bangladesh after Cyclone Sidr is an example of catastrophic flooding from increased rainfall.
|
24 |
+
|
25 |
+
Environmental migration. Sparser rainfall leads to desertification that harms agriculture and can displace populations.
|
26 |
+
|
27 |
+
Arctic warming. Permafrost thaws undermine infrastructure and release methane in a positive feedback loop.
|
28 |
+
|
29 |
+
Ecological collapse possibilities. Bleaching has damaged the Great Barrier Reef and threatens reefs worldwide.
|
30 |
+
|
31 |
+
Mitigation efforts to address global warming include the development and deployment of low carbon energy technologies, policies to reduce fossil fuel emissions, reforestation, forest preservation, as well as the development of potential climate engineering technologies. Societies and governments are also working to adapt to current and future global warming impacts, including improved coastline protection, better disaster management, and the development of more resistant crops.
|
32 |
+
|
33 |
+
Countries work together on climate change under the umbrella of the United Nations Framework Convention on Climate Change (UNFCCC), which has near-universal membership. The goal of the convention is to "prevent dangerous anthropogenic interference with the climate system". The IPCC has stressed the need to keep global warming below 1.5 °C (2.7 °F) compared to pre-industrial levels in order to avoid some irreversible impacts.[10] With current policies and pledges, global warming by the end of the century is expected to reach about 2.8 °C (5.0 °F).[11] At the current greenhouse gas (GHG) emission rate, the carbon budget for staying below 1.5 °C (2.7 °F) would be exhausted by 2028.[12]
|
34 |
+
|
35 |
+
Multiple independently produced instrumental datasets show that the climate system is warming,[14] with the 2009–2018 decade being 0.93 ± 0.07 °C (1.67 ± 0.13 °F) warmer than the pre-industrial baseline (1850–1900).[15] Currently, surface temperatures are rising by about 0.2 °C (0.36 °F) per decade.[16] Since 1950, the number of cold days and nights has decreased, and the number of warm days and nights has increased.[17] Historical patterns of warming and cooling, like the Medieval Climate Anomaly and the Little Ice Age, were not as synchronous as current warming, but may have reached temperatures as high as those of the late-20th century in a limited set of regions.[18] There have been prehistorical episodes of global warming, such as the Paleocene–Eocene Thermal Maximum.[19] However, the observed rise in temperature and CO2 concentrations has been so rapid that even abrupt geophysical events that took place in Earth's history do not approach current rates.[20]
|
36 |
+
|
37 |
+
Climate proxy records show that natural variations offset the early effects of the Industrial Revolution, so there was little net warming between the 18th century and the mid-19th century,[21] when thermometer records began to provide global coverage.[22] The Intergovernmental Panel on Climate Change (IPCC) has adopted the baseline reference period 1850–1900 as an approximation of pre-industrial global mean surface temperature.[21]
|
38 |
+
|
39 |
+
The warming evident in the instrumental temperature record is consistent with a wide range of observations, documented by many independent scientific groups.[23] Although the most common measure of global warming is the increase in the near-surface atmospheric temperature, over 90% of the additional energy in the climate system over the last 50 years has been stored in the ocean, warming it.[24] The remainder of the additional energy has melted ice and warmed the continents and the atmosphere.[25] The ocean heat uptake drives thermal expansion which has contributed to observed sea level rise.[26] Further indicators of climate change include an increase in the frequency and intensity of heavy precipitation, melting of snow and land ice and increased atmospheric humidity.[27] Flora and fauna also portray behaviour consistent with warming, such as the earlier flowering of plants in spring.[28]
|
40 |
+
|
41 |
+
Global warming refers to global averages, with the amount of warming varying by region. Since the pre-industrial period, global average land temperatures have increased almost twice as fast as global average temperatures.[29] This is due to the larger heat capacity of oceans and because oceans lose more heat by evaporation.[30] Patterns of warming are independent of the locations of greenhouse gas emissions because the gases persist long enough to diffuse across the planet; however, localized black carbon deposits on snow and ice do contribute to Arctic warming.[31]
|
42 |
+
|
43 |
+
The Northern Hemisphere and North Pole have warmed much faster than the South Pole and Southern Hemisphere. The Northern Hemisphere not only has much more land, but also more snow area and sea ice, because of how the land masses are arranged around the Arctic Ocean. As these surfaces flip from being reflective to dark after the ice has melted, they start absorbing more heat. The Southern Hemisphere already had little sea ice in summer before it started warming.[32] Arctic temperatures have increased and are predicted to continue to increase during this century at over twice the rate of the rest of the world.[33] Melting of glaciers and ice sheets in the Arctic disrupts ocean circulation, including a weakened Gulf Stream, causing increased warming in some areas.[34]
|
44 |
+
|
45 |
+
Although record-breaking years attract considerable media attention, individual years are less significant than the overall global surface temperature, which is subject to short-term fluctuations that overlie long-term trends.[35] An example of such an episode is the slower rate of surface temperature increase from 1998 to 2012, which was described as the global warming hiatus.[36] Throughout this period, ocean heat storage continued to progress steadily upwards, and in subsequent years, surface temperatures have spiked upwards. The slower pace of warming can be attributed to a combination of natural fluctuations, reduced solar activity, and increased reflection of sunlight by particles from volcanic eruptions.[37]
|
46 |
+
|
47 |
+
By itself, the climate system experiences various cycles which can last for years (such as the El Niño–Southern Oscillation) to decades or centuries.[38] Other changes are caused by an imbalance of energy at the top of the atmosphere: external forcings. These forcings are "external" to the climate system, but not always external to the Earth.[39] Examples of external forcings include changes in the composition of the atmosphere (e.g. increased concentrations of greenhouse gases), solar luminosity, volcanic eruptions, and variations in the Earth's orbit around the Sun.[40]
|
48 |
+
|
49 |
+
Attribution of climate change is the effort to scientifically show which mechanisms are responsible for observed changes in Earth's climate. First, known internal climate variability and natural external forcings need to be ruled out. Therefore, a key approach is to use computer modelling of the climate system to determine unique "fingerprints" for all potential causes. By comparing these fingerprints with observed patterns and evolution of climate change, and the observed history of the forcings, the causes of the observed changes can be determined.[41] For example, solar forcing can be ruled out as a major cause because its fingerprint would be warming of the entire atmosphere, whereas only the lower atmosphere has warmed, as expected for greenhouse gases.[42] The major causes of current climate change are primarily greenhouse gases, and secondarily land use changes, and aerosols and soot.[43]
|
50 |
+
|
51 |
+
Greenhouse gases trap heat radiating from the Earth to space.[44] This heat, in the form of infrared radiation, gets absorbed and emitted by these gases in the atmosphere, thus warming the lower atmosphere and the surface. Before the Industrial Revolution, naturally occurring amounts of greenhouse gases caused the air near the surface to be warmer by about 33 °C (59 °F) than it would be in their absence.[45] Without the Earth's atmosphere, the Earth's average temperature would be well below the freezing temperature of water.[46] While water vapour (~50%) and clouds (~25%) are the biggest contributors to the greenhouse effect, they increase as a function of temperature and are therefore considered feedbacks. Increased concentrations of gases such as CO2 (~20%), ozone and N2O, on the other hand, are external forcings.[47] Ozone acts as a greenhouse gas in the lowest layer of the atmosphere, the troposphere. Furthermore, it is highly reactive and interacts with other greenhouse gases and aerosols.[48]
|
52 |
+
|
53 |
+
Human activity since the Industrial Revolution, mainly extracting and burning fossil fuels,[49] has increased the amount of greenhouse gases in the atmosphere. This additional CO2, methane, tropospheric ozone, CFCs, and nitrous oxide have increased radiative forcing. In 2018, the concentrations of CO2 and methane had increased by about 45% and 160%, respectively, since pre-industrial times.[50] In 2013, CO2 readings taken at the world's primary benchmark site at Mauna Loa surpassed 400 ppm for the first time.[51] These levels are much higher than at any time during the last 800,000 years, the period for which reliable data have been collected from ice cores.[52] Less direct geological evidence indicates that CO2 values have not been this high for millions of years.[53]
|
54 |
+
|
55 |
+
Global anthropogenic greenhouse gas emissions in 2018 excluding land use change were equivalent to 52 billion tonnes of carbon dioxide. Of these emissions, 72% was carbon dioxide from fossil fuel burning and industry, 19% was methane, largely from livestock,[54] 6% was nitrous oxide, mainly from agriculture, and 3% was fluorinated gases.[55] A further 4 billion tonnes of CO2 was released as a consequence of land use change, which is primarily due to deforestation.[56] From a production standpoint, the primary sources of global GHG emissions are estimated as: electricity and heat (25%), agriculture and forestry (24%), industry (21%), and transportation (14%).[57] Consumption based estimates of GHG emissions offer another useful way to understand sources of global warming, and may better capture the effects of trade.[58] From a consumption standpoint, the dominant sources of global 2010 emissions were found to be: food (30%), washing, heating, and lighting (26%); personal transport and freight (20%); and building construction (15%).[59]
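As a quick check on the arithmetic, the percentage shares quoted above can be converted back into absolute amounts by multiplying them by the 52 billion tonnes CO2-equivalent total, as in the short Python sketch below (an illustration of the stated figures, nothing more):

    total_gtco2e = 52  # 2018 emissions excluding land use change, in GtCO2e
    shares = {"carbon dioxide": 0.72, "methane": 0.19,
              "nitrous oxide": 0.06, "fluorinated gases": 0.03}

    for gas, share in shares.items():
        print(f"{gas:>17}: {share * total_gtco2e:4.1f} GtCO2e")
    # carbon dioxide ~37.4, methane ~9.9, nitrous oxide ~3.1, fluorinated ~1.6;
    # land use change (chiefly deforestation) adds roughly another 4 GtCO2.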
|
56 |
+
|
57 |
+
Despite the contribution of deforestation to GHG emissions, the Earth's land surface, particularly its forests, remain a significant carbon sink for CO2. Natural processes, such as carbon fixation in the soil and photosynthesis, more than offset the GHG contributions from deforestation. The land surface sink is estimated to remove about 11 billion tonnes of CO2 annually from the atmosphere, or about 29% of global CO2 emissions.[60] The ocean also serves as a significant carbon sink via a two-step process. First, CO2 dissolves in the surface water. Afterwards, the ocean's overturning circulation distributes it deep into the ocean's interior, where it accumulates over time as part of the carbon cycle. Over the last two decades, the world's oceans have removed between 20 and 30% of emitted CO2.[61] The strength of both the land and ocean sinks increase as CO2 levels in the atmosphere rise. In this respect they act as negative feedbacks in global warming.[62]
|
58 |
+
|
59 |
+
Humans change the Earth's surface mainly to create more agricultural land. Today agriculture takes up 50% of the world's habitable land, while 37% is forests,[63] and that latter figure continues to decrease,[64] largely due to continued forest loss in the tropics.[65] This deforestation is the most significant aspect of land use change affecting global warming. The main causes are: deforestation through permanent land use change for agricultural products such as beef and palm oil (27%), forestry/forest products (26%), short term agricultural cultivation (24%), and wildfires (23%).[66]
|
60 |
+
|
61 |
+
In addition to impacting greenhouse gas concentrations, land use changes affect global warming through a variety of other chemical and physical dynamics. Changing the type of vegetation in a region impacts the local temperature by changing how much sunlight gets reflected back into space, called albedo, and how much heat is lost by evaporation. For instance, the change from a dark forest to grassland makes the surface lighter, causing it to reflect more sunlight. Deforestation can also contribute to changing temperatures by affecting the release of aerosols and other chemical compounds that affect clouds; and by changing wind patterns when the land surface has different obstacles.[67] Globally, these effects are estimated to have led to a slight cooling, dominated by an increase in surface albedo.[68] But there is significant geographic variation in how this works. In the tropics the net effect is to produce a significant warming, while at latitudes closer to the poles a loss of albedo leads to an overall cooling effect.[67]
|
62 |
+
|
63 |
+
Air pollution, in the form of aerosols, not only puts a large burden on human health, but also affects the climate on a large scale.[69] From 1961 to 1990, a gradual reduction in the amount of sunlight reaching the Earth's surface was observed, a phenomenon popularly known as global dimming,[70] typically attributed to aerosols from biofuel and fossil fuel burning.[71] Aerosol removal by precipitation gives tropospheric aerosols an atmospheric lifetime of only about a week, while stratospheric aerosols can remain in the atmosphere for a few years.[72] Globally, aerosols have been declining since 1990, removing some of the masking of global warming that they had been providing.[73]
|
64 |
+
|
65 |
+
In addition to their direct effect by scattering and absorbing solar radiation, aerosols have indirect effects on the Earth's radiation budget. Sulfate aerosols act as cloud condensation nuclei and thus lead to clouds that have more and smaller cloud droplets. These clouds reflect solar radiation more efficiently than clouds with fewer and larger droplets.[74] This effect also causes droplets to be of more uniform size, which reduces the growth of raindrops and makes clouds more reflective to incoming sunlight.[75] Indirect effects of aerosols are the largest uncertainty in radiative forcing.[76]
|
66 |
+
|
67 |
+
While aerosols typically limit global warming by reflecting sunlight, black carbon in soot that falls on snow or ice can contribute to global warming. Not only does this increase the absorption of sunlight, it also increases melting and sea level rise.[77] Limiting new black carbon deposits in the Arctic could reduce global warming by 0.2 °C by 2050.[78]
|
68 |
+
|
69 |
+
As the Sun is the Earth's primary energy source, changes in incoming sunlight directly affect the climate system.[79] Solar irradiance has been measured directly by satellites,[80] and indirect measurements are available beginning in the early 1600s.[79] There has been no upward trend in the amount of the Sun's energy reaching the Earth, so it cannot be responsible for the current warming.[81] Physical climate models are also unable to reproduce the rapid warming observed in recent decades when taking into account only variations in solar output and volcanic activity.[82] Another line of evidence for the warming not being due to the Sun is how temperature changes differ at different levels in the Earth's atmosphere.[83] According to basic physical principles, the greenhouse effect produces warming of the lower atmosphere (the troposphere), but cooling of the upper atmosphere (the stratosphere).[84] If solar variations were responsible for the observed warming, warming of both the troposphere and the stratosphere would be expected, but that has not been the case.[42] Explosive volcanic eruptions represent the largest natural forcing over the industrial era. When the eruption is sufficiently strong with sulfur dioxide reaching the stratosphere, sunlight can be partially blocked for a couple of years, with a temperature signal lasting about twice as long.[85]
|
70 |
+
|
71 |
+
The response of the climate system to an initial forcing is increased by self-reinforcing feedbacks and reduced by balancing feedbacks.[87] The main balancing feedback to global temperature change is radiative cooling to space as infrared radiation, which increases strongly with increasing temperature.[88] The main reinforcing feedbacks are the water vapour feedback, the ice–albedo feedback, and probably the net effect of clouds.[89] Uncertainty over feedbacks is the major reason why different climate models project different magnitudes of warming for a given amount of emissions.[90]
|
72 |
+
|
73 |
+
As air gets warmer, it can hold more moisture. After an initial warming due to emissions of greenhouse gases, the atmosphere will hold more water. As water is a potent greenhouse gas, this further heats the climate: the water vapour feedback.[89] The reduction of snow cover and sea ice in the Arctic reduces the albedo of the Earth's surface.[91] More of the Sun's energy is now absorbed in these regions, contributing to Arctic amplification, which has caused Arctic temperatures to increase at more than twice the rate of the rest of the world.[92] Arctic amplification also causes methane to be released as permafrost melts, which is expected to surpass land use changes as the second strongest anthropogenic source of greenhouse gases by the end of the century.[93]
|
74 |
+
|
75 |
+
Cloud cover may change in the future. If cloud cover increases, more sunlight will be reflected back into space, cooling the planet. Simultaneously, the clouds enhance the greenhouse effect, warming the planet. The opposite is true if cloud cover decreases. It depends on the cloud type and location which process is more important. Overall, the net feedback over the industrial era has probably been self-reinforcing.[94]
|
76 |
+
|
77 |
+
Roughly half of each year's CO2 emissions have been absorbed by plants on land and in oceans.[95] Carbon dioxide and an extended growing season have stimulated plant growth making the land carbon cycle a balancing feedback. Climate change also increases droughts and heat waves that inhibit plant growth, which makes it uncertain whether this balancing feedback will persist in the future.[96] Soils contain large quantities of carbon and may release some when they heat up.[97] As more CO2 and heat are absorbed by the ocean, it is acidifying and ocean circulation can change, changing the rate at which the ocean can absorb atmospheric carbon.[98]
|
78 |
+
|
79 |
+
Future warming depends on the strength of climate feedbacks and on emissions of greenhouse gases.[99] The former is often estimated using climate models. A climate model is a representation of the physical, chemical, and biological processes that affect the climate system.[100] They also include changes in the Earth's orbit, historical changes in the Sun's activity, and volcanic forcing.[101] Computer models attempt to reproduce and predict the circulation of the oceans, the annual cycle of the seasons, and the flows of carbon between the land surface and the atmosphere.[102] There are more than two dozen scientific institutions that develop climate models.[103] Models not only project different future temperatures with different emissions of greenhouse gases, but also do not fully agree on the strength of different feedbacks on climate sensitivity and the amount of inertia of the system.[104]
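The models described here are large three-dimensional simulations, but the underlying idea, that a radiative forcing combined with a net feedback parameter sets the eventual temperature response, can be shown with a zero-dimensional energy-balance sketch. The Python lines below are illustrative only; the forcing and feedback values are assumptions chosen to give a response within the commonly cited range, not output from any of the models discussed.

    def equilibrium_warming(forcing_wm2: float, feedback_wm2_per_k: float) -> float:
        """Equilibrium temperature change dT from the linearised balance F = lambda * dT."""
        return forcing_wm2 / feedback_wm2_per_k

    # Assumed values: ~3.7 W/m^2 is the forcing often quoted for a doubling of CO2;
    # a net feedback parameter of 1.2 W/m^2 per kelvin is used here for illustration.
    print(f"{equilibrium_warming(3.7, 1.2):.1f} degrees C of eventual warming")  # ~3.1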
|
80 |
+
|
81 |
+
The physical realism of models is tested by examining their ability to simulate contemporary or past climates.[105] Past models have underestimated the rate of Arctic shrinkage[106] and underestimated the rate of precipitation increase.[107] Sea level rise since 1990 was underestimated in older models, but now agrees well with observations.[108] The 2017 United States-published National Climate Assessment notes that "climate models may still be underestimating or missing relevant feedback processes".[109]
|
82 |
+
|
83 |
+
Four Representative Concentration Pathways (RCPs) are used as input for climate models: "a stringent mitigation scenario (RCP2.6), two intermediate scenarios (RCP4.5 and RCP6.0) and one scenario with very high GHG [greenhouse gas] emissions (RCP8.5)".[110] RCPs only look at concentrations of greenhouse gases, and so do not include the response of the carbon cycle.[111] Climate model projections summarized in the IPCC Fifth Assessment Report indicate that, during the 21st century, the global surface temperature is likely to rise a further 0.3 to 1.7 °C (0.5 to 3.1 °F) in a moderate scenario, or as much as 2.6 to 4.8 °C (4.7 to 8.6 °F) in an extreme scenario, depending on the rate of future greenhouse gas emissions and on climate feedback effects.[112]
|
84 |
+
|
85 |
+
A subset of climate models add societal factors to a simple physical climate model. These models simulate how population, economic growth, and energy use affect – and interact with – the physical climate. With this information, these models can produce scenarios of how greenhouse gas emissions may vary in the future. This output is then used as input for physical climate models to generate climate change projections.[113] In some scenarios emissions continue to rise over the century, while others have reduced emissions.[114] Fossil fuel resources are abundant, and cannot be relied on to limit carbon emissions in the 21st century.[115] Emission scenarios can be combined with modelling of the carbon cycle to predict how atmospheric concentrations of greenhouse gases might change in the future.[116] According to these combined models, by 2100 the atmospheric concentration of CO2 could be as low as 380 or as high as 1400 ppm, depending on the Shared Socioeconomic Pathway (SSP) and the mitigation scenario.[117]
|
86 |
+
|
87 |
+
The remaining carbon emissions budget is determined from modelling the carbon cycle and climate sensitivity to greenhouse gases.[118] According to the IPCC, global warming can be kept below 1.5 °C with a two-thirds chance if emissions after 2018 do not exceed 420 or 570 GtCO2 depending on the choice of the measure of global temperature. This amount corresponds to 10 to 13 years of current emissions. There are high uncertainties about the budget; for instance, it may be 100 GtCO2 smaller due to methane release from permafrost and wetlands.[119]
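The "10 to 13 years of current emissions" figure is simple division of the remaining budget by the annual emission rate; the sketch below reproduces it assuming annual CO2 emissions of roughly 42 GtCO2, a value chosen to match the ratio implied in the paragraph.

    annual_co2_gt = 42  # assumed current annual CO2 emissions, GtCO2
    for budget_gt in (420, 570):
        years = budget_gt / annual_co2_gt
        print(f"budget {budget_gt} GtCO2 -> about {years:.0f} years at current rates")
    # about 10 and about 14 years; the quoted 10-13 year range reflects slightly
    # different assumptions about the annual emission rate.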
|
88 |
+
|
89 |
+
The environmental effects of global warming are broad and far-reaching. They include effects on the oceans, ice, and weather and may occur gradually or rapidly. Evidence for these effects comes from studying climate change in the past, from modelling, and from modern observations.[121] Since the 1950s, droughts and heat waves have appeared simultaneously with increasing frequency.[122] Extremely wet or dry events within the monsoon period have increased in India and East Asia.[123] Various mechanisms have been identified that might explain extreme weather in mid-latitudes from the rapidly warming Arctic, such as the jet stream becoming more erratic.[124] The maximum rainfall and wind speed from hurricanes and typhoons is likely increasing.[125]
|
90 |
+
|
91 |
+
Between 1993 and 2017, the global mean sea level rose on average by 3.1 ± 0.3 mm per year, with an acceleration detected as well.[126] Over the 21st century, the IPCC projects that in a very high emissions scenario the sea level could rise by 61–110 cm.[127] The rate of ice loss from glaciers and ice sheets in the Antarctic is a key area of uncertainty since this source could account for 90% of the potential sea level rise:[128] increased ocean warmth is undermining and threatening to unplug Antarctic glacier outlets, potentially resulting in more rapid sea level rise.[129] The retreat of non-polar glaciers also contributes to sea level rise.[130]
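For a sense of scale, the average rate quoted above can be turned into a cumulative figure with a single multiplication (ignoring the detected acceleration), as in this short calculation:

    rate_mm_per_year = 3.1        # average rate over 1993-2017
    years = 2017 - 1993           # 24 years
    print(f"~{rate_mm_per_year * years:.0f} mm of global mean sea level rise")  # ~74 mm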
|
92 |
+
|
93 |
+
Global warming has led to decades of shrinking and thinning of the Arctic sea ice, making it vulnerable to atmospheric anomalies.[131] Projections of declines in Arctic sea ice vary.[132] While ice-free summers are expected to be rare at 1.5 °C (2.7 °F) degrees of warming, they are set to occur once every three to ten years at a warming level of 2.0 °C (3.6 °F),[133] increasing the ice–albedo feedback.[134] Higher atmospheric CO2 concentrations have led to an increase in dissolved CO2, which causes ocean acidification.[135] Furthermore, oxygen levels decrease because oxygen is less soluble in warmer water, an effect known as ocean deoxygenation.[136]
|
94 |
+
|
95 |
+
The long-term effects of global warming include further ice melt, ocean warming, sea level rise, and ocean acidification. On the timescale of centuries to millennia, the magnitude of global warming will be determined primarily by anthropogenic CO2 emissions.[137] This is due to carbon dioxide's very long lifetime in the atmosphere.[137] Carbon dioxide is slowly taken up by the ocean, such that ocean acidification will continue for hundreds to thousands of years.[138] The emissions are estimated to have prolonged the current interglacial period by at least 100,000 years.[139] Because the great mass of glaciers and ice caps depressed the Earth's crust, another long-term effect of ice melt and deglaciation is the gradual rising of landmasses, a process called post-glacial rebound.[140] Sea level rise will continue over many centuries, with an estimated rise of 2.3 metres per degree Celsius (4.2 ft/°F) after 2000 years.[141]
If global warming exceeds 1.5 °C, there is a greater risk of passing through "tipping points", thresholds beyond which certain impacts can no longer be avoided even if temperatures are reduced.[142] Some large-scale changes could occur abruptly, i.e. over a short time period. One potential source of abrupt tipping would be the rapid release of methane and carbon dioxide from permafrost, which would amplify global warming.[143] Another example is the possibility for the Atlantic Meridional Overturning Circulation to collapse,[144] which could trigger cooling in the North Atlantic, Europe, and North America.[145] If multiple temperature and carbon cycle tipping points reinforce each other, or if there were to be strong threshold behaviour in cloud cover, there could be a global tipping into a hothouse Earth.[146]
Recent warming has driven many terrestrial and freshwater species poleward and towards higher altitudes.[147] Higher atmospheric CO2 levels and an extended growing season have resulted in global greening, whereas heatwaves and drought have reduced ecosystem productivity in some regions. The future balance of these opposing effects is unclear.[148] Global warming has contributed to the expansion of drier climatic zones, such as the probable expansion of deserts in the subtropics.[149] Without substantial actions to reduce the rate of global warming, land-based ecosystems risk major shifts in their composition and structure.[150] Overall, it is expected that climate change will result in the extinction of many species and reduced diversity of ecosystems.[151]
The ocean has heated more slowly than the land, but plants and animals in the ocean have migrated towards the colder poles as fast as or faster than species on land.[152] Just as on land, heat waves in the ocean occur more frequently due to climate change, with harmful effects found on a wide range of organisms such as corals, kelp, and seabirds.[153] Ocean acidification threatens damage to coral reefs, fisheries, protected species, and other natural resources of value to society.[154] Coastal ecosystems are under stress, with almost half of wetlands having disappeared as a consequence of climate change and other human impacts. Harmful algae blooms have increased due to warming, ocean deoxygenation and eutrophication.[155]
Ecological collapse possibilities. Bleaching has damaged the Great Barrier Reef and threatens reefs worldwide.[156]
Extreme weather. Drought and high temperatures worsened the 2020 bushfires in Australia.[157]
Arctic warming. Permafrost thaws undermine infrastructure and release methane in a positive feedback loop.[143]
Habitat destruction. Many arctic animals rely on sea ice, which has been disappearing in a warming Arctic.[158]
Pest propagation. Mild winters allow more pine beetles to survive to kill large swaths of forest.[159]
The effects of climate change on human systems, mostly due to warming and shifts in precipitation, have been detected worldwide. The social impacts of climate change will be uneven across the world.[160] All regions are at risk of experiencing negative impacts,[161] with low-latitude, less developed areas facing the greatest risk.[162] Global warming has likely already increased global economic inequality, and is projected to do so in the future.[163] Regional impacts of climate change are now observable on all continents and across ocean regions.[164] The Arctic, Africa, small islands, and Asian megadeltas are regions that are likely to be especially affected by future climate change.[165] Many risks increase with higher magnitudes of global warming.[166]
Crop production will probably be negatively affected in low-latitude countries, while effects at northern latitudes may be positive or negative.[167] Global warming of around 4 °C relative to late 20th century levels could pose a large risk to global and regional food security.[168] The impact of climate change on crop productivity for the four major crops was negative for wheat and maize, and neutral for soy and rice, in the years 1960–2013.[169] Up to an additional 183 million people worldwide, particularly those with lower incomes, are at risk of hunger as a consequence of warming.[170] While increased CO2 levels help crop growth at lower temperature increases, those crops do become less nutritious.[170] Based on local and indigenous knowledge, climate change is already affecting food security in mountain regions in South America and Asia, and in various drylands, particularly in Africa.[170] Regions dependent on glacier water, regions that are already dry, and small islands are also at increased risk of water stress due to climate change.[171]
In small islands and mega deltas, inundation from sea level rise is expected to threaten vital infrastructure and human settlements.[172] This could lead to homelessness in countries with low-lying areas such as Bangladesh, as well as statelessness for populations in island nations, such as the Maldives and Tuvalu.[173] Climate change can be an important driver of migration, both within and between countries.[174]
The majority of severe impacts of climate change are expected in sub-Saharan Africa and South-East Asia, where existing poverty is exacerbated.[175] The World Bank estimates that global warming could drive over 120 million people into poverty by 2030.[176] Current inequalities between men and women, between rich and poor and between people of different ethnicity have been observed to worsen as a consequence of climate variability and climate change.[177] Existing stresses include poverty, political conflicts, and ecosystem degradation. Regions may even become uninhabitable, with humidity and temperatures reaching levels too high for humans to survive.[178]
Generally, impacts on public health will be more negative than positive.[179] Impacts include the direct effects of extreme weather, leading to injury and loss of life;[180] and indirect effects, such as undernutrition brought on by crop failures.[181] Various infectious diseases are more easily transmitted in a warming climate, such as dengue fever, which affects children most severely, and malaria.[182] Young children are also the most vulnerable to food shortages and, together with older people, to extreme heat.[183] Climate change has been linked to an increase in violent conflict by amplifying poverty and economic shocks, which are well-documented drivers of these conflicts.[184] Links have been made between climate change and a wide range of violent behaviour, including violent crimes, civil unrest, and wars, but conclusive scientific evidence remains elusive.[185]
Environmental migration. Sparser rainfall leads to desertification that harms agriculture and can displace populations.[186]
Farming. Droughts, rising temperatures, and extreme weather negatively impact agriculture.[187]
Tidal flooding. Sea level rise increases flooding in low lying coastal regions. Shown: Venice, Italy.[188]
Storm intensification. Bangladesh after Cyclone Sidr is an example of catastrophic flooding from increased rainfall.[189]
Heat wave intensification. Events like the June 2019 European heat wave are becoming more common.[190]
Mitigation of and adaptation to climate change are two complementary responses to global warming. Successful adaptation is easier if there are substantial emission reductions. Many of the countries that have contributed least to global greenhouse gas emissions are among the most vulnerable to climate change, which raises questions about justice and fairness with regard to mitigation and adaptation.[192]
Climate change impacts can be mitigated by reducing greenhouse gas emissions and by enhancing the capacity of Earth's surface to absorb greenhouse gases from the atmosphere.[193]
In order to limit global warming to less than 1.5 °C with a high likelihood of success, the IPCC estimates that global GHG emissions will need to be net zero by 2050,[194] or by 2070 with a 2 °C target. This will require far-reaching, systemic changes on an unprecedented scale in energy, land, cities, transport, buildings, and industry.[195] To make progress towards that goal, the United Nations Environment Programme estimates that, within the next decade, countries will need to triple the amount of reductions they have committed to in their current Paris Agreement pledges.[196]
Long-term scenarios all point to rapid and significant investment in renewable energy and energy efficiency as key to reducing GHG emissions.[197] These technologies include solar and wind power, bioenergy, geothermal energy, and hydroelectricity. Combined, they are capable of supplying several times the world’s current energy needs.[198] Solar PV and wind, in particular, have seen substantial growth and progress over the last few years,[199] such that they are currently among the cheapest sources of new power generation.[200] Renewables represented 75% of all new electricity generation installed in 2019, with solar and wind constituting nearly all of that amount.[201] However, fossil fuels continue to dominate world energy supplies. In 2018 fossil fuels produced 80% of the world’s energy, with modern renewable sources, including solar and wind power, accounting for around 11%.[202]
There are obstacles to the rapid development of renewable energy. Environmental and land use concerns are sometimes associated with large solar, wind and hydropower projects.[203] Solar and wind power also require energy storage systems and other modifications to the electricity grid to operate effectively,[204] although several storage technologies are now emerging to supplement the traditional use of pumped-storage hydropower.[205] The use of rare metals and other hazardous materials has also been raised as a concern with solar power.[206] The use of bioenergy is often not carbon neutral, and may have negative consequences for food security,[207] largely due to the amount of land required compared to other renewable energy options.[208]
For certain energy supply needs, as well as specific CO2-intensive heavy industries, carbon capture and storage may be a viable method of reducing CO2 emissions. Although high costs have been a concern with this technology,[209] it may be able to play a significant role in limiting atmospheric CO2 concentrations by mid-century.[210] Greenhouse gas emissions can be offset by enhancing Earth’s land carbon sink to sequester significantly larger amounts of CO2 beyond naturally occurring levels.[211] Forest preservation, reforestation and tree planting on non-forest lands are considered the most effective, although they may present food security concerns. Soil management on croplands and grasslands is another effective mitigation technique. For all these approaches there remain large scientific uncertainties with implementing them on a global scale.[212]
Individuals can also take actions to reduce their carbon footprint. These include: driving an EV or other energy-efficient car and reducing vehicle miles by using mass transit or cycling; adopting a plant-based diet; reducing energy use in the home; limiting consumption of goods and services; and forgoing air travel.[213]
Although there is no single pathway to limit global warming to 1.5 or 2°C,[214] most scenarios and strategies see a major increase in the use of renewable energy in combination with increased energy efficiency measures to generate the needed greenhouse gas reductions.[215] Forestry and agriculture components also include steps to reduce pressures on ecosystems and enhance their carbon sequestration capabilities.[216] Scenarios that limit global warming to 1.5°C generally project the large scale use of carbon dioxide removal methods to augment the greenhouse gas reduction approaches mentioned above.[217]
Renewable energy would become the dominant form of electricity generation, rising to 85% or more by 2050 in some scenarios. The use of electricity for other needs, such as heating, would rise to the point where electricity becomes the largest form of overall energy supply by 2050.[218] Investment in coal would be eliminated and coal use nearly phased out by 2050.[219]
In transport, scenarios envision sharp increases in the market share of electric vehicles, low carbon fuel substitution for other transportation modes like shipping, and changes in transportation patterns to reduce overall demand, for example increased public transport.[220] Buildings will see additional electrification with the use of technologies like heat pumps, as well as continued energy efficiency improvements achieved via low energy building codes.[221] Industrial efforts will focus on increasing the energy efficiency of production processes, such as the use of cleaner technology for cement production,[222] designing and creating less energy intensive products, increasing product lifetimes, and developing incentives to reduce product demand.[223]
The agriculture and forestry sector faces a triple challenge of limiting greenhouse gas emissions, preventing further conversion of forests to agricultural land, and meeting increases in world food demand.[224] A suite of actions could reduce agriculture/forestry based greenhouse gas emissions by 66% from 2010 levels by reducing growth in demand for food and other agricultural products, increasing land productivity, protecting and restoring forests, and reducing GHG emissions from agricultural production.[225]
A wide range of policies, regulations and laws are being used to reduce greenhouse gases. Carbon pricing mechanisms include carbon taxes and emissions trading systems.[226] As of 2019, carbon pricing covers about 20% of global greenhouse gas emissions.[227] Renewable portfolio standards have been enacted in several countries to move utilities to increase the percentage of electricity they generate from renewable sources.[228] Phasing out of fossil fuel subsidies, currently estimated at $300 billion globally (about twice the level of renewable energy subsidies),[229] could reduce greenhouse gas emissions by 6%.[230] Subsidies could also be redirected to support the transition to clean energy.[231] More prescriptive methods that can reduce greenhouse gases include vehicle efficiency standards,[232] renewable fuel standards, and air pollution regulations on heavy industry.[233]
As the use of fossil fuels is reduced, there are Just Transition considerations involving the social and economic challenges that arise. An example is the employment of workers in the affected industries, along with the well-being of the broader communities involved.[234] Climate justice considerations, such as those facing indigenous populations in the Arctic,[235] are another important aspect of mitigation policies.[236]
Adaptation is "the process of adjustment to current or expected changes in climate and its effects". As climate change varies across regions, adaptation does too.[237] While some adaptation responses call for trade-offs, others bring synergies and co-benefits.[238] Examples of adaptation are improved coastline protection, better disaster management, and the development of more resistant crops.[239] Increased use of air conditioning allows people to better cope with heat, but also increases energy demand.[240] Adaptation is especially important in developing countries since they are predicted to bear the brunt of the effects of global warming.[241] The capacity and potential for humans to adapt, called adaptive capacity, is unevenly distributed across different regions and populations, and developing countries generally have less capacity to adapt.[242] The public sector, private sector, and communities are all gaining experience with adaptation, and adaptation is becoming embedded within certain planning processes.[243] There are limits to adaptation and more severe climate change requires more transformative adaptation, which can be prohibitively expensive.[237]
Geoengineering or climate engineering is the deliberate large-scale modification of the climate to counteract climate change.[244] Techniques fall generally into the categories of solar radiation management and carbon dioxide removal, although various other schemes have been suggested. A 2018 review paper concluded that although geoengineering is physically possible, all the techniques are in early stages of development, carry large risks and uncertainties and raise significant ethical and legal issues.[245]
The geopolitics of climate change is complex and has often been framed as a prisoner's dilemma, in which all countries benefit from mitigation done by other countries, but individual countries would lose from investing in a transition to a low-carbon economy themselves. Net importers of fossil fuels win economically from transitioning, and net exporters face stranded assets: fossil fuels they cannot sell.[246] Furthermore, the benefits to individual countries in terms of public health and local environmental improvements of coal phase out exceed the costs, potentially eliminating the free-rider problem.[247] The geopolitics may be further complicated by the supply chain of rare earth metals, which are necessary to produce clean technology.[248]
As of 2020[update] nearly all countries in the world are parties to the United Nations Framework Convention on Climate Change (UNFCCC).[249] The objective of the Convention is to prevent dangerous human interference with the climate system.[250] As stated in the Convention, this requires that greenhouse gas concentrations are stabilized in the atmosphere at a level where ecosystems can adapt naturally to climate change, food production is not threatened, and economic development can be sustained.[251] The Framework Convention was agreed on in 1992, but global emissions have risen since then.[57] Its yearly conferences are the stage of global negotiations.[252]
This mandate was sustained in the 1997 Kyoto Protocol to the Framework Convention.[253] In ratifying the Kyoto Protocol, most developed countries accepted legally binding commitments to limit their emissions. These first-round commitments expired in 2012.[254] United States President George W. Bush rejected the treaty on the basis that "it exempts 80% of the world, including major population centres such as China and India, from compliance, and would cause serious harm to the US economy".[255] During these negotiations, the G77 (a lobbying group in the United Nations representing developing countries)[256] pushed for a mandate requiring developed countries to "[take] the lead" in reducing their emissions.[257] This was justified on the basis that the developed countries' emissions had contributed most to the accumulation of greenhouse gases in the atmosphere, per-capita emissions were still relatively low in developing countries, and the emissions of developing countries would grow to meet their development needs.[258]
In 2009 several UNFCCC Parties produced the Copenhagen Accord,[260] which has been widely portrayed as disappointing because of its low goals, leading poorer nations to reject it.[261] Nations associated with the Accord aimed to limit the future increase in global mean temperature to below 2 °C.[262] In 2015 all UN countries negotiated the Paris Agreement, which aims to keep climate change well below 2 °C and contains an aspirational goal of keeping warming under 1.5 °C. The agreement replaced the Kyoto Protocol. Unlike Kyoto, no binding emission targets are set in the Paris Agreement. Instead, the procedure of regularly setting ever more ambitious goals and reevaluating these goals every five years has been made binding.[263] The Paris Agreement reiterated that developing countries must be financially supported.[264] As of November 2019[update], 194 states and the European Union have signed the treaty and 186 states and the EU have ratified or acceded to the agreement.[265] In November 2019 the Trump administration notified the UN that it would withdraw the United States from the Paris Agreement in 2020.[266]
In 2019, the British Parliament became the first national parliament in the world to officially declare a climate emergency.[267] Other countries and jurisdictions followed.[268] In November 2019 the European Parliament declared a "climate and environmental emergency",[269] and the European Commission presented its European Green Deal with which they hope to make the EU carbon-neutral in 2050.[270]
While the ozone layer and climate change are considered separate problems, the solution to the former has significantly mitigated global warming. The Montreal Protocol, an international agreement to stop emitting ozone-depleting gases, is estimated to have been more effective at reducing warming than the Kyoto Protocol, which was specifically designed to curb greenhouse gas emissions.[271] It has been argued that the Montreal Protocol may have done more than any other measure, as of 2017[update], to mitigate climate change as those substances were also powerful greenhouse gases.[272]
In the scientific literature, there is an overwhelming consensus that global surface temperatures have increased in recent decades and that the trend is caused mainly by human-induced emissions of greenhouse gases.[274] No scientific body of national or international standing disagrees with this view.[275] Scientific discussion takes place in journal articles that are peer-reviewed, which scientists subject to assessment every couple of years in the Intergovernmental Panel on Climate Change reports.[276] In 2013, the IPCC Fifth Assessment Report stated that "it is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century".[277] Their 2018 report expressed the scientific consensus as: "human influence on climate has been the dominant cause of observed warming since the mid-20th century".[278]
Consensus has further developed that some form of action should be taken to protect people against the impacts of climate change, and national science academies have called on world leaders to cut global emissions.[279] In 2017, in the second warning to humanity, 15,364 scientists from 184 countries stated that "the current trajectory of potentially catastrophic climate change due to rising greenhouse gases from burning fossil fuels, deforestation, and agricultural production – particularly from farming ruminants for meat consumption" is "especially troubling".[280] In 2019, a group of more than 11,000 scientists from 153 countries named climate change an "emergency" that would lead to "untold human suffering" if no big shifts in action take place.[281] The emergency declaration emphasized that economic growth and population growth "are among the most important drivers of increases in CO2 emissions from fossil fuel combustion" and that "we need bold and drastic transformations regarding economic and population policies".[282]
The global warming problem came to international public attention in the late 1980s.[283] Due to confusing media coverage in the early 1990s, issues such as ozone depletion and climate change were often mixed up, affecting public understanding of these issues.[284] Although there are a few areas of linkage, the relationship between the two is weak.[285]
Significant regional differences exist in how concerned people are about climate change and how much they understand the issue.[286] In 2010, just a little over half the US population viewed it as a serious concern for either themselves or their families, while 73% of people in Latin America and 74% in developed Asia felt this way.[287] Similarly, in 2015 a median of 54% of respondents considered it "a very serious problem", but Americans and Chinese (whose economies are responsible for the greatest annual CO2 emissions) were among the least concerned.[286] Worldwide in 2011, people were more likely to attribute global warming to human activities than to natural causes, except in the US where nearly half of the population attributed global warming to natural causes.[288] Public reactions to global warming and concern about its effects have been increasing, with many perceiving it as the worst global threat.[289] In a 2019 CBS poll, 64% of the US population said that climate change is a "crisis" or a "serious problem", with 44% saying human activity was a significant contributor.[290]
Public debate about climate change has been strongly affected by climate change denial and misinformation, which originated in the United States and has since spread to other countries, particularly Canada and Australia. The actors behind climate change denial form a well-funded and relatively coordinated coalition of fossil fuel companies, industry groups, conservative think tanks, and contrarian scientists.[292] Like the tobacco industry before it, the main strategy of these groups has been to manufacture doubt about scientific data and results.[293] Many who deny, dismiss, or hold unwarranted doubt about the scientific consensus on anthropogenic global warming are labelled as "climate change skeptics", which several scientists have noted is a misnomer.[294]
There are different variants of climate denial: some deny that warming takes place at all, some acknowledge warming but attribute it to natural influences, and some minimize the negative impacts of climate change.[295] Manufacturing uncertainty about the science later developed into a manufacturing of controversy: creating the belief that there remains significant uncertainty about climate change within the scientific community in order to delay policy changes.[296] Strategies to promote these ideas include a criticism of scientific institutions,[297] and questioning the motives of individual scientists.[298] An "echo chamber" of climate-denying blogs and media has further fomented misunderstanding of global warming.[299]
Protests seeking more ambitious climate action increased in the 2010s in the form of fossil fuel divestment,[300] and worldwide demonstrations.[301] In particular, youth across the globe protested by skipping school, inspired by Swedish teenager Greta Thunberg in the school strike for climate.[302] Mass civil disobedience actions by Extinction Rebellion and Ende Gelände have ended in police intervention and large-scale arrests.[303] Litigation is increasingly used as a tool to strengthen climate action, with governments being the biggest target of lawsuits demanding that they become ambitious on climate action or enforce existing laws. Cases against fossil-fuel companies, from activists, shareholders and investors, generally seek compensation for loss and damage.[304]
In 1681 Mariotte noted that glass, though transparent to sunlight, obstructs radiant heat.[305] Around 1774 de Saussure showed that non-luminous warm objects emit infrared heat, and used a glass-topped insulated box to trap and measure heat from sunlight.[306] In 1824 Joseph Fourier proposed, by analogy, a version of the greenhouse effect: the transparent atmosphere lets through visible light, which warms the surface. The warmed surface emits infrared radiation, but the atmosphere is relatively opaque to infrared and slows the emission of energy, warming the planet.[307] Starting in 1859,[308] John Tyndall established that nitrogen and oxygen (99% of dry air) are transparent to infrared, but water vapour and traces of some gases (significantly methane and carbon dioxide) both absorb infrared and, when warmed, emit infrared radiation. Changing concentrations of these gases could have caused "all the mutations of climate which the researches of geologists reveal" including ice ages.[309]
Svante Arrhenius noted that water vapour in air continuously varied, but carbon dioxide (CO2) was determined by long term geological processes. At the end of an ice age, warming from increased CO2 would increase the amount of water vapour, amplifying its effect in a feedback process. In 1896, he published the first climate model of its kind, showing that halving of CO2 could have produced the drop in temperature initiating the ice age. Arrhenius calculated the temperature increase expected from doubling CO2 to be around 5–6 °C (9.0–10.8 °F).[310] Other scientists were initially sceptical and believed the greenhouse effect to be saturated so that adding more CO2 would make no difference. Experts thought climate would be self-regulating.[311] From 1938 Guy Stewart Callendar published evidence that climate was warming and CO2 levels increasing,[312] but his calculations met the same objections.[311]
Early calculations treated the atmosphere as a single layer: Gilbert Plass used digital computers to model the different layers and found added CO2 would cause warming. Hans Suess found evidence CO2 levels had been rising, Roger Revelle showed the oceans would not absorb the increase, and together they helped Charles Keeling to begin a record of continued increase, the Keeling Curve.[311] Scientists alerted the public,[313] and the dangers were highlighted at James Hansen's 1988 Congressional testimony.[314] The Intergovernmental Panel on Climate Change, set up in 1988 to provide formal advice to the world's governments, spurred interdisciplinary research.[315]
Before the 1980s, when it was unclear whether warming by greenhouse gases would dominate aerosol-induced cooling, scientists often used the term "inadvertent climate modification" to refer to humankind's impact on the climate. With increasing evidence of warming, the terms "global warming" and "climate change" were introduced, with the former referring only to increasing surface warming, and the latter to the full effect of greenhouse gases on climate.[316] Global warming became the dominant popular term after NASA climate scientist James Hansen used it in his 1988 testimony in the U.S. Senate.[314] In the 2000s, the term climate change increased in popularity.[317] Global warming is used almost exclusively to refer to human-induced warming of the Earth system, whereas climate change is sometimes used to refer to natural as well as anthropogenic change.[318] The two terms are often used interchangeably.[319]
Various scientists, politicians and news media have adopted the terms climate crisis or climate emergency to talk about climate change, while using global heating instead of global warming.[320] The policy editor-in-chief of The Guardian explained why they included this language in their editorial guidelines: "We want to ensure that we are being scientifically precise, while also communicating clearly with readers on this very important issue".[321] Oxford Dictionaries chose climate emergency as its word of the year for 2019 and defines the term as "a situation in which urgent action is required to reduce or halt climate change and avoid potentially irreversible environmental damage resulting from it".[322]
AR4 Working Group I Report
AR4 Working Group II Report
AR4 Working Group III Report
AR4 Synthesis Report
AR5 Working Group I Report
AR5 Working Group II Report
AR5 Working Group III Report
AR5 Synthesis Report
Special Report: SR15
Special Report: Climate change and Land
Special Report: SROCC
en/4942.html.txt
The rising average temperature of Earth's climate system, called global warming, is driving changes in rainfall patterns, extreme weather, arrival of seasons, and more. Collectively, global warming and its effects are known as climate change. While there have been prehistoric periods of global warming, observed changes since the mid-20th century have been unprecedented in rate and scale.[1]
The Intergovernmental Panel on Climate Change (IPCC) concluded that "human influence on climate has been the dominant cause of observed warming since the mid-20th century". These findings have been recognized by the national science academies of major nations and are not disputed by any scientific body of national or international standing.[4] The largest human influence has been the emission of greenhouse gases, with over 90% of the impact from carbon dioxide and methane.[5] Fossil fuel burning is the principal source of these gases, with agricultural emissions and deforestation also playing significant roles. Temperature rise is enhanced by self-reinforcing climate feedbacks, such as loss of snow cover, increased water vapour, and melting permafrost.
Land surfaces are heating faster than the ocean surface, leading to heat waves, wildfires, and the expansion of deserts.[6] Increasing atmospheric energy and rates of evaporation are causing more intense storms and weather extremes, damaging infrastructure and agriculture.[7] Surface temperature increases are greatest in the Arctic and have contributed to the retreat of glaciers, permafrost, and sea ice. Environmental impacts include the extinction or relocation of many species as their ecosystems change, most immediately in coral reefs, mountains, and the Arctic. Surface temperatures would stabilize and decline a little if emissions were cut off, but other impacts will continue for centuries, including rising sea levels from melting ice sheets, rising ocean temperatures, and ocean acidification from elevated levels of carbon dioxide.[8]
Habitat destruction. Many arctic animals rely on sea ice, which has been disappearing in a warming Arctic.
Pest propagation. Mild winters allow more pine beetles to survive to kill large swaths of forest.
Heat wave intensification. Events like the June 2019 European heat wave are becoming more common.
Extreme weather. Drought and high temperatures worsened the 2020 bushfires in Australia.
Farming. Droughts, rising temperatures, and extreme weather negatively impact agriculture.
Tidal flooding. Sea level rise increases flooding in low lying coastal regions. Shown: Venice, Italy.
Storm intensification. Bangladesh after Cyclone Sidr is an example of catastrophic flooding from increased rainfall.
Environmental migration. Sparser rainfall leads to desertification that harms agriculture and can displace populations.
Arctic warming. Permafrost thaws undermine infrastructure and release methane in a positive feedback loop.
Ecological collapse possibilities. Bleaching has damaged the Great Barrier Reef and threatens reefs worldwide.
Mitigation efforts to address global warming include the development and deployment of low carbon energy technologies, policies to reduce fossil fuel emissions, reforestation, forest preservation, as well as the development of potential climate engineering technologies. Societies and governments are also working to adapt to current and future global warming impacts, including improved coastline protection, better disaster management, and the development of more resistant crops.
Countries work together on climate change under the umbrella of the United Nations Framework Convention on Climate Change (UNFCCC), which has near-universal membership. The goal of the convention is to "prevent dangerous anthropogenic interference with the climate system". The IPCC has stressed the need to keep global warming below 1.5 °C (2.7 °F) compared to pre-industrial levels in order to avoid some irreversible impacts.[10] With current policies and pledges, global warming by the end of the century is expected to reach about 2.8 °C (5.0 °F).[11] At the current greenhouse gas (GHG) emission rate, the carbon budget for staying below 1.5 °C (2.7 °F) would be exhausted by 2028.[12]
Multiple independently produced instrumental datasets show that the climate system is warming,[14] with the 2009–2018 decade being 0.93 ± 0.07 °C (1.67 ± 0.13 °F) warmer than the pre-industrial baseline (1850–1900).[15] Currently, surface temperatures are rising by about 0.2 °C (0.36 °F) per decade.[16] Since 1950, the number of cold days and nights has decreased, and the number of warm days and nights has increased.[17] Historical patterns of warming and cooling, like the Medieval Climate Anomaly and the Little Ice Age, were not as synchronous as current warming, but may have reached temperatures as high as those of the late-20th century in a limited set of regions.[18] There have been prehistorical episodes of global warming, such as the Paleocene–Eocene Thermal Maximum.[19] However, the observed rise in temperature and CO2 concentrations has been so rapid that even abrupt geophysical events that took place in Earth's history do not approach current rates.[20]
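A minimal sketch of how these two figures relate, assuming purely for illustration that the ~0.2 °C-per-decade rate stays constant (actual projections rely on climate models rather than linear extrapolation):
# Naive linear extrapolation from the observed figures above (illustration only).
warming_now_c = 0.93        # 2009-2018 average warming above the 1850-1900 baseline
rate_c_per_decade = 0.2     # approximate current rate of surface warming
for target_c in (1.5, 2.0):
    decades = (target_c - warming_now_c) / rate_c_per_decade
    print(f"{target_c} °C would be reached after about {decades:.1f} more decades at a constant rate")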
Climate proxy records show that natural variations offset the early effects of the Industrial Revolution, so there was little net warming between the 18th century and the mid-19th century,[21] when thermometer records began to provide global coverage.[22] The Intergovernmental Panel on Climate Change (IPCC) has adopted the baseline reference period 1850–1900 as an approximation of pre-industrial global mean surface temperature.[21]
The warming evident in the instrumental temperature record is consistent with a wide range of observations, documented by many independent scientific groups.[23] Although the most common measure of global warming is the increase in the near-surface atmospheric temperature, over 90% of the additional energy in the climate system over the last 50 years has been stored in the ocean, warming it.[24] The remainder of the additional energy has melted ice and warmed the continents and the atmosphere.[25] The ocean heat uptake drives thermal expansion which has contributed to observed sea level rise.[26] Further indicators of climate change include an increase in the frequency and intensity of heavy precipitation, melting of snow and land ice and increased atmospheric humidity.[27] Flora and fauna also portray behaviour consistent with warming, such as the earlier flowering of plants in spring.[28]
Global warming refers to global averages, with the amount of warming varying by region. Since the pre-industrial period, global average land temperatures have increased almost twice as fast as global average temperatures.[29] This is due to the larger heat capacity of oceans and because oceans lose more heat by evaporation.[30] Patterns of warming are independent of the locations of greenhouse gas emissions because the gases persist long enough to diffuse across the planet; however, localized black carbon deposits on snow and ice do contribute to Arctic warming.[31]
The Northern Hemisphere and North Pole have warmed much faster than the South Pole and Southern Hemisphere. The Northern Hemisphere not only has much more land, but also more snow area and sea ice, because of how the land masses are arranged around the Arctic Ocean. As these surfaces flip from being reflective to dark after the ice has melted, they start absorbing more heat. The Southern Hemisphere already had little sea ice in summer before it started warming.[32] Arctic temperatures have increased and are predicted to continue to increase during this century at over twice the rate of the rest of the world.[33] Melting of glaciers and ice sheets in the Arctic disrupts ocean circulation, including a weakened Gulf Stream, causing increased warming in some areas.[34]
Although record-breaking years attract considerable media attention, individual years are less significant than the overall global surface temperature, which is subject to short-term fluctuations that overlie long-term trends.[35] An example of such an episode is the slower rate of surface temperature increase from 1998 to 2012, which was described as the global warming hiatus.[36] Throughout this period, ocean heat storage continued to progress steadily upwards, and in subsequent years, surface temperatures have spiked upwards. The slower pace of warming can be attributed to a combination of natural fluctuations, reduced solar activity, and increased reflection of sunlight by particles from volcanic eruptions.[37]
By itself, the climate system experiences various cycles which can last for years (such as the El Niño–Southern Oscillation) to decades or centuries.[38] Other changes are caused by an imbalance of energy at the top of the atmosphere: external forcings. These forcings are "external" to the climate system, but not always external to the Earth.[39] Examples of external forcings include changes in the composition of the atmosphere (e.g. increased concentrations of greenhouse gases), solar luminosity, volcanic eruptions, and variations in the Earth's orbit around the Sun.[40]
Attribution of climate change is the effort to scientifically show which mechanisms are responsible for observed changes in Earth's climate. First, known internal climate variability and natural external forcings need to be ruled out. Therefore, a key approach is to use computer modelling of the climate system to determine unique "fingerprints" for all potential causes. By comparing these fingerprints with observed patterns and evolution of climate change, and the observed history of the forcings, the causes of the observed changes can be determined.[41] For example, solar forcing can be ruled out as major cause because its fingerprint is warming in the entire atmosphere, and only the lower atmosphere has warmed as expected for greenhouse gases.[42] The major causes of current climate change are primarily greenhouse gases, and secondarily land use changes, and aerosols and soot.[43]
Greenhouse gases trap heat radiating from the Earth to space.[44] This heat, in the form of infrared radiation, gets absorbed and emitted by these gases in the atmosphere, thus warming the lower atmosphere and the surface. Before the Industrial Revolution, naturally occurring amounts of greenhouse gases caused the air near the surface to be warmer by about 33 °C (59 °F) than it would be in their absence.[45] Without the Earth's atmosphere, the Earth's average temperature would be well below the freezing temperature of water.[46] While water vapour (~50%) and clouds (~25%) are the biggest contributors to the greenhouse effect, they increase as a function of temperature and are therefore considered feedbacks. Increased concentrations of gases such as CO2 (~20%), ozone and N2O, on the other hand, act as external forcings.[47] Ozone acts as a greenhouse gas in the lowest layer of the atmosphere, the troposphere. Furthermore, it is highly reactive and interacts with other greenhouse gases and aerosols.[48]
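The roughly 33 °C figure can be illustrated with a standard zero-dimensional energy-balance estimate; the solar constant, planetary albedo and observed mean temperature used below are textbook approximations assumed for this sketch, not values taken from the cited sources:
# Effective temperature of an Earth without a greenhouse effect (simple energy balance).
sigma = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
solar_constant = 1361     # incoming solar radiation, W/m^2 (approximate)
albedo = 0.3              # fraction of sunlight reflected back to space (approximate)
absorbed = solar_constant * (1 - albedo) / 4      # averaged over the whole sphere
t_effective = (absorbed / sigma) ** 0.25          # about 255 K
t_observed = 288                                  # approximate observed mean surface temperature, K
print(f"Effective temperature without greenhouse gases: {t_effective:.0f} K")
print(f"Greenhouse warming of the surface: about {t_observed - t_effective:.0f} °C")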
Human activity since the Industrial Revolution, mainly extracting and burning fossil fuels,[49] has increased the amount of greenhouse gases in the atmosphere. The resulting increases in CO2, methane, tropospheric ozone, CFCs, and nitrous oxide have raised radiative forcing. In 2018, the concentrations of CO2 and methane had increased by about 45% and 160%, respectively, since pre-industrial times.[50] In 2013, CO2 readings taken at the world's primary benchmark site at Mauna Loa surpassed 400 ppm for the first time.[51] These levels are much higher than at any time during the last 800,000 years, the period for which reliable data have been collected from ice cores.[52] Less direct geological evidence indicates that CO2 values have not been this high for millions of years.[53]
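A small sketch linking the quoted percentage increases to absolute concentrations; the pre-industrial levels of roughly 280 ppm CO2 and 720 ppb methane are standard approximations assumed here, not figures given in the cited sources:
# Translate the ~45% (CO2) and ~160% (CH4) increases into approximate concentrations.
preindustrial_co2_ppm = 280      # assumed approximate pre-industrial CO2 level
preindustrial_ch4_ppb = 720      # assumed approximate pre-industrial methane level
co2_2018_ppm = preindustrial_co2_ppm * 1.45
ch4_2018_ppb = preindustrial_ch4_ppb * 2.60
print(f"CO2 around 2018: roughly {co2_2018_ppm:.0f} ppm, consistent with readings above 400 ppm")
print(f"CH4 around 2018: roughly {ch4_2018_ppb:.0f} ppb")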
Global anthropogenic greenhouse gas emissions in 2018 excluding land use change were equivalent to 52 billion tonnes of carbon dioxide. Of these emissions, 72% was carbon dioxide from fossil fuel burning and industry, 19% was methane, largely from livestock,[54] 6% was nitrous oxide, mainly from agriculture, and 3% was fluorinated gases.[55] A further 4 billion tonnes of CO2 was released as a consequence of land use change, which is primarily due to deforestation.[56] From a production standpoint, the primary sources of global GHG emissions are estimated as: electricity and heat (25%), agriculture and forestry (24%), industry (21%), and transportation (14%).[57] Consumption based estimates of GHG emissions offer another useful way to understand sources of global warming, and may better capture the effects of trade.[58] From a consumption standpoint, the dominant sources of global 2010 emissions were found to be: food (30%), washing, heating, and lighting (26%); personal transport and freight (20%); and building construction (15%).[59]
Despite the contribution of deforestation to GHG emissions, the Earth's land surface, particularly its forests, remain a significant carbon sink for CO2. Natural processes, such as carbon fixation in the soil and photosynthesis, more than offset the GHG contributions from deforestation. The land surface sink is estimated to remove about 11 billion tonnes of CO2 annually from the atmosphere, or about 29% of global CO2 emissions.[60] The ocean also serves as a significant carbon sink via a two-step process. First, CO2 dissolves in the surface water. Afterwards, the ocean's overturning circulation distributes it deep into the ocean's interior, where it accumulates over time as part of the carbon cycle. Over the last two decades, the world's oceans have removed between 20 and 30% of emitted CO2.[61] The strength of both the land and ocean sinks increases as CO2 levels in the atmosphere rise. In this respect they act as negative feedbacks in global warming.[62]
Humans change the Earth's surface mainly to create more agricultural land. Today agriculture takes up 50% of the world's habitable land, while 37% is forests,[63] and that latter figure continues to decrease,[64] largely due to continued forest loss in the tropics.[65] This deforestation is the most significant aspect of land use change affecting global warming. The main causes are: deforestation through permanent land use change for agricultural products such as beef and palm oil (27%), forestry/forest products (26%), short term agricultural cultivation (24%), and wildfires (23%).[66]
In addition to impacting greenhouse gas concentrations, land use changes affect global warming through a variety of other chemical and physical dynamics. Changing the type of vegetation in a region impacts the local temperature by changing how much sunlight gets reflected back into space, called albedo, and how much heat is lost by evaporation. For instance, the change from a dark forest to grassland makes the surface lighter, causing it to reflect more sunlight. Deforestation can also contribute to changing temperatures by affecting the release of aerosols and other chemical compounds that affect clouds; and by changing wind patterns when the land surface has different obstacles.[67] Globally, these effects are estimated to have led to a slight cooling, dominated by an increase in surface albedo.[68] But there is significant geographic variation in how this works. In the tropics the net effect is to produce a significant warming, while at latitudes closer to the poles a loss of albedo leads to an overall cooling effect.[67]
Air pollution, in the form of aerosols, not only puts a large burden on human health, but also affects the climate on a large scale.[69] From 1961 to 1990, a gradual reduction in the amount of sunlight reaching the Earth's surface was observed, a phenomenon popularly known as global dimming,[70] typically attributed to aerosols from biofuel and fossil fuel burning.[71] Aerosol removal by precipitation gives tropospheric aerosols an atmospheric lifetime of only about a week, while stratospheric aerosols can remain in the atmosphere for a few years.[72] Globally, aerosols have been declining since 1990, removing some of the masking of global warming that they had been providing.[73]
In addition to their direct effect by scattering and absorbing solar radiation, aerosols have indirect effects on the Earth's radiation budget. Sulfate aerosols act as cloud condensation nuclei and thus lead to clouds that have more and smaller cloud droplets. These clouds reflect solar radiation more efficiently than clouds with fewer and larger droplets.[74] This effect also causes droplets to be of more uniform size, which reduces the growth of raindrops and makes clouds more reflective to incoming sunlight.[75] Indirect effects of aerosols are the largest uncertainty in radiative forcing.[76]
While aerosols typically limit global warming by reflecting sunlight, black carbon in soot that falls on snow or ice can contribute to global warming. Not only does this increase the absorption of sunlight, it also increases melting and sea level rise.[77] Limiting new black carbon deposits in the Arctic could reduce global warming by 0.2 °C by 2050.[78]
As the Sun is the Earth's primary energy source, changes in incoming sunlight directly affect the climate system.[79] Solar irradiance has been measured directly by satellites,[80] and indirect measurements are available beginning in the early 1600s.[79] There has been no upward trend in the amount of the Sun's energy reaching the Earth, so it cannot be responsible for the current warming.[81] Physical climate models are also unable to reproduce the rapid warming observed in recent decades when taking into account only variations in solar output and volcanic activity.[82] Another line of evidence for the warming not being due to the Sun is how temperature changes differ at different levels in the Earth's atmosphere.[83] According to basic physical principles, the greenhouse effect produces warming of the lower atmosphere (the troposphere), but cooling of the upper atmosphere (the stratosphere).[84] If solar variations were responsible for the observed warming, warming of both the troposphere and the stratosphere would be expected, but that has not been the case.[42] Explosive volcanic eruptions represent the largest natural forcing over the industrial era. When the eruption is sufficiently strong with sulfur dioxide reaching the stratosphere, sunlight can be partially blocked for a couple of years, with a temperature signal lasting about twice as long.[85]
The response of the climate system to an initial forcing is increased by self-reinforcing feedbacks and reduced by balancing feedbacks.[87] The main balancing feedback to global temperature change is radiative cooling to space as infrared radiation, which increases strongly with increasing temperature.[88] The main reinforcing feedbacks are the water vapour feedback, the ice–albedo feedback, and probably the net effect of clouds.[89] Uncertainty over feedbacks is the major reason why different climate models project different magnitudes of warming for a given amount of emissions.[90]
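A minimal sketch of how balancing and reinforcing feedbacks combine to set the temperature response; the numbers below are placeholders of roughly the right order of magnitude, chosen for illustration rather than taken from the text:
# Toy calculation of how feedbacks scale the warming from a given forcing.
forcing = 3.7                  # W/m^2, approximate forcing from a doubling of CO2 (assumed)
planck_response = 3.2          # W/m^2 per K, balancing radiative-cooling response (approximate)
reinforcing_feedbacks = 1.9    # W/m^2 per K, placeholder sum for water vapour, ice-albedo and clouds
warming_no_feedbacks = forcing / planck_response
warming_with_feedbacks = forcing / (planck_response - reinforcing_feedbacks)
print(f"Warming with only the balancing response: {warming_no_feedbacks:.1f} K")
print(f"Warming with reinforcing feedbacks added: {warming_with_feedbacks:.1f} K")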
As air gets warmer, it can hold more moisture. After an initial warming due to emissions of greenhouse gases, the atmosphere will hold more water. As water is a potent greenhouse gas, this further heats the climate: the water vapour feedback.[89] The reduction of snow cover and sea ice in the Arctic reduces the albedo of the Earth's surface.[91] More of the Sun's energy is now absorbed in these regions, contributing to Arctic amplification, which has caused Arctic temperatures to increase at more than twice the rate of the rest of the world.[92] Arctic amplification also causes methane to be released as permafrost melts, which is expected to surpass land use changes as the second strongest anthropogenic source of greenhouse gases by the end of the century.[93]
Cloud cover may change in the future. If cloud cover increases, more sunlight will be reflected back into space, cooling the planet. Simultaneously, the clouds enhance the greenhouse effect, warming the planet. The opposite is true if cloud cover decreases. Which process is more important depends on the cloud type and location. Overall, the net feedback over the industrial era has probably been self-reinforcing.[94]
Roughly half of each year's CO2 emissions have been absorbed by plants on land and in oceans.[95] Carbon dioxide and an extended growing season have stimulated plant growth making the land carbon cycle a balancing feedback. Climate change also increases droughts and heat waves that inhibit plant growth, which makes it uncertain whether this balancing feedback will persist in the future.[96] Soils contain large quantities of carbon and may release some when they heat up.[97] As more CO2 and heat are absorbed by the ocean, it is acidifying and ocean circulation can change, changing the rate at which the ocean can absorb atmospheric carbon.[98]
Future warming depends on the strength of climate feedbacks and on emissions of greenhouse gases.[99] The former is often estimated using climate models. A climate model is a representation of the physical, chemical, and biological processes that affect the climate system.[100] They also include changes in the Earth's orbit, historical changes in the Sun's activity, and volcanic forcing.[101] Computer models attempt to reproduce and predict the circulation of the oceans, the annual cycle of the seasons, and the flows of carbon between the land surface and the atmosphere.[102] There are more than two dozen scientific institutions that develop climate models.[103] Models not only project different future temperatures with different emissions of greenhouse gases, but also do not fully agree on the strength of different feedbacks on climate sensitivity and the amount of inertia of the system.[104]
The physical realism of models is tested by examining their ability to simulate contemporary or past climates.[105] Past models have underestimated the rate of Arctic shrinkage[106] and underestimated the rate of precipitation increase.[107] Sea level rise since 1990 was underestimated in older models, but now agrees well with observations.[108] The 2017 United States-published National Climate Assessment notes that "climate models may still be underestimating or missing relevant feedback processes".[109]
Four Representative Concentration Pathways (RCPs) are used as input for climate models: "a stringent mitigation scenario (RCP2.6), two intermediate scenarios (RCP4.5 and RCP6.0) and one scenario with very high GHG [greenhouse gas] emissions (RCP8.5)".[110] RCPs only look at concentrations of greenhouse gases, and so do not include the response of the carbon cycle.[111] Climate model projections summarized in the IPCC Fifth Assessment Report indicate that, during the 21st century, the global surface temperature is likely to rise a further 0.3 to 1.7 °C (0.5 to 3.1 °F) in a moderate scenario, or as much as 2.6 to 4.8 °C (4.7 to 8.6 °F) in an extreme scenario, depending on the rate of future greenhouse gas emissions and on climate feedback effects.[112]
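For intuition about how concentration pathways map onto radiative forcing, a sketch using the widely used simplified approximation ΔF ≈ 5.35 × ln(C/C0) W/m² for CO2; both the use of this formula here and the illustrative end-of-century concentrations are assumptions of this example, not statements from the article:
import math

# Approximate radiative forcing from CO2 relative to a 280 ppm pre-industrial level.
def co2_forcing(ppm, reference_ppm=280.0):
    """Simplified logarithmic forcing expression, in W/m^2 (approximation)."""
    return 5.35 * math.log(ppm / reference_ppm)

for ppm in (420, 540, 940):    # illustrative end-of-century CO2 levels for low, middle and high pathways
    print(f"{ppm} ppm CO2 -> forcing of about {co2_forcing(ppm):.1f} W/m^2")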
A subset of climate models add societal factors to a simple physical climate model. These models simulate how population, economic growth, and energy use affect – and interact with – the physical climate. With this information, these models can produce scenarios of how greenhouse gas emissions may vary in the future. This output is then used as input for physical climate models to generate climate change projections.[113] In some scenarios emissions continue to rise over the century, while others have reduced emissions.[114] Fossil fuel resources are abundant enough that their scarcity cannot be relied on to limit carbon emissions in the 21st century.[115] Emission scenarios can be combined with modelling of the carbon cycle to predict how atmospheric concentrations of greenhouse gases might change in the future.[116] According to these combined models, by 2100 the atmospheric concentration of CO2 could be as low as 380 or as high as 1400 ppm, depending on the Shared Socioeconomic Pathway (SSP) and the mitigation scenario.[117]
The remaining carbon emissions budget is determined from modelling the carbon cycle and climate sensitivity to greenhouse gases.[118] According to the IPCC, global warming can be kept below 1.5 °C with a two-thirds chance if emissions after 2018 do not exceed 420 or 570 GtCO2 depending on the choice of the measure of global temperature. This amount corresponds to 10 to 13 years of current emissions. There are high uncertainties about the budget; for instance, it may be 100 GtCO2 smaller due to methane release from permafrost and wetlands.[119]
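The "10 to 13 years" figure is simply the remaining budget divided by an annual emission rate, as the short sketch below shows; the roughly 42 GtCO2 per year used here is an assumed late-2010s global total, not a value taken from the cited report.

```python
# Carbon budget arithmetic: years remaining = remaining budget / annual emissions.
# The ~42 GtCO2/yr emission rate is an assumption of this sketch.

annual_emissions_gtco2 = 42.0
for budget_gtco2 in (420.0, 570.0):
    years_left = budget_gtco2 / annual_emissions_gtco2
    print(f"{budget_gtco2:.0f} GtCO2 budget: about {years_left:.1f} years at current emissions")
```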
The environmental effects of global warming are broad and far-reaching. They include effects on the oceans, ice, and weather and may occur gradually or rapidly. Evidence for these effects comes from studying climate change in the past, modelling and modern observations.[121] Since the 1950s, droughts and heat waves have appeared simultaneously with increasing frequency.[122] Extremely wet or dry events within the monsoon period have increased in India and East Asia.[123] Various mechanisms have been identified that might explain extreme weather in mid-latitudes from the rapidly warming Arctic, such as the jet stream becoming more erratic.[124] The maximum rainfall and wind speed from hurricanes and typhoons is likely increasing.[125]
Between 1993 and 2017, the global mean sea level rose on average by 3.1 ± 0.3 mm per year, with an acceleration detected as well.[126] Over the 21st century, the IPCC projects that in a very high emissions scenario the sea level could rise by 61–110 cm.[127] The rate of ice loss from glaciers and ice sheets in the Antarctic is a key area of uncertainty since this source could account for 90% of the potential sea level rise:[128] increased ocean warmth is undermining and threatening to unplug Antarctic glacier outlets, potentially resulting in more rapid sea level rise.[129] The retreat of non-polar glaciers also contributes to sea level rise.[130]
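To see why the detected acceleration and the uncertain ice-sheet contribution matter, the sketch below compares a naive linear extrapolation of the observed 3.1 mm/yr rate with the projected high-emissions range for 2100; the 2020 start year and the assumption of a constant rate are choices of this example.

```python
# Naive linear extrapolation of the observed 1993-2017 mean rate, compared with
# the projected high-emissions range. The 2020 start year and the assumption of
# a constant rate are choices of this sketch.

rate_mm_per_year = 3.1
years = 2100 - 2020
linear_rise_cm = rate_mm_per_year * years / 10.0
print(f"linear extrapolation, 2020-2100: about {linear_rise_cm:.0f} cm")
print("projected range for 2100 in a very high emissions scenario: 61-110 cm")
```

The gap between the roughly 25 cm obtained by extrapolation and the projected 61–110 cm is largely down to accelerating ice loss.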
Global warming has led to decades of shrinking and thinning of the Arctic sea ice, making it vulnerable to atmospheric anomalies.[131] Projections of declines in Arctic sea ice vary.[132] While ice-free summers are expected to be rare at 1.5 °C (2.7 °F) of warming, they are set to occur once every three to ten years at a warming level of 2.0 °C (3.6 °F),[133] increasing the ice–albedo feedback.[134] Higher atmospheric CO2 concentrations have led to an increase in dissolved CO2, which causes ocean acidification.[135] Furthermore, oxygen levels decrease because oxygen is less soluble in warmer water, an effect known as ocean deoxygenation.[136]
The long-term effects of global warming include further ice melt, ocean warming, sea level rise, and ocean acidification. On the timescale of centuries to millennia, the magnitude of global warming will be determined primarily by anthropogenic CO2 emissions.[137] This is due to carbon dioxide's very long lifetime in the atmosphere.[137] Carbon dioxide is slowly taken up by the ocean, such that ocean acidification will continue for hundreds to thousands of years.[138] The emissions are estimated to have prolonged the current interglacial period by at least 100,000 years.[139] Because the great mass of glaciers and ice caps depressed the Earth's crust, another long-term effect of ice melt and deglaciation is the gradual rising of landmasses, a process called post-glacial rebound.[140] Sea level rise will continue over many centuries, with an estimated rise of 2.3 metres per degree Celsius (4.2 ft/°F) after 2000 years.[141]
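The last figure implies a simple multiplication, shown below for a few illustrative warming levels; the warming levels themselves are inputs chosen for this sketch, not projections from the cited sources.

```python
# Long-term commitment: roughly 2.3 m of eventual sea level rise per degree
# Celsius, realised over about 2000 years. The warming levels are illustrative.

commitment_m_per_degc = 2.3
for warming_degc in (1.5, 2.0, 3.0):
    rise_m = commitment_m_per_degc * warming_degc
    print(f"{warming_degc:.1f} degC sustained warming: about {rise_m:.1f} m over ~2000 years")
```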
If global warming exceeds 1.5 °C, there is a greater risk of passing through "tipping points", thresholds beyond which certain impacts can no longer be avoided even if temperatures are reduced.[142] Some large-scale changes could occur abruptly, i.e. over a short time period. One potential source of abrupt tipping would be the rapid release of methane and carbon dioxide from permafrost, which would amplify global warming.[143] Another example is the possibility for the Atlantic Meridional Overturning Circulation to collapse,[144] which could trigger cooling in the North Atlantic, Europe, and North America.[145] If multiple temperature and carbon cycle tipping points reinforce each other, or if there were to be strong threshold behaviour in cloud cover, there could be a global tipping into a hothouse Earth.[146]
Recent warming has driven many terrestrial and freshwater species poleward and towards higher altitudes.[147] Higher atmospheric CO2 levels and an extended growing season have resulted in global greening, whereas heatwaves and drought have reduced ecosystem productivity in some regions. The future balance of these opposing effects is unclear.[148] Global warming has contributed to the expansion of drier climatic zones, such as the probable expansion of deserts in the subtropics.[149] Without substantial actions to reduce the rate of global warming, land-based ecosystems risk major shifts in their composition and structure.[150] Overall, it is expected that climate change will result in the extinction of many species and reduced diversity of ecosystems.[151]
The ocean has heated more slowly than the land, but plants and animals in the ocean have migrated towards the colder poles as fast as or faster than species on land.[152] Just as on land, heat waves in the ocean occur more frequently as a result of climate change, with harmful effects found on a wide range of organisms such as corals, kelp, and seabirds.[153] Ocean acidification threatens damage to coral reefs, fisheries, protected species, and other natural resources of value to society.[154] Coastal ecosystems are under stress, with almost half of wetlands having disappeared as a consequence of climate change and other human impacts. Harmful algae blooms have increased due to warming, ocean deoxygenation and eutrophication.[155]
Ecological collapse possibilities. Bleaching has damaged the Great Barrier Reef and threatens reefs worldwide.[156]
Extreme weather. Drought and high temperatures worsened the 2020 bushfires in Australia.[157]
Arctic warming. Permafrost thaws undermine infrastructure and release methane in a positive feedback loop.[143]
Habitat destruction. Many arctic animals rely on sea ice, which has been disappearing in a warming Arctic.[158]
Pest propagation. Mild winters allow more pine beetles to survive to kill large swaths of forest.[159]
The effects of climate change on human systems, mostly due to warming and shifts in precipitation, have been detected worldwide. The social impacts of climate change will be uneven across the world.[160] All regions are at risk of experiencing negative impacts,[161] with low-latitude, less developed areas facing the greatest risk.[162] Global warming has likely already increased global economic inequality, and is projected to do so in the future.[163] Regional impacts of climate change are now observable on all continents and across ocean regions.[164] The Arctic, Africa, small islands, and Asian megadeltas are regions that are likely to be especially affected by future climate change.[165] Many risks increase with higher magnitudes of global warming.[166]
Crop production will probably be negatively affected in low-latitude countries, while effects at northern latitudes may be positive or negative.[167] Global warming of around 4 °C relative to late 20th century levels could pose a large risk to global and regional food security.[168] The impact of climate change on crop productivity for the four major crops was negative for wheat and maize, and neutral for soy and rice, in the years 1960–2013.[169] Up to an additional 183 million people worldwide, particularly those with lower incomes, are at risk of hunger as a consequence of warming.[170] While increased CO2 levels help crop growth at lower temperature increases, those crops do become less nutritious.[170] Based on local and indigenous knowledge, climate change is already affecting food security in mountain regions in South America and Asia, and in various drylands, particularly in Africa.[170] Regions dependent on glacier water, regions that are already dry, and small islands are also at increased risk of water stress due to climate change.[171]
In small islands and mega deltas, inundation from sea level rise is expected to threaten vital infrastructure and human settlements.[172] This could lead to homelessness in countries with low-lying areas such as Bangladesh, as well as statelessness for populations in island nations, such as the Maldives and Tuvalu.[173] Climate change can be an important driver of migration, both within and between countries.[174]
The majority of severe impacts of climate change are expected in sub-Saharan Africa and South-East Asia, where existing poverty is exacerbated.[175] The World Bank estimates that global warming could drive over 120 million people into poverty by 2030.[176] Current inequalities between men and women, between rich and poor and between people of different ethnicity have been observed to worsen as a consequence of climate variability and climate change.[177] Existing stresses include poverty, political conflicts, and ecosystem degradation. Regions may even become uninhabitable, with humidity and temperatures reaching levels too high for humans to survive.[178]
Generally, impacts on public health will be more negative than positive.[179] Impacts include the direct effects of extreme weather, leading to injury and loss of life;[180] and indirect effects, such as undernutrition brought on by crop failures.[181] Various infectious diseases are more easily transmitted in a warming climate, such as dengue fever, which affects children most severely, and malaria.[182] Young children are the most vulnerable to food shortages and, together with older people, to extreme heat.[183] Climate change has been linked to an increase in violent conflict by amplifying poverty and economic shocks, which are well-documented drivers of these conflicts.[184] Links have been made to a wide range of violent behaviour, including violent crime, civil unrest, and war, but conclusive scientific evidence remains elusive.[185]
Environmental migration. Sparser rainfall leads to desertification that harms agriculture and can displace populations.[186]
Farming. Droughts, rising temperatures, and extreme weather negatively impact agriculture.[187]
Tidal flooding. Sea level rise increases flooding in low lying coastal regions. Shown: Venice, Italy.[188]
Storm intensification. Bangladesh after Cyclone Sidr is an example of catastrophic flooding from increased rainfall.[189]
Heat wave intensification. Events like the June 2019 European heat wave are becoming more common.[190]
Mitigation of and adaptation to climate change are two complementary responses to global warming. Successful adaptation is easier if there are substantial emission reductions. Many of the countries that have contributed least to global greenhouse gas emissions are among the most vulnerable to climate change, which raises questions about justice and fairness with regard to mitigation and adaptation.[192]
Climate change impacts can be mitigated by reducing greenhouse gas emissions and by enhancing the capacity of Earth's surface to absorb greenhouse gases from the atmosphere.[193]
In order to limit global warming to less than 1.5 °C with a high likelihood of success, the IPCC estimates that global GHG emissions will need to be net zero by 2050,[194] or by 2070 with a 2 °C target. This will require far-reaching, systemic changes on an unprecedented scale in energy, land, cities, transport, buildings, and industry.[195] To make progress towards that goal, the United Nations Environment Programme estimates that, within the next decade, countries will need to triple the amount of reductions they have committed to under their current Paris Agreement pledges.[196]
Long-term scenarios all point to rapid and significant investment in renewable energy and energy efficiency as key to reducing GHG emissions.[197] These technologies include solar and wind power, bioenergy, geothermal energy, and hydroelectricity. Combined, they are capable of supplying several times the world’s current energy needs.[198] Solar PV and wind, in particular, have seen substantial growth and progress over the last few years,[199] such that they are currently among the cheapest sources of new power generation.[200] Renewables represented 75% of all new electricity generation installed in 2019, with solar and wind constituting nearly all of that amount.[201] However, fossil fuels continue to dominate world energy supplies. In 2018 fossil fuels produced 80% of the world’s energy, with modern renewable sources, including solar and wind power, accounting for around 11%.[202]
There are obstacles to the rapid development of renewable energy. Environmental and land use concerns are sometimes associated with large solar, wind and hydropower projects.[203] Solar and wind power also require energy storage systems and other modifications to the electricity grid to operate effectively,[204] although several storage technologies are now emerging to supplement the traditional use of pumped-storage hydropower.[205] The use of rare metals and other hazardous materials has also been raised as a concern with solar power.[206] The use of bioenergy is often not carbon neutral, and may have negative consequences for food security,[207] largely due to the amount of land required compared to other renewable energy options.[208]
For certain energy supply needs, as well as specific CO2-intensive heavy industries, carbon capture and storage may be a viable method of reducing CO2 emissions. Although high costs have been a concern with this technology,[209] it may be able to play a significant role in limiting atmospheric CO2 concentrations by mid-century.[210] Greenhouse gas emissions can be offset by enhancing Earth’s land carbon sink to sequester significantly larger amounts of CO2 beyond naturally occurring levels.[211] Forest preservation, reforestation and tree planting on non-forest lands are considered the most effective, although they may present food security concerns. Soil management on croplands and grasslands is another effective mitigation technique. For all these approaches there remain large scientific uncertainties with implementing them on a global scale.[212]
Individuals can also take actions to reduce their carbon footprint. These include: driving an EV or other energy-efficient car and reducing vehicle miles by using mass transit or cycling; adopting a plant-based diet; reducing energy use in the home; limiting consumption of goods and services; and foregoing air travel.[213]
Although there is no single pathway to limit global warming to 1.5 or 2°C,[214] most scenarios and strategies see a major increase in the use of renewable energy in combination with increased energy efficiency measures to generate the needed greenhouse gas reductions.[215] Forestry and agriculture components also include steps to reduce pressures on ecosystems and enhance their carbon sequestration capabilities.[216] Scenarios that limit global warming to 1.5°C generally project the large scale use of carbon dioxide removal methods to augment the greenhouse gas reduction approaches mentioned above.[217]
Renewable energy would become the dominant form of electricity generation, rising to 85% or more by 2050 in some scenarios. The use of electricity for other needs, such as heating, would rise to the point where electricity becomes the largest form of overall energy supply by 2050.[218] Investment in coal would be eliminated and coal use nearly phased out by 2050.[219]
In transport, scenarios envision sharp increases in the market share of electric vehicles, low carbon fuel substitution for other transportation modes like shipping, and changes in transportation patterns to reduce overall demand, for example increased public transport.[220] Buildings will see additional electrification with the use of technologies like heat pumps, as well as continued energy efficiency improvements achieved via low energy building codes.[221] Industrial efforts will focus on increasing the energy efficiency of production processes, such as the use of cleaner technology for cement production,[222] designing and creating less energy intensive products, increasing product lifetimes, and developing incentives to reduce product demand.[223]
The agriculture and forestry sector faces a triple challenge of limiting greenhouse gas emissions, preventing further conversion of forests to agricultural land, and meeting increases in world food demand.[224] A suite of actions could reduce agriculture/forestry based greenhouse gas emissions by 66% from 2010 levels by reducing growth in demand for food and other agricultural products, increasing land productivity, protecting and restoring forests, and reducing GHG emissions from agricultural production.[225]
A wide range of policies, regulations and laws are being used to reduce greenhouse gases. Carbon pricing mechanisms include carbon taxes and emissions trading systems.[226] As of 2019, carbon pricing covers about 20% of global greenhouse gas emissions.[227] Renewable portfolio standards have been enacted in several countries to move utilities to increase the percentage of electricity they generate from renewable sources.[228] Phasing out of fossil fuel subsidies, currently estimated at $300 billion globally (about twice the level of renewable energy subsidies),[229] could reduce greenhouse gas emissions by 6%.[230] Subsidies could also be redirected to support the transition to clean energy.[231] More prescriptive methods that can reduce greenhouse gases include vehicle efficiency standards,[232] renewable fuel standards, and air pollution regulations on heavy industry.[233]
As the use of fossil fuels is reduced, there are Just Transition considerations involving the social and economic challenges that arise. An example is the employment of workers in the affected industries, along with the well-being of the broader communities involved.[234] Climate justice considerations, such as those facing indigenous populations in the Arctic,[235] are another important aspect of mitigation policies.[236]
Adaptation is "the process of adjustment to current or expected changes in climate and its effects". As climate change varies across regions, adaptation does too.[237] While some adaptation responses call for trade-offs, others bring synergies and co-benefits.[238] Examples of adaptation are improved coastline protection, better disaster management, and the development of more resistant crops.[239] Increased use of air conditioning allows people to better cope with heat, but also increases energy demand.[240] Adaptation is especially important in developing countries since they are predicted to bear the brunt of the effects of global warming.[241] The capacity and potential for humans to adapt, called adaptive capacity, is unevenly distributed across different regions and populations, and developing countries generally have less capacity to adapt.[242] The public sector, private sector, and communities are all gaining experience with adaptation, and adaptation is becoming embedded within certain planning processes.[243] There are limits to adaptation, and more severe climate change requires more transformative adaptation, which can be prohibitively expensive.[237]
Geoengineering or climate engineering is the deliberate large-scale modification of the climate to counteract climate change.[244] Techniques fall generally into the categories of solar radiation management and carbon dioxide removal, although various other schemes have been suggested. A 2018 review paper concluded that although geo-engineering is physically possible, all the techniques are in early stages of development, carry large risks and uncertainties and raise significant ethical and legal issues.[245]
The geopolitics of climate change is complex and has often been framed as a prisoners' dilemma, in which all countries benefit from mitigation done by other countries, but individual countries would lose from investing in a transition to a low-carbon economy themselves. Net importers of fossil fuels win economically from transitioning, and net exporters face stranded assets: fossil fuels they cannot sell.[246] Furthermore, the benefits to individual countries in terms of public health and local environmental improvements of coal phase-out exceed the costs, potentially eliminating the free-rider problem.[247] The geopolitics may be further complicated by the supply chain of rare earth metals, which are necessary to produce clean technology.[248]
As of 2020[update] nearly all countries in the world are parties to the United Nations Framework Convention on Climate Change (UNFCCC).[249] The objective of the Convention is to prevent dangerous human interference with the climate system.[250] As stated in the Convention, this requires that greenhouse gas concentrations are stabilized in the atmosphere at a level where ecosystems can adapt naturally to climate change, food production is not threatened, and economic development can be sustained.[251] The Framework Convention was agreed on in 1992, but global emissions have risen since then.[57] Its yearly conferences are the stage of global negotiations.[252]
This mandate was sustained in the 1997 Kyoto Protocol to the Framework Convention.[253] In ratifying the Kyoto Protocol, most developed countries accepted legally binding commitments to limit their emissions. These first-round commitments expired in 2012.[254] United States President George W. Bush rejected the treaty on the basis that "it exempts 80% of the world, including major population centres such as China and India, from compliance, and would cause serious harm to the US economy".[255] During these negotiations, the G77 (a lobbying group in the United Nations representing developing countries)[256] pushed for a mandate requiring developed countries to "[take] the lead" in reducing their emissions.[257] This was justified on the basis that the developed countries' emissions had contributed most to the accumulation of greenhouse gases in the atmosphere, per-capita emissions were still relatively low in developing countries, and the emissions of developing countries would grow to meet their development needs.[258]
In 2009 several UNFCCC Parties produced the Copenhagen Accord,[260] which has been widely portrayed as disappointing because of its low goals, leading poorer nations to reject it.[261] Nations associated with the Accord aimed to limit the future increase in global mean temperature to below 2 °C.[262] In 2015 all UN countries negotiated the Paris Agreement, which aims to keep climate change well below 2 °C and contains an aspirational goal of keeping warming under 1.5 °C. The agreement replaced the Kyoto Protocol. Unlike Kyoto, no binding emission targets are set in the Paris Agreement. Instead, the procedure of regularly setting ever more ambitious goals and reevaluating these goals every five years has been made binding.[263] The Paris Agreement reiterated that developing countries must be financially supported.[264] As of November 2019[update], 194 states and the European Union have signed the treaty and 186 states and the EU have ratified or acceded to the agreement.[265] In November 2019 the Trump administration notified the UN that it would withdraw the United States from the Paris Agreement in 2020.[266]
In 2019, the British Parliament became the first national legislature in the world to officially declare a climate emergency.[267] Other countries and jurisdictions followed.[268] In November 2019 the European Parliament declared a "climate and environmental emergency",[269] and the European Commission presented its European Green Deal with which they hope to make the EU carbon-neutral in 2050.[270]
While the ozone layer and climate change are considered separate problems, the solution to the former has significantly mitigated global warming. The Montreal Protocol, an international agreement to stop emitting ozone-depleting gases, is estimated to have been more effective at mitigation than the Kyoto Protocol, which was specifically designed to curb greenhouse gas emissions.[271] It has been argued that the Montreal Protocol may have done more than any other measure, as of 2017[update], to mitigate climate change as those substances were also powerful greenhouse gases.[272]
In the scientific literature, there is an overwhelming consensus that global surface temperatures have increased in recent decades and that the trend is caused mainly by human-induced emissions of greenhouse gases.[274] No scientific body of national or international standing disagrees with this view.[275] Scientific discussion takes place in journal articles that are peer-reviewed, which scientists subject to assessment every couple of years in the Intergovernmental Panel on Climate Change reports.[276] In 2013, the IPCC Fifth Assessment Report stated that "it is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century".[277] Their 2018 report expressed the scientific consensus as: "human influence on climate has been the dominant cause of observed warming since the mid-20th century".[278]
Consensus has further developed that some form of action should be taken to protect people against the impacts of climate change, and national science academies have called on world leaders to cut global emissions.[279] In 2017, in the second warning to humanity, 15,364 scientists from 184 countries stated that "the current trajectory of potentially catastrophic climate change due to rising greenhouse gases from burning fossil fuels, deforestation, and agricultural production – particularly from farming ruminants for meat consumption" is "especially troubling".[280] In 2019, a group of more than 11,000 scientists from 153 countries named climate change an "emergency" that would lead to "untold human suffering" if no big shifts in action take place.[281] The emergency declaration emphasized that economic growth and population growth "are among the most important drivers of increases in CO2 emissions from fossil fuel combustion" and that "we need bold and drastic transformations regarding economic and population policies".[282]
The global warming problem came to international public attention in the late 1980s.[283] Due to confusing media coverage in the early 1990s, issues such as ozone depletion and climate change were often mixed up, affecting public understanding of these issues.[284] Although there are a few areas of linkage, the relationship between the two is weak.[285]
Significant regional differences exist in how concerned people are about climate change and how much they understand the issue.[286] In 2010, just a little over half the US population viewed it as a serious concern for either themselves or their families, while 73% of people in Latin America and 74% in developed Asia felt this way.[287] Similarly, in 2015 a median of 54% of respondents considered it "a very serious problem", but Americans and Chinese (whose economies are responsible for the greatest annual CO2 emissions) were among the least concerned.[286] Worldwide in 2011, people were more likely to attribute global warming to human activities than to natural causes, except in the US where nearly half of the population attributed global warming to natural causes.[288] Public reactions to global warming and concern about its effects have been increasing, with many perceiving it as the worst global threat.[289] In a 2019 CBS poll, 64% of the US population said that climate change is a "crisis" or a "serious problem", with 44% saying human activity was a significant contributor.[290]
Public debate about climate change has been strongly affected by climate change denial and misinformation, which originated in the United States and has since spread to other countries, particularly Canada and Australia. The actors behind climate change denial form a well-funded and relatively coordinated coalition of fossil fuel companies, industry groups, conservative think tanks, and contrarian scientists.[292] Like the tobacco industry before it, the main strategy of these groups has been to manufacture doubt about scientific data and results.[293] Many who deny, dismiss, or hold unwarranted doubt about the scientific consensus on anthropogenic global warming are labelled as "climate change skeptics", which several scientists have noted is a misnomer.[294]
There are different variants of climate denial: some deny that warming takes place at all, some acknowledge warming but attribute it to natural influences, and some minimize the negative impacts of climate change.[295] Manufacturing uncertainty about the science later developed into a manufacturing of controversy: creating the belief that there remains significant uncertainty about climate change within the scientific community in order to delay policy changes.[296] Strategies to promote these ideas include a criticism of scientific institutions,[297] and questioning the motives of individual scientists.[298] An "echo chamber" of climate-denying blogs and media has further fomented misunderstanding of global warming.[299]
Protests seeking more ambitious climate action increased in the 2010s in the form of fossil fuel divestment,[300] and worldwide demonstrations.[301] In particular, youth across the globe protested by skipping school, inspired by Swedish teenager Greta Thunberg in the school strike for climate.[302] Mass civil disobedience actions by Extinction Rebellion and Ende Gelände have ended in police intervention and large-scale arrests.[303] Litigation is increasingly used as a tool to strengthen climate action, with governments being the biggest target of lawsuits demanding that they become ambitious on climate action or enforce existing laws. Cases against fossil-fuel companies, from activists, shareholders and investors, generally seek compensation for loss and damage.[304]
In 1681 Mariotte noted that glass, though transparent to sunlight, obstructs radiant heat.[305] Around 1774 de Saussure showed that non-luminous warm objects emit infrared heat, and used a glass-topped insulated box to trap and measure heat from sunlight.[306] In 1824 Joseph Fourier proposed by analogy a version of the greenhouse effect; transparent atmosphere lets through visible light, which warms the surface. The warmed surface emits infrared radiation, but the atmosphere is relatively opaque to infrared and slows the emission of energy, warming the planet.[307] Starting in 1859,[308] John Tyndall established that nitrogen and oxygen (99% of dry air) are transparent to infrared, but water vapour and traces of some gases (significantly methane and carbon dioxide) both absorb infrared and, when warmed, emit infrared radiation. Changing concentrations of these gases could have caused "all the mutations of climate which the researches of geologists reveal" including ice ages.[309]
Svante Arrhenius noted that water vapour in air continuously varied, but carbon dioxide (CO2) was determined by long term geological processes. At the end of an ice age, warming from increased CO2 would increase the amount of water vapour, amplifying its effect in a feedback process. In 1896, he published the first climate model of its kind, showing that halving of CO2 could have produced the drop in temperature initiating the ice age. Arrhenius calculated the temperature increase expected from doubling CO2 to be around 5–6 °C (9.0–10.8 °F).[310] Other scientists were initially sceptical and believed the greenhouse effect to be saturated so that adding more CO2 would make no difference. Experts thought climate would be self-regulating.[311] From 1938 Guy Stewart Callendar published evidence that climate was warming and CO2 levels increasing,[312] but his calculations met the same objections.[311]
Early calculations treated the atmosphere as a single layer: Gilbert Plass used digital computers to model the different layers and found added CO2 would cause warming. Hans Suess found evidence CO2 levels had been rising, Roger Revelle showed the oceans would not absorb the increase, and together they helped Charles Keeling to begin a record of continued increase, the Keeling Curve.[311] Scientists alerted the public,[313] and the dangers were highlighted at James Hansen's 1988 Congressional testimony.[314] The Intergovernmental Panel on Climate Change, set up in 1988 to provide formal advice to the world's governments, spurred interdisciplinary research.[315]
Before the 1980s, when it was unclear whether warming by greenhouse gases would dominate aerosol-induced cooling, scientists often used the term "inadvertent climate modification" to refer to humankind's impact on the climate. With increasing evidence of warming, the terms "global warming" and "climate change" were introduced, with the former referring only to increasing surface warming, and the latter to the full effect of greenhouse gases on climate.[316] Global warming became the dominant popular term after NASA climate scientist James Hansen used it in his 1988 testimony in the U.S. Senate.[314] In the 2000s, the term climate change increased in popularity.[317] Global warming is used almost exclusively to refer to human-induced warming of the Earth system, whereas climate change is sometimes used to refer to natural as well as anthropogenic change.[318] The two terms are often used interchangeably.[319]
Various scientists, politicians and news media have adopted the terms climate crisis or climate emergency to talk about climate change, while using global heating instead of global warming.[320] The policy editor-in-chief of The Guardian explained why they included this language in their editorial guidelines: "We want to ensure that we are being scientifically precise, while also communicating clearly with readers on this very important issue".[321] Oxford Dictionaries chose climate emergency as its word of the year for 2019 and defines the term as "a situation in which urgent action is required to reduce or halt climate change and avoid potentially irreversible environmental damage resulting from it".[322]
AR4 Working Group I Report
AR4 Working Group II Report
AR4 Working Group III Report
AR4 Synthesis Report
AR5 Working Group I Report
AR5 Working Group II Report
AR5 Working Group III Report
AR5 Synthesis Report
Special Report: SR15
Special Report: Climate change and Land
Special Report: SROCC
en/4943.html.txt ADDED @@ -0,0 +1,287 @@
A coral reef is an underwater ecosystem characterized by reef-building corals. Reefs are formed of colonies of coral polyps held together by calcium carbonate. Most coral reefs are built from stony corals, whose polyps cluster in groups.
Coral belongs to the class Anthozoa in the animal phylum Cnidaria, which includes sea anemones and jellyfish. Unlike sea anemones, corals secrete hard carbonate exoskeletons that support and protect the coral. Most reefs grow best in warm, shallow, clear, sunny and agitated water. Coral reefs first appeared 485 million years ago, at the dawn of the Early Ordovician, displacing the microbial and sponge reefs of the Cambrian.[1]
Sometimes called rainforests of the sea,[2] shallow coral reefs form some of Earth's most diverse ecosystems. They occupy less than 0.1% of the world's ocean area, about half the area of France, yet they provide a home for at least 25% of all marine species,[3][4][5][6] including fish, mollusks, worms, crustaceans, echinoderms, sponges, tunicates and other cnidarians.[7] Coral reefs flourish in ocean waters that provide few nutrients. They are most commonly found at shallow depths in tropical waters, but deep water and cold water coral reefs exist on smaller scales in other areas.
Coral reefs deliver ecosystem services for tourism, fisheries and shoreline protection. The annual global economic value of coral reefs is estimated between US$30–375 billion[8][9] and US$9.9 trillion.[10] Coral reefs are fragile, partly because they are sensitive to water conditions. They are under threat from excess nutrients (nitrogen and phosphorus), rising temperatures, oceanic acidification, overfishing (e.g., from blast fishing, cyanide fishing, spearfishing on scuba), sunscreen use,[11] and harmful land-use practices, including runoff and seeps (e.g., from injection wells and cesspools).[12][13][14]
Most coral reefs were formed after the last glacial period when melting ice caused sea level to rise and flood continental shelves. Most coral reefs are less than 10,000 years old. As communities established themselves, the reefs grew upwards, pacing rising sea levels. Reefs that rose too slowly could become drowned, without sufficient light.[15] Coral reefs are found in the deep sea away from continental shelves, around oceanic islands and atolls. The majority of these islands are volcanic in origin. Others have tectonic origins where plate movements lifted the deep ocean floor.
In The Structure and Distribution of Coral Reefs,[16] Charles Darwin set out his theory of the formation of atoll reefs, an idea he conceived during the voyage of the Beagle. He theorized that uplift and subsidence of the Earth's crust under the oceans formed the atolls.[17] Darwin set out a sequence of three stages in atoll formation. A fringing reef forms around an extinct volcanic island as the island and ocean floor subsides. As the subsidence continues, the fringing reef becomes a barrier reef and ultimately an atoll reef.
Darwin's theory starts with a volcanic island which becomes extinct
As the island and ocean floor subside, coral growth builds a fringing reef, often including a shallow lagoon between the land and the main reef.
As the subsidence continues, the fringing reef becomes a larger barrier reef further from the shore with a bigger and deeper lagoon inside.
Ultimately, the island sinks below the sea, and the barrier reef becomes an atoll enclosing an open lagoon.
Darwin predicted that underneath each lagoon would be a bedrock base, the remains of the original volcano. Subsequent research supported this hypothesis. Darwin's theory followed from his understanding that coral polyps thrive in the tropics where the water is agitated, but can only live within a limited depth range, starting just below low tide. Where the level of the underlying earth allows, the corals grow around the coast to form fringing reefs, and can eventually grow to become a barrier reef.
Where the bottom is rising, fringing reefs can grow around the coast, but coral raised above sea level dies. If the land subsides slowly, the fringing reefs keep pace by growing upwards on a base of older, dead coral, forming a barrier reef enclosing a lagoon between the reef and the land. A barrier reef can encircle an island, and once the island sinks below sea level a roughly circular atoll of growing coral continues to keep up with the sea level, forming a central lagoon. Barrier reefs and atolls do not usually form complete circles, but are broken in places by storms. Like sea level rise, a rapidly subsiding bottom can overwhelm coral growth, killing the coral and the reef, due to what is called coral drowning.[19] Corals that rely on zooxanthellae can die when the water becomes too deep for their symbionts to adequately photosynthesize, due to decreased light exposure.[20]
The two main variables determining the geomorphology, or shape, of coral reefs are the nature of the substrate on which they rest, and the history of the change in sea level relative to that substrate.
The approximately 20,000-year-old Great Barrier Reef offers an example of how coral reefs formed on continental shelves. Sea level was then 120 m (390 ft) lower than in the 21st century.[21][22] As sea level rose, the water and the corals encroached on what had been hills of the Australian coastal plain. By 13,000 years ago, sea level had risen to 60 m (200 ft) lower than at present, and many hills of the coastal plains had become continental islands. As sea level rise continued, water topped most of the continental islands. The corals could then overgrow the hills, forming cays and reefs. Sea level on the Great Barrier Reef has not changed significantly in the last 6,000 years.[22] The age of living reef structure is estimated to be between 6,000 and 8,000 years.[23] Although the Great Barrier Reef formed along a continental shelf, and not around a volcanic island, Darwin's principles apply. Development stopped at the barrier reef stage, since Australia is not about to submerge. It formed the world's largest barrier reef, 300–1,000 m (980–3,280 ft) from shore, stretching for 2,000 km (1,200 mi).[24]
Healthy tropical coral reefs grow horizontally from 1 to 3 cm (0.39 to 1.18 in) per year, and grow vertically anywhere from 1 to 25 cm (0.39 to 9.84 in) per year; however, they grow only at depths shallower than 150 m (490 ft) because of their need for sunlight, and cannot grow above sea level.[25]
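To get a rough feel for these rates, the sketch below computes how long a reef growing vertically at the quoted speeds would need to build about 60 m of structure, of the order of the post-glacial sea level change described above; the 60 m target is an illustrative choice for this example, not a figure from the cited source.

```python
# Time for vertical reef growth to build 60 m of structure at the quoted rates.
# The 60 m target (of the order of post-glacial sea level change) is an
# illustrative figure chosen for this sketch.

target_height_m = 60.0
for growth_cm_per_year in (1.0, 5.0, 25.0):
    years = target_height_m * 100.0 / growth_cm_per_year
    print(f"at {growth_cm_per_year:.0f} cm/yr: about {years:,.0f} years to build {target_height_m:.0f} m")
```

At the slow end of the range a reef needs thousands of years to keep pace with such a rise, which is why reefs that grow too slowly can become drowned.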
As the name implies, coral reefs are made up of coral skeletons from mostly intact coral colonies. As other chemical elements present in corals become incorporated into the calcium carbonate deposits, aragonite is formed. However, shell fragments and the remains of coralline algae such as the green-segmented genus Halimeda can add to the reef's ability to withstand damage from storms and other threats. Such mixtures are visible in structures such as Eniwetok Atoll.[26]
Since Darwin's identification of the three classical reef formations – the fringing reef around a volcanic island becoming a barrier reef and then an atoll[27] – scientists have identified further reef types. While some sources find only three,[28][29] Thomas and Goudie list four "principal large-scale coral reef types" – the fringing reef, barrier reef, atoll and table reef[30] – while Spalding et al. list five "main types" – the fringing reef, barrier reef, atoll, "bank or platform reef" and patch reef.[31]
A fringing reef, also called a shore reef,[32] is directly attached to a shore,[33] or borders it with an intervening narrow, shallow channel or lagoon.[34] It is the most common reef type.[34] Fringing reefs follow coastlines and can extend for many kilometres.[35] They are usually less than 100 metres wide, but some are hundreds of metres wide.[36] Fringing reefs are initially formed on the shore at the low water level and expand seawards as they grow in size. The final width depends on where the sea bed begins to drop steeply. The surface of the fringe reef generally remains at the same height: just below the waterline. In older fringing reefs, whose outer regions pushed far out into the sea, the inner part is deepened by erosion and eventually forms a lagoon.[37] Fringing reef lagoons can become over 100 metres wide and several metres deep. Like the fringing reef itself, they run parallel to the coast. The fringing reefs of the Red Sea are "some of the best developed in the world" and occur along all its shores except off sandy bays.[38]
Barrier reefs are separated from a mainland or island shore by a deep channel or lagoon.[34] They resemble the later stages of a fringing reef with its lagoon, but differ from the latter mainly in size and origin. Their lagoons can be several kilometres wide and 30 to 70 metres deep. Above all, the offshore outer reef edge formed in open water rather than next to a shoreline. Like an atoll, it is thought that these reefs are formed either as the seabed lowered or sea level rose. Formation takes considerably longer than for a fringing reef, thus barrier reefs are much rarer.
The best known and largest example of a barrier reef is the Australian Great Barrier Reef.[34][39] Other major examples are the Belize Barrier Reef and the New Caledonian Barrier Reef.[39] Barrier reefs are also found on the coasts of Providencia,[39] Mayotte, the Gambier Islands, on the southeast coast of Kalimantan, on parts of the coast of Sulawesi, southeastern New Guinea and the south coast of the Louisiade Archipelago.
Platform reefs, variously called bank or table reefs, can form on the continental shelf, as well as in the open ocean, in fact anywhere where the seabed rises close enough to the surface of the ocean to enable the growth of zooxanthellate, reef-forming corals.[40] Platform reefs are found in the southern Great Barrier Reef, the Swain[41] and Capricorn Group[42] on the continental shelf, about 100–200 km from the coast. Some platform reefs of the northern Mascarenes are several thousand kilometres from the mainland. Unlike fringing and barrier reefs which extend only seaward, platform reefs grow in all directions.[40] They are variable in size, ranging from a few hundred metres to many kilometres across. Their usual shape is oval to elongated. Parts of these reefs can reach the surface and form sandbanks and small islands around which may form fringing reefs. A lagoon may form in the middle of a platform reef.
Platform reefs can be found within atolls. There they are called patch reefs and may reach only a few dozen metres in diameter. Where platform reefs form on an elongated structure, e.g. an old, eroded barrier reef, they can form a linear arrangement. This is the case, for example, on the east coast of the Red Sea near Jeddah. In old platform reefs, the inner part can be so heavily eroded that it forms a pseudo-atoll.[40] These can be distinguished from real atolls only by detailed investigation, possibly including core drilling. Some platform reefs of the Laccadives are U-shaped, due to wind and water flow.
An atoll, or atoll reef, is a more or less circular or continuous barrier reef that extends all the way around a lagoon without a central island.[43] Atolls are usually formed from fringing reefs around volcanic islands.[34] Over time, the island erodes away and sinks below sea level.[34] Atolls may also be formed by the sinking of the seabed or rising of the sea level. A ring of reefs results, which encloses a lagoon. Atolls are numerous in the South Pacific, where they usually occur in mid-ocean, for example, in the Caroline Islands, the Cook Islands, French Polynesia, the Marshall Islands and Micronesia.[39]
Atolls are found in the Indian Ocean, for example, in the Maldives, the Chagos Islands, the Seychelles and around Cocos Island.[39] The entire Maldives consist of 26 atolls.[44]
Coral reef ecosystems contain distinct zones that host different kinds of habitats. Usually, three major zones are recognized: the fore reef, reef crest, and the back reef (frequently referred to as the reef lagoon).
The three zones are physically and ecologically interconnected. Reef life and oceanic processes create opportunities for exchange of seawater, sediments, nutrients and marine life.
Most coral reefs exist in waters less than 50 m deep. Some inhabit tropical continental shelves where cool, nutrient-rich upwelling does not occur, such as the Great Barrier Reef. Others are found in the deep ocean surrounding islands or as atolls, such as in the Maldives. The reefs surrounding islands form when islands subside into the ocean, and atolls form when an island subsides below the surface of the sea.
Alternatively, Moyle and Cech distinguish six zones, though most reefs possess only some of the zones.[47]
The reef surface is the shallowest part of the reef. It is subject to surge and tides. When waves pass over shallow areas, they shoal, as shown in the adjacent diagram. This means the water is often agitated. These are the precise conditions under which corals flourish. The light is sufficient for photosynthesis by the symbiotic zooxanthellae, and agitated water brings plankton to feed the coral.
The off-reef floor is the shallow sea floor surrounding a reef. This zone occurs next to reefs on continental shelves. Reefs around tropical islands and atolls drop abruptly to great depths, and do not have such a floor. Usually sandy, the floor often supports seagrass meadows which are important foraging areas for reef fish.
The reef drop-off is, for its first 50 m, habitat for reef fish who find shelter on the cliff face and plankton in the water nearby. The drop-off zone applies mainly to the reefs surrounding oceanic islands and atolls.
The reef face is the zone above the reef floor or the reef drop-off. This zone is often the reef's most diverse area. Coral and calcareous algae provide complex habitats and areas that offer protection, such as cracks and crevices. Invertebrates and epiphytic algae provide much of the food for other organisms.[47] A common feature on this forereef zone is spur and groove formations that serve to transport sediment downslope.
The reef flat is the sandy-bottomed flat, which can be behind the main reef, containing chunks of coral. This zone may border a lagoon and serve as a protective area, or it may lie between the reef and the shore, and in this case is a flat, rocky area. Fish tend to prefer it when it is present.[47]
The reef lagoon is an entirely enclosed region, which creates an area less affected by wave action and often contains small reef patches.[47]
However, the "topography of coral reefs is constantly changing. Each reef is made up of irregular patches of algae, sessile invertebrates, and bare rock and sand. The size, shape and relative abundance of these patches changes from year to year in response to the various factors that favor one type of patch over another. Growing coral, for example, produces constant change in the fine structure of reefs. On a larger scale, tropical storms may knock out large sections of reef and cause boulders on sandy areas to move."[48]
Coral reefs are estimated to cover 284,300 km2 (109,800 sq mi),[49] just under 0.1% of the oceans' surface area. The Indo-Pacific region (including the Red Sea, Indian Ocean, Southeast Asia and the Pacific) accounts for 91.9% of this total. Southeast Asia accounts for 32.3% of that figure, while the Pacific including Australia accounts for 40.8%. Atlantic and Caribbean coral reefs account for 7.6%.[4]
Although corals exist both in temperate and tropical waters, shallow-water reefs form only in a zone extending from approximately 30° N to 30° S of the equator. Tropical corals do not grow at depths of over 50 meters (160 ft). The optimum temperature for most coral reefs is 26–27 °C (79–81 °F), and few reefs exist in waters below 18 °C (64 °F).[50] However, reefs in the Persian Gulf have adapted to temperatures of 13 °C (55 °F) in winter and 38 °C (100 °F) in summer.[51] 37 species of scleractinian corals inhabit such an environment around Larak Island.[52]
Deep-water coral inhabits greater depths and colder temperatures at much higher latitudes, as far north as Norway.[53] Although deep water corals can form reefs, little is known about them.
Coral reefs are rare along the west coasts of the Americas and Africa, due primarily to upwelling and strong cold coastal currents that reduce water temperatures in these areas (the Peru, Benguela and Canary Currents respectively).[54] Corals are seldom found along the coastline of South Asia—from the eastern tip of India (Chennai) to the Bangladesh and Myanmar borders[4]—as well as along the coasts of northeastern South America and Bangladesh, due to the freshwater release from the Amazon and Ganges Rivers respectively.
When alive, corals are colonies of small animals embedded in calcium carbonate shells. Coral heads consist of accumulations of individual animals called polyps, arranged in diverse shapes.[59] Polyps are usually tiny, but they can range in size from a pinhead to 12 inches (30 cm) across.
Reef-building or hermatypic corals live only in the photic zone (above 50 m), the depth to which sufficient sunlight penetrates the water.
Coral polyps do not photosynthesize, but have a symbiotic relationship with microscopic algae (dinoflagellates) of the genus Symbiodinium, commonly referred to as zooxanthellae. These organisms live within the polyps' tissues and provide organic nutrients that nourish the polyp in the form of glucose, glycerol and amino acids.[60] Because of this relationship, coral reefs grow much faster in clear water, which admits more sunlight. Without their symbionts, coral growth would be too slow to form significant reef structures. Corals get up to 90% of their nutrients from their symbionts.[61] In return, as an example of mutualism, the corals shelter the zooxanthellae, averaging one million for every cubic centimeter of coral, and provide a constant supply of the carbon dioxide they need for photosynthesis.
The varying pigments in different species of zooxanthellae give them an overall brown or golden-brown appearance, and give brown corals their colors. Other pigments such as reds, blues, greens, etc. come from colored proteins made by the coral animals. Coral that loses a large fraction of its zooxanthellae becomes white (or sometimes pastel shades in corals that are pigmented with their own proteins) and is said to be bleached, a condition which, unless corrected, can kill the coral.
There are eight clades of Symbiodinium phylotypes. Most research has been conducted on clades A–D. Each clade contributes their own benefits as well as less compatible attributes to the survival of their coral hosts. Each photosynthetic organism has a specific level of sensitivity to photodamage to compounds needed for survival, such as proteins. Rates of regeneration and replication determine the organism's ability to survive. Phylotype A is found more in the shallow waters. It is able to produce mycosporine-like amino acids that are UV resistant, using a derivative of glycerin to absorb the UV radiation and allowing them to better adapt to warmer water temperatures. In the event of UV or thermal damage, if and when repair occurs, it will increase the likelihood of survival of the host and symbiont. This leads to the idea that, evolutionarily, clade A is more UV resistant and thermally resistant than the other clades.[63]
Clades B and C are found more frequently in deeper water, which may explain their higher vulnerability to increased temperatures. Terrestrial plants that receive less sunlight because they are found in the undergrowth are analogous to clades B, C, and D. Since clades B through D are found at deeper depths, they require an elevated light absorption rate to be able to synthesize as much energy. With elevated absorption rates at UV wavelengths, these phylotypes are more prone to coral bleaching versus the shallow clade A.
Clade D has been observed to be high temperature-tolerant, and has a higher rate of survival than clades B and C during modern bleaching events.[63]
Reefs grow as polyps and other organisms deposit calcium carbonate,[64][65] the basis of coral, as a skeletal structure beneath and around themselves, pushing the coral head's top upwards and outwards.[66] Waves, grazing fish (such as parrotfish), sea urchins, sponges and other forces and organisms act as bioeroders, breaking down coral skeletons into fragments that settle into spaces in the reef structure or form sandy bottoms in associated reef lagoons.
Typical shapes for coral species are named by their resemblance to terrestrial objects such as wrinkled brains, cabbages, table tops, antlers, wire strands and pillars. These shapes can depend on the life history of the coral, like light exposure and wave action,[67] and events such as breakages.[68]
Corals reproduce both sexually and asexually. An individual polyp uses both reproductive modes within its lifetime. Corals reproduce sexually by either internal or external fertilization. The reproductive cells are found on the mesenteries, membranes that radiate inward from the layer of tissue that lines the stomach cavity. Some mature adult corals are hermaphroditic; others are exclusively male or female. A few species change sex as they grow.
Internally fertilized eggs develop in the polyp for a period ranging from days to weeks. Subsequent development produces a tiny larva, known as a planula. Externally fertilized eggs develop during synchronized spawning. Polyps across a reef simultaneously release eggs and sperm into the water en masse. Spawn disperse over a large area. The timing of spawning depends on time of year, water temperature, and tidal and lunar cycles. Spawning is most successful given little variation between high and low tide. The less water movement, the better the chance for fertilization. Ideal timing occurs in the spring. Release of eggs or planula usually occurs at night, and is sometimes in phase with the lunar cycle (three to six days after a full moon). The period from release to settlement lasts only a few days, but some planulae can survive afloat for several weeks. During this process the larvae may use several different cues to find a suitable location for settlement. At long distances sounds from existing reefs are likely important [69], while at short distances chemical compounds become important [70]. The larvae are vulnerable to predation and environmental conditions. The lucky few planulae that successfully attach to substrate then compete for food and space.[citation needed]
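As a simple illustration of the lunar timing described above, the three-to-six-day window after a full moon can be computed from a given full-moon date; the date below is an assumed example only:
from datetime import date, timedelta
full_moon = date(2024, 11, 15)                      # assumed example full-moon date
window_start = full_moon + timedelta(days=3)        # "three to six days after a full moon"
window_end = full_moon + timedelta(days=6)
print(f"likely release window: {window_start} to {window_end}")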
Corals are the most prodigious reef-builders. However many other organisms living in the reef community contribute skeletal calcium carbonate in the same manner as corals. These include coralline algae and some sponges.[71] Reefs are always built by the combined efforts of these different phyla, with different organisms leading reef-building in different geological periods.[citation needed]
Coralline algae are important contributors to reef structure. Although their mineral deposition-rates are much slower than corals, they are more tolerant of rough wave-action, and so help to create a protective crust over those parts of the reef subjected to the greatest forces by waves, such as the reef front facing the open ocean. They also strengthen the reef structure by depositing limestone in sheets over the reef surface.[citation needed]
"Sclerosponge" is the descriptive name for all Porifera that build reefs. In the early Cambrian period, Archaeocyatha sponges were the world's first reef-building organisms, and sponges were the only reef-builders until the Ordovician. Sclerosponges still assist corals building modern reefs, but like coralline algae are much slower-growing than corals and their contribution is (usually) minor.[citation needed]
In the northern Pacific Ocean cloud sponges still create deep-water mineral-structures without corals, although the structures are not recognizable from the surface like tropical reefs. They are the only extant organisms known to build reef-like structures in cold water.[citation needed]
Fluorescent coral[72]
Brain coral
Staghorn coral
Spiral wire coral
Pillar coral
Mushroom coral
Maze coral
Black coral
Coralline algae Mesophyllum sp.
Encrusting coralline algae
Coralline algae Corallina officinalis
"Recent oceanographic research has brought to light the reality of this paradox by confirming that the oligotrophy of the ocean euphotic zone persists right up to the swell-battered reef crest. When you approach the reef edges and atolls from the quasidesert of the open sea, the near absence of living matter suddenly becomes a plethora of life, without transition. So why is there something rather than nothing, and more precisely, where do the necessary nutrients for the functioning of this extraordinary coral reef machine come from?"
In The Structure and Distribution of Coral Reefs, published in 1842, Darwin described how coral reefs were found in some tropical areas but not others, with no obvious cause. The largest and strongest corals grew in parts of the reef exposed to the most violent surf and corals were weakened or absent where loose sediment accumulated.[74]
Tropical waters contain few nutrients[75] yet a coral reef can flourish like an "oasis in the desert".[76] This has given rise to the ecosystem conundrum, sometimes called "Darwin's paradox": "How can such high production flourish in such nutrient poor conditions?"[77][78][79]
Coral reefs support over one-quarter of all marine species. This diversity results in complex food webs, with large predator fish eating smaller forage fish that eat yet smaller zooplankton and so on. However, all food webs eventually depend on plants, which are the primary producers. Coral reefs typically produce 5–10 grams of carbon per square meter per day (g C·m⁻²·day⁻¹) of biomass.[80][81]
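Scaled up, the quoted productivity range corresponds to roughly 1.8–3.7 kg of carbon per square meter per year; a short sketch of the unit conversion:
for grams_per_m2_day in (5, 10):                       # range quoted above
    kg_per_m2_year = grams_per_m2_day * 365 / 1000     # g/day -> kg/year
    tonnes_per_km2_year = kg_per_m2_year * 1_000_000 / 1000   # per km2, in tonnes
    print(f"{grams_per_m2_day} g C/m2/day ~ {kg_per_m2_year:.2f} kg C/m2/yr ~ {tonnes_per_km2_year:,.0f} t C/km2/yr")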
One reason for the unusual clarity of tropical waters is their nutrient deficiency and drifting plankton. Further, the sun shines year-round in the tropics, warming the surface layer, making it less dense than subsurface layers. The warmer water is separated from deeper, cooler water by a stable thermocline, where the temperature makes a rapid change. This keeps the warm surface waters floating above the cooler deeper waters. In most parts of the ocean, there is little exchange between these layers. Organisms that die in aquatic environments generally sink to the bottom, where they decompose, which releases nutrients in the form of nitrogen (N), phosphorus (P) and potassium (K). These nutrients are necessary for plant growth, but in the tropics, they do not directly return to the surface.[citation needed]
Plants form the base of the food chain and need sunlight and nutrients to grow. In the ocean, these plants are mainly microscopic phytoplankton which drift in the water column. They need sunlight for photosynthesis, which powers carbon fixation, so they are found only relatively near the surface, but they also need nutrients. Phytoplankton rapidly use nutrients in the surface waters, and in the tropics, these nutrients are not usually replaced because of the thermocline.[82]
Around coral reefs, lagoons fill in with material eroded from the reef and the island. They become havens for marine life, providing protection from waves and storms.
Most importantly, reefs recycle nutrients, which happens much less in the open ocean. In coral reefs and lagoons, producers include phytoplankton, as well as seaweed and coralline algae, especially small types called turf algae, which pass nutrients to corals.[83] The phytoplankton form the base of the food chain and are eaten by fish and crustaceans. Recycling reduces the nutrient inputs needed overall to support the community.[61]
Corals also absorb nutrients, including inorganic nitrogen and phosphorus, directly from water. Many corals extend their tentacles at night to catch zooplankton that pass near. Zooplankton provide the polyp with nitrogen, and the polyp shares some of the nitrogen with the zooxanthellae, which also require this element.[83]
Sponges live in crevices in the reefs. They are efficient filter feeders, and in the Red Sea they consume about 60% of the phytoplankton that drifts by. Sponges eventually excrete nutrients in a form that corals can use.[84]
The roughness of coral surfaces is key to coral survival in agitated waters. Normally, a boundary layer of still water surrounds a submerged object, which acts as a barrier. Waves breaking on the extremely rough edges of corals disrupt the boundary layer, allowing the corals access to passing nutrients. Turbulent water thereby promotes reef growth. Without the access to nutrients brought by rough coral surfaces, even the most effective recycling would not suffice.[85]
Deep nutrient-rich water entering coral reefs through isolated events may have significant effects on temperature and nutrient systems.[86][87] This water movement disrupts the relatively stable thermocline that usually exists between warm shallow water and deeper colder water. Temperature regimes on coral reefs in the Bahamas and Florida are highly variable with temporal scales of minutes to seasons and spatial scales across depths.[88]
Water can pass through coral reefs in various ways, including current rings, surface waves, internal waves and tidal changes.[86][89][90][91] Movement is generally created by tides and wind. As tides interact with varying bathymetry and wind mixes with surface water, internal waves are created. An internal wave is a gravity wave that moves along density stratification within the ocean. When a water parcel encounters a different density it oscillates and creates internal waves.[92] While internal waves generally have a lower frequency than surface waves, they often form as a single wave that breaks into multiple waves as it hits a slope and moves upward.[93] This vertical breakup of internal waves causes significant diapycnal mixing and turbulence.[94][95] Internal waves can act as nutrient pumps, bringing plankton and cool nutrient-rich water to the surface.[86][91][96][97][98][99][100][101][102][103][104]
The irregular structure characteristic of coral reef bathymetry may enhance mixing and produce pockets of cooler water and variable nutrient content.[105] Arrival of cool, nutrient-rich water from depths due to internal waves and tidal bores has been linked to growth rates of suspension feeders and benthic algae[91][104][106] as well as plankton and larval organisms.[91][107] The seaweed Codium isthmocladum reacts to deep water nutrient sources because their tissues have different concentrations of nutrients dependent upon depth.[104] Aggregations of eggs, larval organisms and plankton on reefs respond to deep water intrusions.[98] Similarly, as internal waves and bores move vertically, surface-dwelling larval organisms are carried toward the shore.[107] This has significant biological importance to cascading effects of food chains in coral reef ecosystems and may provide yet another key to unlocking the paradox.
Cyanobacteria provide soluble nitrates via nitrogen fixation.[108]
Coral reefs often depend on surrounding habitats, such as seagrass meadows and mangrove forests, for nutrients. Seagrass and mangroves supply dead plants and animals that are rich in nitrogen and serve to feed fish and animals from the reef by supplying wood and vegetation. Reefs, in turn, protect mangroves and seagrass from waves and produce sediment in which the mangroves and seagrass can root.[51]
Coral reefs form some of the world's most productive ecosystems, providing complex and varied marine habitats that support a wide range of other organisms.[109][110] Fringing reefs just below low tide level have a mutually beneficial relationship with mangrove forests at high tide level and sea grass meadows in between: the reefs protect the mangroves and seagrass from strong currents and waves that would damage them or erode the sediments in which they are rooted, while the mangroves and sea grass protect the coral from large influxes of silt, fresh water and pollutants. This level of variety in the environment benefits many coral reef animals, which, for example, may feed in the sea grass and use the reefs for protection or breeding.[111]
Reefs are home to a variety of animals, including fish, seabirds, sponges, cnidarians (which includes some types of corals and jellyfish), worms, crustaceans (including shrimp, cleaner shrimp, spiny lobsters and crabs), mollusks (including cephalopods), echinoderms (including starfish, sea urchins and sea cucumbers), sea squirts, sea turtles and sea snakes. Aside from humans, mammals are rare on coral reefs, with visiting cetaceans such as dolphins the main exception. A few species feed directly on corals, while others graze on algae on the reef.[4][83] Reef biomass is positively related to species diversity.[112]
The same hideouts in a reef may be regularly inhabited by different species at different times of day. Nighttime predators such as cardinalfish and squirrelfish hide during the day, while damselfish, surgeonfish, triggerfish, wrasses and parrotfish hide from eels and sharks.[26]:49
The great number and diversity of hiding places in coral reefs, i.e. refuges, are the most important factor causing the great diversity and high biomass of the organisms in coral reefs.[113][114]
Reefs are chronically at risk of algal encroachment. Overfishing and excess nutrient supply from onshore can enable algae to outcompete and kill the coral.[115][116] Increased nutrient levels can be a result of sewage or chemical fertilizer runoff. Runoff can carry nitrogen and phosphorus which promote excess algae growth. Algae can sometimes out-compete the coral for space. The algae can then smother the coral by decreasing the oxygen supply available to the reef.[117] Decreased oxygen levels can slow down calcification rates, weakening the coral and leaving it more susceptible to disease and degradation.[118] Algae inhabit a large percentage of surveyed coral locations.[119] The algal population consists of turf algae, coralline algae and macro algae. Some sea urchins (such as Diadema antillarum) eat these algae and could thus decrease the risk of algal encroachment.
Sponges are essential for the functioning of the coral reef system. Algae and corals in coral reefs produce organic material. This is filtered through sponges, which convert it into small particles that are in turn absorbed by algae and corals.[120]
Over 4,000 species of fish inhabit coral reefs.[4] The reasons for this diversity remain unclear. Hypotheses include the "lottery", in which the first (lucky winner) recruit to a territory is typically able to defend it against latecomers, "competition", in which adults compete for territory, and less-competitive species must be able to survive in poorer habitat, and "predation", in which population size is a function of postsettlement piscivore mortality.[121] Healthy reefs can produce up to 35 tons of fish per square kilometer each year, but damaged reefs produce much less.[122]
Sea urchins, Dotidae and sea slugs eat seaweed. Some species of sea urchins, such as Diadema antillarum, can play a pivotal part in preventing algae from overrunning reefs.[123] Researchers are investigating the use of native collector urchins, Tripneustes gratilla, for their potential as biocontrol agents to mitigate the spread of invasive algae species on coral reefs.[124][125] Nudibranchia and sea anemones eat sponges.
A number of invertebrates, collectively called "cryptofauna," inhabit the coral skeletal substrate itself, either boring into the skeletons (through the process of bioerosion) or living in pre-existing voids and crevices. Animals boring into the rock include sponges, bivalve mollusks, and sipunculans. Those settling on the reef include many other species, particularly crustaceans and polychaete worms.[54]
Coral reef systems provide important habitats for seabird species, some endangered. For example, Midway Atoll in Hawaii supports nearly three million seabirds, including two-thirds (1.5 million) of the global population of Laysan albatross, and one-third of the global population of black-footed albatross.[126] Each seabird species has specific sites on the atoll where they nest. Altogether, 17 species of seabirds live on Midway. The short-tailed albatross is the rarest, with fewer than 2,200 surviving after excessive feather hunting in the late 19th century.[127]
Sea snakes feed exclusively on fish and their eggs.[128][129][130] Marine birds, such as herons, gannets, pelicans and boobies, feed on reef fish. Some land-based reptiles intermittently associate with reefs, such as monitor lizards, the marine crocodile and semiaquatic snakes, such as Laticauda colubrina. Sea turtles, particularly hawksbill sea turtles, feed on sponges.[131][132][133]
Schooling reef fish
Caribbean reef squid
Banded coral shrimp
Whitetip reef shark
Green turtle
Giant clam
Soft coral, cup coral, sponges and ascidians
Banded sea krait
The shell of Latiaxis wormaldi, a coral snail
Coral reefs deliver ecosystem services to tourism, fisheries and coastline protection. The global economic value of coral reefs has been estimated to be between US$29.8 billion[8] and $375 billion per year.[9]
The economic cost over a 25-year period of destroying one kilometer of coral reef has been estimated to be somewhere between $137,000 and $1,200,000.[134]
To improve the management of coastal coral reefs, the World Resources Institute (WRI) developed and published tools for calculating the value of coral reef-related tourism, shoreline protection and fisheries, partnering with five Caribbean countries. As of April 2011, published working papers covered St. Lucia, Tobago, Belize, and the Dominican Republic. The WRI was "making sure that the study results support improved coastal policies and management planning".[135] The Belize study estimated the value of reef and mangrove services at $395–559 million annually.[136]
Bermuda's coral reefs provide economic benefits to the Island worth on average $722 million per year, based on six key ecosystem services, according to Sarkis et al (2010).[137]
Coral reefs protect shorelines by absorbing wave energy, and many small islands would not exist without reefs. Coral reefs can reduce wave energy by 97%, helping to prevent loss of life and property damage. Coastlines protected by coral reefs are also more stable in terms of erosion than those without. Reefs can attenuate waves as well as or better than artificial structures designed for coastal defence such as breakwaters.[138] An estimated 197 million people who live both below 10 m elevation and within 50 km of a reef consequently may receive risk reduction benefits from reefs. Restoring reefs is significantly cheaper than building artificial breakwaters in tropical environments. Expected damages from flooding would double, and costs from frequent storms would triple without the topmost meter of reefs. For 100-year storm events, flood damages would increase by 91% to $US 272 billion without the top meter.[139]
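The 100-year-storm figures above also imply a baseline: if losing the top meter of reef raises damages by 91% to about US$272 billion, damages with reefs intact come to roughly US$142 billion, as the following sketch of the arithmetic shows:
damages_without_top_metre_bn = 272    # US$ billion, 100-year storm, from the text
increase_fraction = 0.91              # 91% increase quoted above
baseline_bn = damages_without_top_metre_bn / (1 + increase_fraction)
print(f"implied damages with reefs intact: ~US${baseline_bn:.0f} billion")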
About six million tons of fish are taken each year from coral reefs. Well-managed reefs have an average annual yield of 15 tons of seafood per square kilometer. Southeast Asia's coral reef fisheries alone yield about $2.4 billion annually from seafood.[134]
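Taken together, these figures suggest how much well-managed reef would be needed to sustain the current catch; a back-of-the-envelope sketch (an inference from the quoted numbers, not a figure from the source):
global_reef_catch_tonnes = 6_000_000    # annual catch quoted above
well_managed_yield_t_per_km2 = 15       # average annual yield of a well-managed reef
area_needed_km2 = global_reef_catch_tonnes / well_managed_yield_t_per_km2
print(f"~{area_needed_km2:,.0f} km2 of well-managed reef to sustain that catch")
# ~400,000 km2, which exceeds the roughly 284,300 km2 of reef estimated to exist,
# one way of seeing the pressure current catches place on reef fisheries.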
Since their emergence 485 million years ago, coral reefs have faced many threats, including disease,[141] predation,[142] invasive species, bioerosion by grazing fish,[143] algal blooms, geologic hazards, and recent human activity.
These include coral mining, bottom trawling,[144] and the digging of canals and accesses into islands and bays, all of which can damage marine ecosystems if not done sustainably. Other localized threats include blast fishing, overfishing, coral overmining,[145] and marine pollution, including use of the banned anti-fouling biocide tributyltin; although absent in developed countries, these activities continue in places with few environmental protections or poor regulatory enforcement.[146][147][148] Chemicals in sunscreens may awaken latent viral infections in zooxanthellae[11] and impact reproduction.[149] However, concentrating tourism activities via offshore platforms has been shown to limit the spread of coral disease by tourists.[150]
Greenhouse gas emissions present a broader threat through sea temperature rise and sea level rise,[151] though corals adapt their calcifying fluids to changes in seawater pH and carbonate levels and are not directly threatened by ocean acidification.[152] Volcanic and manmade aerosol pollution can modulate regional sea surface temperatures.[153]
In 2011, two researchers suggested that "extant marine invertebrates face the same synergistic effects of multiple stressors" that occurred during the end-Permian extinction, and that genera "with poorly buffered respiratory physiology and calcareous shells", such as corals, were particularly vulnerable.[154][155][156]
Corals respond to stress by "bleaching," or expelling their colorful zooxanthellate endosymbionts. Corals with Clade C zooxanthellae are generally vulnerable to heat-induced bleaching, whereas corals with the hardier Clade A or D are generally resistant,[157] as are tougher coral genera like Porites and Montipora.[158]
Every 4–7 years, an El Niño event causes some reefs with heat-sensitive corals to bleach,[159] with especially widespread bleachings in 1998 and 2010.[160][161] However, reefs that experience a severe bleaching event become resistant to future heat-induced bleaching,[162][163][158] due to rapid directional selection.[163] Similar rapid adaption may protect coral reefs from global warming.[164]
A large-scale systematic study of the Jarvis Island coral community, which experienced ten El Niño-coincident coral bleaching events from 1960 to 2016, found that the reef recovered from almost complete death after severe events.[159]
Marine protected areas (MPAs) are designated areas that provide various kinds of protection to ocean and/or estuarine areas. They are intended to promote responsible fishery management and habitat protection. MPAs can encompass both social and biological objectives, including reef restoration, aesthetics, biodiversity and economic benefits.
However, research in Indonesia, Philippines and Papua New Guinea found no significant difference between an MPA site and an unprotected site.[165][166] Further, they can generate conflicts driven by lack of community participation, clashing views of the government and fisheries, effectiveness of the area and funding.[167] In some situations, as in the Phoenix Islands Protected Area, MPAs provide revenue, potentially equal to the income they would have generated without controls.[168]
According to the Caribbean Coral Reefs – Status Report 1970–2012, halting overfishing of fishes that are key to coral reefs, such as parrotfish, managing coastal zones to reduce human pressure on reefs (for example by restricting coastal settlement, development and tourism) and controlling pollution, especially sewage, may reduce coral decline or even reverse it. The report shows that the healthier reefs in the Caribbean are those with large populations of parrotfish, in countries that protect these key fishes and sea urchins by banning fish trapping and spearfishing, creating "resilient reefs".[169][170]
To help combat ocean acidification, some laws are in place to reduce greenhouse gases such as carbon dioxide. The United States Clean Water Act puts pressure on state governments to monitor and limit runoff.
Many land use laws aim to reduce CO2 emissions by limiting deforestation. Deforestation can release significant amounts of CO2 absent sequestration via active follow-up forestry programs. Deforestation can also cause erosion, which flows into the ocean, contributing to ocean acidification. Incentives are used to reduce miles traveled by vehicles, which reduces carbon emissions into the atmosphere, thereby reducing the amount of dissolved CO2 in the ocean. State and federal governments also regulate land activities that affect coastal erosion.[171] High-end satellite technology can monitor reef conditions.[172]
Designating a reef as a biosphere reserve, marine park, national monument or world heritage site can offer protections. For example, Belize's barrier reef, Sian Ka'an, the Galapagos islands, Great Barrier Reef, Henderson Island, Palau and Papahānaumokuākea Marine National Monument are world heritage sites.[173]
In Australia, the Great Barrier Reef is protected by the Great Barrier Reef Marine Park Authority, and is the subject of much legislation, including a biodiversity action plan.[174] Australia compiled a Coral Reef Resilience Action Plan. This plan consists of adaptive management strategies, including reducing carbon footprint. A public awareness plan provides education on the "rainforests of the sea" and how people can reduce carbon emissions.[175]
Inhabitants of Ahus Island, Manus Province, Papua New Guinea, have followed a generations-old practice of restricting fishing in six areas of their reef lagoon. Their cultural traditions allow line fishing, but no net or spear fishing. Both biomass and individual fish sizes are significantly larger than in places where fishing is unrestricted.[176][177]
Coral reef restoration has grown in prominence over the past several decades because of unprecedented reef die-offs around the planet. Coral stressors can include pollution, warming ocean temperatures, extreme weather events, and overfishing. With the deterioration of global reefs, fish nurseries, biodiversity, coastal development and livelihoods, and natural beauty are under threat. In response, researchers developed a new field, coral restoration, in the 1970s and 1980s.[178]
Coral aquaculture, also known as coral farming or coral gardening, is showing promise as a potentially effective tool for restoring coral reefs.[179][180][181] The "gardening" process bypasses the early growth stages of corals when they are most at risk of dying. Coral seeds are grown in nurseries, then replanted on the reef.[182] Coral is farmed by coral farmers whose interests range from reef conservation to increased income. Due to its straightforward process and substantial evidence that the technique has a significant effect on coral reef growth, coral nurseries have become the most widespread and arguably the most effective method of coral restoration.[183]
Coral gardens take advantage of a coral's natural ability to fragment and continue growing if the fragments are able to anchor themselves onto new substrates. This method was first tested by Baruch Rinkevich[184] in 1995, with success at the time. By today's standards, coral farming has grown into a variety of different forms, but still has the same goal of cultivating corals. Consequently, coral farming quickly replaced previously used transplantation methods, that is, the act of physically moving sections or whole colonies of corals into a new area.[183] Transplantation has seen success in the past, and decades of experiments have led to a high success and survival rate. However, this method still requires the removal of corals from existing reefs. With the current state of reefs, this kind of method should generally be avoided if possible. Saving healthy corals from eroding substrates or reefs that are doomed to collapse could be a major advantage of utilizing transplantation.
Coral gardens generally take the same form no matter where they are located. They begin with the establishment of a nursery where operators can observe and care for coral fragments.[183] Nurseries should be established in areas that maximize growth and minimize mortality. Floating offshore coral trees or even aquariums are possible locations where corals can grow. After a location has been determined, collection and cultivation can occur.
The major benefit of coral farms is that they lower polyp and juvenile mortality rates. By removing predators and recruitment obstacles, corals are able to mature without much hindrance. However, nurseries cannot stop climate stressors. Warming temperatures or hurricanes can still disrupt or even kill nursery corals.
Efforts to expand the size and number of coral reefs generally involve supplying substrate to allow more corals to find a home. Substrate materials include discarded vehicle tires, scuttled ships, subway cars and formed concrete, such as reef balls. Reefs grow unaided on marine structures such as oil rigs. In large restoration projects, propagated hermatypic coral on substrate can be secured with metal pins, superglue or milliput. Needle and thread can also attach ahermatypic coral to substrate.
Biorock is a substrate produced by a patented process that runs low voltage electrical currents through seawater to cause dissolved minerals to precipitate onto steel structures. The resultant white carbonate (aragonite) is the same mineral that makes up natural coral reefs. Corals rapidly colonize and grow at accelerated rates on these coated structures. The electrical currents also accelerate formation and growth of both chemical limestone rock and the skeletons of corals and other shell-bearing organisms, such as oysters. The vicinity of the anode and cathode provides a high-pH environment which inhibits the growth of competitive filamentous and fleshy algae. The increased growth rates fully depend on the accretion activity. Under the influence of the electric field, corals display an increased growth rate, size and density.
Simply having many structures on the ocean floor is not enough to form coral reefs. Restoration projects must consider the complexity of the substrates they are creating for future reefs. Researchers conducted an experiment near Ticao Island in the Philippines in 2013[185] in which several substrates of varying complexity were laid in nearby degraded reefs. High-complexity plots had man-made substrates of both smooth and rough rocks with a surrounding fence, medium-complexity plots had only the man-made substrates, and low-complexity plots had neither the fence nor the substrates. After one month, researchers found a positive correlation between structure complexity and recruitment rates of larvae.[185] The medium complexity performed best, with larvae favoring rough rocks over smooth rocks. One year after the study began, researchers visited the site and found that many of the sites were able to support local fisheries. They concluded that reef restoration can be done cost-effectively and will yield long-term benefits, provided the reefs are protected and maintained.[185]
One case study with coral reef restoration was conducted on the island of Oahu in Hawaii. The University of Hawaii operates a Coral Reef Assessment and Monitoring Program to help relocate and restore coral reefs in Hawaii. A boat channel from the island of Oahu to the Hawaii Institute of Marine Biology on Coconut Island was overcrowded with coral reefs. Many areas of coral reef patches in the channel had been damaged from past dredging in the channel.
Dredging covers corals with sand. Coral larvae cannot settle on sand; they can only build on existing reefs or compatible hard surfaces, such as rock or concrete. Because of this, the University decided to relocate some of the coral. They transplanted them with the help of United States Army divers, to a site relatively close to the channel. They observed little if any damage to any of the colonies during transport and no mortality of coral reefs was observed on the transplant site. While attaching the coral to the transplant site, they found that coral placed on hard rock grew well, including on the wires that attached the corals to the site.
No environmental effects were seen from the transplantation process, recreational activities were not decreased, and no scenic areas were affected.
As an alternative to transplanting corals themselves, juvenile fish can also be encouraged to relocate to existing coral reefs by auditory stimulation. In damaged sections of the Great Barrier Reef, loudspeakers playing recordings of healthy reef environments were found to attract fish twice as often as equivalent patches where no sound was played, and also increased species biodiversity by 50%.
Another possibility for coral restoration is gene therapy: inoculating coral with genetically modified bacteria, or naturally occurring heat-tolerant varieties of coral symbionts, may make it possible to grow corals that are more resistant to climate change and other threats.[186] Warming oceans are forcing corals to adapt to unprecedented temperatures. Those that do not have a tolerance for the elevated temperatures experience coral bleaching and eventually mortality. Research is already under way to create genetically modified corals that can withstand a warming ocean. Madeleine J. H. van Oppen, James K. Oliver, Hollie M. Putnam, and Ruth D. Gates described four different methods, of gradually increasing human intervention, to genetically modify corals.[187] These methods focus on altering the genetics of the zooxanthellae within the coral rather than the coral itself.
The first method is to induce acclimatization in the first generation of corals.[187] The idea is that when adult and offspring corals are exposed to stressors, the zooxanthellae will gain a mutation. This method relies mostly on the chance that the zooxanthellae will acquire the specific trait that allows them to better survive in warmer waters. The second method focuses on identifying what different kinds of zooxanthellae are within the coral and configuring how much of each zooxanthella lives within the coral at a given age.[187] Use of zooxanthellae from the previous method would only boost success rates for this method. However, this method would, for now, only be applicable to younger corals, because previous experiments in manipulating zooxanthellae communities at later life stages have all failed. The third method focuses on selective breeding.[187] Once selected, corals would be reared and exposed to simulated stressors in a laboratory. The last method is to genetically modify the zooxanthellae themselves.[187] When the preferred mutations are acquired, the genetically modified zooxanthellae are introduced to an aposymbiotic polyp and a new coral is produced. This method is the most laborious of the four, but researchers believe it should be used more and holds the most promise in genetic engineering for coral restoration.
Hawaiian coral reefs smothered by the spread of invasive algae were managed with a two-prong approach: divers manually removed invasive algae, with the support of super-sucker barges. Grazing pressure on invasive algae needed to be increased to prevent the regrowth of the algae. Researchers found that native collector urchins were reasonable candidate grazers for algae biocontrol, to extirpate the remaining invasive algae from the reef.[124]
Macroalgae, better known as seaweed, have the potential to cause reef collapse because they can outcompete many coral species. Macroalgae can overgrow corals, shade them, block recruitment, release biochemicals that can hinder spawning, and potentially form bacteria harmful to corals.[188][189] Historically, algae growth was controlled by herbivorous fish and sea urchins. Parrotfish are a prime example of reef caretakers. Consequently, these grazers can be considered keystone species for reef environments because of their role in protecting reefs.
Before the 1980s, Jamaica's reefs were thriving and well cared for; however, this changed after Hurricane Allen struck in 1980 and an unknown disease spread across the Caribbean. In the wake of these events, massive damage was caused to both the reefs and the sea urchin population across Jamaica's reefs and into the Caribbean Sea. As little as 2% of the original sea urchin population survived the disease.[189] Primary macroalgae succeeded the destroyed reefs, and eventually larger, more resilient macroalgae took their place as the dominant organisms.[189][190] Parrotfish and other herbivorous fish were few in number because of decades of overfishing and bycatch.[190] Historically, the Jamaican coast had 90% coral cover; this was reduced to 5% by the 1990s.[190] Eventually, corals were able to recover in areas where sea urchin populations were increasing. Sea urchins fed, multiplied and cleared off substrates, leaving areas for coral polyps to anchor and mature. However, sea urchin populations are still not recovering as fast as researchers predicted, despite being highly fecund.[189] It is unknown whether the mysterious disease is still present and preventing sea urchin populations from rebounding. Regardless, these areas are slowly recovering with the aid of sea urchin grazing. This event supports an early restoration idea of cultivating and releasing sea urchins onto reefs to prevent algal overgrowth.[191][192]
In 2014, Christopher Page, Erinn Muller, and David Vaughan from the International Center for Coral Reef Research & Restoration at Mote Marine Laboratory in Summerland Key, Florida developed a new technology called "microfragmentation," in which they use a specialized diamond band saw to cut corals into 1 cm2 fragments instead of 6 cm2 to advance the growth of brain, boulder, and star corals.[193] Corals Orbicella faveolata and Montastraea cavernosa were outplanted off the Florida's shores in several microfragment arrays. After two years, O. faveolata had grown 6.5x its original size while M. cavernosa had grown nearly twice its size.[193] Under conventional means, both corals would have required decades to reach the same size. It is suspected that if predation events had not occurred near the beginning of the experiment O. faveolata would have grown at least ten times its original size.[193] By using this method, Mote Marine Laboratory produced 25,000 corals and planted 10,000 in the Florida Keys in only one year. Shortly after, they discovered that these microfragments fused with other microfragments from the same parent coral. Typically, corals that are not from the same parent fight and kill nearby corals in an attempt to survive and expand. This new technology is known as "fusion" and has been shown to grow coral heads in just two years instead of the typical 25–75 years. After fusion occurs, the reef will act as a single organism rather than several independent reefs. Currently, there has been no published research into this method.[193]
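Assuming roughly steady exponential growth over the two years, the reported size multiples imply the following approximate annual growth factors (an illustrative calculation, not a result from the study itself):
two_year_growth = {"Orbicella faveolata": 6.5, "Montastraea cavernosa": 2.0}  # size multiples quoted above
for species, multiple in two_year_growth.items():
    per_year = multiple ** 0.5        # geometric mean over the two years
    print(f"{species}: ~{per_year:.2f}x per year")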
The times of maximum reef development were in the Middle Cambrian (513–501 Ma), Devonian (416–359 Ma) and Carboniferous (359–299 Ma), owing to the extinct corals of the order Rugosa, and the Late Cretaceous (100–66 Ma) and Neogene (23 Ma–present), owing to corals of the order Scleractinia.
Not all reefs in the past were formed by corals: those in the Early Cambrian (542–513 Ma) resulted from calcareous algae and archaeocyathids (small animals with conical shape, probably related to sponges) and in the Late Cretaceous (100–66 Ma), when reefs formed by a group of bivalves called rudists existed; one of the valves formed the main conical structure and the other, much smaller valve acted as a cap.
Measurements of the oxygen isotopic composition of the aragonitic skeleton of coral reefs, such as Porites, can indicate changes in sea surface temperature and sea surface salinity conditions during the growth of the coral. This technique is often used by climate scientists to infer a region's paleoclimate.[194]
en/4944.html.txt
ADDED
A coral reef is an underwater ecosystem characterized by reef-building corals. Reefs are formed of colonies of coral polyps held together by calcium carbonate. Most coral reefs are built from stony corals, whose polyps cluster in groups.
Coral belongs to the class Anthozoa in the animal phylum Cnidaria, which includes sea anemones and jellyfish. Unlike sea anemones, corals secrete hard carbonate exoskeletons that support and protect the coral. Most reefs grow best in warm, shallow, clear, sunny and agitated water. Coral reefs first appeared 485 million years ago, at the dawn of the Early Ordovician, displacing the microbial and sponge reefs of the Cambrian.[1]
Sometimes called rainforests of the sea,[2] shallow coral reefs form some of Earth's most diverse ecosystems. They occupy less than 0.1% of the world's ocean area, about half the area of France, yet they provide a home for at least 25% of all marine species,[3][4][5][6] including fish, mollusks, worms, crustaceans, echinoderms, sponges, tunicates and other cnidarians.[7] Coral reefs flourish in ocean waters that provide few nutrients. They are most commonly found at shallow depths in tropical waters, but deep water and cold water coral reefs exist on smaller scales in other areas.
Coral reefs deliver ecosystem services for tourism, fisheries and shoreline protection. The annual global economic value of coral reefs is estimated between US$30–375 billion[8][9] and US$9.9 trillion.[10] Coral reefs are fragile, partly because they are sensitive to water conditions. They are under threat from excess nutrients (nitrogen and phosphorus), rising temperatures, oceanic acidification, overfishing (e.g., from blast fishing, cyanide fishing, spearfishing on scuba), sunscreen use,[11] and harmful land-use practices, including runoff and seeps (e.g., from injection wells and cesspools).[12][13][14]
Most coral reefs were formed after the last glacial period when melting ice caused sea level to rise and flood continental shelves. Most coral reefs are less than 10,000 years old. As communities established themselves, the reefs grew upwards, pacing rising sea levels. Reefs that rose too slowly could become drowned, without sufficient light.[15] Coral reefs are found in the deep sea away from continental shelves, around oceanic islands and atolls. The majority of these islands are volcanic in origin. Others have tectonic origins where plate movements lifted the deep ocean floor.
In The Structure and Distribution of Coral Reefs,[16] Charles Darwin set out his theory of the formation of atoll reefs, an idea he conceived during the voyage of the Beagle. He theorized that uplift and subsidence of the Earth's crust under the oceans formed the atolls.[17] Darwin set out a sequence of three stages in atoll formation. A fringing reef forms around an extinct volcanic island as the island and ocean floor subsides. As the subsidence continues, the fringing reef becomes a barrier reef and ultimately an atoll reef.
Darwin's theory starts with a volcanic island which becomes extinct
As the island and ocean floor subside, coral growth builds a fringing reef, often including a shallow lagoon between the land and the main reef.
As the subsidence continues, the fringing reef becomes a larger barrier reef further from the shore with a bigger and deeper lagoon inside.
Ultimately, the island sinks below the sea, and the barrier reef becomes an atoll enclosing an open lagoon.
Darwin predicted that underneath each lagoon would be a bedrock base, the remains of the original volcano. Subsequent research supported this hypothesis. Darwin's theory followed from his understanding that coral polyps thrive in the tropics where the water is agitated, but can only live within a limited depth range, starting just below low tide. Where the level of the underlying earth allows, the corals grow around the coast to form fringing reefs, and can eventually grow to become a barrier reef.
Where the bottom is rising, fringing reefs can grow around the coast, but coral raised above sea level dies. If the land subsides slowly, the fringing reefs keep pace by growing upwards on a base of older, dead coral, forming a barrier reef enclosing a lagoon between the reef and the land. A barrier reef can encircle an island, and once the island sinks below sea level a roughly circular atoll of growing coral continues to keep up with the sea level, forming a central lagoon. Barrier reefs and atolls do not usually form complete circles, but are broken in places by storms. Like sea level rise, a rapidly subsiding bottom can overwhelm coral growth, killing the coral and the reef, due to what is called coral drowning.[19] Corals that rely on zooxanthellae can die when the water becomes too deep for their symbionts to adequately photosynthesize, due to decreased light exposure.[20]
The two main variables determining the geomorphology, or shape, of coral reefs are the nature of the substrate on which they rest, and the history of the change in sea level relative to that substrate.
The approximately 20,000-year-old Great Barrier Reef offers an example of how coral reefs formed on continental shelves. Sea level was then 120 m (390 ft) lower than in the 21st century.[21][22] As sea level rose, the water and the corals encroached on what had been hills of the Australian coastal plain. By 13,000 years ago, sea level had risen to 60 m (200 ft) lower than at present, and many hills of the coastal plains had become continental islands. As sea level rise continued, water topped most of the continental islands. The corals could then overgrow the hills, forming cays and reefs. Sea level on the Great Barrier Reef has not changed significantly in the last 6,000 years.[22] The age of living reef structure is estimated to be between 6,000 and 8,000 years.[23] Although the Great Barrier Reef formed along a continental shelf, and not around a volcanic island, Darwin's principles apply. Development stopped at the barrier reef stage, since Australia is not about to submerge. It formed the world's largest barrier reef, 300–1,000 m (980–3,280 ft) from shore, stretching for 2,000 km (1,200 mi).[24]
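Averaged over that interval, the quoted sea levels imply a rise of roughly 8–9 mm per year, which upward reef growth had to match; a minimal sketch of the arithmetic (the real sea-level curve was not linear):
drop_20kya_m = 120     # sea level below present ~20,000 years ago, from the text
drop_13kya_m = 60      # sea level below present ~13,000 years ago, from the text
years = 20_000 - 13_000
rate_mm_per_year = (drop_20kya_m - drop_13kya_m) * 1000 / years
print(f"average rise ~{rate_mm_per_year:.1f} mm/yr between 20,000 and 13,000 years ago")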
Healthy tropical coral reefs grow horizontally from 1 to 3 cm (0.39 to 1.18 in) per year, and grow vertically anywhere from 1 to 25 cm (0.39 to 9.84 in) per year; however, they grow only at depths shallower than 150 m (490 ft) because of their need for sunlight, and cannot grow above sea level.[25]
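At those vertical growth rates, the time needed for a reef to build from a given depth to the surface can be estimated directly; the 30 m starting depth below is an assumed example within the photic zone, and real growth is rarely continuous:
start_depth_m = 30                     # assumed example starting depth
for rate_cm_per_year in (1, 5, 25):    # vertical growth rates quoted above
    years = start_depth_m * 100 / rate_cm_per_year
    print(f"{rate_cm_per_year} cm/yr -> about {years:,.0f} years to reach the surface")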
As the name implies, coral reefs are made up of coral skeletons from mostly intact coral colonies. As other chemical elements present in corals become incorporated into the calcium carbonate deposits, aragonite is formed. However, shell fragments and the remains of coralline algae such as the green-segmented genus Halimeda can add to the reef's ability to withstand damage from storms and other threats. Such mixtures are visible in structures such as Eniwetok Atoll.[26]
Since Darwin's identification of the three classical reef formations – the fringing reef around a volcanic island becoming a barrier reef and then an atoll[27] – scientists have identified further reef types. While some sources find only three,[28][29] Thomas and Goudie list four "principal large-scale coral reef types" – the fringing reef, barrier reef, atoll and table reef[30] – while Spalding et al. list five "main types" – the fringing reef, barrier reef, atoll, "bank or platform reef" and patch reef.[31]
A fringing reef, also called a shore reef,[32] is directly attached to a shore,[33] or borders it with an intervening narrow, shallow channel or lagoon.[34] It is the most common reef type.[34] Fringing reefs follow coastlines and can extend for many kilometres.[35] They are usually less than 100 metres wide, but some are hundreds of metres wide.[36] Fringing reefs are initially formed on the shore at the low water level and expand seawards as they grow in size. The final width depends on where the sea bed begins to drop steeply. The surface of the fringe reef generally remains at the same height: just below the waterline. In older fringing reefs, whose outer regions pushed far out into the sea, the inner part is deepened by erosion and eventually forms a lagoon.[37] Fringing reef lagoons can become over 100 metres wide and several metres deep. Like the fringing reef itself, they run parallel to the coast. The fringing reefs of the Red Sea are "some of the best developed in the world" and occur along all its shores except off sandy bays.[38]
Barrier reefs are separated from a mainland or island shore by a deep channel or lagoon.[34] They resemble the later stages of a fringing reef with its lagoon, but differ from the latter mainly in size and origin. Their lagoons can be several kilometres wide and 30 to 70 metres deep. Above all, the offshore outer reef edge formed in open water rather than next to a shoreline. Like an atoll, it is thought that these reefs are formed either as the seabed lowered or sea level rose. Formation takes considerably longer than for a fringing reef, thus barrier reefs are much rarer.
The best known and largest example of a barrier reef is the Australian Great Barrier Reef.[34][39] Other major examples are the Belize Barrier Reef and the New Caledonian Barrier Reef.[39] Barrier reefs are also found on the coasts of Providencia,[39] Mayotte, the Gambier Islands, on the southeast coast of Kalimantan, on parts of the coast of Sulawesi, southeastern New Guinea and the south coast of the Louisiade Archipelago.
Platform reefs, variously called bank or table reefs, can form on the continental shelf, as well as in the open ocean, in fact anywhere where the seabed rises close enough to the surface of the ocean to enable the growth of zooxanthellate, reef-forming corals.[40] Platform reefs are found in the southern Great Barrier Reef, the Swain[41] and Capricorn Group[42] on the continental shelf, about 100–200 km from the coast. Some platform reefs of the northern Mascarenes are several thousand kilometres from the mainland. Unlike fringing and barrier reefs which extend only seaward, platform reefs grow in all directions.[40] They are variable in size, ranging from a few hundred metres to many kilometres across. Their usual shape is oval to elongated. Parts of these reefs can reach the surface and form sandbanks and small islands around which may form fringing reefs. A lagoon may form in the middle of a platform reef.
Platform reefs can be found within atolls. There they are called patch reefs and may reach only a few dozen metres in diameter. Where platform reefs form on an elongated structure, e. g. an old, eroded barrier reef, they can form a linear arrangement. This is the case, for example, on the east coast of the Red Sea near Jeddah. In old platform reefs, the inner part can be so heavily eroded that it forms a pseudo-atoll.[40] These can be distinguished from real atolls only by detailed investigation, possibly including core drilling. Some platform reefs of the Laccadives are U-shaped, due to wind and water flow.
Atolls or atoll reefs are a more or less circular or continuous barrier reef that extends all the way around a lagoon without a central island.[43] They are usually formed from fringing reefs around volcanic islands.[34] Over time, the island erodes away and sinks below sea level.[34] Atolls may also be formed by the sinking of the seabed or rising of the sea level. A ring of reefs results, which enclose a lagoon. Atolls are numerous in the South Pacific, where they usually occur in mid-ocean, for example, in the Caroline Islands, the Cook Islands, French Polynesia, the Marshall Islands and Micronesia.[39]
Atolls are found in the Indian Ocean, for example, in the Maldives, the Chagos Islands, the Seychelles and around Cocos Island.[39] The entire Maldives consist of 26 atolls.[44]
Coral reef ecosystems contain distinct zones that host different kinds of habitats. Usually, three major zones are recognized: the fore reef, reef crest, and the back reef (frequently referred to as the reef lagoon).
The three zones are physically and ecologically interconnected. Reef life and oceanic processes create opportunities for exchange of seawater, sediments, nutrients and marine life.
Most coral reefs exist in waters less than 50 m deep. Some inhabit tropical continental shelves where cool, nutrient-rich upwelling does not occur, such as the Great Barrier Reef. Others are found in the deep ocean surrounding islands or as atolls, such as in the Maldives. The reefs surrounding islands form when islands subside into the ocean, and atolls form when an island subsides below the surface of the sea.
Alternatively, Moyle and Cech distinguish six zones, though most reefs possess only some of the zones.[47]
The reef surface is the shallowest part of the reef. It is subject to surge and tides. When waves pass over shallow areas, they shoal. This means the water is often agitated. These are the precise conditions under which corals flourish. The light is sufficient for photosynthesis by the symbiotic zooxanthellae, and agitated water brings plankton to feed the coral.
The off-reef floor is the shallow sea floor surrounding a reef. This zone occurs next to reefs on continental shelves. Reefs around tropical islands and atolls drop abruptly to great depths, and do not have such a floor. Usually sandy, the floor often supports seagrass meadows which are important foraging areas for reef fish.
The reef drop-off is, for its first 50 m, habitat for reef fish who find shelter on the cliff face and plankton in the water nearby. The drop-off zone applies mainly to the reefs surrounding oceanic islands and atolls.
The reef face is the zone above the reef floor or the reef drop-off. This zone is often the reef's most diverse area. Coral and calcareous algae provide complex habitats and areas that offer protection, such as cracks and crevices. Invertebrates and epiphytic algae provide much of the food for other organisms.[47] A common feature on this forereef zone is spur and groove formations that serve to transport sediment downslope.
The reef flat is the sandy-bottomed flat, which can be behind the main reef, containing chunks of coral. This zone may border a lagoon and serve as a protective area, or it may lie between the reef and the shore, and in this case is a flat, rocky area. Fish tend to prefer it when it is present.[47]
The reef lagoon is an entirely enclosed region, which creates an area less affected by wave action and often contains small reef patches.[47]
However, the "topography of coral reefs is constantly changing. Each reef is made up of irregular patches of algae, sessile invertebrates, and bare rock and sand. The size, shape and relative abundance of these patches changes from year to year in response to the various factors that favor one type of patch over another. Growing coral, for example, produces constant change in the fine structure of reefs. On a larger scale, tropical storms may knock out large sections of reef and cause boulders on sandy areas to move."[48]
Coral reefs are estimated to cover 284,300 km2 (109,800 sq mi),[49] just under 0.1% of the oceans' surface area. The Indo-Pacific region (including the Red Sea, Indian Ocean, Southeast Asia and the Pacific) accounts for 91.9% of this total. Southeast Asia accounts for 32.3% of that figure, while the Pacific including Australia accounts for 40.8%. Atlantic and Caribbean coral reefs account for 7.6%.[4]
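As a quick sanity check on these percentages, the regional areas implied by the 284,300 km2 total can be recovered with simple arithmetic; the short Python sketch below only restates the figures quoted above and introduces no new data.

    # Regional reef areas implied by the quoted total and percentage shares.
    TOTAL_REEF_AREA_KM2 = 284_300
    shares = {
        "Indo-Pacific (Red Sea, Indian Ocean, Southeast Asia, Pacific)": 0.919,
        "Southeast Asia": 0.323,
        "Pacific including Australia": 0.408,
        "Atlantic and Caribbean": 0.076,
    }
    for region, share in shares.items():
        print(f"{region}: {share:.1%} of total ≈ {TOTAL_REEF_AREA_KM2 * share:,.0f} km²")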
Although corals exist both in temperate and tropical waters, shallow-water reefs form only in a zone extending from approximately 30° N to 30° S of the equator. Tropical corals do not grow at depths of over 50 meters (160 ft). The optimum temperature for most coral reefs is 26–27 °C (79–81 °F), and few reefs exist in waters below 18 °C (64 °F).[50] However, reefs in the Persian Gulf have adapted to temperatures of 13 °C (55 °F) in winter and 38 °C (100 °F) in summer.[51] 37 species of scleractinian corals inhabit such an environment around Larak Island.[52]
Deep-water coral inhabits greater depths and colder temperatures at much higher latitudes, as far north as Norway.[53] Although deep water corals can form reefs, little is known about them.
Coral reefs are rare along the west coasts of the Americas and Africa, due primarily to upwelling and strong cold coastal currents that reduce water temperatures in these areas (the Peru, Benguela and Canary Currents respectively).[54] Corals are seldom found along the coastline of South Asia—from the eastern tip of India (Chennai) to the Bangladesh and Myanmar borders[4]—as well as along the coasts of northeastern South America and Bangladesh, due to the freshwater release from the Amazon and Ganges Rivers respectively.
When alive, corals are colonies of small animals embedded in calcium carbonate shells. Coral heads consist of accumulations of individual animals called polyps, arranged in diverse shapes.[59] Polyps are usually tiny, but they can range in size from a pinhead to 12 inches (30 cm) across.
Reef-building or hermatypic corals live only in the photic zone (above 50 m), the depth to which sufficient sunlight penetrates the water.
Coral polyps do not photosynthesize, but have a symbiotic relationship with microscopic algae (dinoflagellates) of the genus Symbiodinium, commonly referred to as zooxanthellae. These organisms live within the polyps' tissues and provide organic nutrients that nourish the polyp in the form of glucose, glycerol and amino acids.[60] Because of this relationship, coral reefs grow much faster in clear water, which admits more sunlight. Without their symbionts, coral growth would be too slow to form significant reef structures. Corals get up to 90% of their nutrients from their symbionts.[61] In return, as an example of mutualism, the corals shelter the zooxanthellae, averaging one million for every cubic centimeter of coral, and provide a constant supply of the carbon dioxide they need for photosynthesis.
The varying pigments in different species of zooxanthellae give them an overall brown or golden-brown appearance, and give brown corals their colors. Other pigments such as reds, blues, greens, etc. come from colored proteins made by the coral animals. Coral that loses a large fraction of its zooxanthellae becomes white (or sometimes pastel shades in corals that are pigmented with their own proteins) and is said to be bleached, a condition which, unless corrected, can kill the coral.
There are eight clades of Symbiodinium phylotypes. Most research has been conducted on clades A–D. Each clade contributes their own benefits as well as less compatible attributes to the survival of their coral hosts. Each photosynthetic organism has a specific level of sensitivity to photodamage to compounds needed for survival, such as proteins. Rates of regeneration and replication determine the organism's ability to survive. Phylotype A is found more in the shallow waters. It is able to produce mycosporine-like amino acids that are UV resistant, using a derivative of glycerin to absorb the UV radiation and allowing them to better adapt to warmer water temperatures. In the event of UV or thermal damage, if and when repair occurs, it will increase the likelihood of survival of the host and symbiont. This leads to the idea that, evolutionarily, clade A is more UV resistant and thermally resistant than the other clades.[63]
Clades B and C are found more frequently in deeper water, which may explain their higher vulnerability to increased temperatures. Terrestrial plants that receive less sunlight because they are found in the undergrowth are analogous to clades B, C, and D. Since clades B through D are found at deeper depths, they require an elevated light absorption rate to be able to synthesize as much energy. With elevated absorption rates at UV wavelengths, these phylotypes are more prone to coral bleaching versus the shallow clade A.
Clade D has been observed to be high temperature-tolerant, and has a higher rate of survival than clades B and C during modern bleaching events.[63]
Reefs grow as polyps and other organisms deposit calcium carbonate,[64][65] the basis of coral, as a skeletal structure beneath and around themselves, pushing the coral head's top upwards and outwards.[66] Waves, grazing fish (such as parrotfish), sea urchins, sponges and other forces and organisms act as bioeroders, breaking down coral skeletons into fragments that settle into spaces in the reef structure or form sandy bottoms in associated reef lagoons.
Typical shapes for coral species are named by their resemblance to terrestrial objects such as wrinkled brains, cabbages, table tops, antlers, wire strands and pillars. These shapes can depend on the life history of the coral, like light exposure and wave action,[67] and events such as breakages.[68]
Corals reproduce both sexually and asexually. An individual polyp uses both reproductive modes within its lifetime. Corals reproduce sexually by either internal or external fertilization. The reproductive cells are found on the mesenteries, membranes that radiate inward from the layer of tissue that lines the stomach cavity. Some mature adult corals are hermaphroditic; others are exclusively male or female. A few species change sex as they grow.
Internally fertilized eggs develop in the polyp for a period ranging from days to weeks. Subsequent development produces a tiny larva, known as a planula. Externally fertilized eggs develop during synchronized spawning. Polyps across a reef simultaneously release eggs and sperm into the water en masse. Spawn disperse over a large area. The timing of spawning depends on time of year, water temperature, and tidal and lunar cycles. Spawning is most successful given little variation between high and low tide. The less water movement, the better the chance for fertilization. Ideal timing occurs in the spring. Release of eggs or planula usually occurs at night, and is sometimes in phase with the lunar cycle (three to six days after a full moon). The period from release to settlement lasts only a few days, but some planulae can survive afloat for several weeks. During this process the larvae may use several different cues to find a suitable location for settlement. At long distances sounds from existing reefs are likely important [69], while at short distances chemical compounds become important [70]. The larvae are vulnerable to predation and environmental conditions. The lucky few planulae that successfully attach to substrate then compete for food and space.[citation needed]
Corals are the most prodigious reef-builders. However many other organisms living in the reef community contribute skeletal calcium carbonate in the same manner as corals. These include coralline algae and some sponges.[71] Reefs are always built by the combined efforts of these different phyla, with different organisms leading reef-building in different geological periods.[citation needed]
Coralline algae are important contributors to reef structure. Although their mineral deposition-rates are much slower than corals, they are more tolerant of rough wave-action, and so help to create a protective crust over those parts of the reef subjected to the greatest forces by waves, such as the reef front facing the open ocean. They also strengthen the reef structure by depositing limestone in sheets over the reef surface.[citation needed]
"Sclerosponge" is the descriptive name for all Porifera that build reefs. In the early Cambrian period, Archaeocyatha sponges were the world's first reef-building organisms, and sponges were the only reef-builders until the Ordovician. Sclerosponges still assist corals building modern reefs, but like coralline algae are much slower-growing than corals and their contribution is (usually) minor.[citation needed]
In the northern Pacific Ocean cloud sponges still create deep-water mineral-structures without corals, although the structures are not recognizable from the surface like tropical reefs. They are the only extant organisms known to build reef-like structures in cold water.[citation needed]
Fluorescent coral[72]
Brain coral
Staghorn coral
Spiral wire coral
Pillar coral
Mushroom coral
Maze coral
Black coral
Coralline algae Mesophyllum sp.
Encrusting coralline algae
Coralline algae Corallina officinalis
"Recent oceanographic research has brought to light the reality of this paradox by confirming that the oligotrophy of the ocean euphotic zone persists right up to the swell-battered reef crest. When you approach the reef edges and atolls from the quasidesert of the open sea, the near absence of living matter suddenly becomes a plethora of life, without transition. So why is there something rather than nothing, and more precisely, where do the necessary nutrients for the functioning of this extraordinary coral reef machine come from?"
In The Structure and Distribution of Coral Reefs, published in 1842, Darwin described how coral reefs were found in some tropical areas but not others, with no obvious cause. The largest and strongest corals grew in parts of the reef exposed to the most violent surf and corals were weakened or absent where loose sediment accumulated.[74]
Tropical waters contain few nutrients[75] yet a coral reef can flourish like an "oasis in the desert".[76] This has given rise to the ecosystem conundrum, sometimes called "Darwin's paradox": "How can such high production flourish in such nutrient poor conditions?"[77][78][79]
Coral reefs support over one-quarter of all marine species. This diversity results in complex food webs, with large predator fish eating smaller forage fish that eat yet smaller zooplankton and so on. However, all food webs eventually depend on plants, which are the primary producers. Coral reefs typically produce 5–10 grams of carbon per square meter per day (gC·m−2·day−1) biomass.[80][81]
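To put the quoted production rate in more familiar units, the figure of 5–10 gC·m−2·day−1 can be converted to tonnes of carbon per square kilometre per year; the sketch below is plain unit arithmetic on the numbers already given, with no new data.

    # Convert gross production from gC per m2 per day to tonnes C per km2 per year.
    M2_PER_KM2 = 1_000_000
    G_PER_TONNE = 1_000_000
    for rate_g_m2_day in (5, 10):
        tonnes_km2_year = rate_g_m2_day * M2_PER_KM2 * 365 / G_PER_TONNE
        print(f"{rate_g_m2_day} gC/m²/day ≈ {tonnes_km2_year:,.0f} tonnes C per km² per year")

This works out to roughly 1,800–3,700 tonnes of carbon fixed per square kilometre of reef per year.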
One reason for the unusual clarity of tropical waters is their nutrient deficiency and drifting plankton. Further, the sun shines year-round in the tropics, warming the surface layer, making it less dense than subsurface layers. The warmer water is separated from deeper, cooler water by a stable thermocline, where the temperature makes a rapid change. This keeps the warm surface waters floating above the cooler deeper waters. In most parts of the ocean, there is little exchange between these layers. Organisms that die in aquatic environments generally sink to the bottom, where they decompose, which releases nutrients in the form of nitrogen (N), phosphorus (P) and potassium (K). These nutrients are necessary for plant growth, but in the tropics, they do not directly return to the surface.[citation needed]
Plants form the base of the food chain and need sunlight and nutrients to grow. In the ocean, these plants are mainly microscopic phytoplankton which drift in the water column. They need sunlight for photosynthesis, which powers carbon fixation, so they are found only relatively near the surface, but they also need nutrients. Phytoplankton rapidly use nutrients in the surface waters, and in the tropics, these nutrients are not usually replaced because of the thermocline.[82]
Around coral reefs, lagoons fill in with material eroded from the reef and the island. They become havens for marine life, providing protection from waves and storms.
Most importantly, reefs recycle nutrients, which happens much less in the open ocean. In coral reefs and lagoons, producers include phytoplankton, as well as seaweed and coralline algae, especially small types called turf algae, which pass nutrients to corals.[83] The phytoplankton form the base of the food chain and are eaten by fish and crustaceans. Recycling reduces the nutrient inputs needed overall to support the community.[61]
Corals also absorb nutrients, including inorganic nitrogen and phosphorus, directly from water. Many corals extend their tentacles at night to catch zooplankton that pass near. Zooplankton provide the polyp with nitrogen, and the polyp shares some of the nitrogen with the zooxanthellae, which also require this element.[83]
Sponges live in crevices in the reefs. They are efficient filter feeders, and in the Red Sea they consume about 60% of the phytoplankton that drifts by. Sponges eventually excrete nutrients in a form that corals can use.[84]
The roughness of coral surfaces is key to coral survival in agitated waters. Normally, a boundary layer of still water surrounds a submerged object, which acts as a barrier. Waves breaking on the extremely rough edges of corals disrupt the boundary layer, allowing the corals access to passing nutrients. Turbulent water thereby promotes reef growth. Without the access to nutrients brought by rough coral surfaces, even the most effective recycling would not suffice.[85]
Deep nutrient-rich water entering coral reefs through isolated events may have significant effects on temperature and nutrient systems.[86][87] This water movement disrupts the relatively stable thermocline that usually exists between warm shallow water and deeper colder water. Temperature regimes on coral reefs in the Bahamas and Florida are highly variable with temporal scales of minutes to seasons and spatial scales across depths.[88]
Water can pass through coral reefs in various ways, including current rings, surface waves, internal waves and tidal changes.[86][89][90][91] Movement is generally created by tides and wind. As tides interact with varying bathymetry and wind mixes with surface water, internal waves are created. An internal wave is a gravity wave that moves along density stratification within the ocean. When a water parcel encounters a different density it oscillates and creates internal waves.[92] While internal waves generally have a lower frequency than surface waves, they often form as a single wave that breaks into multiple waves as it hits a slope and moves upward.[93] This vertical breakup of internal waves causes significant diapycnal mixing and turbulence.[94][95] Internal waves can act as nutrient pumps, bringing plankton and cool nutrient-rich water to the surface.[86][91][96][97][98][99][100][101][102][103][104]
The irregular structure characteristic of coral reef bathymetry may enhance mixing and produce pockets of cooler water and variable nutrient content.[105] Arrival of cool, nutrient-rich water from depths due to internal waves and tidal bores has been linked to growth rates of suspension feeders and benthic algae[91][104][106] as well as plankton and larval organisms.[91][107] The seaweed Codium isthmocladum reacts to deep water nutrient sources because their tissues have different concentrations of nutrients dependent upon depth.[104] Aggregations of eggs, larval organisms and plankton on reefs respond to deep water intrusions.[98] Similarly, as internal waves and bores move vertically, surface-dwelling larval organisms are carried toward the shore.[107] This has significant biological importance to cascading effects of food chains in coral reef ecosystems and may provide yet another key to unlocking the paradox.
Cyanobacteria provide soluble nitrates via nitrogen fixation.[108]
Coral reefs often depend on surrounding habitats, such as seagrass meadows and mangrove forests, for nutrients. Seagrass and mangroves supply dead plants and animals that are rich in nitrogen and serve to feed fish and animals from the reef by supplying wood and vegetation. Reefs, in turn, protect mangroves and seagrass from waves and produce sediment in which the mangroves and seagrass can root.[51]
Coral reefs form some of the world's most productive ecosystems, providing complex and varied marine habitats that support a wide range of other organisms.[109][110] Fringing reefs just below low tide level have a mutually beneficial relationship with mangrove forests at high tide level and sea grass meadows in between: the reefs protect the mangroves and seagrass from strong currents and waves that would damage them or erode the sediments in which they are rooted, while the mangroves and sea grass protect the coral from large influxes of silt, fresh water and pollutants. This level of variety in the environment benefits many coral reef animals, which, for example, may feed in the sea grass and use the reefs for protection or breeding.[111]
Reefs are home to a variety of animals, including fish, seabirds, sponges, cnidarians (which includes some types of corals and jellyfish), worms, crustaceans (including shrimp, cleaner shrimp, spiny lobsters and crabs), mollusks (including cephalopods), echinoderms (including starfish, sea urchins and sea cucumbers), sea squirts, sea turtles and sea snakes. Aside from humans, mammals are rare on coral reefs, with visiting cetaceans such as dolphins the main exception. A few species feed directly on corals, while others graze on algae on the reef.[4][83] Reef biomass is positively related to species diversity.[112]
The same hideouts in a reef may be regularly inhabited by different species at different times of day. Nighttime predators such as cardinalfish and squirrelfish hide during the day, while damselfish, surgeonfish, triggerfish, wrasses and parrotfish hide from eels and sharks.[26]:49
The great number and diversity of hiding places in coral reefs, i.e. refuges, are the most important factor causing the great diversity and high biomass of the organisms in coral reefs.[113][114]
Reefs are chronically at risk of algal encroachment. Overfishing and excess nutrient supply from onshore can enable algae to outcompete and kill the coral.[115][116] Increased nutrient levels can be a result of sewage or chemical fertilizer runoff. Runoff can carry nitrogen and phosphorus which promote excess algae growth. Algae can sometimes out-compete the coral for space. The algae can then smother the coral by decreasing the oxygen supply available to the reef.[117] Decreased oxygen levels can slow down calcification rates, weakening the coral and leaving it more susceptible to disease and degradation.[118] Algae inhabit a large percentage of surveyed coral locations.[119] The algal population consists of turf algae, coralline algae and macro algae. Some sea urchins (such as Diadema antillarum) eat these algae and could thus decrease the risk of algal encroachment.
Sponges are essential for the functioning of the coral reef system. Algae and corals in coral reefs produce organic material. This is filtered through sponges, which convert it into small particles that are in turn absorbed by algae and corals.[120]
Over 4,000 species of fish inhabit coral reefs.[4] The reasons for this diversity remain unclear. Hypotheses include the "lottery", in which the first (lucky winner) recruit to a territory is typically able to defend it against latecomers, "competition", in which adults compete for territory, and less-competitive species must be able to survive in poorer habitat, and "predation", in which population size is a function of postsettlement piscivore mortality.[121] Healthy reefs can produce up to 35 tons of fish per square kilometer each year, but damaged reefs produce much less.[122]
Sea urchins, Dotidae and sea slugs eat seaweed. Some species of sea urchins, such as Diadema antillarum, can play a pivotal part in preventing algae from overrunning reefs.[123] Researchers are investigating the use of native collector urchins, Tripneustes gratilla, for their potential as biocontrol agents to mitigate the spread of invasive algae species on coral reefs.[124][125] Nudibranchia and sea anemones eat sponges.
A number of invertebrates, collectively called "cryptofauna," inhabit the coral skeletal substrate itself, either boring into the skeletons (through the process of bioerosion) or living in pre-existing voids and crevices. Animals boring into the rock include sponges, bivalve mollusks, and sipunculans. Those settling on the reef include many other species, particularly crustaceans and polychaete worms.[54]
Coral reef systems provide important habitats for seabird species, some endangered. For example, Midway Atoll in Hawaii supports nearly three million seabirds, including two-thirds (1.5 million) of the global population of Laysan albatross, and one-third of the global population of black-footed albatross.[126] Each seabird species has specific sites on the atoll where they nest. Altogether, 17 species of seabirds live on Midway. The short-tailed albatross is the rarest, with fewer than 2,200 surviving after excessive feather hunting in the late 19th century.[127]
Sea snakes feed exclusively on fish and their eggs.[128][129][130] Marine birds, such as herons, gannets, pelicans and boobies, feed on reef fish. Some land-based reptiles intermittently associate with reefs, such as monitor lizards, the marine crocodile and semiaquatic snakes, such as Laticauda colubrina. Sea turtles, particularly hawksbill sea turtles, feed on sponges.[131][132][133]
Schooling reef fish
Caribbean reef squid
Banded coral shrimp
Whitetip reef shark
Green turtle
Giant clam
Soft coral, cup coral, sponges and ascidians
Banded sea krait
The shell of Latiaxis wormaldi, a coral snail
Coral reefs deliver ecosystem services to tourism, fisheries and coastline protection. The global economic value of coral reefs has been estimated to be between US$29.8 billion[8] and $375 billion per year.[9]
The economic cost over a 25-year period of destroying one kilometer of coral reef has been estimated to be somewhere between $137,000 and $1,200,000.[134]
To improve the management of coastal coral reefs, the World Resources Institute (WRI) developed and published tools for calculating the value of coral reef-related tourism, shoreline protection and fisheries, partnering with five Caribbean countries. As of April 2011, published working papers covered St. Lucia, Tobago, Belize, and the Dominican Republic. The WRI was "making sure that the study results support improved coastal policies and management planning".[135] The Belize study estimated the value of reef and mangrove services at $395–559 million annually.[136]
Bermuda's coral reefs provide economic benefits to the Island worth on average $722 million per year, based on six key ecosystem services, according to Sarkis et al (2010).[137]
Coral reefs protect shorelines by absorbing wave energy, and many small islands would not exist without reefs. Coral reefs can reduce wave energy by 97%, helping to prevent loss of life and property damage. Coastlines protected by coral reefs are also more stable in terms of erosion than those without. Reefs can attenuate waves as well as or better than artificial structures designed for coastal defence such as breakwaters.[138] An estimated 197 million people who live both below 10 m elevation and within 50 km of a reef consequently may receive risk reduction benefits from reefs. Restoring reefs is significantly cheaper than building artificial breakwaters in tropical environments. Expected damages from flooding would double, and costs from frequent storms would triple without the topmost meter of reefs. For 100-year storm events, flood damages would increase by 91% to $US 272 billion without the top meter.[139]
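A back-of-envelope reading of the 100-year-storm figure quoted above: if losing the topmost metre of reef raises flood damages by 91% to about US$272 billion, the implied baseline with reefs intact follows by simple division. The sketch below assumes only the two numbers already given and that the 91% increase applies to that $272 billion figure.

    # Implied baseline 100-year-storm flood damage with the reef crest intact.
    damages_without_top_metre_bn = 272      # US$ billion, as quoted
    relative_increase = 0.91                # 91% increase without the top metre
    baseline_bn = damages_without_top_metre_bn / (1 + relative_increase)
    print(f"Implied baseline damage: ≈ US${baseline_bn:.0f} billion per 100-year event")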
About six million tons of fish are taken each year from coral reefs. Well-managed reefs have an average annual yield of 15 tons of seafood per square kilometer. Southeast Asia's coral reef fisheries alone yield about $2.4 billion annually from seafood.[134]
Since their emergence 485 million years ago, coral reefs have faced many threats, including disease,[141] predation,[142] invasive species, bioerosion by grazing fish,[143] algal blooms, geologic hazards, and recent human activity.
These include coral mining, bottom trawling,[144] and the digging of canals and accesses into islands and bays, all of which can damage marine ecosystems if not done sustainably. Other localized threats include blast fishing, overfishing, coral overmining,[145] and marine pollution, including use of the banned anti-fouling biocide tributyltin; although absent in developed countries, these activities continue in places with few environmental protections or poor regulatory enforcement.[146][147][148] Chemicals in sunscreens may awaken latent viral infections in zooxanthellae[11] and impact reproduction.[149] However, concentrating tourism activities via offshore platforms has been shown to limit the spread of coral disease by tourists.[150]
Greenhouse gas emissions present a broader threat through sea temperature rise and sea level rise,[151] though corals adapt their calcifying fluids to changes in seawater pH and carbonate levels and are not directly threatened by ocean acidification.[152] Volcanic and manmade aerosol pollution can modulate regional sea surface temperatures.[153]
In 2011, two researchers suggested that "extant marine invertebrates face the same synergistic effects of multiple stressors" that occurred during the end-Permian extinction, and that genera "with poorly buffered respiratory physiology and calcareous shells", such as corals, were particularly vulnerable.[154][155][156]
Corals respond to stress by "bleaching," or expelling their colorful zooxanthellate endosymbionts. Corals with Clade C zooxanthellae are generally vulnerable to heat-induced bleaching, whereas corals with the hardier Clade A or D are generally resistant,[157] as are tougher coral genera like Porites and Montipora.[158]
Every 4–7 years, an El Niño event causes some reefs with heat-sensitive corals to bleach,[159] with especially widespread bleachings in 1998 and 2010.[160][161] However, reefs that experience a severe bleaching event become resistant to future heat-induced bleaching,[162][163][158] due to rapid directional selection.[163] Similar rapid adaption may protect coral reefs from global warming.[164]
A large-scale systematic study of the Jarvis Island coral community, which experienced ten El Niño-coincident coral bleaching events from 1960 to 2016, found that the reef recovered from almost complete death after severe events.[159]
Marine protected areas (MPAs) are designated areas that provide various kinds of protection to ocean and/or estuarine areas. They are intended to promote responsible fishery management and habitat protection. MPAs can encompass both social and biological objectives, including reef restoration, aesthetics, biodiversity and economic benefits.
However, research in Indonesia, Philippines and Papua New Guinea found no significant difference between an MPA site and an unprotected site.[165][166] Further, they can generate conflicts driven by lack of community participation, clashing views of the government and fisheries, effectiveness of the area and funding.[167] In some situations, as in the Phoenix Islands Protected Area, MPAs provide revenue, potentially equal to the income they would have generated without controls.[168]
According to the Caribbean Coral Reefs - Status Report 1970–2012, stopping overfishing (especially of fish that are key to coral reef health, such as parrotfish), managing the coastal zone to reduce human pressure on reefs (for example by restricting coastal settlement, development and tourism), and controlling pollution, especially sewage, may reduce coral decline or even reverse it. The report shows that the healthier reefs in the Caribbean are those with large populations of parrotfish, in countries that protect these key fish and sea urchins by banning fish trapping and spearfishing, creating "resilient reefs".[169][170]
To help combat ocean acidification, some laws are in place to reduce greenhouse gases such as carbon dioxide. The United States Clean Water Act puts pressure on state governments to monitor and limit runoff.
Many land use laws aim to reduce CO2 emissions by limiting deforestation. Deforestation can release significant amounts of CO2 absent sequestration via active follow-up forestry programs. Deforestation can also cause erosion, which flows into the ocean, contributing to ocean acidification. Incentives are used to reduce miles traveled by vehicles, which reduces carbon emissions into the atmosphere, thereby reducing the amount of dissolved CO2 in the ocean. State and federal governments also regulate land activities that affect coastal erosion.[171] High-end satellite technology can monitor reef conditions.[172]
Designating a reef as a biosphere reserve, marine park, national monument or world heritage site can offer protections. For example, Belize's barrier reef, Sian Ka'an, the Galapagos islands, Great Barrier Reef, Henderson Island, Palau and Papahānaumokuākea Marine National Monument are world heritage sites.[173]
In Australia, the Great Barrier Reef is protected by the Great Barrier Reef Marine Park Authority, and is the subject of much legislation, including a biodiversity action plan.[174] Australia compiled a Coral Reef Resilience Action Plan. This plan consists of adaptive management strategies, including reducing carbon footprint. A public awareness plan provides education on the "rainforests of the sea" and how people can reduce carbon emissions.[175]
Inhabitants of Ahus Island, Manus Province, Papua New Guinea, have followed a generations-old practice of restricting fishing in six areas of their reef lagoon. Their cultural traditions allow line fishing, but no net or spear fishing. Both biomass and individual fish sizes are significantly larger than in places where fishing is unrestricted.[176][177]
Coral reef restoration has grown in prominence over the past several decades because of unprecedented reef die-offs around the planet. Coral stressors can include pollution, warming ocean temperatures, extreme weather events, and overfishing. With the deterioration of global reefs, fish nurseries, biodiversity, coastal development and livelihoods, and natural beauty are under threat. In response, researchers developed the field of coral restoration in the 1970s and 1980s.[178]
Coral aquaculture, also known as coral farming or coral gardening, is showing promise as a potentially effective tool for restoring coral reefs.[179][180][181] The "gardening" process bypasses the early growth stages of corals when they are most at risk of dying. Coral seeds are grown in nurseries, then replanted on the reef.[182] Coral is farmed by coral farmers whose interests range from reef conservation to increased income. Due to its straightforward process and substantial evidence that the technique has a significant effect on coral reef growth, coral nurseries have become the most widespread and arguably the most effective method of coral restoration.[183]
Coral gardening takes advantage of a coral's natural ability to fragment and continue growing if the fragments are able to anchor themselves onto new substrates. This method was first tested by Baruch Rinkevich[184] in 1995, with success at the time. By today's standards, coral farming has grown into a variety of different forms, but still has the same goal of cultivating corals. Consequently, coral farming quickly replaced previously used transplantation methods, i.e. the act of physically moving sections or whole colonies of corals into a new area.[183] Transplantation has seen success in the past, and decades of experiments have led to high success and survival rates. However, this method still requires the removal of corals from existing reefs. With the current state of reefs, this kind of method should generally be avoided if possible. Saving healthy corals from eroding substrates or from reefs that are doomed to collapse could be a major advantage of utilizing transplantation.
Coral gardening generally takes the same form wherever it is practised. It begins with the establishment of a nursery where operators can observe and care for coral fragments.[183] Nurseries should be established in areas that maximize growth and minimize mortality. Floating offshore coral trees or even aquariums are possible locations where corals can grow. After a location has been determined, collection and cultivation can occur.
The major benefit of using coral farms is that they lower polyp and juvenile mortality rates. By removing predators and recruitment obstacles, corals are able to mature without much hindrance. However, nurseries cannot stop climate stressors: warming temperatures or hurricanes can still disrupt or even kill nursery corals.
Efforts to expand the size and number of coral reefs generally involve supplying substrate to allow more corals to find a home. Substrate materials include discarded vehicle tires, scuttled ships, subway cars and formed concrete, such as reef balls. Reefs also grow unaided on marine structures such as oil rigs. In large restoration projects, propagated hermatypic coral on substrate can be secured with metal pins, superglue or milliput. Needle and thread can also attach ahermatypic coral to substrate.
Biorock is a substrate produced by a patented process that runs low voltage electrical currents through seawater to cause dissolved minerals to precipitate onto steel structures. The resultant white carbonate (aragonite) is the same mineral that makes up natural coral reefs. Corals rapidly colonize and grow at accelerated rates on these coated structures. The electrical currents also accelerate formation and growth of both chemical limestone rock and the skeletons of corals and other shell-bearing organisms, such as oysters. The vicinity of the anode and cathode provides a high-pH environment which inhibits the growth of competitive filamentous and fleshy algae. The increased growth rates fully depend on the accretion activity. Under the influence of the electric field, corals display an increased growth rate, size and density.
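The accretion chemistry behind this process can be sketched with two textbook reactions (a simplified view, not taken from the cited sources): cathodic reduction of seawater raises the local pH, and dissolved calcium and bicarbonate then precipitate as carbonate on the steel cathode.

\[ 2\,\mathrm{H_2O} + 2e^- \rightarrow \mathrm{H_2} + 2\,\mathrm{OH^-} \]
\[ \mathrm{Ca^{2+}} + \mathrm{HCO_3^-} + \mathrm{OH^-} \rightarrow \mathrm{CaCO_3}\!\downarrow + \mathrm{H_2O} \]

Which mineral actually accretes (aragonite versus brucite) depends on current density, which this sketch does not capture.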
Simply having many structures on the ocean floor is not enough to form coral reefs. Restoration projects must consider the complexity of the substrates they are creating for future reefs. Researchers conducted an experiment near Ticao Island in the Philippines in 2013[185] in which several substrates of varying complexity were laid in the nearby degraded reefs. High-complexity plots had man-made substrates of both smooth and rough rocks with a surrounding fence, medium-complexity plots had only the man-made substrates, and low-complexity plots had neither the fence nor the substrates. After one month, researchers found a positive correlation between structure complexity and recruitment rates of larvae.[185] The medium complexity performed the best, with larvae favoring rough rocks over smooth rocks. One year into their study, researchers visited the site and found that many of the sites were able to support local fisheries. They came to the conclusion that reef restoration could be done cost-effectively and would yield long-term benefits, provided the reefs are protected and maintained.[185]
One case study with coral reef restoration was conducted on the island of Oahu in Hawaii. The University of Hawaii operates a Coral Reef Assessment and Monitoring Program to help relocate and restore coral reefs in Hawaii. A boat channel from the island of Oahu to the Hawaii Institute of Marine Biology on Coconut Island was overcrowded with coral reefs. Many areas of coral reef patches in the channel had been damaged from past dredging in the channel.
Dredging covers corals with sand. Coral larvae cannot settle on sand; they can only build on existing reefs or compatible hard surfaces, such as rock or concrete. Because of this, the University decided to relocate some of the coral. They transplanted them with the help of United States Army divers, to a site relatively close to the channel. They observed little if any damage to any of the colonies during transport and no mortality of coral reefs was observed on the transplant site. While attaching the coral to the transplant site, they found that coral placed on hard rock grew well, including on the wires that attached the corals to the site.
No environmental effects were seen from the transplantation process, recreational activities were not decreased, and no scenic areas were affected.
As an alternative to transplanting coral themselves, juvenile fish can also be encouraged to relocate to existing coral reefs by auditory simulation. In damaged sections of the Great Barrier Reef, loudspeakers playing recordings of healthy reef environments, were found to attract fish twice as often as equivalent patches where no sound was played, and also increased species biodiversity by 50%.
Another possibility for coral restoration is gene therapy: inoculating coral with genetically modified bacteria, or naturally occurring heat-tolerant varieties of coral symbionts, may make it possible to grow corals that are more resistant to climate change and other threats.[186] Warming oceans are forcing corals to adapt to unprecedented temperatures. Those that do not have a tolerance for the elevated temperatures experience coral bleaching and eventually mortality. Research is already under way to create genetically modified corals that can withstand a warming ocean. Madeleine J. H. van Oppen, James K. Oliver, Hollie M. Putnam, and Ruth D. Gates described four approaches involving gradually increasing human intervention to genetically modify corals.[187] These methods focus on altering the genetics of the zooxanthellae within the coral rather than of the coral itself.
The first method is to induce acclimatization in the first generation of corals.[187] The idea is that when adult and offspring corals are exposed to stressors, the zooxanthellae will gain a mutation. This method is based mostly on the chance that the zooxanthellae will acquire the specific trait that will allow them to better survive in warmer waters. The second method focuses on identifying what different kinds of zooxanthellae are within the coral and configuring how much of each kind lives within the coral at a given age.[187] Use of zooxanthellae from the previous method would only boost success rates for this method. However, this method would only be applicable to younger corals, for now, because previous experiments on manipulating zooxanthellae communities at later life stages have all failed. The third method focuses on selective breeding tactics.[187] Once selected, corals would be reared and exposed to simulated stressors in a laboratory. The last method is to genetically modify the zooxanthellae themselves.[187] When preferred mutations are acquired, the genetically modified zooxanthellae are introduced to an aposymbiotic polyp and a new coral is produced. This method is the most laborious of the four, but researchers believe it should be utilized more and that it holds the most promise in genetic engineering for coral restoration.
Hawaiian coral reefs smothered by the spread of invasive algae were managed with a two-prong approach: divers manually removed invasive algae, with the support of super-sucker barges. Grazing pressure on invasive algae needed to be increased to prevent the regrowth of the algae. Researchers found that native collector urchins were reasonable candidate grazers for algae biocontrol, to extirpate the remaining invasive algae from the reef.[124]
Macroalgae, better known as seaweed, have the potential to cause reef collapse because they can outcompete many coral species. Macroalgae can overgrow corals, shade them, block recruitment, release biochemicals that can hinder spawning, and potentially promote the growth of bacteria harmful to corals.[188][189] Historically, algal growth was controlled by herbivorous fish and sea urchins. Parrotfish are a prime example of reef caretakers. Consequently, these grazers can be considered keystone species for reef environments because of their role in protecting reefs.
Before the 1980s, Jamaica's reefs were thriving and well cared for; however, this changed after Hurricane Allen struck in 1980 and an unknown disease spread across the Caribbean. In the wake of these events, massive damage was caused to both the reefs and the sea urchin population across Jamaica's reefs and into the Caribbean Sea. As little as 2% of the original sea urchin population survived the disease.[189] Macroalgae rapidly colonized the destroyed reefs, and eventually larger, more resilient macroalgae took over as the dominant organisms.[189][190] Parrotfish and other herbivorous fish were few in number because of decades of overfishing and bycatch at the time.[190] Historically, the Jamaican coast had 90% coral cover, which was reduced to 5% in the 1990s.[190] Eventually, corals were able to recover in areas where sea urchin populations were increasing. Sea urchins were able to feed and multiply and clear off substrates, leaving areas for coral polyps to anchor and mature. However, sea urchin populations are still not recovering as fast as researchers predicted, despite being highly fecund.[189] It is unknown whether or not the mysterious disease is still present and preventing sea urchin populations from rebounding. Regardless, these areas are slowly recovering with the aid of sea urchin grazing. This event supports an early restoration idea of cultivating and releasing sea urchins into reefs to prevent algal overgrowth.[191][192]
In 2014, Christopher Page, Erinn Muller, and David Vaughan from the International Center for Coral Reef Research & Restoration at Mote Marine Laboratory in Summerland Key, Florida developed a new technology called "microfragmentation," in which they use a specialized diamond band saw to cut corals into 1 cm2 fragments instead of 6 cm2 to advance the growth of brain, boulder, and star corals.[193] The corals Orbicella faveolata and Montastraea cavernosa were outplanted off Florida's shores in several microfragment arrays. After two years, O. faveolata had grown to 6.5 times its original size, while M. cavernosa had grown to nearly twice its size.[193] Under conventional means, both corals would have required decades to reach the same size. It is suspected that if predation events had not occurred near the beginning of the experiment, O. faveolata would have grown to at least ten times its original size.[193] By using this method, Mote Marine Laboratory produced 25,000 corals and planted 10,000 in the Florida Keys in only one year. Shortly after, they discovered that these microfragments fused with other microfragments from the same parent coral. Typically, corals that are not from the same parent fight and kill nearby corals in an attempt to survive and expand. This new technique is known as "fusion" and has been shown to grow coral heads in just two years instead of the typical 25–75 years. After fusion occurs, the reef will act as a single organism rather than several independent reefs. So far, there has been no published research into this method.[193]
The times of maximum reef development were in the Middle Cambrian (513–501 Ma), Devonian (416–359 Ma) and Carboniferous (359–299 Ma), owing to the extinct corals of the order Rugosa, and in the Late Cretaceous (100–66 Ma) and throughout the Neogene (23 Ma–present), owing to corals of the order Scleractinia.
Not all reefs in the past were formed by corals: those in the Early Cambrian (542–513 Ma) resulted from calcareous algae and archaeocyathids (small animals with conical shape, probably related to sponges) and in the Late Cretaceous (100–66 Ma), when reefs formed by a group of bivalves called rudists existed; one of the valves formed the main conical structure and the other, much smaller valve acted as a cap.
Measurements of the oxygen isotopic composition of the aragonitic skeleton of coral reefs, such as Porites, can indicate changes in sea surface temperature and sea surface salinity conditions during the growth of the coral. This technique is often used by climate scientists to infer a region's paleoclimate.[194]
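As a rough guide to how such records are read, skeletal δ18O is commonly interpreted with a linear calibration of the form below; the slope and intercept are species- and site-specific, and the value given here is only indicative, not taken from the cited source.

\[ \delta^{18}\mathrm{O}_{\mathrm{coral}} \approx a + b\,T_{\mathrm{SST}} + \delta^{18}\mathrm{O}_{\mathrm{seawater}} \]

where b is roughly −0.2‰ per °C. At constant seawater composition, a 1‰ shift in skeletal δ18O therefore corresponds to roughly a 4–5 °C change in sea surface temperature, while the seawater term itself tracks salinity.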
en/4945.html.txt
ADDED
@@ -0,0 +1,49 @@
An autobiography (from the Greek, αὐτός-autos self + βίος-bios life + γράφειν-graphein to write; also informally called an autobio[1]) is a self-written account of the life of oneself. The word "autobiography" was first used deprecatingly by William Taylor in 1797 in the English periodical The Monthly Review, when he suggested the word as a hybrid, but condemned it as "pedantic". However, its next recorded use was in its present sense, by Robert Southey in 1809.[2] Despite only being named early in the nineteenth century, first-person autobiographical writing originates in antiquity. Roy Pascal differentiates autobiography from the periodic self-reflective mode of journal or diary writing by noting that "[autobiography] is a review of a life from a particular moment in time, while the diary, however reflective it may be, moves through a series of moments in time".[3] Autobiography thus takes stock of the autobiographer's life from the moment of composition. While biographers generally rely on a wide variety of documents and viewpoints, autobiography may be based entirely on the writer's memory. The memoir form is closely associated with autobiography but it tends, as Pascal claims, to focus less on the self and more on others during the autobiographer's review of his or her life.[3]
Autobiographical works are by nature subjective. The inability—or unwillingness—of the author to accurately recall memories has in certain cases resulted in misleading or incorrect information. Some sociologists and psychologists have noted that autobiography offers the author the ability to recreate history.
Spiritual autobiography is an account of an author's struggle or journey towards God, followed by a religious conversion, often interrupted by moments of regression. The author re-frames his or her life as a demonstration of divine intention through encounters with the Divine. The earliest example of a spiritual autobiography is Augustine's Confessions, though the tradition has expanded to include other religious traditions in works such as Zahid Rohari's An Autobiography and Black Elk Speaks. The spiritual autobiography works as an endorsement of the writer's religion.
A memoir is slightly different in character from an autobiography. While an autobiography typically focuses on the "life and times" of the writer, a memoir has a narrower, more intimate focus on his or her own memories, feelings and emotions. Memoirs have often been written by politicians or military leaders as a way to record and publish an account of their public exploits. One early example is that of Julius Caesar's Commentarii de Bello Gallico, also known as Commentaries on the Gallic Wars. In the work, Caesar describes the battles that took place during the nine years that he spent fighting local armies in the Gallic Wars. His second memoir, Commentarii de Bello Civili (or Commentaries on the Civil War) is an account of the events that took place between 49 and 48 BC in the civil war against Gnaeus Pompeius and the Senate.
Leonor López de Córdoba (1362–1420) wrote what is supposed to be the first autobiography in Spanish. The English Civil War (1642–1651) provoked a number of examples of this genre, including works by Sir Edmund Ludlow and Sir John Reresby. French examples from the same period include the memoirs of Cardinal de Retz (1614–1679) and the Duc de Saint-Simon.
The term "fictional autobiography" signifies novels about a fictional character written as though the character were writing their own autobiography, meaning that the character is the first-person narrator and that the novel addresses both internal and external experiences of the character. Daniel Defoe's Moll Flanders is an early example. Charles Dickens' David Copperfield is another such classic, and J.D. Salinger's The Catcher in the Rye is a well-known modern example of fictional autobiography. Charlotte Brontë's Jane Eyre is yet another example of fictional autobiography, as noted on the front page of the original version. The term may also apply to works of fiction purporting to be autobiographies of real characters, e.g., Robert Nye's Memoirs of Lord Byron.
In antiquity such works were typically entitled apologia, purporting to be self-justification rather than self-documentation. John Henry Newman's Christian confessional work (first published in 1864) is entitled Apologia Pro Vita Sua in reference to this tradition.
The Jewish historian Flavius Josephus introduces his autobiography (Josephi Vita, c. 99) with self-praise, which is followed by a justification of his actions as a Jewish rebel commander of Galilee.[4]
The pagan rhetor Libanius (c. 314–394) framed his life memoir (Oration I, begun in 374) as one of his orations, not of a public kind, but of a literary kind that could only be read aloud in privacy.
Augustine (354–430) applied the title Confessions to his autobiographical work, and Jean-Jacques Rousseau used the same title in the 18th century, initiating the chain of confessional and sometimes racy and highly self-critical, autobiographies of the Romantic era and beyond. Augustine's was arguably the first Western autobiography ever written, and became an influential model for Christian writers throughout the Middle Ages. It tells of the hedonistic lifestyle Augustine lived for a time within his youth, associating with young men who boasted of their sexual exploits; his following and leaving of the anti-sex and anti-marriage Manichaeism in attempts to seek sexual morality; and his subsequent return to Christianity due to his embracement of Skepticism and the New Academy movement (developing the view that sex is good, and that virginity is better, comparing the former to silver and the latter to gold; Augustine's views subsequently strongly influenced Western theology[5]). Confessions will always rank among the great masterpieces of western literature.[6]
|
22 |
+
|
23 |
+
In the spirit of Augustine's Confessions is the 12th-century Historia Calamitatum of Peter Abelard, outstanding as an autobiographical document of its period.
|
24 |
+
|
25 |
+
In the 15th century, Leonor López de Córdoba, a Spanish noblewoman, wrote her Memorias, which may be the first autobiography in Castilian.
|
26 |
+
|
27 |
+
Zāhir ud-Dīn Mohammad Bābur, who founded the Mughal dynasty of South Asia, kept a journal, the Bāburnāma (Chagatai/Persian: بابر نامہ; literally: "Book of Babur" or "Letters of Babur"), which was written between 1493 and 1529.
|
28 |
+
|
29 |
+
One of the first great autobiographies of the Renaissance is that of the sculptor and goldsmith Benvenuto Cellini (1500–1571), written between 1556 and 1558, and entitled by him simply Vita (Italian: Life). He declares at the start: "No matter what sort he is, everyone who has to his credit what are or really seem great achievements, if he cares for truth and goodness, ought to write the story of his own life in his own hand; but no one should venture on such a splendid undertaking before he is over forty."[7] These criteria for autobiography generally persisted until recent times, and most serious autobiographies of the next three hundred years conformed to them.
|
30 |
+
|
31 |
+
Another autobiography of the period is De vita propria, by the Italian mathematician, physician and astrologer Gerolamo Cardano (1574).
|
32 |
+
|
33 |
+
The earliest known autobiography written in English is the Book of Margery Kempe, written in 1438.[8] Following in the earlier tradition of a life story told as an act of Christian witness, the book describes Margery Kempe's pilgrimages to the Holy Land and Rome, her attempts to negotiate a celibate marriage with her husband, and most of all her religious experiences as a Christian mystic. Extracts from the book were published in the early sixteenth century but the whole text was published for the first time only in 1936.[9]
|
34 |
+
|
35 |
+
Possibly the first publicly available autobiography written in English was Captain John Smith's autobiography published in 1630[10] which was regarded by many as not much more than a collection of tall tales told by someone of doubtful veracity. This changed with the publication of Philip Barbour's definitive biography in 1964 which, amongst other things, established independent factual bases for many of Smith's "tall tales", many of which could not have been known by Smith at the time of writing unless he was actually present at the events recounted.[11]
|
36 |
+
|
37 |
+
Other notable English autobiographies of the 17th century include those of Lord Herbert of Cherbury (1643, published 1764) and John Bunyan (Grace Abounding to the Chief of Sinners, 1666).
|
38 |
+
|
39 |
+
Jarena Lee (1783–1864) was the first African American woman to have a published autobiography in the United States.[12]
|
40 |
+
|
41 |
+
Following the trend of Romanticism, which greatly emphasized the role and the nature of the individual, and in the footsteps of Jean-Jacques Rousseau's Confessions, a more intimate form of autobiography, exploring the subject's emotions, came into fashion. Stendhal's autobiographical writings of the 1830s, The Life of Henry Brulard and Memoirs of an Egotist, are both avowedly influenced by Rousseau.[13] An English example is William Hazlitt's Liber Amoris (1823), a painful examination of the writer's love-life.
|
42 |
+
|
43 |
+
With the rise of education, cheap newspapers and cheap printing, modern concepts of fame and celebrity began to develop, and the beneficiaries of this were not slow to cash in on this by producing autobiographies. It became the expectation—rather than the exception—that those in the public eye should write about themselves—not only writers such as Charles Dickens (who also incorporated autobiographical elements in his novels) and Anthony Trollope, but also politicians (e.g. Henry Brooks Adams), philosophers (e.g. John Stuart Mill), churchmen such as Cardinal Newman, and entertainers such as P. T. Barnum. Increasingly, in accordance with romantic taste, these accounts also began to deal, amongst other topics, with aspects of childhood and upbringing—far removed from the principles of "Cellinian" autobiography.
|
44 |
+
|
45 |
+
From the 17th century onwards, "scandalous memoirs" by supposed libertines, serving a public taste for titillation, have been frequently published. Typically pseudonymous, they were (and are) largely works of fiction written by ghostwriters. So-called "autobiographies" of modern professional athletes and media celebrities—and, to a lesser extent, politicians—generally written by a ghostwriter, are routinely published. Some celebrities, such as Naomi Campbell, admit to not having read their "autobiographies".[citation needed] Some sensationalist autobiographies, such as James Frey's A Million Little Pieces, have been publicly exposed as having embellished or fictionalized significant details of the authors' lives.
|
46 |
+
|
47 |
+
Autobiography has become an increasingly popular and widely accessible form. A Fortunate Life by Albert Facey (1979) has become an Australian literary classic.[14] With the critical and commercial success in the United States of such memoirs as Angela’s Ashes and The Color of Water, more and more people have been encouraged to try their hand at this genre. Maggie Nelson's book The Argonauts is a recent example; Nelson calls it "autotheory"—a combination of autobiography and critical theory.[15]
|
48 |
+
|
49 |
+
A genre where the "claim for truth" overlaps with fictional elements though the work still purports to be autobiographical is autofiction.
|
en/4946.html.txt
ADDED
@@ -0,0 +1,125 @@
1 |
+
|
2 |
+
|
3 |
+
In Euclidean plane geometry, a rectangle is a quadrilateral with four right angles. It can also be defined as an equiangular quadrilateral, since equiangular means that all of its angles are equal (360°/4 = 90°). It can also be defined as a parallelogram containing a right angle. A rectangle with four sides of equal length is a square. The term oblong is occasionally used to refer to a non-square rectangle.[1][2][3] A rectangle with vertices ABCD would be denoted as ▭ABCD.
|
4 |
+
|
5 |
+
The word rectangle comes from the Latin rectangulus, which is a combination of rectus (as an adjective, right, proper) and angulus (angle).
|
6 |
+
|
7 |
+
A crossed rectangle is a crossed (self-intersecting) quadrilateral which consists of two opposite sides of a rectangle along with the two diagonals.[4] It is a special case of an antiparallelogram, and its angles are not right angles. Other geometries, such as spherical, elliptic, and hyperbolic, have so-called rectangles with opposite sides equal in length and equal angles that are not right angles.
|
8 |
+
|
9 |
+
Rectangles are involved in many tiling problems, such as tiling the plane by rectangles or tiling a rectangle by polygons.
|
10 |
+
|
11 |
+
A convex quadrilateral is a rectangle if and only if it is any one of the following:[5][6]
|
12 |
+
|
13 |
+
A rectangle is a special case of a parallelogram in which each pair of adjacent sides is perpendicular.
|
14 |
+
|
15 |
+
A parallelogram is a special case of a trapezium (known as a trapezoid in North America) in which both pairs of opposite sides are parallel and equal in length.
|
16 |
+
|
17 |
+
A trapezium is a convex quadrilateral which has at least one pair of parallel opposite sides.
|
18 |
+
|
19 |
+
A convex quadrilateral is
|
20 |
+
|
21 |
+
De Villiers defines a rectangle more generally as any quadrilateral with axes of symmetry through each pair of opposite sides.[9] This definition includes both right-angled rectangles and crossed rectangles. Each has an axis of symmetry parallel to and equidistant from a pair of opposite sides, and another which is the perpendicular bisector of those sides, but, in the case of the crossed rectangle, the first axis is not an axis of symmetry for either side that it bisects.
|
22 |
+
|
23 |
+
Quadrilaterals with two axes of symmetry, each through a pair of opposite sides, belong to the larger class of quadrilaterals with at least one axis of symmetry through a pair of opposite sides. These quadrilaterals comprise isosceles trapezia and crossed isosceles trapezia (crossed quadrilaterals with the same vertex arrangement as isosceles trapezia).
|
24 |
+
|
25 |
+
A rectangle is cyclic: all corners lie on a single circle.
|
26 |
+
|
27 |
+
It is equiangular: all its corner angles are equal (each of 90 degrees).
|
28 |
+
|
29 |
+
It is isogonal or vertex-transitive: all corners lie within the same symmetry orbit.
|
30 |
+
|
31 |
+
It has two lines of reflectional symmetry and rotational symmetry of order 2 (through 180°).
|
32 |
+
|
33 |
+
The dual polygon of a rectangle is a rhombus, as shown in the table below.[10]
|
34 |
+
|
35 |
+
A rectangle is rectilinear: its sides meet at right angles.
|
36 |
+
|
37 |
+
A rectangle in the plane can be defined by five independent degrees of freedom consisting, for example, of three for position (comprising two of translation and one of rotation), one for shape (aspect ratio), and one for overall size (area).
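As a rough illustration of these five degrees of freedom, the following Python sketch reconstructs the four corner points of a rectangle from a centre position, a rotation angle, an aspect ratio, and an area. The parameterization and function name are illustrative assumptions made for this sketch, not a standard convention.

import math

def rectangle_corners(cx, cy, theta, aspect, area):
    """Corners of a rectangle given five degrees of freedom:
    centre (cx, cy), rotation theta, aspect ratio (length/width), and area."""
    width = math.sqrt(area / aspect)   # area = length * width, aspect = length / width
    length = aspect * width
    half_l, half_w = length / 2, width / 2
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    corners = []
    for dx, dy in [(-half_l, -half_w), (half_l, -half_w), (half_l, half_w), (-half_l, half_w)]:
        # rotate the axis-aligned offset by theta, then translate to the centre
        corners.append((cx + dx * cos_t - dy * sin_t, cy + dx * sin_t + dy * cos_t))
    return corners

print(rectangle_corners(0.0, 0.0, 0.0, 2.0, 8.0))  # a 4 x 2 rectangle centred at the origin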
|
38 |
+
|
39 |
+
Two rectangles, neither of which will fit inside the other, are said to be incomparable.
|
40 |
+
|
41 |
+
If a rectangle has length ℓ and width w, then it has area A = ℓw, perimeter P = 2(ℓ + w), and each diagonal has length d = √(ℓ² + w²).
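A minimal sketch of these formulas in Python, assuming the side lengths are given as plain numbers:

import math

def rectangle_metrics(length, width):
    # area, perimeter, and diagonal length of a rectangle with the given sides
    area = length * width
    perimeter = 2 * (length + width)
    diagonal = math.hypot(length, width)  # Pythagorean theorem
    return area, perimeter, diagonal

print(rectangle_metrics(3.0, 4.0))  # (12.0, 14.0, 5.0)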
|
58 |
+
|
59 |
+
The isoperimetric theorem for rectangles states that among all rectangles of a given perimeter, the square has the largest area.
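A quick numerical illustration of this statement (a sketch, not a proof): for a fixed perimeter P, a rectangle with one side x has the other side P/2 - x, and scanning x shows the area x(P/2 - x) peaks at x = P/4, i.e. at the square.

# Scan candidate side lengths for a fixed perimeter and pick the largest area.
P = 20.0
candidates = [(x * (P / 2 - x), x) for x in (P / 2 * i / 1000 for i in range(1, 1000))]
best_area, best_side = max(candidates)
print(best_area, best_side)  # 25.0 at side 5.0: the 5 x 5 square wins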
|
60 |
+
|
61 |
+
The midpoints of the sides of any quadrilateral with perpendicular diagonals form a rectangle.
|
62 |
+
|
63 |
+
A parallelogram with equal diagonals is a rectangle.
|
64 |
+
|
65 |
+
The Japanese theorem for cyclic quadrilaterals[11] states that the incentres of the four triangles determined by the vertices of a cyclic quadrilateral taken three at a time form a rectangle.
|
66 |
+
|
67 |
+
The British flag theorem states that with vertices denoted A, B, C, and D, for any point P on the same plane of a rectangle, AP² + CP² = BP² + DP².[12]
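A small Python check of this identity for an axis-aligned rectangle and an arbitrary point; the corner layout and function names are assumptions made for the sketch.

import random

def squared_distance(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def british_flag_holds(ax, ay, cx, cy, p):
    # Axis-aligned rectangle with opposite corners A=(ax, ay) and C=(cx, cy);
    # B and D are the remaining two corners.
    A, B, C, D = (ax, ay), (cx, ay), (cx, cy), (ax, cy)
    lhs = squared_distance(A, p) + squared_distance(C, p)
    rhs = squared_distance(B, p) + squared_distance(D, p)
    return abs(lhs - rhs) < 1e-9

point = (random.uniform(-10, 10), random.uniform(-10, 10))
print(british_flag_holds(0, 0, 4, 3, point))  # True for any point in the plane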
|
68 |
+
|
69 |
+
For every convex body C in the plane, we can inscribe a rectangle r in C such that a homothetic copy R of r is circumscribed about C, the positive homothety ratio is at most 2, and 0.5 × Area(R) ≤ Area(C) ≤ 2 × Area(r).[13]
|
100 |
+
|
101 |
+
A crossed (self-intersecting) quadrilateral consists of two opposite sides of a non-self-intersecting quadrilateral along with the two diagonals. Similarly, a crossed rectangle is a crossed quadrilateral which consists of two opposite sides of a rectangle along with the two diagonals. It has the same vertex arrangement as the rectangle. It appears as two identical triangles with a common vertex, but the geometric intersection is not considered a vertex.
|
102 |
+
|
103 |
+
A crossed quadrilateral is sometimes likened to a bow tie or butterfly. A three-dimensional rectangular wire frame that is twisted can take the shape of a bow tie. A crossed rectangle is sometimes called an "angular eight".
|
104 |
+
|
105 |
+
The interior of a crossed rectangle can have a polygon density of ±1 in each triangle, dependent upon the winding orientation as clockwise or counterclockwise.
|
106 |
+
|
107 |
+
A crossed rectangle is not equiangular. The sum of its interior angles (two acute and two reflex), as with any crossed quadrilateral, is 720°.[14]
|
108 |
+
|
109 |
+
A rectangle and a crossed rectangle are quadrilaterals with the following properties in common:
|
110 |
+
|
111 |
+
|
112 |
+
|
113 |
+
In spherical geometry, a spherical rectangle is a figure whose four edges are great circle arcs which meet at equal angles greater than 90°. Opposite arcs are equal in length. The surface of a sphere in Euclidean solid geometry is a non-Euclidean surface in the sense of elliptic geometry. Spherical geometry is the simplest form of elliptic geometry.
|
114 |
+
|
115 |
+
In elliptic geometry, an elliptic rectangle is a figure in the elliptic plane whose four edges are elliptic arcs which meet at equal angles greater than 90°. Opposite arcs are equal in length.
|
116 |
+
|
117 |
+
In hyperbolic geometry, a hyperbolic rectangle is a figure in the hyperbolic plane whose four edges are hyperbolic arcs which meet at equal angles less than 90°. Opposite arcs are equal in length.
|
118 |
+
|
119 |
+
The rectangle is used in many periodic tessellation patterns, for example in brickwork.
|
120 |
+
|
121 |
+
A rectangle tiled by squares, rectangles, or triangles is said to be a "squared", "rectangled", or "triangulated" (or "triangled") rectangle respectively. The tiled rectangle is perfect[15][16] if the tiles are similar and finite in number and no two tiles are the same size. If two such tiles are the same size, the tiling is imperfect. In a perfect (or imperfect) triangled rectangle the triangles must be right triangles.
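As a concrete illustration (a commonly cited example, stated here from memory and worth verifying against the literature), the order-nine simple perfect squared rectangle attributed to Moroń measures 32 × 33 and uses nine pairwise unequal squares. The snippet below checks only the necessary condition that the tile areas sum to the rectangle's area.

# Necessary (not sufficient) check for a perfect squared rectangle:
# all tile sizes distinct, and tile areas summing to the rectangle's area.
square_sides = [1, 4, 7, 8, 9, 10, 14, 15, 18]
assert len(set(square_sides)) == len(square_sides)    # all tiles are different sizes
assert sum(s * s for s in square_sides) == 32 * 33    # areas sum to 1056
print("area check passed for the 32 x 33 squared rectangle")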
|
122 |
+
|
123 |
+
A rectangle has commensurable sides if and only if it is tileable by a finite number of unequal squares.[15][17] The same is true if the tiles are unequal isosceles right triangles.
|
124 |
+
|
125 |
+
The tilings of rectangles by other tiles which have attracted the most attention are those by congruent non-rectangular polyominoes, allowing all rotations and reflections. There are also tilings by congruent polyaboloes.
|
en/4947.html.txt
ADDED
@@ -0,0 +1,125 @@
en/4948.html.txt
ADDED
@@ -0,0 +1,240 @@
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
Recycling is the process of converting waste materials into new materials and objects. The recyclability of a material depends on its ability to reacquire the properties it had in its virgin or original state.[1] It is an alternative to "conventional" waste disposal that can save material and help lower greenhouse gas emissions. Recycling can prevent the waste of potentially useful materials and reduce the consumption of fresh raw materials, thereby reducing: energy usage, air pollution (from incineration), and water pollution (from landfilling).
|
6 |
+
|
7 |
+
Recycling is a key component of modern waste reduction and is the third component of the "Reduce, Reuse, and Recycle" waste hierarchy.[2][3] Thus, recycling aims at environmental sustainability by substituting raw material inputs into and redirecting waste outputs out of the economic system.[4] There are some ISO standards related to recycling such as ISO 15270:2008 for plastics waste and ISO 14001:2015 for environmental management control of recycling practice.
|
8 |
+
|
9 |
+
Recyclable materials include many kinds of glass, paper, cardboard, metal, plastic, tires, textiles, batteries, and electronics. The composting or other reuse of biodegradable waste—such as food or garden waste—is also a form of recycling.[5] Materials to be recycled are either delivered to a household recycling center or picked up from curbside bins, then sorted, cleaned, and reprocessed into new materials destined for manufacturing new products.
|
10 |
+
|
11 |
+
In the strictest sense, recycling of a material would produce a fresh supply of the same material—for example, used office paper would be converted into new office paper or used polystyrene foam into new polystyrene. This is accomplished when recycling certain types of materials, such as metal cans, which can become a can again and again, indefinitely, without losing purity in the product.[6] However, this is often difficult or too expensive (compared with producing the same product from raw materials or other sources), so "recycling" of many products or materials involves their reuse in producing different materials (for example, paperboard) instead. Another form of recycling is the salvage of certain materials from complex products, either due to their intrinsic value (such as lead from car batteries, or gold from printed circuit boards), or due to their hazardous nature (e.g., removal and reuse of mercury from thermometers and thermostats).
|
12 |
+
|
13 |
+
Recycling has been a common practice for most of human history, with recorded advocates as far back as Plato in the fourth century BC.[citation needed] During periods when resources were scarce and hard to come by, archaeological studies of ancient waste dumps show less household waste (such as ash, broken tools, and pottery)—implying more waste was being recycled in the absence of new material.[7]
|
14 |
+
|
15 |
+
In pre-industrial times, there is evidence of scrap bronze and other metals being collected in Europe and melted down for continuous reuse.[8] Paper recycling was first recorded in 1031 when Japanese shops sold repulped paper.[9][10] In Britain, dust and ash from wood and coal fires were collected by "dustmen" and downcycled as a base material for brick making. The main driver for these types of recycling was the economic advantage of obtaining recycled feedstock instead of acquiring virgin material, as well as a lack of public waste removal in ever more densely populated areas.[7] In 1813, Benjamin Law developed the process of turning rags into "shoddy" and "mungo" wool in Batley, Yorkshire. This material combined recycled fibers with virgin wool.[11] The West Yorkshire shoddy industry in towns such as Batley and Dewsbury lasted from the early 19th century to at least 1914.
|
16 |
+
|
17 |
+
Industrialization spurred demand for affordable materials; aside from rags, ferrous scrap metals were coveted as they were cheaper to acquire than virgin ore. Railroads both purchased and sold scrap metal in the 19th century, and the growing steel and automobile industries purchased scrap in the early 20th century. Many secondary goods were collected, processed and sold by peddlers who scoured dumps and city streets for discarded machinery, pots, pans, and other sources of metal. By World War I, thousands of such peddlers roamed the streets of American cities, taking advantage of market forces to recycle post-consumer materials back into industrial production.[12]
|
18 |
+
|
19 |
+
Beverage bottles were recycled with a refundable deposit at some drink manufacturers in Great Britain and Ireland around 1800, notably Schweppes.[13] An official recycling system with refundable deposits was established in Sweden for bottles in 1884 and aluminum beverage cans in 1982; the law led to a recycling rate for beverage containers of 84–99 percent depending on type, and a glass bottle can be refilled over 20 times on average.[14]
|
20 |
+
|
21 |
+
New chemical industries created in the late 19th century both invented new materials (e.g. Bakelite [1907]) and promised to transform valueless materials into valuable ones. Proverbially, you could not make a silk purse of a sow's ear—until the US firm Arthur D. Little published in 1921 "On the Making of Silk Purses from Sows' Ears", its research proving that when "chemistry puts on overalls and gets down to business ... new values appear. New and better paths are opened to reach the goals desired."[15]
|
22 |
+
|
23 |
+
Recycling (or "salvage", as it was then usually known) was a major issue for governments throughout World War II. Financial constraints and significant material shortages due to war efforts made it necessary for countries to reuse goods and recycle materials.[16] These resource shortages caused by the world wars, and other such world-changing occurrences, greatly encouraged recycling.[17] The struggles of war claimed much of the material resources available, leaving little for the civilian population.[16] It became necessary for most homes to recycle their waste, as recycling offered an extra source of materials allowing people to make the most of what was available to them. Recycling household materials meant more resources for war efforts and a better chance of victory.[16] Massive government promotion campaigns, such as the National Salvage Campaign in Britain and the Salvage for Victory campaign in the United States, were carried out on the home front in every combative nation, urging citizens to donate metal, paper, rags, and rubber as a matter of patriotism.
|
24 |
+
|
25 |
+
A considerable investment in recycling occurred in the 1970s, due to rising energy costs.[18] Recycling aluminium uses only 5% of the energy required by virgin production; glass, paper and other metals have less dramatic but very significant energy savings when recycled feedstock is used.[19]
|
26 |
+
|
27 |
+
Although consumer electronics such as the television have been popular since the 1920s, recycling of them was almost unheard of until early 1991.[20] The first electronic waste recycling scheme was implemented in Switzerland, beginning with collection of old refrigerators but gradually expanding to cover all devices.[21] After these schemes were set up, many countries did not have the capacity to deal with the sheer quantity of e-waste they generated or its hazardous nature. They began to export the problem to developing countries without enforced environmental legislation. This is cheaper, as recycling computer monitors in the United States costs 10 times more than in China. Demand in Asia for electronic waste began to grow when scrap yards found that they could extract valuable substances such as copper, silver, iron, silicon, nickel, and gold, during the recycling process.[22] The 2000s saw a large increase in both the sale of electronic devices and their growth as a waste stream: in 2002, e-waste grew faster than any other type of waste in the EU.[23] This caused investment in modern, automated facilities to cope with the influx of redundant appliances, especially after strict laws were implemented in 2003.[24][25][26][27]
|
28 |
+
|
29 |
+
As of 2014, the European Union had about 50% of world share of the waste and recycling industries, with over 60,000 companies employing 500,000 persons, with a turnover of €24 billion.[28] Countries have to reach recycling rates of at least 50%, while the lead countries were around 65% and the EU average was 39% as of 2013.[29]
|
30 |
+
The EU average has been rising steadily, to 45% in 2015.[30][31]
|
31 |
+
|
32 |
+
In 2018, changes in the recycling market have sparked a global "crisis" in the industry. On 31 December 2017, China announced its "National Sword" policy, setting new standards for imports of recyclable material and banning materials that were deemed too "dirty" or "hazardous". The new policy caused drastic disruptions in the global market in recycling and reduced the prices of scrap plastic and low-grade paper. Exports of recyclable materials from G7 countries to China dropped dramatically, with many exports shifting to countries in southeast Asia. The crisis generated significant concern about the practices and environmental sustainability of the recycling industry. The abrupt shift caused countries to accept more recyclable materials than they could process, raising fundamental questions about shipping recycling waste from economically developed countries to countries with few environmental regulations—a practice that predated the crisis.[32]
|
33 |
+
|
34 |
+
For a recycling program to work, having a large, stable supply of recyclable material is crucial. Three legislative options have been used to create such a supply: mandatory recycling collection, container deposit legislation, and refuse bans. Mandatory collection laws set recycling targets for cities to aim for, usually in the form that a certain percentage of a material must be diverted from the city's waste stream by a target date. The city is then responsible for working to meet this target.[5]
|
35 |
+
|
36 |
+
Container deposit legislation involves offering a refund for the return of certain containers, typically glass, plastic, and metal. When a product in such a container is purchased, a small surcharge is added to the price. This surcharge can be reclaimed by the consumer if the container is returned to a collection point. These programs have been very successful, often resulting in an 80 percent recycling rate.[33] Despite such good results, the shift in collection costs from local government to industry and consumers has created strong opposition to the creation of such programs in some areas.[5] A variation on this is where the manufacturer bears responsibility for the recycling of their goods. In the European Union, the WEEE Directive requires producers of consumer electronics to reimburse the recyclers' costs.[34]
|
37 |
+
|
38 |
+
An alternative way to increase the supply of recyclates is to ban the disposal of certain materials as waste, often including used oil, old batteries, tires, and garden waste. One aim of this method is to create a viable economy for proper disposal of banned products. Care must be taken that enough of these recycling services exist, or such bans simply lead to increased illegal dumping.[5]
|
39 |
+
|
40 |
+
Legislation has also been used to increase and maintain a demand for recycled materials. Four methods of such legislation exist: minimum recycled content mandates, utilization rates, procurement policies, and recycled product labeling.[5]
|
41 |
+
|
42 |
+
Both minimum recycled content mandates and utilization rates increase demand directly by forcing manufacturers to include recycling in their operations. Content mandates specify that a certain percentage of a new product must consist of recycled material. Utilization rates are a more flexible option: industries are permitted to meet the recycling targets at any point of their operation or even contract recycling out in exchange for tradeable credits. Opponents to both of these methods point to the large increase in reporting requirements they impose, and claim that they rob the industry of necessary flexibility.[5][35]
|
43 |
+
|
44 |
+
Governments have used their own purchasing power to increase recycling demand through what are called "procurement policies". These policies are either "set-asides", which reserve a certain amount of spending solely towards recycled products, or "price preference" programs which provide a larger budget when recycled items are purchased. Additional regulations can target specific cases: in the United States, for example, the Environmental Protection Agency mandates the purchase of oil, paper, tires and building insulation from recycled or re-refined sources whenever possible.[5]
|
45 |
+
|
46 |
+
The final government regulation towards increased demand is recycled product labeling. When producers are required to label their packaging with amount of recycled material in the product (including the packaging), consumers are better able to make educated choices. Consumers with sufficient buying power can then choose more environmentally conscious options, prompt producers to increase the amount of recycled material in their products, and indirectly increase demand. Standardized recycling labeling can also have a positive effect on supply of recyclates if the labeling includes information on how and where the product can be recycled.[5]
|
47 |
+
|
48 |
+
"Recyclate" is a raw material that is sent to, and processed in a waste recycling plant or materials recovery facility which will be used to form new products.[36] The material is collected in various methods and delivered to a facility where it undergoes re-manufacturing so that it can be used in the production of new materials or products. For example, plastic bottles that are collected can be re-used and made into plastic pellets, a new product.[37]
|
49 |
+
|
50 |
+
The quality of recyclates is recognized as one of the principal challenges that needs to be addressed for the success of a long-term vision of a green economy and achieving zero waste. Recyclate quality is generally referring to how much of the raw material is made up of target material compared to the amount of non-target material and other non-recyclable material.[38] For example, steel and metal are materials with a higher recyclate quality. It's estimated that two-thirds of all new steel manufactured comes from recycled steel.[39] Only target material is likely to be recycled, so a higher amount of non-target and non-recyclable material will reduce the quantity of recycling product.[38] A high proportion of non-target and non-recyclable material can make it more difficult for re-processors to achieve "high-quality" recycling. If the recyclate is of poor quality, it is more likely to end up being down-cycled or, in more extreme cases, sent to other recovery options or landfilled.[38] For example, to facilitate the re-manufacturing of clear glass products there are tight restrictions for colored glass going into the re-melt process. Another example is the downcycling of plastic, in which products such as plastic food packaging are often downcycled into lower quality products, and do not get recycled into the same plastic food packaging.
|
51 |
+
|
52 |
+
The quality of recyclate not only supports high-quality recycling, but it can also deliver significant environmental benefits by reducing, reusing and keeping products out of landfills.[38] High-quality recycling can help support growth in the economy by maximizing the economic value of the waste material collected.[38] Higher income levels from the sale of quality recyclates can return value which can be significant to local governments, households, and businesses.[38] Pursuing high-quality recycling can also provide consumer and business confidence in the waste and resource management sector and may encourage investment in that sector.
|
53 |
+
|
54 |
+
There are many actions along the recycling supply chain that can influence and affect the material quality of recyclate.[40] It begins with the waste producers who place non-target and non-recyclable wastes in recycling collection. This can affect the quality of final recyclate streams or require further efforts to discard those materials at later stages in the recycling process.[40] The different collection systems can result in different levels of contamination. Depending on which materials are collected together, extra effort may be required to sort the material back into separate streams, which can significantly reduce the quality of the final product.[40] Transportation and the compaction of materials can make it more difficult to separate material back into separate waste streams. Sorting facilities are not one hundred per cent effective in separating materials, despite improvements in technology, so some loss of recyclate quality occurs.[40] The storage of materials outside, where the product can become wet, can cause problems for re-processors. Reprocessing facilities may require further sorting steps to further reduce the amount of non-target and non-recyclable material.[40] Each action along the recycling path plays a part in the quality of recyclate.
|
55 |
+
|
56 |
+
The Recyclate Quality Action Plan of Scotland sets out a number of proposed actions that the Scottish Government would like to take forward in order to drive up the quality of the materials being collected for recycling and sorted at materials recovery facilities before being exported or sold on to the reprocessing market.[40]
|
57 |
+
|
58 |
+
The plan's objectives are to:[41]
|
59 |
+
|
60 |
+
The plan focuses on three key areas, with fourteen actions which were identified to increase the quality of materials collected, sorted and presented to the processing market in Scotland.[41]
|
61 |
+
|
62 |
+
The three areas of focus are:[40]
|
63 |
+
|
64 |
+
A number of different systems have been implemented to collect recyclates from the general waste stream. These systems lie along the spectrum of trade-off between public convenience and government ease and expense. The three main categories of collection are "drop-off centers", "buy-back centers", and "curbside collection".[5]
|
65 |
+
|
66 |
+
Curbside collection encompasses many subtly different systems, which differ mostly on where in the process the recyclates are sorted and cleaned. The main categories are mixed waste collection, commingled recyclables, and source separation.[5] A waste collection vehicle generally picks up the waste.
|
67 |
+
|
68 |
+
At one end of the spectrum is mixed waste collection, in which all recyclates are collected mixed in with the rest of the waste, and the desired material is then sorted out and cleaned at a central sorting facility. This results in a large amount of recyclable waste, paper especially, being too soiled to reprocess, but has advantages as well: the city need not pay for a separate collection of recyclates and no public education is needed. Any changes to which materials are recyclable is easy to accommodate as all sorting happens in a central location.[5]
|
69 |
+
|
70 |
+
In a commingled or single-stream system, all recyclables for collection are mixed but kept separate from other waste. This greatly reduces the need for post-collection cleaning but does require public education on what materials are recyclable.[5][8]
|
71 |
+
|
72 |
+
Source separation is the other extreme, where each material is cleaned and sorted prior to collection. This method requires the least post-collection sorting and produces the purest recyclates, but incurs additional operating costs for collection of each separate material. An extensive public education program is also required, which must be successful if recyclate contamination is to be avoided.[5] In Oregon, USA, its environmental authority Oregon DEQ surveyed multi-family property managers and about half of them reported problems including contamination of recyclables due to trespassers such as transients gaining access to the collection areas.[42]
|
73 |
+
|
74 |
+
Source separation used to be the preferred method due to the high sorting costs incurred by commingled (mixed waste) collection. However, advances in sorting technology have lowered this overhead substantially. Many areas which had developed source separation programs have since switched to what is called co-mingled collection.[8]
|
75 |
+
|
76 |
+
Buy-back centers differ in that the cleaned recyclates are purchased, thus providing a clear incentive for use and creating a stable supply. The post-processed material can then be sold. If this is profitable, greenhouse gas emissions are avoided; if unprofitable, they are increased. Government subsidies are necessary to make buy-back centres a viable enterprise. In 1993, according to the U.S. National Waste & Recycling Association, it cost on average $50 to process a ton of material, which could be resold for $30.[5]
|
77 |
+
|
78 |
+
In the US, the value per ton of mixed recyclables was $180 in 2011, $80 in 2015, and $100 in 2017.[43]
|
79 |
+
|
80 |
+
In 2017, glass was essentially valueless because of the low cost of sand, its major component, and low oil costs thwarted plastic recycling.[43]
|
81 |
+
|
82 |
+
In 2017, Napa, California was reimbursed about 20% of its costs in recycling.[43]
|
83 |
+
|
84 |
+
Drop-off centers require the waste producer to carry the recyclates to a central location, either an installed or mobile collection station or the reprocessing plant itself. They are the easiest type of collection to establish but suffer from low and unpredictable throughput.
|
85 |
+
|
86 |
+
For some waste materials such as plastic, recent technical devices called recyclebots[44] enable a form of distributed recycling. Preliminary life-cycle analysis (LCA) indicates that such distributed recycling of HDPE to make filament of 3D printers in rural regions is energetically favorable to either using virgin resin or conventional recycling processes because of reductions in transportation energy.[45][46]
|
87 |
+
|
88 |
+
Once commingled recyclates are collected and delivered to a materials recovery facility, the different types of materials must be sorted. This is done in a series of stages, many of which involve automated processes such that a truckload of material can be fully sorted in less than an hour.[8] Some plants can now sort the materials automatically, known as single-stream recycling. Automatic sorting may be aided by robotics and machine-learning.[47][48] In plants, a variety of materials is sorted such as paper, different types of plastics, glass, metals, food scraps, and most types of batteries.[49] A 30 percent increase in recycling rates has been seen in the areas where these plants exist.[50] In the United States, there are over 300 materials recovery facilities.[51]
|
89 |
+
|
90 |
+
Initially, the commingled recyclates are removed from the collection vehicle and placed on a conveyor belt spread out in a single layer. Large pieces of corrugated fiberboard and plastic bags are removed by hand at this stage, as they can cause later machinery to jam.[8]
|
91 |
+
|
92 |
+
Next, automated machinery such as disk screens and air classifiers separate the recyclates by weight, splitting lighter paper and plastic from heavier glass and metal. Cardboard is removed from the mixed paper and the most common types of plastic, PET (#1) and HDPE (#2), are collected. This separation is usually done by hand but has become automated in some sorting centers: a spectroscopic scanner is used to differentiate between different types of paper and plastic based on the absorbed wavelengths, and subsequently divert each material into the proper collection channel.[8]
|
93 |
+
|
94 |
+
Strong magnets are used to separate out ferrous metals, such as iron, steel, and tin cans. Non-ferrous metals are ejected by magnetic eddy currents in which a rotating magnetic field induces an electric current around the aluminum cans, which in turn creates a magnetic eddy current inside the cans. This magnetic eddy current is repulsed by a large magnetic field, and the cans are ejected from the rest of the recyclate stream.[8]
|
95 |
+
|
96 |
+
Finally, glass is sorted according to its color: brown, amber, green, or clear. It may either be sorted by hand,[8] or via an automated machine that uses colored filters to detect different colors. Glass fragments smaller than 10 millimetres (0.39 in) across cannot be sorted automatically, and are mixed together as "glass fines".[52]
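The sorting sequence described above can be summarized as a simple dispatch over material categories. The Python sketch below is purely illustrative: the category names, stages, and data layout are assumptions for the example, not the control logic of any real materials recovery facility.

def sort_recyclate(items):
    """Route incoming items into output streams, mirroring the stages above."""
    streams = {"paper": [], "plastic": [], "ferrous": [], "non_ferrous": [], "glass": [], "reject": []}
    for item in items:
        kind = item["kind"]
        if kind in ("paper", "cardboard"):
            streams["paper"].append(item)        # light fraction from screens and air classifiers
        elif kind in ("PET", "HDPE"):
            streams["plastic"].append(item)      # identified e.g. by spectroscopic scanning
        elif kind in ("steel", "tin"):
            streams["ferrous"].append(item)      # pulled out by magnets
        elif kind == "aluminium":
            streams["non_ferrous"].append(item)  # ejected by eddy currents
        elif kind == "glass":
            streams["glass"].append({**item, "colour": item.get("colour", "mixed")})  # colour-sorted last
        else:
            streams["reject"].append(item)
    return streams

print(sort_recyclate([{"kind": "PET"}, {"kind": "steel"}, {"kind": "glass", "colour": "green"}]))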
|
97 |
+
|
98 |
+
This process of recycling as well as reusing the recycled material has proven advantageous because it reduces amount of waste sent to landfills, conserves natural resources, saves energy, reduces greenhouse gas emissions, and helps create new jobs. Recycled materials can also be converted into new products that can be consumed again, such as paper, plastic, and glass.[53]
|
99 |
+
|
100 |
+
The City and County of San Francisco's Department of the Environment is attempting to achieve a citywide goal of generating zero waste by 2020.[54] San Francisco's refuse hauler, Recology, operates an effective recyclables sorting facility which helped the city reach a record-breaking diversion rate of 80%.[55]
|
101 |
+
|
102 |
+
Although many government programs are concentrated on recycling at home, 64% of waste in the United Kingdom is generated by industry.[56] The focus of many recycling programs done by industry is the cost–effectiveness of recycling. The ubiquitous nature of cardboard packaging makes cardboard a commonly recycled waste product by companies that deal heavily in packaged goods, like retail stores, warehouses, and distributors of goods. Other industries deal in niche or specialized products, depending on the nature of the waste materials that are present.
|
103 |
+
|
104 |
+
The glass, lumber, wood pulp and paper manufacturers all deal directly in commonly recycled materials; however, old rubber tires may be collected and recycled by independent tire dealers for a profit.
|
105 |
+
|
106 |
+
Levels of metals recycling are generally low. In 2010, the International Resource Panel, hosted by the United Nations Environment Programme (UNEP) published reports on metal stocks that exist within society[57] and their recycling rates.[57] The Panel reported that the increase in the use of metals during the 20th and into the 21st century has led to a substantial shift in metal stocks from below ground to use in applications within society above ground. For example, the in-use stock of copper in the USA grew from 73 to 238 kg per capita between 1932 and 1999.
|
107 |
+
|
108 |
+
The report authors observed that, as metals are inherently recyclable, the metal stocks in society can serve as huge mines above ground (the term "urban mining" has been coined with this idea in mind[58]). However, they found that the recycling rates of many metals are very low. The report warned that the recycling rates of some rare metals used in applications such as mobile phones, battery packs for hybrid cars and fuel cells, are so low that unless future end-of-life recycling rates are dramatically stepped up these critical metals will become unavailable for use in modern technology.
|
109 |
+
|
110 |
+
The military recycles some metals. The U.S. Navy's Ship Disposal Program uses ship breaking to reclaim the steel of old vessels. Ships may also be sunk to create an artificial reef. Uranium is a very dense metal that has qualities superior to lead and titanium for many military and industrial uses. The uranium left over from processing it into nuclear weapons and fuel for nuclear reactors is called depleted uranium, and is used by all branches of the U.S. military for the development of such things as armour-piercing shells and shielding.
|
111 |
+
|
112 |
+
The construction industry may recycle concrete and old road surface pavement, selling their waste materials for profit.
|
113 |
+
|
114 |
+
Some industries, like the renewable energy industry and solar photovoltaic technology, in particular, are being proactive in setting up recycling policies even before there is considerable volume to their waste streams, anticipating future demand during their rapid growth.[59]
|
115 |
+
|
116 |
+
Recycling of plastics is more difficult, as most programs are not able to reach the necessary level of quality. Recycling of PVC often results in downcycling of the material, which means only products of lower quality standard can be made with the recycled material. A new approach which allows an equal level of quality is the Vinyloop process. It was used after the London Olympics 2012 to fulfill the PVC Policy.[60]
|
117 |
+
|
118 |
+
E-waste is a growing problem, accounting for 20–50 million metric tons of global waste per year according to the EPA. It is also the fastest growing waste stream in the EU.[23] Many recyclers do not recycle e-waste responsibly. After the cargo barge Khian Sea dumped 14,000 metric tons of toxic ash in Haiti, the Basel Convention was formed to stem the flow of hazardous substances into poorer countries. They created the e-Stewards certification to ensure that recyclers are held to the highest standards for environmental responsibility and to help consumers identify responsible recyclers. This works alongside other prominent legislation, such as the Waste Electrical and Electronic Equipment Directive of the EU and the United States National Computer Recycling Act, to prevent poisonous chemicals from entering waterways and the atmosphere.
|
119 |
+
|
120 |
+
In the recycling process, television sets, monitors, cell phones, and computers are typically tested for reuse and repaired. If broken, they may be disassembled for parts still having high value if labor is cheap enough. Other e-waste is shredded to pieces roughly 10 centimetres (3.9 in) in size, and manually checked to separate out toxic batteries and capacitors which contain poisonous metals. The remaining pieces are further shredded to 10 millimetres (0.39 in) particles and passed under a magnet to remove ferrous metals. An eddy current ejects non-ferrous metals, which are sorted by density either by a centrifuge or vibrating plates. Precious metals can be dissolved in acid, sorted, and smelted into ingots. The remaining glass and plastic fractions are separated by density and sold to re-processors. Television sets and monitors must be manually disassembled to remove lead from CRTs or the mercury backlight from LCDs.[61][62][63]
|
121 |
+
|
122 |
+
Plastic recycling is the process of recovering scrap or waste plastic and reprocessing the material into useful products, sometimes completely different in form from their original state. For instance, this could mean melting down soft drink bottles and then casting them as plastic chairs and tables.[64] For some types of plastic, the same piece of plastic can only be recycled about 2–3 times before its quality decreases to the point where it can no longer be used.[6]
|
123 |
+
|
124 |
+
Some plastics are remelted to form new plastic objects; for example, PET water bottles can be converted into polyester destined for clothing. A disadvantage of this type of recycling is that the molecular weight of the polymer can change further and the levels of unwanted substances in the plastic can increase with each remelt.[citation needed]
|
125 |
+
|
126 |
+
A commercial-built recycling facility was sent to the International Space Station in late 2019. The facility will take in plastic waste and unneeded plastic parts and physically convert them into spools of feedstock for the space station additive manufacturing facility used for in-space 3D printing.[65]
|
127 |
+
|
128 |
+
For some polymers, it is possible to convert them back into monomers, for example, PET can be treated with an alcohol and a catalyst to form a dialkyl terephthalate. The terephthalate diester can be used with ethylene glycol to form a new polyester polymer, thus making it possible to use the pure polymer again. In 2019, Eastman Chemical Company announced initiatives of methanolysis and syngas designed to handle a greater variety of used material.[66]
|
129 |
+
|
130 |
+
Another process involves the conversion of assorted polymers into petroleum by a much less precise thermal depolymerization process. Such a process would be able to accept almost any polymer or mix of polymers, including thermoset materials such as vulcanized rubber tires and the biopolymers in feathers and other agricultural waste. Like natural petroleum, the chemicals produced can be used as fuels or as feedstock. A RESEM Technology[67] plant of this type in Carthage, Missouri, US, uses turkey waste as input material. Gasification is a similar process but is not technically recycling since polymers are not likely to become the result.
|
131 |
+
Plastic pyrolysis can convert petroleum-based waste streams such as plastics into quality fuels and carbons. A list of plastic raw materials suitable for pyrolysis is given below:
|
132 |
+
|
133 |
+
The (ideal) recycling process can be differentiated into three loops, one for manufacture (production-waste recycling) and two for disposal of the product (product and material recycling).[68]
|
134 |
+
|
135 |
+
The product's manufacturing phase, which consists of material processing and fabrication, forms the production-waste recycling loop. Industrial waste materials are fed back into, and reused in, the same production process.
|
136 |
+
|
137 |
+
The product's disposal process requires two recycling loops: product recycling and material recycling.[68]
|
138 |
+
The product or product parts are reused in the product recycling phase. This happens in one of two ways: the product is used retaining the product functionality ("reuse") or the product continues to be used but with altered functionality ("further use").[68] The product design is unmodified, or only slightly modified, in both scenarios.
|
139 |
+
|
140 |
+
Product disassembly requires material recycling where product materials are recovered and recycled. Ideally, the materials are processed so they can flow back into the production process.[68]
|
141 |
+
|
142 |
+
In order to meet recyclers' needs while providing manufacturers a consistent, uniform system, a coding system was developed. The recycling code for plastics was introduced in 1988 by the plastics industry through the Society of the Plastics Industry.[69] Because municipal recycling programs traditionally have targeted packaging—primarily bottles and containers—the resin coding system offered a means of identifying the resin content of bottles and containers commonly found in the residential waste stream.[70]
|
143 |
+
|
144 |
+
Plastic products are printed with numbers 1–7 depending on the type of resin. Type 1 (polyethylene terephthalate) is commonly found in soft drink and water bottles. Type 2 (high-density polyethylene) is found in most hard plastics such as milk jugs, laundry detergent bottles, and some dishware. Type 3 (polyvinyl chloride) includes items such as shampoo bottles, shower curtains, hula hoops, credit cards, wire jacketing, medical equipment, siding, and piping. Type 4 (low-density polyethylene) is found in shopping bags, squeezable bottles, tote bags, clothing, furniture, and carpet. Type 5 is polypropylene and makes up syrup bottles, straws, Tupperware, and some automotive parts. Type 6 is polystyrene and makes up meat trays, egg cartons, clamshell containers, and compact disc cases. Type 7 includes all other plastics such as bulletproof materials, 3- and 5-gallon water bottles, cell phone and tablet frames, safety goggles and sunglasses.[71] Having a recycling code or the chasing arrows logo on a material is not an automatic indicator that a material is recyclable but rather an explanation of what the material is. Types 1 and 2 are the most commonly recycled.
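Because the resin identification codes are a small fixed mapping, they are easy to restate as a lookup table. The Python snippet below simply re-encodes the code-to-resin pairing given in the paragraph above; it is a convenience sketch, not an official reference.

# Resin identification codes 1-7 as described above.
RESIN_CODES = {
    1: "PET (polyethylene terephthalate): soft drink and water bottles",
    2: "HDPE (high-density polyethylene): milk jugs, detergent bottles",
    3: "PVC (polyvinyl chloride): shampoo bottles, piping, wire jacketing",
    4: "LDPE (low-density polyethylene): shopping bags, squeezable bottles",
    5: "PP (polypropylene): syrup bottles, straws, some automotive parts",
    6: "PS (polystyrene): meat trays, egg cartons, clamshell containers",
    7: "Other: all remaining plastics, e.g. 3- and 5-gallon water bottles",
}
COMMONLY_RECYCLED = {1, 2}  # per the text, types 1 and 2 are the most commonly recycled

def describe(code):
    note = "commonly recycled" if code in COMMONLY_RECYCLED else "less commonly recycled"
    return f"Type {code}: {RESIN_CODES[code]} ({note})"

print(describe(1))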
|
145 |
+
|
146 |
+
There is some debate over whether recycling is economically efficient. According to a Natural Resources Defense Council study, waste collection and landfill disposal creates less than one job per 1,000 tons of waste material managed; in contrast, the collection, processing, and manufacturing of recycled materials creates 6–13 or more jobs per 1,000 tons.[75] However, the cost effectiveness of creating the additional jobs remains unproven. According to the U.S. Recycling Economic Informational Study, there are over 50,000 recycling establishments that have created over a million jobs in the US.[76]
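The NRDC job figures quoted above scale linearly with tonnage, so a rough comparison is simple arithmetic. The numbers below only re-apply the cited rates to a hypothetical tonnage; actual job creation varies by community.

# Rough arithmetic using the NRDC figures cited above.
TONS_PER_YEAR = 10_000                        # hypothetical amount of waste handled

landfill_jobs  = TONS_PER_YEAR / 1_000 * 1    # "less than one job per 1,000 tons"
recycling_low  = TONS_PER_YEAR / 1_000 * 6    # low end of 6-13 jobs per 1,000 tons
recycling_high = TONS_PER_YEAR / 1_000 * 13   # high end

print(f"Landfill disposal: at most ~{landfill_jobs:.0f} jobs")
print(f"Recycling:         ~{recycling_low:.0f} to {recycling_high:.0f} jobs")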
|
147 |
+
|
148 |
+
Two years after New York City declared that implementing recycling programs would be "a drain on the city", New York City leaders realized that an efficient recycling system could save the city over $20 million.[77] Municipalities often see fiscal benefits from implementing recycling programs, largely due to the reduced landfill costs.[78] According to The Economist, a study conducted by the Technical University of Denmark found that in 83 percent of cases, recycling is the most efficient method to dispose of household waste.[8][19] However, a 2004 assessment by the Danish Environmental Assessment Institute concluded that incineration was the most effective method for disposing of drink containers, even aluminium ones.[79]
|
149 |
+
|
150 |
+
Fiscal efficiency is separate from economic efficiency. Economic analysis of recycling does not include what economists call externalities, which are unpriced costs and benefits that accrue to individuals outside of private transactions. Examples include: decreased air pollution and greenhouse gases from incineration, reduced hazardous waste leaching from landfills, reduced energy consumption, and reduced waste and resource consumption, which leads to a reduction in environmentally damaging mining and timber activity. About 4,000 minerals are known, of which only a few hundred are relatively common.[80] Known reserves of phosphorus will be exhausted within the next 100 years at current rates of usage.[81][82] Without mechanisms such as taxes or subsidies to internalize externalities, businesses may ignore them despite the costs imposed on society.[citation needed] To make such nonfiscal benefits economically relevant, advocates have pushed for legislative action to increase the demand for recycled materials.[5] The United States Environmental Protection Agency (EPA) has concluded in favor of recycling, saying that recycling efforts reduced the country's carbon emissions by a net 49 million metric tonnes in 2005.[8] In the United Kingdom, the Waste and Resources Action Programme stated that Great Britain's recycling efforts reduce CO2 emissions by 10–15 million tonnes a year.[8] Recycling is more efficient in densely populated areas, as there are economies of scale involved.[5]
|
151 |
+
|
152 |
+
Certain requirements must be met for recycling to be economically feasible and environmentally effective. These include an adequate source of recyclates, a system to extract those recyclates from the waste stream, a nearby factory capable of reprocessing the recyclates, and a potential demand for the recycled products. These last two requirements are often overlooked—without both an industrial market for production using the collected materials and a consumer market for the manufactured goods, recycling is incomplete and in fact only "collection".[5]
|
153 |
+
|
154 |
+
Free-market economist Julian Simon remarked "There are three ways society can organize waste disposal: (a) commanding, (b) guiding by tax and subsidy, and (c) leaving it to the individual and the market". These principles appear to divide economic thinkers today.[83]
|
155 |
+
|
156 |
+
Frank Ackerman favours a high level of government intervention to provide recycling services. He believes that recycling's benefit cannot be effectively quantified by traditional laissez-faire economics. Allen Hershkowitz supports intervention, saying that it is a public service equal to education and policing. He argues that manufacturers should shoulder more of the burden of waste disposal.[83]
|
157 |
+
|
158 |
+
Paul Calcott and Margaret Walls advocate the second option. A deposit refund scheme and a small refuse charge would encourage recycling but not at the expense of fly-tipping. Thomas C. Kinnaman concludes that a landfill tax would force consumers, companies and councils to recycle more.[83]
|
159 |
+
|
160 |
+
Most free-market thinkers detest subsidy and intervention, arguing that they waste resources. Terry Anderson and Donald Leal think that all recycling programmes should be privately operated, and therefore would only operate if the money saved by recycling exceeds its costs. Daniel K. Benjamin argues that it wastes people's resources and lowers the wealth of a population.[83]
|
161 |
+
|
162 |
+
The National Waste & Recycling Association (NWRA) reported in May 2015 that recycling and waste made a $6.7 billion economic impact in Ohio, U.S., and employed 14,000 people.[84]
|
163 |
+
|
164 |
+
Certain countries trade in unprocessed recyclates. Some have complained that the ultimate fate of recyclates sold to another country is unknown and they may end up in landfills instead of being reprocessed. According to one report, in America, 50–80 percent of computers destined for recycling are actually not recycled.[85][86] There are reports of illegal-waste imports to China being dismantled and recycled solely for monetary gain, without consideration for workers' health or environmental damage. Although the Chinese government has banned these practices, it has not been able to eradicate them.[87] In 2008, the prices of recyclable waste plummeted before rebounding in 2009. Cardboard averaged about £53/tonne from 2004 to 2008, dropped to £19/tonne, and then went up to £59/tonne in May 2009. PET plastic averaged about £156/tonne, dropped to £75/tonne and then moved up to £195/tonne in May 2009.[88]
|
165 |
+
|
166 |
+
Certain regions have difficulty using or exporting as much of a material as they recycle. This problem is most prevalent with glass: both Britain and the U.S. import large quantities of wine bottled in green glass. Though much of this glass is sent to be recycled, outside the American Midwest there is not enough wine production to use all of the reprocessed material. The extra must be downcycled into building materials or re-inserted into the regular waste stream.[5][8]
|
167 |
+
|
168 |
+
Similarly, the northwestern United States has difficulty finding markets for recycled newspaper, given the large number of pulp mills in the region as well as the proximity to Asian markets. In other areas of the U.S., however, demand for used newsprint has seen wide fluctuation.[5]
|
169 |
+
|
170 |
+
In some U.S. states, a program called RecycleBank pays people to recycle, receiving money from local municipalities for the reduction in landfill space which must be purchased. It uses a single stream process in which all material is automatically sorted.[89]
|
171 |
+
|
172 |
+
|
173 |
+
|
174 |
+
Critics[who?] dispute the net economic and environmental benefits of recycling over its costs, and suggest that proponents of recycling often make matters worse and suffer from confirmation bias. Specifically, critics argue that the costs and energy used in collection and transportation detract from (and outweigh) the costs and energy saved in the production process; also that the jobs produced by the recycling industry can be a poor trade for the jobs lost in logging, mining, and other industries associated with production; and that materials such as paper pulp can only be recycled a few times before material degradation prevents further recycling.[90]
|
175 |
+
|
176 |
+
Much of the difficulty inherent in recycling comes from the fact that most products are not designed with recycling in mind. The concept of sustainable design aims to solve this problem, and was laid out in the book Cradle to Cradle: Remaking the Way We Make Things by architect William McDonough and chemist Michael Braungart.[91] They suggest that every product (and all packaging it requires) should have a complete "closed-loop" cycle mapped out for each component—a way in which every component will either return to the natural ecosystem through biodegradation or be recycled indefinitely.[8][92]
|
177 |
+
|
178 |
+
Complete recycling is impossible from a practical standpoint. In summary, substitution and recycling strategies only delay the depletion of non-renewable stocks and therefore may buy time in the transition to true or strong sustainability, which ultimately is only guaranteed in an economy based on renewable resources.[93]:21
|
179 |
+
|
180 |
+
While recycling diverts waste from entering directly into landfill sites, current recycling misses the dispersive components. Critics of recycling believe that complete recycling is impracticable, as highly dispersed wastes become so diluted that the energy needed for their recovery becomes increasingly excessive.
|
181 |
+
|
182 |
+
As with environmental economics, care must be taken to ensure a complete view of the costs and benefits involved. For example, paperboard packaging for food products is more easily recycled than most plastic, but is heavier to ship and may result in more waste from spoilage.[94]
|
183 |
+
|
184 |
+
|
185 |
+
|
186 |
+
The amount of energy saved through recycling depends upon the material being recycled and the type of energy accounting that is used. Correct accounting for this saved energy can be accomplished with life-cycle analysis using real energy values, and in addition, exergy, which is a measure of how much useful energy can be used. In general, it takes far less energy to produce a unit mass of recycled materials than it does to make the same mass of virgin materials.[95][96][97]
|
187 |
+
|
188 |
+
Some scholars use emergy (spelled with an m) analysis, which, for example, budgets the amount of energy of one kind (exergy) that is required to make or transform things into another kind of product or service. Emergy calculations take into account economics, which can alter pure physics-based results. Using emergy life-cycle analysis, researchers have concluded that materials with large refining costs have the greatest potential for high recycle benefits. Moreover, the highest emergy efficiency accrues from systems geared toward material recycling, where materials are engineered to recycle back into their original form and purpose, followed by adaptive reuse systems where the materials are recycled into a different kind of product, and then by-product reuse systems where parts of the products are used to make an entirely different product.[98]
|
189 |
+
|
190 |
+
The Energy Information Administration (EIA) states on its website that "a paper mill uses 40 percent less energy to make paper from recycled paper than it does to make paper from fresh lumber."[99] Some critics argue that it takes more energy to produce recycled products than it does to dispose of them in traditional landfill methods, since the curbside collection of recyclables often requires a second waste truck. However, recycling proponents point out that a second timber or logging truck is eliminated when paper is collected for recycling, so the net energy consumption is the same. An emergy life-cycle analysis on recycling revealed that fly ash, aluminum, recycled concrete aggregate, recycled plastic, and steel yield higher efficiency ratios, whereas the recycling of lumber generates the lowest recycle benefit ratio. Hence, the specific nature of the recycling process, the methods used to analyse the process, and the products involved affect the energy savings budgets.[98]
|
191 |
+
|
192 |
+
It is difficult to determine the amount of energy consumed or produced in waste disposal processes in broader ecological terms, where causal relations dissipate into complex networks of material and energy flow. For example, "cities do not follow all the strategies of ecosystem development. Biogeochemical paths become fairly straight relative to wild ecosystems, with very reduced recycling, resulting in large flows of waste and low total energy efficiencies. By contrast, in wild ecosystems, one population's wastes are another population's resources, and succession results in efficient exploitation of available resources. However, even modernized cities may still be in the earliest stages of a succession that may take centuries or millennia to complete."[100]:720 How much energy is used in recycling also depends on the type of material being recycled and the process used to do so. Aluminium is generally agreed to use far less energy when recycled rather than being produced from scratch. The EPA states that "recycling aluminum cans, for example, saves 95 percent of the energy required to make the same amount of aluminum from its virgin source, bauxite."[101][102] In 2009, more than half of all aluminium cans produced came from recycled aluminium.[103] Similarly, it has been estimated that new steel produced with recycled cans reduces greenhouse gas emissions by 75%.[104]
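The percentage savings quoted above (about 40 percent for paper per the EIA, 95 percent for aluminium per the EPA) can be applied to any baseline to see their effect. In the Python sketch below, the virgin-production energy values are placeholders chosen only to illustrate the calculation, not measured data.

# Applying the percentage savings quoted above to assumed baseline energies.
VIRGIN_ENERGY_MJ_PER_KG = {"aluminium": 200.0, "paper": 25.0}   # assumed baselines
SAVINGS_FRACTION        = {"aluminium": 0.95,  "paper": 0.40}   # EPA / EIA figures above

for material, virgin in VIRGIN_ENERGY_MJ_PER_KG.items():
    recycled = virgin * (1 - SAVINGS_FRACTION[material])
    print(f"{material}: virgin ~{virgin:.0f} MJ/kg (assumed), "
          f"recycled ~{recycled:.0f} MJ/kg ({SAVINGS_FRACTION[material]:.0%} saving)")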
|
193 |
+
|
194 |
+
Every year, millions of tons of materials are extracted from the Earth's crust and processed into consumer and capital goods. After decades to centuries, most of these materials are "lost". With the exception of some pieces of art or religious relics, they are no longer engaged in the consumption process. Where are they? Recycling is only an intermediate solution for such materials, although it does prolong the residence time in the anthroposphere. For thermodynamic reasons, however, recycling cannot prevent the final need for an ultimate sink.[105]:1
|
195 |
+
|
196 |
+
Economist Steven Landsburg has suggested that the sole benefit of reducing landfill space is trumped by the energy needed and resulting pollution from the recycling process.[106] Others, however, have calculated through life-cycle assessment that producing recycled paper uses less energy and water than harvesting, pulping, processing, and transporting virgin trees.[107] When less recycled paper is used, additional energy is needed to create and maintain farmed forests until these forests are as self-sustainable as virgin forests.
|
197 |
+
|
198 |
+
Other studies have shown that recycling by itself is insufficient to achieve the "decoupling" of economic development from the depletion of non-renewable raw materials that is necessary for sustainable development.[108] The international transportation of recyclable material flows through "... different trade networks of the three countries result in different flows, decay rates, and potential recycling returns".[109]:1 As global consumption of natural resources grows, their depletion is inevitable. The best recycling can do is to delay it; complete closure of material loops to achieve 100 percent recycling of nonrenewables is impossible, as micro-trace materials dissipate into the environment causing severe damage to the planet's ecosystems.[110][111][112] Historically, this was identified as the metabolic rift by Karl Marx, who identified the unequal exchange rate between energy and nutrients flowing from rural areas to feed urban cities that create effluent wastes degrading the planet's ecological capital, such as loss in soil nutrient production.[113][114] Energy conservation also leads to what is known as the Jevons paradox, where improvements in energy efficiency lower the cost of production and lead to a rebound effect in which rates of consumption and economic growth increase.[112][115]
|
199 |
+
|
200 |
+
|
201 |
+
|
202 |
+
The amount of money actually saved through recycling depends on the efficiency of the recycling program used to do it. The Institute for Local Self-Reliance argues that the cost of recycling depends on various factors, such as landfill fees and the amount of waste that the community recycles. It states that communities begin to save money when they treat recycling as a replacement for their traditional waste system rather than an add-on to it and by "redesigning their collection schedules and/or trucks".[116]
|
203 |
+
|
204 |
+
In some cases, the cost of recyclable materials also exceeds the cost of raw materials. Virgin plastic resin costs 40 percent less than recycled resin.[117] Additionally, a United States Environmental Protection Agency (EPA) study that tracked the price of clear glass from 15 July to 2 August 1991, found that the average cost per ton ranged from $40 to $60[118] while a USGS report shows that the cost per ton of raw silica sand from years 1993 to 1997 fell between $17.33 and $18.10.[119]
|
205 |
+
|
206 |
+
Comparing the market cost of recyclable material with the cost of new raw materials ignores economic externalities—the costs that are currently not counted by the market. Creating a new piece of plastic, for instance, may cause more pollution and be less sustainable than recycling a similar piece of plastic, but these factors will not be counted in market cost. A life cycle assessment can be used to determine the levels of externalities and decide whether the recycling may be worthwhile despite unfavorable market costs. Alternatively, legal means (such as a carbon tax) can be used to bring externalities into the market, so that the market cost of the material becomes close to the true cost.
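To make the externality argument concrete, consider what happens to the comparison when a carbon price is attached to each tonne of material. Every number in the Python sketch below is invented purely to illustrate the mechanism of internalizing an externality; it is not data about any real material.

# Hypothetical illustration of internalizing a carbon externality.
def effective_cost(market_cost, co2_tonnes_per_tonne, carbon_price):
    """Market cost per tonne plus a charge on embodied CO2 emissions."""
    return market_cost + co2_tonnes_per_tonne * carbon_price

CARBON_PRICE = 50.0  # currency units per tonne of CO2 (assumed)

virgin   = effective_cost(market_cost=100.0, co2_tonnes_per_tonne=2.0, carbon_price=CARBON_PRICE)
recycled = effective_cost(market_cost=120.0, co2_tonnes_per_tonne=0.5, carbon_price=CARBON_PRICE)

# With these invented figures, the recycled material (145) becomes cheaper
# than the virgin material (200) once the externality is priced in.
print(f"Virgin:   {virgin:.0f} per tonne including carbon charge")
print(f"Recycled: {recycled:.0f} per tonne including carbon charge")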
|
207 |
+
|
208 |
+
The recycling of waste electrical and electronic equipment can create a significant amount of pollution. This problem is particularly prevalent in India and China. Informal recycling in the underground economies of these countries has generated an environmental and health disaster. High levels of lead (Pb), polybrominated diphenylethers (PBDEs), polychlorinated dioxins and furans, as well as polybrominated dioxins and furans (PCDD/Fs and PBDD/Fs), are concentrated in the air, bottom ash, dust, soil, water, and sediments in areas surrounding recycling sites.[120] These materials can make work sites harmful to the workers themselves and the surrounding environment.
|
209 |
+
|
210 |
+
|
211 |
+
|
212 |
+
Economist Steven Landsburg, author of a paper entitled "Why I Am Not an Environmentalist",[121] claimed that paper recycling actually reduces tree populations. He argues that because paper companies have incentives to replenish their forests, large demands for paper lead to large forests while reduced demand for paper leads to fewer "farmed" forests.[122]
|
213 |
+
|
214 |
+
When foresting companies cut down trees, more are planted in their place; however, such "farmed" forests are inferior to natural forests in several ways. Farmed forests are not able to fix the soil as quickly as natural forests, which can cause widespread soil erosion, and they often require large amounts of fertilizer to maintain the soil while containing little tree and wildlife biodiversity compared to virgin forests.[123] Also, the new trees planted are not as big as the trees that were cut down, and the argument that there will be "more trees" is not compelling to forestry advocates when they are counting saplings.
|
215 |
+
|
216 |
+
In particular, wood from tropical rainforests is rarely harvested for paper because of their heterogeneity.[124] According to the United Nations Framework Convention on Climate Change secretariat, the overwhelming direct cause of deforestation is subsistence farming (48% of deforestation) and commercial agriculture (32%), which is linked to food, not paper production.[125]
|
217 |
+
|
218 |
+
The reduction of greenhouse gas emissions also benefits from the development of the recycling industry. In Kitakyushu, the only green-growth model city in Asia selected by the OECD, recycling industries are strongly promoted and financially supported as part of the Eco-town program in Japan. Given that the industrial sector in Kitakyushu accounts for more than 60% of the city's energy consumption, the development of the recycling industry results in substantial energy reduction due to economies-of-scale effects; the concentration of CO2 is thus found to decline accordingly.[126]
|
219 |
+
|
220 |
+
Other non-conventional methods of material recycling, like Waste-to-Energy (WTE) systems, have garnered increased attention in the recent past due to the polarizing nature of their emissions. While viewed as a sustainable method of capturing energy from material waste feedstocks by many, others have cited numerous explanations for why the technology has not been scaled globally.[127]
|
221 |
+
|
222 |
+
In some countries, recycling is performed by the entrepreneurial poor such as the karung guni, zabbaleen, the rag-and-bone man, waste picker, and junk man. With the creation of large recycling organizations that may be profitable, either by law or economies of scale,[128][129] the poor are more likely to be driven out of the recycling and the remanufacturing job market. To compensate for this loss of income, a society may need to create additional forms of societal programs to help support the poor.[130] As in the parable of the broken window, making recycling artificially profitable, e.g. through the law, produces a net loss to the poor and possibly to society as a whole. However, in Brazil and Argentina, waste pickers/informal recyclers work alongside the authorities, in fully or semi-funded cooperatives, allowing informal recycling to be legitimized as a paid public sector job.[131]
|
223 |
+
|
224 |
+
Because the social support of a country is likely to be less than the loss of income to the poor undertaking recycling, there is a greater chance the poor will come into conflict with the large recycling organizations.[132][133] This means fewer people can decide if certain waste is more economically reusable in its current form rather than being reprocessed. In contrast to organized recyclers, the recycling poor may actually achieve higher efficiency for some materials, because individuals have greater control over what is considered "waste".[130]
|
225 |
+
|
226 |
+
One labor-intensive, underused waste stream is electronic and computer waste. This waste may still be functional and is wanted mostly by those on lower incomes, who may sell or use it more efficiently than large recyclers.
|
227 |
+
|
228 |
+
Some recycling advocates believe that laissez-faire individual-based recycling does not cover all of society's recycling needs. Thus, it does not negate the need for an organized recycling program.[130] Local government can consider the activities of the recycling poor as contributing to the ruining of property.
|
229 |
+
|
230 |
+
Changes that have been demonstrated to increase recycling rates include:
|
231 |
+
|
232 |
+
"Between 1960 and 2000, the world production of plastic resins increased 25 times its original amount, while recovery of the material remained below 5 percent."[134]:131 Many studies have addressed recycling behaviour and strategies to encourage community involvement in recycling programs. It has been argued[135] that recycling behavior is not natural because it requires a focus and appreciation for long-term planning, whereas humans have evolved to be sensitive to short-term survival goals; and that to overcome this innate predisposition, the best solution would be to use social pressure to compel participation in recycling programs. However, recent studies have concluded that social pressure will not work in this context.[136] One reason for this is that social pressure functions well in small group sizes of 50 to 150 individuals (common to nomadic hunter–gatherer peoples) but not in communities numbering in the millions, as we see today. Another reason is that individual recycling does not take place in the public view.
|
233 |
+
|
234 |
+
Even as it became increasingly common for recycling collections to be sent to the same landfills as trash, some people kept putting recyclables in the recycling bin.[137] In Baltimore, the government kept collecting glass separately for seven years even though it did not recycle it.[138]
|
235 |
+
|
236 |
+
Art objects are increasingly made from recycled material.
|
237 |
+
|
238 |
+
In a study done by social psychologist Shawn Burn,[139] it was found that personal contact with individuals within a neighborhood is the most effective way to increase recycling within a community. In his study, he had 10 block leaders talk to their neighbors and persuade them to recycle. A comparison group was sent fliers promoting recycling. It was found that the neighbors that were personally contacted by their block leaders recycled much more than the group without personal contact. As a result of this study, Shawn Burn believes that personal contact within a small group of people is an important factor in encouraging recycling. Another study done by Stuart Oskamp[140] examines the effect of neighbors and friends on recycling. It was found in his studies that people who had friends and neighbors that recycled were much more likely to also recycle than those who didn't have friends and neighbors that recycled.
|
239 |
+
|
240 |
+
Many schools have created recycling awareness clubs in order to give young students an insight on recycling. These schools believe that the clubs actually encourage students to not only recycle at school but at home as well.
|
en/4949.html.txt
ADDED
@@ -0,0 +1,108 @@
1 |
+
A writing system is a method of visually representing verbal communication, based on a script and a set of rules regulating its use. While both writing and speech are useful in conveying messages, writing differs in also being a reliable form of information storage and transfer.[1] Writing systems require shared understanding between writers and readers of the meaning behind the sets of characters that make up a script. Writing is usually recorded onto a durable medium, such as paper or electronic storage, although non-durable methods may also be used, such as writing on a computer display, on a blackboard, in sand, or by skywriting. Reading a text can be accomplished purely in the mind as an internal process, or expressed orally.
|
2 |
+
|
3 |
+
Writing systems can be placed into broad categories such as alphabets, syllabaries, or logographies, although any particular system may have attributes of more than one category. In the alphabetic category, a standard set of letters represent speech sounds. In a syllabary, each symbol correlates to a syllable or mora. In a logography, each character represents a semantic unit such as a word or morpheme. Abjads differ from alphabets in that vowels are not indicated, and in abugidas or alphasyllabaries each character represents a consonant–vowel pairing. Alphabets typically use a set of less than 100 symbols to fully express a language, whereas syllabaries can have several hundred, and logographies can have thousands of symbols. Many writing systems also include a special set of symbols known as punctuation which is used to aid interpretation and help capture nuances and variations in the message's meaning that are communicated verbally by cues in timing, tone, accent, inflection or intonation.
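The broad categories and the unit that a single symbol represents in each can be summarized as a small table. The Python sketch below restates the classification from the paragraph above, using example scripts named elsewhere in this article; the grouping is a rough summary, not a rigorous typology.

# Rough summary of the broad categories of writing systems described above.
WRITING_SYSTEM_TYPES = {
    "alphabet":   ("phoneme (consonants and vowels)",      ["Greek", "Latin"]),
    "abjad":      ("consonant (vowels usually unwritten)",  ["Arabic", "Hebrew"]),
    "abugida":    ("consonant with an inherent vowel",      ["Ge'ez", "Brahmic scripts"]),
    "syllabary":  ("syllable or mora",                      ["Japanese kana", "Cherokee"]),
    "logography": ("word or morpheme",                      ["Chinese characters"]),
}

for name, (unit, examples) in WRITING_SYSTEM_TYPES.items():
    print(f"{name:10s} -> one symbol per {unit}; e.g. {', '.join(examples)}")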
|
4 |
+
|
5 |
+
Writing systems were preceded by proto-writing, which used pictograms, ideograms and other mnemonic symbols. Proto-writing lacked the ability to capture and express a full range of thoughts and ideas. The invention of writing systems, which dates back to the beginning of the Bronze Age in the late Neolithic Era of the late 4th millennium BC, enabled the accurate durable recording of human history in a manner that was not prone to the same types of error to which oral history is vulnerable. Soon after, writing provided a reliable form of long distance communication. With the advent of publishing, it provided the medium for an early form of mass communication.
|
6 |
+
|
7 |
+
Writing systems are distinguished from other possible symbolic communication systems in that a writing system is always associated with at least one spoken language. In contrast, visual representations such as drawings, paintings, and non-verbal items on maps, such as contour lines, are not language-related. Some symbols on information signs, such as the symbols for male and female, are also not language related, but can grow to become part of language if they are often used in conjunction with other language elements. Some other symbols, such as numerals and the ampersand, are not directly linked to any specific language, but are often used in writing and thus must be considered part of writing systems.
|
8 |
+
|
9 |
+
Every human community possesses language, which many regard as an innate and defining condition of humanity. However, the development of writing systems, and the process by which they have supplanted traditional oral systems of communication, have been sporadic, uneven and slow. Once established, writing systems generally change more slowly than their spoken counterparts. Thus they often preserve features and expressions which are no longer current in the spoken language. One of the great benefits of writing systems is that they can preserve a permanent record of information expressed in a language.
|
10 |
+
|
11 |
+
All writing systems require:
|
12 |
+
|
13 |
+
In the examination of individual scripts, the study of writing systems has developed along partially independent lines. Thus, the terminology employed differs somewhat from field to field.
|
14 |
+
|
15 |
+
The generic term text[3] refers to an instance of written or spoken material with the latter having been transcribed in some way. The act of composing and recording a text may be referred to as writing,[4] and the act of viewing and interpreting the text as reading.[5] Orthography refers to the method and rules of observed writing structure (literal meaning, "correct writing"), and particularly for alphabetic systems, includes the concept of spelling.
|
16 |
+
|
17 |
+
A grapheme is a specific base unit of a writing system. Graphemes are the minimally significant elements which taken together comprise the set of "building blocks" out of which texts made up of one or more writing systems may be constructed, along with rules of correspondence and use. The concept is similar to that of the phoneme used in the study of spoken languages. For example, in the Latin-based writing system of standard contemporary English, examples of graphemes include the majuscule and minuscule forms of the twenty-six letters of the alphabet (corresponding to various phonemes), marks of punctuation (mostly non-phonemic), and a few other symbols such as those for numerals (logograms for numbers).
|
18 |
+
|
19 |
+
An individual grapheme may be represented in a wide variety of ways, where each variation is visually distinct in some regard, but all are interpreted as representing the "same" grapheme. These individual variations are known as allographs of a grapheme (compare with the term allophone used in linguistic study). For example, the minuscule letter a has different allographs when written as a cursive, block, or typed letter. The choice of a particular allograph may be influenced by the medium used, the writing instrument, the stylistic choice of the writer, the preceding and following graphemes in the text, the time available for writing, the intended audience, and the largely unconscious features of an individual's handwriting.
|
20 |
+
|
21 |
+
The terms glyph, sign and character are sometimes used to refer to a grapheme. Common usage varies from discipline to discipline; compare cuneiform sign, Maya glyph, Chinese character. The glyphs of most writing systems are made up of lines (or strokes) and are therefore called linear, but there are glyphs in non-linear writing systems made up of other types of marks, such as Cuneiform and Braille.
|
22 |
+
|
23 |
+
Writing systems may be regarded as complete according to the extent to which they are able to represent all that may be expressed in the spoken language, while a partial writing system is limited in what it can convey.[6]
|
24 |
+
|
25 |
+
Writing systems can be independent of languages: one can have multiple writing systems for a language, e.g., Hindi and Urdu;[7] and one can also have one writing system for multiple languages, e.g., the Arabic script. Chinese characters were also borrowed by other countries as their early writing systems, e.g., the early writing system of the Vietnamese language until the beginning of the 20th century.
|
26 |
+
|
27 |
+
To represent a conceptual system, one uses one or more languages; for example, mathematics is a conceptual system[8] that may be represented using first-order logic and a natural language together.
|
28 |
+
|
29 |
+
Writing systems were preceded by proto-writing, systems of ideographic and/or early mnemonic symbols. The best-known examples are:
|
30 |
+
|
31 |
+
The invention of the first writing systems is roughly contemporary with the beginning of the Bronze Age (following the late Neolithic) in the late 4th millennium BC. The Sumerian archaic cuneiform script closely followed by the Egyptian hieroglyphs are generally considered the earliest writing systems, both emerging out of their ancestral proto-literate symbol systems from 3400 to 3200 BC with earliest coherent texts from about 2600 BC. It is generally agreed that the historically earlier Sumerian writing was an independent invention; however, it is debated whether Egyptian writing was developed completely independently of Sumerian, or was a case of cultural diffusion.[12]
|
32 |
+
|
33 |
+
A similar debate exists for the Chinese script, which developed around 1200 BC.[13][14] The Chinese script is probably an independent invention, because there is no evidence of contact between China and the literate civilizations of the Near East,[15] and because of the distinct differences between the Mesopotamian and Chinese approaches to logography and phonetic representation.[16]
|
34 |
+
|
35 |
+
The pre-Columbian Mesoamerican writing systems (including among others Olmec and Maya scripts) are generally believed to have had independent origins.
|
36 |
+
|
37 |
+
A hieroglyphic writing system used by pre-colonial Mi'kmaq, which was observed by missionaries from the 17th to 19th centuries, is thought to have developed independently. There is some debate over whether or not this was a fully formed system or just a series of mnemonic pictographs.
|
38 |
+
|
39 |
+
It is thought that the first consonantal alphabetic writing appeared before 2000 BC, as a representation of language developed by Semitic tribes in the Sinai Peninsula (see History of the alphabet). Most other alphabets in the world today either descended from this one innovation, many via the Phoenician alphabet, or were directly inspired by its design.
|
40 |
+
|
41 |
+
The first true alphabet is the Greek alphabet, which has consistently represented vowels since 800 BC.[17][18] The Latin alphabet, a direct descendant, is by far the most common writing system in use.[19]
|
42 |
+
|
43 |
+
Several approaches have been taken to classify writing systems, the most common and basic one is a broad division into three categories: logographic, syllabic, and alphabetic (or segmental); however, all three may be found in any given writing system in varying proportions, often making it difficult to categorise a system uniquely. The term complex system is sometimes used to describe those where the admixture makes classification problematic. Modern linguists regard such approaches, including Diringer's[20]
|
44 |
+
|
45 |
+
as too simplistic, often considering the categories to be incomparable.
|
46 |
+
Hill[21] split writing into three major categories of linguistic analysis, one of which covers discourses and is not usually considered writing proper:
|
47 |
+
|
48 |
+
Sampson draws a distinction between semasiography and glottography
|
49 |
+
|
50 |
+
DeFrancis,[22] criticizing Sampson's[23] introduction of semasiographic writing and featural alphabets, stresses the phonographic quality of writing proper.
|
51 |
+
|
52 |
+
Faber[24] categorizes phonographic writing by two levels, linearity and coding:
|
53 |
+
|
54 |
+
A logogram is a single written character which represents a complete grammatical word. Most traditional Chinese characters are classified as logograms.
|
55 |
+
|
56 |
+
As each character represents a single word (or, more precisely, a morpheme), many logograms are required to write all the words of a language. The vast array of logograms and the memorization of what they mean are major disadvantages of logographic systems over alphabetic systems. However, since the meaning is inherent to the symbol, the same logographic system can theoretically be used to represent different languages. In practice, the ability to communicate across languages only works for the closely related varieties of Chinese, as differences in syntax reduce the crosslinguistic portability of a given logographic system. Japanese uses Chinese logograms extensively in its writing systems, with most of the symbols carrying the same or similar meanings. However, the grammatical differences between Japanese and Chinese are significant enough that a long Chinese text is not readily understandable to a Japanese reader without any knowledge of basic Chinese grammar, though short and concise phrases such as those on signs and newspaper headlines are much easier to comprehend.
|
57 |
+
|
58 |
+
While most languages do not use wholly logographic writing systems, many languages use some logograms. A good example of modern western logograms are the Arabic numerals: everyone who uses those symbols understands what 1 means whether they call it one, eins, uno, yi, ichi, ehad, ena, or jedan. Other western logograms include the ampersand &, used for and, the at sign @, used in many contexts for at, the percent sign % and the many signs representing units of currency ($, ¢, €, £, ¥ and so on.)
|
59 |
+
|
60 |
+
Logograms are sometimes called ideograms, a word that refers to symbols which graphically represent abstract ideas, but linguists avoid this use, as Chinese characters are often semantic–phonetic compounds, symbols which include an element that represents the meaning and a phonetic complement element that represents the pronunciation. Some nonlinguists distinguish between lexigraphy and ideography, where symbols in lexigraphies represent words and symbols in ideographies represent words or morphemes.
|
61 |
+
|
62 |
+
The most important (and, to a degree, the only surviving) modern logographic writing system is the Chinese one, whose characters have been used with varying degrees of modification in varieties of Chinese, Japanese, Korean, Vietnamese, and other east Asian languages. Ancient Egyptian hieroglyphs and the Mayan writing system are also systems with certain logographic features, although they have marked phonetic features as well and are no longer in current use. Vietnamese speakers switched to the Latin alphabet in the 20th century and the use of Chinese characters in Korean is increasingly rare. The Japanese writing system includes several distinct forms of writing including logography.
|
63 |
+
|
64 |
+
Another type of writing system with systematic syllabic linear symbols, the abugidas, is discussed below as well.
|
65 |
+
|
66 |
+
Whereas logographic writing systems use a single symbol for an entire word, a syllabary is a set of written symbols that represent (or approximate) syllables, which make up words. A symbol in a syllabary typically represents a consonant sound followed by a vowel sound, or just a vowel alone.
|
67 |
+
|
68 |
+
In a "true syllabary", there is no systematic graphic similarity between phonetically related characters (though some do have graphic similarity for the vowels). That is, the characters for /ke/, /ka/ and /ko/ have no similarity to indicate their common "k" sound (voiceless velar plosive). More recent creations such as the Cree syllabary embody a system of varying signs, which can best be seen when arranging the syllabogram set in an onset–coda or onset–rime table.
|
69 |
+
|
70 |
+
Syllabaries are best suited to languages with relatively simple syllable structure, such as Japanese. The English language, on the other hand, allows complex syllable structures, with a relatively large inventory of vowels and complex consonant clusters, making it cumbersome to write English words with a syllabary. To write English using a syllabary, every possible syllable in English would have to have a separate symbol, and whereas the number of possible syllables in Japanese is around 100, in English there are approximately 15,000 to 16,000.
|
71 |
+
|
72 |
+
However, syllabaries with much larger inventories do exist. The Yi script, for example, contains 756 different symbols (or 1,164, if symbols with a particular tone diacritic are counted as separate syllables, as in Unicode). The Chinese script, when used to write Middle Chinese and the modern varieties of Chinese, also represents syllables, and includes separate glyphs for nearly all of the many thousands of syllables in Middle Chinese; however, because it primarily represents morphemes and includes different characters to represent homophonous morphemes with different meanings, it is normally considered a logographic script rather than a syllabary.
|
73 |
+
|
74 |
+
Other languages that use true syllabaries include Mycenaean Greek (Linear B) and Indigenous languages of the Americas such as Cherokee. Several languages of the Ancient Near East used forms of cuneiform, which is a syllabary with some non-syllabic elements.
|
75 |
+
|
76 |
+
An alphabet is a small set of letters (basic written symbols), each of which roughly represents or represented historically a segmental phoneme of a spoken language. The word alphabet is derived from alpha and beta, the first two symbols of the Greek alphabet.
|
77 |
+
|
78 |
+
The first type of alphabet that was developed was the abjad. An abjad is an alphabetic writing system where there is one symbol per consonant. Abjads differ from other alphabets in that they have characters only for consonantal sounds. Vowels are not usually marked in abjads. All known abjads (except maybe Tifinagh) belong to the Semitic family of scripts, and derive from the original Northern Linear Abjad. The reason for this is that Semitic languages and the related Berber languages have a morphemic structure which makes the denotation of vowels redundant in most cases. Some abjads, like Arabic and Hebrew, have markings for vowels as well. However, they use them only in special contexts, such as for teaching. Many scripts derived from abjads have been extended with vowel symbols to become full alphabets. Of these, the most famous example is the derivation of the Greek alphabet from the Phoenician abjad. This has mostly happened when the script was adapted to a non-Semitic language. The term abjad takes its name from the old order of the Arabic alphabet's consonants 'alif, bā', jīm, dāl, though the word may have earlier roots in Phoenician or Ugaritic. "Abjad" is still the word for alphabet in Arabic, Malay and Indonesian.
|
79 |
+
|
80 |
+
An abugida is an alphabetic writing system whose basic signs denote consonants with an inherent vowel and where consistent modifications of the basic sign indicate other following vowels than the inherent one. Thus, in an abugida there may or may not be a sign for "k" with no vowel, but also one for "ka" (if "a" is the inherent vowel), and "ke" is written by modifying the "ka" sign in a way that is consistent with how one would modify "la" to get "le". In many abugidas the modification is the addition of a vowel sign, but other possibilities are imaginable (and used), such as rotation of the basic sign, addition of diacritical marks and so on. The contrast with "true syllabaries" is that the latter have one distinct symbol per possible syllable, and the signs for each syllable have no systematic graphic similarity. The graphic similarity of most abugidas comes from the fact that they are derived from abjads, and the consonants make up the symbols with the inherent vowel and the new vowel symbols are markings added on to the base symbol. In the Ge'ez script, for which the linguistic term abugida was named, the vowel modifications do not always appear systematic, although they originally were more so. Canadian Aboriginal syllabics can be considered abugidas, although they are rarely thought of in those terms. The largest single group of abugidas is the Brahmic family of scripts, however, which includes nearly all the scripts used in India and Southeast Asia. The name abugida is derived from the first four characters of an order of the Ge'ez script used in some contexts. It was borrowed from Ethiopian languages as a linguistic term by Peter T. Daniels.
|
81 |
+
|
82 |
+
A featural script represents finer detail than an alphabet. Here symbols do not represent whole phonemes, but rather the elements (features) that make up the phonemes, such as voicing or its place of articulation. Theoretically, each feature could be written with a separate letter; and abjads or abugidas, or indeed syllabaries, could be featural, but the only prominent system of this sort is Korean hangul. In hangul, the featural symbols are combined into alphabetic letters, and these letters are in turn joined into syllabic blocks, so that the system combines three levels of phonological representation.
|
83 |
+
|
84 |
+
Many scholars, e.g. John DeFrancis, reject this class or at least labeling hangul as such.[citation needed] The Korean script is a conscious script creation by literate experts, which Daniels calls a "sophisticated grammatogeny".[citation needed] These include stenographies and constructed scripts of hobbyists and fiction writers (such as Tengwar), many of which feature advanced graphic designs corresponding to phonologic properties. The basic unit of writing in these systems can map to anything from phonemes to words. It has been shown that even the Latin script has sub-character "features".[26]
|
85 |
+
|
86 |
+
Most writing systems are not purely one type. The English writing system, for example, includes numerals and other logograms such as #, $, and &, and the written language often does not match well with the spoken one. As mentioned above, all logographic systems have phonetic components as well, whether along the lines of a syllabary, such as Chinese ("logo-syllabic"), or an abjad, as in Egyptian ("logo-consonantal").
|
87 |
+
|
88 |
+
Some scripts, however, are truly ambiguous. The semi-syllabaries of ancient Spain were syllabic for plosives such as p, t, k, but alphabetic for other consonants. In some versions, vowels were written redundantly after syllabic letters, conforming to an alphabetic orthography. Old Persian cuneiform was similar. Of 23 consonants (including null), seven were fully syllabic, thirteen were purely alphabetic, and for the other three, there was one letter for /Cu/ and another for both /Ca/ and /Ci/. However, all vowels were written overtly regardless; as in the Brahmic abugidas, the /Ca/ letter was used for a bare consonant.
|
89 |
+
|
90 |
+
The zhuyin phonetic glossing script for Chinese divides syllables in two or three, but into onset, medial, and rime rather than consonant and vowel. Pahawh Hmong is similar, but can be considered to divide syllables into either onset-rime or consonant-vowel (all consonant clusters and diphthongs are written with single letters); as the latter, it is equivalent to an abugida but with the roles of consonant and vowel reversed. Other scripts are intermediate between the categories of alphabet, abjad and abugida, so there may be disagreement on how they should be classified.
|
91 |
+
|
92 |
+
Perhaps the primary graphic distinction made in classifications is that of linearity. Linear writing systems are those in which the characters are composed of lines, such as the Latin alphabet and Chinese characters. Chinese characters are considered linear whether they are written with a ball-point pen or a calligraphic brush, or cast in bronze. Similarly, Egyptian hieroglyphs and Maya glyphs were often painted in linear outline form, but in formal contexts they were carved in bas-relief. The earliest examples of writing are linear: the Sumerian script of c. 3300 BC was linear, though its cuneiform descendants were not. Non-linear systems, on the other hand, such as braille, are not composed of lines, no matter what instrument is used to write them.
|
93 |
+
|
94 |
+
Cuneiform was probably the earliest non-linear writing. Its glyphs were formed by pressing the end of a reed stylus into moist clay, not by tracing lines in the clay with the stylus as had been done previously.[27][28] The result was a radical transformation of the appearance of the script.
|
95 |
+
|
96 |
+
Braille is a non-linear adaptation of the Latin alphabet that completely abandoned the Latin forms. The letters are composed of raised bumps on the writing substrate, which can be leather (Louis Braille's original material), stiff paper, plastic or metal.
|
97 |
+
|
98 |
+
There are also transient non-linear adaptations of the Latin alphabet, including Morse code, the manual alphabets of various sign languages, and semaphore, in which flags or bars are positioned at prescribed angles. However, if "writing" is defined as a potentially permanent means of recording information, then these systems do not qualify as writing at all, since the symbols disappear as soon as they are used. (Instead, these transient systems serve as signals.)
|
99 |
+
|
100 |
+
Scripts are also graphically characterized by the direction in which they are written. Egyptian hieroglyphs were written either left to right or right to left, with the animal and human glyphs turned to face the beginning of the line. The early alphabet could be written in multiple directions:[29] horizontally (side to side), or vertically (up or down). Prior to standardization, alphabetical writing was done both left-to-right (LTR or sinistrodextrally) and right-to-left (RTL or dextrosinistrally). It was most commonly written boustrophedonically: starting in one (horizontal) direction, then turning at the end of the line and reversing direction.
|
101 |
+
|
102 |
+
The Greek alphabet and its successors settled on a left-to-right pattern, from the top to the bottom of the page. Other scripts, such as Arabic and Hebrew, came to be written right-to-left. Scripts that incorporate Chinese characters have traditionally been written vertically (top-to-bottom), from the right to the left of the page, but nowadays are frequently written left-to-right, top-to-bottom, due to Western influence, a growing need to accommodate terms in the Latin script, and technical limitations in popular electronic document formats. Chinese characters sometimes, as in signage, especially when signifying something old or traditional, may also be written from right to left. The Old Uyghur alphabet and its descendants are unique in being written top-to-bottom, left-to-right; this direction originated from an ancestral Semitic direction by rotating the page 90° counter-clockwise to conform to the appearance of vertical Chinese writing. Several scripts used in the Philippines and Indonesia, such as Hanunó'o, are traditionally written with lines moving away from the writer, from bottom to top, but are read horizontally left to right; however, Kulitan, another Philippine script, is written top to bottom and right to left. Ogham is written bottom to top and read vertically, commonly on the corner of a stone.
|
103 |
+
|
104 |
+
Left-to-right writing has the advantage that, since most people are right-handed, the hand does not interfere with the just-written text, which might not have dried yet, as the hand is to the right of the pen. Partly for this reason, left-handed children in Europe and America were historically often taught to use the right hand for writing.
|
105 |
+
|
106 |
+
In computers and telecommunication systems, writing systems are generally not codified as such,[clarification needed] but graphemes and other grapheme-like units that are required for text processing are represented by "characters" that typically manifest in encoded form. There are many character encoding standards and related technologies, such as ISO/IEC 8859-1 (a character repertoire and encoding scheme oriented toward the Latin script), CJK (Chinese, Japanese, Korean) and bi-directional text. Today, many such standards are re-defined in a collective standard, the ISO/IEC 10646 "Universal Character Set", and a parallel, closely related expanded work, The Unicode Standard. Both are generally encompassed by the term Unicode. In Unicode, each character, in every language's writing system, is (simplifying slightly) given a unique identification number, known as its code point. Computer operating systems use code points to look up characters in the font file, so the characters can be displayed on the page or screen.
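As a concrete illustration of code points, Python exposes them directly through the built-in ord() and chr() functions; the characters below are arbitrary examples, and the byte listing assumes Python 3.8 or later for the hex() separator argument.

# Each character is identified by a Unicode code point, independent of any font.
for ch in ["A", "ß", "中"]:
    cp = ord(ch)                          # integer code point
    utf8 = ch.encode("utf-8").hex(" ")    # how the code point is encoded as bytes
    print(f"{ch!r} -> U+{cp:04X} (decimal {cp}), UTF-8 bytes: {utf8}")

print(chr(0x0041))                        # code point back to a character: 'A'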
|
107 |
+
|
108 |
+
A keyboard is the device most commonly used for writing via computer. Each key is associated with a standard code which the keyboard sends to the computer when it is pressed. By using a combination of alphabetic keys with modifier keys such as Ctrl, Alt, Shift and AltGr, various character codes are generated and sent to the CPU. The operating system intercepts and converts those signals to the appropriate characters based on the keyboard layout and input method, and then delivers those converted codes and characters to the running application software, which in turn looks up the appropriate glyph in the currently used font file, and requests the operating system to draw these on the screen.
en/495.html.txt
ADDED
@@ -0,0 +1,258 @@
A fighter aircraft, often referred to simply as a fighter, is a military fixed-wing aircraft designed primarily for air-to-air combat against other aircraft. The key performance features of a fighter include not only its firepower but also its high speed and maneuverability relative to the target aircraft.
The fighter's main tactical purpose is to establish air superiority over the battlefield. The success or failure of a combatant's efforts to gain air superiority hinges on several factors including the skill of its pilots, the tactical soundness of its doctrine for deploying its fighters, and the numbers and performance of those fighters.
Many fighters have secondary capabilities such as ground attack, and some types, such as fighter-bombers, are designed from the outset for dual roles. Other fighter designs are highly specialized while still filling the main air superiority role; these include the interceptor, the heavy fighter, and the night fighter.
A fighter aircraft is primarily designed for air-to-air combat.[1] A given type may be designed for specific combat conditions, and in some cases for additional roles such as air-to-ground fighting. Historically, the British Royal Flying Corps and Royal Air Force referred to them as "scouts" until adopting the term "fighter" in the early 1920s, while the U.S. Army called them "pursuit" aircraft until switching to "fighter" in the late 1940s. A short-range fighter designed to defend against incoming enemy aircraft is known as an interceptor.
Recognised classes of fighter include:
Of these, the fighter-bomber, reconnaissance fighter and strike fighter classes are dual-role, possessing qualities of the fighter alongside some other battlefield role. Some fighter designs may be developed in variants performing other roles entirely, such as ground attack or unarmed reconnaissance. This may be for political or national security reasons, for advertising purposes, or for other reasons.[2]
The Sopwith Camel and other "fighting scouts" of World War I performed a great deal of ground-attack work. In World War II, the USAAF and RAF often favored fighters over dedicated light bombers or dive bombers, and types such as the Republic P-47 Thunderbolt and Hawker Hurricane that were no longer competitive as aerial combat fighters were relegated to ground attack. Several aircraft, such as the F-111 and F-117, have received fighter designations though they had no fighter capability due to political or other reasons. The F-111B variant was originally intended for a fighter role with the U.S. Navy, but it was canceled. This blurring follows the use of fighters from their earliest days for "attack" or "strike" operations against ground targets by means of strafing or dropping small bombs and incendiaries. Versatile multirole fighter-bombers such as the McDonnell Douglas F/A-18 Hornet are a less expensive option than having a range of specialized aircraft types.
Some of the most expensive fighters such as the US Grumman F-14 Tomcat, McDonnell Douglas F-15 Eagle, Lockheed Martin F-22 Raptor and Russian Sukhoi Su-27 were employed as all-weather interceptors as well as air superiority fighter aircraft, while commonly developing air-to-ground roles late in their careers. An interceptor is generally an aircraft intended to target (or intercept) bombers and so often trades maneuverability for climb rate.[3]
As a part of military nomenclature, a letter is often assigned to various types of aircraft to indicate their use, along with a number to indicate the specific aircraft. The letters used to designate a fighter differ in various countries – in the English-speaking world, "F" is now used to indicate a fighter (e.g. Lockheed Martin F-35 Lightning II or Supermarine Spitfire F.22), though when the pursuit designation was used in the US, they were "P" types (e.g. Curtiss P-40 Warhawk). In Russia "I" was used (Polikarpov I-16), while the French continue to use "C" (Nieuport 17 C.1).
As fighter types have proliferated, the air superiority fighter emerged as a specific role at the pinnacle of speed, maneuverability, and air-to-air weapon systems – able to hold its own against all other fighters and establish its dominance in the skies above the battlefield.
The interceptor is a fighter designed specifically to intercept and engage approaching enemy aircraft. There are two general classes of interceptor: relatively lightweight aircraft in the point-defence role, built for fast reaction, high performance and with a short range, and heavier aircraft with more comprehensive avionics and designed to fly at night or in all weathers and to operate over longer ranges. Originating during World War I, by 1929 this class of fighters had become known as the interceptor.[4]
The equipment necessary for daytime flight is inadequate when flying at night or in poor visibility. The night fighter was developed during World War I with additional equipment to aid the pilot in flying straight, navigating and finding the target. From modified variants of the Royal Aircraft Factory B.E.2c in 1915, the night fighter has evolved into the highly capable all-weather fighter.[5]
The strategic fighter is a fast, heavily-armed and long-range type, able to act as an escort fighter protecting bombers, to carry out offensive sorties of its own as a penetration fighter and maintain standing patrols at significant distance from its home base.[6]
Bombers are vulnerable due to their low speed and poor maneuverability. The escort fighter was developed during World War II to come between the bombers and enemy attackers as a protective shield. The primary requirement was for long range, and several heavy fighters were given the role. However, they too proved unwieldy and vulnerable, so as the war progressed techniques such as drop tanks were developed to extend the range of more nimble conventional fighters.
The penetration fighter is typically also fitted for the ground-attack role, and so is able to defend itself while conducting attack sorties.
Since World War I, achieving and maintaining air superiority has been considered essential for victory in conventional warfare.[7]
Fighters continued to be developed throughout World War I to deny enemy aircraft and dirigibles the ability to gather information by reconnaissance over the battlefield. Early fighters were very small and lightly armed by later standards, and most were biplanes built with a wooden frame covered with fabric, with a maximum airspeed of about 100 mph (160 km/h). As control of the airspace over armies became increasingly important, all of the major powers developed fighters to support their military operations. Between the wars, wood was replaced in part or in whole by metal tubing, and finally aluminum stressed-skin structures (monocoque) began to predominate.
By World War II, most fighters were all-metal monoplanes armed with batteries of machine guns or cannons and some were capable of speeds approaching 400 mph (640 km/h). Most fighters up to this point had one engine, but a number of twin-engine fighters were built; however they were found to be outmatched against single-engine fighters and were relegated to other tasks, such as night fighters equipped with primitive radar sets.
By the end of the war, turbojet engines were replacing piston engines as the means of propulsion, further increasing aircraft speed. Since the weight of the turbojet engine was far less than a piston engine, having two engines was no longer a handicap and one or two were used, depending on requirements. This in turn required the development of ejection seats so the pilot could escape, and G-suits to counter the much greater forces being applied to the pilot during maneuvers.
In the 1950s, radar was fitted to day fighters, since due to ever increasing air-to-air weapon ranges, pilots could no longer see far enough ahead to prepare for the opposition. Subsequently, radar capabilities grew enormously and are now the primary method of target acquisition. Wings were made thinner and swept back to reduce transonic drag, which required new manufacturing methods to obtain sufficient strength. Skins were no longer sheet metal riveted to a structure, but milled from large slabs of alloy. The sound barrier was broken, and after a few false starts due to required changes in controls, speeds quickly reached Mach 2, past which aircraft cannot maneuver sufficiently to avoid attack.
Air-to-air missiles largely replaced guns and rockets in the early 1960s, since both were believed unusable at the speeds being attained; however, the Vietnam War showed that guns still had a role to play, and most fighters built since then have been fitted with cannon (typically between 20 and 30 mm in caliber) in addition to missiles. Most modern combat aircraft can carry at least a pair of air-to-air missiles.
In the 1970s, turbofans replaced turbojets, improving fuel economy enough that the last piston-engined support aircraft could be replaced with jets, making multi-role combat aircraft possible. Honeycomb structures began to replace milled structures, and the first composite parts began to appear on components subjected to little stress.
With the steady improvements in computers, defensive systems have become increasingly efficient. To counter this, stealth technologies have been pursued by the United States, Russia, India and China. The first step was to find ways to reduce the aircraft's reflectivity to radar waves by burying the engines, eliminating sharp corners and diverting any reflections away from the radar sets of opposing forces. Various materials were found to absorb the energy from radar waves, and were incorporated into special finishes that have since found widespread application. Composite structures have become widespread, including major structural components, and have helped to counterbalance the steady increases in aircraft weight—most modern fighters are larger and heavier than World War II medium bombers.
Because of the importance of air superiority, since the early days of aerial combat armed forces have constantly competed to develop technologically superior fighters and to deploy these fighters in greater numbers, and fielding a viable fighter fleet consumes a substantial proportion of the defense budgets of modern armed forces.[8]
The global combat aircraft market was worth $45.75 billion in 2017 and is projected by Frost & Sullivan at $47.2 billion in 2026: 35% modernization programs and 65% aircraft purchases, dominated by the Lockheed Martin F-35 with 3,000 deliveries over 20 years.[9]
The word "fighter" was first used to describe a two-seater aircraft with sufficient lift to carry a machine gun and its operator as well as the pilot. Some of the first such "fighters" belonged to the "gunbus" series of experimental gun carriers of the British Vickers company that culminated in the Vickers F.B.5 Gunbus of 1914. The main drawback of this type of aircraft was its lack of speed. Planners quickly realized that an aircraft intended to destroy its kind in the air had to be fast enough to catch its quarry.
One of the first companies to develop an armed aircraft was Vickers. Their Type 18 Destroyer of 1913 was a two-seat pusher type, with the pilot behind and an observer/gunner in front and a machine gun fitted in the nose on a pivoting mount. It would be developed as the F.B.5 "Gunbus" and introduced into service in 1915.[10]
However at the outbreak of World War I, front-line aircraft were unarmed and used almost entirely for reconnaissance. On 15 August 1914, Miodrag Tomić encountered an enemy plane while conducting a reconnaissance flight over Austria-Hungary. The enemy pilot shot at Tomić's plane with a revolver.[11] Tomić produced a pistol of his own and fired back.[12][13] It was considered the first exchange of fire between aircraft in history.[14] Within weeks, all Serbian and Austro-Hungarian aircraft were armed.[11] Machine guns were soon fitted to existing reconnaissance types for use by the observer, but none of these was a true fighter plane.
Another type of military aircraft was to form the basis for an effective "fighter" in the modern sense of the word. It was based on the small fast aircraft developed before the war for such air races as the Gordon Bennett Cup and Schneider Trophy. The military scout airplane was not expected to carry serious armament, but rather to rely on its speed to reach the location to be scouted or reconnoitered and return quickly to report – essentially an aerial horse. British scout aircraft, in this sense, included the Sopwith Tabloid and Bristol Scout. French equivalents included the Morane-Saulnier N.
The next advance came with the fixed forward-firing machine gun, so that the pilot pointed the whole plane at the target and fired the gun, instead of relying on a second gunner. Roland Garros bolted metal deflector plates to the propeller so that bullets striking the blades would not bring the aircraft down, and a number of Morane-Saulnier Ns were modified in this way. The technique proved effective; however, the deflected bullets were still highly dangerous.[15]
The next fighter manufactured in any quantity was the Fokker E.I Eindecker and its derivatives, whose introduction in 1915, only a few months after the appearance of the slower Gunbus, ushered in what the Allies came to call the "Fokker scourge" and a period of air superiority for the German forces. Although it still had mediocre flying qualities, the Fokker's unique innovation was an interrupter gear which allowed the gun to fire through the propeller arc without hitting the blades.[16]
Soon after the commencement of the war, pilots armed themselves with pistols, carbines, grenades, and an assortment of improvised weapons. Many of these proved ineffective as the pilot had to fly his airplane while attempting to aim a handheld weapon and make a difficult deflection shot. The first step in finding a real solution was to mount the weapon on the aircraft, but the propeller remained a problem since the best direction to shoot is straight ahead. Numerous solutions were tried. A second crew member behind the pilot could aim and fire a swivel-mounted machine gun at enemy airplanes; however, this limited the area of coverage chiefly to the rear hemisphere, and effective coordination of the pilot's maneuvering with the gunner's aiming was difficult. This option was chiefly employed as a defensive measure on two-seater reconnaissance aircraft from 1915 on. Both the SPAD S.A and the Royal Aircraft Factory B.E.9 added a second crewman ahead of the engine in a pod but this was both hazardous to the second crewman and limited performance. The Sopwith L.R.T.Tr. similarly added a pod on the top wing with no better luck.
An alternative was to build a "pusher" scout such as the Airco DH.2, with the propeller mounted behind the pilot. The main drawback was that the high drag of a pusher type's tail structure made it slower than a similar "tractor" aircraft.
A better solution for a single-seat scout was to mount the machine gun (rifles and pistols having been dispensed with) to fire forwards but outside the propeller arc. Wing guns were tried but the unreliable weapons available required frequent clearing of jammed rounds and misfires and remained impractical until after the war. Mounting the machine gun over the top wing worked well and was used long after the ideal solution was found. The Nieuport 11 of 1916 and Royal Aircraft Factory S.E.5 of 1918 both used this system with considerable success; however, this placement made aiming difficult and the location made it difficult for a pilot to both maneuver and have access to the gun's breech. The British Foster mounting was specifically designed for this kind of application, fitted with the Lewis machine gun, which due to its design was unsuitable for synchronizing.
The need to arm a tractor scout with a forward-firing gun whose bullets passed through the propeller arc was evident even before the outbreak of war and inventors in both France and Germany devised mechanisms that could time the firing of the individual rounds to avoid hitting the propeller blades. Franz Schneider, a Swiss engineer, had patented such a device in Germany in 1913, but his original work was not followed up. French aircraft designer Raymond Saulnier patented a practical device in April 1914, but trials were unsuccessful because of the propensity of the machine gun employed to hang fire due to unreliable ammunition.
In December 1914, French aviator Roland Garros asked Saulnier to install his synchronization gear on Garros' Morane-Saulnier Type L. Unfortunately the gas-operated Hotchkiss machine gun he was provided had an erratic rate of fire and it was impossible to synchronize it with a spinning propeller. As an interim measure, the propeller blades were armored and fitted with metal wedges to protect the pilot from ricochets. Garros' modified monoplane was first flown in March 1915 and he began combat operations soon thereafter. Garros scored three victories in three weeks before he himself was downed on 18 April and his airplane, along with its synchronization gear and propeller was captured by the Germans.
Meanwhile, the synchronization gear (called the Stangensteuerung in German, for "pushrod control system") devised by the engineers of Anthony Fokker's firm was the first system to see production contracts, and would make the Fokker Eindecker monoplane a feared name over the Western Front, despite its being an adaptation of an obsolete pre-war French Morane-Saulnier racing airplane, with a mediocre performance and poor flight characteristics. The first victory for the Eindecker came on 1 July 1915, when Leutnant Kurt Wintgens, flying with the Feldflieger Abteilung 6 unit on the Western Front, forced down a Morane-Saulnier Type L two-seat "parasol" monoplane just east of Luneville. Wintgens' aircraft, one of the five Fokker M.5K/MG production prototype examples of the Eindecker, was armed with a synchronized, air-cooled aviation version of the Parabellum MG14 machine gun.
The success of the Eindecker kicked off a competitive cycle of improvement among the combatants, both sides striving to build ever more capable single-seat fighters. The Albatros D.I and Sopwith Pup of 1916 set the classic pattern followed by fighters for about twenty years. Most were biplanes and only rarely monoplanes or triplanes. The strong box structure of the biplane provided a rigid wing that allowed the accurate lateral control essential for dogfighting. They had a single operator, who flew the aircraft and also controlled its armament. They were armed with one or two Maxim or Vickers machine guns, which were easier to synchronize than other types, firing through the propeller arc. Gun breeches were directly in front of the pilot, with obvious implications in case of accidents, but jams could be cleared in flight, while aiming was simplified.
The use of metal aircraft structures was pioneered before World War I by Breguet but would find its biggest proponent in Anthony Fokker, who used chrome-molybdenum steel tubing for the fuselage structure of all his fighter designs, while the innovative German engineer Hugo Junkers developed two all-metal, single-seat fighter monoplane designs with cantilever wings: the strictly experimental Junkers J 2 private-venture aircraft, made with steel, and some forty examples of the Junkers D.I, made with corrugated duralumin, all based on his experience in creating the pioneering Junkers J 1 all-metal airframe technology demonstration aircraft of late 1915. While Fokker would pursue steel tube fuselages with wooden wings until the late 1930s, and Junkers would focus on corrugated sheet metal, Dornier was the first to build a fighter (The Dornier-Zeppelin D.I) made with pre-stressed sheet aluminum and having cantilevered wings, a form that would replace all others in the 1930s.
As collective combat experience grew, the more successful pilots such as Oswald Boelcke, Max Immelmann, and Edward Mannock developed innovative tactical formations and maneuvers to enhance their air units' combat effectiveness.
Allied and – before 1918 – German pilots of World War I were not equipped with parachutes, so in-flight fires or structural failure were often fatal. Parachutes were well developed by 1918, having previously been used by balloonists, and were adopted by the German flying services during the course of that year. The well-known and feared Manfred von Richthofen, the "Red Baron", was wearing one when he was killed, but the Allied command continued to oppose their use on various grounds.[17]
In April 1917, during a brief period of German aerial supremacy a British pilot's average life expectancy was 93 flying hours, or about three weeks of active service.[18][19] More than 50,000 airmen from both sides died during the war.[20]
Fighter development stagnated between the wars, especially in the United States and the United Kingdom, where budgets were small. In France, Italy and Russia, where large budgets continued to allow major development, both monoplanes and all metal structures were common. By the end of the 1920s, however, those countries overspent themselves and were overtaken in the 1930s by those powers that hadn't been spending heavily, namely the British, the Americans and the Germans.
Given limited defense budgets, air forces tended to be conservative in their aircraft purchases, and biplanes remained popular with pilots because of their agility, and remained in service long after they had ceased to be competitive. Designs such as the Gloster Gladiator, Fiat CR.42, and Polikarpov I-15 were common even in the late 1930s, and many were still in service as late as 1942. Up until the mid-1930s, the majority of fighters in the US, the UK, Italy and Russia remained fabric-covered biplanes.
Fighter armament eventually began to be mounted inside the wings, outside the arc of the propeller, though most designs retained two synchronized machine guns directly ahead of the pilot, where they were more accurate (that being the strongest part of the structure, reducing the vibration to which the guns were subjected). Shooting with this traditional arrangement was also easier because the guns fired straight ahead in the direction of the aircraft's flight, up to the limit of their range; wing-mounted guns, by contrast, had to be harmonised to be effective, that is, preset by ground crews to fire at a slight inward angle so that their bullets would converge on a target area a set distance ahead of the fighter. Rifle-caliber .30 and .303 in (7.62 mm) guns remained the norm, with larger weapons either being too heavy and cumbersome or deemed unnecessary against such lightly built aircraft. It was not considered unreasonable to use World War I-style armament to counter enemy fighters, as there was insufficient air-to-air combat during most of the period to disprove this notion.
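The harmonisation just described amounts to simple trigonometry; the sketch below uses illustrative figures (a hypothetical gun offset and convergence distance, not any particular aircraft's settings) to show how the required inward angle would be found:

import math

# Illustrative only: a wing-mounted gun sits some lateral distance from the
# aircraft's centreline, so ground crews angle it inward slightly ("toe-in")
# to make its fire converge on the centreline at a chosen distance ahead.
gun_offset_m = 2.5              # hypothetical lateral offset of the gun
convergence_distance_m = 250.0  # hypothetical harmonisation range

toe_in = math.degrees(math.atan(gun_offset_m / convergence_distance_m))
print(f"toe-in angle: {toe_in:.2f} degrees")  # roughly 0.57 degrees
# Beyond the convergence point the streams of fire cross and spread apart again,
# which is why the harmonisation range was chosen to suit expected engagement distances.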
The rotary engine, popular during World War I, quickly disappeared, its development having reached the point where rotational forces prevented more fuel and air from being delivered to the cylinders, which limited horsepower. They were replaced chiefly by the stationary radial engine though major advances led to inline engines, which gained ground with several exceptional engines—including the 1,145 cu in (18.76 l) V-12 Curtiss D-12. Aircraft engines increased in power several-fold over the period, going from a typical 180 hp (130 kW) in the 900-kg Fokker D.VII of 1918 to 900 hp (670 kW) in the 2,500-kg Curtiss P-36 of 1936. The debate between the sleek in-line engines versus the more reliable radial models continued, with naval air forces preferring the radial engines, and land-based forces often choosing in-line units. Radial designs did not require a separate (and vulnerable) cooling system, but had increased drag. In-line engines often had a better power-to-weight ratio, but there were radial engines that kept working even after having suffered significant battle damage.
Some air forces experimented with "heavy fighters" (called "destroyers" by the Germans). These were larger, usually twin-engined aircraft, sometimes adaptations of light or medium bomber types. Such designs typically had greater internal fuel capacity (thus longer range) and heavier armament than their single-engine counterparts. In combat, they proved vulnerable to more agile single-engine fighters.
The primary driver of fighter innovation, right up to the period of rapid re-armament in the late 1930s, was not military budgets but civilian aircraft racing. Aircraft designed for these races introduced innovations like streamlining and more powerful engines that would find their way into the fighters of World War II. The most significant of these were the Schneider Trophy races, where competition grew so fierce that only national governments could afford to enter.
At the very end of the inter-war period in Europe came the Spanish Civil War. This was just the opportunity the German Luftwaffe, Italian Regia Aeronautica, and the Soviet Union's Red Air Force needed to test their latest aircraft. Each party sent numerous aircraft types to support their sides in the conflict. In the dogfights over Spain, the latest Messerschmitt Bf 109 fighters did well, as did the Soviet Polikarpov I-16. The German design had considerably more room for development however and the lessons learned led to greatly improved models in World War II. The Russians, whose side lost, failed to keep up and despite newer models coming into service, I-16s were outfought by the improved Bf 109s in World War II, while remaining the most common Soviet front-line fighter into 1942. For their part, the Italians developed several monoplanes such as the Fiat G.50, but being short on funds, were forced to continue operating obsolete Fiat CR.42 biplanes.
From the early 1930s the Japanese had been at war against both the Chinese Nationalists and the Russians in China, and used the experience to improve both training and aircraft, replacing biplanes with modern cantilever monoplanes and creating a cadre of exceptional pilots for use in the Pacific War. In the United Kingdom, at the behest of Neville Chamberlain, (more famous for his 'peace in our time' speech) the entire British aviation industry was retooled, allowing it to change quickly from fabric covered metal framed biplanes to cantilever stressed skin monoplanes in time for the war with Germany.
The period of improving the same biplane design over and over was now coming to an end, and the Hawker Hurricane and Supermarine Spitfire finally started to supplant the Gloster Gladiator and Hawker Fury biplanes, though many of the older biplanes remained in front-line service well past the start of World War II. Although Britain was not a combatant in Spain, the British absorbed many of the lessons learned there in time to use them.
The Spanish Civil War also provided an opportunity for updating fighter tactics. One of the innovations to result from the aerial warfare experience this conflict provided was the development of the "finger-four" formation by the German pilot Werner Mölders. Each fighter squadron (German: Staffel) was divided into several flights (Schwärme) of four aircraft. Each Schwarm was divided into two Rotten, each a pair of aircraft. Each Rotte was composed of a leader and a wingman. This flexible formation allowed the pilots to maintain greater situational awareness, and the two Rotten could split up at any time and attack on their own. The finger-four would become widely adopted as the fundamental tactical formation over the course of World War II.
World War II featured fighter combat on a larger scale than any other conflict to date. German Field Marshal Erwin Rommel noted the effect of airpower: "Anyone who has to fight, even with the most modern weapons, against an enemy in complete command of the air, fights like a savage against modern European troops, under the same handicaps and with the same chances of success."[citation needed] Throughout the war, fighters performed their conventional role in establishing air superiority through combat with other fighters and through bomber interception, and also often performed roles such as tactical air support and reconnaissance.
Fighter design varied widely among combatants. The Japanese and Italians favored lightly armed and armored but highly maneuverable designs such as the Japanese Nakajima Ki-27, Nakajima Ki-43 and Mitsubishi A6M Zero and the Italian Fiat G.50 and Macchi MC.200. In contrast, designers in the United Kingdom, Germany, the Soviet Union, and the United States believed that the increased speed of fighter aircraft would create g-forces unbearable to pilots who attempted maneuvering dogfights typical of the First World War, and their fighters were instead optimized for speed and firepower. In practice, while light, highly maneuverable aircraft did possess some advantages in fighter-versus-fighter combat, those could usually be overcome by sound tactical doctrine, and the design approach of the Italians and Japanese made their fighters ill-suited as interceptors or attack aircraft.
During the invasion of Poland and the Battle of France, Luftwaffe fighters—primarily the Messerschmitt Bf 109—held air superiority, and the Luftwaffe played a major role in German victories in these campaigns. During the Battle of Britain, however, British Hurricanes and Spitfires proved roughly equal to Luftwaffe fighters. Additionally Britain's radar-based Dowding system directing fighters onto German attacks and the advantages of fighting above Britain's home territory allowed the RAF to deny Germany air superiority, saving the UK from possible German invasion and dealing the Axis a major defeat early in the Second World War.
On the Eastern Front, Soviet fighter forces were overwhelmed during the opening phases of Operation Barbarossa. This was a result of the tactical surprise at the outset of the campaign, the leadership vacuum within the Soviet military left by the Great Purge, and the general inferiority of Soviet designs at the time, such as the obsolescent I-15 biplane and the I-16. More modern Soviet designs, including the MiG-3, LaGG-3 and Yak-1, had not yet arrived in numbers and in any case were still inferior to the Messerschmitt Bf 109. As a result, during the early months of these campaigns, Axis air forces destroyed large numbers of Red Air Force aircraft on the ground and in one-sided dogfights.
In the later stages of the war on the Eastern Front, Soviet training and leadership improved, as did their equipment. From 1942, Soviet designs such as the Yakovlev Yak-9 and Lavochkin La-5 had performance comparable to the German Bf 109 and Focke-Wulf Fw 190. Also, significant numbers of British, and later U.S., fighter aircraft were supplied to aid the Soviet war effort as part of Lend-Lease, with the Bell P-39 Airacobra proving particularly effective in the lower-altitude combat typical of the Eastern Front. The Soviets were also helped indirectly by the American and British bombing campaigns, which forced the Luftwaffe to shift many of its fighters away from the Eastern Front in defense against these raids. The Soviets increasingly were able to challenge the Luftwaffe, and while the Luftwaffe maintained a qualitative edge over the Red Air Force for much of the war, the increasing numbers and efficacy of the Soviet Air Force were critical to the Red Army's efforts at turning back and eventually annihilating the Wehrmacht.
Meanwhile, air combat on the Western Front had a much different character. Much of this combat focused on the strategic bombing campaigns of the RAF and the USAAF against German industry intended to wear down the Luftwaffe. Axis fighter aircraft focused on defending against Allied bombers while Allied fighters' main role was as bomber escorts. The RAF raided German cities at night, and both sides developed radar-equipped night fighters for these battles. The Americans, in contrast, flew daylight bombing raids into Germany. Unescorted Consolidated B-24 Liberators and Boeing B-17 Flying Fortress bombers, however, proved unable to fend off German interceptors (primarily Bf 109s and Fw 190s). With the later arrival of long range fighters, particularly the North American P-51 Mustang, American fighters were able to escort far into Germany on daylight raids and established control of the skies over Western Europe.
By the time of Operation Overlord in June 1944, the Allies had gained near complete air superiority over the Western Front. This cleared the way both for intensified strategic bombing of German cities and industries, and for the tactical bombing of battlefield targets. With the Luftwaffe largely cleared from the skies, Allied fighters increasingly served as attack aircraft.
Allied fighters, by gaining air superiority over the European battlefield, played a crucial role in the eventual defeat of the Axis, which Reichsmarschall Hermann Göring, commander of the German Luftwaffe, summed up when he said: "When I saw Mustangs over Berlin, I knew the jig was up."[21]
Major air combat during the war in the Pacific began with the entry of the Western Allies following Japan's attack against Pearl Harbor. The Imperial Japanese Navy Air Service primarily operated the Mitsubishi A6M Zero, and the Imperial Japanese Army Air Service flew the Nakajima Ki-27 and the Nakajima Ki-43, initially enjoying great success, as these fighters generally had better range, maneuverability, speed and climb rates than their Allied counterparts.[22][23] Additionally, Japanese pilots had received excellent training and many were combat veterans from Japan's campaigns in China. They quickly gained air superiority over the Allies, who at this stage of the war were often disorganized, under-trained and poorly equipped, and Japanese air power contributed significantly to their successes in the Philippines, Malaysia and Singapore, the Dutch East Indies and Burma.
By mid-1942, the Allies began to regroup and while some Allied aircraft such as the Brewster Buffalo and the P-39 were hopelessly outclassed by fighters like Japan's Zero, others such as the Army's P-40 and the Navy's Wildcat possessed attributes such as superior firepower, ruggedness and dive speed, and the Allies soon developed tactics (such as the Thach Weave) to take advantage of these strengths. These changes soon paid dividends, as the Allied ability to deny Japan air superiority was critical to their victories at Coral Sea, Midway, Guadalcanal and New Guinea. In China, the Flying Tigers also used the same tactics with some success, although they were unable to stem the tide of Japanese advances there.
By 1943, the Allies began to gain the upper hand in the Pacific Campaign's air campaigns. Several factors contributed to this shift. First, the P-38 and second-generation Allied fighters such as the Hellcat and later the Corsair, the P-47 and the P-51, began arriving in numbers. These fighters outperformed Japanese fighters in all respects except maneuverability. Other problems with Japan's fighter aircraft also became apparent as the war progressed, such as their lack of armor and light armament, which made them inadequate as bomber interceptors or ground-attack planes – roles Allied fighters excelled at. Most importantly, Japan's training program failed to provide enough well-trained pilots to replace losses. In contrast, the Allies improved both the quantity and quality of pilots graduating from their training programs.
By mid-1944, Allied fighters had gained air superiority throughout the theater, which would not be contested again during the war. The extent of Allied quantitative and qualitative superiority by this point in the war was demonstrated during the Battle of the Philippine Sea, a lopsided Allied victory in which Japanese fliers were downed in such numbers and with such ease that American fighter pilots likened it to a great turkey shoot.
Late in the war, Japan did begin to produce new fighters such as the Nakajima Ki-84 and the Kawanishi N1K to replace the venerable Zero, but these were produced only in small numbers, and in any case by that time Japan lacked trained pilots or sufficient fuel to mount a sustained challenge to Allied fighters. During the closing stages of the war, Japan's fighter arm could not seriously challenge raids over Japan by American B-29s, and was largely relegated to Kamikaze tactics.
Fighter technology advanced rapidly during the Second World War. Piston-engines, which powered the vast majority of World War II fighters, grew more powerful: at the beginning of the war fighters typically had engines producing between 1,000 hp (750 kW) and 1,400 hp (1,000 kW), while by the end of the war many could produce over 2,000 hp (1,500 kW). For example, the Spitfire, one of the few fighters in continuous production throughout the war, was in 1939 powered by a 1,030 hp (770 kW) Merlin II, while variants produced in 1945 were equipped with the 2,035 hp (1,517 kW) Griffon 61. Nevertheless, these fighters could only achieve modest increases in top speed due to problems of compressibility created as aircraft and their propellers approached the sound barrier, and it was apparent that propeller-driven aircraft were approaching the limits of their performance. German jet and rocket-powered fighters entered combat in 1944, too late to impact the war's outcome. The same year the Allies' only operational jet fighter, the Gloster Meteor, also entered service.
World War II fighters also increasingly featured monocoque construction, which improved their aerodynamic efficiency while adding structural strength. Laminar flow wings, which improved high speed performance, also came into use on fighters such as the P-51, while the Messerschmitt Me 262 and the Messerschmitt Me 163 featured swept wings that dramatically reduced drag at high subsonic speeds.
Armament also advanced during the war. The rifle-caliber machine guns that were common on prewar fighters could not easily down the more rugged warplanes of the era. Air forces began to replace or supplement them with cannons, which fired explosive shells that could blast a hole in an enemy aircraft – rather than relying on kinetic energy from a solid bullet striking a critical component of the aircraft, such as a fuel line or control cable, or the pilot. Cannons could bring down even heavy bombers with just a few hits, but their slower rate of fire made it difficult to hit fast-moving fighters in a dogfight. Eventually, most fighters mounted cannons, sometimes in combination with machine guns.
The British epitomized this shift. Their standard early war fighters mounted eight .303-inch (7.7 mm) caliber machine guns, but by mid-war they often featured a combination of machine guns and 20 mm cannons, and late in the war often only cannons. The Americans, in contrast, had problems producing a native cannon design, so instead placed multiple .50 caliber (12.7 mm) heavy machine guns on their fighters. Fighters were also increasingly fitted with bomb racks and air-to-surface ordnance such as bombs or rockets beneath their wings, and pressed into close air support roles as fighter-bombers. Although they carried less ordnance than light and medium bombers, and generally had a shorter range, they were cheaper to produce and maintain and their maneuverability made it easier for them to hit moving targets such as motorized vehicles. Moreover, if they encountered enemy fighters, their ordnance (which reduced lift and increased drag and therefore decreased performance) could be jettisoned and they could engage the enemy fighters, which eliminated the need for the fighter escorts that bombers required. Heavily armed and sturdily constructed fighters such as Germany's Focke-Wulf Fw 190, Britain's Hawker Typhoon and Hawker Tempest, and America's P-40, Corsair, P-47 and P-38 all excelled as fighter-bombers, and since the Second World War ground attack has been an important secondary capability of many fighters.
World War II also saw the first use of airborne radar on fighters. The primary purpose of these radars was to help night fighters locate enemy bombers and fighters. Because of the bulkiness of these radar sets, they could not be carried on conventional single-engined fighters and instead were typically retrofitted to larger heavy fighters or light bombers such as Germany's Messerschmitt Bf 110 and Junkers Ju 88, Britain's Mosquito and Beaufighter, and America's A-20, which then served as night fighters. The Northrop P-61 Black Widow, a purpose-built night fighter, was the only fighter of the war that incorporated radar into its original design. Britain and America cooperated closely in the development of airborne radar, and Germany's radar technology generally lagged slightly behind Anglo-American efforts, while other combatants developed few radar-equipped fighters.
Several prototype fighter programs begun early in 1945 continued on after the war and led to advanced piston-engine fighters that entered production and operational service in 1946. A typical example is the Lavochkin La-9 'Fritz', which was an evolution of the successful wartime Lavochkin La-7 'Fin'. Working through a series of prototypes, the La-120, La-126 and La-130, the Lavochkin design bureau sought to replace the La-7's wooden airframe with a metal one, as well as fit a laminar-flow wing to improve maneuver performance, and increased armament. The La-9 entered service in August 1946 and was produced until 1948; it also served as the basis for the development of a long-range escort fighter, the La-11 'Fang', of which nearly 1200 were produced 1947–1951. Over the course of the Korean War, however, it became obvious that the day of the piston-engined fighter was coming to a close and that the future would lie with the jet fighter.
This period also witnessed experimentation with jet-assisted piston engine aircraft. La-9 derivatives included examples fitted with two underwing auxiliary pulsejet engines (the La-9RD) and a similarly mounted pair of auxiliary ramjet engines (the La-138); however, neither of these entered service. One that did enter service – with the U.S. Navy in March 1945 – was the Ryan FR-1 Fireball; production was halted with the war's end on VJ-Day, with only 66 having been delivered, and the type was withdrawn from service in 1947. The USAAF had ordered its first 13 mixed turboprop-turbojet-powered pre-production prototypes of the Consolidated Vultee XP-81 fighter, but this program was also canceled by VJ Day, with 80% of the engineering work completed.
The first rocket-powered aircraft was the Lippisch Ente, which made a successful maiden flight in March 1928.[24] The only pure rocket aircraft ever mass-produced was the Messerschmitt Me 163B Komet in 1944, one of several German World War II projects aimed at developing high speed, point-defense aircraft.[25] Later variants of the Me 262 (C-1a and C-2b) were also fitted with "mixed-power" jet/rocket powerplants, while earlier models were fitted with rocket boosters, but were not mass-produced with these modifications.[26]
The USSR experimented with a rocket-powered interceptor in the years immediately following World War II, the Mikoyan-Gurevich I-270. Only two were built.
In the 1950s, the British developed mixed-power jet designs employing both rocket and jet engines to cover the performance gap that existed in turbojet designs. The rocket was the main engine for delivering the speed and height required for high-speed interception of high-level bombers and the turbojet gave increased fuel economy in other parts of flight, most notably to ensure the aircraft was able to make a powered landing rather than risking an unpredictable gliding return.
The Saunders-Roe SR.53 was a successful design, and was planned for production when economics forced the British to curtail most aircraft programs in the late 1950s. Furthermore, rapid advancements in jet engine technology rendered mixed-power aircraft designs like Saunders-Roe's SR.53 (and the following SR.177) obsolete. The American Republic XF-91 Thunderceptor –the first U.S. fighter to exceed Mach 1 in level flight– met a similar fate for the same reason, and no hybrid rocket-and-jet-engine fighter design has ever been placed into service.
The only operational implementation of mixed propulsion was Rocket-Assisted Take Off (RATO), a system rarely used in fighters. One example was the zero-length launch scheme, in which RATO boosted fighters from special launch platforms; it was tested by both the United States and the Soviet Union and was made obsolete by advancements in surface-to-air missile technology.
It has become common in the aviation community to classify jet fighters by "generations" for historical purposes.[27] No official definitions of these generations exist; rather, they represent the notion of stages in the development of fighter-design approaches, performance capabilities, and technological evolution. Different authors have packed jet fighters into different generations. For example, Richard P. Hallion of the Secretary of the Air Force's Action Group classified the F-16 as a sixth-generation jet fighter.[28]
The timeframes associated with each generation remain inexact and are only indicative of the period during which their design philosophies and technology employment enjoyed a prevailing influence on fighter design and development. These timeframes also encompass the peak period of service entry for such aircraft.
The first generation of jet fighters comprised the initial, subsonic jet-fighter designs introduced late in World War II (1939–1945) and in the early post-war period. They differed little from their piston-engined counterparts in appearance, and many employed unswept wings. Guns and cannon remained the principal armament. The need to obtain a decisive advantage in maximum speed pushed the development of turbojet-powered aircraft forward. Top speeds for fighters rose steadily throughout World War II as more powerful piston engines developed, and they approached transonic flight-speeds where the efficiency of propellers drops off, making further speed increases nearly impossible.
The first jets were developed during World War II and saw combat in the last two years of the war. Messerschmitt developed the first operational jet fighter, the Me 262A, primarily serving with the Luftwaffe's JG 7, the world's first jet-fighter wing. It was considerably faster than contemporary piston-driven aircraft, and in the hands of a competent pilot proved quite difficult for Allied pilots to defeat. The Luftwaffe never deployed the design in numbers sufficient to stop the Allied air campaign, and a combination of fuel shortages, pilot losses, and technical difficulties with the engines kept the number of sorties low. Nevertheless, the Me 262 indicated the obsolescence of piston-driven aircraft. Spurred by reports of the German jets, Britain's Gloster Meteor entered production soon after, and the two entered service around the same time in 1944. Meteors commonly served to intercept the V-1 flying bomb, as they were faster than available piston-engined fighters at the low altitudes used by the flying bombs. Nearer the end of World War II, the Luftwaffe introduced the first military jet-powered light-fighter design, the Heinkel He 162A Spatz (sparrow), intended as a simple jet fighter for German home defense, with a few examples seeing squadron service with JG 1 by April 1945. By the end of the war almost all work on piston-powered fighters had ended. A few designs combining piston and jet engines for propulsion – such as the Ryan FR Fireball – saw brief use, but by the end of the 1940s virtually all new fighters were jet-powered.
Despite their advantages, the early jet fighters were far from perfect. The operational lifespan of turbines was very short and engines were temperamental, while power could be adjusted only slowly and acceleration was poor (even if top speed was higher) compared to the final generation of piston fighters. Many squadrons of piston-engined fighters remained in service until the early to mid-1950s, even in the air forces of the major powers (though the types retained were the best of the World War II designs). Innovations including ejection seats, air brakes and all-moving tailplanes became widespread in this period.
The Americans began using jet fighters operationally after World War II, the wartime Bell P-59 having proven a failure. The Lockheed P-80 Shooting Star (soon re-designated F-80) was less elegant than the swept-wing Me 262, but had a cruise speed (660 km/h (410 mph)) as high as the maximum speed attainable by many piston-engined fighters. The British designed several new jets, including the distinctive single-engined twin boom de Havilland Vampire which Britain sold to the air forces of many nations.
The British transferred the technology of the Rolls-Royce Nene jet engine to the Soviets, who soon put it to use in their advanced Mikoyan-Gurevich MiG-15 fighter, which used fully swept wings that allowed flying closer to the speed of sound than straight-winged designs such as the F-80. The MiG-15s' top speed of 1,075 km/h (668 mph) proved quite a shock to the American F-80 pilots who encountered them in the Korean War, along with their armament of two 23 mm cannons and a single 37 mm cannon. Nevertheless, in the first jet-versus-jet dogfight, which occurred during the Korean War on 8 November 1950, an F-80 shot down two North Korean MiG-15s.
The Americans responded by rushing their own swept-wing fighter – the North American F-86 Sabre – into battle against the MiGs, which had similar transonic performance. The two aircraft had different strengths and weaknesses, but were similar enough that victory could go either way. While the Sabres focused primarily on downing MiGs and scored favorably against those flown by the poorly-trained North Koreans, the MiGs in turn decimated US bomber formations and forced the withdrawal of numerous American types from operational service.
The world's navies also transitioned to jets during this period, despite the need for catapult-launching of the new aircraft. The U.S. Navy adopted the Grumman F9F Panther as their primary jet fighter in the Korean War period, and it was one of the first jet fighters to employ an afterburner. The de Havilland Sea Vampire became the Royal Navy's first jet fighter. Radar was used on specialized night-fighters such as the Douglas F3D Skyknight, which also downed MiGs over Korea, and later fitted to the McDonnell F2H Banshee and swept-wing Vought F7U Cutlass and McDonnell F3H Demon as all-weather / night fighters. Early versions of Infra-red (IR) air-to-air missiles (AAMs) such as the AIM-9 Sidewinder and radar-guided missiles such as the AIM-7 Sparrow whose descendants remain in use as of 2019[update], were first introduced on swept-wing subsonic Demon and Cutlass naval fighters.
Technological breakthroughs, lessons learned from the aerial battles of the Korean War, and a focus on conducting operations in a nuclear warfare environment shaped the development of second-generation fighters. Technological advances in aerodynamics, propulsion and aerospace building-materials (primarily aluminum alloys) permitted designers to experiment with aeronautical innovations such as swept wings, delta wings, and area-ruled fuselages. Widespread use of afterburning turbojet engines made these the first production aircraft to break the sound barrier, and the ability to sustain supersonic speeds in level flight became a common capability amongst fighters of this generation.
Fighter designs also took advantage of new electronics technologies that made effective radars small enough to carry aboard smaller aircraft. Onboard radars permitted detection of enemy aircraft beyond visual range, thereby improving the handoff of targets by longer-ranged ground-based warning- and tracking-radars. Similarly, advances in guided-missile development allowed air-to-air missiles to begin supplementing the gun as the primary offensive weapon for the first time in fighter history. During this period, passive-homing infrared-guided (IR) missiles became commonplace, but early IR missile sensors had poor sensitivity and a very narrow field of view (typically no more than 30°), which limited their effective use to only close-range, tail-chase engagements. Radar-guided (RF) missiles were introduced[by whom?] as well, but early examples proved unreliable. These semi-active radar homing (SARH) missiles could track and intercept an enemy aircraft "painted" by the launching aircraft's onboard radar. Medium- and long-range RF air-to-air missiles promised to open up a new dimension of "beyond-visual-range" (BVR) combat, and much effort concentrated on further development of this technology.
The prospect of a potential third world war featuring large mechanized armies and nuclear-weapon strikes led to a degree of specialization along two design approaches: interceptors, such as the English Electric Lightning and Mikoyan-Gurevich MiG-21F; and fighter-bombers, such as the Republic F-105 Thunderchief and the Sukhoi Su-7B. Dogfighting, per se, became de-emphasized in both cases. The interceptor was an outgrowth of the vision that guided missiles would completely replace guns and combat would take place at beyond-visual ranges. As a result, strategists designed interceptors with a large missile-payload and a powerful radar, sacrificing agility in favor of high speed, altitude ceiling and rate of climb. With a primary air-defense role, emphasis was placed on the ability to intercept strategic bombers flying at high altitudes. Specialized point-defense interceptors often had limited range and little, if any, ground-attack capabilities. Fighter-bombers could swing between air-superiority and ground-attack roles, and were often designed for a high-speed, low-altitude dash to deliver their ordnance. Television- and IR-guided air-to-surface missiles were introduced to augment traditional gravity bombs, and some were also equipped to deliver a nuclear bomb.
|
180 |
+
|
181 |
+
The third generation witnessed continued maturation of second-generation innovations, but it is most marked by a renewed emphasis on maneuverability and on traditional ground-attack capabilities. Over the course of the 1960s, increasing combat experience with guided missiles demonstrated that combat would devolve into close-in dogfights. Analog avionics began to appear, replacing older "steam-gauge" cockpit instrumentation. Enhancements to the aerodynamic performance of third-generation fighters included flight control surfaces such as canards, powered slats, and blown flaps. A number of technologies were tried for vertical/short takeoff and landing, but thrust vectoring proved successful on the Harrier.
|
182 |
+
|
183 |
+
Growth in air-combat capability focused on the introduction of improved air-to-air missiles, radar systems, and other avionics. While guns remained standard equipment (early models of the F-4 being a notable exception), air-to-air missiles became the primary weapons for air-superiority fighters, which employed more sophisticated radars and medium-range RF AAMs to achieve greater "stand-off" ranges; however, kill probabilities proved unexpectedly low for RF missiles due to poor reliability and improved electronic countermeasures (ECM) for spoofing radar seekers. Infrared-homing AAMs saw their fields of view expand to 45°, which strengthened their tactical usability. Nevertheless, the poor dogfight loss-exchange ratios experienced by American fighters in the skies over Vietnam led the U.S. Navy to establish its famous "TOPGUN" fighter-weapons school, which provided a graduate-level curriculum to train fleet fighter pilots in advanced Air Combat Maneuvering (ACM) and Dissimilar Air Combat Training (DACT) tactics and techniques.
|
184 |
+
|
185 |
+
This era also saw an expansion in ground-attack capabilities, principally in guided missiles, and witnessed the introduction of the first truly effective avionics for enhanced ground attack, including terrain-avoidance systems. Air-to-surface missiles (ASM) equipped with electro-optical (E-O) contrast seekers – such as the initial model of the widely used AGM-65 Maverick – became standard weapons, and laser-guided bombs (LGBs) became widespread in an effort to improve precision-attack capabilities. Guidance for such precision-guided munitions (PGM) was provided by externally-mounted targeting pods, which were introduced[by whom?] in the mid-1960s.
|
186 |
+
|
187 |
+
The third generation also led to the development of new automatic-fire weapons, primarily externally powered cannon such as rotary (Gatling-type) and chain guns, which use an electric or hydraulic motor to drive the mechanism. This allowed a plane to carry a single multi-barrel weapon (such as the 20 mm Vulcan), and provided greater accuracy and rates of fire. Powerplant reliability increased, and jet engines became "smokeless" to make it harder to sight aircraft at long distances.
|
188 |
+
|
189 |
+
Dedicated ground-attack aircraft (like the Grumman A-6 Intruder, SEPECAT Jaguar and LTV A-7 Corsair II) offered longer range, more sophisticated night-attack systems or lower cost than supersonic fighters. With variable-geometry wings, the supersonic F-111 introduced the Pratt & Whitney TF30, the first turbofan equipped with an afterburner. The ambitious project sought to create a versatile common fighter for many roles and services. It would serve well as an all-weather bomber, but lacked the performance to defeat other fighters. The McDonnell F-4 Phantom was designed to capitalize on radar and missile technology as an all-weather interceptor, but emerged as a versatile strike-bomber nimble enough to prevail in air combat, adopted by the U.S. Navy, Air Force and Marine Corps. Despite numerous shortcomings that would not be fully addressed until newer fighters arrived, the Phantom claimed 280 aerial kills (more than any other U.S. fighter) over Vietnam.[29] With range and payload capabilities that rivaled those of World War II bombers such as the B-24 Liberator, the Phantom would become a highly successful multirole aircraft.
|
190 |
+
|
191 |
+
Fourth-generation fighters continued the trend towards multirole configurations, and were equipped with increasingly sophisticated avionics- and weapon-systems. Fighter designs were significantly influenced by the Energy-Maneuverability (E-M) theory developed by Colonel John Boyd and mathematician Thomas Christie, based upon Boyd's combat experience in the Korean War and as a fighter-tactics instructor during the 1960s. E-M theory emphasized the value of aircraft-specific energy maintenance as an advantage in fighter combat. Boyd perceived maneuverability as the primary means of getting "inside" an adversary's decision-making cycle, a process Boyd called the "OODA loop" (for "Observation-Orientation-Decision-Action"). This approach emphasized aircraft designs capable of performing "fast transients" – quick changes in speed, altitude, and direction – as opposed to relying chiefly on high speed alone.
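The core quantities of E-M theory are standard flight-mechanics formulas: the energy height E_s = h + V²/(2g) and the specific excess power P_s = V(T − D)/W. A minimal sketch follows; the example numbers are invented for illustration and are not taken from this article.

```python
# Illustrative sketch of the two standard quantities behind E-M theory.
# All numbers below are invented example values, not data from this article.

G = 9.81  # gravitational acceleration, m/s^2

def energy_height(altitude_m, speed_ms):
    """Specific energy E_s = h + V^2 / (2g), expressed in metres."""
    return altitude_m + speed_ms ** 2 / (2 * G)

def specific_excess_power(thrust_n, drag_n, weight_n, speed_ms):
    """P_s = V * (T - D) / W: the rate (m/s) at which E_s can be gained."""
    return speed_ms * (thrust_n - drag_n) / weight_n

# A fighter at 5,000 m and 300 m/s with some surplus thrust:
print(round(energy_height(5000, 300)))                        # ~9587 m
print(round(specific_excess_power(110e3, 60e3, 180e3, 300)))  # ~83 m/s
```

A design with higher specific excess power across more of the flight envelope can regain speed and altitude faster after a "fast transient", which is the advantage Boyd's framework quantifies.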
|
192 |
+
|
193 |
+
E-M characteristics were first applied to the McDonnell Douglas F-15 Eagle, but Boyd and his supporters believed these performance parameters called for a small, lightweight aircraft with a larger, higher-lift wing. The small size would minimize drag and increase the thrust-to-weight ratio, while the larger wing would minimize wing loading; although reduced wing loading tends to lower top speed and can cut range, it increases payload capacity, and the range reduction can be compensated for by carrying more fuel in the larger wing. The efforts of Boyd's "Fighter Mafia" would result in the General Dynamics F-16 Fighting Falcon (now Lockheed Martin's).
|
194 |
+
|
195 |
+
The F-16's maneuverability was further enhanced by its slight aerodynamic instability. This technique, called "relaxed static stability" (RSS), was made possible by the introduction of the "fly-by-wire" (FBW) flight-control system (FLCS), which in turn was enabled by advances in computers and in system-integration techniques. Analog avionics, at first required to enable FBW operation, began to be replaced by digital flight-control systems in the latter half of the 1980s. Likewise, Full Authority Digital Engine Control (FADEC), which electronically manages powerplant performance, was introduced with the Pratt & Whitney F100 turbofan. The F-16's sole reliance on electronics and wires to relay flight commands, instead of the usual cables and mechanical linkages, earned it the sobriquet "the electric jet". Electronic FLCS and FADEC quickly became essential components of all subsequent fighter designs.
|
196 |
+
|
197 |
+
Other innovative technologies introduced in fourth-generation fighters included pulse-Doppler fire-control radars (providing a "look-down/shoot-down" capability), head-up displays (HUD), "hands on throttle-and-stick" (HOTAS) controls, and multi-function displays (MFD), all essential equipment as of 2019[update]. Aircraft designers began to incorporate composite materials in the form of bonded-aluminum honeycomb structural elements and graphite epoxy laminate skins to reduce weight. Infrared search-and-track (IRST) sensors became widespread for air-to-ground weapons delivery, and appeared for air-to-air combat as well. "All-aspect" IR AAMs became standard air-superiority weapons, which permitted engagement of enemy aircraft from any angle (although the field of view remained relatively limited). The first long-range active-radar-homing RF AAM entered service with the AIM-54 Phoenix, which solely equipped the Grumman F-14 Tomcat, one of the few variable-sweep-wing fighter designs to enter production. Even with the tremendous advancement of air-to-air missiles in this era, internal guns remained standard equipment.
|
198 |
+
|
199 |
+
Another revolution came in the form of a stronger emphasis on ease of maintenance, which led to standardization of parts, reductions in the number of access panels and lubrication points, and overall parts reduction in more complicated equipment like the engines. Some early jet fighters required 50 man-hours of work by a ground crew for every hour the aircraft was in the air; later models substantially reduced this to allow faster turn-around times and more sorties in a day. Some modern military aircraft require only 10 man-hours of work per hour of flight time, and others are even more efficient.
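As a rough illustration of why this matters operationally, the sketch below converts maintenance man-hours per flight hour (MMH/FH) into the sorties a fixed ground crew can support per day. The crew size, shift length and sortie duration are assumptions chosen for illustration, not figures from the text.

```python
# Hypothetical sketch: converting maintenance man-hours per flight hour (MMH/FH)
# into sorties a ground crew can support per day. Crew size, shift length and
# sortie duration are assumptions for illustration only.

def sorties_per_day(mmh_per_fh, crew_size, shift_hours, sortie_hours):
    available_man_hours = crew_size * shift_hours
    man_hours_per_sortie = mmh_per_fh * sortie_hours
    return available_man_hours / man_hours_per_sortie

# Early jet (50 MMH/FH) vs. a more modern design (10 MMH/FH),
# with 10 maintainers working 8-hour shifts and 2-hour sorties:
print(sorties_per_day(50, 10, 8, 2))  # 0.8 sorties per day
print(sorties_per_day(10, 10, 8, 2))  # 4.0 sorties per day
```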
|
200 |
+
|
201 |
+
Aerodynamic innovations included variable-camber wings and exploitation of the vortex lift effect to achieve higher angles of attack through the addition of leading-edge extension devices such as strakes.
|
202 |
+
|
203 |
+
Unlike interceptors of the previous eras, most fourth-generation air-superiority fighters were designed to be agile dogfighters (although the Mikoyan MiG-31 and Panavia Tornado ADV are notable exceptions). The continually rising cost of fighters, however, continued to emphasize the value of multirole fighters. The need for both types of fighters led to the "high/low mix" concept, which envisioned a high-capability and high-cost core of dedicated air-superiority fighters (like the F-15 and Su-27) supplemented by a larger contingent of lower-cost multi-role fighters (such as the F-16 and MiG-29).
|
204 |
+
|
205 |
+
Most fourth-generation fighters, such as the McDonnell Douglas F/A-18 Hornet, HAL Tejas, JF-17 and Dassault Mirage 2000, are true multirole warplanes, designed as such from the start. This was facilitated by multimode avionics that could switch seamlessly between air and ground modes. The earlier approaches of adding on strike capabilities or designing separate models specialized for different roles generally became passé (with the Panavia Tornado being an exception in this regard). Attack roles were generally assigned to dedicated ground-attack aircraft such as the Sukhoi Su-25 and the A-10 Thunderbolt II.
|
206 |
+
|
207 |
+
A typical US Air Force fighter wing of the period might contain a mix of one air superiority squadron (F-15C), one strike fighter squadron (F-15E), and two multirole fighter squadrons (F-16C).[30]
|
208 |
+
|
209 |
+
Perhaps the most novel technology introduced for combat aircraft was stealth, which involves the use of special "low-observable" (L-O) materials and design techniques to reduce the susceptibility of an aircraft to detection by the enemy's sensor systems, particularly radars. The first stealth aircraft introduced were the Lockheed F-117 Nighthawk attack aircraft (introduced in 1983) and the Northrop Grumman B-2 Spirit bomber (first flew in 1989). Although no stealthy fighters per se appeared among the fourth generation, some radar-absorbent coatings and other L-O treatments developed for these programs are reported to have been subsequently applied to fourth-generation fighters.
|
210 |
+
|
211 |
+
The end of the Cold War in 1991 led many governments to significantly decrease military spending as a "peace dividend". Air force inventories were cut. Research and development programs working on "fifth-generation" fighters took serious hits. Many programs were canceled during the first half of the 1990s, and those that survived were "stretched out". While the practice of slowing the pace of development reduces annual investment expenses, it comes at the penalty of increased overall program and unit costs over the long-term. In this instance, however, it also permitted designers to make use of the tremendous achievements being made in the fields of computers, avionics and other flight electronics, which had become possible largely due to the advances made in microchip and semiconductor technologies in the 1980s and 1990s. This opportunity enabled designers to develop fourth-generation designs – or redesigns – with significantly enhanced capabilities. These improved designs have become known as "Generation 4.5" fighters, recognizing their intermediate nature between the 4th and 5th generations, and their contribution in furthering development of individual fifth-generation technologies.
|
212 |
+
|
213 |
+
The primary characteristics of this sub-generation are the application of advanced digital avionics and aerospace materials, modest signature reduction (primarily RF "stealth"), and highly integrated systems and weapons. These fighters have been designed to operate in a "network-centric" battlefield environment and are principally multirole aircraft. Key weapons technologies introduced include beyond-visual-range (BVR) AAMs; Global Positioning System (GPS)-guided weapons; solid-state phased-array radars; helmet-mounted sights; and improved secure, jamming-resistant datalinks. Thrust vectoring to further improve transient maneuvering capabilities has also been adopted by many 4.5th generation fighters, and uprated powerplants have enabled some designs to achieve a degree of "supercruise" ability. Stealth characteristics are focused primarily on frontal-aspect radar cross section (RCS) signature-reduction techniques, including radar-absorbent materials (RAM), L-O coatings and limited shaping techniques.
|
214 |
+
|
215 |
+
"Half-generation" designs are either based on existing airframes or are based on new airframes following similar design theory to previous iterations; however, these modifications have introduced the structural use of composite materials to reduce weight, greater fuel fractions to increase range, and signature reduction treatments to achieve lower RCS compared to their predecessors. Prime examples of such aircraft, which are based on new airframe designs making extensive use of carbon-fiber composites, include the Eurofighter Typhoon, Dassault Rafale, Saab JAS 39 Gripen, and HAL Tejas Mark 1A.
|
216 |
+
|
217 |
+
Apart from these fighter jets, most 4.5 generation aircraft are modified variants of existing airframes from the earlier fourth-generation fighter jets. Such fighter jets are generally heavier; examples include the Boeing F/A-18E/F Super Hornet, which is an evolution of the F/A-18 Hornet, the F-15E Strike Eagle, which is a ground-attack/multirole variant of the F-15 Eagle, the Su-30SM and Su-35S modified variants of the Sukhoi Su-27, and the MiG-35 upgraded version of the Mikoyan MiG-29. The Su-30SM/Su-35S and MiG-35 feature thrust vectoring engine nozzles to enhance maneuvering. Upgraded versions of the F-16 are also considered members of the 4.5 generation.[31]
|
218 |
+
|
219 |
+
4.5 generation fighters first entered service in the early 1990s, and most of them are still being produced and evolved. It is quite possible that they may continue in production alongside fifth-generation fighters due to the expense of developing the advanced level of stealth technology needed to achieve aircraft designs featuring very low observables (VLO), which is one of the defining features of fifth-generation fighters. Of the 4.5th generation designs, the Strike Eagle, Super Hornet, Typhoon, Gripen, and Rafale have been used in combat.
|
220 |
+
|
221 |
+
The U.S. government has defined 4.5 generation fighter aircraft as those that "(1) have advanced capabilities, including— (A) AESA radar; (B) high capacity data-link; and (C) enhanced avionics; and (2) have the ability to deploy current and reasonably foreseeable advanced armaments."[32][33]
|
222 |
+
|
223 |
+
Currently the cutting edge of fighter design, fifth-generation fighters are characterized by being designed from the start to operate in a network-centric combat environment, and by featuring extremely low, all-aspect, multi-spectral signatures employing advanced materials and shaping techniques. They have multifunction AESA radars with high-bandwidth, low-probability-of-intercept (LPI) data transmission capabilities. The infrared search and track sensors incorporated for air-to-air combat and air-to-ground weapons delivery in 4.5th generation fighters are now fused with other sensors for situational awareness IRST, or SAIRST, which constantly tracks all targets of interest around the aircraft so the pilot need not guess where they are after glancing away. These sensors, along with advanced avionics, glass cockpits, helmet-mounted sights (not currently on the F-22), and improved secure, jamming-resistant LPI datalinks, are highly integrated to provide multi-platform, multi-sensor data fusion for vastly improved situational awareness while easing the pilot's workload.[34] Avionics suites rely on extensive use of very-high-speed integrated circuit (VHSIC) technology, common modules, and high-speed data buses. Overall, the integration of all these elements is claimed to provide fifth-generation fighters with a "first-look, first-shot, first-kill capability".
|
224 |
+
|
225 |
+
A key attribute of fifth-generation fighters is a small radar cross-section. Great care has been taken in designing the aircraft's layout and internal structure to minimize RCS over a broad bandwidth of detection and tracking radar frequencies; to maintain this VLO signature during combat operations, primary weapons are carried in internal weapon bays that are only briefly opened to permit weapon launch. Furthermore, stealth technology has advanced to the point where it can be employed without a tradeoff in aerodynamic performance, in contrast to previous stealth efforts. Some attention has also been paid to reducing IR signatures, especially on the F-22. Detailed information on these signature-reduction techniques is classified, but they generally include special shaping approaches, thermoset and thermoplastic materials, extensive structural use of advanced composites, conformal sensors, heat-resistant coatings, low-observable wire meshes to cover intake and cooling vents, heat-ablating tiles on the exhaust troughs (seen on the Northrop YF-23), and coating internal and external metal areas with radar-absorbent materials and paint (RAM/RAP).
|
226 |
+
|
227 |
+
The AESA radar offers unique capabilities for fighters (and it is also quickly becoming essential for Generation 4.5 aircraft designs, as well as being retrofitted onto some fourth-generation aircraft). In addition to its high resistance to ECM and LPI features, it enables the fighter to function as a sort of "mini-AWACS", providing high-gain electronic support measures (ESM) and electronic warfare (EW) jamming functions. Other technologies common to this latest generation of fighters include integrated electronic warfare system (INEWS) technology; integrated communications, navigation, and identification (CNI) avionics technology; centralized "vehicle health monitoring" systems for ease of maintenance; fiber-optic data transmission; stealth technology; and even hovering capabilities. Maneuver performance remains important and is enhanced by thrust-vectoring, which also helps reduce takeoff and landing distances. Supercruise may or may not be featured; it permits flight at supersonic speeds without the use of the afterburner – a device that significantly increases IR signature when used in full military power.
|
228 |
+
|
229 |
+
Such aircraft are sophisticated and expensive. The fifth generation was ushered in by the Lockheed Martin/Boeing F-22 Raptor in late 2005. The U.S. Air Force originally planned to acquire 650 F-22s, but now only 187 will be built. As a result, its unit flyaway cost (FAC) is around US$150 million. To spread the development costs – and production base – more broadly, the Joint Strike Fighter (JSF) program enrolls eight other countries as cost- and risk-sharing partners. Altogether, the nine partner nations anticipate procuring over 3,000 Lockheed Martin F-35 Lightning II fighters at an anticipated average FAC of $80–85 million. The F-35, however, is designed to be a family of three aircraft, a conventional take-off and landing (CTOL) fighter, a short take-off and vertical landing (STOVL) fighter, and a Catapult Assisted Take Off But Arrested Recovery (CATOBAR) fighter, each of which has a different unit price and slightly varying specifications in terms of fuel capacity (and therefore range), size and payload.
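For rough context, multiplying the cited unit flyaway costs by the planned fleet sizes gives the order of magnitude of procurement alone; this is a back-of-the-envelope sketch that ignores development and sustainment costs, which are substantial.

```python
# Back-of-the-envelope procurement totals from the unit flyaway costs cited above
# (ignores development and sustainment costs; values in millions of US dollars).
f22_total = 187 * 150            # ~28,050  -> roughly $28 billion
f35_total_low = 3000 * 80        # 240,000  -> roughly $240 billion
f35_total_high = 3000 * 85       # 255,000  -> roughly $255 billion
print(f22_total, f35_total_low, f35_total_high)
```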
|
230 |
+
|
231 |
+
Other countries have initiated fifth-generation fighter development projects, including Russia with the Sukhoi Su-57 and Mikoyan LMFS. In December 2010, it emerged that China was developing the fifth-generation Chengdu J-20.[35] The J-20 took its maiden flight in January 2011. The Shenyang J-31 took its maiden flight on 31 October 2012.[36] Japan is exploring the technical feasibility of producing fifth-generation fighters. India is developing the Advanced Medium Combat Aircraft (AMCA), a medium-weight stealth fighter slated to enter serial production by the late 2030s. India had also initiated a joint fifth-generation heavy fighter with Russia, called the FGFA. As of May 2018[update], the project was believed not to have yielded the desired progress or results for India and has been put on hold or dropped altogether.[37] Other countries considering fielding an indigenous or semi-indigenous advanced fifth-generation aircraft include South Korea, Sweden, Turkey and Pakistan.
|
232 |
+
|
233 |
+
As of November 2018, France, Germany, Japan, Russia, the United Kingdom and the United States have announced the development of a sixth-generation aircraft program.
|
234 |
+
|
235 |
+
France and Germany will develop a joint sixth-generation fighter to replace their current fleet of Dassault Rafales, Eurofighter Typhoons, and Panavia Tornados by 2035.[38] The overall development will be led by a collaboration of Dassault and Airbus, while the engines will reportedly be jointly developed by Safran and MTU Aero Engines. Thales and MBDA are also seeking a stake in the project.[39] Spain is reportedly planning to join the program in the later stages and is expected to sign a letter of intent in early 2019.[39]
|
236 |
+
|
237 |
+
Currently at the concept stage, the first sixth-generation jet fighter is expected to enter service in the United States Navy in the 2025–30 period.[40] The USAF seeks a new fighter for the 2030–50 period, named the "Next Generation Tactical Aircraft" ("Next Gen TACAIR").[41][42] The US Navy looks to replace its F/A-18E/F Super Hornets beginning in 2025 with the Next Generation Air Dominance air-superiority fighter.[43][44]
|
238 |
+
|
239 |
+
The United Kingdom's proposed stealth fighter is being developed by a European consortium called Team Tempest, consisting of BAE Systems, Rolls-Royce, Leonardo S.p.A. and MBDA. The aircraft is intended to enter service in 2035.[45][46]
|
240 |
+
|
241 |
+
Fighters were typically armed with guns only for air-to-air combat up through the late 1950s, though unguided rockets, mostly for air-to-ground use with limited air-to-air use, were deployed in WWII. From the late 1950s onward, guided missiles came into use for air-to-air combat. Throughout this history, fighters which by surprise or maneuver attain a good firing position have achieved the kill about one third to one half of the time, no matter what weapons were carried.[47] The only major historical exception to this has been the low effectiveness shown by guided missiles in the first one to two decades of their existence.[48][49]
|
242 |
+
|
243 |
+
From WWI to the present, fighter aircraft have featured machine guns and automatic cannons as weapons, and they are still considered essential back-up weapons today. The power of air-to-air guns has increased greatly over time, keeping them relevant in the guided-missile era.[50] In WWI, two rifle-caliber machine guns were the typical armament, producing a weight of fire of about 0.4 kg (0.88 lb) per second. The standard WWII American fighter armament of six 0.50-cal (12.7 mm) machine guns fired a bullet weight of approximately 3.7 kg/sec (8.1 lbs/sec), at a muzzle velocity of 856 m/s (2,810 ft/s). British and German aircraft tended to use a mix of machine guns and autocannon, the latter firing explosive projectiles. The modern M61 Vulcan 20 mm rotating-barrel Gatling gun that is standard on current American fighters fires a projectile weight of about 10 kg/s (22 lb/s), nearly three times that of six 0.50-cal machine guns, with a higher muzzle velocity of 1,052 m/s (3,450 ft/s) supporting a flatter trajectory, and with exploding projectiles.[51] Modern fighter gun systems also feature ranging radar and lead-computing electronic gun sights to ease the aiming problem of compensating for projectile drop and time of flight (target lead) in the complex three-dimensional maneuvering of air-to-air combat. However, getting into position to use the guns is still a challenge. The range of guns is longer than in the past but still quite limited compared to missiles, with modern gun systems having a maximum effective range of approximately 1,000 meters.[52] High probability of kill also requires firing to usually occur from the rear hemisphere of the target.[53] Despite these limits, when pilots are well trained in air-to-air gunnery and these conditions are satisfied, gun systems are tactically effective and highly cost-efficient. The cost of a gun firing pass is far less than that of firing a missile,[54] and the projectiles are not subject to the thermal and electronic countermeasures that can sometimes defeat missiles. When the enemy can be approached to within gun range, the lethality of guns is approximately a 25% to 50% chance of "kill per firing pass".[55]
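The weight-of-fire figures quoted above follow directly from the number of guns, their rate of fire, and the projectile mass. The sketch below reproduces them approximately; the per-gun firing rates and projectile masses are assumed round figures for illustration, not values given in the text.

```python
# Approximate weight-of-fire calculation. The per-gun firing rates and projectile
# masses are assumed round figures for illustration, not values from the text.

def weight_of_fire_kg_per_s(num_guns, rounds_per_minute, projectile_kg):
    return num_guns * (rounds_per_minute / 60.0) * projectile_kg

# Six .50-cal machine guns (~800 rounds/min each, ~46 g projectile):
print(round(weight_of_fire_kg_per_s(6, 800, 0.046), 1))   # ~3.7 kg/s
# One 20 mm M61 Vulcan (~6,000 rounds/min, ~0.1 kg projectile):
print(round(weight_of_fire_kg_per_s(1, 6000, 0.102), 1))  # ~10.2 kg/s
```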
|
244 |
+
|
245 |
+
The range limitations of guns, and the desire to overcome large variations in fighter-pilot skill and thus achieve higher force effectiveness, led to the development of the guided air-to-air missile. There are two main variations: heat-seeking (infrared homing) and radar-guided. Radar missiles are typically several times heavier and more expensive than heat-seekers, but have longer range, greater destructive power, and the ability to track through clouds.
|
246 |
+
|
247 |
+
The highly successful AIM-9 Sidewinder heat-seeking (infrared homing) short-range missile was developed by the United States Navy in the 1950s. These small missiles are easily carried by lighter fighters, and provide effective ranges of approximately 10 to 35 km (~6 to 22 miles). Beginning with the AIM-9L in 1977, subsequent versions of the Sidewinder have added all-aspect capability: the ability to track the weaker heat of air-to-skin friction on the target aircraft, permitting engagement from the front and sides. The latest (2003 service entry) AIM-9X also features "off-boresight" and "lock-on-after-launch" capabilities, which allow the pilot to quickly launch a missile at a target anywhere within the pilot's vision. AIM-9X development cost about US$3 billion in mid-to-late-1990s dollars,[56] and its 2015 per-unit procurement cost was $0.6 million. The missile weighs 85.3 kg (188 lbs) and has a maximum range of 35 km (22 miles) at higher altitudes. Like most air-to-air missiles, its lower-altitude range can be limited to only about one third of maximum due to higher drag and less ability to coast downward.[57]
|
248 |
+
|
249 |
+
The effectiveness of heat-seeking missiles was only 7% early in the Vietnam War,[58] but improved to approximately 15%–40% over the course of the war. The AIM-4 Falcon used by the USAF had kill rates of approximately 7% and was considered a failure. The AIM-9B Sidewinder introduced later achieved 15% kill rates, and the further improved AIM-9D and J models reached 19%. The AIM-9G used in the last year of the Vietnam air war achieved 40%.[59] Israel relied almost entirely on guns in the 1967 Six-Day War, achieving 60 kills for 10 losses.[60] However, Israel made much more use of steadily improving heat-seeking missiles in the 1973 Yom Kippur War. In this extensive conflict, Israel scored 171 out of 261 total kills with heat-seeking missiles (65.5%), 5 kills with radar-guided missiles (1.9%), and 85 kills with guns (32.6%).[61] The AIM-9L Sidewinder scored 19 kills out of 26 missiles fired (73%) in the 1982 Falklands War.[62] But in a conflict against opponents using thermal countermeasures, the United States scored only 11 kills out of 48 fired (Pk = 23%) with the follow-on AIM-9M in the 1991 Gulf War.[63]
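The percentages quoted in this paragraph are straightforward ratios. The sketch below recomputes a few of them from the cited raw numbers; note that the 1973 figure is a share of total kills by weapon type, while the Falklands and Gulf War figures are kills per missile fired.

```python
# Recomputing percentages cited above from the raw numbers in the text.
def percent(hits, attempts):
    return round(100.0 * hits / attempts, 1)

print(percent(19, 26))    # 73.1 -> AIM-9L kills per missile fired, Falklands 1982
print(percent(11, 48))    # 22.9 -> AIM-9M kills per missile fired, Gulf War 1991
print(percent(171, 261))  # 65.5 -> share of Israel's 1973 kills made by heat-seekers
```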
|
250 |
+
|
251 |
+
Radar-guided missiles fall into two main guidance types. In the historically more common semi-active radar homing case, the missile homes in on radar signals transmitted from the launching aircraft and reflected from the target. This has the disadvantage that the firing aircraft must maintain radar lock on the target and is thus less free to maneuver and more vulnerable to attack. A widely deployed missile of this type was the AIM-7 Sparrow, which entered service in 1954 and was produced in improving versions until 1997. In more advanced active radar homing, the missile is guided to the vicinity of the target by internal data on its projected position, and then "goes active" with an internally carried small radar system to conduct terminal guidance to the target. This eliminates the requirement for the firing aircraft to maintain radar lock, and thus greatly reduces risk. A prominent example is the AIM-120 AMRAAM, which was first fielded in 1991 as the AIM-7 replacement, and which has no firm retirement date as of 2016[update]. The current AIM-120D version has a maximum high-altitude range of greater than 160 km (>99 miles), and costs approximately $2.4 million each (2016). As is typical with most other missiles, range at lower altitude may be as little as one third that at high altitude.
|
252 |
+
|
253 |
+
In the Vietnam air war, radar missile kill reliability was approximately 10% at shorter ranges, and even worse at longer ranges due to reduced radar return and greater time for the target aircraft to detect the incoming missile and take evasive action. At one point in the Vietnam war, the U.S. Navy fired 50 AIM-7 Sparrow radar-guided missiles in a row without a hit.[64] Between 1958 and 1982, in five wars, there were 2,014 combined heat-seeking and radar-guided missile firings by fighter pilots engaged in air-to-air combat, achieving 528 kills, of which 76 were radar missile kills, for a combined effectiveness of 26%. However, only four of the 76 radar missile kills were in the beyond-visual-range mode intended to be the strength of radar-guided missiles.[65] The United States invested over $10 billion in air-to-air radar missile technology from the 1950s to the early 1970s.[66] Amortized over actual kills achieved by the U.S. and its allies, each radar-guided missile kill thus cost over $130 million. The defeated enemy aircraft were for the most part older MiG-17s, −19s, and −21s, with a new-build cost of $0.3 million to $3 million each. Thus, the radar missile investment over that period far exceeded the value of the enemy aircraft destroyed, and furthermore had very little of the intended BVR effectiveness.
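The combined-effectiveness and cost-per-kill figures follow from simple division of the numbers cited above, as this small sketch shows.

```python
# Combined effectiveness 1958-1982 and cost amortized per radar-missile kill,
# using only the figures quoted in this paragraph.
total_firings = 2014
total_kills = 528
radar_kills = 76
radar_investment = 10e9   # "over $10 billion"

print(round(100 * total_kills / total_firings))     # ~26% combined effectiveness
print(round(radar_investment / radar_kills / 1e6))  # ~132 -> over $130 million per kill
```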
|
254 |
+
|
255 |
+
However, continuing heavy development investment and rapidly advancing electronic technology led to significant improvement in radar missile reliabilities from the late 1970s onward. Radar guided missiles achieved 75% Pk (9 kills out of 12 shots) in operations in the Gulf War in 1991.[67] The percentage of kills achieved by radar guided missiles also surpassed 50% of total kills for the first time by 1991. Since 1991, 20 of 61 kills worldwide have been beyond-visual-range using radar missiles.[68] Discounting an accidental friendly fire kill, in operational use the AIM-120D (the current main American radar guided missile) has achieved 9 kills out of 16 shots for a 56% Pk. Six of these kills were BVR, out of 13 shots, for a 46% BVR Pk.[69] Though all these kills were against less capable opponents who were not equipped with operating radar, electronic countermeasures, or a comparable weapon themselves, the BVR Pk was a significant improvement from earlier eras. However, a current concern is electronic countermeasures to radar missiles,[70] which are thought to be reducing the effectiveness of the AIM-120D. Some experts believe that as of 2016[update] the European Meteor missile, the Russian R-37M, and the Chinese PL-15 are more resistant to countermeasures and more effective than the AIM-120D.[70]
|
256 |
+
|
257 |
+
Now that higher reliabilities have been achieved, both types of missiles allow the fighter pilot to often avoid the risk of the short-range dogfight, where only the more experienced and skilled fighter pilots tend to prevail, and where even the finest fighter pilot can simply get unlucky. Taking maximum advantage of complicated missile parameters in both attack and defense against competent opponents does take considerable experience and skill,[71] but against surprised opponents lacking comparable capability and countermeasures, air-to-air missile warfare is relatively simple. By partially automating air-to-air combat and reducing reliance on gun kills mostly achieved by only a small expert fraction of fighter pilots, air-to-air missiles now serve as highly effective force multipliers.
|
258 |
+
|
en/4950.html.txt
ADDED
@@ -0,0 +1,108 @@
1 |
+
A writing system is a method of visually representing verbal communication, based on a script and a set of rules regulating its use. While both writing and speech are useful in conveying messages, writing differs in also being a reliable form of information storage and transfer.[1] Writing systems require shared understanding between writers and readers of the meaning behind the sets of characters that make up a script. Writing is usually recorded onto a durable medium, such as paper or electronic storage, although non-durable methods may also be used, such as writing on a computer display, on a blackboard, in sand, or by skywriting. Reading a text can be accomplished purely in the mind as an internal process, or expressed orally.
|
2 |
+
|
3 |
+
Writing systems can be placed into broad categories such as alphabets, syllabaries, or logographies, although any particular system may have attributes of more than one category. In the alphabetic category, a standard set of letters represents speech sounds. In a syllabary, each symbol correlates to a syllable or mora. In a logography, each character represents a semantic unit such as a word or morpheme. Abjads differ from alphabets in that vowels are not indicated, and in abugidas or alphasyllabaries each character represents a consonant–vowel pairing. Alphabets typically use a set of fewer than 100 symbols to fully express a language, whereas syllabaries can have several hundred, and logographies can have thousands of symbols. Many writing systems also include a special set of symbols known as punctuation, which is used to aid interpretation and help capture nuances and variations in the message's meaning that are communicated verbally by cues in timing, tone, accent, inflection or intonation.
|
4 |
+
|
5 |
+
Writing systems were preceded by proto-writing, which used pictograms, ideograms and other mnemonic symbols. Proto-writing lacked the ability to capture and express a full range of thoughts and ideas. The invention of writing systems, which dates back to the beginning of the Bronze Age in the late Neolithic Era of the late 4th millennium BC, enabled the accurate durable recording of human history in a manner that was not prone to the same types of error to which oral history is vulnerable. Soon after, writing provided a reliable form of long distance communication. With the advent of publishing, it provided the medium for an early form of mass communication.
|
6 |
+
|
7 |
+
Writing systems are distinguished from other possible symbolic communication systems in that a writing system is always associated with at least one spoken language. In contrast, visual representations such as drawings, paintings, and non-verbal items on maps, such as contour lines, are not language-related. Some symbols on information signs, such as the symbols for male and female, are also not language related, but can grow to become part of language if they are often used in conjunction with other language elements. Some other symbols, such as numerals and the ampersand, are not directly linked to any specific language, but are often used in writing and thus must be considered part of writing systems.
|
8 |
+
|
9 |
+
Every human community possesses language, which many regard as an innate and defining condition of humanity. However, the development of writing systems, and the process by which they have supplanted traditional oral systems of communication, have been sporadic, uneven and slow. Once established, writing systems generally change more slowly than their spoken counterparts. Thus they often preserve features and expressions which are no longer current in the spoken language. One of the great benefits of writing systems is that they can preserve a permanent record of information expressed in a language.
|
10 |
+
|
11 |
+
All writing systems require:
|
12 |
+
|
13 |
+
In the examination of individual scripts, the study of writing systems has developed along partially independent lines. Thus, the terminology employed differs somewhat from field to field.
|
14 |
+
|
15 |
+
The generic term text[3] refers to an instance of written or spoken material with the latter having been transcribed in some way. The act of composing and recording a text may be referred to as writing,[4] and the act of viewing and interpreting the text as reading.[5] Orthography refers to the method and rules of observed writing structure (literal meaning, "correct writing"), and particularly for alphabetic systems, includes the concept of spelling.
|
16 |
+
|
17 |
+
A grapheme is a specific base unit of a writing system. Graphemes are the minimally significant elements which taken together comprise the set of "building blocks" out of which texts made up of one or more writing systems may be constructed, along with rules of correspondence and use. The concept is similar to that of the phoneme used in the study of spoken languages. For example, in the Latin-based writing system of standard contemporary English, examples of graphemes include the majuscule and minuscule forms of the twenty-six letters of the alphabet (corresponding to various phonemes), marks of punctuation (mostly non-phonemic), and a few other symbols such as those for numerals (logograms for numbers).
|
18 |
+
|
19 |
+
An individual grapheme may be represented in a wide variety of ways, where each variation is visually distinct in some regard, but all are interpreted as representing the "same" grapheme. These individual variations are known as allographs of a grapheme (compare with the term allophone used in linguistic study). For example, the minuscule letter a has different allographs when written as a cursive, block, or typed letter. The choice of a particular allograph may be influenced by the medium used, the writing instrument, the stylistic choice of the writer, the preceding and following graphemes in the text, the time available for writing, the intended audience, and the largely unconscious features of an individual's handwriting.
|
20 |
+
|
21 |
+
The terms glyph, sign and character are sometimes used to refer to a grapheme. Common usage varies from discipline to discipline; compare cuneiform sign, Maya glyph, Chinese character. The glyphs of most writing systems are made up of lines (or strokes) and are therefore called linear, but there are glyphs in non-linear writing systems made up of other types of marks, such as Cuneiform and Braille.
|
22 |
+
|
23 |
+
Writing systems may be regarded as complete according to the extent to which they are able to represent all that may be expressed in the spoken language, while a partial writing system is limited in what it can convey.[6]
|
24 |
+
|
25 |
+
Writing systems can be independent of languages: one language can have multiple writing systems (e.g., Hindi and Urdu),[7] and one writing system can serve multiple languages (e.g., the Arabic script). Chinese characters were also borrowed by other countries as their early writing systems; for example, they were used to write the Vietnamese language until the beginning of the 20th century.
|
26 |
+
|
27 |
+
To represent a conceptual system, one uses one or more languages; for example, mathematics is a conceptual system[8] that may be represented using first-order logic together with a natural language.
|
28 |
+
|
29 |
+
Writing systems were preceded by proto-writing, systems of ideographic and/or early mnemonic symbols. The best-known examples are:
|
30 |
+
|
31 |
+
The invention of the first writing systems is roughly contemporary with the beginning of the Bronze Age (following the late Neolithic) in the late 4th millennium BC. The Sumerian archaic cuneiform script closely followed by the Egyptian hieroglyphs are generally considered the earliest writing systems, both emerging out of their ancestral proto-literate symbol systems from 3400 to 3200 BC with earliest coherent texts from about 2600 BC. It is generally agreed that the historically earlier Sumerian writing was an independent invention; however, it is debated whether Egyptian writing was developed completely independently of Sumerian, or was a case of cultural diffusion.[12]
|
32 |
+
|
33 |
+
A similar debate exists for the Chinese script, which developed around 1200 BC.[13][14] The Chinese script is probably an independent invention, because there is no evidence of contact between China and the literate civilizations of the Near East,[15] and because of the distinct differences between the Mesopotamian and Chinese approaches to logography and phonetic representation.[16]
|
34 |
+
|
35 |
+
The pre-Columbian Mesoamerican writing systems (including among others Olmec and Maya scripts) are generally believed to have had independent origins.
|
36 |
+
|
37 |
+
A hieroglyphic writing system used by pre-colonial Mi'kmaq, which was observed by missionaries from the 17th to 19th centuries, is thought to have developed independently. There is some debate over whether or not this was a fully formed system or just a series of mnemonic pictographs.
|
38 |
+
|
39 |
+
It is thought that the first consonantal alphabetic writing appeared before 2000 BC, as a representation of language developed by Semitic tribes in the Sinai Peninsula (see History of the alphabet). Most other alphabets in the world today either descended from this one innovation, many via the Phoenician alphabet, or were directly inspired by its design.
|
40 |
+
|
41 |
+
The first true alphabet is the Greek alphabet, which has consistently represented vowels since 800 BC.[17][18] The Latin alphabet, a direct descendant, is by far the most common writing system in use.[19]
|
42 |
+
|
43 |
+
Several approaches have been taken to classify writing systems; the most common and basic is a broad division into three categories: logographic, syllabic, and alphabetic (or segmental). However, all three may be found in any given writing system in varying proportions, often making it difficult to categorise a system uniquely. The term complex system is sometimes used to describe those where the admixture makes classification problematic. Modern linguists regard such approaches, including Diringer's,[20]
|
44 |
+
|
45 |
+
as too simplistic, often considering the categories to be incomparable.
|
46 |
+
Hill[21] split writing into three major categories of linguistic analysis, one of which covers discourses and is not usually considered writing proper:
|
47 |
+
|
48 |
+
Sampson draws a distinction between semasiography and glottography
|
49 |
+
|
50 |
+
DeFrancis,[22] criticizing Sampson's[23] introduction of semasiographic writing and featural alphabets, stresses the phonographic quality of writing proper.
|
51 |
+
|
52 |
+
Faber[24] categorizes phonographic writing by two levels, linearity and coding:
|
53 |
+
|
54 |
+
A logogram is a single written character which represents a complete grammatical word. Most traditional Chinese characters are classified as logograms.
|
55 |
+
|
56 |
+
As each character represents a single word (or, more precisely, a morpheme), many logograms are required to write all the words of a language. The vast array of logograms and the memorization of what they mean are major disadvantages of logographic systems over alphabetic systems. However, since the meaning is inherent to the symbol, the same logographic system can theoretically be used to represent different languages. In practice, the ability to communicate across languages only works for the closely related varieties of Chinese, as differences in syntax reduce the crosslinguistic portability of a given logographic system. Japanese uses Chinese logograms extensively in its writing systems, with most of the symbols carrying the same or similar meanings. However, the grammatical differences between Japanese and Chinese are significant enough that a long Chinese text is not readily understandable to a Japanese reader without any knowledge of basic Chinese grammar, though short and concise phrases such as those on signs and newspaper headlines are much easier to comprehend.
|
57 |
+
|
58 |
+
While most languages do not use wholly logographic writing systems, many languages use some logograms. A good example of modern western logograms are the Arabic numerals: everyone who uses those symbols understands what 1 means whether they call it one, eins, uno, yi, ichi, ehad, ena, or jedan. Other western logograms include the ampersand &, used for and, the at sign @, used in many contexts for at, the percent sign % and the many signs representing units of currency ($, ¢, €, £, ¥ and so on.)
|
59 |
+
|
60 |
+
Logograms are sometimes called ideograms, a word that refers to symbols which graphically represent abstract ideas, but linguists avoid this use, as Chinese characters are often semantic–phonetic compounds, symbols which include an element that represents the meaning and a phonetic complement element that represents the pronunciation. Some nonlinguists distinguish between lexigraphy and ideography, where symbols in lexigraphies represent words and symbols in ideographies represent words or morphemes.
|
61 |
+
|
62 |
+
The most important (and, to a degree, the only surviving) modern logographic writing system is the Chinese one, whose characters have been used with varying degrees of modification in varieties of Chinese, Japanese, Korean, Vietnamese, and other east Asian languages. Ancient Egyptian hieroglyphs and the Mayan writing system are also systems with certain logographic features, although they have marked phonetic features as well and are no longer in current use. Vietnamese speakers switched to the Latin alphabet in the 20th century and the use of Chinese characters in Korean is increasingly rare. The Japanese writing system includes several distinct forms of writing including logography.
|
63 |
+
|
64 |
+
Another type of writing system with systematic syllabic linear symbols, the abugidas, is discussed below as well.
|
65 |
+
|
66 |
+
Whereas logographic writing systems use a single symbol for an entire word, a syllabary is a set of written symbols that represent (or approximate) syllables, which make up words. A symbol in a syllabary typically represents a consonant sound followed by a vowel sound, or just a vowel alone.
|
67 |
+
|
68 |
+
In a "true syllabary", there is no systematic graphic similarity between phonetically related characters (though some do have graphic similarity for the vowels). That is, the characters for /ke/, /ka/ and /ko/ have no similarity to indicate their common "k" sound (voiceless velar plosive). More recent creations such as the Cree syllabary embody a system of varying signs, which can best be seen when arranging the syllabogram set in an onset–coda or onset–rime table.
|
69 |
+
|
70 |
+
Syllabaries are best suited to languages with relatively simple syllable structure, such as Japanese. The English language, on the other hand, allows complex syllable structures, with a relatively large inventory of vowels and complex consonant clusters, making it cumbersome to write English words with a syllabary. To write English using a syllabary, every possible syllable in English would have to have a separate symbol, and whereas the number of possible syllables in Japanese is around 100, in English there are approximately 15,000 to 16,000.
|
71 |
+
|
72 |
+
However, syllabaries with much larger inventories do exist. The Yi script, for example, contains 756 different symbols (or 1,164, if symbols with a particular tone diacritic are counted as separate syllables, as in Unicode). The Chinese script, when used to write Middle Chinese and the modern varieties of Chinese, also represents syllables, and includes separate glyphs for nearly all of the many thousands of syllables in Middle Chinese; however, because it primarily represents morphemes and includes different characters to represent homophonous morphemes with different meanings, it is normally considered a logographic script rather than a syllabary.
|
73 |
+
|
74 |
+
Other languages that use true syllabaries include Mycenaean Greek (Linear B) and Indigenous languages of the Americas such as Cherokee. Several languages of the Ancient Near East used forms of cuneiform, which is a syllabary with some non-syllabic elements.
|
75 |
+
|
76 |
+
An alphabet is a small set of letters (basic written symbols), each of which roughly represents or represented historically a segmental phoneme of a spoken language. The word alphabet is derived from alpha and beta, the first two symbols of the Greek alphabet.
|
77 |
+
|
78 |
+
The first type of alphabet that was developed was the abjad. An abjad is an alphabetic writing system where there is one symbol per consonant. Abjads differ from other alphabets in that they have characters only for consonantal sounds. Vowels are not usually marked in abjads. All known abjads (except maybe Tifinagh) belong to the Semitic family of scripts, and derive from the original Northern Linear Abjad. The reason for this is that Semitic languages and the related Berber languages have a morphemic structure which makes the denotation of vowels redundant in most cases. Some abjads, like Arabic and Hebrew, have markings for vowels as well. However, they use them only in special contexts, such as for teaching. Many scripts derived from abjads have been extended with vowel symbols to become full alphabets. Of these, the most famous example is the derivation of the Greek alphabet from the Phoenician abjad. This has mostly happened when the script was adapted to a non-Semitic language. The term abjad takes its name from the old order of the Arabic alphabet's consonants 'alif, bā', jīm, dāl, though the word may have earlier roots in Phoenician or Ugaritic. "Abjad" is still the word for alphabet in Arabic, Malay and Indonesian.
|
79 |
+
|
80 |
+
An abugida is an alphabetic writing system whose basic signs denote consonants with an inherent vowel and where consistent modifications of the basic sign indicate other following vowels than the inherent one. Thus, in an abugida there may or may not be a sign for "k" with no vowel, but also one for "ka" (if "a" is the inherent vowel), and "ke" is written by modifying the "ka" sign in a way that is consistent with how one would modify "la" to get "le". In many abugidas the modification is the addition of a vowel sign, but other possibilities are imaginable (and used), such as rotation of the basic sign, addition of diacritical marks and so on. The contrast with "true syllabaries" is that the latter have one distinct symbol per possible syllable, and the signs for each syllable have no systematic graphic similarity. The graphic similarity of most abugidas comes from the fact that they are derived from abjads, and the consonants make up the symbols with the inherent vowel and the new vowel symbols are markings added on to the base symbol. In the Ge'ez script, for which the linguistic term abugida was named, the vowel modifications do not always appear systematic, although they originally were more so. Canadian Aboriginal syllabics can be considered abugidas, although they are rarely thought of in those terms. The largest single group of abugidas is the Brahmic family of scripts, however, which includes nearly all the scripts used in India and Southeast Asia. The name abugida is derived from the first four characters of an order of the Ge'ez script used in some contexts. It was borrowed from Ethiopian languages as a linguistic term by Peter T. Daniels.
|
81 |
+
|
82 |
+
A featural script represents finer detail than an alphabet. Here symbols do not represent whole phonemes, but rather the elements (features) that make up the phonemes, such as voicing or its place of articulation. Theoretically, each feature could be written with a separate letter; and abjads or abugidas, or indeed syllabaries, could be featural, but the only prominent system of this sort is Korean hangul. In hangul, the featural symbols are combined into alphabetic letters, and these letters are in turn joined into syllabic blocks, so that the system combines three levels of phonological representation.
|
83 |
+
|
84 |
+
Many scholars, e.g. John DeFrancis, reject this class or at least labeling hangul as such.[citation needed] The Korean script is a conscious script creation by literate experts, which Daniels calls a "sophisticated grammatogeny".[citation needed] These include stenographies and constructed scripts of hobbyists and fiction writers (such as Tengwar), many of which feature advanced graphic designs corresponding to phonologic properties. The basic unit of writing in these systems can map to anything from phonemes to words. It has been shown that even the Latin script has sub-character "features".[26]
|
85 |
+
|
86 |
+
Most writing systems are not purely one type. The English writing system, for example, includes numerals and other logograms such as #, $, and &, and the written language often does not match well with the spoken one. As mentioned above, all logographic systems have phonetic components as well, whether along the lines of a syllabary, such as Chinese ("logo-syllabic"), or an abjad, as in Egyptian ("logo-consonantal").
|
87 |
+
|
88 |
+
Some scripts, however, are truly ambiguous. The semi-syllabaries of ancient Spain were syllabic for plosives such as p, t, k, but alphabetic for other consonants. In some versions, vowels were written redundantly after syllabic letters, conforming to an alphabetic orthography. Old Persian cuneiform was similar. Of 23 consonants (including null), seven were fully syllabic, thirteen were purely alphabetic, and for the other three, there was one letter for /Cu/ and another for both /Ca/ and /Ci/. However, all vowels were written overtly regardless; as in the Brahmic abugidas, the /Ca/ letter was used for a bare consonant.
|
89 |
+
|
90 |
+
The zhuyin phonetic glossing script for Chinese divides syllables in two or three, but into onset, medial, and rime rather than consonant and vowel. Pahawh Hmong is similar, but can be considered to divide syllables into either onset-rime or consonant-vowel (all consonant clusters and diphthongs are written with single letters); as the latter, it is equivalent to an abugida but with the roles of consonant and vowel reversed. Other scripts are intermediate between the categories of alphabet, abjad and abugida, so there may be disagreement on how they should be classified.
|
91 |
+
|
92 |
+
Perhaps the primary graphic distinction made in classifications is that of linearity. Linear writing systems are those in which the characters are composed of lines, such as the Latin alphabet and Chinese characters. Chinese characters are considered linear whether they are written with a ball-point pen or a calligraphic brush, or cast in bronze. Similarly, Egyptian hieroglyphs and Maya glyphs were often painted in linear outline form, but in formal contexts they were carved in bas-relief. The earliest examples of writing are linear: the Sumerian script of c. 3300 BC was linear, though its cuneiform descendants were not. Non-linear systems, on the other hand, such as braille, are not composed of lines, no matter what instrument is used to write them.
|
93 |
+
|
94 |
+
Cuneiform was probably the earliest non-linear writing. Its glyphs were formed by pressing the end of a reed stylus into moist clay, not by tracing lines in the clay with the stylus as had been done previously.[27][28] The result was a radical transformation of the appearance of the script.
|
95 |
+
|
96 |
+
Braille is a non-linear adaptation of the Latin alphabet that completely abandoned the Latin forms. The letters are composed of raised bumps on the writing substrate, which can be leather (Louis Braille's original material), stiff paper, plastic or metal.
|
97 |
+
|
98 |
+
There are also transient non-linear adaptations of the Latin alphabet, including Morse code, the manual alphabets of various sign languages, and semaphore, in which flags or bars are positioned at prescribed angles. However, if "writing" is defined as a potentially permanent means of recording information, then these systems do not qualify as writing at all, since the symbols disappear as soon as they are used. (Instead, these transient systems serve as signals.)
|
99 |
+
|
100 |
+
Scripts are also graphically characterized by the direction in which they are written. Egyptian hieroglyphs were written either left to right or right to left, with the animal and human glyphs turned to face the beginning of the line. The early alphabet could be written in multiple directions:[29] horizontally (side to side), or vertically (up or down). Prior to standardization, alphabetical writing was done both left-to-right (LTR or sinistrodextrally) and right-to-left (RTL or dextrosinistrally). It was most commonly written boustrophedonically: starting in one (horizontal) direction, then turning at the end of the line and reversing direction.
|
101 |
+
|
102 |
+
The Greek alphabet and its successors settled on a left-to-right pattern, from the top to the bottom of the page. Other scripts, such as Arabic and Hebrew, came to be written right-to-left. Scripts that incorporate Chinese characters have traditionally been written vertically (top-to-bottom), from the right to the left of the page, but nowadays are frequently written left-to-right, top-to-bottom, due to Western influence, a growing need to accommodate terms in the Latin script, and technical limitations in popular electronic document formats. Chinese characters sometimes, as in signage, especially when signifying something old or traditional, may also be written from right to left. The Old Uyghur alphabet and its descendants are unique in being written top-to-bottom, left-to-right; this direction originated from an ancestral Semitic direction by rotating the page 90° counter-clockwise to conform to the appearance of vertical Chinese writing. Several scripts used in the Philippines and Indonesia, such as Hanunó'o, are traditionally written with lines moving away from the writer, from bottom to top, but are read horizontally left to right; however, Kulitan, another Philippine script, is written top to bottom and right to left. Ogham is written bottom to top and read vertically, commonly on the corner of a stone.
|
103 |
+
|
104 |
+
Left-to-right writing has the advantage that, since most people are right-handed, the hand does not smear the freshly written text, which may not yet have dried, because the hand stays to the right of the pen. Partly for this reason, left-handed children in Europe and America were historically often taught to write with the right hand.
|
105 |
+
|
106 |
+
In computers and telecommunication systems, writing systems are generally not codified as such,[clarification needed] but graphemes and other grapheme-like units that are required for text processing are represented by "characters" that typically manifest in encoded form. There are many character encoding standards and related technologies, such as ISO/IEC 8859-1 (a character repertoire and encoding scheme oriented toward the Latin script), CJK (Chinese, Japanese, Korean) and bi-directional text. Today, many such standards are re-defined in a collective standard, the ISO/IEC 10646 "Universal Character Set", and a parallel, closely related expanded work, The Unicode Standard. Both are generally encompassed by the term Unicode. In Unicode, each character, in every language's writing system, is (simplifying slightly) given a unique identification number, known as its code point. Computer operating systems use code points to look up characters in the font file, so the characters can be displayed on the page or screen.
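As a brief illustration of code points (a small sketch, not taken from any particular standard's sample code), each character can be inspected for its numeric identifier and for one possible encoded byte form:

    # Every character has a unique code point; UTF-8 is one way of
    # encoding that number as bytes for storage or transmission.
    for ch in ("A", "ñ", "한", "€"):
        print(ch, "U+%04X" % ord(ch), ch.encode("utf-8"))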
|
107 |
+
|
108 |
+
A keyboard is the device most commonly used for writing via computer. Each key is associated with a standard code which the keyboard sends to the computer when it is pressed. By using a combination of alphabetic keys with modifier keys such as Ctrl, Alt, Shift and AltGr, various character codes are generated and sent to the CPU. The operating system intercepts and converts those signals to the appropriate characters based on the keyboard layout and input method, and then delivers those converted codes and characters to the running application software, which in turn looks up the appropriate glyph in the currently used font file, and requests the operating system to draw these on the screen.
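The lookup described above can be sketched roughly as follows; the layout table, key names and modifier names here are hypothetical and greatly simplified compared to a real operating system's input handling.

    # A toy keyboard layout: a (key, modifiers) pair is mapped to a character.
    LAYOUT = {
        ("KeyA", frozenset()): "a",
        ("KeyA", frozenset({"Shift"})): "A",
        ("Digit2", frozenset({"Shift"})): "@",
        ("KeyE", frozenset({"AltGr"})): "€",
    }

    def translate(key, modifiers=()):
        # Return the character for this key combination, or "" if unmapped.
        return LAYOUT.get((key, frozenset(modifiers)), "")

    print(translate("KeyA", {"Shift"}))   # "A"
    print(translate("KeyE", {"AltGr"}))   # "€"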
|
en/4951.html.txt
ADDED
@@ -0,0 +1,145 @@
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
Red Hot Chili Peppers (commonly abbreviated as RHCP) are an American rock band formed in Los Angeles in 1983. Their music incorporates elements of alternative rock, funk, punk rock and psychedelic rock. The band consists of vocalist Anthony Kiedis, guitarist John Frusciante, bassist Flea, and drummer Chad Smith. With over 80 million records sold worldwide, Red Hot Chili Peppers are one of the best-selling bands of all time. They are the most successful band in the history of alternative rock, with the records for most number-one singles (13), most cumulative weeks at number one (85) and most top-ten songs (25) on the Billboard Alternative Songs chart.[1] They have won six Grammy Awards, and in 2012 were inducted into the Rock and Roll Hall of Fame.
|
6 |
+
|
7 |
+
Red Hot Chili Peppers were formed in Los Angeles by Kiedis, Flea, guitarist Hillel Slovak, and drummer Jack Irons. Because of commitments to other bands, Slovak and Irons did not play on the band's 1984 self-titled debut album; instead, the album featured Jack Sherman on guitar and Cliff Martinez on drums. However, Slovak rejoined shortly after its release and performed on the albums Freaky Styley (1985) and The Uplift Mofo Party Plan (1987), the latter of which saw a reunion of the original lineup. Slovak died of a drug overdose on June 25, 1988; Irons, devastated, left the band.
|
8 |
+
|
9 |
+
With new recruits Frusciante and Smith, Red Hot Chili Peppers recorded Mother's Milk (1989) and their first major commercial success, Blood Sugar Sex Magik (1991). Frusciante was uncomfortable with their newfound popularity and left abruptly during the 1992 Blood Sugar Sex Magik tour. His replacement, Dave Navarro, played on the sixth Red Hot Chili Peppers album, One Hot Minute (1995). Although commercially successful, the album failed to match the critical or popular acclaim of Blood Sugar Sex Magik, selling less than half as many copies.
|
10 |
+
|
11 |
+
In 1998, following Navarro's dismissal, Frusciante returned to Red Hot Chili Peppers. Their seventh album, Californication (1999), became their biggest commercial success, with 16 million copies sold worldwide. Their next albums, By the Way (2002) and Stadium Arcadium (2006), were also successful; Stadium Arcadium was the band's first album to reach number one on the Billboard 200 chart. After the Stadium Arcadium tour, Red Hot Chili Peppers went on an extended hiatus. Frusciante left again in 2009 to focus on his solo career; he was replaced by Josh Klinghoffer, who appeared on the albums I'm with You (2011) and The Getaway (2016), before Frusciante rejoined in 2019.
|
12 |
+
|
13 |
+
Red Hot Chili Peppers were formed in Los Angeles by singer Anthony Kiedis, guitarist Hillel Slovak, bassist Flea, and drummer Jack Irons, classmates at Fairfax High School.[2] Their early names included Tony Flow and the Miraculously Majestic Masters of Mayhem, and their first performance was at the Rhythm Lounge club to a crowd of approximately 30, opening for Gary and Neighbor's Voices.[3] Inspired by punk funk acts like Contortions and Defunkt, they improvised music while Kiedis rapped.[4]
|
14 |
+
|
15 |
+
At the time, Slovak and Irons were already committed to another group, What Is This?; however, the band was asked to return the following week.[3] The band changed its name to Red Hot Chili Peppers, playing several shows at LA venues. Six songs from these initial shows were on the band's first demo tape.[5] In November 1983, manager Lindy Goetz struck a seven-album deal with EMI America and Enigma Records. Two weeks earlier, however, What Is This? had also obtained a record deal with MCA, and in December Slovak and Irons quit Red Hot Chili Peppers to focus on What Is This?.[6] Flea and Kiedis recruited Weirdos drummer Cliff Martinez and guitarist Jack Sherman.[7]
|
16 |
+
|
17 |
+
The band released their debut album, The Red Hot Chili Peppers, in August 1984. Airplay on college radio and MTV helped build a fan base,[8] and the album sold 300,000 copies. Gang of Four guitarist Andy Gill, who produced the album, pushed the band to play with a cleaner, more radio-friendly sound,[9] and the band was disappointed with the result, finding it over-polished.[10] The album included backing vocals by Gwen Dickey, the singer for the 1970s disco funk group Rose Royce.[11] The band embarked on a gruelling tour, performing 60 shows in 64 days. During the tour, continuing musical and lifestyle tension between Kiedis and Sherman complicated the transition between concert and daily band life.[12] Sherman was fired in February 1985.[13] Hillel Slovak, who had just quit What Is This?, rejoined in early 1985.[14]
|
18 |
+
|
19 |
+
George Clinton produced the second album, Freaky Styley (1985). Clinton combined various elements of punk and funk into the band's repertoire,[15] allowing their music to incorporate a variety of distinct styles. The album featured Maceo Parker and Fred Wesley. The band often indulged in heavy heroin use while recording the album, which influenced the lyrics and musical direction of the album.[16] The band had a much better relationship with Clinton than with Gill,[17] but Freaky Styley, released on August 16, 1985, also achieved little success, failing to make an impression on any chart. The subsequent tour was also considered unproductive by the band.[18][19][20] Despite the lack of success, the band was satisfied with Freaky Styley; Kiedis reflected that "it so surpassed anything we thought we could have done that we were thinking we were on the road to enormity."[21] Around this time, the band appeared in the 1986 films Thrashin', playing the song "Blackeyed Blonde" from Freaky Styley, and Tough Guys, performing "Set It Straight".[22]
|
20 |
+
|
21 |
+
In early 1986, EMI gave the Chili Peppers $5,000 to record a demo tape for their next album. They chose to work with producer Keith Levene from PIL, as he shared their interest in drugs.[23] Levene and Slovak put aside $2,000 of the budget to spend on heroin and cocaine, which created tension between the band members. Martinez's "heart was no longer in the band", but he did not quit, so Kiedis and Flea fired him in April 1986.[24] Irons rejoined the band, to their surprise; it marked the first time all four founding members were together since 1983. During the recording and subsequent tour of Freaky Styley, Kiedis and Slovak were dealing with debilitating heroin addictions. Due to his addiction, Kiedis "didn't have the same drive or desire to come up with ideas or lyrics" and appeared at rehearsal "literally asleep".[25]
|
22 |
+
|
23 |
+
For their third album, the Chili Peppers attempted to hire Rick Rubin to produce, but he declined due to the band's increasing drug problems. They eventually hired Michael Beinhorn from the art funk project Material, their last choice.[26] The early attempts at recording were halted due to Kiedis's worsening drug problems, and Kiedis was briefly fired.[27] After the band were named "band of the year" by LA Weekly, Kiedis entered drug rehabilitation.[28] The band auditioned new singers,[29] but Kiedis, now sober, rejoined the recording sessions with new enthusiasm.[30] Songs formed quickly, blending the funk feel and rhythms of Freaky Styley with a harder, more immediate approach to punk rock. The album was recorded in the basement of the Capitol Records Building.[31] The recording process was difficult; Kiedis would frequently disappear to seek drugs.[32] After fifty days of sobriety, Kiedis decided to take drugs again to celebrate his new music.[31]
|
24 |
+
|
25 |
+
The third Red Hot Chili Peppers album, The Uplift Mofo Party Plan, was released in September 1987. It peaked at No. 148 on the Billboard 200,[33] a significant improvement over their earlier albums. During this period, however, Kiedis and Slovak had both developed serious drug addictions,[34] often disappearing for days on end. Slovak died from a heroin overdose on June 25, 1988, soon after the conclusion of the Uplift tour.[35] Kiedis fled the city and did not attend Slovak's funeral.[36] Irons, troubled by the death, decided to leave the band; following years of depression, he became a member of Seattle grunge band Pearl Jam in 1994.[37]
|
26 |
+
|
27 |
+
DeWayne "Blackbyrd" McKnight, a former member of Parliament-Funkadelic, was hired as guitarist, and D. H. Peligro of the punk rock band Dead Kennedys replaced Irons. Kiedis re-entered rehab, and visited Slovak's grave as part of his rehabilitation, finally confronting his grief. Thirty days later, Kiedis left rehab and was ready to resume his career with the band. Three dates into the tour, McKnight was fired, having lacked chemistry with the band.[38] McKnight was so unhappy he threatened to burn down Kiedis' house.[39]
|
28 |
+
|
29 |
+
Peligro introduced Kiedis and Flea to teenage guitarist and Chili Peppers fan John Frusciante.[40] Frusciante performed his first show with the band in September 1988. The new lineup began writing for the next album and went on a short tour, the Turd Town Tour. In November, Kiedis and Flea fired Peligro due to his drug and alcohol problems.[citation needed] Following open auditions, they hired drummer Chad Smith in December 1988, who has remained since.[41]
|
30 |
+
|
31 |
+
The Chili Peppers began work on their fourth album in 1989.[42] Unlike the stop-start sessions for The Uplift Mofo Party Plan, preproduction went smoothly. However, the sessions were made tense by Beinhorn's desire to create a hit, frustrating Frusciante and Kiedis.[43] Released on August 16, 1989, Mother's Milk peaked at number 52 on the U.S. Billboard 200.[33] The record failed to chart in the United Kingdom and Europe, but climbed to number 33 in Australia.[44] "Knock Me Down" reached number six on the U.S. Modern Rock Tracks, whereas "Higher Ground" charted at number eleven[45] and reached number 54 in the UK and 45 in Australia and France.[46][47] Mother's Milk was certified gold in March 1990 and was the first Chili Peppers album to ship over 500,000 units.[48]
|
32 |
+
|
33 |
+
In 1990, after the success of Mother's Milk, the Chili Peppers left EMI and entered a major-label bidding war. They signed with Warner Bros. Records and hired producer Rick Rubin. Rubin had turned the band down in 1987 because of their drug problems but felt they were now healthier and more focused. He would go on to produce five more of their albums. The writing process was more productive than it had been for Mother's Milk, with Kiedis saying, "[every day], there was new music for me to lyricize".[49] At Rubin's suggestion, they recorded in the Mansion, a studio in a house where magician Harry Houdini once lived.[50]
|
34 |
+
|
35 |
+
In September 1991, Blood Sugar Sex Magik was released. "Give It Away" was the first single; it became one of the band's best known songs, and in 1992 won a Grammy Award for "Best Hard Rock Performance With Vocal". It became the band's first number-one single on the Modern Rock chart.[45][51] The ballad "Under the Bridge" was released as a second single, and went on to reach No. 2 on the Billboard Hot 100 chart,[45] the highest the band had reached on that chart as of 2016.[45]
|
36 |
+
|
37 |
+
The album sold over 12 million copies.[52] Blood Sugar Sex Magik was listed at number 310 on the Rolling Stone magazine list of 500 Greatest Albums of All Time, and in 1992 it rose to No. 3 on the U.S. album charts, almost a year after its release. The album was accompanied by a documentary, Funky Monks.[53] The band kicked off their Blood Sugar Sex Magik tour, which featured Nirvana, Pearl Jam and Smashing Pumpkins, three of the era's biggest upcoming bands in alternative music, as opening acts.[54]
|
38 |
+
|
39 |
+
Frusciante was troubled by his newfound fame, and began falling out with Kiedis.[55] Unknown to others, Frusciante was also starting a drug habit and isolating himself. Frusciante abruptly quit the band hours before a show during the Blood Sugar Japanese tour in May 1992.[2][56] The band contacted guitarist Dave Navarro, who had just split from Jane's Addiction, but Navarro was involved in his own drug battles. After failed auditions, including with Zander Schloss, Arik Marshall of the Los Angeles band Marshall Law was hired, and the Chili Peppers headlined the Lollapalooza festival in 1992. Marshall would also appear in the music videos for "Breaking the Girl" and "If You Have to Ask", as well as in The Simpsons episode "Krusty Gets Kancelled".[citation needed] In September 1992, the Chili Peppers, with Marshall, performed "Give It Away" at the MTV Video Music Awards. The band was nominated for seven awards, winning three, including Viewer's Choice. In February 1993, they performed "Give It Away" at the Grammy Awards, and the song won the band their first Grammy later that evening.[citation needed]
|
40 |
+
|
41 |
+
The Chili Peppers dismissed Marshall as he was too busy to attend rehearsals.[citation needed] They held auditions for new guitarists, including Buckethead, whom Flea felt was not right for the band.[57] Guitarist Jesse Tobias of the Los Angeles-based band Mother Tongue was briefly hired, but dismissed as "the chemistry wasn't right".[58] However, Navarro said he was now ready to join the band.[59] In August 1993, the non-album single "Soul to Squeeze" was released and featured on the soundtrack to the film Coneheads.[60] The song topped the Billboard US Modern Rock chart.[61]
|
42 |
+
|
43 |
+
Navarro first appeared with the band at Woodstock '94, performing early versions of new songs. This was followed by a brief tour, including headlining appearances at Pukkelpop and Reading Festivals as well as two performances as the opening act for the Rolling Stones.[62] The relationship between Navarro and the band began to deteriorate;[63] Navarro admitted he did not care for funk music or jamming. Kiedis had relapsed into heroin addiction following a dental procedure in which an addictive sedative, Valium, was used, though the band did not discover this until later.[64]
|
44 |
+
|
45 |
+
Without Frusciante, songs were written at a far slower rate.[64] Kiedis said: "John had been a true anomaly when it came to songwriting ... I just figured that was how all guitar players were, that you showed them your lyrics and sang a little bit and the next thing you knew you had a song. That didn't happen right off the bat with Dave."[64] With Kiedis often absent from recording due to his drug problems, Flea took a larger role in the writing process, and sang lead on his song, "Pea".[citation needed]
|
46 |
+
|
47 |
+
One Hot Minute was released in September 1995 after several delays. It departed from the band's previous sound, with Navarro's guitar work incorporating heavy metal riffs and psychedelic rock.[65] The band described the album as a darker, sadder record.[66] Kiedis's lyrics addressed drugs, including the lead single, "Warped", and broken relationships and deaths of loved ones, including "Tearjerker," written about Kurt Cobain. Despite mixed reviews, the album sold eight million copies worldwide[67] and produced the band's third number-one single, "My Friends". The band also contributed to soundtracks including Working Class Hero: A Tribute to John Lennon and Beavis and Butt-Head Do America.[citation needed]
|
48 |
+
|
49 |
+
The band began the tour for One Hot Minute in Europe in 1995; the US tour was postponed after Smith broke his wrist. In 1997, several shows were cancelled following deteriorating band relations, injuries, and Navarro and Kiedis's drug use. They played their only show of the year at the first Fuji Rock Festival, in Japan.[citation needed] In April 1998, the band announced that Navarro had left due to creative differences; Kiedis stated that the decision was "mutual".[68] Reports at the time, however, indicated that Navarro's departure came after he attended a band practice under the influence of drugs.[67]
|
50 |
+
|
51 |
+
With no guitarist, the Red Hot Chili Peppers were on the verge of breaking up.[69] In the years following Frusciante's departure, it became public that he had developed a heroin addiction that left him in poverty and near death.[70] Flea convinced Frusciante to admit himself to Las Encinas Drug Rehabilitation Center in January 1998.[71][72] His addiction left him with permanent scarring on his arms, a restructured nose, and dental implants following an oral infection.[73][74] In April 1998, Flea visited the recovered Frusciante and asked him to rejoin the band; Frusciante began sobbing and said nothing would make him happier.[75][76]
|
52 |
+
|
53 |
+
In June 1999, after more than a year of production, the Red Hot Chili Peppers released Californication, their seventh studio album. It sold over 16 million copies, and remains their most successful album.[77] Californication contained fewer rap songs than its predecessors, instead integrating textured and melodic guitar riffs, vocals and basslines.[78] It produced three more number one modern rock hits, "Scar Tissue", "Otherside" and "Californication".[45] Californication received stronger reviews than One Hot Minute, and was a greater success worldwide.[78] While many critics credited the success of the album to Frusciante's return, they also felt Kiedis's vocals had improved.[79] It was later listed at number 399 on the Rolling Stone magazine list of the 500 Greatest Albums of All Time.[citation needed]
|
54 |
+
|
55 |
+
Californication was supported with a two-year international world tour, producing the first Chili Peppers concert DVD, Off the Map (2001).[80] In July 1999, the Chili Peppers played the closing show at Woodstock 1999.[2][81] During the set, a small fire escalated into violence and vandalism, resulting in the intervention of riot control squads.[82] ATMs and several semi-tractor trailers were looted and destroyed.[83][84] The band was blamed in the media for inciting the riots after performing a cover of the Hendrix song "Fire". In his memoir, Kiedis wrote: "It was clear that this situation had nothing to do with Woodstock anymore. It wasn't symbolic of peace and love, but of greed and cashing in."[85]
|
56 |
+
|
57 |
+
The Chili Peppers began writing their next album in early 2001, immediately following the Californication tour.[86] Frusciante and Kiedis would collaborate for days straight, discussing and sharing guitar progressions and lyrics.[87] For Kiedis, "writing By the Way ... was a whole different experience from Californication. John was back to himself and brimming with confidence."[86] The recording was difficult for Flea, who felt his role was being diminished[88] and fought with Frusciante about the musical direction.[88] Flea considered quitting the band after the album, but the two worked out their problems.[89]
|
58 |
+
|
59 |
+
By the Way was released in July 2002 and produced four singles; "By the Way", "The Zephyr Song", "Can't Stop" and "Universally Speaking". The album was their most subdued to date, focusing on melodic ballads over rap and funk, with layered textures, more keyboards, and string arrangements.[90] The album was followed by an eighteen-month world tour,[91] a concert DVD, Live at Slane Castle, and the band's first live album, Red Hot Chili Peppers Live in Hyde Park.[92] More than 258,000 fans paid over $17,100,000 for tickets over three nights, a 2004 record; the event ranked No. 1 on Billboard's Top Concert Boxscores of 2004.[93] In November 2003, the Chili Peppers released their Greatest Hits album, which featured new songs "Fortune Faded" and "Save the Population".[94]
|
60 |
+
|
61 |
+
In 2006, the Chili Peppers released their ninth album, Stadium Arcadium. Although they initially planned to release a trilogy of albums,[95] they chose to release a 28-track double album, releasing nine of the ten remaining tracks as B-sides. It was their first album to debut at number one on the US charts, where it stayed for two weeks, and debuted at number one in the UK and 25 other countries. Stadium Arcadium sold over seven million units.[96] It won five Grammys: Best Rock Album, Best Rock Song ("Dani California"), Best Rock Performance by a Duo Or Group With Vocal ("Dani California"), Best Boxed Or Special Limited Edition Package, and Best Producer (Rick Rubin).[51]
|
62 |
+
|
63 |
+
The first single, "Dani California", was the band's fastest-selling single, debuting on top of the Modern Rock chart in the U.S., peaking at number six on the Billboard Hot 100, and reaching number 2 in the UK. "Tell Me Baby", released next, also topped the charts in 2006. "Snow (Hey Oh)" was released in late 2006, breaking multiple records by 2007. The song became their eleventh number-one single, giving the band a cumulative total of 81 weeks at number one. It was also the first time three consecutive singles by the band made it to number one. "Desecration Smile" was released internationally in February 2007 and reached number 27 on the UK charts. "Hump de Bump" was planned to be the next single for the US, Canada, and Australia only, but due to positive feedback from the music video, it was released as a worldwide single in May 2007.[citation needed]
|
64 |
+
|
65 |
+
The Stadium Arcadium World Tour began in 2006, including several festival dates. Frusciante's friend and frequent musical collaborator Josh Klinghoffer joined the touring band, contributing guitar, backing vocals, and keyboards. The band was the musical guest for Saturday Night Live, which aired in May 2006 with featured host Tom Hanks.[97]
|
66 |
+
|
67 |
+
Following the last leg of the Stadium Arcadium tour, the Chili Peppers took an extended break. Kiedis attributed this to the band being worn out from their years of nonstop work since Californication (1999). Their only recording during this time was in 2008 with George Clinton on his album George Clinton and His Gangsters of Love; accompanied by Kim Manning, they recorded a new version of Shirley and Lee's classic "Let the Good Times Roll".[98]
|
68 |
+
|
69 |
+
Kiedis, who had recently become a father, planned to spend the time off taking care of his son and developing a television series based on his autobiography, Spider and Son.[99] Flea began taking music theory classes at the University of Southern California, and revealed plans to release a mainly instrumental solo record; guest musicians include Patti Smith and a choir from the Silverlake Conservatory.[100] He also joined Thom Yorke and touring Chili Peppers percussionist Mauro Refosco in the supergroup Atoms for Peace.[101] Frusciante released his solo album, The Empyrean.[102] Smith worked with Sammy Hagar, Joe Satriani, and Michael Anthony in the supergroup Chickenfoot, as well as on his solo project, Chad Smith's Bombastic Meatbats.[103]
|
70 |
+
|
71 |
+
In July 2009, Frusciante left the Chili Peppers, though no announcement was made until December 2009.[104] Frusciante explained on his Myspace page that there was no ill feeling about his departure this time, and that he wanted to focus on his solo work.[105] In October 2009, the Chili Peppers entered the studio to begin writing their tenth studio album, with Klinghoffer replacing Frusciante.[104]
|
72 |
+
|
73 |
+
In January 2010, the Chili Peppers, with Klinghoffer on guitar, made their live comeback, paying tribute to Neil Young with a cover of "A Man Needs a Maid" at MusiCares. In February, after months of speculation, Klinghoffer was confirmed as Frusciante's replacement.[106]
|
74 |
+
|
75 |
+
The band began recording their tenth studio album with producer Rick Rubin in September, and finished in March 2011. They decided against releasing another double album, reducing the album to 14 tracks.[107] I'm with You, the tenth Red Hot Chili Peppers album, was released in the US in August 2011. It topped the charts in 18 countries, and received mostly positive reviews. "The Adventures of Rain Dance Maggie" became the band's 12th number-one single.[108][109] "Monarchy of Roses", "Look Around" and "Did I Let You Know" (released only in Brazil), and "Brendan's Death Song" were also released as singles.[110]
|
76 |
+
|
77 |
+
In July 2011, the Chili Peppers played three invitation-only warm-up shows in California, their first since 2007 and their first with Klinghoffer as guitarist.[111][112] The band kicked off a month-long promotional tour in August 2011, starting in Asia. The I'm With You World Tour began in September 2011, lasting into 2013. The North American leg, expected to begin in January 2012, was postponed to March due to a surgery Kiedis required for foot injuries he had suffered through since the Stadium Arcadium tour. Following the I'm with You World Tour, the band set out on another small tour, including their first shows in Alaska, Paraguay, the Philippines and Puerto Rico.[citation needed] Recordings from the tours were released in 2012 on the free 2011 Live EP.[citation needed] During the band's break, Flea and touring Chili Peppers percussionist Mauro Refosco toured with their project Atoms For Peace.[citation needed]
|
78 |
+
|
79 |
+
The Chili Peppers were nominated for two MTV Europe Music Awards for Best Rock Band and Best Live Artist,[113] and nominated for Best Group at the 2012 People's Choice Awards.[114] I'm with You was also nominated for a 2012 Grammy Award for Best Rock Album.[115] In April 2012, the Chili Peppers were inducted into the Rock and Roll Hall of Fame. May saw the release of the download-only Rock & Roll Hall of Fame Covers EP, comprising previously released studio and live covers of artists that had influenced the band. From August 2012, the band began releasing a series of singles as the I'm with You Sessions, which were compiled on the I'm Beside You LP in November 2013 as a Record Store Day exclusive.[citation needed]
|
80 |
+
|
81 |
+
In February 2014, the Chili Peppers joined Bruno Mars as performers at the Super Bowl XLVIII halftime show, watched by a record 115.3 million viewers. The performance was met with mixed reviews for its use of backing music; Flea responded that it was an NFL rule for bands to pre-record music due to time and technical issues, and that they had agreed because it was a once-in-a-lifetime opportunity. He said Kiedis's vocals were completely live and the band had recorded "Give it Away" during rehearsals.[116] The band began another tour in May 2013, which ended in June 2014. The 2012-13 Live EP was released in July 2014 through their website as a free download.[citation needed]
|
82 |
+
|
83 |
+
The Chili Peppers released Fandemonium in November 2014, a book dedicated to their fans.[117] That December, they began work on their eleventh album, their first without producer Rick Rubin since 1989;[118] it was instead produced by Danger Mouse.[119] Flea suffered a broken arm during a skiing trip which delayed the recording for several months.[120] The band announced in May that "Dark Necessities", the first single from their upcoming album, would be released on May 5. On that same day, it was announced that the band's eleventh album would be titled The Getaway, and would be released in June.[121] Kiedis said many of the songs were influenced by a two-year relationship that fell apart.[122] "Dark Necessities" became the band's 25th top-ten single on the Billboard Alternative Songs chart, a record they hold over U2.[123] In February 2016, "Circle of the Noose", an unreleased song recorded with Navarro in March 1998, surfaced online.[124]
|
84 |
+
|
85 |
+
In May, the band released "The Getaway", the title track on their upcoming album.[125] The music video for "Dark Necessities", directed by actress Olivia Wilde, was released in June 2016.[126] The Getaway made its debut at number 2 on the Billboard 200 chart, behind Drake, who had the number-one album for eight consecutive weeks. The Getaway outsold Drake its opening week with album sales of 108,000 to 33,000 (actually placing him at 4th in sales for the week) though due to album streaming, Drake managed to top the band for the top position in the charts.[127][128] In July 2016, the Live In Paris EP was released exclusively through the music streaming website Deezer. "Go Robot" was announced as the second single from The Getaway. In the same month, the band members started to post images from the set of the music video.[129] The Getaway was re-issued on limited edition pink vinyl in September, as part of 10 Bands 1 Cause. All money from sales of the re-issue went to Gilda's Club NYC an organization that provides community support for both those diagnosed with cancer and their caretakers. It is named after comedian Gilda Radner.[130]
|
86 |
+
|
87 |
+
The band began the headlining portion of The Getaway World Tour in September with the North American leg, featuring Jack Irons, the band's original drummer, as an opening act on all dates beginning in January 2017.[131] Dave Rat, the band's sound engineer since 1991, announced that following the band's show of January 22, 2017, he would no longer be working with the band.[132]
|
88 |
+
|
89 |
+
The Getaway World Tour concluded in October 2017. The tour consisted of 151 shows lasting a year and almost five months.[133] In December, the band headlined the Band Together 2 Benefit Concert at the Bill Graham Civic Auditorium in San Francisco. Money raised from the concert went to the Tipping Point Emergency Relief Fund which between 2005 and 2017 raised $150 million to educate, employ, house and support those in need in the Bay Area.[134]
|
90 |
+
|
91 |
+
Work on a new album began in 2018,[135] with plans to release it in 2019.[136] The recording was delayed due to the Woolsey Fire; the band performed a benefit show for fire victims on January 13, 2019.[137] In February, the band performed "Dark Necessities" with rapper Post Malone at the 61st Annual Grammy Awards.[138][139] They appeared in Malone's music video for "Wow", released in March 2019.[140]
|
92 |
+
|
93 |
+
In February 2019, the Chili Peppers began a month-long tour, featuring their first headlining shows in Australia in twelve years,[141] including their first show in Tasmania, which was briefly halted due to a power outage.[142] On March 15, 2019, they performed in Egypt, becoming one of the few acts allowed to perform at the Pyramids of Giza.[143] The performance was live-streamed on YouTube, Twitter and Facebook.[144] On June 28, 2019, the Chili Peppers performed an unannounced private show in East Hampton, New York which was also livestreamed.[145] On July 12, 2019, they played a four-song show for the kids at Edwin Markham Middle School in Watts, Los Angeles.[146] The band performed in Abu Dhabi on September 4, 2019 as part of the UFC 242 event Abu Dhabi Showdown Week.[147]
|
94 |
+
|
95 |
+
On October 26, 2019, photographer David Mushegain announced that a Chili Peppers documentary was in the works.[148] Klinghoffer released his debut solo album, To Be One With You, on November 22, 2019, featuring Flea and former Chili Peppers drummer Jack Irons.[149] On November 2, 2019, the Chili Peppers performed at a charity event at the Silverlake Conservatory of Music in Los Angeles; it was their final show with Klinghoffer.[150] On December 15, 2019, the Chili Peppers released a statement via Instagram that, after 10 years, they had split with Klinghoffer and that Frusciante had rejoined the band. They wrote that Klinghoffer was "a beautiful musician who we respect and love".[151] In an interview on the podcast WTF with Marc Maron, Klinghoffer said there was no animosity: "It's absolutely John's place to be in that band ... I'm happy that he's back with them."[152]
|
96 |
+
|
97 |
+
In January 2020, Smith confirmed that the Chili Peppers had been working on a new album with Frusciante.[153] On February 8, Frusciante performed with the band for the first time in 13 years, at a memorial service held by the Tony Hawk Foundation for late film producer Andrew Burkle, son of billionaire Ronald Burkle.[154] The band's first full shows with Frusciante were scheduled for festivals in May,[155] but were cancelled due to the COVID-19 pandemic.[156]
|
98 |
+
|
99 |
+
The Chili Peppers' mix of hard rock, funk and hip hop has influenced genres such as funk metal,[157] rap metal,[158] rap rock[159] and nu metal.[160][158] In a 2002 interview with Penthouse, Kiedis said "We were early in creating the combination of hardcore funk with hip-hop-style vocals", and suggested they had influenced Limp Bizkit, Kid Rock, and Linkin Park.[161]
|
100 |
+
|
101 |
+
In an interview with Jason Tanamor, Smith said, "Certainly Anthony's singing style and voice lends itself to being unique, and nobody sounds like him. The cool thing about it is we can play any style of music whether it's hard and fast, or loud or quiet, slow or medium, whatever it is; rock or funk, and it still sounds like us. I'm proud of that because sometimes bands don't have that strong personality where you go, 'Oh, that's boom, right away.'"[162]
|
102 |
+
|
103 |
+
The band was inducted into the Rock and Roll Hall of Fame in April 2012. The induction lineup was Kiedis, Flea, Smith, Klinghoffer, Frusciante, Slovak (represented by his brother James), Irons and Martinez; Frusciante was invited, but did not attend.[110] Navarro and Sherman were not inducted; Sherman said he felt "dishonored".[163] The band performed "By the Way", "Give It Away" and "Higher Ground", which included Irons and Martinez on drums. It was the first time Kiedis and Flea had performed with Irons in 24 years and Martinez in 26 years.[164]
|
104 |
+
|
105 |
+
In 2012, Blood Sugar Sex Magik, Californication, and By the Way were ranked among Rolling Stone's 500 Greatest Albums of All Time at 310, 399, and 304, respectively.[165]
|
106 |
+
|
107 |
+
In 1990, the Chili Peppers appeared in PSA ads for Rock the Vote, a non-profit organization in the United States geared toward increasing voter turnout in the United States Presidential Election among voters ages 18 to 24.[166]
|
108 |
+
|
109 |
+
The band was invited by the Beastie Boys and the Milarepa Fund to perform at the Tibetan Freedom Concert in June 1996 in San Francisco. They also performed at the June 1998 Washington, D.C. concert as well. The concerts, which were held worldwide, were to support the cause of Tibetan independence.[citation needed] In September 2005, the band performed "Under the Bridge" at the ReAct Now: Music & Relief benefit which was held to raise money for the victims of Hurricane Katrina. The live event raised $30 million.[citation needed]
|
110 |
+
|
111 |
+
In July 2007, the band performed on behalf of former U.S. Vice President Al Gore who invited the band to perform at the London version of his Live Earth concerts which were held to raise awareness towards global warming and solving the most critical environmental issues of our time.[167] The band performed a free concert in downtown Cleveland, Ohio in April 2012 in support of President Obama's re-election campaign. The requirement for getting into the concert was agreeing to volunteer for the Obama 2012 phone bank. The event quickly met its capacity limit after being announced.[168]
|
112 |
+
|
113 |
+
In May 2013, the band performed a special concert in Portland, Oregon for the Dalai Lama as part of the Dalai Lama Environmental Summit.[169][170] In January 2015, they performed their first show of the new year for the Sean Penn & Friends Help Haiti Home fundraiser in support of the J/P Haitian Relief Organization.[171] The band were among over 120 entertainers and celebrities to sign up and announce that they would be voting for Bernie Sanders in the 2016 presidential election in September.[172][173] The band performed at a fundraiser event at the Belly Up Tavern in Solana Beach in the same month. All money was donated to A Reason To Survive (ARTS), Heartbeat Music Academy, San Diego Young Artists Music Academy, and the Silverlake Conservatory of Music.[174] In October, Kiedis and Flea hosted the annual benefit for the Silverlake Conservatory of Music. The band performed a special rare acoustic set.[175]
|
114 |
+
|
115 |
+
In February 2016, the band headlined a fundraiser concert in support of presidential candidate Bernie Sanders.[176] In April, the band performed at a private function on behalf of Facebook and Napster founder Sean Parker for his launch of The Parker Institute for Cancer Immunotherapy.[177] Chad Smith and Will Ferrell hosted the Red Hot Benefit Comedy + Music Show & Quinceanera in the same month. The benefit featured a performance by the Chili Peppers along with comedy acts selected by Ferrell and Funny or Die. A portion of the proceeds went to Ferrell's Cancer for College and Smith's Silverlake Conservatory of Music.[178]
|
116 |
+
|
117 |
+
In February 2018, Smith once again joined Ferrell at his One Classy Night benefit at the Moore Theater in Seattle to help raise money for Cancer for College. The event raised $300,000 in college scholarship money for students who have survived cancer.[179]
|
118 |
+
|
119 |
+
The musical style of the Red Hot Chili Peppers has been characterized as funk rock,[180][181][182][183] alternative rock,[184][185][186] funk metal[187][188][189] and rap rock,[182][183][190][191] with influences from hard, psychedelic and punk rock. The band's influences include Parliament-Funkadelic, Defunkt, Jimi Hendrix, the Misfits, Black Sabbath, Metallica, James Brown, Gang of Four, Bob Marley, Big Boys, Bad Brains, Sly and the Family Stone, Ohio Players, Queen, Stevie Wonder, Elvis Presley, Deep Purple, the Beach Boys, Black Flag, Ornette Coleman, Led Zeppelin, Yes,[192] Fugazi, Fishbone, Marvin Gaye, Billie Holiday, Santana, Elvis Costello, the Stooges,[193] the Clash, Siouxsie and the Banshees,[194][195] Devo, and Miles Davis.[196]
|
120 |
+
|
121 |
+
Kiedis provided multiple vocal styles. His primary approach up to Blood Sugar Sex Magik was spoken verse and rapping, which he complemented with traditional vocals. This helped the band to maintain a consistent style.[197] As the group matured, notably with Californication (1999), they reduced the number of rapped verses. By the Way (2002) contained only two songs with a rap-driven verse and melodic chorus.[198] Kiedis's more recent style was developed through ongoing coaching.[199]
|
122 |
+
|
123 |
+
Original guitarist Slovak's style was strongly based on blues and funk. Slovak was primarily influenced by hard-rock artists such as Hendrix, Kiss and Led Zeppelin,[200] while his playing method was based on improvisation common in funk.[201] He was noted for an aggressive playing style; he would often play with such force that his fingers would "come apart".[201] Kiedis observed that his playing evolved during his time away from the group in What Is This?, when Slovak adopted a more fluid style featuring "sultry" elements compared to his earlier hard-rock techniques.[202] On The Uplift Mofo Party Plan (1987), Slovak experimented with genres outside of traditional funk music including reggae and speed metal.[203] His guitar riffs would often serve as the basis of the group's songs, with the other members writing their parts to complement his guitar work. His melodic riff featured in the song "Behind the Sun" inspired the group to create "pretty" songs with an emphasis on melody.[31] Kiedis describes the song as "pure Hillel inspiration".[204] Slovak also used a talk box on songs such as "Green Heaven" and "Funky Crime", in which he would sing into a tube while playing to create psychedelic effects.[205]
|
124 |
+
|
125 |
+
Frusciante's musical style has evolved over the course of his career. His guitar playing employs melody and emotion rather than virtuosity.[clarification needed] Although virtuoso influences can be heard throughout his career, he has said that he often minimizes this.[206] Frusciante brought a melodic and textured sound, notably on Californication, By the Way and Stadium Arcadium (2006). This contrasts with his earlier abrasive approach in Mother's Milk,[207][208] as well as his dry, funky and more docile arrangements on Blood Sugar Sex Magik. On Californication and By the Way, Frusciante derived the technique of creating tonal texture through chord patterns from post-punk guitarist Vini Reilly of The Durutti Column, and bands such as Fugazi and The Cure.[209][210][211] On By The Way, he wanted people to be able to sing the lead guitar part, influenced by John McGeoch of Siouxsie and the Banshees, Johnny Marr of The Smiths and Bernard Sumner of Joy Division.[212] He initially wanted this album to be composed of "these punky, rough songs", drawing inspiration from early punk artists such as The Germs and The Damned. However, this was discouraged by producer Rick Rubin, and he instead built upon Californication's melodically driven style.[213] During the recording of Stadium Arcadium (2006), he moved away from his new-wave influences and concentrated on emulating flashier guitar players such as Hendrix and Van Halen.[214]
|
126 |
+
|
127 |
+
Navarro brought his own sound to the band during his tenure, with his style based on heavy metal, progressive rock and psychedelia.[215]
|
128 |
+
|
129 |
+
Klinghoffer's style employed a wide range of unconventional guitar effects and vocal treatments. In his debut Chili Peppers album, I'm with You (2011), he focused heavily on producing a textured, emotional sound to complement the vocals and atmosphere of each song. He has stated that he is a fan of jazz and funk.[citation needed]
|
130 |
+
|
131 |
+
Flea's bass guitar style can be considered an amalgamation of funk, psychedelic, punk, and hard rock.[216] The groove-heavy melodies, played through either finger-picking or slapping, contributed to their signature style. While Flea's slap bass style was prominent in earlier albums, albums after Blood Sugar Sex Magik[216] have more melodic and funk-driven bass lines. He has also used double stops on some newer songs. Flea's bass playing has changed considerably throughout the years. When he joined Fear, his technique centered largely around traditional punk-rock bass lines.[217] However, he changed this style when Red Hot Chili Peppers formed. He began to incorporate a "slap" bass style that drew influence largely from Bootsy Collins.[218] Blood Sugar Sex Magik saw a notable shift in style as it featured none of his signature technique but focused more on traditional and melodic roots.[219] His intellectual beliefs as a musician also shifted: "I was trying to play simply on Blood Sugar Sex Magik because I had been playing too much prior to that, so I thought, 'I've really got to chill out and play half as many notes'. When you play less, it's more exciting—there's more room for everything. If I do play something busy, it stands out, instead of the bass being a constant onslaught of notes. Space is good."[219]
|
132 |
+
|
133 |
+
Drummer Smith blends rock with funk, mixing metal and jazz to his beats. Influences include Buddy Rich and John Bonham.[220] He brought a different sound to Mother's Milk, playing tight and fast. In Blood Sugar Sex Magik, he displays greater power. He is recognized for his ghost notes, his beats and his fast right foot. MusicRadar put him in sixth place on their list of the "50 Greatest Drummers Of All Time".[221]
|
134 |
+
|
135 |
+
Early in the group's career, Kiedis wrote comical songs filled with sexual innuendos and songs inspired by friendship and the band members' personal experiences. However, after the death of his close friend and bandmate Hillel Slovak, Kiedis's lyrics became much more introspective and personal, as exemplified by the Mother's Milk song "Knock Me Down", which was dedicated to Slovak along with the Blood Sugar Sex Magik song "My Lovely Man".
|
136 |
+
|
137 |
+
When the band recorded One Hot Minute (1995) Kiedis had turned to drugs once again, which resulted in darker lyrics.[222] He began to write about anguish, and the self-mutilating thoughts he would experience as a result of his heroin and cocaine addiction.[223][224] The album also featured tributes to close friends the band lost during the recording process including Kurt Cobain on the song "Tearjerker" and River Phoenix on the song "Transcending".
|
138 |
+
|
139 |
+
After witnessing Frusciante's recovery from his heroin addiction, Kiedis wrote many songs inspired by rebirth and the meaning of life on Californication. He was also intrigued by the life lessons that the band had learned,[56] including Kiedis's experience with meeting a young mother at the YMCA, who was attempting to battle her crack addiction while living with her infant daughter.[69]
|
140 |
+
|
141 |
+
On By the Way, Kiedis was lyrically influenced by love, his girlfriend, and the emotions expressed when one fell in love.[225] Drugs also played an integral part in Kiedis's writings, as he had only been sober since 2000.[226] Tracks like "This Is the Place" and "Don't Forget Me" expressed his intense dislike for narcotics and the harmful physical and emotional effects they caused him. Stadium Arcadium (2006) continued the themes of love and romance; Kiedis stated that "love and women, pregnancies and marriages, relationship struggles—those are real and profound influences on this record. And it's great, because it wasn't just me writing about the fact that I'm in love. It was everybody in the band. We were brimming with energy based on falling in love."[227] I'm with You (2011) again featured Kiedis writing about the loss of a close friend, this time in the song "Brendan's Death Song", a tribute to club owner Brendan Mullen who gave the band some of their earliest shows and showed support to them throughout their career.
|
142 |
+
|
143 |
+
Themes within Kiedis's repertoire include love and friendship,[228][229] teenage angst, good-time aggression,[230] various sexual topics and the link between sex and music, political and social commentary (Native American issues in particular),[231] romance,[228][232][233] loneliness,[234] globalization and the cons of fame and Hollywood,[235] poverty, drugs, alcohol, dealing with death, and California.[86]
|
144 |
+
|
145 |
+
Kiedis was convicted of indecent exposure and sexual battery in 1989 after he exposed himself to a woman following a show in Virginia.[236] In Daytona Beach, Florida, Smith and Flea were arrested after filming an MTV Spring Break performance in 1990. They sexually harassed a 20-year-old woman during their show, after Flea walked into the crowd and carried her away.[237] In 2016, a former music executive accused two members of the band of sexually harassing her during a business meeting in 1991.[238]
|
en/4952.html.txt
ADDED
@@ -0,0 +1,75 @@
1 |
+
A referendum (plural: referendums or less commonly referenda) is a direct and universal vote in which an entire electorate is invited to vote on a particular proposal and can have nationwide or local forms. This may result in the adoption of a new policy or specific law. In some countries, it is synonymous with a plebiscite or a vote on a ballot question.
|
2 |
+
|
3 |
+
Some definitions of 'plebiscite' suggest it is a type of vote to change the constitution or government of a country.[1] The word, 'referendum' is often a catchall, used for both legislative referrals and initiatives. Australia defines 'referendum' as a vote to change the constitution and 'plebiscite' as a vote which does not affect the constitution,[2] whereas in Ireland, 'plebiscite' referred to the vote to adopt its constitution, but a subsequent vote to amend the constitution is called a 'referendum', as is a poll of the electorate on a non-constitutional bill.
|
4 |
+
|
5 |
+
'Referendum' is the gerundive form of the Latin verb refero, literally "to carry back" (from the verb fero, "to bear, bring, carry"[3] plus the inseparable prefix re-, here meaning "back"[4]). As a gerundive is an adjective,[5] not a noun,[6] it cannot be used alone in Latin, and must be contained within a context attached to a noun such as Propositum quod referendum est populo, "A proposal which must be carried back to the people". The addition of the verb sum (3rd person singular, est) to a gerundive, denotes the idea of necessity or compulsion, that which "must" be done, rather than that which is "fit for" doing. Its use as a noun in English is not considered a strictly grammatical usage of a foreign word, but is rather a freshly coined English noun, which follows English grammatical usage, not Latin grammatical usage. This determines the form of the plural in English, which according to English grammar should be "referendums". The use of "referenda" as a plural form in English (treating it as a Latin word and attempting to apply to it the rules of Latin grammar) is unsupportable according to the rules of both Latin and English grammar. The use of "referenda" as a plural form is posited hypothetically as either a gerund or a gerundive by the Oxford English Dictionary, which rules out such usage in both cases as follows:[7]
|
6 |
+
|
7 |
+
Referendums is logically preferable as a plural form meaning 'ballots on one issue' (as a Latin gerund,[8] referendum has no plural). The Latin plural gerundive 'referenda', meaning 'things to be referred', necessarily connotes a plurality of issues.[9]
|
8 |
+
|
9 |
+
It is closely related to agenda, "those matters which must be driven forward", from ago, to drive (cattle); and memorandum, "that matter which must be remembered", from memoro, to call to mind, corrigenda, from rego, to rule, make straight, those things which must be made straight (corrected), etc.
|
10 |
+
|
11 |
+
The name and use of the 'referendum' is thought to have originated in the Swiss canton of Graubünden as early as the 16th century.[10][11]
|
12 |
+
|
13 |
+
The term 'plebiscite' has a generally similar meaning in modern usage, and comes from the Latin plebiscita, which originally meant a decree of the Concilium Plebis (Plebeian Council), the popular assembly of the Roman Republic. Today, a referendum can also often be referred to as a plebiscite, but in some countries the two terms are used differently to refer to votes with differing types of legal consequences. For example, Australia defines 'referendum' as a vote to change the constitution, and 'plebiscite' as a vote that does not affect the constitution.[2] In contrast, Ireland has only ever held one plebiscite, which was the vote to adopt its constitution, and every other vote has been called a referendum. Plebiscite has also been used to denote a non-binding vote count such as the one held by Nazi Germany to 'approve' in retrospect the so-called Anschluss with Austria, the question being not 'Do you permit?' but rather 'Do you approve?' of that which has most definitely already occurred.
|
14 |
+
|
15 |
+
The term referendum covers a variety of different meanings. A referendum can be binding or advisory.[12] In some countries, different names are used for these two types of referendum. Referendums can be further classified by who initiates them.[13]
|
16 |
+
|
17 |
+
A mandatory referendum is automatically put to a vote if certain conditions are met and does not require any signatures from the public or legislative action. In areas that use referendums, a mandatory referendum is commonly used as a legally required step for ratifying constitutional changes, ratifying international treaties, joining international organizations, and approving certain types of public spending.[14]
|
18 |
+
|
19 |
+
Some countries or local governments require that any constitutional amendment be enacted through a mandatory referendum. These include Australia, Ireland, Switzerland, Denmark, and 49 of the 50 U.S. states (the only exception being Delaware).
|
20 |
+
|
21 |
+
Many localities require a mandatory referendum in order for the government to issue certain bonds, raise taxes above a specified amount, or take on certain amounts of debt. In California, the state government may not borrow more than $300,000 without a public vote in a statewide bond proposition.[15]
|
22 |
+
|
23 |
+
Switzerland has mandatory referendums on enacting international treaties that concern collective security and joining a supranational community. This type of referendum has only occurred once in the country's history: a failed attempt in 1986 for Switzerland to join the United Nations.[16]
|
24 |
+
|
25 |
+
A hypothetical type of referendum, first proposed by Immanuel Kant, is the war referendum, in which a declaration of war must be approved by a popular vote. It has never been enacted by any country, but was debated in the United States in the 1930s as the Ludlow Amendment.
|
26 |
+
|
27 |
+
An optional referendum is a question that is put to the vote as a result of a demand. This may come from the executive branch, legislative branch, or a request from the people (often after meeting a signature requirement).
|
28 |
+
|
29 |
+
Voluntary referendums, also known as a legislative referral, are initiated by the legislature or government. These may be advisory questions to gauge public opinion or binding questions of law.
|
30 |
+
|
31 |
+
An initiative is a citizen-led process to propose or amend laws or constitutional amendments, which are then voted on in a referendum.
|
32 |
+
|
33 |
+
A popular referendum is a vote to strike down an existing law or part of an existing law.
|
34 |
+
|
35 |
+
A recall referendum (also known as a recall election) is a procedure to remove officials before the end of their term of office. Depending on the area and position, a recall may target a specific individual, such as an individual legislator, or be more general, such as an entire legislature. In the U.S. states of Arizona, Montana, and Nevada, the recall may be used against any public official at any level of government, including both elected and appointed officials.[17]
|
36 |
+
|
37 |
+
Some territories may hold referendums on whether to become independent sovereign states. These types of referendums may be legally sanctioned and binding, such as the 2011 referendum on the independence of South Sudan, or may be unsanctioned and considered illegal, such as the 2017 referendum on the independence of Catalonia.
|
38 |
+
|
39 |
+
A deliberative referendum is a referendum specifically designed to improve the deliberative qualities of the campaign preceding the referendum vote, and/or of the act of voting itself.
|
40 |
+
|
41 |
+
From a political-philosophical perspective, referendums are an expression of direct democracy, but today, most referendums need to be understood within the context of representative democracy. They tend to be used quite selectively, covering issues such as changes in voting systems, where currently elected officials may not have the legitimacy or inclination to implement such changes.
|
42 |
+
|
43 |
+
Since the end of the 18th century, hundreds of national referendums have been organised in the world;[18] almost 600 national votes were held in Switzerland since its inauguration as a modern state in 1848.[19] Italy ranked second with 72 national referendums: 67 popular referendums (46 of which were proposed by the Radical Party), 3 constitutional referendums, one institutional referendum and one advisory referendum.[20]
|
44 |
+
|
45 |
+
A referendum usually offers the electorate a choice of accepting or rejecting a proposal, but not always. Some referendums give voters the choice among multiple choices and some use transferable voting.
|
46 |
+
|
47 |
+
In Switzerland, for example, multiple-choice referendums are common. Two multiple-choice referendums were held in Sweden, in 1957 and in 1980, in which voters were offered three options. In 1977, Australia held a referendum to determine a new national anthem, in which voters had four choices. In 1992, New Zealand held a five-option referendum on its electoral system. In 1982, Guam held a referendum that used six options, with an additional blank option for those wishing to (campaign and) vote for their own seventh option.
|
48 |
+
|
49 |
+
A multiple-choice referendum poses the question of how the result is to be determined. Such referendums may be set up so that, if no single option receives the support of an absolute majority (more than half) of the votes, resort can be made to the two-round system or instant-runoff voting (IRV), also known as preferential voting (PV).
|
50 |
+
|
51 |
+
In 2018 the Irish Citizens' Assembly considered the conduct of future referendums in Ireland, with 76 of the members in favour of allowing more than two options, and 52% favouring preferential voting in such cases.[21] Other people regard a non-majoritarian methodology like the Modified Borda Count (MBC) as more inclusive and more accurate.
|
52 |
+
|
53 |
+
Swiss referendums offer a separate vote on each of the multiple options as well as an additional decision about which of the multiple options should be preferred. In the Swedish case, in both referendums the 'winning' option was chosen by the Single Member Plurality ("first past the post") system. In other words, the winning option was deemed to be that supported by a plurality, rather than an absolute majority, of voters. In the 1977 Australian referendum, the winner was chosen by the system of preferential instant-runoff voting (IRV). Polls in Newfoundland (1949) and Guam (1982), for example, were counted under a form of the two-round system (TRS), and an unusual form of TRS was used in the 1992 New Zealand poll.
|
54 |
+
|
55 |
+
Although California has not held multiple-choice referendums in the Swiss or Swedish sense (in which only one of several counter-propositions can be victorious, and the losing proposals are wholly null and void), it does have so many yes-or-no referendums at each Election Day that conflicts arise. The State's Constitution provides a method for resolving conflicts when two or more inconsistent propositions are passed on the same day. This is a de facto form of approval voting—i.e. the proposition with the most "yes" votes prevails over the others to the extent of any conflict.
|
56 |
+
|
57 |
+
Another voting system that could be used in multiple-choice referendum is the Condorcet rule.
|
58 |
+
|
59 |
+
Critics[who?] of the referendum argue that voters in a referendum are more likely to be driven by transient whims than by careful deliberation, or that they are not sufficiently informed to make decisions on complicated or technical issues. Also, voters might be swayed by propaganda, strong personalities, intimidation, and expensive advertising campaigns. James Madison argued that direct democracy is the "tyranny of the majority".
|
60 |
+
|
61 |
+
Some opposition to the referendum has arisen from its use by dictators such as Adolf Hitler and Benito Mussolini who, it is argued,[22] used the plebiscite to disguise oppressive policies as populism. Dictators may also make use of referendums as well as show elections to further legitimize their authority such as António de Oliveira Salazar in 1933, Benito Mussolini in 1934, Adolf Hitler in 1936, Francisco Franco in 1947, Park Chung-hee in 1972, and Ferdinand Marcos in 1973. Hitler's use of plebiscites is argued[by whom?] as the reason why, since World War II, there has been no provision in Germany for the holding of referendums at the federal level.
|
62 |
+
|
63 |
+
In recent years, referendums have been used strategically by several European governments trying to pursue political and electoral goals.[23]
|
64 |
+
|
65 |
+
In 1995, Bruton considered that
|
66 |
+
|
67 |
+
All governments are unpopular. Given the chance, people would vote against them in a referendum. Therefore avoid referendums. Therefore don’t raise questions which require them, such as the big versus the little states.[24].
|
68 |
+
|
69 |
+
Some critics of the referendum attack the use of closed questions. A difficulty called the separability problem can plague a referendum on two or more issues. If one issue is in fact, or in perception, related to another on the ballot, the imposed simultaneous voting of first preference on each issue can result in an outcome which is displeasing to most.
|
70 |
+
|
71 |
+
Several commentators have noted that the use of citizens' initiatives to amend constitutions has so tied the government to a jumble of popular demands as to render the government unworkable. A 2009 article in The Economist argued that this had restricted the ability of the California state government to tax the people and pass the budget, and called for an entirely new Californian constitution.[25]
|
72 |
+
|
73 |
+
A similar problem also arises when elected governments accumulate excessive debts. That can severely reduce the effective margin for later governments.
|
74 |
+
|
75 |
+
Both these problems can be moderated by a combination of other measures as
|
en/4953.html.txt
ADDED
@@ -0,0 +1,75 @@
1 |
+
A referendum (plural: referendums or less commonly referenda) is a direct and universal vote in which an entire electorate is invited to vote on a particular proposal and can have nationwide or local forms. This may result in the adoption of a new policy or specific law. In some countries, it is synonymous with a plebiscite or a vote on a ballot question.
|
2 |
+
|
3 |
+
Some definitions of 'plebiscite' suggest it is a type of vote to change the constitution or government of a country.[1] The word, 'referendum' is often a catchall, used for both legislative referrals and initiatives. Australia defines 'referendum' as a vote to change the constitution and 'plebiscite' as a vote which does not affect the constitution,[2] whereas in Ireland, 'plebiscite' referred to the vote to adopt its constitution, but a subsequent vote to amend the constitution is called a 'referendum', as is a poll of the electorate on a non-constitutional bill.
|
4 |
+
|
5 |
+
'Referendum' is the gerundive form of the Latin verb refero, literally "to carry back" (from the verb fero, "to bear, bring, carry"[3] plus the inseparable prefix re-, here meaning "back"[4]). As a gerundive is an adjective,[5] not a noun,[6] it cannot be used alone in Latin, and must be contained within a context attached to a noun such as Propositum quod referendum est populo, "A proposal which must be carried back to the people". The addition of the verb sum (3rd person singular, est) to a gerundive, denotes the idea of necessity or compulsion, that which "must" be done, rather than that which is "fit for" doing. Its use as a noun in English is not considered a strictly grammatical usage of a foreign word, but is rather a freshly coined English noun, which follows English grammatical usage, not Latin grammatical usage. This determines the form of the plural in English, which according to English grammar should be "referendums". The use of "referenda" as a plural form in English (treating it as a Latin word and attempting to apply to it the rules of Latin grammar) is unsupportable according to the rules of both Latin and English grammar. The use of "referenda" as a plural form is posited hypothetically as either a gerund or a gerundive by the Oxford English Dictionary, which rules out such usage in both cases as follows:[7]
|
6 |
+
|
7 |
+
Referendums is logically preferable as a plural form meaning 'ballots on one issue' (as a Latin gerund,[8] referendum has no plural). The Latin plural gerundive 'referenda', meaning 'things to be referred', necessarily connotes a plurality of issues.[9]
|
8 |
+
|
9 |
+
It is closely related to agenda, "those matters which must be driven forward", from ago, to drive (cattle); and memorandum, "that matter which must be remembered", from memoro, to call to mind, corrigenda, from rego, to rule, make straight, those things which must be made straight (corrected), etc.
|
10 |
+
|
11 |
+
The name and use of the 'referendum' is thought to have originated in the Swiss canton of Graubünden as early as the 16th century.[10][11]
|
12 |
+
|
13 |
+
The term 'plebiscite' has a generally similar meaning in modern usage, and comes from the Latin plebiscita, which originally meant a decree of the Concilium Plebis (Plebeian Council), the popular assembly of the Roman Republic. Today, a referendum can also often be referred to as a plebiscite, but in some countries the two terms are used differently to refer to votes with differing types of legal consequences. For example, Australia defines 'referendum' as a vote to change the constitution, and 'plebiscite' as a vote that does not affect the constitution.[2] In contrast, Ireland has only ever held one plebiscite, which was the vote to adopt its constitution, and every other vote has been called a referendum. Plebiscite has also been used to denote a non-binding vote count such as the one held by Nazi Germany to 'approve' in retrospect the so-called Anschluss with Austria, the question being not 'Do you permit?' but rather 'Do you approve?' of that which has most definitely already occurred.
|
14 |
+
|
15 |
+
The term referendum covers a variety of different meanings. A referendum can be binding or advisory.[12] In some countries, different names are used for these two types of referendum. Referendums can be further classified by who initiates them.[13]
|
16 |
+
|
17 |
+
A mandatory referendum is automatically put to a vote if certain conditions are met and does not require any signatures from the public or legislative action. In areas that use referendums, a mandatory referendum is commonly used as a legally required step for ratifying constitutional changes, ratifying international treaties, joining international organizations, and approving certain types of public spending.[14]
|
18 |
+
|
19 |
+
Some countries or local governments require that any constitutional amendment be enacted through a mandatory referendum. These include Australia, Ireland, Switzerland, Denmark, and 49 of the 50 U.S. states (the only exception being Delaware).
|
20 |
+
|
21 |
+
Many localities require a mandatory referendum in order for the government to issue certain bonds, raise taxes above a specified amount, or take on certain amounts of debt. In California, the state government may not borrow more than $300,000 without a public vote in a statewide bond proposition.[15]
|
22 |
+
|
23 |
+
Switzerland has mandatory referendums on enacting international treaties that concern collective security and joining a supranational community. This type of referendum has only occurred once in the country's history: a failed attempt in 1986 for Switzerland to join the United Nations.[16]
|
24 |
+
|
25 |
+
A hypothetical type of referendum, first proposed by Immanuel Kant, is the war referendum, in which a declaration of war must be approved by a popular vote. It has never been enacted by any country, but was debated in the United States in the 1930s as the Ludlow Amendment.
|
26 |
+
|
27 |
+
An optional referendum is a question that is put to the vote as a result of a demand. This may come from the executive branch, legislative branch, or a request from the people (often after meeting a signature requirement).
|
28 |
+
|
29 |
+
Voluntary referendums, also known as a legislative referral, are initiated by the legislature or government. These may be advisory questions to gauge public opinion or binding questions of law.
|
30 |
+
|
31 |
+
An initiative is a citizen-led process to propose or amend laws or constitutional amendments, which are then voted on in a referendum.
|
32 |
+
|
33 |
+
A popular referendum is a vote to strike down an existing law or part of an existing law.
|
34 |
+
|
35 |
+
A recall referendum (also known as a recall election) is a procedure to remove officials before the end of their term of office. Depending on the area and position, a recall may target a specific individual, such as an individual legislator, or be more general, such as an entire legislature. In the U.S. states of Arizona, Montana, and Nevada, the recall may be used against any public official at any level of government, including both elected and appointed officials.[17]
|
36 |
+
|
37 |
+
Some territories may hold referendums on whether to become independent sovereign states. These types of referendums may be legally sanctioned and binding, such as the 2011 referendum on the independence of South Sudan, or may be unsanctioned and considered illegal, such as the 2017 referendum on the independence of Catalonia.
|
38 |
+
|
39 |
+
A deliberative referendum is a referendum specifically designed to improve the deliberative qualities of the campaign preceding the referendum vote, and/or of the act of voting itself.
|
40 |
+
|
41 |
+
From a political-philosophical perspective, referendums are an expression of direct democracy, but today, most referendums need to be understood within the context of representative democracy. They tend to be used quite selectively, covering issues such as changes in voting systems, where currently elected officials may not have the legitimacy or inclination to implement such changes.
|
42 |
+
|
43 |
+
Since the end of the 18th century, hundreds of national referendums have been organised in the world;[18] almost 600 national votes were held in Switzerland since its inauguration as a modern state in 1848.[19] Italy ranked second with 72 national referendums: 67 popular referendums (46 of which were proposed by the Radical Party), 3 constitutional referendums, one institutional referendum and one advisory referendum.[20]
|
44 |
+
|
45 |
+
A referendum usually offers the electorate a choice of accepting or rejecting a proposal, but not always. Some referendums give voters the choice among multiple choices and some use transferable voting.
|
46 |
+
|
47 |
+
In Switzerland, for example, multiple-choice referendums are common. Two multiple-choice referendums were held in Sweden, in 1957 and in 1980, in which voters were offered three options. In 1977, Australia held a referendum to determine a new national anthem, in which voters had four choices. In 1992, New Zealand held a five-option referendum on its electoral system. In 1982, Guam held a referendum that used six options, with an additional blank option for those wishing to (campaign and) vote for their own seventh option.
|
48 |
+
|
49 |
+
A multiple-choice referendum poses the question of how the result is to be determined. Such referendums may be set up so that, if no single option receives the support of an absolute majority (more than half) of the votes, resort can be made to the two-round system or instant-runoff voting (IRV), also known as preferential voting (PV).
|
50 |
+
|
51 |
+
In 2018 the Irish Citizens' Assembly considered the conduct of future referendums in Ireland, with 76 of the members in favour of allowing more than two options, and 52% favouring preferential voting in such cases.[21] Other people regard a non-majoritarian methodology like the Modified Borda Count (MBC) as more inclusive and more accurate.
|
52 |
+
|
53 |
+
Swiss referendums offer a separate vote on each of the multiple options as well as an additional decision about which of the multiple options should be preferred. In the Swedish case, in both referendums the 'winning' option was chosen by the Single Member Plurality ("first past the post") system. In other words, the winning option was deemed to be that supported by a plurality, rather than an absolute majority, of voters. In the 1977 Australian referendum, the winner was chosen by the system of preferential instant-runoff voting (IRV). Polls in Newfoundland (1949) and Guam (1982), for example, were counted under a form of the two-round system (TRS), and an unusual form of TRS was used in the 1992 New Zealand poll.
|
54 |
+
|
55 |
+
Although California has not held multiple-choice referendums in the Swiss or Swedish sense (in which only one of several counter-propositions can be victorious, and the losing proposals are wholly null and void), it does have so many yes-or-no referendums at each Election Day that conflicts arise. The State's Constitution provides a method for resolving conflicts when two or more inconsistent propositions are passed on the same day. This is a de facto form of approval voting—i.e. the proposition with the most "yes" votes prevails over the others to the extent of any conflict.
|
56 |
+
|
57 |
+
Another voting system that could be used in multiple-choice referendum is the Condorcet rule.
|
58 |
+
|
59 |
+
Critics[who?] of the referendum argue that voters in a referendum are more likely to be driven by transient whims than by careful deliberation, or that they are not sufficiently informed to make decisions on complicated or technical issues. Also, voters might be swayed by propaganda, strong personalities, intimidation, and expensive advertising campaigns. James Madison argued that direct democracy is the "tyranny of the majority".
|
60 |
+
|
61 |
+
Some opposition to the referendum has arisen from its use by dictators such as Adolf Hitler and Benito Mussolini who, it is argued,[22] used the plebiscite to disguise oppressive policies as populism. Dictators may also make use of referendums as well as show elections to further legitimize their authority such as António de Oliveira Salazar in 1933, Benito Mussolini in 1934, Adolf Hitler in 1936, Francisco Franco in 1947, Park Chung-hee in 1972, and Ferdinand Marcos in 1973. Hitler's use of plebiscites is argued[by whom?] as the reason why, since World War II, there has been no provision in Germany for the holding of referendums at the federal level.
|
62 |
+
|
63 |
+
In recent years, referendums have been used strategically by several European governments trying to pursue political and electoral goals.[23]
|
64 |
+
|
65 |
+
In 1995, Bruton considered that
|
66 |
+
|
67 |
+
All governments are unpopular. Given the chance, people would vote against them in a referendum. Therefore avoid referendums. Therefore don’t raise questions which require them, such as the big versus the little states.[24].
|
68 |
+
|
69 |
+
Some critics of the referendum attack the use of closed questions. A difficulty called the separability problem can plague a referendum on two or more issues. If one issue is in fact, or in perception, related to another on the ballot, the imposed simultaneous voting of first preference on each issue can result in an outcome which is displeasing to most.
|
70 |
+
|
71 |
+
Several commentators have noted that the use of citizens' initiatives to amend constitutions has so tied the government to a jumble of popular demands as to render the government unworkable. A 2009 article in The Economist argued that this had restricted the ability of the California state government to tax the people and pass the budget, and called for an entirely new Californian constitution.[25]
|
72 |
+
|
73 |
+
A similar problem also arises when elected governments accumulate excessive debts. That can severely reduce the effective margin for later governments.
|
74 |
+
|
75 |
+
Both these problems can be moderated by a combination of other measures as
|
en/4954.html.txt
ADDED
@@ -0,0 +1,75 @@
1 |
+
A referendum (plural: referendums or less commonly referenda) is a direct and universal vote in which an entire electorate is invited to vote on a particular proposal and can have nationwide or local forms. This may result in the adoption of a new policy or specific law. In some countries, it is synonymous with a plebiscite or a vote on a ballot question.
|
2 |
+
|
3 |
+
Some definitions of 'plebiscite' suggest it is a type of vote to change the constitution or government of a country.[1] The word, 'referendum' is often a catchall, used for both legislative referrals and initiatives. Australia defines 'referendum' as a vote to change the constitution and 'plebiscite' as a vote which does not affect the constitution,[2] whereas in Ireland, 'plebiscite' referred to the vote to adopt its constitution, but a subsequent vote to amend the constitution is called a 'referendum', as is a poll of the electorate on a non-constitutional bill.
|
4 |
+
|
5 |
+
'Referendum' is the gerundive form of the Latin verb refero, literally "to carry back" (from the verb fero, "to bear, bring, carry"[3] plus the inseparable prefix re-, here meaning "back"[4]). As a gerundive is an adjective,[5] not a noun,[6] it cannot be used alone in Latin, and must be contained within a context attached to a noun such as Propositum quod referendum est populo, "A proposal which must be carried back to the people". The addition of the verb sum (3rd person singular, est) to a gerundive, denotes the idea of necessity or compulsion, that which "must" be done, rather than that which is "fit for" doing. Its use as a noun in English is not considered a strictly grammatical usage of a foreign word, but is rather a freshly coined English noun, which follows English grammatical usage, not Latin grammatical usage. This determines the form of the plural in English, which according to English grammar should be "referendums". The use of "referenda" as a plural form in English (treating it as a Latin word and attempting to apply to it the rules of Latin grammar) is unsupportable according to the rules of both Latin and English grammar. The use of "referenda" as a plural form is posited hypothetically as either a gerund or a gerundive by the Oxford English Dictionary, which rules out such usage in both cases as follows:[7]
|
6 |
+
|
7 |
+
Referendums is logically preferable as a plural form meaning 'ballots on one issue' (as a Latin gerund,[8] referendum has no plural). The Latin plural gerundive 'referenda', meaning 'things to be referred', necessarily connotes a plurality of issues.[9]
|
8 |
+
|
9 |
+
It is closely related to agenda, "those matters which must be driven forward", from ago, to drive (cattle); and memorandum, "that matter which must be remembered", from memoro, to call to mind, corrigenda, from rego, to rule, make straight, those things which must be made straight (corrected), etc.
|
10 |
+
|
11 |
+
The name and use of the 'referendum' is thought to have originated in the Swiss canton of Graubünden as early as the 16th century.[10][11]
|
12 |
+
|
13 |
+
The term 'plebiscite' has a generally similar meaning in modern usage, and comes from the Latin plebiscita, which originally meant a decree of the Concilium Plebis (Plebeian Council), the popular assembly of the Roman Republic. Today, a referendum can also often be referred to as a plebiscite, but in some countries the two terms are used differently to refer to votes with differing types of legal consequences. For example, Australia defines 'referendum' as a vote to change the constitution, and 'plebiscite' as a vote that does not affect the constitution.[2] In contrast, Ireland has only ever held one plebiscite, which was the vote to adopt its constitution, and every other vote has been called a referendum. Plebiscite has also been used to denote a non-binding vote count such as the one held by Nazi Germany to 'approve' in retrospect the so-called Anschluss with Austria, the question being not 'Do you permit?' but rather 'Do you approve?' of that which has most definitely already occurred.
|
14 |
+
|
15 |
+
The term referendum covers a variety of different meanings. A referendum can be binding or advisory.[12] In some countries, different names are used for these two types of referendum. Referendums can be further classified by who initiates them.[13]
|
16 |
+
|
17 |
+
A mandatory referendum is automatically put to a vote if certain conditions are met and does not require any signatures from the public or legislative action. In areas that use referendums, a mandatory referendum is commonly used as a legally required step for ratifying constitutional changes, ratifying international treaties, joining international organizations, and approving certain types of public spending.[14]
|
18 |
+
|
19 |
+
Some countries or local governments require that any constitutional amendment be enacted through a mandatory referendum. These include Australia, Ireland, Switzerland, Denmark, and 49 of the 50 U.S. states (the only exception being Delaware).
|
20 |
+
|
21 |
+
Many localities require a mandatory referendum in order for the government to issue certain bonds, raise taxes above a specified amount, or take on certain amounts of debt. In California, the state government may not borrow more than $300,000 without a public vote in a statewide bond proposition.[15]
|
22 |
+
|
23 |
+
Switzerland has mandatory referendums on enacting international treaties that concern collective security and joining a supranational community. This type of referendum has only occurred once in the country's history: a failed attempt in 1986 for Switzerland to join the United Nations.[16]
|
24 |
+
|
25 |
+
A hypothetical type of referendum, first proposed by Immanuel Kant, is the war referendum, in which a declaration of war must be approved by a popular vote. It has never been enacted by any country, but was debated in the United States in the 1930s as the Ludlow Amendment.
|
26 |
+
|
27 |
+
An optional referendum is a question that is put to the vote as a result of a demand. This may come from the executive branch, legislative branch, or a request from the people (often after meeting a signature requirement).
|
28 |
+
|
29 |
+
Voluntary referendums, also known as a legislative referral, are initiated by the legislature or government. These may be advisory questions to gauge public opinion or binding questions of law.
|
30 |
+
|
31 |
+
An initiative is a citizen-led process to propose or amend laws or constitutional amendments, which are then voted on in a referendum.
|
32 |
+
|
33 |
+
A popular referendum is a vote to strike down an existing law or part of an existing law.
|
34 |
+
|
35 |
+
A recall referendum (also known as a recall election) is a procedure to remove officials before the end of their term of office. Depending on the area and position, a recall may target a specific individual, such as an individual legislator, or be more general, such as an entire legislature. In the U.S. states of Arizona, Montana, and Nevada, the recall may be used against any public official at any level of government, including both elected and appointed officials.[17]
|
36 |
+
|
37 |
+
Some territories may hold referendums on whether to become independent sovereign states. These types of referendums may be legally sanctioned and binding, such as the 2011 referendum on the independence of South Sudan, or may be unsanctioned and considered illegal, such as the 2017 referendum on the independence of Catalonia.
|
38 |
+
|
39 |
+
A deliberative referendum is a referendum specifically designed to improve the deliberative qualities of the campaign preceding the referendum vote, and/or of the act of voting itself.
|
40 |
+
|
41 |
+
From a political-philosophical perspective, referendums are an expression of direct democracy, but today, most referendums need to be understood within the context of representative democracy. They tend to be used quite selectively, covering issues such as changes in voting systems, where currently elected officials may not have the legitimacy or inclination to implement such changes.
|
42 |
+
|
43 |
+
Since the end of the 18th century, hundreds of national referendums have been organised in the world;[18] almost 600 national votes were held in Switzerland since its inauguration as a modern state in 1848.[19] Italy ranked second with 72 national referendums: 67 popular referendums (46 of which were proposed by the Radical Party), 3 constitutional referendums, one institutional referendum and one advisory referendum.[20]
|
44 |
+
|
45 |
+
A referendum usually offers the electorate a choice of accepting or rejecting a proposal, but not always. Some referendums give voters the choice among multiple choices and some use transferable voting.
|
46 |
+
|
47 |
+
In Switzerland, for example, multiple-choice referendums are common. Two multiple-choice referendums were held in Sweden, in 1957 and in 1980, in which voters were offered three options. In 1977, Australia held a referendum to determine a new national anthem, in which voters had four choices. In 1992, New Zealand held a five-option referendum on its electoral system. In 1982, Guam held a referendum that used six options, with an additional blank option for those wishing to (campaign and) vote for their own seventh option.
|
48 |
+
|
49 |
+
A multiple-choice referendum poses the question of how the result is to be determined. Such referendums may be set up so that, if no single option receives the support of an absolute majority (more than half) of the votes, resort can be made to the two-round system or instant-runoff voting (IRV), also known as preferential voting (PV).
|
50 |
+
|
51 |
+
In 2018 the Irish Citizens' Assembly considered the conduct of future referendums in Ireland, with 76 of the members in favour of allowing more than two options, and 52% favouring preferential voting in such cases.[21] Other people regard a non-majoritarian methodology like the Modified Borda Count (MBC) as more inclusive and more accurate.
|
52 |
+
|
53 |
+
Swiss referendums offer a separate vote on each of the multiple options as well as an additional decision about which of the multiple options should be preferred. In the Swedish case, in both referendums the 'winning' option was chosen by the Single Member Plurality ("first past the post") system. In other words, the winning option was deemed to be that supported by a plurality, rather than an absolute majority, of voters. In the 1977 Australian referendum, the winner was chosen by the system of preferential instant-runoff voting (IRV). Polls in Newfoundland (1949) and Guam (1982), for example, were counted under a form of the two-round system (TRS), and an unusual form of TRS was used in the 1992 New Zealand poll.
|
54 |
+
|
55 |
+
Although California has not held multiple-choice referendums in the Swiss or Swedish sense (in which only one of several counter-propositions can be victorious, and the losing proposals are wholly null and void), it does have so many yes-or-no referendums at each Election Day that conflicts arise. The State's Constitution provides a method for resolving conflicts when two or more inconsistent propositions are passed on the same day. This is a de facto form of approval voting—i.e. the proposition with the most "yes" votes prevails over the others to the extent of any conflict.
|
56 |
+
|
57 |
+
Another voting system that could be used in multiple-choice referendum is the Condorcet rule.
|
58 |
+
|
59 |
+
Critics[who?] of the referendum argue that voters in a referendum are more likely to be driven by transient whims than by careful deliberation, or that they are not sufficiently informed to make decisions on complicated or technical issues. Also, voters might be swayed by propaganda, strong personalities, intimidation, and expensive advertising campaigns. James Madison argued that direct democracy is the "tyranny of the majority".
|
60 |
+
|
61 |
+
Some opposition to the referendum has arisen from its use by dictators such as Adolf Hitler and Benito Mussolini who, it is argued,[22] used the plebiscite to disguise oppressive policies as populism. Dictators may also make use of referendums as well as show elections to further legitimize their authority such as António de Oliveira Salazar in 1933, Benito Mussolini in 1934, Adolf Hitler in 1936, Francisco Franco in 1947, Park Chung-hee in 1972, and Ferdinand Marcos in 1973. Hitler's use of plebiscites is argued[by whom?] as the reason why, since World War II, there has been no provision in Germany for the holding of referendums at the federal level.
|
62 |
+
|
63 |
+
In recent years, referendums have been used strategically by several European governments trying to pursue political and electoral goals.[23]
|
64 |
+
|
65 |
+
In 1995, Bruton considered that
|
66 |
+
|
67 |
+
All governments are unpopular. Given the chance, people would vote against them in a referendum. Therefore avoid referendums. Therefore don’t raise questions which require them, such as the big versus the little states.[24].
|
68 |
+
|
69 |
+
Some critics of the referendum attack the use of closed questions. A difficulty called the separability problem can plague a referendum on two or more issues. If one issue is in fact, or in perception, related to another on the ballot, the imposed simultaneous voting of first preference on each issue can result in an outcome which is displeasing to most.
|
70 |
+
|
71 |
+
Several commentators have noted that the use of citizens' initiatives to amend constitutions has so tied the government to a jumble of popular demands as to render the government unworkable. A 2009 article in The Economist argued that this had restricted the ability of the California state government to tax the people and pass the budget, and called for an entirely new Californian constitution.[25]
|
72 |
+
|
73 |
+
A similar problem also arises when elected governments accumulate excessive debts. That can severely reduce the effective margin for later governments.
|
74 |
+
|
75 |
+
Both these problems can be moderated by a combination of other measures as
|
en/4955.html.txt
ADDED
The diff for this file is too large to render.
See raw diff
|
|
en/4956.html.txt
ADDED
@@ -0,0 +1,53 @@
1 |
+
|
2 |
+
|
3 |
+
In physics, refraction is the change in direction of a wave passing from one medium to another or from a gradual change in the medium.[1] Refraction of light is the most commonly observed phenomenon, but other waves such as sound waves and water waves also experience refraction. How much a wave is refracted is determined by the change in wave speed and the initial direction of wave propagation relative to the direction of change in speed.
|
4 |
+
|
5 |
+
For light, refraction follows Snell's law, which states that, for a given pair of media, the ratio of the sines of the angle of incidence θ1 and angle of refraction θ2 is equal to the ratio of phase velocities (v1 / v2) in the two media, or equivalently, to the indices of refraction (n2 / n1) of the two media.[2]
|
6 |
+
|
7 |
+
Optical prisms and lenses use refraction to redirect light, as does the human eye. The refractive index of materials varies with the wavelength of light,[3] and thus the angle of the refraction also varies correspondingly. This is called dispersion and causes prisms and rainbows to divide white light into its constituent spectral colors.[4]
|
8 |
+
|
9 |
+
Refraction of light can be seen in many places in our everyday life. It makes objects under a water surface appear closer than they really are. It is what optical lenses are based on, allowing for instruments such as glasses, cameras, binoculars, microscopes, and the human eye. Refraction is also responsible for some natural optical phenomena including rainbows and mirages.
|
10 |
+
|
11 |
+
A correct explanation of refraction involves two separate parts, both a result of the wave nature of light.
|
12 |
+
|
13 |
+
As described above, the speed of light is slower in a medium other than vacuum. This slowing applies to any medium such as air, water, or glass, and is responsible for phenomena such as refraction. When light leaves the medium and returns to a vacuum, and ignoring any effects of gravity, its speed returns to the usual speed of light in a vacuum, c.
|
14 |
+
|
15 |
+
Common explanations for this slowing, based upon the idea of light scattering from, or being absorbed and re-emitted by atoms, are both incorrect. Explanations like these would cause a "blurring" effect in the resulting light, as it would no longer be travelling in just one direction. But this effect is not seen in nature.
|
16 |
+
|
17 |
+
A more correct explanation rests on light's nature as an electromagnetic wave.[5] Because light is an oscillating electrical/magnetic wave, light traveling in a medium causes the electrically charged electrons of the material to also oscillate. (The material's protons also oscillate but as they are around 2000 times more massive, their movement and therefore their effect, is far smaller). A moving electrical charge emits electromagnetic waves of its own. The electromagnetic waves emitted by the oscillating electrons, interact with the electromagnetic waves that make up the original light, similar to water waves on a pond, a process known as constructive interference. When two waves interfere in this way, the resulting "combined" wave may have wave packets that pass an observer at a slower rate. The light has effectively been slowed. When the light leaves the material, this interaction with electrons no longer happens, and therefore the wave packet rate (and therefore its speed) return to normal.
|
18 |
+
|
19 |
+
Consider a wave going from one material to another where its speed is slower as in the figure. If it reaches the interface between the materials at an angle one side of the wave will reach the second material first, and therefore slow down earlier. With one side of the wave going slower the whole wave will pivot towards that side. This is why a wave will bend away from the surface or toward the normal when going into a slower material. In the opposite case of a wave reaching a material where the speed is higher, one side of the wave will speed up and the wave will pivot away from that side.
|
20 |
+
|
21 |
+
Another way of understanding the same thing is to consider the change in wavelength at the interface. When the wave goes from one material to another where the wave has a different speed v, the frequency f of the wave will stay the same, but the distance between wavefronts or wavelength λ = v/f will change. If the speed is decreased, such as in the figure to the right, the wavelength will also decrease. With an angle between the wave fronts and the interface, and a change in distance between the wave fronts, the angle must change over the interface to keep the wave fronts intact. From these considerations the relationship between the angle of incidence θ1, the angle of transmission θ2 and the wave speeds v1 and v2 in the two materials can be derived. This is the law of refraction or Snell's law and can be written as[6] sin θ1 / sin θ2 = v1 / v2.
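To make the wavelength relationship concrete, the following is a minimal Python sketch (not part of the original article; the frequency and speed values are assumed example numbers for light near the sodium D line passing from vacuum into water):

# Minimal sketch: at an interface the frequency f stays the same, so the
# wavelength lambda = v / f scales with the phase speed. Example values assumed.
f = 5.09e14                      # Hz, roughly the sodium D line (assumed example)
v1 = 3.0e8                       # m/s, phase speed in vacuum
v2 = 3.0e8 / 1.33                # m/s, phase speed in water (n = 1.33)
lam1, lam2 = v1 / f, v2 / f
print(lam1, lam2)                # the wavelength shrinks by the factor v2/v1 = 1/1.33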
|
22 |
+
|
23 |
+
The phenomenon of refraction can in a more fundamental way be derived from the 2- or 3-dimensional wave equation. The boundary condition at the interface will then require the tangential component of the wave vector to be identical on the two sides of the interface.[7] Since the magnitude of the wave vector depends on the wave speed, this requires a change in direction of the wave vector.
|
24 |
+
|
25 |
+
The relevant wave speed in the discussion above is the phase velocity of the wave. This is typically close to the group velocity which can be seen as the truer speed of a wave, but when they differ it is important to use the phase velocity in all calculations relating to refraction.
|
26 |
+
|
27 |
+
A wave traveling perpendicular to a boundary, i.e. having its wavefronts parallel to the boundary, will not change direction even if the speed of the wave changes.
|
28 |
+
|
29 |
+
For light, the refractive index n of a material is more often used than the wave phase speed v in the material. They are, however, directly related through the speed of light in vacuum c as n = c/v.
|
30 |
+
|
31 |
+
In optics, therefore, the law of refraction is typically written as n1 sin θ1 = n2 sin θ2.
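As a numerical illustration of this form of the law, here is a short Python sketch (not from the original article; the function name and the air/water indices are assumed for the example). It solves for the refraction angle and reports total internal reflection when no real solution exists:

import math

def refraction_angle(theta1_deg, n1, n2):
    """Solve n1*sin(theta1) = n2*sin(theta2) for theta2, in degrees."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None  # no real solution: total internal reflection
    return math.degrees(math.asin(s))

print(refraction_angle(30.0, 1.0, 1.33))   # air to water: about 22.1 degrees, bent toward the normal
print(refraction_angle(60.0, 1.33, 1.0))   # water to air beyond the critical angle (~48.8 degrees): None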
|
32 |
+
|
33 |
+
Refraction occurs when light goes through a water surface since water has a refractive index of 1.33 and air has a refractive index of about 1. Looking at a straight object, such as a pencil in the figure here, which is placed at a slant, partially in the water, the object appears to bend at the water's surface. This is due to the bending of light rays as they move from the water to the air. Once the rays reach the eye, the eye traces them back as straight lines (lines of sight). The lines of sight (shown as dashed lines) intersect at a higher position than where the actual rays originated. This causes the pencil to appear higher and the water to appear shallower than it really is.
|
34 |
+
|
35 |
+
The depth that the water appears to be when viewed from above is known as the apparent depth. This is an important consideration for spearfishing from the surface because it will make the target fish appear to be in a different place, and the fisher must aim lower to catch the fish. Conversely, an object above the water has a higher apparent height when viewed from below the water. The opposite correction must be made by an archer fish.[8]
|
36 |
+
|
37 |
+
For small angles of incidence (measured from the normal, where sin θ is approximately the same as tan θ), the ratio of apparent to real depth is the ratio of the refractive index of air to that of water. But, as the angle of incidence approaches 90°, the apparent depth approaches zero, although reflection increases, which limits observation at high angles of incidence. Conversely, the apparent height approaches infinity as the angle of incidence (from below) increases, and does so even earlier, as the angle of total internal reflection is approached, although the image also fades from view as this limit is approached.
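The small-angle ratio described above can be turned into a one-line estimate. The sketch below assumes that approximation and illustrative indices (air 1.0, water 1.33); it is not part of the original article:

def apparent_depth(real_depth_m, n_object_medium=1.33, n_observer_medium=1.0):
    """Small-angle estimate of how deep a submerged object appears from above."""
    return real_depth_m * (n_observer_medium / n_object_medium)

print(apparent_depth(1.0))   # an object 1 m under water appears about 0.75 m deep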
|
38 |
+
|
39 |
+
Refraction is also responsible for rainbows and for the splitting of white light into a rainbow-spectrum as it passes through a glass prism. Glass has a higher refractive index than air. When a beam of white light passes from air into a material having an index of refraction that varies with frequency, a phenomenon known as dispersion occurs, in which different coloured components of the white light are refracted at different angles, i.e., they bend by different amounts at the interface, so that they become separated. The different colors correspond to different frequencies.
|
40 |
+
|
41 |
+
The refractive index of air depends on the air density and thus varies with air temperature and pressure. Since the pressure is lower at higher altitudes, the refractive index is also lower, causing light rays to refract towards the Earth's surface when traveling long distances through the atmosphere. This shifts the apparent positions of stars slightly when they are close to the horizon and makes the sun visible before it geometrically rises above the horizon during a sunrise.
|
42 |
+
|
43 |
+
Temperature variations in the air can also cause refraction of light. This can be seen as a heat haze when hot and cold air is mixed, e.g. over a fire, in engine exhaust, or when opening a window on a cold day. This makes objects viewed through the mixed air appear to shimmer or move around randomly as the hot and cold air moves. The effect is also visible from normal variations in air temperature during a sunny day when using high-magnification telephoto lenses, and it often limits the image quality in these cases.
|
44 |
+
[9] In a similar way, atmospheric turbulence gives rapidly varying distortions in the images of astronomical telescopes limiting the resolution of terrestrial telescopes not using adaptive optics or other techniques for overcoming these atmospheric distortions.
|
45 |
+
|
46 |
+
Air temperature variations close to the surface can give rise to other optical phenomena, such as mirages and Fata Morgana. Most commonly, air heated by a hot road on a sunny day deflects light approaching at a shallow angle towards the viewer. This makes the road appear reflective, giving an illusion of water covering the road.
|
47 |
+
|
48 |
+
In medicine, particularly optometry, ophthalmology and orthoptics, refraction (also known as refractometry) is a clinical test in which a phoropter may be used by the appropriate eye care professional to determine the eye's refractive error and the best corrective lenses to be prescribed. A series of test lenses in graded optical powers or focal lengths are presented to determine which provides the sharpest, clearest vision.[10]
|
49 |
+
|
50 |
+
Water waves travel slower in shallower water. This can be used to demonstrate refraction in ripple tanks and also explains why waves on a shoreline tend to strike the shore close to a perpendicular angle. As the waves travel from deep water into shallower water near the shore, they are refracted from their original direction of travel to an angle more normal to the shoreline.[11]
|
51 |
+
|
52 |
+
In underwater acoustics, refraction is the bending or curving of a sound ray that results when the ray passes through a sound speed gradient from a region of one sound speed to a region of a different speed. The amount of ray bending is dependent on the amount of difference between sound speeds, that is, the variation in temperature, salinity, and pressure of the water.[12]
|
53 |
+
Similar acoustics effects are also found in the Earth's atmosphere. The phenomenon of refraction of sound in the atmosphere has been known for centuries;[13] however, beginning in the early 1970s, widespread analysis of this effect came into vogue through the designing of urban highways and noise barriers to address the meteorological effects of bending of sound rays in the lower atmosphere.[14]
|
en/4957.html.txt
ADDED
@@ -0,0 +1,144 @@
1 |
+
|
2 |
+
|
3 |
+
A refrigerator (colloquially fridge) consists of a thermally insulated compartment and a heat pump (mechanical, electronic or chemical) that transfers heat from its inside to its external environment so that its inside is cooled to a temperature below the room temperature. Refrigeration is an essential food storage technique in developed countries. The lower temperature lowers the reproduction rate of bacteria, so the refrigerator reduces the rate of spoilage. A refrigerator maintains a temperature a few degrees above the freezing point of water. Optimum temperature range for perishable food storage is 3 to 5 °C (37 to 41 °F).[1] A similar device that maintains a temperature below the freezing point of water is called a freezer. The refrigerator replaced the icebox, which had been a common household appliance for almost a century and a half.
|
4 |
+
|
5 |
+
The first cooling systems for food involved ice. Artificial refrigeration began in the mid-1750s, and developed in the early 1800s. In 1834, the first working vapor-compression refrigeration system was built. The first commercial ice-making machine was invented in 1854. In 1913, refrigerators for home use were invented. In 1923 Frigidaire introduced the first self-contained unit. The introduction of Freon in the 1920s expanded the refrigerator market during the 1930s. Home freezers as separate compartments (larger than necessary just for ice cubes) were introduced in 1940. Frozen foods, previously a luxury item, became commonplace.
|
6 |
+
|
7 |
+
Freezer units are used in households as well as in industry and commerce. Commercial refrigerator and freezer units were in use for almost 40 years prior to the common home models. The freezer-over-refrigerator style had been the basic style since the 1940s, until modern, side-by-side refrigerators broke the trend. A vapor compression cycle is used in most household refrigerators, refrigerator–freezers and freezers. Newer refrigerators may include automatic defrosting, chilled water, and ice from a dispenser in the door.
|
8 |
+
|
9 |
+
Domestic refrigerators and freezers for food storage are made in a range of sizes. Among the smallest are Peltier-type refrigerators designed to chill beverages. A large domestic refrigerator stands as tall as a person and may be about 1 m wide with a capacity of 600 L. Refrigerators and freezers may be free-standing, or built into a kitchen. The refrigerator allows the modern household to keep food fresh for longer than before. Freezers allow people to buy food in bulk and eat it at leisure, and bulk purchases save money.
|
10 |
+
|
11 |
+
Before the invention of the refrigerator, icehouses were used to provide cool storage for most of the year. Placed near freshwater lakes or packed with snow and ice during the winter, they were once very common. Natural means are still used to cool foods today. On mountainsides, runoff from melting snow is a convenient way to cool drinks, and during the winter one can keep milk fresh much longer just by keeping it outdoors. The word "refrigeratory" was used at least as early as the 17th century.[2]
|
12 |
+
|
13 |
+
The history of artificial refrigeration began when Scottish professor William Cullen designed a small refrigerating machine in 1755. Cullen used a pump to create a partial vacuum over a container of diethyl ether, which then boiled, absorbing heat from the surrounding air.[3] The experiment even created a small amount of ice, but had no practical application at that time.
|
14 |
+
|
15 |
+
In 1805, American inventor Oliver Evans described a closed vapor-compression refrigeration cycle for the production of ice by ether under vacuum. In 1820, the British scientist Michael Faraday liquefied ammonia and other gases by using high pressures and low temperatures, and in 1834, an American expatriate in Great Britain, Jacob Perkins, built the first working vapor-compression refrigeration system. It was a closed-cycle device that could operate continuously.[4] A similar attempt was made in 1842, by American physician, John Gorrie,[5] who built a working prototype, but it was a commercial failure. American engineer Alexander Twining took out a British patent in 1850 for a vapor compression system that used ether.
|
16 |
+
|
17 |
+
The first practical vapor compression refrigeration system was built by James Harrison, a Scottish Australian. His 1856 patent was for a vapor compression system using ether, alcohol or ammonia. He built a mechanical ice-making machine in 1851 on the banks of the Barwon River at Rocky Point in Geelong, Victoria, and his first commercial ice-making machine followed in 1854. Harrison also introduced commercial vapor-compression refrigeration to breweries and meat packing houses, and by 1861, a dozen of his systems were in operation.
|
18 |
+
|
19 |
+
The first gas absorption refrigeration system using gaseous ammonia dissolved in water (referred to as "aqua ammonia") was developed by Ferdinand Carré of France in 1859 and patented in 1860. Carl von Linde, an engineering professor at the Technological University Munich in Germany, patented an improved method of liquefying gases in 1876. His new process made possible the use of gases such as ammonia (NH3), sulfur dioxide (SO2) and methyl chloride (CH3Cl) as refrigerants and they were widely used for that purpose until the late 1920s.[6]
|
20 |
+
|
21 |
+
Commercial refrigerator and freezer units, which go by many other names, were in use for almost 40 years prior to the common home models. They used gas systems such as ammonia (R-717) or sulfur dioxide (R-764), which occasionally leaked, making them unsafe for home use. Practical household refrigerators were introduced in 1915 and gained wider acceptance in the United States in the 1930s as prices fell and non-toxic, non-flammable synthetic refrigerants such as Freon-12 (R-12) were introduced. However, R-12 damaged the ozone layer, causing governments to issue a ban on its use in new refrigerators and air-conditioning systems in 1994. The less harmful replacement for R-12, R-134a (tetrafluoroethane), has been in common use since 1990, but R-12 is still found in many old systems today.
A common commercial refrigerator is the glass-fronted beverage cooler. These types of appliances are typically designed for specific re-load conditions, meaning that they generally have a larger cooling system. This ensures that they are able to cope with a large throughput of drinks and frequent door opening. As a result, it is common for these types of commercial refrigerators to have an energy consumption of over 4 kWh per day.[7]
In 1913, refrigerators for home and domestic use were invented by Fred W. Wolf of Fort Wayne, Indiana, with models consisting of a unit that was mounted on top of an ice box.[8][9] In 1914, engineer Nathaniel B. Wales of Detroit, Michigan, introduced an idea for a practical electric refrigeration unit, which later became the basis for the Kelvinator. A self-contained refrigerator, with a compressor on the bottom of the cabinet was invented by Alfred Mellowes in 1916. Mellowes produced this refrigerator commercially but was bought out by William C. Durant in 1918, who started the Frigidaire company to mass-produce refrigerators. In 1918, Kelvinator company introduced the first refrigerator with any type of automatic control. The absorption refrigerator was invented by Baltzar von Platen and Carl Munters from Sweden in 1922, while they were still students at the Royal Institute of Technology in Stockholm. It became a worldwide success and was commercialized by Electrolux. Other pioneers included Charles Tellier, David Boyle, and Raoul Pictet. Carl von Linde was the first to patent and make a practical and compact refrigerator.
These home units usually required the installation of the mechanical parts, motor and compressor, in the basement or an adjacent room while the cold box was located in the kitchen. There was a 1922 model that consisted of a wooden cold box, water-cooled compressor, an ice cube tray and a 9-cubic-foot (0.25 m3) compartment, and cost $714. (A 1922 Model-T Ford cost about $450.) By 1923, Kelvinator held 80 percent of the market for electric refrigerators. Also in 1923 Frigidaire introduced the first self-contained unit. About this same time porcelain-covered metal cabinets began to appear. Ice cube trays were introduced more and more during the 1920s; up to this time freezing was not an auxiliary function of the modern refrigerator.
The first refrigerator to see widespread use was the General Electric "Monitor-Top" refrigerator introduced in 1927, so-called, by the public, because of its resemblance to the gun turret on the ironclad warship USS Monitor of the 1860s.[11] The compressor assembly, which emitted a great deal of heat, was placed above the cabinet, and enclosed by a decorative ring. Over a million units were produced. As the refrigerating medium, these refrigerators used either sulfur dioxide, which is corrosive to the eyes and may cause loss of vision, painful skin burns and lesions, or methyl formate, which is highly flammable, harmful to the eyes, and toxic if inhaled or ingested.[12]
The introduction of Freon in the 1920s expanded the refrigerator market during the 1930s and provided a safer, low-toxicity alternative to previously used refrigerants. Separate freezers became common during the 1940s; the popular term at the time for the unit was a deep freeze. These devices, or appliances, did not go into mass production for use in the home until after World War II.[13] The 1950s and 1960s saw technical advances like automatic defrosting and automatic ice making. More efficient refrigerators were developed in the 1970s and 1980s, even though environmental issues led to the banning of very effective (Freon) refrigerants. Early refrigerator models (from 1916) had a cold compartment for ice cube trays. From the late 1920s fresh vegetables were successfully processed through freezing by the Postum Company (the forerunner of General Foods), which had acquired the technology when it bought the rights to Clarence Birdseye's successful fresh freezing methods.
In the early 1950s most refrigerators were white, but from the mid-1950s through the present day, designers and manufacturers have put color onto refrigerators. In the late 1950s and early 1960s, pastel colors like turquoise and pink became popular, and brushed chrome-plating (similar to a stainless finish) was available on some models. In the late 1960s and throughout the 1970s, earth-tone colors were popular, including Harvest Gold, Avocado Green and almond. In the 1980s, black became fashionable. In the late 1990s stainless steel came into vogue, and in 2009, one manufacturer introduced multi-color designs. Since 1961 the Color Marketing Group has attempted to coordinate the colors of appliances and other consumer goods.
Freezer units are used in households and in industry and commerce. Food stored at or below −18 °C (0 °F) is safe indefinitely.[14] Most household freezers maintain temperatures from −23 to −18 °C (−9 to 0 °F), although some freezer-only units can achieve −34 °C (−29 °F) and lower. Refrigerator freezers generally do not achieve lower than −23 °C (−9 °F), since the same coolant loop serves both compartments: Lowering the freezer compartment temperature excessively causes difficulties in maintaining above-freezing temperature in the refrigerator compartment. Domestic freezers can be included as a separate compartment in a refrigerator, or can be a separate appliance. Domestic freezers may be either upright units resembling a refrigerator, or chests (with the lid or door on top, sacrificing convenience for efficiency and partial immunity to power outages).[15] Many modern upright freezers come with an ice dispenser built into their door. Some upscale models include thermostat displays and controls, and sometimes flat screen televisions as well.
Home freezers as separate compartments (larger than necessary just for ice cubes), or as separate units, were introduced in the United States in 1940. Frozen foods, previously a luxury item, became commonplace.
The following table shows worldwide production of household refrigerator units as of 2005.[16]
A vapor compression cycle is used in most household refrigerators, refrigerator–freezers and freezers. In this cycle, a circulating refrigerant such as R134a enters a compressor as low-pressure vapor at or slightly below the temperature of the refrigerator interior. The vapor is compressed and exits the compressor as high-pressure superheated vapor. The superheated vapor travels under pressure through coils or tubes that make up the condenser; the coils or tubes are passively cooled by exposure to air in the room. The condenser cools the vapor, which liquefies. As the refrigerant leaves the condenser, it is still under pressure but is now only slightly above room temperature. This liquid refrigerant is forced through a metering or throttling device, also known as an expansion valve (essentially a pin-hole sized constriction in the tubing) to an area of much lower pressure. The sudden decrease in pressure results in explosive-like flash evaporation of a portion (typically about half) of the liquid. The latent heat absorbed by this flash evaporation is drawn mostly from adjacent still-liquid refrigerant, a phenomenon known as auto-refrigeration. This cold and partially vaporized refrigerant continues through the coils or tubes of the evaporator unit. A fan blows air from the compartment ("box air") across these coils or tubes and the refrigerant completely vaporizes, drawing further latent heat from the box air. This cooled air is returned to the refrigerator or freezer compartment, and so keeps the box air cold. Note that the cool air in the refrigerator or freezer is still warmer than the refrigerant in the evaporator. Refrigerant leaves the evaporator, now fully vaporized and slightly heated, and returns to the compressor inlet to continue the cycle.
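As a rough illustration only, the following Python sketch simply walks a refrigerant such as R134a through the four stages named in the paragraph above; the component list and state descriptions paraphrase the text and are not a thermodynamic model.

    REFRIGERANT = "R134a"  # example refrigerant named above

    # Each stage of the closed loop, paraphrased from the description above.
    CYCLE_STAGES = [
        ("compressor",      "low-pressure vapor in, high-pressure superheated vapor out"),
        ("condenser",       "superheated vapor cooled by room air and condensed to a liquid"),
        ("expansion valve", "pressure drops sharply; part of the liquid flash-evaporates (auto-refrigeration)"),
        ("evaporator",      "cold liquid/vapor mix absorbs heat from the box air and fully vaporizes"),
    ]

    def trace_cycle() -> None:
        """Print one pass of the refrigerant around the closed loop."""
        for component, state_change in CYCLE_STAGES:
            print(f"{REFRIGERANT} at {component}: {state_change}")

    trace_cycle()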
Modern domestic refrigerators are extremely reliable because motor and compressor are integrated within a welded container, "sealed unit", with greatly reduced likelihood of leakage or contamination. By comparison, externally-coupled refrigeration compressors, such as those in automobile air conditioning, inevitably leak fluid and lubricant past the shaft seals. This leads to a requirement for periodic recharging and, if ignored, possible compressor failure.
Refrigerators with two compartments need special design to control the cooling of refrigerator or freezer compartments. Typically, the compressors and condenser coils are mounted at the top of the cabinet, with a single fan to cool them both. This arrangement has a few downsides: each compartment cannot be controlled independently and the more humid refrigerator air is mixed with the dry freezer air.[17]
A few manufacturers offer dual compressor models. These models have separate freezer and refrigerator compartments that operate independently of each other, sometimes mounted within a single cabinet. Each has its own separate compressor, condenser and evaporator coils, insulation, thermostat, and door.
A hybrid between the two designs is using a separate fan for each compartment, the Dual Fan approach. Doing so allows for separate control and airflow on a single compressor system.
An absorption refrigerator works differently from a compressor refrigerator, using a source of heat, such as combustion of liquefied petroleum gas, solar thermal energy or an electric heating element. These heat sources are much quieter than the compressor motor in a typical refrigerator. A fan or pump might be the only mechanical moving parts; reliance on convection is considered impractical.
Other uses of an absorption refrigerator (or "chiller") include large systems used in office buildings or complexes such as hospitals and universities. These large systems are used to chill a brine solution that is circulated through the building.
The Peltier effect uses electricity to pump heat directly; refrigerators employing this system are sometimes used for camping, or in situations where noise is not acceptable. They can be totally silent (if a fan for air circulation is not fitted) but are less energy-efficient than other methods.
"Ultra-cold" or "ultra-low temperature (ULT)" (typically −80 C) freezers, as used for storing biological samples, also generally employ two stages of cooling, but in cascade. The lower temperature stage uses methane, or a similar gas, as a refrigerant, with its condenser kept at around −40 C by a second stage which uses a more conventional refrigerant. Well known brands include Forma and Revco (both now Thermo Scientific) and Thermoline. For much lower temperatures (around −196 C), laboratories usually purchase liquid nitrogen, kept in a Dewar flask, into which the samples are suspended.
Alternatives to the vapor-compression cycle not in current mass production include:
Many modern refrigerator/freezers have the freezer on top and the refrigerator on the bottom. Most refrigerator-freezers—except for manual defrost models or cheaper units—use what appears to be two thermostats. Only the refrigerator compartment is properly temperature controlled. When the refrigerator gets too warm, the thermostat starts the cooling process and a fan circulates the air around the freezer. During this time, the refrigerator also gets colder. The freezer control knob only controls the amount of air that flows into the refrigerator via a damper system.[19] Changing the refrigerator temperature will inadvertently change the freezer temperature in the opposite direction. [citation needed] Changing the freezer temperature will have no effect on the refrigerator temperature. The freezer control may also be adjusted to compensate for any refrigerator adjustment.
This means the refrigerator may become too warm. However, because only enough air is diverted to the refrigerator compartment, the freezer usually re-acquires the set temperature quickly, unless the door is opened. When a door is opened, either in the refrigerator or the freezer, the fan in some units stops immediately to prevent excessive frost build up on the freezer's evaporator coil, because this coil is cooling two areas. When the freezer reaches temperature, the unit cycles off, no matter what the refrigerator temperature is. Modern computerized refrigerators do not use the damper system. The computer manages fan speed for both compartments, although air is still blown from the freezer.
Newer refrigerators may include:
These older freezer compartments were the main cooling body of the refrigerator, and only maintained a temperature of around −6 °C (21 °F), which is suitable for keeping food for a week.
Later advances included automatic ice units and self compartmentalized freezing units.
Domestic refrigerators and freezers for food storage are made in a range of sizes. Among the smallest is a 4 L Peltier refrigerator advertised as being able to hold 6 cans of beer. A large domestic refrigerator stands as tall as a person and may be about 1 m wide with a capacity of 600 L. Some models for small households fit under kitchen work surfaces, usually about 86 cm high. Refrigerators may be combined with freezers, either stacked with refrigerator or freezer above, below, or side by side. A refrigerator without a frozen food storage compartment may have a small section just to make ice cubes. Freezers may have drawers to store food in, or they may have no divisions (chest freezers).
Refrigerators and freezers may be free-standing, or built into a kitchen.
Three distinct classes of refrigerator are common:
Other specialized cooling mechanisms may be used for cooling, but have not been applied to domestic or commercial refrigerators.
In a house without air-conditioning (space heating and/or cooling) refrigerators consumed more energy than any other home device.[24] In the early 1990s a competition was held among the major manufacturers to encourage energy efficiency.[25] Current US models that are Energy Star qualified use 50% less energy than the average models made in 1974.[26] The most energy-efficient unit made in the US consumes about half a kilowatt-hour per day (equivalent to 20 W continuously).[27] But even ordinary units are quite efficient; some smaller units use less than 0.2 kWh per day (equivalent to 8 W continuously).
Larger units, especially those with large freezers and icemakers, may use as much as 4 kW·h per day (equivalent to 170 W continuously).
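The "equivalent continuous" wattages quoted in the last two paragraphs follow from dividing daily consumption by 24 hours; a minimal Python check using the figures given above (the results round to roughly the quoted values):

    def average_watts(kwh_per_day: float) -> float:
        """Convert daily energy use into an equivalent continuous power draw."""
        return kwh_per_day * 1000 / 24  # 1 kWh/day = 1000 Wh spread over 24 h

    for kwh in (0.5, 0.2, 4.0):  # figures quoted above
        print(f"{kwh} kWh/day is about {average_watts(kwh):.0f} W continuous")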
The European Union uses a letter-based mandatory energy efficiency rating label instead of the Energy Star; thus EU refrigerators at the point of sale are labelled according to how energy-efficient they are.
For US refrigerators, the Consortium on Energy Efficiency (CEE) further differentiates between Energy Star qualified refrigerators. Tier 1 refrigerators are those that are 20% to 24.9% more efficient than the Federal minimum standards set by the National Appliance Energy Conservation Act (NAECA). Tier 2 are those that are 25% to 29.9% more efficient. Tier 3 is the highest qualification, for those refrigerators that are at least 30% more efficient than Federal standards.[28] About 82% of the Energy Star qualified refrigerators are Tier 1, with 13% qualifying as Tier 2, and just 5% at Tier 3.[citation needed]
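A minimal sketch of those tier thresholds, written as a function of how much more efficient a model is than the NAECA federal minimum; the function name and the fall-through label are illustrative only.

    def cee_tier(percent_above_naeca: float) -> str:
        """Map efficiency relative to the NAECA minimum onto the CEE tiers quoted above."""
        if percent_above_naeca >= 30.0:
            return "Tier 3"
        if percent_above_naeca >= 25.0:
            return "Tier 2"
        if percent_above_naeca >= 20.0:
            return "Tier 1"
        return "below the CEE tier thresholds"

    print(cee_tier(27.5))  # -> Tier 2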
Besides the standard style of compressor refrigeration used in normal household refrigerators and freezers, there are technologies such as absorption refrigeration and magnetic refrigeration. Although these designs generally use a much larger amount of energy compared to compressor refrigeration, other qualities such as silent operation or the ability to use gas can favor these refrigeration units in small enclosures, a mobile environment or in environments where unit failure would lead to devastating consequences.
Many refrigerators made in the 1930s and 1940s were far more efficient than most that were made later. This is partly attributable to the addition of new features, such as auto-defrost, that reduced efficiency. Additionally, after World War II, refrigerator style became more important than efficiency. This was especially true in the US in the 1970s, when side-by-side models (known as American fridge-freezers outside of the US) with ice dispensers and water chillers became popular. However, the reduction in efficiency also arose partly from a reduction in the amount of insulation to cut costs.
Because of the introduction of new energy efficiency standards, refrigerators made today are much more efficient than those made in the 1930s; they consume the same amount of energy while being three times as large.[29][30]
The efficiency of older refrigerators can be improved by defrosting (if the unit is manual defrost) and cleaning them regularly, replacing old and worn door seals with new ones, adjusting the thermostat to accommodate the actual contents (a refrigerator needn't be colder than 4 °C (39 °F) to store drinks and non-perishable items) and also replacing insulation, where applicable. Some sites recommend cleaning condenser coils every month or so on units with coils on the rear, to extend the life of the coils and avoid an unnoticed deterioration in efficiency over an extended period. The unit should also be able to ventilate, or "breathe", with adequate space around the front, back, sides and above it. If the refrigerator uses a fan to keep the condenser cool, then this must be cleaned or serviced per the individual manufacturer's recommendations.
Frost-free refrigerators or freezers use electric fans to cool the appropriate compartment.[31] This could be called a "fan forced" refrigerator, whereas manual defrost units rely on colder air lying at the bottom, versus the warm air at the top, to achieve adequate cooling. The air is drawn in through an inlet duct and passed through the evaporator, where it is cooled; the air is then circulated throughout the cabinet via a series of ducts and vents. Because the air passing the evaporator is warm and moist, frost begins to form on the evaporator (especially on a freezer's evaporator). In cheaper and/or older models, a defrost cycle is controlled via a mechanical timer. This timer is set to shut off the compressor and fan and energize a heating element located near or around the evaporator for about 15 to 30 minutes every 6 to 12 hours. This melts any frost or ice build-up and allows the refrigerator to work normally once more. It is believed that frost-free units have a lower tolerance for frost, due to their air-conditioner-like evaporator coils. Therefore, if a door is left open accidentally (especially the freezer), the defrost system may not remove all frost; in this case, the freezer (or refrigerator) must be defrosted.[citation needed]
If the defrosting system melts all the ice before the timed defrosting period ends, then a small device (called a defrost limiter) acts like a thermostat and shuts off the heating element to prevent too large a temperature fluctuation; it also prevents hot blasts of air when the system starts again, should it finish defrosting early. On some early frost-free models, the defrost limiter also sends a signal to the defrost timer to start the compressor and fan as soon as it shuts off the heating element, before the timed defrost cycle ends. When the defrost cycle is completed, the compressor and fan are allowed to cycle back on.[citation needed]
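A simplified sketch of the timer-plus-limiter behaviour described in the last two paragraphs; the 8-hour interval and 20-minute maximum are picked from the ranges given above, and the names and signal set are illustrative rather than any manufacturer's actual logic.

    DEFROST_INTERVAL_H = 8   # text: roughly every 6 to 12 hours
    MAX_DEFROST_MIN = 20     # text: about 15 to 30 minutes

    def defrost_outputs(hours_since_defrost: float,
                        minutes_into_defrost: float,
                        frost_cleared: bool) -> dict:
        """Return on/off states for the compressor, evaporator fan and defrost heater."""
        defrosting = (hours_since_defrost >= DEFROST_INTERVAL_H
                      and minutes_into_defrost < MAX_DEFROST_MIN
                      and not frost_cleared)  # the defrost limiter ends the cycle early
        return {"compressor": not defrosting,
                "evaporator_fan": not defrosting,
                "defrost_heater": defrosting}

    print(defrost_outputs(8.5, 5.0, frost_cleared=False))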
Frost-free refrigerators, including some early frost free refrigerator/freezers that used a cold plate in their refrigerator section instead of airflow from the freezer section, generally don't shut off their refrigerator fans during defrosting. This allows consumers to leave food in the main refrigerator compartment uncovered, and also helps keep vegetables moist. This method also helps reduce energy consumption, because the refrigerator is above freeze point and can pass the warmer-than-freezing air through the evaporator or cold plate to aid the defrosting cycle.
With the advent of digital inverter compressors, energy consumption is reduced even further compared with a single-speed induction motor compressor, and such units therefore contribute far less in the way of greenhouse gases.[32]
The energy consumption of a refrigerator also depends on the type of refrigeration being done. For instance, inverter refrigerators consume comparatively less energy than typical non-inverter refrigerators because the compressor runs only as much as is required. An inverter refrigerator might therefore use less energy during the winter than during the summer, because the compressor works for a shorter time.[33]
Further, newer models of inverter-compressor refrigerators take into account various external and internal conditions to adjust the compressor speed and thus optimize cooling and energy consumption. Most of them use at least four sensors, which help detect variance in external temperature, internal temperature (owing to the refrigerator door being opened or new food being placed inside), humidity and usage patterns. Depending on the sensor inputs, the compressor adjusts its speed. For example, if the door is opened or new food is placed inside, the sensor detects an increase in temperature inside the cabin and signals the compressor to increase its speed until a pre-determined temperature is attained, after which the compressor runs at a minimum speed to just maintain the internal temperature. The compressor typically runs between 1200 and 4500 RPM.
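A very simplified sketch of that sensor-driven speed adjustment: the compressor speeds up in proportion to how far the cabinet has drifted above its set point and otherwise idles at minimum speed rather than switching off. The 1200 to 4500 RPM range comes from the text; the proportional gain is an arbitrary illustrative value.

    MIN_RPM, MAX_RPM = 1200, 4500  # typical operating range quoted above

    def compressor_rpm(cabinet_temp_c: float, set_point_c: float,
                       gain_rpm_per_deg: float = 800.0) -> float:
        """Ramp compressor speed with the temperature error; never switch off."""
        error = cabinet_temp_c - set_point_c
        if error <= 0:
            return MIN_RPM  # hold the set point at minimum speed
        return min(MAX_RPM, MIN_RPM + gain_rpm_per_deg * error)

    print(compressor_rpm(7.0, 4.0))  # door just opened: speed ramps up
    print(compressor_rpm(4.0, 4.0))  # at the set point: idles at 1200 RPM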
Inverter compressors not only optimize cooling but are also superior in terms of durability and energy efficiency.[34]
A device consumes the most energy and undergoes the most wear and tear when it switches itself on. As an inverter compressor never switches itself off and instead runs at varying speeds, it minimizes wear and tear and energy usage.
LG and Kenmore played a significant role in improving inverter compressors as we know them by reducing the friction points in the compressor, thus introducing linear inverter compressors. Conventionally, domestic refrigerators use a reciprocating drive connected to the piston, but in a linear inverter compressor the piston, which is a permanent magnet, is suspended between two electromagnets. The AC changes the magnetic poles of the electromagnets, which results in the push and pull that compresses the refrigerant. LG claims that this helps reduce energy consumption by 32% and noise by 25% compared to their conventional compressors.
The physical design of refrigerators also plays a large part in their energy efficiency. The most efficient is the chest-style freezer, as its top-opening design minimizes convection when the door is opened, reducing the amount of warm, moist air entering the freezer. On the other hand, in-door ice dispensers cause more heat leakage, contributing to an increase in energy consumption.[35]
The refrigerator allows the modern family to keep food fresh for longer than before. The most notable improvement is for meat and other highly perishable wares, which needed to be refined to gain anything resembling shelf life.[citation needed] (On the other hand, refrigerators and freezers can also be stocked with processed, quick-cook foods that are less healthy.) Refrigeration in transit makes it possible to enjoy food from distant places.
Dairy products, meats, fish, poultry and vegetables can be kept refrigerated in the same space within the kitchen (although raw meat should be kept separate from other food for reasons of hygiene).
Freezers allow people to buy food in bulk and eat it at leisure, and bulk purchases save money. Ice cream, a popular commodity of the 20th century, could previously only be obtained by traveling to where the product was made and eating it on the spot. Now it is a common food item. Ice on demand not only adds to the enjoyment of cold drinks, but is useful for first-aid, and for cold packs that can be kept frozen for picnics or in case of emergency.
The capacity of a refrigerator is measured in either liters or cubic feet. Typically the volume of a combined refrigerator-freezer is split, with one-third to one-quarter of the volume allocated to the freezer, although these values are highly variable.
Temperature settings for refrigerator and freezer compartments are often given arbitrary numbers by manufacturers (for example, 1 through 9, warmest to coldest), but generally 3 to 5 °C (37 to 41 °F)[1] is ideal for the refrigerator compartment and −18 °C (0 °F) for the freezer. Some refrigerators must be within certain external temperature parameters to run properly. This can be an issue when placing units in an unfinished area, such as a garage.
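Because the dial numbers are arbitrary, a thermometer reading is the only reliable check; a small sketch using the recommended values given above (3 to 5 °C for the refrigerator compartment, −18 °C or colder for the freezer), with illustrative names:

    RECOMMENDED_C = {
        "refrigerator": (3.0, 5.0),         # ideal range quoted above
        "freezer": (float("-inf"), -18.0),  # -18 °C or colder
    }

    def within_recommended(compartment: str, measured_c: float) -> bool:
        """Check a measured compartment temperature against the ranges above."""
        low, high = RECOMMENDED_C[compartment]
        return low <= measured_c <= high

    print(within_recommended("refrigerator", 4.0))  # True
    print(within_recommended("freezer", -15.0))     # False: too warm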
Some refrigerators are now divided into four zones to store different types of food:
European freezers, and refrigerators with a freezer compartment, have a four star rating system to grade freezers.[citation needed]
Although both the three and four star ratings specify the same storage times and same minimum temperature of −18 °C (0 °F), only a four star freezer is intended for freezing fresh food, and may include a "fast freeze" function (runs the compressor continually, down to as low as −26 °C (−15 °F)) to facilitate this. Three (or fewer) stars are used for frozen food compartments that are only suitable for storing frozen food; introducing fresh food into such a compartment is likely to result in unacceptable temperature rises. This difference in categorization is shown in the design of the 4-star logo, where the "standard" three stars are displayed in a box using "positive" colours, denoting the same normal operation as a 3-star freezer, and the fourth star showing the additional fresh food/fast freeze function is prefixed to the box in "negative" colours or with other distinct formatting.[citation needed]
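Reduced to code, the distinction drawn above is about what a compartment may be used for rather than its storage temperature; a minimal sketch with an illustrative function name:

    def compartment_use(stars: int) -> str:
        """Summarize the star-rating distinction described above."""
        if stars >= 4:
            return "storing frozen food at -18 C and freezing fresh food (may offer fast freeze)"
        if stars >= 1:
            return "storing already-frozen food only"
        return "not rated for frozen food"

    print(compartment_use(4))
    print(compartment_use(3))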
Most European refrigerators include a moist cold refrigerator section (which does require (automatic) defrosting at irregular intervals) and a (rarely frost free) freezer section.
(from warmest to coolest)[36]
An increasingly important environmental concern is the disposal of old refrigerators, initially because freon coolant damages the ozone layer, and also because, as older-generation refrigerators wear out, the destruction of their CFC-bearing insulation causes concern. Modern refrigerators usually use a refrigerant called HFC-134a (1,1,1,2-tetrafluoroethane), which does not deplete the ozone layer, instead of Freon. R-134a is nevertheless now becoming very uncommon in Europe, where newer refrigerants are used instead. The main refrigerant now used is R-600a (isobutane), which has a smaller effect on the atmosphere if released. There have been reports of refrigerators exploding when leaked isobutane meets a spark. If the coolant leaks into the fridge at times when the door is not being opened (such as overnight), the concentration of coolant in the air within the fridge can build up to form an explosive mixture that can be ignited either by a spark from the thermostat or when the light comes on as the door is opened, resulting in documented cases of serious property damage and injury or even death from the resulting explosion.[43]
Disposal of discarded refrigerators is regulated, often mandating the removal of doors for safety reasons. Children playing hide-and-seek have been asphyxiated while hiding inside discarded refrigerators, particularly older models with latching doors, in a phenomenon called refrigerator death. Since 2 August 1956, under U.S. federal law, refrigerator doors are no longer permitted to latch so they cannot be opened from the inside.[44] Modern units use a magnetic door gasket that holds the door sealed but allows it to be pushed open from the inside.[45] This gasket was invented, developed and manufactured by Max Baermann (1903–1984) of Bergisch Gladbach/Germany.[46][47]
Regarding total life-cycle costs, many governments offer incentives to encourage recycling of old refrigerators. One example is the Phoenix refrigerator program launched in Australia. This government incentive picked up old refrigerators, paying their owners for "donating" the refrigerator. The refrigerator was then refurbished, with new door seals, a thorough cleaning and the removal of items, such as the cover that is strapped to the back of many older units. The resulting refrigerators, now over 10% more efficient, were then distributed to low income families.[citation needed]
McCray pre-electric home refrigerator ad from 1905; this company, founded in 1887, is still in business
A 1930s era General Electric "Globe Top" refrigerator in the Ernest Hemingway House
General Electric "Monitor-Top" refrigerator, still in use, June 2007
Frigidaire Imperial "Frost Proof" model FPI-16BC-63, top refrigerator/bottom freezer with brushed chrome door finish made by General Motors Canada in 1963
A side-by-side refrigerator-freezer with an icemaker (2011)
en/4958.html.txt
ADDED
@@ -0,0 +1,139 @@
Reggae (/ˈrɛɡeɪ/) is a music genre that originated in Jamaica in the late 1960s. The term also denotes the modern popular music of Jamaica and its diaspora.[1] A 1968 single by Toots and the Maytals, "Do the Reggay" was the first popular song to use the word "reggae", effectively naming the genre and introducing it to a global audience.[2][3] While sometimes used in a broad sense to refer to most types of popular Jamaican dance music, the term reggae more properly denotes a particular music style that was strongly influenced by traditional mento as well as American jazz and rhythm and blues, especially the New Orleans R&B practiced by Fats Domino and Allen Toussaint, and evolved out of the earlier genres ska and rocksteady.[4] Reggae usually relates news, social gossip, and political commentary. Reggae spread into a commercialized jazz field, being known first as "rudie blues", then "ska", later "blue beat", and "rock steady".[5] It is instantly recognizable from the counterpoint between the bass and drum downbeat and the offbeat rhythm section. The immediate origins of reggae were in ska and rocksteady; from the latter, reggae took over the use of the bass as a percussion instrument.[6]
Reggae is deeply linked to Rastafari, an Afrocentric religion which developed in Jamaica in the 1930s, aiming at promoting Pan Africanism.[7][8][9] Soon after the Rastafarian movement appeared, the international popularity of reggae music became associated with and increased the visibility of Rastafarianism spreading the Rastafari gospel throughout the world.[8] Reggae music is an important means of transporting vital messages of Rastafarianism. The musician becomes the messenger, and as Rastafarians see it, "the soldier and the musician are tools for change."[10]
Stylistically, reggae incorporates some of the musical elements of rhythm and blues, jazz, mento (a celebratory, rural folk form that served its largely rural audience as dance music and an alternative to the hymns and adapted chanteys of local church singing),[11] calypso,[12] and also draws influence from traditional African folk rhythms. One of the most easily recognizable elements is offbeat rhythms; staccato chords played by a guitar or piano (or both) on the offbeats of the measure. The tempo of reggae is usually slower paced than both ska and rocksteady.[13] The concept of call and response can be found throughout reggae music. The genre of reggae music is led by the drum and bass.[14][15] Some key players in this sound are Jackie Jackson from Toots and the Maytals,[16] Carlton Barrett from Bob Marley and the Wailers,[17] Lloyd Brevett from The Skatalites,[18] Paul Douglas from Toots and the Maytals,[19] Lloyd Knibb from The Skatalites,[20] Winston Grennan,[21] Sly Dunbar,[22] and Anthony "Benbow" Creary from The Upsetters.[23] The bass guitar often plays the dominant role in reggae. The bass sound in reggae is thick and heavy, and equalized so the upper frequencies are removed and the lower frequencies emphasized. The guitar in reggae usually plays on the offbeat of the rhythm. It is common for reggae to be sung in Jamaican Patois, Jamaican English, and Iyaric dialects. Reggae is noted for its tradition of social criticism and religion in its lyrics,[24] although many reggae songs discuss lighter, more personal subjects, such as love and socializing.
Reggae has spread to many countries across the world, often incorporating local instruments and fusing with other genres. Reggae en Español spread from the Spanish-speaking Central American country of Panama to the mainland South American countries of Venezuela and Guyana then to the rest of South America. Caribbean music in the United Kingdom, including reggae, has been popular since the late 1960s, and has evolved into several subgenres and fusions. Many reggae artists began their careers in the UK, and there have been a number of European artists and bands drawing their inspiration directly from Jamaica and the Caribbean community in Europe. Reggae in Africa was boosted by the visit of Bob Marley to Zimbabwe in 1980. In Jamaica, authentic reggae is one of the biggest sources of income.[25]
The 1967 edition of the Dictionary of Jamaican English lists reggae as "a recently estab. sp. for rege", as in rege-rege, a word that can mean either "rags, ragged clothing" or "a quarrel, a row".[26] Reggae as a musical term first appeared in print with the 1968 rocksteady hit "Do the Reggay" by The Maytals which named the genre of Reggae for the world.
Reggae historian Steve Barrow credits Clancy Eccles with altering the Jamaican patois word streggae (loose woman) into reggae.[27] However, Toots Hibbert said:
There's a word we used to use in Jamaica called 'streggae'. If a girl is walking and the guys look at her and say 'Man, she's streggae' it means she don't dress well, she look raggedy. The girls would say that about the men too. This one morning me and my two friends were playing and I said, 'OK man, let's do the reggay.' It was just something that came out of my mouth. So we just start singing 'Do the reggay, do the reggay' and created a beat. People tell me later that we had given the sound its name. Before that people had called it blue-beat and all kind of other things. Now it's in the Guinness World of Records.[28]
Bob Marley claimed that the word reggae came from a Spanish term for "the king's music".[29] The liner notes of To the King, a compilation of Christian gospel reggae, suggest that the word reggae was derived from the Latin regi meaning "to the king".[30]
Reggae's direct origins are in the ska and rocksteady of 1960s Jamaica, strongly influenced by traditional Caribbean mento and calypso music, as well as American jazz and rhythm and blues. Ska was originally a generic title for Jamaican music recorded between 1961 and 1967 and emerged from Jamaican R&B, which was based largely on American R&B and doo-wop.[31] Rastafari entered some countries primarily through reggae music; thus, the movement in these places is more stamped by its origins in reggae music and social milieu.[32] The Rastafari movement was a significant influence on reggae, with Rasta drummers like Count Ossie taking part in seminal recordings.[33] One of the predecessors of reggae drumming is the Nyabinghi rhythm, a style of ritual drumming performed as a communal meditative practice in the Rastafarian life.[34]
In the latter half of the 20th century, phonograph records became of central importance to the Jamaican music industry, playing a significant cultural and economic role in the development of reggae music.[35] "In the early 1950s, Jamaican entrepreneurs began issuing 78s"[35] but this format would soon be superseded by the 7" single, first released in 1949.[36] In 1951 the first recordings of mento music were released as singles and showcased two styles of mento: an acoustic rural style, and a jazzy pop style.[37] Other 7" singles to appear in Jamaica around this time were covers of popular American R&B hits, made by Kingston sound system operators to be played at public dances.[35] Meanwhile, Jamaican expatriates started issuing 45s on small independent labels in the United Kingdom, many mastered directly from Jamaican 45s.[35]
Ska arose in Jamaican studios in the late 1950s, developing from this mix of American R&B, mento and calypso music.[27] Notable for its jazz-influenced horn riffs, ska is characterized by a quarter note walking bass line, guitar and piano offbeats, and a drum pattern with cross-stick snare and bass drum on the backbeat and open hi-hat on the offbeats. When Jamaica gained independence in 1962, ska became the music of choice for young Jamaicans seeking music that was their own. Ska also became popular among mods in Britain.
In the mid-1960s, ska gave rise to rocksteady, a genre slower than ska featuring more romantic lyrics and less prominent horns.[38] Theories abound as to why Jamaican musicians slowed the ska tempo to create rocksteady; one is that the singer Hopeton Lewis was unable to sing his hit song "Take It Easy" at a ska tempo.[27] The name "rocksteady" was codified after the release of a single by Alton Ellis. Many rocksteady rhythms later were used as the basis of reggae recordings, whose slower tempos allowed for the "double skank" guitar strokes on the offbeat.
Reggae developed from ska and rocksteady in the late 1960s. Larry And Alvin's "Nanny Goat" and the Beltones’ "No More Heartaches" were among the songs in the genre. The beat was distinctive from rocksteady in that it dropped any of the pretensions to the smooth, soulful sound that characterized slick American R&B, and instead was closer in kinship to US southern funk, being heavily dependent on the rhythm section to drive it along. Reggae's great advantage was its almost limitless flexibility: from the early, jerky sound of Lee Perry's "People Funny Boy", to the uptown sounds of Third World's "Now That We’ve Found Love", it was an enormous leap through the years and styles, yet both are instantly recognizable as reggae.[39] The shift from rocksteady to reggae was illustrated by the organ shuffle pioneered by Jamaican musicians like Jackie Mittoo and Winston Wright and featured in transitional singles "Say What You're Saying" (1968) by Eric "Monty" Morris and "People Funny Boy" (1968) by Lee "Scratch" Perry.[citation needed]
Early 1968 was when the first bona fide reggae records were released: "Nanny Goat" by Larry Marshall and "No More Heartaches" by The Beltones. That same year, the newest Jamaican sound began to spawn big-name imitators in other countries. American artist Johnny Nash's 1968 hit "Hold Me Tight" has been credited with first putting reggae in the American listener charts. Around the same time, reggae influences were starting to surface in rock and pop music, one example being 1968's "Ob-La-Di, Ob-La-Da" by The Beatles.[40]
The Wailers, a band started by Bob Marley, Peter Tosh and Bunny Wailer in 1963, is perhaps the most recognized band that made the transition through all three stages of early Jamaican popular music: ska, rocksteady and reggae. Over a dozen Wailers songs are based on or use a line from Jamaican mento songs. Other significant ska artists who made the leap to reggae include Prince Buster, Desmond Dekker, Ken Boothe, and Millie Small, best known for her 1964 blue-beat/ska cover version of "My Boy Lollipop" which was a smash hit internationally.[41]
Notable Jamaican producers influential in the development of ska into rocksteady and reggae include: Coxsone Dodd, Lee "Scratch" Perry, Leslie Kong, Duke Reid, Joe Gibbs and King Tubby. Chris Blackwell, who founded Island Records in Jamaica in 1960,[42] relocated to England in 1962, where he continued to promote Jamaican music. He formed a partnership with Lee Gopthal's Trojan Records in 1968, which released reggae in the UK until bought by Saga records in 1974.
Reggae's influence bubbled to the top of the U.S. Billboard Hot 100 charts in late 1972. First Three Dog Night hit No. 1 in September with a cover of the Maytones' version of "Black and White". Then Johnny Nash was at No. 1 for four weeks in November with "I Can See Clearly Now". Paul Simon's single "Mother And Child Reunion" – a track which he recorded in Kingston, Jamaica with Jimmy Cliff's backing group – was ranked by Billboard as the No. 57 song of 1972.
In 1973, the film The Harder They Come starring Jimmy Cliff was released and introduced Jamaican music to cinema audiences outside Jamaica.[43] Though the film achieved cult status its limited appeal meant that it had a smaller impact than Eric Clapton's 1974 cover of Bob Marley's "I Shot the Sheriff" which made it onto the playlists of mainstream rock and pop radio stations worldwide. Clapton's "I Shot The Sheriff" used modern rock production and recording techniques and faithfully retained most of the original reggae elements; it was a breakthrough pastiche devoid of any parody and played an important part in bringing the music of Bob Marley to a wider rock audience.[27] By the mid-1970s, authentic reggae dub plates and specials were getting some exposure in the UK on John Peel's radio show, who promoted the genre for the rest of his career.[44] Around the same time, British filmmaker Jeremy Marre documented the Jamaican music scene in Roots Rock Reggae, capturing the heyday of Roots reggae.[45]
While the quality of reggae records produced in Jamaica took a turn for the worse following the oil crisis of the 1970s, reggae produced elsewhere began to flourish.[46][35] In the late 1970s and early 1980s, the UK punk rock scene flourished, and reggae was a notable influence. The DJ Don Letts would play reggae and punk tracks at clubs such as The Roxy. Punk bands such as The Clash, The Ruts, The Members and The Slits played many reggae-influenced songs. Around the same time, reggae music took a new path in the UK, one that was created by the multiracial makeup of England's inner cities and exemplified by groups like Steel Pulse, Aswad and UB40, as well as artists such as Smiley Culture and Carroll Thompson. The Jamaican ghetto themes in the lyrics were replaced with UK inner-city themes, and Jamaican patois became intermingled with Cockney slang. In South London around this time, a new subgenre, Lovers Rock, was being created. Unlike the Jamaican music of the same name, which was mainly dominated by male artists such as Gregory Isaacs, the South London genre was led by female singers like Thompson and Janet Kay. The UK Lovers Rock had a softer and more commercial sound. Other reggae artists who enjoyed international appeal in the early 1980s include Third World, Black Uhuru and Sugar Minott. The Grammy Awards introduced the Grammy Award for Best Reggae Album category in 1985.
Women also play a role in the reggae music industry; notable industry personnel include Olivia Grange, president of Specs-Shang Musik; Trish Farrell, president of Island/Jamaica; Lisa Cortes, president of Loose Cannon; and Jamaican-American Sharon Gordon, who has worked in the independent reggae music industry.[47]
Jamaican Prime Minister Bruce Golding made February 2008 the first annual Reggae Month in Jamaica. To celebrate, the Recording Industry Association of Jamaica (RIAJam) held its first Reggae Academy Awards on 24 February 2008. In addition, Reggae Month included a six-day Global Reggae conference, a reggae film festival, two radio station award functions, and a concert tribute to the late Dennis Brown, whom Bob Marley cited as his favorite singer. On the business side, RIAJam held events focused on reggae's employment opportunities and potential international revenue.[48] Reggae Month 2019 in Jamaica was welcomed with multiple events, ranging from corporate reggae functions to major celebrations in honour of Bob Marley's birthday on 6 February and a tribute concert in honour of Dennis Brown on 24 February, along with a sold-out concert by 2019 Reggae Grammy-nominated artiste Protoje for his A Matter of Time Live held at Hope Gardens in Kingston on 23 February.
In November 2018 "reggae music of Jamaica" was added to UNESCO's Representative List of the Intangible Cultural Heritage of Humanity; the decision recognised that reggae's "contribution to international discourse on issues of injustice, resistance, love and humanity underscores the dynamics of the element as being at once cerebral, socio-political, sensual and spiritual".[49]
Stylistically, reggae incorporates some of the musical elements of rhythm and blues (R&B), jazz, mento, calypso, African, and Latin American music, as well as other genres. A reggae band typically consists of two guitars, one for rhythm and one for lead, plus drums, congas, and keyboards, with a couple of vocalists.[51]
Reggae is played in 4/4 time because the symmetrical rhythmic pattern does not lend itself to other time signatures such as 3/4. One of the most easily recognizable elements is offbeat rhythms; staccato chords played by a guitar or piano (or both) on the offbeats of the measure, often referred to as the skank.[52]
This rhythmic pattern accents the second and fourth beats in each bar and combines with the drum's emphasis on beat three to create a unique sense of phrasing. The reggae offbeat can be counted so that it falls between each count as an "and" (example: 1 and 2 and 3 and 4 and, etc.) or counted as a half-time feel at twice the tempo so it falls on beats 2 and 4. This is in contrast to the way most other popular genres focus on beat one, the "downbeat".[53]
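One way to visualize that counting is to lay a single 4/4 bar out in eighth notes and mark the skank hits on each "and"; the short Python sketch below is purely illustrative.

    COUNTS = ["1", "and", "2", "and", "3", "and", "4", "and"]  # one 4/4 bar in eighth notes

    def skank_bar() -> str:
        """Mark the guitar/piano skank ('x') on every offbeat and a rest ('.') on each downbeat."""
        return "  ".join(f"{c}:x" if c == "and" else f"{c}:." for c in COUNTS)

    print(skank_bar())
    # 1:.  and:x  2:.  and:x  3:.  and:x  4:.  and:x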
The tempo of reggae is usually slower than both ska and rocksteady.[13] It is this slower tempo, the guitar/piano offbeats, the emphasis on the third beat, and the use of syncopated, melodic bass lines that differentiate reggae from other music, although other musical styles have incorporated some of these innovations.
Harmonically the music is essentially the same as any other modern popular genre with a tendency to make use of simple chord progressions. Reggae sometimes uses the dominant chord in its minor form therefore never allowing a perfect cadence to be sounded; this lack of resolution between the tonic and the dominant imparts a sense of movement "without rest" and harmonic ambiguity. Extended chords like the major seventh chord ("Waiting in Vain" by Bob Marley) and minor seventh chord are used though suspended chords or diminished chords are rare. Minor keys are commonly used especially with the minor chord forms of the subdominant and dominant chord (for example in the key of G minor the progression may be played Gm – Dm – Gm – Dm – Cm – Dm – Cm – Dm). A simple progression borrowed from rhythm and blues and soul music is the tonic chord followed by the minor supertonic chord with the two chords repeated continuously to form a complete verse ("Just My Imagination" by The Temptations C – Dm7).
The concept of "call and response" can be found throughout reggae music, in the vocals but also in the way parts are composed and arranged for each instrument. The emphasis on the "third beat" of the bar also results in a different sense of musical phrasing, with bass lines and melody lines often emphasizing what might be considered "pick up notes" in other genres.
A standard drum kit is generally used in reggae, but the snare drum is often tuned very high to give it a timbales-type sound. Some reggae drummers use an additional timbale or high-tuned snare to get this sound. Cross-stick technique on the snare drum is commonly used, and tom-tom drums are often incorporated into the drumbeat itself.
Reggae drumbeats fall into three main categories: One drop, Rockers, and Steppers. With the One drop, the emphasis is entirely on the backbeat (usually on the snare, or as a rim shot combined with bass drum). Beat one is empty except for a closed high hat commonly used, which is unusual in popular music. There is some controversy about whether reggae should be counted so that this beat falls on two and four, or whether it should be counted twice as fast, so it falls on three. An example played by Barrett can be heard in the Bob Marley and the Wailers song "One Drop". Barrett often used an unusual triplet cross-rhythm on the hi-hat, which can be heard on many recordings by Bob Marley and the Wailers, such as "Running Away" on the Kaya album.
An emphasis on the backbeat is found in all reggae drumbeats, but with the Rockers beat, the emphasis is on all four beats of the bar (usually on bass drum). This beat was pioneered by Sly and Robbie, who later helped create the "Rub-a-Dub" sound that greatly influenced dancehall. Sly has stated he was influenced to create this style by listening to American drummer Earl Young as well as other disco and R&B drummers in the early to mid-1970s, as stated in the book "Wailing Blues". The prototypical example of the style is found in Sly Dunbar's drumming on "Right Time" by the Mighty Diamonds. The Rockers beat is not always straightforward, and various syncopations are often included. An example of this is the Black Uhuru song "Sponji Reggae".
In Steppers, the bass drum plays every quarter beat of the bar, giving the beat an insistent drive. An example is "Exodus" by Bob Marley and the Wailers. Another common name for the Steppers beat is the "four on the floor". Burning Spear's 1975 song "Red, Gold, and Green" (with Leroy Wallace on drums) is one of the earliest examples. The Steppers beat was adopted (at a much higher tempo) by some 2 Tone ska revival bands of the late 1970s and early 1980s.
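A schematic comparison of the three beat families described above, counting one 4/4 bar so that the one-drop hit lands on beat 3; this is only a bare skeleton of the descriptions in the text and ignores the syncopations and hi-hat work mentioned above.

    BEATS = [1, 2, 3, 4]
    KICK_PATTERNS = {
        "one drop": {3},           # beat one left empty; bass drum joins the backbeat
        "rockers":  {1, 2, 3, 4},  # bass-drum emphasis on all four beats
        "steppers": {1, 2, 3, 4},  # "four on the floor": every quarter note driven
    }

    for name, hits in KICK_PATTERNS.items():
        row = "  ".join("K" if beat in hits else "." for beat in BEATS)
        print(f"{name:>8}: {row}")

At this quarter-note resolution the rockers and steppers skeletons coincide; per the text, the difference lies in the syncopation layered around the rockers pattern versus the strict, insistent drive of steppers.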
An unusual characteristic of reggae drumming is that the drum fills often do not end with a climactic cymbal. A wide range of other percussion instrumentation are used in reggae. Bongos are often used to play free, improvised patterns, with heavy use of African-style cross-rhythms. Cowbells, claves and shakers tend to have more defined roles and a set pattern.
Reggae drummers often offer these three tips to other reggae performers: (1) go for open, ringing tones when playing ska and rocksteady, (2) use any available material to stuff the bass drum so that it tightens up the kick to a deep, punchy thud, and (3) go without a ride cymbal, focusing on the hi-hat for timekeeping and thin crashes with fast decay for accents.[54]
The bass guitar often plays the dominant role in reggae, and the drum and bass is often the most important part of what is called, in Jamaican music, a riddim (rhythm), a (usually simple) piece of music that is used repeatedly by different artists to write and record songs with. Hundreds of reggae singers have released different songs recorded over the same rhythm. The central role of the bass can be particularly heard in dub music – which gives an even bigger role to the drum and bass line, reducing the vocals and other instruments to peripheral roles.
The bass sound in reggae is thick and heavy, and equalized so the upper frequencies are removed and the lower frequencies emphasized. The bass line is often a repeated two or four bar riff when simple chord progressions are used. The simplest example of this might be Robbie Shakespeare's bass line for the Black Uhuru hit "Shine Eye Gal". In the case of more complex harmonic structures, such as John Holt's version of "Stranger in Love", these simpler patterns are altered to follow the chord progression either by directly moving the pattern around or by changing some of the interior notes in the phrase to better support the chords.
The guitar in reggae usually plays on the offbeat of the rhythm. So if one is counting in 4/4 time and counting "1 and 2 and 3 and 4 and ...", one would play a downstroke on the "and" part of the beat.[55] A musical figure known as the skank or "bang" has a very dampened, short and scratchy chop sound, almost like a percussion instrument. Sometimes a double chop is used when the guitar still plays the offbeats, but also plays the following eighth-note beats on the up-stroke. An example is the intro to "Stir It Up" by The Wailers. Artist and producer Derrick Harriott says, "What happened was the musical thing was real widespread, but only among a certain sort of people. It was always a down-town thing, but more than just hearing the music. The equipment was so powerful and the vibe so strong that we feel it."[56]
From the earliest days of Ska recordings, a piano was used to double the rhythm guitar's skank, playing the chords in a staccato style to add body, and playing occasional extra beats, runs and riffs. The piano part was widely taken over by synthesizers during the 1980s, although synthesizers have been used in a peripheral role since the 1970s to play incidental melodies and countermelodies. Larger bands may include either an additional keyboardist, to cover or replace horn and melody lines, or the main keyboardist filling these roles on two or more keyboards.
The reggae organ-shuffle is unique to reggae. In the original version of reggae, the drummer played a reggae groove that was used in the four bar introduction, allowing the piano to serve as a percussion instrument.[57] Typically, a Hammond organ-style sound is used to play chords with a choppy feel. This is known as the bubble. This may be the most difficult reggae keyboard rhythm. The organ bubble can be broken down into 2 basic patterns. In the first, the 8th beats are played with a space-left-right-left-space-left-right-left pattern, where the spaces represent downbeats not played—that and the left-right-left falls on the ee-and-a, or and-2-and if counted at double time. In the second basic pattern, the left hand plays a double chop as described in the guitar section while the right hand plays longer notes on beat 2 (or beat 3 if counted at double time) or a syncopated pattern between the double chops. Both these patterns can be expanded on and improvised embellishments are sometimes used.
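One reading of the first bubble pattern described above, laid out as a sixteenth-note grid in which the downbeat is left empty and the left (L) and right (R) hands fall on the "e-and-a" subdivisions; this grid is illustrative only.

    SUBDIVISIONS = ["1", "e", "&", "a", "2", "e", "&", "a"]
    HANDS        = ["-", "L", "R", "L", "-", "L", "R", "L"]  # "-" = downbeat not played

    print("  ".join(f"{s}:{h}" for s, h in zip(SUBDIVISIONS, HANDS)))
    # 1:-  e:L  &:R  a:L  2:-  e:L  &:R  a:L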
Horn sections are frequently used in reggae, often playing introductions and countermelodies. Instruments included in a typical reggae horn section include saxophone, trumpet or trombone. In more recent times, real horns are sometimes replaced in reggae by synthesizers or recorded samples. The horn section is often arranged around the first horn, playing a simple melody or counter melody. The first horn is usually accompanied by the second horn playing the same melodic phrase in unison, one octave higher. The third horn usually plays the melody an octave and a fifth higher than the first horn. The horns are generally played fairly softly, usually resulting in a soothing sound. However, sometimes punchier, louder phrases are played for a more up-tempo and aggressive sound.
The vocals in reggae are less of a defining characteristic of the genre than the instrumentation and rhythm, as almost any song can be performed in a reggae style. However, it is very common for reggae to be sung in Jamaican Patois, Jamaican English, and Iyaric dialects. Vocal harmony parts are often used, either throughout the melody (as with vocal groups such as the Mighty Diamonds), or as a counterpoint to the main vocal line (as with the backing vocalists, the I-Threes). More complex vocal arrangements can be found in the works of groups like The Abyssinians and British reggae band Steel Pulse.
An unusual aspect of reggae singing is that many singers use tremolo (volume oscillation) rather than vibrato (pitch oscillation). Notable exponents of this technique include Horace Andy and vocal group Israel Vibration. The toasting vocal style is unique to reggae, originating when DJs improvised spoken introductions to songs (or "toasts") to the point where it became a distinct rhythmic vocal style, and is generally considered to be a precursor to rap. It differs from rap mainly in that it is generally melodic, while rap is generally more a spoken form without melodic content.
Reggae is noted for its tradition of social criticism in its lyrics, although many reggae songs discuss lighter, more personal subjects, such as love and socializing. Many early reggae bands covered Motown or Atlantic soul and funk songs. Some reggae lyrics attempt to raise the political consciousness of the audience, such as by criticizing materialism, or by informing the listener about controversial subjects such as Apartheid. Many reggae songs promote the use of cannabis (also known as herb, ganja, or sinsemilla), considered a sacrament in the Rastafari movement. There are many artists who utilize religious themes in their music – whether it be discussing a specific religious topic, or simply giving praise to God (Jah). Other common socio-political topics in reggae songs include black nationalism, anti-racism, anti-colonialism,[58] anti-capitalism and criticism of political systems and "Babylon".
In recent years, Jamaican (and non-Jamaican) reggae musicians have used more positive themes in reggae music. The music is widely considered a treasured cultural export for Jamaica, so musicians who still desire progress for their island nation have begun focusing on themes of hopefulness, faith, and love. For elementary children, reggae songs such as "Give a Little Love", "One Love", or "Three Little Birds", all written by Bob Marley, can be sung and enjoyed for their optimism and cheerful lyrics.[59]
Some dancehall and ragga artists have been criticised for homophobia,[60] including threats of violence.[61] Buju Banton's song "Boom Bye-Bye" states that gays "haffi dead". Other notable dancehall artists who have been accused of homophobia include Elephant Man, Bounty Killer and Beenie Man. The controversy surrounding anti-gay lyrics has led to the cancellation of UK tours by Beenie Man and Sizzla. Toronto, Canada has also seen the cancellation of concerts due to artists such as Elephant Man and Sizzla refusing to conform to similar censorship pressures.[62][63]
After lobbying from the Stop Murder Music coalition, the dancehall music industry agreed in 2005 to stop releasing songs that promote hatred and violence against gay people.[64][65] In June 2007, Beenie Man, Sizzla and Capleton signed up to the Reggae Compassionate Act, in a deal brokered with top dancehall promoters and Stop Murder Music activists. They renounced homophobia and agreed to "not make statements or perform songs that incite hatred or violence against anyone from any community". Five artists targeted by the anti-homophobia campaign did not sign up to the act, including Elephant Man, TOK, Bounty Killa and Vybz Kartel.[66] Buju Banton and Beenie Man both gained positive press coverage around the world for publicly renouncing homophobia by signing the Reggae Compassionate Act. However, both of these artists have since denied any involvement in anti-homophobia work and both deny having signed any such act.[67]
Reggae has spread to many countries across the world, often incorporating local instruments and fusing with other genres.[68] In November 2018 UNESCO added the "reggae music of Jamaica" to the Representative List of the Intangible Cultural Heritage of Humanity.[49][69]
Reggae en Español spread from the mainland South American Caribbean, from Venezuela and Guyana, to the rest of South America. It does not have any specific characteristics other than being sung in Spanish, usually by artists of Latin American origin. Samba reggae originated in Brazil as a blend of samba with Jamaican reggae. Reggae also has a presence in Veracruz, Mexico, the most notable Jarocho reggae group being Los Aguas Aguas from Xalapa. Some of the most popular reggae groups across Latin America come from the Southern Cone, such as the Chilean band Gondwana and the Argentinian band Los Cafres. The Puerto Rican band Cultura Profética is also widely recognized in the region. Hispanic reggae includes three elements: the incorporation of the Spanish language; the use of translations and versions based on known riddims and background music; and regional consciousness. It is a medium of rebellious contestation rising from the underground. Hispanic reggae is related to rap, sharing characteristics that can be found not only in the social conditions in which they developed in the region but also in the characteristics of the social sectors and classes that welcome them.[70]
Brazilian samba-reggae utilized themes such as the civil rights movement and the Black Soul movement, and especially the Jamaican independence movement since the 1960s and its messages in reggae and Rastafarianism. Thus, the sudden popularity of reggae music and musicians in Bahia, Brazil, was not the result of the effects of the transnational music industry, but of the need to establish cultural and political links with black communities across the Americas that had faced and were facing similar sociopolitical situations.[71]
Musically, it was the bloco afro Olodum and its lead percussionist, Neguinho do Samba, that began to combine the basic samba beat of the blocos with merengue, salsa, and reggae rhythms and debuted their experimentations in the carnival of 1986. The new toques (drumming patterns) were labeled "samba-reggae" and consisted basically of a pattern in which the surdo bass drums (four of them at the minimum) divided themselves into four or five interlocking parts.
In the state of Maranhão, in northeastern Brazil, reggae is a very popular rhythm. São Luís, the state capital, is known as the Brazilian Jamaica. The city has more than 200 "radiolas", the name given to sound crews formed by DJs and sound systems with dozens of powerful amplifier boxes stacked together. Reggae in Maranhão has its own characteristics, such as its melody and way of dancing, as well as its own radio and television programs. In 2018, the Reggae Museum of Maranhão was inaugurated, the second reggae museum in the world (after Jamaica's), with the objective of preserving the history of reggae culture in the state.[72]
In the United States, bands like Rebelution, Slightly Stoopid, Stick Figure, and SOJA are considered progressive reggae bands sometimes referred to as Cali Reggae or Pacific Dub. The American reggae scene is heavily centred in Southern California, with large scenes also in New York City, Washington, D.C., Chicago, Miami, and Honolulu. For decades, Hawaiian reggae has had a big following on the Hawaiian islands and the West Coast of the US.[73] On the East Coast, upstate New York has seen a rise in original roots reggae bands such as Giant Panda Guerilla Dub Squad and John Brown's Body, who were inspired by Jamaican reggae bands that performed in the area back in the 80s and 90s.[74] Matisyahu gained prominence by blending traditional Jewish themes with reggae.[75] Compounding his use of the hazzan style, Matisyahu's lyrics are mostly English with more than occasional use of Hebrew and Yiddish. There is a large Caribbean presence in Toronto and Montreal, Canada, with English and French influences on the reggae genre. Canadian band Magic!'s 2013 single "Rude" was an international hit.
In 2017, Toots and the Maytals became the second reggae-based group to ever perform at the Coachella festival, after Chronixx in 2016.[76][77][78]
The UK was a primary destination for Caribbean people looking to emigrate as early as the 1950s. Because of this, Caribbean music in the United Kingdom, including reggae, has been popular since the late 1960s, and has evolved into several subgenres and fusions. Most notable of these is lovers rock, but this fusion of Jamaican music into English culture was seminal in the formation of other musical forms like drum and bass and dubstep. The UK became the base from which many Jamaican artists toured Europe and, due to the large number of Jamaican musicians emigrating there, the UK is the root of the larger European scene that exists today. Many of the world's most famous reggae artists began their careers in the UK. Singer and Grammy Award-winning reggae artist Maxi Priest began his career with the seminal British sound system Saxon Studio International.
Three reggae-tinged singles from the Police's 1978 debut album, Outlandos d'Amour, laid down the template for the basic structure of a lot of rock/reggae songwriting: a reggae-infused verse containing upstrokes on guitar or keyboards and a more aggressive, on-the-beat punk/rock attack during the chorus. The end of the 1970s featured a ska revival in the UK, with such bands as the Specials, Madness, the (English) Beat, and the Selecter. The Specials' leader and keyboardist, Jerry Dammers, founded the 2 Tone record label, which released albums from the aforementioned racially integrated groups and was instrumental in creating a new social and cultural awareness. The 2 Tone movement referenced reggae's godfathers, popular styles (including the genre's faster and more dance-oriented precursors, ska and rocksteady), and previous modes of dress (such as black suits and porkpie hats) but updated the sound with a faster tempo, more guitar, and more attitude.[79]
The Birmingham-based reggae/pop band UB40 were major contributors to the British reggae scene throughout the 1980s and 1990s. They achieved international success with hits such as "Red Red Wine", "Kingston Town" and "(I Can't Help) Falling in Love with You".
Other UK-based artists that had international impact include Aswad, Misty in Roots, Steel Pulse, Janet Kay, Tippa Irie, Smiley Culture and, more recently, Bitty McLean. There have been a number of European artists and bands drawing their inspiration directly from Jamaica and the Caribbean community in Europe, whose music and vocal styles are almost identical to contemporary Jamaican music. The best examples might be Alborosie (Italy) and Gentleman (Germany). Both Gentleman and Alborosie have had a significant chart impact in Jamaica, unlike many European artists. They have both recorded and released music in Jamaica for Jamaican labels and producers and are popular artists, likely to appear on many riddims. Alborosie has lived in Jamaica since the late 1990s and has recorded at Bob Marley's famous Tuff Gong Studios. Since the early 1990s, several Italian reggae bands have emerged, including Africa Unite, Gaudi, Reggae National Tickets, Sud Sound System, Pitura Freska and B.R. Stylers. Another famous Italian reggae singer was Rino Gaetano.
Reggae appeared on the Yugoslav popular music scene in the late 1970s, through sporadic songs by popular rock acts.[80] Reggae saw an expansion with the emergence of the Yugoslav new wave scene.[80] Bands like Haustor, Šarlo Akrobata, Aerodrom, Laboratorija Zvuka, Piloti, Du Du A and others recorded reggae and reggae-influenced songs.[80] The mid-1980s saw the appearance of Del Arno Band, often considered the first real reggae band in Yugoslavia. Throughout the following decades they remained one of the most popular and influential reggae bands in the region.[80] In the 1990s and early 2000s, after the breakup of Yugoslavia, a new generation of reggae bands appeared, like the Serbian band Eyesburn, which gained popularity with their combination of reggae with hardcore punk and crossover thrash, and the Croatian band Radikal Dub Kolektiv, alongside bands which incorporated reggae into their sound, like Darkwood Dub, Kanda, Kodža i Nebojša and Lira Vega in Serbia and Dubioza Kolektiv in Bosnia and Herzegovina.[80] The late 2000s and 2010s brought a new generation of reggae acts in the region.[80]
The first homegrown Polish reggae bands started in the 1980s with groups like Izraelario. Singer and songwriter Alexander Barykin was considered the father of Russian reggae.[81] In Sweden, Uppsala Reggae Festival attracts attendees from across Northern Europe, and features Swedish reggae bands such as Rootvälta and Svenska Akademien as well as many popular Jamaican artists. Summerjam, Europe's biggest reggae festival, takes place in Cologne, Germany and sees crowds of 25,000 or more. Rototom Sunsplash, a week-long festival which used to take place in Osoppo, Italy, until 2009, is now held in Benicàssim, Spain and gathers up to 150,000 visitors every year.
In Iceland, the reggae band Hjálmar is well established, having released six CDs in Iceland. They were the first reggae band in Iceland, but few Icelandic artists had written songs in the reggae style before they appeared on the Icelandic music scene. The Icelandic reggae scene is expanding and growing at a fast rate. RVK Soundsystem is the first Icelandic sound system, counting five DJs. They hold reggae nights in Reykjavík every month at the clubs Hemmi og Valdi and, more recently, at Faktorý, as the crowd has grown so much.
In Germany, the three successful Reggae Summer Jam open-air festivals were crucial parts of the renaissance of Caribbean music in Germany, but in that year (1990) war broke out between the two main German promoters, who had cooperated so well during the previous seasons. With a lot of infighting and personal quarrels, each of them pursued his own preparations for a big summer festival. The result was that two open-air events took place on the same day.
The Reggae Summer Jam '90 was staged as usual, but this year for only one day. The event took place at the Lorelei Rock amphitheater with artists like Mad Professor's Ariwa Posse with Macka B and Kofi, Mutabaruka, the Mighty Diamonds, the Twinkle Brothers, Manu Dibango and Fela Kuti.
The other ex-partner of the once-united promoters succeeded in bringing the original Sunsplash package to Germany for the first time. Close to the Main River in the little village of Gemünden, deep in rural south-central Germany, they staged a two-day festival that drew the bigger crowd. About 10,000 people came from all over the country, as well as from neighboring states like France and, for the first time, East Germany, to see the lineup of top reggae artists.[82]
Reggae in Africa was much boosted by the visit of Bob Marley to Zimbabwe on Independence Day 18 April 1980. Nigerian reggae had developed in the 1970s with artists such as Majek Fashek proving popular. In South Africa, reggae music has played a unifying role amongst cultural groups in Cape Town. During the years of Apartheid, the music bonded people from all demographic groups. Lucky Dube recorded 25 albums, fusing reggae with Mbaqanga. The Marcus Garvey Rasta camp in Phillipi is regarded by many to be the reggae and Rastafari center of Cape Town. Reggae bands play regularly at community centres such as the Zolani center in Nyanga.
In Uganda, the musician Papa Cidy is very popular. Arthur Lutta is also a Ugandan gospel reggae drummer known for his reggae-style drumming. In Ethiopia, Dub Colossus and Invisible System, which share core members, emerged in 2008 and have received wide acclaim.[83][84][85] In Mali, Askia Modibo fuses reggae with Malian music. In Malawi, Black Missionaries produced nine albums. In Ivory Coast, a country where reggae music is extremely popular, Tiken Jah Fakoly fuses reggae with traditional music, while Alpha Blondy sings reggae with religious lyrics. In Sudan, the beats, drums and bass guitar of reggae have been adopted into local music, as reggae is very popular among the generations from young to old, and some spiritual (religious) groups grow their dreadlocks and include reggae beats in their chants.
In the Philippines, several bands and sound systems play reggae and dancehall music. Their music is called Pinoy reggae. Japanese reggae emerged in the early 1980s. Reggae is becoming more prevalent in Thailand as well. Reggae music is quite popular in Sri Lanka. Aside from the reggae music and Rastafari influences seen ever more on Thailand's islands and beaches, a true reggae sub-culture is taking root in Thailand's cities and towns. Many Thai artists, such as Job 2 Do, keep the tradition of reggae music and ideals alive in Thailand. By the end of the 1980s, the local music scene in Hawaii was dominated by Jawaiian music, a local form of reggae.
Famous Indian singer Kailash Kher and music producer Clinton Cerejo created Kalapi, a rare fusion piece of reggae and Indian music for Coke Studio India.[86] Other than this high-profile piece, reggae is confined to a small, emerging scene in India.[87] Thaikkudam Bridge, a neo-Indian band based in Kerala, India, is known for infusing reggae into Indian regional blues.[88]
Reggae in Australia originated in the 1980s. Australian reggae groups include Sticky Fingers, Blue King Brown, Astronomy Class and The Red Eyes. Others such as The Fraud Millionaires combine reggae with rock, while many more artists include some reggae songs in their repertoires, but don't identify as reggae bands. Desert Reggae is a developing contemporary style possibly originating in Central Australia and featuring lyrics often sung in Australian Aboriginal languages.[89] Yirrmala by Yothu Yindi (1996) is an example of an Aboriginal reggae song.
New Zealand reggae was heavily inspired by Bob Marley's 1979 tour of the country, and early reggae groups such as Herbs.[90] The genre has seen many bands like Fat Freddy's Drop, Salmonella Dub, The Black Seeds and Katchafire emerging in more recent times, often involving fusion with electronica.[91]
The term cod reggae is popularly used to describe reggae done by non-Caribbean (often white) people, often in a disparaging manner because of perceived inauthenticity.[92] It has been applied to music by many artists, such as 10cc, Boy George, Suzi Quatro and Razorlight.
en/4959.html.txt
ADDED
@@ -0,0 +1,143 @@
Communism (from Latin communis, 'common, universal')[1][2] is a philosophical, social, political and economic ideology and movement whose ultimate goal is the establishment of a communist society, namely a socioeconomic order structured upon the ideas of common ownership of the means of production and the absence of social classes, money[3][4] and the state.[5][6]
Communism includes a variety of schools of thought which broadly include Marxism and anarcho-communism as well as the political ideologies grouped around both, all of which share the analysis that the current order of society stems from capitalism, its economic system and mode of production; that in this system there are two major social classes; that conflict between these two classes is the root of all problems in society;[7] and that this situation will ultimately be resolved through a social revolution.
The two classes are the proletariat (the working class), who make up the majority of the population within society, and who must work to survive; and the bourgeoisie (the capitalist class)—a small minority who derives profit from employing the working class through private ownership of the means of production. According to this analysis, revolution would put the working class in power and in turn establish social ownership of the means of production which is the primary element in the transformation of society towards communism.
Along with social democracy, communism became the dominant political tendency within the international socialist movement by the 1920s.[8] While the emergence of the Soviet Union as the world's first nominally communist state led to communism's widespread association with the Soviet economic model and Marxism–Leninism,[1][a][9] some economists and intellectuals argued that in practice the model functioned as a form of state capitalism,[10][11][12] or a non-planned administrative or command economy.[13][14]
Communism derives from the French communisme which developed out of the Latin roots communis and the suffix isme.[15]
Semantically, communis can be translated to "of or for the community" while isme is a suffix that indicates the abstraction into a state, condition, action, or doctrine. Communism may be interpreted as "the state of being of or for the community". This semantic constitution has led to numerous usages of the word in its evolution. Prior to becoming associated with its more modern conception of an economic and political organization, the term was initially used in designating various social situations. The term ultimately came to be primarily associated with Marxism, most specifically embodied in The Communist Manifesto which proposed a particular type of communism.
One of the first uses of the word in its modern sense is in a letter sent by Victor d'Hupay to Restif de la Bretonne around 1785, in which d'Hupay describes himself as an auteur communiste ("communist author").[16] Years later, Restif would go on to use the term frequently in his writing and was the first to describe communism as a form of government.[17] John Goodwyn Barmby is credited with the first use of the term in English, around 1840.[15]
Since the 1840s, communism has usually been distinguished from socialism. The modern definition and usage of the latter would be settled by the 1860s, becoming the predominant term over the words associationist, co-operative and mutualist which had previously been used as synonyms. Instead, communism fell out of use during this period.[18]
An early distinction between communism and socialism was that the latter aimed to only socialise production, whereas the former aimed to socialise both production and consumption (in the form of free access to final goods).[19] By 1888, Marxists employed socialism in place of communism which had come to be considered an old-fashioned synonym for the former. It was not until 1917, with the Bolshevik Revolution, that socialism came to refer to a distinct stage between capitalism and communism, introduced by Vladimir Lenin as a means to defend the Bolshevik seizure of power against traditional Marxist criticism that Russia's productive forces were not sufficiently developed for socialist revolution.[20] A distinction between communist and socialist as descriptors of political ideologies arose in 1918 after the Russian Social-Democratic Labour Party renamed itself to the All-Russian Communist Party, where communist came to specifically refer to socialists who supported the politics and theories of Bolshevism, Leninism and later in the 1920s of Marxism–Leninism,[21] although communist parties continued to describe themselves as socialists dedicated to socialism.[18]
Both communism and socialism eventually accorded with the cultural attitude of adherents and opponents towards religion. In Christian Europe, communism was believed to be the atheist way of life. In Protestant England, communism was too culturally and aurally close to the Roman Catholic communion rite, hence English atheists denoted themselves socialists.[22] Friedrich Engels argued that in 1848, at the time when The Communist Manifesto was first published, "socialism was respectable on the continent, while communism was not". The Owenites in England and the Fourierists in France were considered respectable socialists while working-class movements that "proclaimed the necessity of total social change" denoted themselves communists. This latter branch of socialism produced the communist work of Étienne Cabet in France and Wilhelm Weitling in Germany.[23] While democrats looked to the Revolutions of 1848 as a democratic revolution which in the long run ensured liberty, equality and fraternity, Marxists denounced 1848 as a betrayal of working-class ideals by a bourgeoisie indifferent to the legitimate demands of the proletariat.[24]
According to The Oxford Handbook of Karl Marx, "Marx used many terms to refer to a post-capitalist society—positive humanism, socialism, Communism, realm of free individuality, free association of producers, etc. He used these terms completely interchangeably. The notion that "socialism" and "Communism" are distinct historical stages is alien to his work and only entered the lexicon of Marxism after his death".[25]
According to Richard Pipes, the idea of a classless, egalitarian society first emerged in Ancient Greece.[26] The 5th-century Mazdak movement in Persia (modern-day Iran) has been described as "communistic" for challenging the enormous privileges of the noble classes and the clergy; for criticizing the institution of private property; and for striving to create an egalitarian society.[27][28] At one time or another, various small communist communities existed, generally under the inspiration of Scripture.[29] In the medieval Christian Church, some monastic communities and religious orders shared their land and their other property.
Communist thought has also been traced back to the works of the 16th-century English writer Thomas More. In his 1516 treatise Utopia, More portrayed a society based on common ownership of property, whose rulers administered it through the application of reason. In the 17th century, communist thought surfaced again in England, where a Puritan religious group known as the Diggers advocated the abolition of private ownership of land.[30] In his 1895 Cromwell and Communism,[31] Eduard Bernstein argued that several groups during the English Civil War (especially the Diggers) espoused clear communistic, agrarian ideals and that Oliver Cromwell's attitude towards these groups was at best ambivalent and often hostile.[31] Criticism of the idea of private property continued into the Age of Enlightenment of the 18th century through such thinkers as Jean-Jacques Rousseau in France. Following the upheaval of the French Revolution, communism later emerged as a political doctrine.[32]
In the early 19th century, various social reformers founded communities based on common ownership. Unlike many previous communist communities, they replaced the religious emphasis with a rational and philanthropic basis.[33] Notable among them were Robert Owen, who founded New Harmony, Indiana, in 1825; and Charles Fourier, whose followers organized other settlements in the United States such as Brook Farm in 1841.[1]
In its modern form, communism grew out of the socialist movement in 19th-century Europe. As the Industrial Revolution advanced, socialist critics blamed capitalism for the misery of the proletariat—a new class of urban factory workers who labored under often-hazardous conditions. Foremost among these critics were Karl Marx and his associate Friedrich Engels. In 1848, Marx and Engels offered a new definition of communism and popularized the term in their famous pamphlet The Communist Manifesto.[1]
The 1917 October Revolution in Russia set the conditions for the rise to state power of Vladimir Lenin's Bolsheviks which was the first time any avowedly communist party reached that position. The revolution transferred power to the All-Russian Congress of Soviets in which the Bolsheviks had a majority.[34][35][36] The event generated a great deal of practical and theoretical debate within the Marxist movement. Marx predicted that socialism and communism would be built upon foundations laid by the most advanced capitalist development. However, Russia was one of the poorest countries in Europe with an enormous, largely illiterate peasantry and a minority of industrial workers. Marx had explicitly stated that Russia might be able to skip the stage of bourgeois rule.[37]
The moderate Mensheviks (minority) opposed the plan of Lenin's Bolsheviks (majority) for socialist revolution before capitalism was more fully developed. The Bolsheviks' successful rise to power was based upon slogans such as "Peace, bread and land", which tapped into the massive public desire for an end to Russian involvement in World War I, the peasants' demand for land reform and popular support for the soviets.[38] The Soviet Union was established in 1922.
Following Lenin's democratic centralism, the Leninist parties were organized on a hierarchical basis, with active cells of members as the broad base. They were made up only of elite cadres approved by higher members of the party as being reliable and completely subject to party discipline.[39] In the Moscow Trials, many old Bolsheviks who had played prominent roles during the Russian Revolution of 1917 or in Lenin's Soviet government afterwards, including Lev Kamenev, Grigory Zinoviev, Alexei Rykov and Nikolai Bukharin, were accused of conspiracy against the Soviet Union, pleaded guilty and were executed.[40]
Its leading role in World War II saw the emergence of the Soviet Union as an industrialized superpower, with strong influence over Eastern Europe and parts of Asia. The European and Japanese empires were shattered and communist parties played a leading role in many independence movements. Marxist–Leninist governments modeled on the Soviet Union took power with Soviet assistance in Bulgaria, Czechoslovakia, East Germany, Poland, Hungary and Romania. A Marxist–Leninist government was also created under Josip Broz Tito in Yugoslavia, but Tito's independent policies led to the expulsion of Yugoslavia from the Cominform which had replaced the Comintern and Titoism was branded "deviationist". Albania also became an independent Marxist–Leninist state after World War II.[41] Communism was seen as a rival of and a threat to western capitalism for most of the 20th century.[42]
The Soviet Union was dissolved on December 26, 1991. It was a result of the declaration number 142-Н of the Soviet of the Republics of the Supreme Soviet of the Soviet Union.[43]
The declaration acknowledged the independence of the former Soviet republics and created the Commonwealth of Independent States, although five of the signatories ratified it much later or did not do it at all. On the previous day, Soviet President Mikhail Gorbachev (the eighth and final leader of the Soviet Union) resigned, declared his office extinct and handed over its powers, including control of the Soviet nuclear missile launching codes, to Russian President Boris Yeltsin. That evening at 7:32, the Soviet flag was lowered from the Kremlin for the last time and replaced with the pre-revolutionary Russian flag.[44]
Previously from August to December 1991, all the individual republics, including Russia itself, had seceded from the union. The week before the union's formal dissolution, eleven republics signed the Alma-Ata Protocol, formally establishing the Commonwealth of Independent States and declaring that the Soviet Union had ceased to exist.[45][46]
At present, states controlled by Marxist–Leninist parties under a single-party system include the People's Republic of China, the Republic of Cuba, the Lao People's Democratic Republic and the Socialist Republic of Vietnam. The Democratic People's Republic of Korea currently refers to its leading ideology as Juche which is portrayed as a development of Marxism–Leninism.
Communist parties, or their descendant parties, remain politically important in several other countries. The South African Communist Party is a partner in the African National Congress-led government. In India, as of March 2018, communists lead the government of Kerala. In Nepal, communists hold a majority in the parliament.[47] The Communist Party of Brazil was a part of the parliamentary coalition led by the ruling democratic socialist Workers' Party until August 2016.
The People's Republic of China has reassessed many aspects of the Maoist legacy, and along with Laos, Vietnam and to a lesser degree Cuba, has decentralized state control of the economy in order to stimulate growth. Chinese economic reforms were started in 1978 under the leadership of Deng Xiaoping, and since then China has managed to bring down the poverty rate from 53% in the Mao era to just 6% in 2001.[48] These reforms are sometimes described by outside commentators as a regression to capitalism, but the communist parties describe it as a necessary adjustment to existing realities in the post-Soviet world in order to maximize industrial productive capacity. In these countries, the land is a universal public monopoly administered by the state and so are natural resources and vital industries and services. The public sector is the dominant sector in these economies and the state plays a central role in coordinating economic development.
Marxism is a method of socioeconomic analysis that frames capitalism through a paradigm of exploitation, analyzes class relations and social conflict using a materialist interpretation of historical development and takes a dialectical view of social transformation. Marxism uses a materialist methodology, referred to by Marx and Engels as the materialist conception of history and now better known as historical materialism, to analyze and critique the development of class society and especially of capitalism as well as the role of class struggles in systemic economic, social and political change. First developed by Karl Marx and Friedrich Engels in the mid-19th century, it has been the foremost ideology of the communist movement. Marxism does not lay out a blueprint of a communist society per se and it merely presents an analysis that concludes the means by which its implementation will be triggered, distinguishing its fundamental characteristics as based on the derivation of real-life conditions. Marxism considers itself to be the embodiment of scientific socialism, but it does not model an ideal society based on the design of intellectuals, whereby communism is seen as a state of affairs to be established based on any intelligent design. Rather, it is a non-idealist attempt at the understanding of material history and society, whereby communism is the expression of a real movement, with parameters that are derived from actual life.[49]
According to Marxist theory, class conflict arises in capitalist societies due to contradictions between the material interests of the oppressed and exploited proletariat—a class of wage labourers employed to produce goods and services—and the bourgeoisie—the ruling class that owns the means of production and extracts its wealth through appropriation of the surplus product produced by the proletariat in the form of profit. This class struggle, commonly expressed as the revolt of a society's productive forces against its relations of production, results in a period of short-term crises as the bourgeoisie struggle to manage the intensifying alienation of labor experienced by the proletariat, albeit with varying degrees of class consciousness. In periods of deep crisis, the resistance of the oppressed can culminate in a proletarian revolution which, if victorious, leads to the establishment of socialism—a socioeconomic system based on social ownership of the means of production, distribution based on one's contribution and production organized directly for use. As the productive forces continued to advance, socialism would be transformed into a communist society, i.e. a classless, stateless, humane society based on common ownership and distribution based on one's needs.
While it originates from the works of Marx and Engels, Marxism has developed into many different branches and schools of thought, with the result that there is now no single definitive Marxist theory.[50] Different Marxian schools place a greater emphasis on certain aspects of classical Marxism while rejecting or modifying other aspects. Many schools of thought have sought to combine Marxian concepts and non-Marxian concepts, which has then led to contradictory conclusions.[51] However, there is a movement toward the recognition that historical materialism and dialectical materialism remain the fundamental aspects of all Marxist schools of thought.[52] Marxism–Leninism and its offshoots are the most well-known of these and have been a driving force in international relations during most of the 20th century.[53]
Classical Marxism is the economic, philosophical and sociological theories expounded by Marx and Engels as contrasted with later developments in Marxism, especially Leninism and Marxism–Leninism.[54] Orthodox Marxism is the body of Marxist thought that emerged after the death of Marx and which became the official philosophy of the socialist movement as represented in the Second International until World War I in 1914. Orthodox Marxism aims to simplify, codify and systematize Marxist method and theory by clarifying the perceived ambiguities and contradictions of classical Marxism. The philosophy of orthodox Marxism includes the understanding that material development (advances in technology in the productive forces) is the primary agent of change in the structure of society and of human social relations and that social systems and their relations (e.g. feudalism, capitalism and so on) become contradictory and inefficient as the productive forces develop, which results in some form of social revolution arising in response to the mounting contradictions. This revolutionary change is the vehicle for fundamental society-wide changes and ultimately leads to the emergence of new economic systems.[55] As a term, orthodox Marxism represents the methods of historical materialism and of dialectical materialism and not the normative aspects inherent to classical Marxism, without implying dogmatic adherence to the results of Marx's investigations.[56]
At the root of Marxism is historical materialism, the materialist conception of history which holds that the key characteristic of economic systems through history has been the mode of production and that the change between modes of production has been triggered by class struggle. According to this analysis, the Industrial Revolution ushered the world into capitalism as a new mode of production. Before capitalism, certain working classes had ownership of instruments utilized in production. However, because machinery was much more efficient, this property became worthless and the mass majority of workers could only survive by selling their labor to make use of someone else's machinery, thus making someone else profit. Accordingly, capitalism divided the world between two major classes, namely that of the proletariat and the bourgeoisie.[57] These classes are directly antagonistic as the latter possesses private ownership of the means of production, earning profit via the surplus value generated by the proletariat, who have no ownership of the means of production and therefore no option but to sell its labor to the bourgeoisie.
According to the materialist conception of history, it is through the furtherance of its own material interests that the rising bourgeoisie within feudalism captured power and abolished, of all relations of private property, only the feudal privilege, thereby taking the feudal ruling class out of existence. Such was another key element behind the consolidation of capitalism as the new mode of production, the final expression of class and property relations that has led to a massive expansion of production. It is only in capitalism that private property in itself can be abolished.[58] Similarly, the proletariat would capture political power, abolish bourgeois property through the common ownership of the means of production, therefore abolishing the bourgeoisie, ultimately abolishing the proletariat itself and ushering the world into communism as a new mode of production. In between capitalism and communism, there is the dictatorship of the proletariat, a democratic state where the whole of the public authority is elected and recallable under the basis of universal suffrage.[59] It is the defeat of the bourgeois state, but not yet of the capitalist mode of production and at the same time the only element which places into the realm of possibility moving on from this mode of production.
Marxian economics and its proponents view capitalism as economically unsustainable and incapable of improving the living standards of the population due to its need to compensate for falling rates of profit by cutting employees' wages and social benefits and by pursuing military aggression. The communist system would succeed capitalism as humanity's mode of production through workers' revolution. According to Marxian crisis theory, communism is not an inevitability, but an economic necessity.[60]
An important concept in Marxism is socialization versus nationalization. Nationalization is state ownership of property whereas socialization is control and management of property by society. Marxism considers the latter as its goal and considers nationalization a tactical issue, as state ownership is still in the realm of the capitalist mode of production. In the words of Friedrich Engels, "the transformation [...] into State-ownership does not do away with the capitalistic nature of the productive forces. [...] State-ownership of the productive forces is not the solution of the conflict, but concealed within it are the technical conditions that form the elements of that solution".[b][61] This has led some Marxist groups and tendencies to label states based on nationalization such as the Soviet Union as state capitalist.[10][11][12][13][14]
We want to achieve a new and better order of society: in this new and better society there must be neither rich nor poor; all will have to work. Not a handful of rich people, but all the working people must enjoy the fruits of their common labour. Machines and other improvements must serve to ease the work of all and not to enable a few to grow rich at the expense of millions and tens of millions of people. This new and better society is called socialist society. The teachings about this society are called 'socialism'.
Leninism is the body of political theory, developed by and named after the Russian revolutionary and later-Soviet premier Vladimir Lenin, for the democratic organisation of a revolutionary vanguard party and the achievement of a dictatorship of the proletariat as political prelude to the establishment of socialism. Leninism comprises socialist political and economic theories developed from orthodox Marxism as well as Lenin's interpretations of Marxist theory for practical application to the socio-political conditions of the agrarian, early-20th-century Russian Empire.
Leninism was composed for revolutionary praxis and originally was neither a rigorously proper philosophy nor a discrete political theory. After the Russian Revolution and in History and Class Consciousness: Studies in Marxist Dialectics (1923), György Lukács developed and organised Lenin's pragmatic revolutionary practices and ideology into the formal philosophy of vanguard-party revolution. As a political-science term, Leninism entered common usage in 1922 after infirmity ended Lenin's participation in governing the Russian Communist Party. At the Fifth Congress of the Communist International in July 1924, Grigory Zinoviev popularized the term Leninism to denote "vanguard-party revolution".
Within Leninism, democratic centralism is a practice in which political decisions reached by voting processes are binding upon all members of the communist party. The party's political vanguard is composed of professional revolutionaries who elect leaders and officers and determine policy through free discussion, which is then decisively realized through united action. In the context of the theory of Leninist revolutionary struggle, vanguardism is a strategy whereby the most class-conscious and politically advanced sections of the proletariat or working class, described as the revolutionary vanguard, form organizations in order to draw larger sections of the working class towards revolutionary politics and serve as manifestations of proletarian political power against its class enemies.
From 1917 to 1922, Leninism was the Russian application of Marxian economics and political philosophy, effected and realised by the Bolsheviks, the vanguard party who led the fight for the political independence of the working class. In the 1925–1929 period, Joseph Stalin established his interpretation of Leninism as the official and only legitimate form of Marxism in Russia by amalgamating the political philosophies as Marxism–Leninism which then became the state ideology of the Soviet Union.
Marxism–Leninism is a political ideology developed by Joseph Stalin.[63] According to its proponents, it is based in Marxism and Leninism. It describes the specific political ideology which Stalin implemented in the Communist Party of the Soviet Union and on a global scale in the Comintern. There is no definite agreement among historians about whether Stalin actually followed the principles of Marx and Lenin.[64] It also contains aspects which, according to some, are deviations from Marxism such as socialism in one country.[65][66]
Social fascism was a theory supported by the Comintern and affiliated communist parties during the early 1930s which held that social democracy was a variant of fascism because it stood in the way of a dictatorship of the proletariat, in addition to a shared corporatist economic model.[67] At the time, leaders of the Comintern such as Stalin and Rajani Palme Dutt argued that capitalist society had entered the Third Period in which a working-class revolution was imminent, but it could be prevented by social democrats and other fascist forces.[67][68] The term social fascist was used pejoratively to describe social-democratic parties, anti-Comintern and progressive socialist parties and dissenters within Comintern affiliates throughout the interwar period. The social fascism theory was advocated vociferously by the Communist Party of Germany which was largely controlled and funded by the Soviet leadership from 1928.[68]
During the Cold War, Marxism–Leninism was the ideology of the most clearly visible communist movement and is the most prominent ideology associated with communism.[53] According to their proponents, Marxist–Leninist ideologies have been adapted to the material conditions of their respective countries and include Castroism (Cuba), Ceaușism (Romania), Gonzalo Thought (Peru), Guevarism (Cuba), Ho Chi Minh Thought (Vietnam), Hoxhaism (anti-revisionist Albania), Husakism (Czechoslovakia), Juche (North Korea), Kadarism (Hungary), Khmer Rouge (Cambodia), Khrushchevism (Soviet Union), Prachanda Path (Nepal), Shining Path (Peru) and Titoism (anti-Stalinist Yugoslavia).
Within Marxism–Leninism, anti-revisionism is a position which emerged in the 1950s in opposition to the reforms of Soviet leader Nikita Khrushchev. Where Khrushchev pursued an interpretation that differed from Stalin, the anti-revisionists within the international communist movement remained dedicated to Stalin's ideological legacy and criticized the Soviet Union under Khrushchev and his successors as state capitalist and social imperialist due to its hopes of achieving peace with the United States. The term Stalinism is also used to describe these positions, but it is often not used by its supporters who opine that Stalin simply synthesized and practiced orthodox Marxism and Leninism. Because different political trends trace the historical roots of revisionism to different eras and leaders, there is significant disagreement today as to what constitutes anti-revisionism. Modern groups which describe themselves as anti-revisionist fall into several categories. Some uphold the works of Stalin and Mao Zedong and some the works of Stalin while rejecting Mao and universally tend to oppose Trotskyism. Others reject both Stalin and Mao, tracing their ideological roots back to Marx and Lenin. In addition, other groups uphold various less-well-known historical leaders such as Enver Hoxha, who also broke with Mao during the Sino-Albanian split.
Within Marxism–Leninism, social imperialism was a term used by Mao to criticize the Soviet Union post-Stalin. Mao argued that the Soviet Union had itself become an imperialist power while maintaining a socialist façade.[69] Hoxha agreed with Mao in this analysis, before later using the expression to also condemn Mao's Three Worlds Theory.[70]
Stalinism represents Stalin's style of governance as opposed to Marxism–Leninism, the socioeconomic system and political ideology implemented by Stalin in the Soviet Union and later copied by other states based on the Soviet model such as central planning, nationalization and one-party state, along with public ownership of the means of production, accelerated industrialization, pro-active development of society's productive forces (research and development) and nationalised natural resources. Marxism–Leninism remained after de-Stalinization whereas Stalinism did not. In the last letters before his death, Lenin warned against the danger of Stalin's personality and urged the Soviet government to replace him.[71]
Marxism–Leninism has been criticized by other communist and Marxist tendencies. They argue that Marxist–Leninist states did not establish socialism, but rather state capitalism.[10][11][12][13][14] According to Marxism, the dictatorship of the proletariat represents the rule of the majority (democracy) rather than of one party, to the extent that co-founder of Marxism Friedrich Engels described its "specific form" as the democratic republic.[72] Additionally, according to Engels state property by itself is private property of capitalist nature[b] unless the proletariat has control of political power, in which case it forms public property.[c][61] Whether the proletariat was actually in control of the Marxist–Leninist states is a matter of debate between Marxism–Leninism and other communist tendencies. To these tendencies, Marxism–Leninism is neither Marxism nor Leninism nor the union of both, but rather an artificial term created to justify Stalin's ideological distortion,[73] forced into the Communist Party of the Soviet Union and the Comintern. In the Soviet Union, this struggle against Marxism–Leninism was represented by Trotskyism which describes itself as a Marxist and Leninist tendency.
Maoism is the theory derived from the teachings of the Chinese political leader Mao Zedong. Developed from the 1950s until the Deng Xiaoping Chinese economic reform in the 1970s, it was widely applied as the guiding political and military ideology of the Communist Party of China and as the theory guiding revolutionary movements around the world. A key difference between Maoism and other forms of Marxism–Leninism is that peasants should be the bulwark of the revolutionary energy which is led by the working class.[74]
The synthesis of Marxism–Leninism–Maoism, which builds upon the two individual theories as the Chinese adaptation of Marxism–Leninism, did not occur during the life of Mao. After de-Stalinization, Marxism–Leninism was kept in the Soviet Union while certain anti-revisionist tendencies such as Hoxhaism and Maoism argued that it had deviated from its original concept. Different policies were applied in Albania and China, which became more distanced from the Soviet Union. From the 1960s, groups who called themselves Maoists, or those who upheld Maoism, were not unified around a common understanding of Maoism, instead having their own particular interpretations of the political, philosophical, economical and military works of Mao. Its adherents claim that as a unified, coherent higher stage of Marxism, it was not consolidated until the 1980s, first being formalized by the Peruvian communist party Shining Path in 1982.[75] Through the experience of the people's war waged by the party, the Shining Path were able to posit Maoism as the newest development of Marxism.[75]
Proponents of Marxism–Leninism–Maoism refer to the theory as Maoism itself, whereas Maoism is referred to as either Mao Zedong Thought or Marxism–Leninism–Mao Zedong Thought. Maoism–Third Worldism is concerned with the infusion and synthesis of Marxism–Leninism–Maoism with concepts of non-Marxist Third-Worldism, such as dependency theory and world-systems theory.
Trotskyism, developed by Leon Trotsky in opposition to Stalinism, is a Marxist and Leninist tendency that supports the theory of permanent revolution and world revolution rather than the two-stage theory and Joseph Stalin's socialism in one country. It supported proletarian internationalism and another communist revolution in the Soviet Union. Rather than representing the dictatorship of the proletariat, Trotsky claimed that the Soviet Union had become a degenerated workers' state under the leadership of Stalin in which class relations had re-emerged in a new form. Trotsky's politics differed sharply from those of Stalin and Mao Zedong, most importantly in declaring the need for an international proletarian revolution—rather than socialism in one country—and support for a true dictatorship of the proletariat based on democratic principles.
Struggling against Stalin for power in the Soviet Union, Trotsky and his supporters organized into the Left Opposition, the platform of which became known as Trotskyism. Stalin eventually succeeded in gaining control of the Soviet regime and Trotskyist attempts to remove Stalin from power resulted in Trotsky's exile from the Soviet Union in 1929. While in exile, Trotsky continued his campaign against Stalin, founding in 1938 the Fourth International, a Trotskyist rival to the Comintern. In August 1940, Trotsky was assassinated in Mexico City upon Stalin's orders. Trotskyist currents include orthodox Trotskyism, third camp, Posadism, Pabloism and neo-Trotskyism.
In Trotskyist political theory, a degenerated workers' state is a dictatorship of the proletariat in which the working class's democratic control over the state has given way to control by a bureaucratic clique. The term was developed by Trotsky in The Revolution Betrayed and in other works. Deformed workers' states are states where the capitalist class has been overthrown, the economy is largely state-owned and planned, but there is no internal democracy or workers' control of industry. In a deformed workers' state, the working class has never held political power like it did in Russia shortly after the Bolshevik Revolution. These states are considered deformed because their political and economic structures have been imposed from the top (or from outside) and because revolutionary working class organizations are crushed. Like a degenerated workers' state, a deformed workers' state cannot be said to be a state that is transitioning to socialism. Most Trotskyists cite examples of deformed workers' states today as including Cuba, the People's Republic of China, North Korea and Vietnam. The Committee for a Workers' International has also included states such as Burma and Syria at times when they have had a nationalized economy.
Eurocommunism was a revisionist trend in the 1970s and 1980s within various Western European communist parties, claiming to develop a theory and practice of social transformation more relevant to their region. Especially prominent in France, Italy and Spain, communists of this nature sought to undermine the influence of the Soviet Union and its communist party during the Cold War.[76]
Libertarian Marxism is a broad range of economic and political philosophies that emphasize the anti-authoritarian aspects of Marxism. Early currents of libertarian Marxism, known as left communism,[77] emerged in opposition to Marxism–Leninism[78] and its derivatives such as Stalinism, Trotskyism and Maoism.[79]
Libertarian Marxism is also critical of reformist positions such as those held by social democrats.[80] Libertarian Marxist currents often draw from Marx and Engels' later works, specifically the Grundrisse and The Civil War in France,[81] emphasizing the Marxist belief in the ability of the working class to forge its own destiny without the need for a revolutionary party or state to mediate or aid its liberation.[82] Along with anarchism, libertarian Marxism is one of the main derivatives of libertarian socialism.[83]
Aside from left communism, libertarian Marxism includes such currents as autonomism, communization, council communism, De Leonism, the Johnson–Forest Tendency, Lettrism, Luxemburgism, Situationism, Socialisme ou Barbarie, Solidarity, the World Socialist Movement and workerism, as well as parts of Freudo-Marxism and the New Left.[84] Moreover, libertarian Marxism has often had a strong influence on both post-left and social anarchists. Notable theorists of libertarian Marxism have included Antonie Pannekoek, Raya Dunayevskaya, C. L. R. James, Antonio Negri, Cornelius Castoriadis, Maurice Brinton, Guy Debord, Daniel Guérin, Ernesto Screpanti, Raoul Vaneigem and Yanis Varoufakis,[85] who claims that Marx himself was a libertarian Marxist.[86]
Council communism is a movement originating in Germany and the Netherlands in the 1920s, whose primary organization was the Communist Workers Party of Germany. Council communism continues today as a theoretical and activist position within both libertarian Marxism and libertarian socialism.
The core principle of council communism is that the government and the economy should be managed by workers' councils, which are composed of delegates elected at workplaces and recallable at any moment. As such, council communists oppose authoritarian state socialism and state capitalism. They also oppose the idea of a revolutionary party, since council communists believe that a revolution led by a party will necessarily produce a party dictatorship. Council communists support a workers' democracy, produced through a federation of workers' councils.
Accordingly, the central argument of council communism, in contrast to those of social democracy and Leninist communism, is that democratic workers' councils arising in the factories and municipalities are the natural form of working-class organization and governmental power. This view is opposed to both the reformist and the Leninist ideologies, which respectively stress parliamentary and institutional government by applying social reforms on the one hand and vanguard parties and participative democratic centralism on the other.
Left communism is the range of communist viewpoints held by the communist left, which criticizes the political ideas and practices espoused by Bolsheviks and social democrats, particularly following the series of revolutions that brought the First World War to an end. Left communists assert positions which they regard as more authentically Marxist and proletarian than the views of Marxism–Leninism espoused by the Communist International after its first congress (March 1919) and during its second congress (July–August 1920).[87]
Left communists represent a range of political movements distinct from Marxist–Leninists, whom they largely view as merely the left-wing of capital; from anarcho-communists, some of whom they consider to be internationalist socialists; and from various other revolutionary socialist tendencies such as De Leonists, whom they tend to see as being internationalist socialists only in limited instances.[88]
Bordigism is a Leninist left-communist current named after Amadeo Bordiga, who did consider himself a Leninist and has been described as being "more Leninist than Lenin".[89]
The dominant forms of communism are based on Marxism, but non-Marxist versions of communism such as Christian communism and anarcho-communism also exist.
Anarcho-communism is a libertarian theory of anarchism and communism which advocates the abolition of the state, private property and capitalism in favor of common ownership of the means of production;[90][91] direct democracy; and a horizontal network of voluntary associations and workers' councils with production and consumption based on the guiding principle "From each according to his ability, to each according to his need".[92][93]
Anarcho-communism differs from Marxism in that it rejects its view about the need for a state socialism phase prior to establishing communism. Peter Kropotkin, the main theorist of anarcho-communism, argued that a revolutionary society should "transform itself immediately into a communist society", that it should go immediately into what Marx had regarded as the "more advanced, completed, phase of communism".[94] In this way, it tries to avoid the reappearance of "class divisions and the need for a state to oversee everything".[94]
Some forms of anarcho-communism such as insurrectionary anarchism are egoist and strongly influenced by radical individualism,[95][96][97] believing that anarchist communism does not require a communitarian nature at all. Most anarcho-communists view anarchist communism as a way of reconciling the opposition between the individual and society.[d][98][99] In human history to date, the best-known examples of an anarcho-communist society, i.e. societies established around the ideas as they exist today and that received worldwide attention and a place in the historical canon, are the anarchist territories of the Free Territory during the Russian Revolution, the Korean People's Association in Manchuria and the Spanish Revolution of 1936.
During the Russian Civil War, anarchists such as Nestor Makhno worked through the Revolutionary Insurrectionary Army of Ukraine to create and defend anarcho-communism in the Free Territory of the Ukraine from 1919 before being conquered by the Bolsheviks in 1921. In 1929, anarcho-communism was achieved in Korea by the Korean Anarchist Federation in Manchuria (KAFM) and the Korean Anarcho-Communist Federation (KACF), with help from anarchist general and independence activist Kim Chwa-chin, lasting until 1931, when Imperial Japan assassinated Kim and invaded from the south while the Chinese Nationalists invaded from the north, resulting in the creation of Manchukuo, a puppet state of the Empire of Japan. Through the efforts and influence of the Spanish anarchists during the Spanish Revolution within the Spanish Civil War, starting in 1936 anarcho-communism existed in most of Aragon; parts of the Levante and Andalusia; and in the stronghold of Revolutionary Catalonia, before being brutally crushed.
Christian communism is a form of religious communism based on Christianity. It is a theological and political theory based upon the view that the teachings of Jesus Christ compel Christians to support communism as the ideal social system. Although there is no universal agreement on the exact date when Christian communism was founded, many Christian communists assert that evidence from the Bible suggests that the first Christians, including the Apostles, established their own small communist society in the years following Jesus' death and resurrection. Many advocates of Christian communism argue that it was taught by Jesus and practiced by the Apostles themselves.
Christian communism can be seen as a radical form of Christian socialism. Christian communists may or may not agree with various aspects of Marxism. They do not agree with the atheist and anti-religious views held by secular Marxists, but they do agree with many of the economic and existential aspects of Marxist theory, such as the idea that capitalism exploits the working class by extracting surplus value from the workers in the form of profits and the idea that wage labor is a tool of human alienation that promotes arbitrary and unjust authority. Christian communism holds that capitalism encourages the negative aspects of humans, supplanting values such as mercy, kindness, justice and compassion in favor of greed, selfishness and blind ambition.
Criticism of communism can be divided into two broad categories, namely that which concerns itself with the practical aspects of 20th century communist states;[100] and that which concerns itself with communist principles and theory.[101]
Marxism is also subject to general criticism such as that historical materialism is a type of historical determinism; that it requires necessary suppression of liberal democratic rights; that there are issues with the implementation of communism; and that there are economic issues such as the distortion or absence of price signals. In addition, empirical and epistemological problems are frequently cited.[102][103][104]